Paternal epigenetic influences on placental health and their impacts on offspring development and disease
Our efforts to understand the developmental origins of birth defects and disease have primarily focused on maternal exposures and intrauterine stressors. Recently, research into non-genomic mechanisms of inheritance has led to the recognition that epigenetic factors carried in sperm also significantly impact the health of future generations. However, although researchers have described a range of potential epigenetic signals transmitted through sperm, we have yet to obtain a mechanistic understanding of how these paternally-inherited factors influence offspring development and modify life-long health. In this endeavor, the emerging influence of the paternal epigenetic program on placental development, patterning, and function may help explain how a diverse range of male exposures induce comparable intergenerational effects on offspring health. During pregnancy, the placenta serves as the dynamic interface between mother and fetus, regulating nutrient, oxygen, and waste exchange and coordinating fetal growth and maturation. Studies examining intrauterine maternal stressors routinely describe alterations in placental growth, histological organization, and glycogen content, which correlate with well-described influences on infant health and adult onset of disease. Significantly, the emergence of similar phenotypes in models examining preconception male exposures indicates that paternal stressors transmit an epigenetic memory to their offspring that also negatively impacts placental function. Like maternal models, paternally programmed placental dysfunction exerts life-long consequences on offspring health, particularly metabolic function. Here, focusing primarily on rodent models, we review the literature and discuss the influences of preconception male health and exposure history on placental growth and patterning. 
We emphasize the emergence of common placental phenotypes shared between models examining preconception male and intrauterine stressors but note that the direction of change frequently differs between maternal and paternal exposures. We posit that alterations in placental growth, histological organization, and glycogen content broadly serve as reliable markers of altered paternal developmental programming, predicting the emergence of structural and metabolic defects in the offspring. Finally, we suggest the existence of an unrecognized developmental axis between the male germline and the extraembryonic lineages that may have evolved to enhance fetal adaptation.
Introduction
Sperm are principally known as carriers of DNA: specialized cells that deliver one-half of the genome required to give rise to healthy offspring. However, we now know these cells carry much more than just a haploid set of chromosomes. During spermatogenesis, sperm cells undergo widespread transcriptional and structural changes as they differentiate (Larose et al., 2019). During this process, changes in DNA methylation and posttranslational histone modifications, followed by the sequential replacement of most histones by protamines, yield an incredibly specialized cell type with a remarkably unique epigenome (Le Blévec et al., 2020). Subsequently, during transit through the epididymis, additional epigenetic signals are conferred to sperm as they mature to become fertilization competent, including alterations in noncoding RNAs and additional changes in posttranslational histone modifications (Yoshida et al., 2018;Bedi et al., 2022a;Conine and Rando, 2022) (Figure 1). Over the past 10 years, clinical and biomedical studies have demonstrated that epigenetic factors carried in sperm significantly influence the health of future generations (Lane et al., 2014;Fleming et al., 2018). These studies have challenged the exclusive importance of gestational exposures in mediating environmentally-induced disease and provide compelling evidence to help redress the notion that exposure-induced birth defects are solely the woman's fault. Notably, these studies also demonstrate that some aspects of teratogenesis are programmed; epigenetic changes pass through common progenitors to exert successive tissue-specific effects in an ensuing life stage. However, there is still a foundational lack of knowledge concerning how environmental stressors impact epigenetic processes controlling sperm production, and, as yet, the mechanisms by which these inherited epimutations persist to influence offspring health remain almost entirely undefined.
The most plausible track in this endeavor is determining the influence of paternal exposures on the development and function of the placenta. In this review, we seek to understand how changes in sperm-borne epigenetic signals broadly influence offspring health by focusing on the impacts on placental biology. We predominantly focus our review on mouse models, for which a growing body of literature is available. Finally, we will endeavor to explain how altered epigenetic programming in sperm influences embryogenesis and placentation. In doing so, we aim to bridge the gap between paternal exposures and pediatric disease and identify potential markers of altered developmental programming common between divergent models examining preconception paternal exposures.
Preconception paternal stressors and placental function
The placenta is the dynamic interface between mother and fetus that regulates nutrient, oxygen, and waste exchange, coordinates fetal growth, metabolism, and maturation, and determines gestation length. Consequently, factors influencing placental development and function are not only crucial in determining successful pregnancy outcomes; they set the stage for multiple aspects of lifetime health (Burton et al., 2016). Although rarely considered when assessing child-health outcomes, paternally-inherited epigenetic factors have been long-known to play critical roles in controlling the development and differentiation of extraembryonic tissues across multiple mammalian species (McGrath and Solter, 1984;Surani et al., 1984;Wang et al., 2013). Accordingly, several studies examining the intergenerational impacts of paternal stressors report alterations in placental growth within the next generation (Table 1). Below, we review these data and discuss potential mechanistic pathways linking paternal exposures to placental dysfunction while also infusing some caution into the interpretation of these changes.
Preconception paternal stressors and alterations in placental imprinted gene expression: Causal drivers or additional symptoms?
During the mid-1980s, pioneering studies by McGrath and Solter (1984) and Surani et al. (1984) demonstrated that the sperm and egg contain information beyond the genetic code and make unequal contributions to offspring development, with the paternal contribution predominantly driving the growth and differentiation of the placenta and yolk sac. From this early work, the field of genomic imprinting was born, which has since revealed that the appropriate dosage and function of a small cohort of monoallelically expressed genes is critical to controlling fetoplacental development (Bartolomei et al., 1991;DeChiara et al., 1991;Giannoukakis et al., 1993;Constância et al., 2002;Lee and Bartolomei, 2013). Moreover, gene loss-of-function studies examining Ascl2, Cdkn1c, Grb10, Igf2, Igf2r, Peg1, Peg3, Peg10, Phlda2, Rtl1, and several others, have revealed that imprinted genes play foundational roles in directing placental differentiation and patterning (Bressan et al., 2009;Piedrahita, 2011).
Notably, multiple aspects of paternal health influence the epigenetic regulation of imprinted genes in sperm, which affects offspring fetoplacental growth. For example, Denomme and colleagues report that age-related changes in sperm DNA methylation are associated with altered placental imprinted gene expression and growth (Denomme et al., 2020). Further, recent clinical studies suggest paternal imprints (here, we reference the inheritance of a silenced paternal allele) may be less stable than maternal imprints, and loss of genomic imprinting impacts placental and infant weight (Vincenz et al., 2020). Therefore, given the established role paternally-expressed imprinted genes have in controlling the development and differentiation of extraembryonic tissues, imprinted genes and their epigenetic regulatory mechanisms represent the logical first suspects in our efforts to understand how paternal stressors and environmental exposures impact offspring fetoplacental health.
Although a relatively small number of studies investigating the influence of paternal exposures on offspring health have examined aspects of placental development, a notable number have identified altered imprinted gene expression (Table 1). For example, placentae derived from the offspring of obese males exhibit altered expression of the imprinted genes Igf2, Peg3, Peg9, and Peg10 (Mitchell et al., 2017;Jazwiec et al., 2022). Males exposed to exogenous glucocorticoids during gestation sire offspring with reduced placental weights, which correlated with altered expression of the imprinted genes Igf2, Cdkn1c, Phlda2, and Slc22a18 in both the placenta and fetal liver (Drake et al., 2011). Adult males exposed to the toxicant tetrachlorodibenzo-p-dioxin (TCDD) display reductions in placental weight and altered placental DNA methylation profiles at the Igf2-H19 imprint control region (Ding et al., 2018). The offspring of adult males exposed to cannabinoids present with disruptions in the histoarchitecture of the placenta, including reductions in the placental junctional zone and increases in the labyrinth layer, which correlated with altered methylation of the paternally expressed imprinted genes Peg10 and Plagl1 (Innocenzi et al., 2019). Likewise, our group has identified changes in placental histology induced by chronic preconception paternal alcohol exposure, in which, similar to the offspring of cannabinoid-exposed males, the labyrinth layer increases and the junctional zone decreases (Thomas et al., 2021). We also identify alterations in placental imprinted gene expression, including changes in Ascl2, Igf2, H19, and Slc22a18. In contrast, placentae derived from the offspring of males maintained on a low protein diet exhibit increased size of the placental junctional zone and a decreased labyrinth layer but also display abnormal expression of multiple imprinted genes, including Cdkn1c, Grb10, H19, Mest, and Snrpn (Watkins et al., 2017;Morgan et al., 2020).
Therefore, paternal exposures appear to transmit a stressor to their offspring, frequently resulting in altered placental imprinted gene expression.
However, determining if alterations in imprinted gene expression are phenotypic drivers or additional symptoms remains a challenging question central to understanding how the paternally-inherited epigenetic program influences offspring phenotype. Although human studies suggest genomic imprints transmitted through sperm are more labile than those in oocytes (Vincenz et al., 2020), few studies report correlative DNA methylation profiles between exposed sperm and imprint control regions within offspring placentae. To this point, genome-wide studies using a mouse model of paternal folic acid deficiency, which also reported a thinning of the placental junctional zone and increases in the labyrinth layer, identified 300 differentially expressed placental genes, but only two candidates exhibited differential methylation in sperm; none of the candidates were imprinted genes (Lambrot et al., 2013;Radford et al., 2014). Moreover, neither the aforementioned studies examining paternal glucocorticoid exposures (Drake et al., 2011) nor our work examining alcohol-exposed sperm (Chang et al., 2017;Chang et al., 2019a) identified any DNA methylation changes in sperm or alterations in monoallelic imprinted gene expression. Finally, the remaining studies that report common changes between exposed sperm and offspring placentae identified very modest changes of 2-10%, which previous reports suggest are insufficient to disrupt monoallelic gene expression patterns (Mann et al., 2003;Mann et al., 2004;Susiarjo et al., 2013), and did not employ a mouse model capable of confirming parent-of-origin expression patterns. Therefore, the alterations in placental imprinted gene expression observed in intergenerational models of paternal exposures are likely additional symptoms and unlikely to represent the primary epigenetic memory influencing offspring phenotypes.
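To make the dosage argument concrete, a back-of-the-envelope calculation (illustrative numbers only, not drawn from any of the cited studies) shows why bulk methylation shifts of 2-10% leave monoallelic expression largely intact: at an imprint control region, one parental allele is fully methylated and the other unmethylated, so each allele pool contributes only half of the bulk signal, and even the largest reported shift corresponds to a state change in a small minority of one allele pool.

```python
# Illustrative arithmetic: an imprint control region (ICR) reads ~50% methylated
# in bulk because one parental allele is methylated and the other is not.
# A bulk shift of d therefore implies that ~2*d of one parental allele pool
# changed state (each allele pool contributes only half of the bulk signal).

def fraction_of_allele_pool_flipped(bulk_delta: float) -> float:
    """Fraction of one parental allele pool that must change state to
    produce a given shift in bulk methylation at a 50%-methylated ICR."""
    return 2 * bulk_delta

for delta in (0.02, 0.05, 0.10):
    flipped = fraction_of_allele_pool_flipped(delta)
    print(f"bulk shift of {delta:.0%} -> ~{flipped:.0%} of one allele pool flipped")
```

Even at the upper bound of the reported changes (10%), roughly 80% of cells would retain canonical monoallelic expression, consistent with the failure of these studies to detect disrupted parent-of-origin expression patterns.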
In support of this assertion, recent studies examining the offspring of obese males identify altered Igf2 expression at gestational day 14.5, but these differences disappear by gestational day 18.5 (Jazwiec et al., 2022). Furthermore, most studies report sex-specific changes in placental imprinted gene expression. These sex-specific patterns and transient alterations indicate that altered imprinted gene expression likely arises as part of a cellular response to a paternally-inherited stressor rather than as a primary driver of altered developmental programming. This argument is consistent with studies examining placental defects induced by assisted reproductive techniques, including superovulation and in vitro embryo culture, which do not consistently report altered imprinted gene expression or imprint control region DNA methylation profiles, despite invariably observing placentomegaly and junctional zone overgrowth (de Waal et al., 2015;Chen et al., 2017;Vrooman et al., 2022).
Preconception male exposures and the epigenetic transmission of placental stressors
The murine placenta consists of four main histological layers: the chorion, the labyrinth layer, the junctional zone, and the maternal decidua (Figure 2). The functional organization of these layers serves to bring the fetal and maternal blood systems into close contact. Here, the maternal blood supply passes through the spongiotrophoblast cells of the junctional zone via a large central sinus. Subsequently, blood becomes distributed into the tortuous, small spaces of the labyrinth, directly bathing the fetal trophoblastic villi. The labyrinth layer, therefore, serves as the primary site of fetomaternal exchange, while the junctional zone functions as the primary endocrine compartment of the placenta, releasing a vast suite of hormones, growth factors, and cytokines that act on both maternal and fetal physiology to regulate pregnancy progression (please see the excellent reviews by Rossant and Cross (2001) and Woods et al. (2018)).
During times of stress, the placenta allocates priority to the growth and expansion of either the junctional zone or labyrinth, depending on the specific stressor or stage of pregnancy. For example, the processes of superovulation and in vitro embryo culture induce an expansion of the junctional zone, accompanied by placentomegaly, reduced placental efficiency, and altered metabolic function in the offspring (Collier et al., 2009;Delle Piane et al., 2010;Bloise et al., 2012;Tan et al., 2016;Chen et al., 2017;Dong et al., 2021;Bai et al., 2022;Vrooman et al., 2022). In contrast, maternal starvation reduces the growth of the junctional zone (at gestational day 16.5), characterized by a prominent reduction in glycogen-producing trophoblast cells (Coan et al., 2010;Sferruzzi-Perri et al., 2011). Decreases and increases in the junctional zone also emerge in loss-of-function studies examining imprinted genes, emphasizing the role these genetic factors have in driving placental histology and adaptation (Tunster et al., 2020).
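The "placental efficiency" referred to above is conventionally the ratio of fetal weight to placental weight, so placentomegaly with unchanged fetal growth lowers it. A minimal sketch of this metric, using hypothetical late-gestation weights invented for illustration (not taken from the cited studies):

```python
# Placental efficiency = fetal weight / placental weight.
# The weights below are hypothetical E18.5 litter means in mg.

def placental_efficiency(fetal_weight: float, placental_weight: float) -> float:
    return fetal_weight / placental_weight

control = placental_efficiency(fetal_weight=950, placental_weight=85)
overgrown = placental_efficiency(fetal_weight=940, placental_weight=110)  # placentomegaly

# An overgrown placenta supporting similar fetal mass yields reduced efficiency.
print(f"control: {control:.1f}, placentomegaly: {overgrown:.1f}")
```

The ratio is a crude but widely used proxy for how much fetal growth the placenta supports per unit of its own mass, which is why superovulation and embryo-culture models report it alongside gross placental weight.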
As briefly mentioned above, multiple studies examining paternal stressors also report changes in placental histoarchitecture, with reallocations primarily occurring between the junctional and labyrinth zones (Watkins et al., 2017;Innocenzi et al., 2019;Morgan et al., 2020;Gao et al., 2021). Interestingly, however, these programmed changes often contrast those observed during maternal exposures. For example, in contrast to maternal starvation, which associates with decreased size of the junctional zone (Coan et al., 2010), paternal nutrient restriction programs junctional zone hypertrophy (Watkins et al., 2017). Further, while the offspring of adult males exposed to cannabinoids and alcohol present with reductions in the junctional zone and increases in the labyrinth layer (Innocenzi et al., 2019), maternal alcohol exposures promote an expansion of the junctional zone (Gårdebjer et al., 2014). The duality of these responses is intriguing and emphasizes the divergence between paternal and maternal experiences in programming aspects of placental adaptation.
In some reports, paternally programmed alterations in placental histology correlate with altered glycogen content of the junctional zone. For example, paternal toxicant exposures (Ding et al., 2018;Gao et al., 2021), cannabinoid use (Innocenzi et al., 2019), and long-term maintenance on a low protein diet (Watkins et al., 2017) all induce reductions in placental glycogen stores, while chronic preconception paternal alcohol use is associated with increased glycogen levels (Thomas et al., 2021). Reductions in placental glycogen content are also present in mouse models of maternal nutrient restriction (Coan et al., 2010;Sferruzzi-Perri et al., 2011), while studies examining the impacts of assisted reproductive technologies, maternal alcohol use, and gestational glucocorticoid exposures all report increases in placental glycogen (O'Connell et al., 2013;Gårdebjer et al., 2014;Dong et al., 2021). In humans, both decreases and increases in placental glycogen content accompany pregnancy complications that adversely affect fetal development, including intrauterine growth restriction, gestational diabetes, and preeclampsia (Akison et al., 2017). Although we do not yet fully understand the significance of placental glycogen flux, these changes consistently emerge in circumstances where maternal-placental stressors have begun to impact fetal growth (Akison et al., 2017;Tunster et al., 2020).
Because of their glycogen content and location, placental biologists believe spongiotrophoblast cells of the junctional zone serve as a critical energy store, providing additional nutrition to the placenta and/or embryo during specific phases of pregnancy (Tunster et al., 2020). As both maternal and paternal stressors program changes in junctional zone growth and glycogen content, we propose that the placenta's histological organization and glycogen content offer a dynamic readout of altered paternal epigenetic programming. In support of this hypothesis, we recently reported alterations in placental growth and architecture that varied depending on the dose of alcohol encountered by the father. Notably, these dose-dependent changes are non-linear, with low doses inducing placental overgrowth with no histological changes, while higher doses induce growth restriction, which is accompanied by a male-specific reduction in the junctional zone. Combined with other works (Vallaster et al., 2017), these data imply that paternal exposures can program hormetic growth responses, which may bolster offspring toxicant resistance and adaptability to counter adverse environmental conditions.
Although an emerging body of work describes consistent impacts on placental growth and histoarchitecture, the developmental origins of these changes remain obscure.
FIGURE 2
Paternal Exposure and Placental Phenotypes. Paternal exposures induce a wide variety of gross and histological placental phenotypes. For example, paternal alcohol, cannabis, and high-fat diet exposures induce increases in the labyrinth layer (the layer responsible for nutrient and gaseous exchange) and decreases in the junctional zone (containing the spongiotrophoblasts and glycogen cells). In contrast, maintaining sires on a low-protein diet induces an increase in the junctional zone and a corresponding decrease in the labyrinth. Regardless of the changes to placental histology, most paternal exposures lead to dysregulation of imprinted genes like Ascl2, Igf2, H19, Slc22a18, Cdkn1c, Grb10, Mest, and Snrpn.
Frontiers in Genetics frontiersin.org
However, a small number of studies report associations between paternal stressors and alterations in early embryonic growth. For example, paternal low-protein and high-fat diets are both associated with delayed progression of embryos through the earliest cleavage events, with the most prolonged delays coinciding with embryonic genome activation around the 2-cell stage (Binder et al., 2012;Sharma et al., 2016). Further, studies of blastocyst-stage embryos derived using sperm from obese males report reductions in the number of cells within the inner cell mass and an expansion of the trophectoderm lineage (Mitchell et al., 2011;Binder et al., 2012). These observations suggest that paternally-inherited epigenetic stressors may impede the earliest phases of embryonic differentiation and lineage specification. Similar to embryos generated using in vitro fertilization (Bai et al., 2022), it is plausible that paternal stressors alter the allocation and developmental trajectory of the extraembryonic endoderm, with downstream consequences on placental patterning and function. However, these placental deficiencies may not measurably impact fetal development until late gestation, when the placenta has reached its maximal size and is required to support the near logarithmic increase in late-stage fetal growth (Mu et al., 2008). Notably, deficiencies in late gestation are purported to predominantly impact male offspring, which may help explain the emergence of some sex-specific outcomes across multiple models examining the intergenerational impacts of paternal stressors (Kalisch-Smith et al., 2017).
Alterations in the sperm-inherited epigenome and altered embryonic development
Several lines of evidence have emerged to help explain how epigenetic changes in sperm may impact embryonic development. However, each of these proposed mechanisms has limitations that complicate our understanding of how paternal exposures influence offspring health and morphogenesis. Below, we will briefly review each epigenetic signal and discuss evidence supporting and limiting the involvement of these mechanistic pathways as drivers of paternal epigenetic inheritance.
DNA methylation
Of the known epigenetic mechanisms examined to date, DNA methylation is the best characterized across all subdisciplines of developmental programming, including studies examining paternal inheritance. Because early studies contrasting DNA methylation across transposable elements and imprinted genes identified correlative patterns between sperm and embryonic tissues, researchers have long suspected this epigenetic modification participates in the paternal transmission of environmentally-induced phenotypes (Monk et al., 1987;Yoder et al., 1997). Supporting this suspicion, nearly every paternal exposure model or stressor examined to date yields some degree of change in the sperm DNA methylome. For example, high-fat and low-protein diets, exposure to stressful conditions, cold, drugs of abuse, and multiple environmental toxicants modify the DNA methylation profiles of sperm (Anway et al., 2005;Ouko et al., 2009;Knezovich and Ramsay, 2012;Martínez et al., 2014;Radford et al., 2014;Wei et al., 2014;Shea et al., 2015;Chen et al., 2016a;Wu et al., 2016a;Chamorro-Garcia et al., 2017;Le et al., 2017;Ly et al., 2017;Baptissart et al., 2018;Ben Maamar et al., 2019;Innocenzi et al., 2019;Skinner et al., 2019). Further, several of these studies report consistent alterations between the methylation profiles of exposed sperm and gene regulatory regions driving pathological changes in gene expression in adult animals. Thus, these data suggest that some modified loci in sperm may survive embryonic remodeling, persist into adulthood, and associate with pathological changes in gene expression.
However, although bolstered by the emergence of altered methylation in clinical studies examining the sperm of obese males (Donkin et al., 2016), reported changes in DNA methylation are frequently modest and do not reliably align with pathology-associated gene expression patterns in subsequent generations. For example, studies reporting correlative changes in DNA methylation between sperm and affected tissues in the next generation frequently describe differences ranging from 1% to 5% (Martínez et al., 2014;Wei et al., 2014;Wu et al., 2016a;Innocenzi et al., 2019). These very subtle differences are unlikely to appreciably impact transcription, and, as discussed previously (Shea et al., 2015), the low frequency of these identified changes in exposed sperm cannot account for the consistent penetrance of offspring phenotypes. Further, most studies examining paternal epigenetic inheritance do not consistently report any direct correlations between changes in sperm DNA methylation and alterations in offspring gene expression or only report the emergence of transcriptional dysregulation in similar genomic regions, sometimes megabases away (Radford et al., 2014;Shea et al., 2015;Terashima et al., 2015;Chen et al., 2016a;de Castro Barbosa et al., 2016;Chamorro-Garcia et al., 2017;Chang et al., 2017;Le et al., 2017;Ly et al., 2017). Therefore, despite early enthusiasm, there is no compelling evidence that, outside of imprinted genes and select transposable elements, the inheritance of this epigenetic modification through sperm stably influences fetal or adult gene expression in the next generation. As most DNA methylation is stripped off during syngamy (Smallwood et al., 2011;Smith et al., 2012) and the epigenome is heavily remodeled during histogenesis (Guo et al., 2014), it is unlikely that altered DNA methylation in sperm persists through development to directly influence transcription in fetal or adult tissues.
However, emerging evidence indicates that some regions may escape the reprogramming wave during early embryonic development (Hackett et al., 2013;Tang et al., 2015;Zhu et al., 2018;Hao et al., 2021), which could impact gene expression within the developing embryo. Although DNA methylation does have causal roles in the transcriptional regulation of imprinted genes, the suppression of transposable elements, and the process of X-chromosome inactivation, its role in controlling the expression of most protein-coding genes appears to be responsive rather than causal and is frequently context-specific (Bestor et al., 2015;de Mendoza et al., 2022). As single-cell technologies improve, we may indeed track changes in sperm DNA methylation that persist through the erasure at syngamy and impact the initiation of the earliest transcriptional programs driving development. However, these are likely acute changes altering embryonic transcription, not permanent ones driving pathology in adult tissues.
Histone posttranslational modifications
Although less explored than DNA methylation, a small number of studies examining alterations in sperm histone posttranslational modifications have also emerged. Despite the replacement of most histones with protamines during spermatogenesis, some genomic loci in sperm retain histones, which carry select posttranslational modifications to the zygote. These nucleosome-enriched regions colocalize with regulatory regions of developmentally crucial genes (Gardiner-Garden et al., 1998;Arpanahi et al., 2009;Hammoud et al., 2009;Brykczynska et al., 2010;Erkek et al., 2013;Royo et al., 2016;Yamaguchi et al., 2018;Yoshida et al., 2018) or gene-poor domains enriched in repetitive elements (Zalenskaya et al., 2000;Carone et al., 2014;Samans et al., 2014;Sillaste et al., 2017), depending on the method of analysis. Similar to studies examining DNA methylation, researchers suspect that a subset of these histone-enriched loci transmits to the early embryo to influence embryonic development.
Whether the environment modulates this form of epigenetic information to heritably influence offspring development is still in the initial stages of investigation. However, research reveals that sperm from males exposed to alcohol, a folic acid-deficient diet, and a high-fat diet all display altered amounts of trimethylated histone H3 lysine 4 (H3K4me3) or dimethylated histone H3 lysine 9 (H3K9me2) (Terashima et al., 2015;Claycombe-Larson et al., 2020;Yoshida et al., 2020;Lismer et al., 2021;Bedi et al., 2022b;Cambiasso et al., 2022;Pepin et al., 2022). Inheritance of these changes may directly impact chromatin accessibility in the developing embryo, impacting the earliest transcriptional programs governing lineage specification and developmental patterning. For example, recent work examining sperm derived from alcohol-exposed males identified a significant increase in global levels of H3K4me3 (Bedi et al., 2022b). This increase in sperm-retained histones may alter chromatin decondensation dynamics during syngamy and delay embryonic genome activation (Binder et al., 2012).
Alternatively, regions of the sperm genome displaying altered chromatin enrichment may persist through the early cleavage stages and directly influence gene expression patterns driving early development. For example, recent studies by Sarah Kimmins's group have revealed that sperm from folic acid deficient males transmit some H3K4me3-modified loci to preimplantation-stage embryos, which are associated with deregulated embryonic gene expression (Lismer et al., 2021). Similarly, sperm from obese males exhibit alterations in H3K4me3 that predominantly map to transcriptionally-active loci of the placental genome; regions controlling inflammation, metabolism, and placental glycogen storage, all of which are transcriptionally dysregulated in this model (Pepin et al., 2022). Notably, there was very little to no conservation between the histone changes identified in sperm and adult offspring liver, arguing against the direct inheritance of these changes as drivers of metabolic syndrome. Furthermore, most regions exhibiting altered H3K4me3 enrichment in sperm isolated from obese, alcohol-exposed, or folic acid deficient males localize to gene enhancer regions controlling embryonic patterning (Lismer et al., 2021;Bedi et al., 2022b;Pepin et al., 2022). Therefore, altered chromatin states may transmit to the embryo and alter embryonic transcription directly.
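Claims that altered sperm marks "map to" regulatory regions ultimately rest on intersecting differentially enriched peak calls with annotated intervals in the tissue of interest. A schematic sketch of that intersection step follows; the coordinates are invented for illustration and the gene names serve only as placeholders (real pipelines perform this genome-wide, for example with BEDTools):

```python
# Toy interval intersection: which promoters overlap a differentially
# enriched sperm H3K4me3 peak? Intervals are (chrom, start, end),
# half-open, with invented coordinates.

def overlaps(a: tuple, b: tuple) -> bool:
    """Two intervals overlap if they share a chromosome and their ranges cross."""
    return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

diff_h3k4me3_peaks = [("chr7", 1000, 2000), ("chr11", 5000, 6000)]
promoters = {
    "Ascl2": ("chr7", 1500, 2500),   # overlaps the chr7 peak
    "Gcm1":  ("chr13", 100, 700),    # no peak on chr13
}

hit_genes = sorted(gene for gene, region in promoters.items()
                   if any(overlaps(peak, region) for peak in diff_h3k4me3_peaks))
print(hit_genes)  # promoters carrying an altered sperm mark
```

The biological interpretation then hinges on whether such overlapping loci escape reprogramming after fertilization, which is precisely the point of contention discussed below.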
However, many studies suggest that sperm posttranslational histone modifications and larger aspects of chromatin structure are entirely erased, with H3K4me3 stripped from the paternal genome, and most histone H3.3, which is enriched over gene regulatory regions, is extruded in the second polar body (Du et al., 2017;Flyamer et al., 2017;Gassler et al., 2017;Ke et al., 2017;Kong et al., 2018;van der Weide and de Wit, 2019;Liu et al., 2020). In contrast, other studies report the conservation of multiple histone posttranslational modifications, including H3K4me3, and higher-order chromatin folding between the sperm and early zygote (van der Heijden et al., 2006;Dahl et al., 2016;Wu et al., 2018;Alavattam et al., 2019;Jung et al., 2019;Collombet et al., 2020). As the resolution of chromatin mapping techniques continues to improve, researchers will determine how many regions of the sperm genome escape reprogramming in the early embryo and if regions altered by paternal stressors directly transmit to the offspring, impacting early development. However, as with DNA methylation, it is unlikely that any histone modifications persist into adulthood to drive pathophysiological changes in gene expression directly.
An alternative mechanism to direct transmission of histone modifications and DNA methylation could be the interrelationship between the enrichment of these epigenetic signals and the binding of chromatin accessibility factors either in sperm or immediately after fertilization. Here, increases or decreases in chromatin accessibility may influence larger aspects of embryonic chromatin organization and, therefore, not rely on the direct inheritance of histone-mediated epigenetic marks in sperm.
Although few studies have considered this perspective, some reports describe consistent changes in chromatin accessibility despite inconsistent associations between epigenetic signals. For example, rather than the direct transmission of altered DNA methylation across generations, ancestral exposure to the obesogen tributyltin associates with altered sperm chromatin accessibility at essential metabolic genes dysregulated in adipose cells (Chamorro-Garcia et al., 2017). As another example, altered enrichment of H3K4me3 in alcohol-exposed sperm correlates with changes in placental CTCF enrichment and disrupted gene expression patterns at gestational day 14.5 (Bedi et al., 2022b). Similar correlations between altered H3K4me3 and CTCF binding site enrichment exist in sperm isolated from males deficient in folic acid (Lismer et al., 2021). In these scenarios, modified chromatin structure in sperm serves as a bookmark for other factors, persisting after the primary epigenetic signals in sperm are lost. In models of transgenerational epigenetic inheritance, this paradigm may explain the inconsistency of differential DNA methylation and histone enrichment in F0, F1, F2, and F3 sperm, despite conserved pathological phenotypes across generations (Beck et al., 2021). Future studies integrating multiple omics approaches across generations are necessary to determine if these separate epigenetic signals and chromatin accessibility interact in the paternal transmission of growth and disease phenotypes.
Sperm noncoding RNAs
Perhaps the most exciting discovery emerging from studies examining paternal epigenetic inheritance has been the unanticipated influence of sperm noncoding RNAs (ncRNAs) on offspring phenotype (Chen et al., 2016b; Sharma, 2019). Multiple stressors, including exercise, drug abuse, environmental toxicants, inflammation, malnutrition, obesity, and stress, alter the repertoire of sperm-inherited ncRNAs, which correlate with alterations in offspring phenotypes (Conine and Rando, 2022). Many of these ncRNAs originate from extracellular vesicles called epididymisomes, secreted by the luminal epithelium of the epididymis, the portion of the male reproductive tract directing sperm maturation (Zhou et al., 2018) (Figure 3). During epididymal transit, these vesicles fuse with and transmit their ncRNA cargos to maturing sperm (Belleannée et al., 2013; Nixon et al., 2015; Reilly et al., 2016; Sharma et al., 2016; Sharma et al., 2018). Researchers hypothesize that these ncRNAs serve as signaling molecules that modulate genetic pathways driving growth, metabolic, or other adaptive processes in the embryo (Grandjean et al., 2015).

FIGURE 3
Epididymis as an Environmental Sensor. The epididymal (mainly the caput and corpus) epithelium functions as a sensor of paternal environmental stressors. This epithelium may respond to these stressors by altering its transcriptional program to deliver payloads of molecular cargo through extracellular vesicles (epididymisomes) to the passing spermatozoa. These epididymisomes contain a variety of small RNAs that may deliver a layer of epigenetic information to the maturing spermatozoa, which can alter gene programming events in the early embryo.

Significantly, across multiple experimental models, injection of naive zygotes with small RNAs derived from exposed males is sufficient to induce similar, if not identical, phenotypic changes in the resulting offspring (Gapp et al., 2014; Grandjean et al., 2015; Rodgers et al., 2015; Chen et al., 2016a; Conine et al., 2018; Gapp et al., 2018; Sarker et al., 2019; Zhang et al., 2021a; Raad et al., 2021) (Figure 4). Therefore, sperm-derived ncRNAs represent a viable means by which epigenetic information transmits to the embryo to alter physiological function. Although the mechanisms by which sperm-inherited ncRNAs alter embryonic development remain poorly described, one fascinating theme emerging from studies examining the impact of sperm ncRNAs on embryonic gene expression is an interaction with transposable elements. During preimplantation development, embryos transcribe multiple transposable element families in stage-specific patterns (Vitullo et al., 2012; Fadloun et al., 2013; Liu et al., 2020; Lu et al., 2020; Modzelewski et al., 2021). These transposable elements participate in diverse biological processes, including driving the expression of genes controlling embryonic pluripotency, serving as alternative promoters enabling the generation of novel splice variants, modulating chromatin accessibility to influence the timing of embryonic genome activation, and serving as stage-specific gene regulatory elements (Faulkner et al., 2009; Macfarlan et al., 2012; Elsässer et al., 2015; Wu et al., 2016b; De Iaco et al., 2017; Hendrickson et al., 2017; Jachowicz et al., 2017; Liu et al., 2020; Lu et al., 2020).
Importantly, multiple lines of evidence across diverse mammalian species indicate that proper transcriptional control of transposable elements is essential for embryonic development and that manipulating their expression or sequence impacts fundamental aspects of embryo physiology (Beraldi et al., 2006; Jachowicz et al., 2017; Modzelewski et al., 2021). Therefore, sperm ncRNA interactions with transposable element biology may influence multiple facets of embryonic development.

FIGURE 4
Injection of Sperm noncoding RNAs Recapitulates Environmentally-Induced, Paternally-Inherited Phenotypes in Offspring. Environmental exposures and paternal stressors alter the repertoire of noncoding RNAs carried in sperm. Isolation of these noncoding RNAs from exposed sperm and injection into naïve, in vitro-produced embryos induces similar growth and metabolic phenotypes in the offspring as those emerging from in vivo-derived embryos. These experiments demonstrate a causal role of sperm noncoding RNAs in the paternal transmission of environmentally-induced phenotypes.

This influence may be especially significant for placental development and function, where transposable elements serve as core regulatory elements for this tissue (Rawn and Cross, 2008; Chuong et al., 2013). Sperm contain a vast repertoire of ncRNAs that interact with transposable elements, including Piwi-interacting RNAs (piRNAs), tRNA fragments (tRFs), and microRNAs (miRNAs). piRNAs are germline-derived small RNAs that direct the transcriptional and posttranscriptional silencing of transposable elements in the male germline (Siomi et al., 2011). Fragments derived from cleaved tRNAs either directly bind multiple clades of endogenous retroviral elements to block their replication or recruit the RNAi machinery to induce their degradation (Schorn et al., 2017). Finally, multiple oocyte-derived miRNAs and endogenous short interfering RNAs map to transposable elements and constrain their expression (Stein et al., 2015). Therefore, multiple small RNA species present in sperm can interact with transposable elements.
A small number of studies tracking the impacts of sperm ncRNAs on embryonic transcription report effects on transposable elements. For example, paternal exposure to a low-protein diet induces alterations in several ncRNA species, most prominently in 5′ fragments of the glycine tRNA (tRF-Gly). Injection of these tRFs into naive zygotes upregulated genes proximal to the MERVL transposon (Sharma et al., 2016). Subsequent experiments using embryonic stem cells revealed that these tRNA fragments interact with a U7 small nuclear RNA, modulating the translational control of histone proteins and potentially modifying the timing of embryonic genome activation (Boskovic et al., 2020). As highlighted above, paternal low-protein diets retard preimplantation development (Sharma et al., 2016). Similarly, the injection of ncRNAs derived from normal sperm into embryos generated using somatic cell nuclear transfer reduced global levels of H3K9me3, a critical modification constraining transposable element transcription, thereby overcoming a significant barrier to cloned embryo development. Therefore, sperm-derived ncRNAs may modify transposable element transcriptional activity and their regulatory effects through chromatin-based mechanisms.
Alternatively, sperm ncRNAs may upregulate gene expression via direct interactions with the genome. For example, multiple transcripts mapping to transposable element fragments appear in the sperm of males subjected to traumatic experiences (Gapp et al., 2018). Injection of LINE1-derived small RNAs into embryos upregulates LINE1 element transcription, potentially by forming triple-helical RNA-DNA hybrids (Fadloun et al., 2013). Similarly, tRFs identified in the sperm of obese males map to genomic regions near transposable elements and proximal to many genes dysregulated in 8-cell stage embryos (Chen et al., 2016a). These correlative data suggest that sperm ncRNAs may exert transcriptional control either by directly binding gene regulatory regions (promoters or enhancers) or via their proximity to transposable elements.
Although the correlations with altered transposable element activity and chromatin accessibility are tantalizing, it remains difficult to reconcile the negligible amount of RNA carried by a single sperm with the vast repositories found in the oocyte and early zygote (Yang et al., 2016). Therefore, determining how the minor contribution of sperm-derived ncRNAs exerts a lasting impact on embryonic development and influences adult physiology remains a challenging and unresolved question. Nonetheless, microinjection of cauda-specific small RNAs into developmentally incompetent zygotes generated using caput epididymis-derived sperm improves embryo survival and restores embryonic gene expression.
Researchers speculate that chemical modifications, including methylation at multiple bases (m5C, m6A, and m1A), increase RNA stability, extending the half-life of sperm ncRNAs until well after fertilization (Chen et al., 2016b). However, despite the enhanced stability these modifications confer, ncRNAs only exist for discrete periods and must stably manipulate gene regulatory mechanisms to achieve a lasting impact on animal phenotype. As an alternative to transposable element-centered interactions, researchers have identified an influence of tRFs on ribosome biogenesis (Kim et al., 2017). Like transposable elements, ribosomal sequences are repetitive and challenging to map. Notably, paternal exposure to a low-protein diet also decreases ribosomal gene expression, which may explain why low-protein embryos develop more slowly than controls (Sharma et al., 2016). Alternatively, ncRNAs also recruit chromatin-binding factors like CTCF, which could modify the embryonic developmental program (Kung et al., 2015). However, experiments examining the transgenerational inheritance of metabolic phenotypes suggest that, although sperm RNAs can act as vectors of intergenerational inheritance, they do not mediate stable transgenerational transmission of diet-induced metabolic alterations (Raad et al., 2021). Although fascinating, much work remains to determine how the minuscule amount of RNA carried in sperm impacts offspring embryonic growth and long-term health.
Future directions: The placenta as a mediator of early life mitohormesis and the paternal inheritance of protective adaptations

Although most models of paternal epigenetic inheritance report adverse health outcomes, some studies have identified positive changes potentially conferring protective adaptations to adverse environmental challenges. For example, repeated paternal exposures to sublethal doses of the hepatotoxin carbon tetrachloride (CCl4), constant low-level systemic inflammation, and nicotine all suppressed the fibrotic response in the next generation, improving the wound healing response (Zeybel et al., 2012; Zhang et al., 2020; Zhang et al., 2021a; Zhang et al., 2021b). In addition, paternal nicotine exposure also enhances offspring xenobiotic responses to toxicants by upregulating hepatic detoxification genes (Vallaster et al., 2017). The mechanisms by which the memories of these stressors achieve protective germline programming remain poorly described. However, many of these reports share similarities with investigations of stressor-induced germline programming in insects, plants, and worms, which also describe enhanced growth, adaptability, and toxicant resistance in the offspring of organisms exposed to low-dose stressors (Agathokleous et al., 2022). In worms, multiple reports link transgenerational germline programming to early-life mitochondrial dysfunction and the epigenetic regulation of antioxidant pathways (Kishimoto et al., 2017; Zhang et al., 2021c). Significantly, similar pathways are also present in mammalian systems, and transient, intrauterine episodes of placental oxidative stress induce improvements in hepatic metabolism, priming of antioxidant pathways, and resistance to high-fat diet-induced obesity; a phenomenon broadly termed mitohormesis (Yun and Finkel, 2014; Cox et al., 2018; Dimova et al., 2020).
Our data examining low-level paternal alcohol exposures also identify altered transcription of placental mitochondrial genes (Thomas et al., 2021), and the male offspring of alcohol-exposed sires exhibit resistance to the effects of a high-fat diet (Chang et al., 2019b). Therefore, hormetic alterations in placental mitochondrial function may represent a mechanistic pathway by which paternal exposures program fetoplacental adaptive responses, which may or may not be compatible with the gestational or postnatal environment. Additionally, changes in oocyte mitochondrial function are also observed in maternal models of obesity, suggesting this pathway may not be unique to the male germline.
As discussed above, many placental changes induced by paternal exposures are sex-specific, with paternal stressors inducing diametrically opposite changes in the directionality of affected gene sets between males and females (Chang et al., 2019a; Cissé et al., 2020). It is also noteworthy that emerging research reveals that male cells contain more mitochondria than female cells (Cao et al., 2022). Therefore, sex differences in mitochondrial function may help explain the sexual dimorphisms observed across studies examining paternal stressors and why males are more sensitive to specific exposures. However, as we know almost nothing about the dynamics of placental mitochondrial function, additional studies are required to determine the validity of this hypothesis.
Conclusions and future directions
Ultimately, just as maternal exposures do not occur in isolation, a myopic focus on paternal exposures offers limited insights. A small number of studies have emerged examining dual-parental exposures to obesity and stress (McPherson et al., 2015; Ornellas et al., 2015; Cissé et al., 2020). Importantly, these studies reveal that maternal and paternal exposures tend to disproportionately impact one sex and that, when combined, these parental sex-specific effects become exacerbated (Cissé et al., 2020). Moving forward, additional dose-response studies are necessary to determine if environmental stressor-induced transgenerational hormesis plays as prominent a role in mammalian development as it does in insects, worms, plants, and microbes (Agathokleous et al., 2022). Further, we need to develop more multiplex exposure models to determine how preconception paternal exposures interact with maternal stressors to influence offspring growth and disease development. Only by examining the combined experiences of both parents will we truly understand the developmental origins of disease. Finally, we believe that, in these endeavors, the placenta offers the best direct readout of altered developmental programming.
Author contributions
SB and MG co-authored and edited the paper.
Funding
This work was supported by a Medical Research Grant from the W. M. Keck Foundation (MCG) and NIH grant R01AA028219 from the NIAAA (MCG).
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Frontiers in Genetics frontiersin.org
Hematopoietic stem and progenitor cell-restricted Cdx2 expression induces transformation to myelodysplasia and acute leukemia
The caudal-related homeobox transcription factor CDX2 is expressed in leukemic cells but not during normal blood formation. Retroviral overexpression of Cdx2 induces AML in mice; however, the developmental stage at which CDX2 exerts its effect is unknown. We developed a conditionally inducible Cdx2 mouse model to determine the effects of in vivo, inducible Cdx2 expression in hematopoietic stem and progenitor cells (HSPCs). Cdx2-transgenic mice develop myelodysplastic syndrome with progression to acute leukemia associated with acquisition of additional driver mutations. Cdx2-expressing HSPCs demonstrate enrichment of hematopoietic-specific enhancers associated with pro-differentiation transcription factors. Furthermore, treatment of Cdx2 AML with azacitidine decreases leukemic burden. Extended scheduling of low-dose azacitidine shows greater efficacy in comparison to intermittent higher-dose azacitidine, linked to more specific epigenetic modulation. Conditional Cdx2 expression in HSPCs is an inducible model of de novo leukemic transformation and can be used to optimize treatment in high-risk AML.
The caudal-related homeobox gene CDX2 is not expressed in normal hematopoietic stem cells (HSCs), but is expressed in ~90% of acute myeloid leukemia (AML) patients 1,2 , as well as those with high-risk myelodysplastic syndrome (MDS) and advanced chronic myeloid leukemia (CML). Retroviral Cdx2 expression in bone marrow (BM) progenitor cells facilitates in vitro self-renewal and causes a serially transplantable AML in vivo 1-3 . CDX2 is thought to be necessary for leukemia growth, as knockdown of human CDX2 by lentiviral-mediated short hairpin RNA (shRNA) impairs growth of AML cell lines and reduces clonogenicity in vitro 1 . These data indicate that aberrant Cdx2 expression may promote HSC transformation to leukemia stem cells (LSCs).
Cdx2 plays a critical role in embryogenesis and early developmental hematopoiesis 4-6 . Loss of Cdx2 in murine blastocysts results in lethality at 3.5 days post-coitum 7 . Cdx2 is a critical regulator of the trophectoderm layer, the first cell lineage to differentiate in mammalian embryos 8 . Cdx2 downregulation in embryonic stem cells (ESCs) causes ectopic expression of the pluripotency markers Oct4 and Nanog, while Cdx2 upregulation triggers trophectoderm differentiation. Cdx2 is also essential for in vitro trophoblast stem cell self-renewal, demonstrating a pivotal role for Cdx2 in ESC fate specification 7-10 . In developmental hematopoiesis, CDX2 and other caudal-related family members (CDX1 and CDX4) are transcriptional regulators of homeobox (HOX) genes 11-13 . HOX gene function has been closely linked to self-renewal pathways in ESCs and HSCs, and the reactivation of these pathways by aberrant HOX expression has been implicated in leukemogenesis 14-17 . Despite this association, evidence of direct interaction between CDX2 and the HOX cluster is lacking 18,19 . CDX2 may also act via non-HOX pathways, including via downregulation of KLF4 20,21 . Therefore, understanding targets of CDX2 in hematological malignancy and mechanisms of transformation may provide new opportunities to treat patients with leukemia.
Retroviral overexpression models of oncogenesis provide a powerful tool to study the functional consequences of genetic mutations. However, these models also have limitations, including the ex vivo manipulation of cells and preferential transduction of proliferative progenitor cells rather than long-term HSCs. To overcome these barriers and to understand the mechanism of in vivo transformation of HSCs, we generated a transgenic model of Cdx2 overexpression in hematopoietic stem and progenitor cells (HSPCs) to depict the cellular dynamics of transcriptional deregulation. Ectopic Cdx2 expression in HSPCs results in lethal MDS, characterized by abnormal blood cell counts, dysgranulopoiesis, and thrombocytopenia, followed by secondary transformation to acute leukemia (AL) in a percentage of surviving mice. This is dependent on Cdx2 expression within HSPCs, as myeloid-restricted Cdx2 expression attenuates the phenotype. Unexpectedly, we observe reduced expression of Hox cluster genes and upregulation of differentiation factors in Cdx2 HSPCs, signifying that non-Hox-mediated pathways drive these hematological diseases. Cdx2-driven leukemia is sensitive to azacitidine, with enhanced sensitivity when administered at a lower dose on an extended schedule in comparison to a higher dose on a shorter schedule. This work provides a model of MDS with stepwise transformation to AML that can be used to provide clinically relevant information for patients with MDS and AML with multilineage dysplasia.
Results
Ectopic expression of Cdx2 alters function of HSPCs. To examine the effects of Cdx2 expression in adult hematopoiesis, we generated a transgenic mouse by insertion of the Cdx2 (NCBI gene ID: 12591) and mCherry (Clontech) open reading frames downstream from a CAG promoter and a loxP-flanked stop cassette in the mouse Rosa26 locus of C57BL/6 ES cells (LSL-Cdx2-mCherry, TaconicArtemis). The Cdx2 and mCherry cDNAs were separated by a T2A self-cleaving peptide, which allowed for co-expression of the two proteins after Cre excision of the stop cassette between the loxP sites. Thus, mCherry reported expression of Cdx2 in cells following Cre-recombinase-mediated activation (Supplementary Fig. 1a). The LSL-Cdx2-mCherry mice were crossed to Scl-CreER T mice 22 to generate offspring that can inducibly express Cdx2-mCherry in HSCs following tamoxifen exposure (Scl:Cdx2; Supplementary Fig. 1b). Scl:Cdx2 and control mice (Ctrl; consisting of mice from the genotypes C57BL/6 wildtype [WT], Scl-CreER T , and LSL-Cdx2-mCherry) were fed a diet of rodent chow containing tamoxifen (400 mg/kg) for two weeks. Cdx2 expression was confirmed in mCherry-positive BM cells by western blot on whole BM and by quantitative reverse transcriptase PCR (qRT-PCR) (Supplementary Fig. 1c, d). Scl:Cdx2 mice showed mCherry expression by flow cytometry at two weeks, and this rose further by four weeks after tamoxifen (Fig. 1a). To evaluate Cdx2 expression differences between previously published retroviral models 1 and Scl:Cdx2 transgenic cells, we transduced Scl-CreER T lineage-negative BM with MSCV-IRES-GFP (MIG)-Cdx2 and MIG-Empty retrovirus. Retroviral CDX2 overexpression resulted in approximately 900-fold higher CDX2 expression than in Scl:Cdx2 transgenic cells, potentially accounting for phenotypic differences (Supplementary Fig. 1d).
Cdx2 expression in HSPCs also led to depletion of long-term hematopoietic stem cells (LTHSCs [LKS + CD150 + CD48 − ]) and short-term HSCs (STHSCs [LKS + CD150 − CD48 − ]) in Scl:Cdx2 BM (Fig. 1h, i, Supplementary Fig. 1m). In vitro colony forming cell (CFC) assays with BM cells four weeks after tamoxifen induction demonstrated enhanced colony formation after two weeks (Fig. 1j), with enrichment of mCherry-positive cells at each passage (Supplementary Fig. 1n), but Cdx2 expression did not facilitate in vitro serial replating beyond two weeks. LTHSCs exclusively harbor self-renewing potential 23 , implying that the cellular mechanism of LTHSC exhaustion might involve enforced cell cycle entry and loss of quiescence. To test this hypothesis, we performed in vivo competitive BM transplantations using Scl-CreER T or Scl:Cdx2 donor BM (expressing CD45.2) from uninduced (naïve) mice, mixed with congenically marked competitor wild-type (WT) BM (expressing CD45.1) (Fig. 1k). There was equivalent engraftment of donor cells of both genotypes four weeks after transplantation in the absence of tamoxifen. Induction of Cdx2 expression following intraperitoneal (IP) injection of 5 mg of tamoxifen caused a progressive loss of PB chimerism (Fig. 1l), which was associated with reduced BM HSPC populations (LKS+ and LTHSC) in induced mice transplanted with Scl:Cdx2 BM (Supplementary Fig. 1o-r). Altogether, these data indicate that cell-intrinsic expression of Cdx2 impairs HSC function, with reduced capacity to sustain long-term hematopoiesis (Fig. 1m).
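The peripheral blood chimerism tracked in these competitive transplants is simply the donor fraction of gated blood leukocytes, distinguished by the congenic CD45.2 (donor) and CD45.1 (competitor) markers. A minimal sketch of that calculation; the function name and event counts below are hypothetical illustrations, not values from the study:

```python
def pb_chimerism(donor_cd45_2, competitor_cd45_1):
    """Percent donor chimerism: donor / (donor + competitor) * 100.

    Inputs are flow-cytometry event counts for the donor (CD45.2)
    and competitor (CD45.1) congenic markers.
    """
    total = donor_cd45_2 + competitor_cd45_1
    if total == 0:
        raise ValueError("no gated events counted")
    return 100.0 * donor_cd45_2 / total

# Hypothetical weekly counts illustrating a progressive loss of donor chimerism,
# as seen after tamoxifen induction in the Scl:Cdx2 arm:
weekly_counts = [(5000, 5000), (3500, 6500), (1200, 8800)]
trajectory = [round(pb_chimerism(d, c), 1) for d, c in weekly_counts]
print(trajectory)  # [50.0, 35.0, 12.0]
```

The same ratio underlies the "PB chimerism" axis in competitive repopulation plots; only the gating strategy upstream of the counts differs between studies.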
Cdx2 expression in HSPCs induces MDS and AL. After tamoxifen induction of Cre-recombinase, Scl:Cdx2 mice developed a variety of hematological diseases including MDS, myeloproliferative neoplasm (MPN), and AL (Fig. 2a, b). Scl:Cdx2 mice had a median survival of 43 weeks, while no disease was seen in Scl-CreER T controls (Fig. 2a, b, Supplementary Fig. 2a). MDS was evidenced by reduced blood counts together with reticulocytosis, fragmented erythrocytes, anisopoikilocytosis, and neutrophil dysplasia (Fig. 2a). MPN was characterized by leukocytosis, reticulocytosis, and hypersegmented neutrophils (Fig. 2a, c, d). AL was diagnosed by >20% blasts in PB and BM (Fig. 2a, Supplementary Fig. 2b), together with leukocytosis, splenomegaly, and anemia (Fig. 2c, d, g). All moribund mice had reduced hemoglobin compared with controls (Fig. 2e), while all Scl:Cdx2 mice (regardless of health state) showed mild to profound thrombocytopenia (Fig. 2f, Supplementary Fig. 2c). All Scl:Cdx2 mice showed a propensity for hypersegmented neutrophils, expansion of Gr1-positive myeloid cells, and a decrease in B220 B cells compared with Scl-CreER T controls (Supplementary Fig. 2d). Approximately 20% of Scl:Cdx2 mice did not develop overt hematological disease (Fig. 2b) aside from thrombocytopenia and neutrophil dysplasia. In mice that developed AL, we observed biphasic disease, with initial MDS (dysplasia, leukopenia, and thrombocytopenia; Supplementary Fig. 2h, i) followed by the later onset of leukocytosis, anemia, and increased mCherry+ and c-Kit+ cells in PB (Supplementary Fig. 2j-m). Immunophenotyping revealed distinct leukemia lineage commitments (Fig. 2h). Scl:Cdx2 #252 showed a clonal expansion of c-Kit + B220 int CD3 int cells (Supplementary Fig. 2e), Scl:Cdx2 #882 PB leukemic cells were c-Kit + CD3 + mCherry + , representative of acute T-cell leukemia (Supplementary Fig. 2f), but most mice (#2259, #2261, and #472) developed acute myeloid/erythro-myeloid leukemia with a c-Kit + mCherry + population predominately Gr1 + CD11b − (Supplementary Fig. 2g). The evolution of MDS to AL in Scl:Cdx2 mice (Supplementary Fig. 2h-j), with an expansion of mCherry-expressing c-Kit+ cells (Supplementary Fig. 2k-m), is likely due to the acquisition of transformation events and is consistent with secondary leukemia after MDS observed in patients. The leukemias were transplantable, as irradiated recipient mice phenocopied the primary donor in all cases (example in Fig. 2i) and had shortened survival compared with the primary setting (Fig. 2j), demonstrating rapid expansion of the leukemic clone.
Taken together, these data show that Cdx2 is able to transform HSPC populations in situ into a faithful model of MDS with secondary AML.
Secondary genetic lesions cooperate with Cdx2 expression. AML transformation is mediated through co-operative mutations in genes that confer a proliferative advantage to cells together with pathways that primarily impair cellular differentiation 24 . To determine whether co-operating mutations had contributed to the full transformation of Scl:Cdx2 HSPCs, we performed whole exome sequencing (WES) of three AL samples and one MPN sample. WES was performed on genomic DNA of CD45.2-sorted cells (i.e., donor cells) from transplanted leukemic mice. Tumor samples were sourced from mCherry-positive donor cells and compared with germline samples that were mCherry-negative donor cells. We found a number of frameshift and non-synonymous somatic mutations in known tumor-associated genes, including positive (Jak1, Raf1, Zap70) and negative regulators (Pten, Cgref1) of signal transduction, cell adhesion molecules (Fat1), transcription factors (Etv6, Ikzf1, Trp53), and DNA-binding proteins (Nabp2) (Supplementary Table 1). PTEN is a known tumor suppressor commonly altered in human AML 25 , and was mutated in Scl:Cdx2 AML along with ETV6, a recurring fusion partner with CDX2 2,26 . Other AML single nucleotide variants (SNVs) were uncovered in Fat1 and Raf1. Loss-of-function of the cadherin-like protein Fat1 and mutations in the Ras effector Raf1 are also previously described in AML 27,28 . Bilineage ALL cells harbored a frameshift insertion in the Ikzf1 zinc-finger protein, which is frequently mutated in human B-ALL and, to a lesser extent, in T-ALL 29-32 . Mutations in the tyrosine kinase Jak1 (sample #252) are more prevalent in T-ALL than B-ALL and are associated with poor prognosis 33,34 . Finally, Cdx2-induced erythro-myeloid leukemia #472 harbored a loss of heterozygosity (LOH) event in the commonly mutated tumor suppressor gene Trp53.
To further determine the significance of these SNVs, we confirmed their presence in functional protein domains similar to pathogenic SNVs in human orthologues 35 . We did not observe any SNVs in cancer-associated genes (as listed in the MSK-HemePACT cancer panel and COSMIC 36 ) in Scl:Cdx2 MPN BM, demonstrating that the emergence of secondary mutations was found exclusively in AL.
FIGURE 1 (legend continued). j Colony forming cell (CFC) assay of BM cells initially plated (p0) and replated (p1) in M3434 methylcellulose. Each BM sample was plated in triplicate and each data point represents the mean of triplicate plates (Ctrl n = 9; Scl:Cdx2 n = 7). k Diagram of BM transplant experiment setup. Scl-Cre (n = 5) and Scl:Cdx2 (n = 10, split into n = 5 per treatment arm) BM chimeras. Tamoxifen or corn oil (vehicle) was administered by intraperitoneal (IP) injection to indicated groups. l PB chimerism to monitor relative contribution of Scl-Cre or Scl:Cdx2 BM to peripheral hematopoiesis. Experiment was performed in duplicate. Arrow indicates IP injection time point. m Model of Scl:Cdx2 hematopoietic cell hierarchy showing decreases in LTHSC, CMP, and MEP leading to a loss of platelets (thrombocytopenia) and erythrocytes (anemia), and a relative increase in GMP resulting in greater levels of myeloid cells: monocytes and granulocytes. N = biologically independent animals. Statistical analyses performed using two-tailed Mann-Whitney test except l, which used a mixed-effects model with Tukey's multiple comparisons test. Data are plotted as mean values +/− SD. n.s.; not significant. *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001.

Using the Beat AML trial cohort 37 , we found significant co-expression of CDX2 and FLT3 in AML patients, as well as increased CDX2 expression in FLT3-internal tandem duplication (ITD)-positive samples compared with FLT3-ITD-negative samples (Supplementary Fig. 3b, c). We therefore tested whether Scl:Cdx2 mice would show accelerated development of AML when crossed with mice harboring Flt3-ITD, a common oncogene in AML 38 (Fig. 3a). Scl:Cdx2/Flt3 ITD/+ double mutant mice had shorter survival and disease latency compared with Scl:Cdx2 and Scl/Flt3 ITD/+ mice alone (Fig. 3b). There was a trend to leukocytosis in some Scl:Cdx2/Flt3 ITD/+ mice compared with controls (Fig. 3c, Supplementary Fig. 3d).
Hemoglobin levels showed a wide range across biological replicates, however there was no significant difference between the means of Scl:Cdx2/ Flt3 ITD/+ and controls (Fig. 3d). Scl:Cdx2/Flt3 ITD/+ double mutants showed severe thrombocytopenia (Fig. 3e) and succumbed to advanced MPN (Fig. 3b) characterized by splenomegaly (Fig. 3f) and increased in Gr1 − positive myeloid cells in PB compared with control or single knockin Cdx2 mice (Fig. 3g).
Unexpectedly, there was downregulation of Hox cluster genes in Scl:Cdx2 HSPCs (Supplementary Fig. 5d). Other groups have shown increased expression of Hox genes after enforced Cdx2 overexpression 1,3, however the specific Hox gene targets were non-overlapping. In the context of normal HSC function, decreased Hox gene function is associated with loss of self-renewal 41,42, and progressive downregulation of Hox genes is seen in normal differentiation (Supplementary Fig. 5e) 43. We therefore hypothesized that Cdx2 may bind to factors that regulate myeloid differentiation, leading to concomitant downregulation of Hox genes in stem cell populations. To understand the regulatory activity of Cdx2 within rare HSPCs, we utilized Assay for Transposase Accessible Chromatin with high-throughput sequencing (ATAC-Seq) on purified Scl:Cdx2 HSPCs vs. controls (Scl-CreERT alone) to identify changes in chromatin accessibility mediated by Cdx2 44. In total, 62,711 peaks were identified in Cdx2-expressing cells and 28,282 peaks in Scl-CreERT controls. The majority of chromatin accessible regions (26,099) were shared between both groups. These common regions were dominated by promoter elements whereas condition-specific regions were dominated by distal elements (Fig. 6a, Supplementary Fig. 6b). Within the Scl:Cdx2-specific distal elements, we found the Cdx2 motif (p = 1.6e−34) centrally enriched, together with motifs belonging to the CCAAT/enhancer-binding protein family (Cebpb [p = 9.8e−89], Cebpe [p = 1.6e−89], Cebpa [p = 7.5e−83], Cebpd [p = 9.2e−89]) (Fig. 6b), confirmed with another algorithm (HOMER 45; Supplementary Fig. 6c). We also compared our data to the publicly available CEBPα ChIP-Seq dataset (GSM1187163) performed in GMP and found a significant overlap in peaks in Scl:Cdx2 BM but not Scl-Cre control samples (Fig. 6c, Supplementary Fig. 6d).
ATAC-Seq provides representation of Cdx2 binding and suggests that Cdx2 expression associates with chromatin changes that increase the accessibility of pro-differentiation myeloid transcription factor binding sites of the CCAAT/enhancer-binding protein family.
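The shared versus condition-specific peak sets described above are derived by genomic-interval intersection of the two peak calls. A minimal sketch of that operation, assuming BED-style (chrom, start, end) peaks; the coordinates and the `partition_peaks` helper are invented for illustration, not the pipeline used in the study:

```python
# Sketch of splitting ATAC-Seq peaks into shared vs condition-specific
# sets by genomic-interval overlap (bedtools-intersect style).
# Peak coordinates below are invented, not study data.

def overlaps(a, b):
    """True if intervals (chrom, start, end) share at least 1 bp."""
    return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

def partition_peaks(set_a, set_b):
    """Split set_a into peaks shared with set_b vs specific to set_a."""
    shared, specific = [], []
    for peak in set_a:
        if any(overlaps(peak, other) for other in set_b):
            shared.append(peak)
        else:
            specific.append(peak)
    return shared, specific

cdx2_peaks = [("chr2", 100, 300), ("chr2", 500, 700), ("chr7", 40, 90)]
ctrl_peaks = [("chr2", 250, 400), ("chr7", 10, 50)]

shared, cdx2_specific = partition_peaks(cdx2_peaks, ctrl_peaks)
print(len(shared), len(cdx2_specific))  # -> 2 1
```

A real analysis would use a sorted-sweep intersection (as bedtools does) rather than this O(n·m) scan, and would typically also apply a minimum-overlap fraction, but the shared/specific partition is the same idea.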
Coordinated RNA-Seq and ATAC-Seq data provide evidence of transcriptional and epigenetic reprogramming of leukemic stem cell populations. ATAC-Seq showed enrichment of early myeloid progenitor programs in pre-leukemia samples, with progressive acquisition of committed megakaryocyte erythroid progenitor chromatin architecture in erythroid leukemia, and lymphoid chromatin architecture in lymphoid leukemias, even though these cells retained a stem cell surface immunophenotype ( Supplementary Fig. 6e, f). Furthermore, we used RNA-Seq profiles of each Cdx2-expressing leukemia to identify differentially expressed genes that were upregulated in T-ALL (#882) and B/T-ALL (#252) but not other samples (Supplementary Data 3). Here using the tool Enrichr 46,47 , we found significant enrichment for genes deregulated upon transcription factor alteration in T lymphocytes and T cell leukemia (p < 0.05), again showing lymphoid priming within the stem cell populations (Supplementary Data 4). These data are consistent with a Cdx2-induced transcriptional program priming LKS + towards progenitor cell differentiation. In support of this, RNA-Seq also showed upregulation of Cebp family genes in Scl:Cdx2 LKS+ (representing pre-leukemic HSPC, Supplementary Fig. 5f) in keeping with myeloid differentiation. Interestingly, transformed Scl:Cdx2 LKS+ BM cells from acute leukemic mice showed similar or decreased levels of Cebp gene transcripts compared with control cells, with the sole exception of Cebpb (Supplementary Fig. 5f), suggesting these leukemia cells downregulate effectors of myeloid commitment as a mechanism of transformation.
Next, we performed chromatin immunoprecipitation sequencing (ChIP-Seq) to identify Cdx2 binding sites in hematopoietic cells. We validated MSCV-IRES-GFP-Cdx2 1 tagged with a FLAG epitope (Cdx2-FLAG) by immunoprecipitation with rabbit anti-FLAG monoclonal antibody and confirmed expression and binding of FLAG-tagged Cdx2 in Ba/F3 cells (Supplementary Fig. 6a). We next transduced lineage-negative WT mouse BM with Cdx2-FLAG or empty vector (EV) and performed ChIP-Seq. Cdx2-FLAG ChIP-Seq confirmed strong central enrichment of Cdx2 motifs at peaks in both promoter and distal regions (Fig. 6b). To overcome any dilution of the binding signal of Cdx2-expressing HSPC, we sought to integrate Cdx2-FLAG ChIP-Seq data from lineage-negative cells with ATAC-Seq on LKS+ and publicly available CEBPα ChIP-Seq on GMP. The top 1000 gained peaks in either Scl:Cdx2 or Scl-Cre ATAC-Seq showed correlation with Cdx2-FLAG and CEBPα ChIP-Seq peaks (Fig. 6c), suggesting these cell populations share similar chromatin identity despite immunophenotypic differences. To further functionally assess the relevance of the Scl:Cdx2 gained or lost distal accessible chromatin regions in HSPCs and myeloid progenitors, we analyzed ATAC-Seq data and histone methylation marks associated with enhancers (H3K4me1) from ChIP-Seq of LKS+ (or MPPs), CMPs, and GMPs 43. Scl:Cdx2 HSPCs had less accessibility at enhancer regions (regions that are ATAC-accessible or carry the H3K4me1 modification) that regulate fate in LKS/MPP cells (Fig. 6c, d; Supplementary Fig. 6g, h). In contrast, Scl:Cdx2 HSPCs show an increase in accessibility of distal enhancer peaks that regulate committed myeloid progenitor cell differentiation (Fig. 6c, d; Supplementary Fig. 6g, h). These data suggest that Cdx2 results in chromatin remodeling at distal enhancers, with a bias towards increased accessibility of enhancers associated with myeloid differentiation and reduced accessibility at enhancers of cell types with self-renewal potential.
Concordantly, ChIP-Seq revealed Cdx2 peaks at HoxA and HoxB loci consistent with Cdx2 binding (Fig. 6e) ( Supplementary Fig. 5d). Together these data suggest that Cdx2 represses certain Hox genes and primes HSPCs for myeloid differentiation.
Cdx2 leukemia is sensitive to myeloid disease therapy. Scl:Cdx2-induced MDS with secondary transformation to AML (sAML) is mediated by common oncogenic mutations seen in human disease, and thus this model provides an opportunity to examine the preclinical efficacy of anti-leukemic drugs. sAML is refractory to standard chemotherapy and is associated with dismal survival. We performed preclinical studies to evaluate the activity and in vivo mechanism of 5-azacitidine (Aza), a clinically approved therapy for high-risk MDS and AML 48. We evaluated mice that had received secondary transplants from Scl:Cdx2 AML, together with support WT BM cells. Aza treatment commenced once donor engraftment was established, at 2 mg/kg by IP injection daily for one week followed by three weeks of rest (Supplementary Fig. 7a), mimicking the clinical schedule 49. After one cycle of treatment, we observed a dramatic reduction in WBC counts in Aza-treated mice but not in vehicle-treated controls (Fig. 7a). This was supported by a similarly pronounced reduction in mCherry cells and c-Kit expression in PB of Aza mice (Fig. 7b, c). In all experiments, the leukemia relapsed by the end of cycle one; however, a second cycle of Aza dosing led to reduced leukemic burden and significant improvement in overall survival (Fig. 7d). There was increased apoptosis of Aza-treated Cdx2 cells, showing direct cytotoxicity of Aza on leukemia cells, with minimal effects on WT support cells or vehicle-treated mice (Fig. 7e, Supplementary Fig. 7b). We also compared the standard 7-day regimen (2 mg/kg, 14 mg/kg total per cycle) to a lower dose of Aza administered for 14 days over a 28-day cycle (1 mg/kg, qd, Monday-Friday, 14 mg/kg total per cycle) (Supplementary Fig. 7c), to mimic the prolonged exposure to low-level drug that is seen with oral Aza dosing 50,51.
Interestingly, greater improvement was seen in mice receiving low exposure, extended duration (LE-ED) Aza, compared with high exposure, limited duration (HE-LD) Aza (Fig. 7f, g). These data were confirmed in an independent myeloid leukemia model, also driven by Cdx2 (#2261) (Fig. 7h, i), suggesting that dose and scheduling may be relevant in optimizing clinical responses to Aza in MDS/AML. RNA-Seq was performed on mCherry-positive LKS+ cells from mice treated with vehicle vs. HE-LD and LE-ED Aza (Supplementary Fig. 7d, Supplementary Data 5). LE-ED Aza treatment enriched for gene signatures associated with DNA hypomethylation (Fig. 7j), in accordance with mechanistic changes supported through extended oral dosing of Aza 50,51. In contrast, HE-LD Aza treatment enriched for DNA damage and apoptosis signatures, suggesting cytotoxicity of this regimen (Fig. 7k, Supplementary Fig. 7e, f). Both groups of Aza-treated cells showed significant upregulation of Trp53 and downregulation of Mycn (Fig. 7l), supporting a general mechanism of Aza in the induction of p53 and suppression of cellular proliferation 52,53. The gene expression changes seen after Aza treatment mimicked the signature found in Cdx2-expressing cells prior to AML transformation (Fig. 7m), suggesting that Aza may revert AML to a pre-leukemic state. Aza also upregulated Klf4 (Fig. 7l), a gene known to be repressed by Cdx2 that has been shown to have a tumor suppressor function in AML 21. Altogether, these data demonstrate the preclinical efficacy of Aza in MDS/AML and suggest that extended schedules of low-dose therapy may have improved efficacy compared with standard regimens.
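Note that the two Aza regimens deliver the same cumulative dose per 28-day cycle, so the comparison isolates scheduling rather than total drug amount. A quick check of that arithmetic, using the doses stated above (the dictionary layout is just for illustration):

```python
# Cumulative azacitidine exposure per 28-day cycle for the two schedules
# described in the text (doses in mg/kg; schedule labels from the paper).

schedules = {
    "HE-LD": {"dose_mg_per_kg": 2.0, "dosing_days": 7},   # 2 mg/kg daily x 7
    "LE-ED": {"dose_mg_per_kg": 1.0, "dosing_days": 14},  # 1 mg/kg x 14 days
}

for name, s in schedules.items():
    total = s["dose_mg_per_kg"] * s["dosing_days"]
    print(name, total)  # both cycles total 14.0 mg/kg
```

Equal cumulative dose with roughly doubled exposure duration is what makes the LE-ED arm a test of prolonged low-level exposure, mirroring the pharmacology of oral Aza dosing.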
Discussion
Transcriptional deregulation is a common leukemic mechanism that is thought to perturb cellular self-renewal and differentiation by modifying developmental cues. CDX2 is essential for ESC fate determination and is aberrantly expressed in myeloid malignancy. We generated a conditional transgenic mouse model of Cdx2 activation and characterized the de novo phenotype of Cdx2 expression in various hematopoietic subsets. Mice expressing Cdx2 in HSPCs develop lethal hematological diseases with prominent features of MDS and subsequent transformation into AL. The development of AL shows a long clinical latency with stepwise acquisition of oncogenic mutations, suggesting that Cdx2 expression predisposes cells to a pre-leukemic state with conditions permissive to the accumulation of cooperating secondary genetic events. This closely reflects the progression of human MDS to AML, where stepwise genetic mutations occur within HSPC, and identifies these immature populations as the reservoir for leukemia-initiating activity in vivo. Importantly, this model allows temporal control of Cdx2 expression within HSCs, leading to in situ transformation of HSPCs to LSCs, thereby eliminating the confounding effects of ex vivo manipulation of HSPC populations and retroviral models.
In humans, ectopic CDX2 expression is described in AML but also in approximately 80% of newly diagnosed ALL or pediatric ALL 54,55, underscoring the clinical relevance of this model. Unexpectedly, our model shows strong downregulation of Hox factors in fully transformed leukemia, which contrasts with other studies 3 and suggests that Cdx2 can activate a number of discrete oncogenic pathways for leukemogenesis. We suggest that CDX2 expression correlates differently with HOX expression in different contexts. For example, expression levels of CDX2 are comparable in ALL and AML samples 3, however HOX deregulation is much less common in ALL than AML. Furthermore, in embryogenesis, Cdx2 coordinates posterior development via Hox-independent mechanisms 56. In keeping with other publications 21, we frequently observed repression of Klf4 in all cases of AL.
When Cdx2 is expressed in HSPCs, mice show a propensity to develop secondary mutations followed by the development of a range of ALs of varying lineages. Conversely, when Cdx2 expression is restricted to myeloid cells in LysM:Cdx2 mice, there is a more homogeneous phenotype, typified by myelocytic expansion, leukocytosis, and splenomegaly, but without the thrombocytopenia that is a hallmark of Scl:Cdx2 mice. Transformation to leukemia was not observed in this model, consistent with the hypothesis that HSPCs represent a leukemia-initiating seed population that is required for full disease penetrance.
As mCherry is not observed in LysM:Cdx2 MEPs, platelet numbers are not affected in these mice. In contrast, hypersegmented neutrophils are present in both LysM:Cdx2 and Scl:Cdx2 models, suggesting that Cdx2 expression within GMP cells is key to this phenotype. In addition, Cdx2 expression at the HSPC level is also seen to affect lymphoid lineages, highlighting the multipotent nature of Scl:Cdx2 cells. Emphasizing this, we observe a key regulatory role of aberrant Cdx2 on common hematopoietic developmental pathways.
BM characterization of moribund pre-leukemic Scl:Cdx2 mice shows a relative decrease in MEPs and a relative increase in GMP compared with controls. This phenotype may represent differentiation arrest at this cellular level and is consistent with reports from human high-risk MDS patients 57, such as MDS with excess blasts. Initially, we observed a modest increase in in vitro colony formation of Scl:Cdx2 BM but no immortalization. In competitive BM transplant assays, Scl:Cdx2 cells had cell-autonomous HSC self-renewal defects. These data indicate that Cdx2 leads to impaired clonogenicity, a trait that is similar to other animal models of MDS mutations 58.
Pathological changes in high-risk MDS cells may confer apoptotic resistance and provide growth and survival advantages, leading to leukemia progression 59 . This is consistent with the observation that apoptosis rates are elevated in low-risk MDS and decrease in high-risk MDS and AML 60,61 . We observe mild abnormalities in apoptosis induction in Scl:Cdx2 HSPCs, and these cells are prone to enhanced cycling. It is thought that arrest in G 1 ("proliferative quiescence") is critical for cell fate decisions 62 , and commitment to self-renewal or proliferation are determined in G 1 phase by G 1 cyclins 63 . Given the importance of maintaining the balance between self-renewal and differentiation in HSPCs and LSCs, thorough investigation of these aberrant processes in Scl:Cdx2 cells may identify key regulators of leukemia evolution.
We did not observe leukemia cooperativity between Scl:Cdx2 and Flt3 ITD/+ models. Transgenic single-mutant models of known oncogenes are frequently observed to be insufficient to drive leukemia alone. Advanced MDS and leukemic transformation have traditionally been challenging to model in animals. For this reason, studies into the use of azacitidine in MDS have largely come from primary patient samples 67. In transplant experiments of Scl:Cdx2 secondary leukemias, we find that Aza prolongs survival of mice compared with vehicle-treated controls, and Aza is preferentially toxic to Cdx2-mCherry-positive cells. Using dosing schedules comparable to CC-486 oral Aza regimens used in human clinical trials 50,51, Aza appears to be more effective and more specific for hypomethylating genes when administered on a lower-dose, extended schedule compared with a higher-dose, limited schedule. These preclinical findings warrant follow-up clinical trials, for example, through the use of extended schedules of oral Aza in patients with MDS who do not respond to standard Aza. Our data suggest that Aza alone is insufficient to deplete LSC, as all mice relapsed after 1-2 cycles of treatment. This is consistent with the clinical scenario, and it is likely that combination strategies (for example, with venetoclax 68) may be required to induce meaningful long-term remissions.
Altogether, this work characterizes a model of conditional Cdx2 expression that demonstrates transformation of normal HSPCs to MDS and AL in situ. Cdx2 alters HSPC identity and confers pre-leukemic progenitor cell characteristics, facilitating clonal evolution with important biological correlates of human leukemia. This model can be used to study the clinical effects of Aza, and demonstrates that prolonged, low doses of hypomethylating agents may increase specificity and efficacy of these agents against MDS and AML.
Methods
Animals and phenotypic analysis. Experimental animals were maintained on a C57BL/6J strain in a pathogen-free animal facility and procedures were approved by the QIMR Berghofer Animal Ethics Committee (A11605M). Mice were housed in clean cages with shredded tissue as nesting material, and environmental enrichment was provided as often as possible. Cages were maintained at an ambient temperature of 20-26°C on a 12 h light/dark cycle. LSL-Cdx2-mCherry mice were generated by TaconicArtemis. Flt3 ITD/+ mice 38 were obtained from Dr. Wallace Langdon, Perth. Scl-CreERT mice 22 were obtained from Dr. Carl Walkley, Melbourne. LysM-Cre mice were obtained from Jackson Laboratories. Azacitidine was dissolved in 0.9% saline by vortexing for 60 seconds and injected intraperitoneally within two hours. Any remaining solution was discarded after use due to the short-term stability of the drug. Peripheral blood (PB) was collected by retro-orbital venous blood sampling into EDTA-coated tubes and analyzed on a Hemavet 950 analyzer (Drew Scientific). PB smears were prepared and stained with Wright-Giemsa (BioScientific) according to the manufacturer's protocol. Twenty microliters of fresh PB was lysed with 1 mL Pharmlyse (BD Biosciences) and stained with B220, CD33, Gr1, Mac1, and c-Kit for 15-30 min at 4°C. Flow cytometric data collection was performed on a fluorescence-activated cell sorter LSRII Fortessa (BD Biosciences) with BD FACSDiva software (version 8.0.1) and analyzed using FlowJo (version 9.9.6). Flow cytometry antibodies were used at 1:100 dilution unless otherwise specified (Supplementary Table 5). BM cells were harvested by flushing femur and tibia bones. LKS+ (Lineage-low c-Kit+ Sca1+) cells were stained as previously described 69. In brief, cells were stained with a lineage cocktail comprising biotinylated antibodies (B220, CD3e, CD5, Gr1, Mac1, Ter-119). Cells were then stained with Streptavidin, c-Kit and Sca1.
Common myeloid progenitors (CMP), granulocyte-macrophage progenitors (GMP) and megakaryocyte-erythroid progenitors (MEP) cells were identified with the addition of CD34 and CD16/32. Short-term (ST-) and long-term hematopoietic stem cells (LTHSC) were stained with the addition of CD48 and CD150. Incubations were performed for 20-30 min at 4°C. For sorting, cells were purified using a FACSAriaIII (BD Biosciences). Cell cycle analysis was performed by staining cells with surface markers for LKS+ followed by fix and permeabilization according to the manufacturer's instructions (Fix & Perm kit, Thermo Fisher). Cells were stained with Ki-67 (B56) (1:100) in permeabilization buffer for 30 min at 4°C. Cells were washed and resuspended in PBS with Hoechst 33342 (20 μg/mL, Invitrogen) prior to flow cytometry analysis. Events were acquired at <1000 events/s. Apoptosis analysis was performed by staining cells for LKS+ markers and keeping incubation times to 15 min to minimize cell death. Washed cells were then stained with 2.5 μL Annexin V (Biolegend) in 50 μL Annexin V binding buffer (BD Biosciences) (1:20) for 15 min in the dark at room temperature. Cells were not washed and 250 μL of Annexin V binding buffer containing 0.25 μL of Sytox blue (Invitrogen) was added. Cells were analyzed by flow cytometry within one hour.
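As a quick sanity check of the staining arithmetic above (the `dilution_ratio` and `antibody_volume` helpers are illustrative, and the 100 µL staining volume in the second example is an assumed value, not from the protocol):

```python
# Sanity check of the staining dilution arithmetic in the protocol above.
# Helper names are illustrative; the 100 uL staining volume is assumed.

def dilution_ratio(stock_ul, final_ul):
    """1:N dilution factor when stock_ul is brought up to final_ul."""
    return final_ul / stock_ul

def antibody_volume(final_ul, factor):
    """Stock volume needed for a 1:factor dilution in final_ul."""
    return final_ul / factor

# Annexin V: 2.5 uL into 50 uL binding buffer, stated as 1:20
print(dilution_ratio(2.5, 50))    # -> 20.0

# A 1:100 antibody stain in an assumed 100 uL volume needs 1 uL of stock
print(antibody_volume(100, 100))  # -> 1.0
```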
Colony forming assay. BM cells were washed with PBS and seeded into 1 mL of methylcellulose (M3434; Stem Cell Technologies) in 35 × 10 mm dishes (Corning). 1 × 10 3 BM cells were plated in triplicate and cultured at 37°C. Colonies were counted after 7 days, prior to passage.
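Each CFC data point is reported as the mean of triplicate plates (see the figure legend earlier). A minimal sketch of that calculation with invented colony counts; the derived colonies-per-cell frequency is illustrative, not a reported result:

```python
# Mean colony counts over triplicate plates, as plotted in the CFC assay
# (each data point = mean of three plates). Counts below are invented.

def plate_mean(counts):
    """Mean colony count across replicate plates."""
    return sum(counts) / len(counts)

cells_plated = 1_000            # 1 x 10^3 BM cells per plate, per protocol

triplicate = [42, 38, 46]       # hypothetical day-7 colony counts
mean_colonies = plate_mean(triplicate)
cfc_frequency = mean_colonies / cells_plated

print(mean_colonies)   # -> 42.0
print(cfc_frequency)   # -> 0.042 colonies per cell plated
```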
Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
The RNA-Seq datasets generated and analysed during the current study are available in the GEO database (https://www.ncbi.nlm.nih.gov/geo) under the SuperSeries accession number GSE133829 (GSE133679, BM LKS four weeks after tamoxifen treatment; GSE133680, Cdx2-mediated AML treated with Azacitidine or vehicle; GSE133828, Cdx2-mediated acute leukemia BM LKS). Publicly available datasets published by Lara-Astiaso et al. 43 were obtained from the GEO database, accession numbers GSE60101 (RNA-Seq) and GSE59992 (ATAC-Seq). The publicly available dataset for Cebpa ChIP-Seq on mouse GMP was obtained from the GEO database, accession number GSM1187163.
The ChIP-Seq dataset generated during this study on Cdx2-FLAG-transduced mouse BM is available at the accession number GSE146598. The ATAC-Seq dataset performed on BM LKS four weeks after tamoxifen treatment can be accessed here: https://genome.ucsc.edu/s/JasminS/VU_2019_CDX2_ATAC. Whole exome sequencing (WES) datasets performed on Cdx2 mouse BM are available at the Sequence Read Archive (SRA) with accession number PRJNA552223. Whole exome sequencing data for Supplementary Fig. 3a and Supplementary Table 1 can be found in Supplementary Data 1. RNA-Seq data for Fig. 5a-h and Supplementary Fig. 5d-f can be found in Supplementary Data 2. RNA-Seq data for Fig. 7j-m, Supplementary Fig. 5f and Supplementary Fig. 7d-g can be found in Supplementary Data 5. All other data supporting the findings of this study are available within the article and its supplementary information files and from the corresponding authors upon reasonable request.
VARIABILITY OF RENDERING GERUND INTO UKRAINIAN IN LITERARY TEXT TRANSLATIONS
The article analyzes the use of the gerund and gerundial constructions in the literary text and ways of rendering them into the Ukrainian language. After studying the novel "The Great Gatsby" by F. S. Fitzgerald, the author identified and analyzed various forms of the gerund and defined their functions in the sentence. It is noted that the grammatical aspect of translation, which involves the reproduction of the formal side of the original, is currently a pressing problem in translation studies, as translation analysis of the text is impossible without taking into account the purely linguistic component. The author conducted a comparative analysis of the text of the original and its translations into Ukrainian. The research material was Ukrainian translations of the novel made by M. Pinchevskyi, O. Makrovolskyi, and A. Pekhnyk. It is emphasized that the method of translation of the gerund depends on its function in the sentence. In Ukrainian, the gerund can be rendered by a noun, a verb, an adverbial participle, or a subordinate clause, or can be omitted. In the novel, the gerund is mostly used in the function of an object, adverbial modifier or attribute and is translated by a noun or an infinitive. In most cases, the authors of the translations were unanimous in choosing the way of rendering the gerund in the recipient language. Translation of the English gerund into Ukrainian sometimes causes difficulties. One of the techniques that helps a translator is transformations. Since the gerund is a grammatical category, in the process of its translation grammatical transformations of inner partitioning and replacement of word order are used first of all. The noun characteristics of the gerund are the basis for lexical-semantic transformations of contextual and synonymous substitution, compression and decompression, permutation, transposition, descriptive and antonymous translation. The translations made by M. Pinchevskyi, O. Makrovolskyi, and A. Pekhnyk are full of various transformations, the application of which brings the text closer to the norms of the Ukrainian language and makes its perception easier for the Ukrainian reader.
Defining the problem and argumentation of the topicality of its consideration. As it is known, translation is of great importance in the process of intercultural communication. One of the most interesting aspects of this process is the translation of works of fiction. Of course, when translating novels and stories, it is first of all very important to convey the style of the author of the work. But no less important is the translation of certain grammatical structures.
The gerund and gerundial constructions are an extremely important and widely used grammatical phenomenon in the English language. The gerund is a non-finite form of the verb that has the properties of both a verb and a noun [4]. The gerund helps to avoid the use of cumbersome subordinate clauses and facilitates the creation of short and concise phrases, so in English the gerund is quite often used.
The main difficulties in their translation are, firstly, the absence in the target language of a form which would name the referent expressed by the gerund and, secondly, the need to preserve its stylistic colour alongside the denotative meaning. Difficulties in rendering this grammatical phenomenon in the Ukrainian language also arise from the probability of its double interpretation: it is necessary to decide whether we are dealing with a gerund or a participle.
The issue of the gerund and gerundial constructions translation due to its controversy is a great field for a variety of studies, as to give a complete and absolute list of methods of gerund translations is currently impossible.
Setting the goals and tasks of the article. The goal of the article is to study the translation of the gerund and gerundial constructions, the functions they perform in the sentence and the complex application of transformations in their translation, based on the novel "The Great Gatsby" by F. S. Fitzgerald.

Linguists' interest in the problem of translation transformations and their comprehensive study are already traditional in the course of translation theory and practice. Well-known linguists such as L. Barkhudarov, E. Breus, A. Fedorov, A. Hordieieva, L. Naumenko, V. Komissarov, J. Retsker, and many others have devoted their numerous articles and monographs to the study of translation transformations.
The analysis of different researches connected with the gerund is made in the works of Ukrainian scientists. Thus, V. Hryhorenko [5], M. Nahorna [8], and T. Besiedina [2] consider different aspects of gerund rendering into Ukrainian. V. Standret [11] conducts an analysis of the peculiarities of gerund translation based on the novel "Harry Potter" by J. K. Rowling. S. Ostapenko [10] analyses practical principles of gerund rendering using examples from "The Great Gatsby" by F. S. Fitzgerald.
It should be noticed that the novel "The Great Gatsby" and aspects of its translation into Ukrainian are in the center of attention of many scholars. O. Babenko and Ye. Marchukov [1] point at stylistic devices of rendering the character's psychological state, V. Smarovoz [12] researches ways of rendering the national and cultural identity of the American people, O. Borysova [3] studies metaphor reproduction, K. Matvieieva and O. Mikriukov [7] study lexical transformations, and Yu. Frolova [13] analyses the application of lexical and grammatical transformations in the process of translating F. S. Fitzgerald's novel.
The novelty of this research consists in the comparative analysis of the gerund rendering into Ukrainian and its variability based on three different translations of the novel "The Great Gatsby" by F. S. Fitzgerald made by M. Pinchevskyi, O. Makrovolskyi and A. Pekhnyk.
The outline of the main research material. The problem of researching gerund translation methods still remains open. This is due not only to the different views of translators on this issue, but also to the presence of numerous factors and nuances that play a significant role here.
The translation of the gerund, according to I. Korunets [6], depends on its function in the sentence. A gerund in the function of a subject is translated mainly by a verbal noun, while a gerund in its various functions of adverbial modifier is translated by different forms of a verb (including a participle) or a subordinate clause.
However, it should be noted that the decisive factor in the translation of the gerund is often not its function in the sentence, but its lexical meaning and ease of use of a Ukrainian grammatical form.
As it is known, the translation process is not a simple replacement of units of one language with units of another language. On the contrary, it is a complex process that involves many difficulties that the translator must overcome. One of the tools that helps a translator is transformations.
Since the gerund is a grammatical category, in the process of its translation, first of all, one can observe the use of grammatical translation transformations: inner partitioning and replacement [9].
The noun properties of the gerund are the basis for lexical-semantic transformations: contextual and synonymous substitution, descriptive and antonymous translation, compression and decompression, transposition and permutation [9]. Since there is no corresponding gerund form in the Ukrainian language, transformations are mostly of complex nature and are an inevitable process on the way to creating an adequate translation.
Let us consider some examples. As we can see, in all translations an infinitive is used for rendering the gerund into Ukrainian.
In the last example, according to S. Ostapenko [10], the author of the translation applied the transformation of generalization of meaning.

In the process of rendering the gerund into the Ukrainian language, the translator comprehensively applied inner partitioning, replacement and contextual substitution [10].

As one can see, O. Mokrovolskyyi translated the gerund with the help of an abstract noun, M. Pinchevskyi used descriptive translation and A. Pekhnyk applied contextual substitution.
Gerunds in the function of an adverbial modifier were rendered in the following ways: 1) by a finite form of the verb: I felt that Tom would drift on forever seeking (4) – Мені здавалося, що Том довіку блукатиме.
In the process of this sentence translating M. Pinchevskyi applied the transformations of inner partitioning and negativation.
But A. Pekhnyk translates the gerund in this phrase with the help of the infinitive, applying at the same time inner partitioning: замість того, щоб бути центром невсипущої світобудови (2).
While translating the gerund with an infinitive, M. Pinchevskyi and O. Mokrovolskyyi applied contextual substitution, and A. Pekhnyk applied adaptive transcoding.
In this sentence, the gerund watching was translated by an adverbial participle, and on the level of the whole sentence an inner partitioning was applied.
It should be mentioned that the Ukrainian infinitive is also used in the function of attribute by all the translators.
While working with the texts of the original and translations, only a few examples of the gerund in the function of a subject were spotted. To render them into Ukrainian, M. Pinchevskyi applied a noun (alongside nominalization), an infinitive and a finite form of the verb: …my rushing anxiously up the front steps was the first thing that alarmed any one (4) – …нервова квапливість моєї ходи стала для слуг першим сигналом тривоги (1).
Conclusions and perspectives of further research in this field. Having analyzed various forms of the gerund and gerundial constructions in the novel "The Great Gatsby" by F. S. Fitzgerald and its translations performed by M. Pinchevskyi, O. Makrovolskyi and A. Pekhnyk, we concluded the following. The Ukrainian language lacks the form of the gerund, which is rather often used in English. The gerund can be translated into Ukrainian by an infinitive, a noun, an adverb, or a predicative form of a verb (predicate) of a subordinate clause. After the verbs need, deserve, require, want and the adjective worth (while), the active form of the gerund can be used with a passive meaning.
The largest part of the sample consists of sentences with gerunds, which are translated into Ukrainian by infinitives, nouns and adverbial participles. Sentences that are translated by the predicative form of the verb also have a prominent place among the analyzed sentences.
The gerund used with the prepositions on, before, in, into, about, at, as well as after the verbs mind, start, finish, etc., is translated by an infinitive.
The gerund with the prepositions for, before, of, etc., as well as the gerund used in the sentence in the function of a direct object, is translated by a Ukrainian noun.
The gerund used with the prepositions in, by, without, after, as well as the gerund performing the function of an adverbial modifier (of manner, time, etc.), is translated by an adverbial participle.
Quite often we can observe that the gerund is translated into Ukrainian by the predicative form of the verb (the predicate) of a subordinate sentence.
One of the techniques that help a translator produce an accurate and concise translation is the use of transformations.
Thus, literary translation is freer than the translation of texts of other genres. The translator often departs from the direct transmission of the original in order to enhance the artistic and aesthetic effect. Of course, when choosing a method of translation, not only the idea of the author of the text plays a big role, but the views of the translator as well. The translator chooses one or another method relying on his translation instinct, based on knowledge and experience, so in most cases the last word belongs to the translator-practitioner. Based on the analysis, and by systematizing the material studied by many scholars, we have identified the most common options for translating the gerund and gerundial constructions, as well as its features and difficulties.
Realization of Fractance Device using Fifth Order Approximation
The realization of the Fractance device is an important topic of research for people working in fractional calculus, control systems, signal processing, and other allied fields. Having multifaceted applications, the realization of the device has gained importance over the past few years. The important step in realizing a Fractance device is finding the rational approximation that best fits its behavior. In this paper, the rational approximation is calculated using the continued fraction expansion formula. The rational approximation thus obtained is synthesized as a passive circuit using MATLAB. The active circuit is obtained by making use of an operational amplifier. The passive and active circuits are simulated using the TINA-TI software. The working of the proposed circuits is studied. It has been observed that the theoretical and simulated results match each other. General Terms: Continued Fraction Expansion, Rational Approximation, Fractional order systems, Fractance device
INTRODUCTION
Fractional calculus deals with differentiation and integration to an arbitrary order [1][2]. An equation containing fractional-order differentiation and integration is called a fractional order equation, and any system defined by a fractional order differential equation is called a fractional order system. Transmission lines, the diffusion of heat into solids, and PI^λD^µ controllers are some examples of fractional order systems [3,4]. Some possible applications of fractional calculus are discussed in detail in [3].
The Fractance device is also an example of a fractional order system. The device is defined by an impedance proportional to 1/s^α, where α is a fractional order. As the value of α changes, the behavior of the fractance changes: at α = 0 it behaves as a resistor, and as α changes from -1 to +1 its behavior changes from inductive to capacitive. It acts as a frequency-dependent negative resistor (FDNR) when α = -2 [9,10]. Since mixed behavior is present in a single element, the realization of the element has gained interest. For an ideal Fractance device the phase angle is constant, independent of the frequency of operation, and depends only on the value of the fractional order; the fractance device is therefore also called a constant phase angle device, or simply a fractor [11,12,13]. In [17], Karabi Biswas et al. proposed a commercially available fractance device and studied its operation and performance in a differentiator circuit. This was a preliminary attempt, and the commercially available device has yet to be investigated further. Fractional-order lowpass, highpass and other types of filters are also possible with the Fractance device [14,15,16,17]. Time-domain response calculations of the Fractance device are presented in [18].
The realization of fractance is possible with passive elements connected in different ways. A Fractance device can be of tree type or chain type. M. Nakagawa and K. Sorimachi proposed a self-similar tree-type circuit with resistors and capacitors [34]. Oldham and Spanier realized a fractance circuit using N pairs of RC elements connected as a chain network [16]. Recently, a net-grid-type circuit for the realization of the fractance device has been proposed [16].
The key point in the realization of a fractance device is finding the rational approximation of the fractional order operator. Many procedures exist for calculating rational approximations in both the time and frequency domains. The Oustaloup, Carlson, and Matsuda approximations are some of the prominently used techniques [5,6,7,19,21,22]. Although several approximations are possible, each has its own advantages and disadvantages. In 2011, a rational approximation using the continued fraction expansion method was proposed in the literature [21], where the rational approximation for α = 1/2 was presented and realized using operational amplifiers. However, realizations for α = 1/3, 1/4 and other fractional orders have not yet been reported. In this paper an attempt is made to realize the fractional order operator, or fractance, for α = -1/3 and -1/4, and its operation is verified in both the time and frequency domains. The SPICE-based simulator TINA is used for the simulations.
The paper is organized as follows. In Section 2, the derivation of the rational approximation is presented. The realization using partial fraction expansion is presented in Section 3. Section 4 deals with the active realization of the circuit using the TINA-TI software. Finally, conclusions are drawn in Section 5.
RATIONAL APPROXIMATION
In this section, the rational approximation proposed in [21] is studied for various values of the fractional order α. The approximation is based on the continued fraction expansion (CFE) of (1+x)^α, given as [23,21]

(1+x)^α = 1 / (1 - αx/(1 + (1+α)x/(2 + (1-α)x/(3 + (2+α)x/(2 + (2-α)x/(5 + ···))))))   (1)

This continued fraction expansion converges in the finite complex plane cut along the negative real axis from -1 to -∞. Truncating the expansion to two terms yields a first-order rational approximation; to cover a wider range of operating frequencies, higher-order approximations must be chosen. Substituting x by (s - 1) and taking the first ten terms of the series, we obtain the truncated fifth-order rational approximation of s^α, Eqn. (2), a ratio of two fifth-degree polynomials whose coefficients are functions of α. It should also be mentioned that Eqn. (2) has been shown in [19] to be more efficient than the Oustaloup approximation for a fractional order of 1/2.
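As a concrete illustration of this procedure, the sketch below builds the truncated CFE symbolically and extracts the rational approximation of s^α. The partial-quotient pattern is the classical Khovanskii-type expansion of (1+x)^α, assumed here to match the form used in [21]; this is a hedged reconstruction, not the paper's own code.

```python
import sympy as sp

def s_alpha_cfe(alpha, n_terms=10):
    """Truncated CFE of (1+x)**alpha evaluated at x = s - 1, giving a
    rational approximation of s**alpha.  Assumed Khovanskii pattern:
      odd  j: c_j = ((j-1)/2 - alpha)*x, d_j = j
      even j: c_j = (j/2 + alpha)*x,     d_j = 2
    with (1+x)**alpha ~= 1/(1 + c1/(d1 + c2/(d2 + ...)))."""
    s = sp.symbols('s')
    x = s - 1
    cf = sp.Integer(0)
    for j in range(n_terms, 0, -1):              # evaluate bottom-up
        if j % 2:
            c, d = (sp.Rational(j - 1, 2) - alpha) * x, j
        else:
            c, d = (sp.Rational(j, 2) + alpha) * x, 2
        cf = c / (d + cf)
    return sp.fraction(sp.cancel(1 / (1 + cf)))  # (numerator, denominator)

# Ten terms of the CFE give a 5th/5th-order rational fit, as in the paper:
num, den = s_alpha_cfe(sp.Rational(1, 2))
```

For α = 1/2 the result is fifth-order in both numerator and denominator, and evaluating num/den at s = 2 reproduces 2^(1/2) to well below 0.1%, consistent with the rapid convergence of the truncated expansion.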
The values of the coefficients for various values of α are tabulated in Table 1; α is varied from 0.1 to 0.9 in steps of 0.1. Selecting α = -1/3 and -1/4 gives the corresponding rational approximations. The magnitude and phase responses for α = -1/3 and -1/4 are shown in Figs. 1, 2 and 3. From the figures it can be observed that the Fractance device works well in low-frequency regions. In this paper, the rational approximations obtained for α = -1/3 and -1/4 have been chosen for the realization of the Fractance device.
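The constant-phase behaviour claimed above can be checked numerically by evaluating the truncated CFE on the imaginary axis. The snippet below is a hedged sketch: the CFE partial-quotient pattern is the classical Khovanskii form, assumed to match [21], and the check point ω = 1 rad/s is an illustrative choice.

```python
import numpy as np

def s_alpha_cf(s, alpha, n_terms=10):
    """Numerically evaluate the truncated CFE of (1+x)**alpha at x = s - 1
    (assumed Khovanskii partial-quotient pattern; a sketch, not the
    paper's code)."""
    x = s - 1
    cf = 0.0
    for j in range(n_terms, 0, -1):              # evaluate bottom-up
        if j % 2:
            c, d = ((j - 1) / 2 - alpha) * x, j
        else:
            c, d = (j / 2 + alpha) * x, 2
        cf = c / (d + cf)
    return 1 / (1 + cf)

# Constant-phase check for a fractance with alpha = -1/3 at w = 1 rad/s;
# the ideal phase of s**(-1/3) is -30 degrees at every frequency:
w = 1.0
z = s_alpha_cf(1j * w, -1 / 3)
phase_deg = np.degrees(np.angle(z))
```

Near ω = 1 the tenth-order truncation tracks the ideal unit magnitude and -30° phase closely; the approximation degrades away from the fitted region, which is why the text restricts the device to low frequencies.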
REALIZATION USING PARTIAL FRACTION EXPANSION
The rational approximations obtained in the previous section need to be realized using basic elements such as resistors and capacitors. To do this, the MATLAB built-in function residue() is used, which expands the transfer function into a direct term plus a sum of first-order partial fractions; the corresponding element values R_a, R_b, C_b, … are then calculated with a MATLAB program. For the fifth-order approximation the number of RC sections is 5. The passive circuit realization of the 5th-order transfer function is shown in Fig. 4.
Fig. 4. Passive realization of the 5th-order approximation
The values of the resistances and capacitances for various values of α in steps of 0.1 are tabulated in Table 2. The values of resistances and capacitances for α = -1/3 and -1/4 are tabulated in Table 3.
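The same partial-fraction synthesis can be sketched outside MATLAB; the snippet below mirrors residue() with scipy and maps each first-order term onto a parallel R-C section. The toy impedance used for illustration is an assumption, not a transfer function from the paper.

```python
import numpy as np
from scipy.signal import residue

def foster_rc(num, den):
    """Expand Z(s) = k + sum r_i/(s - p_i) and map each term with p_i < 0
    onto a parallel RC section: r/(s - p) = R/(1 + sRC) with C = 1/r and
    R = -r/p.  The direct term k is a series resistor R_a."""
    r, p, k = residue(num, den)
    Ra = float(k[0]) if len(k) else 0.0
    sections = [(-ri.real / pi.real, 1.0 / ri.real) for ri, pi in zip(r, p)]
    return Ra, sections                          # [(R_i, C_i), ...]

# Toy impedance Z(s) = (s + 2) / (s^2 + 4s + 3) = 0.5/(s+1) + 0.5/(s+3)
Ra, secs = foster_rc([1, 2], [1, 4, 3])          # two RC sections, Ra = 0
```

Each (R_i, C_i) pair corresponds to one parallel-RC branch of the Fig. 4 ladder; a fifth-order approximation yields five such sections plus the series resistance.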
ACTIVE REALIZATION
The circuit used for the active realization is shown in Fig. 7.
Here F stands for the Fractance device, and two operational amplifiers are used. The first operational amplifier introduces a phase inversion, which is cancelled when the signal passes through the inverting amplifier connected after it. The value of R_in should be small, and R is selected as 1 kΩ.
CONCLUSIONS
Active and passive realizations of the Fractance device for the fifth-order rational approximation at α = -1/3 and -1/4 are presented in this paper. First, the fifth-order rational approximation is calculated using the continued fraction expansion formula. Next, the passive circuit is realized using the MATLAB residue function. The realized passive circuit is then converted into an active one by making use of an operational amplifier.
Figs. 1-3 present the magnitude and phase responses for the fractional orders -1/3 and -1/4. It can be observed that the Fractance device works well at low frequencies. The outputs of the realized passive circuits are shown in Figs. 5 and 6. From the graphs it is evident that a square waveform is converted into a ramp signal, and that there is a phase shift for a sinusoidal input. The passive circuit is converted into an active one by making use of an operational amplifier. Figs. 8, 9 and 10 represent the output waveforms for different excitations and the Bode plot. Fig. 10 is similar to the magnitude and phase response obtained theoretically in Fig. 3. It can be observed that the experimental and theoretical results match each other.
ACKNOWLEDGMENTS
This work was carried out with the support of DST project SERB No. SB/FTP/ETA-048/2012, dated 06-01-2017. The author thanks the sponsoring agency for its support. The author also acknowledges the university authorities of Jawaharlal Nehru Technological University Kakinada, Kakinada, Andhra Pradesh, India, for providing the necessary facilities to carry out this work.
Deep learning-based computer vision to recognize and classify suturing gestures in robot-assisted surgery
Our previous work classified a taxonomy of suturing gestures during a vesicourethral anastomosis of robotic radical prostatectomy in association with tissue tears and patient outcomes. Herein, we train deep learning-based computer vision (CV) to automate the identification and classification of suturing gestures for needle driving attempts. Using two independent raters, we manually annotated live suturing video clips to label timepoints and gestures. Identification (2395 videos) and classification (511 videos) datasets were compiled to train CV models to produce two- and five-class label predictions, respectively. Networks were trained on inputs of raw RGB pixels as well as optical flow for each frame. Each model was trained on 80/20 train/test splits. In this study, all models were able to reliably predict either the presence of a gesture (identification, AUC: 0.88) as well as the type of gesture (classification, AUC: 0.87) at significantly above-chance levels. For both gesture identification and classification datasets, we observed no effect of recurrent classification model choice (LSTM vs. convLSTM) on performance. Our results demonstrate CV's ability to recognize features that not only can identify the action of suturing but also distinguish between different classifications of suturing gestures. This demonstrates the potential to utilize deep learning-based CV towards future automation of surgical skill assessment.
Introduction
Growing evidence supports that superior surgical performance is associated with superior clinical outcomes.1,2 Yet how we presently assess surgery, namely manual evaluation by peers, is fraught with subjectivity and is not scalable.3,4 Tremendous work has already been done to better assess surgeon performance during robot-assisted surgeries. For example, with suturing, the robotic anastomosis competency evaluation (RACE) has been developed to streamline technical skills assessment with objective criteria for each suturing skill domain.5 Yet even with such a rubric, manual assessment of, and feedback on, every suture performed by a training surgeon is not feasible. Our group previously deconstructed suturing in a clinically meaningful manner into 3 phases (needle position, needle driving, and suture cinching; Fig 1), and further developed a classification system for suturing gestures to standardize the training and assessment of robot-assisted suturing (Fig 2).6 We have demonstrated that surgeon selection of gestures at specific anatomic positions during the vesico-urethral anastomosis (VUA) of the robot-assisted radical prostatectomy (RARP) is linked to surgeon efficiency and clinical outcomes (i.e., tissue tear).6
We have also demonstrated that when surgeons are instructed on what specific gesture to utilize during the VUA, they are able to shorten the learning curve for this step of the RARP.7
Computational approaches have already been tapped towards the goal of recognizing and evaluating surgical gestures. Classical computer vision techniques,8 as well as recurrent models using kinematics,9 have been employed previously with modest success. In recent years, neural networks for extracting information from video data have made tremendous strides.10,11 Indeed, some groups have started to apply such deep learning approaches to commonly available datasets such as the JIGSAWS suture classification dataset.12 While these prior works have been largely limited to the well-controlled laboratory environment, live application of computer vision-based identification and classification of suturing gestures will ultimately determine the real-world utility of such technology.
Herein, we utilize deep learning-based computer vision to 1) identify suture needle driving activity during live robot-assisted surgery; 2) classify suturing needle driving gestures based on a clinically validated categorization we previously described.
Methods
In this study, we set out to characterize commonly used architectures employed in action recognition towards the goal of recognizing and classifying surgical stitches. To undertake this study, we started by generating two complementary datasets for training models from videos of a live VUA during a RARP to identify when a suturing gesture is happening (gesture identification) and what gesture is happening (gesture classification). Using annotated video data from a previous study,6 we generated a dataset of short clips corresponding to moments of "needle driving" (Fig 1b) (positive samples) and short clips of non-needle-driving surgical activity (negative samples). This dataset, which we call the "identification dataset", contained 2,395 total video clips (1,209 positive; 1,186 negative) with an average duration of 12.2 seconds. For gesture classification, we generated a dataset of 511 total clips to distinguish five selected gestures from our established taxonomy (Fig 2). These five were selected based on adequate sample size per class (Gesture 1: 150 samples; 2: 101; 3: 96; 4: 117; 5: 47). The clips had an average duration of 6.6 seconds, and each one was manually labeled by two independent trained annotators. We refer to this dataset as the "classification dataset".
The computational task of identifying actions from video inputs is commonly known in computer vision as action recognition. Although a challenging problem, neural networks have recently shown promise in their ability to reason from such spatiotemporal data. The most common example of such networks is the so-called "two-stream network", in which the network takes two streams of inputs as features: the raw RGB pixels of the video as well as an optical flow representation in which the momentary direction and magnitude of motion are defined at each pixel (Fig 3). These inputs are usually passed through a standard feature extractor (usually a deep network), and the representations produced by these networks are further passed into a temporally recurrent classification layer, usually some flavor of a long short-term memory unit (LSTM13). In practice, one can add complexity or inductive biases to the recurrent classification layer, for example by making it convolutional (convLSTM14), which may aid performance and training time. In this work, we explore specific hyperparameter choices in this framework for the recurrent classification model (Figure 2). For the comparisons presented here we chose a fixed 7-layer network (AlexNet15), initialized from weights trained on a large corpus of natural images (ImageNet). We vary the recurrent classification layer (LSTM, convLSTM) in our experiments.
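A minimal sketch of this feature-extractor-plus-recurrent-classifier design is shown below. To keep it self-contained, a small convolutional stack stands in for the pretrained AlexNet feature extractor, only the RGB stream is shown (the optical-flow stream is omitted), and a plain LSTM is used for the recurrent head; none of this is the authors' actual implementation.

```python
import torch
import torch.nn as nn

class GestureClassifier(nn.Module):
    """Per-frame CNN features aggregated by an LSTM, then a linear head
    that emits gesture-class scores (an illustrative stand-in for the
    AlexNet + LSTM/convLSTM pipeline described in the text)."""
    def __init__(self, n_classes=5, feat_dim=64, hidden=128):
        super().__init__()
        self.features = nn.Sequential(            # per-frame feature extractor
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clips):                     # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        f = self.features(clips.flatten(0, 1))    # (B*T, feat_dim)
        out, _ = self.rnn(f.view(b, t, -1))
        return self.head(out[:, -1])              # classify from last state

model = GestureClassifier()
scores = model(torch.randn(2, 8, 3, 64, 64))      # 2 clips of 8 frames each
```

Swapping the LSTM for a convLSTM, as the paper does, would mean keeping spatial feature maps (rather than pooled vectors) as the recurrent state.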
Using the two curated datasets as our starting point, we set out to evaluate commonly used deep learning architectures used in action recognition for the task of identifying when (identification) and what (classification) suturing gestures happened. Taken together, we hope this work serves as a preliminary demonstration of a potential approach towards merging the latest research in deep learning with the identification, classification, and potential evaluation of surgical skills to improve patient outcomes.
Results
We started by training a model to identify whether short clips contained "needle driving" (positive label) or did not contain such an action (negative label), using the annotated identification dataset. We trained all models on three 80/20 train/test splits, using the hyperparameters shown in Table 1, and report AUC and accuracy in Figure 4. We observed significantly above-chance values for both accuracy (79%) and AUC (0.88) on the identification task; however, we found no effect of the recurrent classification model on performance.
We then trained a model on the classification dataset to output gesture-type probabilities over the 5 selected gestures (Figure 2). We varied the same hyperparameters as before (classification layer) and found that, as in the identification task, there was no effect of the specific type of classification model. We do, however, note that convolutional versions of the LSTM (convLSTM) reached convergence in fewer epochs than their LSTM counterparts (data not shown). In this classification task, we achieved an average first-guess (top-1) accuracy of 62% for the models trained. Additionally, we also maintained a high AUC (0.87), indicating that the model does not take a biased approach to the classification task to achieve good results. This is further evident in the confusion matrix in Figure 5, where a strong diagonal is present, indicative of reasonable performance in all classes.
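The reported metrics can be reproduced in form (not in value) with standard tooling: top-1 accuracy is the argmax hit rate, and the multi-class AUC is the macro average of one-vs-rest AUCs. The data below are synthetic placeholders, not the study's predictions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 5, size=200)            # synthetic gesture labels
logits = rng.normal(size=(200, 5))
logits[np.arange(200), y_true] += 2.0            # make predictions informative
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

top1 = float((probs.argmax(axis=1) == y_true).mean())   # top-1 accuracy
auc = roc_auc_score(y_true, probs, multi_class="ovr", average="macro")
```

A strong diagonal in the confusion matrix, as in Figure 5, corresponds to a top-1 accuracy well above the 20% chance level for five balanced classes.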
Discussion
In summary, we present a novel annotated dataset for the study of suturing gestures in the context of a robot-assisted surgical procedure. We produced annotations for two types of tasks: one dataset with clips annotated for when "needle driving" is present (the gesture identification dataset) and another labeled with gesture clips and their corresponding type according to the presented taxonomy (the gesture classification dataset). We further show that standard deep network approaches, commonly used in action recognition, can be used to train models that achieve promising performance on both tasks.
The results presented here in many ways represent a conservative estimate of the performance that can be achieved with these models. We are training in a relatively data-limited regime in both tasks, so these models will further improve as more labeled data becomes available. In addition, we have not yet employed any inference "tricks", such as ensembling or majority votes, commonly used in action recognition models.10,16 Our present study is foundational to future work on automating technical skills evaluation. Having completed the first steps of identifying and classifying suturing gestures, we will transition to evaluating how well a suture is executed. Part of how well suturing is performed is simply gesture selection at specific anatomic positions,6 which the present study can help streamline. But suturing performance also depends on the actual technical skill of the surgeon in carrying out the maneuver, and the models we develop in this study hold promise for such automatic evaluation as well.
On a higher level, our present work is foundational not only for the evaluation of suturing, but it also builds the starting point for eventual autonomous suturing. Such future platforms must first be capable of recognizing and assessing ideal suturing skills before becoming capable of performing them autonomously.

Fig 2 (caption): The full classification system is presented here, derived from our prior work.6 Boxed gestures refer to those evaluated for our classification task in the present study.

Fig 4 (caption): Average model performance across three 80/20 train/test splits of the dataset, broken down by task. Models were trained either to predict whether or not a gesture was happening (identification) or to identify the type of gesture being performed in a clip (classification). We vary the recurrent model (LSTM, convLSTM). For the 5-way classification in the gesture classification task, AUC represents the average of one-vs-rest across classes and accuracy represents top-1 accuracy.
The Contribution of Dietary Magnesium in Farm Animals and Human Nutrition
Magnesium (Mg) is a mineral that plays an essential role as a cofactor of more than 300 enzymes. Mg in farm animals' and human nutrition is recommended to avoid Mg deficiency and to ensure adequate growth and health maintenance. Since the performance of farm animals has grown in recent decades, Mg supplementation above the estimated minimum requirements is the best practice to improve farm animals' performance (fertility and yield) and food product quality. Mg supplementation in pigs increases meat quality and sows' fertility; in poultry, it helps to avoid deficiency-related health conditions and to improve meat quality and egg production by laying hens; in dairy cows, it serves to avoid grass tetany and milk fever, two conditions related to hypomagnesaemia, and to support growth. Thus, Mg supplementation increases food product quality and prevents Mg deficiency in farm animals, ensuring an adequate Mg content in animal-source foods. The latter are excellent Mg sources in human diets. Sub-optimal Mg intake by humans has several implications for bone development, muscle function, and health maintenance. This review summarizes the main knowledge about Mg in farm animal and human nutrition.
Introduction
The average content of Mg in the body of most animals is ~0.4 g Mg per kilogram of body weight [1]. In the human body, the total Mg concentration is around ~20 mmol/kg of fat-free tissue. This value corresponds to ~24 g of total Mg in an average 70 kg adult with 20% (w/w) fat [2,3]. In comparison, the body content of calcium is ~1000 g (i.e., 42 times greater than the body content of Mg) [4]. Assuming that a similar relationship exists for other mammals, the total body Mg2+ of a cow with a body weight of 700 kg should be roughly 455 g, of which approximately 320 g would be skeletal (approximately 60-70% of Mg is located in the skeleton), about 130 g intracellular, while only about 4-5 g would be found in the total extracellular space (i.e., 35% is distributed in soft tissue and extracellular fluid) [4,5]. For the same cow, the calcium content is between 7 and 9.6 kg, that is, ~21 times greater than the body content of Mg.
Mg is important for many functions in the animal body, and its deficiency results in several dysfunctions. Accordingly, as for humans, Mg requirements and recommendations have also been defined for farm animals.
In light of this, the aims of the present review are to: (i) provide an overview of Mg requirements and recommendations in farm animals; (ii) describe the main effects of Mg supplementation.

Table 1. Production efficiency trend: feed conversion rate (FCR), kg feed per kg of animal product. Adapted from [11,12].

As for other farm animal species, Mg is a key dietary element, essential for animal growth and survival. Notably, it has essential functions in cellular metabolism and bone development [2,13]. In terms of supplementation, oxide, carbonate and sulphate are all sources of highly available Mg for farm animals [14]. Generally, Mg oxide (MgO) is the most used and the highest-Mg-concentration mineral source available as an animal feed ingredient (Table 2). Magnesium oxide usually guarantees an adequate absorption of Mg ions. Not all sources of MgO are equal to the task of efficiently providing the necessary Mg2+ ion amount to a living organism. Solubility, reactivity, and bioavailability are all characteristics that differ from one MgO product to another [4]. Mineral feed bioavailability also differs: for example, the average Mg bioavailability of magnesium oxide, compared to magnesium phosphate, is around 20 vs. 45% [15].

Table 2. Mg content of mineral supplements (columns: Mg source; Mg content, g/100 g). Adapted from [15].

The recommendations of the National Research Council (NRC) for different farm species are as follows: 400 mg/kg Mg dry matter (DM) for pigs [16], 500 mg/kg Mg DM for broilers, turkey poults and laying hens (with a food intake of 100 g/day) [17]. A different scenario exists for ruminant animals (beef and dairy cattle, sheep, and goat). Insufficient absorption or availability of Mg in ruminants leads to Mg deficiency, which manifests in clinical signs such as tetany (grass tetany) or parturient paresis (milk fever). Intuitively, excessive Mg supplementation also has some detrimental effects. In farm animals, diarrhea is the most obvious effect of high Mg intake. Very high dietary Mg intake (e.g., about sevenfold the minimum requirement for pigs) [18] can reduce feed consumption and weight gain.
However, comparing the quantities of Mg recommended for each species per kg of metabolic weight (body weight^0.75; Table 3), it is evident that the quantities recommended for pigs and poultry are higher than those for ruminants. These differences may depend on several factors linked to the animals and their diets. Poultry and pigs are omnivorous species with very fast growth rates, reaching in modern breeds 100 g and 1 kg/day, respectively. These figures speak for themselves: such growth performance requires a great deal of energy and nutrients, including minerals. The cow, considered the reference ruminant in the present work, is an adult herbivorous animal in which Mg absorption and metabolism, starting from the rumen, are different and in which the main output is in milk. The lower values reported for the cow probably explain its sensitivity to Mg deficiency, especially at the onset of lactation (e.g., milk fever). By contrast, the quantities recommended for humans (see below) are enough to reach an adequate steady-state condition in a typical adult male (maintenance).
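The metabolic-weight normalization used above is simple arithmetic: divide the daily Mg intake by body weight raised to the power 0.75. The body weights, feed intakes, and dietary concentrations below are illustrative assumptions loosely based on the NRC figures quoted in the text, not the values of Table 3.

```python
def mg_per_kg_metabolic_weight(body_weight_kg, daily_mg_intake_mg):
    """Daily Mg intake expressed per kg of metabolic weight (BW**0.75)."""
    return daily_mg_intake_mg / body_weight_kg ** 0.75

# Assumed examples: a 100 kg pig eating 2.5 kg DM/day of a 400 mg/kg DM
# diet, and a 2 kg broiler eating 100 g DM/day of a 500 mg/kg DM diet.
pig = mg_per_kg_metabolic_weight(100, 2.5 * 400)      # ~31.6 mg per kg^0.75
broiler = mg_per_kg_metabolic_weight(2, 0.1 * 500)    # ~29.7 mg per kg^0.75
```

Expressing intakes this way removes most of the body-size effect, which is what allows the cross-species comparison the text draws between monogastrics and ruminants.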
Mg Supplementation in Pig Nutrition
The minimum Mg requirement for pigs receiving a purified diet is 325 mg/kg DM and, according to the NRC [16], 400 mg/kg DM is recommended. Higher supplementation levels (400-500 mg/kg DM) have been reported for optimum growth and reproductive performance in pigs. Thus, a dietary intake of 400 mg/kg is considered sufficient, and 500-650 mg/kg Mg is recommended for pigs. The demand for Mg also increases in proportion to the protein content of the diet [15]. Deficiency symptoms in pigs include a strong response of the nervous system (hypersensitivity, anxiety, fear), muscle contractions, and a drop in productivity (a slower growth rate because of loss of appetite). The kidney is the major site of Mg homeostasis; it is able to excrete Mg at high dietary concentrations and to reabsorb Mg with greater efficiency at low dietary concentrations.
In terms of sources, Mg can be found in several feeds, such as green forage, animal-derived feed, and mineral supplements. Feed ingredients like wheat bran, dried yeast, linseed meal, and cottonseed meal are good sources of Mg. The average Mg content (g/kg DM) in cereals, oil meals and fish meals is 1.1-1.3 g, 3.0-5.8 g, and 1.7-2.5 g, respectively [15]. However, when Mg digestibility is considered, these figures must be reconsidered: in common pig feeds only 20 to 30% is digestible [18]. For this reason, supplements like MgO are commonly used in pig formulas. As in other non-ruminant animals (pigs and poultry), Mg is absorbed primarily in the small intestine, at an efficiency of approximately 60%, mostly via passive transport. At this site, potassium, calcium and ammonia are its antagonists [15].
The Effects of Mg on Meat Quality
In pigs, the nutritional regime is one of the key environmental factors affecting fattening results, farm financial return, and meat quality. Dietary Mg supplementation in pigs has generally been ineffective for increasing the growth of fattening pigs (average daily gain), but has been observed to improve pork quality [18], specifically colour and drip loss [19].
Colour is one of the most important meat quality characteristics. It is a visual element that depends on the presence of pigments and on the tissue composition and texture of the meat. There is a correlation between meat colour and muscle pH: changes in meat colour are, in 50% of cases, determined by pH values measured 24 h post-harvest. Meat appearance is positively affected by nutritional factors, such as vitamin C, vitamin E, selenium, and Mg content. In post-harvest processes, muscle glycogen is converted into lactic acid and the pH of the meat decreases, which can lead to the Pale Soft Exudative (PSE) meat defect. PSE is a condition that usually occurs during the conversion of muscle to meat. It has been documented mostly in pork carcasses, though it is also reported in other species. The typical pH in pork would be 6.5-6.7, with a temperature of 37 °C, at 45 min post-mortem. In abnormal carcasses, however, the pH may drop to 6.0 in the same time period. In this latter case, the combination of rapidly decreasing pH and high carcass temperature results in the denaturation of some of the contractile proteins, with a consequent loss of water-holding capacity (drip loss). Denatured proteins are not capable of holding or binding muscle water as well as fully native proteins; more specifically, the length of the myosin filaments decreases by 8-10% during this process. PSE meat is usually pale in colour, wet in appearance, and very soft in texture, making PSE one of the major quality defects in the meat industry [20]. This defect reduces consumer acceptability, shelf life, and meat yield, thus affecting profits tremendously. To cope with this problem, it has been shown that Mg inhibits stress-induced glycolysis, thereby improving meat quality [21,22]. That is why the addition of Mg to finisher diets has been found to reduce the incidence of PSE meat, which can affect from 15 to 50% of carcasses.
Therefore, short-term administration of this mineral can decrease drip loss and improve meat colour by 3.6 to 6.6%. Specifically, Mg improves colour stability and reduces drip loss.
Mg supplementation is a relatively easy method of improving pork quality [18]. Animal diets can be supplemented with organic (proteinate, aspartate) or inorganic (oxide, sulphate, chloride, phosphate) forms of Mg. Adding Mg to drinking water is a good way to obtain this effect: the administration of 600 mg of Mg per litre of water for two days before slaughter (short term) has also been found to be effective [23].
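As a back-of-the-envelope check on the drinking-water route, the daily Mg dose is simply concentration multiplied by water intake. A minimal sketch in Python; the 8 L/day water intake for a finishing pig is an assumed figure for illustration, not a value from the text.

```python
def mg_from_water(mg_conc_mg_per_l, water_intake_l):
    """Daily Mg dose (mg) delivered through supplemented drinking water."""
    return mg_conc_mg_per_l * water_intake_l

# 600 mg Mg/L for two days pre-slaughter [23]; 8 L/day intake is assumed
daily_dose = mg_from_water(600, 8.0)
two_day_total = daily_dose * 2  # total Mg over the short-term period
```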
Mg for Sows
The reproductive performance of high producing sows has increased dramatically over the past decades, which may contribute to changes in their nutritional requirements. It has been shown that Mg supplementation improves the conception rate of sows by 11-15% [10]. Moreover, supplementation significantly reduces the weaning-to-oestrus interval in gilts and increases the total number of piglets born, born alive, and weaned. This increase is particularly evident for sows fed 150-300 mg/kg of supplemental Mg (the basal diet contains 210 mg/kg of Mg). The improvement in sows' performance may be related to a reduced incidence of constipation, which has been shown to negatively affect the reproductive performance of sows.
In addition, increased Mg levels in sows' lactation diets are reflected in the Mg concentration of colostrum, as well as in the serum of piglets. This has been recently reported by Zang et al. [10], who showed that increasing the Mg content of sows' lactation diets can increase not only the concentration of Mg in colostrum, but also the serum Mg concentration of suckling piglets. These results highlight the role of the maternal diet in defining piglets' nutritional status (e.g., their Mg status).
However, these effects observed in sows appeared to be age-related, which may be due to depleted body stores of minerals in high producing sows as they age [24]. Therefore, it is possible that, as the sows age, Mg stores in their body decline, increasing the reliance on the diet to provide it. In addition, dietary Mg supplementation positively affects pork quality by enhancing meat colour and reducing drip loss.
Mg supplementation also improves sows' fertility (e.g., conception rate) and helps during pregnancy in controlling constipation problems. Furthermore, the increase in dietary Mg in lactating sows leads to the increase in both Mg colostrum content and Mg serum content of suckling piglets (i.e., their Mg status).
Mg Supplementation in Poultry Nutrition
The minimum Mg requirement for broilers, turkey poults, and laying hens is around 500 mg/kg DM, according to the NRC [17]. Mg requirements in poultry are affected by growth rate and reproductive performance [6], but supplementation is usually suggested only after the third week of age, to avoid leg bone malformation. After this phase, Mg supplementation is recommended especially to prevent deficiency. Indeed, Mg deficiency in avian species can lead to serious biochemical and symptomatic changes: in young poultry (older than 3 weeks), for example, it has been observed to cause poor body growth and feathering, decreased muscle tone, incoordination, squatting, fine palpable tremors, convulsive attacks, coma, and ultimately death [7]. In laying hens the symptoms differ: reduced egg production, decreased feed consumption, nervous tremor, and seizures are the most commonly reported deficiency signs. By contrast, adequate Mg supplementation in poultry exerts beneficial effects, increasing the weight gain and meat quality of broilers and the egg production of laying hens. The influence of increased Mg levels fed to parent stock on progeny performance is another area of interest: supplementation of parent stock breeders with Mg (up to 500 mg Mg/day) positively affects egg quality and hatchability [4,6]. Recent results also showed that MgO supplementation improved FCR and skeletal integrity [4,7,25] and exerted a positive effect on pullet skeletal development, body weight, and the onset of egg production [26].
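The 500 mg/kg DM figure is a concentration, so the Mg actually supplied scales with feed intake. A minimal sketch of the unit conversion; the 120 g DM/day broiler feed intake is an illustrative assumption, not a value from the text.

```python
def mg_supplied_mg_per_day(feed_intake_g_dm, diet_mg_mg_per_kg_dm):
    """Mg supplied (mg/day) given feed intake (g DM/day) and diet Mg (mg/kg DM)."""
    return feed_intake_g_dm / 1000.0 * diet_mg_mg_per_kg_dm

# A broiler eating ~120 g DM/day of a diet at the 500 mg/kg DM minimum
supplied = mg_supplied_mg_per_day(120, 500)  # about 60 mg Mg/day
```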
Interaction with Ca and P
Mg metabolism is closely associated with Ca and P, two minerals important for laying hens that affect productive performance and eggshell quality. The use of Ca and P compounds appears to be determined largely by the relative proportions in which these elements and Mg are present in the ration. The commercial diet of chickens younger than 3 weeks of age should not be supplemented with Mg, as this leads to leg bone malformation and the development of perosis-like symptoms. An antagonistic relationship also seems to exist between Ca and Mg in relation to skeletal integrity and eggshell quality in laying hens: an increased dietary Mg supply in laying hens, although not affecting Ca retention, reduces eggshell Ca content and bone Ca content, whereas shell Mg content is increased [7]. The variety of mechanisms related to the Mg-Ca interaction demonstrates the need for close regulation of any variation in the Mg level of poultry diets. Nutritionists today strive to optimise the P content of poultry diets because of the high cost of P supplements, the finiteness of phosphate rock supplies, and the negative ecological impact of high P excretion. Supplementation of commercial poultry feed with extra-nutritional levels of Mg may disturb P as well as Ca availability, and thus negatively impact bird performance and bone mineralization, especially in laying hens [27]. Other dietary constituents can also affect Mg bioavailability, retention, and ultimately the Mg status of poultry. Among these, the phytate effect is one of the best known: dietary phytate generally decreases Mg absorption in poultry through the formation of insoluble Ca-Mg-phytate complexes under the pH conditions of the small intestine. The use of phytase enzymes (a common practice in poultry diets) can prevent this detrimental effect [28].
Mg is an essential element in poultry nutrition. Although most compound feeds for poultry contain Mg to an extent that makes Mg deficiency unlikely under practical conditions, other dietetic features of poultry formulas merit attention. Indeed, in specific poultry compound feeds (e.g., laying hens, breeders, specific Ca and P ratios, presence of phytate, etc.) Mg supplementation can be recommended for designing balanced diets aimed at achieving maximal performance.
Mg Supplementation in Cow Nutrition
In dairy and beef cows' diets, Mg is generally recommended at 1.2 to 3 g/kg DM [29,30]. An adequate dietary supply of Mg supports animal health and prevents deficiency conditions, the most important of which are grass tetany and milk fever. Grass tetany is a clinical sign of hypomagnesaemia in cows, in which the Mg level in cerebrospinal fluid decreases below a critical level (<0.7 mmol/L), following a decrease in blood plasma Mg. This impairs the synaptic activity of neurons and causes symptoms such as excitement and muscular spasms (tetany). It is recognized that the incidence of grass tetany in cows is related to the fertilization of pastures with K-containing fertilizers, which impair Mg absorption. Milk fever (or parturient paresis) is another pathological condition characterized by hypomagnesaemia and low plasma Ca concentrations (<1.4 mmol/L). Milk fever typically occurs around calving, when there is a sudden increase in Ca losses through milk. Subclinical hypomagnesaemia reduces the ability of cows to mobilise calcium in response to hypocalcemia; in particular, Mg is required for Ca absorption from the gut and Ca mobilization from bone, in order to maintain Ca homeostasis in plasma [4].
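The two critical thresholds cited above can be expressed as a simple screening check. This is only a sketch of the cut-offs as stated in the text (the Mg threshold refers to cerebrospinal fluid, the Ca threshold to plasma), not a diagnostic tool.

```python
def deficiency_flags(mg_mmol_l, plasma_ca_mmol_l):
    """Flag the critical levels cited in the text:
    Mg < 0.7 mmol/L (grass tetany risk) and
    plasma Ca < 1.4 mmol/L (milk fever risk)."""
    flags = []
    if mg_mmol_l < 0.7:
        flags.append("hypomagnesaemia (grass tetany risk)")
    if plasma_ca_mmol_l < 1.4:
        flags.append("hypocalcaemia (milk fever risk)")
    return flags

print(deficiency_flags(0.5, 2.2))  # Mg flag only
print(deficiency_flags(0.9, 1.2))  # Ca flag only
```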
Apart from Mg deficient conditions, Mg supplementation is crucial to sustain ruminants' performance. Mg requirement of modern dairy cows has increased, partly due to increased use of nitrogen (N) and potassium (K) fertilizers, and partly due to an increase in cow genetic merit. All cows are to some extent deficient in Mg in late pregnancy and early lactation. High producing cows (typically producing more than 40 kg of milk per day) are more at risk of Mg deficiency.
Because ruminants consume pasture and forage, soil Mg is important in defining Mg availability for these animals. Mg content differs between soil types, and its availability to plants is influenced by several factors such as soil pH, organic matter content, and fertilization [31]. The latter is an important factor determining the availability of minerals, including Mg. It has been observed that fertilization of soil with MgO increased the Mg content of grass, but this was considered insufficient to prevent Mg deficiency. Instead, direct Mg supplementation of cows' diets is considered the best practice to prevent grass tetany and milk fever [5,8,9].
Dietary Interactions on Mg Absorption
There are some dietary interactions between single components of feedstuffs, such as minerals, and Mg absorption. One of the best known in ruminants is the negative interaction between K intake and Mg absorption at the ruminal level, as seen with the use of manure as fertilizer. The rumen is an important site of Mg absorption in cows [4]. Indeed, at a low K level in ruminal epithelial cells, the apical membrane potential provides a driving force for Mg uptake by the cells, whereas a high ruminal K level depolarizes the membrane potential, thereby reducing Mg uptake by the cells. It can be assumed that ruminal K concentration is linked to the apical membrane potential [4,8,32]. This phenomenon was clearly observed in sheep, in which an increase of 1 g/kg DM in dietary K concentration decreased Mg absorption by 0.3% [33] (Figure 1). Mg absorption also occurs in the small intestine at the duodenal level, and a minor absorption rate is observed in the large intestine. Furthermore, Na deficiency is also linked to lowered Mg absorption, because the Na level decreases at the expense of the K level, thereby resembling the high-K condition that impairs Mg absorption. Finally, it has been observed that starch supplementation increases Mg absorption in rats and humans [34].
This effect has not yet been observed in cows, but a possible mechanism is that the intake of high amounts of carbohydrates such as starch lowers ruminal pH, thereby raising Mg solubility and consequently its absorption.
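The inverse K-Mg relationship reported for sheep (a 0.3 percentage-point drop in Mg absorption per extra 1 g/kg DM of dietary K [33]) can be written as a simple linear adjustment. A sketch; the baseline absorption value in the example is an illustrative assumption, not a figure from the text.

```python
def adjusted_mg_absorption(base_absorption_pct, k_increase_g_per_kg_dm,
                           slope_pct_per_g=0.3):
    """Estimate Mg absorption (%) after a rise in dietary K.

    base_absorption_pct: Mg absorption at the reference K level (illustrative).
    k_increase_g_per_kg_dm: extra dietary K, g/kg DM.
    slope_pct_per_g: percentage-point decrease per 1 g/kg DM of K (from [33]).
    """
    return max(0.0, base_absorption_pct - slope_pct_per_g * k_increase_g_per_kg_dm)

# Assuming 30% baseline absorption, an extra 10 g/kg DM of K
print(adjusted_mg_absorption(30.0, 10.0))  # 27.0
```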
Prevention of Mg Deficiency
The prevention of Mg deficiency must be addressed in both the short and long term, in order to prevent acute and chronic adverse conditions related to Mg deficiency. If there is a sudden need to avoid Mg deficiency, it is recommended to raise the dietary Mg content to adequate levels through the use of compound feeds. Three main forms of Mg are used in ruminants' compound feed: Mg sulphate, Mg chloride, and Mg oxide. Mg sulphate is considered a good bioavailable source of Mg, as is Mg oxide, which is the most common source used to prevent milk fever. Both Mg sulphate and Mg chloride can contribute to decreasing the so-called dietary cation-anion difference (DCAD), commonly calculated as ((Na⁺ + K⁺) − (Cl⁻ + S²⁻)) and expressed in milliequivalents (mEq). When Mg sulphate or Mg chloride is used as a source of supplemental Mg, the accompanying anions can reduce that balance; in terms of bioavailability, Mg chloride should intuitively be preferred both to manipulate DCAD and to prevent milk fever in dairy cows [8].
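The DCAD expression can be made concrete by converting each mineral's concentration to milliequivalents (grams divided by the equivalent weight, i.e., atomic weight over valence). A sketch under that convention; the example diet composition is hypothetical.

```python
# Equivalent weights (g per equivalent = atomic weight / valence)
EQ_WEIGHT = {"Na": 22.99, "K": 39.10, "Cl": 35.45, "S": 16.03}

def dcad_meq_per_kg(na_g, k_g, cl_g, s_g):
    """DCAD = (Na+ + K+) - (Cl- + S2-) in mEq per kg DM.

    Inputs are mineral concentrations in g/kg DM; dividing grams by the
    equivalent weight gives equivalents, and multiplying by 1000 gives mEq.
    """
    def meq(grams, element):
        return grams / EQ_WEIGHT[element] * 1000.0
    return (meq(na_g, "Na") + meq(k_g, "K")) - (meq(cl_g, "Cl") + meq(s_g, "S"))

# Hypothetical diet: Na 1.5, K 12, Cl 4, S 2 g/kg DM -> positive DCAD
print(round(dcad_meq_per_kg(1.5, 12, 4, 2)))
```

Adding anionic salts (more Cl and S) pushes the result downward, which is the effect exploited when Mg chloride or Mg sulphate is used pre-calving.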
Mg supplementation in ruminants' feeding is important both to sustain the metabolic activity of the enzymes that use Mg as cofactor and to prevent hypomagnesaemic clinical conditions such as grass tetany and milk fever. Mg intake and absorption in small intestine are strictly correlated and are subject to the influence of several factors, of which K level is one of the most important: a high K intake inhibits Mg absorption, thus increasing the risk of Mg deficiency. The K-induced inhibitory mechanism can be counteracted using supplemental dietary Mg to raise Mg level at short and long term.
Animal-Derived Food as Source of Dietary Mg
Mg supplementation in farm animals' diets ensures an adequate Mg content in animal-derived foods and consequently supports the Mg intake that humans obtain from these foods. Whilst in the typical European diet cereals or cereal-derived foods are the largest source of Mg intake, animal-derived foods also make an important contribution. Typically, the recommended dietary intake of Mg for humans is around 300-400 mg/day, although reference values vary with age and sex; for example, the recommended dietary intake for adult males is 350 mg/day, whereas for adult females it is 300 mg/day [35]. Table 4 summarizes the contribution that animal-derived foods make to Mg intake in a selection of studies in several European countries. The data relate primarily to adults and some are relatively old, but they broadly indicate that meat, milk, and dairy products make the largest contribution, with some notable differences between countries. The contributions seen in these studies contrast considerably with the values from the Mediterranean Healthy Eating, Ageing and Lifestyle (MEAL) study in Sicily, which reported contributions of only 7, 4, 3, and 0% from milk and dairy products, fish, meat, and eggs, respectively [36]. In addition, the data in Table 4 mask the substantial variation in the supply of Mg with the age of populations. For example, in the recent UK National Diet and Nutrition Survey (NDNS), milk and dairy products provide 25, 15, and 13% of the Mg intake of children aged 1.5-3 years, children aged 4-10 years, and subjects aged ≥75 years, respectively, compared with 9% in adults aged 19-64 years [37]. (Table 4 notes: 1 Based on food purchases, so will include children; 2 No value given.)
It is noteworthy that milk makes a greater contribution to Mg intake in very young and elderly subjects, who are likely to be at greater risk of sub-optimal nutrition and will benefit from the high bioavailability of Mg in milk. A number of studies have shown that lactose in dairy products can enhance intestinal absorption of Mg in infants [41] and animal models [35]. This enhancement has been attributed to the lowering of pH in the ileum by lactose fermentation, which reduces the formation of insoluble Ca-Mg-phosphate complexes and thus increases absorption of Mg in the ileum. The benefits of lactose in this regard will of course be lost to subjects who are lactose intolerant and thus choose lactose-free dairy products. Table 5 summarizes the Mg content of several animal-derived foods. Whilst the data in Table 5 consistently show the importance of milk and meat as dietary sources of Mg, they do not reflect differences in Mg intake, and some recent trends give rise for concern. For example, in the recent UK NDNS, Roberts et al. [37] report that 50, 14, and 27% of adolescent females (11-18 years), adult females (19-64 years), and elderly females (≥75 years), respectively, have Mg intakes below the Lower Reference Nutrient Intake (LRNI). Equivalent values for males (27, 11, and 22%) are less extreme but also concerning. The LRNI is the intake assumed to satisfy the nutrient requirements of the bottom 2.5% of the population, so intakes considerably lower than this reflect how serious the situation is. It is noteworthy that in the UK milk and red meat consumption, especially by young females, has fallen over recent decades, and this will have contributed to the substantially sub-optimal intake of Mg and some other nutrients currently seen [45].
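To relate per-100 g food contents to the daily reference values quoted above, intake can simply be tallied over portions. The Mg contents below are rough illustrative figures (not taken from Table 5), and the portion sizes are hypothetical.

```python
# Illustrative Mg contents, mg per 100 g (assumed for demonstration)
MG_PER_100G = {"milk": 11, "cheddar": 28, "beef": 22, "bread": 25}

def daily_mg_intake(portions_g):
    """Sum Mg intake (mg/day) over portion sizes given in grams."""
    return sum(MG_PER_100G[food] * grams / 100.0
               for food, grams in portions_g.items())

diet = {"milk": 300, "cheddar": 40, "beef": 150, "bread": 120}
intake = daily_mg_intake(diet)
# Compare with the 350 mg/day adult-male reference mentioned in the text
print(f"{intake:.0f} mg/day = {intake / 350:.0%} of a 350 mg/day reference")
```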
It is also interesting that the European Food Safety Authority (EFSA) [13] has recommended what it describes as 'adequate intakes' of Mg, which for children aged 3 to 15 years are substantially higher than the UK Reference Nutrient Intakes for that age group.
The role of Mg as a cofactor in many of the body's enzyme systems has been known for some time. Many of these involve adenosine triphosphate (ATP), which participates in a wide range of biochemical pathways, including the intermediary metabolism underlying the synthesis of carbohydrates, lipids, and proteins. About 60% of body Mg is in bone [46] and some 25% is in muscle mitochondria [47], and it is now becoming clear that its role in the musculoskeletal system is vital in relation to diet-related chronic diseases [48].
Mg and Bone Health
Whilst it has been recognised for some considerable time that adequate intakes of protein and Ca, together with an optimum vitamin D status, are important prerequisites for bone development, it is now becoming clear that Mg also has a crucial role. Research with children aged 4-8 years reported that Ca intake, when not severely sub-optimal, was not substantially linked to bone mineral status, whereas Mg intake, and particularly the amount absorbed, were important predictors of bone mineral density and bone mineral content [49]. The authors highlight that this work provides good evidence that Mg should be given more consideration as an important nutrient in relation to bone development. In addition, the Finnish Kuopio Ischemic Heart Disease prospective study has recently shown that low serum Mg concentrations in men aged 42-61 years were associated with increased bone fracture risk [50]. To what extent these findings are relevant to other populations is uncertain at present, but ensuring an adequate Mg intake is clearly important, especially during the phase of rapid bone growth in late childhood and early adolescence. Mg is now also known to have a considerable interaction with vitamin D, being an essential cofactor for vitamin D synthesis and its subsequent activation, which in turn can increase intestinal absorption of Mg [51]. This further highlights the importance of Mg in bone health. Given the co-existence of sub-optimal vitamin D status, the substantially sub-optimal Mg intakes in UK female adolescents noted above are a matter of substantial concern.
There is also increasing evidence of a benefit of Mg for bone health in later life. Erem et al. [52] reviewed studies showing that the risk of osteoporosis in older subjects can be a consequence of low Mg intake, which can lead to excess Ca release from the bones, with the resultant increased excretion leading to greater bone fragility and hence a higher risk of fractures. In addition, high intakes of Ca can lead to lower retention of Mg, and it has been proposed that the optimal dietary Ca:Mg ratio is between 2.0:1.0 and 2.8:1.0 [52]; the authors highlight, however, that in many current US diets the ratio is above 3.0:1.0.
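The proposed Ca:Mg window can be checked mechanically from daily intakes. A sketch; the example intakes are hypothetical.

```python
def ca_mg_ratio_ok(ca_mg_per_day, mg_mg_per_day, lo=2.0, hi=2.8):
    """Return the dietary Ca:Mg mass ratio and whether it falls in the
    2.0:1.0 to 2.8:1.0 range proposed by Erem et al. [52]."""
    ratio = ca_mg_per_day / mg_mg_per_day
    return ratio, lo <= ratio <= hi

print(ca_mg_ratio_ok(1000, 400))  # (2.5, True)
print(ca_mg_ratio_ok(1200, 300))  # (4.0, False) - above the 3.0:1.0 seen in many US diets
```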
There is clearly an urgent need for further research on the interaction of Mg with Ca and vitamin D in relation to bone development in the young and bone strength in the elderly. It is well known that milk and dairy products are excellent sources of Ca and, as noted above, also an important source of Mg for the young and elderly, as well as being an excellent vehicle for vitamin D fortification.
Mg and Sarcopenia
Sarcopenia is a condition mainly associated with chronic loss of muscle mass and muscle function with advancing age [53]. It also predicts functional decline and hospitalization in community-dwelling elderly people. It is therefore a condition of increasing importance in the elderly (although it can occur in middle age), with a prevalence that is rising as many populations worldwide age. The condition can have consequences beyond simple muscle loss: for example, it reduces the protection of the bone, with an increased risk of fracture in a fall, which can have an immense effect on mobility, disability, and general quality of life. A less well appreciated outcome of reduced muscle mass and the associated reduced mobility is the increased risk of metabolic diseases, particularly type 2 diabetes [54]. Since skeletal muscles are the major site of glucose uptake and clearance from the circulation, a reduction in muscle mass can adversely affect glycemic control [55].
As with the influence of Mg intake on bone mineralization noted earlier, there is also increasing evidence of an association between Mg and preservation and functionality of skeletal muscle. Dominguez et al. [56] used baseline data from the prospective study named "Invecchiare in Chianti" (InCHIANTI, Aging in the Chianti area of Tuscany) on risk factors for late-life disability. They selected 1138 men and women (aged 66.7 ± 15.2 y) with full data on muscle performance and blood Mg. After adjustments for key confounders (age, sex, etc.) serum Mg concentrations were significantly and positively associated with muscle performance as assessed by measures including grip strength (p = 0.0002), lower leg muscle power (p = 0.001), and knee extension torque (p < 0.0001). More recently Welch et al. [57] studied the cross-sectional associations between Mg intake and skeletal muscle mass (expressed as fat-free mass (FFM) as a percentage of body weight (FFM%)) and grip strength in 56,575 males and females aged 39-72 years from the UK Biobank cohort. They found positive associations between quintiles of Mg intake and grip strength (p trend < 0.001) and FFM% (p trend < 0.001). They reported that the relationship with grip strength was stronger for men ≥60 years of age than in younger men, although the opposite was the case for women. The authors indicated that this study was the largest population to date used to study the association between Mg intake and direct functionality measures of skeletal muscle.
Zhang et al. [58] reviewed the evidence from animal and human studies as to whether Mg can enhance performance during exercise. They concluded that animal studies showed that Mg might improve exercise performance, possibly by increasing glucose availability to the brain and muscles whilst lowering and delaying lactate accumulation in the muscles. They found that human studies had primarily examined physiological effects such as blood pressure, heart rate, and maximal oxygen uptake (VO2max) rather than direct muscle performance, but they did report evidence that Mg supplementation might enhance some performance parameters in both aerobic and anaerobic exercise regimes. Despite blood containing only about 1% of total body Mg, serum Mg concentration has been used as the measure of Mg status in most studies. Recently, however, Cameron et al. [59] showed that intramuscular ionised Mg, measured by phosphorus magnetic resonance spectroscopy (31P-MRS), was positively associated with knee-extension strength (p < 0.001 in women; p = 0.003 in men), while total serum Mg was not associated with muscle strength (p = 0.27). The authors propose that intramuscular ionised Mg by 31P-MRS is a superior measure of Mg status to total serum Mg, perhaps particularly when muscle weakness of uncertain cause is found.
Clearly more work on the increasingly important relationship between Mg and muscle function is needed. Given the substantially sub-optimal Mg intakes in elderly populations such as in the UK [37] and the US [52], and the increasing prevalence of sarcopenia, this work is now urgent.
Mg and Cancer Risk
Although this area of work is relatively new, there is increasing interest in the possible association between Mg status and cancer risk. The recent case-control study of Huang et al. [60] explored the effect of dietary Mg intake on breast cancer risk, both directly and indirectly via the effect of Mg on the inflammatory markers C-reactive protein (CRP) and interleukin-6 (IL-6). Multivariable logistic regression was used to estimate odds ratios (OR) and 95% confidence intervals (95% CI), together with path analysis to explore mediating effects. The results showed that a higher Mg intake (≥280 mg/day) was associated with a significantly lower risk of breast cancer (OR 0.80, 95% CI 0.65, 0.99) than intakes <280 mg/day, and there was an overall dose-response relationship between Mg intake and breast cancer risk (Figure 2). Additionally, circulating CRP concentration was positively associated with the risk of breast cancer (OR 1.43, 95% CI 1.02, 2.01). IL-6 showed no association with breast cancer risk, but the path analysis indicated that dietary Mg influenced breast cancer risk directly and indirectly through its lowering effect on CRP. As the authors noted, this study was the first of its kind but had weaknesses, including the well-recognised limitations of case-control studies and the fact that the inflammatory markers were measured in only a relatively small number of subjects (322 cases and controls). Nevertheless, this study clearly supports the objective of increasing Mg intake, including in the populations noted earlier with substantially sub-optimal Mg intakes.
There is increasing evidence of an inverse association between vitamin D status (circulating 25(OH)D3) and mortality in colo-rectal cancer (CRC) patients. The meta-analysis of Maalmi et al. [61], involving 11 studies and 7718 CRC patients, showed that those with the highest vitamin D status had significantly lower risks of all-cause mortality (hazard ratio (HR) 0.68, 95% CI: 0.55, 0.85) and CRC-cause mortality (HR 0.67, 95% CI: 0.57, 0.78) than those with the lowest vitamin D status. As noted earlier, Mg is heavily involved in the biochemical pathways for vitamin D synthesis and the conversion of 25(OH)D3 to the active 1,25(OH)2D3 form of vitamin D. The study of Wesselink et al. [62], with 1169 newly diagnosed patients, examined the associations between circulating 25(OH)D3 concentrations, dietary Mg or Ca intake (including supplements), recurrence rate, and all-cause mortality. Overall, the study concluded that an adequate vitamin D status together with an adequate Mg intake is essential for reducing the risk of mortality in CRC patients, although the wider applicability and exact mechanisms are not known and should be investigated.
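For readers unfamiliar with the odds-ratio metric reported above, a crude OR with a Wald 95% CI can be computed from a 2x2 case-control table as sketched below. Note that Huang et al. [60] actually used multivariable logistic regression with covariate adjustment; the counts here are entirely hypothetical.

```python
import math

def odds_ratio_ci(exposed_cases, unexposed_cases,
                  exposed_controls, unexposed_controls, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 case-control table."""
    a, b = exposed_cases, unexposed_cases
    c, d = exposed_controls, unexposed_controls
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical: high Mg intake in 80/200 cases vs 125/250 controls
or_, lo, hi = odds_ratio_ci(80, 120, 125, 125)
print(f"OR {or_:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```

An OR below 1 with a CI excluding 1, as in the example, would indicate a protective association of the exposure in this crude analysis.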
Conclusions
Mg is required in animal nutrition because of its major role in cellular metabolism and bone development, and to avoid adverse health conditions that impair animals' health and consequently their productivity. Usually, minimum Mg requirements are met using common feed ingredients alone. However, the dramatic increase in the productivity of high producing farm animals over the past decades has created new nutritional challenges in supporting higher animal performance. For this reason, Mg supplementation above the minimum requirements has come to be regarded as best practice to support higher performance, mainly in terms of fertility and product quality. Mg supplementation is also essential because it ensures an adequate Mg content in animal-source foods. To summarize, Mg supplementation exerts beneficial effects in high producing farm animals in terms of productive and reproductive performance and is essential for their health and wellbeing.
In human nutrition, Mg is also essential. It is a cofactor in more than 300 enzyme systems that regulate diverse biochemical reactions in the body, including protein synthesis, muscle and nerve transmission, neuromuscular conduction, signal transduction, blood glucose control, and blood pressure regulation. In light of this, the impact of sub-optimal Mg intake in humans can be substantial, as there is increasing evidence of its key role in bone development and muscle function and of associations with several health risks. In this respect, dietary intake and source also become important. It is clear that for many populations animal-derived foods, notably meat, milk, and dairy products, are important dietary sources of Mg [35]. This seems to be particularly important in age groups with substantial nutrient insecurity, such as adolescents and the elderly. It is also becoming increasingly clear that Mg and vitamin D are interdependent and are involved in the aetiology of several chronic diseases of increasing prevalence. Whilst much remains to be learned about the association of Mg with the risk of chronic diseases, a concerted effort should be made by public health bodies to ensure that Mg intake and vitamin D status are satisfactory.
Overall, the recommendation for both animals and humans is the same, do what is necessary to ensure an adequate dietary supply of Mg.
Lipid fingerprint‐based histology accurately classifies nevus, primary melanoma, and metastatic melanoma samples
Probably, the most important factor for the survival of a melanoma patient is early detection and precise diagnosis. Although in most cases these tasks are readily carried out by pathologists and dermatologists, there are still difficult cases in which no consensus among experts is achieved. To deal with such cases, new methodologies are required. Following this motivation, we explore here the use of lipid imaging mass spectrometry as a complementary tool for the aid in the diagnosis. Thus, 53 samples (15 nevus, 24 primary melanomas, and 14 metastasis) were explored with the aid of a mass spectrometer, using negative polarity. The rich lipid fingerprint obtained from the samples allowed us to set up an artificial intelligence‐based classification model that achieved 100% of specificity and precision both in training and validation data sets. A deeper analysis of the image data shows that the technique reports important information on the tumor microenvironment that may give invaluable insights in the prognosis of the lesion, with the correct interpretation.
Here, the authors evaluate the potential of matrix-assisted laser desorption/ionization lipid imaging mass spectrometry (LIMS), together with classification models built using artificial intelligence, to classify samples of nevus, melanoma, and metastatic melanoma. They find that, looking at the alterations in the lipid profile of the tissues and having built a library of lipid signatures using LIMS, it is possible to automatically detect the presence of tumor cells and even determine whether the sample is a primary tumor or a metastasis. The findings pave the way for the development of fast, accurate, and automatized protocols for the screening of melanoma samples.
| INTRODUCTION
Although cutaneous melanomas represent only 5% of all skin cancers, they are the most lethal due to their high rate of metastasis and the lack of effective treatments in advanced stages. In clinical practice, prognosis of melanoma is based on the Breslow index, the presence of ulceration, and sentinel node evaluation.1 Around 10% of melanoma recurrences within 5 years of follow-up have been described2 for early-stage lesions (stages I and II according to the AJCC 8th edition).3 Furthermore, a recent study showed that in a cohort of 784 melanoma patients, 53.8% of all metastatic patients had an initial stage of I-II.4 These facts support the view that many early melanomas with a biological ability to metastasize are not identified by classical pathological markers. Detection of melanoma, even at early stages, is routinely performed in diagnostic services by pathologists and dermatologists. However, the steady rise in the number of possible cases of melanoma is increasing the pressure on such services. Furthermore, despite the accuracy of a well-trained pathologist, there are still complex cases in which diagnosis is not simple.5 Therefore, developing new complementary methodologies that may help with diagnosis and prognosis is a very active research field. For example, in the last years several optical and proteomic approaches have appeared,6,7 aiming at improving diagnosis.
A different approach to the problem of achieving an accurate diagnosis is offered by lipid imaging mass spectrometry (LIMS). This technique enables direct exploration of tissue sections using a mass spectrometer.8 The starting point is a fresh-frozen sample, which is sectioned using a cryomicrotome (Figure 1). The sections, usually 10-20 μm thick, are deposited on a microscope glass slide and covered with a matrix that enables the extraction of the molecules with the aid of a laser. Then, a grid of coordinates is defined, which will become the pixels in the final images. The mass spectrometer scans the sample, acquiring a mass spectrum at each coordinate. The result is the distribution map of each of the analytes extracted from the tissue, without the need of a previous labeling.9 When the targeted molecules are lipids, the technique becomes a kind of digital molecular histology that enables visualization of each cell type in a given tissue, ultimately on the basis of their metabolic signature.10,11,13-15 Therefore, each cell in a given metabolic stage presents a well-defined, tightly regulated lipid fingerprint.16 In other words, subtle variations in the metabolic status of a cell involve modifications in lipid expression. For example, we demonstrated in the past that the maturation of colonocytes as they differentiate along the crypts involves a quantifiable modification of the lipidome.17 Some of the species exhibit a gradient along the crypt that is maintained among individuals and that is different from the lipid fingerprint of the cells in the lamina propria. Furthermore, the fingerprint of the colonocytes is strongly altered in samples of neoplastic tissue and, therefore, may serve for early detection of the disease.
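The per-pixel acquisition and image-reconstruction step described above can be sketched in a few lines. The `ion_image` helper and the toy arrays below are hypothetical stand-ins for illustration only, not the authors' software: the idea is simply to integrate a narrow m/z window at every pixel and fold the intensities back onto the raster grid.

```python
import numpy as np

def ion_image(mz_axis, spectra, grid_shape, mz_target, tol=0.01):
    """Reconstruct the spatial distribution of one m/z channel.

    mz_axis    : 1-D array of m/z values shared by all spectra
    spectra    : 2-D array (n_pixels, n_channels), one spectrum per grid coordinate
    grid_shape : (rows, cols) of the raster
    mz_target  : m/z of the lipid to image
    tol        : integration half-window in Da
    """
    window = (mz_axis >= mz_target - tol) & (mz_axis <= mz_target + tol)
    # Integrate the peak area at every pixel, then fold back onto the raster grid.
    intensities = spectra[:, window].sum(axis=1)
    return intensities.reshape(grid_shape)

# Toy example: a 4x5 raster with 100 m/z channels, imaging the region near m/z 700.5.
rng = np.random.default_rng(0)
mz = np.linspace(400, 1000, 100)
spec = rng.random((20, 100))
img = ion_image(mz, spec, (4, 5), 700.5, tol=5.0)
print(img.shape)  # (4, 5)
```

Repeating this for every detected m/z channel yields the stack of distribution maps from which the segmentation is later computed.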
We have also demonstrated recently18,19 that it is possible to establish at least seven different segments in the nephron according to the lipid fingerprint, and that the, in some cases subtle, differences in lipid profile between these segments are preserved among individuals.20 Similar observations were also made regarding nevus.21 In that case, we demonstrated that epidermis, dermis, and melanocytes also present well-defined fingerprints. Deep analysis of the results from nevi pointed to a difference in lipid expression between superficial melanocytes and those that have migrated to deeper areas of the dermis. Very likely, such variation is due to the maturation process of the melanocytes.22,23 We exploit here the malignancy-state-associated changes in lipid metabolism of melanocytic cells for diagnostic benefit and explore the use of LIMS to classify patient samples into nevus, primary melanoma, and metastasis from melanoma. Previous works already explored metabolite expression in sections of primary melanoma using mass spectrometry,24 demonstrating that the technique enables extracting the metabolic signature from different cellular populations: melanoma cells, connective tissue, macrophages, and lymphocytes, although the spatial resolution at which images were recorded, 50 μm/pixel, somewhat limited precise extraction of such signatures. In a recent work, Lazova et al. used mass spectrometry to probe different points across sections and TMA cores of nevus and primary melanoma, concluding that it is possible to build classifiers using the protein fingerprint recovered directly from the tissue.25 Using the same approach, Lazova et al. demonstrated that the technique is also able to identify proliferative nodules within a benign congenital nevus26 and to distinguish Spitz nevi from Spitzoid malignant melanoma.27 Using DESI in a reduced cohort of samples, Qi et al. identified cholesterol as a possible biomarker in human melanocytic nevi,28 although the conclusions reached in that study need further validation in a larger collection of samples, and their technique did not offer enough spatial resolution to associate cholesterol changes with cell populations. Other studies using techniques without spatial localization also reported the possibility of using lipid biomarkers for the detection of melanoma.29 As can be seen, none of those previous studies included samples from nevi, primary melanoma, and metastatic melanoma. On the other hand, our own studies in cell cultures point to a differential lipid profile of benign and tumor melanocytic cells and, among the latter, depending on their proliferation potential.30 Demonstrating that LIMS is not only able to classify samples into tumor and non-tumor but also to identify whether a sample is a metastasis or a primary tumor is of paramount importance, because it would set the foundations for the development of new and powerful diagnostic techniques.

FIGURE 1 Flow diagram of the protocol used in this work in LIMS experiments. Samples are fresh-frozen, avoiding the use of OCT or any other substance that may alter the lipid distribution. Then, 16 μm-thick sections are obtained with the aid of a cryomicrotome and are deposited on a plain glass slide. Next, they are covered with 1,5-diaminonaphthalene and introduced into the MALDI source of the mass spectrometer, where they are scanned following a grid of coordinates separated by 25 μm. The spectrometer acquires one mass spectrum at each coordinate. The original distribution of the detected species is reconstructed by integrating each m/z and representing its intensity against the coordinates. Finally, a segmentation analysis is performed to determine the lipid fingerprints (= cell populations) in the section.
| MATERIALS AND METHODS
Samples and data from patients were collected from 2017 to 2020 and provided by the Pathology Departments of Cruces (Barakaldo, Spain) and Galdakao-Usansolo (Galdakao, Spain) University Hospitals, and by the Department of Dermatology of the Arnau de Vilanova University Hospital (Lleida, Spain). Disease stages were classified according to the AJCC, 8th edition.3 Clinical and diagnostic data for each patient were collected retrospectively from centralized electronic and/or paper medical records. A total of 15 nevi, 24 primary melanomas, and 14 metastases were collected, along with the clinical information, including gender, age at the primary tumor, localization of the primary melanoma, localization of metastasis, stage, and histological subtype (SSM, superficial spreading melanoma; NM, nodular melanoma; ALM, acrolentiginous melanoma; LMM, lentigo maligna melanoma; Table 1).
The age of the patients ranged from 19 to 96 years, with a similar gender distribution. Regarding the histopathological classification, most of the nevi were intradermal (n = 11), although a compound nevus and a junctional nevus were also included. The primary melanoma samples corresponded to SSM (n = 19), NM (n = 2), LMM (n = 1), and ALM (n = 2). Finally, all metastasis samples were diagnosed as locoregional in-transit metastases.
| Immunohistochemistry
Biopsies were frozen at −80 °C, avoiding the use of OCT or any other compound that could perturb the original lipid distribution. Stepwise sectioning of the frozen tissues was conducted on a conventional cryomicrotome. First, 4 μm-thick sections were collected for hematoxylin-eosin (H&E) staining (Geminis AS Automated Stainer, Eindhoven, Netherlands) and for immunohistochemistry (IHC) of the Melan A and HMB45 melanocyte biomarkers. Melan A, also known as MART-1 (melanoma antigen recognized by T cells), marks melanocytes from both nevi and melanoma, while HMB45 specifically marks melanoma cells. IHC was performed using the EnVision G|2 System/AP kit. Optical images were recorded using a NanoZoomer S210 digital slide scanner (Hamamatsu C13239-01). Melan A (anti-MelanA antibody [EP1422Y]) and HMB45 (anti-melanoma gp100 antibody [EPR4864]), both from Abcam (Cambridge, CB2 0AX, UK), were used to identify melanoma cells because they recognize the gp-100/Pmel-17 group of proteins specific to melanosomes. Second, sections of 16 μm thickness were obtained for LIMS. The areas to be scanned were selected using the H&E and IHC optical images. One must take into account that the whole area of the sections could not be explored, due to the speed of the mass spectrometer (see below) and the propensity of lipids to oxidation. Finally, the sections explored by LIMS were stained with H&E, so the pathologists could annotate the histological areas and structures in order to correlate them with the segments obtained from the analysis of the LIMS images. In a small number of cases, the post-MALDI H&E could not be used because of technical problems (mainly, the tissue folded during staining, because the glass slides did not have any fixation, to avoid altering the original lipid distribution). In those cases, the H&E of a consecutive section had to be used, which slightly complicated the annotation of the segmentation images.
| LIMS experiments
Histological sections from 53 different samples were prepared and analyzed by LIMS as described in Garate et al.21 A schematic of the protocol may be found in Figure 1. Briefly, 1,5-diaminonaphthalene (DAN) was used as the matrix and deposited with the aid of our in-house designed sublimator.31 The sections were scanned in negative polarity, using the Orbitrap analyzer of a MALDI-LTQ-Orbitrap XL (Thermo Fisher, San Jose, CA, USA) equipped with a modified MALDI source.32 Previous works have demonstrated that such polarity enables the detection of a larger number of lipid classes, with less interference from adduct formation.33 Data were acquired with a mass resolution of 60,000 at m/z = 400. Two microscans of 10 laser shots were averaged for each pixel using a 25 μm raster size. With these settings, the spectrometer recorded one pixel every 2 s, and therefore it was necessary to limit the area of the sample to be scanned, to avoid long exposures of the tissue to room-temperature conditions. Spectra were processed using in-house developed software, built in Matlab (MathWorks, Natick, USA). Lipid assignment was carried out using the m/z value, the on-tissue MS/MS and MS3 (whenever the signal intensity permitted it) data, and UHPLC/ESI-MS/MS results obtained from the analysis of the extracts from the same samples. With this procedure, in most cases it was not possible to distinguish between ether and vinyl-ether lipids.
Data from each section were analyzed using a segmentation algorithm (HD-RCA) 34 to isolate and identify the lipid signatures of each histological area in the section.To establish the number of segments on each image, a heuristic approach was used: the initial number of segments was set to 10 for nevus and melanoma and 5 for metastasis.
Then, the segments suggested by the algorithm were verified by examining their correlation: those segments with correlations higher than 95% were grouped together, because such a high correlation indicates that they define similar histological areas. Each segment was colored using a color scale bar and the correlation between segments: the two segments with the lowest correlation between their lipid signatures were assigned the colors at both ends of the scale, and the rest of the segments received colors according to their correlation. Thus, segments with colors that are closer in the scale present more similar lipid fingerprints. Besides, each section presents a unique composition and, therefore, similar colors in different sections may correspond to different cell populations. The signatures of the segments obtained from each section were later used in subsequent multi-experiment statistical analyses.
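The correlation-based merging rule can be illustrated as follows. `merge_correlated_segments` is a simplified, hypothetical sketch of the rule stated above (group segment signatures whose Pearson correlation reaches 0.95), not the HD-RCA implementation itself:

```python
import numpy as np

def merge_correlated_segments(signatures, threshold=0.95):
    """Group segment signatures whose Pearson correlation exceeds `threshold`.

    signatures : 2-D array (n_segments, n_channels) of mean spectra per segment.
    Returns an integer label per segment; merged segments share a label.
    """
    n = signatures.shape[0]
    corr = np.corrcoef(signatures)
    labels = np.arange(n)
    # Relabel j's whole group into i's group whenever the correlation is high enough.
    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j] >= threshold:
                labels[labels == labels[j]] = labels[i]
    return labels

# Two nearly identical signatures and one distinct signature:
sig = np.array([[1.0, 2.0, 3.0, 4.0],
                [1.1, 2.1, 3.0, 4.2],
                [4.0, 1.0, 0.5, 0.1]])
print(merge_correlated_segments(sig))  # first two share a label, third stays apart
```

The transitive relabeling means that chains of pairwise-similar segments collapse into one group, which mirrors how highly correlated segments end up describing the same histological area.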
The spectra were filtered before statistical analysis, to avoid the introduction of excessive noise. Of the 488 mass channels, only 90 exhibited an SD lower than their mean intensity in the three conditions. Then, the MS/MS and MS3 (if available) spectra of each species were analyzed to provide a sound assignment.

In the segmentation image of the nevus, the three main histological regions are clearly visible: epidermis in white, dermis in yellow, and the melanocyte-rich regions in red and brown. Conversely, melanomas are, in general, highly heterogeneous, presenting a variable number of regions in each sample. Figure 3, middle row, shows a histological section of a superficial spreading primary melanoma in stage IIB.3 The lesion exhibits asymmetric growth and a zone of dermal invasion with large nodules, separated by connective tissue and some lymphocytes and melanophages. Interestingly, heterogeneity is lower in most of the analyzed metastases. For example, the metastasis section in Figure 3, bottom row, clearly shows three regions: tumor nodes (bright and dark red), a border segment delimiting the tumor nodes that is not evident in the optical image (green segment), and collagen-rich tissue (white segment).

However, the analysis of the samples in the validation set using only those lipid species that showed statistically significant differences in the training set (potential biomarkers) resulted in a perfect separation of the three types of samples (Figure 4B). Three classification models were tested using the potential lipid biomarkers: support vector machine (SVM), Naïve Bayes, and logistic regression, achieving with the latter a perfect classification of the samples (see Table S1).
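The channel filter described above (keep only m/z channels whose SD stays below their mean intensity in every condition) is easy to sketch. The function and the toy arrays below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def stable_channels(condition_arrays):
    """Keep mass channels whose standard deviation is below their mean intensity
    in every condition -- the filter that reduced 488 channels to 90 in the text.

    condition_arrays : list of 2-D arrays (n_samples_in_condition, n_channels).
    Returns a boolean mask over channels.
    """
    mask = np.ones(condition_arrays[0].shape[1], dtype=bool)
    for arr in condition_arrays:
        mask &= arr.std(axis=0) < arr.mean(axis=0)
    return mask

# Toy data: channel 0 is reproducible in both conditions, channel 1 is noisy.
cond_a = np.array([[10.0, 1.0], [11.0, 5.0], [9.0, 100.0]])
cond_b = np.array([[10.0, 0.1], [10.0, 5.0], [10.0, 60.0]])
print(stable_channels([cond_a, cond_b]))  # [ True False]
```

Requiring the criterion in all conditions simultaneously (the `&=` accumulation) is the conservative choice: a channel that is noisy in any one class is discarded for every class.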
| Differential lipid fingerprint of melanocytic cells from nevus, primary melanoma and metastasis
Next, the lipid fingerprints of nevus melanocytes were compared with those of primary melanoma and metastasis (Figure 4C-F). A perfect separation was achieved in all cases, highlighting the strong metabolic changes that accompany tumor transformation. The classification models also exhibited outstanding performance, both in the training and validation sets, with logistic regression achieving a perfect separation.
Classification of primary and metastatic samples was not perfect in the training set (Figure 4G,H), with two of the metastases misclassified as primary tumors. However, sample classification was perfect in the validation group using the potential lipid biomarkers (Table S1).
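The train/validation workflow (2/3 training, 1/3 validation, PCA projection, then a classifier) can be mimicked with a minimal numpy-only sketch. Note that it substitutes a nearest-centroid classifier in PCA space for the paper's SVM/Naïve Bayes/logistic-regression models, and that all data here are synthetic:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project mean-centered data onto its leading principal components (via SVD)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    components = Vt[:n_components]
    return (X - mean) @ components.T, mean, components

def nearest_centroid(train_scores, train_labels, test_scores):
    """Assign each test point to the class whose training centroid is closest."""
    classes = np.unique(train_labels)
    centroids = np.array([train_scores[train_labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(test_scores[:, None, :] - centroids[None, :, :], axis=2)
    return classes[dists.argmin(axis=1)]

# Synthetic "fingerprints": three well-separated classes, 12 samples x 20 channels each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=mu, scale=0.3, size=(12, 20)) for mu in (0.0, 3.0, 6.0)])
y = np.repeat([0, 1, 2], 12)

idx = rng.permutation(len(y))           # random 2/3 train, 1/3 validation split
train, val = idx[:24], idx[24:]
train_scores, mean, comps = pca_scores(X[train])
val_scores = (X[val] - mean) @ comps.T  # project validation data with the training PCA
pred = nearest_centroid(train_scores, y[train], val_scores)
accuracy = float((pred == y[val]).mean())
print(accuracy)
```

The important methodological point, preserved in the sketch, is that the PCA axes and the classifier are fitted on the training split only and merely applied to the held-out third.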
FIGURE 4 Scores plots of principal component analysis (PCA) of the lipid fingerprints. The samples were randomly divided into training and validation sets (2/3 and 1/3 of the samples, respectively) before being subjected to several PCAs: (A and B) joint analysis of all samples; (C and D) analysis of nevus and primary melanoma lipidomes; (E and F) nevus versus metastasis; and (G and H) metastasis versus primary melanoma. In all cases, a classifier built using logistic regression and the discriminant lipids selected in the training set was able to achieve a perfect classification of the validation-group samples (Table S1 of the supplemental material).
| DISCUSSION
The results presented herein clearly demonstrate that a substantial change in the lipid fingerprint accompanies tumor transformation.
Such changes enable building classification models with performances between 90% and 100% (depending on the algorithm) when biopsies from nevus, primary melanoma, and metastatic melanoma are included, and with perfect performance in binary comparisons. A consequence of this observation is that LIMS can be used in conjunction with such classifiers. Apparently, there is an increase of PC/PE in primary melanoma, which is partly reverted in metastasis. Interestingly, the changes in PC/PE O/P follow an opposite trend to those of the diacyl PC/PE species. Also significant is the decrease in LPI detected in primary melanoma. Finally, there is an increase in PG and a decrease in SM from nevus to melanoma that are again partially reverted in metastasis.
Detailed analysis of the relative abundance of the individual species may be found in Figure S2 and Table S2. A close look at the PI class, significantly involved in signaling, shows a decrease in the relative abundance of arachidonic acid (AA, 20:4)-containing species, and an increase in the species with mono- and di-unsaturated fatty acids (MUFA and DUFA), from nevi to primary melanoma. For some of the species, this trend is reverted in metastasis, which exhibits expression levels closer to those of nevus. This kind of trans-acylation process would explain the absence of changes in the relative abundance of total PI between the three conditions, and it was also observed in other systems.18 A reduced number of LPI and LPI-O/P species presented changes between samples. The most remarkable variation is the decrease in the relative abundance of LPI 20:4, which mimics that observed in AA-containing PI species. It is tempting to speculate that this results from a higher demand for AA for the provision of inflammation-related eicosanoid mediators.
A limited number of PG and PS species were also detected but, in both cases, there is an increase in the relative abundance of MUFA/DUFA-containing species and a decrease in PUFA-containing species in primary melanoma. Such a trend is reversed in metastasis, showing a composition more similar to that found in melanocytes.
The analysis in Figure 4 may be regarded as a simplified approach. Certainly, there is still a wealth of information present in the LIMS images. The sample in Figure S5 corresponds to a metastasis and constitutes a case in which extraordinary heterogeneity may be appreciated. A subcutaneous nodule dominates the image, with walls showing intense fibrosis. Several solid tumor nodules are clearly seen, with slightly pleomorphic cells of middle size (Figure S5C), with epithelioid and plasmacytoid aspects. Nuclei are mainly hyperchromatic, and some present cytoplasmic folding; the cells' cytoplasm is eosinophilic and contains a variable amount of pigment. This can also be seen in the septa, where some lymphocytes and macrophages can be identified, together with other stromal cells. The segmentation image shows a variety of lipid signatures (segments), some of them difficult to identify unless both the optical and segmentation images are superimposed. Tumor cells appear divided into several segments, from white to blue; but the stromal cells also appear divided into several segments: red, green, yellow, and orange. In addition, a large fibrotic area correlates with the dark brown segment, and there is also a green segment that correlates with a portion of the tissue that was lost during H&E staining. Thus, this sample not only presents heterogeneous tumor-cell populations, but the stromal cells that compose the microenvironment in which the tumor is immersed also show a collection of lipid signatures. Understanding why some tumors present larger heterogeneity than others, and the source of this variation, may be key to the development of new and more precise methodologies for tumor prognosis.
| CONCLUSIONS
We present here the LIMS analysis of 53 samples of nevus, primary melanoma, and metastatic melanoma. The spatial resolution of the technique allowed us to extract the lipid signature of the melanocytic cells and isolate it from those of other cell populations in the biopsies. Using such lipid fingerprints, it was possible to set up statistical models able to classify the samples according to their nature: nevus, primary melanoma, or metastasis from melanoma.
Detailed analysis of the histology of the samples and comparison with the LIMS segmentation images demonstrate that the latter captured key aspects of the samples, such as the degree of heterogeneity and the existence of different tumor populations or different tumor microenvironments. Full understanding of the information in the segmentation images is a cumbersome task, as it requires manual analysis of hundreds of samples by well-trained pathologists and comparison with the segmentation images, but it may enable the design of new automated methodologies for the early and accurate diagnosis of melanoma.
| RESULTS
Figure 2 shows the comparison between the H&E image of a nevus, the segmentation of a LIMS experiment, and the distribution of three representative lipids. The nevus section shows slight papillomatous growth at the expense of melanocytes in the dermis, with growth in more superficial aggregates. Melanocytes in the dermal depth tend to separate or to be more diffusely arranged. A certain fibrosclerosis of the papillary dermis is also observed. The epidermis is thin, and an annex to the middle dermis can be identified. The segmentation image is built by grouping together pixels with a similar lipid fingerprint and assigning each group (segment) a color, using the color bar in the figure. The resulting image highlights the tissue's architecture from a molecular point of view. For example, a clear difference between the epidermis (white segment) and the rest of the tissue is readily seen. The melanocytes and the surrounding stroma exhibit a high degree of heterogeneity, highlighted by their division into several segments that follow the melanocytic aggregates (light blue, yellow, and orange) and the surrounding stroma (black and dark blue). Ultimately, the segmentation images reflect the differential spatial distribution of the detected lipid species. As an example, the distribution of three representative species is shown in the figure. Sphingomyelin (SM) 34:1 presents a higher abundance in the stroma of the nevus, while it is slightly less abundant in melanocytes. Conversely, phosphatidylethanolamine (PE) 36:2 follows the opposite trend, being more abundant in the epidermis of the nevus. Phosphatidylinositol (PI) 38:3 is also more abundant in the stroma, but with a different distribution, pointing to changes in expression between cell populations. Actually, the segmentation image in Figure 2 is a sort of summary of these lipid distributions: each cell population presents a well-defined lipid profile. Consequently, when the pixels in the LIMS image are grouped based on their lipidome, as in the segmentation image, the whole tissue architecture emerges. In some sense, LIMS is a kind of molecular histology.
Figure 2 (and Figure S1 of the supplemental material) also shows examples of melanoma and metastasis sections. The primary tumor presents an epidermal component that affects the hair follicle at a level deeper than the infundibular portion. The most eye-catching
Figure 3 shows the comparison between example images of H&E staining, Melan A, and the segmentation of a LIMS experiment carried out on samples of intradermal nevus, primary melanoma, and metastasis.
FIGURE 2 Lipid fingerprint highlights the histology of the tissue. H&E optical images, segmentation, and distribution images of SM 34:1, PE 36:2, and PI 38:3 over sections of nevus, melanoma, and metastasis from melanoma. The LIMS experiment was recorded in negative polarity at 25 μm/pixel spatial resolution, and the segments were colored using the color bar shown in the figure and the degree of correlation between segments: the proximity of the colors in the scale indicates the correlation between the lipid fingerprints of the segments. Relative abundance of the lipids follows a black-blue-red-yellow-white scale. Scale bar = 1 mm.

Tumor nests are clearly defined and represented in the LIMS segmentation image as violet and white segments, indicating that at least two different tumor-cell populations exist. Moreover, the lymphocytes surrounding the melanocyte nodules appear as a blue segment. The epidermis matches the red segment, while other cells, such as fibroblasts and stroma in general, appear in green.
Figure 4 shows the scores plots of the principal component analysis (PCA) of the lipid signatures extracted from the samples using LIMS. Melanocytic lipid signatures come from nevus sections, while more than one lipid fingerprint was extracted from each primary and metastatic melanoma sample. Within each condition, the samples were randomly divided into training and validation sets, and multiple comparisons were carried out. When the three conditions were included in the analysis (Figure 4A,B), a good separation of melanocytic samples from those of tumor cells was observed in the training set, while incomplete separation was observed between melanoma and metastasis. These results open the door to using such classifiers to identify tumor cells directly from tissue sections, enabling fast and automated sample screening in clinics.

A comparison of the relative abundance of the main lipid classes recorded by LIMS may be found in Figure 5, while the potential biomarkers are collected in Figure S2. There are statistically significant variations in the lysophosphatidylinositol (LPI), phosphatidylcholine/phosphatidylethanolamine (PC/PE), PC/PE ether/plasmalogen (O/P), phosphatidylglycerol (PG), and SM classes. PC and PE were grouped together in a single graph due to the existence of isobaric species contributing to each peak in the mass spectrum; similarly, PC/PE O/P species were also grouped in a single class.

Figure S3 shows very homogeneous tumor cells, in brown, forming nests, surrounded by lymphocytes and some eosinophils in blue. Such organization is clearly captured by the segmentation image of the corresponding LIMS experiment, in which the tumor cells are grouped in the red segment, while the inflammatory cellularity appears in blue and the dermis in white. Interestingly, all tumor cells show a similar lipidome. Indeed, increasing the number of segments in the image did not divide the tumor nodules into different groups, demonstrating that they truly present non-differentiable lipidomes. The observations in Figure S3 contrast with the findings shown in Figure S4, from a different primary melanoma. In this second example, the lymphocytes are grouped forming a lymphoid follicle surrounded by tumor and appear in the same segment as the sub-epidermal lymphocytes intermixed with the epidermis. Here, the tumor melanocytes are grouped in three different segments: red, white, and brown. Further inspection of the H&E sample revealed that the tumor cells grouped in the red segment present an important amount of melanin in the cytoplasm, compared to the cells in the other two segments. It would be interesting to use orthogonal techniques to determine whether the lipidome of one of the three segments corresponds to a stronger proliferative potential.

TABLE 1 Clinical information of the patients included in this work.

To evaluate the statistical significance of the differences in the lipid fingerprints among nevus melanocytes, primary melanoma, and metastasis, the Levene test, univariate ANOVA, and Tukey/Games-Howell post hoc tests were computed using SPSS Statistics 17.0 (IBM, Armonk, NY, USA). The Levene test determines the homogeneity of variance (H0 = groups have equivalent variance) and thus the choice of post hoc method: Tukey if Levene p ≥ .05 and Games-Howell if Levene p < .05. PCA analyses and classification models were carried out using Orange Biolab V.2.7.8 (Ljubljana, Slovenia).
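The variance-homogeneity decision rule described for the statistics can be reproduced with SciPy (the paper used SPSS). `anova_with_posthoc_choice` is an illustrative helper of my own naming; SciPy itself ships no Games-Howell test, so the sketch only reports which post hoc test should follow:

```python
from scipy import stats

def anova_with_posthoc_choice(*groups, alpha=0.05):
    """Levene's test checks variance homogeneity, which selects Tukey
    (homogeneous variances) or Games-Howell (heterogeneous) as the post hoc
    test to follow a univariate one-way ANOVA."""
    _, levene_p = stats.levene(*groups)
    _, anova_p = stats.f_oneway(*groups)
    posthoc = "Tukey" if levene_p >= alpha else "Games-Howell"
    return anova_p, posthoc

# Three groups with identical spread but shifted means:
g1 = [1.0, 2.0, 3.0, 4.0, 5.0]
g2 = [2.0, 3.0, 4.0, 5.0, 6.0]
g3 = [6.0, 7.0, 8.0, 9.0, 10.0]
anova_p, posthoc = anova_with_posthoc_choice(g1, g2, g3)
print(posthoc)  # Tukey (the groups have identical variances)
```

Because the three toy groups have identical within-group spread, Levene's statistic is zero and the homogeneous-variance branch (Tukey) is chosen, while the shifted means make the ANOVA significant.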
Fresh look at randomly branched polymers
We develop a new, dynamical field theory of isotropic randomly branched polymers, and we use this model in conjunction with the renormalization group (RG) to study several prominent problems in the physics of these polymers. Our model provides an alternative vantage point to understand the swollen phase via dimensional reduction. We reveal a hidden Becchi-Rouet-Stora (BRS) symmetry of the model that describes the collapse ($\theta$-)transition to compact polymer-conformations, and calculate the critical exponents to 2-loop order. It turns out that the long-standing 1-loop results for these exponents are not entirely correct. A runaway of the RG flow indicates that the so-called $\theta^\prime$-transition could be a fluctuation induced first order transition.
A single linear (non-branched) polymer in solution undergoes a second-order phase transition from a swollen to a collapsed state when the solvent temperature sinks below the so-called θ-point. In the swollen phase, the polymer can be thought of as a self-avoiding walk, and its radius of gyration or Flory radius scales with monomer number N as R_N ∼ N^{ν_SAW} (ν_SAW ≥ 1/2). In the collapsed phase, the polymer assumes a compact globule-like conformation, and R_N ∼ N^{1/d}, where d is the dimensionality of space. The understanding of this collapse transition as a critical phenomenon has advanced considerably over the years [1].
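A quick numerical illustration of how strongly these two scaling laws diverge, using the standard three-dimensional self-avoiding-walk value ν_SAW ≈ 0.588 against the globule value ν = 1/d = 1/3:

```python
# Radius scaling R_N ~ N^nu: swollen self-avoiding walk in d = 3 versus collapsed globule.
nu_saw, nu_globule = 0.588, 1.0 / 3.0

ratios = []
for N in (10**2, 10**4, 10**6):
    ratio = N**nu_saw / N**nu_globule   # how much larger the swollen coil is
    ratios.append(ratio)
    print(f"N = {N:>7}: R_swollen / R_collapsed ~ {ratio:.1f}")
```

The ratio grows as N^(ν_SAW − 1/3), so for macromolecular N the swollen and collapsed states are geometrically very different objects, which is what makes the transition between them so sharp.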
In comparison, much less is known about the collapse transition of randomly branched polymers (RBPs). There exist a number of numerical studies [2,3,4,5,6,7] that, taken together, indicate that the phase diagram is fairly complex, including a line of collapse transitions with qualitatively distinct parts. One part, called the θ-line, corresponds to continuous transitions, with universal critical exponents, from swollen RBP configurations with mainly tree-like character to compact coil-like configurations. The other part of the transition line, called the θ′-line, corresponds to the collapse of foam- or sponge-like RBPs to vesicle-like compact structures. In 2 dimensions, one finds nonuniversal exponents if this transition is considered as continuous [2]. The two different parts of the collapse-transition line are separated by a multicritical point which belongs to the isotropic percolation universality class. One of the open questions is the existence of a possible further transition line between the different configurations of collapsed RBPs. As far as theory is concerned, it is the swollen phase that is best understood, mainly because the statistics of swollen RBPs can be formulated in terms of an asymmetric Potts model [8,9,10], although Flory theory [11] and real-space renormalization [12] have also been successfully applied. The former approach was used in particular to solve the field-theoretic problem via a mapping of the relevant part of the asymmetric Potts model to the Yang-Lee edge problem using dimensional reduction [13]. In contrast, the collapse of RBPs has been much less studied, and the current understanding mainly rests on the seminal field-theoretic work of Lubensky and Isaacson (LI) [8] and Harris and Lubensky [9].
However, it turns out that these papers, as far as they consider the collapse (θ-)transition, contain a fundamental error in the renormalization procedure, and as a consequence the long-standing 1-loop results for the collapse transition are strictly speaking not correct. In addition, it is not clear to date whether the θ ′ transition is a second order transition or not. Therefore, we feel that the important RBP problem deserves a fresh look.
In this paper we develop a new, dynamical field theory [15] for RBPs, based on a model of dynamical percolation with a tricritical instability [16] in the non-percolating phase, whose very large clusters (lattice animals) have, at critical values of the control parameters, the same statistics as collapsing RBPs. We discuss the relation of our model to the asymmetric Potts model and carefully analyze its symmetries. In the swollen phase, the model has a high degree of super-symmetry, including translation and rotation invariance in super-space, and leads to the well-known Parisi-Sourlas dimensional reduction [13]. At the collapse transition, super-rotation symmetry is broken, and we only have translation invariance in super-space, i.e., Becchi-Rouet-Stora (BRS) symmetry [18]. We perform a 2-loop renormalization group (RG) calculation that corrects and extends the long-standing LI results for the collapse transition. Furthermore, we show that the θ′-transition is characterized by a runaway of the RG flow, which suggests that this transition is a fluctuation-induced first-order transition, contrary to what has been assumed in recent numerical studies [2].
Our field theory (for background on field-theory methods, we refer to [19,20]) is based on a generalization of the general epidemic process. For a related approach to the somewhat simpler problem of directed randomly branched polymers, see [21]. The primary fields of our theory are the field of agents n(r, t) and the field of the inactive debris m(r, t) = λ ∫_{−∞}^{t} dt′ n(r, t′), which ultimately forms the polymer cluster. The minimal non-Markoffian Langevin equations describing the process are given in Eqs. (1) and (2). The parameter r tunes the "distance" to the percolation threshold; below this threshold, r is positive. The term proportional to c describes the influence of the debris on diffusion; for the ordinary percolation problem, this term is irrelevant. As long as g′ > 0, the second-order term f′m² is irrelevant near the transition point, and the process models ordinary percolation. We permit both signs of g′ so that our model allows for a tricritical instability. Consequently, we need the second-order term with f′ > 0 for stabilization purposes, i.e., to limit the density to finite values. The process is assumed to be locally absorbing, and thus all terms in the noise-correlation function contain at least one power of n. The first part of the noise correlation takes into account that the debris arises from spontaneous decay of agents, and thus g > 0.
The term proportional to f > 0 simulates the anticorrelating behavior of the noise in regions where debris has already been produced. Now we refine these Langevin equations into a field-theoretic model for RBPs. This procedure involves a number of nontrivial steps that we briefly sketch in the following and that will be presented in detail elsewhere [14]. As the first step, we represent the Langevin equations as a stochastic response functional J in the Ito sense [15,22,23,24]. This functional has the benefit that it allows us to systematically calculate averages ⟨· · ·⟩ of all sorts of observables via functional integration with weight exp[−J]. For studying polymers, we focus on a single cluster of a given size N, which we assume to emanate from a small source of strength q at the origin r = 0 at time t = 0. Then, the key quantity is the probability distribution for finding a cluster of mass N, P(N) = ⟨δ(N − M)⟩ [17], where M = ∫ d^d r m(r, ∞). P(N)/N is expected to be proportional to the partition sum for interacting lattice animals [2], up to a non-universal exponential factor ∼ p_0^N if N becomes large. In actual calculations, the delta function appearing in averages like in Eq. (3) is hard to handle. This problem can be simplified by averaging over Laplace-transformed observables, which are functions of a variable conjugate to N, say z, and applying the inverse Laplace transformation in the end. The switch to Laplace-transformed observables can be done in a pragmatic way by augmenting the original J with a term zM and then working with the new functional J_z = J + zM. Because we are interested here only in the static properties of the final cluster after the epidemic has become extinct, we can greatly simplify the theory by focusing on the frequency-zero part of J_z, that is, taking the quasistatic limit [15,16,25] m(r, t) → m_∞(r) = iφ(r), n(r, t) → −iϕ(r).
Taking this limit, one has to be careful to account for the causal ordering of fields that results from the Ito calculus. In diagrammatic perturbation theory, this means that one has to rule out diagrams with closed propagator loops. An elegant way to achieve this is to use so-called ghost fields whose sole purpose is to generate additional diagrams that cancel any diagrams with non-causal loops. Such a procedure does not change the physical content of the theory but simplifies calculations and makes it easier to find higher symmetries. The required cancellations can be achieved [14] by using D commuting (bosonic) fields χ_i subject to the constraint Σ_{i=1}^{D} χ_i = 0, so that they form the irreducible representation (D, 1) of the permutation group S_D, and taking the limit D → −1 at the end of the calculation. Furthermore, we eliminate redundant parameters by rescaling, mixing, and shifting the fields. In the end, we obtain the quasistatic Hamiltonian H, Eq. (4), where we use the shorthand notation χ^(k) = Σ_{i=1}^{D} χ_i^k, and where h is a shifted version of the Laplace variable z. The τ's and the g's are combinations of the original parameters, cf. Eqs. (1) and (2). In particular, τ_0 and τ_1 are linearly related to r and g′, respectively, so that, in mean-field theory, the collapse transition corresponds to vanishing τ_0, τ_1, and h, and swollen RBPs correspond to vanishing τ_0 and h with positive, finite τ_1.
What is the connection between our Hamiltonian (4) and other, established models for RBPs, percolation, and the Yang-Lee problem? To address this question, we rescale the fields so that g′_1 = g_0 (which is possible, of course, only if both are non-vanishing, in particular at RG fixed points), and we define a new order-parameter field with (D + 2) components, s_1 = iφ, s_2 = iϕ, and, for i ≥ 3, s_i = χ_{i−2} − (s_1 + s_2)/D. This field satisfies the Potts constraint Σ_{i=1}^{D+2} s_i = 0, and the resulting Hamiltonian with S_{D+1} permutation symmetry is that of the asymmetric (D + 2)-state Potts model, which lies at the heart of the known formulations of the RBP problem [8,9,10]. For g_1 = g_2 = 0, the model reduces to the symmetric (D + 2)-state Potts model with S_{D+2} symmetry and thus produces the field theory of percolation in the limit D → −1. For g_0 + 2g_1 = g_0 + 4g_2 = 0, the Hamiltonian decomposes into a sum of D + 1 uncoupled Hamiltonians, each describing the Yang-Lee edge problem.
To reveal the connection of our work to the results by Parisi and Sourlas [13] for swollen RBPs, and to shed light on the collapse transition from a symmetry perspective, it is interesting to discuss the super-symmetries of our model. If g′_1 is zero (or irrelevant, as for finite τ_1 > 0), non-causal loops are isolated and can therefore be eliminated with a pair of fermionic ghost fields ψ and ψ̄ [25]. Using anticommuting super-coordinates θ, θ̄ with integration rules ∫dθ 1 = ∫dθ̄ 1 = 0, ∫dθ θ = ∫dθ̄ θ̄ = 1, and defining a super-field Φ(r, θ̄, θ) = ϕ(r) + θ̄ψ(r) + ψ̄(r)θ + θ̄θφ(r), we can recast our model Hamiltonian in super-space form, H_ss. This Hamiltonian shows BRS symmetry [18,20], i.e., H_ss is invariant under a super-translation θ → θ + ε, θ̄ → θ̄ + ε̄. Moreover, if the control parameter τ_1 is positive and finite, i.e., if we consider the problem of swollen RBPs, τ_1 can be reset to 2 by a scale transformation. The super-coordinates become massive, and the derivatives combine to a super-Laplacian, ∇² + τ_1 ∂_θ̄ ∂_θ → ∇² + 2∂_θ̄ ∂_θ. The coupling constants g_1 and g_2 become irrelevant and can be neglected. Then the Hamiltonian takes the super-Yang-Lee form and attains, besides the super-translation invariance, super-rotation invariance. Now dimensional reduction [13] can be used to reduce the problem to the usual Yang-Lee problem in two fewer dimensions, which culminates in the well-known results for swollen RBPs. Now we come to the heart of our RG analysis, where we focus on the case that the control parameters τ_0 and τ_1 take critical values (zero in mean-field theory), where the correlation length diverges and correlations between different polymers vanish. The actual objects of our perturbation theory are the naively UV-divergent vertex functions Γ_{k̄,k}, which consist of irreducible diagrams with k̄ and k amputated legs of φ and ϕ, respectively, as functions of the wavevector q.
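For orientation, the dimensional-reduction route to the swollen-phase exponents mentioned above can be summarized as follows. These are the classic Parisi-Sourlas results, quoted here for context rather than derived in this extract:

```latex
% Swollen RBPs (lattice animals) in d dimensions map onto the
% Yang-Lee edge problem in d-2 dimensions:
\text{RBP}(d) \;\longleftrightarrow\; \text{Yang-Lee}(d-2)
% With A_N \sim \lambda^N N^{-\theta} counting animals of mass N and
% R_N \sim N^{\nu}, this yields the exact d=3 exponents
\theta(d{=}3) = \tfrac{3}{2}, \qquad \nu(d{=}3) = \tfrac{1}{2},
% consistent with the Isaacson-Lubensky Flory estimate
% \nu \approx 5/(2(d+2)) and with the upper critical dimension
% d_c = 8, where \nu = 1/4 (Gaussian branched polymer).
```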
We calculate these functions in dimensional regularization and ε-expansion about d = 6 dimensions (ε = 6 − d) to 2-loop order and then remove their UV divergences by minimal subtraction, using a scheme in which µ is an inverse length scale used to make the coupling constants dimensionless, (τ_i) = (τ_0, τ_1), and (g_α) = (g_0, g′_1, g_1, g_2). Note that the renormalization scheme introduces a counter term proportional to K that has no counterpart in the Hamiltonian (4). This term can be viewed as a remnant of the term proportional to c in the original response functional (2), which we removed on our journey towards H because c is redundant in the sense of the RG. As a counter term, however, this term is indispensable, because the quadratically divergent vertex function Γ_{2,0}(q) = Γ_{2,0}(0) + q² Γ″_{2,0}(0) + … contains a UV-divergent Γ″_{2,0}(0). This fact was overlooked by LI [8] in their calculation, and their long-standing 1-loop results are incorrect, although, fortunately, the numeric deviations from the correct 1-loop results are rather small.
As it stands, the Hamiltonian (4) has a remaining rescaling invariance that makes one of the coupling constants redundant. Before we can analyze the RG flow, we need to remove this redundancy. To this end, we switch to rescaling-invariant fields, ϕ → g_0^{−1} ϕ, φ → g_0 φ, with correspondingly transformed control parameters and effective couplings u, v, and w. The fixed points of the RG flow are determined by the zeros of the Wilson functions for the three effective couplings, β_u = µ∂_µ u|_0 (where |_0 indicates that unrenormalized quantities are kept fixed while taking derivatives), and so on. Our calculation produces the flow equations (7a)-(7c), where we refrain from showing the 2-loop parts of our results due to space constraints. The picture of the RG flow that arises from these equations is the following. The BRS plane u = 0 is an invariant surface of the flow equations (7a)-(7c) to all orders and divides the (u, v, w)-space into two parts: the percolation part with u > 0 and the Yang-Lee part with u < 0, which is non-physical for the branched-polymer problem. The percolation line v = w = 0 is an invariant line for both signs of u. For u > 0 the flow goes to the percolation fixed point, whereas for u < 0 the flow tends to infinity. The Yang-Lee line with a = b = 0, where a = u + 2v and b = u² + 4w, is also an invariant line for both signs of u. For u < 0 the flow goes to the Yang-Lee fixed point, whereas for u > 0 the flow runs away to infinity. Altogether we have six fixed points, which are compiled in Table I; the flow in the other part is again running away. The stability plane of the percolation fixed point for u > 0 is a continuation of the separatrix found above on the BRS plane u = 0.
In the Yang-Lee part of the (u, v, w)-space, we also find a plane which is the continuation of the BRS separatrix into the region with u < 0. This plane is separated into two parts by the Yang-Lee line. One part is attracted to an unstable fixed point (Inst1); the other part shows runaway flow. Both planes divide the (u, v, w)-space into a wedge-shaped part attracted to Collapse, and a part where the flow goes to infinity. The edge of the wedge is the separatrix in the BRS plane. Note that the flow diagram has the following, perhaps unexpected, implication for the θ′-transition: the region behind the percolation plane, where u runs away to ever more positive values, indicates that this transition might be discontinuous and not, as previously assumed, a second-order transition. Finally, we compile our main results for the collapse transition. Our RG analysis leads to three independent critical exponents. For the probability distribution P(N), we find an asymptotic scaling form in which µ_0 is a non-universal constant, f_P is a scaling function, and the effective control parameter for the "distance" from the transition is given by the scaling variable y, which is a linear combination of τ_0 and τ_1. For the radius of gyration, we obtain a corresponding scaling law. To second order in the ε-expansion, the critical exponents of the collapse transition compare well, within the expectations for such a large ε, with recent simulations in d = 2 [2]. In summary, we have presented a new renormalized field theory for RBPs. Though almost a classic physics problem, RBPs are still a lively subject of current research with important open questions, some of which our work can help to settle.
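The scaling forms referred to above are of the standard type; their explicit expressions are not reproduced in this extract, so the following is a schematic sketch, with a crossover exponent φ whose value is an assumption left unspecified:

```latex
% Mass distribution near the collapse transition:
P(N) \;\sim\; \mu_0^{\,N}\, N^{-\theta}\, f_P\!\left(y\,N^{\phi}\right)
% Radius of gyration:
R_N \;\sim\; N^{\nu}\, f_R\!\left(y\,N^{\phi}\right)
% \mu_0: non-universal constant; y: linear combination of \tau_0, \tau_1;
% \theta, \nu, \phi: the three independent critical exponents
% of the collapse transition.
```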
ERROR ANALYSIS ON INFORMATION AND TECHNOLOGY STUDENTS’ SENTENCE WRITING ASSIGNMENTS
Students' error analysis is very important for helping EFL teachers to develop their teaching materials, assessments, and methods. However, it takes much time and effort for teachers to do such an error analysis of their students' language. This study seeks to identify the common errors made by one class of 28 freshmen students studying English in their first semester at an IT university. The data were collected from their writing assignments over eight consecutive weeks. The errors found were classified into 24 types, and the top ten most common errors committed by the students concerned articles, prepositions, spelling, word choice, subject-verb agreement, auxiliary verbs, plural forms, verb forms, capital letters, and meaningless sentences. The findings about the students' frequency of committing errors were then contrasted with their midterm test results, and, in order to find out the reasons behind the error recurrence, the students were given some questions to answer in a questionnaire format. Most of the students admitted that carelessness was the major reason for their errors, with lack of understanding coming next. This study suggests that EFL teachers devote their time to continuously checking the students' language by giving corrections, so that the students can learn from their errors and stop committing the same errors.
INTRODUCTION
Writing is a medium of communication that represents language through the inscription of signs and symbols (Writing, 2014). Generally, there is one message delivered whenever someone is writing a text. The message can only be understandable if the writing comprises vocabulary, grammar, and semantics (Writing, 2014). Unfortunately, for EFL (English as a Foreign Language) students, English is not innate: they have to put in a lot of effort and practice to write something in English correctly, and very often there are errors in their written text. These errors can be valuable resources for teaching. Klassen (1994) did research that used students' errors in writing as resources to teach the students. The research appeared to be successful in that students most willingly learnt from their errors.
Limited knowledge of grammatical rules and very rare occasions to use the language in daily conversation or interaction make EFL students find writing in English more difficult than writing in their first language. Because of this limited knowledge, EFL students often commit errors in their pieces of writing. Yet many researchers have acknowledged that making errors in producing English in speaking and writing is a common issue for all students learning English whose mother tongue is other than English (Hussain, Hanif, Asif & Rehman, 2013; Gustilo & Magno, 2012; Yahya, Ishak, Zainal, Faghat & Yahaya, 2012; Yang, 2010).
There are two kinds of errors in language learning, i.e., interlingual errors (L1) and intralingual errors (L2) (Bryant, 1984). Interlingual errors result from the intrusion of one's mother tongue, while intralingual errors result from one's misinterpretation or overgeneralization of English grammar rules. Bryant (1984) found that Japanese ESL students mostly made errors through the interference of their mother tongue, generally called L1 errors, while errors that resulted from L2 concerned only the incorrect use of the S-genitive and verb tense. However, the most dominant factor would not always be the same for every country. Silalahi (2013) found that L2 was the greatest cause of Indonesian students' errors in their spoken English, while L1 contributed only a very small amount.
There are some definitions of error analysis, yet all derive from James' definition (James, cited in Gustilo & Magno, 2012; James, cited in Sarfraz, 2011, p. 31), who considered error analysis as "the study of linguistic ignorance, the investigation of what people do not know and how they attempt to cope with their ignorance." Dulay, Burt, and Krashen (1982, cited in Gustilo & Magno, 2012, p. 98) strongly emphasized the definition of error analysis as the "flawed side of learner speech or writing that deviates from selected norm of mature language performance". Gustilo and Magno (2012, p. 98) then simplified the definition of errors as "alterations of the rules of the accepted norm and are termed as surface errors which may be further classified as omission errors, addition errors, misinformation errors, wrong order, spelling errors, systems error, and the like". Hussain et al. (2013) even conducted research on an error analysis to suggest changes in the teaching curriculum in Pakistan. He argued that learners could commit errors in their language "due to memory lapses; physical states such as tiredness and psychological such as strong emotions" (p. 829). Ellis (cited in Wang, 2008, p. 185) presented some steps in analyzing errors, including "collection of a sample of learner language, identification of errors, description of errors, explanation of errors, and error evaluation." Knowing that so many errors occur in EFL students' language throughout the world encouraged the writer to do an error analysis of her students' writings, in order to find some ways to help them improve their skills and avoid committing the same errors. This study was carried out by implementing the steps introduced by Rod Ellis.
METHOD
This study presents descriptive data which identified and analyzed errors in EFL students' writings. The study is aimed at determining the types of errors made by 28 freshmen students majoring in IT in their English writing assignments, which were collected and marked for 8 consecutive weeks during their first half-semester at the university. The writing assignments were given every week after the completion of one topic about tenses; the task was to write sentences using the tense just learnt. The tenses taught to the students were: Present Simple (Pr S), Present Continuous Tense (Pr C T), Past Simple (Ps S), Past Continuous Tense (Ps C T), Present Perfect (Pr Pf), Present Perfect Continuous Tense (Pr Pf C T), Simple Future (S Ft), and Future Continuous Tense (Ft C T). During the collection of the writing assignments, the author conducted an experiment in which she collected the students' writing tasks, checked their language for grammatical errors, and gave marks. The author then made possible corrections to the students' errors by explicitly pointing out the errors, and published them for the students to read and learn from.
The research was continued by collecting the students' midterm test results at the end of week 8, to be contrasted with the students' frequency of committing errors in the writing assignments in order to see the correlation between them. After the required data were obtained, they were analyzed and calculated in terms of percentages. The findings were then discussed and explained in terms of factors influencing the occurrence of errors in writing, by giving questionnaires to the 28 students.
The analysis steps follow Ellis' (cited in Sarfraz, 2011), and consist of collection of samples of learner language (writing assignments), identification of errors, description of errors, and evaluation of errors. These steps can be found under the discussion on the types of errors.
Types of Errors: Collection of Samples of Learner Writing Assignments
In the half-semester of 8 weeks, the students had learnt 8 tenses: Present Simple, Present Continuous Tense, Past Simple, Past Continuous Tense, Present Perfect, Present Perfect Continuous Tense, Simple Future, and Past Future. The class meeting for every topic was held twice a week for a total of 3 hours. Every time a topic was finished, the author, who was also the one teaching the students during the 8 weeks, asked the students to write sentences in English using the formulas of the tenses learnt, in positive, negative, and interrogative forms using a question word and an auxiliary verb.
The students wrote at least 8 and at most 22 sentences for the tense writing assignments. The students were only to write their own sentences by following the patterns that the teacher had given them in the classroom. The author then collected all of the students' sentences for analysis.
Identification of Errors
The 28 students submitted their writing assignments every week for the class teacher to mark and annotate. The notes were about the students' errors and possible corrections to them. Usually, the students' errors did not concern the wrong application of the patterns learnt but other grammatical issues, such as the improper use of articles, prepositions, and the like. It was not as hard to identify the errors as it is in spoken language, because the errors could explicitly be observed on the students' papers, as Yang (2010) also claimed that error detection is easier with written text.
Description and Evaluation of Errors
From the data, the students' original sentences, there was a total of 387 errors during the writing assignments for all tenses, as shown in Table 1. There are 24 types of common errors the students frequently made in their English writing, including article, sentence structure, L1 interference, word choice, verb form, plural form, phrasal verb, spelling, preposition, conjunction, capital letter, subject-verb agreement, auxiliary verb, tense agreement, uncountable noun, word order, redundancy, missing object, apostrophe, missing verb, adverb, noun form, missing noun, and meaningless sentences.
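The kind of frequency analysis reported here — each error type's share of the overall total — can be sketched in a few lines of code. The per-type counts below are hypothetical placeholders (the extract quotes only the overall total of 387 errors, not the full Table 1):

```python
from collections import Counter

# Hypothetical tallies for a few of the 24 error types; illustrative
# only, not the study's actual Table 1 data.
error_counts = Counter({
    "article": 62, "preposition": 48, "spelling": 41,
    "word choice": 37, "subject-verb agreement": 30,
})

total = 387  # total number of errors reported across all assignments

# Relative frequency of each error type, as a percentage of all errors
for error_type, count in error_counts.most_common():
    pct = 100 * count / total
    print(f"{error_type}: {count} ({pct:.2f}%)")
```

Ranking types by such percentages is exactly how a "top ten most common errors" list like Table 4 is derived from raw tallies.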
From Table 1, it can be seen that the number of errors decreased regularly from the first to the fourth tense, from 177 down to only 13 errors. However, the number increased for Present Perfect before falling for Present Perfect Continuous, and increased again for Simple Future before falling drastically, to only 2 errors, for Past Future. It is worth noting the huge improvement that the students' errors in the second assignment were less than half of those in the first assignment.
The shock therapy that the author applied in marking the students' writing assignments seemed to be successful. Through it, the students learned from their errors, which were highlighted by the author with possible corrections provided. During the observation weeks, the author hardly made any effort to explain the students' errors in the classroom; she explained the errors only once, after the announcement of the students' first assignment, and barely at all in the following weeks. Nevertheless, the students grasped the idea and made fewer errors in their later assignments.
Below are students' sample sentences with errors for every tense; the errors are indicated by an underline. The table only presents one example of an error for each tense. However, from the data collected, it was found that most of the errors the students made were not directly related to tense forms but to other grammatical issues or topics. Below are also students' sample sentences to which the author had made corrections; these were published on the academic site, to which all students had access, so they were able to read and learn from the teacher's corrections to their sentences. The author applied the same strategy in marking the students' writing assignments for 8 consecutive weeks. This method of correcting students' errors could create a comfortable learning environment, because the students did not only get scores but also comments from the teacher in the form of possible corrections. The errors are indicated by underline and possible corrections by italics. Unlike the findings of Bryant (1984), this study found that L1 interference was only a minor factor in causing errors in the students' writing; L1 interference occurred only five times in the data analyzed.
After collecting the questionnaire from the students, the author found that the L1 intrusion occurred because of the students' limited vocabulary and because they did not make any attempt to consult a dictionary. Moreover, it happened because of the students' strong confidence that they had written the words correctly and that the words were really English words, while in fact they were wrong.
For instance, the student (student 8) who wrote "blender" instead of "blend" answered the question in the questionnaire by saying, "Because I think blender in English is blender, not blend." Her original sentence in the exercise was: "If you want to get a fresh juice, you must buy a fresh fruit and you must blender a fresh fruit soon." The student used a noun instead of a verb in her sentence, and that made the sentence incorrect. However, the student admitted her unfamiliarity with the verb form "blend" for the noun "blender". The word "blender" is actually a common word in Bahasa Indonesia, for it is an adopted word that has generally been used to refer to a tool people use to blend fruits and the like. In her sentence, student 8 actually produced some other errors as well, such as using the article "a" with an uncountable noun.
Looking at the frequency of the errors' occurrence in Table 1, it is worth reporting that the top ten most common errors the students produced in their writings, from the highest to the lowest number of occurrences, are as presented in Table 4. The biggest number of occurrences for "article" indicated that this topic was the most difficult grammatical issue for the students, and the table showed that "article" and "preposition" were the hardest topics for the students to understand and apply in their sentences. With these findings, the teacher should have given some time to explaining the topics to the students, so that the students would not only see the corrections to their erroneous sentences but also understand the theory really well. However, the teacher did not allocate time for that, and instead planned to make the two topics part of the material to be taught to the students in the second semester.
Relation Between Students' Midterm Test Result and Frequency Committing Errors in Writing Assignments
The students sat a midterm test at the end of week 8. This meant the test was held after the students had learnt all of the tenses in the previous weeks. The midterm test was designed by the teacher who taught the students the tenses, and it tested the students on all 8 tenses they had already learnt and nothing else. However, when observing the relation of the students' midterm test results to the frequency of errors the students made in every writing assignment, a quite interesting finding indicating discrepancies is worth reporting.
The discrepancies are highlighted as follows. The author made the assumption that making errors every time during the writing assignments for tenses would most probably mean that the students would get low marks in the midterm test, and vice versa. However, there are some interesting findings in which students who made errors on every assignment could get high scores in the midterm test, as for students number 4 and 8, who got scores of 60.83 and 66.66, respectively. Likewise, an unusual finding occurred when 9 students who did well during the writing assignments, committing errors only twice or even once during the assignment collections, ended up with low scores in their midterm test. These 9 students could only get scores ranging from 30.83 to 58.33, and only 3 students showed a correspondence between their good achievement in the writing assignments and their midterm test scores.
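A relation like the one examined here — assignment error frequency versus midterm score — can be quantified with a simple correlation coefficient. The per-student data below are hypothetical placeholders (the extract does not publish the full per-student table; only the two scores 60.83 and 66.66 are taken from the text):

```python
import numpy as np

# Hypothetical per-student data: number of assignments (out of 8) in
# which the student committed errors, and their midterm score (0-100).
# Only 60.83 and 66.66 are scores actually quoted in the study.
errors_per_student = np.array([8, 8, 2, 1, 2, 1, 2, 1])
midterm_scores     = np.array([60.83, 66.66, 30.83, 45.0,
                               50.0, 58.33, 35.0, 40.0])

# Pearson correlation between error frequency and midterm score;
# a weak or positive r would echo the discrepancies described above,
# since the expected relation would be strongly negative.
r = np.corrcoef(errors_per_student, midterm_scores)[0, 1]
print(f"Pearson r = {r:.2f}")
```

Reporting r alongside the raw score table would make the claimed discrepancy verifiable at a glance.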
From the data found, the author could draw the conclusion that making good progress in the assignments did not necessarily ensure that the students would get good scores in the test, but it did indicate that the students would make good preparation for the test. That was partly because the students did not know what the test would be like, and the test was not all about writing sentences. Schachter (1974) analyzed learners' recurring errors in order to discover the reasons why the errors occur.

Factors Influencing The Occurrence of Errors in Students' Writing Assignments

For this study, in order to find concrete reasons for the error occurrence, the author collected the students' opinions by having them answer some questions through a questionnaire.
Through the questionnaire, the author discovered 14 general reasons for the students' errors, and two reasons stood out among them. As it was an open-ended question, the students could give more than one reason in their answer. The author made a tally count of the students' answers and found that 29.16% of the students mentioned "being careless" as the major reason for their errors, and 20.83% mentioned "lack of understanding towards the lessons" as a major cause. Below are the complete 14 reasons the students mentioned as the causes of their errors:
1. was careless
2. lack of understanding towards the lessons
3. was rushing in doing the exercises
4. was impatient
5. very often forgot the lessons
6. lack of vocabulary
7. forgot to use articles
8. lack of understanding in translation
9. forgot to use prepositions
10. forgot to use auxiliary verbs
11. less practice
12. did not notice the instructions well
13. the questions were confusing
14. could not memorize formulas
When asked whether the teacher's corrections were useful or not, all of the students agreed on one answer, "yes". The students gave some reasons for their positive response, and 67.85% of their answers, in their own words, implied that the teacher's corrections were very useful in helping them know their mistakes and in helping them not make the same mistakes in the following assignments.
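The tally count described above can be reproduced with a few lines of code. The responses below are hypothetical placeholders; note that the quoted percentages (29.16% and 20.83%) correspond to 7/24 and 5/24, suggesting the denominator was 24 tallied mentions rather than 28 students, so this sketch computes each reason's share of all mentions:

```python
from collections import Counter

# Hypothetical multiset of reasons gathered from an open-ended
# questionnaire item (students could give more than one reason).
responses = (["was careless"] * 7
             + ["lack of understanding towards the lessons"] * 5
             + ["lack of vocabulary"] * 4
             + ["was rushing in doing the exercises"] * 3
             + ["forgot to use articles"] * 3
             + ["less practice"] * 2)

tally = Counter(responses)
total_mentions = sum(tally.values())  # 24 mentions in this sketch

# Share of all mentions for the two most-cited reasons
for reason, count in tally.most_common(2):
    share = 100 * count / total_mentions
    print(f"{reason}: {share:.2f}%")  # prints 29.17% and 20.83%
```

Stating the denominator explicitly (mentions versus respondents) would remove the ambiguity in how the percentages were computed.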
Further, the questionnaire presented the students with a table consisting of nine types of errors and asked them to rank the errors from the most difficult topic to the least difficult. The topics were article, preposition, spelling, word choice, subject-verb agreement, auxiliary verb, plural form, verb form, and capital letter. To analyze the data, the author calculated the percentage of students choosing each rank for each type of error and reported, for each error type, the largest percentage and the rank it referred to. Most of the students chose "article" and "word choice" as the most difficult topics and ranked them first, with "auxiliary verb" as the second most difficult. Both "subject-verb agreement" and "spelling" were placed third, with "preposition" next. Interestingly, the majority of students voted for capital letter as the least difficult topic, with 66.66% of them putting number 9 in the column for capital letter. The data below show the percentages for each topic in more detail.

Having established which topics the students considered most difficult among the top nine common errors, the author then compared these ranks with the ranks derived from the students' writing assignments. It is worth noting that although the two rankings agreed on four topics, there were also significant discrepancies: the students considered preposition far less difficult than auxiliary verb, yet in the actual exercises they made more errors with prepositions than with auxiliary verbs. This suggests that the teacher needed to take auxiliary verbs seriously but also to teach the students more about prepositions, since the students seemed to have little awareness of their lack of understanding of prepositions.
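The ranking procedure described above (reporting, for each error type, the rank chosen by the largest share of students) can be sketched as follows. The vote percentages below are illustrative placeholders, not the paper's actual data:

```python
# Hypothetical vote distributions: for each error type, the percentage of
# students assigning each rank (1 = most difficult ... 9 = least difficult).
votes = {
    "article":        {1: 40.0, 2: 20.0, 9: 5.0},
    "capital letter": {1: 3.0, 8: 10.0, 9: 66.66},
}

def modal_rank(rank_percentages):
    """Return (rank, percentage) for the most frequently chosen rank."""
    rank, pct = max(rank_percentages.items(), key=lambda kv: kv[1])
    return rank, pct

for topic, dist in votes.items():
    rank, pct = modal_rank(dist)
    print(f"{topic}: rank {rank} ({pct}%)")
```

With the illustrative figures, "article" would be reported at rank 1 and "capital letter" at rank 9, mirroring the pattern described in the text.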
The final question in the questionnaire asked the students whether the writing assignments were helpful in preparing them for the midterm test. 99% of the students answered "yes" and gave reasons for their positive attitude. Quite unexpectedly, however, one student answered "no" because she said she needed the kinds of questions that were likely to appear on the midterm test. After analyzing the students' answers to the last question, the author compared the midterm class average of this observed class with those of the two classes that were not given error correction on their writing assignments. The observed class averaged 52.19 on the midterm test, while the other two classes averaged 47.58 and 54.67 respectively; that is, the observed class scored much higher than one class but somewhat lower than the other. This suggests that the class teacher needed to investigate further whether the observed class consisted mostly of students weak in English, since its average was 2.48 points lower than that of one class that received no error correction on its writing assignments.
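The score comparison above amounts to simple differencing of class means, using the averages reported in the text:

```python
# Class averages on the midterm test, as reported in the text.
observed = 52.19          # class whose writing assignment errors were corrected
others = [47.58, 54.67]   # comparison classes with no error correction

# Positive difference = observed class scored higher than the comparison class.
differences = [round(observed - other, 2) for other in others]
print(differences)
```

This reproduces the gap the author highlights: the observed class sits above one comparison class and 2.48 points below the other.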
CONCLUSION AND SUGGESTION
Writing in English is not an easy task for EFL learners because it involves not only the logical thinking needed to put ideas in a meaningful order but also sufficient vocabulary and an understanding of English sentence structures and tenses. When it comes to written texts, the teacher should encourage students to be aware of grammar and language accuracy.
Judging from the errors in the students' writing assignments, the majority of the students had problems with articles and prepositions, although the students themselves rated auxiliary verbs as more difficult than prepositions. These findings are quite similar to those of research conducted in other schools and universities around the world whose students speak English as a second or foreign language. Several conclusions from this study are worth noting as recommendations for lecturers to improve their teaching practices, help the students learn better, and, in the long run, turn the students into active learners able to use English fluently and grammatically. The top ten errors identified here should be treated as valuable findings that the lecturer should emphasize when teaching current students and future freshmen. To reduce L1 interference, the lecturer should recommend that the students own and bring their dictionaries to English classes and use them as often as possible, since it was found that the students hardly consulted their dictionaries for unfamiliar words.
One big concern is that the lecturer should warn the students about their common errors, because it is possible that the students did not know they were making them. In addition, they might think that their work was correct, as people mostly ignore repeated errors, even those due to a lack of linguistic competence. People in general are sometimes unable to identify them as errors, and "this leads to the social acceptance of an error" (Sarfaz, 2011, p. 38). Therefore, continuous effort from lecturers is needed to check students' language and to give correction and comment on their errors. In China, Wang (2008) claimed that teachers were given primary responsibility for analyzing students' errors because such errors were worth studying in order to help students learn more comfortably, without the pressure of being marked wrong and with the encouragement of positive comments from their teachers.
Another suggestion to the lecturer is to give students more time to practice their lessons. Practice makes perfect, and with more drills and direction the students can master their lessons well. This effort will help keep students from making errors, as Sarfaz (2011, p. 37) claimed that "in the absence of sufficient practice, the learners produce the language system which deviates from the system of TL."
Table 4. The Top Ten Students' Most Common Errors
Table 6. The Highest Percentage for Each Type of Error and Its Rank
Table 7. The Types of Errors According to the Students' Questionnaire and Writing Assignments
Second-Hand Exposure of Staff Administering Vaporised Cannabinoid Products to Patients in a Hospital Setting
Background In many health settings, administration of medicinal cannabis poses significant implementation barriers including drug storage and safety for administering staff and surrounding patients. Different modes of administration also provide different yet potentially significant issues. One route that has become of clinical interest owing to the rapid onset of action and patient control of the inhaled amount (via breath timing and depth) is that of vaporisation of cannabinoid products. Although requiring a registered therapeutic device for administration, this is a relatively safe method of intrapulmonary administration that may be particularly useful for patients with difficulty swallowing, and for those in whom higher concentrations of cannabinoids are needed quickly. A particular concern expressed to researchers undertaking clinical trials in the hospital is that other patients, nurses, and clinical or research staff may be exposed to second-hand vapours in the course of administering vaporised products to patients. Objective The objective of this study was to take samples from two research staff involved in administering vaporised Δ9-tetrahydrocannabinol to participants in a clinical trial, to examine and quantitate cannabinoid presence. Methods Blood samples from two research staff were taken during the exposure period for three participants (cannabis users) over the course of approximately 2.5 h and analysed using tandem mass spectrometry. Results Blood samples taken over a vaporised period revealed exposure below the limit of detection for Δ9-tetrahydrocannabinol and two metabolites, using tandem mass spectrometry analytical methods. Conclusions These results are reassuring for hospital and clinical trial practices with staff administering vaporised cannabinoid products, and helpful to ethics committees wishing to quantify risk.
Introduction
Medicinal cannabis use, whilst now legal in many jurisdictions, remains a topic of great controversy. For its consideration for use in mainstream medical treatment pathways as a 'therapeutic good', or in clinical trials in hospital settings, it is crucial to understand the acceptability and side effects of the route of administration for different products and dosing regimens. One route that has become of clinical interest is that of vaporisation of cannabinoid products. Although requiring a registered therapeutic device for administration, this is a relatively safe method of intrapulmonary administration that avoids risks associated with smoking and the formation of pyrolytic toxic compounds as it does not involve combustion [1]. It is also less likely to be associated with the cultural and societal assumptions linked with recreational cannabis use. The vaporisation route of administration may be particularly useful for patients with difficulty swallowing and for those in whom higher concentrations of cannabinoids are needed quickly. Peak plasma Δ9-tetrahydrocannabinol (THC) concentrations are reached within minutes of inhalation and have a rapid distribution phase [2][3][4].
The concern that other patients, nurses, and clinical or research staff may be exposed to second-hand vapours in the course of administering vaporised products to patients may limit the uptake of this form of treatment. Similar concerns have been raised for other medications, such as potential antimicrobial resistance development from exposure to nebulised antibiotics [5]. Previous well-controlled studies have determined that second-hand exposure to cannabis smoke may produce positive blood and urine test results and minor drug effects in non-smokers only under extreme conditions: non-smokers being in very close proximity to smokers using medium-high potency cannabis ad libitum in a small unventilated area for 1 h and using sensitive urinary assays with low cut-off criteria [6,7]. Under extreme exposure conditions to inhaled cannabis smoke within a motor vehicle, no THC was detected in the oral fluid of those passively exposed [8], noting limitations with the interpretation of salivary cannabinoid assays in detecting the time of use and overall exposure, reviewed in [9]. No studies have investigated systemic exposure from second-hand vaporised cannabinoid product use. We used opportunistic sampling from staff administering vaporised pure THC within a clinical trial in a hospital setting to examine the likely risk.
Methods
In a clinical trial involving a vaporised ethanolic solution of 6 mg of THC [ISRCTN24109245] [10] using the Volcano® 'Digit' model vaporiser (Storz & Bickel GmbH & Co., Tuttlingen, Germany) set at 230°C, two female clinical research staff gave informed consent to contribute blood samples to ascertain their exposure. Vaporisation of THC into the balloon and administration of the balloon filled with vapours for inhalation by trial participants (cannabis users and nonusers) was conducted in a small standard clinical assessment room on a hospital ward, away from other patients and near to imaging facilities. The approximate size of the room was 3 m × 2 m. One of the staff (A; BMI 20.1) administered the balloon to the participant and remained approximately 1 m away from the participant during inhalation and exhalation. The other staff member (B; BMI 20.2) was positioned inside the room but closer to the partially opened door, approximately 2 m away from the participant. There was no specific ventilation in the room aside from a standard small air conditioning vent. Participants inhaled and exhaled on average six to ten times to empty a balloon, and two balloons were administered. The first contained vaporised THC, the second contained the placebo (ethanol-flavoured air; see [10] for methodology), and participants took on average 9 min to complete inhalation of both balloons (~5-6 min for the THC balloon and 3-4 min for the placebo balloon). Four blood samples were collected from staff over the course of approximately 2.5 h. The first was taken prior to any drug administration. The subsequent three were taken 5 min after each of the three participants completed inhalation of the balloons, with participants spaced approximately 1 h apart. Administration to the three participants occurred in the same room following the same procedures. As such, there was the possibility of cumulative exposure over the course of this approximately 2.5 h period.
Staff gave 5 mL of blood, collected into EDTA tubes, which were covered with aluminium foil to prevent light exposure and kept on ice until the end of the day, when they were centrifuged at 2000×g for 10 min at 4°C and the plasma extracted. Plasma samples were stored frozen at -80°C and subsequently defrosted for assay by tandem mass spectrometry [11]. Plasma (50 µL) samples were combined with 100 µL of acetonitrile containing deuterated internal standards. Samples were then vortexed before being centrifuged at 15,000×g for 5 min. The supernatant was transferred into vials for measurement using liquid chromatography tandem mass spectrometry. The instrument comprised a Shimadzu Nexera X2 ultra-high performance liquid chromatograph (Shimadzu Corporation, Kyoto, Japan) with a SCIEX 6500QTrap and a Kinetex Biphenyl column, using a gradient of acetonitrile and 0.1% formic acid. The limit of quantitation was 0.5 ng/mL each for THC and the metabolites 11-hydroxy-Δ9-tetrahydrocannabinol (OH-THC) and 11-nor-9-carboxy-Δ9-tetrahydrocannabinol (COOH-THC). The limit of detection was 0.2 ng/mL for THC, 0.15 ng/mL for OH-THC and 0.25 ng/mL for COOH-THC.
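The detection thresholds above imply a three-way reporting rule for each analyte: below the LOD, detectable but below the LOQ, or quantifiable. A minimal sketch of such a rule (the function name and exact reporting strings are illustrative, not the laboratory's actual convention):

```python
# LOD/LOQ thresholds (ng/mL) as stated in the Methods.
LIMITS = {  # analyte: (limit of detection, limit of quantitation)
    "THC":      (0.20, 0.5),
    "OH-THC":   (0.15, 0.5),
    "COOH-THC": (0.25, 0.5),
}

def report(analyte, conc_ng_ml):
    """Classify a measured concentration against the assay thresholds."""
    lod, loq = LIMITS[analyte]
    if conc_ng_ml < lod:
        return "< LOD"                    # not detected
    if conc_ng_ml < loq:
        return "detected, < LOQ"          # present but not reliably quantifiable
    return f"{conc_ng_ml:.2f} ng/mL"      # quantifiable result
```

Under this rule, all of the staff samples in this study would fall into the "< LOD" category for all three analytes.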
One of the research staff (B) also performed a urinary drug test several hours after these procedures (ProScreen™ Dip Test (US Diagnostics Inc, Huntsville, AL, USA); cut-off 50 ng/mL). Both staff also performed salivary tests for THC (Oratect® IIIB (Alere™ Toxicology Services, Portsmouth, VA, USA); cut-off 40 ng/mL).
Results
No cannabinoids were detected in plasma from either staff member (A or B) at baseline, nor, as shown in Table 1, at any of the three timepoints taken 5 min after completion of inhalation of THC vapours by each of three participants spaced 1 h apart. The urinary drug test was negative for cannabinoids. The salivary THC tests were both negative.
That the experiment and assays were valid is evidenced by the quantification of THC and metabolites in the plasma of two of the THC-exposed male research participants (X and Y), shown in Table 2 (blood was not successfully drawn from the third participant because of unviable veins). Plasma concentrations in Table 2 correspond to baseline (pre-drug administration; 1), 5 min after inhalation of the two balloons (2), and 1 h later (3). Participant Y was a heavy cannabis user, explaining the cannabinoid concentrations present at baseline.
Conclusions
These results suggest that there is little risk of second-hand exposure to clinical or research staff from administering vaporised THC within a clinical setting. Previous research has suggested that 35% of inhaled THC vapours are exhaled directly after inhalation [1], and we previously showed that 80% of the THC loaded into the vaporiser is delivered into the balloon [10]. Overall, the efficiency of this delivery method is comparable to that achieved through a smoking route of cannabis administration [1]. The conditions within which this small study was performed emulate administration of medicinal cannabis on a hospital ward, without the smoke, and optimised the opportunity to detect cannabinoids in the biological fluids of staff, yet none were detected. Together with the fact that newer vaporisers, e.g. the MiniVap (Hermes Medical Engineering, San Sebastián, Spain), have less 'gas escape' than the one used in this study, these outcomes should reassure researchers of the safety for staff in administering medicinal cannabis to patients in this setting. Nevertheless, the THC dose used in this study was relatively low (6 mg), and while higher doses are also not expected to result in detectable cannabinoids in clinical staff exposed under these conditions, replication of these findings with a larger sample size, more timepoints, alternate vaporisers, and with vaporisation of cannabis plant matter is warranted.
Compliance with Ethical Standards
Funding: The original study that enabled the analyses reported here was funded by the National Health and Medical Research Council of Australia (APP1007593). Nadia Solowij was supported by an Australian Research Council Future Fellowship (FT1101007752).
Ethics approval: All procedures performed involving human participants were in accordance with the ethical standards of the institutional research committees and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Conflict of interest
Consent to participate: Informed consent was obtained from all individual participants included in the study.

Table 1. Results of liquid chromatography tandem mass spectrometry analysis of Δ9-tetrahydrocannabinol (THC) and metabolites (ng/mL) in plasma from two staff (A and B) exposed three times to exhaled vapours over the course of a 2.5 h period. Samples (1), (2) and (3) were drawn 5 min after each of three participants, spaced ~1 h apart, was exposed to vaporised THC. COOH-THC = 11-nor-9-carboxy-Δ9-tetrahydrocannabinol; LOD = limit of detection; OH-THC = 11-hydroxy-Δ9-tetrahydrocannabinol.

Table 2. Results of liquid chromatography tandem mass spectrometry analysis of Δ9-tetrahydrocannabinol (THC) and metabolites (ng/mL) in plasma from two cannabis users (X and Y) exposed to vaporised THC. Samples were drawn prior to THC administration (1) and 5 min after THC administration (2). COOH-THC = 11-nor-9-carboxy-Δ9-tetrahydrocannabinol; LOD = limit of detection; OH-THC = 11-hydroxy-Δ9-tetrahydrocannabinol.

Open Access: This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Altered 5-HT2A/C receptor binding in the medulla oblongata in the sudden infant death syndrome (SIDS): Part II. Age-associated alterations in serotonin receptor binding profiles within medullary nuclei supporting cardiorespiratory homeostasis
Abstract The failure of chemoreflexes, arousal, and/or autoresuscitation to asphyxia may underlie some sudden infant death syndrome (SIDS) cases. In Part I, we showed that some SIDS infants had altered 5-hydroxytryptamine (5-HT)2A/C receptor binding in medullary nuclei supporting chemoreflexes, arousal, and autoresuscitation. Here, using the same dataset, we tested the hypotheses that the prevalence of low 5-HT1A and/or 5-HT2A/C receptor binding (defined as levels below the 95% confidence interval of controls—a new approach), and the percentages of nuclei affected are greater in SIDS versus controls, and that the distribution of low binding varied with age of death. The prevalence and percentage of nuclei with low 5-HT1A and 5-HT2A/C binding in SIDS were twice that of controls. The percentage of nuclei with low 5-HT2A/C binding was greater in older SIDS infants. In >80% of older SIDS infants, low 5-HT2A/C binding characterized the hypoglossal nucleus, vagal dorsal nucleus, nucleus of solitary tract, and nuclei of the olivocerebellar subnetwork (important for blood pressure regulation). Together, our findings from SIDS infants and from animal models of serotonergic dysfunction suggest that some SIDS cases represent a serotonopathy. We present new hypotheses, yet to be tested, about how defects within serotonergic subnetworks may lead to SIDS.
KEYWORDS: Autoradiography, Blood pressure recovery, Gasping, Hypoxia, Inferior olive, Nucleus of the solitary tract, Raphe
INTRODUCTION
The sudden infant death syndrome (SIDS) is characterized by the sudden, unexpected death of an infant in the first postnatal year that remains unexplained by complete autopsy and forensic investigation (1). SIDS infants, who are seemingly normal, are found dead unexpectedly, typically during a sleep period (2). SIDS remains the leading cause of post-neonatal infant mortality in the United States (3). Over the last 2 decades, the rate of SIDS has not declined despite increased public health campaigns promoting awareness of safe infant sleep (e.g. supine position) and infant care practices that may reduce the risk of SIDS (e.g. breastfeeding (4,5)). Instead, the overall SIDS rate has plateaued in the United States (6) and has even increased recently in African-American infants (7). There is an urgent need to establish the definitive cause(s) and basic mechanism(s) of SIDS, upon which specific therapeutic remedies and more effective means of prevention can be developed (6).
During sleep, infants and adults alike can be subjected to intermittent blood gas disturbances due to re-breathing (i.e. breathing in exhaled air), a brief loss of airway patency (i.e. obstructive sleep apnea), or events in the central nervous system that halt the activity of the diaphragm (i.e. central apnea). The first line of defense is activation of chemoreceptors triggered by hypoxia and elevated tissue carbon dioxide (CO2) with associated acidosis. Chemoreceptor activation provides excitatory signals to regions of the brain that promote arousal from sleep (8)(9)(10). The activation of chemoreceptors also increases neural activity within respiratory and cardiovascular networks that together combat the blood gas disturbance (11)(12)(13)(14). If this first line of defense fails or is insufficient, hypercapnia and hypoxia (i.e. asphyxia) become progressively more severe, potentially leading to hypoxic coma (15). Survival then hinges on a complex behavior (autoresuscitation) relying on multiple, integrated physiological processes to support reoxygenation of the brain to allow the restoration of normal breathing (eupnea) and ultimately reversal of hypoxic coma.
There is a wealth of pathological and molecular evidence that SIDS infants experience chronic hypoxia, perhaps linked to repeated periods of re-breathing, or obstructive or central apnea and bradycardia, prior to a terminal event. Brainstem gliosis (16)(17)(18) and elevated hypoxic markers have been identified in a variety of tissues, suggesting that SIDS infants were chronically hypoxic prior to the terminal event (19)(20)(21)(22)(23). Analyses of cardiorespiratory records obtained immediately prior to death suggest that the final event often involved acute, severe asphyxia that the infant could not overcome. For example, prolonged apnea, bradycardia, and gasping (all indicating severe asphyxia) have been observed in SIDS infants prior to death (24,25). In a normal infant, the integrated physiological components of arousal and autoresuscitation are successful; eupnea and consciousness are restored. In at least some SIDS cases, after hypoxia- and/or CO2-induced arousal failed or was ineffective, gasping was not sufficient to restore cardiorespiratory function, leading to death.
Neuropathological evidence obtained from SIDS infants by our group over the last 3 decades is in keeping with the concept that SIDS is associated with decreased signaling from serotonin (5-hydroxytryptamine, 5-HT), a monoamine neuromodulator, in key medullary nuclei supporting arousal and autoresuscitation. Neurons that synthesize 5-HT reside in the medulla, pons, and midbrain and regulate diverse functions within the CNS. We initially reported reduced 5-HT receptor binding using 3H-lysergic acid diethylamide, a relatively nonspecific 5-HT receptor ligand (26,27). SIDS infants have reduced 5-HT in the raphe obscurus (ROb) and the nucleus paragigantocellularis lateralis (PGCL), two medullary 5-HT source nuclei (i.e. those nuclei containing serotonergic neuronal cell bodies) that release 5-HT through projections to target nuclei (i.e. those nuclei containing 5-HT receptors but not serotonergic cell bodies) (28). Moreover, tryptophan hydroxylase 2 (TPH2), the enzyme responsible for the bulk of 5-HT synthesis in the central nervous system, is reduced in the ROb of SIDS infants compared to controls (28). Various isoforms of 14-3-3, a family of regulatory proteins with multiple functions, including regulating the activity of TPH2, are also reduced in the nucleus gigantocellularis (GC) of SIDS infants (29).
Based on the findings from SIDS infants, we have more recently used animal models to investigate hypotheses positing that serotonergic defects in key neuromodulatory systems regulating CO2 chemoreception, arousal (the first line of defense), and autoresuscitation (the last resort) underlie some cases of SIDS. As a group, we have shown that animal models of serotonergic dysfunction display defects in CO2 chemoreception (14,(30)(31)(32)(33), arousal (8,(34)(35)(36), and autoresuscitation, with animals unable to recover heart rate, blood pressure, eupnea, and consciousness following multiple episodes of asphyxia (37)(38)(39)(40)(41). These findings support the concept that abnormal 5-HT receptor binding in SIDS reflects serotonergic dysfunction in these key medullary nuclei, compromising these vital processes that normally protect an infant during sleep periods.
The current paper is the second of a 2-paper series. In Part I, we presented data demonstrating that 5-HT2A/C receptor binding was reduced in nuclei comprising tegmental and olivocerebellar subnetworks of SIDS infants compared to age-adjusted autopsy controls; both subnetworks are 5-HT targets and participate in arousal and cardiorespiratory reflexes. For some nuclei, reduced receptor binding was dependent on age, with the greatest reduction at the oldest ages. Over the course of our research, we came to appreciate that the affected medullary pathways in the SIDS cases share two critical features: (1) they are all involved in protective responses that ultimately help to restore the oxygen (O2) and CO2 status of vital tissues (e.g. brain) during a cardiorespiratory event, including during sleep (e.g. apnea); and (2) 5-HT and the neurons that produce it play an essential role in these processes (30-33, 36, 38, 39, 41-46). Based on the findings described in Part I, we proposed the existence of an integrative brainstem network that in SIDS infants fails to preserve breathing, facilitate arousal, and/or induce successful autoresuscitation. In Part II of this series, we address the overall hypothesis that 5-HT1A and 5-HT2A/C receptor binding statistically defined as low manifests differently in SIDS in key source and target nuclei, in a manner depending on age at death. Here, in a continued analysis of the Part I database, we address, for the first time, the prevalence of low 5-HT receptor binding in SIDS infants across nuclei and within different age bins. We more deeply examine our autoradiography data from Part I, given the unique features of this precious SIDS autopsy cohort, the likes of which is becoming progressively more difficult to obtain due to decreasing autopsy rates and general complications in obtaining parental consent during the period between death and autopsy.
MATERIALS AND METHODS
Clinicopathologic database and tissue processing for receptor autoradiography
Historically, we accrued autopsy samples continuously and created independent datasets when enough were present to merit analysis. The published 5-HT1A and 5-HT2A/C data were obtained from cohorts of SIDS infants and controls previously collected in our lab over 3 different time periods and designated as independent datasets (datasets 3-5 from our laboratory). The 5-HT1A receptor binding database used here comprised data from SIDS and controls from datasets 3, 4, and 5, and the 5-HT2A/C receptor binding database used here comprised SIDS and controls from datasets 4 and 5. As depicted in Figure 1, a combined database was derived from infants in the foregoing individual databases who had both 5-HT1A and 5-HT2A/C receptor binding measurements. All brainstems previously analyzed came from the Office of the Chief Medical Examiner, San Diego, CA, and were available for research under the auspices of the California Code, Section 27491.41. The reader is referred to Part I for the definition and adjudication of SIDS infants and autopsy controls, the protocol for tissue processing and sectioning, methods for 5-HT1A and 5-HT2A/C receptor autoradiography, choice of atlases for human brainstem anatomy, and tabulation of nuclei in the medulla that were sampled for receptor binding levels (47). Examples of the autoradiograms utilized for binding measurements are shown in Figure 2. The tabulation of the causes of death in the autopsy controls has been published (47,48). Using the databases described above (Fig. 1) and a statistical definition of low binding, we examined the prevalence of low 5-HT1A and 5-HT2A/C binding in SIDS and control infants. It should be noted that the autoradiographic methods used here and previously cannot resolve the binding affinity or levels of receptors on specific cellular phenotypes within the nuclei examined.
Neuroanatomy
In both Parts I and II, we focused on nuclei in the previously defined medullary 5-HT network comprising the 5-HT source neurons and their medullary target sites (47). The 5-HT system has rostral and caudal domains (47). The rostral domain includes raphe nuclei in the midbrain and pons. Serotonergic neurons in this domain modulate various aspects of cognitive function, sleep, CO2 chemosensitivity, arousal, and some aspects of upper airway muscle control. Although this rostral domain plays an important role in cognitive aspects of arousal and consciousness (49), we did not pursue investigation of the rostral domain, as our earliest studies did not show consistent binding defects in this region (26). The caudal domain, the focus of our binding studies over the last several decades, contains 5-HT source neurons in the medulla (ROb, raphe magnus [RMg], and raphe pallidus [RPa]), as well as the extra-raphe 5-HT source nuclei in the ventromedial medulla (GC) and ventrolateral medulla (PGCL and intermediate reticular zone [IRZ]), that is, the reticular formation of the rostral medulla (see Figs. 1 and 2 in Part I (47)). As ROb and RMg have overlapping boundaries, we considered them as a single source nucleus and referred to them as ROb/RMg in our analyses. This caudal domain is critical for modulation of many physiological functions required for life support, including subcortical arousal, which helps support cardiorespiratory homeostasis during sleep periods (14,50).
Statistical analysis of low receptor binding
In Part I and earlier publications, we asked whether mean receptor binding in each nucleus differed between infants who died of SIDS and control infants who died of other causes (47). This approach tested for population-level differences between the SIDS and control cohorts but did not assess for abnormalities in individual SIDS infants. Therefore, the current analysis aimed to define "low" on a per-infant and per-nucleus basis, in direct comparison to receptor binding defined as normal in control infants.
We defined low binding as falling below the lower bound of the 95% confidence interval (CI) observed in control tissues. More specifically, because 5-HT 2A/C binding in control infants varies with age (47), the CI used was that from regression modeling of binding on postconceptional age, separately for each nucleus. One control infant had excessively high 5-HT 2A/C binding (i.e. an outlier) in 6 nuclei and was, therefore, excluded from the modeling. Modeling for 5-HT 1A used a quadratic effect of age for greater precision, whereas modeling for 5-HT 2A/C used a linear effect, due to the sparseness of the data for the latter. Because laboratory measurements were done in batches over time for each dataset used in prior publications, regression modeling also controlled for dataset. This analysis resulted in an age- and dataset-varying 95% CI around the normal binding levels within control infants. This statistical definition of "normal" is used for a wide variety of laboratory values, but it has, as a consequence, the outcome that some normal (control) babies are expected to have low-binding values by definition (51). Therefore, the study question became, "is the prevalence of low receptor binding greater in infants who died of SIDS compared to infants who died of other causes?"
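As an illustrative sketch only (not the authors' code), the age-varying cutoff from a simple linear regression of binding on age can be computed as below. The published analysis additionally included a quadratic age term for 5-HT 1A and adjusted for dataset, and would use the exact t quantile rather than the normal approximation used here.

```python
import math
from statistics import NormalDist

def low_binding_cutoff(ages, bindings, age_new, alpha=0.05):
    """Lower bound of the (1 - alpha) CI for the regression mean of
    receptor binding on postconceptional age in controls; binding below
    this cutoff is classified as "low" for that nucleus."""
    n = len(ages)
    xbar = sum(ages) / n
    ybar = sum(bindings) / n
    sxx = sum((x - xbar) ** 2 for x in ages)
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(ages, bindings)) / sxx
    intercept = ybar - slope * xbar
    resid_ss = sum((y - (intercept + slope * x)) ** 2
                   for x, y in zip(ages, bindings))
    s = math.sqrt(resid_ss / (n - 2))            # residual SD
    se_mean = s * math.sqrt(1 / n + (age_new - xbar) ** 2 / sxx)
    z = NormalDist().inv_cdf(1 - alpha / 2)      # ~1.96 for alpha = 0.05
    return (intercept + slope * age_new) - z * se_mean
```

An infant's measured binding in a given nucleus falling below `low_binding_cutoff(control_ages, control_bindings, infant_age)` would then be flagged as low for that nucleus.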
We defined overall prevalence of low binding in SIDS and control infants based on the percentage of SIDS and control infants that had one or more nuclei statistically defined as having low binding (as defined above). 5-HT 1A binding levels were measured in 10 nuclei. While the arcuate nucleus was included in the 10 nuclei measured for 5-HT 1A, it was not measured for the 5-HT 2A/C analysis and thus is not included in the figures. 5-HT 2A/C binding levels were measured in 11 nuclei, including 2 different levels of the principal inferior olive (PIO; rostral and mid medulla). Only the rostral level of the PIO is included in the figures. We defined the "combined database" as infants with measurements of both 5-HT 1A and 5-HT 2A/C in the 9 nuclei measured for both (Fig. 1). The prevalence of SIDS and control infants with any binding defined as low was calculated separately for 5-HT 1A and 5-HT 2A/C and compared between groups via the Fisher exact test. The same data were calculated for the "combined database," and we further considered whether infants had at least one low binding of either type (5-HT 1A or 5-HT 2A/C). Finally, we considered whether infants had at least one nucleus with low 5-HT 1A binding and low 5-HT 2A/C binding.
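Each prevalence comparison reduces to a Fisher exact test on a 2×2 table (SIDS vs. control × low vs. normal binding). A minimal stdlib sketch of the two-sided test follows; in practice a statistics package would be used, and the example counts are only illustrative.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the table [[a, b], [c, d]],
    e.g. rows = SIDS/control, columns = low/normal binding. Sums the
    hypergeometric probabilities of all tables with the same margins
    that are no more probable than the observed table."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):  # P(top-left cell == x) with margins fixed
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))
```

For instance, counts roughly matching the 5-HT 1A database (79% of 84 SIDS infants and 33% of 18 controls with low binding, i.e. about 66/84 vs. 6/18) give a very small p-value from `fisher_exact_2x2(66, 18, 6, 12)`, consistent with the group differences reported in Table 1.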
We determined the percentage of nuclei with low binding in each SIDS and control infant. Descriptive statistics of this percentage were calculated separately for 5-HT 1A and 5-HT 2A/C and compared between groups via t-test. In the combined database, we further considered the percentage of nuclei with low binding for one receptor (5-HT 1A or 5-HT 2A/C) and for both receptors (5-HT 1A and 5-HT 2A/C).
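Because nuclei without measurements are excluded from the denominator, the per-infant percentage can be sketched as follows (nucleus statuses here are hypothetical):

```python
def pct_nuclei_low(status):
    """Percent of measured nuclei classified as low for one infant.
    status maps nucleus -> True (low), False (normal), or None (not
    measured); unmeasured nuclei are dropped from the denominator."""
    measured = [v for v in status.values() if v is not None]
    return 100.0 * sum(measured) / len(measured) if measured else None

# Example: 2 of 3 measured nuclei are low -> ~66.7%
infant = {"HG": True, "DMX": False, "NTS": None, "ROb/RMg": True}
```

These per-infant percentages were then compared between the SIDS and control groups with a t-test.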
To examine age effects, we defined 3 age groups: early infancy (38-44.9 postconceptional weeks; "Early"), mid infancy (45-59.9 postconceptional weeks; "Mid"), and late infancy (60-76 postconceptional weeks; "Late"). The age groups were chosen based on breaks observed in the 5-HT 2A/C age distribution, and on the peak age for SIDS. For example, for an infant born at 37 weeks (term), the mid-infancy group corresponds to 8-22.9 postnatal weeks, roughly the peak age for SIDS.
Postconceptional age was defined as time since (estimated) conception, that is, the sum of gestational age (time before birth) and postnatal age (time since birth). Differences between age groups with respect to the percentage of nuclei defined as low binding were assessed via ANOVA, and differences in prevalence of low binding by subnetwork and postconceptional age were tested via logistic regression.
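Under these definitions, postconceptional age and age-bin assignment amount to the following sketch (all ages in weeks):

```python
def postconceptional_weeks(gestational_weeks, postnatal_weeks):
    """Postconceptional age = gestational age + postnatal age."""
    return gestational_weeks + postnatal_weeks

def age_bin(pc_weeks):
    """Age bins used in the analysis (postconceptional weeks)."""
    if 38 <= pc_weeks < 45:
        return "Early"
    if 45 <= pc_weeks < 60:
        return "Mid"
    if 60 <= pc_weeks <= 76:
        return "Late"
    return None  # outside the studied range
```

For the worked example in the text, a term infant (37 gestational weeks) at 10 postnatal weeks falls in the "Mid" bin (47 postconceptional weeks).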
We analyzed 5-HT 1A and 5-HT 2A/C binding in SIDS infants and controls within 2 serotonergic subnetworks vital for blood pressure regulation: the nucleus of the solitary tract (NTS)-medial accessory olive (MAO)-GC subnetwork and the PGCL-GC-IRZ-ROb source subnetwork. Abnormalities in each circuit were defined in 2 ways: (1) Any nuclei low: scored if at least one nucleus within the subnetwork had low binding, even if data were missing in one or more nuclei; (2) All nuclei low: scored if all nuclei within the subnetwork had low binding; not scored if data were missing in any nuclei. For each subnetwork, significant differences between SIDS and control infants were assessed with Fisher exact tests. The small number of controls prevented us from analyzing the effect of postconceptional age on receptor binding within the subnetworks in controls, but the effect in SIDS infants was tested via logistic regression.
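The two subnetwork scoring rules, including their different treatment of missing data, can be sketched as follows (nucleus names follow the text; this is an illustration, not the authors' code):

```python
OLIVOCEREBELLAR = ("NTS", "MAO", "GC")
SOURCE = ("ROb/RMg", "GC", "PGCL", "IRZ")

def score_subnetwork(status, nuclei):
    """Return (any_low, all_low) for one infant and one subnetwork.
    status maps nucleus -> True (low), False (normal), or None (missing).
    "Any low" tolerates missing nuclei (but is not scored if every
    nucleus is missing); "all low" is not scored (None) if any
    component nucleus is missing."""
    vals = [status.get(n) for n in nuclei]
    any_low = None if all(v is None for v in vals) else any(v is True for v in vals)
    all_low = None if any(v is None for v in vals) else all(vals)
    return any_low, all_low
```

For each subnetwork, the proportions of SIDS and control infants scored positive under each rule were then compared with Fisher exact tests.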
Last, we asked whether risk factors for SIDS (prematurity, sex, prenatal exposure, history of illness within a week before death, prone sleep position, found face down, adult bed/co-sleeping) were associated with particular defects in receptor binding within subnetworks. This analysis was negative (we were unable to define associations between risk factors and specific patterns of receptor binding), and the data are, therefore, not presented or discussed further.
RESULTS
5-HT receptor binding in the medulla in SIDS versus control infants
The results of the 5-HT 2A/C receptor binding studies using tissue autoradiography are presented in detail in Part I (47); those data form the basis for the 5-HT 2A/C analyses presented here in Part II. Results of the 5-HT 1A receptor binding studies are published (28, 48, 52); those data form the basis for the 5-HT 1A analyses presented here.
Examples of the distribution of receptor binding within the ROb/RMg are shown in Figure 3. 5-HT 1A and 5-HT 2A/C receptor binding are shown for control infants (gray circles), SIDS infants classified as "normal binding" (blue circles), and SIDS infants classified as "low binding" (red circles). Per the statistical definition of low binding, a fine line exists between normal binding (blue circles) and low binding (red circles) in SIDS infants. There is also overlap between SIDS infants and controls (gray circles).
Increased prevalence of low 5-HT 1A and 5-HT 2A/C receptor binding in SIDS infants
The overall prevalence (percentage) of SIDS infants with at least one nucleus defined as having low binding was ≥77% across all 3 databases (Table 1). In the 5-HT 1A database, 79% of SIDS infants have low binding in one or more nuclei, compared to 33% of the control infants (p < 0.001). In the 5-HT 2A/C database, 88% of SIDS infants have low binding in one or more nuclei, compared to 42% of control infants (p = 0.001). In the combined database utilizing SIDS infants with both 5-HT 1A and 5-HT 2A/C receptor binding data available, 87% (45/52) of the SIDS infants had low 5-HT 1A receptor binding, 87% (45/52) had low 5-HT 2A/C binding, 96% (50/52) had either low 5-HT 1A or 5-HT 2A/C binding, and 77% (40/52) had both low 5-HT 1A and 5-HT 2A/C binding. While some control infants had low receptor binding in at least one nucleus, the prevalence of low 5-HT 1A and/or 5-HT 2A/C binding in the SIDS infants was significantly greater than in the control infants for all 3 databases (p = 0.042 to <0.001, Table 1), and few controls (17%) were identified as having both low 5-HT 1A and 5-HT 2A/C binding.
SIDS infants have a higher percentage of nuclei with low 5-HT 1A and 5-HT 2A/C binding
To examine the scope or extent of low receptor binding in infants who died of SIDS compared to infants who died of defined causes, we calculated the number of nuclei per infant with low 5-HT 1A or 5-HT 2A/C binding. As some infants did not have binding measurements for every nucleus, we calculated the percentage of all nuclei measured that had low binding (Table 2). On average, between 37% and 53% of all nuclei measured in SIDS infants displayed low binding for 5-HT 1A or 5-HT 2A/C receptors, respectively, in the individual databases. Using the combined database, an average of 79% of nuclei in SIDS infants had either low 5-HT 1A or 5-HT 2A/C binding, and 22% of nuclei had low binding for both 5-HT 1A and 5-HT 2A/C receptors. By contrast, on average, in the control infants 0%-39% of all nuclei measured had low binding for 5-HT 1A and/or 5-HT 2A/C receptors in the individual and combined databases. Notably, no nucleus in the control infants had low binding for both 5-HT 1A and 5-HT 2A/C receptors (Table 2). Overall, SIDS infants had a higher percentage of nuclei with low 5-HT 1A and/or 5-HT 2A/C binding (p = 0.02 to <0.001) (Table 2).
Overall percentage and prevalence of nuclei with low 5-HT 1A and 5-HT 2A/C receptor binding
Given the distinct functions of specific nuclei in chemoreception, arousal, and/or autoresuscitation, we examined the prevalence of low 5-HT 1A and 5-HT 2A/C binding across nuclei to determine whether unique patterns existed in SIDS infants compared to controls. To maximize the available data, we utilized the full individual database of SIDS infants from the published 5-HT 1A studies (84 SIDS; 18 controls) and the full individual database of SIDS infants from the published 5-HT 2A/C study (57 SIDS; 12 controls) (Fig. 4). The percentage of SIDS infants with low 5-HT 1A or 5-HT 2A/C binding in individual nuclei varied across 5-HT source and target nuclei. Within individual nuclei, 22%-56% of SIDS infants had low 5-HT 1A receptor binding: the highest prevalence of a 5-HT 1A deficit occurred in the hypoglossal nucleus (HG), a 5-HT target nucleus (56%), and the lowest prevalence occurred in the GC, a 5-HT source nucleus (22%).
Age distribution of the percentage and prevalence of nuclei with low 5-HT 1A and 5-HT 2A/C receptor binding
Figure 5 shows the percentage of nuclei with low 5-HT 1A or 5-HT 2A/C binding in SIDS and control infants within 3 different age bins (Early, Mid, and Late). While the percentage of nuclei with low 5-HT 1A binding did not vary significantly across age bins (p = 0.35), the percentage of nuclei with low 5-HT 2A/C binding was significantly greater in the older infants ("Late") compared to the younger age bins ("Early" and "Mid") (p < 0.006) (Fig. 5). The percentage of nuclei with low 5-HT 1A and/or 5-HT 2A/C binding was less (0%-33%) in control infants across all 3 age bins. Figure 6 shows the prevalence (percentage) of infants that showed low binding in specific nuclei across the 3 different age bins. In the youngest SIDS infants ("Early"), low 5-HT 1A binding appeared infrequently in the 5-HT source nuclei (ROb/RMg, GC, PGCL, IRZ) but frequently in the 2 target nuclei: DMX (71% of SIDS infants) and HG (55% of SIDS infants). In contrast, low 5-HT 2A/C binding occurred much more frequently in the source nuclei of the youngest SIDS infants, especially in the ROb/RMg (67%), PGCL (50%), and IRZ (50%), and less frequently in the DMX (0%) and HG (40%) (Fig. 6). In the oldest SIDS infants ("Late"), the percentage of SIDS with low 5-HT 1A binding varied across nuclei (0%-71%) (Fig. 6). For 5-HT 2A/C binding, however, the percentage of SIDS cases with low binding was >70% across all nuclei measured. Most notable was the percentage of older SIDS cases with low 5-HT 2A/C binding in 5-HT target nuclei: HG (89%), DMX (89%), NTS (89%), and all olivocerebellar nuclei (MAO, PIO, dorsal accessory olive [DAO]) (70%-100% of SIDS infants for each). One hundred percent of the oldest SIDS infants had low 5-HT 2A/C binding in the MAO (Fig. 6). In the age bin consistent with the peak period of SIDS ("Mid"), all nuclei displayed some level of vulnerability, with the prevalence of low binding for either 5-HT 1A or 5-HT 2A/C being between 29% and 63% in SIDS infants. The prevalence (percentage) of control infants with low binding in specific nuclei varied with receptor, nuclei, and across age bins. Within younger controls ("Early"), the percentage of control infants with low binding for either receptor was <30%, except for 5-HT 2A/C binding in the ROb/RMg (43%). Low binding was infrequently observed in control infants at the "Mid" and "Late" age bins. Exceptions include the 5-HT source nuclei in the older infants (ROb/RMg, GC, PGCL, IRZ, all 33.3% for 5-HT 1A) and the target nuclei in the older infants (DMX and NTS, both 50% for 5-HT 2A/C) (Fig. 6).
Prevalence of low binding in specific serotonergic subnetworks
In both SIDS infants and controls, we examined the prevalence of low 5-HT 2A/C and 5-HT 1A binding within key subnetworks vital for blood pressure regulation (NTS-MAO-GC; i.e. the "olivocerebellar" subnetwork) as well as the subnetwork containing the 5-HT source neurons (ROb/RMg-GC-PGCL-IRZ subnetwork) (53). For each subnetwork, we calculated the percentage of SIDS infants and controls who displayed low receptor binding in any or all nuclei of the subnetwork.
Olivocerebellar subnetwork
Table 3 shows the percentage of SIDS and control infants displaying low 5-HT 2A/C and 5-HT 1A binding within each subnetwork. Fifty-one percent of SIDS infants had at least one nucleus in the olivocerebellar subnetwork (NTS-MAO-GC) displaying low 5-HT 1A binding, compared to 23% of control infants (p = 0.13). Three percent of SIDS infants displayed low 5-HT 1A binding in all component nuclei, similar to control infants (0%; p = 1.0). In contrast, SIDS infants were distinct from controls with respect to 5-HT 2A/C binding within the subnetwork: 81% of SIDS infants had at least one of the component nuclei within the olivocerebellar subnetwork with low 5-HT 2A/C binding, compared to 18% of control infants (p < 0.001). Most striking, nearly half (46%) of SIDS infants had low 5-HT 2A/C binding in all 3 component nuclei of this subnetwork, while there were no controls that displayed low 5-HT 2A/C binding in all nuclei (p = 0.008). When examined within the previously defined age bins (Early, Mid, and Late), 88% of the oldest SIDS infants (Late) had low 5-HT 2A/C binding in all component nuclei of the olivocerebellar circuit compared to 33%-35% of the younger SIDS infants (Early and Mid) (age effect: p = 0.04; Table 4). In comparison, 0%-6% of SIDS infants had low 5-HT 1A binding in all of the component nuclei of the network across the Early, Mid, and Late age bins. These analyses demonstrate that low 5-HT 2A/C receptor binding is the more common finding in the olivocerebellar subnetwork, especially in the older SIDS cohort.
5-HT source subnetwork
We previously reported that 5-HT 1A receptor binding was heavily concentrated in the rostral reticular formation, including the IRZ, PGCL, and GC (source nuclei), compared to the caudal reticular formation (28, 47, 52) (Fig. 2). In the source subnetwork containing the ROb/RMg, GC, PGCL, and IRZ of SIDS infants, the prevalence of low 5-HT 1A binding in any nucleus (53%) was not statistically different from its prevalence in controls (28%; p = 0.07; Table 3). Fourteen percent of SIDS infants had low 5-HT 1A binding in all component nuclei, which was also no different than the prevalence of low binding in controls (6%; p = 0.45; Table 3). As in the olivocerebellar network, low 5-HT 2A/C binding was much more prevalent within the source subnetwork of SIDS infants compared to controls: 77% of SIDS infants displayed low 5-HT 2A/C binding in any of the nuclei, compared to 27% of controls (p = 0.003). Similar to the olivocerebellar subnetwork, nearly half (47%) of SIDS infants had low 5-HT 2A/C binding in all component nuclei of the source subnetwork, compared to 9% of controls (p = 0.038; Table 3). Seventy-one percent of the oldest SIDS infants had low 5-HT 1A binding in any nuclei of this source subnetwork, compared to 23% and 54% of the younger SIDS infants in the Early and Mid bins, respectively (age effect: p = 0.006) (Table 4). There was no difference in prevalence of low 5-HT 2A/C receptor binding across age bins in the source subnetwork. These analyses demonstrate that low 5-HT 2A/C and 5-HT 1A binding occur frequently in the 5-HT source subnetwork; the prevalence of low 5-HT 1A binding emerges to a greater extent in the older SIDS cohort.
DISCUSSION
In Part I of this series, we reported that SIDS infants have altered 5-HT 2A/C binding in key medullary nuclei supporting chemoreception, arousal, and autoresuscitation, in addition to altered 5-HT 1A binding reported previously (47). In Part II, we address the hypothesis that low 5-HT 1A and 5-HT 2A/C binding manifest differentially in serotonergic source and target nuclei of SIDS infants across the 3 age bins examined. For each nucleus, we defined low binding as binding below the lower boundary of the 95% CI of the control infant binding. Utilizing previously published data in Part I and new statistical approaches, we provide evidence that: (1) compared to control infants, SIDS infants have a greater prevalence of low 5-HT 1A or 5-HT 2A/C binding; (2) the percentage of nuclei with low 5-HT 1A or 5-HT 2A/C binding was 2-3 times greater in SIDS infants compared to controls; and (3) 5-HT source nuclei exhibited a higher prevalence of low 5-HT 2A/C binding compared to 5-HT 1A. Related to postconceptional age, we showed: (1) low 5-HT 1A binding was observed more frequently in the HG and DMX of the youngest SIDS infants but, with the exception of the GC, was more prevalent in the source nuclei of the oldest SIDS infants; (2) in the source nuclei of the youngest SIDS infants, low 5-HT 2A/C binding occurred much more frequently than low 5-HT 1A binding; (3) in the oldest SIDS infants, low 5-HT 2A/C binding was more prevalent across all source and target nuclei, with low 5-HT 2A/C binding in the MAO notably observed in 100% of older SIDS infants; (4) at ages consistent with the peak risk of SIDS (Mid), the prevalence of low binding was widespread (~30%-70% of SIDS infants) through all measured nuclei for both 5-HT 1A and 5-HT 2A/C receptors; and (5) nearly half of all SIDS infants had low 5-HT 2A/C binding in every component nucleus of the source (ROb/RMg-GC-PGCL-IRZ) and olivocerebellar (NTS-MAO-GC) subnetworks. These findings support the hypothesis that the patterns of low 5-HT 1A and 5-HT 2A/C binding are distinct, appearing differentially in 5-HT source and target nuclei and differentially as a function of age at the time of death.
5-HT 1A and 5-HT 2A/C receptors: balancing excitation and inhibition in neural circuits involved in chemoreception, arousal, and autoresuscitation
5-HT 1A and 5-HT 2A/C receptors have been a focus of our research because they are expressed in key medullary nuclei and participate in the maintenance of cardiorespiratory homeostasis in sleep, including processes involved in chemoreception, arousal, and autoresuscitation. 5-HT 2A/C receptors are excitatory, activating downstream signaling pathways that are permissive for several excitatory post-synaptic currents, including glutamate and persistent sodium currents (14). In this way, 5-HT 2A/C receptor activation facilitates the bursting of pacemaker and other neurons in the pre-Bötzinger complex (preBotC) (45, 54, 55), presympathetic neurons (56, 57), and specific neurons that promote arousal (9). On the other hand, 5-HT 1A receptors are generally inhibitory and, unlike 5-HT 2A/C receptors, are expressed somato-dendritically as autoreceptors, reducing the activity of 5-HT neurons. 5-HT 1A receptors are also expressed by inhibitory GABAergic and glycinergic neurons at target nuclei, and their activation can dampen inhibitory neurotransmission at these sites (58, 59). These concepts should be considered with respect to the functional consequences of reduced 5-HT 1A binding in SIDS infants within nuclei critical for arousal, cardiorespiratory homeostasis, and autoresuscitation (described in the next section). For example, reduced 5-HT 1A activity may augment inhibitory neurotransmission at target sites, compromising the function of these critical processes. It is also possible that, as an autoreceptor, reduced 5-HT 1A activity in the source nuclei could increase 5-HT release at target sites.
Consequences of reduced 5-HT 1A and 5-HT 2A/C binding on chemoreception, arousal, and autoresuscitation
Pathological findings coupled with analyses of cardiorespiratory records obtained from SIDS infants immediately prior to death suggest that at least a subset of SIDS infants die from severe hypoxemia that is not reversed by gasping. Gasping occurs, but there is a failure of at least one of the cardiovascular or autonomic processes that support and are necessary for autoresuscitation and survival (24, 25). The data presented here in Part II are in keeping with previous findings from our group suggesting that serotonergic dysfunction (i.e. reduced drive from 5-HT source neurons and compromised 5-HT 1A and 5-HT 2A/C signaling in the 5-HT target nuclei) is highly associated with a substantial proportion of SIDS deaths. It is worth noting from previous studies that SIDS is not associated with a loss of receptor binding globally; for example, both alpha2 adrenergic (60) and µ-opioid receptor binding (61) are normal in SIDS infants.
5-HT neurons in the reticular formation of the rostral ventral medulla (i.e. ROb/RMg, IRZ, GC, and PGCL) innervate the nuclei in which we assessed 5-HT 1A and 5-HT 2A/C binding. Serotonergic source nuclei and the subnetworks (i.e. circuits) that they innervate bestow the infant with an array of physiological responses that improve brain oxygenation, including arousal, O 2 and CO 2 chemoreflex-induced increases in ventilation and apnea termination, gasping, sympathoexcitation, regulation of heart rate and contractility, and cerebral vasodilation. Reduced 5-HT 1A or 5-HT 2A/C activity in target nuclei of critical subnetworks may compromise these processes. Alternatively, reduced serotonergic drive to the targets, due to either reduced 5-HT and/or immature 5-HT neurons, may be the primary defect, with altered receptor expression representing a secondary response. With this caveat in mind, below we suggest potential physiological consequences of low 5-HT 1A and 5-HT 2A/C binding for an infant during sleep.
Low receptor binding in key motor nuclei
In the current study, the patterns of low 5-HT 1A and 5-HT 2A/C binding are distinct in different 5-HT source and target nuclei and depend on age. In the youngest SIDS infants, low 5-HT 2A/C binding appears frequently in 5-HT source nuclei, with low 5-HT 1A binding appearing most often in 2 major motor nuclei: the HG, which provides drive to the tongue and a number of upper airway muscles to reduce airway resistance during inspiration, and the DMX, which contains neurons that provide parasympathetic control of the viscera. As described in the previous section, 5-HT 1A receptor activation leads to neuronal inhibition, whether on 5-HT neurons themselves or on neurons within target nuclei, some of which may be inhibitory. Glycinergic and GABAergic neurons provide tonic inhibitory drive to HG neurons in sleep (62, 63). Inhibition from 5-HT 1A receptors may constrain GABAergic and glycinergic inhibitory activity within the HG of infants; reduced 5-HT 1A activity on these inhibitory neurons may augment GABAergic or glycinergic tone, compromising airway patency. This scenario is plausible when considering previous studies on the role of 5-HT 1A in respiratory circuits, including HG neurons (59, 64), but requires evaluation in animal models at ages relevant to SIDS.
The prevalence of low 5-HT receptor binding was more widespread across source and target nuclei at mid-infancy (when the risk of SIDS increases) and in the oldest cohort of SIDS infants, particularly for 5-HT 2A/C. One potential consequence of this is total network insufficiency during severe apnea and bradycardia, rendering the infant incapable of reoxygenating the brain.
Low receptor binding in the NTS and olivocerebellar subnetwork
A striking finding was that ~3/4 of SIDS infants had low 5-HT 2A/C binding in at least one component nucleus of the olivocerebellar subnetwork (NTS-MAO-GC), and nearly half of SIDS cases had low 5-HT 2A/C binding in all 3 component nuclei. Low 5-HT 2A/C binding was far less prevalent in the olivocerebellar subnetwork of control infants: ~1/4 had low binding in at least one component nucleus, and we could identify no infants who had low binding in all 3. This difference between SIDS and control infants was unique to 5-HT 2A/C binding. The NTS is critical for baroreflex-mediated control of arterial blood pressure, and 5-HT, acting through 5-HT 2A receptors in the NTS, facilitates the baroreflex (65), in part via subsequent neuromodulation at the rostral ventrolateral medulla (RVLM) (65). 5-HT may also act at the NTS to facilitate the sympathoexcitation required to reverse the fall in arterial blood pressure during severely hypoxic conditions (66). Within the olivocerebellar subnetwork (Fig. 7), the NTS acts in conjunction with the MAO and the cerebellar fastigial nucleus (FN) to restore blood pressure in a variety of physiologic and pathophysiological contexts (67, 68), in part through interactions with presympathetic neurons in the RVLM (69).
Although our group has not specifically investigated the FN of the cerebellum, 5-HT 2A/C receptors are expressed in this nucleus, and their activation is sufficient to alter the activity of FN neurons and modify behavior (70). Thus, reduced drive through 5-HT 2A/C receptors within the NTS and/or other components of the olivocerebellar subnetwork may negatively impact multiple, integrated processes that facilitate blood pressure recovery following a hypotensive event, including those events associated with severe hypoxia. The functional consequences of potential 5-HT 2A/C dysfunction within the olivocerebellar subnetwork are especially intriguing when considering the pathological findings at the death scene (71). In addition to the evidence from cardiorespiratory records suggesting cardiovascular collapse, additional signs of autonomic dysfunction have been documented, including a "shock-like" appearance: marked sweating with pallor, indicative of a sympathetic burst followed by blood pressure loss (72). Prone positioning is a clear risk factor for SIDS (73). It is possible that SIDS infants have an impaired vestibular response to such body positioning that would compromise the control of blood pressure in the prone versus supine position, especially if critical components of the vestibular-cerebellar network, for example the MAO, rely on drive through 5-HT 2A/C receptors. In normal infants, input from the vestibular system targets the MAO, which subsequently provides drive to Purkinje cells (via climbing fibers) that, in turn, modulate deep cerebellar FN activity to regulate blood pressure and heart rate or terminate apnea (Fig. 7); part of that regulation is to dampen extreme alterations in blood pressure, including recovery from marked hypotension (71).
Low receptor binding in 5-HT source nuclei
The relevance of low 5-HT 2A/C binding within the IRZ and GC is worth highlighting specifically, as these nuclei likely contain critical groups of neurons required in human infants for the ventilatory response to hypercapnia (32), full arousal, and successful autoresuscitation. Although it has yet to be convincingly identified in the human medulla, by extrapolating from the 3-dimensional region containing the preBotC in the rodent medulla, the human preBotC and the presympathetic neurons of the RVLM are likely contained within the IRZ and GC (Fig. 8). In rodents, the preBotC is recognized as a cluster of interneurons within the ventrolateral medulla (i.e. the equivalent of the IRZ in humans) that drives gasping via intrinsically bursting, hypoxia-resistant "pacemaker" neurons (74, 75). The preBotC is a target of 5-HT neurons and its output is stimulated by 5-HT projections (44, 50). During autoresuscitation, gasping is required to rapidly increase pulmonary ventilation and the diffusion of O 2 into the blood. As important as it may be, gasping alone is insufficient for surviving severe hypoxia. Fortuitously, for the normal infant, the preBotC operates in an integrative fashion with the presympathetic neurons in the RVLM to drive phasic increases in sympathetic nerve activity during severe hypoxia (76). The RVLM, in turn, activates neurons in the cerebral vasodilating region of the medulla to ensure sufficient delivery of newly oxygenated blood to the brain (77). 5-HT 2A/C activation in the rodent preBotC is permissive to the bursting of hypoxia-resistant pacemaker neurons that drive gasping in conscious animals (54, 55) as well as the coupled phasic increase in sympathetic activity. In controls, 5-HT 2A/C binding is concentrated in the rostral aspect of the ventral medulla, an area that contains the 5-HT source subnetwork (ROb/RMg-GC-PGCL-IRZ) and, as discussed above, likely contains the preBotC (Fig. 2). If low 5-HT 2A/C binding in this region reflects low receptor activity in vivo, then gasping and the sympathetic response to severe hypoxia may be compromised, preventing successful autoresuscitation and possibly culminating in SIDS.
The prevalence of low receptor binding across the medullary nuclei differs at the 3 age intervals examined. For example, in the youngest SIDS infants, low 5-HT 1A binding was most prevalent in 2 target nuclei (the HG and DMX), while low 5-HT 2A/C binding was most prevalent in the 5-HT source nuclei. In contrast, in the eldest SIDS infants low 5-HT 2A/C binding was highly prevalent across all target nuclei. Development strongly impacts aspects of cardiorespiratory control in infancy (78). In the youngest infants, death may result with relatively minor or circumscribed receptor defects. As neural networks controlling cardiorespiratory function and arousal develop and become more adult-like, a more extensive loss of serotonergic function may be required to compromise the chemoreflexes, arousal, and autoresuscitation that together protect an infant during sleep periods. It is also unknown whether older infants acquired more extensive receptor defects with time or whether they simply did not encounter a specific environmental stressor or combination of stressors that would have precipitated death at a younger age.
Figure 8 legend: The key site in SIDS infants is the mid-to-rostral medulla (rectangle), a critical ("segmental") feature of the 5-HT-related pathology uncovered by us in SIDS over at least 2 decades of research, including in Parts I and II. The nuclei with abnormal 5-HT receptor binding in the SIDS cases are denoted in green. The RVLM is included in this regional tissue "block" (rectangle) of the hindbrain that is affected in SIDS and includes the anatomic loci of the putative human homologue of the pre-Bötzinger complex, intermediate reticular zone (IRZ), paragigantocellularis lateralis (PGCL), and, more medially, gigantocellularis (GC). The rostral medulla also contains the major 5-HT-synthesizing (source) neurons in the caudal (medullary, as opposed to the rostral mesopontine dorsal and median raphe) domain of the brainstem serotoninergic system, that is, the caudal raphe, IRZ, PGCL, and GC. The caudal raphe includes the raphe obscurus, raphe magnus, and raphe pallidus. Of note, the source neurons and other affected (non-source) nuclei in SIDS cases (i.e. hypoglossal nucleus [HG], dorsal motor nucleus of the vagus [DMX], nucleus of the solitary tract [NTS], and inferior olivary complex [IOC]) receive 5-HT (target) projections, albeit not exclusively in the entire brainstem or forebrain (not shown). The non-5-HT-source (target) nuclei of the HG, NTS, DMX, and IOC all demonstrate abnormal 5-HT 1A and/or 5-HT 2A/C receptor binding in the SIDS cases, anatomic portions of which are included in the affected hindbrain segment (rectangle) of the rostral medulla. The medial accessory olive (MAO), which is a critical component of the affected olivocerebellar circuit involved in blood pressure recovery (see text), is included in the IOC. The major sites involved in chemosensory processing are highlighted, with a blue asterisk for central and peripheral oxygen (O 2 ) (hypoxia) sensors and a black asterisk for carbon dioxide (CO 2 ) (hypercapnia) sensors. Peripheral chemoreceptors are located in the carotid body, and peripheral baroreceptors in the carotid sinus and aortic arch. Important sites of chemosensory circuitry in the abnormal hindbrain segment in the SIDS cases (rectangle) support their role in the putative defective defense responses in SIDS. The retrotrapezoid nucleus, known to be essential for brainstem CO 2 chemosensitivity (98), is not shown in this diagram because its anatomic locus in humans is not resolved, and thus receptor autoradiography and other studies have yet to be performed in SIDS versus controls. This schematic representation makes the key points of the results of Parts I and II in SIDS cases that the brainstem serotoninergic-related pathology reported to date by us: (1) appears to be concentrated in the mid-to-rostral medulla; (2) is segmentally constricted in the medullary hindbrain; and (3) involves essential anatomic loci that mediate serotoninergic defense responses in arousal, chemosensitivity, autoresuscitation, and cardiopulmonary reflexes, many of which are operative during a sleep period. Other abbreviations: PoO, pontis oralis; LDT, lateral dorsal tegmentum; PPN, pedunculopontine nucleus; PoC, pontis caudalis; LC, locus coeruleus; FN, fastigial nuclei. Created with Biorender.com.
Limitations
Our statistical approach was devised to define the prevalence and extent of statistically defined low 5-HT receptor binding in the medulla of SIDS infants. However, the limitations of this approach must be noted. Our definition of "low" was based on a small dataset of controls that may not themselves be normal, having succumbed to known and varied causes of death as described in more detail in Part I (47). The CI of the control data is large due to the small number of control infants, and some controls were themselves identified as falling below this CI because of a statistical definition of normal based on 95% confidence intervals; our ability to define "low" receptor binding, and to differentiate true differences between low and "normal" receptor binding, is therefore modest. Furthermore, the relatively small sample size for the younger and older age groups (compared to the mid-infancy age group) results in low power for testing across age and multiple testing within a small sample. Additionally, the CI used for controls is not an ideal definition of low receptor binding, since it is the CI for the expected value of the binding, not for the binding itself. The lower bound of the prediction interval for the regression of receptor binding on postconceptional age in controls would be the more appropriate metric; however, given the small sample size, the prediction interval was extremely wide, and this method identified few infants as low. Finally, because laboratory measurements were done in batches over time, regression modeling was used to control for dataset; however, batch effects may not have been constant across all infants.
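The distinction drawn above between a confidence interval for the expected binding value and a prediction interval for an individual infant's binding can be sketched numerically. The following is an illustrative example with synthetic data, not the authors' pipeline; the single-covariate model (binding regressed on postconceptional age alone) and all variable names are assumptions.

```python
import numpy as np
from scipy import stats

def interval_lower_bounds(age, binding, new_age, alpha=0.05):
    """Lower bounds at new_age from an OLS fit of binding ~ age in controls.
    Returns (fitted value, CI lower bound for the mean binding, PI lower
    bound for a single new observation). The PI bound is always lower."""
    X = np.column_stack([np.ones_like(age), age])
    beta, *_ = np.linalg.lstsq(X, binding, rcond=None)
    n, p = X.shape
    resid = binding - X @ beta
    s2 = resid @ resid / (n - p)               # residual variance
    x0 = np.array([1.0, new_age])
    h = x0 @ np.linalg.inv(X.T @ X) @ x0       # leverage of the new point
    t = stats.t.ppf(1 - alpha / 2, n - p)
    fit = x0 @ beta
    ci_lower = fit - t * np.sqrt(s2 * h)           # CI for the expected value
    pi_lower = fit - t * np.sqrt(s2 * (1 + h))     # PI for one new infant
    return fit, ci_lower, pi_lower

# Synthetic "control" data: binding declining with postconceptional age (weeks).
rng = np.random.default_rng(1)
ctrl_age = rng.uniform(38, 90, 12)                 # small control sample
ctrl_binding = 120 - 0.6 * ctrl_age + rng.normal(0, 8, 12)
fit, ci_lower, pi_lower = interval_lower_bounds(ctrl_age, ctrl_binding, 55.0)
```

With a small control sample, the prediction-interval bound sits well below the confidence-interval bound, which is why a CI-based cutoff flags more infants as "low" than a PI-based cutoff would.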
The data utilized in this study were examined previously for abnormalities of 5-HT receptors in the key medullary regions involved in chemoreception, arousal, and cardiorespiratory function. However, other neurotransmitter systems and nuclei may be abnormal in SIDS compared to controls; these nuclei and neurotransmitters may play a role, direct or indirect, in serotonergic function (79-93). Given the significant risk of nicotine exposure for SIDS (94), and the known neuroanatomical deficiencies in a high-risk population with such exposure (95, 96), interactions between 5-HT and nicotinic acetylcholine receptors are a special concern. Finally, while our work has focused on the medulla, regions rostral to the medulla (i.e., the pons and limbic forebrain: hypothalamus, hippocampus, and amygdala) and the cerebellum are critical for other processes (e.g., sleep and protective cardiac and respiratory reflexes) not necessarily involved in autoresuscitation but possibly affected in SIDS (83, 97-100). Despite these limitations, the data provide novel insight into a subset of SIDS infants identified by our previous studies as having a serotonopathy.
Conclusions and future directions
In Parts I and II of this 2-part series, our aim was to utilize tissue autoradiography from the medulla of SIDS infants to further test the hypothesis that 5-HT1A and 5-HT2A/C receptor binding is altered in SIDS cases as compared to controls, which could underlie a failure to arouse and/or autoresuscitate, leading to death. Information regarding such biological abnormalities is essential for the potential development of pharmacological or neuromodulatory interventions that aim to prevent or minimize SIDS in at-risk populations. If we equate our statistically defined low receptor binding with a receptor binding deficit or abnormality, we propose that the key serotonergic receptor deficits that we have identified are important surrogate markers. We further propose that these surrogate markers identify subsets of SIDS infants that are burdened with serotonergic defects within critical subnetworks supporting cardiorespiratory function; in essence, these defects represent a "serotonopathy." Indeed, serotonopathies with variable origins (i.e., receptor deficits, altered 5-HT release, and immature 5-HT neurons, among others) may reflect recently proposed "endophenotypes" of pediatric and adult diseases of genetic origin (101). Infants harboring a serotonergic abnormality may survive the critical period of SIDS; they may have avoided a key exogenous stressor that might otherwise expose their latent vulnerability. They may nevertheless remain susceptible to a diverse array of diseases manifesting with unique, age- or developmentally dependent trajectories, to the extent those disorders rely on serotonergic activity. We stress that our proposed "serotonopathy" hypothesis of SIDS, while not necessarily of genetic origin, is generally in keeping with this endophenotype concept.
Our overall goal as a group has been to provide novel conceptual insights into the possible pathophysiological mechanisms that lead to SIDS, using neurochemical findings from SIDS infants and controls (e.g., the alterations in 5-HT receptor binding activity described in Part I and herein), coupled with mechanistic experimental data from animal models in which these subnetworks have been perturbed. Indeed, leaning heavily on what we have learned as a group from modeling serotonergic dysfunction in animals (30-35, 37-42, 50, 102-107), we believe that dysfunction within the medullary 5-HT system represents an underlying inherent vulnerability that may intersect with an exogenous stressor at a critical period of infant development, with the outcome manifesting as SIDS (i.e., the Triple Risk Model of SIDS (108)). Until novel diagnostic methods are developed to reliably identify serotonergic and other biomarkers of SIDS that help guide biological treatments, behavioral modifications that ensure safe sleep environments (i.e., supine sleep position and no bed-sharing) should continue to be an integral component of the quest for the total eradication of SIDS.
Reduced 5-HT Receptor Binding Across Key Medullary Nuclei in SIDS
Figure 1 .
Figure 1. The diagram illustrates the different laboratory databases utilized in this study and the numbers of SIDS and control cases within each database. The individual databases comprise infants with 5-HT1A or 5-HT2A/C binding data from receptor ligand autoradiography. The combined database comprises infants that have both 5-HT1A and 5-HT2A/C binding data. Created with Biorender.com.
Figure 2 .
Figure 2. Representative autoradiograms of 125I-DOI binding to 5-HT2A/C receptors and 3H-DPAT binding to 5-HT1A receptors in a 53 postconceptional week SIDS infant. Mid and rostral levels of the medulla are shown, and measured nuclei are labeled. A representation of a radioactivity standard is shown, with femtomoles per milligram (fmol/mg) given from high binding (red) to low binding (dark blue). Binding to 5-HT2A/C receptors is heavily concentrated in the reticular formation (i.e., IRZ, PGCL, and GC) in the rostral medulla, compared to low binding in the reticular formation of the mid-medulla. This finding is relevant to the chemoarchitecture of gasping because the 5-HT2A receptor is essential for gasping. HG, hypoglossal nucleus; NTS, nucleus of the solitary tract; DMX, dorsal motor nucleus of vagus; PIO, principal inferior olive; MAO, medial accessory olive; DAO, dorsal accessory olive; RO, raphe obscurus; GC, nucleus gigantocellularis; IRZ, intermediate reticular zone; PGCL, nucleus paragigantocellularis lateralis. The figure is reproduced from Haynes et al (47) with modifications.
Figure 3 .
Figure 3. Examples of binding data plotted versus postconceptional age for 125I-DOI binding to 5-HT2A/C receptors and 3H-DPAT binding to 5-HT1A receptors in the raphe obscurus (ROb/RMg). Controls are shown as gray circles, SIDS infants who statistically have normal binding are shown as blue circles, and SIDS infants who have low binding are shown as red circles. Because the definition of low binding adjusted for laboratory dataset (see Statistical analysis of low receptor binding section), the plots show binding adjusted by dataset for ease of visualization.
Figure 5 .
Figure 5. Graphs of percentages of nuclei defined as low binding in SIDS (left) compared to controls (right) for 5-HT1A and 5-HT2A/C. The infants are separated into bins based on postconceptional (PC) age; postnatal (PN) age ranges are given for reference. The numbers in each group are indicated above the bars. ROb/RMg, raphe obscurus/raphe magnus; GC, nucleus gigantocellularis; PGCL, nucleus paragigantocellularis lateralis; IRZ, intermediate reticular zone; HG, hypoglossal nucleus; DMX, dorsal motor nucleus of vagus; NTS, nucleus of the solitary tract; MAO, medial accessory olive; DAO, dorsal accessory olive; PIO, principal inferior olive. Created with Biorender.com.
Figure 7 .
Figure 7. Schematic diagram of vestibular nucleus (VN), nucleus of solitary tract (NTS), medial accessory olive (MAO), climbing fiber (CF), Purkinje cell (PC), fastigial nuclei (FN), and gigantocellularis (GC) circuitry that underlies dampening of, and recovery from, blood pressure changes signaled by the NTS and VN. Changes in body position (e.g., prone versus supine), or factors inducing shock, are mediated by the VN and NTS. Signals are transmitted to MAO neurons and then, via climbing fibers, to Purkinje cells that project to the FN. The FN induces changes in autonomic motor output, body movement, arousal, and upper airway tone via projections to the GC and HG, among others. The circuitry is also sensitive to chemoreceptor activation, with the FN terminating prolonged apneic periods. Created with Biorender.com.
Figure 8 .
Figure 8. Schematic representation of regional involvement of sites (nuclei) with abnormal 5-HT1A and/or 5-HT2A/C binding in SIDS infants compared to autopsy controls. Notably, all of the involved sites participate in protective responses to asphyxia and are modulated by 5-HT in the putative subset of SIDS representing a serotonopathy (see text). The key site in SIDS infants is the mid-to-rostral medulla (rectangle), a critical ("segmental") feature of the 5-HT-related pathology uncovered by us in SIDS over at least 2 decades of research, including in Parts I and II. The nuclei with abnormal 5-HT receptor binding in the SIDS cases are denoted in green. The RVLM is included in this regional tissue "block" (rectangle) of the hindbrain that is affected in SIDS, and includes the anatomic loci of the putative human homologue of the pre-Bötzinger complex, the intermediate reticular zone (IRZ), the paragigantocellularis lateralis (PGCL), and, more medially, the gigantocellularis (GC). The rostral medulla also contains the major 5-HT-synthesizing (source) neurons in the caudal (medullary, as opposed to the rostral mesopontine dorsal and median raphe) domains of the brainstem serotonergic system, that is, the caudal raphe, IRZ, PGCL, and GC. The caudal raphe includes the raphe obscurus, raphe magnus, and raphe pallidus. Of note, the source neurons and other affected (non-source) nuclei in SIDS cases (i.e., the hypoglossal nucleus [HG], dorsal motor nucleus of the vagus [DMX], nucleus of the solitary tract [NTS], and inferior olivary complex [IOC]) receive 5-HT (target) projections, albeit not exclusively in the entire brainstem or forebrain (not shown). The non-5-HT-source (target) nuclei of the HG, NTS, DMX, and IOC all demonstrate abnormal 5-HT1A and/or 5-HT2A/C receptor binding in the SIDS cases, anatomic portions of which are included in the affected hindbrain segment (rectangle) of the rostral medulla. The medial accessory olive (MAO), a critical component of the affected olivocerebellar circuit involved in blood pressure recovery (see text), is included in the IOC. The major sites involved in chemosensory processing are highlighted, with a blue asterisk for central and peripheral oxygen (O2) (hypoxia) sensors and a black asterisk for carbon dioxide (CO2) (hypercapnia) sensors. Peripheral chemoreceptors are located in the carotid body, and peripheral baroreceptors in the carotid sinus and aortic arch. Important sites of chemosensory circuitry in the abnormal hindbrain segment in the SIDS cases (rectangle) support their role in the putative defective defense responses in SIDS. The retrotrapezoid nucleus, known to be essential for brainstem CO2 chemosensitivity (98), is not shown in this diagram because its anatomic locus in humans is not resolved, and thus receptor autoradiography and other studies have yet to be performed in SIDS versus controls. This schematic representation makes the key points of the results of Parts I and II: in SIDS cases, the brainstem serotonergic-related pathology reported to date by us (1) appears to be concentrated in the mid-to-rostral medulla; (2) is segmentally constricted in the medullary hindbrain; and (3) involves essential anatomic loci that mediate serotonergic defense responses in arousal, chemosensitivity, autoresuscitation, and cardiopulmonary reflexes, many of which are operative during a sleep period. Other abbreviations: PoO, pontis oralis; LDT, lateral dorsal tegmentum; PPN, pedunculopontine nucleus; PoC, pontis caudalis; LC, locus coeruleus; FN, fastigial nuclei. Created with Biorender.com.
Table 1 .
Prevalence of low binding in SIDS versus controls: percent (%) of cases with one or more nuclei defined as low binding. *Fisher exact test.
Table 2 .
Percent (%) of nuclei per infant with low binding in SIDS versus controls. *t-test.
Table 3 .
Prevalence of low binding by subnetwork in SIDS versus controls: percent (%) of cases with any or all nuclei defined as low (columns: N, % of SIDS; N, % of controls; p value). *Subject was not included if all nuclei had missing data. ‡Subject was not included if any nuclei had missing data. †Fisher exact test.
Table 4 .
Prevalence of low binding by subnetwork and by age: percent (%) of SIDS cases with any or all nuclei defined as low. NA, no test due to inadequate sample size; PC, postconceptional; wks, weeks.
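The group comparisons summarized in Tables 1-4 rest on two standard tests: Fisher's exact test for prevalence (proportions of cases with low binding) and a t-test for the percent of nuclei low per infant. A minimal sketch follows; all counts and percentages below are made-up placeholders, not the paper's data.

```python
from scipy import stats

# Table 1 analogue: cases with >=1 nucleus defined as low (hypothetical counts).
sids_low, sids_n = 30, 41
ctrl_low, ctrl_n = 3, 11
table = [[sids_low, sids_n - sids_low],
         [ctrl_low, ctrl_n - ctrl_low]]
odds_ratio, p_fisher = stats.fisher_exact(table)

# Table 2 analogue: percent of nuclei low per infant (hypothetical values),
# compared between groups with an independent-samples t-test.
pct_sids = [40.0, 55.0, 30.0, 60.0, 45.0, 50.0]
pct_ctrl = [10.0, 20.0, 5.0, 15.0, 25.0, 10.0]
t_stat, p_t = stats.ttest_ind(pct_sids, pct_ctrl)
```

Fisher's exact test is the appropriate choice here because the control counts per cell are small, where a chi-square approximation would be unreliable.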
Public engagement in decision‐making regarding the management of the COVID‐19 epidemic: Views and expectations of the ‘publics’
Abstract Background In the management of epidemics, like COVID-19, trade-offs have to be made between reducing mortality and morbidity and minimizing socioeconomic and political consequences. Traditionally, epidemic management (EM) has been guided and executed attentively by experts and policymakers. It can, however, still be controversial in the public sphere. In the last decades, public engagement (PE) has been successfully applied in various aspects of healthcare. This leads to the question of whether PE could be implemented in EM decision-making. Methods From June to October 2020, seven deliberative discussion focus groups were executed with 35 Dutch citizens between 19 and 84 years old. Their views on PE in COVID-19 management were explored. The deliberative approach allows for the education of participants on the topic before the discussion. The benefits, barriers, timing and possible forms of PE in EM were discussed. Results Almost all participants supported PE in EM, as they thought that integrating their experiences and ideas would benefit the quality of EM, and increase awareness and acceptance of measures. A fitting mode for PE was consultation, as it was deemed important to provide the public with possibilities to share ideas and feedback; however, final authority remained with experts. The publics could particularly provide input about communication campaigns and control measures. PE could be executed after the first acute phase of the epidemic and during evaluations. Conclusions This paper describes the construction of an empirically informed framework about the values and conditions for PE in EM from the perspective of the public. Participants expressed support to engage certain population groups and considered it valuable for the quality and effectiveness of EM; however, they expressed doubts about the feasibility of PE and the capabilities of citizens. In future studies, these results should be confirmed by a broader audience.
Patient or Public Contribution No patients or members of the public were involved in the construction and execution of this study. This study was very exploratory, to gain a first insight into the views of the public in the Netherlands, and will be used to develop engagement practices accordingly. At this stage, the involvement of the public was not yet appropriate.
COVID-19 spread across countries worldwide, leading to a pandemic. It has heavily impacted the health and safety of citizens, as well as other aspects of society, such as the economy, social structures and politics. 1 When an epidemic such as COVID-19 occurs, its management is pivotal in containing the virus. According to the World Health Organization (WHO), the goal of epidemic management (EM) is 'to mitigate its impact and reduce its incidence, morbidity and mortality as well as disruptions to economic, political, and social systems'. 2 EM is used in this study as an overarching term that entails the step-by-step process of decision-making regarding all necessary actions before, during and after an infectious disease outbreak, to minimize the impact of the outbreak on all aspects of society. 3,4 In the urgency of EM decision-making, various societal principles, such as solidarity, justice and liberty, have to be weighed, within a climate of fear and distress. Other characteristics of epidemics, such as social disruption and scientific uncertainty, complicate these trade-offs even more. 5 Traditionally, EM has been mostly guided by public health organizations, governmental bodies and scientific experts. 6,7 Their blend of expertise and experience is used to weigh the reduction of mortality and morbidity against the minimization of associated socioeconomic and political consequences, within troubling circumstances. 8,9 This complex interplay of principles, troubling circumstances and strong decision-related impacts within EM raises questions about how decisions are being made.
As we currently rely heavily on experts, valuable input from other sources, for instance that of the public, might be overlooked. 10 Recently, public health officials, such as the WHO and ECDC, have been emphasizing the importance of public engagement (PE) in the management of various epidemics. 2,11 PE is the spectrum of processes and activities that brings the public into a decision-making process. In the literature, three main rationales for PE exist. 12-14 First, the normative rationale describes engagement itself as a valuable process that increases the democratic validity of decision-making. Second, the instrumental rationale describes PE as a means to obtain the most beneficial outcome. Deliberation with citizens provides policymakers with information about the failure or success of certain policies. Simultaneously, citizens acquire information about the intent and context of policies, which can foster trust and understanding. Overall, both the public and policymakers can gain insight into EM from PE, which could potentially result in a more fitting course of action, mitigation of opposition to a chosen policy, and an increase in support. 8,15 This could be especially important when the public has already been showing much discontent with implemented EM policies. During COVID-19, this happened in the Netherlands on several occasions, as many demonstrations, protests and petitions were set up by the public. 16,17 Riots even arose as a backlash to the implemented nightly curfew. 18 Third, the substantive rationale entails using the values of the public as a foundation for policies. These values transcend interests attached to certain positions or systems. Experiential knowledge is respected in decision-making and could complement expert knowledge. 9,19 Moreover, the public could perceive problems and solutions that experts may not notice. 20

The desired mode of PE is context-specific and can vary between informing, consulting, collaborating with and empowering the public. 21 Despite the seemingly promising potential of PE, until now only a few efforts have been made to integrate the perspective of the publics in EM (this 'public' cannot be classified as monolithic, but actually comprises people with a diverse range of demographic, epidemiologic, social and economic characteristics; to respect this complexity and diversity, the term 'public' is replaced by 'publics', which here refers to all persons living in the Netherlands, with no limitation to a particular group based on demographic, epidemiologic, social or economic conditions). 22,23 This could be an indicator of how challenging integrating PE in EM is, due to the complex nature of EM. For instance, in the United States, Mexico and Nicaragua, communities were consulted to shape culturally appropriate control strategies and communication efforts concerning Zika virus and dengue virus, which resulted in higher-quality EM on a local level. 24,25 Specifically in the Netherlands, valuable citizen assemblies and consultations have been executed to reveal public preferences on, for instance, vaccination strategies and relaxation of measures. 26-28 However, many of these examples are one-time engagement efforts without clear follow-up.
Besides, most of these practices are predefined, and the publics to be engaged are not asked beforehand about their preferences.
In this study, we try to take a step backwards and gain insight into the views of the publics concerning their engagement in the management of COVID-19, to identify accompanying possibilities and challenges.
As the opening quote stated, the COVID-19 epidemic can be seen as an opportunity to learn. This paper explores the possibilities for the role of the publics in COVID-19 EM in the Netherlands, which leads us to the following research question: What views and expectations on public engagement are present in the management of the COVID-19 epidemic from the perspective of the publics in the Netherlands?
METHODS
Between June and October 2020, seven Online Deliberative Discussion Focus Groups (DDFGs) were held with members of the general public in the Netherlands. The deliberative approach leads to more knowledgeable and thoughtful participants, especially on subjects that may be somewhat unfamiliar. 29 We expected that EM might be unfamiliar to participants. All sessions were moderated by two researchers (S. K. with F. K. or L. S. K. K.) and lasted 2 h. The online sessions were facilitated via the meeting software GoToMeeting and were executed in Dutch. The DDFGs were not intended to yield a representative sample of the Dutch population but to provide an in-depth exploration of the diversity of views that exist among the publics. The first three DDFGs were held in June 2020, when the first epidemic wave in the Netherlands had ended and the situation had stabilized. In response, the government decided to relieve restriction measures. The second set of DDFGs (numbers 4-7) was held in October 2020, when the outbreak situation was deteriorating again and new restriction measures were announced.
Participants were recruited via two panels. The first three DDFGs were executed with panel members of the Dutch Health Care Consumer Panel, which is managed by Nivel, the Netherlands Institute for Health Services Research. To maintain social homogeneity in the sessions, age stratification was applied. This decision was made because age has an influence on risk perceptions and protective behaviour during the COVID-19 epidemic. 30 Per age category (see Table 1), a random sample was taken from the panel members (around 1500 panel members), who subsequently received an e-mail invitation to participate. From the panel members who wanted to participate, a selection was made based on gender, age (within the designated age category), education level and place of residence, to strive for maximum diversity within all three DDFGs.
The remaining four DDFGs were executed in collaboration with CG Research, a general market research firm. After the first three DDFGs, the research team (S. K., F. K., M. B., A. T.) decided that the views captured did not entirely correspond with the whole range of views present among the publics. This decision was based on a rough analysis of the public discourse in that period by means of news articles, and on a national study concerning the attitudes and behaviour of the public. Overall, much criticism was expressed regarding the management of COVID-19, and the publics felt that they were not being heard. 31-33 These views did not entirely correspond with what the participants of DDFGs 1-3 expressed, as they appeared to be more satisfied with how COVID-19 was managed at that time. To broaden the diversity of views within the sample population, a second panel was used. For the sampling procedure, stratification by age was again applied, and a random sample was taken from each stratum. The age categories per DDFG are displayed in Table 1.
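The recruitment logic described above, random sampling within age strata followed by selection for diversity, can be sketched as follows. This is an illustrative sketch only; the stratum boundaries, panel fields and sample sizes are assumptions, not the study's actual procedure.

```python
import random

def stratified_sample(panel, strata, per_stratum, seed=7):
    """Draw a simple random sample of panel members within each age stratum.
    panel: list of dicts with an 'age' key; strata: (low, high) age bounds."""
    rng = random.Random(seed)
    samples = {}
    for low, high in strata:
        members = [m for m in panel if low <= m["age"] <= high]
        k = min(per_stratum, len(members))
        samples[(low, high)] = rng.sample(members, k)
    return samples

# Hypothetical panel and age strata (illustrative values only).
panel = [{"id": i, "age": 18 + (i * 7) % 70} for i in range(300)]
strata = [(18, 34), (35, 54), (55, 85)]
selected = stratified_sample(panel, strata, per_stratum=6)
```

Sampling within strata rather than from the whole panel guarantees that each age group is represented even when the panel's age distribution is skewed, which matches the rationale given above for age stratification.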
Data analysis
The recordings of the sessions were transcribed verbatim, and a thematic analysis was executed using MAXQDA 2020 software. The thematic analysis approach, which used both inductive and deductive coding, was chosen to identify, organize and reveal patterns of meaning derived from the content of the data itself. 35

The modes of engagement presented to participants were as follows:
- Inform: the public is provided with timely and consistent information. a
- Consult: the public is asked for feedback on questions or problems; this feedback is nonbinding.
- Advise: the public is asked for advice for the whole process, and their advice is integrated into the final decisions.
- Collaborate: the public is seen as a partner.
- Empower: the public has the ultimate decision-making power; they receive support. a

a The first mode of engagement presented was inform, which is in itself not necessarily a mode to actively integrate the perspective of the other. Nevertheless, it was important to mention, as in some situations there may be no need for active engagement.

RESULTS
Participant characteristics
In total, 35 citizens participated in seven online DDFGs. Every session had 4-6 participants, which was suitable for the online nature of the sessions. The characteristics of the participants are displayed in Table 3.
Why and why not: Values regarding PE in EM
Most participants stated that their current role in EM mostly entails receiving information about COVID-19 management (see Table 4). Furthermore, a few participants stated that they felt overwhelmed by the information overload online at the start of the epidemic. Other participants also held the media accountable for unclear information.
According to the participants, the inadequate communication resulted in public unrest, lack of confidence in the government and decreased compliance with restriction measures.
According to almost all participants, the most fitting mode of PE in EM would be consult. It was important for them that the publics receive more influence in EM. The participants explained that with consult, the publics feel that they are being listened to and taken seriously, and they can provide ideas to improve the quality of EM.
Participants stressed that experts should still be in the lead.
Participants found it valuable to let the publics function as a sounding board, to provide decision-makers with insight into their experiences.
Feedback from the publics could be asked to prevent unclarities in communication efforts. A few participants, however, found that consultation was already taking place, due to the existence of representative democracy in the Netherlands:

'We live in a free country with many different opinions, so there will always be people complaining that their opinion was not heard. As such, there will always be dissatisfied people'.

Collaborate and empower were deemed not suitable by most participants because of the publics' lack of knowledge and experience with EM. This undermined the trust they had in the capabilities of the publics to contribute. All participants agreed that providing the publics with the decision-making power in EM would not be desirable. People expressed suspicions that those engaged might put their own interests ahead of the interests of the general public.
Overall, many participants found it meaningful to receive more details about why certain decisions were made. In addition, when integrating PE in EM in practice, it would be important to hear why the views of the publics would be integrated or why not. In line with this, some participants felt a lack of transparency in EM from the government. They expected that increased transparency leads to increased trust of the publics in the government. This trust was deemed crucial during an epidemic by a few participants.
When: Period for PE
According to most participants, the priority at the start of an epidemic is to rapidly control it. At that stage, swift action is necessary and a lack of knowledge is likely. Due to these beliefs, PE was deemed not beneficial at the start of the epidemic:

'I think that if you want to engage everyone, the ones who scream the loudest will get their way, which is what is happening now. If a minority thinks we should do A, and the silent majority thinks we should do B, A will be implemented because of the fuss. I think the only possibility is inform'. (Female, DDFG 1)

'Conspiracy theorists will say: this is not necessary, and that should not happen, and this is wrong…. They are going to interfere with aspects they think they know about, but in reality do not have any knowledge on'. (Female, DDFG 5)

'I am still thinking about two groups within the public. You have the people who are analyzing everything, who are considerate, who are sensible and who can make correct conclusions. And you have the sheep, who do not understand everything well but who are constantly stomping their feet. And of these groups, who do we have most in society? … I can conclude that these people are a big part of society. If you can calm them… but then again, this is a dangerous statement as I am judging myself, which is also not correct'.

Overall, this is the first exploratory study to reflect upon this type of insight into PE in COVID-19 management from the perspective of the publics. Moreover, the data were collected during the COVID-19 epidemic, which yielded relevant outcomes of current interest.
Principal findings
Overall, participants expressed positive attitudes towards PE in EM. Participants thought, for example, that the publics could provide input on control strategies, to make strategies more feasible and acceptable. 8,22,40,41 Altogether, there are clearly specific aspects of EM that remain challenging, which explain the views of the publics with regard to the extent to which PE can be incorporated into EM. These should be taken into account when doing this in practice.
| LIMITATIONS
Multiple characteristics of the DDFGs could have influenced the attitudes of the participants, such as socially desirable behaviour in a group setting, the influence of facilitators and the information that was provided. To mitigate this, multiple strategies were implemented, such as creating an open context, limiting the number of participants and establishing rapport, which was occasionally difficult because of the online nature of the DDFGs. 29 With regard to our study population, no persons living in the north of the Netherlands were included, persons with an education level between 0 and 2 were underrepresented and no stratification for ethnicity was applied. This is unfortunate, as these groups could have experienced the epidemic differently; for instance, there were fewer COVID-19 cases in the North at the time of the DDFGs, and people with a migration background in the Netherlands suffered more health and societal consequences from COVID-19. 60,61 We are aware that our sampling strategy could have yielded a sample population holding two opposing views on PE in EM, with subgroups that might not be comparable to each other.
In addition, participants of DDFGs 4-7 might have biased the overall results towards a less critical view of PE in EM. Furthermore, results might have been different had we not used panels, as panel populations differ in certain characteristics from the general population; for instance, they might display a more positive attitude towards engagement if they regularly attend focus groups to share opinions. Regarding the study context, the future course of the epidemic was uncertain, and these feelings of uncertainty and fear could have affected the attitudes of the participants, particularly their sense of urgency about PE in EM. Participants might have graded PE in EM as less important in hindsight, once the epidemic was over. On the other hand, recall bias was minimized. 62 Overall, it is important to keep this context in mind, as well as the fact that the study was conducted during two different time periods in the outbreak.
| Future research
To our knowledge, this is the first study that directly explores the views of publics on possibilities for PE in EM. The next step would be to identify the views, expectations and needs of the various groups within the publics. 'The public' is not a homogeneous entity but a complex and dynamic collection of multiple groups with various characteristics, which could shape the approach to PE in EM and its diversity. In line with this, more attention should be given to conceptual clarification of the various groups within the publics who can contribute to EM decision-making, such as the representatives our participants suggested, with awareness of inclusivity and diversity within these groups.
| CONCLUSIONS
This paper explored the perspective of the 'publics' on PE in decision-making regarding the management of the COVID-19 epidemic in the Netherlands. This exploration was done in the midst of the COVID-19 epidemic itself, which was a unique opportunity. The participants agreed that targeted PE could positively influence the quality and effectiveness of COVID-19 EM. Furthermore, the participants called for more accountability from the decision-makers, and more transparency in the EM decision-making process.
As our participants are clearly aware of the complexity of EM, they are not asking to replace current decision-makers in EM. What they do wish is for their voices to be heard and their experiences, ideas and feedback to be taken seriously in developing and improving COVID-19 management.
|
2022-09-24T06:18:27.328Z
|
2022-09-23T00:00:00.000
|
{
"year": 2022,
"sha1": "cce81fe26cb0be4a14e91d576c05ed378182b21f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1111/hex.13583",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "46477d9de1863671a4b8075ef780d4b0b7fd845d",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
225923180
|
pes2o/s2orc
|
v3-fos-license
|
IT-Based Financial Management and Governance Training Role Toward Village Government Employee Understanding on Financial Management
This research aims to test information technology-based financial management and governance training for village governments to optimize village financial management in the local government of Indragiri. Survey questionnaires were given to 100 village government employees. A paired sample t-test is used to test the capability of the employees in managing finances before and after the training. The result shows an increase in financial management and governance capacity among all the village government employees. The theoretical maximum score for respondents is 50 and the minimum is 10, so the midpoint score is 30. Before the training, the employees' understanding score was below this midpoint, i.e., 24.15. After the training, the employees' understanding increased above the midpoint, i.e., 38.77. E-JA e-Jurnal Akuntansi e-ISSN 2302-8556 Vol. 30 No. 4 Denpasar, April 2020 Hal. 851-860 Artikel Masuk: 20 Januari 2020 Tanggal Diterima: 25 April 2020
INTRODUCTION
The amendment of the Village Act in 2014 changed village governance in Indonesia. The act grants villages the status of a fully autonomous region. This indicates that all Indonesian villages have a right to regulate and manage their own government affairs. As a new autonomous region, villages that have met the requirements have a right to receive a certain budget from the central government of around one billion rupiahs (equivalent to about 71,439 US dollars) per year. The Ministry of Finance of the Republic of Indonesia reported that allocated village funds increased every year: IDR 20.67 trillion (2015), IDR 46.98 trillion (2016), IDR 60 trillion (2017), IDR 60 trillion (2018), and IDR 70 trillion (2019). That amount is around 3 to 3.3% of Indonesia's total state revenue budget.
However, some scholars and politicians argue that the human resources of village governments are still not well developed (Sofyani, Suryanto, Wibowo, & Widiastuti, 2018). Under this condition, the additional budget may not advance village development; instead, it would be managed inefficiently and eventually become a source of waste in the use of state finance at the village level. To overcome this problem, improving human resource quality in village government, especially the ability to manage village finances, is very significant.
Several studies have found evidence that training is an important determinant for improving employee performance in preparing good-quality financial reporting in the public sector (Budiono, Muchlis, & Masri, 2018; Muzahid, 2014; Sofyani & Akbar, 2013; Wungow, Lambey, & Pontoh, 2016). However, there is a research gap concerning how the training model is carried out. This research tested a village government financial management training model in Indragiri regency. This village government financial management training was conducted on a digital basis. Specifically, this research examines the understanding of village government employees before and after participating in the training. Nurkhamid (2008) contends that training can be a means for employees to understand present innovations and to reduce the pressure or confusion of employees over the demands of implementing an innovation, for example training in preparing Government Agency Accountability and Performance Reports, Government Strategic Plans, and Working Plans (Bastian, 2017). Thus, training conducted by the government is pivotal and has a positive impact on civil servants in implementing a particular policy, particularly related to village government financial management (Sofyani & Akbar, 2013). This is in line with the institutional theory proposed by DiMaggio and Powell (2000). From their point of view, the institutionalization of good governance in organizations, including the public sector, requires professionalism and adequately qualified human resources, which can be obtained from formal education and training. The presence of qualified employees will make the organization successful in carrying out certain policies in accordance with normative objectives (Sofyani, Akbar, & Ferrer, 2018).
The research conducted by Herlin & Effendi (2017) suggested that there is a need to increase the quality of human resources to improve performance. Halen & Astuti (2013) argued that the level of understanding, training, and guiding of local government employees significantly affects accrual-based local financial governance. Knowledge of government accounting standards strengthens the influence of transparency on budget performance (Arista & Suartana, 2016). Based on the description above, the hypothesis below can be formulated.
To conduct effective financial management and governance that can support accountability and transparency, information system (IS) support is pivotal (Kim, Shin, Kim, & Lee, 2011). IS can help village government management obtain accurate and timely village financial reports. Therefore, specific training on financial management in village government supported with IS is expected to enhance the understanding of village government employees regarding village financial management. H1: There is a difference in the understanding level of government employees toward village financial governance before and after village financial governance training using the IT-supported village financial information system.
RESEARCH METHOD
This research was conducted at the end of 2018 using a survey method, distributing a questionnaire to the respondents. The questionnaire contains questions about budget implementation and village financial governance, covering village management, financial governance, responsibility, accountability, transparency, independence, and justice. The population of this research is all heads of villages in Indragiri regency, Riau Province. From this population of 480 village heads, a random sample of 100 was taken and given the questionnaire to answer. The independent variable of this research is training on IT-based financial management and governance of village governments, while the dependent variable is the understanding of village government employees. The scale used in this study is a 1-5 Likert scale. A validity test was used to assess the instrument, computed in SPSS 17 using Spearman's rank correlation. Hypothesis testing is done by paired sample t-test. The hypothesis is supported if the understanding of village government employees in the post-test phase is significantly higher than during the pre-test (Ghozali, 2016).
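The paired-sample test described above compares each respondent's pre- and post-training score. A minimal sketch of the statistic it computes is below; the function name and the five-respondent scores are illustrative only (the study's raw data, with n = 100, is not published in the article), and the significance lookup against the t distribution with n − 1 degrees of freedom is omitted.

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired-sample t statistic computed on the per-respondent
    score differences (post minus pre)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    d_bar = mean(diffs)
    sd = stdev(diffs)                 # sample standard deviation of differences
    t = d_bar / (sd / math.sqrt(n))   # compare to t distribution with n-1 df
    return t, n - 1

# Tiny illustrative pre/post understanding scores on the 10-50 scale;
# NOT the study's data.
pre = [20, 25, 22, 28, 24]
post = [36, 38, 35, 44, 37]
t, df = paired_t(pre, post)
print(f"t = {t:.2f} with df = {df}")  # t = 19.32 with df = 4
```

A large positive t with a small p-value, as the paper reports (sig. = 0.000), indicates that post-training scores are significantly higher than pre-training scores.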
RESULT AND DISCUSSION
The IT-based financial management and governance model was implemented in the form of the Sikades program. Sikades is computer software used to manage and report village government finances. The workflow starts by opening the work unit folder (e.g., 01.00 Headman), which contains the Sikades4unit file. Opening the Sikades4unit file brings up the login display; fill in the username and password provided by the admin officer. After clicking LOGIN, the main menu for the work unit appears. The next step is setting the budget for every activity by clicking the Anggaran button from the work unit's main menu. Choose the working program and the activity, click the acceptance item in the acceptance column (if the activity needs an acceptance source), fill in the description, unit, unit price, and quantity, and press Enter. Next, click the 'mutakhirkan data' button to refresh the data. In the expenditure column, click the expenditure button, complete the description, unit, unit price, and quantity fields, press Enter, and again click 'mutakhirkan data' to refresh. (Source: Research Data, 2020.) Toolbar icons (not reproduced here) are used to view the detailed displays of income, cost, and financing, to print the data, and to exit the budgeting menu; a further button is used for checking the existing budget, the budget proposal, and the remaining budget.
Figure 5. Filling Window
The recapitulation of activities and the budget recapitulation from the arranged budget can be viewed with the corresponding buttons, and the Sikades program is closed with the exit icon. The validity test of the research instrument is presented in Table 1. According to the correlation test, the correlations between each of the ten questions and the total score are all significant, meaning the research instrument (the questions) is valid. Based on Table 1, the reliability calculation gives (2 × 0.998)/(1 + 0.998) = 0.99, meaning the scores from the questions on the research instrument are reliable. It can be concluded that the questionnaire, as the research instrument, is fit for use in this research. The understanding score of village government employees increased from 24.15 to 38.77. The significance test shows that the difference in understanding of village financial management and governance is significant, as seen from the sig. value of 0.000. This indicates a difference in the understanding level of village government employees toward IT-based financial management and governance of the village government before and after training. Moreover, the theoretical maximum of the average respondent's answer is 50 and the minimum is 10; because there are 10 questions each scored from one to five, the midpoint is 30. Although the post-training scores do not approach the theoretical maximum, the understanding level of the village government employees increased after attending the training. Thus, this research supports the hypothesis.
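The reliability figure quoted above, (2 × 0.998)/(1 + 0.998), is the Spearman-Brown "step-up" correction applied to a split-half correlation. A one-line sketch of that calculation:

```python
# Spearman-Brown correction: full-test reliability from a split-half
# correlation r. With r = 0.998 (from Table 1 of the paper), this
# yields ~0.999, which the paper reports rounded as 0.99.
r = 0.998
reliability = (2 * r) / (1 + r)
print(round(reliability, 3))  # 0.999
```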
The result of this study is in line with several previous studies, such as Sofyani & Akbar (2013), Muzahid (2014), and Wungow, Lambey, & Pontoh (2016), which found that training provided by government institutions contributed positively to improving the ability of government employees to complete their tasks. However, this research extends the previous findings by suggesting that the training held should be integrated with IT, so that the financial management work of village governments can run more easily and quickly (Sofyani, Riyadh, & Fahlevi, 2020). This result confirms institutional theory, as suggested by DiMaggio & Powell (2000), in that the success of the institutionalization process in an organization is determined by the quality of human resources, which can be improved through training programs.
CONCLUSION
Based on the results, it can be concluded that it is critical for village governments to conduct training and provide better information system facilities to support governance practices. Training in IT-based financial management and governance of village government may improve the performance of village government employees in understanding the philosophy of finance and implementing the principles of good governance. When good governance occurs in village government, it may lead to the welfare of civil society. This research has several limitations. First, this research was only carried out within the village government scope of Indragiri Regency, Riau province; therefore, these results cannot be generalized to a larger scope. Future studies are suggested to examine similar topics in other village governments in other regions of Indonesia. Furthermore, this study only tested one independent variable. Future studies are suggested to examine other factors that can improve the quality of human resources in village government, especially those related to financial management and governance practices. Since this research was carried out only with a survey method and yields less detailed results, further research is also recommended to adopt other research methods, such as qualitative or mixed methods.
|
2020-07-02T10:28:38.517Z
|
2020-04-23T00:00:00.000
|
{
"year": 2020,
"sha1": "4bd62f50e48554a6171a8bc3729d5c0a1bef33ca",
"oa_license": "CCBY",
"oa_url": "https://ojs.unud.ac.id/index.php/Akuntansi/article/download/56552/34428",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ce380511ca73c9cc898618a23587ea30dc6c185f",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
10650983
|
pes2o/s2orc
|
v3-fos-license
|
Evaluation of Bait Station Density for Oral Rabies Vaccination of Raccoons in Urban and Rural Habitats in Florida
Efforts to eliminate the raccoon variant of the rabies virus (raccoon rabies) in the eastern United States by USDA, APHIS, Wildlife Services and cooperators have included the distribution of oral rabies vaccine baits from polyvinyl chloride (PVC) bait stations in west-central Florida from 2009 to 2015. Achieving sufficient vaccine bait uptake among urban raccoons is problematic, given limitations on aerial and vehicle-based bait distribution for safety and other reasons. One or three bait stations/km2 were deployed across four 9-km2 sites within rural and urban sites in Pasco and Pinellas Counties, Florida. Based on tetracycline biomarker analysis, bait uptake was only significantly different between the urban (Pinellas County) high and low bait station densities in 2012 (p = 0.0133). Significant differences in RVNA were found between the two bait station densities for both urban 2011 and 2012 samples (p = 0.0054 and p = 0.0031). Landscape differences in terms of urban structure and human population density may modify raccoon travel routes and behavior enough for these differences to emerge in highly urbanized Pinellas County, but not in rural Pasco County. The results suggest that, in urban settings, bait stations deployed at densities of >1/km2 are likely to achieve higher seroprevalence as an index of population immunity critical to successful raccoon rabies control.
Introduction
Globally, rabies kills approximately 59,000 humans annually, and impacts on human and animal health result in a significant economic burden [1]. In the United States, the cost of living with the virus ranges from $245-510 million annually [2]. Oral rabies vaccination (ORV) is an effective and socially-acceptable approach to wildlife rabies control [3]. ORV has been used to control fox rabies in western Europe [4,5] and in Canada [6][7][8]. In the U.S., ORV is currently aimed at the elimination and prevention of new epizootics of canine rabies in coyotes (Canis latrans) [9,10], the elimination of rabies in gray fox (Urocyon cinereoargenteus) in Texas [11] and the containment and elimination of the raccoon (Procyon lotor) variant of the rabies virus (raccoon rabies) in the eastern U.S. [3]. While there are many variants of the rabies virus, and many vector species, raccoon rabies is primarily perpetuated within the raccoon. Raccoons often occur at extremely high population densities along the rural-urban interface, and are ecological generalists [12]. Raccoon rabies has spread rapidly in the abundant raccoon populations of eastern North America; however, the virus has not moved west of the Appalachian Mountain Range. Using this range as a natural barrier, USDA, APHIS, Wildlife Services (WS), National Rabies Management Program (NRMP) has implemented a large-scale ORV program to prevent the westward spread of raccoon rabies [3]. WS NRMP is conducting cooperative ORV operations to continue preventing the spread of raccoon rabies into the mid-western U.S. and eastern Canada (Phase I), and has begun work towards its elimination from the eastern U.S. (Phase II) [3], much of which is highly urbanized.
Bait stations for distribution of oral rabies vaccine baits have become an increasingly important bait delivery method in urban areas where aerial and vehicle-based (or hand) vaccine bait delivery is hampered by high human and pet densities, and in rural areas where raccoon densities are low, but target species may be concentrated in smaller localized populations, reducing the need to widely broadcast vaccine baits. Bait station use began in New York in 2003, and in key locations in Massachusetts in 2006 [13,14], with important questions regarding optimal design and effectiveness left unanswered. Although bait station design and deployment has been evaluated, including modification to reduce non-target uptake, especially by opossums (Didelphis virginiana), future design improvements and optimized strategies for their use require additional study [13,15,16]. Opossums are a non-target species due to their low incidence of rabies. They are attracted to vaccine baits and are able to remove baits from the bait stations with little difficulty. Direct competition with raccoons for the baits can confound rabies management efforts [17,18].
To better understand the best management strategies for using bait stations to control raccoon rabies in central Florida, during 2011 and 2012 the presence of the tetracycline (TTCC) biomarker and rabies virus neutralizing antibodies (RVNA), as indices of bait station performance [3], were compared between two bait station densities in rural and urban settings using fishmeal polymer (FMP) baits containing RABORAL V-RG ® (Merial, Athens, GA, USA) vaccine. It was predicted that placing 3 bait stations/km 2 would result in significantly higher RVNA and TTCC percentages than 1 bait station/km 2 among the urban study sites, and that there would be no significant difference between 3 bait stations/km 2 and 1 bait station/km 2 among the rural study sites.
Materials and Methods
Rural study sites were selected within the Starkey Wilderness Preserve in Pasco County, Florida, which is owned and managed by the Southwest Florida Water Management District (SWFWMD), and urban sites were selected within St. Petersburg in Pinellas County, Florida ( Figure 1). The rural study sites were dominated by oak (Quercus spp.) and pine (Pinus spp.) woodlands, with few to no houses in the area. In this study, there were 87 houses within the northeast corner of the study site and bait stations were set at least 0.04 km from the property lines. The rural study sites were interspersed with dirt trails maintained by SWFWMD. An understory of scrub and shrub species was throughout the rural study sites. The urban sites were located within St. Petersburg, Florida, which had a population of approximately 245,300 at the time of the studies, with a population density of approximately 3970 people/mi 2 (or 1533 people/km 2 ) [19]. The landscape was dominated by residential and commercial properties. The study sites will be referenced as rural (Pasco County) or urban (Pinellas County) high bait station density (HBSD)-those sites with 3 bait stations/km 2 , and rural or urban low bait station density (LBSD)-those sites with 1 bait station/km 2 . Bait stations were constructed of 2.5-foot sections of 4-inch diameter polyvinyl chloride (PVC) schedule 40 pipe, painted in camouflaged colors to reduce the likelihood of human tampering. Open PVC-tops were covered with 4-inch flexible Qwik ® (United States Plastic Corp, Lima, OH, USA) caps to prevent rain and bait access for animals from the top. PVC elbows (90 degree angle) were attached to the 2.5-foot PVC section bottoms, and a 3-4 inch PVC pipe extended from the elbow with a nut and bolt to prevent baits from falling out of the bait station ( Figure 2). The bolt acts as a stop to prevent the baits from sliding out and the nut holds the bolt in place. This design was based on the bait station design by Boulanger et al. 
[13], and then modified to accommodate more baits at one time.
Bait stations were deployed over 10 consecutive nights during 9-20 May 2011 and 21 February-5 March 2012. Due to the number of bait stations to be deployed, not all bait stations were set on the same day. Each bait station area was active for 10 days, though the total number of study days was >10 days. Study sites were selected within 5 km of previous WS raccoon density study or bait station study sites to provide working knowledge of the raccoon populations in the study areas. The raccoon density in Pasco County was estimated at approximately 10 raccoons/km 2 during a density study conducted in 2011; however, there were no density studies conducted in Pinellas County. Target bait densities on all sites were 75/km 2 , the standard base rate for distributing baits based on current raccoon densities [3]. The bait densities were kept constant across the study sites to ensure study sites could be compared to one another. Two 9-km 2 study sites were selected (within one habitat type (i.e., woodland-dominated or urban/residential-dominated) to the greatest degree possible) at least 5 km apart in each of the two counties (urban LBSD and HBSD, and rural LBSD and HBSD).
Thirty-six bait stations were deployed in each county, and 1350 vaccine baits containing TTCC hydrochloride as a biomarker were deployed on Day 0 in each county. The FMP baits containing RABORAL V-RG ® vaccine were 1.25 inch × 1.25 inch × 0.5 inch brown square blocks made of fishmeal. Inside the bait was a sachet sealed in the block with wax. The pink liquid inside the sachet was the vaccine. The amount of vaccine was intended to be a single dose.
The LBSD site in each county was equipped with one bait station/km 2 , containing 75 baits each (9 bait stations/site). The HBSD site in each county was equipped with three bait stations/km 2 containing 25 baits each (27 bait stations/site). Even distribution of the bait stations within the rural sites was possible due to the rural nature (woodlands with scrub/shrub understory) of the site, and single ownership; only vegetation and a lack of trails influenced bait station distribution. These sites were dominated by saw palmettos (Serenoa repens), oaks, and pines. Thorny vines, like greenbrier (Smilax bona-nox), made human movement difficult. Access to trails in 2012 that were available for use in 2011 was reduced by storm damage. The distribution of bait stations within the urban HBSD site was clustered in several 1-km 2 sections based on landowner permission, vegetative cover and the need to hide bait stations from the view of the public to reduce tampering.
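The allocation above keeps the total bait density identical across designs; only the number of stations holding the baits differs. A quick sketch of the arithmetic, using the values stated in the text:

```python
# Bait allocation arithmetic for the two bait station designs
# (all values taken from the study description).
SITE_AREA_KM2 = 9
TARGET_BAIT_DENSITY = 75  # baits per km^2, the standard base rate

# Low bait station density (LBSD): 1 station/km^2, 75 baits each
lbsd_stations = 1 * SITE_AREA_KM2   # 9 stations per site
lbsd_baits = lbsd_stations * 75     # 675 baits per site

# High bait station density (HBSD): 3 stations/km^2, 25 baits each
hbsd_stations = 3 * SITE_AREA_KM2   # 27 stations per site
hbsd_baits = hbsd_stations * 25     # 675 baits per site

# Both designs hit the same target density of 75 baits/km^2.
assert lbsd_baits == hbsd_baits == SITE_AREA_KM2 * TARGET_BAIT_DENSITY

# Each county held one LBSD site and one HBSD site.
stations_per_county = lbsd_stations + hbsd_stations
baits_per_county = lbsd_baits + hbsd_baits
print(stations_per_county, baits_per_county)  # 36 1350
```

These totals match the 36 bait stations and 1350 vaccine baits per county reported above.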
Bait stations were visited three to five times during each study period to monitor activity, equipment and site conditions. In the urban sites, four infrared automated cameras were positioned in LBSD, and six in HBSD, while in the rural sites, five were set in LBSD, and six in HBSD. All cameras were Moultrie ® (EBSCO Industries, Inc., Birmingham, AL, USA), and both Gamespy D55IR and I40 Digital Game Camera models were used. The D55IR was a 5.0-megapixel camera and accepted SD cards of up to 16 GB. The I40 was a 4.0-megapixel camera and accepted SD cards of up to 4 GB. Photos were set at high image quality, with 1-minute activation intervals and with the multi-shot function turned on to capture 3 photos for each activation on both cameras. No white flash was used; only the infrared flash was used at night. Sensor, aperture, and focal lengths were adjusted automatically as needed; these were not changed from the original setting as there was no means to adjust them. Cameras were set 12-24 inches from the ground, a minimum of 3 feet from the bait station, and aimed toward bait station openings to determine the species (raccoon vs. non-target) taking bait. The direction of the cameras was not accounted for, as most of the bait stations were set within clumps of vegetation so direct sunlight was not a factor. Each camera was given a unique ID number, which was printed on the photos to enable proper attribution of the photos. While setting the bait stations and cameras, the bait station number was recorded along with the corresponding camera ID. Any removal of vegetation in the rural sites that may have interfered with the cameras capturing photos was kept to a minimum so as not to make changes to the habitat that could deter animal visitations. In the urban sites, no vegetation changes were made, since the bait stations were set primarily on private property, and damaging the landowners' plants was not desired.
Camera event counters were reset during each site visit, and the time between photographs was minimized. During each bait station visit, the following information was recorded: date of visit, bait station ID, camera type, number of photos on camera (since last visit), number of images by species, and bait condition. The photos were viewed on a laptop computer. Each new event was determined by a 15-minute interval between photos showing individual animals. If the animal could be accurately identified by its markings as the same animal in the previous event photo (15 min prior), then this was considered a new event but not a new individual. If a bait station was emptied prior to the end of the 10-day study period, it was removed along with the camera, if one was associated with the bait station, to reduce tampering and damage to the bait station.
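The 15-minute rule above defines when consecutive photos count as separate visitation events. A minimal sketch of that event-counting logic (function name and timestamps are illustrative, not from the study; we assume a gap of 15 minutes or more starts a new event):

```python
from datetime import datetime, timedelta

EVENT_GAP = timedelta(minutes=15)

def count_events(timestamps):
    """Count visitation events from camera photo timestamps.
    Photos separated by >= 15 minutes begin a new event, per the
    rule described in the text (assumed inclusive at 15 minutes)."""
    if not timestamps:
        return 0
    timestamps = sorted(timestamps)
    events = 1
    for prev, cur in zip(timestamps, timestamps[1:]):
        if cur - prev >= EVENT_GAP:
            events += 1
    return events

# Three photos: the first two fall within one event, the third
# arrives over an hour later and starts a new event.
photos = [datetime(2011, 5, 10, 22, 0),
          datetime(2011, 5, 10, 22, 5),
          datetime(2011, 5, 10, 23, 30)]
print(count_events(photos))  # 2
```

Note that, as in the study, events are not the same as individuals: the same animal returning after 15 minutes would register a new event.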
Trapping began 24 days after bait stations were removed to allow sufficient time for TTCC biomarker deposition and RVNA development, and to approximate the time between standard ORV bait distribution and post-ORV sampling. Trapping occurred within 0.5 km of each study site to optimize capture rates, and ≥30 unique raccoons/study site were targeted to facilitate TTCC biomarker and serological analyses. Trapping was completed 84 days post-station removal. Raccoons were marked with a metal #4 ear tag (National Band & Tag Co., Newport, KY, USA) stamped with a unique identifying number, so that each individual raccoon captured could be identified. Because all raccoons captured in past and present studies are marked in the same fashion, any animals recaptured from previous studies could be easily identified and removed from testing if treated in a manner (i.e., given vaccination by injection) that would affect this study's results. Standard biological samples were collected, including blood sera to determine vaccine-induced immunity, and first premolar (PM1) teeth for biomarker evaluation. Although biomarking frequently occurs in fewer animals than actually demonstrate vaccine-induced serological responses, owing to extraction of a first premolar from live-trapped and released raccoons, it remains useful when considered with other vaccination assessment tools [20,21]. The teeth were labeled and prepared for shipment to Matson's Laboratory, LLC (Manhattan, MT, USA), where the tetracycline biomarker analysis was performed; methods followed Algeo et al. [20] and Linhart and Kenelly [22]. Rabies virus neutralizing antibody tests were conducted at the Centers for Disease Control and Prevention (CDC) in Atlanta, GA, using the rapid fluorescent focus inhibition test (RFFIT); methods followed Smith et al. [23] and CDC [24]. Cutoffs of both ≥0.05 and ≥0.1 IU/mL were used to indicate a positive RVNA response.
The aim was to determine whether there was a detectable difference between using the lower 0.05 IU/mL cutoff and the higher 0.1 IU/mL cutoff.
Fisher's exact tests were used to compare RVNA rates within and between treatments and sites. GraphPad QuickCalcs (GraphPad Software, Inc., La Jolla, CA, USA) statistical software was used for analyses [25], with α = 0.05.
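The Fisher's exact comparisons can be sketched with standard-library Python in place of GraphPad QuickCalcs; `fisher_exact_two_sided` is a name chosen here, and the example counts are illustrative, not the study's data:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table
    [[a, b], [c, d]] (e.g. RVNA-positive/negative counts in two sites):
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed table."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):
        # probability of x positives in the first group, margins fixed
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Illustrative positives/negatives for two hypothetical sites:
p = fisher_exact_two_sided(14, 17, 6, 26)
print(f"p = {p:.4f}, significant at alpha = 0.05: {p < 0.05}")
```

The tolerance factor guards against floating-point ties when summing equally likely tables.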
Results
The photographs captured by each camera were examined and the number of individual animals photographed was documented. One camera was removed from the rural HBSD counts in both 2011 and 2012 for lack of photographs showing any individual animals. In 2011, total camera days were 44 (rural LBSD, 5 cameras), 42 (rural HBSD, 5 cameras), 40 (urban LBSD, 4 cameras) and 22 (urban HBSD, 6 cameras). Total camera days in 2012 were 34 (rural LBSD, 5 cameras), 24 (rural HBSD, 5 cameras), 40 (urban LBSD, 4 cameras) and 31 (urban HBSD, 6 cameras). Photographs were analyzed for individually identifiable animals by markings; if an animal could not be identified as the same with certainty, it was counted as a new individual. Raccoons were photographed more frequently in five of the eight sampling periods than were opossums, the primary non-target species in the area. A total of 244 raccoons was trapped and sampled during 2011 and 2012; seven of these were removed from the results due to previous vaccination by injection during 2011. RVNA rates ranged from 6.3% (urban LBSD 2012) to 53.8% (rural HBSD 2012) (Table 1). The HBSD sites had higher RVNA rates in 2012 (53.8% and 51.6%) than did the LBSD sites (44.4% and 6.3%). The 2012 rural and urban HBSD sites also had higher RVNA rates (53.8% and 51.6%) than both rural and urban HBSD sites in 2011 (35.1% and 45.2%). Tetracycline biomarker was present in more teeth collected from both rural and urban HBSD sites in 2012 (30.4% and 33.3%) than in both 2012 rural and urban LBSD sites (26.9% and 0.0%) (Table 1).
Bait removal from bait stations varied between sites and between years. The nine urban LBSD bait stations started each year with a total of 675 baits, and had only 179 baits removed (26.5%) by the end of the 10-day study period in 2011 and 265 baits removed (39.3%) in 2012 (Table 1). A greater percentage of baits were taken from bait stations in 2012 than 2011. Within the urban sites, a larger number of baits were taken from the bait stations in the HBSD site regardless of year. However, within the rural areas, one bait station in the HBSD site did not have any baits removed in 2012 while all the baits within the LBSD site were removed from the bait stations (Table 1).
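The urban LBSD uptake percentages can be checked directly from the counts reported above (675 baits available at the nine stations each year); `pct_removed` is a name chosen here:

```python
# Quick check of the reported urban LBSD bait-uptake percentages.
TOTAL_BAITS = 675  # nine stations, as stated in the text

def pct_removed(removed, total=TOTAL_BAITS):
    """Percentage of available baits removed, rounded to one decimal."""
    return round(100 * removed / total, 1)

for year, removed in [(2011, 179), (2012, 265)]:
    print(f"{year}: {pct_removed(removed)}% of baits removed")
# 2011: 26.5%, 2012: 39.3%, matching Table 1
```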
RVNA rates were significantly higher (p = 0.0054 and 0.0031, respectively) in urban HBSD sites in 2011 and 2012 using an RVNA cutoff of ≥0.05 IU/mL, indicating a relationship with increased bait station density in urban areas. However, the rural sites did not differ (Table 2A). RVNA rates were likewise significantly higher (p = 0.0081 and 0.0031, respectively) in urban HBSD sites in 2011 and 2012 using an RVNA cutoff of ≥0.1 IU/mL, again indicating a relationship with increased bait station density in urban areas; the rural sites did not differ (Table 2B). Tetracycline biomarker comparisons are presented in Table 3; no significant differences were found between the rural sites or the 2011 urban sites. Table 3. Comparison of raccoon tetracycline deposition (at 75 baits/km2) with deployment of LBSD versus HBSD in rural and urban environments in Florida, 2011-2012.
Discussion
Achieving sufficient vaccine bait uptake among urban raccoons is critical. Limitations on aerial and vehicle-based (hand) bait distribution, for safety and other reasons, necessitate finding other bait distribution means and optimized strategies for achieving RVNA rabies management goals. Bait stations represent one potential tool for specific settings that may achieve management goals while reducing non-target bait loss and pet and human bait contact, mitigating many concerns of managers, cooperating agencies, and the public. Although the bait stations were studied in May 2011 and February 2012, these differences in time of year did not appear to affect the results. Warmer temperatures may influence raccoon movements during the day, but judging from the total baits removed per year, time of year does not appear to have affected this study's results. Raccoon movement was believed to be sufficient to ensure that many raccoons came into contact with the baits in both years. The warmer temperatures during the 2011 trapping period might have been expected to reduce the capture rate, but capturing raccoons within the urban LBSD site in 2012 proved to be more difficult (n = 16; Table 1).
Bait removal from bait stations resulted in higher RVNA percentages in the urban HBSD sites, irrespective of RVNA cutoff level. The lack of uptake in the urban LBSD site (<40% of baits removed in both 2011 and 2012 out of 675 baits) may be due to a perceived relatively low localized raccoon density at the time of this study, which can be evidenced from the relatively low percentage of raccoon photos in this area in 2011 and 2012 (Figure 3). A lack of travel corridors due to roads through the area, fenced and relatively barren yards, few park and recreational areas, and people and pet interference may also have had negative impacts on raccoon movements through this site. In contrast, the urban HBSD site (74.2% and 94.7% of baits removed in 2011 and 2012, respectively, out of 675 baits) contained multiple parks, ideal tree cover, food resources, and habitats for raccoons, as well as a golf course and conservation areas with multiple fresh water ponds. Many of the house lots in this site contained several large trees as potential denning sites.
Domestic dogs (Canis familiaris) and cats (Felis catus) were both captured in photos only at the urban bait stations; however, none were documented taking a bait from the bait stations. Therefore, they were not reported in the results. Since these animals did not take any baits and were not observed eating a bait that was on the ground, neither dogs nor cats were considered a non-target species for the bait.
Greater bait removal from the bait stations in 2012 than in 2011 may be due to identical bait station locations in both years, possibly resulting in bait stations being more easily found in the second year. Several authors [26][27][28][29][30] have documented learned behavior in raccoons-from traversing a maze after being shown the end, to pulling a lever for a reward after watching someone pull the same lever, to gaining access to garbage cans after lid modifications have been made. Female raccoons with young were documented in the 2011 photos. It is possible that those young returned the next year after learning the bait stations provided bait. A single bait station in the rural HBSD site had no baits taken, possibly due to the presence of acrobat ants (Crematogaster ashmeadi) that were observed covering the baits. These native fire ants potentially reduced the bait scent attractant to wildlife. Fire ants have been observed by multiple wildlife personnel throughout the southeastern U.S. covering bait in traps, as well as vaccine baits on the ground, thereby preventing raccoons and other animals access to the bait.
Two RVNA cutoff levels of ≥0.05 IU/mL (used by CDC [24]) and ≥0.1 IU/mL (suggested by Canadian and European counterparts) were examined. No appreciable difference in the results was found when using the higher cutoff (Table 2). For this study, no justifiable reason was found to conclude that using ≥0.05 IU/mL as the cutoff was limiting or accounted for animals with falsely elevated RVNA results. There remains much debate about the levels of rabies antibodies that confer resistance to rabies virus infection, and no single cutoff level of RVNA is recognized as being invariably protective [31]. Repeated observations have shown that small fractions of animals with detectable antibody levels prior to challenge can still succumb to rabies infection and, conversely, that some seronegative animals survive challenge [32,33]. While these discrepancies exist, Blanton et al. [33] observed that no raccoon succumbed to rabies challenge after vaccination with RABORAL® V-RG, even with an RVNA level of 0.06 IU/mL at the time of challenge. This result indicates to us that a cutoff of ≥0.05 IU/mL for this study was sufficient.
Lower RVNA response in the urban LBSD site may be related to a lower population of raccoons ( Figure 3) or a preponderance of private properties surrounded by fences and smaller lots than in the urban HBSD site. To set the bait stations on private property in the LBSD site, the bait stations were placed inside the fences, as requested by property owners. This placement may have reduced the opportunities for raccoons to find the baits. Bozek et al. [34] found raccoons in urban areas had smaller home ranges than those in rural areas. Raccoons in urban habitats have access to anthropogenic food sources and can thereby reduce their foraging distances and patterns. These human food sources may also explain why baits were left in the urban LBSD bait stations. By providing increased bait station density within yards without fences, raccoons likely had easier access to the bait stations in the urban HBSD site, resulting in more baits taken and a significantly higher RVNA response.
Tetracycline biomarker results and RVNA rates were not compared in this study for a few reasons. First, canine teeth and mandibular bone are superior tissues for tetracycline biomarking [20,22], but first premolar teeth were collected for this study as a less intrusive procedure, so that raccoons could be released after full recovery from sedation. Canine tooth sampling would have required euthanasia and eliminated the opportunity to obtain valuable biologic information in future field trials through recaptures. First premolar teeth continue to be the most acceptable, least intrusive sample to collect from live-trapped raccoons. Second, although not noted earlier in this study, unpunctured sachet packets were found at every bait station with the FMP coating missing, presumably eaten by a raccoon or opossum. This would result in a positive biomarker in the tooth but no positive RVNA response. Third, background sources could have contributed to tetracycline biomarking. The most likely background tetracycline sources include consumption of medicated feeds sometimes used for cattle production and nonspecific fluorescence that may occur naturally [35]. While all study sites had lower biomarker percentages than elevated RVNA percentages, the 2011 urban LBSD site had a higher percentage of biomarker presence than of elevated RVNA. It is unclear why this occurred, unless tetracycline was present in the environment or the raccoons were avoiding the vaccine sachet and strictly eating the FMP coating. The sites with higher percentages of elevated RVNA than of tetracycline biomarker may reflect a natural response to rabies in the area, poor tetracycline uptake in the first premolar tooth samples, or, as could be the case in 2012, trapping of animals 'missed' in 2011 that had ingested the vaccine.
Additionally, animals locating the bait stations more easily, as evidenced by the increased number of baits removed from the bait stations in 2012, could have resulted in the higher percentages of elevated RVNA (Table 1).
The findings from this study support deploying bait stations at higher density to provide raccoons in urban settings greater access to baits, achieve higher RVNA rates, and meet raccoon rabies management goals. However, additional well-designed studies are required to better understand optimized bait station density and distribution to achieve raccoon rabies elimination in the urban environments that form the mosaic of landscapes on which raccoon rabies occurs.
Optimization of culture conditions for enhanced decolorization of Amido black by Calocybe indica (CBE 1515) spent mushroom substrate
The objective of this study was to exploit the decolorization potential of spent mushroom substrate (SMS) of Calocybe indica (strain CBE 1515) for the biodegradation of the reactive textile dye Amido black. Initial studies screened the mycelia and crude enzyme extracts of spent compost of milky mushroom (Calocybe indica) strains for their potential to decolorize the dyes Amido black, Congo red and RBBR. Strains of C. indica were found to be highly specific for laccase production, and CBE 1515 showed significant decolorization potential for Amido black. Various process parameters, including the composition of the basal nutrient medium, pH, temperature, additional carbon and nitrogen sources, and initial dyestuff concentration, were optimized to develop an economic decolorization process. Optimum dye decolorization was achieved in LME medium containing dextrose and yeast extract as carbon and nitrogen sources, respectively, with the pH adjusted to 7.5 and incubation at 30 °C.
Introduction
Textile finishing generates a large amount of dye- and pigment-containing wastewater from dyeing and subsequent steps, forming one of the largest contributions to water pollution (Santhy and Selvapathy, 2006) [19]. Color present in dye-containing effluents gives a straightforward indication of water pollution, and discharge of such highly colored effluent can directly damage aquatic life in receiving waters (Senthilkumar et al., 2005) [20]. Due to their chemical structures, dyes are resistant to fading on exposure to light, water, and many chemicals (Robinson et al., 2001) [18]. Conventional treatment methods for textile effluents are either ineffective, costly, or complicated, or have sludge problems (Stolz, 2001; Robinson et al., 2001) [24,18]. The economic and safe removal of polluting dyes is still an important issue. Among the most economically viable methods available for decolorization, the most practical in terms of manpower requirements and expenses appears to be biological treatment (Murugesan and Kalaichelvan, 2003; Boer, 2002) [14,4]. Although decolorization is a challenging process for the textile industry, the great potential of microbial decolorization can be adopted as an effective tool. In the recent past, there has been intensive research on bioremediation of dyes, and the use of ligninolytic fungi is becoming a promising alternative to replace or supplement present treatment processes (Boer, 2002; Dos-Santos et al., 2004) [4,6,2]. Ligninolytic fungi can mineralize xenobiotics to CO2 and H2O through their highly oxidative and non-specific ligninolytic system, which is also responsible for the decolorization and degradation of a wide range of dyes (Boer et al., 2004; Mazmanci et al., 2005) [5,13]. These ligninases, including laccase, lignin peroxidase (LiP), and manganese peroxidase (MnP), are able to decolorize dyes of different chemical structures (Levin et al., 2004) [12,1].
In continuation of our previous studies (Sowjanya et al. 2018) [23] , decolorization of reactive textile dye Amido Black by Calocybe indica CBE 1515 is a part of our efforts for developing indigenous technology for decolorization of textile dyes and thus dye-containing effluents.
Basal nutrient media:
Five different growing media, Potato Dextrose Broth (PDB), LME broth, Complete Yeast Extract broth (CYM), Richard's Medium (RM) and Malt Extract Broth (MEB), were used to study the effect of medium composition on decolorization of Amido black using the spent substrate of CBE-1515, which was considered the most potent dye decolourizer among the tested C. indica strains.
Optimization protocol
The decolorization process was optimized by studying the effect of different cultural parameters (medium, temperature, pH, initial concentration of dye, carbon and nitrogen source) on per cent decolorization by SMS of the CBE-1515 strain in 250 mL flasks supplemented with different working concentrations of Amido black, which had shown maximum decolorization in previous experiments. The classical method of medium optimization was followed, varying one parameter at a time while maintaining the pre-optimized parameters at constant levels. 0.5 mL of 0.5% (w/v) stock solution of dye was added to each pre-sterilized medium flask to make up a 50 ppm dye concentration, followed by the addition of 5 g spent compost of C. indica to each flask using sterilized forceps. Dye-supplemented flasks devoid of SMS were kept as controls. Decolorization, if any, was recorded as the decrease in optical density at λmax from day 0 up to 3 days of incubation.
Step 1: The modified LME basal medium (LME) was used in order to eliminate possible dye-absorbing effects (Pointing, 1999) [16], and its composition can be easily altered to obtain an optimized medium, as it is not a synthetic medium.
Step 2: LME was prepared using five different carbon sources, viz., tartaric acid, sucrose, lactose, xylose and dextrose, in combination with the other ingredients of the medium, and inoculated with dye and SMS.
Step 3: Keeping constant the carbon source (dextrose) that had shown maximum decolorization in the previous trial, modified LME was prepared with five different nitrogen sources, viz., peptone, ammonium oxalate, yeast extract, tryptone and NaNO3, in combination with the other ingredients, and inoculated with dye and SMS.
Step 4: Three different pH values (4.0, 6.0 and 8.0) were used to study the effect of pH on decolorization of Amido black using SMS of C. indica. The pH that showed maximum decolorization was further optimized over a narrow range (8 ± 0.5). Modified LME was prepared with the optimized carbon and nitrogen sources, and the pH of the resulting broth was adjusted with HCl/NaOH.
Step 5: Modified LME was prepared using the optimized carbon (dextrose) and nitrogen (yeast extract) sources in combination with the other ingredients, adjusted to the optimized pH, inoculated with dye and SMS, and incubated at four different temperatures (25 °C, 30 °C, 35 °C and 40 °C).
Step 6: Modified LME was prepared with the optimized carbon and nitrogen sources, and the pH was adjusted to the optimized value (7.5) obtained from the previous step using HCl/NaOH. Four different dye concentrations, viz., 25, 50, 100 and 200 ppm, were prepared by adding 0.25 mL, 0.50 mL, 1 mL and 2 mL of 0.5% (w/v) dye stock solution, respectively, to pre-sterilized modified LME media flasks, followed by the addition of 5 g spent compost of C. indica to each flask using sterilized forceps.
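The dilution arithmetic behind Step 6 can be checked directly: a 0.5% (w/v) stock is 0.5 g/100 mL = 5 g/L = 5000 mg/L (i.e., 5000 ppm), so adding v mL of stock to the 50 mL working volume gives 5000 × v / 50 ppm. A minimal sketch (`working_ppm` is a name chosen here):

```python
# 0.5% (w/v) dye stock = 5 g/L = 5000 mg/L (ppm); 50 mL working volume.
STOCK_PPM = 5000.0
FLASK_ML = 50.0

def working_ppm(stock_ml):
    """Final dye concentration (ppm) after adding stock_ml of stock."""
    return STOCK_PPM * stock_ml / FLASK_ML

for v in (0.25, 0.50, 1.0, 2.0):
    print(f"{v} mL stock -> {working_ppm(v):.0f} ppm")
# 25, 50, 100 and 200 ppm, as stated in the protocol
```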
Measurement of decolorization extent
Sample (4 mL) was collected at each step from each replication and centrifuged at 5000 rpm for 20 min. The decolorization extent was determined by measuring the absorbance of the supernatant at the specific wavelength λmax (610 nm) for Amido black using a UV-Visible spectrophotometer. Decolorization extent was calculated as:

Decolorization (%) = [(ODi − ODt) / ODi] × 100

where ODi is the initial absorbance at day 0, ODt is the absorbance after incubation for different periods under the different experimental conditions, and t is the incubation time.
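The percent-decolorization calculation can be expressed as a one-line helper; `decolorization_percent` is a name chosen here, and the absorbance values below are illustrative:

```python
def decolorization_percent(od_initial, od_t):
    """Percent decolorization from absorbance readings at lambda_max
    (610 nm for Amido black): 100 * (OD_i - OD_t) / OD_i."""
    return 100.0 * (od_initial - od_t) / od_initial

# e.g. an initial absorbance of 1.20 falling to 0.25 after 3 days:
print(f"{decolorization_percent(1.20, 0.25):.1f}%")  # 79.2%
```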
Cultural medium
Statistically, there was no significant difference among the five culture media used in this study. However, on the 3rd day of incubation, the % decolorization was recorded as follows: LME (75.5%) > RM (72.0%) > MEB (71.5%) > CYM (70.0%) > PDB (64.5%) (Fig 1). The maximum decolorization obtained may be related to the type of medium used for the mycelial growth of C. indica impregnated in the SMS. Apart from being rich in salts, the medium also contained lignocellulose, which is essential for enzyme induction. Elisashvili and co-workers (2006) [7] found that the presence of a lignocellulosic substrate is obligatory for manganese peroxidase production by P. dryinus IBB 903, since enzyme production ceased when the fungus was grown in a synthetic medium with various carbon sources.
Carbon source
Decolorization experiments were conducted using sucrose, lactose, xylose, dextrose and tartaric acid as additional carbon sources. A dramatic increase in decolorization of Amido black was observed with dextrose addition (63.4%) after only 24 hours of incubation. Decolorization efficiency showed a steady increase with increasing incubation period. All the carbon sources enhanced decolorization of Amido black; however, the addition of dextrose caused maximum decolorization (79.1%), followed by xylose (76.5%) and lactose (73.2%), whereas tartaric acid and sucrose caused only 65.7% and 64.9% decolorization, respectively, after 3 days of incubation (Fig 2).
Decolorization of Poly R-478 dye by ten white-rot fungi was reported to vary in response to different carbon regimes, and the fastest decolorization rates were achieved with monomers (glucose, xylose) as carbon sources (Leung and Pointing, 2002) [11]. Those findings are similar to the present study, in which dextrose and xylose supplementation enhanced the decolorization potential of the SMS. As glucose is a monomer, it can easily be consumed by fungi, causing a significant shortening of the lag phase and increasing productivity. Supplementation of monosaccharides (glucose, dextrose) to the dye-containing medium provides an easily metabolizable energy source to the fungus and creates an environment that enhances the rate of dye decolorization.
Nitrogen source
The effect of different additional nitrogen sources on percent decolorization was investigated under the optimum condition of dextrose as carbon source. After three days of incubation, 83.3% decolorization was observed in the medium supplemented with yeast extract, greater than with the other nitrogen sources. In the initial days of incubation, there was no significant difference among the nitrogen sources in enhancing dye decolorization potential, but a statistically significant difference was observed on the 2nd and 3rd days after inoculation with SMS. Dye decolorization potential with the different nitrogen sources was as follows: yeast extract (83.3%) > tryptone (82.6%) ≥ ammonium oxalate (79.2%) > NaNO3 (75.4%) > peptone (66.8%) (Fig 3). The nitrogen level in the medium influences the rate of ligninolytic enzyme production and dye decolorization by white-rot fungi. In general, higher concentrations of nitrogen sources decrease enzyme production (Leung and Pointing, 2002) [11]. In contrast, Lee and co-workers (2004) [10] reported that the addition of a nitrogen source significantly enhanced color removal efficiency by S. commune, with the mycelial growth of the fungus increasing upon nitrogen supplementation. The results indicated that decolorization efficiency increased up to a nitrogen concentration of 0.5%, owing to the requirement for nitrogen as a nutrient, whereas further increases in nitrogen reduced the rate of color removal, since the breakage of azo bonds decreases in the presence of easily accessible excess nitrogen in the form of ammonium ions (Vahabzadeh et al., 2004) [26]. Different combinations of carbon and nitrogen sources were studied by Iqbal et al. (2011) [9], who recorded variation in the lignin-degrading enzyme profile of Trametes versicolor.
For enhancing production of peroxidases (LiP and MnP), maltose and urea were found to be best, while for laccase production, glucose and yeast extract were the best combination.
pH
Results for the effect of pH on % decolorization of Amido black showed that maximum decolorization efficiency (80.9%) was observed in medium adjusted to pH 7.5 after 3 days of incubation. The effect of pH on dye decolorization by SMS was as follows: 7.5 (80.9%) ≥ 8 (79.5%) ≥ 8.5 (75.6%) > 6 (70.5%) > 4 (55.3%) on the third day of incubation (Fig 4). The results show that basic pH favours growth of the fungus and maximum decolorization of Amido black. The chemistry of the dye molecule and the fungal biomass are greatly affected by the initial pH of the growing medium. The effect of pH on the decolorization process is important, since dyes are soluble over certain basic pH ranges and insoluble at acid pH (Fu and Viraraghavan, 2001) [8]. In the present study, maximum decolorization efficiency was observed at neutral to basic pH. The results are in accordance with previous studies by Senthilkumar and co-workers (2011) [21], where 95 per cent dye degradation efficiency was obtained by the white-rot fungus Phanerochaete chrysosporium on synthetic dye bath effluent containing Amido black 10B at pH 7. The optimum pH for decolorization of RB 5 (150 mg/l) by F. trogii was pH 4.5-7.5 (Park et al., 2004) [15].
Temperature
Results for the effect of temperature on % decolorization of Amido black showed that maximum percent decolorization (85.5%) was observed in flasks incubated at 30 °C, followed by flasks incubated at 25 °C (80.6%) and 35 °C (73.2%) after 3 days of incubation. Minimum decolorization was observed in flasks incubated at 40 °C (Fig 5).
The results obtained in the present study accord with the reports of Poornima et al. (2016) [17], who found the optimum temperature for decolorizing Amido black to be 30 °C; Bhatti et al. (2008) [3], who observed increased dye decolorization efficiency in the low-to-medium temperature range (25-35 °C); and Singh et al. (2010) [22], who reported that the optimum temperature for decolorizing all the tested dyes except crystal violet by SMS of P. sajor-caju was between 30 °C and 35 °C, with a decrease in the rate of decolourisation at temperatures above 35 °C. The maximum decolorization (71%) was observed in shake flasks incubated at 30 °C for 7 days under optimum conditions. A decline in dye removal potential was observed at higher temperatures (40-45 °C); white-rot fungi show poorer growth at higher temperatures than under optimum temperature conditions (Toh et al., 2003) [25]. Temperature optima of 30-37 °C have also previously been reported (Ashger et al.; Boer et al., 2004) [1,5] for different white-rot fungi decolorizing chemically diverse dyestuffs.

The rate of decolourisation increased with increasing dye concentration, exhibiting an apparent first-order reaction. In general, high dye concentrations cause slower decolourisation rates (Young and Yu, 1997) [27]. In the present study, however, the rate of decolourisation increased up to a certain optimum dye concentration, after which it decreased. These results accord with Singh et al. (2010) [22].
Conclusion
Among the various concentrations of synthetic dye used, 0.5 mL of 0.5% (w/v) Amido black stock per 50 mL medium (50 ppm) was the concentration best suited for optimizing the different cultural conditions to enhance the dye decolorization potential of C. indica spent mushroom substrate. The degradation potential can be enhanced in modified LME medium containing dextrose and yeast extract as carbon and nitrogen sources, respectively, with the pH adjusted to 7.5 and incubation at 30 °C.
|
Aqueous Extract of Huang-lian Induces Apoptosis in Lung Cancer Cells via p53-Mediated Mitochondrial Apoptosis
The current study was designed to evaluate the activity of the aqueous extract of Huang-lian and to identify the main apoptosis pathway induced by the extract in lung cancer. Antiproliferative activity against human lung cancer cells (A549) was evaluated by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) and TUNEL methods. Western blotting showed that Huang-lian regulated the Bcl-2 family protein-mediated mitochondrial pathway via p53. It increased the activation of caspase-3, and caspase-3 cleavage increased with time. These results suggest that Huang-lian regulates the Bcl-2 family protein-mediated mitochondrial pathway via p53, and that Huang-lian warrants further investigation as a natural agent for treating and preventing cancer.
INTRODUCTION
At present, four types of cancer (prostate, breast, lung, and colorectal) exceed 100,000 new cases per year in the United States. Of these cancers, lung cancer carries the worst prognosis and was estimated to result in 159,260 deaths in 2014. Characterized risk factors include genetic susceptibility as well as environmental exposure to carcinogens such as radon, asbestos and fine particulate matter. This multifactorial etiology for lung cancer can include long-term exposure to an inhaled carcinogen. Apoptosis is the process of programmed cell death and is considered a key process to manipulate in cancer prevention (Li et al. 2012). Activation of apoptosis occurs by extrinsic and intrinsic pathways. The extrinsic pathway is characterized by caspase-8 cleavage, whereas the intrinsic pathway is characterized by cytochrome c release and caspase-9 activation (Lu et al. 2011). Mitochondria play a key role during apoptosis, and in the mitochondria-dependent intrinsic apoptosis pathway the Bcl-2 family members are very important players. The Bcl-2 family includes pro- and anti-apoptotic proteins that maintain a dynamic balance between cell survival and death through interactions with each other and with other proteins (Ma et al. 2008). Huang-lian is a famous traditional Chinese recipe that has been used to treat toxic heat syndromes and infectious diseases. In this study, the lung cancer cell line A549 was used to investigate the activity of aqueous Huang-lian extract (AHLE). In addition, the antitumor mechanisms of AHLE in lung cancer cells were investigated.
Preparation of AHLE
One kilogram of Huang-lian was crushed to powder, and the powder was soaked in water at approximately 30 to 35 °C for 24 h. The supernatant was collected by centrifugation at 4,000 × g for 10 min and separated using zeolite dialysis membranes with apertures of 55 × 10^4 µm, 40 × 10^4 µm, 25 × 10^4 µm and 1 × 10^4 µm. The different extract fractions were obtained by lyophilization.
Cell culture
The cancer cells were grown as a monolayer in RPMI-1640 (Hyclone, Logan, UT, USA) containing 10% fetal bovine serum. Cells were maintained at 37 °C in a humidified incubator under a 5% CO2 atmosphere.
Cell proliferation assay
Cell proliferation was analyzed using the MTT assay. Cells were seeded in 96-well plates at densities of 1000-3000 cells/well and incubated for different time periods with or without different concentrations of AHLE. At each time point, 10 µL of MTT solution (5 mg/mL) was added to each well. The plates were incubated for an additional 4 h at 37 °C. The medium was then removed, and 200 µL DMSO was added to each well and pipetted repeatedly to dissolve the formazan. The absorbance of each well was measured at 570 nm with a microplate reader (Tecan Group Ltd., Männedorf, Switzerland).
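For reference, the inhibition rate from MTT absorbance readings is typically computed from blank-corrected viability; the OD570 values below are hypothetical, not measurements from this study.

```python
def inhibition_rate(abs_treated, abs_control, abs_blank):
    """Percent growth inhibition from MTT absorbances at 570 nm.
    Viability = (A_treated - A_blank) / (A_control - A_blank);
    inhibition = 100 * (1 - viability)."""
    viability = (abs_treated - abs_blank) / (abs_control - abs_blank)
    return 100.0 * (1.0 - viability)

# Hypothetical OD570 readings for one AHLE concentration at one time point
inh = inhibition_rate(abs_treated=0.52, abs_control=1.10, abs_blank=0.08)
```

In practice each condition is averaged over replicate wells before the inhibition rate is calculated.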
TUNEL Assay
A549 cells were grown on chamber slides. After treatment with or without AHLE, the slides were gently washed three times in 0.1 mol/L PBS (pH 7.4), fixed with 4% paraformaldehyde-PBS solution (Boston Bioproducts, Worcester, MA), and immediately transferred to a freezer until use.
To study the apoptosis of cultured cells, the TUNEL assay was performed using the in situ Cell Death Detection Kit, POD (number 11684 817910) according to the manufacturer's instructions (Roche, Indianapolis, IN). The terminal deoxyribonucleotidyl transferase (TDT)-mediated TUNEL was used to detect DNA fragmentation in situ. Fewer than 3% of the cells detached from the culture dishes and were not counted.
RNA extraction and qRT-PCR
Total RNA was isolated from the cells using the RNeasy kit (Qiagen, Hilden, Germany). The RNA content of the samples was too low to be accurately quantified by spectrometry, and thus 6.5 μL RNA aliquots were amplified. All RNA samples were treated with RNase-free DNase I to remove any possible genomic DNA contamination. For amplification of the targets, RT and PCR were run in two separate steps (TaKaRa, Inc., Dalian, China). Primers used are shown in Table 1.
Western blotting
Cells were harvested, washed with PBS at 37 °C, and lysed with a phenylmethanesulfonyl fluoride lysis buffer (Invitrogen). After centrifugation at 13,200 × g for 30 min, the protein content of the supernatant was determined using the bicinchoninic acid reagent (Sigma). Total protein (50 μg) from each sample was electrophoresed on a 10% SDS-PAGE gel and transferred to a polyvinylidene fluoride membrane (Millipore, Billerica, MA, USA). The membranes were blocked with PBS containing 5% fat-free milk (Becton Dickinson, Franklin Lakes, NJ, USA) and 0.1% Tween-20 (Sigma) for 30 min at room temperature and then incubated with primary antibody for at least 1 h at room temperature or overnight at 4 °C. The membranes were washed three times with PBS containing 0.1% Tween-20, incubated with peroxidase-conjugated secondary antibodies (Millipore), and developed using the ECL reagent (Pierce, Rockford, IL, USA). Images were obtained using an imaging system (UVP, USA).
Statistical analysis
All the experiments were performed at least in triplicate. Data are presented as mean ± standard deviation (SD). p values were calculated using Student's t-test, accompanied by analysis of variance (ANOVA) where appropriate.
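As a minimal sketch of the comparison described above, the two-sample Student's t statistic (pooled variance) can be computed directly; the triplicate viability values below are hypothetical.

```python
import math

def students_t(sample_a, sample_b):
    """Two-sample Student's t statistic with pooled variance,
    as used to compare treated vs. control triplicates."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)   # unbiased variances
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical viability (%) triplicates: control vs. AHLE-treated
t = students_t([98.1, 101.4, 100.2], [61.5, 58.9, 63.0])
```

The statistic is then compared against the t distribution with na + nb − 2 degrees of freedom (4 here) to obtain the p value; statistics packages such as SciPy do this step directly.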
A549 cell proliferation upon exposure to AHLE
To determine the activity of AHLE, the A549 cell line was exposed to 10, 20, 40 and 60 µg/mL of AHLE, and cell growth was examined at 12, 24, 48 and 72 h. The maximum inhibition rate appeared at an AHLE concentration of 40 µg/mL at 24 h of treatment (Fig. 1).
Effects of AHLE on TUNEL assay
First, AHLE induced apoptosis, as assessed by TUNEL. Although TUNEL staining was undetectable under control conditions, punctate staining indicated that essentially all cell nuclei exposed to AHLE had DNA nicking (Fig. 2), and the extent of DNA nicking differed with time. At 48 h, the DNA nicking was strongest.
Effects of AHLE on expression of p53, Bcl-2 proteins (Bcl-2, Bcl-xL, and Bax)
To further investigate the effect of AHLE on A549 cells, the protein expression levels of the Bcl-2 proteins (Bcl-2, Bcl-xL, and Bax) and p53 were examined upon exposure of the cells to 40 µg/mL AHLE for 0, 20, 40 and 60 min. As shown in Figure 3, the protein expression levels of p53, Bcl-2 and Bcl-xL decreased over time, whereas Bax expression increased. Then, p53 was inhibited with PFT-α (a p53 inhibitor), and the protein expression levels of the Bcl-2 proteins and p53 were examined upon exposure of the cells to 40 µg/mL AHLE for 0, 20, 40 and 60 min. Under p53 inhibition, p53, Bcl-2 and Bcl-xL increased over time, whereas Bax decreased (Fig. 4).
Effects of AHLE on caspase-3 expression
To determine whether caspase-3 and PARP were involved in the apoptosis mediated by AHLE, their expression levels were analyzed by western blotting. As shown in Figure 5, caspase-3 cleavage increased with time.
DISCUSSION
Huang-lian has long been used widely in TCM to treat different ailments. Recently, many biological activities of Huang-lian, such as anticancer, antidiabetic, antimutagenic, antibacterial, antifungal, and antiviral effects, have been reported (Zhang et al. 2007; Zeng et al. 2009). Huang-lian decoction has been shown to inhibit bacterial growth as well as cancer (Kim et al. 2012). Therefore, Huang-lian appears very useful in TCM. The present results showed that AHLE could inhibit the growth of a lung cancer cell line and caused cell morphology changes as well (data not shown). The MTT assay results showed that the inhibition of cancer cell viability was time- and dose-dependent.
Apoptosis induced by some anticancer agents constitutes one aspect of their treatment effect. Two major pathways involved in this process have been investigated in great depth (Gustafsson and Gottlieb 2007). As p53 responds to both DNA damage and oxidative stress to trigger downstream apoptotic signalling, the mechanism of apoptosis appears to originate both from the pharmacological consequences of DNA injury and from oxidative stress (Yeh et al. 2009; Zhou et al. 2012). In this study, AHLE reduced the expression of Bcl-2 and Bcl-xL and increased the expression of Bax. Therefore, AHLE could promote A549 cell apoptosis via p53. The study of the mechanism of apoptosis induced by AHLE showed anti-proliferative activity in A549 cells. Apoptosis is initiated through the mitochondrial pathway, another important key player in apoptosis, under physiological conditions such as oxidative stress, mitochondrial disturbance, and DNA damage (Gotoh et al. 2012). The typical executioners of apoptosis are proteolytic enzymes called caspases (Floros et al. 2006). The present results clearly demonstrated that AHLE increased the activation of caspase-3. Given that AHLE could play a novel role as a complementary medicine in lung cancer treatment, further studies on its anticancer mechanisms should be carried out. Notably, AHLE did not show this effect on normal mouse lung cells.
CONCLUSION
In summary, the results showed that aqueous Huang-lian extract inhibited cell proliferation by inducing apoptosis and cell-cycle arrest in A549 lung cancer cells. These results contribute to the understanding of the anticancer activity of Huang-lian.
Data Consistent CT Reconstruction from Insufficient Data with Learned Prior Images
Image reconstruction from insufficient data is common in computed tomography (CT), e.g., image reconstruction from truncated data, limited-angle data and sparse-view data. Deep learning has achieved impressive results in this field. However, the robustness of deep learning methods is still a concern for clinical applications due to the following two challenges: a) with limited access to sufficient training data, a learned deep learning model may not generalize well to unseen data; b) deep learning models are sensitive to noise. Therefore, the quality of images processed by neural networks alone may be inadequate. In this work, we investigate the robustness of deep learning in CT image reconstruction by showing false negative and false positive lesion cases. Since learning-based images with incorrect structures are likely not consistent with measured projection data, we propose a data consistent reconstruction (DCR) method to improve their image quality, which combines the advantages of compressed sensing and deep learning: first, a prior image is generated by deep learning; afterwards, unmeasured projection data are inpainted by forward projection of the prior image; finally, iterative reconstruction with reweighted total variation regularization is applied, integrating data consistency for measured data and learned prior information for missing data. The efficacy of the proposed method is demonstrated in cone-beam CT with truncated data, limited-angle data and sparse-view data, respectively. For example, for truncated data, DCR achieves a mean root-mean-square error of 24 HU and a mean structural similarity index of 0.999 inside the field-of-view for different patients in the noisy case, whereas the state-of-the-art U-Net method achieves 55 HU and 0.995, respectively, for these two metrics.
I. INTRODUCTION
Computed tomography (CT) is a widely used medical imaging technology for disease diagnosis and interventional surgeries. CT reconstructs a volume which provides cross-sectional images and offers good visualization of patients' anatomical structures and disease information. In order to acquire sufficient data for image reconstruction, certain acquisition conditions need to be satisfied. First of all, the detector of a CT system needs to be large enough to cover the imaged object. Second, the angular range of a scan should be at least 180° plus a fan angle. Third, the angular step should be small enough to meet sampling theorems [1].
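The short-scan condition above (at least 180° plus the fan angle) can be checked numerically for a flat-detector geometry; the detector width and source-to-detector distance below are hypothetical, not the configuration used in this paper.

```python
import math

def short_scan_range_deg(detector_width_mm, source_detector_dist_mm):
    """Minimum angular range for a short scan: 180 degrees plus the full
    fan angle, where the fan angle follows from the flat-detector geometry."""
    half_fan = math.atan((detector_width_mm / 2) / source_detector_dist_mm)
    fan_angle_deg = math.degrees(2 * half_fan)
    return 180.0 + fan_angle_deg

# Hypothetical geometry: 400 mm wide detector, 1000 mm source-to-detector distance
needed = short_scan_range_deg(400, 1000)
```

Any acquisition covering less than this range is a limited-angle scan for that geometry and will suffer from the missing-data artifacts discussed below.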
However, such conditions may be violated in practical applications, raising the issues of interior tomography, limited-angle tomography, and sparse-view reconstruction. In interior tomography, X-rays are collimated to a certain region-of-interest (ROI) to reduce the amount of dose exposure to patients. In addition, data truncation is a common problem for large patients whose body cannot be entirely positioned inside the field-of-view (FOV) due to the limited detector size. The problem of limited-angle tomography arises when the rotation of a gantry is restricted by other system parts or an external obstacle. Sparse-view reconstruction is preferred for the sake of low dose, quick scanning time, or avoidance of severe motion. In these situations, artifacts, typically cupping artifacts, streak artifacts and view aliasing, occur due to missing data.
Y. Huang, A. Preuhs and A. Maier are with the Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany (email: yixing.yh.huang@fau.de). A. Maier is also with the Erlangen Graduate School in Advanced Optical Technologies (SAOT), Erlangen, Germany.
To deal with missing data, data inpainting is the most straightforward solution. For interior tomography, heuristic extrapolation methods are widely applied, including symmetric mirroring [2], cosine function fitting [3], and water cylinder extrapolation (WCE) [4]. With such extrapolations, a smooth transition between measured and truncated areas is pursued to alleviate cupping artifacts inside the FOV. However, anatomical structures outside the FOV are still corrupted. For limited-angle and sparse-view reconstruction, many researchers attempted to restore missing data based on sinusoid-like curves [5], [6], band-limitation properties [7]-[10], and data consistency conditions [11]-[14]. Such approaches achieved improved image quality for certain scanning configurations and particular subjects, but only limited performance for clinical applications.
With the advent of compressed sensing technologies, iterative reconstruction with total variation (TV) regularization became popular for CT reconstruction from insufficient data. So far, many TV algorithms have been developed, including ASD-POCS [15], improved TV (iTV) [16], spatio-temporal TV (STTV) [17], anisotropic TV (aTV) [18], soft-thresholding TV [19], total generalized variation (TGV) [20], and scale-space anisotropic TV (ssaTV) [21]. For insufficient data, they achieve superior image quality compared to FBP-based reconstruction, as TV regularization takes advantage of the sparsity prior in the gradient domain. Besides the image gradient domain, sparsity priors can also be employed in other transform domains [22]-[24].
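To illustrate why TV regularization suits signals with sparse gradients, here is a toy 1-D TV denoising sketch by gradient descent on a smoothed TV penalty. It is a didactic stand-in, not any of the cited algorithms, and all parameter values are illustrative.

```python
import numpy as np

def tv_denoise_1d(g, lam=0.5, eps=1e-2, step=0.05, n_iter=1000):
    """Toy 1-D TV denoising: gradient descent on
    0.5*||f - g||^2 + lam * sum_i sqrt((f[i+1]-f[i])^2 + eps),
    a smoothed TV penalty that favors piecewise-constant signals."""
    f = g.astype(float).copy()
    for _ in range(n_iter):
        d = np.diff(f)                    # forward differences f[i+1] - f[i]
        w = d / np.sqrt(d * d + eps)      # smoothed sign of each difference
        tv_grad = np.zeros_like(f)
        tv_grad[:-1] -= w                 # d/df[i]   of the i-th TV term
        tv_grad[1:] += w                  # d/df[i+1] of the i-th TV term
        f -= step * ((f - g) + lam * tv_grad)
    return f

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(30), np.ones(30)])      # single sharp edge
noisy = clean + 0.2 * rng.standard_normal(clean.size)
denoised = tv_denoise_1d(noisy)
```

The penalty suppresses noise within the two flat segments while only slightly shrinking the edge, which is the behavior that makes TV attractive for piecewise-smooth CT images.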
Recently, deep learning has achieved impressive results in CT reconstruction [25], [26]. For image reconstruction from insufficient data, deep learning has been applied for sinogram inpainting in the projection domain [27]-[31], artifact post-processing in the image domain and image transform domains [32]-[41], and direct projection-to-image reconstruction [42]-[48]. For sinogram inpainting and artifact post-processing, incomplete sinograms and artifact-corrupted images need to be translated to complete sinograms and artifact-free images, respectively. For these tasks, the U-Net [49] and generative adversarial networks (GANs) [50], [51] are the most frequently used techniques. For projection-to-image reconstruction, known operators for image reconstruction [52] are typically integrated into the architecture design of deep neural networks [45]-[48].
The above achievements have shown a promising prospect of the clinical application of deep learning into CT reconstruction. However, the robustness of deep learning in practice is still a concern [53]- [55]. It is well-known that deep learning cannot generalize well to unseen data [56], especially with insufficient training data. Meanwhile, it is reported that adversarial examples [57]- [60] are ubiquitous in deep neural networks. While small perturbations hardly visible to human eyes can cause deep neural networks to predict an entirely wrong label [57]- [59], objects with significant changes might be classified as the same label by deep neural networks since a trained neural network typically cannot capture all necessary features to distinguish objects of different classes [60], [61]. The instability of deep learning is widely investigated in the field of computer vision [53]- [60]. However, it has not been adequately investigated in the field of CT reconstruction. In our previous work [62], we found that deep learning is sensitive to noise in the application of limited-angle tomography. In this work, more false negative and false positive examples will be given for the applications of deep learning in CT reconstruction from insufficient data.
Due to the factors of insufficient training data and noise as aforementioned, generating reconstructed images directly from a neural network appears inadequate since incorrect structures might occur in the learned images. Learned images with incorrect structures are likely not consistent with measured projection data. Therefore, enforcing data consistency can improve their image quality in principle. Since images generated by deep learning methods can provide substantial beneficial information of anatomical structures, they potentially offer good prior for unmeasured projection data. Accordingly, we propose a data consistent reconstruction (DCR) method using learned prior images for CT reconstruction from insufficient data, where iterative reconstruction with TV regularization is applied to integrate data consistency for the measured data and learned image prior for the unmeasured data.
This work is an extension of our previous preliminary work [63] and [64]. The major contributions of this work lie in the following two aspects: a) Investigate the robustness of deep learning in CT image reconstruction by showing false negative and false positive lesion cases; b) Propose the DCR method to improve the image quality of learned images for CT reconstruction from insufficient data, which is a hybrid method to combine the advantages of deep learning and compressed sensing.
II. MATERIALS AND METHODS
Our proposed DCR method mainly consists of three steps: artifact reduction using deep learning, data inpainting with learned prior images, and iterative reconstruction with TV regularization.
A. Artifact Reduction Using Deep Learning
[Fig. 1. The U-Net architecture for artifact reduction in images reconstructed from insufficient data.]
1) Neural network: As mentioned above, various neural networks have been reported effective for image reconstruction from insufficient data, for example, the U-Net, GANs, or the iCT-Net [47]. In principle, images processed by any one of these networks can provide prior information for the inpainting of missing data. In this work, we choose the state-of-the-art U-Net for its concise architecture design, fast convergence in training and high representation power. Its architecture is displayed in Fig. 1.
2) Input: For limited-angle tomography and sparse-view CT, images reconstructed by FBP directly from insufficient data, denoted by f FBP , are used as input images. For truncation correction, it is more effective for a neural network to learn artifacts in images reconstructed by FBP from extrapolated data than truncated data directly, according to our previous research [41], [64]. Therefore, in this work, an image reconstructed from WCE [4] processed projections, denoted by f WCE , is chosen as the input to the network.
As the artifacts are mainly caused by the erroneous extrapolation of WCE, sparse-view sampling or limited-angle acquisition, the influence of cone-beam artifacts is neglected. To reduce the computational burden, we process the data slice-wise instead of volume-wise. The correlation among different 2-D slices in the third dimension will be brought back in the third step of DCR.
3) Output: The training target of the network is the corresponding artifact image of the input image, denoted by f artifact. It is computed as the subtraction of the data-complete reconstruction from the input image. For training, the ℓ2 loss function is used for truncation correction and limited-angle tomography, while the perceptual loss in [65] is used for sparse-view CT, since the ℓ2 loss is not sensitive to high-frequency, low-magnitude aliasing. For inference, the network estimates the artifact image f artifact, and an artifact-free image f DL is computed by subtracting the artifact image from the input image, i.e., f DL = f WCE − f artifact for truncation correction and f DL = f FBP − f artifact for sparse-view and limited-angle reconstruction.
B. Data Inpainting with Learned Prior Images
For data consistent reconstruction, we propose to preserve measured projections entirely and use the learned reconstruction for data inpainting of missing projections. We denote measured projections by p m and their corresponding system matrix by A m. We further denote unmeasured projections by p u and their corresponding system matrix by A u. The learned reconstruction f DL provides prior information for the unmeasured projections p u. Therefore, an estimation of p u, denoted by p̂ u, is obtained using digitally rendered radiographs (DRRs) of f DL:
p̂ u = A u f DL.    (1)
Combining p̂ u with p m, a complete projection set is obtained for reconstruction.
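The inpainting step above amounts to keeping every measured projection bin untouched and filling only the unmeasured bins with forward projections (DRRs) of the learned prior. A minimal sketch with a toy sinogram, all values hypothetical:

```python
import numpy as np

def inpaint_projections(p_measured, measured_mask, p_prior):
    """Keep measured projection bins untouched; fill only the unmeasured
    bins with forward projections of the learned prior image."""
    return np.where(measured_mask, p_measured, p_prior)

# Toy 2-view sinogram: detector columns beyond index 2 are truncated (unmeasured)
p_m = np.array([[1.0, 2.0, 3.0, 0.0, 0.0],
                [2.0, 3.0, 4.0, 0.0, 0.0]])
mask = np.array([[True, True, True, False, False]] * 2)
p_dl = np.full_like(p_m, 9.0)     # stand-in for the DRRs of the prior image
p_full = inpaint_projections(p_m, mask, p_dl)
```

This masked combination is what makes the final reconstruction data consistent: whatever the network hallucinates, the measured rays are never overwritten.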
C. Iterative Reconstruction with TV Regularization
Due to the intensity discontinuity between p̂ u and p m at the transition area, as well as noise in the measured projections, iterative reconstruction with TV regularization is applied to further reduce artifacts and noise.
In this work, particularly the iterative reweighted total variation (wTV) regularization [21] is utilized, as wTV intrinsically reduces the staircasing effect and preserves fine structures well. The wTV of an image f is defined as [66]
||f^(n)||_wTV = Σ_{x,y,z} w^(n)_{x,y,z} · ||Df^(n)_{x,y,z}||, with w^(n)_{x,y,z} = 1 / (||Df^(n-1)_{x,y,z}|| + ε),    (2)
where f^(n) is the image at the n-th iteration, w^(n) is the weight vector for the n-th iteration, which is computed from the previous iteration, and ε is a small positive value added to avoid division by zero. A smaller value of ε results in finer image resolution but slower convergence speed. The overall objective function for the n-th iteration is
min ||f^(n)||_wTV subject to ||A m f^(n) − p m|| ≤ e 1, ||A u f^(n) − p̂ u|| ≤ e 2, f^(n) ≥ 0,    (3)
with the initialization f^(0) = f DL. Here e 1 is a noise tolerance parameter for the data fidelity term of the measured projections and e 2 is a tolerance parameter that accounts for the inaccuracy of the prior image f DL. The iterative reconstruction is initialized by f DL to accelerate convergence. With the above objective function, the data consistency for the measured data and the learned image prior for the unmeasured data are integrated. To solve the above objective function, the simultaneous algebraic reconstruction technique (SART) is utilized to minimize the data fidelity constraints, while a gradient descent method is utilized to minimize the wTV term [21]. In each of the n max main iterations, a SART update sweeps over all projection views, applying the soft-thresholding operators S e1 and S e2 to the residuals of the measured and inpainted projections, respectively, to realize the error tolerances e 1 and e 2; nonnegativity is then enforced, followed by l max sub-iterations of wTV gradient descent with a backtracking line search (t := γ · t), after which the weights w are updated.
[Pseudo-code listing of the SART + wTV algorithm.]
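For intuition, a drastically simplified SART update (without the wTV step, the soft-thresholding tolerances, or view-by-view ordering) can be sketched on a toy linear system; this is a didactic sketch, not the authors' implementation.

```python
import numpy as np

def sart(A, p, n_iter=100, relax=0.8):
    """Simplified SART: f <- f + relax * V^-1 A^T W (p - A f), where W and V
    normalize residuals and updates by the row and column sums of A."""
    row_sum = A.sum(axis=1)               # per-ray normalization (W)
    col_sum = A.sum(axis=0)               # per-voxel normalization (V)
    f = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = (p - A @ f) / row_sum  # normalized projection error
        f += relax * (A.T @ residual) / col_sum
    return f

# Toy 4-ray, 3-voxel system with a known phantom
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
f_true = np.array([2.0, 1.0, 3.0])
p = A @ f_true                            # noise-free "measured" projections
f_rec = sart(A, p)
```

With a consistent, full-rank system as here, the iterates converge to the true phantom; in the full DCR scheme each such data-fidelity sweep is interleaved with wTV minimization.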
D. Experimental Setup
In this work, the proposed DCR method is evaluated on image reconstruction tasks from truncation data, limited-angle data and sparse-view data respectively in cone-beam CT systems.
1) System configurations: The source-to-detector distance of the cone-beam CT is 1200 mm and the source-to-isocenter distance is 600 mm. The reconstruction volume size is 256 × 256 × 256 with a voxel size of 1.25 mm × 1.25 mm × 1.0 mm. The detector pixel size is 1.0 mm × 1.0 mm. For a regular short scan, the angular range is 210° with an angular step of 1° using a large detector size of 1240 × 960 pixels. For the lateral data truncation study, the detector is switched to a small size of 600 × 960 pixels. For sparse-view CT, the angular step is switched to 4° in a full scan of 360°. For limited-angle tomography, the angular range is switched to 150°. To generate DRRs for data inpainting, a ray-driven forward projection method with a sampling rate of 7.5/mm is applied. For all experiments considering noise, Poisson noise is simulated assuming an exposure of I0 = 10^5 photons at each detector pixel before attenuation.
2) Neural network training and testing: In this work, 18 patients' CT data sets [67] are used, with a split of 16-1-1 for training, validation and testing, respectively. To investigate the performance of deep learning on different training and test data, leave-one-out cross-validation is performed among 17 patients' data sets, while one patient is always used for validation. For training, 25 slices from each patient are chosen with a neighbouring-slice distance of 10 mm. For validation, the root-mean-square error (RMSE) of 25 slices is used to monitor the training process and avoid over-fitting. For the noise-free evaluation, the U-Net is trained on noise-free data, while in the noisy cases it is trained on data with Poisson noise. The ℓ2-norm is applied to regularize the neural network weights with a regularization parameter of 10^-4. The U-Net is trained on the above data for 500 epochs. The initial learning rate is 10^-3 and the decay rate is 0.97 per epoch. For the test patient, all 256 slices are fed to the U-Net for evaluation.
3) Iterative reconstruction parameters: For iterative reconstruction, the error tolerance parameter e 1 is set to 0.005, while a relatively large value of 0.5 is chosen empirically for e 2 in Eqn. (3) in the noise-free case. In the noisy case, e 1 is set to 0.05 to accommodate Poisson noise, while e 2 is kept at 0.5. Please see Fig. 6 for the above parameter selection. For the wTV regularization, the parameter ε is set to 5 HU for the weight update in Eqn. (2). Ten iterations of SART + wTV are applied using the U-Net reconstruction f DL as initialization to obtain the final reconstruction, i.e., n max = 10 in the pseudo-code of the SART + wTV algorithm. For SART, the relaxation parameter λ is set to 0.8. For the wTV minimization, 10 sub-iterations are applied, i.e., l max = 10.
4) Image quality assessment: To assess image quality, the metrics of RMSE and structure similarity (SSIM) are utilized. In addition, whether false negative and false positive lesions are reconstructed is analysed, since the standard image quality metrics (e.g., RMSE and SSIM) cannot fully indicate the clinical value of images.
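The RMSE metric restricted to a region such as the FOV can be sketched as follows; the toy images and mask are illustrative only (a full evaluation would also compute SSIM, e.g., with scikit-image).

```python
import numpy as np

def rmse_hu(recon, reference, mask=None):
    """Root-mean-square error in HU, optionally restricted to a boolean
    mask (e.g., the area inside the field-of-view)."""
    diff = recon - reference
    if mask is not None:
        diff = diff[mask]
    return float(np.sqrt(np.mean(diff ** 2)))

# Toy 4x4 "images": 10 HU error inside a central 2x2 FOV, 50 HU outside
ref = np.zeros((4, 4))
rec = np.full((4, 4), 50.0)
rec[1:3, 1:3] = 10.0
fov = np.zeros((4, 4), dtype=bool)
fov[1:3, 1:3] = True
err_all = rmse_hu(rec, ref)        # whole image
err_fov = rmse_hu(rec, ref, fov)   # inside the FOV only
```

Reporting both values, as in the results below, separates the accuracy of the trusted measured region from that of the extrapolated or inpainted region.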
A. Truncation Correction
The reconstruction results of two example slices from the test patients in the noise-free case are displayed in Fig. 2. The RMSE of the whole body area for each method is displayed in the corresponding subcaption. In the FBP reconstruction f FBP (Figs. 2(b) and (h)), the original FOV boundary is clearly observed. The structures inside the FOV suffer from severe cupping artifacts, while the anatomical structures outside this FOV are entirely missing. For f WCE in Figs. 2(c) and (i), the cupping artifacts inside the FOV are notably alleviated by WCE. Anatomical structures outside the FOV are partially reconstructed. In the wTV reconstruction f wTV (Figs. 2(d) and (j)), the cupping artifacts are removed. Nevertheless, the structures outside the FOV are still missing due to data truncation. Figs. 2(e) and (k) demonstrate that the U-Net is able to reduce the cupping artifacts as well. Moreover, it is able to reconstruct the anatomical structures outside the FOV. For example, the ribs on the left side are well reconstructed by the U-Net in Fig. 2(k). Nevertheless, the proposed DCR method further improves the accuracy of the U-Net reconstructions, achieving the smallest RMSE values of 49 HU and 42 HU for (f) and (l), respectively.
The above two example slices are redisplayed in a narrow window of [-200, 200] HU in Fig. 3. The RMSE inside the FOV for each method is displayed in the corresponding subcaption. In this window, the detailed structures of the liver can be observed, including the lesions indicated by the arrows in Figs. 3(a) and (g). In the FBP reconstructions displayed in Figs. 3(b) and (h), the cupping artifacts are clearly present; they are mitigated by WCE, wTV, U-Net and DCR, as the RMSE inside the FOV is significantly reduced. Among these methods, DCR achieves the lowest RMSE values of 19 HU and 15 HU inside the FOV for the two slices, respectively. Although the U-Net reconstructs anatomical structures outside the FOV, these structures may not have accurate intensity values. For example, the tissue at the center of the ROI in Fig. 3(e) has low intensity, whereas it is recovered better in the DCR reconstruction in Fig. 3(f). In addition, some structures inside the FOV may also have incorrect intensity values. For example, the ROI in Fig. 3(k) is brighter than that in the reference image (Fig. 3(g)), which is caused by the remaining cupping artifacts. This intensity bias is corrected in the DCR reconstruction in Fig. 3(l), as indicated in the ROI.
The lesions in Fig. 3 are reconstructed by all the methods. However, not all lesions observed in the U-Net reconstruction are reliable, even in the noise-free case. Two example results are displayed in Fig. 4. In the ROI of Fig. 4(e), a lesion is located at the bottom tip of the liver, which looks very realistic. In Fig. 4(k), a large lesion is clearly visible. However, in the reference ROIs in Figs. 4(a) and (g), these two lesions do not exist. With our proposed data consistency constraint, these fake lesions are removed, as demonstrated in the ROIs of Figs. 4(f) and (l).
The truncation correction methods are also evaluated in the noisy case. The results of two example slices (the same two slices as in Fig. 3) are displayed in Fig. 5. Noise patterns are observed in the FBP reconstruction (Figs. 5(b) and (h)) and the WCE reconstruction (Figs. 5(c) and (i)), where the lesions in the corresponding ROIs are obscured by noise. Figs. 5(d) and (j) demonstrate that wTV is able to reduce Poisson noise very well and reveal the lesion areas. For the U-Net result displayed in Fig. 5(e), the lesion boundary in the ROI is very blurry and its area is much larger than that in the reference ROI of Fig. 5(a). For the U-Net result in Fig. 5(k), Poisson noise remains and the low-contrast (about 20 HU) lesion in the ROI is obscured by the severe noise. These observations indicate that deep learning is not robust enough in the presence of noise, even if the U-Net is trained on data with Poisson noise. The proposed DCR method combines the advantages of wTV and the U-Net: it reduces both the cupping artifacts and the Poisson noise inside the FOV while preserving organ details. In the proposed DCR algorithm, e 1 and e 2 in Eqn. (3) are two important performance parameters. In the noise-free case, a small value close to zero is fine for e 1. However, in the noisy case, a proper value for e 1 needs to be set for noise tolerance. Therefore, several example images in the noisy case are displayed in Fig. 6 to indicate the influence of these two parameters. All the images are obtained with the other parameters fixed at ε = 10 HU for the weight update, λ = 0.8 for the SART update and n max = 10 for the total number of iterations. Fig. 6(a) (the same as Fig. 5(l)) is the result with the empirically chosen parameters e 1 = 0.05 and e 2 = 0.5. Fig. 6(b) is the result with the parameters e 1 = 0.05 and e 2 = 0.5 as well, but with only the SART update for data fidelity and without the wTV minimization. As expected, Poisson noise remains in Fig. 6(b), indicating the necessity of the wTV minimization.
Figs. 6(c) and (d) are the results with e₁ = 0.005 and e₁ = 0.5, respectively. In Fig. 6(c), a large portion of the Poisson noise is reduced, but some remains due to the small value of e₁. In Fig. 6(d), the Poisson noise is entirely removed, but fine structures are over-smoothed. These observations indicate that e₁ is a parameter controlling the trade-off between noise reduction and high spatial resolution. Hence, e₁ = 0.05 is recommended empirically for the noisy case in this work. Regarding the parameter e₂, Figs. 6(e) and (f) are the results with e₂ = 0.1 and e₂ = 5, respectively. In Fig. 6(e), the FOV boundary is visible, caused by the discontinuity between p_u and p_m at the transition area. Fig. 6(f) is very similar to Fig. 6(a): the pixel values at the FOV boundary are corrected while fine structures are preserved. This indicates that the selection of e₂ is relatively flexible as long as it is large enough. With the chosen parameters e₁ = 0.05 and e₂ = 0.5, the average RMSE values for the whole image and for the area inside the FOV of Patient No. 17 are plotted over the iterations in Fig. 7. The RMSE inside the FOV converges faster than that of the whole image, but both change little after 10 iterations. Therefore, choosing the total iteration number n_max = 10 is sufficient in this work.
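Since Eqn. (3) is not reproduced in this section, the following is a schematic 1-D sketch (an illustrative assumption, not the paper's exact formula) of how a tolerance-based data-consistency constraint with these two parameters can behave: residuals within e₁ are tolerated as noise, larger residuals are pulled back onto the measured data, and a feathering zone of width e₂ blends the measured and learned projections at the FOV boundary. The function name and the linear feathering scheme are invented for illustration.

```python
import numpy as np

def data_consistent_projection(p_sim, p_m, mask, e1, e2):
    """Illustrative 1-D data-consistency step (hypothetical helper).

    p_sim : projections simulated from the current (learned) image
    p_m   : measured projections (valid only where mask is True)
    mask  : True inside the measured FOV
    e1    : residual tolerance -- differences |p_sim - p_m| <= e1 are kept
            as-is (noise tolerance); larger residuals are clipped back
    e2    : width (in detector pixels) of a linear feathering zone, so the
            measured data and the learned extrapolation join smoothly
    """
    residual = p_sim - p_m
    corrected = p_m + np.clip(residual, -e1, e1)  # tolerance band around p_m

    # Distance of each measured pixel to the nearest unmeasured pixel,
    # used to feather the correction near the FOV boundary.
    d = np.zeros(mask.size)
    run = 0
    for i in range(mask.size):
        run = run + 1 if mask[i] else 0
        d[i] = run
    run = 0
    for i in range(mask.size - 1, -1, -1):
        run = run + 1 if mask[i] else 0
        d[i] = min(d[i], run)

    w = np.clip(d / e2, 0.0, 1.0) * mask  # full correction deep inside the FOV
    return p_sim + w * (corrected - p_sim)
```

With a small e₂ the blend becomes abrupt at the FOV boundary (mimicking the visible boundary in Fig. 6(e)), while a larger e₂ smooths the transition, consistent with the flexibility noted above.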
B. Limited-Angle Tomography
The reconstruction results of one example slice (the 196th slice of Patient No. 3) in 150° cone-beam limited-angle tomography with Poisson noise are displayed in Fig. 8. The top-row images are displayed in a wide window of [-1000, 1000] HU. In the FBP reconstruction in Fig. 8(b), the top body outline is severely distorted due to the missing data, and the image also suffers from Poisson noise. In the wTV reconstruction in Fig. 8(c), the Poisson noise is reduced; however, due to the large angular range of missing data, wTV is able to restore the top body outline only partially. In Fig. 8(d), the body outline is fully restored by the U-Net. However, not all structures in the U-Net reconstruction are accurate when it is redisplayed in a narrow window of [-200, 200] HU in Fig. 8(i). In particular, the lesion in the ROI is hardly visible, while it is well observed in the ROIs of the reference image (Fig. 8(f)) and the wTV reconstruction (Fig. 8(h)). By combining deep learning with compressed sensing, the DCR reconstruction in Fig. 8(j) recovers the lesion while also reconstructing the body outline.
C. Sparse-View CT
The reconstruction results of one slice (the 162nd slice of Patient No. 3) in sparse-view (90 projections) cone-beam CT are displayed in Fig. 9. The top-row images are displayed in a wide window of [-1000, 1000] HU. In the FBP reconstruction in Fig. 9(b), streak artifacts and aliasing are observed. The high-frequency aliasing and streaks are reduced effectively by wTV, as displayed in Fig. 9(c). In the U-Net reconstruction displayed in Fig. 9(d), the streak artifacts and aliasing are reduced to some degree; however, artifacts remain in the background and patient-bed areas. The streak artifacts and aliasing are reduced effectively in the DCR reconstruction in Fig. 9(e).
In the narrow window, most anatomical structures are obscured by the streak artifacts and aliasing in the FBP reconstruction in Fig. 9(g). For wTV in Fig. 9(h), most anatomical structures are reconstructed. However, the intensities of the lung vessels in the bottom-left ROI, which consist of only a few pixels, are reduced. As a consequence, they are not visible in the given window in Fig. 9(h), although they remain visible in the wide window in Fig. 9(c). In the U-Net reconstruction in Fig. 9(i), the dark cavity structure in the center of the top-right ROI is not reconstructed, although the lung vessels in the bottom-left ROI are preserved. In the DCR reconstruction in Fig. 9(j), both the lung vessels and the dark cavity structure in the two ROIs are reconstructed.
IV. DISCUSSION
Deep learning based methods achieve encouraging image reconstructions from insufficient data, as displayed in Figs. 2(e) and (k), Fig. 8(d), and Fig. 9(d). In a wide display window, high contrast structures (e.g., bones) and body outlines are typically reconstructed with high confidence. However, many fine details may be reconstructed incorrectly when displayed in a narrow window, even though they appear very realistic. In this work, false negative lesion cases of deep learning methods are exhibited in Fig. 5(k), Fig. 8(i), and Fig. 9(d). Moreover, false positive lesion cases of deep learning are discovered, as shown in Figs. 4(e) and (k). In particular, the lesion generated by the U-Net in Fig. 4(e) is so realistic that radiology experts may draw false diagnostic conclusions. Therefore, the observations in this work serve as a warning about the robustness of deep learning in clinical applications.
Insufficient training data and noise are two main factors influencing the robustness of deep learning, as indicated by the experiments in this work. In the noise-free scenarios, false positive lesions are observed in the U-Net results in Figs. 4(e) and (k). This is potentially caused by insufficient training data, as only 425 slices are used for training in each experiment due to limited access to patient data. In addition, the observations in Figs. 5(e) and (k), Fig. 8(i), and Fig. 9(i) demonstrate that deep learning is very sensitive to noise. In deep neural networks, as a consequence of high-dimensional dot products, noise accumulates layer by layer and adds up to a large change in the output [58]. Therefore, even noise of small magnitude can have a severe impact on the output images. In the noisy cases, the U-Net is trained on data with Poisson noise, which endows it with some ability to deal with noisy images, as we observed previously [62]. However, this is still not sufficient to remove the Poisson noise entirely, as indicated by Fig. 5(k), or the network tends to over-smooth images, as indicated by Fig. 5(e). Either way, fine structures are lost.
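The layer-by-layer amplification argument can be made concrete with a toy numpy experiment (not the paper's U-Net): a small input perturbation is pushed through a stack of random fully connected ReLU layers whose weights are deliberately scaled slightly above the norm-preserving regime — an assumption chosen here so the growth with depth is easy to observe.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, depth = 512, 8

# Random dense layers; the scale 2/sqrt(dim) lies above the He-init regime,
# so each ReLU layer amplifies a perturbation by roughly sqrt(2) on average.
layers = [rng.normal(0.0, 2.0 / np.sqrt(dim), size=(dim, dim))
          for _ in range(depth)]

def forward(x):
    for W in layers:
        x = np.maximum(W @ x, 0.0)  # high-dimensional dot product + ReLU
    return x

x = rng.normal(size=dim)
eps = 1e-3 * rng.normal(size=dim)   # small input "noise"
growth = (np.linalg.norm(forward(x + eps) - forward(x))
          / np.linalg.norm(eps))    # how much the noise grew through depth
```

In this setup the perturbation norm grows multiplicatively with depth (roughly sqrt(2) per layer, so about 16x over eight layers), illustrating why even low-magnitude input noise can change the output of a deep network substantially.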
Instability is a general problem for deep learning in solving inverse problems [68]. Therefore, generating reconstructed images directly from a neural network appears inadequate. Our proposed DCR method, with the help of compressed sensing, achieves superior performance to deep learning alone in different CT reconstruction scenarios with insufficient data. For truncated data, Fig. 3 demonstrates that DCR improves the image quality of learning-based reconstructions both inside and outside the FOV. Fig. 4 and Fig. 5 show that DCR can correct false negative and false positive lesions in learning-based images, which highlights its important clinical value. For limited-angle and sparse-view data, the example results in Fig. 8 and Fig. 9 demonstrate the efficacy of DCR as well.
As a hybrid method combining deep learning with compressed sensing, DCR preserves their respective advantages: a) DCR is as robust as compressed sensing, since both keep consistency with the measured data. b) DCR is more efficient than compressed sensing alone: only 10 iterations (Fig. 7) are sufficient for convergence when the learned images are used as initialization, while compressed sensing alone typically requires many more iterations. c) DCR is more effective for image reconstruction from insufficient data. Compressed sensing alone typically fails to reconstruct regions where a large amount of data is missing, as indicated by Figs. 2(d) and (j), Fig. 8(c), and Fig. 9(h). Thanks to the information provided by the learned image prior for the missing data, DCR reconstructs those regions better than compressed sensing alone.
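The loop structure of DCR — learned image as initialization, then alternating data-fidelity and regularization steps — can be sketched in a toy 1-D setting. This is an illustrative analogue only: the real algorithm operates on cone-beam projections with a SART update and weighted TV, which are replaced here by a masked least-squares step and a plain smoothed-TV gradient descent, and the function names are invented.

```python
import numpy as np

def tv_grad(x):
    # Gradient of the smoothed 1-D total variation sum_i |x[i+1] - x[i]|.
    d = np.diff(x)
    s = d / np.sqrt(d * d + 1e-8)
    g = np.zeros_like(x)
    g[:-1] -= s
    g[1:] += s
    return g

def dcr_sketch(y, mask, x_net, n_max=10, lam=0.8, tv_step=0.02):
    """Toy 1-D analogue of the DCR loop (hypothetical helper).

    y     : measurements, valid where mask is True
    mask  : True where samples were actually measured
    x_net : learned (network) reconstruction used as initialization and
            as the prior for the unmeasured samples
    """
    x = x_net.copy()                          # learned reconstruction as init
    for _ in range(n_max):
        x[mask] += lam * (y[mask] - x[mask])  # data fidelity on measured samples
        x -= tv_step * tv_grad(x)             # TV regularization step
    return x
```

Because the iteration starts from the learned image, only a few sweeps are needed to restore consistency with the measured samples, while the prior continues to fill in the unmeasured ones — mirroring the fast convergence observed in Fig. 7.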
V. CONCLUSION
In this work, the robustness of deep learning for CT image reconstruction from insufficient data is investigated. In particular, false positive and false negative lesion cases generated by the state-of-the-art U-Net are exemplified for image reconstruction from truncated data, limited-angle data, and sparse-view data. To improve deep learning reconstruction, the DCR method is proposed, combining the advantages of deep learning and compressed sensing: it utilizes compressed sensing to compute a final reconstruction that is consistent with the measured data, while using the learned reconstruction as a prior for the unmeasured data. In this combination, the high representation power of deep learning and the high robustness of compressed sensing are integrated in the proposed DCR method.
Disclaimer: The concepts and information presented in this paper are based on research and are not commercially available.
CircRNAs in osteoarthritis: research status and prospect
Osteoarthritis (OA) is the most common joint disease globally, and its progression is irreversible. The mechanism of osteoarthritis is not fully understood. Research on the molecular biological mechanisms of OA is deepening, among which epigenetics, especially noncoding RNA, is an emerging hotspot. CircRNA is a unique circular noncoding RNA that is not degraded by RNase R, making it a possible clinical target and biomarker. Many studies have found that circRNAs play an essential role in the progression of OA, including in extracellular matrix metabolism, autophagy, apoptosis, chondrocyte proliferation, inflammation, oxidative stress, cartilage development, and chondrogenic differentiation. Differential expression of circRNAs has also been observed in the synovium and subchondral bone of the OA joint. In terms of mechanism, existing studies have mainly found that circRNAs sponge miRNAs through the ceRNA mechanism, and a few studies have found that circRNAs can serve as scaffolds for protein interactions. In terms of clinical translation, circRNAs are considered promising biomarkers, but no large cohort has yet tested their diagnostic value. Meanwhile, some studies have used circRNAs loaded in extracellular vesicles for OA precision medicine. However, many problems remain to be solved, such as the role of circRNAs in different OA stages or subtypes, the construction of circRNA-knockout animal models, and further research on circRNA mechanisms. In general, circRNAs have a regulatory role in OA and show clinical potential, but further studies are needed.
Introduction
Osteoarthritis (OA) is a classic degenerative chronic disease with significant symptoms, including pain, morning stiffness, and joint instability, leading to disability and ultimately impairing quality of life (Martel-Pelletier et al., 2016). The burden of osteoarthritis remains high worldwide, with approximately 303.1 million hip and knee osteoarthritis cases according to the Global Burden of Disease (GBD) project. As presented by Safari et al.'s analysis of GBD data up to 2017, the incidence of hip and knee osteoarthritis has increased by approximately 8%-10% since 1990 (Peat and Thomas, 2021). As a chronic disease, osteoarthritis impacts patients' quality of life and places a long-term burden on countries and societies. At present, however, the subtypes of OA, its risk factors and etiologic factors, and its mechanism of development cannot be fully explained, and the corresponding treatment methods are still under study. Osteoarthritis is characterized by cartilage degeneration, osteophyte formation, damage and remodeling of the cartilage, and varying degrees of synovitis and other joint structural damage, including to the ligaments and menisci. Identifying the molecular biological mechanisms of osteoarthritis development is essential to its treatment (Katz et al., 2021). Current work on the mechanism of osteoarthritis development mainly focuses on changes in the structure and function of joint tissues, with the participation of various molecular biological processes and a variety of cells and cytokines (van den Bosch, 2021). The understanding of these molecular mechanisms is constantly improving, and epigenetics has become a hot research direction in recent years: noncoding RNA research in osteoarthritis is increasingly in-depth, especially research on circRNAs (Ratneswaran and Kapoor, 2021).
Osteoarthritis is an inflammatory disease produced by various factors, and circRNAs participate in and regulate its progression (Zhang et al., 2021a). Circular RNA is currently one of the most actively studied topics in the field, and the purpose of this paper is to review the progress of circRNA research in osteoarthritis and to discuss its significance, the breakthroughs and deficiencies of current research, and future research directions.
Introduction to CircRNA
A wide variety of noncoding RNAs are involved in osteoarthritis, including microRNAs, lncRNAs, and circRNAs. CircRNA is a covalently closed-loop RNA with better stability and a longer half-life than linear RNA owing to its unique circular structure, which makes it more resistant to RNase R (Jeck and Sharpless, 2014) and therefore a potential candidate for diagnostic biomarkers and therapeutic targets. CircRNAs form by back-splicing, wherein the 3′ end of an exon is connected to the 5′ end of the same or an upstream exon through a 3′-5′ phosphodiester bond, forming a closed-loop structure with a back-splice junction site (Chen and Yang, 2015). They participate in a variety of physiological and pathological processes through a variety of mechanisms.
In terms of mechanism, the most commonly studied role is that of a molecular sponge for microRNAs: by competitively binding miRNAs, circRNAs relieve the repression of miRNA target genes, the so-called competing endogenous RNA (ceRNA) mechanism (Panda, 2018). An increasing number of studies have shown that their mechanisms and physiological functions are diverse (Chen, 2020a), such as regulating transcription by binding the host gene to form an R-loop structure that promotes exon skipping or transcriptional pausing (Conn et al., 2017), or cooperating with U1 snRNP to regulate Pol II transcription in the nucleus (Li et al., 2015). The translation function of circRNAs has received increasing attention in recent years: a circRNA with an IRES structure can be used as a template for translation, producing a peptide with biological function (Legnini et al., 2017). Moreover, circRNAs with ORFs can be translated in a rolling-circle manner owing to their circularity, yielding up to a hundredfold more product than linear translation (Abe et al., 2013). Other studies have found that recognition by YTHDF3 after N6-methyladenosine (m6A) methylation, which recruits eIF4G2, can also enable circRNA translation (Di Timoteo et al., 2020). CircRNAs can also affect protein function by interacting with DNA- or RNA-binding proteins (Du et al., 2017; Luo et al., 2019; Zhou et al., 2020a; Huang et al., 2020) or affect protein-protein interactions (Zhou et al., 2020a) (Figure 1). The metabolic mechanisms of circRNAs, including their upstream regulation and downstream degradation, have also been clarified in recent years (Xiao et al., 2020). CircRNA biogenesis is regulated by cis-acting elements and transcription factors (Ashwal-Fluss et al., 2014), and reverse complementary Alu repeats in the flanking introns promote exon circularization (Zhang et al., 2014).
According to recent literature, N6-methyladenosine (m6A) controls the biogenesis of circRNAs: methyltransferase-like 3 (METTL3) and YTH domain-containing 1 (YTHDC1) are reported to regulate circRNA formation (Di Timoteo et al., 2020). m6A has also been reported to regulate circRNA nuclear export and degradation: methylated circRNAs are translocated into the cytosol through YTHDC1 recognition (Chen et al., 2019a) and degraded through YTHDF2 (Park et al., 2019). Other degradation mechanisms, including RNase L, have also been reported to degrade circRNAs.

FIGURE 1
Biological functions of circRNAs. An increasing number of studies have shown that their mechanisms and physiological functions are diverse, such as regulating transcription by binding the host gene to form an R-loop structure that promotes exon skipping or transcriptional pausing, or cooperating with U1 snRNP to regulate Pol II transcription in the nucleus. A circRNA with an IRES structure can be used as a template for translation, and recognition by YTHDF3 after m6A methylation, which recruits eIF4G2, can also enable circRNA translation. CircRNAs can also affect protein function by interacting with DNA- or RNA-binding proteins or affect protein-protein interactions.
Regulatory roles of circRNAs have been elucidated in a significant fraction of diseases, involving almost all physiological processes and regulating the function of cells and organs. Their clinical translation has also been documented: circRNAs can serve as liquid-biopsy biomarkers for the early diagnosis of diseases such as tumors and for the evaluation of disease progression (Li et al., 2021a; Li et al., 2021b; Gui et al., 2021; Kuo et al., 2021; Zhang et al., 2022a; Kristensen et al., 2022).
CircRNAs in OA chondrocytes
Cartilage degeneration is a major event in the development and progression of OA. Here, we summarize the various pathological processes through which circRNAs participate in cartilage degeneration.
CircRNAs in cartilage development and differentiation
CircRNAs show significantly different expression levels at different developmental stages and promote the differentiation of bone marrow-derived and adipose-derived stem cells (Zhou et al., 2021a). In recent years, OA research has also focused on bone and cartilage development and differentiation. Among noncoding RNAs, it is well documented that miRNAs act as important regulators of cartilage development and differentiation. In recent years in particular, studies of miRNAs in cartilage differentiation have increased rapidly, and many miRNAs have been identified that regulate chondrogenic differentiation. miRNAs can also regulate mesenchymal stem cell differentiation by targeting the transcription of growth-related genes such as IHH, SOX5/6, and SOX9 (Iaquinta et al., 2021).
As noncoding RNAs closely related to miRNAs, circRNAs are therefore also likely to play a regulatory role in cartilage development and differentiation. CircPSM3 has been proven to regulate chondrocyte differentiation and is upregulated in OA, targeting miRNA-296-5p. After detecting BMP2, BMP4, BMP6, and Runx2 at the mRNA and protein levels, it was found that high expression of miRNA-296-5p could effectively promote OA chondrocyte differentiation, while a miRNA-296-5p inhibitor could reverse the differentiation of OA chondrocytes promoted by si-circPSM3 (Ni et al., 2020). A recent study also showed that the circATRNL1/SOX9 pathway is highly expressed in adipose mesenchymal stem cells, positively correlates with increased chondrogenesis, and is regulated by miR-145-5p (Zhu et al., 2021a). The circNFIX/miR-758-3p/KDM6A axis has also been reported as a possible target for regulating chondrogenesis. Furthermore, since differentiation into proliferative chondrocytes may effectively increase the likelihood of cartilage repair, studies on the roles of circRNAs in chondrogenesis may be necessary.
circRNA is involved in cartilage degeneration
Cartilage degeneration and loss, as the defining pathophysiological changes of osteoarthritis, are the most studied aspects of osteoarthritis research. Many studies have shown that circRNAs are involved in miRNA-regulated cartilage loss through ceRNA mechanisms, including apoptosis, changes in proliferative and autophagic function, inflammatory status, and degradation of the chondrocyte extracellular matrix (Mao et al., 2021a). Although a considerable number of studies confirm this, most of them appear very similar, which makes it both difficult and necessary to identify the critical circRNAs and pathways with regulatory functions. Nevertheless, an increasing number of studies have verified multifaceted functions, such as the simultaneous regulation of cartilage inflammation, chondrocyte apoptosis, and cartilage extracellular matrix degradation by a single circRNA, which suggests that inflammation, apoptosis, and extracellular matrix degradation may be common features of the terminal state of OA cartilage (Figure 2).
In this review, we summarize the existing research on the regulatory mechanisms of circRNAs in OA cartilage and present the results in Table 1, which lists the mechanisms, pathways, biological functions, and animal models used in studies of circRNAs in OA progression. Considering that these studies all experimentally validated the expression levels of circRNAs and their targets at the nucleic acid or protein level, we pooled and analyzed these circRNAs and their targets and found that 42 circRNAs were downregulated and 61 upregulated in OA cartilage. Analysis of the differentially expressed circRNAs shows that they act in OA mainly through the following pathways and genes: the classic Ras/MAPK pathway, the PI3K/AKT pathway, the TGF-β/SMAD pathway, the JAK/STAT pathway, and the FGF/ERK signaling pathway. In addition, circRNAs in OA regulate the SOX9-related cartilage differentiation pathway, the BMPR2-represented osteogenic differentiation pathway, the CCND1-represented cell cycle pathway, and the PON2-related oxidative stress pathway, or directly regulate genes related to extracellular matrix metabolism, such as TIMP3 and the matrix metalloproteinases and aggrecanases represented by MMP13 and ADAMTS5. Meanwhile, circRNAs also regulate the expression of a considerable number of transcription factors, such as FOXO1 and KLF5, as well as ubiquitination-related genes, such as FBXO21 and FBXW7, thereby affecting the expression of downstream genes (Table 1).
Regulation of chondrocyte proliferation, apoptosis, and autophagy
In studies of cartilage degeneration and loss, the status of chondrocytes is undoubtedly crucial. CircRNAs have been found to be involved in apoptotic pathways and in changes in chondrocyte proliferation and autophagy. A large number of studies have shown that circRNAs play a regulatory role in chondrocyte apoptosis (Table 1).
Almost all of the existing studies focus on the ceRNA mechanism: circRNAs regulate the expression of target genes through competitive binding of miRNAs, thereby regulating chondrocyte apoptosis. Chondrocyte apoptosis and weakened proliferation should therefore be considered the final outcome of cell fate, and therapeutic targets should focus on more upstream pathways.
CircRNAs regulate the inflammatory state of chondrocytes
Osteoarthritis is an inflammatory disease, so the inflammatory state of chondrocytes is naturally also considered, and circRNA-mediated inflammatory processes play an essential role. CircRNAs have a regulatory role in several inflammatory processes (Saaoud et al., 2021). In osteoarthritis models, upregulated or downregulated circRNAs have also been observed to affect the production and degradation of inflammatory factors. The main inflammatory factors regulated by circRNAs are IL-6, IL-8, TNF-α, and IL-17, and most in vitro osteoarthritis cell models are induced with IL-1β or TNF-α. In addition, macrophages within the joint environment have been implicated in the inflammation of OA cartilage, and circRNAs have been found to regulate macrophage polarization. One study showed that the expression of hsa_circ_0005567 in OA synovium is downregulated; overexpression of hsa_circ_0005567 inhibits M1-type macrophage polarization and promotes M2-type polarization. After treatment with the supernatant of LPS-induced THP-1 macrophages, the proliferation of chondrocytes was significantly reduced, while the apoptosis rate was significantly increased, and hsa_circ_0005567 overexpression reversed this phenomenon. Mechanistically, hsa_circ_0005567 acts through the miR-492/SOCS2 axis to suppress M1 macrophage polarization and thereby mitigate chondrocyte apoptosis in OA cartilage (Zhang et al., 2021b). Many studies have shown that circRNAs play a regulatory role in chondrocyte inflammation (Table 1), demonstrating that circRNAs can influence the fate of chondrocytes and their extracellular matrix through miRNA-mediated regulation of inflammatory factors.
Changes in the extracellular matrix of chondrocytes
Degradation of the extracellular matrix is an essential mechanism in the development of osteoarthritis and is also considered a characteristic phenotype of the disease. MMP1, MMP3, MMP13, aggrecanases such as ADAMTS4 and ADAMTS5, and cathepsins have been proven to be specific markers of matrix degradation. Many studies have now confirmed that circRNAs, by competitively inhibiting miRNAs, affect the function of their target genes, altering the composition of the extracellular matrix and eventually leading to the occurrence and progression of osteoarthritis.
circRNA is involved in the oxidative stress process
Oxidative stress regulated by circRNAs has also been observed to be involved in several other mechanisms that cause damage to chondrocytes. Yang et al. found that abnormally high expression of circRSU1 in human chondrocytes leads to the production of more reactive oxygen species (ROS) and the loss of cartilage extracellular matrix, promoting the occurrence and development of osteoarthritis. The role of circRNAs in oxidative stress deserves attention: Zhao et al. found that in NASH (nonalcoholic steatohepatitis), the mitochondrial circRNA SCAR (circSCAR) reduces mitochondrial oxidative stress by affecting the interaction between CypD and ATP5B. Under lipid exposure, circSCAR is regulated by endoplasmic reticulum stress and PGC-1. After overexpression of circSCAR in mitochondria through mitochondria-targeted nanoparticles in mice, oxidative stress in the NASH liver was significantly reversed and liver function improved. This study reveals that circRNAs also play an essential regulatory role in metabolic diseases and mitochondrial function. At the same time, a significant proportion of patients with OA have marked metabolic syndrome, raising the possibility of metabolic osteoarthritis (Kuusalo et al., 2021). On the other hand, mitochondria also play an essential role in aging-related degenerative diseases such as OA.

FIGURE 2
Summary of the role of circRNAs in OA. CircRNAs function in multiple tissues within the OA joint, and their upregulation or downregulation in chondrocytes is associated with apoptosis, abnormal autophagy, impaired proliferation, oxidative stress, and the cellular inflammatory status of chondrocytes, thereby mediating catabolism of the extracellular matrix; the loss of the extracellular matrix environment further exacerbates chondrocyte degeneration. CircRNAs also mediate fibroblast proliferation in the synovium and the polarization of macrophages, leading to the release of inflammatory factors that exacerbate chondrocyte degeneration.
CircRNAs have also been reported to play an essential role in mitochondrial stabilization in OA. SIRT3, a gene essential for mitochondrial function, shows significantly decreased expression in OA chondrocytes and is regulated by miR-505-3p. Overexpression of miR-505-3p raises ROS levels and increases chondrocyte apoptosis, while overexpression of circFAM160A2 plays a therapeutic role. Both in vivo and in vitro experiments have demonstrated that circFAM160A2 regulates chondrocyte mitochondrial stability and apoptosis through the miR-505-3p/SIRT3 axis. Recent studies have also demonstrated that circHIPK3 regulates mitochondrial function through the miR-30a-3p/PON2 axis, which in turn affects chondrocyte apoptosis. These findings suggest that circRNAs may also regulate mitochondria in OA, providing new research directions.
circRNAs act as protein scaffolds in OA chondrocytes
It is clear that the rich mechanisms of action of circRNAs, such as translation and protein interactions, have been increasingly recognized in recent years, yet in OA studies little is known about mechanisms other than ceRNA. In a recent study, Shen et al. found that the reduction of circPDE4B in the cartilage of OA patients is regulated by the upstream RNA-binding protein FUS, and that downregulation of circPDE4B results in degradation of the extracellular matrix and decreased chondrocyte viability. AGO2 RIP experiments indicated that circPDE4B does not function through the ceRNA mechanism, and after cloning and amplification of its ORF sequence, no protein translated from circPDE4B was identified. After RPD-MS and qRT-PCR verification, the authors found that RIC8 guanine-nucleotide exchange factor A (RIC8A) interacts with circPDE4B, and mass spectrometry identified MID1 as an E3 ligase interacting with circPDE4B. Screening of the downstream pathways pointed to the p38/MAPK pathway. In destabilization of the medial meniscus (DMM) model mice, overexpressed circPDE4B was also proven to inhibit activation of the RIC8A/p38 MAPK pathway and reverse the OA phenotype, indicating that circPDE4B could be an effective molecular target for OA therapy. Recent studies have also shown that circNFKB1 regulates the expression of its host gene NFKB1 by interacting with the ENO1 protein (Tang et al., 2022a), and a novel study found that circFOXO3 regulates the downstream PI3K/Akt pathway and affects chondrocyte autophagy through interaction with its parental gene product FOXO3. In the future, more circRNA functions and mechanisms of action may be revealed in OA, which also suggests the possibility of circRNAs as molecularly targeted drugs.
CircRNA is involved in the regulation of the environment within the joint

Osteoarthritis, involving multiple tissue components, is a chronic inflammatory disease with complex and incompletely understood mechanisms, and changes in the joint environment are also essential to its development. Current studies of osteoarthritis therefore also focus on other components of the joint, including the synovium and subchondral bone, and on abnormal vascular and neural factors in osteoarthritis (Ching et al., 2021; Zhang and Wen, 2021).
CircRNA is abnormally expressed in the synovium or infrapatellar fat pad of patients with osteoarthritis
The role of the synovium in the development of osteoarthritis is not negligible. Synovitis is often observed in the OA joint, and both synovial hyperplasia and the secretion of proinflammatory factors drive the progression of osteoarthritis (van den Bosch et al., 2020). MRI and ultrasound have identified a positive correlation between synovitis and the risk of structural progression and joint symptoms in osteoarthritis (Roemer et al., 2011). Shuai et al. performed circRNA sequencing of OA synovial samples and controls and identified 122 circRNAs differentially expressed in osteoarthritic synovium. GO analysis and KEGG enrichment analysis showed that the differentially expressed circRNAs were enriched in adhesion molecules, tumor pathways, TGF-β, and several osteoarthritis-related pathways such as the Hippo and WNT pathways. This article also established a circRNA-miRNA network to explore possible molecular mechanisms regulating the specific expression of circRNAs; the miR-20, miR-29, and miR-136 families have all been reported in previous OA studies and interact with several differentially expressed circRNAs (Xiang et al., 2019). On the other hand, the infrapatellar fat pad (IPFP) and the surrounding synovium are also essential tissues involved in regulating the intra-articular environment. Circular RNA expression profiling of the infrapatellar fat pad/synovium unit revealed that hsa_circ_0005265 was downregulated in both OA synovium and IPFP, targeting hsa-miR-6769b-5p and hsa-miR-1249-5p (Jiang et al., 2021a). However, the molecular mechanisms by which circRNAs differentially expressed in OA synovium or infrapatellar fat pad contribute to the progression of OA have not been thoroughly investigated.
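A circRNA-miRNA network of the kind described above can be assembled from an interaction list with a few lines of Python. In the sketch below, the two interactions of hsa_circ_0005265 are taken from the text; all other entries are hypothetical placeholders, used only to show how hub miRNAs (those targeted by many differentially expressed circRNAs) would be ranked.

```python
from collections import defaultdict

# (circRNA, miRNA) interaction pairs; every "circ_X" entry is hypothetical.
edges = [
    ("hsa_circ_0005265", "hsa-miR-6769b-5p"),
    ("hsa_circ_0005265", "hsa-miR-1249-5p"),
    ("circ_A", "miR-29a"),    # hypothetical
    ("circ_B", "miR-29a"),    # hypothetical
    ("circ_B", "miR-136"),    # hypothetical
]

targets_of = defaultdict(set)   # circRNA -> miRNAs it is predicted to sponge
sponged_by = defaultdict(set)   # miRNA  -> circRNAs that target it
for circ, mir in edges:
    targets_of[circ].add(mir)
    sponged_by[mir].add(circ)

# miRNAs hit by the most circRNAs are candidate hubs of the ceRNA network.
hubs = sorted(sponged_by, key=lambda m: len(sponged_by[m]), reverse=True)
```

On this toy list, miR-29a tops the ranking because two circRNAs target it; on real sequencing data the same tally would highlight miRNA families (e.g., miR-20, miR-29, miR-136) that interact with multiple differentially expressed circRNAs.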
Abnormal expression of circRNA in osteoclasts of osteoarthritis
Changes in subchondral bone, especially the imbalance between bone resorption and remodeling, are also essential mechanisms in the development of osteoarthritis. In this process, osteoclasts play a central role. Dou et al. first identified the differential expression of circRNAs in abnormally activated osteoclasts. Many circRNAs were differentially expressed between resting and activated osteoclasts, suggesting that circRNAs may also be extensively involved in the remodeling of subchondral bone. In another study, circRNA expression in bone marrow stromal cells during RANKL- and CSF1-stimulated osteoclast formation was sequenced and the differences analysed. This study focused on the role of circRNA-28313 in bone marrow osteoclast differentiation: knockdown of circRNA-28313 inhibited RANKL-induced osteoclast differentiation of bone marrow mesenchymal stem cells and partially prevented the bone loss induced by ovariectomy (OVX). Further downstream experiments found that circRNA-28313 relieved miR-195a-mediated inhibition of CSF1 through a ceRNA mechanism, thereby regulating osteoclast differentiation (Chen et al., 2019b). Subchondral bone destruction is a feature shared by OA and osteoporosis, and RANKL has also been implicated in OA (Kovacs et al., 2019). These studies further indicate that circRNAs likely participate in the pathogenesis of osteoarthritis.
Abnormal expression of circRNA in osteoarthritis meniscus
Meniscal degeneration and wear are also prevalent in the knee tissue of OA patients, and many OA animal models use destabilization or enucleation of the medial meniscus to mimic OA (Zaki et al., 2021). Few studies have focused on the mechanism of meniscal changes in OA, but the meniscus plays an essential role in joint stability and the protection of articular cartilage during knee function. Wang et al. performed bioinformatics analysis and prediction using public databases (GEO) and revealed 360 differentially expressed genes in the OA meniscus, predicting hsa_circ_0025119, hsa_circ_0025113, hsa_circ_0009897, and hsa_circ_0002447 as the most critical circRNAs. Although the article did not verify the expression and function of these circRNAs in subsequent experiments, it suggested that circRNAs might also play a regulatory role in the OA meniscus (Wang et al., 2020a). Meanwhile, the hsa_circ_0018069/miR-147b-3p/TJP2 axis was also found to play a regulatory role in the OA meniscus (Jiang et al., 2021b).
Upstream regulation and downstream metabolism of circRNAs in OA
Changes in circRNA levels in OA ultimately reflect altered biogenesis or degradation. The metabolism and upstream regulation of circRNAs have been hot research topics, with clear evidence that circRNA expression is regulated by transcription factors. In OA, however, the upstream regulators of circRNAs have received little attention. Wang et al. found that the transcription regulator LEF1 affects the expression of its downstream circRNF121, regulating the miR-665/MYD88/NF-κB pathway and ultimately the apoptosis and proliferation of chondrocytes and the metabolism of the extracellular matrix (Wang et al., 2020b). RNA-binding proteins regulate back splicing mainly by directly bridging distal splice sites and by binding to intronic complementary sequences (ICS) (Chen, 2020a). RNA-binding proteins including QKI, HNRNPL, Mbl, SLU7, NF90, NF110, DHX9, and ADAR1 have been reported to potentially regulate the back splicing of circRNAs (Ashwal-Fluss et al., 2014; Conn et al., 2015; Ivanov et al., 2015; Aktas et al., 2017; Li et al., 2017a). Some of them, such as QKI and DHX9, are differentially expressed in osteoarthritic tissues as well (Li et al., 2016; Tang et al., 2022b), which may represent one of the upstream mechanisms regulating circRNA expression in osteoarthritis. Another possible source of changed circRNA expression in OA is altered downstream metabolism; changes in metabolic clearance have rarely been addressed in OA studies. With a deeper understanding of circRNA metabolic pathways, this may be a direction for follow-up research.
6 The role of circRNA in OA of different etiologies and stages

6.1 circRNAs in different subtypes of OA

In fact, from a clinical point of view, some risk factors for osteoarthritis should also be taken into account. OA is a highly heterogeneous disease, and different drivers tend to shape different OA phenotypes (Van Spil et al., 2019). In existing circRNA-related studies, most researchers have not yet considered the different types of osteoarthritis caused by different factors, although some factors have been addressed; for example, several studies compared chondrocytes from load-bearing versus non-load-bearing areas of osteoarthritic cartilage to identify differential expression. The etiology of osteoarthritis in the clinic is diverse and complex, and several risk factors have been established: obesity, physical activity, structural factors, and genetics (Martel-Pelletier et al., 2016). Patients with OA of different etiologies or etiological factors should be treated separately. Therefore, studies should be more refined with respect to OA of different etiologies, and corresponding studies aimed at distinct etiological subtypes of OA, such as RNA sequencing and identification of differential expression in patients with distinct metabolic profiles or distinct stress factors, may yield different results. The construction of different OA models usually simulates different OA initiating factors, especially in the design of animal models (Zaki et al., 2021). Most research on OA circRNAs uses anterior cruciate ligament transection (ACLT) and destabilization of the medial meniscus (DMM) models in rats and mice. These two models simulate the initiating factors of traumatic osteoarthritis. More pathogenic factors of OA should be considered, and designing different animal models may be a solution to this problem.
Additionally, molecular subtyping of OA has received increasing attention. Steinberg et al. conducted a cluster analysis of mRNA sequencing data obtained from cartilage and synovium samples of OA patients and discovered two subgroups in the synovium, one related to inflammation and the other to extracellular matrix metabolism. High-grade inflammation of cartilage was also linked to female sex and proton pump inhibitor use (Steinberg et al., 2021). In another study, researchers from China sequenced cartilage samples from 131 OA patients and carried out a cluster analysis of their expression profiles. The patients were divided into four subtypes: one characterized by glycosaminoglycan metabolism disorder, one marked by collagen metabolism disorder, one with sensory neuron activation, and an inflammation subtype. Biochemical markers have also been employed to cluster patients into three groups, C1, C2, and C3. C1 is associated with low tissue turnover, including low repair and turnover of articular cartilage and subchondral bone. C2 is characterized by structural damage, such as high bone formation and resorption and cartilage degeneration. C3 is linked to systemic inflammation, joint tissue degeneration, and cartilage degeneration. In the FNIH/OAI cohort, C1 had the highest proportion of progressors, C2 was linked to the progression of mechanical structure, and C3 was associated with pain, consistent with the molecular typing (Angelini et al., 2022). Future research could construct circRNA expression profiles of patients with different OA subtypes and explore the relevant mechanisms in depth.
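The cluster analyses behind these subtypes follow the same basic recipe: represent each patient as an expression vector and partition the vectors into groups. As a toy illustration only (not the pipeline of any of the cited studies), the following is a minimal k-means in pure Python with deterministic farthest-point seeding; the sample vectors and cluster count are made up.

```python
def dist2(a, b):
    """Squared Euclidean distance between two expression vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(samples, k, iters=50):
    """Minimal k-means: farthest-point seeding, then alternate
    assignment and centroid-update steps."""
    # deterministic seeding: first sample, then points farthest from chosen seeds
    centroids = [list(samples[0])]
    while len(centroids) < k:
        far = max(samples, key=lambda s: min(dist2(s, c) for c in centroids))
        centroids.append(list(far))
    assign = [0] * len(samples)
    for _ in range(iters):
        # assignment step: nearest centroid
        for i, s in enumerate(samples):
            assign[i] = min(range(k), key=lambda c: dist2(s, centroids[c]))
        # update step: centroid = mean of its members
        for c in range(k):
            members = [s for i, s in enumerate(samples) if assign[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign
```

Real subtype analyses add normalization, feature selection, and stability assessment on top of this core step.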
circRNAs in different stages of OA
Another important aspect is that OA, as a chronic disease, has a markedly long pathological course, and different stages of OA have different characteristics. Sun et al. (2011) measured the miRNA expression profiles of rat articular cartilage at different developmental stages, sequencing femoral head cartilage on days 0, 21, and 42 after birth. The authors observed that, on the one hand, some miRNA clusters were continuously expressed at all three stages, which may reflect conserved miRNA sequences during development. On the other hand, the expression of some miRNAs changed markedly between stages; in the sequencing results combined with PCR validation, some miRNAs showed high expression early, whereas others showed high expression at later stages of development. This study reveals staged changes in the miRNA expression profile across development. Ali et al. classified 91 OA patients into early OA (n = 41) and late OA (n = 50) groups using Kellgren-Lawrence grading of standard X-rays, performed next-generation sequencing on plasma samples, and identified that hsa-mir-335-3p, hsa-mir-199a-5p, hsa-mir-671-3p, hsa-mir-1260b, hsa-mir-191-3p, hsa-mir-335-5p, and hsa-mir-543 were more highly expressed in early OA compared to late OA, while hsa-mir-193b-5p, hsa-mir-193a-5p, and hsa-mir-455-5p were more highly expressed in late OA. The authors also identified some novel miRNAs, some highly expressed in early OA, some in late OA, and others in both early and advanced stages (Ali et al., 2020). This result illustrates that the expression of miRNAs may change continuously with the progression of OA.
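The early-versus-late comparison above amounts, in its simplest form, to a fold-change screen between stage groups. The sketch below illustrates the idea; the expression values are made up and the marker names merely echo those discussed above, so neither reflects the cited data.

```python
import statistics

def stage_profile(expr_early, expr_late, min_fc=1.5):
    """Label each marker as enriched in early OA, enriched in late OA,
    or flat, by the ratio of group means (a toy fold-change screen)."""
    labels = {}
    for name in expr_early:
        fc = statistics.mean(expr_early[name]) / statistics.mean(expr_late[name])
        if fc >= min_fc:
            labels[name] = "early"
        elif fc <= 1.0 / min_fc:
            labels[name] = "late"
        else:
            labels[name] = "flat"
    return labels

# Made-up expression values per patient group
early = {"mir-335-3p": [10.0, 12.0], "mir-455-5p": [1.0, 1.2], "mir-x": [5.0, 5.0]}
late  = {"mir-335-3p": [2.0, 2.2],  "mir-455-5p": [6.0, 6.4], "mir-x": [5.1, 4.9]}
```

Published analyses use moderated statistics and multiple-testing correction rather than a bare mean ratio, but the stage stratification itself is the key design choice.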
Therefore, we can infer that determining the characteristic circRNA expression profile at different stages of the disease, such as early and late OA, may yield different results. Stratifying patients for circRNA expression profiling based on symptoms, signs, functional assessments such as the WOMAC or KOOS knee scores (Roos and Toksvig-Larsen, 2003), and imaging grades such as the Kellgren-Lawrence grade may be considered, and may help us understand the role of circRNAs in OA more deeply.
7 Clinical application and potential of circRNA

7.1 circRNA as a marker for liquid biopsy

CircRNAs lack free 3′ and 5′ ends and are therefore less susceptible to degradation by RNase R, which confers a longer half-life and greater stability than other noncoding RNAs or mRNAs and suggests the possibility of circRNAs as indicators for clinical testing. CircRNAs as liquid biopsy indicators have been frequently mentioned in tumor studies (Li et al., 2022a; Wang et al., 2022a; Ruan et al., 2022; Xue et al., 2022). CircRNAs have been detected in the cartilage, synovial fluid, and serum of OA patients at levels different from controls, suggesting their use as indicators for the early diagnosis of osteoarthritis. Chen et al. measured hsa_circ_101178 levels in the serum and synovial fluid of OA patients and controls and found significantly higher levels in the OA group, with a positive correlation between serum and synovial fluid levels. In addition, serum hsa_circ_101178 was positively correlated with the Kellgren-Lawrence grade and the WOMAC pain score of OA patients (Chen, 2020b). Yu et al. identified five circRNAs that were significantly elevated in the synovial fluid of OA patients compared with healthy controls. AUC analysis of diagnostic value found that hsa_circ_0104873, hsa_circ_0104595, and hsa_circ_0101251 could effectively distinguish OA patients from healthy controls, and the three circRNAs were also positively correlated with the radiological grade and symptom severity of OA patients (Yu et al., 2018). Similarly, Wang et al. identified that circ_0032131 levels in the peripheral blood of OA patients differed significantly from those in the healthy population. Additional studies have reported that plasma circRNA-016901 can effectively distinguish osteoarthritis from rheumatoid arthritis and is correlated with disease severity.
Although these studies did not include clinical trials to prove the diagnostic value of circRNAs, they suggest, to some extent, the possibility of circRNA as a molecular marker for the diagnosis of osteoarthritis.
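The AUC analyses cited above reduce, at their core, to a rank statistic: the probability that a randomly chosen patient scores above a randomly chosen control. The following pure-Python sketch computes it directly (ties count one half, which makes it the Mann-Whitney U statistic divided by the number of case-control pairs); the circRNA levels used are made-up illustrative values.

```python
def roc_auc(cases, controls):
    """Rank-based AUC: probability that a randomly chosen case scores
    higher than a randomly chosen control, with ties counting 0.5."""
    wins = 0.0
    for x in cases:
        for y in controls:
            if x > y:
                wins += 1.0
            elif x == y:
                wins += 0.5
    return wins / (len(cases) * len(controls))

# Made-up circRNA levels: OA patients vs healthy controls
oa_levels      = [3.1, 4.0, 5.2, 2.9]
control_levels = [1.0, 2.2, 3.1, 1.8]
auc = roc_auc(oa_levels, control_levels)
```

An AUC of 0.5 means the marker is uninformative, 1.0 means perfect separation; a diagnostic study would additionally report a confidence interval and an operating threshold.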
Clinical transformation potential of circRNA
Research on circRNAs is steadily moving from basic research to clinical translation, and their clinical potential is being explored as their mechanisms and functions become better understood. For example, several animal trials have demonstrated that intra-articular injection of AAV or adenovirus carrying shRNAs or plasmids targeting circRNAs, thereby silencing or overexpressing osteoarthritis-associated circRNAs in articular cartilage, can effectively alleviate the progression of osteoarthritis in animal models. Treatment by intra-articular injection of sodium hyaluronate and glucocorticoids is widely used in the clinic, although it remains controversial. The development of precision medicine approaches targeting specific nucleic acid drugs within OA joints, combined with individual sequencing results, may lead to better outcomes and personalized treatment options. Because circRNAs are stable and not easily degraded by RNases, and given their potential for translation and interaction with proteins, they may be suitable carriers for nucleic acid drugs (Figure 3).
Extracellular vesicles loaded with circRNA
Extracellular vesicles are also a significant research hotspot and have been suggested as promising carriers for drugs, particularly nucleic acid drugs. Extracellular vesicles (EVs) are small vesicles released from different cells into the extracellular matrix; classified by origin and size, they include three subtypes, apoptotic bodies (500 nm-5 μm), microvesicles (150-500 nm), and exosomes (40-150 nm), all of which can participate in intercellular communication. Exosomes, a subset of EVs secreted by most cells, have good biocompatibility, low toxicity and immunogenicity, and great designability, and have received extensive attention over the past decades as therapeutic carriers and diagnostic markers. Circular RNAs naturally carried by exosomes have been widely used in the treatment of cancer, cardiovascular, and metabolic diseases (Zhang et al., 2022b).
In the treatment of OA, several studies have proposed that extracellular vesicles may be used as carriers loaded with specific circRNAs to exert therapeutic effects. For example, Li et al. isolated extracellular vesicles from MSCs and observed an increase in COL2A1, Sox9, and aggrecan expression after coculturing circHIPK3-overexpressing extracellular vesicles with chondrocytes, along with inhibited expression of OA-related markers such as MMP-13 and Runx2. Functional experiments also found that circHIPK3 could alleviate IL-1β-induced apoptosis and the IL-1β-induced inhibition of chondrocyte proliferation and migration (Li et al., 2021c). Tao et al. found that circRNA3503 was significantly increased after melatonin (MT)-induced cellular sleep; mechanistically, circRNA3503 acted as a sponge for hsa-miR-181c-3p and hsa-let-7b-3p. They prepared and isolated circRNA3503-loaded extracellular vesicles (circRNA3503-OE-SEVs) from SMSCs, and the feasibility of circRNA3503-OE-SEVs in preventing OA progression was validated by in vivo and in vitro experiments (Tao et al., 2021). In addition, an exosome-transported circRNA (circ_0001236) suppresses IL-1β via the miR-3677-3p/Sox9 pathway in TGF-β-induced cartilage degeneration, promoting chondrocyte proliferation (Mao et al., 2021b).
mRNA therapeutics
mRNA can theoretically produce any protein on demand inside the cell, enabling the treatment of disease. However, linear mRNAs face many challenges due to their intrinsic limitations. CircRNAs contain neither a 5′-end cap nor a 3′-end polyA tail and form a covalently closed circular structure by back splicing, which protects them from degradation by exonucleases. At the same time, circRNAs require no complex modification when synthesized in vitro, so compared with linear mRNA they offer high stability, low immunogenicity, and long-lasting expression (Santer et al., 2019). Recent studies have identified nearly a thousand endogenous circular RNAs that are translatable, half of which can synthesize large-molecular-weight proteins by rolling circle translation. The authors explored factors that influence the translation of circular RNAs and, by optimizing the relevant conditions, increased circular RNA protein production several hundred fold, providing a more abundant and longer-lasting translated protein product under in vitro and in vivo conditions.
In vitro synthesis of circRNAs is the basis of drug development, and studies have successfully prepared circRNAs in vitro and elucidated their functions. Two major routes are currently known for the in vitro synthesis of circular RNAs: direct intramolecular ligation into circles catalyzed by T4 RNA ligase, and self-splicing into circles based on group I intron ribozymes (from T4 bacteriophage or Anabaena) (Chen et al., 2017; Wesselhoeft et al., 2019; Rausch et al., 2021; Qu et al., 2022). Studies have shown that circular RNA synthesized by T4 RNA ligase does not elicit an intracellular innate immune response (Liu et al., 2022a). This provides an important foundation for the further application of in vitro-synthesized circular RNAs and holds promising prospects for the further development of nucleic acid aptamers and gene therapy based on circular RNA technology. Artificially manufactured circular RNA technology has also been successfully applied in drug development. CircINSR was found to be significantly underexpressed in heart tissue from patients with heart failure and in mice with cardiac remodeling induced by left ventricular pressure overload. The authors explored the therapeutic effect using two modalities, an AAV-loaded overexpression plasmid and in vitro-transcribed circINSR, and found that in vitro-transcribed circINSR achieved superior therapeutic and protective effects against doxorubicin-mediated cardiomyocyte death.
Prospect of research
At present, there is a considerable number of studies on the function of circRNAs in osteoarthritis, which fully shows that circRNAs play a regulatory role in OA. However, there are still many deficiencies in the existing research. For example, mechanistic work is largely limited to circRNAs acting as sponges for miRNAs and fails to consider disease characteristics such as OA of different types and stages.

FIGURE 3 | Clinical application potential of circRNA. CircRNA has great potential for both diagnostic and therapeutic applications. In diagnosis, it can be used as a liquid biopsy tool for early screening of OA patients. On the therapeutic side, circRNA synthesized in vitro (linear RNA cyclization) or derived from stem cells, carried by materials such as extracellular vesicles and injected through the joint cavity, is considered promising for treatment.

However, the potential of circRNAs has been confirmed in many other areas, giving us some directions for studying their role in OA. Many studies have confirmed the critical role of miRNAs in osteoarthritis, and miRNAs as molecular targets for precision therapy have also proven feasible (Ji et al., 2020). CircRNA is an essential competitive inhibitor of miRNA, and its high degree of conservation and stability makes it a possible means of fighting OA effectively. However, as described earlier, the mechanisms of action of circRNA are rich and should not be limited to the ceRNA model; by comparison, only a few articles illustrate circRNA functions in OA through interactions with proteins. With the continuous progress of sequencing technology, many studies have shown that circRNAs have the potential for translation and can regulate transcription. Breakthrough progress has been made in the study of circRNA translation products as disease-specific molecular targets in cancer. In triple-negative breast carcinoma, circ-HER2 is expressed in approximately 30% of tumor cells; the translated peptide HER2-103 promotes the proliferation and invasion of triple-negative breast cancer cells and can also serve as a target molecule for the anti-HER2 targeted drug pertuzumab. Many other studies have confirmed the critical role of circRNA translation products in disease development (Zhou et al., 2021b), and a database on circRNA translation has been established. However, to date, no study of osteoarthritis has identified a translational function of circRNA, which may be due to a lack of in-depth understanding of circRNA in previous research; there are good reasons to assume that circRNAs in OA may also have translational functions that regulate the progress of the disease.
At present, research on circRNAs in OA has also been performed in a considerable number of experimental animals. As shown in Table 1, the animal models used extensively in existing studies are rats, mice, and rabbits, with knockdown or overexpression of circRNAs achieved by intra-articular injection of adenovirus (AV), adeno-associated virus (AAV), or lentivirus. However, because of the unique and complex structure of the articular cavity, it remains unclear how well intra-articular injections are absorbed by articular cartilage and surrounding tissues. Therefore, the effect of such knockdown or overexpression in animal models has not been effectively verified, which poses obstacles to the development of drug targets and to drug safety. All of these findings demonstrate the importance of establishing circRNA knockout models. Knocking out circRNAs has always been a challenge; in the past such knockouts may have been limited by technical constraints, but the emergence and advancement of new technologies have made specific knockout of circRNAs possible. Liu et al. used CRISPR-Cas9 to build circKcnt2 knockout mice to study the role of circRNA in intestinal inflammation. The knockout site was selected within the intronic complementary sequences of the flanking introns, which are thought to be the critical sequences for the back splicing of circRNAs; the circKcnt2 knockout was thus constructed by deleting these intronic complementary sequences from the genome. Analysis of the expression of circKcnt2 and its host gene in the knockout model proved that circKcnt2 could be knocked out specifically while the expression of its host gene was unaffected. This experiment successfully constructed a circRNA knockout model and demonstrated the function of the circRNA. Establishing similar circRNA knockout models in OA would improve the current understanding.
In conclusion, circRNAs may function as regulators in OA. A growing body of evidence suggests that circRNAs can regulate chondrocyte proliferation, apoptosis, differentiation, and autophagy, as well as extracellular matrix degradation, oxidative stress, and inflammatory processes in chondrocytes. Moreover, circRNAs can modulate the intra-articular environment, including the synovium, meniscus, and subchondral bone, and can serve as biomarkers for liquid biopsy. Although the number of available studies is already considerable, many deficiencies remain regarding mechanisms, animal model construction, and disease heterogeneity.
Author contributions
JL supervised and was responsible for the whole study. ZL was responsible for the literature search and data analysis, wrote the first draft, and drew the figures and tables. All authors revised the paper critically for intellectual content and approved the final version.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Fully consistent CFD methods for incompressible flow computations
Nowadays collocated grid based CFD methods are among the most efficient tools for computing flows past wind turbines. To ensure robustness, the methods require special attention to the well-known problem of pressure-velocity coupling. Many commercial codes ensure pressure-velocity coupling on collocated grids with the so-called momentum interpolation method of Rhie and Chow [1]. As is known, the method and some of its widely spread modifications result in solutions that depend on the time step at convergence. In this paper the magnitude of the dependence is shown to contribute about 0.5% to the total error in a typical turbulent flow computation. Nevertheless, if coarse grids are used, the standard interpolation methods exhibit much stronger non-consistent behavior. To overcome the problem, a recently developed interpolation method, which is independent of the time step, is used. It is shown that in comparison to another time step independent method, the method may enhance the convergence rate of the SIMPLEC algorithm by up to 25%. The method is verified using turbulent flow computations around a NACA 64618 airfoil and the roll-up of a shear layer, which may appear in a wind turbine wake.
Introduction
During the last decade a large effort has gone into developing CFD tools for the prediction of wind turbine aerodynamics. In 1999 the flow over the NREL Phase II rotor was computed using an overset grid method by Duque [2]. Later, accurate predictions of the NREL Phase VI rotor were performed by Sørensen [3] and Johansen [4], and a stalled rotor prediction was also presented by Duque in [5]. To decrease the computational costs, Xu and Sankar proposed in [6] an approach where small zones surrounding each blade were solved, whereas the rest of the domain was treated using a significantly less expensive full potential solver. Rather than modeling the entire rotor, Pape in [7] modeled a single blade, omitting the tower and nacelle. To predict rotor-tower interactions, computations of fully resolved rotors were performed by Zahle using an incompressible overset grid method in [8]. A compressible sliding grid method was used by Gomez-Iradi in [9] for computations of the NREL Phase VI rotor. CFD modeling of laminar-turbulent transition for wind turbine rotors was presented by Sørensen in [10]. In addition to the computations on structured grids, computations using unstructured grid solvers were presented by Sezer-Uzol in [11] and Potsdam in [12]. Wake simulations behind wind turbines using CFD methods have also been an intensive field of study; a recent comparison of CFD wake simulations with the MEXICO experimental data can be found e.g. in Bechmann [13]. To decrease the computational costs, actuator disk and actuator line models have been applied for wake modeling, as can be seen in [14-16].
Nowadays, most of the incompressible CFD tools for wind turbine computations are based on the Semi-Implicit Method for Pressure Linked Equations (SIMPLE) to link the velocity and pressure. To achieve a high robustness of the SIMPLE algorithm there is a common practice to employ it on collocated grids. One of the complexities of the collocated grid-based algorithms is the well known problem of the pressure-velocity decoupling. To overcome the problem, vast majority of the methods used today in commercial codes are based on the so-called momentum interpolation methods, initially proposed by Rhie and Chow [1].
Over the last two decades various modifications of the momentum interpolation have been presented in order to ensure an accurate solution on collocated grids. Originally the Rhie-Chow interpolation was developed for steady flow computations and is known to depend on the velocity underrelaxation parameter at convergence. This problem was solved in [17,18]; nevertheless, it was later shown in [19,20] that if the Rhie-Chow interpolation is used for unsteady flow computations, pressure wiggles appear for small time steps. An interpolation method for unsteady flow computations free from the pressure wiggles was later proposed independently by Choi [21] and Shen et al. [19]. Note that the method of Shen et al. possesses the same properties as the method of Choi but, contrary to Choi's method, is based on a second order scheme in time. As shown in [22], both the methods of Shen et al. and Choi depend weakly on the time step and the relaxation parameter at convergence. To overcome this difficulty, several methods independent of the time step and the relaxation parameter were proposed in [20,23-25]. Nowadays, in spite of the existence of the time step independent methods, the standard methods of Choi and Shen et al. are still widely used, as can be seen for example in [26-29].
On collocated grids both the solution accuracy and the convergence rate of the SIMPLE-like algorithms strictly depend on the choice of the interpolation method. Originally the SIMPLE-like algorithms, such as SIMPLE [30], SIMPLEC [31] and PISO [32], were developed for staggered grids, where mass flux interpolation is not necessary. In most of the computational codes used in engineering applications the choice of the SIMPLE-like algorithm and the choice of the mass flux interpolation method are made independently [33,34]. In the current paper it is shown that with such an approach the convergence rate is not always optimal, and that in order to achieve a high efficiency of the SIMPLE-like algorithms, the mass flux interpolation should be chosen consistently with the algorithm.
Taking the SIMPLEC algorithm as an example, we will demonstrate that the usage of interpolation methods which are fully compatible with the SIMPLEC algorithm results in a convergence rate up to 25% higher than that of the standard SIMPLEC algorithm. Theoretical justification of this fact is given in [35], whereas here, using typical turbulent flow computations, we will show that a proper choice of interpolation method may also increase the accuracy of the SIMPLEC algorithm.
Standard interpolation methods on collocated grids, such as the methods of Choi [21] and Shen et al. [19], are known to result in solutions that depend on the time step at convergence. In this work the magnitude of the dependence is estimated and shown to be negligible for a typical turbulent flow computation. Nevertheless, when coarse grids are used, the inconsistency of the solution becomes non-negligible. To overcome the problem, the recently developed Modified Momentum Interpolation method (MMI) of Kolmogorov et al. [35] is used. The efficiency of the standard and the MMI interpolation methods is tested on the roll-up of a shear layer vortex, which appears in wind turbine wakes. For the standard interpolation methods of Shen et al. [19,36], the magnitude of the solution's dependence on the time step is measured for the turbulent flow field around a NACA 64618 airfoil. For this test case the MMI method is shown to produce solutions that are independent of the time step at convergence.
Discretization of the Navier-Stokes equations
The discretization of the incompressible Navier-Stokes equations for unsteady flow computations on collocated grids is presented below. First, the momentum equations are discretized. Second, the discrete continuity equation is presented together with one of the recently developed interpolation methods of Kolmogorov et al [35].
Momentum equations
Using the second order backward difference scheme in time and grouping the two momentum equations in the x- and y-directions together, the system of underrelaxed momentum equations on collocated grids can be expressed for a control volume p in the following form:

(Ā_p + (3/2) A^V_p) υ_p^{m+1} + Σ_nb A_nb υ_nb^{m+1} = S_p + A^V_p (2 υ_p^n − (1/2) υ_p^{n−1}) + (Ā_p − A_p) υ_p^m,   (1)

where the vector υ_p denotes the velocity flow field (u_p, v_p)^T, and the vector S_p contains the explicitly treated source terms (S^x_p, S^y_p)^T. The terms A_p and A_nb are the diagonal and non-diagonal matrix coefficients accounting for the discrete convective and diffusive terms, and A^V_p is the coefficient of the time derivative, equal to ρ dV_p / τ, where τ is the time step and ρ dV_p is the control volume mass. The term Ā_p is the underrelaxed diagonal coefficient of the momentum matrix; following [19,24,36] the equations are underrelaxed using the spatial term A_p as

Ā_p = A_p / α,   (2)

where α is the velocity underrelaxation parameter.
The superscripts m and n are the subiteration and time step counters, respectively, such that the solution at time step n + 1 is obtained at convergence. In order to compute the flow field at subiteration step m + 1, the coefficients A_p, A_nb and Ā_p are taken from the former subiteration step m. In the notation of the coefficients, the superscript counter m is dropped in the following sections.
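The sub-iteration update implied by this discretization can be sketched for a single control volume. This is an illustrative sketch, not the paper's solver: the coefficient layout (BDF2 history terms plus underrelaxation through an inflated diagonal Ā_p = A_p/α with a compensating source) is one standard reading of the scheme described in the text, and the scalar "solve" below stands in for the full coupled linear system.

```python
import numpy as np

def underrelaxed_update(A_p, A_nb, v_nb, S_p, A_V, v_n, v_nm1, v_m, alpha):
    """One sub-iteration of an underrelaxed, BDF2-discretized momentum
    equation for a single control volume (illustrative sketch only).

    A_p    : diagonal coefficient from convection/diffusion
    A_nb   : array of neighbour coefficients
    v_nb   : neighbour velocities at sub-iteration m
    S_p    : explicitly treated source term
    A_V    : rho*dV/tau, coefficient of the time derivative
    v_n, v_nm1 : solutions at time steps n and n-1 (BDF2 history)
    v_m    : current sub-iteration value (enters the relaxation source)
    alpha  : velocity underrelaxation parameter
    """
    A_bar = A_p / alpha                       # underrelaxed diagonal
    rhs = (S_p
           - np.dot(A_nb, v_nb)               # neighbour contributions
           + A_V * (2.0 * v_n - 0.5 * v_nm1)  # BDF2 history terms
           + (A_bar - A_p) * v_m)             # underrelaxation source
    return rhs / (A_bar + 1.5 * A_V)          # BDF2 adds (3/2)*A_V to diag
```

Note that at convergence (v_m equal to the returned value) the underrelaxation terms cancel exactly, so the converged solution does not depend on alpha; this is the property the time-step-consistency discussion below builds on.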
Mass flux interpolation methods
To ensure continuity of the flow field, the momentum equations in Eq. (1) have to be solved together with the continuity equation:

Σ_k f_k^{m+1} = 0,   (3)

which expresses the fact that the sum of the mass fluxes f_k^{m+1} through the control volume faces k equals zero.
On collocated grids the cell face fluxes are not directly available. One of the most widespread methods to define the fluxes is the momentum interpolation method originally proposed by Rhie and Chow [1]. Nowadays, for unsteady flow computations it is common practice [26-29] to employ the modified Rhie-Chow interpolations proposed by Choi [21] or Shen et al [19]. It is known that these methods are weakly dependent on the time step at convergence, but the magnitude of this dependence in real-life applications has not yet been quantified.
There also exist several interpolation methods on collocated grids that result in solutions independent of the time step at convergence [20,22-25]. In this work the recently developed Modified Momentum Interpolation (MMI) method of Kolmogorov et al [35] is employed. On the one hand, the MMI method results in a solution that is independent of the time step at convergence. On the other hand, as will be seen later in the paper, it may increase the convergence rate of the SIMPLEC algorithm by up to 25% in comparison with other interpolation methods existing in the literature.
According to the MMI method, the mass flux at a cell face k is defined by an expression of Rhie-Chow type, Eq. (4), given in full in [35], where [ ]_k denotes linear interpolation from the cell centers to the cell face k, and where the term χ_k and the vector h_p^{m+1} are defined by Eq. (5); the new parameters γ and β appearing there are constants described below. The MMI method results in a solution that is independent of both the time step and the relaxation parameter at convergence, as will be seen in the results section. The method has two forms, namely the first form with γ = 0, β = 0 and the second form with γ = 1, β > 0. The two forms possess different properties:

1) The MMI method with γ = 0 and β = 0 is similar to an existing method of Pascau [25], but contrary to Pascau's method it is based on momentum equations discretized with the second order backward difference scheme in time. As shown in [35], the MMI method in this form is fully compatible with the SIMPLE algorithm.
2) The MMI method with γ = 1 and β > 0 is an interpolation method that is likewise independent of the time step and the relaxation parameter at convergence. According to [35], for robust performance of the method the parameter β has to be equal to 0.04. Contrary to other time step independent methods, the MMI method in this form is fully compatible with the SIMPLEC algorithm. As will be seen in the results section, when the MMI method is applied with the SIMPLEC algorithm it becomes advantageous in convergence speed over other time step independent interpolation methods.
Results
Two test cases are computed, namely an idealized case of a shear layer [37], which appears in wind turbine wakes, and the turbulent flow around a NACA 64618 airfoil. The steady and unsteady state solutions are considered converged when the residuals, computed in the L1 norm, are reduced by factors of 10^8 and 10^6, respectively. For all tests the velocity underrelaxation parameter α = 0.8 is used.
Roll-up of a shear layer vortex
An idealized case of a shear layer, which appears in wind turbine wakes, is computed at Re = 100 using the SIMPLEC algorithm and the MMI interpolation method in Eq. (4). To achieve high efficiency of the SIMPLEC algorithm, the MMI method is used with γ = 1 and β = 0.04 (see Section 2.2). To verify the efficiency of the MMI method, it is compared with the interpolation method of Pascau [25].
The flow is initialized in the square domain [0, 2π]² with periodic boundary conditions as follows:

u(x, y) = tanh((y − π/2)/δ) for y ≤ π,  u(x, y) = tanh((3π/2 − y)/δ) for y > π,
v(x, y) = ε sin(x),  with δ = π/15 and ε = 0.05.

An example of the flow field at the dimensionless time t = 8 is shown in Fig. 1. The solutions of the two methods are compared at the dimensionless time t = 4, and the errors are measured relative to the reference solution of the MMI method obtained on a fine grid of 512² cells with a small time step of 0.005. The computations are performed on a sequence of successively coarsened grids. For each coarse grid, the error is computed by interpolating the reference solution onto the coarse grid and subtracting the solution obtained on that grid. As seen from Fig. 2, second order spatial accuracy is obtained and the error tolerances of the two methods are nearly identical. The corresponding work loads of the methods, measured in CPU seconds, are plotted in Fig. 3. The efficiencies of the methods can be compared by comparing the work loads at the same accuracy levels. Therefore, the work load of each method is first found as a function of the tolerance error. Then the ratio of the work loads of the MMI method, Work_M, and Pascau's method, Work_P, is computed and plotted in Fig. 4.
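The initialization can be sketched as follows. The profile below is a common reconstruction of the double shear layer roll-up set-up: the domain size, layer positions at y = π/2 and y = 3π/2, and the sinusoidal perturbation are assumptions that should be checked against the original reference [37].

```python
import numpy as np

def shear_layer_init(n, delta=np.pi / 15, eps=0.05):
    """Initial velocity field for a double shear layer roll-up test on a
    square periodic domain (a common reconstruction, not the verified
    set-up of [37]).

    n : number of cells per direction
    Returns cell-centred u, v arrays of shape (n, n).
    """
    L = 2.0 * np.pi
    xc = (np.arange(n) + 0.5) * L / n          # cell-centre coordinates
    x, y = np.meshgrid(xc, xc, indexing="ij")
    # tanh shear layers centred at y = pi/2 and y = 3*pi/2
    u = np.where(y <= np.pi,
                 np.tanh((y - 0.5 * np.pi) / delta),
                 np.tanh((1.5 * np.pi - y) / delta))
    # small sinusoidal perturbation that triggers the roll-up
    v = eps * np.sin(x)
    return u, v
```

The error study described above would then interpolate a fine-grid reference solution onto each coarsened grid and take the difference in a chosen norm; halving the grid spacing should reduce that error by roughly a factor of four for a second-order scheme.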
As seen in Fig. 4, the work load of the MMI method is lower than that of Pascau's method for nearly all accuracy levels. For the highest accuracy level, the efficiency of the MMI method is 25% higher than that of Pascau's method. As also seen in Fig. 4, there is a narrow accuracy region where the MMI method is less efficient. This is explained by the fact that the optimal β parameter of the MMI method depends on the grid resolution. Even though the optimal β is not known in advance, the results of this test case and the detailed analysis of the optimal β presented in [35] show that for general use β = 0.04 can be employed to ensure the higher efficiency of the MMI method.
Turbulent flow around a NACA 64618 airfoil
The turbulent flow around a NACA 64618 airfoil at zero angle of attack is computed at Re = 1.6 · 10^6 using the SIMPLEC algorithm. An O-type grid is used, with the outer boundary placed at a distance of 20 chords from the airfoil. Two standard interpolation methods of Shen et al [19,36] are used as representatives of interpolation methods whose solutions depend on the time step at convergence. For comparison with the standard methods, the MMI method in Eq. (4) is used. (Figure 5 compares Shen's methods [19,36] with the MMI method (γ = 0, β = 0) on grids with 64 x 32 and 128 x 64 cells.)
The k − ω SST turbulence model is used on two relatively coarse grids with 64 x 32 and 128 x 64 cells. For the two grids, the maximum y+ at the points one cell away from the airfoil equals 2.6 and 0.4, respectively. To measure the time step dependence of the standard methods, the lift coefficient is compared against the experimental value of 0.44 reported in [38]. As seen from Figs. 5(a) and 5(b), contrary to the MMI method, the solutions of the standard methods of Shen depend on the time step at convergence. For the standard methods, the change of the error due to the time step dependence is about 1-2% on the grid with y+ = 0.4, whereas on the grid with y+ = 2.6 the error variation may reach up to 5.5%, as seen from Fig. 5(b).
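The y+ values quoted here follow from the definition y+ = u_τ y / ν, where u_τ is the friction velocity. A common engineering estimate of the first-cell height needed to hit a target y+ can be sketched as below; the flat-plate skin-friction correlation used is an illustrative assumption, not the grid-generation procedure of this paper.

```python
def first_cell_height(y_plus, re, chord=1.0):
    """Estimate the wall-normal height of the first cell for a target y+,
    using a turbulent flat-plate correlation Cf ~ 0.026 * Re^(-1/7)
    (an engineering estimate; the exact correlation is an assumption).

    y_plus : target dimensionless wall distance
    re     : chord Reynolds number
    chord  : chord length (the length scale of Re)
    """
    cf = 0.026 * re ** (-1.0 / 7.0)      # flat-plate skin friction estimate
    u_tau_over_U = (0.5 * cf) ** 0.5     # friction velocity / freestream
    # y+ = u_tau * y / nu  and  nu/U = chord/Re, hence:
    return y_plus * (chord / re) / u_tau_over_U
```

At Re = 1.6 · 10^6 this kind of estimate gives a first-cell height on the order of 10^-5 chord lengths for y+ = 1, which is why resolving the viscous sublayer (y+ well below 1, as on the finer grid here) requires strong wall-normal clustering.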
It should be noted that a typical grid set-up for turbulent flow computations is based on a grid with 128 cells in the normal direction. Results of computations on a grid with 256 x 128 cells using the standard and the MMI methods are shown in Fig. 6. It is seen from the figure that on such a fine grid the time step dependence contributes about 0.5% to the total error.
The solution obtained using the MMI method with γ = 0, β = 0 on the grid with 256 x 128 cells differs by about 4% from the experimental value. This solution is less accurate than the one obtained on the coarser grid using the MMI method with γ = 1, β = 0.04. This is explained by the fact that for steady state problems the MMI method with γ = 1 and β = 0.04 may exhibit superconvergence, as reported in [35].
Conclusions
It is concluded that for a typical turbulent flow computation the standard momentum interpolation methods can still be used, since their solution dependence on the time step is negligible. However, in optimization tasks, where coarse grids are used, the standard methods may result in inconsistent solution behavior. The inconsistency may become crucial in 3D computations, where employing fine grids is computationally demanding. For the turbulent flow computations, the MMI method of Kolmogorov et al [35] was shown to produce solutions independent of the time step at convergence. The results of the unsteady flow computations have also shown that, when the SIMPLEC algorithm is used, the MMI method yields an up to 25% higher convergence rate than an existing method. In general, the MMI method can be used on both coarse and fine grids to ensure both a high convergence rate and high accuracy of the SIMPLE-like algorithms.
Deconvoluting the Biological Roles of Vitamin D-Binding Protein During Pregnancy: A Both Clinical and Theoretical Challenge
The teleological purpose of an ongoing pregnancy is to fulfill its fundamental role of a successful, uncomplicated delivery, in conjunction with an optimal intrauterine environment for the developing fetus. Vitamin D metabolism is adapted to meet both these demands during pregnancy; first by stimulation of calcium absorption for adequate intrauterine bone mineral accrual of the fetus, and second, by enhancing systemic and local maternal tolerance to paternal and fetal alloantigens. Vitamin D-binding protein (VDBP) is one of the key biomolecules that optimize vitamin D homeostasis and also contributes as an immune regulator for a healthy, ongoing pregnancy. In this regard, recent results indicate that dysregulation of VDBP equilibrium could be a risk factor for adverse fetal, maternal, and neonatal outcomes, including preeclampsia, preterm birth, and gestational diabetes. Moreover, it has been hypothesized to be also implicated in the interpretation of vitamin D status in the pregnant state. The aim of this review is to assess available literature regarding the association of VDBP with clinical outcomes during pregnancy, as a potential biomarker for future clinical practice, with a discourse on current knowledge gaps and future research agenda.
Keywords: vitamin D-binding protein, 25-hydroxyvitamin D, Gc-globulin, pregnancy, clinical outcomes, polymorphisms

INTRODUCTION

The vitamin D-binding protein (VDBP), also known as the group-specific component of serum (Gc-globulin), is a member of the albumin, α-fetoprotein, and α-albumin/afamin gene family and the major plasma carrier protein of vitamin D and its metabolites (1,2). Vitamin D sterols are important for preserving normal serum calcium levels and electrolyte homeostasis. In addition to its specific sterol-binding capacity, VDBP has been shown to be involved in a plethora of other essential biological functions, ranging from actin scavenging and fatty acid transport to macrophage activation and chemotaxis (2).
The teleological purpose of an ongoing pregnancy is to fulfill its fundamental role of a successful, uncomplicated delivery, in conjunction with an optimal intrauterine environment for the developing fetus. Vitamin D metabolism is adapted to meet both these demands during pregnancy; first by stimulation of calcium absorption for adequate intrauterine bone mineral accrual of the fetus, and second, by enhancing systemic and local maternal tolerance to paternal and fetal alloantigens (1-3). In this context, it is believed that VDBP is one of the key biomolecules that optimize vitamin D homeostasis and also contributes as an immune regulator for a healthy, ongoing pregnancy. VDBP concentrations are increased in the pregnant state; however, the functional significance of this fact has not yet been fully clarified (3). There are emerging theories that, in clinical terms and under certain conditions, the biodynamics of VDBP could reflect the health status of an ongoing pregnancy and predict neonatal birth parameters or adverse outcomes (2). From an analytical aspect, VDBP could interfere with available assays and confound the interpretation of maternal and neonatal vitamin D status.
The aim of this review is to assess available literature regarding the association of VDBP with clinical outcomes during pregnancy, as a potential biomarker for future clinical practice, with a discourse on current knowledge gaps and future research agenda.
OVERVIEW OF VDBP BIODYNAMICS

Non-Pregnant State
In humans, vitamin D3 (cholecalciferol) is naturally obtained through sunlight in the UVB range of 290-315 nm, through a membrane-enhanced, thermal-dependent isomerization reaction that converts 7-dehydrocholesterol into vitamin D3 (4). Alternatively, vitamin D, either as D2 or D3, can enter the body through absorption in the intestine. In either case, D2 or D3 then diffuses into the circulation through the capillary bed and is reversibly bound to the vitamin D-binding (globulin) protein (VDBP) (5). VDBP is a 58 kDa glycosylated α-globulin that carries the lipophilic vitamin D in the plasma until it reaches target tissues (6). It is 458 amino acid residues in length and folds into a disulfide-bonded, triple-domain structure, which is further divided into two repeated, homologous domains of 186 amino acids (domains I and II) and a shorter domain of 86 residues at the C-terminus (domain III) (7). It is considered the principal transporter of vitamin D molecules. The liver is the main organ where VDBP is synthesized, but it is also expressed in the kidney, gonads, and fat tissue (8).
Apart from its function as a carrier protein, affinity for VDBP is the major parameter regulating the half-life of a vitamin D metabolite in the systemic circulation (17-19). The "free hormone hypothesis" suggests that only free steroid hormones are physiologically active, because their lipophilicity allows them to passively diffuse across cell membranes. According to this hypothesis, it is only the free 25(OH)vitamin D3 that is taken up into the tubular epithelium, to be converted by CYP27B1 to calcitriol. Similarly, in the case of calcitriol, biological actions are mediated through passive diffusion of the free calcitriol to its cognate nuclear vitamin D receptor (VDR), which is a high-affinity ligand-activated transcription factor (13,14).
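The arithmetic behind the free hormone hypothesis can be made concrete. The formula and affinity constants below are the commonly used affinity-based estimate from the broader vitamin D literature (a Vermeulen-type mass-action approximation), not values given in this review, and the example concentrations are illustrative.

```python
def free_25ohd_fraction(dbp_g_per_l, alb_g_per_l,
                        k_dbp=7e8, k_alb=6e5):
    """Free fraction of 25(OH)D via the mass-action approximation
    free/total = 1 / (1 + K_alb*[Alb] + K_DBP*[DBP]).

    dbp_g_per_l : serum VDBP in g/L (molecular weight ~58 kDa)
    alb_g_per_l : serum albumin in g/L (molecular weight ~66.4 kDa)
    k_dbp, k_alb: association constants in L/mol; the defaults are
                  illustrative literature values and vary between reports
    """
    dbp_mol = dbp_g_per_l / 58_000.0     # g/L -> mol/L
    alb_mol = alb_g_per_l / 66_430.0
    return 1.0 / (1.0 + k_alb * alb_mol + k_dbp * dbp_mol)
```

With typical non-pregnant values (VDBP ~0.3 g/L, albumin ~43 g/L) the free fraction comes out on the order of 0.02-0.03% of total 25(OH)D, and raising VDBP while holding total 25(OH)D fixed lowers the free fraction, which is the direction of the negative association between VDBP and free 25(OH)D reported for pregnancy below.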
Vitamin D-binding protein was believed to solely regulate the amount of free 25(OH)D available in the circulation (20). However, a landmark study by Nykjaer et al. (21) revealed an important transport system that affects vitamin D metabolism, namely the megalin-cubilin endocytotic system. In this case, the 25(OH)D/VDBP complex in the circulation is endocytosed into the proximal tubular cell via the apical-membrane receptor megalin, the largest member of the LDL receptor superfamily (22). Megalin-mediated endocytosis of 25(OH)D/VDBP also requires the receptor-associated protein and cubilin, a protein required for sequestering VDBP on the cell surface prior to its internalization by megalin (21). This system is a key player in the delivery of 25(OH)D to the 25-hydroxyvitamin D-1-α-hydroxylase in the kidney (21), since 25(OH)D molecules bound to VDBP and taken up via this receptor pathway are converted to calcitriol. The megalin-cubilin system has also been recognized in the placenta and several other tissues (22,23). These results underline that VDBP, in addition to its carrier protein functions and regulation of the free vitamin D fractions available in the circulation, presents pleiotropic actions: it contributes significantly to renal and extra-renal production of calcitriol and ensures reabsorption of vitamin D molecules in the kidney, preventing their urinary loss.
Systemic Circulation
A limited number of studies have documented a longitudinal increase of VDBP concentrations during pregnancy (24,25). The magnitude of the increase varies: the highest concentrations reach a 40-50% increase compared with non-pregnant women, with a maximum at the beginning of the third trimester before starting to decrease at term. The VDBP increase was accompanied by an increase in calcitriol concentrations in most available studies (24,25). As expected, a negative association between free 25(OH)D and VDBP concentrations was evident, resulting in a consistent decrease of free 25(OH)D from 15 to 36 weeks of gestational age (25). On the other hand, as the affinity of calcitriol for VDBP is much lower than that of 25(OH)D, a significant increase in both total 1,25(OH)2D and VDBP levels was observed during pregnancy, while free 1,25(OH)2D concentrations remained nearly constant (26,27). However, whereas the increase in total 1,25(OH)2D and VDBP concentrations in the pregnant state has been repeatedly reported in different studies (24,26), reports on free 1,25(OH)2D are conflicting. In general, an increase in 1,25(OH)2D (of both renal and placental origin) throughout pregnancy is replicated by the majority of data (28-30). Nevertheless, the observed discrepancies between trials may actually result from complex interactions between calcitriol concentrations and a plethora of factors, including VDBP levels and stimulation by prolactin, insulin-like growth factor 1, and parathyroid hormone (PTH)-related protein, while calcitriol is probably unaffected by PTH, which has been shown to decrease during pregnancy (30). Moreover, these studies used different laboratory methods for assessing calcitriol concentrations; thus, interpretation of their results may be problematic.
Interestingly, Chun and colleagues have recently proposed a viable hypothesis considering a role for VDBP in tissue discrimination between 25(OH)D2 and 25(OH)D3 (31). Given that 25(OH)D2 binds to VDBP with lower affinity than 25(OH)D3, the kidney would preferentially use the latter metabolite. Conversely, cells of the immune system might profit from a greater pool of 25(OH)D2 for antimicrobial peptide induction (31), which is of utmost importance for enhancing systemic and local maternal tolerance to paternal and fetal alloantigens (32).
Placenta
The placenta has its own mechanisms regulating vitamin D metabolism. The decidua facilitates nutritional fetal-maternal exchange and serves as an endocrine tissue by secreting a plethora of biomolecules. In addition, it provides "immunological stability and tolerance" to accommodate the developing fetus. The 1α-hydroxylase, the 24-hydroxylase, VDBP, and VDR have all been detected either in trophoblast cultures or in freshly obtained placental tissue (33-37). Undoubtedly, the placenta is able to metabolize vitamin D, producing active 1,25(OH)2D in vitro. VDBP is expressed on the cell surface of human placental trophoblasts during normal human pregnancy (37). This observation has led to the suggestion that the rise in VDBP concentrations during pregnancy could be the result of the high turnover rate of trophoblasts, which are in direct contact with maternal blood (31). VDBP has also been demonstrated to affect the expression of specific placental amino acid transporters, which may be involved in the regulation of amino acid transfer to the offspring during in utero development (38).
Vitamin D-binding protein could possibly be connected with the management of the large amounts of progesterone produced by the placental trophoblast during the second and third trimesters of pregnancy, which could theoretically displace vitamin D from VDBP (25,26). Under these conditions, VDBP may additionally play the role of a major plasma progesterone transport protein, at least during late gestation; however, relevant data are still scarce and this hypothesis warrants further clarification. Although these mechanisms are still under investigation, the above observations support the multifunctional role of VDBP both as a regulator of vitamin D homeostasis and as an immunomodulator at the systemic and placental level during pregnancy. In accordance with this hypothesis, VDBP dysregulation has been implicated in the pathogenesis of several adverse outcomes during pregnancy, which are discussed below. Figure 1 provides a schematic overview of the physiological functions of VDBP during pregnancy.
VDBP AS A MARKER OF A HEALTHY ONGOING PREGNANCY: CLINICAL IMPLICATIONS
During the past few years, there has been an increasing research effort to discover novel biomarkers that could effectively predict adverse pregnancy and fetal outcomes. In this setting, the dysfunction of the immunoregulatory biological properties of circulating VDBP during pregnancy, as well as specific VDBP polymorphisms, have been the focus of several clinical studies.
Association with Type 1 Diabetes (T1D), Gestational Diabetes, and Adipokines

In a recent nested case-control study, concentrations of VDBP and 25(OH)D throughout pregnancy were evaluated in 113 women whose offspring later developed T1D and 220 controls (39). VDBP and 25(OH)D increased significantly with gestational week and were lower in cases than in controls. Lower third-trimester VDBP concentrations tended to be associated with a higher risk of T1D in the offspring (39). Moreover, in a study among Chinese women, the risk allele A of rs3733359 of the VDBP gene was correlated with an increased risk of gestational diabetes mellitus in the obese subgroup (40). As in other autoimmune disorders, the involvement of VDBP in the pathogenesis of T1D may rest on a positive correlation between its levels and macrophage activation (41). Higher levels and frequencies of serum anti-VDBP autoantibodies were identified in patients with T1D than in healthy controls, suggesting VDBP as a possible autoantigen in T1D (42). Given that VDBP exerts immunomodulatory characteristics and contributes to the transport of vitamin D metabolites, reduced serum concentrations may be related, directly or indirectly, to the autoimmune functional deterioration of pancreatic β-cells in the disease.
Recent results from our group, in maternal-neonatal pairs at birth, demonstrated an independent positive correlation of VDBP with the adipokines adiponectin and irisin, which remained significant after adjustment for multiple parameters, including weeks of gestation, maternal age, and body mass index, in both mothers and neonates (not for irisin in the case of neonates) (43). Further mechanistic studies are required to elucidate whether VDBP plays a carrier or regulatory role for adiponectin and/or irisin during pregnancy, and its potential effects on offspring anthropometry in late childhood and adolescence.
VDBP and the Risk of Adverse Pregnancy Outcomes
Vitamin D-binding protein has also recently been implicated in the pathogenesis of preeclampsia. A small pilot study showed that VDBP in the first trimester of pregnancy was upregulated in women who developed early-onset preeclampsia (EOPE) compared with controls, suggesting a hypothetical utility of VDBP as a biomarker for the diagnosis of EOPE (44). These results were in accordance with a previous cohort study, which included 239 pregnant women, 107 with preeclampsia and 132 controls, in which the phenotype frequency distribution of serum group-specific component (Gc) and haptoglobin (Hp) was determined (45). The results indicated a statistically significant difference in the phenotype frequency distribution of the Gc system: the Gc 2-1 phenotype was expressed significantly more often in women with preeclampsia than in controls, suggesting a potential utility of the Gc 2-1 phenotype as a genetic marker for early preeclampsia detection (45). However, a recent study by Powe et al. showed no significant differences in first-trimester VDBP concentrations between cases with preeclampsia and controls, nor any association with first-trimester blood pressure (46). In contrast to previous studies, Tannetta et al. showed that actin-free VDBP plasma levels tended to be lower in early-onset preeclampsia compared with normal pregnancies, though not statistically significantly so (47). It becomes evident that the heterogeneity of baseline 25(OH)D concentrations across trimesters of pregnancy in the populations included in these studies could contribute to this discordance. Of major interest, Behrouz et al. demonstrated that VDBP of placental origin is a target for auto-antibodies detected in the sera of preeclamptic women, indicating a strong autoimmune component in the pathogenesis of the disorder (48).
Vitamin D-binding protein has also been suggested to contribute to the development of an optimal intrauterine environment for the developing fetus, as well as to a successful, uncomplicated delivery. Results from the Southampton Women's Survey (33) indicate that both maternal 25(OH)D and VDBP concentrations were positively linked to placental expression of certain genes related to placental amino acid transport. On that basis, a recent study by Wookey et al. demonstrated a significant reduction of placental VDBP concentrations in women with idiopathic fetal growth restriction, as compared with normal pregnancy controls (49).
On the other hand, the albumin/VDBP ratio proved more efficacious than fetal fibronectin in predicting spontaneous preterm delivery within 7 days in symptomatic women (50). VDBP concentrations in the cervicovaginal fluid (CVF) of pregnant women successfully predicted spontaneous labor onset within 3 days, with positive and negative predictive values of 82.8 and 95.3%, respectively (51). VDBP was estimated to be 3.9-fold higher in the CVF of asymptomatic women who subsequently presented preterm premature rupture of the fetal membranes (PROM), as compared with gestation-matched controls (52).
Potential explanations for the VDBP rise in the CVF of pregnancies at high risk for preterm birth could be the increased cell death and inflammation of the fetal membranes, leading to increased permeability of blood vessels, and augmented VDBP deglycosylation as an effect of the immune response (52). In addition, VDBP synthesis is known to be enhanced by proinflammatory cytokines, such as IL-6 (51). Given that the results of different studies regarding VDBP concentrations in EOPE are conflicting, it has also been suggested that a potential reduction of VDBP plasma levels in EOPE may reflect dysfunction of the actin scavenging system, which is known to cleave extracellular actin and hinder repolymerization, inhibiting its thrombotic effects (47).
VDBP and 25(OH)D Status During Pregnancy and Lactation
The GC single nucleotide polymorphisms (SNPs) rs12512631 and rs7041 were determined in the peripheral blood of 356 pregnant individuals and were found to interact significantly with maternal and cord-blood 25(OH)D concentrations and birth weight (53). Low 25(OH)D concentrations in maternal and cord blood were significantly associated with decreased birth weight among infants of mothers carrying the rs12512631 "C" allele, but not in those born to mothers homozygous for the "T" allele. In addition, low 25(OH)D concentrations in cord blood were significantly linked with reduced birth weight only among infants born to mothers carrying the rs7041 "G" allele (53).
Vitamin D-binding protein polymorphisms have also been reported to affect vitamin D status and the attained 25(OH)D concentrations after supplementation. In this regard, the GC rs2282679 polymorphism was found to correlate positively with the achieved 25(OH)D status following gestational cholecalciferol supplementation (54).
There is also evidence that polymorphisms in the VDBP gene may be related to 25(OH)D status during pregnancy. The minor allele of rs7041 was related to increased 25(OH)D, and rs4588 was associated with decreased 25(OH)D, among pregnant women (55). Chinese pregnant women with the VDBP Gc-1f and Gc-1s genotypes had higher plasma 25(OH)D concentrations compared with women with Gc-2 (56). VDBP is known to increase during pregnancy; however, this phenomenon was observed only in women with the rs7041 GG or GT genotypes, while pregnant TT carriers did not manifest greater VDBP concentrations compared with TT non-pregnant controls (57).
The impact of genotype on VDBP changes during pregnancy may reflect placental vitamin D transport and thus regulate the availability of vitamin D to the mother and fetus. A different study demonstrated higher VDBP concentrations in healthy pregnant women than in non-pregnant controls with comparable vitamin D intake, suggesting that metabolic alterations, possibly involving the placenta, may occur during pregnancy to increase the vitamin D supply (58). In addition, genetic and ethnic variations in VDBP polymorphisms could also explain the different responses to vitamin D supplementation during pregnancy (32).
On the other hand, a recent study that explored the association between 25(OH)D and VDBP concentrations in lactating mother-neonate pairs concluded that high maternal and neonatal serum VDBP concentrations may produce falsely low measured vitamin D concentrations, as suggested by normal serum calcium (Ca), phosphorus (P), magnesium (Mg), alkaline phosphatase (ALP), and PTH levels (59). Even when maternal and neonatal serum vitamin D concentrations were mutually consistent and indicated profound hypovitaminosis D (<10 ng/ml), this finding alone was not sufficient to establish vitamin D deficiency without taking into account other regulatory factors of the vitamin D biological network, including Ca, P, and PTH concentrations.
VDBP AND INFERTILITY
Vitamin D-binding protein has also been considered to be involved in the pathogenesis of idiopathic infertility. A recent pilot case-control study, including 39 infertile premenopausal women and 29 fertile controls, identified that VDBP concentrations were lower in the infertile group compared to controls (60). In the same study, total 25(OH)D concentrations did not differ significantly between the two groups; however, free and bioavailable vitamin D concentrations were higher among the infertile women. The genotype distribution of the GC rs1155563 and rs2298849 SNPs was compared between 154 women with endometriosis-associated infertility and 347 controls; however, no statistically significant differences were detected (61).
In a cohort of 165 healthy women, aged between 26 and 75 years, it was found that postmenopausal women had higher 25(OH)D, VDBP, and estradiol concentrations than premenopausal subjects, and that estradiol was independently correlated with VDBP (62). The work by Pirani et al. demonstrated that estradiol treatment increased the uptake of labeled VDBP by hepatocytes isolated from female animals, but not from male animal cells, indicating that the estradiol effect may depend on the presence of estrogen receptors (63). Interestingly, in infertile women undergoing in vitro fertilization, VDBP concentrations were not found to fluctuate with estradiol changes throughout the follicular phase of the menstrual cycle (64). These findings are suggestive of the regulatory role that other factors, besides the already well-known ones such as age, gender, and race, may play in determining VDBP concentrations. Table 1 summarizes key characteristics and findings of studies that examined the relationship between VDBP and pregnancy-related clinical outcomes. Figure 2 provides a schematic overview of the main pathophysiologic aspects of the VDBP network during pregnancy.
CRITICAL APPRAISAL OF AVAILABLE EVIDENCE AND REASONS FOR DISCREPANCIES BETWEEN STUDY RESULTS
Available evidence in the field manifests several limitations, the most prominent being the wide heterogeneity in study design, included populations, explored outcomes, and analytical methods. Therefore, any interpretation of study results should be made with caution. Genetic studies in particular often present specific methodological issues, including inadequate power to reveal potential gene-disease associations, population stratification resulting from genetic and environmental heterogeneity between studied populations, and departure from Hardy-Weinberg equilibrium; hence, they tend to produce inconclusive and conflicting results (65). Ethnic differences in VDBP polymorphisms could also explain differences in 25(OH)D status in pregnant cohorts across the same geographical region (66,67), as well as the gap between observational and supplementation studies (68,69). We have previously described in detail (69) the main reasons behind this gap between observational and interventional studies with regard to the role of vitamin D in pregnancy. These reasons can be summarized as follows: (1) varied study designs (lack of a precise outcome in conjunction with timing of supplementation, enrollment of participants with varied vitamin D status); (2) difficulties in interpreting vitamin D equilibrium (lack of determination of plasma half-life); (3) administration of a wide range of regimens, in terms of dose, bolus, and form; (4) geographical dissimilarities (vitamin D needs can vary significantly within a country, particularly in areas with a wide latitude gradient); (5) alterations of vitamin D metabolism during pregnancy; and (6) supplementation of individuals with low baseline 25(OH)D concentrations being more likely to show beneficial effects compared to subjects with higher baseline status.
It is highly likely that the above limitations also affect the reproducibility of study results related to VDBP status during pregnancy, since vitamin D and VDBP are part of a common biological network with complex interactions between its various components.
In addition, laboratory assessment of VDBP concentrations during pregnancy may be challenging. Different analytical methods have been developed and used in the studies conducted so far. The monoclonal immunoassay technique recognizes an epitope near the polymorphic region of VDBP and thus has different affinities for the different VDBP haplotypes, an issue that probably affects the results of the assay. As a consequence, it has produced discordant results compared to a polyclonal immunoassay method (70). Hoofnagle et al. developed a liquid chromatography-tandem mass spectrometry (LC-MS/MS) assay, in which plasma proteins are cleaved into peptides, making their specific detection and quantification possible (71). The LC-MS/MS method gave results similar to the polyclonal immunoassay, but different from those of the monoclonal immunoassay (70,71). In addition, although the existence of various vitamin D forms (such as epimers) has been established, their clinical significance remains obscure; recent data show that at least one epimer form has activity in vitro (72,73). With the development of more advanced assays, a thorough understanding of the interplay among the various vitamin D forms could be achieved.
GAPS IN EXISTING KNOWLEDGE AND FUTURE RESEARCH AGENDA
It becomes evident from the above that VDBP plays a role in the progression of normal pregnancy and is also implicated in the pathogenesis of some of the commonest pregnancy complications, albeit in a way that is not yet completely understood. The fact that VDBP seems to be involved in the pathogenesis of numerous heterogeneous clinical entities underlines its pluralistic role in vitamin D homeostasis.
Despite the intensive research conducted during the past few years on the role of vitamin D in pregnancy, existing data regarding VDBP are still very limited. Understanding the physiology of the VDBP network is extremely useful; however, focusing future research on the association between VDBP and adverse pregnancy outcomes may have multiple benefits: first, the establishment of a novel biomarker for the early detection of endangered pregnancies, which can be translated into daily clinical benefit; and second, further decryption of the complex pathophysiological aspects of pregnancy abnormalities.
For this purpose, additional clinical trials are required, characterized by an interventional and randomized design to reduce potential bias, adequate power, and a focus on populations at high risk for adverse outcomes. Future mechanistic studies in different ethnic groups are needed to investigate the regulatory and immune functions of VDBP during pregnancy and in other reproductive outcomes.
miR2Gene: pattern discovery of single gene, multiple genes, and pathways by enrichment analysis of their microRNA regulators
Background In recent years, a number of tools have been developed to explore microRNAs (miRNAs) by analyzing their target genes. However, a reverse problem, that is, inferring patterns of protein-coding genes through their miRNA regulators, has not been explored. As various miRNA annotation data become available, exploring gene patterns by analyzing the prior knowledge of their miRNA regulators is becoming more feasible. Results In this study, we developed a tool, miR2Gene, for this purpose. Various sets of miRNAs, according to prior rules such as function, associated disease, tissue specificity, family, and cluster, were integrated with miR2Gene. For given genes, miR2Gene evaluates the enrichment of the predicted miRNAs that regulate them in each miRNA set. This tool can be used for single genes, multiple genes, and KEGG pathways. For the KEGG pathway, genes with enriched miRNA sets are highlighted according to various rules. We confirmed the usefulness of miR2Gene through case studies. Conclusions miR2Gene represents a novel and useful tool that integrates miRNA knowledge for protein-coding gene analysis. miR2Gene is freely available at http://cmbi.hsc.pku.edu.cn/mir2gene.
Background
MicroRNAs (miRNAs) are a class of small non-coding RNAs acting as negative gene regulators by binding to the 3'UTR of target mRNAs through base pairing at the post-transcriptional level [1]. More than one third of all genes in the human genome may be regulated by miRNAs [2]. During the past few years, a number of bioinformatics tools have been developed to infer miRNA insights through integrative analysis of miRNAs and their targets [3][4][5][6][7]. These tools help improve our understanding of miRNAs. However, to our knowledge, tools that infer the patterns of protein-coding genes by analyzing the miRNAs that regulate them are currently unavailable. In recent years, the rapid development of various experiments involving miRNAs has dramatically increased knowledge regarding these regulators. For example, according to the Human microRNA Disease Database (HMDD, http://cmbi.bjmu.edu.cn/hmdd), which manually integrates experimentally supported miRNA-disease associations, the number of reported miRNA-disease associations was quite limited before 2002 but has increased dramatically in recent years, reaching 2507 miRNA-disease associations, involving 440 distinct miRNA genes and 247 diseases, as of January 2011 [8]. We previously confirmed the usefulness of such prior knowledge for mining novel miRNA patterns from biological experiments [9][10][11]. Meanwhile, the accumulating knowledge of these regulators makes it possible to explore hidden patterns of protein-coding genes by analyzing the miRNAs that regulate these genes; however, no such tools are currently available.
For the above purpose, we present a tool, miR2Gene (freely available at http://cmbi.hsc.pku.edu.cn/mir2gene). miR2Gene organizes miRNAs into various miRNA sets according to rules from prior knowledge, such as function, associated disease (HMDD), family, cluster, and tissue specificity. For the given genes, miR2Gene then collects the miRNAs that regulate them and performs enrichment analysis of the predicted miRNA regulators in each predefined miRNA set. The tool then reports the significant miRNA sets, which represent the potential patterns of the given genes. Currently, miR2Gene can analyze single genes, multiple genes, and KEGG pathways (http://www.genome.jp/kegg/). Finally, we confirmed the usefulness of miR2Gene through case studies.
miR2Gene summary
The whole workflow of miR2Gene is shown in Figure 1. For the given protein-coding genes, miR2Gene first predicts the miRNAs that regulate them using different miRNA-target prediction algorithms (TargetScan [2], MicroCosm [12], and DIANA-microT [13]). Then, miR2Gene evaluates the enrichment of the predicted miRNA regulators of the given genes in the predefined miRNA sets. After submitting a task, the results are shown in a new page. The exact procedures differ somewhat between tasks (single genes, multiple genes, and KEGG pathways). A tutorial page is provided to make miR2Gene user-friendly, and a summarized analysis wizard is also provided in each specific analysis page.
Input data
When a specific task, such as analysis of a single gene, multiple genes, or one KEGG pathway, is selected, the user needs to enter the input data for that task. For single or multiple genes, the user first inputs the gene name or ID. Currently, miR2Gene supports seven types of gene identifiers: the Official Gene Symbol, Entrez Gene ID, Ensembl Gene ID, Ensembl Transcript ID, UCSC gene ID, RefSeq mRNA ID, and GenBank Accession Number. Multiple genes should be arranged in one column, with each row representing only one gene. We provide one parameter, "set the threshold value", for the analysis of multiple genes: only the miRNAs that regulate at least the threshold number of given genes are considered in later analysis. For both single-gene and multiple-gene analysis, the user can view the predicted miRNA regulators in the corresponding analysis pages. For the KEGG pathway analysis, the user needs to select the desired KEGG pathway first, and then determine whether to analyze the pathway genes individually or as a whole. The next procedure for all three types of tasks is selecting a method to predict the miRNAs that regulate the given protein-coding genes. miR2Gene provides three choices, namely, TargetScan [2], MicroCosm [12], and DIANA-microT [13]. We downloaded the TargetScan predictions (version 5.1) from http://www.targetscan.org/, the MicroCosm predictions (version 5) from http://www.ebi.ac.uk/enright-srv/microcosm/htdocs/targets/v5/, and the DIANA-microT predictions (version 3.0) from http://diana.cslab.ece.ntua.gr/microT.
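The threshold parameter for multiple-gene analysis amounts to a simple counting filter over the predicted regulators. The following is an illustrative Python sketch, not miR2Gene's actual implementation; the function name and input data structure are assumptions.

```python
def filter_regulators(gene_to_mirnas, threshold):
    """Keep only miRNAs predicted to regulate at least `threshold`
    of the input genes (sketch of the multiple-gene parameter)."""
    counts = {}
    for mirnas in gene_to_mirnas.values():
        for mirna in set(mirnas):            # count each gene at most once per miRNA
            counts[mirna] = counts.get(mirna, 0) + 1
    return {m for m, c in counts.items() if c >= threshold}
```

With a threshold of 2, for example, a miRNA predicted for only one of three input genes would be excluded from the later enrichment analysis.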
Enrichment analysis of predefined miRNA sets to the predicted regulatory miRNAs for given genes
We used the hypergeometric test to determine the significant enrichment of each miRNA set within the predicted regulatory miRNAs for the given genes, as we previously described [9]. The hypergeometric test generates the significance (P-value), and the fold of enrichment is calculated for each miRNA set by dividing the actual by the expected number of predicted miRNAs matched in the corresponding miRNA set. The percentage of matched miRNAs in the corresponding miRNA set is also given. Considering that miR2Gene analyzes multiple miRNA sets for the same input dataset, two methods for multiple-comparison correction, Bonferroni and FDR, are provided to correct the original P-values.
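The P-value, fold, and percentage statistics described above can be computed as follows. This is a minimal sketch of the standard hypergeometric enrichment test, not miR2Gene's own code; the counts in the usage example are illustrative.

```python
from math import comb

def hypergeom_sf(k, N, K, n):
    """P(X >= k): drawing n predicted miRNA regulators from a universe of
    N miRNAs, of which K belong to the predefined set."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

def enrich(n_universe, set_size, n_predicted, n_overlap):
    """Return (P-value, fold of enrichment, percent of the set matched),
    mirroring the three statistics reported for each miRNA set."""
    p = hypergeom_sf(n_overlap, n_universe, set_size, n_predicted)
    expected = set_size * n_predicted / n_universe
    return p, n_overlap / expected, 100.0 * n_overlap / set_size
```

For example, if 20 of 100 predicted regulators fall inside a 50-miRNA disease set drawn from a 1000-miRNA universe, the expected overlap is 5, giving a fold of enrichment of 4.0 and a very small P-value.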
Outputs
The result of the desired task is shown in a new page. For analyses of single or multiple genes, the miRNA sets that have at least one match in the predicted miRNAs are shown. The miRNA sets are arranged in five categories, namely Cluster, Family, Function, HMDD (miRNA-associated diseases), and TissueSpecific (miRNA tissue specificity, obtained from the study of Lu et al. [8]). The miRNA functional sets were manually curated from the literature; the miRNA family and cluster sets were obtained from the miRBase database [14]. The user can rank the results by Count (number of matched miRNAs), Percent (percentage of matched miRNAs in the corresponding miRNA set), Fold (actual matched number/expected matched number), P-value, Bonferroni (Bonferroni-corrected P-value), and FDR (FDR-corrected P-value). The significantly enriched miRNA sets are considered putatively associated with the given protein-coding gene(s). One important point to remember is that a discovered pattern in the Function category could sometimes be reversed, because of the inverse regulatory relationship between the given genes and their miRNA regulators.
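The Bonferroni and FDR columns correspond to standard multiple-comparison corrections across the tested miRNA sets. A sketch of both, assuming the FDR column uses Benjamini-Hochberg adjusted P-values (the exact FDR procedure is not specified here):

```python
def bonferroni(pvals):
    """Bonferroni-adjusted P-values: multiply each P by the number of tests."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted P-values (q-values)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank_from_end, i in enumerate(reversed(order)):
        rank = m - rank_from_end              # 1-based rank of pvals[i]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted
```

Both functions leave the input order intact, so the adjusted values can be placed alongside the Count, Percent, and Fold columns of the results table.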
Results
To confirm the usefulness of miR2Gene in gene pattern discovery, we chose the gene ABL2 and the KEGG pathway "cell cycle" as examples of single-gene and pathway analyses. Analysis of multiple genes is similar to single-gene analysis.
For the analysis of ABL2, miR2Gene found that the predicted miRNAs (obtained by TargetScan) that regulate ABL2 are significantly enriched in the Clusters mir-302a (FDR = 3.37×10^-3), mir-181c (FDR = 0.04), and mir-106b (FDR = 0.05), and in the Families let-7, mir-30, mir-17, mir-15, mir-181, mir-302, mir-148, and mir-25. Among these miRNA sets, some are well known to be associated with cancer, i.e., the let-7 family and the mir-17 cluster. These results indicate that ABL2 is strongly related with cancer. Furthermore, the miRNA set "miRNA tumor suppressors" is among the top significant sets. Because miRNAs mainly negatively regulate target genes, the above result suggests that ABL2 may act mainly as an oncogene. Indeed, according to the annotation of NCBI (http://www.ncbi.nlm.nih.gov/), ABL2 is a member of the Abelson family of nonreceptor tyrosine protein kinase genes and is the v-abl Abelson murine leukemia viral oncogene homolog 2. Interestingly, almost all of the currently reported ABL2-associated cancers have been identified successfully through miR2Gene analysis, including melanoma [15] (FDR = 9.13×10^-9, rank No. 1 among all diseases by miR2Gene), lymphoma [16] (FDR = 4.23×10^-3), and leukemia [17,18] (FDR = 1.10×10^-3). Analysis also showed that ABL2 is strongly associated with digestive system cancer (FDR = 3.54×10^-5), which is further supported by two studies that found ABL2 to be involved in gastrointestinal stromal tumors (GISTs) [19,20]. miR2Gene did not directly identify GISTs because GIST-associated miRNAs have not yet been reported, and therefore these data are not integrated with miR2Gene. Overall, the results show a high accuracy of miR2Gene prediction, suggesting that miR2Gene is a useful tool for gene pattern discovery. Non-cancer diseases showing strong significance through miR2Gene analysis include heart failure (FDR = 4.91×10^-8, rank No. 4), schizophrenia (FDR = 2.07×10^-4), and autistic disorder (FDR = 5.47×10^-3).
Although no study provides evidence for an association between these diseases and ABL2, ABL2 may be a molecule associated with them. Interestingly, ABL2 has a role in the KEGG ErbB signaling and viral myocarditis pathways, both of which are associated with heart function, suggesting that ABL2 has a role in heart function and could therefore be associated with heart failure. Most of the predicted functions, except for the cancer-associated ones, lack direct evidence, although several have some support. For example, the function "granulopoiesis" could be supported indirectly by the well-known involvement of ABL2 in leukemia.
For the cell cycle pathway analysis, miR2Gene predicted that the mir-302 cluster is the most significant miRNA cluster and the mir-15 family is the most significant miRNA family. Indeed, the mir-302 cluster has been confirmed to be induced by Oct4/Sox2 and to regulate multiple cell cycle regulators; inhibition of mir-302 causes human embryonic stem cells to accumulate in the G1 phase [21]. The mir-15 family, also known as the mir-16 family, has also been confirmed to induce cell cycle arrest by regulating several cell cycle genes [22]. Various types of cancers occupy the top significant positions of the HMDD category, suggesting that the cell cycle pathway is strongly related with cancer. The only non-cancer disease among the top positions is heart failure. Moreover, the heart-specific miRNA set is the most significant set in the TissueSpecific category. These results suggest that heart function is also strongly associated with the cell cycle. The "cell cycle" miRNA set is one of the most significant sets in the Function category (rank No. 2). Figure 2 shows more details regarding the cell cycle-related miRNAs involved in the regulation of the cell cycle pathway. miR2Gene shows that multiple genes in the cell cycle pathway are preferentially regulated by the cell cycle-related miRNA set. This result is consistent with Carleton et al., who noted that some genes in the cell cycle pathway, such as the cyclin proteins, CDK6/4, CDK2, E2F, CDC, WEE1, and CHEK1, are miRNA targets and that these interactions are involved in cell cycle regulation [23]. miR2Gene also suggests that miRNAs are more involved in the G1 phase (Figure 2). Although the miR2Gene prediction results on the cell cycle pathway need further experimental confirmation, the new patterns provide new insights into the cell cycle through miRNAs.
Discussion
By enrichment analysis of the miRNAs that regulate given genes, miR2Gene is able to mine patterns of protein-coding genes and therefore represents a novel tool for this task. The case studies show that the tool is useful. However, limitations exist. The major limitation is that the miRNA set data are currently limited, which may bias the analysis. Another limitation is that miRNA-target prediction has high false-positive and false-negative rates, which may also bias the analysis. We believe that as more miRNA sets are collected and more accurate miRNA-target prediction tools become available, miR2Gene will produce more reliable results.
Conclusions
In recent years, tools have been developed to infer biological insights of miRNAs through integrative analysis of miRNAs and their targets. However, tools for the reverse problem, that is, inferring the biological insights of protein-coding genes through their miRNA regulators, have not been available because of the limited prior knowledge regarding miRNAs. Considering that a majority of protein-coding genes are putative targets of miRNAs, exploring novel patterns of protein-coding genes through integrative analysis of the miRNAs that regulate them has become increasingly interesting. As prior knowledge regarding miRNAs accumulates rapidly, developing tools for this purpose is becoming more feasible. In this study, we developed a tool, miR2Gene, to address this problem. For given protein-coding genes, miR2Gene first predicts the miRNAs that regulate the input genes and then performs enrichment analysis of the predefined miRNA knowledge in the predicted miRNAs. miR2Gene supports three types of analysis, namely single genes, multiple genes, and KEGG pathways. Moreover, the usefulness of miR2Gene has been confirmed through two case studies. Currently, miR2Gene supports only human genes and pathways, but it can easily be extended to other species when sufficient miRNA prior knowledge becomes available.
Additional material
Additional File 1: miRNA sets that are significantly enriched in the miRNAs that are predicted to regulate ABL2 and their statistical results.
Figure 2
The cell cycle pathway and the significant genes on cell cycle-related miRNAs. The genes whose miRNA regulators are significantly enriched in the cell cycle-related miRNA set are highlighted in yellow.
Improving speech perception for hearing-impaired listeners using audio-to-tactile sensory substitution with multiple frequency channels
Cochlear implants (CIs) have revolutionised treatment of hearing loss, but large populations globally cannot access them either because of disorders that prevent implantation or because they are expensive and require specialist surgery. Recent technology developments mean that haptic aids, which transmit speech through vibration, could offer a viable low-cost, non-invasive alternative. One important development is that compact haptic actuators can now deliver intense stimulation across multiple frequencies. We explored whether these multiple frequency channels can transfer spectral information to improve tactile phoneme discrimination. To convert audio to vibration, the speech amplitude envelope was extracted from one or more audio frequency bands and used to amplitude modulate one or more vibro-tactile tones delivered to a single-site on the wrist. In 26 participants with normal touch sensitivity, tactile-only phoneme discrimination was assessed with one, four, or eight frequency bands. Compared to one frequency band, performance improved by 5.9% with four frequency bands and by 8.4% with eight frequency bands. The multi-band signal-processing approach can be implemented in real-time on a compact device, and the vibro-tactile tones can be reproduced by the latest compact, low-powered actuators. This approach could therefore readily be implemented in a low-cost haptic hearing aid to deliver real-world benefits.
substantial improvements in speech-in-noise performance [12][13][14] and sound localisation [15][16][17][18]. In these studies, audio was converted to tactile stimulation using a vocoder approach. This approach converts the audio frequency range to the frequency range where the tactile system is highly sensitive. To do this, the audio is first filtered into frequency bands. The amplitude envelope is then extracted for each band and used to modulate the amplitude of vibro-tactile tones. Unlike previous studies that have converted frequency to location of tactile stimulation on the skin 8,19, this audio-to-tactile vocoder approach uses an intuitive frequency-to-frequency conversion.
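The vocoder pipeline just described (band filtering, envelope extraction, amplitude modulation of vibro-tactile tones) can be sketched in a few lines. This is an illustrative numpy sketch, not the authors' implementation: the FFT-mask band filtering, the 10 ms moving-average envelope smoother, and all parameter values are assumptions.

```python
import numpy as np

def audio_to_tactile(audio, fs, audio_edges, tactile_freqs):
    """Convert an audio waveform into a single-site tactile waveform.
    audio_edges: band edges in Hz (length n_bands + 1).
    tactile_freqs: vibro-tactile tone frequencies in Hz (length n_bands)."""
    t = np.arange(len(audio)) / fs
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), 1 / fs)
    win = int(0.01 * fs)                     # ~10 ms envelope smoother
    kernel = np.ones(win) / win
    out = np.zeros_like(audio, dtype=float)
    for lo, hi, f_tac in zip(audio_edges[:-1], audio_edges[1:], tactile_freqs):
        # isolate one audio band by masking the spectrum
        band = np.fft.irfft(spectrum * ((freqs >= lo) & (freqs < hi)), len(audio))
        # amplitude envelope: rectify, then smooth
        env = np.convolve(np.abs(band), kernel, mode="same")
        # use the envelope to amplitude-modulate the vibro-tactile tone
        out += env * np.sin(2 * np.pi * f_tac * t)
    return out
```

A real-time wearable device would instead use causal filters and frame-based processing rather than whole-signal FFTs, but the mapping from audio bands to modulated tactile tones is the same.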
Based on previous tactile 20 and hearing 21 studies, a single frequency band conveying the broadband amplitude envelope can provide some of the phonemic information needed for consonant identification. However, the transfer of phonemic information that is reliant on spectral cues, including that used to identify vowels, voicing, and consonant place of articulation 22,23, will depend on the extent to which multiple frequency channels can be conveyed through tactile stimulation. Frequency difference discrimination thresholds suggest that between four and eight individual frequencies can be distinguished across the usable frequency range for the latest haptic actuators, when stimulating the wrist 24,25. However, it is not known to what extent multiple frequency channels can be separated when presented simultaneously, and whether spectral information provided through tactile stimulation can be exploited to improve speech perception.
The current study aimed to establish whether a greater number of frequency channels allows for better tactile phoneme discrimination. Tactile stimulation was delivered to a single site on the wrist, which is a viable site for a real-world wearable haptic aid 11. Phoneme discrimination was assessed for one, four, or eight frequency bands and vibro-tactile tones. More frequency bands were expected to allow more phonemes to be discriminated, particularly for vowels and for consonants that differed by place of articulation or voicing, which rely heavily on spectral cues. If this multi-channel approach is found to be effective, it could be an important new means through which critical spectral speech information can be transferred in a new generation of haptic hearing aids.
Results
Figure 1 shows the percentage of phonemes correctly discriminated in each experimental condition for the 26 participants who took part in this study. Primary analysis consisted of three two-tailed t-tests. All reported p-values for this primary analysis were corrected for multiple comparisons (see "Methods"). With four vibro-tactile tones, four frequency bands were found to improve phoneme discrimination by 5.9% on average (ranging from -4.3 to 17.0%; standard deviation (SD) of 5.0%) compared to one frequency band (t(25) = 6.0, p < 0.001). Performance improved from 46.5% (ranging from 38.7 to 57.5%; SD of 5.1%) to 51.4% (ranging from 39.2 to 61.8%; SD of 5.4%). With eight vibro-tactile tones, eight frequency bands were found to improve performance by 8.4% (ranging from 3.3 to 14.6%; SD of 3.0%) compared to one frequency band (t(25) = 14.3, p < 0.001). Performance improved from 46.4% (ranging from 32.5 to 57.1%; SD of 5.1%) to 54.8% (ranging from 43.9 to 64.6%; SD of 4.8%). The improvement in performance compared to baseline (one frequency band) was 2.5% larger on average for eight frequency bands than for four (ranging from -13.7 to 17.45%; SD of 5.7%; t(25) = 2.2, p = 0.035).
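The paired comparisons reported above follow the standard paired t formula applied to each participant's scores in the two conditions. Below is a minimal sketch with made-up scores (not the study's data); the exact multiple-comparison correction applied is described in the paper's Methods.

```python
import math

def paired_t(cond_a, cond_b):
    """Two-tailed paired t statistic for per-participant scores in two
    conditions; returns (t, degrees of freedom)."""
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n), n - 1
```

With the study's 26 participants this gives 25 degrees of freedom; the resulting t is then referred to a t-distribution, and the p-values of the three planned comparisons are corrected for multiple comparisons.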
The secondary analysis included multiple stages, with all the reported p-values for all stages corrected for multiple comparisons (see "Methods"). Figure 2 shows phoneme discrimination for consonants and vowels separately. Two two-way repeated-measures analyses of variance (RM-ANOVAs) were run on the differences between multiple-frequency-band conditions and their baselines, one for the consonants and one for the vowels, with factors 'Number of frequency bands' (four or eight) and 'Talker' (male or female). A larger improvement in performance was seen for eight bands than for four bands for consonants (main effect of number of frequency bands: F(1,25) = 14.3, p = 0.037), but not for vowels. For consonants, with four frequency bands performance improved by 8.6% (ranging from -4.6 to 23.2%; SD of 7.6%) and with eight frequency bands performance improved by 14.8% (ranging from 7.4 to 24.1%; SD of 4.7%). For vowels, the mean performance increased by 3.1% (ranging from -5.8 to 17.3%; SD of 5.6%) with four frequency bands and by 1.7% (ranging from -7.7 to 14.4%; SD of 6.2%) with eight frequency bands. No significant main effect of talker or interaction between talker and the number of frequency bands was found for either consonants or vowels.
A three-way RM-ANOVA was then run for the baseline conditions (conditions with one frequency band), with the factors 'Number of vibro-tactile tones' (one, four, or eight), 'Phoneme type' (consonant or vowel), and 'Talker'. No effect of the number of vibro-tactile tones was found. The overall scores with one frequency band differed by talker (main effect of talker: F(1,25) = 25.6, p = 0.001), with a mean score for the female talker of 48.7% (ranging from 37.1 to 60.4%; SD: 4.1%) and for the male talker of 44.0% (ranging from 37.4 to 54.7%; SD: 4.1%). The overall one-frequency-band scores did not differ significantly between consonants and vowels, but an interaction between talker and phoneme type was observed (F(1,25) = 38.4, p < 0.001). For the male talker, performance was 43.4% (ranging from 34.6 to 50.6%; SD of 4.5%) for consonants and 44.6% (ranging from 37.2 to 59.0%; SD of 4.9%) for vowels. For the female talker, performance was 51.7% (ranging from 38.9 to 64.8%; SD of 6.6%) for consonants and 45.7% (ranging from 35.3 to 57.1%; SD of 6.0%) for vowels.
Next, t-tests were performed to explore which phoneme contrasts were better discriminated with different numbers of frequency bands. Figure 3 shows discrimination across different phoneme contrasts for four vibro-tactile tones, with either one (baseline) or four frequency bands. Improved performance with four frequency bands was seen for consonant pairs that differed either by voicing (t(25) = 9.2, p < 0.001; mean effect: 35.6%; SD: 19.8%) or by both place of articulation and voicing (t(25) = 3.7, p = 0.046; mean effect: 15.7%; SD: 21.8%).
Figure 5 shows the improvement in performance compared to the one-frequency-band baseline for the four and eight frequency-band conditions. No significant difference between the four and eight frequency-band improvements was observed for either the consonant or the vowel phoneme subgroups.
Finally, additional post-hoc uncorrected analyses were conducted to explore possible predictors of phoneme discrimination performance. The score for the eight frequency-band and eight vibro-tactile-tone condition was used as the dependent variable. No evidence of a dependence on age, wrist circumference, vibro-tactile detection thresholds at 125 Hz on the finger (measured during screening), or probe position (above, in line, or below the termination point of the ulna) was found.
Discussion
The aim of the current study was to establish whether phoneme discrimination is improved when multiple frequency channels are available for single-site vibro-tactile stimulation on the wrist. A highly robust overall improvement in phoneme discrimination was observed with multiple frequency channels, with the largest effects seen for voicing and place contrasts. Performance was better with eight frequency bands than with four, indicating that higher resolution spectral information than has been provided in previous studies 12,[14][15][16] can be exploited. In the current study, the vibro-tactile tones were kept within the frequency and intensity range of the latest compact, low-powered haptic actuators. Furthermore, the audio-to-tactile vocoder signal-processing approach used can be implemented in real-time on a compact device. The eight-channel frequency-to-frequency vocoder method could therefore readily be used in a new wrist-worn haptic hearing aid.
For tactile stimulation with a single frequency channel, some phonemic information was transferred, particularly for facilitating consonant manner and place contrasts. Discrimination by consonant manner was likely achieved using differences in broadband temporal envelope patterns. However, voicing information was not well transferred through single frequency-channel stimulation. The three cognate pairs that differed by voicing were fricatives, which cannot be discriminated using strong envelope cues. For the cognate pairs with a single frequency channel, periodicity is likely to be a dominant voicing discrimination cue, but periodicity information is not well maintained by the amplitude envelope extraction used in the current vocoder approach.
Our results suggest that multiple frequency channels improve performance most for consonant pairs, particularly those differing by voicing alone or voicing and place. For isolated phonemes, the presence or absence of voicing (the voice bar) is conveyed primarily in frequencies below 400 Hz. The large improvement in voicing discrimination with multiple frequency channels, as compared with a single channel, is therefore likely to be due to the utilisation of frequency channels corresponding to acoustic information below 400 Hz (the lowest channel when there were four frequency channels and the lowest two channels when there were eight frequency channels). Voicing information is not accessible through lip reading and so transferring this information could have a significant functional benefit for those who receive limited acoustic information through other means 26.
The current study showed evidence that eight frequency channels improve performance more than four for phonemes that differ by place of articulation (see Fig. 5). Discrimination of these pairs requires sufficiently high-resolution mid-to-high-frequency audio information, as place of articulation in obstruent consonants (e.g., fricatives and plosives) is signalled by the spectral pattern of the frication or burst noise at middle-to-high frequencies 27. It is likely that this was more salient with eight frequency channels, where four of the channels are dedicated to audio frequencies above 2000 Hz, than with four frequency channels, where only two channels are dedicated to frequencies above 2000 Hz. Accurate perception of place of articulation is important, particularly when lipreading is not possible (as lipreading can be used to resolve many place differences). Furthermore, loss of access to high frequency sound (as is typical for those with sensorineural hearing loss) can reduce the salience of place cues 28 and many CI users also struggle to use place of articulation information because of limitations in the CI's spectral resolution 29. These groups may therefore both benefit from provision of these cues through tactile stimulation.
Unexpectedly, vowel discrimination was poor across all conditions tested in the current study. It may be that, even with eight frequency channels, the different frequency bands did not sufficiently separate the lowest two formants, which are important for identifying vowels. An example is shown in Fig. 6, where shifts in the first and second formant frequencies can be seen in the audio for the phonemes /ae/ and /e/, but these shifts are not well represented in the tactile signal. Future work should explore whether different frequency band allocation focused on improving the representation of formants can improve tactile vowel discrimination. Despite the tactile system not having a highly tuned membrane to perform frequency decomposition like the cochlea, there are several means through which spectral information might have been deconstructed by participants in the current study. The first is by comparing responses across different tactile receptor types, which each have distinct frequency and temporal sensitivity 30,31. Another is by exploiting the frequency-dependent transfer of vibration through the skin, bones, and soft tissue 32. This could allow frequency decomposition to be achieved by assessing how excitation spreads across different receptor locations. Finally, spectral profiles might be distinguished using the firing rate of tactile nerve fibres, which are known to closely synchronise (phase lock) with the periodicity of sinusoidal vibration 33. For stimuli with a clear dominant frequency, phase locking may occur, and, for other stimuli, the absence of phase locking may indicate the absence of a clear spectral peak.
There are important limitations to the current study. Firstly, the method used focuses on spectral or spectral-temporal speech information, and not on the ability to detect temporal boundaries of phonemes, syllables, or words (segmentation). As well as being important for tactile-only speech perception, tactile speech segmentation could be critical to improving speech perception in CI users and in those with hearing impairment, particularly in the presence of background noise. Indeed, segmentation could have played an important role in the tactile benefits observed in previous studies assessing word recognition in sentences 10,[12][13][14]. Assessing whether speech segmentation is improved by providing additional frequency information through the eight frequency-channel audio-to-tactile vocoder approach should be a focus of future work.
The method was also limited in that it assessed discrimination rather than identification. This was done to circumvent the need for a prolonged training regime and to thereby allow relatively fast testing of basic parameters of the audio-to-tactile vocoder approach. It should be noted that discrimination is a necessary but not sufficient prerequisite for identification. While the current study controlled for absolute intensity cues (through level roving) and for broadband temporal envelope cues (which were available in the control condition), other spectro-temporal cues not relevant to identification may have facilitated discrimination. The pattern of performance improvements with multiple frequency channels across phoneme sub-groups, which is explicable based on phoneme-specific information expected to be transferred with the multi-channel vocoder approach (e.g., voicing information), suggests that phoneme-specific cues critical for identification were used. However, the current results should be interpreted with caution as the relationship between tactile phoneme discrimination and identification is not well understood. Another limitation was that the participant group did not match the target user group for haptic hearing aids, with participants predominantly having no known hearing impairment. Several previous studies have found no differences in tactile speech performance between normal-hearing and hearing-impaired individuals (e.g., 12,13,34,35). However, there is evidence of increased tactile sensitivity in congenitally deaf individuals 36, which might allow them to better exploit speech information provided through tactile stimulation. In the current study, one participant was a CI user (P14) and another had experienced persistent tinnitus for more than a decade (P2). Their results did not deviate from the other participants in the study, who reported no hearing impairment. Future work should comprehensively establish whether there are differences in tactile speech perception across potential user groups for haptic hearing aids.
Another difference between the participants in this study and the target user group is the average age. Participants were young (all under 40 years old), whereas a significant portion of the hearing-impaired community are older. In the current study, there was no evidence of a correlation between participant age (which spanned 18 years) and tactile phoneme discrimination. Furthermore, previous studies have found no effect of age on tactile intensity discrimination 17,37 or temporal gap detection for tonal stimuli 38. However, absolute vibro-tactile detection sensitivity 39 and frequency discrimination 40 have been shown to worsen with age. In future work, it will be important to establish whether older users can benefit as much from additional frequency channels as younger users.
Several important questions remain about the optimisation of the frequency-to-frequency audio-to-tactile vocoder approach. One is whether a greater number of frequency bands and vibro-tactile tones than eight can yield still better speech performance. Another is whether focusing frequency bands differently within the audio frequency range can lead to better performance (e.g., more densely sampling the frequency range around the first and second formant frequencies to try to improve vowel discrimination, as suggested above). An advantage of presenting sound information through tactile stimulation, rather than audio or CI stimulation, is that the tactile system does not have an existing frequency map for speech, which can be disrupted by frequency distortions 41. Existing frequency compression or expansion methods for hearing aids or CIs should therefore also be tested for tactile stimulation.
An alternative approach to improving the audio-to-tactile vocoder approach might be to extract auditory features that capture key missing speech information and map them to currently unexploited tactile signal parameters. A visual inspection of auditory features extracted from the phoneme corpus used in the current study suggests that spectral crest (how tonal the signal is), spectral entropy (how dense the frequency spectrum is), spectral flux (how much the spectral shape is changing), harmonic ratio (how harmonic the signal is), and spectral centroid (the spectral centre of energy) differ across phoneme pairs where discrimination was poor. Features such as these could be mapped to, for example, frequency modulation of the vibro-tactile tones (tone frequencies in the current study were kept static) or to amplitude envelope modulations at frequencies that are not thought to be important for speech recognition but where tactile sensitivity is high (e.g., above around 30 Hz 42,43). Alternatively, audio features could be mapped to differences in stimulation at different locations on the skin (for example, different positions around the wrist 7 or along the arm 19). However, it is possible that speech cues that are successfully transferred through the current eight-band vocoder approach will be masked or distorted by adding additional frequency or amplitude modulation, or by moving stimulation across sites.
Another important area for future research is the robustness of the vocoder approach to background noise. Previously, a multi-band expander technique has been used with the audio-to-tactile vocoder to enhance noise robustness 12,13. In future work, the optimal parameters for the expander should be established and other more advanced noise-reduction techniques, such as those exploiting neural networks 44, should be explored.
The demonstration in the current study that complex spectral information can be transferred through amplitude modulated vibro-tactile tones could have important implications for a range of other haptic devices. For example, amplitude modulated vibro-tactile tones could be used to transfer complex spatial information for other neuroprosthetic haptic devices, such as those for aiding vision 45 or balance 46. The approach could also be used to transfer information in other haptic feedback applications, such as medical haptic tools for needle steering 47, remote control of research tools 48, or human-controlled robots 49. Additionally, it could be used to generate distinctive sensations in haptic feedback devices used in entertainment such as music 50 or computer gaming, and to enhance virtual or augmented reality 51.
Since tactile stimulation was last a significant focus in the hearing sciences, compact haptic actuator technology has advanced dramatically. Now, compact, low-powered, high-fidelity actuators can produce intense vibration across a relatively broad frequency range where the skin is highly sensitive. This has opened an important new means through which sound information can be transferred through tactile stimulation. This study has shown that additional speech information can be transferred by exploiting these new actuator capabilities using a real-time audio-to-tactile signal-processing strategy that provides spectral information through tactile frequency differences. There is a powerful opportunity for this approach to be used in a new generation of low-cost haptic hearing aids which combine the latest haptic actuator technology with other cutting-edge technologies, such as compact long-life batteries, flexible microprocessors (which allow both advanced computation and substantially increased design flexibility), and low-latency, low-powered wireless technology (that allows the use of wireless microphones and remote data transfer 11,50). These new haptic hearing aids could substantially improve quality of life for large populations of hearing-impaired individuals, including both CI users and the tens of millions of people across the world who are unable to access CI technology.
Methods
Participants. Participant characteristics are shown in Table 1 for the 26 adults who took part in the study.
The average age was 28 years (ranging from 18 to 36 years), and there were 15 males and 11 females. All participants had normal touch perception, as assessed by a health questionnaire and vibro-tactile detection thresholds at the fingertip (see "Procedure"). Participants were not screened for their hearing ability, but self-reported hearing status was recorded. One participant had a CI and another had persistent tinnitus in both ears that had been present for more than a decade with no known accompanying hearing loss. All other participants reported no hearing impairment. Participants were paid an inconvenience allowance of £20 for taking part.
Stimuli. The tactile stimulus in the experiment phase (after screening) was generated using the EHS Research Group Phoneme Corpus, which contained a southern English male and female talker saying each of the 44 UK British English phonemes. The phonemes were produced, as far as possible, in isolation. However, for some of the obstruent consonants, particularly voiced plosives, a following /ə/ was produced. For each talker, the corpus contains four tokens of each phoneme. The long-term average speech spectrum across all phonemes is shown for each talker in Fig. 7 (with no normalisation). The spectrum was calculated from the average power spectral density (Hann windowed, with a 96 kHz sample rate, an FFT length of 4096, and a hop size of 2048). The average power spectral density was Gaussian-smoothed with a 1/3 octave resolution.
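The long-term average spectrum computation described above can be sketched as follows. This is a minimal illustration that averages Hann-windowed power spectra with the stated FFT length and hop size; the 1/3-octave Gaussian smoothing step is omitted for brevity.

```python
import numpy as np

def long_term_average_spectrum(signal, fs=96_000, n_fft=4096, hop=2048):
    """Average power spectral density over Hann-windowed frames,
    matching the frame parameters given in the text."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    psd = np.mean([np.abs(np.fft.rfft(f)) ** 2 for f in frames], axis=0)
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    return freqs, psd

# Sanity check: a 1 kHz tone should produce a spectral peak near 1 kHz.
fs = 96_000
t = np.arange(fs) / fs                       # 1 s of signal
freqs, psd = long_term_average_spectrum(np.sin(2 * np.pi * 1000 * t), fs)
peak_hz = freqs[np.argmax(psd)]
```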
The male talker had an average fundamental frequency of 145.4 Hz (SD: 12.4 Hz; ranging from 107.0 to 182.2 Hz) and the female talker had an average fundamental frequency of 208.2 Hz (SD: 14.7 Hz; ranging from 174.2 to 284.9 Hz). The fundamental frequency (estimated using a Normalized Correlation Function) and the harmonic ratio were determined using the MATLAB audioFeatureExtractor object (MATLAB R2022b). A 30-ms Hamming window was used, with a 25-ms overlap length. Samples were included in the analysis if their harmonic ratio was greater than 0.75.
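The fundamental frequency estimation can be approximated with a simple normalised-autocorrelation peak picker. This is a sketch of the general idea behind a Normalized Correlation Function estimator, not MATLAB's exact implementation; the pitch search range and harmonic count here are illustrative assumptions.

```python
import numpy as np

def estimate_f0(frame, fs, fmin=80.0, fmax=350.0):
    """Rough F0 estimate from the peak of the normalised autocorrelation,
    searched within a plausible speech pitch range."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    ac = ac / ac[0]                       # normalise: lag-0 correlation is 1
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi])       # best-matching pitch period (samples)
    return fs / lag

# Synthetic 30-ms voiced frame at 145 Hz (near the male talker's average F0).
fs = 16_000
t = np.arange(int(0.03 * fs)) / fs
frame = sum(np.sin(2 * np.pi * 145 * k * t) for k in range(1, 5))
f0 = estimate_f0(frame, fs)
```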
A subset of 53 phoneme pairs was chosen for the phoneme discrimination task (see Table 2). Pairs were selected to ensure a wide range of phoneme contrasts, including those where discrimination is not possible using lip reading alone or using the acoustic signal alone for those with a substantial high-frequency hearing loss (which is common in sensorineural hearing impairment). Pairs also included common vowel and consonant confusions for both high- and low-performing CI users 29 and for users of a previous multi-channel tactile aid (Tactaid VII) 34. This was done to maximize the functional relevance of the test set for different user groups and to include contrasts which have previously been challenging to convey through tactile stimulation.
Table 1. Participant characteristics. The probe site is either above (towards the elbow), in line, or below (towards the hand) the terminal point of the ulna bone at the wrist (see "Procedure").

The stimulus duration was matched for all pairs by fading both stimuli out with a 20-ms raised-cosine ramp, with the exception of pairs containing a diphthong or those containing the consonants /g/, /d/, /l/, /r/, /v/, /w/, or /j/, where production in isolation as a single phoneme (without an adjacent vowel) is impossible or acoustically very different from production in running speech. The ramp reached its zero-amplitude point at the end of the shortest stimulus (defined as the point at which the signal had dropped below 1% of its absolute maximum). This ensured that, for these pairs, discrimination could not be achieved by comparing the durations of the stimuli.
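The duration-matching procedure can be illustrated with a short sketch. It is simplified in one respect: the ramp here reaches zero exactly at the end of the shorter stimulus, rather than at the 1%-of-maximum point described above.

```python
import numpy as np

def match_durations(a, b, fs, ramp_ms=20.0):
    """Fade both stimuli out with a raised-cosine ramp that reaches zero at
    the end of the shorter stimulus, so duration cannot be used as a cue."""
    end = min(len(a), len(b))
    n_ramp = int(ramp_ms / 1000 * fs)
    ramp = 0.5 * (1 + np.cos(np.linspace(0, np.pi, n_ramp)))  # 1 -> 0
    window = np.ones(end)
    window[-n_ramp:] = ramp
    return a[:end] * window, b[:end] * window

fs = 16_000
a = np.ones(int(0.30 * fs))   # 300-ms stimulus
b = np.ones(int(0.25 * fs))   # 250-ms stimulus
a2, b2 = match_durations(a, b, fs)
```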
The audio was converted to tactile stimulation using a vocoder method similar to that used in previous studies [12][13][14][15][16]. The signal intensity was first normalised following ITU P.56 method B 52. It was then downsampled to a sampling frequency of 16,000 Hz (matching that available through many hearing aids and other compact real-time audio devices). Following this, the signal was passed through a 512th-order FIR filter bank with one, four, or eight frequency bands (depending on the experimental condition) between 50 and 7000 Hz. This frequency range was selected to follow ITU-T G.722 53, and focused on the range in which there is substantial speech energy (see Fig. 7 and 54). It is also similar to the range used in previous studies that have shown large improvements in speech-in-noise performance 14 and sound localisation 15,16 in CI users. For conditions with four or eight frequency bands, the frequency bands were equally spaced on the auditory equivalent rectangular bandwidth scale 55. Next, the amplitude envelope was extracted for each frequency band using a Hilbert transform and a zero-phase 6th order Butterworth low-pass filter, with a corner frequency of 23 Hz. This filter was designed to focus on the envelope modulation frequencies most important for speech recognition 42. These amplitude envelopes were then used to modulate the amplitudes of one, four, or eight fixed-phase vibro-tactile tonal carriers (depending on the experimental condition).
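The vocoder chain described above (an FIR band-pass filter bank with ERB-spaced bands, Hilbert envelope extraction, a 23-Hz zero-phase low-pass, and amplitude modulation of tonal carriers) can be sketched as follows, assuming SciPy is available. This is an illustrative reconstruction of the processing stages, not the authors' exact code; `erb_band_edges` uses the Glasberg and Moore ERB-number formula.

```python
import numpy as np
from scipy.signal import butter, firwin, hilbert, lfilter, sosfiltfilt

def erb_band_edges(f_lo, f_hi, n_bands):
    """Band edges equally spaced on the ERB-number scale (Glasberg & Moore)."""
    to_erb = lambda f: 21.4 * np.log10(1 + 0.00437 * f)
    from_erb = lambda e: (10 ** (e / 21.4) - 1) / 0.00437
    return from_erb(np.linspace(to_erb(f_lo), to_erb(f_hi), n_bands + 1))

def tactile_vocoder(audio, fs, carrier_freqs, f_lo=50, f_hi=7000):
    """Minimal audio-to-tactile vocoder sketch: band-pass filter bank,
    Hilbert envelope, 23-Hz zero-phase low-pass, then amplitude-modulated
    fixed-phase tonal carriers summed into one output signal."""
    edges = erb_band_edges(f_lo, f_hi, len(carrier_freqs))
    sos_lp = butter(6, 23, fs=fs, output='sos')          # envelope low-pass
    t = np.arange(len(audio)) / fs
    out = np.zeros(len(audio))
    for lo, hi, fc in zip(edges[:-1], edges[1:], carrier_freqs):
        h = firwin(513, [lo, hi], pass_zero=False, fs=fs)  # 512th-order FIR
        band = lfilter(h, 1.0, audio)
        env = sosfiltfilt(sos_lp, np.abs(hilbert(band)))   # zero-phase smoothing
        out += env * np.sin(2 * np.pi * fc * t)
    return out

# A 3.5 kHz tone falls in the top of four ERB-spaced bands, so the output
# should be dominated by the highest carrier (259.5 Hz in the four-tone set).
fs = 16_000
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 3500 * t)
carriers = [138.0, 170.0, 210.0, 259.5]
tactile = tactile_vocoder(audio, fs, carriers)
```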
For the one vibro-tactile-tone and one frequency-band condition, the vibro-tactile tone frequency was set to 170 Hz to match the frequency at which vibration output is maximal for many compact haptic actuators. For the four-vibro-tactile-tone conditions, the tones were at 138, 170, 210, and 259.5 Hz. The tone frequency range was focused around 170 Hz, and the frequencies were spaced so that each tone could be discriminated, based on data at the palmar forearm 24 (no tactile frequency discrimination data for the wrist is known to the authors). For the eight-vibro-tactile-tone conditions, the tones were at 94.5, 116.5, 141.5, 170, 202.5, 239, 280.5 and 327.5 Hz. These were more tightly spaced based on frequency discrimination thresholds at the dorsal forearm 25 in order to remain within the frequency range that can be reproduced by compact, low-powered haptic actuators that are suitable for a wrist-worn device (either specialist wideband actuators or multiple actuators used together with a frequency crossover filter). It should be noted that the available data suggests that both estimates of frequency discrimination are conservative, as the wrist is thought to have similar frequency discrimination to the finger 56, which has better frequency discrimination thresholds than the forearm 24.
A frequency-specific gain was applied to each vibro-tactile tone so that it was equally exciting, based on tactile detection thresholds 24. For the four vibro-tactile tones, the gains were 9.6, 5.8, 0.4, and 0 dB, respectively, and, for the eight vibro-tactile tones, the gains were 13.8, 12.1, 9.9, 6.4, 1.6, 0, 1.7, and 4 dB, respectively. The tactile stimuli generated were scaled to have an equal overall amplitude in RMS, giving a nominal level of 141.5 dB ref. 10⁻⁶ m/s² (1.2 G), which is an intensity that can be produced by a range of compact, low-powered shakers. This stimulus level was roved by 3 dB around the nominal level (with a uniform distribution) to ensure that no discrimination cues based on absolute intensity were available. To mask any audio cues that might be used to discriminate the tactile stimuli, pink noise was presented at 60 dBA.
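The equal-RMS scaling and ±3 dB level roving can be sketched as follows. The target RMS here is a placeholder in arbitrary units, not the calibrated physical level of 141.5 dB ref. 10⁻⁶ m/s².

```python
import numpy as np

def scale_and_rove(signal, target_rms=1.0, rove_db=3.0, rng=None):
    """Scale to a nominal RMS, then apply a random level rove drawn uniformly
    from +/- rove_db so absolute intensity cannot be used as a cue."""
    rng = rng or np.random.default_rng()
    x = signal * (target_rms / np.sqrt(np.mean(signal ** 2)))
    gain_db = rng.uniform(-rove_db, rove_db)
    return x * 10 ** (gain_db / 20), gain_db

rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 170 * np.arange(1600) / 16_000)  # 100 ms of a 170 Hz tone
y, gain_db = scale_and_rove(x, rng=rng)
```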
Apparatus.
Participants were seated in a vibration isolated, temperature-controlled room (mean temperature: 23 °C; SD: 0.45 °C). The room temperature and the participant's skin temperature were measured using a Digitron 2022 T type K thermocouple thermometer. The thermometer was calibrated following ISO 80601-2-56:2017 57. For calibration, the thermocouple was submerged and calibrated using three mercury glass bead thermometers (ASTM 90C, ASTM 91C, and ASTM 92C), which covered different temperature ranges. These thermometers were calibrated by C.I.S Calibration Laboratories (Leicestershire, UK). For cold temperatures (5 °C to 20 °C), a Grant GD120 water bath with a Grant ZD circulation unit and Grant C2G refrigeration unit was used, and for warmer temperatures (25 °C to 50 °C), a Grant Y6 water bath with a Grant VF circulation unit was used.
For the screening vibro-tactile detection threshold measurements, a HVLab Vibro-tactile Perception Meter 58 was used that conformed to ISO-13091-1:2001 59. The Vibro-tactile Perception Meter had a circular probe with a 6-mm diameter and a rigid surround. The probe gave a constant upward force of 1 N. A downward force sensor was built into the surround, and the force applied was displayed to the participant. The sensor was calibrated using Adam Equipment OIML calibration weights. The vibration intensity was calibrated using the Vibro-tactile Perception Meter's built-in accelerometers (Quartz Shear ICP, model number: 353B43) and a Brüel & Kjaer (B&K) Type 4294 calibration exciter.
In the experiment phase, a custom EHS Research Group haptic stimulation rig was used. This consisted of a Ling Dynamic Systems V101 shaker, with a 3D printed circular probe (Verbatim Polylactic Acid material) that had a 10-mm diameter and no rigid surround. The shaker was driven using a MOTU UltraLite-mk5 sound card, RME QuadMic II preamplifier, and HVLab Tactile Vibrometer power amplifier. The shaker was suspended using an adjustable elastic cradle from an aluminium strut frame (see Fig. 8). The probe applied a downward force of 1 N, measured using a B&K UA-0247 spring balance. The rig allowed the vibration probe to contact the dorsal wrist, with the palmar forearm resting on a 95 mm thick foam surface. The vibration output was calibrated using a B&K 4533-B-001 accelerometer and a B&K type 4294 calibration exciter. All stimuli had a total harmonic distortion of less than 0.1%.
Masking audio was played from the MOTU UltraLite-mk5 sound card through Sennheiser HDA 300 headphones. The audio was calibrated using a B&K G4 sound level meter, with a B&K 4157 occluded ear coupler (Royston, Hertfordshire, UK). Sound level meter calibration checks were carried out using a B&K Type 4231 sound calibrator.
The EHS Research Group Phoneme Corpus used in the experimental phase was recorded in the anechoic chamber at the Institute of Sound and Vibration Research. The audio was recorded using a B&K 4189 microphone, B&K 2669 preamplifier, B&K Nexus 2690 conditioning amplifier, and RME Babyface Pro soundcard (with a 96 kHz sample rate and a bit depth of 24 bits). The microphone was 0.3 m from the talker's mouth.
Procedure. Each participant completed the experiment in a single session lasting approximately 2 h. First, written informed consent was obtained from all the participants. Participants then completed a screening questionnaire to ensure they (1) did not suffer from any conditions that might affect their sense of touch (e.g., diabetes), (2) had not had any injury or surgery on their hands or arms, and (3) had not been exposed to severe or long periods of hand or arm vibration in the previous 24 h. Next, the wrist dimensions were measured at the site at which the participant would normally wear a wristwatch (this was also where the probe contacted the wrist in the experiment phase). The participant's skin temperature was then measured on the index fingertip of their dominant hand. Participants were only allowed to continue the screening when their skin temperature was between 27 and 35 °C. Following this, vibro-tactile detection thresholds were measured at the index fingertip following BS ISO 13091-1:2001 59. During the threshold measurements, participants applied a downward force of 2 N (monitored by the participant and experimenter using the HVLab Vibro-tactile Perception Meter display). Participants were required to have touch perception thresholds in the normal range (< 0.4 m/s² RMS at 31.5 Hz and < 0.7 m/s² RMS at 125 Hz), conforming to BS ISO 13091-2:2021 60. The fingertip was used as there is not sufficient normative data available at the wrist. If participants passed the screening phase, they moved to the experiment phase.
In the experiment phase, participants were seated in front of the EHS Research Group haptic stimulation rig (see Fig. 8), with the palmar forearm of their dominant arm resting on a foam surface and the vibro-tactile stimulation probe contacting the centre of the dorsal wrist. The probe was positioned where the participant reported they would normally wear a wristwatch. This meant that the probe was either slightly above (towards the elbow), in line, or slightly below (towards the hand) the terminal point of the ulna bone at the wrist (see Table 1).
The participants completed a three-interval, three-alternative forced-choice phoneme discrimination task. The inter-stimulus interval was 250 ms. Each trial used a pair of phonemes from a single talker (see "Stimuli"). One phoneme from the pair was presented in one of the three intervals and the other phoneme was presented in the other two intervals. Which phoneme of the pair was presented once, and which was presented twice, was randomised. The order of intervals was randomised and the participant's task was to select the interval containing the phoneme presented only once via a key press. Participants were instructed to select the vibration that felt different from the others (i.e., the odd one out), but to ignore the overall intensity of each vibration. After each trial, visual feedback was given indicating whether the response was correct or incorrect.
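The trial structure can be sketched as a small helper that builds one three-interval, three-alternative forced-choice trial. This is a hypothetical illustration of the design, not the experiment software.

```python
import random

def make_trial(pair, rng):
    """Build one 3-interval 3AFC trial: one phoneme of the pair appears once
    (the odd one out), the other appears twice, in random order. Returns the
    interval contents and the index of the correct response."""
    odd, common = rng.sample(list(pair), 2)   # randomise which phoneme is odd
    intervals = [common, common, odd]
    rng.shuffle(intervals)                    # randomise interval order
    return intervals, intervals.index(odd)

rng = random.Random(42)
intervals, answer = make_trial(('/f/', '/v/'), rng)
```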
The percentage of phonemes correctly discriminated was measured in five conditions, each with different tactile stimulation parameters: (1) with one frequency band and one vibro-tactile tone (1FB1T), (2) with one frequency band and four vibro-tactile tones (1FB4T), (3) with four frequency bands and four vibro-tactile tones (4FB4T), (4) with one frequency band and eight vibro-tactile tones (1FB8T), and (5) with eight frequency bands and eight vibro-tactile tones (8FB8T). For each condition, all phoneme pairs were tested for both the male and female talker. For each talker, two repeats of each phoneme pair were tested, with the phoneme sample randomly selected from the four available for each phoneme. The order of conditions was randomised for each phoneme pair repeat.
The experimental protocol was approved by the University of Southampton Faculty of Engineering and Physical Sciences Ethics Committee (ERGO ID: 68477). All research was performed in accordance with the relevant guidelines and regulations.
Statistics. The percentage of phonemes correctly discriminated was calculated for each condition for the male and female talkers. Primary analysis consisted of three two-tailed t-tests. These compared conditions 1FB4T to 4FB4T, 1FB8T to 8FB8T, and 4FB4T-1FB4T to 8FB8T-1FB8T. These tests had a Bonferroni-Holm correction 61 for multiple comparisons applied (correction for three tests).
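The Bonferroni-Holm step-down correction used for these comparisons can be sketched as follows: sort the p-values ascending, multiply the i-th smallest by the number of remaining tests, and enforce monotonicity of the adjusted values.

```python
def holm_correct(p_values):
    """Bonferroni-Holm step-down adjusted p-values: the i-th smallest p-value
    is multiplied by (m - i), capped at 1, with monotonicity enforced."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, min(1.0, (m - rank) * p_values[i]))
        adjusted[i] = running_max
    return adjusted

# Three tests, as in the primary analysis.
adj = holm_correct([0.01, 0.04, 0.03])
```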
Next, secondary analyses were conducted. This included two two-way RM-ANOVAs, which were run on the differences between multiple frequency band conditions, one for the vowels and one for the consonants. A third three-way RM-ANOVA was run on the baseline conditions (the conditions with one frequency band). For the RM-ANOVAs, no evidence of a breach of the assumption that data were normally distributed was found in Kolmogorov-Smirnov or Shapiro-Wilk tests and, for the baseline conditions, Mauchly's test indicated that the assumption of sphericity had not been violated. The RM-ANOVAs used an alpha level of 0.05.
In addition to the three RM-ANOVAs, two-tailed t-tests were run assessing the differences between 4FB4T and its baseline (1FB4T) and 8FB8T and its baseline (1FB8T) for each of the phoneme pair subgroups (see Table 2). The differences between the effects observed for the four and eight frequency band conditions were also tested for each phoneme pair subgroup. All these secondary analyses had a Bonferroni-Holm multiple comparisons correction applied (correction for 51 tests, which included the tests done in the primary analysis).
Finally, three Spearman correlations were run between the 8FB8T condition score and the screening vibro-tactile detection threshold at 125 Hz, participant age, and wrist circumference (see Table 1). These variables were thought to have the most potential to correlate with phoneme task performance. In addition, a one-way RM-ANOVA with the factor 'Probe position' (above, in line, or below the termination point of the ulna) was run. For each of these exploratory tests it was hypothesised that no effect would be found, so no correction for multiple comparisons was applied.
Figure 1. Percentage of phoneme pairs discriminated for each experimental condition, with chance performance marked by a dashed grey line. Stars show the statistical significance of differences between conditions (corrected for multiple comparisons), with more stars indicating greater significance. Error bars show the standard error of the mean (SEM).
Figure 2. Percentage of phoneme pairs discriminated for each experimental condition, with consonant and vowel pairs shown separately. Error bars show the standard error of the mean (SEM).
Figure 3. Percentage of phoneme pairs discriminated for the four-vibro-tactile-tone conditions (one or four frequency bands), grouped by phoneme contrast type. Stars show the statistical significance of differences between one and four frequency bands (corrected for multiple comparisons), with more stars indicating greater significance. Error bars show the SEM. Chance performance is marked with a dashed grey line.
Figure 4 .
Figure 4. Percentage of phoneme pairs discriminated for the eight-vibro-tactile-tone experimental conditions (one or eight frequency bands), grouped by phoneme contrast type.Stars show the statistical significance of differences between one and eight frequency bands (corrected for multiple comparisons), with more stars indicating greater significance.Error bars show the SEM.Chance performance is marked with a dashed grey line.
Figure 5 .
Figure 5.The improvement in the percentage of phoneme pairs discriminated for four or eight frequency bands compared to one frequency band, grouped by phoneme contrast type.Error bars show the SEM.
Figure 6 .
Figure 6.Spectrograms showing the input audio (left panel) and the tactile envelopes extracted using the eightfrequency-channel vocoder approach (right panel) for the phonemes ae and e (spoken by the male talker).The first and second formants of the input audio are marked.The upper two frequency channels and lowest channel are not shown for the tactile envelopes.The audio spectrogram sample rate was 22.05 kHz, with a window size of 1024 (Hann) and a hop size of 1 sample.The tactile spectrogram sample rate was 16 kHz, and no windowing was applied to the envelopes.Intensity is shown in decibels relative to the maximum magnitude of the STFT for the input audio and in decibels relative to the maximum envelope amplitude for the tactile envelopes.The spectrograms were generated using the Librosa Python library (version 0.10.0).
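The spectrogram settings named in the Figure 6 caption (Hann window of 1024 samples, single-sample hop, intensity in dB relative to the maximum) can be reproduced with a plain short-time Fourier transform. The sketch below uses NumPy rather than Librosa, substitutes a synthetic tone for the phoneme audio, and uses a coarser hop to keep the demo small; it is an illustration of the caption's settings, not the authors' code.

```python
import numpy as np

def stft_db(x, n_fft=1024, hop=1):
    """Magnitude STFT in dB relative to the maximum, mirroring the
    caption's settings (Hann window of 1024 samples, configurable hop)."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop : i * hop + n_fft] * win
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))
    return 20 * np.log10(np.maximum(mag, 1e-12) / mag.max())

sr = 22050                                  # caption's audio sample rate
t = np.arange(int(0.05 * sr)) / sr
tone = np.sin(2 * np.pi * 700 * t)          # stand-in for a formant near 700 Hz
S = stft_db(tone, hop=64)                   # coarser hop keeps the demo fast
print(S.shape)                              # (frames, 513 frequency bins)
```

With a hop of 1 sample, as in the caption, the frame count equals the signal length minus the window length, which is why such spectrograms are dense but expensive to compute.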
Figure 7. The long-term average spectrum of the male and female talkers from the EHS Research Group Phoneme Corpus (based on all phonemes), with no normalisation applied.

Figure 8. A 3D-rendered image of the EHS Research Group haptic stimulation rig used in the current study. The left image shows the setup with no arm in place and the shaker and probe free-hanging. The right image shows a close view of the rig with the arm in place and the shaker probe contacting the wrist.

Table 2. Consonant and vowel pairs used in the experiment, grouped by the type of contrast.
Sub-Wavelength Resonances in Metamaterial-Based Multi-Cylinder Configurations
Sub-wavelength resonances known to exist in isolated metamaterial-based structures of circular cylindrical shape are investigated with the purpose of determining whether the individual resonances are retained when several of such resonant structures are grouped to form a new structure. To this end, structures consisting of 1, 2 and 4 sets of metamaterial-based concentric cylinders excited by an electric line current are analyzed numerically. It is demonstrated that these structures recover the resonances of the individual structures even when the cylinders are closely spaced and the new structure is thus electrically small. The investigation is conducted through a detailed analysis of the electric near-field distribution as well as the radiation resistance in those cases where the individual structures are made of simple dielectric materials in conjunction with simple, but lossy and dispersive, metamaterials.
Introduction
The field of metamaterials (MTMs) has experienced significant scientific advances in recent years, and numerous applications within the microwave [1][2][3] and the optical [4] frequency regions have been devised. Important examples of MTMs include double-negative (DNG) materials, which possess a negative real part of both the permittivity and the permeability, as well as epsilon-negative (ENG) and mu-negative (MNG) materials, which possess a negative real part of the permittivity or the permeability, respectively. Among the numerous reported applications of these MTMs, specific attention has been devoted to their potential of providing sub-wavelength resonant structures of various canonical shapes [5][6][7][8][9][10][11][12][13][14], either when used alone or in combination with double-positive (DPS) materials, which possess a positive real part of both the permittivity and the permeability. In particular, it was shown in [9] that an isolated set of concentric circular MTM-based cylinders excited by a nearby electric line current (ELC) possesses sub-wavelength resonances where the excitation of specific modes is found to lead to large radiated power for a constant ELC.
The purpose of the present work is to investigate how the sub-wavelength resonances of the isolated MTM-based concentric cylinder structures studied in [9] are affected when several of such structures are grouped to form a new structure. To this end, configurations consisting of 1, 2 and 4 sets of MTM-based concentric cylinders, henceforth referred to as the 1-, 2-, and 4-cylinder structures, are analyzed. It is shown that these structures recover the resonances of the individual structures even when the cylinders are closely spaced and the configuration is thus electrically small. The analysis is conducted with the ANSOFT High Frequency Structural Simulator (HFSS) [15] and includes detailed investigations of the electric near-field distribution and the radiation resistance in the case of simple, but lossy and dispersive, MTMs. A collection of MTM-based objects was studied in [6] with the purpose of devising an effective hybrid MTM, in [16] for cloaking purposes, and in [17] for its scattering properties. The present work is an extension of [18] and, in comparison, includes both a full account of the HFSS model and additional near-field investigations.
The present manuscript is organized as follows. In Section 2, the investigated structures are defined and the analysis techniques, including the exact method used for the 1-cylinder structure as well as the numerical method, are described. This section also includes a brief discussion of the conditions for sub-wavelength resonance in the isolated 1-cylinder structures; this is used in conjunction with the exact analytical results to define the electrical and geometrical parameters of a given 1-cylinder structure. In Section 3, the numerical results are presented; in particular, the resonances of the individual structures are studied as the distance between the cylinders is changed. In all cases, the resonant structures are made of simple dielectric materials in conjunction with simple, but lossy and dispersive, MTMs, and the resonant properties of all configurations are analyzed through detailed investigations of their electric field distribution and radiation resistance. Section 4 includes a summary and conclusion of the present work. The time factor exp(jωt), with ω being the angular frequency and t the time, is assumed throughout the manuscript.

Configuration

Figure 1 shows the k-th concentric cylinder set (Ck) of the investigated configurations. The 1-cylinder structure consists of a cylinder C1 which has its center at the origin. The 2-cylinder structure consists of the previous cylinder C1 and a cylinder C2 having its center at (d, 0); thus, the cylinder C2 is displaced along the x-axis with a separation distance d from the cylinder C1. The 4-cylinder structure consists of the previous cylinders C1 and C2 together with two additional cylinders, C3 and C4.
Analysis Methods
For the 1-cylinder structure, both an exact and a numerical solution have been obtained. The exact solution is based on the eigenfunction expansion method, see e.g., [19]. Whereas the details of the exact solution can be found in [9], we emphasize below only the main points. The incident field of the ELC, as well as the unknown fields in the three regions, i.e., the scattered field in the region containing the ELC and the total field in the remaining regions, are all expanded in terms of cylindrical wave functions. These expansions represent the multipole expansions of the respective fields, and for the unknown fields they contain a set of unknown expansion coefficients A_n; the coefficient of order n = 0 corresponds to the monopole mode, that of order n = 1 to the dipole mode, and so on. The unknown expansion coefficients A_n depend on the electrical and geometrical parameters of the structure in Figure 1 as well as on the location of the ELC, and they are readily determined by enforcing the boundary conditions at the interfaces between the three regions; once these coefficients are known, the fields in the different regions are determined. For the 2- and 4-cylinder structures, a numerical solution is established using the ANSOFT HFSS software [15] (the numerical solution was also employed to investigate the 1-cylinder structure and to compare its results with the exact solution in order to confirm the validity of the established HFSS model). Figure 2 shows the HFSS model, where the 4-cylinder structure with the individual cylinders designated as C1, C2, C3, and C4 is depicted. The model consists of the ELC source, modeled by a finite-length current tube of radius a and current I_e (enlarged in the inset of the figure), and the finite-length MTM-based cylinders. The finite-length current tube and the MTM-based cylinders are positioned between, and perpendicular to, two parallel, perfectly electrically conducting infinite plates with separation h.
Due to image theory [19], these plates model the infinite MTM-based cylinders and the ELC. Between the perfectly conducting plates, uniform perfectly matched layers, which model free-space radiation, are inserted; these layers have thickness d, circumscribe a square of side length w, and have their corners and edges joined. The values of the specific parameters of the HFSS model, as well as additional details, are found in Section 3.
Derived Quantities and Resonance Condition
In the present work, the attention is devoted to the radiation resistance, R_t, of the ELC for a given constant value of I_e radiating in the presence of the material structure. The quantities P_t and P_i represent, respectively, the power radiated by the ELC in the presence and absence of the material structure; for the 1-cylinder structure, the exact expressions for P_t and P_i were obtained in [9]. In these expressions, J_n denotes the Bessel function of order n (the ELC being outside the 1-cylinder structure), and ε_n is the Neumann number; thus, ε_n = 1 for n = 0 and ε_n = 2 otherwise, while N_max is the truncation limit chosen to ensure the convergence of the cylindrical wave expansion. From these expressions it is clear that large values of the total radiation resistance will result if the amplitudes of the expansion coefficients A_n become large. When a given single concentric cylinder set Ck is electrically small, i.e., when it is sub-wavelength, these expansion coefficients become very large, and thus exhibit a resonance, when the resonance condition of [6,9] is satisfied. As explained in [6,9], at least one of the regions comprising such a concentric cylinder set Ck must be made of DNG and/or MNG material in order to satisfy this condition; moreover, the excitation of the sub-wavelength resonances is due to the presence of natural modes in the structure. The resonance condition has been used in [6,9] to design resonant sub-wavelength 1-cylinder configurations, and it is also used next to design the individual concentric cylinder sets of the 1-, 2-, and 4-cylinder structures.
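The dependence of the radiated power on the expansion coefficients can be illustrated schematically. The sketch below shows only the Neumann-weighted modal sum, P ∝ Σ_n ε_n |A_n|², with all physical prefactors omitted and hypothetical coefficient values; it is not the exact expression of [9].

```python
def neumann(n):
    """Neumann number: 1 for n = 0, 2 otherwise."""
    return 1 if n == 0 else 2

def radiated_power_sum(coeffs):
    """Schematic Neumann-weighted modal sum, P ∝ sum_n eps_n |A_n|^2,
    with all physical prefactors omitted."""
    return sum(neumann(n) * abs(a) ** 2 for n, a in enumerate(coeffs))

# Hypothetical coefficients A_0, A_1, A_2 with a resonant dipole (n = 1) term
print(radiated_power_sum([0.1, 5.0, 0.05]))  # the dipole term dominates
```

The sketch makes the resonance mechanism concrete: a large dipole coefficient A_1, as produced by a sub-wavelength MNG/DPS resonance, dominates the sum and hence the radiation resistance.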
Resonant Configurations and Further Remarks on the HFSS Model
According to Section 2.3 and [6,9], it is possible to design a sub-wavelength 1-cylinder structure capable of exciting a dipole (n = 1) mode resonance, which leads to large values of, e.g., radiated power and radiation resistance. In the present section, we investigate whether, and under which conditions, these resonances of the individual 1-cylinder structures persist when several cylindrical structures are grouped to form a new structure. The parameters of the designed configurations are listed in Table 1. The free-space results for the ELC were found to agree with [12], thereby verifying the established HFSS model of the ELC.
In order to assess the frequency behavior of the MNG material of the 1-, 2-, and 4-cylinder structures, the Drude dispersion model [3] has been employed for the permeability.
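A minimal sketch of a Drude-type permeability, μ_r(ω) = 1 − ω_p²/(ω(ω − jΓ)) under the exp(jωt) convention used in the paper; the plasma frequency below is illustrative and not the value chosen for the designed cylinders.

```python
import numpy as np

def drude_mu(f, fp, gamma=0.0):
    """Drude permeability mu_r(w) = 1 - wp^2 / (w (w - j*gamma)),
    consistent with the exp(jwt) time convention."""
    w, wp = 2 * np.pi * f, 2 * np.pi * fp
    return 1.0 - wp**2 / (w * (w - 1j * gamma))

fp = 400e6  # hypothetical magnetic plasma frequency, not the paper's value
print(drude_mu(250e6, fp).real)  # negative real part (MNG behavior) below fp
```

Below the plasma frequency the real part of μ_r is negative, which is what makes the MNG shell capable of satisfying the sub-wavelength resonance condition; a nonzero Γ adds the loss studied in the lossy-case results.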
1-Cylinder Structure
The resonances of the 1-cylinder structures are illustrated in Figure 3(a), where the quantity RR = 10·log10|R_t| [dB], with the radiation resistance R_t of (1) suitably normalized, is shown as a function of frequency when each of the cylinders is centered at the origin and the ELC is located on the x-axis just outside the cylinder. The dashed curves represent the exact analytical results, while the full lines represent the corresponding HFSS results. The agreement between the exact analytical results and the HFSS results is seen to be excellent; a similar agreement was reported in [13]. It is clear that the individual 1-cylinder structures resonate at the desired design frequencies; moreover, the values of RR are comparable in the four cases and equal to approximately 20 dB, showing large enhancements of the radiation resistance of the ELC near the MTM-based structures relative to the case where the ELC is alone in free space. Figure 4 shows the corresponding results for the 2-cylinder structure at different separation distances d. For the larger separations, the resonances are slightly shifted relative to those of the individual cylinders; however, these resonances are still due to the dipole modes in the two cylinders. This is, however, not the case for the separation distance of d = 5 mm where, e.g., the first resonance at f = 241 MHz, which attains a higher amplitude than in the case of individual cylinders, is due to a mode characterized by strong coupling between the two cylinders, as illustrated in Figure 5, where the magnitude of the electric field is shown. With the diameter of the individual cylinders being approximately 20 mm, it is thus found that the sub-wavelength resonances of the individual cylinders also occur in 2-cylinder configurations whose overall size is as small as λ/20 at f = 241 MHz, the frequency at which the first resonance appears in Figure 4.

4-Cylinder Structure

Figure 6 shows the results for the 4-cylinder structure; the structure itself is shown in the inset in the right part of the figure.
Specifically, Figure 6 shows the quantity RR = 10·log10|R_t| [dB] as a function of frequency for the separation distances d = 50, 40, and 30 mm. For all separation distances d, four distinct resonances are found, although slightly shifted from the resonant frequencies of the individual cylinders and with lower amplitudes than in the case of the 1-cylinder structures in Figure 3(a). For a given separation d, this shift is larger for the 4-cylinder than for the 2-cylinder configuration and is seen to be largest for the cylinders C1 and C2. The majority of the resonances in Figure 6 are due to the dipole mode excitation in the individual cylinders; this is clear from Figure 7. This explains why, e.g., large RR values are attained for cylinders C3 and C4 not only at the frequencies f = 266.5 MHz and 282.5 MHz, respectively, but also at the original resonance frequencies of the individual cylinders, whereas this is found not to be the case for cylinders C2 and C1. Moreover, for the separation distance of d = 30 mm, the first resonance occurring at f = 240 MHz is not due to a clear dipole mode in the cylinder C2, but rather to a mode arising from coupling effects between the four cylinders, as is clearly illustrated by the result in Figure 8, which shows the magnitude of the electric field in this particular case. With the diameter of the individual cylinders being approximately 20 mm, it is thus found that the sub-wavelength resonances of the individual cylinders also occur in 4-cylinder configurations whose overall size is as small as λ/12.5. It is noted that if the individual cylinders are designed such that their resonances are even closer to each other, the coupling becomes more visible than in the case of the presently investigated cylinders.
This is supported by the results in Figure 3(a), which suggest that for close enough resonance frequencies, the radiation resistance curves (those parts with significant values of the radiation resistance) of the individual cylinders will considerably overlap each other, thus indicating a stronger coupling. The smallest overall electrical size corresponds to f = 240 MHz; this is the frequency at which the first resonance appears in Figure 6. In order to investigate the influence of losses, lossy MTMs were considered, with the loss specified at the design frequency f₀ of the respective cylinders, and the radiation resistance was computed for the 4-cylinder configuration with the separation distance d = 40 mm. The obtained results are reported in Figure 9 in terms of the quantity RR = 10·log10|R_t| [dB] as a function of frequency. This figure also includes the corresponding lossless-case result for comparison purposes. It is observed that resonances occur at the same frequencies as in the lossless case, but that the corresponding amplitudes, as expected, are reduced.
Summary and Conclusions
This work presented a detailed study of the resonant properties of a number of sub-wavelength MTM-based structures of circular cylindrical shape. In particular, attention was devoted to sub-wavelength resonances known to exist in isolated MTM-based structures of circular cylindrical shape, with the aim of determining whether the individual resonances are retained when several of such resonant structures are grouped to form a new structure. To this end, structures composed of 1, 2 and 4 sets of MTM-based concentric cylinders excited by an ELC were analyzed numerically in ANSOFT HFSS with regard to their near-field properties and radiation resistance. The MTMs of the individual structures were assumed to be simple, but lossy and dispersive, where the effects of the latter were accounted for by the Drude dispersion model.
It was demonstrated that the sub-wavelength resonances of the isolated MTM-based concentric cylinder structures also occur for the structures composed of 2 and 4 sets of MTM-based concentric cylinders, even in the case where the cylinders are closely spaced and the entire structure is thus electrically small. Specifically, overall sizes of about 1/20 and 1/12.5 of the smallest free-space wavelength were found for the 2- and 4-cylinder structures, respectively, in which the respective resonances were due to the dipole mode excitation in the constituent cylinders. These MTM-based structures thus offer the possibility of multi-resonant sub-wavelength configurations.
Introduction of robotic surgery for endometrial cancer into a Brazilian cancer service: a randomized trial evaluating perioperative clinical outcomes and costs
OBJECTIVE: The purpose of this study was to evaluate the clinical outcome and costs after the implementation of robotic surgery in the treatment of endometrial cancer, compared to the traditional laparoscopic approach. METHODS: In this prospective randomized study from 2015 to 2017, eighty-nine patients with endometrial carcinoma that was clinically restricted to the uterus were randomized in robotic surgery (44 cases) and traditional laparoscopic surgery (45 cases). We compared the number of retrieved lymph nodes, total time of surgery, time of each surgical step, blood loss, length of hospital stay, major and minor complications, conversion rates and costs. RESULTS: The ages of the patients ranged from 47 to 69 years. The median body mass index was 31.1 (21.4-54.2) in the robotic surgery arm and 31.6 (22.9-58.6) in the traditional laparoscopic arm. The median tumor sizes were 4.0 (1.5-10.0) cm and 4.0 (0.0-9.0) cm in the robotic and traditional laparoscopic surgery groups, respectively. The median total numbers of lymph nodes retrieved were 19 (3-61) and 20 (4-34) in the robotic and traditional laparoscopic surgery arms, respectively. The median total duration of the whole procedure was 319.5 (170-520) minutes in the robotic surgery arm and 248 (85-465) minutes in the traditional laparoscopic arm. Eight major complications were registered in each group. The total cost was 41% higher for robotic surgery than for traditional laparoscopic surgery. CONCLUSIONS: Robotic surgery for endometrial cancer presented equivalent perioperative morbidity to that of traditional laparoscopic surgery. The duration and total cost of robotic surgery were higher than those of traditional laparoscopic surgery.
INTRODUCTION
Endometrial cancer is the eighth most common cancer in Brazilian women, with 6,950 new cases estimated for 2016 at an incidence of 6.74 cases per 100,000 women. Endometrial cancer is more frequent in the southeast region of Brazil (9.58 cases/100,000) (1). Surgery is still considered the main treatment for endometrial cancer. Most surgeries in Brazil are still performed by gynecologists and obstetricians in general hospitals, and most of these clinicians perform only laparotomic surgeries.
Since 2009, at the Gynecological Division of the Instituto do Câncer do Estado de São Paulo (ICESP), endometrial cancer has preferably been treated by laparoscopy, and treatment has followed the recommendations of FIGO, namely, complete staging surgery comprising removal of the uterus, ovaries, uterine tubes, and pelvic and paraaortic lymph nodes (2). Robotic surgery using the da Vinci® robot (Intuitive Surgical Inc., CA, USA) was introduced at our institution in 2015 as a research project and has since been used for endometrial cancer staging. We aimed to evaluate the perioperative advantages, disadvantages and costs of robotic surgery compared to traditional laparoscopic surgery in patients with endometrial cancer.
METHODS

From January 2015 to June 2017, ninety consecutive patients with endometrial cancer that was apparently restricted to the uterus, who were candidates for minimally invasive surgery, were randomized to the robotic or traditional laparoscopic surgical arm. Two experienced laparoscopic surgeons performed both the robotic and the traditional laparoscopic surgeries (ASS and JPMC). The study was approved by the center's Institutional Review Board, and written informed consent was provided by all patients (protocol number CEP-FMUSP 438/13).
The method of randomization was as follows: permuted-block randomization was used to allow an adequate random distribution of patients between the groups. A function was created in Microsoft Excel (MS Excel), with randomization between blocks of 6, 8 and 10, and a list was generated with a sequence of 100 numbers. The randomization list was password-protected and was the responsibility of the study nurse. The sequence of numbers was concealed; that is, the research nurse had access to the patient's random number only after the informed consent form was signed (inclusion in the study) and the data were inserted into the worksheet. The surgeons participating in the project did not have access to the spreadsheet. The research nurse used email and entries in the institutional electronic medical records to inform the study team of the group to which the patient was allocated.
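The permuted-block scheme described above (blocks of 6, 8 and 10, two arms) can be sketched in a few lines; the function below is an illustration of the technique, not the MS Excel implementation used in the study.

```python
import random

def permuted_block_list(n, block_sizes=(6, 8, 10),
                        arms=("robotic", "laparoscopic"), seed=1):
    """Permuted-block randomization: each block contains both arms in equal
    numbers, shuffled, so the allocation stays balanced throughout accrual."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n:
        size = rng.choice(block_sizes)          # block length 6, 8 or 10
        block = [arms[i % 2] for i in range(size)]
        rng.shuffle(block)                      # permute within the block
        sequence.extend(block)
    return sequence[:n]

seq = permuted_block_list(100)
print(seq.count("robotic"), seq.count("laparoscopic"))
```

Because every completed block is exactly balanced, the imbalance between arms at any point in accrual never exceeds half the largest block size, while the varying block lengths keep the next assignment unpredictable.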
The following data were collected from the electronic medical record: age, body mass index, histological type, histological grade, tumor size, stage, number of pelvic lymph nodes, number of paraaortic lymph nodes, total duration of surgery, and the durations of several procedures, namely, right pelvic lymphadenectomy, left pelvic lymphadenectomy, paraaortic lymphadenectomy, hysterectomy and closure of the vaginal cuff.
In the traditional laparoscopic surgery arm, we used conventional permanent instruments from Karl Storz® (Stuttgart, Germany), with a high-definition camera and disposable advanced energy devices, LigaSure® (Medtronic, Minneapolis, MN, USA) or Ultracision® (Ethicon Endo-Surgery Inc., Cincinnati, OH, USA). All patients had previous clinical and anesthetic evaluations and radiological evaluations (thoracic and abdominal computed tomography and magnetic resonance imaging of the pelvis).
In all but two patients, single docking was used. Robotic surgery cases were performed with a supraumbilical 13-mm port for the camera and three 8-mm operating arms arranged in an arc. Another 12-mm port, and sometimes an additional 5-mm port, was placed in the right upper abdominal quadrant for use by the assistant. At the end of the surgery, all specimens were retrieved through the vagina. In six patients, the specimens were removed via a Pfannenstiel incision because of an enlarged uterus. Traditional laparoscopic surgeries were performed with two 5-mm ports close to the anterior iliac spines bilaterally and two 12-mm ports located at the umbilicus and in the suprapubic region. In all patients, the surgical plan was to remove the uterus, ovaries, uterine tubes, and pelvic and paraaortic lymph nodes. The upper anatomical limit of the dissection of the paraaortic lymph nodes was the level of the left renal vein.
The medians were compared with the Kruskal-Wallis test. The categorical variables were compared with the chi-square test. Statistical analyses were performed using MedCalc for Windows (version 17.9.2; MedCalc Software, Mariakerke, Belgium), and p-values less than 0.05 were considered significant.
The da Vinci® robot was donated to our hospital by the Brazilian government's Ministry of Science and Technology to carry out a multidisciplinary project. The aim was to evaluate the advantages and disadvantages of robotic surgery in a public hospital for the treatment of cancer in various specialties such as urological, digestive, head and neck, thoracic and gynecological surgeries.
The total cost to the hospital did not include the cost of the robot itself but did include the costs of hospital admission, theaters, drugs and pharmacy, blood products, high-dependence care, imaging, pathology, medical staffing and rehabilitation therapy. Cost data were analyzed without considering the cost of the robot acquisition.
In this study, we analyzed the perioperative outcomes and costs of the patients with endometrial cancer treated by robotic surgery versus traditional laparoscopic surgery.
RESULTS
Eighty-nine patients were randomized. Two candidates in the robotic arm were excluded (one patient refused surgical treatment for religious reasons, and the other patient abandoned treatment). In the traditional laparoscopic arm, two patients were excluded (one because she had developed inoperable peritoneal carcinomatosis and another due to poor clinical condition related to morbid obesity (BMI = 58.6 kg/m²)).
The patients who were distributed between the two arms of the study were similar according to age, BMI, preoperative histology, tumor grade, tumor size, and FIGO stage ( Table 1).
The median total number of lymph nodes retrieved was 19 in the robotic surgery arm and 20 in the traditional laparoscopic arm. The median numbers of retrieved paraaortic lymph nodes were 11.5 (0-32) and 15 (0-41) in robotic and traditional laparoscopic arms, respectively.
In our study, robotic surgery was more time consuming than traditional laparoscopic surgery. The median total duration of the whole procedure was 319.5 (170-520) minutes in the robotic surgery arm and 248 (85-465) minutes in the traditional laparoscopic surgery. We also separately analyzed the time in minutes devoted to each of the following procedures: right pelvic lymphadenectomy, 44.5 (26-128) vs. The median hospital stay was three days and was similar in both groups. One patient in the traditional laparoscopic surgery arm remained hospitalized for forty-three days until death due to septicemia. This patient had an infected and necrotic grade 3 endometrioid carcinoma. In the robotic group, a death occurred due to an unnoticed perforation of the duodenum. This patient developed peritonitis, and autopsy examination confirmed perforation of the duodenum.
In the robotic surgery arm, there was one conversion to laparotomy to correct a vena cava lesion, while in the traditional laparoscopic arm, there were two conversions to laparotomy, one due to advanced disease and another for multiple peritoneal adhesions.
There were eight occurrences of major complications in each arm. Major complications included vena cava, duodenal, obturator nerve, iliac artery and ureter injuries.
Other major complications included cases of thromboembolism and sepsis. Minor complications occurred in six cases in the robotic surgery arm and in two cases in the traditional laparoscopic arm. These complications comprised two cases of urinary tract infection, two cases of hernia in the trocar sites, one case of panniculitis, one case of vaginal cuff dehiscence, one case of bladder injury and one case of intestinal subocclusion. The most severe of all perioperative complications was vena cava injury. This complication occurred three times in both the robotic and traditional laparoscopic arms.
Costs analysis
The following costs were considered: the daily cost of hospitalization, intensive care unit admission, disposable material, medication, surgical theaters (per minute), medical gases (per minute), robot instruments, and diagnostic and therapeutic support services (Table 2). The cost of reusable robot instruments was calculated considering that each instrument could be used in 10 procedures. The standard staff for both types of surgery consisted of three surgeons, an anesthesiologist, a circulating nurse, and a scrub nurse.
Statistical analysis of costs was performed using the software SPSS version 19 (SPSS Inc., Chicago, IL, USA). The total costs were compared between the two surgery groups. For the comparison between the two groups, the Mann-Whitney U test was used. The chi-square test was used for nominal variables. The estimated total costs and subcategories (in US dollars) of the surgeries performed by robot versus laparoscopy are presented in Table 2.
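The Mann-Whitney U statistic underlying this cost comparison counts, for each pair of observations across the two groups, which one is larger (ties count one half); statistical software such as SPSS then derives the p-value from U. A minimal sketch with made-up cost figures, not the study's data:

```python
def mann_whitney_u(x, y):
    """U statistic for group x: count pairs (x_i, y_j) with x_i > y_j,
    ties counting one half."""
    u = 0.0
    for xi in x:
        for yj in y:
            u += 1.0 if xi > yj else 0.5 if xi == yj else 0.0
    return u

# Made-up per-case cost figures in US dollars (illustrative only)
robotic = [9500, 9700, 9800]
laparoscopic = [6500, 7000, 7200]
print(mann_whitney_u(robotic, laparoscopic))  # → 9.0: complete separation
```

When U equals the number of pairs (here 3 × 3 = 9), every value in one group exceeds every value in the other, the pattern behind a very small p-value such as the p<0.001 reported for the cost comparison.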
Without considering the acquisition and maintenance costs of the robot, the estimated median total costs for each endometrial cancer surgical treatment at our institution were 6,812 US dollars (SD ± 1,849) for traditional laparoscopic surgery and 9,655 US dollars (SD ± 850) for robotic surgery (p<0.001).
DISCUSSION
With the advent of minimally invasive methods, surgery for endometrial cancer has evolved substantially in recent decades (3). One of the major advancements was the change from open surgery to laparoscopic surgery that occurred in the late 1990s and that resulted in lower perioperative morbidity without losing the radicality or effectiveness of oncological surgery (4-6). However, the high complexity of traditional laparoscopic procedures requires time-consuming training. Different studies have shown that robotic hysterectomy with lymphadenectomy has a shorter learning curve than the laparoscopic approach (7) and that robotic surgery can be a suitable alternative to compensate for the lengthy training time needed to qualify the gynecological oncology surgeon for laparoscopic surgery.
In our current study, there were no significant differences between the patients subjected to robotic and traditional laparoscopic surgery in relation to age, body mass index, histological type, histological grade, tumor size, tumor stage or the total number of lymph nodes retrieved. Obesity poses a major challenge in laparoscopic surgery for endometrial carcinoma, and each additional unit of body mass index increases the risk of failure in complete laparoscopic surgery by 11% (8). Two-thirds of our patients had a body mass index greater than thirty. In the population served at our institution, endometrial carcinoma presents in more advanced stages (2). Only 14% of our patients had a tumor smaller than 2 cm.
The median total surgical time was higher in the robotic surgery group, at 319.5 (170-520) minutes, than in the traditional laparoscopic surgery group, at 248 (164-465) minutes. Leitao et al. (9) reported that robotic surgery requires the same amount of time as laparoscopic surgery until completion of the learning curve, which is considered to occur with forty cases.
In another study, Cragun et al. (11) found that patients with undifferentiated tumors who had at least 11 pelvic lymph nodes removed had a better overall survival rate than patients with fewer than 11 pelvic lymph nodes removed. In our study, the number of lymph nodes retrieved was equivalent in both types of surgery, namely, 29.5 (10-93) in robotic surgery and 34 in traditional laparoscopic surgery, which was considered sufficient.
The number of serious complications was expected to be higher at the beginning of the learning curve. We observed the same major complication rates when robotic surgery was compared with traditional laparoscopic surgery. However, when we compared the results of the first and second half of the study, we observed a marked decline in major complications (11 vs. 5), as well as minor complications (6 vs. 2), in the second half of the study. For the laparoscopic surgeries, our two surgeons had already completed the learning curve, while for robotic surgery, they were just beginning the learning curve. We did not evaluate the outcomes of robotic surgery performed by surgeons who were not experienced in endoscopic surgery.
We regarded any vena cava injury as a serious complication regardless of the extent and consequences for the patient. Vena cava injuries occurred three times in each type of surgery. In all but one case, the lesion was repaired without conversion to laparotomy. Injury to the vena cava at the time of the paraaortic lymphadenectomy is an event that has been reported in different series since the beginning of the era of minimally invasive surgery (12)(13)(14). One patient had unperceived duodenum perforation. In the immediate postoperative period, this patient developed peritonitis and septic shock and subsequently died. Sectioning of the obturator nerve at the time of pelvic lymphadenectomy is a complication that has been reported in some series (15,16). We observed sectioning of the obturator nerve in one robotic surgery and in two traditional laparoscopic surgeries. These injuries were successfully repaired without conversion to laparotomy. Other complications included two cases of ureter perforation, one case of iliac artery perforation and two cases of thromboembolism.
The major obstacle to the use of robotic surgery in the treatment of endometrial cancer is the cost of the system. The decision to implement robotic surgery in a public healthcare system should consider the total cost to the institution, which includes the costs of the robot, hospital admission, theaters, drugs and pharmacy, blood products, high-dependence care, imaging, pathology, medical staff and rehabilitation therapy.
The total cost of robotic surgery depends on multiple factors that vary between countries: the type of hospital, namely general hospitals versus referral centers; the volume of surgeries performed; the team's previous experience in minimally invasive surgery; the cost of disposable materials; the cost of the different surgical instruments; the duration of operating room use; the use of medical gases and medicines; and the cost of the team of medical professionals and paramedics involved. Costs associated with patient rehabilitation and treatment of complications may also be included. For these reasons, economic feasibility studies of robotic surgery present different results and need to be considered within the reality of each institution. There are several published economic evaluations of the implementation of robotic hysterectomy compared to laparoscopic or open surgery. Many of these results should not be generalized because they compare different cost categories, and the costs of robotic surgery relative to laparoscopic or open surgery vary greatly between studies. A comparison of the estimated costs of robotic hysterectomy (in US dollars) in different countries is presented in Table 3 (17-22).
Approximately seven hundred cancer surgeries are performed every month in our institution, and there is only one robot to be shared between different specialties. The robot was provided by the government. Since 2009, the surgical treatment of choice for endometrial carcinoma in our hospital has been traditional laparoscopic surgery. The use of the robot was not an option for surgeons in general. The robot was only available for cases included in research protocols involving all surgical specialties and for protocols that had evaluations of the impact of robotic surgery on patient outcomes and economic viability in the institution as their research objectives. An analysis of the costs of each procedure will be carried out in the future, along with an analysis of all specialties.
The costs of the robot-specific supplies are the main drivers of additional costs compared with traditional laparoscopic surgery. In our study, robotic hysterectomy for the treatment of endometrial cancer was 41.7% more expensive than traditional laparoscopic surgery and had an equivalent perioperative outcome.
Despite the variations in the absolute values of costs in different countries, we can clearly state that robotic surgery is still more expensive than traditional laparoscopic surgery, and the justification for its introduction into an institution is still based on reasons other than costs.
One of the most relevant indirect advantages of robotic surgery is its ability to allow institutions with a low volume of minimally invasive surgeries to change this profile by introducing robotic surgery that requires less time to complete the learning curve. Lau et al. (23) have reported that the rate of minimally invasive surgeries rose from 17% to 98% with the introduction of robotic surgery. In our institution, most surgeries for endometrial carcinoma (62%) are performed by laparoscopy (2), and the incorporation of the robot did not have a great impact on the number of patients treated by minimally invasive surgery. We have only one robot that is shared by surgeons of all other specialties in addition to gynecological oncology. This fact represents a limitation for the use of the robot for a small number of patients.
The introduction of robotic surgery in our public hospital for the treatment of endometrial cancer demonstrated perioperative morbidity that was equivalent to that of the traditional laparoscopic surgery performed by the same surgeons. The duration of robotic surgery was longer than that of traditional laparoscopic surgery. The total cost of robotic surgery was 41% higher than that of traditional laparoscopic surgery at our institution. Incorporation of the robot did not have a great impact on the number of patients treated by minimally invasive surgery because we have only one robot, which is available to a small number of patients.
AUTHOR CONTRIBUTIONS
Silva e Silva A and Carvalho JP conceptualized the study, participated in the design of the manuscript and in the draft of the manuscript, and carried out half of the surgeries. Anton C and Fernandes RP participated in the design of the manuscript and coordinated the study. Baracat EC provided critical revisions to the manuscript. Carvalho JP conceptualized the study and participated in the design, draft and writing of the manuscript.
Infall of nearby galaxies into the Virgo cluster as traced with HST
We measured Tip of the Red Giant Branch (TRGB) distances to nine galaxies in the direction of the Virgo cluster using the Advanced Camera for Surveys on the Hubble Space Telescope. These distances place seven galaxies (GR 34, UGC 7512, NGC 4517, IC 3583, NGC 4600, VCC 2037 and KDG 215) in front of Virgo, and two galaxies (IC 3023, KDG 177) likely inside the cluster. Distances and radial velocities of the galaxies situated between us and the Virgo core clearly exhibit the phenomenon of infall toward the cluster. In the case of spherically symmetric radial infall, we estimate the radius of the "zero-velocity surface" to be (7.2 ± 0.7) Mpc, which yields a total mass of the Virgo cluster of (8.0 ± 2.3) × 10^14 M_sun, in good agreement with its virial mass estimates. We conclude that the Virgo outskirts do not contain significant amounts of dark matter beyond the virial radius.
Introduction
In the standard ΛCDM cosmological model groups and clusters are built from the merging of already formed galaxies embedded in massive dark haloes (White & Rees, 1978).
Besides the dynamically evolved core, characterized by a virial radius R_v, any cluster has a more extended region where galaxies are falling towards the cluster center. In the simplest case of spherical symmetry, the region of infall is bounded by a "surface of zero velocity" at a radius R_0 which separates the cluster from the global Hubble expansion. The ratio R_0/R_v lies in the range of (3.5-4.0), being slightly dependent on the adopted cosmological parameter Ω_Λ (Tully 2010; Karachentsev 2012). As has been noted by different authors (Vennik 1984; Tully 1987; Crook et al. 2007; Makarov & Karachentsev 2011; Karachentsev 2012), the total virial masses of nearby groups and clusters lead to a mean local density of matter of Ω_m ≃ 0.08, that is, 1/3 of the mean global density Ω_m = 0.24 ± 0.03 (Spergel et al. 2007). One possible explanation of the disparity between the local and global density estimates may be that the outskirts of groups and clusters contain significant amounts of dark matter beyond their virial radii, beyond what is anticipated from the integrated light of galaxies within the infall domain.

1 Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program GO 12878.
If so, to get agreement between local and global values of Ω m , the total mass of the Virgo cluster (and other clusters) must be 3 times their virial masses. A measure of this missing mass can be made by mapping the pattern of infall into the cluster (or group). Uniquely in the case of the Virgo cluster, it is possible to resolve the location of galaxies in three dimensions and separate peculiar galaxies of infall from cosmic expansion as well as from virial motions. The possibility of a massive dark superhalo around Virgo can be easily tested using accurate distances at the near surface of the Virgo infall boundary with Tip of the Red Giant Branch measurements.
As shown by Lynden-Bell (1981) and Sandage (1986), in the case of a spherical overdensity with cosmological parameter Λ = 0, the radius R_0 depends only on the total mass of a group (cluster), M_T, and the age of the Universe, t_0:

M_T = (π^2 / 8G) R_0^3 t_0^{-2},   (1)

where G is the gravitational constant. Measuring R_0 via distances and radial velocities of galaxies outside the virial radius of the system, R_v, one can determine the total mass of the system independently of its virial mass estimate.
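The classic Lemaître-Tolman relation referenced here, M_T = π^2 R_0^3 / (8 G t_0^2), can be checked numerically. The constants below (G in astrophysical units, t_0 = 13.7 Gyr) are standard values assumed for this sketch, not taken from the text:

```python
import math

# Rough numerical check of the classic zero-velocity mass relation
# M_T = pi^2 R0^3 / (8 G t0^2).
G = 4.30e-9                        # gravitational constant, Mpc (km/s)^2 / Msun
t0 = 13.7e9 * 3.156e7 / 3.086e19   # age of the Universe, converted to Mpc/(km/s)

def lemaitre_tolman_mass(r0_mpc):
    """Total mass (Msun) inside the zero-velocity surface of radius R0 (Mpc)."""
    return math.pi ** 2 * r0_mpc ** 3 / (8 * G * t0 ** 2)

m = lemaitre_tolman_mass(7.2)  # R0 value measured later in the paper
print(f"M_T(classic)   = {m:.2e} Msun")
print(f"with ~1.5x Lambda correction = {1.5 * m:.2e} Msun")  # ~8e14, as quoted
```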
Numerous measurements of distances to nearby galaxies obtained recently with the Hubble Space Telescope (HST) allowed us to investigate the Hubble flow around the Local Group (Karachentsev et al. 2009) and some other nearby groups: M 81 (Karachentsev & Kashibadze, 2006) and Cen A (Karachentsev et al. 2006). The average total-to-virial mass ratio for these proximate groups was derived from R_0 via eq. (1) and from R_v (Karachentsev, 2005). But as was noticed by Peirani & Pacheco (2006) and Karachentsev et al. (2007), in a flat universe dominated by dark energy the resulting M_T(R_0) mass is higher than that derived from the canonical Lemaître-Tolman eq. (1). In the "concordant" cosmological model with a Λ-term and Ω_m as the matter component, eq. (1) takes a modified form. Assuming Ω_m = 0.24 and H_0 = 72 km s^-1 Mpc^-1, it can be rewritten as

M_T / M_⊙ = 2.12 × 10^12 (R_0 / Mpc)^3,   (4)

which yields a mass that is 1.5 times as large as that derived from the classic eq. (1). This correction leads to good agreement on average between the R_0 mass estimates and the virial masses for the above-mentioned galaxy groups.
The most suitable object to explore the infall phenomenon on a cluster scale is the nearest massive cluster of galaxies, in Virgo. The kinematics and dynamics of Virgo cluster infall were studied by Hoffman et al. (1980), Tonry & Davis (1981), Hoffman & Salpeter (1982), Tully & Shaya (1984), Teerikorpi et al. (1992), and Ekholm et al. (1999, 2000). In a model developed by Tonry et al. (2000, 2001), based on distance measurements of 300 E and S0 galaxies via their surface brightness fluctuations, the Virgo cluster, with its center at a distance of 17 Mpc and a virial mass M_v = 7 × 10^14 M_⊙, generates an infall velocity of the Local Group (LG) towards Virgo of about 140 km s^-1. With this value of the virial mass, the expected radius of the infall zone is R_0 = 7.0 Mpc, or Θ_0 = 23° in angular measure.
Recently, Karachentsev & Nasonova (2010) considered the existing data on radial velocities and distances of 454 galaxies situated within Θ = 30° around Virgo and came to the conclusion that the value of the radius R_0 lies in the range (5.0-7.5) Mpc. In the standard ΛCDM model with the parameters Ω_m = 0.24 and H_0 = 72 km s^-1 Mpc^-1 (Spergel et al. 2007), these values of R_0 correspond to a total cluster mass M_T = (2.7-8.9) × 10^14 M_⊙.
The mass estimate derived from external galaxy motions does not contradict the virial mass obtained from internal motions. However, the present accuracy is insufficient to judge whether or not the periphery of the Virgo cluster contains a significant amount of dark matter outside its virial radius R_v = 1.8 Mpc (Hoffman et al. 1980). The distance of the Virgo cluster itself is now well established by observations of Cepheid variables in 4 galaxies. The Cepheid distances anchor precision relative distances for 84 galaxies with HST SBF measurements (Mei et al. 2007; Blakeslee et al. 2009) and 4 galaxies with SNIa measurements (Jha et al. 2007). These galaxies reside in the cluster core at R_LG = (16.5 ± 2) Mpc and are therefore useless as tracers of the Virgocentric infall.
Expected pattern of the infall
At large distances on the diagram, behind the Virgo cluster, while most distance measures are based on the optical or IR Tully-Fisher relation with typical errors of ∼20%, there is one very well constrained group. The Virgo W′ group around NGC 4365 (de Vaucouleurs 1961), with <V_LG> ≃ 1000 km s^-1, contains one galaxy with both a Cepheid and a SNIa measurement and 5 other galaxies with HST SBF measurements. These observations locate Virgo W′ at 23 Mpc, 6.5 Mpc behind Virgo. The group velocity and distance indicate that this group lies very near the edge of the Virgo infall zone at R_0 on the far side of the cluster.
The most feasible way to trace the Z-like wave of Virgocentric infall in detail is to make distance measurements to galaxies on the front side of the cluster via the TRGB. This method (Lee et al. 1993) is applicable to galaxies of all morphological types and provides the needed distance accuracy of ∼5-7% (Rizzi et al. 2007). The greatest precision will be achieved with lines of sight close to the cluster, where the projection factors of radial motions will be minimal. Unfortunately, in the virial cone Θ_v = 6°, there is no foreground galaxy with a literature TRGB distance. In the wider area with Θ < 15° there are only 2 galaxies, NGC 4826 and GR-8, with existing TRGB distances between the LG and Virgo.
Selection of targets
The scarcity of TRGB data on the near side of the Virgocentric infall wave can be understood. In the past, targets for TRGB distance measurements with HST were usually galaxies from the Kraan-Korteweg & Tammann (1979) sample with radial velocities V_LG < 500 km s^-1. In the Virgo core direction a galaxy with a velocity ∼500 km s^-1 may be a representative of the Local Volume (R_LG < 10 Mpc), or a Virgo cluster member, or even be situated behind the cluster at R_LG ≃ 20 Mpc and infalling toward us. The selection of candidates that might be true nearby galaxies hidden among the huge number of Virgo cluster members is a complicated task. That is why Kraan-Korteweg & Tammann (1979) even excluded the Virgo cluster core (Θ < 6°) from their consideration. We undertook a special search for likely foreground galaxies, inspecting SDSS images of more than 2000 objects in the specified area. Among these we found 37 galaxies with HI line widths that yield Tully-Fisher distances less than ∼11 Mpc. Their radial velocities lie in the range V_LG = (400-1400) km s^-1, and the majority of these turn out to be blue dwarf galaxies showing no apparent concentration towards the Virgo center. As objects for our pilot program to measure distances with ACS HST via the TRGB, we selected 8 galaxies with the lower Tully-Fisher distance estimates. In the target list we also included the S0-type galaxy NGC 4600, with a distance estimate via surface brightness fluctuations by Tonry et al. (2001). (The case of the nearby S0a galaxy NGC 4826, with D(SBF) = 7.48 Mpc (Tonry et al. 2001) and D(TRGB) = 4.37 Mpc (Jacobs et al. 2009), tells us that these methods sometimes give distance estimates with a significant difference.) At present all nine of our targets have been imaged with HST within GO 12878.
Galaxies situated on the nearby boundary of the "zero-velocity sphere" will have radial velocities close to the mean cluster value, <V_Virgo> = 1000 km s^-1, and, given the expected value R_0 ≃ 7 Mpc, distances R_LG ≃ 10 Mpc. The F814W and F606W images of these galaxies obtained with ACS on HST in a two-orbit-per-object mode can determine their TRGB distances with an accuracy of ∼7%, or ∼0.7 Mpc. Given that the total mass of the cluster within the radius R_0 is expressed by eq. (4), the measurement of R_0 ≃ 7 Mpc with an accuracy of ∼0.7 Mpc can yield a mass of the Virgo cluster with an error of ∼30%.
Observations and data processing
We have observed 9 galaxies with the Advanced Camera for Surveys (ACS). Stellar photometry was performed with the DOLPHOT package (Dolphin 2000), using the recommended recipe and parameters. In brief, this involves the following steps.
First, pixels that are flagged as bad or saturated in the data quality images were marked in the data images. Second, pixel area maps were applied to restore the correct count rates. Finally, the photometry was run. In order to be reported, a star had to be recovered with S/N of at least five in both filters, be relatively clean of bad pixels (such that the DOLPHOT flags are zero) in both filters, and pass our goodness-of-fit criteria (χ ≤ 2.5 and |sharp| ≤ 0.3). These restrictions reject non-stellar and blended objects. At the high Galactic latitude of the Virgo cluster, foreground stars from the Milky Way are insignificant contaminants. For some of the most distant galaxies we extended to stars with S/N > 2 in order to evaluate the TRGB. This extension introduces a lot of noise, which is monitored by plotting CMDs of empty regions beside the galaxy body.
The TRGB is determined by a maximum likelihood analysis monitored by recovery of artificial stars (Makarov et al. 2006). Artificial stars with a wide range of known magnitudes and colors are imposed at intervals over the surface of the target and recovered in the same manner as the real stars. The calibration of the absolute value of the TRGB, including a small color term, has been described by Rizzi et al. (2007).
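The tip measurement itself uses the maximum-likelihood method of Makarov et al. (2006); purely as a toy illustration of why the tip is detectable at all, the sketch below builds a synthetic luminosity function with a break and locates the break with a simple noise-weighted difference filter. Everything here (the tip magnitude, slope, bin width, and star counts) is an assumption for illustration, not the paper's method or data:

```python
import random
import math

random.seed(1)

# Toy luminosity function: RGB stars appear only fainter than the tip.
m_trgb = 27.0
stars = []
while len(stars) < 3000:
    m = random.uniform(25.0, 29.0)
    # LF rises as 10^(0.3 (m - m_trgb)) fainter than the tip; empty above it
    if m >= m_trgb and random.random() < 10 ** (0.3 * (m - m_trgb)) / 10 ** 0.6:
        stars.append(m)

# Bin the stars and apply a Poisson-weighted [-1, 0, +1] edge filter;
# the response peaks at the luminosity-function discontinuity (the tip).
edges = [25.0 + 0.05 * i for i in range(81)]
counts = [sum(1 for m in stars if lo <= m < lo + 0.05) for lo in edges[:-1]]
response = [
    (counts[i + 1] - counts[i - 1]) / math.sqrt(counts[i + 1] + counts[i - 1] + 1)
    for i in range(1, len(counts) - 1)
]
tip_bin = response.index(max(response)) + 1
print(f"detected tip near m = {edges[tip_bin]:.2f}")
```

The Poisson weighting in the response suppresses spurious peaks at the faint, well-populated end of the luminosity function, which is the same reason real edge-detection and maximum-likelihood tip estimators weight by the expected counts.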
TRGB distances to nine target galaxies
Images of our target galaxies taken from the Sloan Digital Sky Survey (http://www.sdss.org/) are shown in Figure 2. Each field has a size of 6 by 6 arcminutes; North is up and East is left. The ACS HST footprints are superimposed on the SDSS frames. In Figure 3 a mosaic of enlarged ACS (F606W + F814W) images of the nine galaxies is shown; their size is 1 arcminute, North is up and East is left. Color-magnitude diagrams (CMDs) of F814W versus (F606W − F814W) are presented in Figure 4.
A summary of some basic parameters for the observed galaxies, as well as the resulting distance moduli, is given in Table 1. Some additional comments about the galaxy properties are briefly discussed below. The column descriptions include: (1) galaxy name; (2) TRGB magnitude and its 68% uncertainty from the maximum likelihood analysis; ...; (12) the linear distance (in Mpc) and a conservative global characterization of 10% uncertainty for a one-orbit ACS observation of a galaxy near 10 Mpc.
GR34=VCC530, UGC7512 and VCC2037. These are irregular-type dwarf galaxies with narrow HI lines. The new TRGB distances to them agree with the Tully-Fisher distances, confirming that all three galaxies are situated in front of the Virgo cluster.
NGC4517. This Sd galaxy, seen edge-on, has a major angular diameter of about 12′, extending far beyond the ACS frame. Its CMD is constructed from an outer field along the minor axis, chosen to sample the halo and avoid crowded, dusty regions of star formation. The TF distance to NGC 4517 is consistent with the TRGB distance.
IC3583. This Magellanic-type dwarf has an asymmetric diffuse halo extended to the West. The field contributing to the CMD is shown in Fig. 4.

NGC4600. This is a gas-poor dwarf lenticular galaxy with Hα emission in the core (Karachentsev & Kaisin, 2010). We note only moderate agreement between the distance estimates for NGC 4600 via surface brightness fluctuations (Tonry et al. 2001) and from the TRGB. It is a bit unexpected to find this isolated dS0 galaxy in front of the Virgo cluster rather than in the virial core.
KDG215=LEDA44055. This galaxy is a gas-rich, low-surface-brightness dwarf with a narrow HI line, a high hydrogen-to-stellar mass ratio M_HI/M_* = 3.1, and a narrow RGB characteristic of a low-metallicity system. KDG215 lies more than a magnitude closer than any of the other targets, at 4.8 Mpc.
IC3023 and KDG177=VCC1816. Both of these Im-type galaxies are HI-rich, actively star-forming objects typical of field galaxies. In spite of their narrow HI lines, 44 and 30 km/s, they both appear to belong to the Virgo cluster. The TRGB is not seen at the bright magnitudes that would be expected if these galaxies were in the Virgo foreground. In each case, the TRGB is probably being seen around I ∼ 27, as expected for a cluster member. These tentative measurements are at the limit of the current HST photometry, and we do not attempt a distance determination.
Apart from these objects, there are five other galaxies in front of the Virgo cluster that have accurate distance measurements. Information about them is collected in Table 2. We use the data on distances and radial velocities of these 7 + 5 galaxies from Tables 1 and 2 to trace the near-side Virgocentric infall. The two probable Virgo core galaxies with uncertain distances, IC 3023 and KDG 177, are excluded from consideration. In addition, the analysis includes the galaxy NGC 4365 in the Virgo W′ cloud, with an accurate distance, as a representative of the back-side infall onto the Virgo cluster. Its parameters are given in the last line of Table 2.
6. Estimating the total mass of the Virgo cluster

As noted above, the analysis of available observational data on radial velocities and distances for several hundred galaxies in the vicinity of the Virgo cluster led to the conclusion that the radius of the zero-velocity surface of the cluster lies in the range R_0 = (5.0-7.5) Mpc (Karachentsev & Nasonova, 2010). According to equation (4), this scatter in R_0 leads to a wide scatter in the total mass estimates of the cluster, M_T = (2.7-8.9) × 10^14 M_⊙, a spread exceeding a factor of three. New accurate distance measurements to relatively few galaxies residing near the front side of Virgo fix the R_0 and M_T quantities in a narrower interval. (The column designations of Table 2 are similar to those in Table 1.)

To determine the radius R_0, one needs to fix the mean radial velocity of the cluster, V_Virgo^LG, in the rest frame of the Local Group. According to Binggeli et al. (1993), it equals +946 ± 35 km/s. This estimate was obtained over a large number of galaxies with measured velocities but unmeasured distances, whose membership in the Virgo cluster was considered to be probable. Based on the galaxies with membership in Virgo confirmed by accurate distances, Mei et al. (2007) derived a mean cluster velocity of +1004 ± 70 km/s.
The difference of 58 km/s between these estimates can be caused by a specific selection effect in Binggeli's estimate. In a spherical layer between the radii R_v and R_0, bounded by a cone with an angular radius of Θ_0 ∼ 20°, the expected number of galaxies behind the cluster is greater than that in front of the cluster. In the case of galaxies radially infalling into the cluster core, the difference in the number of galaxies falling toward us and away from us should artificially decrease the mean radial velocity of the sample. Probably, some (unknown) pre-selection effect on velocities could also be present in the list of targets investigated by Mei et al. (2007). We adopt the average of these two independent values as the radial velocity of the Virgo cluster centroid, V_Virgo^LG = 975 ± 29 km/s, shown in Figure 6 as the horizontal dashed line. The corresponding infall velocity of the Local Group toward Virgo is not significantly higher than the previous estimates of −139 km/s (Tonry et al. 2001) and −185 km/s (Tully et al. 2008).
The presented data also exhibit that the solid wave-like line crosses the line of the mean cluster velocity at a distance of 9.3 Mpc. Therefore, the radius of the zero-velocity surface around the Virgo cluster turns out to be R_0 = 16.5 − 9.3 = 7.2 Mpc. There are at least three circumstances affecting this estimate: a) the uncertainty of the Virgo center position, which is ∼0.4 Mpc; b) the uncertainty of the mean velocity of the cluster, ∼30 km/s, corresponding to ∼0.3 Mpc on the distance scale; and c) the mean-square scatter of galaxies with respect to the Z-like line, which amounts to ∼0.5 Mpc. Considering these factors as statistically independent, we obtain the sought-for radius R_0 = (7.2 ± 0.7) Mpc.
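The quadrature combination of these three error terms, and its propagation into the total mass, can be checked in a few lines. The mass coefficient used below is inferred from the R_0 and M_T value pairs quoted in the text, so this is a consistency check rather than a reproduction of the full calculation:

```python
import math

# Error budget for R0, as itemized in the text (all in Mpc), combined
# in quadrature under the assumption of statistical independence.
err_center, err_velocity, err_scatter = 0.4, 0.3, 0.5
dr0 = math.sqrt(err_center**2 + err_velocity**2 + err_scatter**2)
print(f"sigma(R0) = {dr0:.1f} Mpc")  # ~0.7 Mpc

# Propagation into the total mass: M_T scales as R0^3, so the fractional
# mass error is three times the fractional R0 error.  The coefficient
# below is inferred from the R0/M_T pairs quoted in the text.
r0 = 7.2
m_t = 2.12e12 * r0**3              # solar masses
dm_t = 3 * (dr0 / r0) * m_t
print(f"M_T = ({m_t/1e14:.1f} +- {dm_t/1e14:.1f}) x 10^14 Msun")
```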
According to equation (4), this quantity corresponds to a total mass of the Virgo cluster of M_T = (8.0 ± 2.3) × 10^14 M_⊙. Virial mass estimates for Virgo, in units of 10^14 M_⊙, are: 6.2 (de Vaucouleurs, 1960), 7.5 (Tully & Shaya, 1984) and 7.2 (Giraud, 1999). When the difference of the individual Θ of the galaxies is taken into account, the value of σ_v drops to 130 km/s. An essential part of this scatter, ∼90 km/s, is caused by errors of the distance measurements, which are ∼(7-10)%. After a quadratic subtraction of the component related to distance errors, the remaining ("cosmic") dispersion of radial velocities turns out to be ∼95 km/s. Therefore, one can say that the infall flow pattern around the Virgo cluster looks rather "cold".
The measurements of distances to nearby galaxies with the Hubble Space Telescope make the picture of galaxy infall into the Virgo cluster much more distinct. Among the nine galaxies selected as Virgo foreground candidates for our pilot HST GO 12878 program, seven reside in the expected near region while the two others are probably cluster members.
In our list of targets for HST there are ∼ 30 more galaxies with Tully-Fisher distances around 10 Mpc. Measurements of their distances with ACS HST can give us a more precise estimate of the total mass of the nearest large cluster via infalling galaxy motions.
Multicolor images of galaxies that have been obtained with the 3.5-meter CFHT telescope under the program "Next Generation Virgo Cluster Survey" (Ferrarese et al. 2012) will be useful in choosing the best candidates for new HST observations.
In the framework of the simplest spherically-symmetric radial infall of galaxies into a point-like central mass, the observed distances and radial velocities of galaxies trace the expected infall pattern. As was noted by Karachentsev et al. (2003) and Tully et al. (2008, 2013), the nearby galaxies residing inside a radius of ∼6 Mpc around the Local Group form a flat configuration (the "Local Sheet") with surprisingly low peculiar velocities of the barycenters of groups, ∼30 km/s. A hint of the existence of the Local Sheet can be seen in Figure 6 too, in the behavior of the three nearest galaxies.

Figure 6 caption (fragment): galaxies with new TRGB distances (Table 1); open symbols: galaxies with distances drawn from the literature (Table 2). The horizontal bars indicate distance errors. The inclined dashed line marks the unperturbed Hubble flow. The horizontal dashed line corresponds to the mean radial velocity of the Virgo cluster. The grey vertical column denotes the zone of virial motions.
Repercussion of noise in the neonatal intensive care unit
Objective: To identify the repercussions of noise in the neonatal intensive care unit on mothers and newborns, and on the interactions of neonates with healthcare professionals, from the mothers' perspective. Methods: This descriptive cross-sectional study was carried out in the neonatal intensive care unit. The study population was composed of 95 mothers. Data were collected using forms. The statistical analysis was
Introduction
The birth of a premature and/or sick newborn constitutes a crisis for the family, because parents struggle to recognize, in the physical and behavioral characteristics of the child, the idealized baby they had expected, which causes frustration. (1,3-5) This situation can hinder the establishment of an affective bond and negatively affect family dynamics.
The situation can be worse when the newborn is hospitalized in the neonatal intensive care unit (NICU), a restricted and unfamiliar environment that intimidates most families. (4,6) Due to the high complexity of the procedures and technology used in the NICU, this environment exposes its occupants to intense sensory stimuli, such as excessive lighting and noise, which are incompatible with the well-being of neonates, families and professionals. (9-11)

Noise is loud or confused sound at frequencies physiologically incompatible with the human ear, and it may cause physical lesions and physiological and behavioral changes. (12,15,16) Newborns exposed to high sound pressure levels (SPL) could present hypoxia, increased adrenocorticotropic hormone and adrenaline release, increased heart rate, systemic vasoconstriction, pupil dilation, elevated blood and intracranial pressure, and increased oxygen consumption and energy expenditure, which in the long term could result in delayed weight gain. Hearing loss due to long exposure to noise among newborns hospitalized in the NICU is a problem widely discussed in the literature. (7,8) Among the deleterious effects of high SPL for professionals, the most common are increased blood pressure, changes in heart rate and muscle tone, headache, hearing loss, low concentration, irritability, burnout syndrome and job dissatisfaction. (9,10) Brazilian studies have shown high SPL within the unit and inside incubators. (13,14) However, few studies have been published on this topic, particularly from the health service user's perspective. Therefore, because of the importance of an environment that enables an affective bond between mother and child after birth, as well as adequate communication between family and health professionals, this study attempted to verify whether noise in the NICU could change the interaction between mothers and their babies, and between mothers and health professionals.
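For context, sound pressure level is defined on a logarithmic scale relative to a reference pressure of 20 µPa. The sketch below shows only this standard definition; the ~45 dB figure in the comment is a commonly cited recommendation for NICU environments, not a value from this study:

```python
import math

def spl_db(pressure_pa, p_ref=20e-6):
    """Sound pressure level in dB re 20 micropascals (standard reference)."""
    return 20 * math.log10(pressure_pa / p_ref)

# Example: a pressure of 0.2 Pa corresponds to 80 dB SPL, well above the
# ~45 dB often recommended for NICU environments.
print(f"{spl_db(0.2):.0f} dB")  # 80 dB
```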
In this study, we aimed to identify the repercussions of noise in the neonatal intensive care unit on mothers, newborns, and their interaction. The interaction between mothers and healthcare professionals was also examined.
In this study, the term perception refers to the act of acquiring knowledge through the sensory organs; in other words, to notice.(17)
Methods
This descriptive cross-sectional study was conducted in two rooms of the neonatal intensive care unit (NICU) of an academic hospital in São Paulo, SP, Brazil. The rooms are located on the eighth floor of the hospital and have four beds reserved exclusively for patients of the public health system enrolled in the hospital's prenatal care program. The unit lacks an adequate physical structure to reduce high SPL, particularly because of the presence of many professionals, students, and families. The single intervention on this front, called "sleeping time," consists of environmental measures such as reduced lighting and noise and minimal manipulation of newborns.
Because of the limited space within the unit, parents are allowed to stay inside the NICU from 9 a.m. to 9 p.m. In addition, professionals often report the neonate's clinical condition and treatment plan to families at the bedside, close to the incubator.
Data collection followed all national and international ethical and legal requirements for research on human subjects.
Independent variables were age, mothers' level of formal education, number of visits to the baby in the unit, duration of the baby's hospitalization, and previous experience with hospitalization of a family member in a NICU.
The dependent variable "mothers' perception of noise" was measured in terms of the perceived noise level, the repercussions of noise on the mother and her baby, the mothers' reactions to noise, professional management of noise, and the influence of noise on the mother's interaction with her baby and with the multidisciplinary team. Data were collected through interviews using structured forms with open and closed questions.
To validate the forms, a pre-test was conducted with mothers with similar characteristics whose newborns were hospitalized in other neonatal units. In general, interviews lasted approximately 15 minutes.
The study population comprised 95 mothers whose babies were hospitalized during the study period. Inclusion criteria were mothers whose child was hospitalized in the NICU and who had visited the child for at least three days, excluding the day of the interview. All mothers agreed to participate. The minimum number of visits was set at three because we considered this sufficient for mothers to form a perception of the noise in the unit. Mothers who reported hearing loss or psychiatric disorders were excluded.
For data analysis, we used absolute and relative frequencies, means, and standard deviations (SD). Open questions were analyzed according to the frequency of answers. Spearman's rank correlation coefficient was used to analyze the relationship between the number of days that families visited newborns in the NICU and the variables of mothers' perceptions related to the negative impact of noise on themselves, on their child, and on the level of care perceived in interactions with professionals.
The receiver operating characteristic (ROC) curve was used to verify whether the number of days mothers visited their child in the NICU discriminated any of the characteristics evaluated.
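The two analyses above can be sketched in a few lines of code. The following is an illustrative sketch only, not the authors' analysis: the visit-day counts, discomfort scores, outcome labels, and the eight-day cutoff used below are hypothetical, and a real analysis would use a dedicated statistics package.

```python
# Illustrative sketch (hypothetical data): Spearman's rank correlation and a
# single ROC operating point, as described in the Methods.

def ranks(values):
    """Assign 1-based average ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation computed on the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def roc_point(days, labels, cutoff):
    """Sensitivity and specificity when 'days > cutoff' predicts the outcome."""
    tp = sum(1 for d, l in zip(days, labels) if d > cutoff and l == 1)
    fn = sum(1 for d, l in zip(days, labels) if d <= cutoff and l == 1)
    tn = sum(1 for d, l in zip(days, labels) if d <= cutoff and l == 0)
    fp = sum(1 for d, l in zip(days, labels) if d > cutoff and l == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical example: visit days vs. a 0-10 discomfort score.
visit_days = [2, 5, 8, 12, 20, 30, 45]
discomfort = [1, 2, 4, 5, 7, 8, 9]
print(round(spearman_rho(visit_days, discomfort), 3))  # monotonic -> 1.0
```

Sweeping `cutoff` in `roc_point` over all observed visit-day values traces the full ROC curve from which a sensitivity/specificity pair like the one reported in the Results can be read off.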
Results
Regarding the independent variables, the sample presented: a) mean age of 28.8 years (SD 6.9); b) formal education of complete high school in 52.6% and incomplete high school in 30.5%; c) mean number of visits to the newborn of 16.7 (SD 26.8) days; d) mean duration of the newborn's hospitalization of 18.8 (SD 28.3) days; e) 93.7% had never been in a NICU before.
Most mothers (80%) considered the NICU noisy. Of the total, 25.3% described it as mildly noisy, 22.1% as moderately noisy, 18.9% as noisy, and only 13.7% as very noisy.
Regarding sound-related discomfort, 59.1% of mothers reported that the noise disturbed them, 29.6% perceived the environment as noisy but were not disturbed by it, and 11.3% reported no discomfort. When loud noise was perceived in the unit, mothers reported feeling agitated (23%) or tense (42%), wishing to cry (20%), or having a headache (15%).
Mothers who visited the unit most often perceived greater discomfort (mean 20.9 days; SD 35.2) (Figure 1). A total of 69.7% of participants perceived that the noise in the NICU disturbed their babies and, among them, 32.1% considered that their baby was very disturbed, based on the child's agitation, grumbling, crying, and movements suggesting that he or she was scared, or on facial expressions of pain. Mothers who visited their child most often had a greater perception of the disturbance that noise caused their baby (mean 20.1 days; SD 34.3) (Figure 2). Regarding reactions to noise, 77% of mothers reported staying in the environment even when the noise in the NICU was loud enough to disturb them, whereas the others (22.9%) reported leaving the place, leaving the child alone. A total of 40 mothers (66.7%) were careful not to make loud sounds when close to the baby and, among them, 87.5% tried not to make any noise. However, 31 mothers (40.8%) did not mention this concern.
More than half of the mothers (52.1%) considered that they spoke less with their child when the NICU was noisy. A considerable portion (45.8%) also reported speaking more softly with their babies because of the environmental conditions. Regarding tactile interaction, slightly less than half of the mothers (47.9%) reported touching their child less when the environment was noisy.
A total of 66.7% of mothers reported that they did not change their tone of voice when communicating with the healthcare team, even when the NICU had unfavorable acoustic conditions. The ROC curve had a sensitivity of 0.96 and a specificity of 0.65, indicating that mothers who visited their child in the NICU for more than eight days had a high probability of not changing their tone of voice in noisy situations (Figure 3). Interestingly, 50% of participants reported that when the environment was noisy they had trouble concentrating while professionals were explaining something to them. More frequent visits to the newborn in the NICU (mean 12.9 days; SD 8.7) were associated with a higher proportion of difficulty concentrating during explanations about the baby's health status (Figure 4).
Discussion
Several studies have reported high levels of stress in parents caused by their child's hospitalization in the NICU. This stress arises not only from suffering over the newborn's critical health status but also because the NICU is an unfamiliar environment that may frighten them.(4,18) Little is known about the consequences of noise for parents and about how they perceive(19) and understand it. The findings of this study showed that noise in the NICU was a disturbing factor that increased mothers' stress, reinforcing the importance of noise management in the unit. In addition, some studies have detected sound pressure levels above those recommended by regulatory agencies, both in the NICU and inside incubators.(13,14,20,21)

Some authors have emphasized the importance of an adequate environment. Newborns' responses to their first external stimuli are crucial to establishing an affective bond between families and their babies.(1,2,22) In this sense, it is easier and more motivating for parents to interact with their child when they perceive a positive response. Of the behavioral states that range from deep sleep to crying, quiet alertness is the most favorable for interaction: the baby is calm and alert, shows low motor activity and regular breathing, and its sensory systems, such as hearing, are more open to interaction. This state also helps the caregiver deliver better care.(23,24) However, more intense environmental stimulation, such as excessive lighting and loud noise, can make the baby hyperalert and prone to crying.(25) When babies are exposed to an excessively noisy environment, parents may give them less attention, negatively affecting interaction and bond formation.(5)

Our data indicated that more frequent visits to the NICU were associated with a greater perception of noise disturbing both mothers and their babies. For this reason, longer hospitalization could negatively affect mothers, increase stress, and consequently reduce interaction between mother and child. Another study, however, reported that, from the professionals' perspective, noise does not affect families,(19) and it suggested that multidisciplinary teams develop strategies to better receive parents whose babies are hospitalized for longer periods, emphasizing humanized care that includes families.

According to the mothers, noise also affected communication with health professionals, because most of them were unable to concentrate during professionals' explanations of their babies' clinical status. Family-centered care in the NICU requires that the health team provide adequate conditions for communicating with families during hospitalization. Professionals should be open to answering questions and providing information about the care delivered and the child's diagnosis, treatment, and prognosis. Such actions are crucial to fostering an environment of trust.

In this context, the family has the chance to share their feelings, fears, and concerns with healthcare professionals. However, understanding oral communication is a complex process that involves the identification and comprehension of articulated words leading to a correct understanding of the message.(26) This justifies the importance of good acoustic conditions in the NICU. Moreover, the literature reports that even people with normal hearing complain of difficulty understanding others in a noisy environment.(27) Hence, communication between families and professionals in a NICU with elevated SPL may be impaired and, as a result, much important information may be lost, which can cause conflicts between parents and professionals and make the environment less humanized.

From the mothers' perspective, this study revealed one aspect of the ecological environment of the NICU, indicating the influence of noise on mother-infant interaction in the first phase of the life cycle. It is important to highlight that these first interactions are vital for parents and can shape the quality of their interaction with the child in the future. Another important point is that an environment free of noise encourages parents to spend more time in the NICU with their child; spending time with the hospitalized neonate is, moreover, a parental right provided by law.

Further studies are encouraged to better understand the consequences that a noisy environment may have for families whose children are hospitalized in a NICU.

Considering the findings of this study, health services should promote educational programs to raise professionals' awareness of the importance of a comfortable acoustic environment for care. Other administrative and organizational measures are also required, such as establishing criteria for purchasing new equipment that produces less noise, promoting preventive maintenance, and installing sensors for systematic noise monitoring. Additional requirements include an adequate physical structure that offers more privacy for parents and neonates, and a work system based on integrated, individualized care that avoids concentrating many professionals in the same environment. At a time when humanity is challenged by sustainability issues, caring for the microenvironment of newborns during this initial phase of life deserves more attention from public health administrators.
Conclusion
The findings showed that the repercussions of noise, as perceived by mothers, caused physical and emotional changes in both themselves and their neonates. These changes compromised areas related to their ability to interact. This situation could reduce the affective and sensory exchanges between parents and children, thereby compromising bonding and, as a consequence, the meeting of psychobiological needs.
Another repercussion of noise in the NICU pointed out by the mothers was the difficulty of concentrating during communication with health professionals.
Figure 1. Relation between the number of days of visits to the neonate and the discomfort caused by noise to the mothers
Figure 2. Relation between the number of visits to the neonate and the baby's discomfort caused by noise, as perceived by the mother

Figure 3. ROC curve of the number of days of visits to the NICU and mothers not changing their tone of voice
Figure 4. Mothers' perceptions of the level of care and interaction with health professionals, associated with the number of visits to the NICU
Cognitive Metaphors of Covid-19 Pandemic in Business News
The research considers cognitive metaphors of the COVID-19 pandemic conveyed through the English language in business news. The interpretation of metaphor goes beyond its traditional understanding as a rhetorical device; the approach is consistent with cognitive theory, which claims that metaphor is a mental instrument reflecting the way we reason about and imagine the world. The paper provides a brief theoretical framework for the research and discusses the concept, role, and types of cognitive metaphor. It deals with particular cases of metaphoric representation of the pandemic selected from The Financial Times, an international daily with a focus on business and economic affairs. The results of the study reveal a variety of lexical means to express the dynamic image of the pandemic, which exhibits a gradual shift from the military metaphor to variant interpretations. The findings prove the pervasiveness of metaphor in business and mass media communication, and its significance for understanding difficult situations, efficiently communicating ideas, and influencing the audience.
Introduction
The outbreak and consequences of the COVID-19 pandemic will leave a deep mark in the consciousness of people all over the world. Because of its unexpectedness, rapid pace, and global scale, the pandemic has forced significant changes into our lives. Under such circumstances, immediate response and efficient prevention have become the greatest challenges faced by the world community. The enormity of the challenge posed by the dreaded pandemic is being discussed, explained, rationalized, and interpreted in numerous publications relating to medicine [1,2] and other fields of science, politics, economics [3], education, and culture [4,5], to name but a few. In addition, a sizable academic literature discusses linguistic and communicative aspects of the pandemic crisis [6-8], addressing the ongoing events, tackling cognitive and emotional responses to the unpredictable circumstances, and explaining how coronavirus outbreak communication is being handled by political, mass media, and scientific communities.
Background
Studies from an applied linguistics perspective were carried out to discover and explain new coinages related to the Covid-19 situation, their influence on other languages, and problems arising in the translation and coordination of terminology [9-11]; to generate taxonomies of terms with the help of corpus analysis and estimate word frequencies [12-14]; and to collect and systematize massive Covid-19-related text data [15]. The findings shed light on the specificity of scientific and medical language, which is significant in both specialist and everyday discourse.
The findings revealed an unprecedented and rapid (within three months) growth in the frequency of pandemic-related words (coronavirus, corona, COVID-19) compared with lexical items connected to recent political and social events (Brexit, impeachment) [13]. It was discovered that linguistic change accelerated with the increase in names for social responses and consequences (social distancing, self-isolation, etc.), economic impact (lockdown), and distant communication (zoom) [14].
The noticeable result of such linguistic change is the qualitative and quantitative expansion of the English vocabulary. Corona-related vocabulary amounts to approximately 500 items [11], which find equivalents in other languages. This change can also be traced through the linguistic creativity of speakers. According to Haddad & Monterero-Martinez [9], the tremendous vocabulary expansion should be attributed to the filling of lexical gaps to meet communication needs in specialist and everyday settings. The data from various studies [9-15] indicated that the most productive types of vocabulary development included metaphoric and metonymic transfer, while affixation, compounding, abbreviation, clipping, and conversion prevailed over other word-building processes.
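The kind of corpus frequency estimate described above can be illustrated with a minimal sketch. Note that this is not taken from any of the cited studies: the tracked word list and the toy corpus below are invented for demonstration, and real corpus work would operate on millions of documents with proper lemmatization.

```python
# Illustrative sketch (invented data): counting occurrences of tracked
# pandemic-related coinages in a toy corpus, the kind of frequency
# estimate the corpus studies above describe.
import re
from collections import Counter

TRACKED = {"coronavirus", "lockdown", "self-isolation", "zoom",
           "social", "distancing"}

corpus = [
    "Lockdown extended as coronavirus cases rise",
    "Social distancing rules tightened across the city",
    "Team meetings moved to Zoom during lockdown",
]

counts = Counter()
for doc in corpus:
    # lowercase, keep hyphens so "self-isolation" survives as one token
    for tok in re.findall(r"[a-z-]+", doc.lower()):
        if tok in TRACKED:
            counts[tok] += 1

print(counts["lockdown"])  # 2
```

Comparing such counts across time slices of a corpus (e.g. month by month) is what makes the "rapid growth within three months" finding measurable.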
However, it is not only naming gaps that need to be filled. A more challenging task precedes the creation of naming labels: interlocutors must cognize new aspects of reality, systematize new and old experience, and evaluate things and events. Researchers found that concepts such as disease, Covid-19, and pandemic have become prevalent in various types of discourse. Having studied the framing of Covid-19 in Twitter communication, Wicke & Bolognesi [16] showed that discourse around the pandemic made use of the "war", "monster", "storm", and "family" metaphors. They concluded that a "metaphor menu" facilitates the communication of various aspects connected with Covid-19.
Semino's [17] findings about the appropriateness of "fire" metaphors in communication about contagion and public health measures resulted from the analysis of news articles in English.
Other researchers analysed the "war" metaphorisation of Covid-19, uncovering the diverse arsenal of its linguistic manifestations and explaining this diversity either by speakers' individual socio-political variables, such as political orientation [18], or by the universal character of war rhetoric [19-22].
Although much work has been done to date, more studies need to be conducted to ascertain whether different discourses and their various genres determine the types of metaphors they employ, and how these metaphors may vary.
The purpose and methodology of research
The purpose of this study is to investigate metaphoric representations of the Covid-19 pandemic in business news articles in English.
It is hypothesized that the universal metaphor "disaster is war" can manifest as a set of alternatives to communicate senses and ideas particularly significant for mass media interpretation of business affairs.
The material used in this research consisted of 125 metaphoric manifestations of the Covid-19 pandemic. The extracts were selected from The Financial Times (FT) articles placed online between February 2020 and January 2021.
The analysis of cognitive metaphors of the Covid-19 pandemic was based exclusively on the data collected from FT. This source was considered valid and credible for the following reasons. Firstly, FT is a respected international daily. Secondly, with its focus on business and economic affairs, it is generally regarded to be an authority in these subjects. Thirdly, in spite of the fact that it is primarily targeted at the readers interested in finance, FT is a newspaper with a wide coverage of topics attracting audience from various fields of life.
As to the methods of material selection, it should be mentioned that modern "conceptual metaphor theory has no explicit methods for identifying conceptual metaphors" [23]. A set of criteria was therefore employed at the stage of empirical data selection. It was devised to ensure the presence of the lexical units Covid, Covid-19, coronavirus, corona, or pandemic in the context, together with the presence of some word(s) in the written utterance that could be taken figuratively (metaphorically) rather than literally. Judgments also had to be made about the appropriateness of the selected samples and the presence of cross-domain ties in the expressions. To decrease uncertainty and find systematicity in the selected expressions, the following features were considered: similarity or polarity in meaning (struggle, fight; win, lose); shared semantic components and relatedness to a particular semantic field (fighter, hunter: "someone who attacks"); and shared collocates (animal hunter, vaccine hunter, Covid hunter).
The collected samples covered a wide range of aspects of the pandemic, including its outbreak and unfolding, the scale of its spread, preventive measures, and its impact on business. The whole set was subjected to descriptive, contextual, semantic, and structural analysis, employed in combination with conceptual analysis, to make judgments about the mappings and the types of metaphors. Finally, the data were systematised to discover the source domains of metaphorisation and the linguistic means of metaphoric manifestation.
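A first-pass mechanical filter for candidate expressions of the kind described above can be sketched as follows. This is not the authors' procedure, which relied on manual conceptual analysis; the headlines and word lists below are hypothetical, and a crude prefix match stands in for proper lemmatization.

```python
# Illustrative sketch (hypothetical data): keep sentences that contain both a
# pandemic term and a word from a chosen source domain (here, "war").
import re

PANDEMIC_TERMS = {"covid", "covid-19", "coronavirus", "corona", "pandemic"}
WAR_LEXICON = {"battle", "beat", "combat", "conquer", "defeat", "fight",
               "struggle", "war", "weapon"}

def tokenize(sentence):
    return re.findall(r"[a-z0-9-]+", sentence.lower())

def is_candidate(sentence):
    toks = tokenize(sentence)
    has_pandemic = any(t in PANDEMIC_TERMS for t in toks)
    # crude inflection handling: "battles" matches "battle", etc.
    has_war = any(t.startswith(w) for t in toks for w in WAR_LEXICON)
    return has_pandemic and has_war

headlines = [
    "Europe battles to contain surge in Covid-19 cases",
    "Markets rally as tech shares climb",
    "Countries fight Covid-19 resurgence",
]
candidates = [h for h in headlines if is_candidate(h)]
print(candidates)  # the first and third headlines pass the filter
```

Human judgment would still be needed on the filtered output to confirm figurative (rather than literal) use and to verify the cross-domain mapping, which is exactly the role the paper assigns to conceptual analysis.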
Paper structure
The paper is structured as follows. Section 2 gives a brief account of the theoretical basis for the research and systematizes the key principles of the theory of cognitive metaphor, its types and significance in communication.
Section 3 presents the analysis of metaphoric interpretations of the Covid-19 pandemic disaster and means of their manifestation in business news. It also discusses variant metaphoric conceptualisations of Covid-19 pandemic elaborated by speakers from different but related to the military domain spheres of knowledge. Furthermore, this section deals with the business news rhetoric and expressivity of means employed to communicate about the pandemic.
Traditional and innovative definitions of metaphor
There is an agreement among traditional and cognitive linguists on the importance and pervasiveness of metaphor. The history of the theory of metaphor has its roots in the distant past.
The earliest accounts of metaphor are found in the works of Aristotle and other ancient philosophers, who treated metaphor as a rhetorical figure, an element of speech decoration, and a demonstration of eloquence. Such an approach was determined by the social environment of ancient Greece and its established democracy, where political innovations, rise to power, the endurance of laws, and political achievements could be assisted, though not secured, by argumentation, persuasiveness, and impressive communication of the necessary ideas. Metaphors played a significant role in adding suggestive power to rhetoricians' and critics' elegant speeches, "the grand, also called 'lofty,' … florid speech that impresses with sound" [24]. Since then, the influential potential of metaphors has never been questioned.
In later studies, metaphor was defined in terms of traditional linguistics as a path of meaning development that occurs through associative transfer on the basis of similarity between two entities. Thus, linguistic metaphor is an association between two referents that resemble each other. From this classical point of view, metaphors are based on various types of similarity (shape, position, colour, etc.) and are discussed as similes, personifications, spatial images, transfers of sensation, and the like.
Considered as a product of logical ties, metaphor is treated as a naming technique with characterising function.
Lakoff and Johnson [25] paid attention to mental/conceptual spaces and how they operate on knowledge domains. Fauconnier and Turner [26,27] discovered how they can preserve structures and make permanent correspondences, or how they can blend them. In contemporary linguistics, metaphor is tackled from a cognitive point of view: it is acknowledged that metaphor is an instrument for thinking and reasoning about the world.
The interplay occurs between the source domain and the target domain of the metaphor. The former contains the knowledge or experience with which new knowledge is identified; the latter is the sphere to which the metaphor is applied. The structural correspondences between the source and target domains are defined as mappings (or metaphor maps).
The theory of metaphor was enriched by Kövecses who expressed doubts on whether literal language existed at all, argued that contextual metaphors were also conceptual ones [28], developed the theory of the scope of metaphor [29]. He discovered that metaphors can be derived from several domains in order to interpret an abstract concept.
These and other contributions into the theory of cognitive/conceptual metaphor added to the understanding of the variety of metaphors.
Types and significance of cognitive metaphors
Cognitive metaphors break into several types depending on which sources and mechanisms they employ to represent a more complex idea: conduit metaphors, which frame communication as the transfer of objects between interlocutors; ontological metaphors and personification, the most basic kinds, which allow abstract ideas and events to be interpreted as entities and objects; structural metaphors, which structure one concept in terms of another; and orientational metaphors, which ground our experience of the world in spatial experience.
The role of cognitive metaphors in our lives is crucial. They enable appropriate orientation in the environment and the systematisation of our experience, and they provide continuity of knowledge and cultural values. Despite their representative nature, they are not deprived of descriptive potential: they help us depict the world, build up images, and compare and recognise various properties of the things around us. They are also important for the creative and cognitive development of humans.
As to communication, cognitive metaphors carry out a set of functions. They link and harmonise individual cognitive processes with those of the social group. They become persuasive performatives responsible for implicit inclination, involvement, invitation, etc.
Metaphors are invaluable suggestive tools shaping social behaviour and thought.
Their psychological role is no less significant. With the help of the analogies carried by metaphors, speakers are less vulnerable to cognitive disorders, cope better with stressful circumstances, and avoid communicative failure. In sum, metaphors are mediators between the world and the human being, between society and the individual. For mass media communication, metaphoric effects are vital in the sense that metaphorical patterns become models of thought and behaviour for the recipients of information.
Results and discussion
The epidemic disaster has influenced the way we think and speak about the world, society, and various spheres of life, particularly business and the economy, the environment, and health issues. Moreover, our perception of and reasoning about the crisis shape the way we communicate about it. The most appropriate and common way of interpreting the pandemic is in terms of war. The discourse of business news makes use of both common and military parlance around the current disaster.
Military metaphor: the epidemic as a common foe
The world is seen as a war zone where people are fighting the disease: "Europe battles to contain surge in Covid-19 cases" (FT, 29 July 2020), "…countries fight Covid-19 resurgence" (FT, 25 Dec 2020), "We need to aggressively stop the spread now" (FT, 5 Dec 2020), "hospitals and intensive care units are struggling to cope." (FT, 21 March 2020). In business news, current events relating to the coronavirus outbreak are represented in terms of an armed conflict with the help of words denoting a fight, an armed conflict, pressure, or the use of force (battle, beat, buckle, combat, conquer, defeat, fight, struggle). For example, "War on superbugs must follow defeat of Covid-19" (FT, 20 Dec 2020), "…the reality of Britain's "wartime economy" in the era of coronavirus" (FT, 27 March 2020).
The difficulties experienced and the great efforts made to win the battle and overcome the pandemic are interpreted as shooting a gun: "'… bullet' to beat Covid-19" (FT, 29 July 2020), while tracking of the virus's spread is seen as following a target: "coronavirus tracked" (FT, 25 Dec 2020). In metaphoric representations, warnings of danger are portrayed as alarms given by a loud noise or flashing light: "US states sound alarm on Covid-19 hospitalisations" (FT, 5 Dec 2020).
Reporting on coronavirus-related deaths has also added to the image of a military disaster with enormous loss of life: "Death toll surges" (FT, 22 March 2020), "… worst day for mortalities in escalating European outbreak" (ibid.).
Preventive measures are described as actions demanded by law: order mandatory testing, impose measures, urge mask use, and similar expressions are frequent in daily business news. In both armed conflicts and epidemic disasters, restrictive measures are taken against regional instability and for the protection of people and property. Among other things, pandemic restrictive defence includes Covid-19 border control, lockdown, isolation, and distancing: "World on lockdown: West closes borders and orders isolation" (FT, 18 March 2020), "… western economies took drastic measures to limit public movement on Monday, closing borders, shutting down retailers and ordering citizens to stay in their homes in an urgent effort to arrest the spreading coronavirus pandemic" (ibid.).

The key conceptual feature "common", that is, "the same enemy in many places or for many people", is explicitly expressed by adjectives meaning "relating to the whole world" (global, international, world-wide, etc.) and by nouns denoting "the act of working together" (cooperation, collaboration, etc.). For example, "Global cooperation is needed to beat the virus" (FT, 29 Jan 2021). Grammatical means such as the plural marker of nouns (-s), the plural pronouns (we, us, they, them), and determiners (our, their) are also employed to communicate the idea of a "common foe": "Governments must restore these freedoms when the virus is eventually beaten" (FT, 29 Jan 2021), "… we'll be shooting to get to a year's immunity" (FT, 23 Sept 2020), "To our relief and surprise, the number of cases have started to come down…" (FT, 29 Jan 2021). While the use of noun plurals and adjectives with cooperative meanings is unambiguous, the plural pronoun we/us can vary according to context. It can be either semantically inclusive (i.e. including the addresser(s)/speaker(s) and the addressee) or exclusive (i.e. excluding the addresser(s)/speaker(s) but including other people as the addressees): "We won't remember much of what we did in the pandemic" (FT, 14 Aug 2020). However, news reporters seem to favour explicit methods and make clear reference by adding other words (we all) or by employing generic uses of personal pronouns to refer to people in general. For instance, "…what we all feel about the high number of ... the global pandemic and the rapidly evolving economic crisis" (FT, 4 Jan 2021), "The next pandemic: where is it coming from and how do we stop it? As deforestation and climate change increase spread of new diseases, meet the virus-hunters trying to prevent the next Covid-19" (FT, 29 Oct 2020). In contrast, the concepts of loneliness and isolation during the pandemic are verbalised by singular personal pronouns (I, me): "I live alone, I work alone, I'm hundreds ..." (FT, 20 Nov 2020), "Loneliness and me. Millions of us were living with this curse before the pandemic. How can we break it?" (ibid.).
Overall, the military metaphor has proved to be a remarkably efficient tool in reasoning and reporting about the pandemic. The prior knowledge of war as a conflict between particular actors, characterized by violence, social disruption, and economic destruction, is linked to new experience. Inasmuch as cognizers bridge what they already know (an armed conflict) to a new entity (the Covid-19 pandemic), they create a new cognitive space that results from framing the target domain (the pandemic) in terms of the source domain (the war). Hence, if the Covid-19 pandemic is interpreted with the help of the military metaphor, it produces an image of "the war against the common aggressor". However, there are diverse applications of the "common foe" description in business news reports.
Variant conceptualizations of the pandemic as a foe
Variant conceptualizations of the Covid-19 pandemic appeal to different but related cognitive domains. They differ in the sphere of knowledge prioritized by business news reporters who aim to create eloquent images of the pandemic: knowledge and experience in military affairs, hunting, games, killing a monster, and dealing with liquid are not alike. Despite that, these domains enable speakers to establish associative links with the image of the "common foe" and provide corresponding mappings between the source domains and the target domain of the metaphor. In other words, variant conceptualizations of the Covid-19 pandemic have resulted from one-to-many ties between the target domain and the source domains.
The relationship between the actors of the situation (the pandemic versus humanity, society, business, economy, etc.) is alternatively considered as competition in which one party becomes victimized ("victim of the pandemic" (FT, 29 Jan 2021)). Let us look into specific cases: In the hunting metaphor, the actors of the pandemic situation come up as the hunter (scientists, investors) and the game/wild beast (the virus): "Hunting for new viruses has become more difficult during an actual pandemic, but it has never been more important" (FT, 29 Oct 2020), "on the hunt for a cure" (FT, 14 May 2020), "Investors' hunt for coronavirus rebound stocks" (FT, 11 Feb 2020), "Investors hunt for alternative data to track coronavirus shock…" (FT, 18 Feb 2020).
When metaphorically considered as game players, the actors of the pandemic situation are conceptualized as competing participants: "Vaccine makers prepare for game of Covid cat and mouse. Manufacturers and regulators need to be ready if shots prove less effective…" (FT, 10 Jan 2021), "Asia plays a long game on Covid vaccine rollout" (FT, 15 Dec 2020), "… attempts to play down Covid-19" (FT, 29 Oct 2020).
The virus disaster has reshaped the way we talk about society, politics and economy dividing businesses and agencies into winners and losers: "… winners and losers in the Covid economy" (FT, 10 Oct 2020), "Three ways the banks will be winners from Covid recovery" (FT, 16 Nov 2020), "This year, Covid-19 has brought some of the most powerful countries in the world to their knees" (FT, 29 Oct 2020), "Covid-19 unmasks weakness of English public health agency" (FT, July 22, 2020).
Another way to communicate the struggle against the pandemic is the metaphor "a fight with a monster": "Coronavirus turns the City [London business center] into a ghost town" (FT, 27 July 2020). "'Silver bullet' to beat Covid-19 unlikely" (FT, 29 July 2020) seems to be a transparent allusion to the stories about werewolves and vampires shot dead with silver bullets by vampire-hunters. This conceptualization is a very interesting instance of cognitive frame development. It is based on the mapping between vampire lore and contemporary knowledge about the evil of the pandemic. Furthermore, it provides convincing evidence of the existence of conceptual ties among different domains of knowledge and experience. In this way, consistency of mapping is achieved within the frame representation of the active agent as a hunter/fighter (cf. animal hunter, vaccine hunter and Covid hunter as semantically close expressions implying "someone who is trying to find and get the desired/stated thing"). The explanation that can be suggested for such a metaphor of the pandemic is that cognitive metaphors are not single, independent tools of our cognition but complex mental images embracing sets of associated features.
The expressive language of metaphoric manifestations
The findings about the linguistic means of metaphoric manifestation showed that reporters employed expressivity typical of news discourse. Clearly, expressive vocabulary and structures enable addressers to encode their emotions and evaluations of the current situation. Expressivity provides efficiency in expressing the addressers' intentions as well as their individuality. In addition, expressivity of speech allows addressers to achieve loftiness in communication. What follows is a systematization, with examples, of the expressive means found in the selected contexts:
− words with inherent expressivity and mostly negative meanings (dreadful "causing shock and suffering", monstrous "very cruel", vicious "showing an intention to hurt badly" as in "We can defeat this invisible and vicious adversary [coronavirus] - but only with global leadership" (FT, 25 March 2020)), which may be intensified in the context (extraordinary "very strange, unusual" as in extraordinary crisis; curse "a cause of suffering" as in live with this curse (FT, 21 Nov 2020));
− words with inherent expressivity and intensive meanings (to surge "to increase suddenly and greatly", to hit "to move one's hand with force; to produce a negative, unpleasant effect", to thrive "to grow and become successful"; boom "a sudden increase in something" as in "Europe fears as coronavirus surge threatens to overwhelm hospitals" (FT, 23 Oct 2020), "Nations look into why coronavirus hits ethnic minorities so hard" (FT, 29 Apr 2020));
− colourful phraseological units (get your act together "organize and deal with something effectively", bleak future "without anything to make one feel happy or hopeful", be on one's knees "be weak" as in "Covid brings China's high-growth rental industry to its knees" (FT, 18 Nov 2020)) and their occasional modifications (for example, the idiom to have the stomach for "to be brave or determined to do something dangerous or unpleasant" was transformed into to lose the stomach "to become exhausted because of doing something difficult" as in "America is losing the stomach to fight Covid-19" (FT, 11 Jun 2020));
− names of imaginary entities (ghost "the spirit of dead, transparent image", to haunt "(of a ghost) to appear in a place repeatedly, cause anxiety or suffering" as in "If Covid-19 is not beaten in Africa, it will return to haunt us all" (FT, 25 March 2020));
− phrasal verbs (to lose out "to not have an advantage as other people have" as in "The kids aren't alright. How generation Covid is losing out" (FT, 17 Nov 2020); to bounce back "to return to a usual state after having a problem" as in "… if the virus is not defeated in Africa, it will only bounce back to the rest of the world" (FT, 25 March 2020)).
The following quote is an example of how neutral words get involved in utterances with expressive vocabulary and acquire intensification when communicating about the pandemic disaster: "Gap between financial markets and global economy yawns wider ... We have a monster mash-up of the Great Depression in size, the crash of ... effects of measures" (FT, 24 Apr 2020). Owing to their proximity to the expressive vocabulary (yawn wider, monster, mash-up, crash), the neutral words become emotionally charged. Thus, "effects of measures" should be interpreted as "ineffectual and unable to produce good results under the circumstances".
In the following passage, the intensive verb surge provides expressivity for the whole utterance, suggesting the complete disruption of the most powerful (major) economic systems: "Pandemic triggers surge in business start-ups across major economies" (FT, 29 Dec 2020).
Conclusions
The results of the research confirm the importance of cognitive metaphors in different spheres and genres of communication. The study of cognitive metaphors of the Covid-19 epidemic in the FT daily provided evidence that the target domain of the metaphors in question is relevant to the coverage of various topics, directly or indirectly related to the themes of finance and business. This can be explained by the drastic, global-scale influence of the epidemic on the economic sphere.
The prevalence of the military metaphor can be explained by the universal nature of the concept "disaster is war". However, in internationally oriented business communication, reporters suggest the idea of the pandemic as a "common foe", implying the necessity of cooperation in coping with the crises stimulated by the disaster. Although this scenario admits alternative metaphoric representations (Covid is an animal to be hunted, a competitor/player to be defeated, a monster to be destroyed), all of them are drawn together by the central idea of the common enemy.
The linguistic means used to manifest these metaphors demonstrate expressivity typical of news discourse.
Spontaneous resolution of acute syphilitic posterior placoid chorioretinitis: reappraisal of the literature and pathogenetic insights
Acute syphilitic posterior placoid chorioretinitis (ASPPC) is a rare clinical manifestation of ocular syphilis. Spontaneous resolution of this condition has been reported in a few cases. The aim of this manuscript is to report an additional case and to discuss the possible pathogenesis of this condition by reviewing the current evidence on this subject. A 45-year-old man presented to us with decreased vision in the right eye secondary to a placoid macular lesion. Fourteen days after presentation, there was a dramatic improvement of the vision, and multimodal retinal imaging showed almost complete spontaneous resolution of the placoid lesion. Syphilis serology turned out positive and a diagnosis of ASPPC was made. The pathogenesis of ASPPC is unclear, and there is contrasting evidence about the role of the cellular immune system. Since this condition may resolve spontaneously before systemic antimicrobial treatment, the presence of a placoid macular lesion should raise a high suspicion of ASPPC in order to make a timely diagnosis and to avoid progression of untreated syphilis.
Introduction
Syphilis is a sexually transmitted infection caused by the spirochete bacterium Treponema pallidum [1]. Syphilis is a re-emerging and rising infection in the developed world. In up to one-quarter of patients with syphilis, ocular involvement manifests at any time during the disease course. Ocular syphilis may precede the diagnosis of systemic disease in up to one-half of cases [2]. Ocular syphilis, known as "the great masquerader", may affect almost every structure of the eye and has a broad spectrum of presentation, including, among others, interstitial keratitis, optic neuropathy and posterior uveitis, the latter commonly represented by chorioretinitis [3], [4]. In 1988, de Souza et al. [5] reported three young patients with "unilateral central chorioretinitis" as manifestation of ocular syphilis. Two years later, Gass et al. [6] reported six additional similar cases. They concluded that this condition was a separate clinical entity, and coined the term "acute syphilitic posterior placoid chorioretinitis" (ASPPC). ASPPC is defined by the presence of one or more placoid, yellowish, outer retinal lesions, typically involving the posterior pole and the mid-periphery of the retina near the temporal vascular arcade [6]. ASPPC may have a unilateral or bilateral involvement with a presenting visual acuity ranging from 20/20 to no light perception [7]. The advent of multimodal imaging (MMI) of the retina, especially of spectral domain optical coherence tomography (SD-OCT), has made it possible to report pathognomonic features of ASPPC, which include punctate hyperreflectivity in the choroid, disruption and loss of the ellipsoid zone, nodular irregularity of the retinal pigment epithelium, and transient localized subretinal fluid [8], [9]. Since patients with ASPPC usually receive prompt antimicrobial treatment after serologic results, little is known about the natural course of the disease. 
To the best of our knowledge, only 5 cases of ASPPC with spontaneous improvement have been reported [10], [11], [12], [13]. We report the natural course and the multimodal retinal imaging features of an additional case, and discuss the pathogenetic implications and the importance of early recognition of this rare clinical entity.
Case presentation
A 45-year-old man with no relevant past medical history presented to the eye casualty service complaining of sudden onset central 'white ring' and decreased vision in the right eye (RE) over the past seven days. Best-corrected visual acuity (BCVA) was 6/12 in the right eye. The medical history was carefully reviewed; the patient admitted to being addicted to poppers and cocaine, and reported promiscuous homosexual activity over the last months. He denied intravenous drug use and any systemic symptoms such as headache, skin rash, nausea, weight loss, cough, or night sweats. A complete laboratory workup was ordered, including QuantiFERON-TB testing, syphilis serology and human immunodeficiency virus (HIV) antibodies. Seven days after presentation, the patient reported spontaneous improvement in the vision of the RE, and BCVA improved to 6/9 in the RE and was stable in the left eye (LE). Full blood count, liver function test, kidney function, and angiotensin-converting enzyme level were within normal range, and HIV antibodies were negative. However, results for QuantiFERON-TB testing and syphilis were not yet available. MMI revealed spontaneous improvement of the placoid lesion (Figure 2, Figure 3). Two weeks after presentation, BCVA further improved to 6/6 in the RE and MMI showed signs of early resolution of the placoid lesion. Laboratory results returned negative for QuantiFERON-TB testing, and positive for venereal disease research laboratory test and fluorescent treponemal antibody testing. Therefore, a definite diagnosis of ASPPC was made, and the patient was promptly referred to the Infectious Disease Department for systemic treatment with penicillin.
Discussion
ASPPC is a rare clinical manifestation of ocular syphilis. Although the pathophysiology of ASPPC is not completely understood, timing and characteristics of SD-OCT findings may be the reflection of the sequence of disease events [9]. It has been suggested that circulating T. pallidum organisms may enter the choroidal circulation, giving the choroidal hyperreflective pinpoint lesions seen on SD-OCT; subsequent access to the outer retina may give a variable amount of subretinal fluid and impaired photoreceptor function expressed by disruption of the ellipsoid zone (EZ) seen on SD-OCT [9]. However, the role played by the cellular immune system in the pathogenesis of this condition remains controversial. While it was initially suggested that ASPPC is secondary to immunocompromised status such as in HIV-positive patients [5], [6], [14], it was later described in both immunocompetent and immunocompromised individuals [7], [9], [15]. Of note, no differences have been found in terms of clinical characteristics and long-term visual outcome in HIV-positive versus HIV-negative patients with ASPPC [7].
To the best of our knowledge, spontaneous resolution of ASPPC before initiation of systemic antimicrobial treatment has been reported in 5 cases [10], [11], [12], [13]. The first cases were described in 2015 by Ji et al. [10], who reported two HIV-negative patients with ASPPC which spontaneously improved 10 days (for the first case) and 3 weeks (for the second case) after presentation. In the same year, Aranda et al. [11] reported an HIV-positive patient, on anti-retroviral therapy for 4 years and with a CD4+ T-cell count of 204 cells/µL, presenting with ASPPC; spontaneous resolution of ASPPC was observed 10 days after presentation [11]. One year later, Franco et al. [12] reported an additional case of ASPPC in an HIV-negative patient with complete spontaneous recovery and no signs of reactivation until the patient was started on antimicrobial treatment, 45 days after presentation. The spontaneous improvement observed in our case is in line with these previous reports [10], [11], [12] and may suggest that ASPPC is the result of the host's cellular immune response, which may be able to locally control the spirochete infection. Some authors have speculated that the immune privilege of the eye may contribute to the spontaneous resolution of ASPPC. Alternatively, spontaneous resolution of initial ASPPC can be explained as the disease entering prolonged latency. Indeed, it is known that syphilis is characterized by episodes of active disease that are interrupted by periods of latency because of the host's cellular immune response [1], [12]. Therefore, the placoid lesions may disappear in the same way the mucocutaneous syphilitic lesions disappear without treatment during the latent stage of the disease. The latter hypothesis has been supported by Baek et al. 
[13] who reported an HIV-negative patient presenting with ASPPC which spontaneously improved 7 days after presentation, but unlike the aforementioned cases [10], [11], [12] did not receive systemic antimicrobial treatment and returned 9 months later with progression to posterior uveitis [13]. Of note, in 2014 Armstrong et al. [16] had reported spontaneous evolution of unilateral ASPPC to panuveitis in a patient with HIV co-infection. In their case, panuveitis developed 6 weeks after the initial diagnosis of ASPPC, without spontaneous resolution of the lesion. These latter cases may suggest that ASPPC could be an early manifestation of posterior uveitis, and in absence of an adequate host's immune response, such as in patients with untreated HIV co-infection, progression without spontaneous improvement may be observed. In addition, there is evidence that immunosuppression may be a major stimulator of ASPPC. Zamani et al. [17] reported a case of undiagnosed syphilitic chorioretinitis which evolved to multifocal placoid lesions following immunosuppression induced by corticosteroid therapy. Furthermore, Erol et al. [18] and Song et al. [19] published cases of ASPPC following local intravitreal triamcinolone injections. However, in contrast to these aforementioned cases [17], [18], [19], Ormaechea et al. [20] reported a patient with ASPPC who was initially misdiagnosed as having non-infectious uveitis and received corticosteroid as well as methotrexate for 7 months and demonstrated no worsening of the disease.
Conclusions
There is contrasting evidence about the role of the immune system in the pathogenesis of ASPPC which may be the manifestation of different pathogenetic pathways that ultimately lead to an inflammatory response driven by the presence of the spirochete. Since this condition may resolve spontaneously before antimicrobial treatment, the presence of a placoid macular lesion should raise a high suspicion of ASPPC, as the ophthalmologist may be the first to diagnose syphilis in the patient. Indeed, timely diagnosis and antimicrobial treatment are essential to preventing the progression of syphilis which may include irreversible visual damage [21].
Notes
Competing interests
Bistability and time crystals in long-ranged directed percolation
Stochastic processes govern the time evolution of a huge variety of realistic systems throughout the sciences. A minimal description of noisy many-particle systems within a Markovian picture and with a notion of spatial dimension is given by probabilistic cellular automata, which typically feature time-independent and short-ranged update rules. Here, we propose a simple cellular automaton with power-law interactions that gives rise to a bistable phase of long-ranged directed percolation whose long-time behaviour is not only dictated by the system dynamics, but also by the initial conditions. In the presence of a periodic modulation of the update rules, we find that the system responds with a period larger than that of the modulation for an exponentially (in system size) long time. This breaking of discrete time translation symmetry of the underlying dynamics is enabled by a self-correcting mechanism of the long-ranged interactions which compensates noise-induced imperfections. Our work thus provides a firm example of a classical discrete time crystal phase of matter and paves the way for the study of novel non-equilibrium phases in the unexplored field of driven probabilistic cellular automata.
Percolation theory describes the connectivity of networks, with applications pervading virtually any branch of science 1 , including economics 2 , engineering 3 , neurosciences 4 , social sciences 5 , geoscience 6 , food science 7 and, most prominently, epidemiology 8 . Among the multitude of phenomena described by percolation, of predominant importance are spreading processes, in which time plays a crucial role and that can be studied within models of directed percolation (DP) 9 . Characterized by universal scalings in time 10 , in their discretized versions these models are probabilistic cellular automata (PCA), that is, dynamical systems with a state evolving in discrete time according to a set of stochastic and generally short-ranged update rules. To account for certain realistic situations, e.g. of long-distance travels in epidemic spreading, DP has been extended to long-ranged updates 11,12 leading to a change of the universal scaling exponents 13 .
Despite their wide applicability, PCAs have surprisingly remained an outlier in a branch of non-equilibrium physics that has recently experienced a tremendous amount of excitement: that of discrete time crystals (DTCs) [14][15][16][17][18][19][20] . In essence, DTCs are systems that, under the action of a time-periodic modulation with period T, exhibit a periodic response at a different period T′ ≠ T, thus breaking the discrete time-translational symmetry of the drive and of the equations of motion. DTCs thus extend the fundamental idea of symmetry breaking 21 to non-equilibrium phases of matter. Following the pioneering proposals in the context of many-body-localized (MBL) systems 17,18 , DTCs have been observed experimentally 22,23 , and their notion has been extended beyond MBL [24][25][26][27] .
More recently, Yao and collaborators have fleshed out the essential ingredients of a classical DTC phase of matter 28 . Namely, in a classical DTC, many-body interactions should allow for an infinite autocorrelation time, which should be stable in the presence of a noisy environment at finite temperature, a subtle requirement that rules out the vast class of long-known deterministic dynamical systems. Despite various efforts [28][29][30][31] , an example of such a classical DTC has mostly remained elusive, and proving an infinite autocorrelation time robust to noise and perturbations for this phase of matter is an outstanding problem. The general expectation is in fact that PCAs and other minimal models for noisy systems in one spatial dimension can only show a transient subharmonic response because noise-induced imperfections generically nucleate and spread, destroying true infinite-range symmetry breaking in time 28,32 .
Here we overcome these difficulties by introducing a simple and natural generalization of DP in which the dynamical rules are governed by power-law correlations. This leads to qualitative changes of the system behaviour and, crucially, the emergence of a bistable phase of long-ranged DP, enabled by the ability of long-range interactions to counteract the dynamic proliferation of defects. By adding a periodic modulation to the update rules, we then study a version of periodically driven DP and show that the underlying bistable phase intimately connects to a stable DTC. In this non-equilibrium phase, the system is able to self-correct noise-induced errors and the autocorrelation time grows exponentially with the system size, thus becoming infinite in the thermodynamic limit. In analogy to the one-dimensional Ising model for which, at equilibrium, long-range interactions enable a normally forbidden finite-temperature magnetic phase 33,34 , in our model, out of equilibrium, the long-range interactions lead to a classical time-crystalline phase. Crucially, our results appear naturally in a minimal model of long-ranged DP but are expected to find applications in many different contexts of dynamical many-body systems.
Basic understanding of new concepts has historically been built around the study of minimal models, such as the Ising model for magnetism at equilibrium 33,34 , the kicked transverse field Ising chain for DTCs 17,18 , or the prototypical Domany-Kinzel (DK) PCA for DP 35 . In this paper, we start our discussion with a brief review of the DK model and then generalize it to include power-law interactions. We characterize its phase diagram and show that its long-range nature is the key ingredient for the emergence of a bistable phase. Finally, we include a periodic drive for the long-ranged DP process and show with a careful scaling analysis that the autocorrelation time of the subharmonic response is exponential in system size. In the thermodynamic limit, our model provides therefore the first example of a PCA behaving as a classical DTC, which is persistent and stable to the continuous presence of noise. Lastly, we conclude with a summary of our findings and an outlook for future research.
Results
Review of DP. We consider a triangular lattice in which one dimension can be interpreted as discrete space i and the other one as discrete time t = 1, 2, 3, … , see Fig. 1. To implicitly account for the triangular nature of the lattice, i runs over integers and half-integers at odd and even times t, respectively. We denote L the spatial system size and are interested in the thermodynamic limit L → ∞. The site i at time t can be either occupied or empty, s i,t = 0, 1. For a given time t, we call generation the collection of variables {s i,t } i specifying the system state. Initially, the sites are occupied with uniform probability p 1 > 0. A DP process is defined by a stochastic Markovian update rule with which, starting from the initial generation {s i,1 } i , all subsequent generations {s i,t } i are obtained one by one. The main observable we will focus on is the global density n(t) (henceforth just referred to as density for brevity) defined as

n(t) = ⟨⟨ s i,t ⟩ i ⟩ R ,    (1)

where the inner and outer brackets denote average over the L sites and over R independent runs, respectively. Since n(1) = p 1 , we will often refer to p 1 as initial density. The simplest, and yet already remarkably rich, example of the above setting of DP is the DK model 35 . Here, we briefly review it adopting an unconventional notation that, making explicit use of a local density, will prove very convenient for a straightforward generalization to a model of long-ranged DP.
In the DK model, the probability of site i to be occupied at time t depends on the state of its neighbours i ± 1/2 at previous time t − 1. More specifically, as summarized in Fig. 1a, site i is: (i) empty if both its neighbours were empty, (ii) occupied with probability q 1 if one and just one of its neighbours was occupied, and (iii) occupied with probability q 2 if both its neighbours were occupied. To account for these possibilities in a compact fashion, we define a local density n i,t as

n i,t = (s i−1/2,t−1 + s i+1/2,t−1) / 2,    (2)

and say that site i at time t is occupied with a probability p i,t given by

p i,t = f q1,q2 (n i,t) = 0 if n i,t = 0,  q 1 if n i,t = 1/2,  q 2 if n i,t = 1.    (3)

In other words, the probability p i,t is a nonlinear function f q1,q2 (n i,t) of the local density n i,t , with domain {0, 0.5, 1}. Since n i,t only involves the nearest neighbours of site i, the DK model of DP is obviously 'short-ranged'. In essence, s i,t is a Bernoullian random variable of parameter p i,t , which we compactly denote s i,t ~ Bernoulli(p i,t). The complexity of this model arises from the fact that the value of the parameter p i,t is not known a priori, as it depends on the actual state of the system at previous time t − 1. Equipped with a random number generator, one can obtain all the generations one by one according to the above procedure, as schematically illustrated in the flowchart of Fig. 1b. Reiterating for several independent runs, one finally obtains the time series of the density n in Eq. (1). The DK model features two dynamical phases, shown in Fig. 1c, d. In the inactive phase, for small enough probabilities q 1 and q 2 , the system eventually reaches the completely unoccupied absorbing state, that is, no percolation occurs. In the active phase instead, for large enough probabilities q 1 and q 2 , a finite fraction of sites remains occupied up to infinite time, that is, the system percolates. For small initial probability p 1 ≪ 1, the critical line separating the two phases is characterized by a power-law growth of the density 36 , n ~ t^θ, with exponent θ ≈ 0.31. 
As conjectured by Grassberger 37 , this exponent is universal for all systems in the DP universality class. Indeed, DP exemplifies how the unifying concept of universality pertaining to quantum and classical many-body systems 38 can be extended to non-equilibrium phenomena.
Important for our work is that, in the DK model, whether the system percolates or not depends on the parameters q 1 and q 2 but not on the initial density p 1 , at least as long as p 1 > 0. Indeed, the phase boundaries for initial densities p 1 = 0.01 and p 1 = 1 in Fig. 1c, d, respectively, coincide.
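The DK update rule reviewed above is straightforward to simulate numerically. The following sketch (a minimal illustration written for this review, not the authors' code; lattice size, horizon and parameter values are arbitrary) evolves a ring of L sites, mapping the staggered triangular lattice onto a fixed array so that the two parents of site i are sites i and i + 1 of the previous generation, and records the density n(t) averaged over independent runs:

```python
import numpy as np

def dk_step(s, q1, q2, rng):
    """One Domany-Kinzel update: occupation probability is 0, q1 or q2
    when 0, 1 or 2 of the two parent sites were occupied."""
    left, right = s, np.roll(s, -1)          # parents on a periodic ring
    n_local = (left + right) / 2.0           # local density: 0, 0.5 or 1
    p = np.where(n_local == 0.5, q1,
                 np.where(n_local == 1.0, q2, 0.0))
    return (rng.random(s.size) < p).astype(np.int8)

def density(L=200, T=300, q1=0.8, q2=0.9, p1=0.5, runs=20, seed=0):
    """Density n(t) averaged over `runs` independent realizations."""
    rng = np.random.default_rng(seed)
    n = np.zeros(T)
    for _ in range(runs):
        s = (rng.random(L) < p1).astype(np.int8)   # initial generation
        for t in range(T):
            n[t] += s.mean()
            s = dk_step(s, q1, q2, rng)
    return n / runs
```

Plotting density() for several (q1, q2) pairs reproduces the qualitative distinction between the inactive phase (n decays to the absorbing state) and the active phase (n saturates at a finite value).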
Long-ranged percolation and bistability. As the vast majority of PCA, the DK model features short-ranged update rules 9 . In realistic systems, however, it is often the case that the occupation of a site i is influenced not only by the neighbouring sites but also by farther sites j, with an effect decreasing with the distance r i,j between the sites. Building on an analogy with the DK model, we propose here a model for such a 'long-ranged' DP, whose protocol is explained in the flowchart of Fig. 2a. Specifically, we consider as a local density n i,t a power-law-weighted average of the previous generation {s j,t−1 } j centred around site i,

n i,t = (1 / N α,L) Σ j≠i s j,t−1 / r i,j^α,    (4)

where the normalization factor N α,L = Σ j≠i 1 / r i,j^α ensures n i,t = 1 if all sites j are occupied and the adjective 'local' emphasizes the site dependence. The occupation probability p i,t then depends on the local density n i,t through some nonlinear function f μ ,

p i,t = f μ (n i,t),    (5)

with μ ∈ (0, 1) a control parameter. The whole DP dynamics is determined via the occupations s i,t ~ Bernoulli(p i,t) and reiterating from one generation to the next. Note, our findings are not contingent on the specific choice of Eqs. (4) and (5) but are rather expected to hold generally for a broad class of long-ranged forms of the densities n i,t and of functions f μ ; see 'Methods' section for details. We emphasize that Eqs. (4) and (5) and Fig. 2a are the natural counterparts of Eqs. (2) and (3) and Fig. 1b, respectively. Furthermore, whereas in the DK model the control parameters are the probabilities q 1 and q 2 , the control parameter is now μ. As an important difference, now the domain of f μ accounts for several (and α-dependent) values of n i,t , for which the piecewise definition of p i,t as in Eq. (3) would have been unpractical, and the compact form of Eq. (5) was necessary instead.
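To make the power-law-weighted density of Eq. (4) concrete, here is a minimal numerical sketch on a ring with periodic (minimum-image) distances, using a circular convolution via the FFT for efficiency. Note that the specific functional form of f μ is not reproduced in this excerpt, so the choice f μ(n) = n^μ in the update below is only an illustrative placeholder, not the paper's definition:

```python
import numpy as np

def longrange_density(s, alpha):
    """Power-law-weighted local densities n_i (Eq. 4) for one generation s
    on a ring of L sites, with minimum-image distances r_{i,j}."""
    L = s.size
    d = np.arange(L)
    r = np.minimum(d, L - d).astype(float)   # periodic distances 0 .. L/2
    w = np.zeros(L)
    w[1:] = 1.0 / r[1:] ** alpha             # exclude the self-term r = 0
    w /= w.sum()                             # normalization N_{alpha,L}
    # circular convolution of occupations with the symmetric kernel
    return np.real(np.fft.ifft(np.fft.fft(s) * np.fft.fft(w)))

def step(s, alpha, mu, rng):
    """One long-ranged DP update; f_mu(n) = n**mu is an assumed placeholder."""
    n = longrange_density(s, alpha)
    p = np.clip(n, 0.0, 1.0) ** mu           # guard tiny FFT round-off
    return (rng.random(s.size) < p).astype(np.int8)
```

Iterating step() from a small versus a large initial density and comparing the long-time mean occupation is the basic numerical experiment behind the bistability diagnostic of Fig. 2: for sufficiently small α the two initial conditions can flow to different asymptotic densities.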
The introduction of a long-ranged local density n_{i,t} in Eq. (4) has profound implications. Arguably the most dramatic is the appearance of a bistable phase, in addition to the standard active and inactive ones. In the bistable phase, the ability of the system to percolate depends on the initial density p1, see the red lines in Fig. 2b, c. That is, the bistable phase features two basins of attraction, resulting in an asymptotically vanishing or finite n, respectively, and separated by some critical initial density p1,c > 0. To characterize systematically the dynamical phases of our model, we plot in Fig. 2d, e the long-time density n(t = 10^3) as a suitable order parameter in the plane of the power-law exponent α and control parameter μ. Comparing the results obtained for a large and a small initial density p1, it is possible to sketch a phase diagram composed of three phases: (i) inactive, where n decays to 0 at long times; (ii) active, where n does not decay at long times; (iii) bistable, where n either decays or not depending on whether p1 is small or large. The existence of this bistable phase is in striking contrast with short-ranged models of DP such as the DK model, and in fact appears only for α ⪅ 2, that is, when the local densities {n_{i,t}}_i are correlated over a sufficiently long range. To understand the origin of this rich phenomenology, we study the short- and infinite-range limits of our DP process.

Fig. 1 caption: The probability p_{i,t} of site i to be occupied at time t depends on the occupation of its nearest neighbours i ± 1/2 at time t − 1 and can take the discrete values 0, q1 and q2. b Flowchart representation of the DK model. The initial occupation probability is uniform, p_{i,t=1} = p1. At time t, each site i is either occupied (s_{i,t} = 1) or empty (s_{i,t} = 0) with probability p_{i,t} and 1 − p_{i,t}, respectively. Time is advanced and local densities {n_{i,t}}_i are computed for each site i as averages of the nearest-neighbour occupations at the previous time; these densities determine the occupation probabilities for the next generation, see Eq. (3). The generations at all subsequent times are obtained by iteration. c, d The density n at late times can be used to discern the active and inactive phases, in which n(t = 10^3) > 0 and ≈ 0, respectively. The dashed lines serve as a reference to locate the phase boundary and are the same for initial densities p1 = 1 (c) and p1 = 0.01 (d). The insets show representative single instances of the DP for the points in the (q1, q2) plane marked with a cross. Here L = 100 and R = 10^3.
In the short-range limit α → ∞, the local densities n_{i,t} reduce to the averages of the nearest-neighbour occupations s_{i−1/2,t−1} and s_{i+1/2,t−1}, that is, Eq. (4) recasts into Eq. (2) and the DK model is recovered. In the notation of Eq. (3), the DK parameters are q1 = f_μ(0.5) and q2 = f_μ(1). Therefore, we can move across the DK parameter space (q1, q2) by varying μ, going from the inactive phase (μ < μ_c^∞) to the active one (μ > μ_c^∞), and no bistable phase is possible. We find that the transition happens at a critical μ_c^∞ = 0.85(7). Note that, in the active phase, a completely empty state (p1 = 0) remains trivially empty at all times. This behaviour is, however, unstable, because any p1 > 0 leads to percolation (i.e. p1,c = 0), and we therefore do not classify the active phase as bistable. At criticality, and for p1 ≪ 1, the density grows as n ~ t^θ with θ = 0.3(0), as expected for the DP universality class 9 . See Supplementary Fig. 2 for details.
In the infinite-range limit α → 0, and more generally for α ≤ 1, the factor N_{α,L} in Eq. (4) diverges as L → ∞. Correspondingly, spatial stochastic fluctuations are suppressed, that is, all sites i share the same occupation probability p_{i,t} = p_t and density n_{i,t} = n(t) = p_t. Therefore, in this limit the dynamics reduces to the deterministic zero-dimensional recurrence relation p_{t+1} = f_μ(p_t). The asymptotic behaviour of the system can then be understood from the analysis of the fixed points (FPs) of the equation x = f_μ(x), which is detailed in the 'Methods' section.
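The mean-field recurrence p_{t+1} = f_μ(p_t) is easy to iterate directly. The `fmu` below is the same illustrative stand-in used earlier (the paper's exact Eq. (5) is not reproduced); for μ near 1 it shows exactly the basin structure described in the text:

```python
# Mean-field (alpha -> 0) limit: iterate p_{t+1} = f_mu(p_t).
def fmu(x, mu):
    # Illustrative nonlinear map, not the paper's Eq. (5).
    return mu * x * x * (3.0 - 2.0 * x)

def iterate(p, mu, steps=500):
    for _ in range(steps):
        p = fmu(p, mu)
    return p

# Two initial densities, same mu: one basin flows to the empty state,
# the other to a finite occupation, the hallmark of bistability.
low = iterate(0.3, mu=0.95)    # below the unstable fixed point (~0.56 here)
high = iterate(0.9, mu=0.95)   # above it; flows to the stable FP (~0.94 here)
```

For this choice the unstable fixed point sits at x1 ≈ 0.56, so p1 = 0.3 dies out while p1 = 0.9 saturates near x2 ≈ 0.94.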
Driven percolation and time crystals. We have established that long-range correlated local densities {n_{i,t}}_i give rise to a bistable phase. We now show how, in a driven DP with periodically modulated update rules, this phase intimately relates to the emergence of a classical DTC. In this phase, as we shall see, the density n displays oscillations with a period larger than that of the drive and up to a time that, thanks to the long-range interactions and despite the presence of multiple sources of noise, is exponentially large in the system size, a feature that would generally be forbidden in short-ranged PCA 28 . In the thermodynamic limit L → ∞, these subharmonic oscillations are therefore persistent, that is, the system autocorrelation time diverges to infinity, breaking the time-translational symmetry and establishing a classical DTC in a periodically driven PCA.
In the spirit of keeping the model as simple as possible, we consider a minimal drive in which, after every T iterations of the DP in Eqs. (4) and (5), empty sites are turned into occupied ones and vice versa, making the full equations of motion periodic with period T. As a further source of imperfections, adding to the underlying noisy DP, we also account for faulty swaps with probability p_d. More explicitly, the periodic drive consists of the transformation

s_{i,1+kT} → 1 − s_{i,1+kT} with probability 1 − p_d, or s_{i,1+kT} with probability p_d.   (7)

Fig. 2 caption: a Flowchart of the long-ranged DP in Eqs. (4) and (5). b, c Time evolution of the density n for p1 = 1 (b) and p1 = 0.01 (c) for various representative values of the power-law exponent α and control parameter μ. Three dynamical phases can be distinguished: (i) inactive, in which the density n decays to 0 (blue); (ii) active, in which n does not decay to 0 (yellow); (iii) bistable, in which n either decays to 0 or not depending on whether the initial density p1 is small or large (red). d, e Long-time density n(t = 10^3) in the plane of α and μ for p1 = 1 (d) and p1 = 0.01 (e). With the criterion used in b, c, we discern the three phases: inactive (light), active (dark), and bistable (light or dark depending on p1). The dashed lines help locating the phases and coincide in d and e, and the critical values μ_c^0 and μ_c^∞ of μ in the limits α → 0 and α → ∞, respectively, are reported (the offset of μ_c^0 from the dashed line, as well as the softening of the dashed line for α ≈ 1, are due to finite-size effects). Crucially, the bistable phase is present only for small enough α ⪅ 2, that is, for a sufficiently long-ranged DP. Single instances of the DP for the three phases are shown in the insets, as obtained for the α and μ indicated with coloured dots, and corresponding to the parameters used in b, c. Here R = 10^4 and 10^2 in b, c and d, e, respectively, and L = 500.

In Fig. 3a, b, we show the spatio-temporal pattern of single instances of the driven DP, alongside the density n averaged over several independent runs. If the DP is short-ranged enough, the spatio-temporal pattern at long times looks similar from one period to the next, that is, the density n synchronizes with the drive and eventually acquires the periodicity T. On the contrary, for a long-ranged enough DP, the system keeps alternating at every period between a densely occupied regime and a sparsely occupied one, and n oscillates with period 2T, that is, the system breaks the discrete time-translation symmetry of the equations of motion. When using the tag 'classical DTC', special care should be reserved for showing the defining features of this phase, namely, its rigidity and persistence 28 . Our system is rigid in the sense that it does not rely on fine-tuned model parameters, e.g. μ, α or the initial density p1, and that noise, either in the form of the inherently stochastic underlying DP or of a small but non-zero drive defect density p_d, does not qualitatively change the results. Moreover, in the limit L → ∞, our DTC is truly persistent. Indeed, one might expect that the accumulation of stochastic mistakes introduces phase slips and eventually leads to the (possibly slow but unavoidable) destruction of the subharmonic response. Although this expectation is generally correct for short-ranged DP models, including our model at large α, it can fail for long-ranged DP models.
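The imperfect global flip of Eq. (7) is straightforward to implement. A minimal sketch, with parameter names mirroring the text:

```python
import numpy as np

def drive(s, p_d, rng):
    """Imperfect global flip of Eq. (7): each site is swapped (0 <-> 1)
    with probability 1 - p_d and left unchanged (a faulty swap) with
    probability p_d."""
    faulty = rng.random(s.size) < p_d
    return np.where(faulty, s, 1 - s)
```

In a driven run this is applied once every T DP iterations, so the full equations of motion have period T while the density can respond with period 2T.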
To show that, in the limit L → ∞, the lifetime of our DTC is infinite, we perform a scaling analysis comparing results for increasing system sizes L. First, we introduce an order parameter Φ(t), henceforth called subharmonicity, defined at stroboscopic times t = 1, 1 + T, 1 + 2T, … as

Φ(t) = (−1)^{(t−1)/T} [n(t) − n(t + T)].   (8)

If the density n oscillates with the same period T as the drive, then n(t) = n(t + T) and Φ(t) = 0. On the contrary, if n oscillates with a doubled period 2T, then n(t) − n(t + T) at t = 1 + kT is positive and negative for even and odd k, respectively, and Φ(t) is finite and maintains a constant sign. Therefore, Φ(t) is a suitable diagnostic to track the degree of subharmonicity of n in time and to perform the scaling analysis.
In Fig. 3c, we show Φ(t) for various system sizes L. For both α = 1.4 and α = 1.8, the subharmonicity decays exponentially in time, Φ(t) ~ exp(−(t − 1)/(τT)). As shown in Fig. 3d, these two values of α are, however, crucially different in how the lifetime τ scales with the system size. In fact, τ is approximately independent of L for α = 1.8, whereas it scales exponentially as τ ~ exp(βL) for α = 1.4, for which the decay of the subharmonicity is therefore just a finite-size effect. The scaling coefficient β quantifies the time crystallinity of the system and can thus be used to obtain a full phase diagram as a function of the power-law exponent α, in Fig. 3e. We observe a phase transition between a DTC and a trivial phase at α ≈ 1.7. That is, if the DP is sufficiently long-ranged (α ⪅ 1.7), β is finite and in the thermodynamic limit L → ∞ the subharmonic response extends up to infinite time, as required for a true DTC. In contrast, for a shorter-ranged DP (α ⪆ 1.7), β ≈ 0 independently of L and the subharmonic response is always dynamically destroyed.
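The two-step extrapolation described here (fit Φ(t) to extract τ for each L, then fit τ(L) to extract β) amounts to two linear fits in log space. A sketch, checked on synthetic exponential data rather than on actual simulation output:

```python
import numpy as np

def fit_lifetime(t, phi, T):
    # Phi(t) ~ exp(-(t - 1) / (tau * T))  =>  log|Phi| is linear in (t - 1)
    slope, _ = np.polyfit(t - 1, np.log(np.abs(phi)), 1)
    return -1.0 / (slope * T)

def fit_beta(L, tau):
    # tau ~ exp(beta * L)  =>  log(tau) is linear in L
    beta, _ = np.polyfit(L, np.log(tau), 1)
    return beta

# Synthetic check of the procedure with a known tau and beta
T = 20
t = 1 + T * np.arange(50)
phi = np.exp(-(t - 1) / (30.0 * T))          # tau = 30 by construction
tau_est = fit_lifetime(t, phi, T)

L = np.array([100, 200, 300, 400])
tau_L = 5.0 * np.exp(0.01 * L)               # beta = 0.01 by construction
beta_est = fit_beta(L, tau_L)
```

On real data one would restrict the fits to the stroboscopic window where the decay is cleanly exponential, as done for the dotted lines in Fig. 3c, d.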
Discussion
We have shown that long-range DP and its periodically driven variant can give rise to a bistable phase and a DTC, respectively. At the core of our model in Eqs. (4) and (5) is the idea that the occupation of a given site depends on the state of all the other sites at the previous time. In this sense, our model is reminiscent of some SIR-type models of epidemic spreading in which not only a sick site can infect a susceptible site, but several infected sites can also cooperate to weaken a susceptible site and finally infect it 39,40 . This cooperation mechanism among an infinite number of parent sites, rather than a finite one as considered in previous works on long-ranged DP 13,41 , is the key feature allowing the emergence of the bistable phase that finds a transparent explanation in the infinite-range limit α → 0, where it corresponds to the equation x = f μ (x) having two stable FPs. Bistability also provides intuition on the origin of the DTC, to which it is deeply connected. Indeed, the drive in Eq. (7) switches the system from a densely occupied regime to a sparsely occupied one (and vice versa). If the underlying DP is bistable, these regimes fall each within different basins of attraction and can therefore be both stabilized by the contractive dynamics 25,29 . Ultimately, this double stabilization facilitates the establishment of the DTC with infinite autocorrelation time. Remarkably, this mechanism does not rely on the equations of motion being perfectly periodic, as required for DTCs in closed MBL systems 42 , and we expect that infinite autocorrelation times could be maintained even in the presence of aperiodic variations of the drive (although the nomenclature should be revised in this case, since the underlying discrete time symmetry would only be present on average but not for individual realizations). 
This is in contrast to DTCs in closed MBL systems 42 , in which the non-ergodic dynamics hinges on the peculiar mathematical structure of the Floquet operator, which, in turn, relies on the underlying equations being perfectly periodic.
The intimate connection between bistability and DTC is, however, not a strict duality, and the boundaries of the two phases, in the equilibrium and non-equilibrium phase diagrams, respectively, do not coincide. For instance, in our analysis we found that for μ = 0.9 the bistable phase extends up to α ≈ 1.6, whereas the DTC stretches slightly farther, up to α ≈ 1.7. The origins of this imperfect correspondence can be traced back to two competing effects. On the one hand, bistability may not be sufficient to stabilize a DTC. This can already be understood in the limit α → 0, in which the asymmetry of f μ and of its FPs does not guarantee the drive to switch the density n from one basin of attraction to the other, that is, across the critical probability p 1,c . This issue becomes even more relevant for larger α, for which the asymmetry is possibly accentuated and p 1,c can approach 0 (see for instance Supplementary Fig. 1). On the other hand, a perfect bistability may not even be necessary for a DTC to exist. In fact, for the stabilization of a DTC, it may be sufficient that, of the densely and sparsely occupied regimes of the underlying DP, only one is stable, and the other is just weakly unstable (that is, metastable), meaning that the time scales of the dynamics of the density n in the two regimes are very different. Loosely speaking, the stability of one regime might be able to compensate for the weaker instability of the other, resulting in an overall stable DTC. The asymmetry of the underlying DP and the mismatch between the bistable phase and the DTC highlight the purely dynamical nature of the latter, that cannot 'piggy-back' on any underlying symmetry.
While these considerations are model- and parameter-dependent, and it is ultimately up to numerics to find the bistable and the DTC phases, what is universal and far-reaching here is the concept that long-ranged DP, and PCA more generally, can host novel dynamical phases, such as DTCs. As Yao and collaborators recently pointed out 28 , long autocorrelation times are in fact generally unexpected in 1 + 1-dimensional PCA, because imperfections and phase slips can nucleate, spread and destroy the order. Our work proves that this fate can be avoided, and time-crystalline order established, in long-ranged PCA. These systems in fact enable an error correction mechanism, in our case intimately related to the bistability, that would be impossible if correlations were limited to a finite radius. We may speculate that, in the physical picture of a Hamiltonian system coupled to a bath, this defect suppression would correspond to the cooling rate being larger than the heating rate.
In conclusion, we have studied the effects of long-range correlated update rules in a model of DP, which we built from an analogy with the prototypical (but short-ranged) DK PCA. First, we proved that, beyond the standard active and inactive phases, a new bistable phase emerges in which the system at long times is either empty or finitely occupied depending on whether it was initially sparsely or densely occupied. Second, in a driven DP with periodic modulation of the update rules, we showed that this bistable phase intimately connects with a DTC phase, in which the density oscillates with a period twice that of the drive. In this DTC phase, the autocorrelation time scales exponentially with the system size, and in the thermodynamic limit a robust and persistent breaking of the discrete time-translation symmetry is established.
Fig. 3 caption: Single instances of the periodically driven DP, alongside the density n averaged over multiple independent runs, for L = 500 sites. a For a power-law exponent α = 1.4, n oscillates subharmonically with a period that is twice that of the drive, whereas, for α = 1.8, n eventually acquires the periodicity T enforced by the drive. c For finite system sizes L, the subharmonicity Φ(t) decays as Φ(t) ~ exp(−(t − 1)/(τT)) due to the accumulation of phase slips, and, after a few time scales τT, the density n synchronizes with the drive and oscillates with period T. Exponential fits (dotted lines) can be used to extrapolate the lifetime τ of the subharmonic response, on which a scaling analysis is performed in d. For α = 1.4 (blue), the lifetime τ scales exponentially with the system size, τ ~ e^{βL}, whereas no such scaling is found for α = 1.8. The scaling coefficient is again found from an exponential fit (dotted line) and plotted in e versus the power-law exponent α. For small α, that is, long-ranged enough DP, the scaling coefficient β is finite, indicating that in the thermodynamic limit L → ∞ the subharmonic response is persistent and a DTC with infinite autocorrelation time emerges. On the contrary, β ≈ 0 for large α, indicating a trivial dynamical phase in which no stable subharmonic dynamics is established. Here we considered p1 = 1, μ = 0.9, p_d = 0.02, T = 20 and R = 2000.

As an outlook for future research, further work on the driven DP should better assess the nature of the transition between the DTC and the trivial phase, characterize more systematically the phase diagram in other directions of the parameter space, and, most interestingly, address the role of dimensionality. Indeed, it is well known that dimensionality can facilitate the establishment of ordered phases of matter at equilibrium, and the question whether this is the case also out of equilibrium remains open.
A positive answer to this question is suggested by the fact that, in D + 1 dimensions with D ≥ 2, bistability can emerge even in short-ranged models of DP 40,43,44 . Another interesting question regards the fate of chaos and damage spreading in long-ranged DP 45 . Further research should then aim to gain analytical intuition into the problem. For instance, the critical α separating the various phases may be located using a field-theoretical approach, which has been successful in similar contexts in the past 41 . Finally, on a broader perspective, our work paves the way towards the study of non-equilibrium phases of matter in the uncharted territory of driven PCA, with a potentially very broad range of applications throughout different branches of science. As a timely example, Floquet PCA may provide new insights into the understanding of seasonal epidemic spreading and periodic intervention efficacy.
Methods
Here we provide further technical details on our work. In Eq. (4), we considered as distance between sites i and j

r_{i,j} = (L/π) |tan(π(i − j)/L)|,

where the tangent accounts for periodic boundary conditions and makes the distance of the farthest sites with |i − j| = L/2 artificially diverge. This divergence is expected to reduce finite-size effects without changing the underlying physics, which is in fact dominated by sites with |i − j| ≪ L, for which we get a natural r_{i,j} ≈ |i − j|. Indeed, as we checked, similar results are obtained with r_{i,j} = min(|i − j|, L − |i − j|). The Kac-like normalization factor reads instead N_{α,L} = Σ_{j≠i} r_{i,j}^{−α}, which ensures n_{i,t} = 1 when all sites are occupied.

The phenomenology of the bistable phase can be understood from a graphical FP analysis of the equation f_μ(x) = x, illustrated in Fig. 4, which explains the dynamics for α < 1. Three scenarios are possible, interpreted in terms of the ways the graph of the function f_μ intersects the bisector. (i) Inactive: if μ < μ_c^0, the only FP is x_0 = 0, which is stable and corresponds to a completely empty state. The system moves towards this FP and p_t → 0 as t → ∞. (ii) Critical: if μ = μ_c^0, a new semi-stable FP emerges at x_c, which is attractive from its right and repulsive on its left. (iii) Bistable: if μ > μ_c^0, the semi-stable FP splits into an unstable FP x_1 > x_0 and a stable FP x_2 > x_1. In this case, the system will reach either the unoccupied FP x_0 = 0 or the finitely occupied FP x_2 > 0 depending on whether p_1 < x_1 or p_1 > x_1, respectively. That is, the system is bistable, and the critical initial probability separating its two basins of attraction is p_{1,c} = x_1 (see also Supplementary Fig. 1). The critical value μ_c^0 is obtained by numerically solving the condition of tangency between the graph of f_μ and the bisector, and gives μ_c^0 = 0.6550(8) and x_c = 0.5216(9). For μ > μ_c^0, the FPs x_1 and x_2 are found by solving f_μ(x) = x; for instance, we find x_1 = 0.3326(5) and x_2 = 0.7890(9) for μ = 0.8.
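The tangent-regularized distance and the Kac-like normalization can be checked numerically; the sketch below verifies that r_{i,j} ≈ |i − j| for nearby sites and diverges for the farthest pair:

```python
import numpy as np

def distances(L):
    """Pairwise distances r_{i,j} = (L / pi) |tan(pi (i - j) / L)| on a ring.

    The tangent implements periodic boundary conditions: it diverges at
    |i - j| = L / 2 and reduces to |i - j| for |i - j| << L.
    """
    i = np.arange(L)
    delta = i[:, None] - i[None, :]
    return (L / np.pi) * np.abs(np.tan(np.pi * delta / L))

def normalization(L, alpha):
    """Kac-like factor N_{alpha,L} = sum over j != i of r_{i,j}^(-alpha);
    by translation invariance it is the same for every site i."""
    r = distances(L)
    np.fill_diagonal(r, np.inf)   # exclude j = i (inf ** -alpha = 0)
    return (r ** -alpha)[0].sum()
```

Dividing the weights r_{i,j}^{−α} by N_{α,L} then guarantees n_{i,t} = 1 whenever every site of the previous generation is occupied.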
The FP analysis also clarifies the general features of f_μ that allow for the emergence of bistability, which is in fact not contingent on the choice of f_μ made in Eq. (5). Indeed, the only requirement is that, for some parameter(s) μ, the equation f_μ(x) = x has three FPs x_0 < x_1 < x_2, of which x_0 and x_2 are stable, whereas x_1 is unstable. Put simply, f_μ should be a nonlinear function whose graph looks qualitatively like that of Fig. 4c. This condition guarantees a bistable phase for α < 1, which can then possibly extend to α ≥ 1 and, in the presence of a periodic drive, facilitate the establishment of a DTC.
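The three-fixed-point criterion can be checked numerically for any candidate f_μ. A sketch that locates the fixed points on [0, 1] by sign changes of f(x) − x and classifies stability by the slope at the crossing (tested on the illustrative smoothstep-like map, not the paper's Eq. (5)):

```python
import numpy as np

def fixed_points(f, n_grid=10001):
    """Locate the fixed points of f on [0, 1] from sign changes of
    g(x) = f(x) - x, and classify stability via |f'| < 1 at each point."""
    x = np.linspace(0.0, 1.0, n_grid)
    g = f(x) - x
    fps = [0.0] if abs(g[0]) < 1e-12 else []   # x = 0 is a FP when f(0) = 0
    for k in range(n_grid - 1):
        if g[k] * g[k + 1] < 0:
            # linear interpolation of the crossing point
            fps.append(float(x[k] - g[k] * (x[k + 1] - x[k]) / (g[k + 1] - g[k])))
    h = 1e-6
    stable = [abs((f(p + h) - f(p - h)) / (2 * h)) < 1.0 for p in fps]
    return fps, stable
```

For a bistable f_μ this returns exactly three fixed points with a stable-unstable-stable pattern, matching the requirement stated above.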
Finally, note that higher resolution and smaller fluctuations could be achieved in the figures throughout the paper by simulating larger system sizes L and/or considering a larger number of independent runs R. This could, for instance, allow a more accurate characterization of both the equilibrium and the non-equilibrium phase diagrams of our model, which could be explored in other directions of the parameter space for varying α, μ, p_d and T. This would, however, require a formidable numerical effort and therefore goes beyond the scope of this work. As a reference, the generation of Fig. 3e for the parameters considered therein requires a computing time of approximately 4 × 10^3 h per 3 GHz core.
Data availability
No data sets were generated or analysed during the current study.
Code availability
The codes that support the findings of this study are available at https://figshare.com/articles/software/Code/13468836. Received: 2 July 2020; Accepted: 19 January 2021.

Fig. 4 caption: a For a control parameter μ < μ_c^0 = 0.6550(8), the system is inactive, corresponding to a single FP x_0 = 0: at long times, the system ends up in the empty, absorbing state with state variables s_i = 0 for all sites i. b At the critical point μ = μ_c^0, a new semi-stable FP emerges at x_c = 0.5216(9), which is unstable from its left and stable on its right. c Increasing μ above μ_c^0, the semi-stable FP splits into an unstable FP x_1 < x_c and a stable FP x_2 > x_c. Depending on whether the initial density p_1 is < x_1 or > x_1, the system flows towards density n = x_0 = 0 or n = x_2 > 0, respectively, indicating bistability.
Tourist Guidance Robot Based on HyperCLOVA
This paper describes our system submitted to Dialogue Robot Competition 2022. Our proposed system is a combined model of rule-based and generation-based dialog systems. The system utilizes HyperCLOVA, a Japanese foundation model, not only to generate responses but also to summarize and search for information. We also used our original speech recognition system, which was fine-tuned for this dialog task. As a result, our system ranked second in the preliminary round and moved on to the finals.
I. INTRODUCTION
This paper describes our system submitted to Dialogue Robot Competition 2022. The dialog task for this competition is to develop a tourist guidance system that utilizes a humanoid robot [1], [2]. In our system, we have developed a scenario-based dialog system using HyperCLOVA, a Transformer-based Japanese foundation language model with 82B parameters. We also developed a speech recognition system, which we fine-tuned for this tourist guidance domain. As a result of the preliminary round, our system ranked second and moved on to the final round. The evaluation was performed through a survey after the dialog session. The results show that our system achieved the best trustworthiness score among all the submitted systems, which could be attributed to the flexible responses generated by HyperCLOVA. However, our model has a low naturalness score compared to the other metrics, which may be because our method does not fix utterances in a rule-based manner. Overall, the scores on all the evaluation metrics are still far from the maximum score, which indicates that there is room for improvement in many aspects, such as response quality, response time, body movement, etc.
II. COMPETITION DESCRIPTION
The dialogue task in this competition simulates a situation in which the robot, acting as a counter salesperson at a travel agency, accommodates customers' requests. The customer's objective is to decide on a single tourist sight to visit, by consulting with the robot to choose from two candidates. The robot is pre-informed by the organizer's system which tourist sight to strongly recommend and is expected to persuade the customer while showing appropriate hospitality. The designated conversation time is about five minutes.
During the competition, a booth simulating a travel agent's counter is set up in a fixed place. The preliminary round was held at the National Museum of Emerging Science and Innovation in Odaiba, Tokyo, Japan. In the booth, two chairs face each other with a desk in between. The humanoid robot sits in one chair and a customer, who randomly shows up at the booth, sits in the other. In addition, there is a microphone on the desk and a camera behind the humanoid. Contestant systems are allowed to use sensor information from the microphone and camera, and also have access to the tourist sight database, which contains a summary, business hours, facility information, etc. The contestants' system is expected to control the humanoid robot's body (e.g. viewpoint, pose, head inclination), expression, and speech. The contest organizers provide intermediate software to allow contestants to easily control the humanoid.
The evaluation is based on the customer's survey, which is answered after the dialog session. Detailed descriptions are in [2].
III. SYSTEM OVERVIEW
The system proceeds through the dialog task with a predefined scenario, shown in Figure 1. The robot, which we named Shoko, starts with simple greetings and then does some chitchat as an icebreaker. Then, it briefly explains the purpose of the conversation and the selected tourist sights. After that, the robot asks some questions to obtain the customer's information. Based on the answers, the robot recommends and counter-recommends the tourist sights. Finally, there is a question-answering period, and the dialog finishes with a greeting in the closing phase.
Our system is based on HyperCLOVA [3], which is a system containing a Japanese foundation language model with 82B parameters. Previous works with HyperCLOVA showed that few-shot learning (or prompt learning) enables generating fluent and interesting responses in both open-domain [4] and situated [5] dialog tasks. In this competition, unlike the previous works, we attempted to combine HyperCLOVA with rule-based dialog to achieve a controllable task-oriented dialog system. In our system, HyperCLOVA is used not only for generating responses, but also for generating summaries, finding recommendation points, searching for information, etc. Details of the HyperCLOVA usage will be explained in section IV. Note that all the following prompt examples in the figures are translated into English; they are originally Japanese.
Throughout the system, we used a speech recognition system developed for this tourist guidance domain, which will be described in section V. For the robot's facial expression, we made the robot smile most of the time, but when the generated utterance contained an exclamation mark, we made the robot's eyes open wide and its eyebrows rise so that it looks surprised. For the robot's movement, the robot looks at the monitor during the Brief Explanation and Recommendation phases, to attract the customer's attention to the monitor. We also made the robot nod its head at random intervals while the user is speaking.
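The expression rule described here is a simple surface-level mapping. A sketch of the idea (the expression names are placeholders, not the competition's robot-control API):

```python
def choose_expression(utterance: str) -> str:
    """Map a generated utterance to a facial expression, following the
    rule in the text: surprise on exclamation marks (half- or
    full-width), a smile otherwise."""
    if "!" in utterance or "\uff01" in utterance:   # \uff01 is the full-width "!"
        return "surprised"
    return "smile"
```

A rule this simple keeps the expression choice fully deterministic even though the utterance itself comes from a generative model.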
IV. DIALOG SYSTEM

A. Greetings and Icebreaker
The system starts the conversation with a fixed conventional Japanese greeting and self-introduction. The robot is programmed to bow during greetings, which is a common thing to do in Japanese culture. In the greetings, the robot asks the customer to speak loudly, since the speech recognition system is vulnerable to noisy and unclear voices.
After the greetings, the system moves on to the icebreaker phase, in which the robot asks open questions about the customer's work. We included this phase for two reasons: 1) to let customers feel relaxed by letting them speak about themselves, and 2) humans tend to speak to the robot with very short, rough answers such as "Yes" or "OK", and the robot needs to inform them that it is capable of understanding free-form and long utterances.
Our system's icebreaker phase is structured as a three-turn conversation. For the first turn, the robot asks the customer a fixed utterance, "What do you do for a living?", and waits for the customer's response. For the second turn, our system uses HyperCLOVA to respond to the answer, by inputting the prompt shown in Figure 2. It is designed to ask a follow-up question about the customer's work. For the third turn, our system again uses HyperCLOVA to respond with
B. Brief Explanation
After the icebreaker, the system briefly explains the objective of the conversation and the selected tourist sights. It first says, "I heard that you are deciding between two tourist sites," then asks, "Have you ever been to either one of those?". The system then uses a regular expression to detect whether the answer is yes or no. We prepared fixed responses for both cases.
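The yes/no detection can be sketched with a pair of regular expressions. The team's actual patterns are not given in the paper; the Japanese keywords below are illustrative, with negations checked first because phrases like 行ったことない contain the affirmative substring 行ったこと:

```python
import re

# Illustrative keyword patterns for yes/no detection (not the team's actual rules).
YES = re.compile(r"(\u306f\u3044|\u3046\u3093|\u3048\u3048|\u3042\u308a\u307e\u3059|\u884c\u3063\u305f\u3053\u3068)")
NO = re.compile(r"(\u3044\u3044\u3048|\u306a\u3044\u3067\u3059|\u3042\u308a\u307e\u305b\u3093|\u884c\u3063\u305f\u3053\u3068\u306a\u3044)")

def detect_yes_no(answer: str) -> str:
    if NO.search(answer):     # negations first, so "been there? no" wins
        return "no"
    if YES.search(answer):
        return "yes"
    return "unknown"
```

Unmatched answers fall through to "unknown", for which the dialog manager can re-ask or pick a neutral fixed response.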
Then, the system briefly explains the two tourist sights. Since there were not any summaries in the provided database, we generated a one-line summary by utilizing HyperCLOVA. We handcrafted a prompt for the summarization task, in which the input is a long explanation of the tourist sight and the output is a one-line summary.
C. Interview
For the interview section, we apply a strategy that combines a conventional rule-based approach with a modern text generation approach. The former allows us to obtain several essential pieces of information from users, and the latter provides users with pertinent and comfortable comments based on their answers. We prepare a set of questions to find out user preferences in advance. The answers from users are always analyzed by a simple slot-filling function.
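A slot-filling function of this kind can be sketched with keyword patterns. The slots and keywords below are assumptions about what the interview tracks (companions and car use are the two mandatory topics mentioned later), not the team's actual implementation; note the ordering trick so that 電車 ("train") or 車は使わない ("won't use a car") is not misread as car use:

```python
import re

# Illustrative slot definitions; patterns and slot names are assumptions.
SLOTS = {
    "companion": {
        "family": r"(\u5bb6\u65cf|\u5b50\u4f9b|\u5b50\u3069\u3082)",
        "partner": r"(\u604b\u4eba|\u59bb|\u592b)",
        "friend": r"\u53cb\u9054",
        "alone": r"(\u4e00\u4eba|\u3072\u3068\u308a)",
    },
    # "no" patterns are listed first so that e.g. "train" is not
    # matched as car use via the shared character for "car".
    "car": {
        "no": r"(\u96fb\u8eca|\u30d0\u30b9|\u8eca\u306f\u4f7f\u308f\u306a\u3044)",
        "yes": r"(\u8eca|\u30ec\u30f3\u30bf\u30ab\u30fc)",
    },
}

def fill_slots(utterance: str, state: dict) -> dict:
    """Scan one user utterance and fill any slots it mentions,
    keeping earlier values (setdefault) if a slot was already filled."""
    for slot, values in SLOTS.items():
        for value, pattern in values.items():
            if re.search(pattern, utterance):
                state.setdefault(slot, value)
                break
    return state
```

The accumulated state dict can then drive the if-then rules that pick follow-up questions and, later, the recommendation.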
We list the interview questions in Table I. There are two types of questions: the mandatory questions, which our system always asks, and the location-wise questions, which our system asks depending on the user-selected location. First, we mention the two mandatory questions. They are fixed and selected according to solid if-then rules.

a) Participants: Our system always asks the first question, "Who are you traveling with?" This question is for recommending tourist sights based on the user's companions. For instance, if a user answers that he is traveling with his family, our system recommends a family-friendly location. The first question branches into multiple follow-ups depending on the user's answers listed in Table I.

b) Transportation: Another mandatory question is "Are you using a car for this trip?" Since some locations have no parking space and some are far from any train stations, it is desirable to be able to provide local access information. In the last part of our interview section, our system asks, "Can you tell us what points are important for you to enjoy your travel?" Though we do not exploit the answer to this question for our recommendation, it gives the user a sense of satisfaction by being listened to.

d) Question Generation: Our location-wise questions are generated by HyperCLOVA for each sight location. Figure 3 shows our specific prompt, which converts a sight summary into questions.
All questions are related to the sight and always formatted as "Do you like -?" For example, the summary of the Tokyo Trick Art Museum includes "It is a place where you can have a magical experience by optical illusions." In this case, HyperCLOVA may generate "Do you like to have magical experiences?" For each sight, HyperCLOVA generates 10 questions, and our system selects at most three unique questions. Because the question-generation process is often time-consuming, we handle it in advance. All location-wise questions correspond to the recommendations mentioned in Section IV-D.

e) Comment Generation: In addition to asking questions, our system responds to users' answers and gives them appropriate comments. When commenting, our system repeats what the user has said with a nodding motion. We completely separate the question and comment generation processes. We build another prompt, shown in Figure 4, which generates only comments without any questions. If this prompt accidentally generates a comment including a question, our system filters it out and re-generates another comment.
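The selection of at most three unique, well-formed questions from the ten generations can be sketched as follows (the template check is shown in English for illustration; the real system checks the Japanese "Do you like -?" pattern):

```python
def select_questions(generated, limit=3):
    """Keep up to `limit` unique questions matching the expected template."""
    seen, selected = set(), []
    for q in generated:
        q = q.strip()
        if not (q.startswith("Do you like") and q.endswith("?")):
            continue              # drop off-template generations
        if q in seen:
            continue              # drop duplicates
        seen.add(q)
        selected.append(q)
        if len(selected) == limit:
            break
    return selected
```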
D. Recommendation and Counter-Recommendation
For the recommendation phase, our system starts with a fixed utterance: "According to your information, we recommend <sight name>." The system then explains the appealing points of the recommended tourist sight and why it is especially recommended to the customer, based on the information obtained in the previous interview phase and a search system. Finally, the robot makes a counter-recommendation, explaining why the customer should not go to the other sight.

a) Search System: During the recommendation, the system utilizes a search system to generate convincing and factual responses. As an additional data resource for answering questions, we preliminarily crawled two tourist information websites, Jalan 1 and TripAdvisor 2 . The crawled data includes basic information such as business hours and transportation access, as well as user reviews and review scores. The crawled data was formatted into a database using Elasticsearch 3 .
While this resource is useful for answering questions, if all the information about each site were used as a prompt for QA, the information would be scattered and the prompts would be unnecessarily long. Therefore, we utilized HyperCLOVA to extract only the information relevant to the speaker's question. As a prompt, we first write the instruction "Extract only relevant information from the following information," and then list the basic information of a tourist site. By describing multiple prepared template samples that combine assumed queries with basic information, we perform few-shot information extraction. An example is shown in Figure 5.
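The few-shot prompt assembly can be sketched as follows (the instruction is quoted from the paper; the field labels and example wording are illustrative, not the exact prompt of Figure 5):

```python
INSTRUCTION = "Extract only relevant information from the following information."

def build_extraction_prompt(examples, basic_info, query):
    """Assemble instruction + few-shot templates + the new (info, query) pair."""
    parts = [INSTRUCTION]
    for ex_info, ex_query, ex_answer in examples:   # prepared templates
        parts.append(f"Information: {ex_info}\nQuery: {ex_query}\nExtracted: {ex_answer}")
    parts.append(f"Information: {basic_info}\nQuery: {query}\nExtracted:")
    return "\n\n".join(parts)
```

The language model then continues the text after the final "Extracted:", returning only the relevant facts.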
The recommendation starts with explaining why the sight is appealing; this text is generated by HyperCLOVA. The prompt uses the summary of the sight, and HyperCLOVA starts generating after "This place is appealing because ".
For the robot to be more convincing, the model explains why it is recommended especially to the customer. To generate such a recommendation sentence, the system takes two steps: 1) create a short recommending point and 2) generate the response using the recommending point.
For step 1, the recommending point is generated from the answers of the interview phase. If the customer says yes to any of the questions, we use the corresponding preprocessed recommendation point. The preprocessing is a translation task that takes a question as input (e.g., "Will you be touring with a baby?") and produces a recommendation point as output (e.g., "<sight> is recommended to a family with babies."), which is done with HyperCLOVA. However, in some cases there may be no suitable feature to explain to the customer. Therefore, we also prepared customer-independent recommendation points: we extracted several features such as price, indoor/outdoor, distance from the station, and number of reviews from the database, and found advantages that are independent of the customer's answers. We take at most two recommendation points to pass on to step 2, with customer-independent recommendation points having lower priority than customer-dependent ones.
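The step-1 selection logic can be sketched as follows (a minimal illustration; function and variable names are ours):

```python
def select_points(dependent_points, independent_points, cap=2):
    """Pick up to `cap` recommendation points, preferring customer-dependent ones."""
    points = list(dependent_points)[:cap]      # customer-dependent first
    for p in independent_points:               # fill remaining slots, if any
        if len(points) >= cap:
            break
        points.append(p)
    return points
```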
For step 2, we define this response generation task as follows: {s, d, p} → u, where s is the summary of the recommended sight, d is the searched data, p is the recommendation point, and u is the robot's utterance. d is obtained from the search system, using the recommendation point from step 1 as the query. We used HyperCLOVA to solve this task.

c) Counter-Recommendation: To discourage the customer from going to the other sight, the system makes a counter-recommendation, which explains why they should not go to that place. To generate the final utterance, the system takes a two-step procedure similar to the recommendation explained above. One example utterance is as follows: "The Water Science Museum is also a good place to learn about science, but it has few reviews and is not very popular, so Madame Tussauds Tokyo is probably a better choice."
E. Question Answering
One of the important roles of a counter salesperson is to flexibly answer customers' open questions. The robot first asks the customer whether they have any questions. If they do not, it skips to the closing phase; if they do, the system generates the answer response with HyperCLOVA.
We define this response generation task as follows: {s1, s2, d1, d2, r, q} → a, where s1 and s2 are the summaries of the two tourist sights, d1 and d2 are the searched data for each sight (obtained with the same search system as in Section IV-D), r is the recommended sight, q is the question, and a is the answer response. The system uses HyperCLOVA to solve the task with the shots shown in Figure 6.
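The prompt assembly for this task can be sketched as follows (field labels and shot format are illustrative, not the exact shots of Figure 6):

```python
def build_qa_prompt(s1, s2, d1, d2, r, q, shots=()):
    """Format {s1, s2, d1, d2, r, q} -> a as a few-shot completion prompt."""
    def block(s1, s2, d1, d2, r, q, a=""):
        return (f"Sight A: {s1}\nSight B: {s2}\n"
                f"Facts A: {d1}\nFacts B: {d2}\n"
                f"Recommended: {r}\nQuestion: {q}\nAnswer: {a}").rstrip()
    parts = [block(*shot) for shot in shots]   # few-shot examples with answers
    parts.append(block(s1, s2, d1, d2, r, q))  # new query, answer left open
    return "\n\n".join(parts)
```

The model generates the answer a as the continuation after the final "Answer:".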
F. Closing
For the closing phase, the robot first informs the customer that the time limit has been reached and then moves on to the final persuasion. During the final persuasion, the robot talks about the experience of visiting the recommended tourist sight; this is generated by HyperCLOVA. To generate a plausible experience, the prompt includes user-submitted positive reviews from TripAdvisor.
G. Other Improvements
In this competition, we are required to use Amazon Polly 4 for text-to-speech conversion. Amazon Polly fails to pronounce some Kanji characters in Japanese correctly. The most critical error in this tourist-guidance task was "方", which can be pronounced "kata" or "ho" and has different meanings depending on the pronunciation. The right pronunciation can be inferred from the context. Therefore, we used HyperCLOVA to convert "方" into Hiragana, so that it could be pronounced correctly by Amazon Polly.
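This conversion step can be sketched as a few-shot prompt (the instruction wording and example pairs are illustrative stand-ins for our actual prompt):

```python
def build_reading_prompt(sentence):
    """Few-shot prompt asking the LLM to rewrite 方 in hiragana from context."""
    shots = [("この方が先生です。", "このかたが先生です。"),     # "kata": person
             ("電車の方が速いです。", "電車のほうが速いです。")]  # "hou": comparative
    parts = ["Rewrite 方 in hiragana according to context."]
    for src, tgt in shots:
        parts.append(f"Input: {src}\nOutput: {tgt}")
    parts.append(f"Input: {sentence}\nOutput:")
    return "\n".join(parts)
```

The rewritten sentence returned by the model is then passed to Amazon Polly in place of the original.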
A. Model Architecture
Our original speech recognition system uses an end-to-end model based on connectionist temporal classification (CTC) with a self-conditioning architecture [6]. Our model is composed of stacked encoder layers, as shown in Figure 7. Intermediate predictions from each encoder layer are fed back to the next encoder layer. This self-conditioning architecture is known to improve speech recognition accuracy by relaxing the conditional independence assumption of CTC-based speech recognition models.
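The feedback of intermediate predictions can be sketched numerically as follows (a toy NumPy model with random weights standing in for trained encoder blocks; it illustrates only the self-conditioning data flow, not our actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, V = 50, 64, 30            # frames, hidden dim, vocab size (incl. CTC blank)
n_layers = 4

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Random weights stand in for trained parameters.
W_enc = [rng.normal(scale=D ** -0.5, size=(D, D)) for _ in range(n_layers)]
W_out = rng.normal(scale=D ** -0.5, size=(D, V))   # shared intermediate CTC head
W_in = rng.normal(scale=V ** -0.5, size=(V, D))    # maps predictions back to features

def self_conditioned_encoder(x):
    """x: (T, D) acoustic features -> list of per-layer CTC posteriors (T, V)."""
    h = x
    posteriors = []
    for W in W_enc:
        h = np.tanh(h @ W)            # simplified encoder layer
        p = softmax(h @ W_out)        # intermediate CTC prediction
        posteriors.append(p)
        h = h + p @ W_in              # feed the prediction back (self-conditioning)
    return posteriors

posts = self_conditioned_encoder(rng.normal(size=(T, D)))
```

Because each layer sees the previous layer's label posteriors, later layers are no longer conditionally independent of earlier frame-level decisions.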
B. Training Details
The following two-stage strategy was adopted for model training. First, a general-purpose model was trained on realistic speech data covering various domains; thousands of hours of in-house speech data were used for this initial training. Second, the model was fine-tuned on another speech data set to adapt it to the tourist-guide domain. This additional training set was created by extracting sentences containing landmarks and place names from the text corpus used for HyperCLOVA model training. To reduce the time and effort required for speech recording, the speech corresponding to each text sentence was synthesized by our in-house text-to-speech (TTS) engine.

4 https://aws.amazon.com/polly/
C. Evaluation
We performed an experimental evaluation of our speech recognition models. Best-path decoding was employed to reduce latency due to the speech recognition process. Two evaluation sets were prepared for the general and tourist-guide domains, with 5,000 utterances for the general domain and 3,000 for the tourist-guide domain. Table II compares the character error rates (CERs) of our initial and fine-tuned models. The fine-tuned model improved the CER for the tourist-guide domain while maintaining that of the general domain, meaning that the model specialized in the tourist-guide domain without losing its generality. Fig. 8 shows the results of the surveys conducted after the dialog sessions with the robot. Our system ranked second overall in the preliminary round; Team A and Team C are the first- and third-place results, respectively. The baseline system is described in [2].
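CER is the character-level edit distance between reference and hypothesis transcripts, divided by the total number of reference characters; a minimal sketch:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance over characters via one-row dynamic programming."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            # deletion, insertion, or substitution/match
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (r != h))
    return dp[-1]

def cer(refs, hyps):
    """Total edit errors over total reference characters, across a test set."""
    errors = sum(edit_distance(r, h) for r, h in zip(refs, hyps))
    chars = sum(len(r) for r in refs)
    return errors / chars
```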
VI. RESULTS AND ANALYSIS
As shown in Fig. 8, our model has a low score on naturalness. One possible cause is the generation quality of HyperCLOVA, which often produced unconvincing or out-of-place responses (e.g., "I'm not sure. Please search by yourself."). The generation quality was degraded mainly by the lack of contextual information in the prompt and by speech recognition errors. Another possible reason is the response time of 2 to 4 seconds, during which users could feel stressed waiting for the response. For the trustworthiness metric, our system was the best among all the submitted systems. This could be because our model generates flexible responses that reflect the user's input, which may give customers the impression that they are being listened to.
VII. CONCLUSIONS
This paper explained the tourist guidance robot based on HyperCLOVA, which was submitted to the Dialogue Robot Competition 2022. Our system used HyperCLOVA to solve multiple types of language tasks in tourist guidance, including summarization, information extraction, response generation, style transfer, and more. We also implemented a speech recognition system fine-tuned for this dialog task. In the qualifying round, our system achieved second place in the overall score and moved on to the final round. The evaluation results show that our system earned users' trust, while leaving substantial room for improvement in naturalness.
Diacylglycerol Induces Fusion of Nuclear Envelope Membrane Precursor Vesicles*
Purified membrane vesicles isolated from sea urchin eggs form nuclear envelopes around sperm nuclei following GTP hydrolysis in the presence of cytosol. A low density subfraction of these vesicles (MV1), highly enriched in phosphatidylinositol (PtdIns), is required for nuclear envelope formation. Membrane fusion of MV1 with a second fraction that contributes most of the nuclear envelope can be initiated without GTP by an exogenous bacterial PtdIns-specific phospholipase C (PI-PLC) which hydrolyzes PtdIns to form diacylglycerides and inositol 1-phosphate. This PI-PLC hydrolyzes a subset of sea urchin membrane vesicle PtdIns into diglycerides enriched in long chain, polyunsaturated species as revealed by a novel liquid chromatography-mass spectrometry analysis. Large unilamellar vesicles (LUVs) enriched in PtdIns can substitute for MV1 in PI-PLC induced nuclear envelope formation. Moreover, MV1 prehydrolyzed with PI-PLC and washed to remove inositols leads to spontaneous nuclear envelope formation with MV2 without further PI-PLC treatment. LUVs enriched in diacylglycerol mimic prehydrolyzed MV1. These results indicate that production of membrane-destabilizing diglycerides in membranes enriched in PtdIns may facilitate membrane fusion in a natural membrane system and suggest that MV1, which binds only to two places on the sperm nucleus, may initiate fusion locally.
At the end of each mitosis in eukaryotes, the nuclear envelope is typically reconstituted by membrane fusion, forming the nuclear compartment and segregating the chromosomes from the cytoplasm. A similar process encloses sperm chromatin in egg cytoplasm following fertilization. A number of studies emphasizing the role of proteins have addressed the mechanism of nuclear envelope assembly, many utilizing cell-free systems derived from eggs or somatic cells (1)(2)(3)(4). However, relatively little attention has been paid to the essential role(s) played by membrane lipids in this process.
Male pronuclear or somatic nuclear envelope formation involves binding of nuclear membrane precursors to the chromatin surface followed by fusion to create a double membrane enclosing the chromatin (1,(5)(6)(7). We have previously reported that envelope formation in a cell-free system derived from sea urchin eggs requires the fusion of three egg membrane vesicle populations and remnants of the sperm nuclear envelope at the tip and base of the conical nucleus (8 -10).
One of the egg vesicle populations (MV1) 3 is particularly unusual. It is a low density fraction highly enriched in the membrane lipid phosphatidylinositol (PtdIns) (9,11). MV1 binds at the tip and base of the sperm nucleus and is required for nuclear envelope formation, which can be induced by addition of GTP or a bacterial PtdIns-specific phospholipase C (PI-PLC) (9,12). The endogenous sea urchin PI-PLC activity probably resembles a typical eukaryotic enzyme whose substrate is PtdIns(4,5)P2. GTP-initiated envelope formation is inhibited by GTPγS and by the PI 3-kinase inhibitors, wortmannin and LY294002 (12,13). Initiation of the fusion process by exogenous bacterial PI-PLC or human recombinant PI-PLCγ can be inhibited by the PI-PLC inhibitors ET-18-OCH3 or U73122 (12,14).
PtdIns hydrolysis is best known as an intermediate step in G-protein signaling pathways in which PtdIns(4,5)P2 is hydrolyzed by PI-PLC to form diacylglycerol (DAG) and inositol 1,4,5-trisphosphate (InsP3). Typically, such signaling occurs in membranes containing 3-10% PtdIns with much lower amounts of PtdIns(4,5)P2 (15). However, the large amount of PtdIns present in MV1 (up to 80% of the phospholipid) suggested to us that the PtdIns hydrolysis may be important in altering membrane structure rather than in initiating a signaling pathway (12). Since MV1 binds to the tip and base of the nucleus, hydrolysis at these points might lead to fusion initiation through localized formation of DAG. The hydrolysis products of PtdIns catalyzed by the bacterial PI-PLC are diglycerides (normally DAG) and inositol 1-phosphate (from the intermediate D-myo-inositol-1,2-cyclic phosphate) (16). DAG produced enzymatically by phospholipases acting on synthetic membranes has been shown to be membrane destabilizing and induce membrane fusion (17)(18)(19)(20).
We show here that under fusion-stimulating conditions, bacterial PI-PLC treatment of sea urchin egg membranes results in large increases of a small subset of diradylglyceride (diacylglycerol, alkylacylglycerol, and alkenylacylglycerol) molecular species, in particular, DAG 18:0/20:4. To test whether fusion of natural membranes induced by PI-PLC in our cell-free system might result from localized production of DAG, we took two complementary approaches. First, MV1 was hydrolyzed with PI-PLC and washed to remove the water-soluble inositol products. The resulting prehydrolyzed MV1 when added to a cell-free system containing other nuclear envelope precursor MVs and cytosol led to nuclear envelope formation with no added inducer. Second, we used synthetic large unilamellar membrane vesicles (LUVs) to substitute for MV1 and varied the phospholipid composition of these membranes. LUVs containing 75% PtdIns mimicked MV1. In the presence of cytosol these LUVs bound to the tips of nuclei and initiated fusion when exogenous PI-PLC was added. Furthermore, LUVs containing 75% DAG led to fusion without exogenously added PI-PLC, mimicking prehydrolyzed MV1.
These results indicate that production of substantial amounts of diglycerides from the PtdIns of MV1 can lead to nuclear envelope formation and offer a possible role for PtdIns-rich membranes in local initiation of nuclear envelope formation.
Lipid Extraction of Cytoplasmic Membrane Vesicles Hydrolyzed with PI-PLC—MV0 from P. lividus was isolated from 1 ml of S10 and resuspended in 400 μl of LB. The suspension was divided into two equal parts and either left untreated or treated with 0.16 unit/ml bacterial PI-PLC for 2 h at room temperature. Total lipids were extracted from each using a modified Folch procedure (11). An internal standard mix containing 12:0/12:0 species of DAG, phosphatidic acid, phosphatidylcholine (PtdCho), phosphatidylethanolamine, phosphatidylglycerol, and phosphatidylserine (500 ng each) was added to each sample followed by 1.5 ml of methanol. Chloroform (3 ml) was then added to each, and samples were mixed and left for 10 min. To split the phases, 1.5 ml of 0.88% KCl was added. The upper aqueous phase was removed and the lower organic phase containing total lipids was dried under a stream of nitrogen, dissolved in 150 μl of chloroform/methanol (2:1 v/v), transferred into a silanized autosampler vial insert, dried again on a rotary vacuum evaporator, and dissolved in 15 μl of chloroform.
Sperm Nuclei Permeabilization, Fertilized Egg Extracts, and MV Preparation—Sperm nuclei of L. pictus were permeabilized with 0.1% Triton X-100 as described previously (21). Demembranated nuclei were washed and resuspended at 10^8 nuclei/ml. Nuclei were diluted 1:25 and added to egg extracts to a final ratio of approximately one sperm nucleus per egg equivalent. Eggs and sperm were collected and eggs fertilized as described (21). Fertilized eggs were washed twice in Millipore HAWP-filtered sea water at 100 × g for 1 min in a 5403 Eppendorf swinging bucket microcentrifuge at 15°C. At 13 min post-fertilization, 2.5 ml of packed eggs were washed twice with an equal volume of cold LB buffer and homogenized by passing twice vigorously through a 22-gauge needle. The lysate was cleared at 10,000 × g for 10 min in a 5417R Eppendorf microcentrifuge at 4°C. The recovered supernatant, referred to as cytoplasmic extract or S10, includes cytosol and cytoplasmic membrane vesicles. S10 was used directly or frozen and stored in small aliquots at −80°C.
Cytosol (S150) was prepared by fractionating the S10 at 150,000 × g for 3 h in a Beckman Ti50 rotor at 4°C. S150 supernatant was used immediately or frozen in aliquots at −80°C. The pellet of membrane vesicles (MV0) was washed twice in MWB with phenylmethylsulfonyl fluoride added freshly to a final concentration of 1 mM for 10 min at 45,000 × g in a Ti50 rotor. MV0 was resuspended in 0.10 of the volume of the original S10 and used immediately or quick frozen in aliquots at −80°C.
To prepare MV1 and MV2, MV0 from 2.5 ml of packed eggs was carefully resuspended in 100 μl of TN, and then 900 μl of MWB buffer was added. Complete suspension was achieved by passing through a series of increasingly smaller plastic micropipette tips (1 ml to 20 μl). MVs were stained with DiOC6 at a final concentration of 10 μg/ml and observed in a fluorescence microscope with a fluorescein filter set to confirm the absence of MV aggregates. A linear sucrose gradient of 0.1-2.0 M sucrose (15 ml) in TN buffer was made in a 16-ml Ultra-Clear Beckman centrifuge tube. The MV0 suspension was carefully applied to the top of the gradient and overlaid with mineral oil. MVs were subfractionated by sedimentation to density equilibrium at 150,000 × g for 20 h at 4°C. Each band was recovered by side puncture with a 22-gauge needle on a 5-ml syringe. Median densities of MV1 and MV2 were 1.02 and 1.04 g/ml, respectively. Each band was diluted with 4 volumes of ice-cold MWB and concentrated at 150,000 × g for 30 min in an SW28 rotor. Each pellet was suspended in 250 μl of MWB, and the samples were aliquoted and frozen at −80°C.
Binding, Fusion, and Inhibition Assays with MVs—To 20 μl of S10 and 1.2 μl of ATP-generating system, demembranated nuclei at a final concentration of 8 × 10^5 were added as described (12). After 1 h at room temperature, a 0.10 volume of DiOC6 stock was added and samples observed using a Zeiss Neofluar 100× oil-immersion objective and a fluorescein filter set (λex 460 ± 20 nm; λem > 500 nm). Images were captured in gray scale with a Hamamatsu Photonics C2400 SIT video camera using frame image averaging and background subtraction with a Hamamatsu Argus-10 image processor.
Decondensed nuclei with bound MVs were underlaid with 0.5 M sucrose in nuclear preparation buffer and centrifuged for 20 min at 500 × g at 4°C. Pellets were resuspended in 20 μl of S100 or S150, and the appropriate inducer was added. Samples were incubated for 2 h. Nuclear envelope formation was scored as a continuous fluorescent rim, in contrast to the patchy appearance of bound MVs.
GTP inducer was added from the stock solution to a final concentration of 0.25 mM. PI-PLC was added to a final concentration of 0.07 unit/ml. Inhibition reactions were performed by addition of inhibitors to S150. The final concentration of ET-18-OCH3 (Sigma) was 19.6 μM and of wortmannin (Sigma) was 25 nM. Each experiment was repeated at least three times, 100 nuclei were counted in each sample, and standard deviations were calculated.
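The scoring statistics can be sketched as follows (the counts are hypothetical; the assay scores 100 nuclei per sample over at least three repeats):

```python
import statistics

# Hypothetical counts of nuclei showing a continuous fluorescent rim,
# out of 100 scored per replicate, for three independent experiments.
formed = [91, 95, 88]
scored_per_sample = 100

percents = [100 * f / scored_per_sample for f in formed]
mean_pct = statistics.mean(percents)
sd_pct = statistics.stdev(percents)   # sample standard deviation (n - 1)
print(f"envelope formation: {mean_pct:.1f} ± {sd_pct:.1f}%")
```

Dividing the standard deviation by the square root of the number of repeats would give the standard error (S.E.) reported for the synaptojanin1 experiments.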
For the synaptojanin1 experiments, MVs from S10 were assembled around chromatin in the presence of ATP for an hour. Fusion was induced with 1 mM GTP for 2 h. Alternatively, nuclei were treated with 1 μg/ml of the Syn1-5ptase protein for 15 min prior to the addition of GTP, and a further 1 μg/ml Syn1-5ptase was added simultaneously with GTP. An average of 24 nuclei were scored on three independent occasions and the mean and S.E. of these results calculated.
Pretreatment of MV1—MV1 was diluted in MWB, and either the inducer alone or the inducer and inhibitor was added to the reaction mix as above. Reactions were incubated for 1 h at room temperature. Vesicles were stained with DiOC6 as described above and were pelleted by centrifugation at 45,000 × g for 15 min. The supernatant was removed, and the pellet was washed in MWB and resuspended. The pretreated MV1 was added to MV2 and S150 in presence of the ATP-generating system and decondensed nuclei. The reaction was incubated for 2 h and observed as described.
LUV Preparation—LUVs were made by extrusion with a mini-extruder from Avanti Polar Lipids, Inc. Lipids were dissolved in chloroform and mixed in the desired proportions by weight. The organic solvent was evaporated with an argon stream in a fume hood to yield a lipid film, which was hydrated in LB buffer by agitation at 4°C. The lipid suspension was successively extruded with an Avanti Mini-Extruder (Avanti Polar Lipids) through polycarbonate filters of pore sizes 1.0 and 0.4 μm, the latter approximately the size of MVs in S10.
GTP-induced Nuclear Envelope Formation Is Blocked by Depletion of the Putative Substrate of the Endogenous PI-PLC—Nuclear envelope formation in a cell-free system can be induced by adding GTP or PI-PLC (12). GTP induction of nuclear envelope formation is inhibited by the PI-PLC inhibitor U73122 (14). To further demonstrate that GTP induction of nuclear envelope formation in the cell-free system requires endogenous PI-PLC activity, GTP was added to cytoplasmic extracts depleted of the PI-PLC substrate phosphatidylinositol bisphosphate.
We used a construct of synaptojanin1 (Syn1-5ptase) phosphatase (S470-R962; lacking the Sac phosphatase and proline-rich domains), which has a strong specificity for the D5-phosphate of the inositol ring. Enzyme kinetics have shown that this construct has the greatest preference for PtdIns(4,5)P2 over other phosphoinositide substrates (22). Nuclear envelope formation by GTP was severely inhibited by treatment with this phosphatase, indicating that an endogenous PI-PLC is required for GTP induction (Fig. 1).
Bacterial PI-PLC Treatment of Sea Urchin Membrane Vesicles Produces a Subset of Diacylglycerol Species—Bacterial PI-PLC, which hydrolyzes unphosphorylated PtdIns and has been reported to have no activity toward PtdIns(4,5)P2 (16), was chosen for our experiments to minimize complications of rates of PtdIns phosphorylation upon the kinetics of DAG production from PtdIns(4,5)P2. In addition, this enzyme does not produce inositol 1,4,5-trisphosphate and therefore no resulting increase in free Ca2+ associated with this effector. Hydrolysis of sea

FIGURE 1. Nuclear envelope formation is inhibited by Syn1 5-phosphatase. After binding of membrane vesicles to sperm nuclei in cytoplasmic extracts (S10) in the presence of an ATP-generating system, aliquots were untreated or treated with a construct of the 5-phosphatase domain of synaptojanin1. GTP was added to one treated and one untreated aliquot and nuclear envelope formation scored. The phosphatase, specific for the D5-phosphate of the inositol ring, severely blocked nuclear envelope formation induced by GTP.

FIGURE 2. Mass spectrometry analysis of diacylglycerol species present in isolated sea urchin cytoplasmic membrane vesicles before and after bacterial PI-PLC treatment. P. lividus MV0 was isolated from 1.0 ml of S10 and resuspended in 400 μl of LB. Half was untreated, and half was treated with 0.16 unit/ml bacterial PI-PLC for 2 h at room temperature. Total lipid extracts were made and 64 diglyceride species analyzed as described under "Materials and Methods." Species that contributed less than 2% of total diglycerides in control and treated samples have been grouped as "others." Data are presented as mean mole % of the total DAG pool ± S.E. (n = 4).

The DRG species were resolved using a novel LC-MS procedure. Direct detection of underivatized DRG in very low amounts by mass spectrometry has previously been difficult to achieve, primarily because of extremely poor ionization of the lipid. To address this we developed a novel HPLC separation of the diradylglycerols with post-column addition of ammonium formate, which permitted the formation of positively charged adducts that could then be detected by ESI-MS.
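The mole-percent reporting of diglyceride species amounts to normalizing each species' amount to the total DRG pool; a minimal sketch with hypothetical peak areas (real quantification would also use per-species response factors and the 12:0/12:0 internal standard, which this toy normalization omits):

```python
# Hypothetical integrated LC-MS peak areas (arbitrary units); species names
# follow the paper's acyl-chain notation, but the values are illustrative.
areas = {"18:0/20:4": 480.0, "1-O-18:0/22:6": 360.0,
         "16:0/18:1": 120.0, "18:1/18:1": 15.0, "17:0/20:4": 25.0}

total = sum(areas.values())
mole_pct = {sp: 100 * a / total for sp, a in areas.items()}

# Species under 2% of the pool are folded into an "others" bin before reporting.
reported = {sp: p for sp, p in mole_pct.items() if p >= 2.0}
reported["others"] = 100 - sum(reported.values())
```

Averaging such normalized profiles over replicates (n = 4 in Fig. 2) gives the mean mole % ± S.E. per species.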
In untreated membrane vesicles, the major DRG species was the alkylacyl structure 1-O-18:0/22:6, representing ~18% of total species (Fig. 2). Following PI-PLC hydrolysis, the species profile changed, with almost a 5-fold increase in the proportion of DAG 18:0/20:4, which became the predominant species representing ~24% of the total.
Several other species initially present in lower proportions also increased, but only two of these to comparable degrees.

Pretreatment of MV1 with PI-PLC Renders It Fusogenic—We have previously hypothesized that, upon hydrolysis, the high levels of PtdIns in MV1 provide sufficient DAGs to facilitate fusion of nuclear envelope precursors (12). We tested this idea by pretreating MV1 with PI-PLC to produce diglycerides, then washing the membranes to remove soluble inositol phosphates and incubating pretreated MV1 in extracts containing the remaining egg MV nuclear envelope precursors (collectively termed MV2).
As shown in TABLE ONE, nuclear envelope formation can be induced by either GTP or PI-PLC in a complete system containing decondensed sperm nuclei, cytosol (S150), an ATP-generating system, and MV0 (cytoplasmic MVs, which include MV1, MV2, and other unnecessary MV fractions (9)). Similar levels of NE formation were seen in a more defined system when purified MV1 and MV2 were substituted for MV0. If MV1 was omitted, only basal levels of envelope formation were detected, indicating that MV1, although it contributes a minor fraction of the total nuclear envelope, is required for envelope formation. GTP-induced fusion was blocked by wortmannin (a PI 3-kinase inhibitor), and PI-PLC-induced fusion by ET-18-OCH3 (a PI-PLC inhibitor). Representative nuclei corresponding to some experiments in TABLE ONE (and keyed there) are shown in Fig. 3.
If the relevant activity of exogenous PI-PLC is the production of diglycerides in MV1, pretreatment of MV1 should make subsequent PI-PLC treatment unnecessary. As shown in TABLE TWO, MV1 prehydrolyzed with PI-PLC led to nuclear envelope formation even in the absence of the PI-PLC inducer. Envelope assembly using pretreated MV1 was no longer sensitive to ET-18-OCH3 inhibition during incubation in extract, although ET-18-OCH3 blocked the effect of PI-PLC pretreatment. MV2 was required, since it provides most of the nuclear envelope (9). Representative nuclei corresponding to some experiments of TABLE TWO (and keyed there) are shown in Fig. 4.
These data indicate that PI-PLC pretreatment renders MV1 fusogenic and are consistent with the notion that exogenous PI-PLC induces fusion in the cell-free system by producing diglycerides in the PtdIns-rich MV1 membrane fraction at levels sufficient to induce fusion.
PtdIns-rich LUVs Can Substitute for MV1—To better define the role of MV1 in the formation of nuclear envelopes, we prepared protein-free model membranes with a phospholipid composition mimicking MV1 (75% PtdIns/25% PtdCho (w/w)). These vesicles, when added to a system containing cytosol, an ATP-generating system, and sperm nuclei but no MV2, bound to the nuclei at two positions corresponding to the sites of the sperm nuclear envelope remnants, thus mimicking MV1 (Fig. 5A). Binding did not occur in LB buffer (data not shown), presumably because proteins in the S150 (cytosol) are necessary to mediate specific binding. That these LUVs are capable of assuming the role of MV1 in membrane fusion was tested as shown in TABLE THREE. TABLE ONE shows that induction by PI-PLC requires the MV1 fraction. TABLE THREE shows that when MV1 is substituted by PtdIns-rich LUVs, either GTP or PI-PLC will initiate fusion, each subject to the appropriate inhibition. No envelope formation was seen when the LUVs, MV2, or inducer were omitted. Representative examples of induction of nuclear envelope formation by GTP and PI-PLC with PtdIns-rich LUVs in the presence of MV2 are shown in Fig. 5, B-E (keyed in TABLE THREE).

TABLE ONE. Nuclear envelope formation in vitro requires MV1. Nuclear envelopes (NEs) were formed around membrane-stripped sperm nuclei in a cell-free system. MV0 contained total cytoplasmic membrane vesicles from a 10,000 × g supernatant of fertilized egg extract. MV1 and MV2 were separated by buoyant density and contain all of the MV0 membrane precursors necessary to form the NE (9). Formation is dependent on MV1, stimulated by GTP hydrolysis or PI-PLC activity, and inhibited, respectively, by wortmannin and ET-18-OCH3. All reactions contained decondensed sperm nuclei, cytosol (S150), and an ATP-generating system. Some data points are keyed to Fig. 3: among the keyed entries, PI-PLC gave 93 ± 6% envelope formation (Fig. 3C), whereas control conditions gave <1% (Fig. 3, B and D). Columns: membranes present, inducers, envelope formation, inhibitors (ET-18-OCH3, wortmannin), envelope formation with inhibitor.
These data indicate that synthetic membranes of 75% PtdIns/25% PtdCho can mimic many properties of MV1. These vesicles bound to the same regions as MV1 and conferred GTP or PI-PLC regulation of nuclear envelope formation.
DAG-rich LUVs Are Fusogenic—The experiments in TABLE TWO suggest that diglycerides in the hydrolyzed MV1 fraction are responsible for fusion. To directly test this idea, we made LUVs in which DAG was quantitatively substituted for PtdIns. When 75% DAG/25% PtdCho (w/w) LUVs were added to a system containing MV2, cytosol, and an ATP-generating system, envelope formation occurred with or without the inducers GTP or PI-PLC (TABLE FOUR). The percentage of nuclei showing fusion was in all cases well above the background levels seen in the absence of LUVs or MV1 (TABLE ONE). No envelopes were formed without MV2, so the LUVs by themselves did not form an envelope. Fusion was not altered by the inclusion of inhibitors of the normal inducers. Fig. 6 shows representative examples (keyed in TABLE FOUR). These data indicate that DAG-rich vesicles are fusogenic in a natural membrane system and suggest a mechanism for the fusion of PtdIns-rich vesicles upon hydrolysis by PI-PLC.
DISCUSSION
A cell-free system derived from fertilized sea urchin eggs supports nuclear envelope assembly on added membrane-stripped sperm nuclei induced by GTP hydrolysis (4,10). Exogenously added bacterial PI-PLC can also induce nuclear membrane formation (12, Fig. 7). Each is dependent on a minor membrane fraction MV1, highly enriched in PtdIns (12). MV1 contributes only 10% of the nuclear membrane vesicle precursor population and binds exclusively to the regions of the sperm nucleus containing remnants of the sperm nuclear envelope (9). Most of the nuclear membrane is contributed by the major fraction MV2 enriched in a marker enzyme of the endoplasmic reticulum (9).
Pretreatment of MV1 with PI-PLC renders it fusogenic
NEs were formed around membrane-stripped sperm nuclei in a cell-free system. MV1 was pretreated with bacterial PI-PLC before addition. Formation no longer required inducer and was not inhibited by post-treatment with ET-18-OCH3. All reactions contained decondensed sperm nuclei, cytosol (S150), and an ATP-generating system. Some data points are keyed to Fig. 4.
[Table columns: MV1 pretreatment | MV2 | Inducers | Envelope formation | Inhibitors (ET-18-OCH3) | Envelope formation]
Diacylglycerol and Nuclear Envelope Formation, DECEMBER 16, 2005 • VOLUME 280 • NUMBER 50

MV1 binds via peripheral proteins to two specific regions of the sperm nucleus (9) containing that portion of the sperm nuclear membrane that does not break down following fertilization (23) and that is characterized by an unusual underlying osmiophilic "cup" (10). In contrast, the major MV2 subfraction binds all around the nuclear periphery through lamin B receptor, a chromatin-binding intrinsic inner nuclear membrane protein (24). The high concentration of PtdIns in MV1 prompted us to hypothesize that its role was to generate, upon PI-PLC catalyzed hydrolysis, a local enrichment of membrane-destabilizing DAG leading to initiation of fusion with the other membrane vesicles that make up the bulk of the envelope precursors (10,12).

FIGURE 6. Representative examples of nuclear envelope formation with 75% DAG/25% PtdCho LUVs substituted for MV1. A, LUVs (75% DAG/25% PtdCho (w/w)) added to MV2 but no inducer added. B, LUVs (75% DAG/25% PtdCho) added to MV2, GTP, and wortmannin. C, LUVs (75% DAG/25% PtdCho) added to GTP but no MV2. D, LUVs (75% DAG/25% PtdCho) added to MV2, PI-PLC, and ET-18-OCH3. All reactions contained decondensed sperm nuclei, cytosol (S150), and an ATP-generating system. (See TABLE FOUR.)

FIGURE 7. Model of nuclear envelope assembly.
Step 1, condensed chromatin (gray) decondenses in the presence of soluble cytosolic proteins (25). Simultaneously MVs (black) bind to the surface of the chromatin, initially around regions of the lipophilic structures (white circles), detergent-resistant membranes in the centriolar and acrosomal fossae (8). These steps require an ATP-generating system.
Step 2, chromatin fully decondenses to a sphere ~4 µm in diameter (8), and MV binding is completed.
Step 3, MVs fuse to form the double bilayer of the nuclear envelope. This process is initiated by GTP hydrolysis, which requires a PLC activity in the cytosol (14). Endogenous activated PLC hydrolyzes PtdIns(4,5)P2 to DAG. Polyunsaturated DAG (mostly DAG 18:0/20:4) can be generated from MV0 (Fig. 2). DAG, by destabilizing membranes and inducing negative curvature (26,27), and/or by recruiting non-PKC C1 domain-containing proteins to the nuclear envelope, induces fusion of MVs. This final step also has a low (<50 nM) calcium dependence (4,14). Fusion is subject to inhibition by GTPγS and BAPTA, exogenous DAG kinase, and the 5-phosphatases SopB and Syn1, which remove DAG and PtdIns(4,5)P2, respectively, from the system (Ref. 14 and Fig. 1). In a further step, nuclei can be induced to swell in the presence of additional ATP (13), in a process requiring lamin B import to the nucleus (data not shown) (28).
PtdIns-rich LUVs can substitute for MV1 in nuclear envelope assembly
NEs were formed around membrane-stripped sperm nuclei in a cell-free system. MV1 was replaced by LUVs of 75% PtdIns/25% PtdCho. These LUVs mimicked MV1. All reactions contained decondensed sperm nuclei, cytosol (S150), and an ATP-generating system. Some data points are keyed to Fig. 5.
DAG-containing LUVs are fusigenic
NEs were formed around membrane-stripped sperm nuclei in a cell-free system. MV1 was replaced by LUVs of 75% DAG/25% PtdCho. Formation no longer required inducer and was not inhibited by wortmannin or ET-18-OCH3. These LUVs mimicked MV1 pretreated with PI-PLC. All reactions contained decondensed sperm nuclei, cytosol (S150), and an ATP-generating system. Some data points are keyed to Fig. 6.
[Table columns: MV2 | Inducers | Envelope formation | Inhibitors (ET-18-OCH3, wortmannin) | Envelope formation]

Several lines of evidence presented here support this hypothesis. First, PI-PLC-mediated fusion requires that MV1 be present. Second, pretreatment of MV1 with bacterial PI-PLC renders it fusigenic. These observations are consistent with the notion that exogenous PI-PLC acts to induce fusion in the cell-free system by production of DAG in the MV1 membrane fraction. Use of synthetic membranes of known composition permitted a third test. LUVs of 75% PtdIns/25% PtdCho mimic MV1, binding to the same regions as MV1 and conferring both GTP- and PI-PLC-induced fusion in the presence of MV2. Moreover, similar model membranes containing 75% DAG are also fusigenic in the absence of inducers.
Although the relevant endogenous PI-PLC activities required for nuclear envelope formation (which are blocked by the inhibitor U73122 but not by its inactive analog U73343 (14)) would act upon PtdIns(4,5)P2, and the bacterial enzyme acts upon PtdIns, both produce membrane diglycerides (16). Their soluble products are InsP3 or inositol 1-phosphate, respectively. To avoid the classic signaling pathway involving InsP3 production and the kinetic complications of the kinase activities required for PtdIns(4,5)P2 production, we chose to use the prokaryotic PI-PLC (16). Since prehydrolyzed and washed MV1 was capable of initiating nuclear envelope formation, the effects of soluble inositols were eliminated. Use of DAG-rich LUVs also ruled out a role for production of these soluble inositols from MV1 in nuclear envelope formation in the cell-free system.
Since MV2 hydrolyzed with PI-PLC was unable to form nuclear envelopes in the absence of MV1, despite containing (typically low levels of) PtdIns (11) and DAG, a requirement for MV1 may be understood on the basis of its lipid ratios. We propose that high levels of PtdIns, either in a membrane domain or in a separate set of vesicles, may provide sufficient DAG upon PtdIns hydrolysis to locally initiate membrane fusion. DAG, by virtue of its physical properties, facilitates the phase transition of the lipid bilayer from lamellar to hexagonal II. This type of phase transition induces a localized destabilization of the membrane structure, which in turn favors membrane fusion. That DAG can lead to membrane fusion is supported by several reports using synthetic membrane systems treated with PLCs (17,19,20).
The fatty acid chains of the PtdIns and DAG species may play an additional role. The new method for quantification of DAGs by LC-MS presented here permitted a level of detail and a sensitivity previously unattained in a natural membrane fusion system. That the PI-PLC preferentially produces diglycerides of long-chain, polyunsaturated fatty acid content is intriguing, since these are expected to have major effects on increased membrane fluidity and alteration of other structural properties that could facilitate or fine-tune membrane fusion processes.
Our current and previous work emphasizes a role for DAG that is distinct from the classical signaling pathways utilizing protein kinase C or other C1 domain-containing receptors, which typically involve low levels of DAG (Ref. 14 and Fig. 7). Although DAG can be generated in several ways, such as through PC-PLC hydrolysis of PtdCho or PLD pathways starting with PtdCho, it is worth noting that there are no known eukaryotic PC-PLCs. Moreover, we have neither been able to induce NE formation with bacterial PC-PLC nor to inhibit it with D609 (29), a compound that inhibits both PC-PLC and PLD activities (30). Furthermore, the molecular species composition of PtdIns changes during fusion, whereas PtdCho is identical before and after binding and hydrolysis (14). Since PI-PLC pretreatment of MV1 is sufficient to lead to NE formation in the absence of further inducers in the cell-free system, it is unlikely that other sources of DAG are necessary.
We additionally suggest an important role for lipid modification in biological membrane fusion reactions. We propose that at high DAG levels, alterations of structure of natural membranes in localized regions can affect fusion events.
Effects of Photodynamic Therapy on Tumor Metabolism and Oxygenation Revealed by Fluorescence and Phosphorescence Lifetime Imaging
This work was aimed at the complex analysis of the metabolic and oxygen statuses of tumors in vivo after photodynamic therapy (PDT). Studies were conducted on a mouse tumor model using two types of photosensitizers: the chlorin e6-based drug Photoditazine, predominantly targeted to the vasculature, and the genetically encoded photosensitizer KillerRed, targeted to the chromatin. Metabolism of tumor cells was assessed by the fluorescence lifetime of the metabolic redox cofactor NAD(P)H, using fluorescence lifetime imaging. Oxygen content was assessed using phosphorescence lifetime macro-imaging with an oxygen-sensitive probe. For visualization of the perfused microvasculature, optical coherence tomography-based angiography was used. It was found that PDT induces different alterations in cellular metabolism, depending on the degree of oxygen depletion. A moderate decrease in oxygen in the case of KillerRed was accompanied by an increase in the fraction of free NAD(P)H, an indicator of a glycolytic switch, early after the treatment. Severe hypoxia after PDT with Photoditazine, resulting from a vascular shutdown, yielded a persistent increase in the protein-bound (mitochondrial) fraction of NAD(P)H. These findings improve our understanding of the physiological mechanisms of PDT in cellular and vascular modes and can be useful for developing new approaches to monitoring its efficacy.
Introduction
Photodynamic therapy (PDT) is a tumor treatment modality based on the ability of photosensitive substances (photosensitizers) to generate, under local laser irradiation, reactive oxygen species that cause the death of tumor cells [1,2]. The anti-tumor effect of PDT is based on three mechanisms: (1) direct phototoxic damage to tumor cells; (2) vascular damage; and (3) activation of a non-specific immune response [3]. The relative contribution of each of them depends on many factors: the chemistry of the photosensitizer, its localization in the tumor, the degree of vascularization and the content of macrophages in the tumor, the time from the injection of the photosensitizer to irradiation, etc. The predominance of the cellular mechanism should be expected with a high concentration of the photosensitizer in tumor cells and a low concentration in the blood, which is typically achieved at a long drug-light interval. The vascular mechanism of PDT prevails when the photodynamic reactions caused by the sensitizer target the tumor vessels, leading to vascular stasis, thrombosis, hemorrhage, hypoxia and, as a consequence, death of tumor cells [4,5]. This is observed either with specific vascular-targeted photosensitizers or at short drug-light intervals with common photosensitizers. In practice, many PDT regimens suggest the realization of both cellular and vascular effects concurrently.
Although PDT is firmly established in clinical practice, some of its biological aspects remain poorly investigated, specifically its effects on tumor metabolism. A glycolytic phenotype is considered a factor of poor prognosis for patients undergoing PDT, and pharmacological inhibition of glycolysis increases its effectiveness [5]. At the same time, the hypoxia and oxidative stress induced by PDT in the tumor can promote the metabolic switch to glycolysis and the activation of molecular pathways (primarily the hypoxia-inducible factor-1, HIF-1) leading to the survival of more aggressive tumor cells [6,7]. It is assumed that the metabolic response of the tumor depends on the mechanisms of action of PDT and differs for drugs causing a direct cell kill or a vascular shutdown. For example, differences in tumor glucose uptake profiles between two PDT protocols have been identified by positron emission tomography with 18F-fluorodeoxyglucose; a rapid decrease in glucose uptake followed by a rapid recovery was observed in the case of the cellular mode, and a delayed decrease in glucose levels with recovery to significantly lower levels in the case of the vascular mode [8]. In cellular-targeted PDT, metabolic alterations are often associated with mitochondrial damage, which results in the reduction in adenosine triphosphate level and triggers mitochondrial production of reactive oxygen species (ROS), which in turn leads to apoptotic cell death [9,10]. Metabolomic data suggest that PDT affects various components of glycolysis and the citric acid cycle as well as metabolites involved in redox signaling. The metabolic processes that are dependent on mitochondria were downregulated, whereas the antioxidant response was activated after PDT with liposomal zinc phthalocyanine in vitro [11]. Metabolic transitions after vascular-targeted PDT are most likely due to blood flow stasis and hypoxia as well as nutrient deprivation. In any PDT mode, oxidative stress is induced by the generation of
free radicals, which is closely linked to the cellular metabolic profile [12]. However, the associations of metabolic reactions with the changes in tumor oxygenation and redox state are not well characterized. In vivo studies with parallel monitoring of tumor metabolism and oxygen in the course of PDT are especially lacking.
Modern optical techniques such as combined fluorescence and phosphorescence lifetime imaging (FLIM and PLIM, respectively) provide a unique opportunity to monitor cellular metabolism and tissue oxygenation non-invasively in tumor models in vivo. Probing of metabolism using FLIM relies on the recording of the endogenous fluorescence of the redox cofactors nicotinamide adenine dinucleotide (phosphate) NAD(P)H in the reduced state and flavin adenine dinucleotide FAD in the oxidized state, which act as electron donor and acceptor in reactions of energy metabolism [13]. The free form of NAD(P)H, associated with glycolysis, has a short fluorescence lifetime (~0.4 ns), while the protein-bound form, associated with mitochondrial oxidative phosphorylation, has longer lifetimes (1.7-3.5 ns) [14]. Thus, by extracting the relative contributions of the short and long components from the decay curve upon bi-exponential fitting, it is possible to draw conclusions about changes in the balance between glycolysis and oxidative metabolism. Unlike NAD(P)H, FAD fluorescence decay has a more difficult interpretation, and its fluorescence intensity is typically low in tumors, so it is rarely used as a metabolic indicator. Given the label-free principle of contrast acquisition and the high sensitivity and molecular specificity of NAD(P)H FLIM, it is considered a valuable research tool with great potential for clinical use [15].
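The bi-exponential decomposition described above can be sketched in code. The decay curve below is synthetic, and the lifetimes and amplitude are illustrative values in the literature range, not measurements from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Bi-exponential model of an NAD(P)H fluorescence decay:
#   I(t) = a1 * exp(-t / tau1) + a2 * exp(-t / tau2),  with a1 + a2 = 100 (%)
# a1 ~ free (glycolytic) fraction, a2 ~ protein-bound (mitochondrial) fraction.
def biexp(t, a1, tau1, tau2):
    return a1 * np.exp(-t / tau1) + (100.0 - a1) * np.exp(-t / tau2)

# Synthetic decay with literature-range lifetimes (free ~0.4 ns, bound ~2.5 ns)
t = np.linspace(0.05, 12.0, 256)          # time after excitation, ns
decay = biexp(t, 80.0, 0.4, 2.5)          # stand-in for a measured curve

# Fit the decay and report the amplitude ratio used as the metabolic index
(a1, tau1, tau2), _ = curve_fit(biexp, t, decay, p0=[50.0, 0.5, 2.0])
a2 = 100.0 - a1
print(f"a1/a2 = {a1 / a2:.2f} (tau1 = {tau1:.2f} ns, tau2 = {tau2:.2f} ns)")
```

In real FLIM data the decay is convolved with the instrument response function and photon noise, so fitting is done per pixel with appropriate weighting; the sketch above shows only the core model.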
PLIM allows for assessing the molecular oxygen content in a tumor using oxygen-sensitive phosphorescent probes. Bimolecular collisions of the probe with molecular oxygen shorten the probe's triplet lifetime and quench phosphorescence [16,17], so that the phosphorescence decay time of the probe decreases linearly with the increase in oxygen concentration, according to the Stern-Volmer equation. The typical phosphorescent probes are synthetic organic complexes with transition metals, such as Pt(II), Pd(II), Ru(II), and Ir(III). While numerous oxygen probes have been developed so far, only a few of them are suitable for in vivo applications. Depending on the location of the probe within tumor tissue, oxygen concentration can be assessed inside the blood vessels, in the interstitial space and/or inside the cells [18][19][20]. Due to the much longer phosphorescence decay time (µs to ms) compared to fluorescence, the measurements of oxygen can be combined with fluorescence imaging, including NAD(P)H FLIM [21,22].
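The Stern-Volmer relation above can be written as 1/τ = 1/τ0 + kq·[O2], and a minimal lifetime-to-oxygen conversion might look as follows. Here τ0 (zero-oxygen lifetime) and kq (quenching constant) are probe-specific calibration constants; the numbers below are hypothetical placeholders, not a published BTPDM1 calibration.

```python
# Stern-Volmer conversion of a phosphorescence lifetime to an oxygen level:
#   1/tau = 1/tau0 + kq * [O2]   =>   [O2] = (1/tau - 1/tau0) / kq
def oxygen_from_lifetime(tau_us: float, tau0_us: float = 6.5,
                         kq: float = 0.01) -> float:
    """Return oxygen level in calibration-defined units.

    tau0_us and kq are placeholder calibration constants for illustration.
    """
    if tau_us <= 0 or tau_us > tau0_us:
        raise ValueError("lifetime must lie in (0, tau0]")
    return (1.0 / tau_us - 1.0 / tau0_us) / kq

# A longer measured lifetime means less quenching, i.e. lower oxygen:
assert oxygen_from_lifetime(6.0) < oxygen_from_lifetime(4.0)
```

In practice, τ0 and kq are obtained by calibrating the probe against known oxygen concentrations under the relevant temperature and medium conditions.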
The purpose of this work was to investigate the relationships between the changes in oxygenation and metabolic activity of tumors in vivo induced by PDT in "cellular" and "vascular" modes of action. For "cellular" PDT, the genetically encoded photosensitizer KillerRed was used, and vascular-targeted PDT was carried out with the chlorin e6 derivative Photoditazine. Here, we implemented fluorescence lifetime imaging of NAD(P)H on a two-photon laser scanning microscope and phosphorescence lifetime imaging with the oxygen probe BTPDM1 on a one-photon confocal macroscanner [23]. Additionally, intravital imaging of the tumor microvasculature was performed after vascular PDT using optical coherence tomography-based angiography (OCA). The therapeutic efficacy of both PDT protocols was confirmed by the inhibition of tumor growth and histopathological alterations.
Metabolic Changes after PDT
Using NAD(P)H FLIM, data were obtained on changes in the metabolic status of tumor cells after PDT with genetically encoded photosensitizer KillerRed or Photoditazine.
At the first step of the study of the metabolic effects of "cellular" PDT with KillerRed, experiments on tumor spheroids were performed (Figure 1). Control spheroids were 400-500 µm in diameter and had a typical dense structure. They consisted of a thin outer layer of proliferating cells, a middle layer of quiescent cells and a necrotic core. PDT (50 mW/cm², 25 min, 75 J/cm²) caused alterations of spheroid morphology: the proportion of dead (trypan blue-stained) cells increased, and they were distributed across the whole spheroid; the spheroids became loosely packed and weakly adhered to the dish bottom. The photodynamic effects of KillerRed were accompanied by its photobleaching by ~50% at the regimen used, which is consistent with our previous results [24]. FLIM of NAD(P)H revealed an increased a1/a2 ratio in PDT-treated compared with untreated spheroids at 6-24 h post-PDT (4.28 ± 0.10 vs. 3.75 ± 0.27, p = 0.005), suggesting a glycolytic shift in cellular metabolism (Figure 1).
In mouse tumors, PDT with KillerRed resulted in an increased NAD(P)H a1/a2 ratio at early time points (3-6 h) compared to the control (4.38 ± 0.17 vs. 3.79 ± 0.05, p = 0.011), indicating that the treated tumors were more glycolytic (Figure 2A,B). At later times, 2-5 days, the NAD(P)H a1/a2 ratio in the treated group was statistically lower than in control. These results obtained on mouse tumors in vivo are consistent with the data obtained on tumor spheroids.

In the group of "vascular" PDT with Photoditazine, all tumors had a statistically reduced NAD(P)H a1/a2 ratio compared to untreated controls (3.52 ± 0.056 vs. 4.14 ± 0.018, p = 0.013) already 3 h after laser irradiation. Upon further observation during 5 days, the differences between the metabolism of control and treated tumors became more pronounced, mainly due to an increase in the free NAD(P)H (a1) pool in the control tumors (Figure 2C,D).
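Group comparisons like the one above (treated vs. control a1/a2 ratios) can be sketched with a two-sample test. The data below are synthetic values drawn to mimic the reported means and spreads, and the excerpt does not state which statistical test or sample sizes the authors actually used; Welch's t-test and n = 8 per group are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic a1/a2 ratios loosely mimicking the reported values (illustrative):
control = rng.normal(3.79, 0.05, size=8)   # untreated tumors
treated = rng.normal(4.38, 0.17, size=8)   # 3-6 h after PDT with KillerRed

# Welch's t-test (unequal variances) between the two groups
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

With group means this far apart relative to their spreads, the test reports a significant difference, consistent with the comparison described in the text.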
Analysis of the fluorescence lifetimes of NAD(P)H in tumor cells after PDT with either of the photosensitizers did not reveal statistically significant changes. The value of the short component (a1) corresponding to free NAD(P)H was ~0.4 ns and that of the long component (a2) corresponding to protein-bound NAD(P)H was ~2.5 ns, both in spheroids and in tumors in vivo (Table S1).
Therefore, our NAD(P)H FLIM study showed that PDT can cause different metabolic responses in the tumors in vivo, including both the elevation of the contribution from glycolytic, free, NAD(P)H pools and the increase in mitochondrial, bound NAD(P)H fraction.
Tumor Oxygenation after PDT
In order to assess the oxygenation status of the tumors after PDT, the macroscopic PLIM with phosphorescent oxygen probe BTPDM1 was performed on the same tumors as NAD(P)H FLIM.
Untreated KillerRed-expressing tumors had a BTPDM1 phosphorescence lifetime of ~3.86 µs. In naïve tumors (control for PDT with Photoditazine), the initial phosphorescence lifetime was ~4.96 µs, indicating their slightly worse oxygenation compared with the tumors expressing KillerRed. The differences in oxygen status between the control groups are likely due to the fact that they were examined at different time points after tumor inoculation (Day 13 for KillerRed-expressing tumors, Day 7 for naïve tumors).
In the case of PDT with KillerRed, the BTPDM1 phosphorescence lifetimes were statistically higher in the period from 3 h to 48 h, but during the first 24 h these changes were less pronounced than after PDT with Photoditazine and did not exceed 0.5 µs. The greatest difference in the oxygen status of treated and untreated tumors was recorded 48 h after PDT (5.46 ± 0.07 vs. 4.05 ± 0.05 µs, p = 0.003). At 5 days after PDT, the BTPDM1 phosphorescence lifetime was shorter than in control tumors (3.77 ± 0.07 vs. 4.19 ± 0.06 µs, p = 0.011), which indicated reoxygenation of the treated tumors (Figure 3A,B).

In the control groups, the a1/a2 ratio of NAD(P)H gradually increased during 5 days of tumor growth from ~4 to ~5.5, indicating a metabolic shift towards glycolysis.
It was found that in the time period from 3 to 6 h after PDT with Photoditazine, the phosphorescence lifetime of BTPDM1 in the tumors was significantly longer than in the untreated control (6.09 ± 0.21 µs vs. 4.96 ± 0.35 µs at 3 h, p = 0.001), indicating the development of hypoxia in the tumor. Then, at 24 h and 48 h after irradiation, a decrease in the lifetime of BTPDM1 was recorded, which may be associated with reoxygenation of the tumor tissue. However, 5 days after PDT, the oxygen level in the treated tumors was again lower than in control (Figure 3C,D).

Since the genetically encoded photosensitizer KillerRed is expressed by the tumor cells themselves and does not re-distribute within the tumor tissue, the reduced oxygenation detected by PLIM after PDT can be attributed exclusively to the consumption of oxygen for photodynamic reactions, at least at the early time points. PDT with Photoditazine in the regimen used causes vascular damage; therefore, it can be assumed that changes in the oxygen content could be due to both the oxygen consumption and the arrest of oxygen supply to the tumor cells.

Antivascular Effects of PDT with Photoditazine

Dynamic observation of the tumor vascular response to PDT with Photoditazine was carried out using OCA in vivo in parallel with oxygen mapping by PLIM (Figure 4). Before PDT, all tumors were characterized by a dense vascular network consisting of thin, tortuous vessels. In the control group, throughout the entire observation period, the structure of the vascular network and its density did not change. Immediately after PDT, only local vessel reactions were observed in some of the tumors. At 6 h after PDT, in three out of eight animals there were no visible vessels on OCA images; in the remaining five tumors, the density of the vascular network significantly decreased to values close to 0. After 24 h and 48 h, blood vessels were not visualized on OCA images in any tumor. By the 5th day after PDT, perfused vessels appeared at the edges of some tumors, probably due to their re-growth from the peri-tumorous tissue.
Therefore, monitoring of the vascular response of the CT26 tumors to PDT with Photoditazine revealed a complete stop of blood flow at 6-48 h after PDT, which manifested as the absence of visible blood vessels on the OCA images (p < 10⁻⁵ vs. control) and suggested an irreversible, strong vessel reaction.

Since the measurements of oxygen and microvasculature were performed on the same individual tumors, we attempted to correlate these variables with each other. Plotting oxygen content (BTPDM1 phosphorescence lifetime) against perfused vessel density showed no association for either untreated (r = 0.2892) or PDT-treated tumors (r = 0.1204) (Figure S1). Both well- and poorly-vascularized tumors could be equally oxygenated. This suggests that factors other than vessel density determine oxygen concentration in the tissue, at least within the tumor growth stage included in the study.

Verification of Anti-Tumor Effects of PDT

Fluorescence intensity imaging of tumors in vivo was performed before and after laser irradiation to assess photobleaching of the photosensitizers, an indirect indicator of treatment efficacy (Figure 5A). The irradiation regimens used for PDT caused a ~90% decrease in fluorescence intensity in the case of Photoditazine and ~40% in the case of KillerRed (Figure 5B,C). These fluorescence measurements showed that PDT with Photoditazine is likely more efficient in terms of ROS generation, which allowed us to optimize PDT dosimetry, specifically the number of irradiation procedures.

The therapeutic effects of PDT with Photoditazine or KillerRed on the CT26 mouse tumors were confirmed by inhibition of tumor growth and pathomorphological disorders (Figure 5D,E). However, to achieve these effects with KillerRed, multiple (×5) irradiations of the tumors were required, with a rather high light dose compared with Photoditazine.
It was shown that after PDT with Photoditazine, tumors inhibited their growth starting from the 13th day of growth (6 days after PDT, p = 0.01 vs. control). Once the treated tumors reached a volume of 75-80 mm³ on the 11th day of growth, their sizes did not change throughout the entire observation period, until the 19th day. In contrast, the untreated tumors grew actively, and their size increased from ~50 mm³ on the 7th day to ~110 mm³ on the 15th day after inoculation (Figure 5D).
Analysis of tumor growth after PDT with the genetically encoded photosensitizer KillerRed showed that it led to inhibition of tumor growth starting from the 17th day (4 days after PDT).On the 19th day, the differences between the treated and untreated tumor sizes were statistically significant (p = 0.002) (Figure 5D).
Histological analysis showed that control CT26 tumors, both naïve and expressing KillerRed, had typical structure with high mitotic activity, and the content of viable cells was 90-100% (Figure 5E).The cells had round or oval large nuclei, predominantly with a diffuse distribution of chromatin and 1-2 nucleoli.Dystrophic changes in cells and apoptosis were rare.The areas of spontaneous necrosis did not exceed 5%.
PDT with Photoditazine resulted in massive necrosis of the tumor tissue (up to 70-80% of the tumor area) and a pronounced vascular reaction with hemorrhages and hemolysis, observed 5 days after PDT. The cellular component of the tumor was sparse; the boundaries of the tumor cells were blurred and difficult to identify. Tumor cells in the viable part of the tissue were characterized by pronounced polymorphism, nuclear edema, and loss of cell membrane integrity. Similarly, after PDT with KillerRed, total destruction of the tumor tissue and massive necrosis were revealed. Single viable cells showed serious dystrophic changes in the form of disrupted membrane integrity, chromatin condensation, cellular edema, and blurring of cell boundaries (Figure 5E).
Therefore, both PDT regimens were effective in the CT26 tumors in mice.
Discussion
Metabolic reorganization in tumors as an effect of PDT has attracted increasing attention in the past decade, but our knowledge of this aspect of PDT remains very limited. Here, we attempted to identify the relationships between energy metabolism, the level of oxygenation, and the outcome of PDT in vivo using the following optical imaging approaches: (1) two-photon fluorescence lifetime microscopy of NAD(P)H to monitor the metabolic status of cells within tumors, (2) macroscopic PLIM with an oxygen-sensitive probe to monitor the oxygen status of the tumors, and (3) OCT-angiography to verify the effects of PDT on blood perfusion in the case of the vascular-targeted mode. A comprehensive in vivo study on the mouse tumor model CT26 was performed for two PDT modalities: cellular, using the genetically encoded photosensitizer KillerRed, and vascular, using the chlorin e6-based photosensitizer Photoditazine.
Since oxygen is directly involved in the photochemical reactions of PDT, an evaluation of the initial oxygen status of the tumor is essential for the effective implementation of the treatment [25]. On the other hand, PDT leads to a depletion of oxygen in the tumor, which may have unfavorable consequences, such as activation of angiogenic pathways and survival of the most aggressive populations of tumor cells that had adapted to a hypoxic environment [7]. Therefore, evaluating the oxygen distribution in the tumor may help in the optimization of treatment protocols. It is known that hypoxia following PDT can arise either from the consumption of molecular oxygen directly in the photochemical reaction with the photosensitizer, or from damage to the microvasculature resulting in a significant decrease in blood flow, or both, depending on the PDT regimen [26]. As anticipated, both the cellular and vascular PDT modalities caused a decrease in the oxygen content of tumor cells early (within 3 h) after therapy; however, in the case of vascular PDT with Photoditazine, hypoxia was more pronounced (Figure 3). The development of hypoxia as a result of PDT with KillerRed is associated exclusively with the consumption of oxygen in photodynamic reactions, while hypoxia after PDT with Photoditazine could result from both oxygen consumption and vascular shutdown. Further changes in oxygen status differed: in the case of cellular PDT, the development of hypoxia (possibly due to an increase in oxygen consumption by cells) was followed by reoxygenation; in the case of vascular PDT, reoxygenation preceded secondary hypoxia resulting from irreversible vascular damage (Figure 3).
To monitor the oxygen status of the tumors, we used PLIM with the phosphorescent oxygen probe BTPDM1. According to Yoshihara et al., BTPDM1 has a high cellular uptake efficiency in cultured cells and re-localizes from the blood to the tumor tissue within a short period after intravenous injection [27]. The good cell- and tissue-penetrating ability of BTPDM1 allowed us to assess tissue oxygenation with this probe upon its local injection directly into the tumor. Previously, PLIM had been used for oxygen measurements in only a few works related to PDT. Kalinina et al. presented a PLIM-FLIM study of oxygen consumption and the cellular metabolic state during PDT with the TLD1433 agent, which is simultaneously a photosensitizer and a phosphorescent oxygen probe. Using two-photon PLIM-FLIM microscopy of human urinary bladder carcinoma T24 cells in vitro, they showed an elongation of phosphorescence lifetimes after PDT, an indication of low oxygen concentration, and a shortening of the fluorescence lifetime of NAD(P)H, an indication of a glycolytic shift [28]. Their result corroborates our observation of lower oxygen and a greater free NAD(P)H fraction after PDT with KillerRed. In the study by Stepinac et al., a porphyrin dye, PdTCPP, was used simultaneously as an oxygen sensor and a photosensitizer in vivo on the optic disc of piglets [29]. Photoirradiation induced alterations of the vascular endothelium and an increase in phosphorescence lifetime (i.e., depletion of O₂).
Vascular effects of PDT with chlorin e6-based photosensitizers are well documented. For example, Dong et al. performed hemodynamic monitoring of chlorin e6-mediated PDT in mice with EMT-6 mammary tumors using diffuse optical spectroscopy; the authors observed a decrease in relative blood flow and tissue oxygenation in responders starting from 3 h post-PDT, without recovery up until 48 h [30]. Saito et al. analyzed vascular changes after PDT with mono-L-aspartyl chlorin e6 in fibrosarcoma-bearing mice and found relatively marked vascular degeneration and blood stasis 4 h after irradiation at 10 min and 2 h drug-light intervals [31]. An in vivo study by Kirillin et al. using optical coherence angiography in mice with CT26 tumors demonstrated a vascular response to PDT after intravenous injection of the chlorin e6-based photosensitizer Photolon, but not with topical application of Revixan [32]. In accordance with our PLIM results, a rapid decrease in blood oxygen saturation was revealed using diffuse optical spectroscopy in CT26 mouse tumors upon PDT with Photolon, which was explained by blood flow arrest [33]. Our previous studies using Photoditazine showed early (within 24 h post-PDT) microvascular damage in CT26 tumors, which was detected by optical coherence angiography [34]. Notably, in non-responders, unlike responders, the blood flow partially recovered 24 h post-PDT. In a chorioallantoic membrane model, Buzza et al. observed more pronounced vascular effects of PDT with Photodithazine compared with the porphyrin-based compound Photogem [35]. At the same time, Photoditazine is taken up by cancer cells in vitro [36] and in vivo (at longer accumulation times) [37], so direct cytotoxic effects can also be induced. However, the "cellular" mode of PDT with Photoditazine was out of the scope of this study and will be examined further.
Previous studies by our and other groups provided evidence that KillerRed is capable of inducing oxidative damage to tumor cells in different models: monolayer cultures [38], multicellular spheroids [23], and tumor xenografts [39]. However, compared with traditional chemical photosensitizers, it has lower phototoxicity and thus requires higher light doses and multiple irradiations, especially in vivo, which, along with the need for gene delivery, makes the prospects for clinical use of KillerRed, at least as a monotherapy, rather vague in the near future. Nevertheless, taking into account its unique advantages, KillerRed is considered a promising tool for research on cellular responses to PDT [40]. Its red-shifted fluorescence emission makes it possible to perform combined imaging with the endogenous fluorescence of the metabolic cofactors NAD(P)H and flavins and, therefore, to gain insight into the metabolic mechanisms of cell-targeted PDT. This opportunity was first demonstrated by Lin et al. [41]. The authors assessed the autofluorescence intensity of NAD(P)H and flavins in tumor cryosections and showed that NAD(P)H and flavoproteins were oxidized in the course of KillerRed-based PDT. Our study is, therefore, the first to exploit NAD(P)H fluorescence in time-resolved mode to monitor metabolic changes in response to PDT with KillerRed in vivo.
In our study, different metabolic responses were observed after PDT with Photoditazine (Type II, vascular mode) and KillerRed (Type I, cellular mode). In the case of Photoditazine, the ratio of the free to protein-bound NAD(P)H forms was stably lower than in the control, which usually indicates a shift toward oxidative metabolism. In line with our findings, Broekgaarden et al. observed an increased FAD/(NADH + FAD) optical redox ratio in a 3D culture model of pancreatic cancer after PDT with a benzoporphyrin derivative, which the authors attributed to severe oxidative stress [42]. In contrast, PDT with KillerRed resulted in an increased free/bound NAD(P)H ratio early (1-6 h) after PDT, indicating a more glycolytic tumor state. Later on, the ratio did not change, but it was statistically lower than in the control tumors, which became more glycolytic during natural growth.
Given the development of hypoxia in the tumor tissue after both PDT regimens, as followed from the concurrent PLIM measurements, a shift toward oxidative metabolism after vascular PDT with Photoditazine was quite an unexpected result. However, we noticed that oxygen depletion after this regimen was more marked than after PDT with KillerRed, which can explain the differences in the metabolic responses. Decreasing oxygen tension elicits different alterations in cellular metabolism and redox state depending on the severity and duration of hypoxia [43]. Upon acute or mild hypoxia, metabolic adaptation takes place: HIF-1α accumulates in the cytoplasm, translocates to the nucleus, and promotes the expression of various metabolism-related genes and the activation of vascular endothelial growth factor (VEGF), thus accelerating anaerobic glycolysis and angiogenesis [44,45]. Chronic or severe hypoxia elevates ROS generation in the mitochondrial electron transport chain, which causes oxidative stress [46]. At the same time, reducing equivalents (mostly NADH and FADH₂) in the mitochondria are elevated owing to the slowing of electron transport, the consequent reduction in the rate of NADH oxidation, and changes in the composition of the ETC complexes, specifically reduced complex I activity [47]. Therefore, it is possible that the increased protein-bound NAD(P)H fraction after PDT with Photoditazine is a result of a change in the reduced:oxidized (NADH:NAD⁺) ratio in the mitochondria. The effects of PDT on tumor metabolism and oxygenation are summarized in Table 1.
A general limitation of the NAD(P)H FLIM approach is that it cannot report on the specific metabolic pathways underlying the changes in fluorescence decay parameters. Thus, additional studies using biochemical and molecular assays are needed to uncover the mechanisms of the changes in the optical metabolic metrics upon PDT.
To generate spheroids, the cells were seeded in 96-well ultra-low-attachment round-bottom plates at 100 cells in 200 µL of medium. The formation of spheroids with a size of ~300 µm was confirmed after 7 days using light microscopy.
For PDT and subsequent FLIM microscopy, the 7-day spheroids were gently transferred onto glass-bottom dishes (8-10 spheroids per dish) in DMEM medium without phenol red. PDT was performed with an MGL-III-593 laser (CNI, China) at a wavelength of 593 nm. The intensity was 50 mW/cm², the exposure time was 25 min, and the light dose was 75 J/cm². Non-irradiated spheroids served as controls. The experiment was repeated two times, showing reproducible results.
Animal Tumor Model
All the protocols related to experiments on animals were approved by the institutional review board of the Privolzhsky Research Medical University.
The study was carried out on female Balb/c mice weighing 20-22 g, with CT26 or CT26-KillerRed tumors intradermally grafted into the ear (Figure 6). The CT26 cells and the CT26 cells stably expressing KillerRed-H2B were cultured according to a standard protocol in a CO₂ incubator (37 °C, 5% CO₂, humidified atmosphere) in DMEM (Life Technologies, Carlsbad, CA, USA) supplemented with glutamine, penicillin, streptomycin, and 10% FBS (HyClone, Logan, UT, USA). For inoculation in mice, the cells were suspended in phosphate-buffered saline (PBS) at a concentration of 1 × 10⁶ cells/mL and injected intradermally into the ear in the amount of 20 × 10³ cells in 20 µL of PBS.
Tumor sizes were measured in two dimensions with a caliper every 2-3 days, starting from day 7 after tumor cell inoculation, and the volume was calculated using the formula V = a × b²/2, where a is the length and b is the width of the tumor.
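The caliper-based volume estimate can be sketched in a few lines. The helper below assumes the standard modified-ellipsoid convention V = a × b²/2, with b taken as the shorter caliper dimension; the function name and argument order are illustrative, not from the original pipeline.

```python
def tumor_volume(length_mm: float, width_mm: float) -> float:
    """Modified-ellipsoid tumor volume estimate, V = a * b^2 / 2 (mm^3).

    Assumes a (the longer dimension) multiplies the square of b (the shorter
    one); the max/min guard makes the result independent of argument order.
    """
    a = max(length_mm, width_mm)
    b = min(length_mm, width_mm)
    return a * b * b / 2.0
```

For example, a 4 × 2 mm tumor gives 4 × 2² / 2 = 8 mm³ regardless of which dimension is passed first.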
Photoditazine is the N-methyl glucosamine salt of chlorin e6. It is a clinically approved drug used for PDT of malignant tumors of different origins [48]. Its absorption spectrum has a large band around 400 nm and another band in the red region around 650 nm. The fluorescence emission maximum is at a wavelength of 662 nm. Photoditazine is supposed to act predominantly via Type II photoreactions, that is, it generates singlet oxygen (quantum yield of ~0.56) [49]. After intravenous injection, Photoditazine in a tumor targets blood vessel walls and intracellular membrane structures such as the endoplasmic reticulum and the Golgi apparatus [50].
PDT with Photoditazine was implemented on day 7 of tumor growth, when the tumor size was ~3-4 mm³ (Figure 6A). Photoditazine was injected into the tail vein at a dose of 5 mg/kg, and PDT was carried out 15 min after the injection. The tumors were irradiated with a continuous diode laser (Atkus, St. Petersburg, Russia) operating at a wavelength of 659 nm. The intensity, exposure time, and light dose were 120 mW/cm², 12 min, and 86 J/cm², respectively. The predominantly vascular response to PDT with Photoditazine within a 1 h drug-light interval was demonstrated earlier [34]. The laser power was controlled before each irradiation using a PM100A power meter (Thorlabs, Bergkirchen, Germany).
KillerRed is a dimeric fluorescent protein (excitation maximum 585 nm, emission maximum 610 nm) of the green fluorescent protein family with notable phototoxicity. Upon irradiation with yellow light, it generates ROS (presumably superoxide and hydrogen peroxide) in a Type I photodynamic reaction [38]. The key structural features responsible for its unique phototoxic properties are the water-filled channel reaching the chromophore area from the end cap of the β-barrel and the presence of Glu68 and Ser119 residues adjacent to the chromophore [51]. Being expressed by tumor cells transfected with the gene encoding this protein, KillerRed represents a genetically encoded photosensitizer with exceptional selectivity. The fully genetically encoded nature of KillerRed makes it completely different from chemical photosensitizers in terms of the mechanisms of drug delivery and localization. Among the different intracellular targets of KillerRed, fusion with histone H2B showed the most pronounced cytotoxic effects in vitro, as it interfered with cell division [38].
PDT of the KillerRed-expressing tumors started on day 9 of tumor growth, when the tumors reached a size of 3-4 mm³. Note that tumors expressing KillerRed grew slightly more slowly than tumors generated from their parental cell line CT26. The tumors were exposed to continuous MGL-III-593 laser (CNI, Qingdao, China) irradiation at a wavelength of 593 nm. The intensity of the laser light was 170 mW/cm². The exposure time was 30 min, and the light dose was 306 J/cm². PDT was performed once a day for 5 days. The design of the experiment is presented in Figure 6. When selecting the treatment mode, we relied on our previous experience with PDT with KillerRed [39].
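As a quick consistency check on the irradiation regimens, the delivered light dose for continuous irradiation is simply intensity multiplied by exposure time. A one-line helper (illustrative, not part of the original workflow) reproduces the reported values: 170 mW/cm² for 30 min gives 306 J/cm², 50 mW/cm² for 25 min gives 75 J/cm², and 120 mW/cm² for 12 min gives 86.4 J/cm² (~86, as quoted).

```python
def light_dose_j_per_cm2(intensity_mw_cm2: float, minutes: float) -> float:
    """Fluence (J/cm^2) delivered by continuous irradiation:
    dose = intensity (W/cm^2) * time (s)."""
    return intensity_mw_cm2 * 1e-3 * minutes * 60.0
```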
Tumors that contained a photosensitizer, Photoditazine or KillerRed, but had not been irradiated served as controls.
In Vivo Fluorescence Imaging
To confirm the accumulation/expression of the photosensitizers in the tumors and their photobleaching after PDT, fluorescence was recorded in vivo using an IVIS-Spectrum system (Caliper Life Sciences, Hopkinton, MA, USA). Fluorescence of KillerRed was excited at 570 nm (bandwidth 30 nm) and detected at 620 nm (bandwidth 20 nm). Fluorescence of Photoditazine was excited at 640 nm (bandwidth 30 nm) and detected at 720 nm (bandwidth 20 nm). During in vivo imaging, the mice were anesthetized with 2.5% isoflurane. Images were acquired before and immediately after PDT and analyzed using the Living Image 3.2 software (Caliper Life Sciences, Hopkinton, MA, USA). The tumors were selected as regions of interest (ROIs) to calculate the average radiant efficiency ((p/s/cm²/sr)/(µW/cm²)).
FLIM of NAD(P)H
FLIM was performed on a two-photon laser scanning microscope, LSM 880 (Carl Zeiss, Jena, Germany), equipped with a time-correlated single photon counting (TCSPC) module for time resolution (hybrid detector HPM-100-40; single-photon counting card SPC-150; Becker & Hickl GmbH, Berlin, Germany). Two-photon fluorescence of NAD(P)H was excited at a wavelength of 750 nm with a Ti:Sa femtosecond laser, MaiTai HP (Spectra-Physics Inc., Milpitas, CA, USA), and detected in the range of 450-490 nm. Images were acquired using a C-Apochromat 40×/1.3 oil immersion objective. The laser power was ~6 mW. The image collection time was 60 s. To obtain a reasonable accuracy of the fluorescence lifetime evaluation, the number of photons per decay curve was adjusted to be not less than 5000 using the binning option when necessary.
In the spheroids, fluorescence of NAD(P)H was recorded at 6 h and 24 h post-PDT. During image acquisition, the spheroids were maintained in a stage-top incubator at 37 °C and 5% CO₂.
The metabolic status of the tumor cells in vivo was assessed 3, 6, 24, and 48 h and 5 days after PDT by the fluorescence lifetime of the metabolic cofactor NAD(P)H. Four to six images were obtained from each tumor at each time point. To acquire the images, mice were anesthetized with an injection of Zoletil (40 mg/kg, 50 µL; Virbac SA, Carros, France) and Rometar (10 mg/kg, 10 µL; Spofa, Prague, Czech Republic), placed on a glass coverslip with the ear fixed by medical tape, and mounted on the microscope stage.
The NAD(P)H fluorescence decays were fitted with a bi-exponential function, from which the short and long lifetimes (τ₁, τ₂) and their relative contributions (a₁ and a₂, respectively, where a₁ + a₂ = 100%) were estimated in the SPCImage 8.2 software (Becker & Hickl GmbH, Berlin, Germany). The goodness of fit, chi-square, was 0.8 to 1.2. NAD(P)H fluorescence was analyzed in the cytoplasm of each individual cell, which was selected as an ROI. A total of 20-40 cells from each tumor were analyzed at each time point.
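The bi-exponential analysis described above can be prototyped with SciPy. This is a hedged sketch, not a reproduction of the SPCImage algorithm (which also accounts for the instrument response function): it fits I(t) = a₁·exp(−t/τ₁) + a₂·exp(−t/τ₂) and reports the contributions normalized so that a₁ + a₂ = 100%, with τ₁ < τ₂ by convention (free vs. protein-bound NAD(P)H).

```python
import numpy as np
from scipy.optimize import curve_fit


def biexp(t, a1, tau1, a2, tau2):
    """Bi-exponential fluorescence decay model."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)


def fit_nadh_decay(t, counts, p0=(0.5, 0.5, 0.5, 2.0)):
    """Fit a bi-exponential decay; return (a1_pct, tau1, a2_pct, tau2).

    The components are reordered so that tau1 is the short (free NAD(P)H)
    lifetime and the amplitudes are expressed as percentages of their sum.
    """
    (a1, tau1, a2, tau2), _ = curve_fit(biexp, t, counts, p0=p0, maxfev=10000)
    if tau1 > tau2:  # enforce tau1 = short component
        a1, tau1, a2, tau2 = a2, tau2, a1, tau1
    total = a1 + a2
    return 100.0 * a1 / total, tau1, 100.0 * a2 / total, tau2
```

On a synthetic noiseless decay with τ₁ = 0.4 ns, τ₂ = 2.5 ns, and a 70/30 amplitude split, the fit recovers the input parameters; real photon-counting data would additionally require noise weighting and IRF deconvolution.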
Macroscopic PLIM
PLIM of whole mouse tumors in vivo was performed using a confocal FLIM/PLIM macroscanner (Becker & Hickl GmbH, Berlin, Germany), which allows for obtaining time-resolved images from a field of view of up to 18 × 18 mm with a spatial resolution of around 15 µm [23]. The phosphorescent molecular probe BTPDM1, based on an iridium(III) complex with benzothienylpyridine containing a cationic dimethylamino group, was used for oxygen sensing [27]. Phosphorescence of BTPDM1 was excited in one-photon mode at a wavelength of 488 nm using a BDL-488-SMC picosecond laser (Becker & Hickl, Berlin, Germany) and detected in the range of 608-682 nm. The laser power incident on the sample was 20 µW. The photon collection time was ~90 s. The number of photons per decay curve was at least 5000. The BTPDM1 solution (12 µM) was injected into the tumor locally, with 2-3 injections of 30-50 µL, according to the previously developed protocol [52]. Measurements were carried out 30 min after the injections. Images of the tumors were taken 3, 6, 24, and 48 h and 5 days after PDT.
The phosphorescence lifetime of BTPDM1 in the tumors was assessed using the SPCImage 8.2 software (Becker & Hickl GmbH, Berlin, Germany). The decay curves were fitted with a mono-exponential function, and the average phosphorescence lifetime across each tumor was determined.
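For a mono-exponential decay, a simple alternative to iterative fitting is a log-linear least-squares estimate of the lifetime. The sketch below is illustrative (not the SPCImage procedure): it regresses log(counts) on time and returns τ from the slope, ignoring low-count bins where the log transform is unreliable.

```python
import numpy as np


def monoexp_lifetime(t, counts, threshold=1.0):
    """Estimate tau of I(t) = I0 * exp(-t / tau) by linear regression of
    log(counts) on t; bins with counts <= `threshold` are ignored."""
    t = np.asarray(t, dtype=float)
    counts = np.asarray(counts, dtype=float)
    mask = counts > threshold
    slope, _intercept = np.polyfit(t[mask], np.log(counts[mask]), 1)
    return -1.0 / slope
```

This log-linear shortcut is exact for noiseless single-exponential data; with Poisson photon-counting noise, weighted or maximum-likelihood fitting is preferable.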
OCT-Angiography
In the experiments on vascular PDT, the state of the microvasculature in the tumors was analyzed using optical coherence tomography (OCT)-based angiography (OCA). The principle of vascular network imaging is based on determining the temporal variability of the amplitude and phase of the OCT signal in a series of OCT images of the same tissue area. OCA makes it possible to visualize the perfused blood vessels with a transverse spatial resolution of ~15 µm and a depth resolution of ~10 µm from a depth of up to ~1.5 mm. The studies were carried out on a spectral multimodal OCT system (BioMedTech, Nizhny Novgorod, Russia) with a central wavelength of 1310 nm and a radiation power of 20 mW; the size of the resulting OCT image was 2.4 × 2.4 mm, and the scanning speed was 20,000 A-scans/s [53]. The OCA images were presented in the form of a maximum signal intensity projection: an en face image of the vascular network from the entire visualization depth. Using OCA, the structure of the vasculature of the CT26 tumors in mice in vivo was visualized before and immediately (0 h), 6, 24, and 48 h, and 5 days after PDT.
The perfused vessel density (PVD) was calculated in the custom software Anaconda 4.3.1 (Institute of Applied Physics, Nizhny Novgorod, Russia; Python 3.6, Python Software Foundation, Beaverton, OR, USA) as the number of pixels of all vessel skeletons in the analyzed image area divided by the total number of pixels in this area, as described in Ref. [54].
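The PVD metric reduces to a pixel ratio once the vessel skeletons are available. A minimal NumPy sketch, assuming the binarized skeleton map has already been extracted upstream (the actual pipeline is described in Ref. [54]):

```python
import numpy as np


def perfused_vessel_density(skeleton, roi=None):
    """PVD = (skeleton pixels) / (total pixels), optionally within an ROI.

    `skeleton` is a binary map of the thinned vessel network; `roi`, if
    given, is a binary mask restricting the analyzed area.
    """
    skeleton = np.asarray(skeleton).astype(bool)
    if roi is None:
        return skeleton.sum() / skeleton.size
    roi = np.asarray(roi).astype(bool)
    return (skeleton & roi).sum() / roi.sum()
```

Restricting the computation to a tumor ROI mask avoids diluting the metric with surrounding avascular background.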
Histopathology
For histological analysis, tumors were taken on either the 12th (CT26) or the 18th (CT26-KillerRed) day of growth (5 days after PDT). Paraffin sections 7 µm thick were stained with hematoxylin and eosin (H&E) and examined by light microscopy on a Leica DM1000 system under 40× magnification. The histopathological examination included visual assessment of tumor blood vessel damage, necrotic areas, and cellular morphology.
Statistical Analysis
Data analysis was carried out in the STATISTICA 10.0 software (StatSoft GmbH, Hamburg, Germany). Multiple comparisons were made using ANOVA with the Bonferroni correction. Differences were considered statistically significant at p < 0.05. The results are presented as the mean ± standard deviation (SD) or standard error of the mean (SEM).
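The multiple-comparison scheme (pairwise tests with Bonferroni correction) can be sketched with SciPy. This is an illustrative stand-in for the STATISTICA workflow, not a reproduction of it; the group names and data below are hypothetical.

```python
from itertools import combinations

from scipy.stats import ttest_ind


def pairwise_bonferroni(groups, alpha=0.05):
    """Pairwise two-sample t-tests across named groups with Bonferroni
    correction: adjusted p = min(1, p * number_of_comparisons).

    `groups` maps group name -> list of measurements; returns a dict
    mapping each pair to (adjusted p-value, significant-at-alpha flag).
    """
    pairs = list(combinations(groups, 2))
    m = len(pairs)
    results = {}
    for g1, g2 in pairs:
        p = ttest_ind(groups[g1], groups[g2]).pvalue
        p_adj = min(1.0, p * m)
        results[(g1, g2)] = (p_adj, p_adj < alpha)
    return results
```

The Bonferroni adjustment is conservative; for many groups, Holm or Tukey HSD procedures retain more power while still controlling the family-wise error rate.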
Conclusions
Although PDT has proven to be a promising treatment option for cancer, there are still considerable differences in treatment outcomes. More information on the underlying physiology is needed to develop new strategies for its improvement. Pre-existing hypoxia and the associated glycolytic status of tumors, as well as an irregular vascular supply, are the major determinants of resistance to PDT. At the same time, both the oxygenation and metabolic states are affected by PDT, which can promote acquired resistance. On the other hand, these dynamic transient changes can serve to monitor the efficacy of the treatment and report on the mechanisms of action of photosensitizers. With recent achievements in optical bioimaging, non-invasive monitoring of cellular metabolism, oxygen distribution, and vascularization has become possible in living mice. In this study, we used a combination of FLIM microscopy, macro-PLIM, and OCA to investigate the influence of PDT on these parameters in a mouse tumor model. We observed that the different photosensitizers (KillerRed and Photoditazine), used in the cellular and vascular modes, respectively, produced markedly different metabolic changes, presumably due to the different degrees of PDT-induced hypoxia. The results presented in this work are of interest for the search for predictive markers of the effectiveness of therapy and for monitoring the early response of the tumor to treatment.
Figure 1.
Figure 1. Effects of PDT with KillerRed on multicellular tumor spheroids. (A) Changes in spheroid structure and cell viability in 24 h and in fluorescence intensity immediately after PDT. (B) FLIM microscopy of NAD(P)H in control and PDT-treated spheroids at the 6 h and 24 h time points. (C) Quantification of the NAD(P)H a₁/a₂ value in the spheroids' cells. *: statistically significant differences with control at the same time point (p ≤ 0.05). Mean ± SD, n = 4-5 spheroids, 30-40 cells in each. NAD(P)H FLIM measurements were performed only in viable cells within spheroids. Scale bar 200 µm.
Figure 2.
Figure 2. In vivo study of the metabolic status of mouse tumors after PDT using NAD(P)H FLIM. Representative microscopic FLIM images of control tumors and tumors after PDT with the genetically encoded photosensitizer KillerRed located in the cell nuclei (A) or Photoditazine (C). The ratios of the free to protein-bound forms of NAD(P)H, a₁/a₂, are shown. Time after treatment is indicated above the images. Scale bar 100 µm. Quantification of the NAD(P)H a₁/a₂ value (B,D). *: statistically significant differences with control at the same time point (p ≤ 0.05). Mean ± SEM, n = 4-10 tumors.
Figure 3.
Figure 3. In vivo assessment of the oxygen status of mouse tumors after PDT using PLIM. Representative macro-PLIM images of control tumors and tumors after PDT with the genetically encoded photosensitizer KillerRed located in the cell nuclei (A) or Photoditazine (C). Time after treatment is indicated above the images. Scale bar 1.5 mm. Note that the images in (A) and (C) are shown on different scales. Phosphorescence lifetimes of the BTPDM1 oxygen-sensitive probe after PDT (B,D). *: statistically significant differences with control at the same time point (p ≤ 0.05). Mean ± SEM, n = 3-4 tumors.
Figure 4.
Figure 4. In vivo imaging of the perfused blood vessels in mouse tumors after PDT with Photoditazine using OCT-based angiography. (A) Representative OCA images of the vascular network in the control or treated tumors at the indicated time points after PDT. A maximum intensity projection 2D display represents 3D data to a depth of 1.3 mm. Bar is 1 mm, applicable to all images. (B) Quantification of the perfused vessel density in the control and treated tumors. Mean ± SEM, n = 5-8 tumors. *: statistically significant differences with control at the same time point (p ≤ 0.05).
Figure 5.
Figure 5. The effects of PDT on the fluorescence, growth rate, and histopathological structure of the CT26 and CT26-KillerRed tumors. (A) Photobleaching of the photosensitizers after PDT. In vivo fluorescence intensity images of tumors are shown. Scale bar 5 mm. Quantification of the fluorescence intensity in the tumors after PDT with Photoditazine (PDZ) (B) or KillerRed (C). Mean ± SD, n = 4-5 tumors. For KillerRed, the fluorescence intensity before and after the first irradiation procedure is shown. *: statistically significant differences with control before irradiation (p ≤ 0.05). (D) Monitoring of tumor volume in the control and treated groups. Mean ± SEM, n = 4-10 tumors.
Figure 6.
Figure 6. Design of the in vivo study. Schematic overview of the experiments on PDT with Photoditazine (A) and KillerRed (C). Day 0 is the day of inoculation of the CT26 or CT26-KillerRed tumor cells. Photoditazine (PDZ) was injected intravenously (i.v.) in mice with CT26 tumors on Day 7. Laser irradiations of tumors are indicated by red "lightning" signs. Investigations using OCT-MA, NAD(P)H FLIM microscopy, macro-PLIM, and histopathology with H&E are shown by arrows. (B) Photograph of the ear tumor model before PDT.
Table 1.
Table 1. Summary of the effects of PDT on tumor metabolism and oxygenation.
Elliptic gradient estimates for a nonlinear heat equation and applications
In this paper, we study elliptic gradient estimates for a nonlinear $f$-heat equation, which is related to the gradient Ricci soliton and the weighted log-Sobolev constant of smooth metric measure spaces. Precisely, we obtain Hamilton's and Souplet-Zhang's gradient estimates for positive solutions to the nonlinear $f$-heat equation only assuming the Bakry-Émery Ricci tensor is bounded below. As applications, we prove parabolic Liouville properties for some kind of ancient solutions to the nonlinear $f$-heat equation. Some special cases are also discussed.
1.1. Background. This is a sequel to our previous work [26]. In that paper we proved elliptic gradient estimates for positive solutions to the $f$-heat equation on smooth metric measure spaces with only the Bakry-Émery Ricci tensor bounded below. We also applied the results to get parabolic Liouville theorems for some ancient solutions to the $f$-heat equation. In this paper we will investigate elliptic gradient estimates and Liouville properties for positive solutions to a nonlinear $f$-heat equation (see equation (1.3) below) on complete smooth metric measure spaces.
Recall that an $n$-dimensional smooth metric measure space $(M^n, g, e^{-f}dv)$ is a complete Riemannian manifold $(M^n, g)$ endowed with a weighted measure $e^{-f}dv$ for some $f \in C^\infty(M)$, where $dv$ is the volume element of the metric $g$. The associated $m$-Bakry-Émery Ricci tensor [2] is defined by
$$\mathrm{Ric}^m_f := \mathrm{Ric} + \nabla^2 f - \frac{1}{m}\, df \otimes df, \qquad 0 < m \le \infty.$$
The Bochner formula for $\mathrm{Ric}_f$ can be read as (see also [26])
$$\frac{1}{2}\Delta_f |\nabla u|^2 = |\nabla^2 u|^2 + \langle \nabla \Delta_f u, \nabla u \rangle + \mathrm{Ric}_f(\nabla u, \nabla u) \tag{1.1}$$
for any $u \in C^\infty(M)$. When $m < \infty$, (1.1) could be viewed as the Bochner formula for the Ricci tensor of an $(n+m)$-dimensional manifold. Hence many geometric and topological properties of manifolds with Ricci tensor bounded below can possibly be extended to smooth metric measure spaces with $m$-Bakry-Émery Ricci tensor bounded below; see for example [16,19]. When $m = \infty$, the ($\infty$-)Bakry-Émery Ricci tensor is related to the gradient Ricci soliton
$$\mathrm{Ric}_f = \lambda g$$
for some constant $\lambda$, which plays an important role in Hamilton's Ricci flow as it corresponds to the self-similar solution and arises as a limit of dilations of singularities in the Ricci flow [11]. A Ricci soliton is said to be shrinking, steady, or expanding according to whether $\lambda > 0$, $\lambda = 0$ or $\lambda < 0$. On the gradient Ricci soliton, the smooth function $f$ is often called a potential function. We refer to [5] and the references therein for further discussions. On a smooth metric measure space $(M, g, e^{-f}dv)$, the $f$-Laplacian $\Delta_f$ is defined by $\Delta_f := \Delta - \nabla f \cdot \nabla$, which is self-adjoint with respect to the weighted measure. The associated $f$-heat equation is defined by
$$\partial_t u = \Delta_f u. \tag{1.2}$$
If $u$ is independent of the time $t$, then it is an $f$-harmonic function. In the past few years, various Liouville properties for $f$-harmonic functions were obtained; see for example [3], [16], [17], [21], [23], [25], [27], [28], and the references therein. Recently, the author [26] proved elliptic gradient estimates and parabolic Liouville properties for the $f$-heat equation under some assumptions on the ($\infty$-)Bakry-Émery Ricci tensor.
In this paper, we will study analytical and geometrical properties for positive solutions to the equation
$$\partial_t u = \Delta_f u + a\,u \ln u, \tag{1.3}$$
where $a$ is a real constant.

Historically, gradient estimates for the harmonic function on manifolds were discovered by Yau [30] and Cheng-Yau [7] in the 1970s. They were extended to the so-called Li-Yau gradient estimate for the heat equation by Li and Yau [15] in the 1980s. In the 1990s, Hamilton [10] gave an elliptic type gradient estimate for the heat equation on closed manifolds, which was later generalized to the non-compact case by Kotschwar [13]. In 2006, Souplet and Zhang [22] proved a localized Cheng-Yau type estimate for the heat equation by adding a logarithmic correction term. Integrating Hamilton's or Souplet-Zhang's gradient estimates along space-time paths, their estimates exhibit an interesting phenomenon: one can compare the temperature of two different points at the same time, provided the temperature is bounded. The Li-Yau gradient estimate, however, only provides the comparison at different times.
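To illustrate the same-time comparison mentioned above (this computation is illustrative and not taken from the paper), suppose a Hamilton-type elliptic estimate gives a bound $|\nabla \ln u(\cdot, t)| \le C$ at a fixed time $t$ for a bounded positive solution. Integrating along a unit-speed minimizing geodesic $\gamma$ from $y$ to $x$ yields

```latex
\[
  \ln \frac{u(x,t)}{u(y,t)}
  = \int_0^{d(x,y)} \frac{d}{ds}\, \ln u(\gamma(s),t)\, ds
  = \int_0^{d(x,y)} \big\langle \nabla \ln u,\ \gamma'(s) \big\rangle\, ds
  \le C\, d(x,y),
\]
\[
  \text{so that} \qquad u(x,t) \le e^{C\, d(x,y)}\, u(y,t).
\]
```

In contrast, a Li-Yau-type estimate is integrated along paths that also move in time, which is why it compares the solution at two different times.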
Equation (1.3) has some relations to geometrical quantities. On one hand, the time-independent version of (1.3) with constant function $f$ is linked with gradient Ricci solitons; see [20,29] for detailed explanations. On the other hand, the steady-state version of (1.3) is closely related to weighted log-Sobolev constants of smooth metric measure spaces (the Riemannian manifold case is due to Chung-Yau [8]). Recall that the weighted log-Sobolev constant $S_M$, associated to a closed smooth metric measure space $(M^n, g, e^{-f}dv)$, is the smallest positive constant such that the weighted logarithmic Sobolev inequality holds. In particular, the case of the Euclidean space $\mathbb{R}^n$ equipped with the Gaussian measure is equivalent to the original log-Sobolev inequality due to L. Gross [9]. If a function $u$ achieves the weighted log-Sobolev constant, then, using Lagrange's method with respect to the weighted measure $e^{-f}dv$, $u$ satisfies the Euler-Lagrange equation (1.4). Notice that multiplying (1.4) by $u$ and integrating with respect to the weighted measure $e^{-f}dv$ implies $S_M = c_2$. Therefore (1.4) can be simplified to an elliptic version (1.5) of (1.3). For (1.5), if $\mathrm{Ric}^m_f \ge 0$, using (1.1) instead of the classical Bochner formula for the Ricci tensor and following Chung-Yau's arguments [8], we immediately get a bound, where $\lambda_1$ and $d$ denote the first nonzero eigenvalue of the $f$-Laplacian and the diameter of $(M^n, g, e^{-f}dv)$, respectively.
We also remark that our proof is a little different from Yau's original proof [30]. In Yau's case, the proof is to compute the evolution of the quantity $\ln u$, then multiply by a cut-off function and apply the maximum principle. In our case, we compute the evolution of the quantity $u^{1/3}$ instead of $\ln u$. Moreover, our proof not only applies some arguments of Souplet-Zhang [22], where the maximum principle on a local space-time supported set is discussed, but also uses some proof tricks of Bailesteanu-Cao-Pulemotov [1], Li [14] and Wei-Wylie's comparison theorem [23].
An immediate application of Theorem 1.1 is the parabolic Liouville property for the nonlinear f -heat equation. Similar results appeared in [12].
For more interesting special cases and applications of Theorem 1.1, see Section 3 for further discussion.
If $a = 0$, the theorem recovers the result in [26]. We point out that, similar to Theorem 1.1, the gradient estimates of Theorem 1.5 also hold provided that only the Bakry-Émery Ricci tensor is bounded below. Remark 1.6. In [24] the author proved similar estimates when the $m$-Bakry-Émery Ricci tensor is bounded below. He also remarked that the $m$-Bakry-Émery Ricci tensor could be replaced by the ($\infty$-)Bakry-Émery Ricci tensor (see Remark 1.3 (ii) in [24]). Professor Xiang-Dong Li pointed out to me that the remark is not accurate because of the lack of a global $f$-Laplacian comparison unless some special constraint on $f$ is given. However, Theorem 1.5 corrects my previous remark and provides an answer to this question.
The rest of this paper is organized as follows. In Section 2, we will give some auxiliary lemmas and introduce a space-time cut-off function. These results are preparations for proving Theorem 1.1 and Theorem 1.5. In Section 3, we will give complete detailed proofs of Theorem 1.1 by the classical Yau gradient estimate technique. Then we will apply Theorem 1.1 to prove Theorem 1.3 and Corollary 1.4. Meanwhile we will also discuss some special cases of Theorem 1.1. In Section 4, we will adopt the arguments of Theorem 1.1 in [26] to prove Theorem 1.5.
Basic lemmas
In this section, we will give some useful lemmas, which are preparations for proving Theorem 1.1 and Theorem 1.5 in the following sections. Consider the nonlinear $f$-heat equation (2.1), where $a$ is a real constant, on an $n$-dimensional complete smooth metric measure space $(M, g, e^{-f}dv)$. For any point $x_0 \in M$ and any $R > 0$, let $Q_{R,T}$ denote the corresponding space-time cylinder. Similar to [14,18], we introduce a new smooth function $\omega$, where $h := u^{1/3}$. Using the above, we derive the following evolution formula, which is a generalization of Lemma 2.1 in [12]. For any $(x, t) \in Q_{R,T}$: (i) if $a \ge 0$, then $\omega$ satisfies a differential inequality; (ii) if $a < 0$, further assuming that $0 < \delta \le u(x, t) \le D$ for some constant $\delta > 0$, then $\omega$ satisfies a corresponding inequality. Proof. Following the computation method of [15], let $e_1, e_2, \ldots, e_n$ be a local orthonormal frame field on $M^n$. We adopt the notation that subscripts in $i$, $j$, and $k$, with $1 \le i, j, k \le n$, mean covariant differentiations in the $e_i$, $e_j$ and $e_k$ directions, respectively. We differentiate $\psi$ in the direction of $e_i$, and then differentiate once more in the direction of $e_i$. Finally, we notice that if $a \ge 0$, then $0 < h \le D^{1/3}$ and hence $\ln h \le \frac{1}{3}\ln D$.
The above two cases imply the desired results.
For equation (2.1), if we introduce another new function $g = \ln u$, then $g$ satisfies a corresponding evolution equation. Using this, we can get the following lemma, which is also a generalization of previous results in [22,24,26].
In the rest of this section, we introduce a smooth cut-off function originating from Li-Yau [15] (see also [1] and [26]). This will also be used in the proofs of our theorems.
We remind the reader that Lemma 2.3 is a little different from that of [15] and [22]. Here, the cut-off function was previously used by M. Bailesteanu, X. Cao and A. Pulemotov [1].
Proof of Theorem 1.1. We only prove case (i) $a \ge 0$; case (ii) $a < 0$ is similar. Pick any number $\tau \in (t_0 - T, t_0]$ and choose a cut-off function $\psi(r, t)$ satisfying the propositions of Lemma 2.3. We will show that (1.6) holds at the space-time point $(x, \tau)$ for all $x$ such that $d(x, x_0) < R/2$, where $R \ge 2$. Since $\tau$ is arbitrary, the conclusion then follows.
Introduce a cut-off function $\psi$; then $\psi(x, t)$ is supported in $Q_{R,T}$. Our aim is to estimate $(\Delta_f - \partial_t)(\psi\omega)$ and carefully analyze the result at a space-time point where the function $\psi\omega$ attains its maximum.
We apply (3.2) to prove the theorem. If $x_1 \in B(x_0, 1)$, then $\psi$ is constant in the space direction in $B(x_0, R/2)$ according to our assumption, where $R \ge 2$. So at $(x_1, t_1)$, (3.2) yields an estimate, where we used proposition (3) of Lemma 2.3. Since $\psi(x, \tau) = 1$ when $d(x, x_0) < R/2$ by proposition (2) of Lemma 2.3, the above estimate indeed holds for all $x \in M$ such that $d(x, x_0) < R/2$. By the definition of $\omega(x, \tau)$ and the fact that $\tau \in (t_0 - T, t_0]$ was chosen arbitrarily, we prove the estimate for all $(x, t) \in Q_{R/2,T}$ with $t \neq t_0 - T$. This implies (1.6).
This comparison theorem holds without any growth condition on $f$, which is critical in our latter proof. Below we will estimate upper bounds for each term on the right-hand side of (3.2), similar to the arguments of Souplet-Zhang [22]. Meanwhile, we also repeatedly use Young's inequality
$$a_1 a_2 \le \frac{a_1^p}{p} + \frac{a_2^q}{q}, \qquad \forall\, a_1, a_2, p, q > 0 \ \text{with}\ \frac{1}{p} + \frac{1}{q} = 1.$$
In the following, $c$ denotes a constant depending only on $n$ whose value may change from line to line. First, we estimate the first term on the right-hand side of (3.2). For the second term on the right-hand side of (3.2), we have a similar bound. For the third term on the right-hand side of (3.2), since $\psi$ is a radial function, at $(x_1, t_1)$, using (3.3) we obtain a bound, where in the last inequality we used proposition (4) of Lemma 2.3. Then we estimate the fourth term on the right-hand side of (3.2). Finally, we estimate the last term on the right-hand side of (3.2), with constants depending on $K$, $a$, $D$.
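As a quick numerical sanity check (purely illustrative, not part of the paper's argument), Young's inequality can be verified for random positive numbers and conjugate exponents:

```python
import random

def young_holds(a1, a2, p):
    """Check Young's inequality: a1*a2 <= a1**p / p + a2**q / q with 1/p + 1/q = 1."""
    q = p / (p - 1.0)  # conjugate exponent
    return a1 * a2 <= a1 ** p / p + a2 ** q / q + 1e-9  # tiny float tolerance

random.seed(0)
checks = [
    young_holds(random.uniform(0.01, 10.0),
                random.uniform(0.01, 10.0),
                random.uniform(1.1, 5.0))
    for _ in range(1000)
]
print(all(checks))  # → True
```

Equality holds exactly when $a_1^p = a_2^q$, which is why the inequality is sharp.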
We now substitute (3.4)-(3.8) into the right-hand side of (3.2) and obtain an estimate at $(x_1, t_1)$. Since $\psi(x, \tau) = 1$ when $d(x, x_0) < R/2$ by proposition (2) of Lemma 2.3, from the above estimate we get a bound for all $x \in M$ such that $d(x, x_0) < R/2$. By the definition of $\omega(x, \tau)$ and the fact that $\tau \in (t_0 - T, t_0]$ was chosen arbitrarily, we in fact obtain the stated estimate. We have finished the proof of the theorem, since $h = u^{1/3}$ and $R \ge 2$. In particular, if $a = 0$, Theorem 1.1 implies a local elliptic gradient estimate for the $f$-heat equation. Furthermore, if $a = 0$ and $f$ is constant, by using the classical Laplacian comparison $\Delta r \le (n-1)(1/r + \sqrt{K})$ instead of Wei-Wylie's $f$-Laplacian comparison (see (3.3)), the proof of Theorem 1.1 in fact implies the following gradient estimate for the heat equation. Compared with Hamilton's estimate [10] and Souplet-Zhang's estimate [22] for the heat equation, this elliptic gradient estimate seems to be new.
Moreover, gradient estimate (3.10) implies an inequality which contradicts the theorem assumption $0 < u(x, t) \le e^{-2}$. Therefore such a $u$ does not exist.
for all $R \ge 2$. Similar to the above arguments, letting $R \to \infty$, we see that $u$ is constant in $x$, and $u = \exp(ce^{at})$ for some constant $c$. When $t \to -\infty$, we observe that $u = \exp(ce^{at}) \to +\infty$ if $c > 0$; $u = \exp(ce^{at}) \to 0$ if $c < 0$; and $u = 1$ if $c = 0$. Moreover, the theorem assumption requires $e^{-2} \le u(x, t) \le D$.
Hence $u$ only exists when $D \ge 1$, and the desired result follows. When $a = 0$ and $K = 0$, assume that $u(x, t)$ is a positive ancient solution to equation (1.2) such that $u(x, t) = o\big(\big(r^{1/2}(x) + |t|^{1/4}\big)^2\big)$ near infinity. Fixing any space-time point $(x_0, t_0)$ and using (3.10) for $u$ on the corresponding set for all $R \ge 2$, then letting $R \to \infty$, it follows that $|\nabla u(x_0, t_0)| = 0$.
Since $(x_0, t_0)$ is arbitrary, we get that $u$ is constant in space-time.
Since $(x_0, t_0)$ is arbitrary, the result follows.
Proof of Theorem 1.5
In this section, we will prove Theorem 1.5. The proof is analogous to that of Theorem 1.1 in [26]. For the reader's convenience, we provide a detailed proof. Compared with the previous proof, here we need to carefully deal with an extra nonlinear term.
Proof of Theorem 1.5. We only consider the case $a \ge 0$; the case $a < 0$ is similar. Using Lemma 2.2, we first compute the evolution of $\psi\omega$. Let $(x_1, t_1)$ be a point where $\psi\omega$ achieves its maximum. We first consider the case $x_1 \in B(x_0, 1)$. By Li-Yau [15], without loss of generality we may assume that $x_1$ is not in the cut-locus of $M$. Then at this point, we have $\Delta_f(\psi\omega) \le 0$, $(\psi\omega)_t \ge 0$, and $\nabla(\psi\omega) = 0$.
Generative Adversarial Network for Overcoming Occlusion in Images: A Survey
Although current computer vision systems are closer to human intelligence in comprehending the visible world than before, their performance is hindered when objects are partially occluded. Since we live in a dynamic and complex environment, we encounter more occluded objects than fully visible ones. Therefore, instilling the capability of amodal perception into those vision systems is crucial. However, overcoming occlusion is difficult and comes with its own challenges. The generative adversarial network (GAN), on the other hand, is renowned for its generative power in producing data from a random noise distribution that approaches samples from real data distributions. In this survey, we outline the existing works wherein GAN is utilized in addressing the challenges of overcoming occlusion, namely amodal segmentation, amodal content completion, order recovery, and acquiring training data. We provide a summary of the type of GAN, the loss function, the dataset, and the results of each work. We present an overview of the implemented GAN architectures in various applications of amodal completion. We also discuss the common objective functions that are applied in training GAN for occlusion-handling tasks. Lastly, we discuss several open issues and potential future directions.
Introduction
Artificial intelligence has revolutionized the world. With the advent of deep learning and machine learning-based models, many applications and processes in our daily life have been automated. Computer vision plays an essential role in these applications, and while humans can effortlessly make sense of their surroundings, machines are far from achieving that level of comprehension. Our environment is dynamic, complex, and cluttered. Objects are usually partially occluded by other objects. However, our brain completes the partially visible objects without us being aware of it. The capability of humans to perceive incomplete objects is called amodal completion [1]. Unfortunately, this task is not as straightforward for computers to achieve, because occlusion can happen in various ratios, angles, and viewpoints [2]. An object may be occluded by one or more objects, and an object may hide several other objects.
GAN is a structured probabilistic model that consists of two networks, a generator that captures the data distributions and a discriminator that decides whether the produced data come from the actual data distribution or from the generator. The two networks train in a two-player minimax game fashion until the generator can generate samples that are similar to the true samples, and the discriminator can no longer distinguish between the real and the fake samples.
Since its first introduction by Goodfellow et al. in 2014, numerous variants of GAN have been proposed, mainly architecture variants and loss variants [3]. The modifications in the first category can be in the overall network architecture, such as progressive GAN (PROGAN) [4]; in the representation of the latent space, such as conditional GAN (CGAN) [5]; or in adapting the architecture toward a particular application, as in CycleGAN [6]. The second category of variants encompasses modifications introduced to the loss functions and regularization techniques, such as the Wasserstein GAN (WGAN) [7] and PatchGAN [8].
Despite the various modifications, GAN is challenging to train and evaluate. However, due to its generative power and outstanding performance, it has a significantly large number of applications in computer vision, biometric systems, the medical field, etc. Therefore, a considerable number of reviews have been carried out on GAN and its applications in different domains (shown in Section 3). Only a limited number of existing reviews briefly mention overcoming occlusion in images with GAN. Therefore, in this survey we concentrate on the applications of GAN in amodal completion in detail. In summary, the contributions of this survey paper are:
1. We survey the literature for the available frameworks that utilize GAN in one or more aspects of amodal completion.
2. We discuss in detail the architecture of existing works and how they have incorporated GAN in tackling the problems that occur from occlusion.
3. We summarize the loss function, the dataset, and the reported results of the available works.
4. We also provide an overview of prevalent objective functions in training the GAN model for amodal completion tasks.
5. Finally, we discuss several directions for future research in tasks of occlusion handling wherein GAN can be utilized.
The term "occlusion handling" is polysemous in the computer vision literature. In object tracking, it mostly refers to the ability of the model to address occlusions and resume tracking the object once it re-appears in the scene [9]. In classification and detection tasks, the term indicates determining the depth order of the objects and the occlusion relationship between them [10]. Other works such as [11,12] define occlusion handling as the techniques that interpolate the blank patches in an object, i.e., content completion. However, we believe that, in order to enable a model to address occlusions, it needs to perform the same tasks defined in amodal completion. Therefore, in this survey we use "amodal completion" and "occlusion handling" interchangeably.
As a limitation, we only focus on occlusion handling in a single 2D image. Therefore, occlusion in 3D images, stereo images, and video data is out of the scope of this work. Additionally, we emphasize the GAN component of each architecture we reviewed. As GAN is applied to various tasks in different problems, it is difficult to carry out a systematic comparison of the existing models: each model is evaluated on a different dataset, using a different evaluation metric, for a different task. In some cases, the papers do not assess the performance of GAN; in those cases, we present the result of the entire model.
The rest of this document is organized as follows: the methodology for conducting this survey is presented in Section 2. Next, Section 3 mentions the related available articles in the literature. Section 4 introduces the fundamental concepts about GAN and its training challenges, and the aspects of amodal completion. Afterward, Section 5 presents the problems in amodal completion and how GAN has been applied to address them. The common loss functions in GAN for amodal completion are discussed in Section 6. In Sections 7 and 8, future directions and key findings of this survey article are presented. Finally, conclusions are enunciated in Section 9.
Methodology
To perform a descriptive systematic literature review, we begin by forming the research questions which this survey attempts to answer. The questions are (1) what are the challenges in amodal completion? (2) how are GAN models applied to address the problems of amodal completion? Based on the formulated questions, the search terms are identified to find and collect relevant publications. The search keywords are "GAN AND occlusion", "GAN AND amodal completion", "GAN AND occlusion handling", "GAN for occlusion handling", and "GAN for amodal completion".
We inspect several research databases, such as IEEE Xplore, Google Scholar, Web of Science, and Scopus. The list of the returned articles from the search process is sorted and refined by excluding the publications that do not satisfy the research questions. The elimination criteria are as follows: the research article addresses aspects of occlusion handling but does not employ GAN; GAN is used in applications other than amodal completion; or the authors have worked on occlusion in 3D data or video frames. Subsequently, each of the remaining publications in the list is investigated and summarized. The articles are examined for the GAN architecture, the objective function, the dataset, the results, and the purpose of using GAN.
Related Works
Occlusion: Handling occlusion has been studied in various domains and applications. Table 1 shows the list of published surveys and reviews of occlusion in several applications. A survey of occlusion handling in generic object detection of still images is provided in [2], focusing on challenges that arise when objects are occluded. Similarly, the most recent survey article by the authors of [13] provides a taxonomy of problems in amodal completion from single 2D images. However, none of those review articles concentrates specifically on the applications of GAN for overcoming occlusion. Other works have focused on occlusion in specific scopes, such as object tracking [14,15], pedestrians [16,17], human faces [18][19][20][21][22], the automotive environment [23,24], and augmented reality [25]. In contrast, we review the articles that address occlusion in single 2D images. Generative Adversarial Network: Due to their power, GANs are ubiquitous in computer vision research. Given the growing body of published works on GAN, there are several recent surveys and review papers in the literature investigating its challenges, variants, and applications. Table 2 contains a list of survey articles that have been published in the last five years. The list does not include papers that specifically focus on GAN applications outside the computer vision field.
The authors in [27][28][29][30][31][32] discuss the instability problem of GAN along with the various techniques and improvements that have been designed to stabilize its training. Adversarial attacks can be carried out against machine learning models by generating an input sample that leads to unexpected and undesired results from the model. Sajeeda et al. [27] investigate the various defense mechanisms to protect GAN against such attacks. Li et al. [33] summarize the different models into two groups of GAN architectures: the two-network models and the hybrid models, which are GANs combined with an encoder, autoencoder, or variational autoencoder (VAE) to enhance the training stability. The authors of [34,35] explore the available evaluation metrics of GAN models. Other works have discussed the application of different GAN architectures for computer vision [36,37], image-to-image translation [38,39], face generation [40,41], the medical field [29,[42][43][44], person re-identification (ReID) [45], audio and video domains [29], generating and augmenting training data [46,47], image super-resolution [39,48], and other real-world applications [39,45,49,50]. Some of the mentioned review articles discuss occlusion handling as an application of GAN very briefly, without detailing the architecture, loss functions, and results. In this paper, we focus on the works that combine the two above-mentioned topics. Specifically, we want to present the works that have been carried out to tackle the problems that arise from occlusion using GAN. However, depending on the nature of the problems, the applicability of GAN varies. For example, in amodal appearance generation, GAN is the optimal choice of architecture. Comparably, in amodal segmentation and order recovery tasks, it is less used.
Generative Adversarial Network
GAN is an unsupervised generative model that contains two networks, namely a generator and a discriminator. The two networks learn in an adversarial manner, similar to a min-max game between two players. The generator tries to generate a fake sample that the discriminator cannot distinguish from the real sample. On the other hand, the discriminator learns to determine whether the sample is real or generated. The generator G takes a random noise z as input. It learns a probability distribution p_g over data x to generate fake samples that imitate the real data distribution (p_data). Then, the generated sample is forwarded to the discriminator D, which outputs a single scalar that labels the data as real or fake (Figure 1). The classification result is used in training G as gradients of the loss. The loss guides G to generate samples that are less likely and more challenging to be labeled as fake by D. Over time, G becomes better at generating more realistic samples that would confuse D, and D becomes better at detecting fake samples. They both try to optimize their objective functions; in other words, G tries to minimize its cost value and D tries to maximize its cost value.
Equation (1) was designed by Goodfellow et al. [53] to compute the cost value of GAN:

min_G max_D V(D, G) = E_{x∼p_data}[log D(x)] + E_{z∼p_z}[log(1 − D(G(z)))], (1)

where x is the real sample from the training dataset, G(z) is the generated sample, and D(x) and D(G(z)) are the discriminator's verdicts that x is real and that the fake sample G(z) is real. There are numerous variations of the original GAN. Among the most prominent ones are CGAN, WGAN, and Self-Attention GAN (SAGAN) [54]. CGAN extends the original GAN by taking an additional input, which is usually a class label. The label conditions the generated data to be of a specific class. Therefore, the loss function in (1) becomes as follows:

min_G max_D V(D, G) = E_{x∼p_data}[log D(x|c)] + E_{z∼p_z}[log(1 − D(G(z|c)))], (2)

where c is the conditional class label. In order to prevent the vanishing gradient and mode collapse problems (discussed below), WGAN applies an objective function that implements the Earth-Mover (EM) [55] distance for comparing the generated and real data distributions. EM helps in stabilizing GAN's training and the equilibrium between the generator and the discriminator. To keep the gradients of the loss function from becoming too large, WGAN employs weight clipping. WGAN Gradient Penalty (WGAN-GP) [56] extends WGAN by introducing a penalty term instead of weight clipping to enhance the training stability, convergence power, and output quality of the network. Moreover, SAGAN applies an attention mechanism to extract features from a broader feature space and capture global dependencies instead of local neighborhoods. Thus, SAGAN can produce high-resolution details in data as it borrows cues from all feature locations, in contrast to the original GAN that depends on only spatially local points.
In theory, both G and D are expected to converge at the Nash equilibrium point. However, in practice this is not as simple as it sounds. Training GANs is challenging, because they are unstable and difficult to evaluate. GANs are notorious for several issues, which are already covered intensively in the literature; therefore, we will only discuss them briefly below.
Achieving Nash Equilibrium
In game theory, a Nash equilibrium is reached when none of the players will change their strategy no matter what the opponents do. In GAN, the game objective changes as the networks take turns during the training process. Therefore, it is particularly difficult to obtain the desired equilibrium point due to the adversarial behavior of its networks. Typically, gradient descent is used to find the minimum value of the cost function during training. However, in GAN, decreasing the cost of one network leads to an increase in the cost of the other network. For instance, if one player minimizes xy with regard to x and another player minimizes −xy with regard to y, gradient descent enters a stable orbit, but it does not converge to the equilibrium point x = y = 0 [57].
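The xy example above can be reproduced in a few lines (an illustrative sketch, not from the cited works): updating both players simultaneously by gradient descent makes the iterates spiral around the equilibrium (0, 0) instead of approaching it.

```python
# Player 1 minimizes f(x, y) = x * y over x  (gradient: df/dx = y);
# Player 2 minimizes -f(x, y) over y         (gradient: -df/dy = -x).
def simultaneous_gd(x, y, lr=0.1, steps=200):
    for _ in range(steps):
        x, y = x - lr * y, y + lr * x  # simultaneous gradient steps
    return x, y

x0, y0 = 1.0, 1.0
x, y = simultaneous_gd(x0, y0)
r0 = (x0 ** 2 + y0 ** 2) ** 0.5
r = (x ** 2 + y ** 2) ** 0.5
# Each step multiplies the squared distance to (0, 0) by (1 + lr**2),
# so the iterates spiral outward instead of converging to x = y = 0.
print(r > r0)  # → True
```

With smaller step sizes the spiral widens more slowly, but the distance to the equilibrium still never decreases.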
Mode Collapse
One of the major problems with GANs is that they are unable to generalize well. This poor generalization leads to mode collapse. The generator collapses either when it cannot generate large diverse samples, known as complete collapse, or when it only produces a specific type (or subset) of the target data that will not be rejected by the discriminator as fake, known as partial collapse [53,57].
Vanishing Gradient
GAN is challenging to train due to the vanishing gradient issue. The generator stops learning when the gradients of the weights of the initial layers become extremely small. This happens when the discriminator confidently rejects the samples produced by the generator [58].
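A small numeric sketch (illustrative, not from the surveyed works) shows why a confident discriminator starves the generator of gradient under the original minimax loss, while the commonly used non-saturating loss −log D keeps a useful signal:

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

# Logit of a discriminator that confidently rejects a generated sample.
a = -8.0
d = sigmoid(a)  # D(G(z)), very close to 0

# Gradient of the saturating generator loss log(1 - D) w.r.t. the logit:
#   d/da log(1 - sigmoid(a)) = -sigmoid(a)
grad_saturating = -d
# Gradient of the non-saturating alternative -log D w.r.t. the logit:
#   d/da (-log sigmoid(a)) = -(1 - sigmoid(a))
grad_non_saturating = -(1.0 - d)

print(abs(grad_saturating) < 1e-3, abs(grad_non_saturating) > 0.99)  # → True True
```

When D(G(z)) ≈ 0, the saturating loss passes almost no gradient to the generator, which is exactly the regime the paragraph above describes.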
Lack of Evaluation Metrics
Despite the growing progress in GAN architectures and training, evaluating them remains a challenging task. Although several metrics and methods have been proposed, there is no standard measure for evaluating the models. Most of the available works propose a new technique to assess the strengths and limitations of their model. Therefore, finding a consensus evaluation metric remains an open research question [59].
Amodal Completion
Amodal completion is the natural ability of humans to discern the physical objects in the environment even if they are occluded. Our environment contains more partially visible or temporarily occluded objects than fully visible ones. Hence, the input to our visual system is mostly incomplete and fragmented. Yet, we innately and effortlessly imagine the invisible parts of the object in our mind and perceive the object as complete [1]. For instance, if we only see half of a set of striped legs at the zoo, we can tell that there is a zebra in that territory.
As natural and seamless as this task is for humans, for computers it is challenging yet essential. This is because the performance of most computer vision-related real-world applications drops when objects are occluded. For example, in autonomous driving, the vehicle must be able to recognize and identify the complete contour of the objects in the scene to avoid accidents and drive safely.
Our environment is complex, cluttered, and dynamic. An object may be behind one or more other objects, or an object may hide one or more other objects. Thus, the possible occlusion patterns between objects are endless, and the shapes and appearances of occluded objects are unbounded. Whenever a visual system requires de-occlusion, three sub-tasks are involved in the process (Figure 2). The first is inferring the complete segmentation mask of the partially visible objects, including the hidden region. The second is predicting and reconstructing the RGB content of the occluded area based on the visible parts of the object and/or the image. Often, these two sub-tasks require the result of the third sub-task, which determines the depth order of the objects and the relationship between them, i.e., which object is the occluder and which one is the occludee. Several of the existing works address these sub-tasks simultaneously. Designing and training a model that could perform any or all of the above-mentioned sub-processes presents several challenges. In the following section, we explore the existing works in the literature wherein a GAN architecture is implemented to address those obstacles.
GAN in Amodal Completion
The taxonomy of the challenges in amodal completion is presented by Ao et al. [13]. In the following sections, we present how GAN has been used to address each challenge. In exploring the existing research papers, we emphasized the aspects of amodal completion wherein GAN was utilized, not the original aim of the paper.
Amodal Segmentation
Image segmentation tasks such as semantic segmentation, instance segmentation, or panoptic segmentation solely predict the visible shape of the objects in a scene. Therefore, these tasks mainly operate with modal perception. Amodal segmentation, on the other hand, works with amodal perception. It estimates the shape of an object beyond the visible region, i.e., the visible mask (also called the modal mask) and the mask for the occluded region, from the local and the global visible visual cues (see Figure 3).
Amodal segmentation is rather challenging, especially if the occluder is of a different category (e.g., the occlusion between vehicles and pedestrians). The visible region may not hold sufficient information to help in determining the whole extent of the object. Contrariwise, if the occluder is an instance of the same category (e.g., occlusion between pedestrians), since the features of both objects are similar, it becomes difficult for the model to estimate where the boundary of one object ends and the second one begins. In either case, the visible region plays a significant role in guiding the amodal mask generation process. Therefore, most existing methods require the modal mask as input. To alleviate the need for a manually annotated modal mask, many works apply a pre-trained instance segmentation network to obtain the visible mask and utilize it as input. In the following, we describe the architecture of the GAN-based models that are used in generating the amodal mask of the occluded objects.
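To fix terminology, the toy numpy sketch below (the masks and geometry are invented for illustration) shows the quantities involved: the modal (visible) mask that an instance segmenter provides, and the hidden region that an amodal segmenter must additionally infer.

```python
import numpy as np

H, W = 8, 8
amodal = np.zeros((H, W), dtype=bool)
amodal[2:7, 2:7] = True            # full (amodal) extent of the object
occluder = np.zeros((H, W), dtype=bool)
occluder[0:5, 4:8] = True          # another object in front of it

modal = amodal & ~occluder          # visible (modal) mask: segmenter output
hidden = amodal & occluder          # region amodal segmentation must infer

# Fraction of the object that is occluded.
occlusion_rate = hidden.sum() / amodal.sum()
```

The amodal mask is always the disjoint union of the modal mask and the hidden region, which is the invariant a two-stage pipeline relies on.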
A two-hourglass generator: Zhou et al. [60] apply a pre-trained instance segmentation network to the input image to obtain an initial mask and feed it to a two-stage pipeline for human de-occlusion. Given the initial mask, the generator implements two hourglass modules to refine and complete the modal mask, producing the amodal mask at the end. A discriminator enhances the quality of the output amodal mask. A parsing result accompanies the output of the generator and is employed by a Parsing Guided Attention (PGA) module to reinforce the semantic features of body parts at multiple scales as part of a parsing-guided content recovery network. The latter uses a combination of UNet [61] and partial convolutions [62] to generate the content of the invisible area. The additional parsing branches add extra semantic guidance, which improves the final invisible mask.
A coarse-to-fine architecture with contextual attention: Xiong et al. [63] first employ a contour detection module to extract the visible contour of an object and then complete it through a contour completion network. The contour detection module uses DeepCut [64] to segment salient objects, and performs noise removal and edge detection to extract the incomplete contour of the object from the segmentation map. Then, the contour completion network learns to infer the missing foreground contour. The contour completion network is composed of a generator and a discriminator. The generator has a coarse-to-fine architecture whose two stages share a similar encoder-decoder structure, except that the refinement network employs a contextual attention layer [65]. Finally, the completed contour, along with the ground-truth image, is fed to the discriminator, which produces a score map indicating the originality of each region in the generated contour mask and decides whether the mask aligns with the contour of the image. The discriminator is a fully convolutional PatchGAN [8] trained with a hinge loss. The results show that the contour completion step assists in the explicit modeling of the borders between the background and foreground layers, which leads to less evident artifacts in the completed foreground objects.
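The hinge loss used to train PatchGAN discriminators such as the one above is simple to state. Since a PatchGAN emits one realism score per patch, the losses average over a score map rather than a single scalar; the toy score maps below are invented for illustration, not taken from the authors' implementation.

```python
import numpy as np

def d_hinge_loss(d_real, d_fake):
    # Discriminator hinge loss: push scores on real patches above +1
    # and scores on generated patches below -1.
    return np.mean(np.maximum(0.0, 1.0 - d_real)) + \
           np.mean(np.maximum(0.0, 1.0 + d_fake))

def g_hinge_loss(d_fake):
    # Generator hinge loss: raise the critic's score on generated patches.
    return -np.mean(d_fake)

# Toy 2x2 patch score maps from a PatchGAN-style discriminator.
d_real = np.array([[1.5, 0.2], [2.0, 1.0]])
d_fake = np.array([[-1.2, 0.5], [-0.8, -2.0]])
d_loss = d_hinge_loss(d_real, d_fake)
g_loss = g_hinge_loss(d_fake)
```

A perfectly separating discriminator (all real scores above +1, all fake scores below -1) incurs zero loss, which is the margin behavior that distinguishes the hinge formulation from the standard cross-entropy GAN loss.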
A generator with priori knowledge: The authors of [66] also utilize a pre-trained instance segmentation model to obtain the visible human mask, which is fed with the input image into a GAN-based model to produce the amodal mask of occluded humans. The model predicts the mask of the invisible region through an hourglass network structure.
The local fine features and the higher-level semantic details are aggregated in the encoding stage, and they are added to each layer's feature maps in the decoding stage. The predicted amodal mask is evaluated by a Patch-GAN discriminator. To improve the amodal segmentation outcome, some typical human poses are concatenated with the feature maps as a priori information to be used in the decoding stage. Although the a priori knowledge enhances the predicted amodal masks, it restricts the application of the model to humans with specific poses.
A coarse-to-fine architecture with multiple discriminators: In applications such as visual surveillance, autonomous driving, path prediction, and intelligent traffic control, detecting vehicles and pedestrians is essential. However, these are often obstructed by other objects, which makes the task of learning the visual representation of the intended objects more challenging. The model in [67] aims to recover the amodal mask of a vehicle and the appearance of its hidden regions iteratively. To tackle both tasks, the model is composed of two parts: a segmentation completion module and an appearance recovery module. The first network follows an initial-to-refined framework. Firstly, an initial segmentation mask is generated by taking an input image with occluded vehicles through a pre-trained segmentation network. Then, the input image is fed again into the next stage after it is concatenated with the output from the initial stage. The second part, in contrast to a standard GAN, has a generator with an encoder-decoder structure, an object discriminator, and an instance discriminator. To assist the model in producing more realistic masks, an additional 3D model pool is employed. This provides silhouette masks as adversarial samples, which motivates the model to learn the defining characteristics of actual vehicle masks. The object discriminator, which uses a Stack-GAN structure [68], enforces the output mask to be similar to a real vehicle, whereas the instance discriminator with a standard GAN structure aims at producing an output mask similar to the ground-truth mask. The recovered mask is fed to the appearance recovery module to regenerate the whole foreground vehicle. Both modules are trained with a reconstruction loss (i.e., L1 loss) and a perceptual loss.
Although using the 3D model pool and multiple discriminators produces better amodal masks, when the model is tested on synthetic images with different types of synthetic occlusions, it requires multiple iterations to progressively eliminate the occlusions. On real images, whose occlusions are less severe, the model is unable to refine the results beyond three iterations, and its performance declines.
Order Recovery
In order to apply any de-occlusion or completion process, it is essential to determine the occlusion relationship and identify the depth order between the overlapping components of a scene. Other processes such as amodal segmentation and content completion depend on the predicted occlusion order to accomplish their tasks. Therefore, vision systems need to distinguish the occluders from the occludees, and to determine whether an occlusion exists between the objects. Order recovery is vital in many applications, such as semantic scene understanding, autonomous driving, and surveillance systems.
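As a reference point for what order recovery must produce, a model-free baseline can be sketched: wherever two object masks overlap, the object with smaller depth is the occluder. The numpy toy below (masks and depth values are invented) illustrates this pairwise decision; learned methods replace the depth oracle with predictions from the image.

```python
import numpy as np

def pairwise_order(depth_a, mask_a, depth_b, mask_b):
    """Return +1 if A occludes B, -1 if B occludes A, 0 if no overlap.

    Baseline rule: over the overlapping region of the two masks, the
    object with the smaller mean depth is in front (the occluder).
    """
    overlap = mask_a & mask_b
    if not overlap.any():
        return 0
    return 1 if depth_a[overlap].mean() < depth_b[overlap].mean() else -1

H, W = 6, 6
mask_a = np.zeros((H, W), bool); mask_a[1:5, 1:4] = True
mask_b = np.zeros((H, W), bool); mask_b[2:6, 2:6] = True
depth_a = np.full((H, W), 2.0)   # A is 2 m from the camera
depth_b = np.full((H, W), 5.0)   # B is 5 m from the camera

order = pairwise_order(depth_a, mask_a, depth_b, mask_b)
```

Applying this rule to every overlapping pair yields a directed occlusion graph over the scene, which is exactly the structure the learned order-recovery methods below try to predict without ground-truth depth.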
The following works attempt to retrieve the depth order/layer order between the objects in a scene through utilizing a GAN-based architecture.
A generator with multiple discriminators: Dhamo et al. [69] present a method to achieve layered depth prediction and view synthesis. Given a single RGB image as input, the model learns to synthesize an RGB-D view from it and hallucinates the missing regions that were initially occluded. Firstly, the framework uses a fully convolutional network to obtain a depth map and a segmentation mask for foreground and background elements from the input image. Depending on the predicted masks, the foreground objects are erased from the input image and the obtained depth map (RGB-D). Then, a Patch-GAN [8]-based network is used to refill the holes in the RGB-D background image that were created by removing the foreground objects. The network has a pair of discriminators to enforce inter-domain consistency. This method has data limitations, as ground-truth layered depth images are difficult to obtain for real-world data.
Inferring the scene layout beyond the visible view and hallucinating the invisible parts of the scene is called amodal scene layout. MonoLayout, proposed in [70], provides the amodal scene layout in the form of a bird's eye view (BEV) in real time. With a single input image of a road scene, the framework delivers a BEV of static (such as sidewalks and street areas) and dynamic (vehicles) objects in the scene, including the partially visible components. The model contains a context encoder, two decoders, and two discriminators. Given the input image, the encoder captures the multi-scale context representations of both static and dynamic elements. Then, the context features are shared with two decoders, an amodal static scene decoder and a dynamic scene decoder, to predict the static and dynamic objects in BEV. The decoders are regularized by two corresponding discriminators to encourage the predictions to be similar to the ground-truth representations. The context sharing within the decoders achieves better amodal scene layout performance. MonoLayout runs in real time at 32 fps with 19.6 M parameters. However, it does not generalize well to unseen scenarios.
A single generator and discriminator: Zheng et al. [71] tackle amodal scene understanding by creating a layer-by-layer pipeline, the Completed Scene Decomposition Network (CSDNet), to extract and complete the RGB appearance of objects in a scene and to make sense of their occlusion relations. In each layer, CSDNet only separates the foreground elements that are without occlusion. This way, the system identifies and fills the invisible portion of each object. Then, the completed image is fed again to the model to segment the fully visible objects. In this iterative manner, the depth order of the scene is obtained, which can be used to recompose a new scene. The model is composed of a decomposition network and a completion network. The decomposition network follows Mask-RCNN [72] with an additional layer classification branch to estimate the instance masks and determine whether an object is fully or partially visible. The predicted masks are forwarded to the completion network, which uses an encoder-decoder to complete the resultant holes in the masked image. By masking the fully visible objects in each step and iteratively completing the objects in the scene, the earlier completion information is propagated to the later steps. Nonetheless, the model is trained on a rendered dataset; therefore, it cannot generalize well to real scenes that are unlike the rendered ones. In addition, completion errors accumulate over the layers, which leads to a drop in accuracy when the occlusion layers are too numerous.
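The layer-by-layer peeling idea behind CSDNet can be illustrated with a toy scene in pure Python. Here objects are axis-aligned rectangles with known amodal extents, and an oracle re-rendering stands in for the learned completion network; both are simplifications for illustration, as is the assumption that some object is always fully visible.

```python
import numpy as np

# Toy scene: each object is a rectangle (r0, r1, c0, c1); a larger id
# is painted later, i.e., is closer to the camera.
rects = {1: (0, 4, 0, 4), 2: (2, 6, 2, 6), 3: (4, 8, 4, 8)}
H, W = 8, 8

def render(obj_ids):
    scene = np.zeros((H, W), int)
    for oid in sorted(obj_ids):             # back-to-front painting
        r0, r1, c0, c1 = rects[oid]
        scene[r0:r1, c0:c1] = oid
    return scene

def area(oid):
    r0, r1, c0, c1 = rects[oid]
    return (r1 - r0) * (c1 - c0)

# CSDNet-style loop: in each layer, peel only the fully visible
# objects, complete the scene behind them, and repeat.  Here an oracle
# re-render of the remaining objects replaces the completion network.
order = []                                   # front-most layer first
remaining = set(rects)
scene = render(remaining)
while remaining:
    layer = sorted(o for o in remaining if (scene == o).sum() == area(o))
    assert layer, "mutual occlusion: no fully visible object to peel"
    order.append(layer)
    remaining -= set(layer)
    scene = render(remaining)                # oracle completion step
```

The recovered `order` lists objects from front to back, which is the depth ordering CSDNet accumulates across its iterations.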
On the other hand, Dhamo et al. [73] present an object-oriented model with three parts: object completion, layout prediction, and image re-composition. While the object completion unit attempts to fill the occluded area in the input RGBA image through an auto-encoder, the layout prediction unit uses a GAN architecture to estimate the RGBA-D (RGBA plus depth) background, i.e., the object-free representation of the scene. The model infers the layered representation of a scene from a single image and produces a flexible number of output layers based on the complexity of the scene. However, the global and local contexts, and the spatial relationships between the objects in the scene, are not considered.
Amodal Appearance Reconstruction
Recently, there has been significant progress in image inpainting methods, such as the works in [65,74]. However, these models recover plausible content for a missing area with no knowledge of which object is involved in that part. On the contrary, amodal appearance reconstruction (also known as amodal content completion) models need to identify the individual elements in a scene and recognize the partially visible objects along with their occluded areas in order to predict the content of the invisible regions.
The majority of the existing frameworks therefore follow a multi-stage process to address amodal segmentation and amodal content completion as one problem. They depend on a segmentator to infer the binary segmentation masks for the occluded and non-occluded parts of the object. The masks are then forwarded as input to the amodal completion module, which fills in the RGB content for the missing region indicated by the mask.
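The interface between the two stages can be sketched in numpy: the segmenter supplies modal and amodal masks, and the completion module fills their difference. In this toy, a mean-color fill is a deliberately crude stand-in for a generative completion network, and the masks are invented.

```python
import numpy as np

H, W = 6, 6
img = np.zeros((H, W, 3))
amodal = np.zeros((H, W), bool); amodal[1:5, 1:5] = True
modal = amodal.copy(); modal[1:3, 3:5] = False   # top-right corner hidden
img[modal] = [0.2, 0.6, 0.2]                     # visible object pixels

# The completion module's target region is exactly amodal minus modal.
hidden = amodal & ~modal

# Placeholder "completion": copy the mean color of the visible region.
fill = img[modal].mean(axis=0)
completed = img.copy()
completed[hidden] = fill
```

Everything outside `hidden` is left untouched, mirroring how real completion networks are constrained (via masks or losses) to modify only the occluded region.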
Among the three sub-tasks of amodal completion, GAN is most widely used in amodal content completion. In this section, we present the usage of GAN in amodal content completion for a variety of computer vision applications.
Generic Object Completion
GANs are unable to estimate and learn the structure in an image implicitly without additional information about the structures, or annotations of the foreground and background objects, during training. Therefore, Xiong et al. [63] propose a model that is made up of a contour detection module, a contour completion module, and an image completion module. The first two modules learn to detect and complete the foreground contour. Then, the image completion module is guided by the completed contour to determine the positions of the foreground and background pixels. The incomplete input image, the completed contour, and the hole mask are fed to the image completion network to fill the missing part of the object. The network has a coarse-to-fine architecture similar to that of the contour completion module. However, the depth of the network weakens the effect of the completed contour; therefore, the completed contour is passed to both the coarse network and the refinement network. The discriminator of the image completion network is a PatchGAN that is trained with hinge loss and takes the generated fake image or the ground-truth image together with the hole mask. The experiments show that, under the guidance of the contour completion, the model can generate completed images with fewer artifacts and complete objects with more natural boundaries. However, the model fails to avoid artifacts and color discrepancy around the holes because it relies on vanilla convolutions to extract features.
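A common remedy for such artifacts is to make the convolution mask-aware, as in partial convolutions [62], which compute each response from valid (unmasked) pixels only and renormalize by the fraction of valid pixels under the kernel. A single-channel numpy sketch (valid padding only, toy sizes, not an optimized implementation):

```python
import numpy as np

def partial_conv2d(x, mask, w, b=0.0):
    """Single-channel partial convolution with valid padding.

    Features come from unmasked pixels only, scaled by
    (window size / number of valid pixels); windows with no valid
    pixel output 0 and stay masked in the updated mask.
    """
    kh, kw = w.shape
    H, W = x.shape
    oh, ow = H - kh + 1, W - kw + 1
    y = np.zeros((oh, ow))
    new_mask = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            mwin = mask[i:i+kh, j:j+kw]
            valid = mwin.sum()
            if valid > 0:
                xwin = x[i:i+kh, j:j+kw] * mwin
                y[i, j] = (w * xwin).sum() * (kh * kw / valid) + b
                new_mask[i, j] = 1.0
    return y, new_mask

x = np.ones((5, 5))
mask = np.ones((5, 5)); mask[0:3, 0:3] = 0.0     # a hole in the input
w = np.ones((3, 3)) / 9.0                         # mean filter
y, m = partial_conv2d(x, mask, w)
```

Note the renormalization: on this constant input, every window that sees at least one valid pixel recovers the value 1.0 exactly, whereas a vanilla convolution would be dragged toward 0 near the hole, which is precisely the source of the boundary artifacts discussed above.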
Therefore, Zhan et al. [75] use CGAN and partial convolutions [62] to regenerate the content of the missing region. The authors apply the concept of partial completion to de-occlude the objects in an image. In the case of an object hidden by multiple other objects, partial completion is performed by considering one occluder at a time. The model partially completes both the mask and the appearance of the object in question through two networks, namely the Partial Completion Network-mask (PCNet-M) and the Partial Completion Network-content (PCNet-C), respectively. A self-supervised approach is implemented to produce labeled occluded data to train the networks, i.e., a masked region is obtained by positioning a randomly selected occluder from the dataset on top of the concerned object. Then, the masked occludee is passed to PCNet-M to reproduce the mask of the invisible area, which in turn is given to PCNet-C. Although the self-supervised and partial completion techniques alleviate the need for annotated training data, the generated content contains remnants of the occluder, and its quality degrades when the occluded region is textured.
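PCNet's self-supervised pair generation can be sketched in a few lines of numpy. In the real setup the occluder is another instance sampled from the dataset; here a random rectangle stands in for it, and the geometry is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_training_pair(obj_mask, occluder_mask):
    """Synthesize an occluded training sample from a fully visible
    object: paste the occluder mask on top, so the erased part becomes
    the ground-truth hidden region to recover."""
    visible = obj_mask & ~occluder_mask     # network input (modal mask)
    target = obj_mask & occluder_mask       # supervision: hidden region
    return visible, target

H, W = 8, 8
obj = np.zeros((H, W), bool); obj[2:7, 2:7] = True
# Random 4x4 rectangle as a stand-in occluder.
r, c = rng.integers(0, 4, size=2)
occ = np.zeros((H, W), bool); occ[r:r+4, c:c+4] = True
visible, target = make_training_pair(obj, occ)
```

Because the full object mask was known before pasting, the union of the network input and its supervision target reconstructs the object exactly, with no manual annotation of occlusions required.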
Ehsani et al. [76] train a GAN-based model dubbed SeGAN. The model consists of a segmentator, which is a modified ResNet-18 [77], and a painter, which is a CGAN. The segmentator produces the full (amodal) segmentation mask of the object, including the occluded parts. The painter, which consists of a generator and a discriminator, takes in the output from the segmentator and reproduces the appearance of the hidden parts of the object based on the amodal mask. The final output of the generator is a de-occluded RGB image, which is then fed into the discriminator. As a drawback, the model is trained on a synthetic dataset, which presents an inevitable domain gap between the training images and the real-world testing images.
Furthermore, Kahatapitiya et al. [78] aim to detect and remove the unrelated occluders, and inpaint the missing pixels to produce an occlusion-free image. The unrelated objects are identified based on the context of the image and a language model. Through a background segmentator and the foreground segmentator, the background and foreground objects are extracted, respectively. The foreground extractor produces pixel-wise annotations for the objects (i.e., thing class) and the background segmentator outputs the background objects (i.e., stuff class). Then, the relation predictor uses the annotations to estimate the relation of each foreground object to the image context based on a vector embedding of class labels trained with a language model. The result of the relation prediction can detect any unrelated objects which are considered as unwanted occlusion. Consequently, the relations and pixel annotations of the thing class are fed into the image inpainter to mask and recreate the pixels of the hidden object. The image inpainter is based on the contextual attention model by Yu et al. [65], which employs a coarse-to-fine model. In the first stage, the mask is coarsely filled in. Then, the second stage utilizes a local and a global WGAN-GP [56] to enhance the quality of the generated output from the coarse stage. A contextual attention layer is implemented to attend to similar feature patches from distant pixels. The local and global WGAN-GP enforce global and local consistency of the inpainted pixels [65]. The contextual information helps in generating a de-occluded image; however, the required class labels of the foreground and background objects limit the applicability of the method.
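The WGAN-GP objective mentioned above adds a gradient penalty that pushes the critic's gradient norm toward 1 at interpolates between real and fake samples [56]. The sketch below uses a toy linear critic and a numerical gradient, both of which are illustrative assumptions rather than anyone's actual implementation.

```python
import numpy as np

def critic(x, w):
    # Toy linear critic D(x) = w.x; any differentiable critic works here.
    return float(w @ x)

def gradient_penalty(x_real, x_fake, w, eps=0.5, h=1e-5):
    """WGAN-GP penalty (||grad_x D(x_hat)|| - 1)^2 at an interpolate
    x_hat between a real and a fake sample; the gradient is estimated
    by central differences so the sketch works for any critic."""
    x_hat = eps * x_real + (1.0 - eps) * x_fake
    grad = np.zeros_like(x_hat)
    for i in range(x_hat.size):
        d = np.zeros_like(x_hat); d[i] = h
        grad[i] = (critic(x_hat + d, w) - critic(x_hat - d, w)) / (2 * h)
    return (np.linalg.norm(grad) - 1.0) ** 2

x_real = np.array([1.0, 2.0])
x_fake = np.array([0.0, 0.0])
w = np.array([3.0, 4.0])                  # gradient of D is w, norm 5
gp = gradient_penalty(x_real, x_fake, w)  # (5 - 1)^2 = 16
```

For the linear critic the gradient is simply `w`, so a weight vector of unit norm incurs (nearly) zero penalty, which is the 1-Lipschitz behavior the penalty enforces during training.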
Face Completion
Occlusions are commonly present in facial images. The occluding objects can be glasses, scarves, food, cups, microphones, etc. The performance of biometric and surveillance systems can degrade when faces are obstructed or covered by other objects, which raises a security concern. However, compared to background completion, facial images are more challenging to complete since they contain more appearance variations, especially around the eyes and the mouth. In the following, we categorize the available works for face completion based on their architecture.
A single generator and discriminator: Cai et al. [79] present an Occlusion-Aware GAN (OA-GAN), with a single generator and discriminator, that alleviates the need for an occlusion mask as an input. Through using paired images with known masks of artificial occlusions and natural images without occlusion masks, the model learns in a semi-supervised way. The generator has an occlusion-aware network and a face completion network. The first network estimates the mask for the area where the occlusion is present, which is fed into the second network. The latter then completes the missing region based on the mask. The discriminator employs an adversarial loss and an attribute-preserving loss to ensure that the generated facial image has similar attributes to the input image.
Likewise, Chen et al. [80] depend on their proposed OA-GAN to automatically identify the occluded region and inpaint it. They train a DCGAN on occlusion-free facial images, and use it to detect the corrupted regions. During the inpainting process, a binary matrix is maintained, which indicates the presence of occlusion in each pixel. The detection of occluded region alleviates the need for any prior knowledge of the location and type of the occlusion masks. However, incorrect occlusion detection leads to partially inpainted images.
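The residual-based detection idea can be sketched as follows: a generator trained only on occlusion-free faces reconstructs clean pixels well, so thresholding the reconstruction residual yields the binary occlusion matrix. In this numpy toy, a clean image stands in for the generator's output, and the threshold is an invented hyperparameter.

```python
import numpy as np

def detect_occlusion(image, reconstruction, tau=0.2):
    """Binary occlusion matrix: pixels whose reconstruction residual
    exceeds tau are flagged as occluded (1), the rest as clean (0)."""
    residual = np.abs(image - reconstruction)
    return (residual > tau).astype(np.uint8)

clean = np.full((6, 6), 0.5)             # stand-in for G's reconstruction
observed = clean.copy()
observed[1:3, 1:4] = 0.0                  # an occluder darkens some pixels
occ_matrix = detect_occlusion(observed, clean)
```

As the surrounding text notes, the quality of this matrix is critical: pixels the detector misses are never inpainted, which is exactly the "partially inpainted images" failure mode.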
Facial Structure Guided GAN (FSG-GAN) [81] is a two-stage model with a single generator and discriminator. In the first stage, a variational auto-encoder estimates the facial structure, which is combined with the occluded image and fed into the generator of the second stage. The generator (a UNet), guided by the facial structure knowledge, synthesizes the de-occluded image. A multi-receptive-fields discriminator encourages a more natural and less ambiguous appearance of the output image. Nevertheless, the model cannot reliably remove occlusions from face images with large pose variations, and it cannot correctly predict the facial structure under severe occlusions, which leads to unpleasant results.
Multiple discriminators: Several of the existing works employ multiple discriminators to ensure that the completed facial image is semantically valid and consistent with the context of the image. Li et al. [82] train a model with a generator, a local discriminator, a global discriminator, and a parsing network to generate an occlusion-free facial image. The original image is masked with a randomly positioned noisy square and fed into the generator which is designed as an auto-encoder to fill the missing pixels. The discriminators, which are binary classifiers, enhance the semantic quality of the reconstructed pixels. Meanwhile, the parsing network enforces the harmony of the generated part and the present content. The model can handle various masks of different positions, sizes, and shapes. However, the limitations of the model include the facts that (1) it cannot recognize the position/orientation of the face and its corresponding elements which leads to unpleasant generative content; (2) it fails to correctly recover the color of the lips; (3) it does not capture the full spatial correlations within neighboring pixels.
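The loss layout shared by these multi-discriminator face completion models can be sketched in numpy: an L1 reconstruction term plus adversarial terms from a global discriminator (whole image) and a local discriminator (crop around the filled region). The stand-in discriminators and loss weights below are invented placeholders, not trained networks.

```python
import numpy as np

def reconstruction_loss(pred, target):
    return np.abs(pred - target).mean()          # pixel-wise L1

def total_loss(pred, target, mask_box, d_global, d_local,
               lam_rec=1.0, lam_g=0.3, lam_l=0.3):
    """Generator loss: L1 plus adversarial terms.  d_global/d_local
    return a realism score in (0, 1); the generator maximizes
    log D(pred), written here as a minimized negative log."""
    r0, r1, c0, c1 = mask_box
    l_rec = reconstruction_loss(pred, target)
    l_g = -np.log(d_global(pred) + 1e-8)
    l_l = -np.log(d_local(pred[r0:r1, c0:c1]) + 1e-8)
    return lam_rec * l_rec + lam_g * l_g + lam_l * l_l

# Stand-in discriminators returning fixed realism scores (assumptions).
d_global = lambda img: 0.9
d_local = lambda patch: 0.8

target = np.ones((8, 8))
pred = target.copy(); pred[2:4, 2:4] = 0.5       # imperfect fill
loss = total_loss(pred, target, (2, 4, 2, 4), d_global, d_local)
```

The local term is what makes the filled patch itself, and not just the whole image, look realistic; dropping it tends to produce blurry but globally plausible fills.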
Similarly, Mathai et al. [83] use an encoder-decoder for the generator, a Patch-GAN-based local discriminator, and a WGAN-GP [56]-based global discriminator to address occlusions on distinctive areas of a face and inpaint them. Consequently, the model's ability to recognize faces improves. To minimize the effect of the masked area on the extracted features, two convolutional gating mechanisms are evaluated: a hard gating mechanism known as partial convolutions [62] and a soft gating method based on the sigmoid function.
Liu et al. [84] also follow the same approach by implementing a generator (an auto-encoder), a local discriminator, and a global discriminator. A self-attention mechanism is applied in the global discriminator to enforce complex geometric constraints on the global image structure and to model long-range dependencies. The authors report results for facial landmark detection only, without providing the experimental data.
Moreover, Cai et al. [85] present FCSR-GAN to create a high-resolution deoccluded image from a low-resolution facial image with partial occlusions. At first, the model is pre-trained for face completion to recover the missing region. Afterward, the entire framework is trained end-to-end. The generator comprises a face completion unit and a face super-resolution unit. The low-resolution occluded input image is fed into the face completion module to fill the missing region. The face completion unit follows an encoder-decoder layout and the overall architecture is similar to the generative face completion by Li et al. [82]. Then, the occlusion-free image is fed into the face super-resolution module which adopts a SRGAN [86]. The network is trained with a local loss, a global loss, and a perceptual loss to ensure that the generated content is consistent with the local details and holistic contextual information. An additional face parsing loss and perceptual loss are computed to produce more realistic face images.
Furthermore, face completion can improve the resistance of face identification and recognition models to occlusion. The authors in [87] propose a two-unit de-occlusion distillation pipeline. In the de-occlusion unit, a GAN is implemented to recover the appearance of pixels covered by the mask. Similar to the previously mentioned works, the output of the generator is evaluated by local and global discriminators. In the distillation unit, a pre-trained face recognition model is employed as a teacher, and its knowledge is used to train the student model to identify masked faces by learning representations for recovered faces with similar clustering behaviors as the original ones. This teaches the student model how to fill in the information gap in appearance space and in identity space. The model is trained with a single occlusion mask at a time; however, in real-world instances, multiple masks cover large discriminative regions of the face.
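The distillation objective in the second unit can be sketched as pulling the student's embedding of a recovered face toward the teacher's embedding of the original face, so that recovered faces cluster with their identities. The toy embeddings and the cosine-based loss form below are illustrative assumptions.

```python
import numpy as np

def l2n(v):
    return v / np.linalg.norm(v)

def distillation_loss(student_emb, teacher_emb):
    # 1 - cosine similarity between the student's embedding of the
    # recovered face and the teacher's embedding of the original face.
    return 1.0 - float(l2n(student_emb) @ l2n(teacher_emb))

teacher = np.array([1.0, 0.0, 0.0])   # teacher embedding (clean face)
aligned = np.array([2.0, 0.0, 0.0])   # student output, same direction
off = np.array([0.0, 1.0, 0.0])       # student output, identity lost

loss_aligned = distillation_loss(aligned, teacher)
loss_off = distillation_loss(off, teacher)
```

Minimizing this term over recovered faces closes the gap "in identity space" described above, complementing the pixel-level losses that close the gap in appearance space.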
Multiple generators: In contrast to the OA-GAN presented by Cai et al. [79], the authors of [88] propose a two-stage OA-GAN framework with two generators and two discriminators. While the generators (G1 and G2) are built on a UNet encoder-decoder architecture, PatchGAN is adopted in the discriminators. G1 takes an occluded input image and disentangles the mask of the image to produce a synthesized occlusion. G2 then takes the output from G1 in order to remove the occlusions and generate a de-occluded image. The occlusion generator (i.e., G1) therefore plays a fundamental role in the de-occlusion process: a failure in the occlusion generator produces incorrect images.
Multiple generators and discriminators: While using multiple discriminators ensures the consistency and the validity of the produced image, some available works employ multiple generators, especially when tackling multiple problems. For example, Jabbar et al. [89] present a framework known as Automatic Mask Generation Network for Face De-occlusion using Stacked GAN (AFD-StackGAN) that is composed of two stages to automatically extract the mask of the occluded area and recover its content. The first stage employs an encoder-decoder in its generator to generate the binary segmentation mask for the invisible region. The produced mask is further refined with erosion and dilation morphological techniques. The second stage eliminates the occluding object and regenerates the corrupted pixels through two pairs of generators and discriminators. The occluded input image and the extracted occlusion mask are fed into the first generator to produce a completed image. The initial output from the first generator is enhanced by rectifying any missing or incorrect content in it. Two PatchGAN discriminators are applied to the results of the generators to ensure that the restored face's appearance and structural consistency are retained. AFD-StackGAN can remove various types of occlusion masks in facial images that cover a large area of the face. However, it is trained with synthetic data, so a mismatch between the training images and real-world testing images is likely.
In the same way, Li et al. [90] employ two generators and three domain-specific discriminators in their proposed framework called disentangling and fusing GAN (DF-GAN). They treat face completion as disentangling and fusing of clean faces and occlusions. This way, they remove the need for paired samples of occluded images and their corresponding clean images. The framework works with three domains that correspond to the distributions of occluded faces, clean faces, and structured occlusions. In the disentangling module, an occluded facial image is fed into an encoder which encodes it into the disentangled representations. Thereafter, two decoders produce the corresponding de-occluded image and occlusion, respectively. In other words, the disentangling network learns how to separate the structured occlusions and the occlusion-free images. The fusing network, on the other hand, combines the latent representations of clean faces and occlusions and creates the corresponding occluded facial image, i.e., it learns how to generate images with structured occlusions. However, real-world occlusions are of arbitrary shape and size, not necessarily structured.
Coarse-to-fine architecture: In contrast to the previously mentioned works, where one output is generated, Jabbar et al. [91] propose a two-stage Face De-occlusion using Stacked Generative Adversarial Network (FD-StackGAN) model that follows the coarse-to-fine approach. The model attempts to remove the occlusion mask and fill in the affected area. In the first stage, the network produces an initial de-occluded facial image. The second stage refines the initially generated image to create a more visually plausible image that is similar to the real image. Similar to AFD-StackGAN, FD-StackGAN can handle various regions in the facial images with different structures and surrounding backgrounds. However, the model is trained on a synthetic dataset and is not tested on images with natural occlusions.
Likewise, Duan and Zhang [92] address the problem of deoccluding and recognizing face profiles with large-pose variations and occlusions through BoostGAN, which has a coarse-to-fine structure. In the coarse part, i.e., multi-occlusion frontal view generator, an encoder-decoder network is used for eliminating occlusion and producing multiple intermediate deoccluded faces. Subsequently, the coarse outputs are refined through a boosting network for photo-realistic and identity-preserved face generation. Consequently, the discriminator has a multi-input structure.
Since BoostGAN is a one-stage framework, it cannot handle de-occlusion and frontalization concurrently, which means that it loses discriminative identity information. Furthermore, BoostGAN fails to employ the mask-guided noise prior information. To address these issues, Duan et al. [93] perform face frontalization and face completion simultaneously. They propose an end-to-end mask-guided two-stage GAN (TSGAN) framework. Each stage has its own generator and discriminator; while the first stage contains the face de-occlusion module, the second one contains the face frontalization module. Another module, named the mask-attention module (MAM), is deployed in both stages. The MAM encourages the face de-occlusion module to concentrate more on missing regions and fill them based on the masked image input. The recovered image is fed into the second stage to obtain the final frontal image. TSGAN is trained with defined occlusion types and specified sizes, and multiple natural occlusions are not considered. Table 3 provides an outline of the above-mentioned works, summarizing the type of GAN, the objective function, the dataset, and the results of each work.
Attribute Classification
With the availability of surveillance cameras, the task of detecting and tracking people through their visual appearance in surveillance footage has gained prominence. Furthermore, there are other characteristics of people that are essential to fully understand an observed scene. The task of recognizing people's attributes (age, sex, race, etc.) and the items they hold (backpacks, bags, phones, etc.) is called attribute classification.
However, occlusion of the person in question by another person may lead to incorrectly classifying the attributes of the occluder instead of the occludee. Furthermore, the quality of images from surveillance cameras is usually low. Therefore, Fabbri et al. [108] focus on the poor-resolution and occlusion challenges in recognizing attributes of people, such as gender, race, and clothing, in surveillance systems. The authors propose a model based on DCGAN [109] to improve the quality of images in order to overcome the mentioned problems. The model has three networks: one for attribute classification from full-body images, while the other two networks attempt to enhance the resolution and recover from occlusion. Eliminating the occlusion produces an image without noise and without residuals of other subjects that could result in misclassification. However, under severe occlusions, the reconstructed image still contains remnants of the occluder, and the model fails to preserve the parts of the image that should stay unmodified.
Similarly, Fulgeri et al. [110] tackle the occlusion issue by combining a UNet and a GAN architecture. The model takes as input the occluded person image and its corresponding attributes. The generator restores the image from this input, and the output is then forwarded to three networks: ResNet-101 [77], VGG-16 [111], and the discriminator, to calculate the loss, which is backpropagated to update the weights of the generator. The goal of the model is to obtain an output image of a person that (a) is not occluded, (b) is similar at the pixel level to a person's shape, and (c) contains visual features similar to the original image. The results show that the model can detect and remove occlusion without any additional information. However, the model fails to fully recover the pixels around the boundaries of the body parts. The authors constrain the input images to occlusions covering no more than six-sevenths of the image height.
Miscellaneous Applications
In this section, we present the applications of GAN for amodal content completion in various categories of data.
Food: Papadopoulos et al. [112] present a compositional layer-based generative network called PizzaGAN that follows the steps of a recipe to make a pizza. The framework contains a pair of modules to add and remove all instances of each recipe component. A Cycle-GAN [6] is used to design each module. In the case of adding an element to the existing image, the module produces the appearance and the mask of the visible pixels in the new layer. Moreover, the removal module learns how to fill the holes that are left from the erased layer and generate the mask of the removed pixels. However, the authors do not provide any quantitative assessment of PizzaGAN.
Vehicles: Yan et al. [67] propose a two-part model that iteratively recovers the amodal mask of a vehicle and the appearance of its hidden regions. The model is composed of a segmentation completion module and an appearance recovery module. The first network completes the segmentation mask of the vehicle's invisible region. To complete the content of the occluded region, the appearance recovery module has a generator with a two-path network structure. The first path accepts the input image, the recovered mask from the segmentation completion module, and the modal mask, and learns how to fill in the colors of the hidden pixels. The other path takes the recovered mask and the ground-truth complete mask and learns how to use the image context to inpaint the whole foreground vehicle. The two paths share parameters, which increases the capability of the generator. To enhance the quality of the recovered image, it is passed through the whole model several times. However, if occlusions are not severe, the performance of the model on real images degrades beyond three iterations.
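The iterative refinement used in [67] — feeding the recovered image back through the model several times — can be sketched with a stand-in refinement step. The mean-of-visible-pixels update below is a toy placeholder for the actual network, chosen only to make the convergence behavior visible:

```python
import numpy as np

def refine(image, mask, step=0.5):
    """Stand-in for one pass through a recovery network: move occluded
    pixels (mask == 1) partway toward the mean of the visible pixels."""
    target = image[mask == 0].mean()
    out = image.copy()
    out[mask == 1] += step * (target - out[mask == 1])
    return out

def iterative_recovery(image, mask, n_iters=3):
    """Pass the image through the (stand-in) model several times."""
    for _ in range(n_iters):
        image = refine(image, mask)
    return image

img  = np.array([[1.0, 1.0], [1.0, 0.0]])   # bottom-right pixel is occluded
mask = np.array([[0, 0], [0, 1]])
out = iterative_recovery(img, mask, n_iters=3)
```

Each pass moves the occluded pixel closer to a plausible value (0.0 → 0.5 → 0.75 → 0.875 here), which mirrors why a few iterations help but additional ones yield diminishing, and eventually negative, returns.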
Humans:
The process of matching the same person in images taken by multiple cameras is referred to as Person re-identification (ReID). In surveillance systems where the purpose is to track and identify the individuals, ReID is essential. However, the stored images usually have low resolution and are blurry because they are from ordinary surveillance cameras [113]. Additionally, occlusion by other individuals and/or objects is most likely to occur since each camera has a different angle of view. Hence, some important features become difficult to recognize.
To tackle the challenge of person re-identification under occlusion, Tagore et al. [114] design a bi-network architecture with an Occlusion Handling GAN (OHGAN) module. An image with synthetically added occlusion is fed into the generator, which is based on the UNet architecture and produces an occlusion-free image by learning a non-linear projection mapping between the input image and the output image. Afterward, the discriminator computes the metric difference between the generated image and the original one. The ablation studies for the reconstruction task illustrate that the quality of completion is good for 10-20% occlusion and average for 30-40% occlusion. However, the quality of reconstruction degrades for occlusions higher than 50%.
On the other hand, Zhang et al. [66] attempt to complete the mask and the appearance of an occluded human through a two-stage network. First, the amodal completion stage predicts the amodal mask of the occluded person. Afterward, the content recovery network completes the RGB appearance of the invisible area. The latter uses a UNet architecture in the generator, with local and global discriminators to ensure that the output image is consistent with the global semantics while enhancing the clarity and contrast of the local regions. The generator adds a Visible Guided Attention (VGA) module to the skip connections. The VGA module computes a relational feature map that guides the completion of the low-level features by concatenating the high-level features with the next-level features. The relational feature map represents the relation between the pixels inside and outside the occluded area. The process of extracting feature maps is similar to the self-attention mechanism in SAGAN by Zhang et al. [54]. Although incorporating VGA leads to a more accurate recovery of the content and texture, the model does not perform as well on real images as it does on synthetic images.
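The relational feature map in the VGA module follows the self-attention pattern of SAGAN [54]: every spatial position attends to every other, producing an N×N relation map over flattened positions. A toy NumPy sketch with made-up dimensions and random projection weights:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(features, wq, wk, wv):
    """features: (N, C) with N flattened spatial positions.
    Returns the attended features and the (N, N) relational map."""
    q, k, v = features @ wq, features @ wk, features @ wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[1]))  # relation between positions
    return attn @ v, attn

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8))               # 4 positions, 8 channels
wq, wk, wv = (rng.standard_normal((8, 8)) for _ in range(3))
out, attn = self_attention(feats, wq, wk, wv)
```

Each row of `attn` is a probability distribution over positions, which is how pixels outside the occluded area can contribute, in proportion to their relevance, to completing pixels inside it.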
Training Data
Supervised learning frameworks require annotated ground-truth data to train a model. These data can come from a manually annotated dataset, from synthetic occluded data built from 3D computer-generated images, or from superimposing a part of an object/image on another object. For example, Ehsani et al. [76] train their model (SeGAN) on a photorealistic synthetic dataset, and Zhan et al. [75] apply a self-supervised approach to generate annotated training data. However, a model trained with synthetic data may fail when it is tested on real-world data, and human-labeled data are costly, time-consuming, and susceptible to subjective judgments.
In this section, we discuss how GAN is implemented to generate training data for several categories.
Generic objects: It is nearly impossible to cover all probable occlusions, and the likelihood of some occlusion cases appearing is rather small. Therefore, Wang et al. [115] aim to make better use of the available data to improve object detection performance under occlusion. They utilize an adversarial network to generate hard examples with occlusions and use them to train a Fast-RCNN [116]. Consequently, the detector becomes invariant to occlusions and deformations. Their model contains an Adversarial Spatial Dropout Network (ASDN), which takes as input the features of an image patch and predicts a dropout mask that creates occlusions that are difficult for the Fast-RCNN to classify.
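The ASDN's core operation — predicting a dropout mask that removes the most discriminative feature cells — can be sketched with a simple top-k heuristic. The importance scores and threshold rule below are illustrative stand-ins for the learned network:

```python
import numpy as np

def adversarial_dropout_mask(importance, drop_frac=0.25):
    """Zero out the most important fraction of feature-map cells, forcing
    the detector to cope without its most discriminative features."""
    k = int(importance.size * drop_frac)
    thresh = np.sort(importance.ravel())[-k]
    return (importance < thresh).astype(float)

imp = np.arange(16, dtype=float).reshape(4, 4)   # toy importance scores
mask = adversarial_dropout_mask(imp, drop_frac=0.25)
dropped_features = imp * mask                     # occluded feature map
```

Multiplying the feature map by this mask simulates occlusion in feature space, producing the hard training samples that the detector then learns to classify.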
Likewise, Han et al. [117] apply an adversarial network to produce occluded adversary samples for training an object detector. The model, named Feature Fusion and Adversary Networks (FFAN), is based on Faster RCNN [118] and consists of a feature fusion network and an adversary occlusion network. The feature fusion module produces a feature map with high resolution and rich semantic information to detect small objects more effectively, while the adversary occlusion module generates occlusion on the feature map of the object, thus outputting an adversarial training sample that is hard for the detector to discriminate. Meanwhile, the detector becomes better at classifying the generated occluded adversary samples through self-learning. Over time, the detector and the adversary occlusion network learn from and compete with each other to enhance the performance of the model.
The occlusions produced by the adversarial networks in [115,117] may lead to overgeneralization, because the occluded instances become similar to instances of other classes. For example, occluding the wheels of a bicycle can result in a wheelchair being misclassified as a bike.
Humans: Zhao et al. [119] augment the input data to produce easy-to-hard occluded samples with different sizes and positions of the occlusion mask to increase the variation of occlusion patterns. They address the issue of ReID under occlusion through an Incremental Generative Occlusion Adversarial Suppression (IGOAS) framework. The network contains two modules, an incremental generative occlusion (IGO) block and a global adversarial suppression (G&A) module. IGO takes the input data through augmentation and generates easy occluded samples. Then, it progressively enlarges the size of the occlusion mask with the number of training iterations. Thus, the model becomes more robust against occlusion, as it learns harder occlusions incrementally rather than the hardest ones directly. On the other hand, G&A consists of a global branch, which extracts global features of the input data, and an adversarial suppression branch, which weakens the response of the occluded region toward zero and strengthens the response to non-occluded areas.
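The easy-to-hard schedule in IGO can be sketched as a mask size that grows with the training iteration up to a cap. The start size, step, and maximum below are made-up values, not those of IGOAS:

```python
def occlusion_size(iteration, start=4, step=2, max_size=32):
    """Enlarge the square occlusion mask with the training iteration, so
    the model sees easy occlusions first and harder ones later."""
    return min(start + step * iteration, max_size)

# Mask side length sampled at iterations 0, 5, 10, and 15.
sizes = [occlusion_size(i) for i in range(0, 20, 5)]
```

The cap keeps the hardest samples bounded, so late-stage training still retains some visible evidence to learn from.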
Furthermore, to increase the number of samples per identity for person ReID, Wu et al. [120] use a GAN network to synthesize labeled occluded data. Specifically, the authors impose block rectangles on the images to create random occlusion on the original person images, which the model then tries to complete. The completed images that are similar but not identical to the original input are labeled with the same annotation as the corresponding raw image. Similarly, Zhang et al. [113] follow the same strategy to expand the original training set, except that an additional noise channel is applied to the generated data to further adjust the label. Both approaches in [113,120] work with rectangular masks, but in real-world examples occlusions appear in free-form shapes.
Face images: Cong and Zhou [106] propose an improved GAN to generate occluded face images. The model is based on DCGAN with an added S-coder, whose purpose is to force the generator to produce multi-class target images. The network is further optimized through the Wasserstein distance and the cycle consistency loss from CycleGAN. However, only sunglasses and facial masks are considered as occluding elements.

Figure 4 outlines the discussed approaches for tackling occlusion with GAN. Table 4 summarizes the GAN model, the loss function, and the datasets used in the works discussed in this section (except for the face completion works), along with the reported results for the tasks where GAN was applied. For amodal segmentation, the implemented architectures are: a discriminator with a two-hourglass generator [60], a coarse-to-fine architecture with contextual attention [63] or with multiple discriminators [67], and a generator with a priori knowledge [66]. For order recovery, GAN is designed as a generator with a single discriminator [71,73] or with multiple discriminators [69,70]. To perform amodal content completion for facial images, the architectures include a single generator and discriminator [79-81], multiple discriminators [82-87], multiple generators [88], multiple generators and discriminators [89,90], or a coarse-to-fine architecture [91-93]. Generic object completion is carried out through a coarse-to-fine architecture [63], multiple discriminators with contextual attention [78], or partial convolution and CGAN [75,76]. Human completion for attribute classification is utilized in [108,110]. Other works use GAN to complete images of food [112], vehicles [67], and humans [66,114]. GAN is also used to generate training data of generic objects [115,117], humans [113,119,120], and face images [106].
Loss Functions
In GAN, the generator G and the discriminator D play against each other in a two-player mini-max game until they reach a Nash equilibrium through a gradient-based optimization method. The gradient of the loss value indicates the learning performance of the network, and the loss value is calculated via a loss (objective) function. In fact, defining a loss function is one of the fundamental elements of designing a GAN. Consequently, numerous objective functions have been proposed to stabilize and regularize GAN training. The following losses are the most common ones used in training GAN for amodal completion.
1. Adversarial Loss: The loss function used in training GAN is known as the adversarial loss. It measures the distance between the distribution of the generated samples and that of the real samples. Each of G and D has its own dedicated loss function, which together form the adversarial loss:

min_G max_D V(D, G) = E_{x∼p_data(x)}[log D(x)] + E_{z∼p_z(z)}[log(1 − D(G(z)))] (1)

However, G is trained only on the term that reflects the distribution of the generated data, E_{z∼p_z(z)}[log(1 − D(G(z)))]. Extensions of the original loss function are the conditional loss and the Wasserstein loss defined in CGAN and WGAN, respectively.

2. Content Loss: In image generation, the content loss [138] measures the difference between the content representations of the real and the generated images, to make them more similar in terms of perceptual content. If p and x are the original and the generated images, and P^l and X^l are their respective representations in layer l, the content loss is calculated as

L_content(p, x, l) = (1/2) Σ_{i,j} (X^l_{ij} − P^l_{ij})^2 (2)

3. Reconstruction Loss: The key idea behind the reconstruction loss proposed by Li et al. [139] is to benefit from the visual features learned by D from the training data. The features extracted from the real data by D are fed to G to regenerate the real data. By adding the reconstruction loss to the GAN's loss function, G is encouraged to reconstruct from the features of D, which brings G closer to the configurations of the real data:

L_recon = E_{x∼p_data(x)}[ ‖x − G_θ(D_{φ_F}(x))‖^2 ] (3)

where D_{φ_F} is the part of the discriminator that encodes the data to features, and G_θ decodes the features back to the training data.

4. Style Loss: The style loss, originally designed for image style transfer by Gatys et al. [138], is defined to ensure that the style representation of the generated image matches that of the input style image. It depends on the feature correlations between the feature maps, given by the Gram matrix

G^l_{ij} = Σ_k F^l_{ik} F^l_{jk} (4)

Let a and x be the original image and the generated image, respectively, and A^l and G^l their corresponding style representations in layer l. The style loss is computed by the element-wise mean square difference between A^l and G^l:

L_style(a, x) = Σ_l w_l (1 / (4 N_l^2 M_l^2)) Σ_{i,j} (G^l_{ij} − A^l_{ij})^2 (5)

where w_l is the weighting factor of each layer, and N_l and M_l represent the number and the size of the feature maps, respectively.

5. L1 and L2 Loss: The L1 loss is the absolute difference between the ground truth and the generated image, while the L2 loss is the squared difference between the actual and the generated data. When used alone, these loss functions lead to blurred results [140]. However, when combined with other loss functions, they can improve the quality of the generated images, especially the L1 loss. The generator is encouraged not only to fool the discriminator but also to be closer to the real data in the L1 or L2 sense. Although these losses cannot capture high-frequency details, they accurately capture low frequencies. The L1 loss enforces correctness in low-frequency features; hence, it results in less blurred images compared to L2 [8]. Both losses are defined in Equations (6) and (7):

L_L1 = E_{x,z}[ ‖x − G(z)‖_1 ] (6)
L_L2 = E_{x,z}[ ‖x − G(z)‖_2^2 ] (7)

where x, y = G(z), and z are the ground-truth image, the generated image, and the random noise, respectively.

6. Perceptual Loss: The perceptual loss measures the high-level perceptual and semantic differences between the real and the fake images. Several works [141,142] introduce the perceptual loss as a combination of the content loss (or feature reconstruction loss) and the style loss. However, Liu et al. [62] simply compute the L1 distance between the real and the completed images, while others incorporate further similarity metrics into it [140].

7. BCE Loss: The BCE loss measures how close the predicted probability is to the real label. Its value increases as the predicted probability deviates from the real label:

L_BCE = −(1/N) Σ_{i=1}^{N} [ y_i log(ŷ_i) + (1 − y_i) log(1 − ŷ_i) ] (8)

where y_i is the label of sample i (y_i = 0 and y_i = 1 represent fake and real samples, respectively) and ŷ_i is the predicted probability. BCE is used in training the discriminator in the amodal segmentation task [76] and in training the generator in [110].

8. Hinge Loss: In GAN, the Hinge loss is used to help the convergence to a Nash equilibrium. As proposed by Lim and Ye [143], the objective function for G is

L_G = −E_z[ D(G(z)) ] (9)

and for D is

L_D = E_x[ max(0, 1 − D(x)) ] + E_z[ max(0, 1 + D(G(z))) ] (10)

where x and G(z) are the real and the generated images, respectively.
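Several of the simpler losses above can be written directly in NumPy. The snippets below are illustrative reference implementations; the reductions shown (means over elements) are one common choice, and batching details vary between works:

```python
import numpy as np

def l1_loss(x, y):
    """Mean absolute difference between ground truth x and generated y."""
    return np.abs(x - y).mean()

def l2_loss(x, y):
    """Mean squared difference between ground truth x and generated y."""
    return ((x - y) ** 2).mean()

def bce_loss(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy between labels and predicted probabilities."""
    p = np.clip(y_pred, eps, 1 - eps)
    return -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)).mean()

def hinge_d_loss(d_real, d_fake):
    """Hinge loss for the discriminator, given its raw scores."""
    return np.maximum(0.0, 1 - d_real).mean() + np.maximum(0.0, 1 + d_fake).mean()

def gram_matrix(fmap):
    """Gram matrix of a (channels, positions) feature map, as in the style loss."""
    return fmap @ fmap.T

x = np.array([0.0, 1.0, 1.0])
y = np.array([0.0, 0.5, 1.0])
l1 = l1_loss(x, y)
l2 = l2_loss(x, y)
bce = bce_loss(np.array([1.0, 0.0]), np.array([0.9, 0.1]))
hinge = hinge_d_loss(np.array([2.0]), np.array([-2.0]))  # both samples well classified
```

On the toy inputs, the hinge loss is zero because the real score exceeds +1 and the fake score is below −1, illustrating why hinge-trained discriminators stop pushing on already well-separated samples.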
As can be seen from Tables 3 and 4, many of the previously mentioned loss functions are combined with others to train a GAN model. The adversarial loss is the base objective function for training the two networks of the GAN. However, with the original GAN's adversarial loss function, the model may not converge; therefore, the Hinge loss is often implemented as an alternative objective function. In some works, global and local adversarial losses are used to train local and global discriminators, to ensure that the generated data are semantically and locally coherent. In addition, L1 or L2 losses are frequently utilized to capture low-frequency features and hence improve the quality of the generated images. Furthermore, the reconstruction loss is employed to encourage the generator to maintain the contents of the original input image, while the perceptual loss encourages the model to capture patch-level information when completing a missing patch in an object/image. Finally, to emphasize the style match between the generated image and the input image, the style loss is implemented.
The choice of objective functions is an essential part of designing a model. In amodal completion and inpainting, designing a loss function is still an active area of research. The ablation studies performed in the reviewed works show that there is no single optimal objective function: for different tasks and data, a different set of loss terms produces the best results. In addition, using a complex loss function may lead to problems of instability, vanishing gradients, and mode collapse.
Open Challenges and Future Directions
Despite the significant progress of the research in GAN and amodal completion in the last decade, there remain a number of problems that can be considered as future directions.
1.
Amodal training data: Up until now, there has been no fully annotated generic amodal dataset with sufficient ground-truth labels for the three sub-tasks of amodal completion. Most existing datasets are specific to a particular application or task. This not only makes training the models more difficult, but also verifying their learning capability. In many cases, there is insufficient labeled amodal validation data to establish the accuracy of a model. We present the challenges related to each sub-task of amodal completion. For amodal segmentation, the current datasets do not contain sufficient occlusion cases between similar objects; hence, a model cannot tell where the boundary of one object ends and the other one begins. The existing real (manually annotated) amodal datasets have no ground-truth appearance for the occluded region, which makes training and validating a model for amodal content completion more challenging.
As for the case of order recovery, some occlusion situations are very rare in the existing datasets. On the other hand, it is impossible to cover all probable cases of occlusion in the real datasets. Nevertheless, in the future, the current datasets can be extended through generated occlusion to include more of those infrequent cases with varying degrees of occlusion.
2.
Evaluation metrics: There are several quantitative and qualitative evaluation measures for GAN [59]. However, as can be seen from the reported results, there is no standard, unanimous evaluation metric for assessing the performance of GAN when it generates occluded content. Many existing works depend on human preference judgment, which can be biased and subjective. Therefore, designing a consensus evaluation metric is of utmost importance.
3.
Reference data: Existing GAN models fail to generate occluded content accurately if the hidden area is large, particularly when the occluded object is non-symmetric, such as a face or a human body. The visible region of the object may not hold sufficient relevant features to guide a visually plausible regeneration. As a next step, reference images can be used alongside the input image to guide the completion more effectively.
In addition to the above-mentioned problems, the challenges in the stability and convergence of GAN remain open issues [28].
Discussion
Current computational models approach the human capability of perceiving visible content when performing visual tasks such as recognition, detection, and segmentation. However, our environment is complex and dynamic: most of the objects we perceive are incomplete and fragmented. Therefore, existing models that are designed and trained with fully visible samples of instances do not perform well when tested on real-world scenes. Hence, overcoming occlusion is essential for improving the performance of the available models. Amodal completion tasks address the occluded patches of an image to infer the occlusion relation between objects (i.e., order recovery), predict the full shape of the objects (i.e., amodal segmentation), and complete the RGB appearance of the missing pixels (i.e., amodal content completion). These tasks are usually interleaved and depend on each other. For example, amodal segmentation can benefit order recovery [144] and is crucial for amodal content completion [76]. On the other hand, order recovery can guide amodal segmentation [75].
Although GAN is notorious for its stability issues and is difficult to train, it is a popular approach for tasks that require generative capability. In handling occlusion, the initially incomplete representation needs to be extended to a complete representation with the missing region filled in; therefore, GAN is the chosen architecture for the processes and sub-processes involved in amodal completion. However, depending on the nature of the problem, the applicability of GAN varies. In amodal appearance reconstruction, for example, GAN is the ideal architecture choice and produces superior results in comparison to other methods. In amodal segmentation and order recovery tasks, by contrast, GAN is less commonly used. Nevertheless, to take advantage of the potential of GAN, it can be combined with other architectures and learning strategies to tackle those tasks too.
In order to help GAN learn implicit features from the visible regions of the image, various methods are used, which can be summarized as follows:

• Architecture: While the original GAN consists of a single generator and discriminator, several works utilize multiple generators and discriminators. The implementation of local and global discriminators is especially common, because it enhances the quality of the generated data: the generator is encouraged to concentrate on both the global context and the local features, and to produce images that are closer to the distribution of the real data. In addition, an initial-to-refined (also called coarse-to-fine) architecture is implemented in many models; the initial stage produces a coarse output from the input image, which is then further polished in the refinement step.

• Objective function: To improve the quality of the generated output and stabilize the training of the GAN, a combination of loss terms is used. While the adversarial loss and the Hinge loss are used to train the two networks of the GAN, other objective functions encourage the model to produce an image that is consistent with the ground-truth image.

• Input: Under severe occlusion, the GAN may fail to produce a visually pleasing output depending solely on the visible region. Therefore, providing additional input information guides GAN toward better results. In amodal shape and content completion, synthetic instances similar to the occluded object are useful, because they can serve as a reference for the model. A priori knowledge is also beneficial, as it can either be manually encoded (e.g., utilizing various human poses for human de-occlusion) or transferred from a pre-trained model (e.g., using a pre-trained face recognition model in face de-occlusion). In addition, employing the amodal mask and the category of the occluded object in the content completion task restricts the GAN model to focus on completing the object in question. For producing the amodal mask, a modal mask is needed as an input; if it is not available, most existing works depend on a pre-trained segmentation model to predict the visible segmentation mask.

• Feature extraction: The pixels in the visible region of an image contain essential information for various tasks; hence, they are considered valid pixels. In contrast, the invisible pixels are invalid and should not be included in the feature extraction/encoding process. However, vanilla convolution cannot differentiate between valid and invalid pixels, which generates images with visual artifacts and color discrepancies. Therefore, partial convolution and a soft gating mechanism are implemented to force the generator to focus only on valid pixels and to eliminate or minimize the effect of the invalid ones. On the other hand, dilated convolution layers can replace the vanilla convolution layers to borrow information from relevant, spatially distant pixels. Additionally, contextual attention layers and attention mechanisms are added to the networks of the GAN to leverage information from the image context and capture global dependencies.
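The partial-convolution idea can be illustrated at a single output position: only valid pixels contribute, the result is rescaled by the fraction of valid pixels in the window, and the mask is updated to valid whenever at least one valid input exists. A sketch after Liu et al. [62], with toy values:

```python
import numpy as np

def partial_conv_single(patch, mask, weights, bias=0.0):
    """One output position of a partial convolution: only valid (mask == 1)
    pixels contribute, rescaled by window_size / num_valid."""
    valid = mask.sum()
    if valid == 0:
        return 0.0, 0                      # no valid inputs: output stays invalid
    scale = mask.size / valid
    out = (weights * (patch * mask)).sum() * scale + bias
    return out, 1                          # position becomes valid in the new mask

patch   = np.array([[1.0, 2.0], [3.0, 4.0]])
mask    = np.array([[1.0, 1.0], [0.0, 0.0]])   # bottom row is invalid
weights = np.ones((2, 2)) * 0.25

out, new_mask = partial_conv_single(patch, mask, weights)
```

As the mask is updated layer by layer, the valid region grows inward from the hole boundary, which is how stacked partial convolutions eventually fill the entire occluded area.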
Among the various architectures of GAN, three types are most commonly used in the works reviewed in this article, namely CGAN, WGAN-GP, and PatchGAN. CGAN is mostly applied in amodal content completion tasks, because the GAN is encouraged to complete an object of a specific class. WGAN-GP stabilizes the training of GAN with an EM-distance objective function and a gradient penalty; therefore, it is a preferred architecture for ensuring GAN convergence. On the other hand, PatchGAN is used in designing the discriminator, as it attempts to classify patches of the generated image as real or fake. Consequently, the image is penalized for style consistency at the patch level, while pixels that are spatially more than a patch diameter apart are assumed independent.
Finally, handling occlusion is fundamental in several computer vision tasks. For example, completing an occluded facial image helps in better recognizing the face and predicting the identity of the person. Similarly, inferring the full shape of pedestrians and vehicles as well as the occlusion relationship between them can lead to a safer autonomous driving. Furthermore, in surveillance cameras, amodal completion helps in target tracking and security applications.
Conclusions
GANs have been described as one of the most interesting ideas in machine learning in the last decade. Due to their generative capability, they extend the ability of artificial intelligence systems: GAN-based models are creative instead of mere learners. In the challenging field of amodal completion, GAN has had a significant impact, especially in generating the appearance of a missing region. This brings existing vision systems closer to the human capability of predicting the occluded area.
To help researchers in the field, in this survey we have reviewed the works in the literature in which a GAN is applied to accomplish the tasks of amodal completion and to resolve the problems that arise when addressing occlusion. We discussed the architecture of each model along with its strengths and limitations in detail, summarized the loss function and the dataset used in each work, and presented their results. We then discussed the most common types of objective functions implemented in training GAN models for occlusion handling. Finally, we provided a discussion of the key findings of our survey.
However, after reviewing the current progress in overcoming occlusion using GAN, we identified several key issues that remain open challenges in this line of research. These issues pave the way for future research directions; by addressing them, the field will progress significantly.

Data Availability Statement: The data will be made available upon request.
Contact inhibitory Eph signaling suppresses EGF-promoted cell migration by decoupling EGFR activity from vesicular recycling
Ephrin signaling in densely crowded cells alters EGFR recycling to inhibit migration induced by EGF.

Limiting movement in a crowded situation

The epidermal growth factor receptor (EGFR) mediates the distinct cellular processes of proliferation and migration, which do not always occur concomitantly upon EGFR stimulation. Eph receptors are activated by increasing cell density, and they suppress cell migration, in contrast to EGFR. Stallaert et al. (see also the Focus by Shi and Wang) found that Eph receptors selectively inhibited migration but not proliferation mediated by EGFR. Eph receptor activation prevented the recycling of EGFR to the cell surface (the subcellular compartment from where it mediates migratory signaling) by trapping EGFR in endosomes (the subcellular compartment from where it can continue to promote proliferative signaling). In addition, EGFR-mediated migration was also inhibited by the receptor Kiss1, which not only is structurally unrelated to Eph receptors but also inhibits cell migration by suppressing EGFR recycling. The authors note that this system enables different receptors to regulate a signaling pathway without needing to directly interact with components in that pathway.

The ability of cells to adapt their response to growth factors in relation to their environment is an essential aspect of tissue development and homeostasis. We found that signaling mediated by the Eph family of receptor tyrosine kinases from cell-cell contacts changed the cellular response to the growth factor EGF by modulating the vesicular trafficking of its receptor, EGFR. Eph receptor activation trapped EGFR in Rab5-positive early endosomes by inhibiting Akt-dependent vesicular recycling. By altering the spatial distribution of EGFR activity, EGF-promoted Akt signaling from the plasma membrane was suppressed, thereby inhibiting cell migration. In contrast, ERK signaling from endosomal EGFR was preserved to maintain a proliferative response to EGF stimulation.
We also found that soluble extracellular signals engaging the G protein–coupled receptor Kiss1 (Kiss1R) similarly suppressed EGFR vesicular recycling to inhibit EGF-promoted migration. Eph or Kiss1R activation also suppressed EGF-promoted migration in Pten−/− mouse embryonic fibroblasts, which exhibit increased constitutive Akt activity, and in MDA-MB-231 triple-negative breast cancer cells, which overexpress EGFR. The cellular environment can thus generate context-dependent responses to EGF stimulation by modulating EGFR vesicular trafficking dynamics.
INTRODUCTION
Activation of epidermal growth factor receptor (EGFR) promotes various cellular responses, including cell growth, proliferation, survival, apoptosis, differentiation, and migration (1), some of which are functionally opposed. To select among these diverse outcomes, the cell requires additional contextual information. This context can be intrinsic (dependent on the cell type or cell cycle stage, for example), or extrinsic, in the form of extracellular signals that provide information about the current (or past) environmental context. Adaptability to a changing environment requires that extrinsic information be integrated through mechanisms that can transform the response to subsequent growth factor stimulation.
Local cell density is one such example of extrinsic context that can influence cellular activity to generate distinct functional states (2)(3)(4). The Eph family of receptor tyrosine kinases acts as a sensor of cell density: Eph receptors become activated at points of cell-cell contact through interactions with membrane-bound ephrin ligands presented on the surfaces of adjacent cells (5). In many ways, Eph receptors operate in functional opposition to EGFR, acting as tumor suppressors (5)(6)(7)(8)(9)(10)(11) and mediating contact inhibition of locomotion to suppress cellular migration and metastasis (12)(13)(14)(15). Moreover, a functional coupling of EGFR and Eph receptor activity controls cell migration (15). Although the precise mechanism through which Eph receptors regulate EGF-promoted migration remains elusive, a convergence of receptor activity on phosphoinositide 3-kinase (PI3K)/Akt signaling has been implicated.
Akt regulates EGFR vesicular trafficking through the endosomal system (16). By stimulating the activity of the early endosomal effector PIKfyve (FYVE-containing phosphatidylinositol 3-phosphate 5-kinase), Akt activity controls the transition of EGFR through early endosomes, regulating both its recycling back to the plasma membrane (PM) and its degradation in the lysosome. Thus, while endocytosis of cell surface receptors has traditionally been viewed as a mechanism to attenuate downstream signaling after ligand stimulation, the notion that signaling molecules downstream of cell surface receptors can, in turn, influence vesicular trafficking (16)(17)(18)(19)(20)(21) generates a reciprocal relationship between receptor activation and vesicular dynamics whose role in shaping the cellular response to stimuli has begun to garner attention (22). Furthermore, this bidirectional relationship could also allow the signaling activity of one receptor to influence the response properties of another through changes in its vesicular trafficking dynamics, generating context-dependent receptor activity.
Here, we demonstrated that Eph receptor activation at cell-cell contacts regulated the vesicular dynamics of EGFR by inhibiting Akt-dependent trafficking. By modulating the spatial distribution of EGFR activity, Eph receptor activation altered the cellular response to EGF stimulation, selectively suppressing EGF-promoted migratory signaling while preserving its effect on proliferation.
Constitutive internalization and recycling regulate the abundance of EGFR at the PM. Therefore, we hypothesized that Eph receptor activation might reduce PM EGFR abundance by trapping constitutively recycling receptors in endosomes. Activation of Eph receptors decreases the activity of Akt (12,14,26,27), a signaling effector that regulates EGFR vesicular trafficking (16). A1 stimulation of Cos-7 cells, which endogenously express multiple EphA isoforms including EphA2, EphA3, EphA4, EphA7, and EphA8 (fig. S1C and table S1), decreased Akt activity (Fig. 1C and fig. S1D). A reduction in Akt activation and a concomitant loss of PM EGFR were also observed after Eph receptor activation in human embryonic kidney (HEK) 293 cells, mouse embryonic fibroblasts (MEFs), MCF10A cells, and MDA-MB-231 cells (fig. S1, E to H), which exhibit a wide range of EGFR expression (fig. S1I). The abundance of PM EGFR over time followed the changes in Akt activity across the different cell types. Notably, in MEFs, changes in PM EGFR abundance followed a biphasic decrease in Akt activation in response to A1 stimulation (fig. S1F). Pharmacological inhibition of Akt with AktVIII (Fig. 1D and fig. S1D) or of its downstream, early endosomal effector PIKfyve with YM201636 (Fig. 1E) (16) reduced PM EGFR abundance in Cos-7 cells, as did knockdown of PIKfyve by small interfering RNA (siRNA) (fig. S2A). Eph receptor activation and PIKfyve inhibition promoted a similar decrease in PM EGFR abundance (Fig. 1E). In addition, the combination of Eph receptor activation with PIKfyve inhibition did not further reduce PM EGFR abundance (Fig. 1E), suggesting a shared molecular mechanism. Consistent with a suppression of constitutive EGFR recycling, we observed an endosomal accumulation of ectopically expressed EGFR-mCherry in live cells after Akt or PIKfyve inhibition (Fig. 1F, movie S1, and fig. S2B).
Time-lapse confocal imaging of Cos-7 cells expressing EGFR-mCherry and EphA2-mCitrine also revealed endosomal accumulation of EGFR with time after soluble A1 stimulation (Fig. 1, G and H, and movie S2) or upon presentation of ephrinA1 ligand on the membrane of adjacent cells at sites of cell-cell contact (movie S3), leading to a decrease in PM EGFR abundance (Fig. 1I). This shift in the spatial distribution of EGFR occurred primarily through the trapping of receptors in Rab5-positive early endosomes, as observed for both ectopically expressed (Fig. 1J, left) and endogenous EGFR (Fig. 1K), consistent with an inhibition of Akt-dependent trafficking (16). Thus, Eph receptor activation alters the subcellular distribution of EGFR before growth factor stimulation by inhibiting Akt/PIKfyve-dependent vesicular recycling and trapping constitutively recycling receptors in Rab5-positive early endosomes.
We next investigated how Eph receptor activation influences EGFR trafficking during EGF stimulation. The trafficking fate of EGFR through the endosomal system is determined by posttranslational modifications, with the ubiquitination of ligand-bound receptors acting as a molecular signal that diverts EGFR through Rab7-positive late endosomes to lysosomes for degradation (25). Saturating EGF concentrations (>50 ng/ml) (28), therefore, generate a finite temporal signaling response by progressively depleting liganded EGFR through ubiquitin-dependent lysosomal degradation. Stimulation of endogenous receptors in Cos-7 cells with a saturating concentration of EGF (100 ng/ml) induced a ~40% reduction in total EGFR expression after 60 min of stimulation (fig. S2, C and D), and residual EGFR resided primarily in Rab7-positive late endosomes (Fig. 1J, right). In contrast, A1 pretreatment or direct Akt inhibition impaired Rab5-to-Rab7 endosomal maturation (Fig. 1J, right) (29), leading to a reduction in receptor degradation at saturating EGF concentrations (≥50 ng/ml; fig. S2, C and D).
At subsaturating EGF concentrations typically found in human tissue secretions (0.4 to 20 ng/ml) (30), only a fraction of receptors are ligand-bound, receptor ubiquitination is reduced (31), and unliganded, nonubiquitinated receptors are recycled back to the PM (32). Therefore, during subsaturating EGF stimulation, the recycling of unliganded receptors is necessary to counter the EGF-induced depletion of receptors by endocytosis and maintain sensitivity to persistent growth factor stimulation (25). To assess whether Eph receptors inhibit EGFR recycling during subsaturating EGF stimulation, we exposed endogenous receptors in Cos-7 cells to a pulse of EGF (10 ng/ml) to induce EGFR endocytosis and measured its subsequent return to the PM after EGF washout (Fig. 1L). Whereas we observed a complete recovery of PM EGFR abundance in control cells after EGF washout, A1 pretreatment completely suppressed EGFR recycling. Thus, Eph receptor activation suppresses EGFR trafficking from the early endosome during EGF stimulation, impairing the recycling of nonubiquitinated receptors back to the PM and inhibiting the degradation of ubiquitinated receptors in the lysosome.
Eph receptor activation changes the spatial distribution of EGFR activity
Many functional outcomes to EGFR activation, such as cellular migration, require that cells remain responsive to persistent growth factor stimulation. To ensure sensitivity to stimuli during long periods of exposure, cells must maintain sufficient receptor abundance at the PM despite continuous internalization of activated receptors. We therefore posed the following questions: (i) Does Akt-dependent recycling help maintain cellular responsiveness to EGF during persistent, subsaturating stimulation, and (ii) can Eph receptor activation at cell-cell contacts change the response properties of EGFR by modulating its vesicular trafficking?
To address the impact of Akt-dependent recycling on EGFR activation, measurements of endogenous EGFR phosphorylation and trafficking in Cos-7 cells were obtained by immunofluorescence after subsaturating EGF stimulation in control cells and after Eph receptor activation or inhibition of Akt or PIKfyve. Individual cells were radially segmented to quantify changes in the average spatial distribution of EGFR abundance and activity with time and visualized using three-dimensional (3D) spatiotemporal maps (Fig. 2A). Through an accumulation of EGFR in endosomal compartments during sustained EGF stimulation, cells pretreated with either A1 or an Akt inhibitor generated less EGFR phosphorylation after 60 min of EGF stimulation relative to control cells (Fig. 2, A and B). Decoupling Akt activation from its effect on trafficking by inhibiting PIKfyve had similar effects as direct Akt inhibition or A1 pretreatment on EGFR phosphorylation and trafficking (Fig. 2, A and B), indicating that Akt activity maintains EGFR activation at the PM during sustained, subsaturating EGF stimulation by promoting vesicular recycling.
To quantify EGFR phosphorylation at the PM and on endosomes during sustained, subsaturating EGF stimulation, we used fluorescence lifetime imaging microscopy (FLIM) to detect Förster resonance energy transfer (FRET) between EGFR-mCitrine and a phosphotyrosine-binding domain fused to mCherry (PTB-mCherry) (33) in Cos-7 cells (Fig. 2, C to F). In control cells, EGFR-mCitrine remained highly phosphorylated at both the PM and in endosomes after 60 min of sustained EGF-Alexa Fluor 647 stimulation (Fig. 2, C and D). In cells pretreated with A1 or after Akt or PIKfyve inhibition, we observed reduced PM EGFR-mCitrine density (Fig. 2E) and EGF-Alexa Fluor 647 binding (Fig. 2F), resulting in diminished EGFR-mCitrine phosphorylation specifically at the PM (Fig. 2D). In conditions in which Akt-dependent recycling was suppressed, ligand-bound and active EGFR-mCitrine accumulated in endosomes (Fig. 2, C, E, and F), maintaining its phosphorylation in this compartment to the same extent as control cells (Fig. 2D). By inhibiting Akt-dependent recycling, Eph receptor activation thus changes the spatial distribution of EGFR activity during sustained, subsaturating EGF stimulation, selectively reducing EGFR activation at the PM while preserving receptor activity in endosomes.
Eph receptor activation at cell-cell contact alters the EGFR signaling response
Although EGFR continues to activate signaling effectors from endosomal membranes (34)(35)(36)(37)(38)(39)(40)(41)(42), Akt is preferentially activated at the PM (fig. S3, A to D) (43,44). We therefore investigated how Eph receptor activation, by changing the spatial distribution of EGFR activity, regulates its signaling response during EGF stimulation. By suppressing vesicular recycling and reducing EGFR activity at the PM, A1 pretreatment selectively inhibited Akt activation during sustained, subsaturating EGF stimulation of endogenous receptors (Fig. 3A). To confirm that EphA2 inhibits EGF-promoted Akt activation by suppressing EGFR recycling, and that this does not simply reflect the opposed regulation of Akt by EGFR and EphA2 (activation and inhibition, respectively), we assessed whether EGFR trafficking was dispensable for the A1-induced suppression of EGF-promoted Akt activation. Cells were prestimulated with A1, treated with the dynamin inhibitor dynole 34-2 to block subsequent endocytosis, and then stimulated with EGF. When EGFR endocytosis was blocked (fig. S3C), A1 pretreatment did not reduce EGF-promoted Akt activation (Fig. 3B, top). Pretreatment with the negative control analog dynole 31-2, to control for off-target effects, did not inhibit EGFR endocytosis (fig. S3C) and did not affect the A1-induced suppression of EGF-promoted Akt activation (Fig. 3B, bottom), corroborating that intact EGFR vesicular trafficking is required for the inhibitory effect of Eph receptors on EGFR signaling. Increasing concentrations of A1 progressively inhibited EGF-mediated Akt activation (Fig. 3C), suggesting that the degree of cell-cell contact might determine the magnitude of Akt activation in response to a given concentration of EGF. Use of a conformational FRET-based sensor of EphA2 activity (24) demonstrated that homotypic cell-cell contact promoted Eph receptor activation through interactions with ephrins presented on neighboring cells (Fig. 3D).
Furthermore, activation of Eph receptors at cell-cell contacts promoted the recruitment of the effector SH2 domain tagged with mCherry (46) to phosphorylated receptors (Fig. 3D), indicating their signaling competency. To directly investigate the influence of cell-cell contact on EGFR signaling, we obtained single-cell measurements of Akt and ERK activation in Cos-7 cells with varying degrees of cell-cell contact.
Akt activation decreased with cell-cell contact both before and after EGF stimulation (Fig. 3E), demonstrating that increasing cell-cell contact reduces the magnitude of EGF-promoted Akt activation. In contrast, ERK activation was unaffected by cell-cell contact, with cells generating similar EGF-promoted increases in ERK activation irrespective of their degree of cell-cell contact (Fig. 3F).
Coupling EGFR activity to vesicular recycling generates positive feedback
Although the inhibition of Akt-dependent recycling resulted in reduced PM EGFR abundance in Cos-7 cells (Fig. 1, A, B, D, and E, and figs. S1, E to H, and S2A), we also observed that increasing cellular Akt activity through the inhibition of its negative regulator PP2A by okadaic acid (Fig. 4A) or ectopic expression of the constitutively active Akt D323A/D325A mutant (Fig. 4B) (47) resulted in a concomitant increase in PM EGFR. Because EGFR activation itself increased Akt activity in cells (Fig. 3, A to C and E, and fig. S3, C and D), we next asked whether PM EGFR abundance is actively maintained during growth factor stimulation through an EGF-induced increase in Akt-dependent vesicular recycling. Using a fluorescence localization after photoactivation (FLAP) approach to quantify the vesicular recycling of EGFR to the PM after photoactivation of EGFR-paGFP in endosomes, we observed an increase in EGFR-paGFP recycling during EGF stimulation (Fig. 4C). Akt inhibition completely suppressed this EGF-promoted increase in vesicular recycling (Fig. 4C), further demonstrating the contribution of Akt-dependent recycling in sustaining PM EGFR activity. Thus, by stimulating Akt-dependent recycling, EGFR activation generates a positive feedback that actively maintains its PM abundance during EGF stimulation. Positive feedback in combination with inhibitory network motifs can convert graded inputs into switch-like, ultrasensitive signaling responses (48). Because Akt is preferentially activated at the PM (fig. S3, A to D), the EGF-induced increase in EGFR vesicular recycling (Fig. 4C) might generate a positive feedback for Akt activation (Fig. 4D). To investigate whether this positive feedback can generate a switch-like activation of Akt, we measured Akt phosphorylation in thousands of individual Cos-7 cells by flow cytometry after sustained stimulation with a range of EGF concentrations (Fig. 4, E and F).
Cells were stimulated in suspension to negate in situ cell-cell contact as an extrinsic source of variability in Akt activation (Fig. 3E). At concentrations of ≥1 ng/ml, EGF stimulation produced a switch-like activation to a high Akt phosphorylation state in a subpopulation of cells, whose proportion increased with EGF concentration (Fig. 4, E and F, top). Decoupling EGFR activation from its effect on vesicular recycling by PIKfyve inhibition (Fig. 4, E and F, middle) or A1 pretreatment (Fig. 4, E and F, bottom) did not result in a global decrease in cellular Akt activation but rather reduced the proportion of cells generating a high Akt phosphorylation state (Fig. 4F), consistent with the inhibition of a positive feedback that produces this switch-like response. Intrinsic cell-to-cell variability in the EGF threshold required to stimulate Akt-dependent vesicular recycling therefore determines the proportion of cells that transition to a high Akt activity state at a given EGF concentration. Eph receptor activation, by decoupling EGFR activation from its effect on vesicular trafficking, reduces Akt activation within the population by decreasing the proportion of cells transitioning to a high Akt activity state during EGF stimulation.
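The population-level switch described above can be illustrated with a toy dynamical model (not the authors' model: the feedback strength, Hill coefficient, rate constants, lognormal threshold distribution, and the high-Akt cutoff of 0.6 are all hypothetical). Akt activity increases PM EGFR through recycling, PM EGFR in turn scales EGF-driven Akt activation, and cell-to-cell variability in the EGF threshold determines the proportion of cells that flip to the high-Akt state:

```python
import numpy as np

def akt_steady_state(egf, k_thresh, feedback=1.0, n=4, steps=500):
    """Relax a toy positive-feedback loop to steady state.
    Akt activity A raises PM EGFR abundance R through recycling;
    R in turn scales EGF-driven Akt activation. All parameters
    are illustrative, not fitted to the data in this study."""
    A = 0.05
    for _ in range(steps):
        R = 0.2 + feedback * A**n / (0.5**n + A**n)  # recycling-dependent PM EGFR
        drive = egf / (egf + k_thresh)               # fractional receptor occupancy
        A += 0.1 * (min(2.0 * R * drive, 1.0) - A)   # relax toward the input
    return A

def fraction_high_akt(egf, feedback=1.0, cells=400, seed=1):
    """Proportion of cells in a high-Akt state (A > 0.6), with
    lognormal cell-to-cell variability in the EGF threshold."""
    rng = np.random.default_rng(seed)
    k = rng.lognormal(mean=np.log(5.0), sigma=0.5, size=cells)
    return float(np.mean([akt_steady_state(egf, ki, feedback) > 0.6 for ki in k]))
```

In this sketch the high-Akt fraction rises steeply with EGF dose, and setting `feedback=0` (decoupling Akt activation from recycling) lowers that fraction without abolishing Akt activity altogether, qualitatively mirroring the flow cytometry observation.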
Eph activation at cell-cell contact suppresses the EGF-promoted transition to a migratory state

EGFR signaling to effectors at the PM generates exploratory cellular behaviors (49)(50)(51)(52)(53)(54)(55)(56) that must be maintained to induce a persistent migratory response. Given that contact inhibitory Eph receptor activation selectively suppresses PM signaling during sustained, subsaturating EGF stimulation (Figs. 3, A and E, and 4, E and F), we investigated whether cell-cell contact regulates EGF-promoted migration by inhibiting Akt-dependent recycling. Because Cos-7 cells exhibit limited migratory behavior, we examined MEFs, which generate a haptotactic migratory response to fibronectin that is enhanced by EGF through an increase in exploratory behavior (57). Similar to Cos-7 cells, these cells express several EphA isoforms, including EphA2, EphA3, and EphA5 (58,59), and also exhibited an Eph activity-dependent depletion of PM EGFR abundance (Fig. 5, A and B, and fig. S1F). After stimulation with a subsaturating EGF concentration (20 ng/ml), we observed a significant increase in the proportion of migratory cells (Fig. 5C, top; movie S4; and fig. S5, A and B) but no change in the average distance traveled per cell (Fig. 5C, bottom). These findings indicate that EGF promotes the transition of individual cells to a migratory state rather than increasing overall cellular motility. Because EGF binding promotes receptor ubiquitination and degradation, leading to a loss in EGF sensitivity with time, sustained stimulation with supraphysiological, saturating EGF concentrations (100 ng/ml) did not significantly increase the proportion of migratory cells (Fig. 5C, top). Decoupling EGFR activation from its effect on Akt-dependent recycling through the inhibition of PIKfyve or after Eph receptor activation decreased the proportion of migratory cells (Fig. 5C, top). We observed further that increasing concentrations of A1 progressively decreased EGF-induced migration (Fig. 5C, top), consistent with its concentration-dependent effect on EGF-promoted Akt activation (Fig. 3C) and suggesting that the amount of ephrinA1-Eph receptor interactions at points of cell-cell contact may determine whether a cell initiates a migratory response to EGF. We found that the number of migratory cells after EGF stimulation was inversely proportional to cell density (Fig. 5D) and that the increase in migration observed at low densities could be countered by treatment with soluble A1 to mimic Eph receptor contact inhibitory signaling (Fig. 5D). Thus, physiological Eph receptor activation at points of homotypic cell-cell contact suppresses EGF-promoted migration by inhibiting Akt-dependent vesicular recycling. The influence of Eph receptor activation on the migration of cells was also assessed using a transwell migration assay. EGF promoted a significant increase in the migration of MEFs, which was completely suppressed by A1 prestimulation or PIKfyve inhibition (Fig. 5E).
We next investigated the influence of Akt-dependent recycling on EGF-promoted migration in pathological contexts in which either Akt activity is dysregulated or EGFR is overexpressed. Knockout of phosphatase and tensin homolog (PTEN) in MEFs (Pten−/− MEFs), which increases cellular Akt activity (60), enhanced both autonomous and EGF-promoted directional migration (Fig. 5F). Similarly, the triple-negative breast cancer cell model MDA-MB-231, in which EGFR is overexpressed (fig. S1I), exhibited an increase in motility relative to wild-type MEFs (Fig. 5G). Inhibition of Akt-dependent recycling by either A1 prestimulation or PIKfyve inhibition also blocked EGF-induced chemotaxis in both cell lines (Fig. 5, F and G), indicating that decoupling Akt activity from its effect on vesicular trafficking can inhibit cell migration even when Akt activity or EGFR expression is pathologically increased.
Suppression of Akt-dependent recycling selectively inhibited PM signaling while leaving endosomal ERK activation intact (Fig. 3A). Consistent with this result, we found that neither PIKfyve inhibition nor A1 pretreatment reduced EGF-promoted cell proliferation in wild-type MEFs (Fig. 5H). Thus, by altering the spatiotemporal distribution of EGFR activity, contact inhibitory signaling by Eph receptors influences the cellular outcome to EGF stimulation, preserving a proliferative response while suppressing cell migration.
Modulation of vesicular dynamics may represent a general mechanism to produce context-dependent EGFR signaling
To determine whether environmental signals other than cell-cell contact can influence the cellular response to EGF stimulation through changes in EGFR trafficking, we investigated the effect of activation of the G protein-coupled receptor Kiss1 (Kiss1R), which, similar to Eph receptors, inhibits Akt (61) and suppresses cell migration and metastatic invasion (62). Stimulation with the soluble Kiss1R ligand kisspeptin-10 (Kp-10) reduced Akt activity in HEK293 cells and decreased PM EGFR abundance (Fig. 6A). Similar to the effect of cell-cell contact, pretreatment with Kp-10 selectively inhibited EGF-promoted Akt activation (Fig. 6B) while preserving ERK activation (Fig. 6C). Furthermore, activation of Kiss1R completely suppressed EGF-promoted migration of both wild-type and Pten−/− MEFs, as well as MDA-MB-231 breast cancer cells (Fig. 6, D to F). The modulation of EGFR vesicular trafficking dynamics could therefore provide a general mechanism to generate plasticity in the signaling response to EGFR activation, through which diverse environmental signals such as cell-cell contact or soluble stimuli like Kp-10 can influence the cellular response to EGF.
DISCUSSION
Here, we demonstrated that Eph receptor activation at cell-cell contacts generated context-dependent cellular responses to EGF stimulation by modulating EGFR vesicular trafficking dynamics. Chemotaxis requires that cells remain responsive to stimuli for prolonged periods of time as they migrate toward the chemotactic source. EGF stimulation promoted an increase in Akt-dependent recycling (Fig. 4C), which maintains sensitivity to EGF by sustaining unliganded EGFR abundance at the PM to counter the depletion of liganded receptors by endocytosis. A1-promoted inhibition of both the constitutive and EGF-induced recycling of unliganded EGFR thereby changes the signaling output of the receptor and alters the cellular response to EGF stimulation. Because Akt itself is preferentially activated at the PM (fig. S3, A to D), the EGF-promoted increase in vesicular recycling generates a positive feedback that switches cells to a high Akt activation state (Fig. 4, E and F). Although Akt has previously been observed on endosomal membranes through interactions with the early endocytic adaptor protein APPL1 (47,63), de novo activation of Akt by EGFR requires the production of phosphatidylinositol 3,4,5-trisphosphate [PI(3,4,5)P3], which is impeded by the low abundance of phosphatidylinositol 4,5-bisphosphate [PI(4,5)P2] in endosomal membranes (43,64). Akt activation may occur, to some extent, on endosomal membranes (65); however, because the coupling of active EGFR to Akt activation will be more efficient at the PM, any perturbations that influence the spatial distribution of EGFR, such as Eph or Kiss1R activation, should influence the capacity of EGFR to activate Akt (Figs. 3, A and E, and 6, A and B). We observed that the switch to a high Akt activity state only occurred in a proportion of cells, even in the absence of in situ cell-cell contacts, and increased with EGF concentration (Fig. 4, E and F), a variability that has previously been attributed to cell-to-cell variation in PI3K expression (66).
Our data suggest that intrinsic variability in the expression of signaling and/or trafficking effectors in individual cells may determine the EGF concentration required to stimulate Akt-dependent trafficking and engage the positive feedback that produces a high Akt activity state. Small differences in EGF concentration substantially influenced the proportion of cells generating a high Akt response (for example, a shift from 5 to 10 ng/ml increased the proportion of cells from 43 to 85%; Fig. 4, E and F). It may not be coincidental that the concentration range over which this switch occurs corresponds to the physiological range of EGF concentrations (30). By generating a sharp boundary for Akt activation within the physiological EGF concentration regime, even slight changes in the threshold of this switch could have profound implications for tissue dynamics (for example, the initiation of migration). Eph receptor activation decreased the proportion of cells generating a high Akt response from 85 to 41% in response to a given concentration of EGF (10 ng/ml) (Fig. 4, E and F). The dependence of Akt activation on EGFR recycling thus allows the degree of cell-cell contact to regulate the proportion of cells generating a migratory response to EGF stimulation. PI3K/Akt signaling has previously been suggested as the point of convergence for EGFR/Eph control of cell migration (15); however, the molecular mechanism underlying this oppositional relationship remained unclear. Our results indicate that Eph receptor activation inhibits EGF-promoted cell migration by suppressing Akt-dependent recycling. By inhibiting EGFR recycling, Eph activation impedes the spatially maintained positive feedback that generates a high Akt response and decreases the sensitivity of cells to persistent EGF stimulation, which is necessary to maintain an exploratory behavior. However, by changing the spatial distribution of EGFR activity (Fig. 2, C and D), Eph receptor activation selectively suppressed migratory signaling from the PM while leaving proliferative ERK signaling intact (Fig. 3, A, E, and F). This contextual plasticity generates two distinct cellular outcomes to EGF stimulation that may be important in physiological settings such as wound healing. At the tissue boundary, cells with reduced cell-cell contact would increase their exploratory behavior in response to EGF released at the site of the wound. Cells located deeper in the tissue, despite extensive cell-cell contacts, would retain their proliferative response to extracellular EGF and undergo mitosis to fill the vacant space created as exploratory cells migrate to occupy the wound area.
Our observations demonstrate that communication between receptors with opposed functionality can emerge through changes in vesicular trafficking dynamics rather than relying on direct interactions between the receptors or their respective effectors. Such a mechanism also allows different receptors with similar functional roles (for example, EphA2 and Kiss1R) to alter the cellular response to stimuli without having to evolve distinct protein interaction domains to do so. The dependency of EGFR signaling on its vesicular dynamics could confer a general mechanism through which the cell can generate functional plasticity to growth factor stimulation while preserving specificity in cell-cell communication.
Antibodies
The primary antibodies used were as follows: mouse anti-Akt …

(Figure legend fragment: Cells were pretreated for 1 hour with serum-free medium or Kp-10 (purple, 100 nM) before seeding; means ± SEM from three independent experiments. Control and EGF data were previously presented in Fig. 5, E to G.)
In-Cell Western and On-Cell Western
Cells were seeded on black, transparent-bottomed 96-well plates (3340, Corning) coated with poly-l-lysine (P6282, Sigma-Aldrich). Cells were fixed with Roti-Histofix 4% (Carl Roth) for 5 min at 37°C. For ICW, cells were permeabilized with 0.1% (v/v) Triton X-100 for 5 min at room temperature. For OCW, cells were not permeabilized. Samples were incubated in Odyssey tris-buffered saline (TBS) blocking buffer (LI-COR Biosciences) for 30 min at room temperature. Primary antibodies were incubated overnight at 4°C, and secondary antibodies (IRDyes, LI-COR Biosciences) were incubated in the dark for 1 hour at room temperature. All wash steps were performed with TBS (pH 7.4). Intensity measurements were made using the Odyssey Infrared Imaging System (LI-COR Biosciences). ICWs were calibrated by Western blots to ensure accurate quantification (fig. S1D). Quantification of the integrated intensity in each well was performed using the Micro Array Profile plugin (OptiNav Inc.) for ImageJ v1.47 (http://rsbweb.nih.gov/ij/). In each ICW or OCW, two to four replicates per condition were obtained per experiment, and all data presented represent means ± SEM from at least three independent experiments.
Immunofluorescence
Cells were cultured on four- or eight-well chambered glass slides (Lab-Tek) and fixed with 4% (w/v) paraformaldehyde/phosphate-buffered saline (PBS) for 10 min at 4°C. To measure PM EGFR, fixed, nonpermeabilized samples were first incubated with primary antibody directed at an extracellular epitope of EGFR (AF231, R&D Systems) overnight at 4°C followed by secondary antibody for 1 hour at room temperature. For all other immunofluorescence experiments, samples were permeabilized with 0.1% (v/v) Triton X-100 for 5 min at room temperature before incubation with primary antibodies. All wash steps were performed with TBS (pH 7.4). Fixed samples were imaged in PBS at 37°C. For all analyses, an initial background subtraction was performed on immunofluorescence images. To quantify the proportion of EGFR in Rab5 or Rab7 compartments, binary masks were generated from intensity thresholded images of Rab5 and Rab7 staining. To generate a mask of Rab5/Rab7 double-positive endosomes, the product of their individual masks was used.
The integrated fluorescence intensity of EGFR-mCherry was determined in each of the endosomal masks and divided by the total integrated EGFR fluorescence intensity of the cell. Image analysis was performed using ImageJ. A cell segmentor tool was developed in-house in Anaconda Python (Python Software Foundation version 2.7; www.python.org/) to quantify the spatial distribution of EGFR and EGFR(pTyr845) in fixed cells. Cells were divided into six equally spaced radial bins emanating from the plasma membrane.
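The two quantifications just described can be sketched in NumPy/SciPy (a minimal reconstruction, not the in-house tool; the intensity threshold and the distance-transform-based radial binning are illustrative assumptions):

```python
import numpy as np
from scipy import ndimage

def compartment_fractions(egfr, rab5, rab7, thresh=0.5):
    """Fraction of total cellular EGFR intensity inside intensity-thresholded
    Rab5/Rab7 masks and their overlap (double-positive endosomes).
    The threshold value is illustrative."""
    m5, m7 = rab5 > thresh, rab7 > thresh
    total = egfr.sum()
    return {"rab5": egfr[m5].sum() / total,
            "rab7": egfr[m7].sum() / total,
            "rab5_rab7": egfr[m5 & m7].sum() / total}

def radial_profile(egfr, cell_mask, n_bins=6):
    """Mean EGFR intensity in n_bins equally spaced radial bins, measured
    inward from the cell boundary (bin 0 = PM-proximal)."""
    dist = ndimage.distance_transform_edt(cell_mask)   # distance from cell edge
    edges = np.linspace(0.0, dist[cell_mask].max(), n_bins + 1)
    bin_idx = np.digitize(dist, edges[1:-1])           # 0 .. n_bins-1
    out = []
    for b in range(n_bins):
        sel = cell_mask & (bin_idx == b)
        out.append(egfr[sel].mean() if sel.any() else 0.0)
    return np.array(out)
```

Applied to a cell with EGFR concentrated at the periphery, `radial_profile` yields a monotonically decaying profile from bin 0 (membrane-proximal) to bin 5 (perinuclear).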
Confocal imaging
Cells were cultured for live-cell confocal imaging on four- or eight-well chambered glass slides (Lab-Tek) and transiently transfected as described above. Confocal images were recorded using an Olympus FluoView FV1000 confocal microscope (Olympus Life Science Europa) or a Leica SP8 confocal microscope (Leica Microsystems).
Leica SP8
12-bit images of 512 × 512 pixels were acquired in a frame-by-frame sequential mode.
Analysis of time-lapse confocal imaging
All analyses of live-cell imaging data began with background subtraction of the acquired images. To quantify the proportion of endosomal EGFR-mCherry or EphA2-mCitrine, binary masks of endosomes were generated from intensity thresholded images. The integrated fluorescence intensity of EGFR-mCherry and EphA2-mCitrine was determined in their corresponding endosomal masks and divided by the total integrated fluorescence intensity of the cell. FLAP experiments were carried out at 37°C on a Leica SP8. EGFR-mCherry was coexpressed to identify and select regions of endosomal EGFR for photoactivation. Background intensity of EGFR-paGFP before photoactivation was measured and subtracted from postactivation images. Photoactivation of EGFR-paGFP was performed with the 405-nm laser at 90% power. After photoactivation, fluorescence images of EGFR-paGFP were acquired every minute for a total of 15 min. PM EGFR-paGFP fluorescence was quantified as the integrated intensity in a five-pixel ring of the cell periphery and, after subtracting preactivation background intensity, was calculated as the proportion of total EGFR-paGFP intensity.
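The peripheral-ring measurement can be sketched as follows (a simplified reconstruction assuming an erosion-based ring definition; the input image is assumed to be background-subtracted, as described above):

```python
import numpy as np
from scipy import ndimage

def pm_ring_fraction(img, cell_mask, ring_px=5):
    """Integrated intensity in a ring_px-wide ring at the cell periphery,
    expressed as a fraction of total intensity within the cell.
    `img` is assumed to be background-subtracted already."""
    # Erode the cell mask ring_px times; the difference is the peripheral ring.
    interior = ndimage.binary_erosion(cell_mask, iterations=ring_px)
    ring = cell_mask & ~interior
    return img[ring].sum() / img[cell_mask].sum()
```

For a photoactivated EGFR-paGFP time series, applying this per frame yields the recycling curve: the PM fraction rises as endosomal receptors return to the cell surface.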
Fluorescence lifetime imaging microscopy
EGFR-mCitrine, PTB-mCherry, and HA-c-Cbl-BFP were ectopically expressed in Cos-7 cells. Fluorescence lifetime measurements of EGFR-mCitrine were performed at 37°C on a Leica SP8 equipped with a time-correlated single-photon counting module (LSM Upgrade Kit, PicoQuant) using a 63×/1.4 NA oil objective. EGFR-mCitrine was excited using a pulsed white light laser at a frequency of 20 MHz and a wavelength of 514 nm, and fluorescence emission was restricted to 525 to 570 nm with an AOBS. Photons were integrated for a total of approximately 2 min per image using the SymPhoTime software V5.13 (PicoQuant). Data analysis was performed using custom software in Anaconda Python based on global analysis, as described in (67). Fluorescence lifetime measurements of LIFEA2 were performed and analyzed as previously described (24).
Flow cytometry
Cells were detached using Accutase, centrifuged at 200g for 5 min, and resuspended in serum-free DMEM before EGF stimulation. Cells were fixed with 5% (w/v) sucrose/Roti-Histofix for 15 min at 37°C. Ice-cold methanol was added to 90% (v/v) for 30 min on ice. Cells were rinsed once with 0.5% (w/v) bovine serum albumin/TBS and incubated with Odyssey TBS blocking buffer (LI-COR Biosciences) for 30 min at room temperature. Anti-phospho-Akt(Ser473)-Alexa Fluor 647 (4075, CST) was added directly to blocking buffer and incubated overnight at 4°C. Anti-Akt-Alexa Fluor 488 (2917, CST) was added for 2 hours before measurement. Samples were analyzed using an LSR II flow cytometer (BD Biosciences). Alexa Fluor 488 was excited with a 488-nm laser, and fluorescence emission was collected using a 505-nm long-pass dichroic and a 530/30-nm filter. Alexa Fluor 647 was excited with a 633-nm laser, and fluorescence emission was collected using a 670/40-nm filter. Samples were analyzed using FlowJo v10 (FlowJo LLC) to obtain single-cell intensity measurements of phosphorylated and total Akt. Population distributions of log(phosphorylated/total Akt) were fitted with a single Gaussian or a sum of two Gaussian distributions using GraphPad Prism (GraphPad Software).
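The single- versus double-Gaussian fit of the log-ratio distributions (done in GraphPad Prism in the text) can be reproduced with SciPy; the synthetic bimodal sample and initial guesses below are purely illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(x, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussians, as used for bimodal log(pAkt/Akt) distributions."""
    g = lambda a, mu, s: a * np.exp(-0.5 * ((x - mu) / s) ** 2)
    return g(a1, mu1, s1) + g(a2, mu2, s2)

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-1.0, 0.2, 5000),   # non-responding cells
                       rng.normal(0.5, 0.2, 5000)])   # responding subpopulation
counts, edges = np.histogram(data, bins=80, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
popt, _ = curve_fit(two_gauss, centers, counts,
                    p0=[1.0, -1.0, 0.2, 1.0, 0.5, 0.2])
```

A likelihood-ratio or AIC comparison between the one- and two-component fits would then decide which model the population distribution supports.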
Cell migration
MEFs were seeded onto fibronectin-coated (1.25 µg/cm²; F0895, Sigma) 12-well culture dishes (83.3921, Sarstedt) containing two-well Culture-Inserts (80209, Sarstedt) to create a cell-free area. Immediately before stimulation, inserts were removed and cells were incubated with Hoechst to label nuclei. Wide-field images were acquired using an Olympus IX81 inverted microscope equipped with an MT20 illumination system, a 4×/0.16 NA air objective, and an Orca charge-coupled device camera (Hamamatsu Photonics). Transmission and fluorescence images were acquired every 10 min for 16 hours. The cell-free area created by the Culture-Insert was cropped using ImageJ and defined as the migration region. Individual cells were detected and tracked by their nuclear Hoechst staining as they traveled within the migration region using the TrackMate ImageJ plugin (68), and the total distance of each track was quantified. To distinguish between migratory cells and cells that moved into the migration region due to population expansion over the course of the experiment, a minimum migration distance threshold that separated the population of nonmigrating cells from migratory cells was determined (fig. S5, A and B).
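The track-distance thresholding described above can be sketched as follows; `min_distance` stands in for the empirically determined threshold of fig. S5, and the function names are illustrative:

```python
import numpy as np

def track_length(xy):
    """Total path length of one nuclear track from an (N, 2) array of positions."""
    steps = np.diff(np.asarray(xy, dtype=float), axis=0)
    return float(np.sqrt((steps ** 2).sum(axis=1)).sum())

def migratory(tracks, min_distance):
    """Flag tracks exceeding the minimum migration distance threshold."""
    return [track_length(t) > min_distance for t in tracks]
```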
Transwell migration assay
Cells were serum-starved overnight, detached with Accutase, and pretreated with compounds in suspension for 1 hour at 37°C, when required. Cells were subsequently seeded in serum-free medium at a density of 20,000 per well into the upper chamber of CIM-Plates (ACEA Biosciences Inc.). EGF (20 ng/ml) or serum-free medium was added to the lower chamber of the CIM-Plates as the chemoattractant and negative control, respectively. Transwell migration was quantified as cells migrated through a microporous membrane toward the chemoattractant in the lower chamber to microelectrode sensors, generating a real-time increase in impedance measured by the xCELLigence RTCA DP Instrument (ACEA Biosciences Inc.).
Reverse transcription quantitative polymerase chain reaction
Cos-7 cells were trypsinized and pelleted, and mRNA extraction was carried out with TRIzol Reagent (Thermo Fisher) according to the manufacturer's instructions. Reverse transcription quantitative polymerase chain reaction (RT-qPCR) was performed with the Luna Universal One-Step RT-qPCR Kit (New England Biolabs) with validated primers (table S1). If the differences between Ct values for a given transcript and a matched negative control (NRT, same primer pair without reverse transcription) were not statistically significant using a two-tailed Student's t test, the transcript was classified as not detected. Transcript abundance was normalized to the housekeeping gene TATA box-binding protein (TBP) and calculated as 2^−(sample Ct − TBP Ct).
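The ΔCt normalization in the last sentence is a one-liner; the Ct values in the test are hypothetical:

```python
def relative_abundance(ct_sample, ct_tbp):
    """Transcript abundance relative to TBP: 2 ** -(Ct_sample - Ct_TBP)."""
    return 2.0 ** (-(ct_sample - ct_tbp))
```

A transcript whose Ct is three cycles below that of TBP is thus eight-fold more abundant than the housekeeping gene.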
Immunoprecipitation and Western blotting
Cells were lysed in TGH [150 mM NaCl, 2 mM EGTA/EDTA, 50 mM Hepes (pH 7.4), 1% Triton X-100, 10% glycerol, 1 mM phenylmethylsulfonyl fluoride, and 10 mM N-ethylmaleimide (NEM)] or RIPA [for immunoprecipitation; 50 mM tris-HCl (pH 7.5), 150 mM NaCl, 1 mM EGTA, 1 mM EDTA, 1% Triton X-100, 1% sodium deoxycholate, 0.2% SDS, 2.5 mM sodium pyrophosphate, and 10 mM NEM], supplemented with Complete Mini EDTA-free protease inhibitor (Roche Applied Science) and 100 µl of phosphatase inhibitor cocktail 2 and 3 (P5726 and P0044, Sigma-Aldrich). Lysates were sonicated before centrifugation at 14,000 rpm for 10 min at 4°C to pellet nonsoluble material. For immunoprecipitation, cell lysates were incubated with 50 µl of washed protein G magnetic beads (10003D, Life Technologies) for 1 hour at 4°C to preclear nonspecific binding proteins from samples. Supernatants were incubated with primary antibody alone for 2 hours, followed by the addition of protein G magnetic beads and overnight incubation at 4°C with agitation. SDS-polyacrylamide gel electrophoresis was performed using an XCell II mini electrophoresis apparatus (Life Technologies) according to the manufacturer's instructions. Samples were transferred to preactivated polyvinylidene difluoride membranes (Merck Millipore) and incubated with the respective primary antibodies at 4°C overnight. Detection was performed using species-specific IRDye secondary antibodies (LI-COR Biosciences) and the Odyssey Infrared Imaging System (LI-COR Biosciences). The integrated intensity of protein bands of interest was measured using ImageJ, and signals were normalized by dividing the intensities of phosphorylated protein by total protein intensities, or by dividing intensities of coimmunoprecipitated proteins by those of the corresponding immunoprecipitated protein.
|
2018-08-04T14:17:03.908Z
|
2018-07-31T00:00:00.000
|
{
"year": 2018,
"sha1": "64e768b927a7e38d4679bde47a3deffb475f9944",
"oa_license": "CCBY",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2018/01/26/202705.full.pdf",
"oa_status": "GREEN",
"pdf_src": "Highwire",
"pdf_hash": "85296b46bd019fef5e0504bc1fc429ed234bcf1c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
}
|
119236818
|
pes2o/s2orc
|
v3-fos-license
|
Polarized SERS of individual suspended carbon nanotubes by Pt-Re nanoantennas
We present optical nanoantennas designed for applications that require processing temperatures larger than 800 °C. The antennas consist of arrays of Re/Pt bilayer strips fabricated with a lift-off-free technique on top of etched trenches. Reflectance measurements show a clear plasmonic resonance at approximately 670 nm for light polarized orthogonal to the strip axis. The functionality of the antennas is demonstrated by growing single-walled carbon nanotubes (CNTs) on top of the antenna arrays and measuring the corresponding Raman signal enhancement of individual CNTs. The results of the measurements are quantitatively discussed in light of numerical simulations which highlight the impact of the substrate.
The key feature of plasmonic nanostructures is their ability to confine light on a much smaller scale compared to the far field wavelength 1,2 . As a consequence, such structures are able to amplify the optical near field at their surface by many orders of magnitude, especially if a localized surface plasmon resonance (LSPR) is excited. This has a dramatic impact on optical spectroscopy, where the near field amplification makes it possible to detect spectra of single molecules [3][4][5] . In particular, the use of plasmonic structures represented a fundamental breakthrough for Raman spectroscopy, where it is referred to as surface-enhanced Raman spectroscopy (SERS) [6][7][8] . The name reflects the fact that early experiments made use of granular metallic surfaces 6,9,10 . Subsequently, colloidal Au or Ag nanoparticles [10][11][12] , as well as periodically indented metal layers 13,14 were used as plasmonic resonators. Such resonators act as optical nanoantennas, i.e. they concentrate propagating radiation into a subwavelength-sized region where the near field dominates the far field.
The progress of nanoplasmonics enabled the fabrication of optical nanoantennas specifically designed to work in combination with given target molecules 3,4 . The combination of plasmonic structures with specific devices nevertheless has a limit: the fabrication of the target device or the synthesis of the desired molecule is not always compatible with the fabrication of the optical nanoantennas. Carbon nanotubes (CNTs) represent a good example of such a situation. The fabrication of ultra-clean devices 15 often requires that the CNT growth be performed as the last fabrication step 16,17 . CNTs are grown by chemical vapor deposition (CVD) at temperatures of the order of 800 °C or higher 18 . These temperatures are sufficient to melt thin films or nanoparticles of the most common plasmonic materials, namely Au and Ag. Therefore, the prototypes of optical nanoantennas for CNTs proposed so far [19][20][21][22] had to be fabricated or deposited after the CNT growth, i.e. they were not compatible with the ultra-clean fabrication scheme.
The low degree of disorder in ultra-clean CNTs allows for a detailed comparison of quantum transport experiments with microscopic theories 17 . For such analysis an independent determination of diameter and chiral angle is highly desirable 23,24 . Such information could in principle be provided by Raman spectroscopy, but only in the rare case that the incident or scattered photon energy matches the energy separation E_ii between two van Hove singularities (VHS) of the CNT density of states [25][26][27] .
In experiments with CNTs grown on top of predefined electrodes one deals typically with few devices. Without the usage of tunable lasers it is unlikely to meet the Raman resonance condition for an individual contacted CNT. The field amplification induced by plasmonic structures can be crucial to obtain a sizable optical signal from a limited number of contacted devices.
In the present work we developed optical nanoantennas which are resistant to the extreme conditions of the nanotube CVD growth. We use spatially resolved reflectance measurements to demonstrate the occurrence of a localized surface plasmon resonance (LSPR) at around 630-670 nm. Finally, we exploit the induced field enhancement to magnify the Raman signal of individual ultra-clean CNTs.
I. EXPERIMENTAL DETAILS
Our sample fabrication is sketched in Fig. 1. The antenna arrays are fabricated on degenerately doped silicon substrates with a 470±10 nm-thick SiO2 cap layer. The fabrication process starts with the etching of trenches defined by electron beam lithography (EBL). 30-nm-wide, 100-nm-deep and 6-µm-long trenches are distributed in arrays of 9 elements with a periodicity of 240 nm. Four arrays are arranged as the four sides of a rectangle. On top of the trenches, we deposited a Re/Pt (with thicknesses of 7 and 18 nm, respectively) metal bilayer. The choice of Re as sticking layer is motivated by its extraordinary stability at high temperatures. Even when reduced to a few-nanometer-thick layer, Re can withstand temperatures as high as 900 °C with negligible structural deformation. On the other hand, at these temperatures Pt nanostructures suffer major roughening, while most metals (including common plasmonic materials such as Au, Ag, Pd, Al) simply melt. We found that the combination of Re as sticking layer and Pt as top layer is the best compromise between the thermal stability provided by Re and the plasmonic performance and chemical stability guaranteed by Pt.
Etched trenches make it possible to control the gap width between adjacent metal strips: the thicker the metal layer, the thinner the residual gap 28 . The metallization does not cover the entire sample surface, but it is limited to EBL-defined square frames overlapping the trenches, as shown in Fig. 1(a,c). In particular, the inner part of the rectangular frame is not metallized: here a 1 µm-wide cluster of catalyst nanoparticles is patterned by EBL 29 .
The last fabrication step is the CVD growth. This step takes place at 850 °C under a steady stream of CH4 and H2. CNTs start growing vertically from the catalyst cluster. Owing to the van der Waals interaction, they bend downwards during the growth until they eventually fall on the substrate or on the antennas 30 . Most CNTs are shorter than the cluster-to-antenna distance l ≈ 4 µm. However, some CNTs grow long enough to bridge the antennas. Owing to the small gap between the antenna strips, the CNTs are suspended over the substrate. The CNT position and orientation are clearly unpredictable, thus they are determined a posteriori by atomic force microscopy (AFM). Besides the antenna structure described above, we have also investigated alternative structures where the nanoantenna arrays are defined by EBL followed by metal deposition and lift-off, i.e. without making use of etched trenches. Each strip of these arrays consists of a bilayer of 5 nm of Ti (sticking layer) and 20 nm of Pt. Although the final results are qualitatively similar, the use of etched trenches makes the fabrication process more reproducible and reliable. Moreover, etched trenches help to keep the gap between the strips homogeneous and avoid that the roughness caused by the CVD step short-circuits adjacent strips.
Raman and reflectance measurements are performed in a confocal setup 7,22,29 . Light is transmitted through a beam splitter and focused on the sample by a 100× objective lens. The light spot size is approximately 2 µm for white light and below 1 µm for the two laser lines, i.e. the spatial resolution is comparable to the array width. The scattered light is transmitted again through the beam splitter and then sent to the detector, as discussed in our previous work 22 . The configuration of light sources, detectors and polarizers depends on the specific measurement 29 . For reflectance spectroscopy we use a thermal source of white light and a spectrometer as detector. An analyzer is placed between the sample and the spectrometer. For reflectance maps the light source is a laser (either a He/Ne laser with λ_L = 633 nm or a diode laser with λ_L = 532 nm) and the detector is a power meter. A polarizer is placed between the source and the sample. The same configuration is used for Raman spectroscopy. In this case the detector is a spectrometer.
II. EXPERIMENTAL RESULTS
Long metal nanowires display a plasmonic resonance near the visible when excited with a field orthogonal to their axis, where the incompressible electron plasma experiences a non-negligible restoring force 1,28 . The excitation of a LSPR implies losses due to absorption in the metal, which results in a reduction of the scattered light intensity. If a metal nanowire is illuminated with white light polarized orthogonally to the wire axis, plasmonic resonances will appear therefore as minima in the spectrum of the reflected light. On the other hand, when an antenna array is excited with unpolarized light, only the polarization component perpendicular to the array axis will show a minimum at the LSPR, while the parallel component will be scarcely affected.
In our reflectance spectroscopy measurements we keep the analyzer oriented vertically [i.e. parallel to the shorter side of the rectangle in Fig. 1(c)] and we measure the reflected signal from two arrays, one oriented horizontally (I_90°) and the other oriented vertically (I_0°), where the latter represents the background signal. The difference of the two signals is then normalized to the background signal (I_0°) 31 . Figure 2(a,b) shows the difference between I_90° and I_0°, divided by the latter. Panel (a) refers to a Re/Pt antenna array patterned on etched trenches. The graph clearly displays a minimum in the reflected signal at ≈ 670 nm, which is close to the laser wavelength λ_L = 633 nm used in our Raman experiments. The graph in Fig. 2(b) shows data obtained on an EBL-defined antenna array, i.e. without etched trenches. In this case the minimum occurs at a slightly lower wavelength, ≈ 630 nm. Graphs in Fig. 2(c) and (d) show the corresponding results of the numerical simulations discussed in Sec. III.
To confirm the resonant nature of the observed minimum, we acquired maps of the reflected signal as a function of the polarization of the incident light. In this case we use monochromatic light from a laser source (with λ_L = 633 or 532 nm), focused to a 0.5 µm-wide spot. Figure 3 shows how the reflectance maps evolve when the polarization of the incident light at λ_L = 633 nm is gradually rotated from vertical (a) to horizontal (f). The maps clearly show that horizontal antennas have a minimum in the reflected signal for incident light polarized vertically, and vice versa. This corresponds to the excitation of a LSPR as discussed in the previous paragraph. By rotating the incident light polarization the reflected signal from the horizontal antennas continuously increases, while the signal from the vertical ones decreases accordingly. This behavior is not observed for the green laser (λ_L = 532 nm), because this wavelength is far from the LSPR minimum 29 .
Having demonstrated the occurrence of a plasmonic resonance, we used antenna arrays to amplify the Raman signal of selected suspended CNTs. After the CVD growth process, AFM scans are used to locate individual CNTs and determine where exactly they cross the antenna arrays. A convenient CNT in this case is one far from other CNTs and long enough to completely bridge the antenna array. Figure 4 shows the results of Raman measurements performed on three samples. The red curves in the left panels refer to the Raman signal around the G and the D peak measured on CNT segments located on the antenna strips, while the blue ones correspond to Raman measurements on bare portions. The spectra have been acquired after an integration time of 30 s. The right panels in Fig. 4 show AFM topography scans of the three samples, where the red and light blue circles indicate the laser spot size and location for the bare CNT portion and for the CNT on antennas, respectively. The arrows indicate the incident light polarization for each measurement. The difference between the red and blue curves in Fig. 4(a)-(c) clearly shows the dramatic amplification of the Raman signal induced by the optical antennas.
In order to quantify the signal amplification, we define the enhancement factor as the ratio between the Raman peak amplitudes measured with and without antennas. The enhancement factor is difficult to estimate when the signal on the bare portion of the CNT is too weak. The intensity of the Raman signal measured on a bare CNT depends on the difference between the incident ω_L (or the scattered ω_L ± ω_q) photon energy and the energy separation E_ii between VHSs. The width of such resonance windows is roughly of the order of 200 meV for the G mode and around 100 meV for the radial breathing mode (RBM) 27 . If the difference between the incident (or scattered) photon energy and the transition energy E_ii is larger than the above values, then the Raman signal is drastically suppressed. This is often the case: for a given laser frequency ω_L, only a small fraction of the CNTs shows a significant signal. This is precisely where the advantage of optical antennas becomes obvious, as the signal amplification compensates for the suppression due to the energy mismatch 22 . As demonstrated in Fig. 4, even CNTs that are scarcely or not at all measurable without antennas provide a significant signal. From the spectrum in Fig. 4(a) we deduce an enhancement factor of around 40. While the signal on the bare portion of this CNT is barely discernible, the amplified signal provides useful information: the CNT has a low number of defects, as deduced from the absence of the D peak; furthermore, the narrow (FWHM: 17 cm⁻¹) G peak implies a very weak G⁻ component, which indicates either an achiral CNT (armchair or zig-zag) or a CNT with very low chiral angle 32 . The graph in Fig. 4(b) refers to a CNT whose Raman spectrum is not measurable on the bare portion even after an integration time of 300 s, i.e. ten times longer than the usual one. This indicates that the CNT has no E_ii transitions near 633 nm.
However, the signal on the CNT portion suspended on the antennas is clearly detectable, though weak. Since the D peak is stronger than the G peak, we deduce that this CNT has a relatively high number of defects.
Finally, Fig. 4(c) shows the results of Raman measurements on antennas made of Ti/Pt, i.e. without the Re sticking layer. As mentioned above, the absence of the Re sticking layer induces large corrugations in the Pt film when subjected to growth temperatures of the order of 850 °C. We note that despite the surface roughness the enhancement factor remains considerably high, of the order of 60. This particular CNT has a separation E_ii between VHSs relatively close to the incident photon energy, thus the Raman signal is large enough to be detected without antennas as well. However, the antenna array also allows us to measure the RBM (not shown). By following the standard assignment procedure we deduce the possible chiral indices for the CNT. We found that the most likely chiral indices are (22,5) or (21,7), two adjacent elements of the CNT family 2n + m = 49 which share very similar properties, e.g. diameter, chiral angle and energy separation between VHSs 29 .
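The standard (n, m) → geometry relations behind the assignment can be checked directly; using the usual graphene lattice constant, the two candidate indices indeed give nearly identical diameters (the code is a sketch, not the authors' assignment procedure):

```python
import math

A_CC = 0.246  # graphene lattice constant in nm

def cnt_diameter(n, m):
    """CNT diameter in nm: d = a * sqrt(n**2 + n*m + m**2) / pi."""
    return A_CC * math.sqrt(n * n + n * m + m * m) / math.pi

def chiral_angle_deg(n, m):
    """Chiral angle in degrees: theta = atan(sqrt(3) * m / (2 * n + m))."""
    return math.degrees(math.atan(math.sqrt(3) * m / (2 * n + m)))

# (22,5) and (21,7) both belong to the 2n + m = 49 family
d1, d2 = cnt_diameter(22, 5), cnt_diameter(21, 7)   # ~1.95 nm and ~1.98 nm
```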
III. DISCUSSION
The widths of the plasmonic resonances measured in our reflectance spectroscopy experiments are of the order of ∆λ ≈ 100 nm. As shown below, simple model calculations considering metallic strips alone lead to plasmonic resonance widths much larger than the observed ones. One possibility to explain the observed narrow resonances is to consider in addition the optical interference due to the SiO2/Si interface 470 nm below the sample surface. We will show that the interplay between the optical mode in the SiO2 cavity and the plasmonic mode within the antenna strips gives rise to much sharper eigenmodes.
Numerical simulations, discussed at the end of this section, show that such modes have a width comparable to that observed in the experiment.
In our analytical calculation 29 we approximate a single strip as an infinitely long cylinder of elliptical cross-section. As sketched in Fig. 5(a), the field is evaluated at the position Q located 5 nm away from the strip edge. The vertical axis t measures 25 nm (equal to our antenna thickness) and the horizontal axis w is varied. The graph in Fig. 5(b) shows the results of the calculation for the electric field amplitude (normalized to the incident field amplitude) as a function of both w and the incident frequency ν. We notice that, owing to the large imaginary part of the refractive index of Pt 33 , for all widths w the LSPR resonance width is much broader than that observed in our experiment.
In principle, the plasmonic eigenmodes of an array of strips differ from those of a single structure owing to the electrostatic interaction between the strips which leads to mode hybridization. However, such a difference is relevant only if the decay length for the near field amplitude in the vicinity of a strip is comparable to the gap between two adjacent strips. In the visible range such decay length for submicrometric plasmonic structures is of the order of a few nanometers 1,29 , which is much smaller than the gap (g = 40 nm). Therefore the interaction between adjacent strips can be neglected in our case.
The sharp minima observed in our reflectance spectroscopy experiment can be explained by considering the effect of the substrate. Indeed, even in a plain SiO2/Si substrate the electric field amplitude at the sample surface strongly depends on the frequency ν, owing to the optical interference between the two surfaces of the SiO2 film. In Fig. 5(d) we plot the calculated electric field amplitude at a point located on the upper SiO2 surface. For ease of comparison with one of the following figures, the graph is plotted as a function of w and ν as in Fig. 5(b), although in the absence of nanoantennas the variable w plays no role. The graph shows that the field amplitude at the surface oscillates considerably as a function of the frequency. Clearly, this significantly alters the LSPR profile of optical antennas patterned on top of a SiO2 film.
FIG. 4. Raman spectra for CNTs overgrown on optical antenna arrays (left panels) and corresponding AFM topography images (right panels). Red and blue curves refer to measurements on CNT portions overgrown on antennas or lying on the substrate, respectively. The red and blue circles indicate the position and the size of the focused light spot for the corresponding measurement. The white arrows indicate the polarization directions. In (a) and (b) the antenna strips consist of a Re/Pt bilayer. In (c) antenna strips consist of Ti/Pt and are defined by EBL without etched trenches. Compared to the structures above, the corrugations due to the CVD process are much more pronounced. All the Raman spectra have been obtained after 30 s of integration, except the bare CNT signal in panel (a), where the integration time was 10 times longer and the signal was then divided by a factor of 10.

To quantify the impact of the substrate on the LSPR, we performed a finite-difference frequency-domain
(FDFD) numerical simulation on the actual geometry, sketched in Fig. 5(e). In the calculation we model the nanoantennas as periodic arrays of infinitely long strips. By sweeping geometry parameters and laser frequency, we not only extract field enhancement factors but also identify plasmonic modes and the interplay between the metal structure and the SiO2/Si substrate stack. The strip section is assumed to be rectangular with rounded corners (with radius of curvature r = 5 nm). The gap g = 40 nm, the etching depth e = 100 nm and the SiO2 thickness s = 470 nm are kept constant. The result of the simulation is plotted in Fig. 5(f). The graph shows the horizontal component of the electric field amplitude calculated at the point Q indicated in the sketch, plotted as a function of width w and frequency ν. A comparison with the graphs in Fig. 5(b) and (d) reveals that the actual eigenmodes are non-trivial combinations of the cavity modes in the SiO2 layer and the plasmonic modes in the metal strip. In fact, the graph in Fig. 5(f) shows a broad feature (on the left and bottom part of the graph) modulated by almost vertical fringes which are clearly related to the SiO2 film interference. As a result, in the visible range the maximum of the near field amplitude for a 200 nm-wide strip occurs at approximately 470 THz, i.e. close to the red laser frequency. We also notice several arc-shaped features in the top-right zone of the graph, which correspond to higher-energy modes. Further details about the calculation and the interpretation of the optical modes are given in the Supplemental Material.
The reflection coefficient for a thin film deposited on top of a high refractive index substrate displays a maximum when the electric field at the surface is minimal 29 . This condition corresponds to a frequency ν such that the electric field forms a standing wave within the SiO2 layer, with nodes at the two interfaces. Vice versa, when the reflection coefficient is minimal, the electric field at the surface is maximal. On the other hand, when the polarizability α of an optical antenna (and thus the near field amplitude) has a maximum as a function of ν, then, due to the imaginary part of the metal refractive index, the reflected far field shows a minimum. From these arguments we expect that the color plot of the reflected far field, shown in Fig. 5(g), will display a reverse contrast when compared to that of the near field shown in Fig. 5(f). In order to compare the simulation results with the white light reflectance spectra, in Fig. 5(g) we plot the intensity as a function of the wavelength λ and strip width w. The line cuts for w = 200 and 100 nm are shown in Fig. 2(c,d). The latter indicate that the minimum of the reflected signal occurs at ≈ 620 nm, that the resonance width is approximately 100 nm, and that the curve for w = 100 nm is slightly sharper and slightly blue-shifted compared to the one for w = 200 nm. These features are in agreement with the experimental data. We notice, however, that the precise shape of the minimum is not suitably captured by the simulation. Note that the exact wavelength λ_m of the minimum depends critically on the SiO2 layer thickness. The SiO2 layer thickness is measured with an accuracy of 10 nm, which causes a comparable uncertainty for λ_m in the simulation.
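The standing-wave argument can be made quantitative with a rough normal-incidence estimate that ignores dispersion and the exact phase at the Si interface: nodes at both film surfaces require s = m·λ/(2n), i.e. λ_m = 2·n·s/m. With the nominal n ≈ 1.46 and s = 470 nm, the resulting node wavelengths fall at ~686 and ~457 nm, bracketing the visible range and showing how strongly the film interference modulates the surface field across it:

```python
# Assumed values: dispersionless SiO2 refractive index and nominal film thickness.
n_sio2 = 1.46
s_nm = 470.0

# Wavelengths at which the film supports m half-wavelengths (field nodes
# at both interfaces -> minimal surface field -> bare-film reflectance maximum).
node_wavelengths = [2 * n_sio2 * s_nm / m for m in (2, 3, 4)]  # ~686, ~457, ~343 nm
```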
A conclusion drawn from the calculations above is that the dependence of the antenna resonance frequency on both antenna width and antenna gap is relatively smooth. Therefore the roughness produced by the CVD process has a negligible effect on both the resonance frequency and the enhancement factor. This can explain the good enhancement factor observed in Fig. 4(c) despite the surface roughness of the metal strips of that sample.
IV. CONCLUSIONS
We have demonstrated directional optical antennas for applications that require extremely high process temperatures, such as those required for the CVD growth of CNTs. We have fabricated devices where CNTs are grown on top of antenna arrays and shown that the latter significantly amplify the CNT Raman signal. Numerical simulations show that the relatively sharp antenna resonance is due to the interplay of the plasmonic resonance and thin-film interference in the SiO2 cap layer. Possible applications of Pt-Re optical antennas go well beyond SERS of CNTs, since the present fabrication scheme can in principle be applied to any nanostructure for optical spectroscopy whose fabrication requires extreme temperature conditions.
|
2017-03-16T08:49:25.000Z
|
2017-03-16T00:00:00.000
|
{
"year": 2017,
"sha1": "f258a32e8afb314ec728462e9c0b96657d3d2510",
"oa_license": null,
"oa_url": "https://epub.uni-regensburg.de/36429/1/PhysRevB.96.035408.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3f5fa6c3a6d222ab642523ca50fc51d54008a96f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
119236383
|
pes2o/s2orc
|
v3-fos-license
|
QGP collective effects and jet transport
We present numerical simulations of the SU(2) Boltzmann-Vlasov equation including both hard elastic particle collisions and soft interactions mediated by classical Yang-Mills fields. We provide an estimate of the coupling of jets to a hot isotropic plasma, which is independent of infrared cutoffs. In addition, we investigate jet propagation in anisotropic plasmas, as created in heavy-ion collisions. The broadening of jets is found to be stronger along the beam line than in azimuth due to the creation of field configurations with B_t>E_t and E_z>B_z via plasma instabilities.
Introduction
High transverse momentum jets produced in heavy-ion collisions represent a valuable tool for studies of the properties of the hot parton plasma produced in the central rapidity region [1]. However, present estimates of the strength of the coupling of jets to a QCD plasma are sensitive to infrared cutoffs. We employ a numerical simulation of the Boltzmann-Vlasov equation, which is coupled to the Yang-Mills equation for the soft gluon degrees of freedom. Soft momentum exchanges between particles are mediated by the fields, while hard momentum exchanges are described by a collision term including binary elastic collisions. This way, we are able to provide an estimate of the coupling of jets to a hot plasma which is independent of infrared cutoffs.
The longitudinal expansion of the plasma may lead to a strongly anisotropic momentum distribution in the local rest frame during the very early stages of the plasma evolution. Due to anisotropies in the particle momentum distributions, plasma instabilities appear [2]. These lead to the formation of long-wavelength chromo-fields with E_z > B_z and B_⊥ > E_⊥, which affect the propagation of a hard jet and of its induced hard radiation field. This may provide an explanation for the observed asymmetry in measurements of dihadron correlations, where a much stronger broadening of jets in pseudorapidity (η) than in azimuthal angle (φ) has been observed [1,3].
Boltzmann-Vlasov equation for non-Abelian gauge theories
We solve the classical transport equation for hard gluons with SU(2) color charge, including hard binary collisions,
p^μ [∂_μ + g q^a F^a_{μν} ∂_p^ν + g f_{abc} A^b_μ q^c ∂_{q_a}] f = C[f],
where f = f(x, p, q) denotes the single-particle phase space distribution. It is coupled self-consistently to the Yang-Mills equation for the soft gluon fields. The collision term C contains all binary collisions, described by the leading-order gg → gg tree-level diagrams.
We replace the distribution f(x, p, q) by a large number of test particles, which leads to Wong's equations [4], dx_i/dt = v_i, dp_i/dt = g q_i^a (E^a + v_i × B^a), dq_i^a/dt = −g f_{abc} (v_i · A^b) q_i^c, for the i-th test particle, whose coordinates are x_i(t), p_i(t), and q_i^a(t). The time evolution of the Yang-Mills field is determined by the standard Hamiltonian method [5] in A^0 = 0 gauge. See [6,7,8,9] for more details. The collision term is incorporated using the stochastic method [10]. The total cross section is given by σ_{2→2} = ∫_{k*²}^{s/2} dq² (dσ/dq²), where we have introduced a lower cutoff k*. To avoid double-counting, this cutoff should be on the order of the hardest field mode that can be represented on the given lattice, k* ≃ π/a, with the lattice spacing a. For a more detailed discussion see [9].
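The test-particle update described above can be sketched in a few lines. This is a toy sketch, not the paper's code: the fields E^a, B^a, A^a are static and hand-picked, the units and the coupling g = 1 are hypothetical, and the real simulation evolves the Yang-Mills fields on a lattice. One invariant worth checking is the color Casimir q·q, which the precession equation conserves exactly (the antisymmetry of f_{abc} makes dq_a q_a vanish):

```python
# Minimal sketch of a Wong's-equation test-particle update (SU(2), adjoint
# index a = 0,1,2). Fields are static and hand-picked, units hypothetical;
# the actual simulation evolves the fields on a lattice in A^0 = 0 gauge.
import math

g = 1.0

def eps(a, b, c):
    """Levi-Civita symbol for indices 0, 1, 2."""
    return (a - b) * (b - c) * (c - a) / 2.0

def cross(u, w):
    return [u[1]*w[2] - u[2]*w[1], u[2]*w[0] - u[0]*w[2], u[0]*w[1] - u[1]*w[0]]

def step(x, p, q, E, B, A, dt):
    """Euler step of dx/dt = v, dp/dt = g q^a (E^a + v x B^a),
    dq_a/dt = -g eps_abc (v . A^b) q_c, with v = p/|p| (massless particle)."""
    pnorm = math.sqrt(sum(pi * pi for pi in p))
    v = [pi / pnorm for pi in p]
    x = [xi + vi * dt for xi, vi in zip(x, v)]
    force = [g * sum(q[a] * (E[a][i] + cross(v, B[a])[i]) for a in range(3))
             for i in range(3)]
    p = [pi + fi * dt for pi, fi in zip(p, force)]
    vA = [sum(vi * Ai for vi, Ai in zip(v, A[b])) for b in range(3)]
    dq = [-g * sum(eps(a, b, c) * vA[b] * q[c] for b in range(3) for c in range(3))
          for a in range(3)]
    q = [qa + dqa * dt for qa, dqa in zip(q, dq)]
    return x, p, q

# Color charge precesses in a background A field; the Casimir q.q is conserved.
x, p, q = [0.0] * 3, [5.0, 0.0, 0.0], [1.0, 0.0, 0.0]
E = [[0.0] * 3 for _ in range(3)]
B = [[0.0] * 3 for _ in range(3)]
A = [[0.0] * 3 for _ in range(3)]
A[1][0] = 0.5
for _ in range(2000):
    x, p, q = step(x, p, q, E, B, A, 1e-3)
print(sum(qa * qa for qa in q))  # stays ~1 up to Euler-step error
```

With E = B = 0 the momentum is untouched and the charge simply rotates in the (q^0, q^2) plane at rate g (v·A^1), illustrating why |q|² is a constant of motion.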
Jet broadening in an isotropic plasma
We first consider a heat bath of particles with a density of n_g = 10/fm³ and an average particle momentum of 3T = 12 GeV. For a given lattice (resp. k*) we take the initial energy density of the thermalized fields to be ∫_{k<k*} d³k/(2π)³ k f(k), where f(k) = π² n_g / (ζ(3) T³ (e^{k/T} − 1)) is a Bose distribution normalized to the assumed particle density n_g, and ζ is the Riemann zeta function. The initial spectrum is fixed to Coulomb gauge and A_i ∼ 1/k. We measure the momentum broadening ⟨p_⊥²⟩(t) of high-energy test particles (p/3T ≈ 5) passing through this medium. Fig. 1 shows that in the collisionless case, C = 0, the broadening is stronger on larger lattices, which accommodate harder field modes. However, Fig. 2 demonstrates that collisions with momentum exchange larger than k*(a) compensate for this growth and lead to approximately lattice-spacing independent results.
A related transport coefficient is q̂ [11]. It is the typical momentum transfer (squared) per collision divided by the mean free path, which is nothing but ⟨p_⊥²⟩(t)/t. From Fig. 2, q̂ ≃ 2.2 GeV²/fm for N_c = 2, n_g = 10/fm³ and p/(3T) ≈ 5. We have verified that q̂ does not depend on the temperature T as long as the particle density n_g and the ratio p/T are fixed. Due to the independence of q̂ from the temperature and its proportionality to the density n, we can scale to physical densities for a QGP created at RHIC. We adjust for the different color factors in SU(3), and find q̂ ≈ 5.6 GeV²/fm, at T = 400 MeV, E_jet ≈ 20 GeV (p/3T = 16) in a system of quarks and gluons.
Jet broadening in an unstable plasma
In heavy-ion collisions, locally anisotropic momentum distributions may emerge due to the longitudinal expansion. Such anisotropies generically give rise to instabilities [2,7,8].
Here, we investigate their effect on the momentum broadening of jets, including the effect of collisions. The initial anisotropic momentum distribution for the hard plasma gluons is taken to be f(p) ∝ δ(p_z) δ(p_⊥ − p_h), with p_⊥ = √(p_x² + p_y²), normalized to the particle density n_g. We initialize small-amplitude fields sampled from a Gaussian distribution and set k* ≈ p_h, for the reasons alluded to above. We add additional high-momentum particles with p_x = 12 p_h and p_x = 6 p_h, respectively, to investigate the broadening in the y and z directions via the growth of the variances, κ_⊥ ≡ d⟨p_y²⟩/dt and κ_z ≡ d⟨p_z²⟩/dt. The ratio κ_z/κ_⊥ can be roughly associated with the ratio of jet correlation widths in azimuth and rapidity: κ_z/κ_⊥ ≈ ⟨∆η⟩/⟨∆φ⟩. Experimental data on dihadron correlation functions for central Au+Au collisions at √s = 200 GeV [1] are consistent with κ_z/κ_⊥ ≈ 3 [12]. Fig. 3 shows the time evolution of ⟨p_⊥²⟩ and of ⟨p_z²⟩. During the period of instability and for both jet energies we find κ_z/κ_⊥ ≈ 2.3. The explanation for the larger broadening along the beam axis is as follows. In the Abelian case the instability generates predominantly transverse magnetic fields which deflect the particles in the z-direction [13]. Although the interactions are far less trivial in a non-Abelian plasma, the instability creates large domains of strong chromo-electric and chromo-magnetic fields with E_z > B_z, aside from B_⊥ > E_⊥ (Fig. 4). The field configurations are such that particles are deflected preferentially in the longitudinal z-direction (to restore isotropy). Fig. 5 shows the filamentation of the current and the domains of magnetic fields generated by the instability.
Figure 5. Slices in the x-z plane at fixed y = L/2 of the current in the x-direction, J_x, and the three color components of the chromo-magnetic field in the y-direction. Filaments are clearly visible. Scales in lattice units: 0 to 5·10⁻⁸ for the current, −4·10⁻³ to 4·10⁻³ for the chromo-magnetic fields.
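The Abelian deflection argument above can be illustrated with a toy calculation (a sketch with hypothetical units and field strength, not the non-Abelian lattice simulation): a forward-moving particle in a transverse magnetic field B_y picks up momentum along z, since the Lorentz force q v × B for v ∥ x̂ and B ∥ ŷ points along ẑ, while p_y is untouched.

```python
# Toy check: a transverse magnetic field B_y deflects a forward-moving
# particle into the z-direction (Lorentz force q v x B); p_y stays zero.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def deflect(p, B, q=1.0, dt=1e-3, steps=1000):
    for _ in range(steps):
        norm = sum(x * x for x in p) ** 0.5
        v = tuple(x / norm for x in p)            # ultrarelativistic: v = p/|p|
        F = tuple(q * f for f in cross(v, B))     # magnetic part of Lorentz force
        p = tuple(pi + Fi * dt for pi, Fi in zip(p, F))
    return p

p_final = deflect(p=(10.0, 0.0, 0.0), B=(0.0, 1.0, 0.0))
print(p_final)  # p_z has grown, p_y remains zero
```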
|
2008-04-29T08:58:21.000Z
|
2008-04-29T00:00:00.000
|
{
"year": 2008,
"sha1": "aead5f025be0d0d1a9bae718b19ee1259978cd6d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0804.4557",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "aead5f025be0d0d1a9bae718b19ee1259978cd6d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
252130996
|
pes2o/s2orc
|
v3-fos-license
|
Optimization of Laboratory Diagnostics of Primary Biliary Cholangitis: When Solid-Phase Assays and Immunofluorescence Combine
The laboratory diagnostics of primary biliary cholangitis (PBC) have substantially improved, thanks to innovative analytical opportunities, such as enzyme-linked immunosorbent assays (ELISA) and multiple immunodot liver profile tests, based on recombinant or purified antigens. This study aimed to identify the best diagnostic test combination to optimize PBC diagnosis. Between January 2014 and March 2017, 164 PBC patients were recruited at the hospitals of Parma, Modena, Reggio-Emilia, and Piacenza. Antinuclear antibodies (ANA) and anti-mitochondrial antibodies (AMA) were assayed by indirect immunofluorescence (IIF), ELISA, and immunodot assays (PBC Screen, MIT3, M2, gp210, and sp100). AMA-IIF resulted in 89.6% positive cases. Using multiple immunodot liver profiles, AMA-M2 sensitivity was 94.5%, while anti-gp210 and anti-sp100 antibodies were positive in 16.5% and 17.7% of patients, respectively. PBC screening yielded positive results in 94.5% of cases; MIT3, sp100, and gp210 were detected by individual ELISA test in 89.0%, 17.1%, and 18.9% of patients, respectively. The association of PBC screening with IIF-AMA improved the diagnostic sensitivity from 89.6% to 98.2% (p < 0.01). When multiple immunodot liver profile testing was integrated with AMA-IIF, the diagnostic sensitivity increased from 89.1% to 98.8% (p < 0.01). The combination of IIF with solid-phase methods significantly improved diagnostic efficacy in PBC patients.
Introduction
Primary biliary cholangitis (PBC) is a chronic and often progressive cholestatic liver disease, characterized by the autoimmune destruction of the intrahepatic bile ducts [1]. An increased value of specific serum anti-mitochondrial antibodies (AMA) [2] is the hallmark of the disease, accompanied by the evidence of cholestasis. According to the largest English epidemiological study, PBC is a rare disease, more common in women than in men, with a prevalence of about 35/100,000 and an annual incidence of 2-3/100,000 [3,4].
The disease mostly presents in late adulthood, with a mean age at diagnosis of 65 years [1].
The etiology of PBC has not yet been clarified, but is thought to be multifactorial due to a combination of environmental and genetic risk factors [5].
The diagnosis is confirmed by the evidence of sustained (>6 months) elevated values of alkaline phosphatase (ALP), accompanied by positive serum AMA at a titer > 1/40, and/or by specific antinuclear antibodies (ANA) [2].
According to the European Association for the Study of the Liver (EASL) guidelines [2], at least two of the following three criteria must be met for confirming a diagnosis of PBC: (1) serum AMA and/or ANA positivity, (2) a cholestatic pattern of liver biochemistry tests with at least one increased value among serum bilirubin, ALP, or gamma-glutamyltransferase (GGT), or (3) diagnostic liver histology.
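The two-of-three rule can be written as a one-line predicate (a sketch; the argument names are ours, not from the guideline text):

```python
# EASL-style rule: PBC is confirmed when at least two of the three
# criteria are met (criterion names are illustrative).
def pbc_confirmed(serology_positive: bool,
                  cholestatic_biochemistry: bool,
                  diagnostic_histology: bool) -> bool:
    return sum([serology_positive,
                cholestatic_biochemistry,
                diagnostic_histology]) >= 2

print(pbc_confirmed(True, True, False))   # two criteria met -> True
print(pbc_confirmed(False, True, False))  # one criterion -> False
```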
AMA positivity can be considered the serological hallmark of PBC, being typically observed in over 90% of patients. Evidence of autoantibodies by indirect immunofluorescence (IIF) or enzyme-linked immunosorbent assays (ELISA) is highly specific for this condition. Although AMA positivity is a strong indicator of PBC in patients with otherwise unexplained cholestasis, AMA reactivity is sufficient for diagnosing PBC only in combination with abnormal cholestatic liver biochemistry [5][6][7][8][9].
While AMA and the PBC-specific ANA patterns, multiple nuclear dots (MND) and rim-like/membranous (RL/M), were primarily detected by immunofluorescence, the identification of target autoantigens, such as the E2 component of pyruvate dehydrogenase (PDC-E2), sp100, and gp210, has allowed for the development of ELISA assays and specific immunodot tests based on recombinant or purified antigens [13][14][15][16][17]. Increased awareness of the serological associations of PBC, along with the widespread use of blood test-based screening in the community, have significantly changed the initial presentation of PBC patients in recent years, so that patients diagnosed with clinically overt disease (e.g., advanced liver disease) can now be identified earlier using abnormal liver serum tests at screening [2].
The clinical course of PBC is variable and may lead to hepatic fibrosis and, ultimately, to cirrhosis or hepatic failure, with an increased risk of evolution towards hepatocellular carcinoma [2,18]. Because of their high sensitivity and specificity, AMA targeting PDC-E2 are currently considered a diagnostic hallmark of PBC [19], although their positivity and titer are poor predictors of outcome [9].
Current evidence suggests that no established immunological marker can efficiently predict the progression towards end-stage liver cirrhosis. The available prognostic models have been developed based on clinical and biochemical variables (especially bilirubin) and have been tested in patients with advanced disease [20]. Therefore, they are virtually unsuitable for the prognostication of patients with early disease. Hence, new prognostic biomarkers would be needed for early diagnosis and for tailoring follow-up treatments, according to patients' characteristics.
Evidence has been provided that PBC-specific ANA, particularly anti-gp210 or RL/M, may be associated with poor prognosis and more aggressive disease [9]. Although its clinical impact remains uncertain, the assessment of ANA in PBC patients seems promising. This unmet clinical need has prompted us to investigate whether PBC-specific ANA (i.e., anti-gp210 and anti-sp100 antibodies) may provide meaningful diagnostic and prognostic data. The study aimed to explore some substantial issues in PBC diagnostics. In particular, we investigated whether (a) the identification of PBC-specific autoantibodies against gp210 and sp100 would increase the diagnostic sensitivity of immunological testing for PBC, (b) the identification of AMA and PBC-specific anti-gp210 and anti-sp100 by immunodot and ELISA assays (based on molecularly defined antigens) would improve the sensitivity when compared to immunofluorescence-based techniques, and (c) the adoption of panels of autoantibodies would allow for the diagnosis of PBC in AMA negative patients, with the aim to minimize the risk of misclassification.
Our research strategy included the integration and review of previously published data, and the complete multicenter study, whose preliminary and partial results were previously presented [21], is entirely illustrated in the present paper.
Study Design
A multicenter study was carried out at the hospitals of Parma, Modena, Reggio Emilia, and Piacenza between January 2014 and March 2017, recruiting patients diagnosed with PBC or with suspected PBC, according to the guidelines provided in [2] (Figure 1). These referral centers were identified for their vast experience with AMA-IIF testing and for the high number of tests performed yearly (i.e., in 2014, 978 tests were performed in Parma, 999 in Reggio-Emilia, 1087 in Modena, and 1638 in Piacenza, respectively). A signed informed consent was collected from all the recruited patients.
At the time of enrollment, a blood sample was taken from each patient, for a total of 12 mL. Serum was separated by conventional centrifugation procedures and stored at −80 °C until testing. All measurements were centralized in the laboratory of clinical chemistry and hematology of the University Hospital of Parma. In this laboratory, all samples were examined by using different available approaches, including immunofluorescence and solid-phase methods (i.e., ELISA and immunoblotting). The results of antibody testing with different analytical methods and for different target autoantigens located in distinct subnuclear structures were assessed. Clinical data, including comorbidities and the histological findings of liver biopsies, when available, were also collected at enrollment and were then correlated with the autoantibody test results.
Laboratory Assays
Indirect immunofluorescence (Alphadia, Wavrem, Belgium, provided by Alifax, Padova, Italy) was applied on Hep-2 cells and on rat kidney, stomach, and liver sections, respectively, for assessing ANA and AMA. The initial dilution was 1:80, according to the manufacturer's instructions, and the slides were then reviewed by two skilled laboratory professionals. All tests were performed with the Multiple Immunodot Liver profile 7 (Alphadia, Wavrem, Belgium; Alifax, Padova, Italy), according to the manufacturer's instructions. This immunodot contained the PBC-associated antigens M2/native PDC (E1, E2, and E3 subunits of the Pyruvate Dehydrogenase Complex, purified from bovine heart), as well as gp210 and sp100.
All the collected serum samples were tested for ANA and AMA, PBC Screen, MIT3, M2, gp210, and sp100 antigens.
Statistical Analysis
The diagnostic performance in terms of agreement among different assays (i.e., IIF, ELISA, immunodot) was defined for each PBC-associated autoantibody by means of Cohen's kappa, with 95% confidence interval (95% CI). The agreement was rated according to Altman et al. [22]. Differences between PBC-specific autoantibody values (positive or negative) in different histological grades were assessed with the χ² test. The statistical analysis was performed with MedCalc Statistical Software version 20.114 (MedCalc Software Ltd, Ostend, Belgium).
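For a 2×2 agreement table, Cohen's kappa is computed directly from the observed and chance agreement. The sketch below uses the simple asymptotic standard error SE = √(p_o(1 − p_o)/(n(1 − p_e)²)) for the 95% CI; this is an assumption on our part — the paper does not state which variance formula MedCalc applies.

```python
# Cohen's kappa for a 2x2 agreement table [[a, b], [c, d]]:
# rows = assay 1 (pos/neg), columns = assay 2 (pos/neg).
import math

def cohens_kappa(a, b, c, d):
    n = a + b + c + d
    p_o = (a + d) / n                                      # observed agreement
    p_e = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    kappa = (p_o - p_e) / (1 - p_e)
    se = math.sqrt(p_o * (1 - p_o) / (n * (1 - p_e) ** 2))  # assumed SE formula
    return kappa, (kappa - 1.96 * se, kappa + 1.96 * se)

# Toy table: 20 double-positive, 15 double-negative, 5 + 10 discordant.
kappa, ci = cohens_kappa(20, 5, 10, 15)
print(round(kappa, 3))  # 0.4
```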
Results
Throughout the study period, a total of 176 serum samples were collected from the same number of patients with established or suspected PBC. In 12 out of 176 patients (6.8%), the diagnosis of PBC could not be confirmed; therefore, they were excluded from further analysis ( Figure 1). Overall, 3 of these 12 patients were diagnosed with autoimmune hepatitis, 1 with primary sclerosing cholangitis, and 8 with HCV-related hepatitis. The final study population consisted of 164 patients with a confirmed diagnosis of PBC (mean age, 63.5 years; range 34-89 years; male/female ratio, 1:9), in whom all autoantibody profiles were evaluated (Figure 1). The clinical characteristics of the enrolled patients are shown in Table 1.
Detection of Autoantibodies with Different Analytical Techniques
Overall, 147 out of 164 (89.6%) PBC patients were positive for AMA using the IIF assay on stomach/kidney/liver tissue. On Hep-2 cells, indirect immunofluorescence revealed MND and RL/M patterns in 24 out of 164 (14.6%) and 25 out of 164 (15.2%) PBC patients, respectively. The multiple immunodot liver profile showed a 94.5% sensitivity for AMA-M2, whereas anti-sp100 and anti-gp210 antibodies were positive in 17.7% and 16.5% of patients, respectively. The ELISA PBC screen was positive in 155 out of 164 (94.5%) PBC patients; moreover, 146 (89.0%), 28 (17.1%), and 31 (18.9%) patients tested positive for MIT3, sp100, and gp210, respectively, using specific ELISA tests. The autoantibody profiles of the 164 PBC patients are displayed in Table 1.
Agreement among Different Assays for PBC-Specific Autoantibodies
The results for AMA, AMA-M2, MIT3, and IIF-AMA showed a moderate agreement (kappa values between 0.465 and 0.538), while a more satisfactory agreement was detected between AMA-M2 and MIT3 (kappa value, 0.698) (Table 2). A noteworthy agreement was found for PBC-specific ANA between anti-sp100 dot and anti-sp100 ELISA and between anti-gp210 dot and anti-gp210 ELISA (kappa values, 0.875), while a lower agreement was observed between the MND ANA pattern and anti-sp100 ELISA or anti-sp100 dot (kappa values of 0.718 and 0.636, respectively). A satisfactory agreement was also found between the RL/M ANA pattern and anti-gp210 immunodot or anti-gp210 ELISA (kappa values of 0.735 and 0.718) (Table 2). Table 2. Agreement among results obtained using the IIF methods, multiple immunodot liver profile, and ELISA.
Overlap of PBC Specific Autoantibodies in PBC Samples
Regarding ELISA assays (i.e., MIT3, sp100, and gp210), among all patients, 22 (13.4%) showed double reactivity for MIT3 and sp100; 20 (12.2%) for MIT3 and gp210; and 4 (2.5%) displayed reactivity for all these antigens, while no patient displayed combined reactivity for both sp100 and gp210. Regarding the multiple immunodot liver profile, 23 of all cases (14.0%) displayed double reactivity for AMA-M2 and sp100; 16 (9.8%) for AMA-M2 and gp210; and 5 (3.1%) for all antigens, while none of them displayed combined positivity for sp100 and gp210. The results of the ELISA tests and the multiple immunodot liver profile for PBC-specific autoantibodies are shown in Figure 2A,B, respectively.
Combined Diagnostic Value of PBC Specific Autoantibodies
By combining the PBC screen assay with IIF-AMA, the diagnostic sensitivity significantly increased from 89.6% to 98.2% (p < 0.01). Likewise, the combination of the multiple immunodot liver profile and IIF-AMA increased the diagnostic sensitivity from 89.6% to 98.8% (p < 0.01). The positivity for single or combined ELISA and for the multiple immunodot liver profile in IIF-AMA negative patients is shown in Table 3. A proposed flowchart for PBC diagnosis based on the availability of the PBC screen or liver dot profile is shown in Figure 3a,b.
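These gains are consistent with a simple OR-combination over the 164 confirmed patients. A quick recomputation follows; note that the single-assay counts (147, 155) are stated in the text, while the combined-positive counts (161, 162) are implied by the reported percentages, not stated explicitly:

```python
# Sensitivities over the 164 confirmed PBC patients, recomputed from counts.
n = 164

def sensitivity(positives):
    return round(100 * positives / n, 1)

print(sensitivity(147))  # IIF-AMA alone: 89.6
print(sensitivity(155))  # PBC screen alone: 94.5
print(sensitivity(161))  # PBC screen or IIF-AMA: 98.2 (implied count)
print(sensitivity(162))  # immunodot profile or IIF-AMA: 98.8 (implied count)
```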
Prognostic Value of PBC-Specific ANA
Histological staging was available for 50 PBC patients, among whom 29 were graded according to the Batts-Ludwig scale [23]. The ability of PBC-specific markers to identify patients presenting with severe disease was hence tested by comparing the positivity of nuclear autoantibodies (anti-sp100 and anti-gp210) with the histological classification (i.e., early versus advanced disease). The percentage of positivity for anti-sp100 and/or anti-gp210 antibodies depended on stage: the overall positivity percentage was 15.4% vs. 50% in low-grade vs. high-grade disease, respectively (grade > 1), p = 0.05. The aim of correlating the severity of the disease with antibody positivity was to propose the possibility of the early determination of the risk of developing severe disease.
Discussion
Laboratory tests for the research of specific autoantibodies, especially AMA, play a crucial role in PBC diagnosis, making the reliability of liver autoimmune serology of paramount importance in this clinical setting [2,9,[24][25][26]. The screening for PBC-related autoantibodies (i.e., AMA and ANA) is conventionally performed, especially in European countries, using IIF instead of solid-phase test systems (i.e., ELISA and immunodot assays). IIF has many shortcomings, such as high inter-observer variability, the need for trained laboratory staff, its lack of suitability for full automation, and a poor degree of international standardization. The interpretation of IIF pattern can also be challenging due to the physiopathology of PBC. In particular, AMA IIF may be confused with other cytoplasmic antibodies not directly associated with PBC, such as anti-cardiolipin antibodies [26]. Notably, some antinuclear antibodies frequently encountered in patients with rheumatic diseases (e.g., anti-centromere and speckled) can coexist with PBC-specific ANA antibodies, contributing to generate a controversial fluorescent pattern [27]. Indeed, almost half of the patients enrolled in the present study are affected by other autoimmune diseases (Table 1), underlining how frequently the fluorescent pattern may be misleading.
The new solid-phase ELISA and immunodot tests, especially those using recombinant proteins such as MIT-3, now appear to be significantly more sensitive than IIF [11]. Furthermore, the detection of antibodies against gp210 and sp100 by molecular testing is more accurate than that using IIF on Hep2 cells [9].
The results of this multicenter study show that the PBC screen had an overall diagnostic sensitivity of 94.5% in a cohort of ascertained PBC patients, 10.4% of whom were AMA negative with IIF testing. In particular, the PBC screen exhibited a satisfactory diagnostic performance for patients without detectable IIF AMA (14/17, 64.7%) (Table 3), corroborating previous evidence published by Liu et al. [28]. In contrast, the M2/PBC immunodot test displayed an overall sensitivity of 94.5% (Table 1), but it was only capable of identifying 9 out of 17 PBC patients without detectable IIF AMA (Table 3).
The better diagnostic sensitivity of the PBC screen observed in this study can be explained by the capability of this technique to also detect anti-gp210 and anti-sp100 antibodies, as previously reported by Liu et al. [28]. Notably, among all PBC patients, sp100 was detected by ELISA in 17.1% of cases and by immunodot in 17.7%; regarding gp210, ELISA returned positive results in 18.9% and immunodot in 16.5% of cases, respectively (Table 1).
Our findings also confirm the significant overlap of all PBC specific autoantibodies, which was found in 46 out of 164 (28%) patients using ELISA and in 44 out of 164 (26.8%) patients using immunodot.
As expected, the comparison of different techniques (i.e., IIF, ELISA, immunodot) for the identification of AMA and ANA PBC-specific antibodies revealed that ELISA and immunodot displayed the highest concordance (Table 2).
Although no reference technique is currently available for diagnosing PBC, our findings suggest that the risk of misdiagnosing AMA-negative PBC patients significantly decreased using innovative techniques such as ELISA or immunodot based on recombinant antigens, either alone or in combination with IIF. As recently reported for autoimmune rheumatic diseases [29], the combination of solid-phase techniques with AMA IIF may be useful to enhance sensitivity from 89.6% to 98.2% and 98.8% when combined with ELISA and immunodot, respectively. These findings suggest that IIF AMA should not be used alone as a first-line assay, but in combination with a new solid-phase technique (e.g., ELISA, PBC screen, or immunodot) to increase its accuracy for diagnosing PBC.
The concordance rates between ELISA and immunodot corroborate the advisability of combining AMA IIF with either one of the two solid-phase techniques. Many factors influence the choice of the best option, including expertise, available laboratory technologies, and economic resources. Therefore, we suggest the use of the PBC screen in association with AMA IIF as a first-line investigation for diagnosing PBC, especially when the pre-test probability is low. In this case, the possibility of performing single ELISA tests in sequence, combined with IIF, with the aim of finding autoantibodies against each antigen, could allow for a precise diagnosis, limiting the costs of the complete panel offered by the multiple immunodot liver profile. On the other hand, in specialized referral laboratories, where the prevalence of patients needing PBC-specific antibody evaluation is assumed to be higher, the option of contextually assessing all the relevant PBC-specific antibodies could lead to a preference for the multiple immunodot liver profile as a first-line assay, in association with AMA IIF. Since typically only referral laboratories for the study of autoimmunity have both solid-phase tests and immunofluorescence available, the proposed diagnostic flowchart aims to optimize PBC diagnostics and avoid potentially unproductive and expensive tests, especially in smaller laboratories.
With regard to the prognostic value of antibody tests, the findings of the present study suggest that positivity for anti-sp100 and anti-gp210 autoantibodies could be related to a higher grade of histological severity, in agreement with the guidelines of the British Society of Gastroenterology [9], although this has not been confirmed by other studies [24,25]. Although these data are too limited to be adopted as a standard method for determining the prognosis of PBC, a complete characterization of specific autoantibodies in PBC patients could help validate their prognostic value in wider cohorts.
The first limitation of the study is the limited number of patients enrolled, which prevents drawing conclusions that can be applied to the general population. At the same time, this number can be deemed remarkable with regard to PBC epidemiology, since the results were derived from enrollment in only 4 centers located in a small region. Second, the small percentage of patients with available liver histology is another weakness of the study, as biopsy was not proposed to all the patients enrolled, since it was not strictly necessary to obtain the diagnosis. Therefore, it would have been unethical to have all subjects undergo a potentially risky and invasive procedure for research purposes only.
On the other hand, the standardization of laboratory procedures and the rigorous collaboration among the research groups in the 4 centers represents a strength of the project, guaranteeing both the reliability of antibody determination and the eventual adoption of different laboratory diagnostic protocols derived from the results of the multicenter study.
Innovative technological and analytical opportunities have allowed for substantial improvements in laboratory diagnostics in the field of PBC. In this new scenario, the clinical governance of the autoimmune diagnostics of liver diseases is crucial [14].
Test performance in the diagnostic pathway of PBC should always follow an algorithm planned with the hepatologist. As the first step in the study of liver autoimmunity is represented by AMA and ANA determination by IIF, an "AMA reflex" profile could be suggested, with the association of ELISA and immunodot, to minimize misdiagnosis and obtain potentially prognostic data.
Large prospective population studies are required to validate the diagnostic efficacy and cost/benefit ratio of the combination of solid-phase methods (i.e., ELISA or immunodot) with AMA-IIF in the diagnostic approach to patients with PBC.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: All available data are published in the present article.
Conflicts of Interest: The authors declare no conflict of interest.
Functional Network Community Detection Can Disaggregate and Filter Multiple Underlying Pathways in Enrichment Analyses
Differential expression experiments or other analyses often end in a list of genes. Pathway enrichment analysis is one method to discern important biological signals and patterns from noisy expression data. However, pathway enrichment analysis may perform suboptimally in situations where there are multiple implicated pathways – such as in the case of genes that define subtypes of complex diseases. Our simulation study shows that in this setting, standard overrepresentation analysis identifies many false positive pathways along with the true positives. These false positives hamper investigators’ attempts to glean biological insights from enrichment analysis. We develop and evaluate an approach that combines community detection over functional networks with pathway enrichment to reduce false positives. Our simulation study demonstrates that a large reduction in false positives can be obtained with a small decrease in power. Though we hypothesized that multiple communities might underlie previously described subtypes of high-grade serous ovarian cancer and applied this approach, our results do not support this hypothesis. In summary, applying community detection before enrichment analysis may ease interpretation for complex gene sets that represent multiple distinct pathways.
Introduction
High-throughput experiments often leave researchers with a set of genes. These may be genes that are over- or under-expressed in a disease subtype, are upregulated in response to a drug, or contain variants associated with a disease. After potentially interesting genes are identified, the next challenge is to interpret the biological processes or pathways that underlie the set. Overrepresentation-based methods are commonly used to identify pathways that have more members in the identified set than would be expected by chance 1 . Typically, pathways or similar groups of genes are obtained from structured vocabularies outlined in curated ontologies such as KEGG, PID, GO, or Reactome [2][3][4][5] . Recently, computational researchers have sought to improve the power of such analyses by considering network interactions among pathway members 6,7 . We sought to evaluate overrepresentation analysis in a different setting: one where multiple pathways underlie a set of associated genes. In this situation, applying standard overrepresentation analysis to gene sets constructed by randomly selecting members of multiple pathways identifies many false positive pathways. We hypothesized that reducing the noise of the gene list input via community detection might decrease the number of false positives.
Functional networks are a type of network where genes are connected if they have a high probability of working together in the same pathway or process [8][9][10][11] . To address the challenge posed by multi-pathway gene sets, we developed an approach that incorporates information from functional networks to first partition gene sets into subsets, or communities, which are then analyzed for overrepresented pathways. To accomplish this, enrichment analysis is applied to each extracted community resulting from community detection preprocessing 12,13 of the original gene set. Community detection has been applied to financial data, social media, and biological data 12,14 . To our knowledge, this is its first application to disambiguate the pathways associated with complex gene sets. We evaluate four community detection methods in this context: Fastgreedy, Walktrap, Multilevel, and Infomap. These algorithms all aim to identify groups/communities within a network:

• Fastgreedy — Starts from a completely unclustered set of nodes and iteratively merges communities such that the modularity (a score maximizing within-community edges and minimizing between-community edges) increases, until no additional improvement can be made 15 .
• Walktrap — Performs random walks of a specified step size. In densely connected areas the random walk becomes "trapped" in local regions, which then define communities 16 .
• Multilevel — Similar to Fastgreedy, but merges communities to optimize modularity based only upon neighboring communities, as opposed to all communities 17 . The algorithm terminates when only a single node is left, or when modularity can no longer be improved by merging two neighboring communities.
• Infomap — Uses the probability flow of information in random walks, which occurs more readily in groups of heavily connected nodes. Information about network structure can thus be compressed in maps of modules (groups of nodes within which information travels quickly) 18 .
Outside of the multi-pathway gene set challenge, there are a number of R packages that implement algorithms for network interpretation of experimental results including WGCNA 19 , EnrichNet 20 , pathDIP 21 , and CePa 22,23 . In this work, community detection algorithms are used to partition multi-pathway gene sets before overrepresentation analysis. By detecting these gene communities, we aim to provide cleaner inputs for overrepresentation analyses in the case of multiple underlying pathways -thereby reducing the number of identified false positives. In contrast with other methods that use network information as priors or as post-analysis visualization aides, we group genes before enrichment analysis. While we use the Integrative Multi-species Prediction (IMP) networks, our approach can be applied to a gene set from any source 11,24 . For example, a user may wish to use tissue-specific networks from the GIANT webserver 9 if tissue specificity is important. Finally, our approach makes no assumptions about the covariance structure of the networks 25 and is thus potentially more useful in real world applications where certain assumptions may not apply.
In summary, we propose an alternative gene enrichment approach for cases when multiple pathways are suspected to be implicated in a gene list. In this approach, candidate genes are overlaid onto a functional network and separated into communities of related genes via community detection. Communities are then subjected to an overrepresentation analysis independently and multiple testing corrections are applied. We compare four community detection approaches in simulated experiments and then apply the approach to identifying enriched pathways across high grade serous ovarian cancer (HGSC) subtypes.
Methods
We conducted an experiment that contained a control and an experimental arm. The control arm was an overrepresentation analysis without community detection, and the experimental arm was an overrepresentation analysis with various community detection methods applied as a preprocessing step.
General Approach
From the KEGG ontology, m randomly chosen pathways were selected to form a list of candidate genes. To help evaluate the impact of incomplete pathway discovery, only p percent of the genes in each pathway were randomly selected for inclusion in the final gene list. Finally, a percent additional random genes, selected without replacement from the ontology, were added to the gene list to create noise. To consider only genes that can influence the pathway analysis, genes not present in both IMP and KEGG were excluded, for a resulting background set of 5195 genes. This procedure was performed for both control and experimental arms so that differences in results could be attributed to community detection preprocessing.
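A minimal sketch of this seeding procedure, with a toy pathway dictionary standing in for KEGG restricted to IMP (the function name, the toy data, and the reading of a as a fraction of the seeded list are our assumptions, not the paper's code):

```python
# Sketch of the simulated gene-list construction: seed m pathways,
# keep fraction p of each pathway's genes, then add noise genes.
import random

def make_gene_list(pathways, background, m, p, a, rng):
    """Return (seeded pathway names, simulated candidate gene set)."""
    seeded = rng.sample(sorted(pathways), m)
    genes = set()
    for pw in seeded:
        members = sorted(pathways[pw])
        k = max(1, round(p * len(members)))       # incomplete discovery
        genes.update(rng.sample(members, k))
    noise_pool = sorted(set(background) - genes)  # sample without replacement
    n_noise = min(round(a * len(genes)), len(noise_pool))
    genes.update(rng.sample(noise_pool, n_noise))
    return seeded, genes

rng = random.Random(0)
toy_pathways = {"pw%d" % i: {"g%d_%d" % (i, j) for j in range(20)} for i in range(10)}
background = {g for members in toy_pathways.values() for g in members}
seeded, gene_list = make_gene_list(toy_pathways, background, m=4, p=0.65, a=0.55, rng=rng)
```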
We performed one hundred iterations for each parameter level combination of number of pathways (m = 2-8), percentage of genes included from each pathway (p = 30%, 47.5%, 65%, 82.5%, and 100%), and percentage of additional random genes from IMP (a = 10%, 32.5%, 55%, 77.5%, and 100%), for a total of 105,000 individual runs. Over the 100 iterations of each specific parameter combination, we measured the number of seeded pathways correctly detected (true positives), incorrectly detected (false positives), correctly missed (true negatives), and incorrectly missed (false negatives). The false positive proportion, false negative proportion, precision, recall, and F1 score were calculated for each parameter combination over the 100 iterations. The F1 score is the harmonic mean of precision and recall, where precision is the number of true positives divided by the number of predicted positives and recall is the number of true positives divided by the sum of true positives and false negatives.
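For concreteness, the per-combination scoring can be sketched as follows (the function name is ours):

```python
# Precision, recall, and F1 (harmonic mean) from enrichment counts.
def enrichment_metrics(tp, fp, fn):
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
```

For example, seeding 3 pathways and recovering all 3 among 10 significant results gives tp = 3, fp = 7, fn = 0, i.e. precision 0.3, recall 1.0, and F1 = 6/13 ≈ 0.46, illustrating how false positives drag down F1 even at perfect recall.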
Control Arm
The control arm followed the steps outlined in General Approach.
Control All (CtrAll)
For this method, we determined true positives, false positives, true negatives, and false negatives using all significantly enriched pathways and the complete gene lists of the seeded pathways. For example, if a gene list was seeded with three pathways and the enrichment analysis identified ten pathways (including correctly identifying the original three), then all ten pathways would be counted as positives, with the seven unseeded pathways considered false positives.
Control M (CtrM)
For this method, true positives, false positives, true negatives, and false negatives were determined using only the top m significant pathways, where m is the number of seeded pathways. For example, if three pathways were seeded and there were ten significant pathways, then only the top three pathways in the significant enrichment results would be considered. Thus, if all three seeded pathways were in the top three significant results, the true positive count would be three and the false positive count zero. If, however, only two of the three seeded pathways were in the top three significantly enriched pathways, then the true positive count would be two and the false positive count one. CtrM provides an upper bound on possible performance, as it is unrealistic in practice for investigators to know a priori the correct number of pathways.
Experimental Arm
For the experimental arm, the subgraph associated with each gene list described in the General Approach was extracted from IMP and subjected to community detection to provide community-level gene sets before the overrepresentation analysis. Fastgreedy, Walktrap, Infomap, and Multilevel community detection algorithms were applied in the community detection step. The communities of genes detected by each algorithm were then used as separate candidate gene lists for overrepresentation analysis. True positives, false positives, true negatives, and false negatives were calculated for all pathways that remained statistically significant after Bonferroni multiple testing correction at α = .05 was applied. This correction was applied separately for each community when multiple communities were found.
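A self-contained sketch of the per-community overrepresentation test with Bonferroni correction follows (a hypergeometric upper-tail test, i.e. one-sided Fisher; the function names and toy data are ours, not the paper's code):

```python
# Hypergeometric overrepresentation test per community, Bonferroni-corrected
# across the pathway collection.
from math import comb

def hypergeom_pval(k, N, K, n):
    """P(X >= k) when drawing n genes from N total, K of them in the pathway."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

def enriched_pathways(community, pathways, n_background, alpha=0.05):
    """Test one community against every pathway; Bonferroni over pathways."""
    community = set(community)
    cutoff = alpha / len(pathways)
    hits = []
    for name, members in pathways.items():
        k = len(community & members)
        if k == 0:
            continue  # no overlap, cannot be enriched
        p = hypergeom_pval(k, n_background, len(members), len(community))
        if p < cutoff:
            hits.append((name, p))
    return sorted(hits, key=lambda t: t[1])
```

In a toy background of 100 genes, a 10-gene community sharing 8 members with a 10-gene pathway is called enriched, while a non-overlapping pathway is not.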
All simulation analyses were performed using Python 2.7.6 with the iGraph package (version 0.71). Figures were produced using ggplot in R 3.3.1. Open source software to reproduce the results of this paper is provided at https://github.com/greenelab/GEA_Community_Detection. Figure 1 provides an overview of both the control and experimental arms.
HGSC Application
Based on the results of the simulation study, we applied the top performing community detection algorithms to lists of genes characterizing high-grade serous ovarian cancer (HGSC) subtypes. The gene lists were previously identified by a one-cluster-versus-all differential expression analysis 26 of cluster-specific genes common to four HGSC datasets [27][28][29][30] . While previous reports have described four HGSC subtypes, the multi-population study suggested that the number was three or fewer 26 . Given these conflicting results, we applied community detection to HGSC subtype-specific gene lists previously derived from results classifying 2, 3, and 4 subtypes 26 . Because this is an analysis of cancer genomics data, we used cancer pathways from the Pathway Interaction Database (PID) 5 .
Simulation Study
In general, community detection methods reduced the number of false positive associations in the multi-pathway setting. When seeding a gene list with four random pathways, all community detection methods had higher F1 scores than the standard enrichment analysis, CtrAll (Figure 2). In cases where pathways were incompletely seeded, the community detection methods often outperformed CtrM, which only considers the top m pathways as statistically significant (Figure 2). These findings are consistent when using the top 2-8 pathways (results for pathway numbers 2, 3, 5, 6, 7, and 8 are Supplementary Figures S1-6). Performance was robust to the number of genes taken from each seeded pathway over a broad range of values, and the relative performance of methods was largely unaffected by the proportion of genes sampled from the seeded pathways (i.e., 30% or all 100%) to make the gene lists. Thus, our approach may be more useful than standard enrichment techniques in situations where one is presented with a long, heterogeneous, and incomplete gene list and one wishes to find a set of robust pathways for further investigation. The Walktrap and Multilevel methods demonstrated the most success in this context, as they resulted in high F1 scores and relatively low false negative and false positive proportions. Compared to other community detection methods, Fastgreedy had a broader range of performance values, with higher variability and more outliers.

Fig. 1. In standard enrichment analysis, the full gene list is subjected to enrichment analysis and all significantly enriched pathways are returned. In the proposed experimental community detection enhanced method, the full gene list is first subjected to community detection to parse the gene list into sub-gene lists. Enrichment analysis is then performed for the gene list associated with each "discovered" community. Only the most significant pathway is returned for each community.
The performance of community detection algorithms may be network-specific; users may wish to apply our open source code to perform a new simulation study if different networks are selected.

Fig. 2. F1 scores for the controls (using all (CtrAll), or only the top 4 (CtrM), statistically significant pathways) and the community detection methods: Fastgreedy, Infomap, Multilevel, and Walktrap for various percentages of genes in each pathway (top axis) and percentages of additional genes (right side axis) for simulations using 4 random pathways. The percentage of genes indicates the percentage of random genes selected from each pathway. The percentage of additional genes indicates how many unrelated genes are randomly added to the analysis to represent increasing amounts of noise. Each comparison includes 100 iterations.
The combination of community detection and enrichment was designed to filter false positives in the multi-pathway setting. When we evaluated the proportion of false positives, we observed that the F1 score improvements were driven by successful filtration. In particular, all community detection methods outperformed standard enrichment analyses on false positive proportions (Figure 3). As expected, when the number of seeded pathways increased, the proportion of false positives steadily increased for control runs that included all statistically significant pathways. The standard enrichment analysis approach was well suited to identifying a single pathway; the more pathways present in a single gene list, the worse standard enrichment-based methods performed.

Fig. 3. Proportions of false positives for the controls (using all (CtrAll), or only the top 4 (CtrM), statistically significant pathways) and the community detection methods: Fastgreedy, Infomap, Walktrap, and Multilevel for various percentages of genes in each pathway (top axis) and percentages of additional genes (right side axis) for simulations using 4 random pathways.
All methods other than CtrAll usually miss some portion of the true positives with 4 seeded pathways (Figure 4). In general, Walktrap, Infomap, and Multilevel tend to have greater variability in the number of pathways missed compared to CtrAll and Fastgreedy. It is not surprising that the community detection and CtrM methods have higher proportions of false negatives than CtrAll, since they were designed to reduce false positives. Thus, a traditional enrichment approach may be more appropriate in situations where false negatives are a greater concern, such as when investigating a relatively small gene list or conducting an exploratory analysis.

Fig. 4. Proportions of false negatives in the controls (using all (CtrAll), or only the top 4 (CtrM), statistically significant pathways) and the community detection methods: Fastgreedy, Infomap, Walktrap, and Multilevel for various percentages of genes in each pathway (top axis) and percentages of additional genes (right side axis) for simulations using 4 random pathways.
HGSC Results
To examine the biological applicability of community detection, we independently applied the community detection approach to previously defined, HGSC subtype-specific gene lists for assignments of 2, 3, and 4 subtypes. We previously derived these gene lists from a differential expression analysis across HGSC subtypes that were concordant across different populations 26 . We selected only the top performing algorithms from our simulation study, Walktrap and Multilevel. Applying these methods with PID pathways, we found that most clusters mapped to either Beta1 integrin cell surface interactions or IL12-mediated signaling events (Table 1). The community detection methods were able to separate upregulated and downregulated genes from the same pathway into different communities (Table 1). While many pathways were implicated in the original pathway analysis (see Supplementary Table S6 of Way et al. 2016 26 ), our community detection approach consistently implicated only two distinct pathways for 2-4 subtypes. This did not support our hypothesis that HGSC subtypes are driven by differences across multiple pathways captured in differentially expressed gene lists. HGSC subtypes are known to be primarily characterized by a mesenchymal gene signature and immunoreactivity. Our analysis suggested that up- and down-regulation of beta 1 integrin signaling, and down-regulation of IL12 signaling, primarily define the subtype-specific signatures. However, the lack of PID pathway enrichment in the presence of community structure may indicate novel biological pathways driving subtype separation. Beta 1 integrin signaling is a well characterized indicator of metastasis 31 and its high expression is associated with poor survival in ovarian cancer patients 32 . IL12 is an important immune system process with many coordinated functions 33 . Importantly, administration of intraperitoneal IL12 is being explored as a therapeutic agent in ovarian cancer 34 .
The community detection approach pointed to specific HGSC subtypes that were aligned with this characterization, but did not identify multiple pathways for any specific subtype. We often observed that pathways that were highly expressed for one subtype would be underexpressed for another, which was consistent with a model in which HGSC subtypes exist along a continuum of underlying pathway or cell type content. These results are also generally consistent with those found previously 27,28,35,36 .

Table 1. The statistically significantly enriched pathways found by Walktrap and Multilevel community detection methods and the number of genes in each pathway that are either upregulated (more highly expressed) or downregulated (less expressed) in HGSC 26 . We identified statistically significant pathways in communities defined by only k = 4 in cluster 1 (k4c1), cluster 2 (k4c2), and cluster 4 (k4c4). The id number of the enriched community is also provided. Clusters 1, 2, 3 and 4 correspond to mesenchymal, proliferative, immunoreactive, and differentiated subtypes as previously defined by TCGA 27 .
Conclusion
In summary, we developed an alternative enrichment method that uses community detection to group genes based on network connectivity prior to enrichment analyses. This approach is designed for situations where a researcher hypothesizes that multiple pathways contribute to a gene set. It trades an increase in false negatives for a dramatic reduction in false positives. The standard enrichment approach may be more appropriate in exploratory stages of research when high power is more desired than false positive control. Applying this method to gene sets that characterize HGSC subtypes did not reveal multiple pathways underlying any of the previously described subtypes. These results are consistent with a model where factors other than the activity of multiple pathways are responsible for the difficult to discern HGSC subtypes.
Recovery of Polyphenolic Fraction from Arabica Coffee Pulp and Its Antifungal Applications
Coffee pulp is one of the most underutilised by-products from coffee processing. For coffee growers, disposing of this agro-industrial biomass has become one of the most difficult challenges. This study utilised this potential biomass as raw material for polyphenolic antifungal agents. First, the proportion of biomass was obtained from the Arabica green bean processing. The yield of by-products was recorded, and the high-potency biomass was serially extracted with organic solvents for the polyphenol fraction. Quantification of the polyphenols was performed by High Performance Liquid Chromatography (HPLC), then further confirmed by mass spectrometry modes of the liquid chromatography–quadrupole time-of-flight (QTOF). Then, the fraction was used to test antifungal activities against Alternaria brassicicola, Pestalotiopsis sp. and Paramyrothecium breviseta. The results illustrated that caffeic acid and epigallocatechin gallate represented in the polyphenol fraction actively inhibited these fungi with an inhibitory concentration (IC50) of 0.09, 0.31 and 0.14, respectively. This study is also the first report on the alternative use of natural biocontrol agent of P. breviseta, the pathogen causing leaf spot in the Arabica coffee.
Introduction
Coffee is a beverage cash crop widely cultivated in the tropics, especially in the Americas, Africa, and Asia, with global production reaching 10.5 million tons per year [1]. Coffea arabica L. is cultivated mainly for its premium quality; its selling price is twice that of other varieties [2,3]. As such, the production volume of Arabica coffee is up to 60-65% of the total, and demand is expected to increase by ca. 10% per year [4,5]. During coffee processing, biological losses account for up to 40-45% and include pulp, husk, parchment, silver skin and spent coffee grounds [6]. Coffee pulp weighs as much as 29% of the total dried weight; it is disposed of as waste and has become a source of environmental pollution that incurs high management costs [7]. Therefore, considering its availability at practically low cost, attempts have been made to add value to this agro-industrial biomass through the recovery of bioactive components [6][7][8][9][10].
The coffee pulp contains carbohydrates, proteins, fibres, fats, and antioxidants such as phenolic compounds, chlorogenic acid, epicatechin [11,12] and caffeine [13]. Plant phenolic compounds are known as effective inhibitors of pathogenic fungi [14][15][16]. The possible modes of action include, but are not limited to, toxic effects, induction of cell apoptosis, inhibition of hyphal development, inhibition of biofilm formation and disruption of cell membrane integrity [14][15][16]. However, only a few studies have examined the use of coffee pulp extract against plant pathogenic fungi. Presently, Alternaria brassicicola is the major pest causing pre- and postharvest diseases of various vegetables such as cabbages and kales [17,18]. Pestalotiopsis sp. induces postharvest diseases in tropical fruits and cut flowers and a new type of leaf fall of para rubber [19][20][21][22]. Furthermore, Paramyrothecium breviseta was recently isolated from leaf spot of the Arabica coffee [23][24][25]. To fill the gap mentioned above and to increase the value of the coffee pulp biomass, the objectives of this research are to investigate the types of biomass and losses during Arabica coffee processing, to serially extract the Arabica coffee pulp and analyse its chemical composition, and to use the resulting fraction as an antifungal agent against plant pathogens. The outcome of this study would ideally support the global sustainable development goals (SDGs) by providing an alternative way to minimise the volume of non-renewable materials, thereby reducing the cost of management and the carbon footprint attached to coffee production.
The Phytochemical Profiles of Coffee Pulp Powder
From the initial processing step, the data indicated that high-quality green bean yielded only 13% of the total harvest weight, while the remainder consisted of processing by-products. The major by-products were pulp (54%), immature fruit (12%) and green fruit (5%) (Figure 1). In Table 1, carbohydrate was the major constituent, followed by crude fibre, crude protein, crude fat and ash, respectively. The phytochemical compositions were not apparently different before and after extraction, except for the contents of crude fat and crude fibre. Carbohydrate is the dominant nutrient in coffee pulp, accounting for 35.0-66.0% [26]. The carbohydrate fraction comprised reducing sugars (5.4%) and a high content of pectin (20.5%) [27]. Pectin was possibly separated from the pulp and precipitated from other components by methanol; thus a slight reduction in its content was observed. The crude fibre content of the coffee pulp powder was ~18% and remained stable before and after extraction. In other works, dried coffee pulp obtained as the residue of wet processing is widely used as a source of dietary fibre (33.6%) [28,29]. Dietary fibre of the soluble carbohydrate type runs off during the extraction process, particularly when a highly polar solvent is used [30].
The protein content of the coffee pulp before and after methanolic extraction was ~12%, in line with the studies of Ameca et al. [31] and Setyobudi et al. [26]. The elevated protein content of the pulp may involve enzyme excretion (polygalacturonase, pectin methylesterase and galactosidase) for pectin modification during the maturity stage [32].
The reduction of crude fat content from the BCF (4.1%) to the ACF (1.2%) was notably distinct. The major fat component in coffee is unsaturated fatty acids (such as linoleic acid) [33], which dissolve readily in methanol, on the principle of "like dissolves like" [34]. The content of crude fat was somewhat low compared to the other constituents; other studies have reported fat contents in the range of 0.8-7.0% [26]. Mostly, fat in coffee is in the form of polyunsaturated and saturated fatty acids [33]. Ash content in coffee pulp was the lowest, ~0.30%, in both types of raw material. These values were much lower than the 7.0% reported by Ameca et al. [31] and Figueroa and Mendoza [35]. Generally, ash refers to minerals, including both micro and macronutrients [36].
The yield of the extract obtained from methanolic extraction (6.6%) was higher than that from dichloromethane extraction (2.3%). Methanol has greater polarity than dichloromethane, and the major important substances in the pulp are polar, especially the polyphenols [37]; it is thus evident that the recovery of the phenolic compounds depended on the solvent used and its polarity [38]. Polyphenols are often soluble in organic solvents that are less polar than water. Effective extraction of plant material depends on the choice of solvent, extraction temperature, and mechanical agitation to maximise polyphenol recovery [39].
Polyphenols, secondary metabolites, are categorised into two classes: flavonoids and phenolic acids [40]. The amount of total flavonoids recovered from the coffee pulp extract was approximately twice that of the total phenolic contents, as measured against different standards. These contents were, however, not detected in the dichloromethane fraction (Table 1). This was comparable to Delgado et al. [41]. Rodríguez-Carpena et al. [42] described that the recovery of polyphenols from plant materials is affected by the solubility of the phenolic compounds in the solvent of choice. Consequently, solvent polarity plays a key role in increasing phenolic solubility [43]. Geremu et al. [37] reported that the concentration of total polyphenols was highest when methanol was used, followed by acetone and ethanol, respectively. Therefore, in subsequent studies, only the methanol fraction was used.
The antioxidant activities of the extracts obtained from methanol and dichloromethane were also evaluated using DPPH and ABTS assays. The methanol extract provided greater efficiency in both assays than dichloromethane. The same result was reported by Geremu et al. [37], in which the methanolic extract gave the highest scavenging activity (17.3-70.2%) compared with the ethanol and acetone extracts. This corresponds with the high polyphenol content, which is able to scavenge free radicals with its hydroxyl groups. The finding is in agreement with Haifeng et al. [44], who reported higher polyphenol content along with high antioxidant activity. Therefore, the polyphenol content of plants may contribute directly to their antioxidant potency [45]. Table 2 illustrates the polyphenol compositions (flavonoid and phenolic compounds) in the coffee pulp. Among all compounds, epigallocatechin gallate (32.0 mg/g extract) and caffeic acid (68.0 mg/g extract) were dominant for the flavonoids and non-flavonoids, respectively. The flavonoids are further classified into flavones (apigenin), flavanones (naringenin), flavonols (quercetin), flavanols (catechin) and isoflavones (daidzein), while the phenolic acids are grouped into hydroxybenzoic (gallic acid, protocatechuic acid) and hydroxycinnamic acids (coumaric acid, caffeic acid) [46]. The content of non-flavonoids (caffeic acid, caffeine, p-coumaric acid, rosmarinic acid, o-coumaric acid, quercetin, gallic acid) was greater than that of the flavonoids (epigallocatechin gallate, naringenin, epicatechin gallate, catechin, gallocatechin gallate). The highest-content flavonoid and non-flavonoid compounds were epigallocatechin gallate (31.8%) and caffeic acid (68.1%), respectively. Heeger et al. [47] reported that chlorogenic acid, gallic acid, protocatechuic acid and rutin are the most prominent compounds identified in coffee pulp extracts (>80.0% of polyphenol content).
The presence of chlorogenic acid (42.2%), epicatechin (21.6%), rutin (2.1%), catechin (2.2%), ferulic acid (1.0%) and protocatechuic acid (1.6%) has also been reported in pulp extracted with 80.0% methanol [48]. The presence of the polyphenols was confirmed in both LC-ESI+ and LC-ESI− scanning modes, following the protocol established in our previous work (Table 3) [49]. Of the 12 compounds quantified from the HPLC standard curves of the polyphenols and catechins, only quercetin, gallic acid and caffeine, with m/z values of 361.0, 171.0 and 195.0, respectively, could be confirmed by QTOF-MS, possibly owing to the impurity of the crude methanolic fraction. These compounds were selected at a minimum matching score of 80%. The chemical structures of the confirmed compounds are illustrated in Figure 2. Low spectrometric signal was also a problem for the crude methanolic extract in our previous study [50]; we therefore recommend a purification step before further structure elucidation. (In Table 3, "-" denotes compounds not detectable by quadrupole time-of-flight mass spectrometry (QTOF-MS).)
Antifungal Activities
The results of the antifungal bioassay of the crude methanolic coffee pulp extract at different concentrations are given in Table 4, and their comparative effectiveness is shown in Figure 3. At a 0.5% concentration, the methanolic extract inhibited growth by 78.0% against P. breviseta, 71.0% against A. brassicicola, and least (62.0%) against Pestalotiopsis sp. These results show that the coffee pulp methanolic extract exhibits antifungal activity in vitro and, more importantly, that inhibition increases with extract concentration [51]. Gupta et al. [51] reported that two botanical extracts, Azadirachta indica and Capsicum annuum, were highly effective against A. brassicicola at both 15.0% and 25.0% concentrations. [Table of spore and mycelium morphology (length, width, cell wall thickness) of the three fungi across extract concentrations; n/a = not available. Data are expressed as mean ± standard error, n = 30; values followed by different letter(s) in the same parameter within the same pathogen are significantly different (p < 0.05).]
The concentrations inhibiting 50.0% of the mycelial growth of A. brassicicola, Pestalotiopsis sp. and P. breviseta were 0.09, 0.31 and 0.14 g/mL, respectively (Table 4). Chen et al. [52] reported that a crude extract inhibited conidial germination of A. solani at a minimum concentration of 0.19 mg after a 72 h incubation period, although the chemical profile of that extract was not reported. Phytochemical extracts are known to exhibit antimicrobial activity owing to their wide variety of secondary metabolites [49,53], so crude plant extracts are promising candidates for antifungal agents. Pestalotiopsis spp. can cause fruit rot in Chinese olives [54,55]. In this study, Pestalotiopsis sp. was isolated from leaf spot disease of rose apple, yet it required the highest effective dose of the coffee pulp extract. We believe that, as an endophytic pathogen, it may tolerate natural products synthesised within its host plants, thereby requiring a higher concentration for inhibition; such a mechanism has been described previously [52]. Finally, to our knowledge, no antifungal control of P. breviseta has previously been documented.
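As a rough illustration of how a 50% inhibitory concentration like those above can be derived, the sketch below interpolates linearly between the two bracketing dose-response points on a log10 concentration scale. The dose-response values are hypothetical, not the measured data from Table 4.

```python
import math

def ic50_log_interpolation(concs, inhibitions, target=50.0):
    """Estimate the concentration giving `target` % inhibition by linear
    interpolation between the two bracketing points on a log10 scale."""
    pairs = sorted(zip(concs, inhibitions))
    for (c_lo, i_lo), (c_hi, i_hi) in zip(pairs, pairs[1:]):
        if i_lo <= target <= i_hi:
            frac = (target - i_lo) / (i_hi - i_lo)
            log_c = math.log10(c_lo) + frac * (math.log10(c_hi) - math.log10(c_lo))
            return 10 ** log_c
    raise ValueError("target inhibition not bracketed by the data")

# Hypothetical dose-response points (g/mL vs % inhibition)
concs = [0.01, 0.03, 0.05, 0.1, 0.5]
inhib = [20.0, 35.0, 45.0, 60.0, 78.0]
print(round(ic50_log_interpolation(concs, inhib), 3))  # → 0.063
```

A sigmoidal (e.g. four-parameter logistic) fit would be more rigorous when more dose levels are available; log-linear interpolation is a quick first approximation.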
The spore counts of A. brassicicola and P. breviseta decreased with increasing extract concentration in the poisoned food up to the maximum concentration tested (Figure 4). Fungal sporulation can be induced by environmental stresses, including nutrient deprivation, resistance of the host tissues and UV light [56-59]. Plant natural products can trigger fungal cell wall stress, thereby inducing sporulation while inhibiting fungal growth [60]. The extract concentration in the potato dextrose agar, however, did not affect the spore morphology (width and length) of P. breviseta, whereas morphological alteration was observed in A. brassicicola at 0.1 mg/mL. Mycelial morphology showed little change in the poisoned food, although light microscopy may not resolve subtle changes; examination of aberrant mycelium morphology with a scanning electron microscope (SEM) is warranted.
Chemometric Relations
Chemometric multivariate analysis has been used in many studies to relate bioactive ingredients to biological activities [53-55]. The score plot relating the polyphenols to the biological activities is illustrated in Figure 5. The variation was well distributed across the plot, with PC1 and PC2 accounting for 96.4% and 2.0%, respectively. The biological activities, i.e., the antioxidant activities (by both the DPPH and ABTS assays) and the antifungal activities against all fungal strains, clustered together (marked in yellow and blue) and projected closely to caffeic acid and epigallocatechin gallate. In contrast, the total phenol and flavonoid contents, along with the other polyphenols, projected at the opposite end of the score plot. We therefore infer that caffeic acid and epigallocatechin gallate were responsible for the antifungal properties. Caffeic acid induces pathogenic lipolytic enzymes that disintegrate fungal cell membranes, causing cell leakage, and can also directly inhibit protein synthesis in pathogen cells [51,56]. Accordingly, caffeic acid and its derivatives have been used against many plant-fungal pathogens, such as Aspergillus niger, Fusarium graminearum and A. alternata, which cause food spoilage, post-harvest browning and head blight [57,58]. Epigallocatechin gallate also interferes with fungal cell wall integrity by binding directly to peptidoglycan and shows affinity for various cell wall components [59]. These coffee polyphenols can further inhibit mycelial growth by inducing H2O2 production, leading to lipid peroxidation and the leakage of K+, soluble protein and soluble sugars responsible for increased cell membrane permeability [60].
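The score plot described above comes from a principal component analysis. As a minimal illustration of the underlying computation, the sketch below performs PCA via singular value decomposition on column-standardised data using NumPy alone; the input matrix is hypothetical and this is not the XLSTAT workflow used in the study.

```python
import numpy as np

def pca(X, n_components=2):
    """Minimal PCA via SVD on column-standardised data.

    Returns per-sample scores, per-variable loadings, and the fraction of
    total variance explained by each retained component."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]
    loadings = Vt[:n_components].T          # variable projections on PC axes
    explained = (S ** 2) / (S ** 2).sum()   # variance share per component
    return scores, loadings, explained[:n_components]

# Hypothetical matrix: rows = extract batches; columns = caffeic acid, EGCG,
# total phenols, DPPH scavenging %, and % inhibition of A. brassicicola
X = np.array([
    [68.0, 32.0, 15.2, 70.1, 71.0],
    [60.5, 28.4, 14.1, 64.3, 65.2],
    [72.3, 35.1, 16.0, 74.8, 75.9],
    [55.0, 25.0, 12.9, 58.7, 60.1],
])
scores, loadings, explained = pca(X)
```

Variables whose loadings point in the same direction as the activity variables (here, by construction, all of them) cluster together on the biplot, which is the pattern used above to single out caffeic acid and epigallocatechin gallate.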
Raw Material
Fruits of Arabica coffee (C. arabica L.) at the commercial harvesting stage (~80% red overall skin colour) were hand-picked from Khun Changkhian Highland Agricultural Research and Training Station, Faculty of Agriculture, Chiang Mai University (18.840093525158775, 98.89823401100753) in January 2021. The coffee cherries were transported to the processing site immediately after harvest and cleaned by floating in tap water prior to wet processing [61]; this step also eliminated defective and green fruits. Losses (%) were recorded as biomass weight relative to the total mass of raw material and comprised floating and green fruits, pulp, wet parchment and parchment. The coffee pulp was then dehydrated by sun-drying for 2 weeks, followed by hot-air drying until a constant moisture content (4.0-6.0%) was reached. The dried material was ground to a fine powder using a high-speed food processor for subsequent use [62].
Microorganisms
Alternaria brassicicola (CRC152), Pestalotiopsis sp. (CRC151) and Paramyrothecium breviseta (CRC12), obtained from the fungal collections of the Department of Entomology and Plant Pathology, Faculty of Agriculture, Chiang Mai University, were used for the antifungal activity tests. They were originally isolated from lettuce (Brassica rapa subsp. pekinensis), oil palm (Elaeis guineensis L.) and coffee (C. arabica L.), respectively [63]. Morphological characteristics of the fungi were identified using a Stemi 305 Zeiss stereo microscope and an Axiovision Zeiss Scope-A1 microscope. The isolates were submerged in 10.0% glycerol and kept at 4 °C before use. All isolates were activated on potato dextrose agar (PDA) and incubated at room temperature (28 ± 2 °C) for 7 days before use [64].
Proximate Analyses
Coffee pulp powder was analysed for proximate composition according to the methods of the Association of Official Analytical Chemists (AOAC, 2000) [62]. Crude fibre content was determined using a fibre analyser (Fibertherm FT12, Gerhardt, Germany).
Polyphenolic Fractions
The extraction followed the serial extraction described by Wisetkomolmat et al. [65] with some modifications. One hundred grams of coffee pulp (CF) was first extracted with 400 mL of 95% dichloromethane for 24 h at room temperature to remove compounds of low polarity such as fatty acids. The extract was separated through Whatman No. 1 filter paper, and the filtrate was concentrated using a rotary evaporator at 40.0 °C to give the crude dichloromethane fraction. The remaining CF was then extracted with 400 mL of 80% methanol, filtered, and concentrated to give the crude methanol fraction. The yields of both fractions were recorded. For comparison, the proximate composition of the biomass after serial extraction was also examined.
Total Phenolic Content
As described by Sunanta et al. [66], the extract (30 µL) was mixed with 60 µL of Folin-Ciocalteu reagent, neutralised with 210 µL of saturated 6.0% w/v sodium bicarbonate, and kept at room temperature in darkness for 2 h. The absorbance was read at 725 nm with a UV-Vis spectrophotometer (SPECTROstar, BMG LABTECH, Offenburg, Germany). The calibration standard was prepared from different concentrations of gallic acid (10-200 mg/mL), and the total phenolic content was expressed as milligrams of gallic acid equivalents per gram of dried sample.
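For illustration, converting a sample absorbance to gallic acid equivalents via the linear calibration curve described above could look like the following sketch; the standard-curve readings are hypothetical.

```python
import numpy as np

def gae_from_absorbance(std_conc, std_abs, sample_abs, dilution=1.0):
    """Fit a linear calibration (A = m*c + b) to gallic acid standards,
    then convert a sample absorbance to gallic acid equivalents."""
    m, b = np.polyfit(std_conc, std_abs, 1)
    return (sample_abs - b) / m * dilution

# Hypothetical standard curve (gallic acid concentration vs A725)
conc = [10, 50, 100, 150, 200]
a725 = [0.05, 0.25, 0.50, 0.75, 1.00]
print(round(gae_from_absorbance(conc, a725, 0.40), 1))  # → 80.0
```

In practice the result would then be normalised by the extract mass to report mg GAE per gram of dried sample, as in the text.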
Total Flavonoid Content
The total flavonoid content was determined using the modified method of Sunanta et al. [66]. The methanolic crude extract (25 µL) was mixed with 125 µL of distilled water, and 7.5 µL of 5.0% NaNO2 solution was added. The mixture was left at room temperature for 5 min, after which 15 µL of 10.0% AlCl3·6H2O was added and incubated for 6 min. Then, 50 µL of 1 M NaOH and 27.5 µL of distilled water were added. The absorbance of the test solution was measured at 510 nm using the UV-Vis spectrophotometer. The catechin calibration standard was prepared at different concentrations (30-300 mg/mL), and the total flavonoid content was expressed as milligrams of catechin equivalents per gram of dried sample.
Antioxidant Activities

DPPH• Radical Scavenging Activity

The free radical-scavenging activity was determined using the method described by Sunanta et al. [66]. Twenty-five microliters of the extract were mixed with 250 µL of 0.2 mM DPPH (2,2-diphenyl-1-picrylhydrazyl) and incubated in darkness at room temperature for 30 min. The absorbance was measured at 510 nm using the UV-Vis spectrophotometer. The DPPH radical scavenging was calculated using the following equation:

DPPH radical scavenging activity (%) = [(Abs control − Abs sample)/Abs control] × 100 (1)

where Abs control is the absorbance of the DPPH radical mixed with methanol and Abs sample is the absorbance of the DPPH radical reacted with the sample extract/standard.
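The percent-scavenging formula above is straightforward to compute; a minimal sketch with hypothetical absorbance readings:

```python
def scavenging_activity(abs_control, abs_sample):
    """% radical scavenging: 100 * (A_control - A_sample) / A_control."""
    return (abs_control - abs_sample) / abs_control * 100.0

# Hypothetical readings: control A510 = 0.80, sample A510 = 0.24
print(scavenging_activity(0.80, 0.24))  # → 70.0
```

The same expression applies to the ABTS assay below, with the control read against 80% methanol instead.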
Total Antioxidant Activity by ABTS •+ Radical Cation Decolourization Assay
For the ABTS [2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid)] assay, the method of Adedapo et al. [67] was slightly modified. The working solution was prepared by mixing equal quantities of two stock solutions, 7.0 mM ABTS and 2.45 mM potassium persulfate, and allowing them to react in darkness at room temperature for 12-16 h. The ABTS solution was then diluted (1.0 mL in 60.0 mL of 80.0% methanol) to an absorbance of 0.7 ± 0.02 at 734 nm. Thereafter, 10 µL of methanol extract and 200 µL of ABTS working solution were pipetted into a microplate well, shaken and incubated at room temperature for 30 min, and the absorbance was read at 734 nm. The ABTS scavenging capacity of the extract was calculated by the following equation:

ABTS radical scavenging activity (%) = [(Abs control − Abs sample)/Abs control] × 100 (2)

where Abs control is the absorbance of the ABTS radical mixed with 80% methanol and Abs sample is the absorbance of the ABTS radical reacted with the sample extract/standard.
Antifungal Activities
The methanol extract was dissolved in 95% methanol at different concentrations (0.01, 0.03, 0.05, 0.1 and 0.5 g/mL). For the poisoned food, 1 mL of each sample was incorporated into sterilised potato dextrose agar (PDA) and poured into a petri dish. An active mycelial plug (6 mm diameter) from a 7-day culture was placed at the centre of the medium and incubated at room temperature (28 ± 2 °C) for 7 days. Mycelial growth was measured and compared with a negative control (without extract supplementation). Images of spores and mycelium were taken with a Canon 6D camera connected to an Axiovision Zeiss Scope-A1 microscope, and all measurements were made using the Tarosoft® Image Framework program v.0.9.0.7. The percentage of growth inhibition of the fungi was calculated using the following equation:

Growth inhibition (%) = [(R1 − R2)/R1] × 100 (3)

where R1 is the colony radius in the control plate and R2 is the radial growth of the pathogen in the presence of the plant extract.
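The radial-growth inhibition calculation, 100 × (R1 − R2)/R1, can be expressed as a small helper; the colony radii below are hypothetical.

```python
def growth_inhibition(r_control_mm, r_treated_mm):
    """Percent inhibition of radial mycelial growth in a poisoned-food
    assay: 100 * (R1 - R2) / R1."""
    if r_treated_mm > r_control_mm:
        raise ValueError("treated colony larger than control")
    return (r_control_mm - r_treated_mm) / r_control_mm * 100.0

# Hypothetical radii: control colony 40.0 mm, treated colony 8.8 mm
print(growth_inhibition(40.0, 8.8))
```

When working from colony diameters rather than radii, the plug diameter (6 mm here) should be subtracted from both measurements first.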
Quantitative Analysis of Phenolic and Flavonoid
The methanol fraction was dissolved in 95.0% methanol to a final concentration of 1 mg/mL and analysed for polyphenol content by high-performance liquid chromatography (HPLC) (Shimadzu, Kyoto, Japan) with an automatic injector (SIL-20ACHT), diode array detector (CTO-20AC), pump (LC-20AD) and system controller (CBM-20A). Two running conditions were used. The first followed the modified method of Lux et al. [68]: reverse-phase chromatography on an Ultra Aqueous C18 column (250 × 4.6 mm, 5 µm) (RESTEK, Bellefonte, PA, USA). The mobile phase consisted of mixture A (formic acid and distilled water, 5:95) and mixture B (acetonitrile (ACN), formic acid and distilled water, 85:50:10). Gradient elution ran at a flow rate of 1 mL/min with an injection volume of 10 µL; the initial condition of 80% A was held for 4 min, decreased to 25% A over 8 min, held at 25% A for 2 min, increased to 70% A over 3 min, and returned to 95% A in 1 min, for a total run time of 18 min per sample. The second condition followed Wang et al. [69] with slight modification, using a Platinum™ C18-EPS Rocket™ column (53 × 7 mm, 3 µm) (Alltech®, Missouri, United States) for reverse-phase chromatography with a mobile phase of acetonitrile and water (13:87) and a 10 µL injection volume. Compounds were monitored at 280 nm at a flow rate of 1 mL/min, and chromatograms were recorded by photodiode array detection at 280 nm. All determinations were performed in triplicate. Calibration standards were prepared by serial dilution of the different polyphenols to concentrations between 25 and 50 µg/mL (Figures S1 and S2).
Characterisation of the Methanolic Fraction on Quadrupole Time-of-Flight Mass Spectrometer (QTOF-MS)
The presence of polyphenolics in the methanolic fraction of CF was confirmed by QTOF-MS coupled with a ZORBAX Eclipse Plus C18 column (2.1 × 150 mm, 1.8 µm) and a UV-Vis detector (Agilent Tech., Santa Clara, CA, USA). Sample preparation and clean-up followed Arjin et al. [49]. The instrument settings were specific to polyphenol detection: UV at 330 nm, a flow rate of 0.2 mL/min, and an injection volume of 10 µL. The mobile phase gradient comprised 5% ACN and 95% water (1% formic acid), rising to 20% ACN in 5 min, 30% ACN in 5 min, 35% ACN in 5 min, 45% ACN in 5 min, 75% ACN in 5 min, and 95% ACN until the end of the run. The MS conditions involved an electrospray ionization probe in positive and negative modes [70]. The nebulizer was operated at 20 psi with a 7 L/min N2 flow. The capillary temperature was maintained at 300 °C, the sample flow rate was 8 µL/min, the m/z range was 50-1000, the capillary voltage was 4500 V, and the dry heater temperature was set at 280 °C.
Chemometric and Statistical Analyses
All experiments were performed at least in triplicate. Differences in the antifungal activities of the CF extract fractions were analysed using one-way analysis of variance followed by Duncan's Multiple Range Test. All statistical analyses were performed with SPSS 23.0 (SPSS Inc., Chicago, IL, USA), and a p-value < 0.05 was considered statistically significant. The relationships between polyphenol composition, antioxidant activity and antifungal activity were analysed by Principal Component Analysis (PCA) using XLSTAT version 2020.
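For reference, the F statistic behind a one-way ANOVA like the one computed in SPSS can be calculated directly. The sketch below uses made-up replicate groups; SPSS performs the equivalent calculation plus the Duncan post-hoc comparisons.

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across `groups` (lists of replicates):
    (between-group mean square) / (within-group mean square)."""
    k = len(groups)                              # number of treatment groups
    n = sum(len(g) for g in groups)              # total observations
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical triplicate inhibition readings for three extract concentrations
print(one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]]))  # → 3.0
```

The resulting F would then be compared against the F distribution with (k − 1, n − k) degrees of freedom to obtain the p-value.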
Conclusions
The polyphenols from Arabica coffee pulp powder possess high inhibitory activity against the horticultural pathogens A. brassicicola, Pestalotiopsis sp. and P. breviseta. Notably, this study is the first to use a plant extract to control leaf spot pathogens in coffee. It offers an alternative way to add value to coffee by-products in support of sustainable food production.
Asthma and atopic dermatitis as risk factors for rheumatoid arthritis: a bidirectional mendelian randomization study
Background Previous observational studies have shown an association between asthma, atopic dermatitis (AD) and rheumatoid arthritis (RA). However, a bidirectional cause-effect relationship between asthma or AD and RA has not yet been established. Methods We performed bidirectional two-sample Mendelian randomization (TSMR), selecting single nucleotide polymorphisms (SNPs) associated with asthma, AD, and RA as instrumental variables. All SNPs were obtained from the latest genome-wide association studies in Europeans. Inverse variance weighting (IVW) was the main method used in the MR analysis; MR-Egger, weighted mode, simple mode, and weighted median methods were used for quality control, and sensitivity analyses tested the robustness of the results. Results Asthma had the largest effect size for RA susceptibility by the IVW method (OR 1.35; 95% CI 1.13-1.60; P = 0.001), followed by AD (OR 1.10; 95% CI 1.02-1.19; P = 0.019). In contrast, there was no causal relationship between RA and asthma (IVW: P = 0.673) or AD (IVW: P = 0.342). No pleiotropy or heterogeneity was found in the sensitivity analyses. Conclusion These findings support a causal relationship between genetic susceptibility to asthma or AD and an increased risk of RA, but not between genetic susceptibility to RA and asthma or AD. Supplementary Information The online version contains supplementary material available at 10.1186/s12920-023-01461-7.
Introduction
Rheumatoid arthritis (RA) is a systemic autoimmune disease characterized by joint inflammation and destruction, followed by systemic inflammation. An estimated 0.4-1.3% of the world's population is affected, with an incidence in women 2 to 4 times higher than in men [1]. RA not only impairs patients' quality of life but also imposes substantial economic losses on society [2,3]. RA is caused by multiple factors involving a complex interaction between genetic predisposition and environmental triggers. Smoking has been implicated as a major risk factor for RA through the production of anti-citrullinated peptide antibodies (ACPA) [4,5]. Moreover, the lifetime risk of RA is significantly higher for individuals who carry certain human leukocyte antigen (HLA) alleles [6].
To prevent the occurrence and development of RA, other risk factors, such as allergic diseases, are receiving increasing attention. An observational study has shown that patients with allergic diseases, particularly asthma and allergic rhinitis, are at significantly increased risk of RA [7]. However, a cross-sectional study noted a higher prevalence of RA in non-asthmatic than in asthmatic patients [8]. Similarly, Lu et al. [9] reported that atopic dermatitis (AD) increases the risk of developing RA, whereas Hilliquin et al. [10] suggested that atopic reactivity may protect against the development of RA. Several reports have likewise suggested that patients with RA have less atopic disease than normal controls [11-13], while Kero et al. [14] reported a significantly higher cumulative incidence of asthma in children with RA than in non-RA children, and another study found RA associated with an increased risk of allergic diseases such as asthma and allergic rhinitis [15]. Given these conflicting results and the limitations of observational studies, further research is needed to determine this relationship. Moreover, studies examining the link between allergic diseases and RA at the genetic level have yet to emerge.
Mendelian randomization (MR), an epidemiological method, has been widely applied to evaluate potential causal associations between exposures and outcomes [16,17]. By using genetic variants as instrumental variables (IVs), MR analyses can minimize reverse causation and the effects of confounders [18,19]. Notably, the large number of recent genome-wide association study (GWAS) datasets on allergic diseases [20] and RA allows MR analysis of the causal relationships among these diseases [21-23]. Therefore, to address the limitations of current studies, we performed a bidirectional TSMR analysis to examine whether there is a causal association between allergic diseases (asthma and AD) and RA, and to determine the direction of causality.
Study design overview
We executed a bidirectional TSMR analysis to assess the bidirectional causal effects between asthma or AD and RA. Previously identified genetic variants (single-nucleotide polymorphisms, SNPs) from published data or the Genome Reference Consortium were used to estimate the potential causal effects of exposures on outcomes. Valid MR analyses rest on three key assumptions: (1) the genetic variants are strongly associated with the studied exposures; (2) the genetic variants are independent of any known confounders of the exposure-outcome relationship; and (3) the genetic variants affect the outcomes only through the exposures. The relationships between the exposures and outcomes are illustrated in Fig. 1.
Date sources
The summary statistics of the GWAS for asthma, AD, and RA were obtained from the IEU OpenGWAS (https://gwas.mrcieu.ac.uk/). There are 56,167 cases and 352,255 controls for asthma; 7,024 cases and 198,740 controls for AD; and 2,843 cases and 5,540 controls for RA. All cases were confirmed by clinical laboratory testing, physician diagnosis, or self-report. To reduce outcome bias from race-related confounders, the study was limited to the European population. Table S1 shows detailed information on the data used.

Fig. 1 The rationale of Mendelian randomization: 1 represents the instrumental variables (IVs), which are strongly associated with the exposure; 2 indicates that the IVs must influence the outcome only through the exposure; 3 shows that the IVs must not be associated with confounders.
Instrumental variables
At the beginning of our study design, we selected appropriate SNPs as IVs, which had to be robustly associated with the exposures (P < 5 × 10⁻⁸). SNPs were restricted to low linkage disequilibrium (LD, r² < 0.01 within 5,000 kb) using clumping. In addition, we excluded palindromic SNPs whose minor allele frequency (MAF) in the outcome data was less than 0.01.
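The instrument-selection filters described above might be sketched as follows, using hypothetical SNP records; the LD-clumping step (r² < 0.01 within 5,000 kb), which requires a reference panel, is omitted for brevity.

```python
GWAS_P_THRESHOLD = 5e-8
PALINDROMIC = {("A", "T"), ("T", "A"), ("C", "G"), ("G", "C")}

def select_instruments(snps, maf_min=0.01):
    """Keep genome-wide significant SNPs; drop palindromic SNPs whose
    outcome minor allele frequency is below `maf_min`."""
    kept = []
    for s in snps:
        if s["pval"] >= GWAS_P_THRESHOLD:
            continue  # not robustly associated with the exposure
        if (s["ea"], s["oa"]) in PALINDROMIC and s["maf"] < maf_min:
            continue  # strand-ambiguous with a rare minor allele
        kept.append(s["rsid"])
    return kept

# Hypothetical records: rsid, exposure p-value, effect/other allele, outcome MAF
demo = [
    {"rsid": "rs1", "pval": 1e-9, "ea": "A", "oa": "G", "maf": 0.20},
    {"rsid": "rs2", "pval": 1e-6, "ea": "A", "oa": "G", "maf": 0.20},
    {"rsid": "rs3", "pval": 1e-9, "ea": "A", "oa": "T", "maf": 0.005},
]
print(select_instruments(demo))  # → ['rs1']
```

In the actual analysis these steps are handled by the TwoSampleMR workflow (clumping, harmonisation, palindromic-SNP handling); this sketch only illustrates the stated filters.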
Mendelian randomization analyses
All analyses were executed in R version 4.2.0 using the TwoSampleMR package 0.5.6. After selecting appropriate SNPs for each exposure, inverse-variance weighted (IVW) analysis was chosen as the primary approach to evaluate the causal relationship, supplemented by the MR-Egger, weighted median (WM), weighted mode, and simple mode methods. P < 0.05 was regarded as suggestive evidence of a potential association, and odds ratios (OR) and standard errors (SE) were calculated as effect sizes. The IVW method and MR-Egger regression were used to investigate heterogeneity in the results, quantified with Cochran's Q-test [24]. MR-Egger regression was also used to assess pleiotropy, with the intercept term indicating potential horizontal pleiotropy [25].
Meanwhile, we used the "leave-one-out" method to identify any single SNP with a significant independent influence on the MR estimate [26].
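The core IVW estimator used as the primary method combines per-SNP Wald ratios with inverse-variance weights. Below is a minimal fixed-effect sketch of that calculation (illustrative only, not the TwoSampleMR implementation), using hypothetical summary statistics.

```python
import math

def ivw_estimate(beta_exp, beta_out, se_out):
    """Fixed-effect inverse-variance-weighted MR estimate.

    Each SNP contributes a Wald ratio beta_out/beta_exp, weighted by
    beta_exp**2 / se_out**2; returns (beta_ivw, se_ivw, odds_ratio)."""
    num = sum(bx * by / s ** 2 for bx, by, s in zip(beta_exp, beta_out, se_out))
    den = sum(bx ** 2 / s ** 2 for bx, s in zip(beta_exp, se_out))
    beta = num / den
    return beta, math.sqrt(1.0 / den), math.exp(beta)

# Hypothetical per-SNP summary statistics (exposure betas, outcome betas, SEs)
beta, se, odds_ratio = ivw_estimate(
    [0.10, 0.08, 0.12],   # SNP effects on the exposure
    [0.03, 0.02, 0.04],   # SNP effects on the outcome
    [0.01, 0.01, 0.01],   # SEs of the outcome effects
)
```

A random-effects IVW variant inflates the SE when Cochran's Q indicates heterogeneity; the fixed-effect form shown here is the simplest case.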
Strength of genetic instruments
Based on the above criteria, 53 SNPs associated with asthma, 25 SNPs associated with AD, and 40 SNPs associated with RA were used as instrumental variables in the subsequent analyses. All of these IVs had F-values greater than 10, indicating that weak-instrument bias was unlikely to affect the assessment of causal effects (Table S2).
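The per-SNP instrument-strength check can be approximated as F ≈ (β/se)², with F > 10 as the conventional rule of thumb against weak-instrument bias; a tiny sketch with hypothetical effect sizes:

```python
def instrument_f(beta, se):
    """Approximate per-SNP F statistic, (beta/se)**2.

    F > 10 is the usual rule of thumb for a sufficiently strong instrument."""
    return (beta / se) ** 2

# Hypothetical SNP-exposure estimates
print(instrument_f(0.05, 0.009) > 10)  # → True  (strong instrument)
print(instrument_f(0.01, 0.009) > 10)  # → False (weak instrument)
```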
Effect of asthma and AD on RA
The effects of each SNP for asthma and AD on RA are shown in Table 1. The causal estimates for asthma or AD on RA differed among the five MR methods. The IVW, MR-Egger, and WM methods showed a significant association between asthma and RA (IVW, OR = 1.35, 95% CI = 1.13-1.60, P = 0.001; MR-Egger, OR = 2.62, 95% CI = 1.20-5.72, P = 0.020; WM, OR = 1.29, 95% CI = 1.01-1.66, P = 0.042) (Table 1). For AD, the IVW method supported a causal relationship (OR = 1.10, 95% CI = 1.02-1.19, P = 0.019) (Table 1). MR-Egger analysis showed no underlying horizontal pleiotropy (asthma: P = 0.094; AD: P = 0.450) (Table 2), and Cochran's Q test (Table 2) showed no heterogeneity in the effect of asthma or AD on RA. Furthermore, a "leave-one-out" analysis found that no single SNP strongly influenced the overall effect of asthma or AD on RA (Fig. S1).

Effect of RA on asthma and AD

As shown in Table 3, 40 instrumental variables were included in the reverse MR analysis. The IVW method showed no association between RA and asthma (IVW: P = 0.673) or between RA and AD (IVW: P = 0.342). Sensitivity analyses and heterogeneity tests indicated no potential horizontal pleiotropy or significant heterogeneity (Table 4), and the "leave-one-out" sensitivity analysis demonstrated that no single SNP strongly affected the overall effect of RA on asthma or AD (Fig. S2).
Discussion
In the present study, we assessed the bidirectional causal relationship between asthma, AD, and RA. Using a two-sample MR approach with GWAS summary statistics, we found a positive causal relationship of asthma and AD with RA, while reverse MR analysis showed no causal relationship between RA and asthma or AD. These findings suggest that the three diseases may share a similar pathogenesis. Previous epidemiological studies have examined the associations between asthma, AD, and RA. Several case-control studies have identified asthma as a possible risk factor for RA, though with potential recall bias [27-29], and retrospective cohorts have analyzed the overall risk of asthma and RA [7,8,30-32]. Studies using administrative datasets have also reported an association between asthma and RA risk, but lack data on smoking or serological status [30-32]. Indeed, in a large national study in Taiwan based on billing claims data, RA was associated with a 2-fold increased risk of asthma compared to controls [11]. Studies on AD have likewise reached different conclusions: several earlier epidemiological studies showed no association between AD and RA [10,33,34], whereas recent studies point to a correlation and suggest that AD may promote the development of RA [15,29,35]. Our study was based on the largest available GWAS datasets and restricted the population to individuals of European ancestry to avoid bias from small sample sizes or ethnic differences. The results showed that both asthma and AD are related to an increased risk of RA with no reverse causality, suggesting that the previously observed controversial results may be attributable to confounding factors or ethnic differences.
The underlying mechanisms of these associations are poorly understood, but several mechanisms may explain the connection. First, asthma, AD, and RA share common immune pathogenesis. T helper 17 (TH17) cells are a pro-inflammatory T helper cell subset whose increased activity plays a major role in the development of RA [36]. Interleukin 17 (IL-17), an inflammatory factor produced by TH17 cells, is expressed at higher levels in AD patients than in healthy subjects [37]. Studies have also suggested that increased TH17 activity and IL-17 expression contribute to airway inflammation in asthma by inducing Th2-related eosinophilia, airway Mucin 5AC expression, and increased airway hyperresponsiveness [38-42]. The same immune pathway is thought to be involved in the pathogenesis of RA, as IL-17 expression and TH17 activity are increased in RA patients compared to non-RA individuals [43,44]. Second, asthma and RA may have overlapping genetic predispositions: certain variants in immune-related genes, such as HLA-DRB1 [45,46], CD40L, and CD86 [47], are associated with increased susceptibility to both. Interestingly, certain genes (such as HLA-DRB1 and PTPN22) have been confirmed to be associated with both AD and RA [48-50], although no common susceptibility locus has been identified for these two diseases; further studies are needed to explore this potential explanation. In addition, several environmental factors could explain the association of asthma or AD with RA; for example, smoking contributes to increased inflammation of the lower respiratory tract through a variety of mechanisms and is a predisposing factor for asthma [51].
Smoking also induces the release of peptidylarginine deiminases 2 and 4 from lung phagocytes, which convert endogenous proteins into citrullinated autoantigens [5]. These citrullinated autoantigens, in turn, stimulate the development of anti-citrullinated peptide antibodies in genetically susceptible individuals and may eventually trigger a chronic inflammatory response in the synovial joints, leading to the development of RA [6,52]. Toxic substances produced by smoking (e.g., nicotine and carbon monoxide) disrupt skin barrier function as well as skin blood flow and oxygenation [53]. The resulting disruption of the skin and its associated subcutaneous structures allows allergens to penetrate, leading to AD [54]. Thus, smoking is a shared causative factor for these diseases. These possibilities provide theoretical support for our results.
The main strength of this study is its MR design. While randomized controlled trials (RCTs) can provide the most convincing evidence, they raise ethical issues and carry high financial costs. Observational studies, even when adjusted for relevant covariates, remain vulnerable to undetected bias. MR reduces bias from confounding and reverse causation and thus provides more convincing results. To minimize potential violations of the MR assumptions, we also performed multiple sensitivity analyses and detected outliers by radial MR analysis.
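For intuition about how a two-sample MR causal estimate is formed from GWAS summary statistics, the core of the standard inverse-variance weighted (IVW) estimator can be sketched in a few lines. This is an illustrative sketch only, with hypothetical toy numbers: it is not the software pipeline used in the study, and real analyses additionally require instrument selection, allele harmonization, heterogeneity checks, and the sensitivity methods discussed above.

```python
import numpy as np

def ivw_estimate(beta_exp, beta_out, se_out):
    """Fixed-effect IVW causal estimate from per-SNP summary statistics.

    beta_exp: per-SNP effects on the exposure (e.g., asthma liability)
    beta_out: per-SNP effects on the outcome (e.g., RA risk)
    se_out:   standard errors of the outcome effects
    """
    beta_exp = np.asarray(beta_exp, dtype=float)
    beta_out = np.asarray(beta_out, dtype=float)
    w = 1.0 / np.asarray(se_out, dtype=float) ** 2  # inverse-variance weights
    # Weighted regression of outcome effects on exposure effects through the origin
    est = np.sum(w * beta_exp * beta_out) / np.sum(w * beta_exp ** 2)
    se = np.sqrt(1.0 / np.sum(w * beta_exp ** 2))
    return est, se

# Hypothetical toy data for four instruments (not from the study's datasets)
b_exp = [0.10, 0.20, 0.15, 0.25]
b_out = [0.05, 0.11, 0.07, 0.12]
se_o = [0.01, 0.01, 0.01, 0.01]
est, se = ivw_estimate(b_exp, b_out, se_o)
```

With these toy inputs the ratio of weighted sums recovers a causal effect of 0.5 per unit of exposure; in practice the estimate is interpreted on the scale of the exposure GWAS (e.g., log-odds of asthma).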
Nevertheless, the limitations of our MR analyses also need to be acknowledged. First, our analyses were restricted mainly to European populations to reduce confounding by ancestry; reliable datasets on non-European or admixed populations are needed to confirm that the findings generalize beyond population-specific genetic and environmental contexts. Second, potential pleiotropy could not be fully ruled out, so the three core MR assumptions could not be verified with complete accuracy. However, sensitivity analyses using multiple methods yielded consistent results, which lends confidence to the findings of this study.
Conclusion
Overall, we used two-sample MR analyses to characterize the bidirectional causal relationships between asthma, AD, and RA, filling a gap in knowledge of this causal chain. The MR results provide strong evidence for a positive causal relationship of asthma and AD with RA. Nevertheless, larger studies are needed to support our findings and to explore the specific mechanisms involved.
Data Availability
As data for this study were acquired via a database (244,991,502,576 genetic associations from 42,335 GWAS summary datasets), broad consent applies, allowing researchers to query and download a wide range of the data (https://gwas.mrcieu.ac.uk/).
Declarations Ethics approval and consent to participate
This research has been conducted using published studies and consortia providing publicly available summary statistics. All original studies have been approved by the corresponding ethical review board, and the participants have provided informed consent. In addition, no individual-level data was used in this study. Therefore, no new ethical review board approval was required.
Consent for publication
Not applicable.
Locke’s Knowledge of Ideas: Propositional or By Acquaintance?
Locke seems to have conflicting commitments: we know individual ideas and all knowledge is propositional. This paper shows the conflict to be only apparent. Looking at Locke’s philosophy of language in relation to the Port Royal logic, I argue, first, that Locke allows that we have non-ideational mental content that is signified only at the linguistic level. Second, I argue that this non-ideational content plays a role in what we know when we know an idea. As a result, we can see our knowledge of an idea as a form of knowledge by acquaintance: there is a direct epistemic relation between a mental object (an individual idea) and a knowing subject. But owing to Locke’s logic, that knowledge has a tacit propositional structure expressing the truth of the idea, which gains full signification only linguistically.
THE PROBLEM
Locke defines knowledge as 'the perception of the connexion and agreement, or disagreement and repugnancy of any of our Ideas' (IV.i.2). 1 Those agreements come in four categories: identity, necessary co-existence in the same substance, relation in general, and real existence (IV.i.3). Although when push comes to shove Locke thinks that all four categories of agreements are relations, he tells us that some agreements are so 'peculiar … that they deserve well to be considered under distinct heads, and not under Relation in general, since they are such different grounds of affirmation and negation' (IV.i.7). That is, Locke could have said that to know an agreement of ideas is simply to know a relation of ideas and left it at that. But then what I take to be a meaningful 'peculiarity … of the different grounds of affirmation and negation' would have been glossed over. Locke mentions identity and necessary co-existence in the same substance as having this peculiarity. In this paper, I will confine myself to a discussion of the agreement of identity.
Although very little has been said about what Locke could mean by knowing the identity of an idea and how we know it, some controversy exists in the literature. Of those considering the issue, some think the knowledge is trivial; the agreement is simply the relation between two tokens of an idea type. 2 Others argue that Locke means the identity of an idea to be a non-trivial form of knowledge, but they disagree as to what the agreement could be. 3 That it is non-trivial would be more in line with Locke's own view that our intuitive knowledge of the identity of an idea (or knowing an idea) is 'foundational' to the rest of knowledge while identity propositions composed of two tokens of an idea type are 'trifling' and entirely 'un-instructive.' 4 Others who see knowledge of an idea as non-trivial and foundational argue that Locke's use of the term 'know' indicates room for a distinction between a kind of knowledge by acquaintance (objectual knowledge) and propositional knowledge. 5 Our knowledge of an idea would be of the former sort-immediate and non-propositional. Where this interpretation hangs its hat is on a particular reading of this passage:

1 All references to Locke's Essay concerning Human Understanding (Locke 1975) will appear in the body of the paper by book.chapter.section number.
3
For example, focusing on Locke's statements that 'it is the first act of the Mind … to know every one of its Ideas by it self' and 'he can never be in doubt when any Idea is in his Mind, that it is there, and is the Idea is it, ' Lex Newman (2007: 328n18) suggests two complementary agreements: 'Suppose a sensation of red is occurring in my mind. To know that that idea is presently occurring -that it is in the present contents of my consciousness -is to perceive an agreement between that idea and my general idea of perception. (To know, in addition, that the occurring sensation is of red is to perceive, in addition, an agreement between that idea of sensation and my general idea of red.)' But this cannot be Locke's view, for contrary to Locke's claim that it is the 'first act of the Mind' to know its ideas, it would require that we already have the complex general ideas of perception and of red in order to initially know any simple ideas. Matt Priselac (2017: 91-92) has more recently argued that to know the identity of an idea, a non-trivial foundational form of knowledge, 'depends on the discerning operation of the mind.' He continues, 'We achieve knowledge by discerning one idea within some other complex idea. We know, for example, that gold is yellow by discerning yellow within our idea of gold, identifying it as the idea it is within the complex in which it is already perceived.' I suggest that Priselac's explanation falls to the same sort of criticism as does Newman's. It seems that to know the idea yellow, we must already have a complex idea, say of gold, from which to discern it. Given that Locke thinks that all ideas come into the mind 'simple and unmixed' (II.ii.1) and it is the first act of the mind to know its ideas, it seems that we would already know the idea yellow before discerning it as one out of the others that are in the idea gold. 
This is a clear statement that knowing the identity of an idea, a foundational form of knowledge, conforms to the definition of knowledge as the perception of an agreement. Given these passages, and what they seem to imply, I think the knowledge by acquaintance view as normally understood can't get the job entirely done, for it would fly in the face of Locke's claims that our knowledge of an idea adheres to the definition of knowledge.
What we have, then, is that knowing an idea, for Locke, is a non-trivial, foundational form of knowledge that is consistent with his propositional definition of knowledge as the perception of an agreement. Yet, I think Locke has a significant hurdle to clear in giving a coherent account. For he seems also to claim that we have knowledge of individual ideas. Locke begins his discussion of our knowledge of the identity of an idea with passages telling us that it is the 'first Act of the Mind,' when it has any ideas produced there at all, 'to know each what it is, and thereby … that one [idea] is not another' (IV.i.4; see also, e.g., IV.i.2, IV.i.4). Moreover, Locke classifies our knowledge of the identity of an idea as intuitive knowledge, describing it this way: 'For in this [kind of knowledge], the Mind is at no pains of proving or examining, but perceives the Truth, as the Eye doth light, only by being directed toward it' (IV.ii.1). He adds, 'For a Man cannot conceive himself capable of a greater certainty, than to know that any Idea, in his Mind is such, as he perceives it to be …' (IV.ii.1). Two things are noticeable from just these passages. One, as already mentioned, is that we have knowledge of individual ideas. 6 The other is that knowing an idea is an immediate form of knowledge, both temporally and logically.

6 In addition, in the IV.v.1 definition of truth, Locke is clear that the 'joining or separating of Signs' whose agreement or disagreement constitutes truth 'is what by another name, we call Proposition.' For some of the more recent secondary literature emphasizing Locke's understanding of knowledge as propositional, see, for example, Mattern (1978), Soles (1985), and Owen (1999).
This next passage seems to confirm the claim: Everyone that has any Knowledge at all, has, as the Foundation of it, various and distinct Ideas: and it is the first Act of the Mind, (without which, it can never be capable of any Knowledge,) to know every one of its Ideas by it self, and distinguish it from others. Every one finds in himself, that he knows the Ideas he has; That he knows also, when any one is in his Understanding, and what it is; And that when more than one are there, he knows them distinctly and unconfusedly one from another. (IV.vii.4) So, according to this first claim, we know an individual idea when it is produced in the mind, and we know it just in virtue of perceiving, or having, that idea in the mind. 7 Therefore, Locke seems to have inconsistent commitments: we know individual ideas as soon as they appear in the mind and all knowledge is propositional.
I will argue that Locke is not in as bad shape as might first appear. I will argue, first, that Locke, following the Port-Royal logic, allows that some components of mental propositions are not signified by ideas. Thus, there can be non-ideational mental content present in the perception of a single idea. I will also argue, second, that we can understand Locke as having an acquaintance view of sorts-a form of objectual knowledge-but due to the complexity of perceptions of ideas and the role of that non-ideational mental content, it can be seen as consistent with his claim that all knowledge is propositional. I will argue that when we perceive an idea, there is an immediate and direct epistemic relation between the mind (consciousness) and an individual idea (a mental object). Yet I will also argue that Locke, following the Port-Royal logic, understands knowing the idea as including an affirmation of the truth of the idea-as having propositional structure. 8 Such an account, I will argue, is philosophically and contextually plausible, and it will reconcile Locke's seemingly inconsistent claims: that knowledge is propositional and that we have knowledge of individual ideas.
Before moving on, let me address an anxiety. One might think I have engaged in some sleight of hand in appealing to the definition of knowledge as an agreement of ideas (what many read as 'between ideas') to support that knowledge of an idea has propositional structure and then to claim that that structure includes a direct epistemic relation between a conscious mind and a single idea. To alleviate that worry, let me put my cards on the table. First, for Locke, all thinking is conscious-even self-conscious-and as I and a good bit of the literature sees it, consciousness is not identical to ordinary perception. 9 Second, I have argued elsewhere that for Locke, perceptions of ideas are complex mental states that include more than the act of perception and the idea perceived. In the perception of every idea there is at the very least an idea of existence, and as Locke states also an idea of unity:

7 See also, 'For let an Idea be as it will, it can be no other but such as the Mind perceives it to be; and that very perception, sufficiently distinguishes it from all other Ideas, which cannot be other, i.e. different, without being perceived to be so. No Idea therefore can be undistinguishable from another, from which it ought to be different, unless you would have it different from itself' (II.xxix.5). Contrary to what I am arguing, one might take these passages to be about the mental act of 'discerning' or distinguishing an idea (II.xi.1). But that is simply to focus on the other side of the same coin. As Michael Ayers (1991: 98) acknowledges, 'on Locke's view the perception of the identity or diversity of ideas reduces to the "discerning" or distinguishing of a single idea or two different ideas.' He continues, 'Since we cannot have ideas without distinguishing them, the potentiality for a kind of propositional knowledge is bound up with the very having of ideas.'
In a comparison of Locke with Descartes, Louis Loeb (1981: 54) claims that for both 'we have intuitive knowledge of propositions about the content of our present sensory states.' I take Loeb to be saying that perceiving or having an idea is to have a form of propositional knowledge.
8
I should note that I am following Ayers (1991), Owen (2007), and Marušić (2014) in seeing Locke as having a 'one-act' view of knowledge and judgment. See Ott (2004) for the contrary view that first we form a proposition, and then we affirm it in a second act. See Jaffro (2018) for the view that Locke has a one-act account of intuitive knowledge and a two-act account of other forms of knowledge and judgment. In this paper, I am limiting my argument to our intuitive knowledge of an individual idea. See Buroker (1996) for the view that Arnauld held a oneact account of judgment.
9
More recently, see Coventry and Kriegel (2008).

Therefore, there are more constituents in the perception of a single idea than the act of perception and the idea perceived; at the very least there is also self-consciousness, an idea of existence, and an idea of unity. So, in the perception of an idea those constituents can form agreements, and if they do, then the perception of a single idea can contain ideational agreements that are expressed propositionally. That is, perceptions of ideas are complex propositional states (Weinberg 2016a).
Another way to think about this is to take a page from Ruth Mattern's playbook. In reconciling Locke's general definition of knowledge as the perception of an agreement of ideas with his puzzling definition of sensitive knowledge as an 'actual real Existence agreeing to any Idea' (IV.i.7), Mattern argues that Locke failed to sharply separate two different senses of 'agreement and disagreement of ideas.' There is the 'agreement and disagreement within propositions.' And there is the 'agreement and disagreement that render propositions true ' (1978: 693, my emphasis). In the latter case, as we have in sensitive knowledge, she argues that what grounds the knowledge is not the relation between the ideas themselves as found in the proposition. Rather, what grounds the knowledge is what guarantees the truth of the proposition. As Mattern explains, these are cases in which there are 'states of affairs that render propositions true and false … . It is what Locke … terms the "grounds of Affirmation and Negation" (E IV.i.7: 527).' But in the former case, which Mattern (1978: 693) calls 'idea-theoretic' truths, 'the two sorts of agreements coincide; the relation between ideas is both the state rendering the proposition true and the state of the proposition expressing this truth.' With this argument in hand, we can see the knowledge of an idea as an 'idea-theoretic' truth: our knowledge is grounded in the state of affairs that renders the proposition true (the various constituents making up the complexity of the perception), but at the same time it is also the agreement expressed in the structure or composition of the proposition itself. Given the complexity of the perception of any idea, that structure is due both to an agreement between ideas and between a knowing subject and an idea. (I will have more to say about the nature of the complexity in section 4.) With this in our pocket let's return to the main argument. 
Locke seems to be committed to a theory of what we might call 'semantic ascent': ideas serve as signs of the things they represent (initially either external objects perceived in sensation or mental operations perceived in reflection). That is, words signify ideas, which represent things: external objects or mental activities. Thus, signification proceeds stepwise up the ladder from ideas to words. 11 Propositions at the level of ideas, Locke calls 'mental propositions.' At the next level of signification, 'verbal propositions,' ideas are signified by words (IV.v.3). So, on the orthodox understanding, the constituents of any proposition would be initially ideas and then words.
THE FIRST CASE FOR NON-IDEATIONAL MENTAL CONTENT
But notice, in the IV.xxi.4 passage, how clearly Locke states that there is one thing we know directly, without needing a sign: the mind itself. He says, 'For since the Things, the Mind contemplates, are none of them, besides it self, present to the Understanding, 'tis necessary that something else, as a Sign or Representation of the thing it considers, should be present to it: And these are Ideas.' The mind makes use of ideas to understand everything 'besides it self.' Udo Thiel (2011: 115-17) suggests that we see the relation between the mind and itself (its immediate presence to itself) as more of a subject-subject relation as opposed to a subject-object relation. I can be immediately conscious of my own mental acts and myself without having to have ideas of them (see also Weinberg 2016a: 39-40). And once we see that the mind is immediately known to itself without ideational signification, other passages can be read as consistent with that view. Although it is normally the case that the constituents of mental propositions are ideas and the constituents of verbal propositions are words, in the last line, Locke seems to be saying, not that the signs we use are always ideas and words, but only 'chiefly' do we find ideas or words. In addition, he is speaking about the truth of ideas, namely the mental proposition that would precede the verbal proposition expressing our knowledge of the identity of an idea. Although this is far from a proof text, and even though this is a passage about signs, I take Locke's use of 'chiefly' to indicate that constituents of mental propositions can in some cases be non-ideational. 12

11 I thank Martha Bolton for pointing out the importance of this to me. I will also have more to say about the IV.xxi.4 passage in section 3.
12 Lex Newman in his commentary on a version of this paper at the 2021 Central Division of the APA and an anonymous referee suggested to me that here Locke could be considering other kinds of signs as, for example, when we write 'I ♥ pizza' or perhaps gestures or something of that sort. I suppose it could be that Locke is thinking something like this, although, to my knowledge, he doesn't ever speak of signs that are neither ideas nor words. But he does say clearly in IV.xxi.4 that ideas are not needed to signify the mind's immediate understanding of itself. I should say that I see the IV.xxi.4 passage as the most important to my interpretation, so even if one is not persuaded to read II.xxxii.19 as I do, I don't see it as counting against the viability of the overall interpretation. And as I alluded to at the end of section 1, and as I hope to make much clearer in section 4, I see my interpretation as fully consistent with Locke's statement in II.xxxii.19 that 'Truth, or Falshood, being never without some Affirmation, or Negation, Express, or Tacit, it is not to be found, but where signs are joined or separated, according to the agreement, or disagreement, of the Things they stand for.' The truth-making mental content signified in the verbal proposition expressing the perception of an idea is made up of an agreement between signs (ideas) as well as between non-ideational content and an idea.

Yet another passage that can be seen as supporting that Locke is open to non-ideational mental content is his claim that we are aware of our mental operations prior to having ideas of reflection.
And we see the Reason, why 'tis pretty late, before most Children get Ideas of the Operations of their own Minds; and some have not any clear, or perfect Ideas of the greatest part of them all their Lives. Because, though, they pass there continually; yet like floating Visions, they make not deep Impressions enough, to leave in the Mind clear and distinct lasting Ideas, till the Understanding turns inwards upon it self, reflects on its own Operations, and makes them the Object of its own Contemplation. (II.i.8) Even though young children do not have ideas of reflection, they are aware of the 'continually passing' mental operations involved in having ideas of sensation. For example, they would be vaguely aware that they are sensing. It takes another distinct act of reflection by which the mind contemplates, or takes the mental operation as an object of perception to have an idea of that mental operation. Consider also that Locke defines 'consciousness' as 'the perception of what passes in a man's own mind,' and he thinks we are conscious of all that passes in the mind (II.i.19). 13 From these passages we can infer that Locke has room for non-ideational mental content of which we are conscious, namely pre-ideational awareness of mental operations. 14 In the next section, I will argue that Locke does indeed maintain that there are constituents of some verbal propositions that do not signify ideas. That is, some constituents of mental propositions gain signification only at the level of language. Therefore, some mental propositions have non-ideational content.
THE SECOND CASE FOR NON-IDEATIONAL CONTENT: SIGNIFICATION BY PARTICLES
In order to see how Locke could think that we have knowledge of individual ideas and that that knowledge is somehow propositional, there has to be room for mental content that plays a role in knowledge that is not signified by an idea. To see where Locke leaves that room, we need to take a closer look at Locke's philosophy of language and especially the function of particles. Although we have seen some textual evidence, we have to make sure that Locke's main thoughts about

13 A number of recent interpretations, although differing in details, agree that, for Locke, consciousness is a constituent, or ingredient, of the mental state that is the perception of an idea. See Coventry and Kriegel (2008), Lähteenmäki (2008), Jorgensen (2010), Thiel (2011), LoLordo (2012), Weinberg (2008; 2016a).
14 Wolterstorff (1996: 14) seems to allow the same when he says, 'Locke rather often speaks of the mind as being directly acquainted with its own 'operations' (for example in II,i,4 and II,I,8); and he doesn't count the mind's operations among its ideas.' This is not, by any means however, an uncontroversial view. One might want to read II.i.8 as making a distinction between 'clear and distinct' ideas of reflection and those that are not. The interpretive problem is how to maintain Locke's empiricist principle that all ideas have their source in sensation and reflection, yet young children are, even if only vaguely, conscious of their mental operations. For example, LoLordo (2012: 112-20) makes a distinction between 'lasting' and 'non-lasting' ideas of reflection. But then there arises the question what is the relation between consciousness and reflection? For LoLordo, consciousness is identical to reflection, but reflection is a source of ideas only when there is added attention and the idea is lodged in memory: what is produced is a 'lasting' idea. This interpretation has two infelicitous consequences. First, it denies Locke a second (higher) order account of reflection. All reflection on this view is relegated to first-order thinking. Second, it disrupts the symmetry between the production of ideas in sensation and the production of ideas in reflection, for sensation is a source of ideas even when there is no added attention (II.ix.4). See Weinberg (2016a) for more extensive arguments along these lines. On the other hand, we could perhaps follow Lähteenmäki (2011: 163-67)

'mark or intimate.' 21 Moreover, there are some things the mind does for which we don't have names, and the names we have do not clearly convey the mind's action. That is, not only does Locke think we fail to have a full understanding of how these parts of speech operate in a language, but also he thinks that we don't have linguistic marks or intimations of all that the mind does.
In addition to a lack of linguistic items to denote mental actions, we also see in his III.vii.2 explanation of particles that not everything that goes on in the mind is signified by an idea. Some mental acts do not strictly follow the ladder of semantic ascent from non-or pre-ideational content, to ideas, to words in a language; rather, some go right from non-ideational status to language. And when they do make that jump, they are signified by a particle. Therefore, some mental content, 'action[s] or Intimation[s] of the Mind' gains signification (by a particle) only at the semantic level of language.
In spite of Locke's lack of attention, or lack of attention to more than the word 'but,' and his lack of more detailed analysis, here is something of interest that he does say about particles: BESIDES Words, which are the names of Ideas in the Mind, there are a great many others that are made use of, to signify the connexion that the Mind gives to Ideas, or Propositions, one with another. The Mind, in communicating its thought to others, does not only need signs of the Ideas it has then before it, but others also, to shew or intimate some particular action of its own, at that time, relating to those Ideas. This it does several ways; as, Is, and Is not, are the general marks of the Mind, affirming or denying. (III.vii.1) 22 Particles are used to express things going on in the mind that are not signified by ideas, but nevertheless must be 'shew [n] or intimated' in order for aspects of our thinking to be expressed and communicated to others. Locke describes these as 'particular action[s] of its [the mind's] own, at that time, relating to … Ideas.' Some of those aspects he lists as 'Connexion, Restriction, Distinction, Opposition, Emphasis, etc.' (III.vii.2): To think well, it is not enough, that a Man has Ideas clear and distinct in his Thoughts, nor that he observes the agreement, or disagreement of some of them; but he must think in a train, and observe the dependence of his Thoughts and Reasonings, one upon another: And to express well such methodical and rational Thoughts, he must have words to shew what Connexion, Restriction, Distinction, Opposition, Emphasis, etc. he gives to each respective part of his Discourse. To mistake in any of these, is to puzzle, instead of informing, his Hearer: and therefore it is, that those words, which are not truly, by themselves, the names of any Ideas, are of such constant and indispensible use in Language, and do so much contribute to Men's well expressing themselves. 
(III.vii.2) Unfortunately, Locke doesn't provide any more explanation of these sorts of mental episodes, even though both 'connexion' and 'distinction' have quite important roles in Locke's philosophical psychology and his account of knowledge. Indeed, 'connexion' Locke uses as a synonym for 'agreement' in his IV.ii.1 definition of knowledge. And another text sees 'distinction,' or 'distinguishing an idea,' as equivalent to 'knowing' it. 23 His more general example above (in III.vii.1) is of the mental act of affirmation or denial as shown or intimated by the words 'Is' or 'Is not.' One way to read this is to say that Locke is taking 'is' and 'is not' as the general way in which we signify the mind's affirmation or denial of a particular mental act, say the act of perceiving an agreement or connexion of ideas or the mind's affirmation or denial of the act of distinguishing a particular idea. But is this the right way to read it?

21 According to Ashworth (1984: 57, 59), such a view has its roots in scholastic logic, which Locke's Oxford education would have exposed him to. As opposed to categorematic words, or 'ordinary lexical items,' which signified concepts, '… syncategorematic words, that is logical connectives, quantifiers, adverbs, and so on, signified only "in some way." They corresponded not to concepts, but to mental acts of affirmation, negation, distribution, or other ways of modifying the things thought about, and it was obvious to medieval and post-medieval logicians that there need be no straightforward correspondence between the number and position of syncategorematic words in a sentence and these mental acts.'

22 Remember that in IV.xxi.4 Locke states, 'For since the Things, the Mind contemplates, are none of them, besides itself, present to the Understanding, 'tis necessary that something else, as a Sign or Representation of the thing it considers, should be present to it: And these are Ideas' (my emphasis in bold). That the mind is present to itself without signification by an idea would be consistent with the claim in this (III.vii.1) passage in which Locke says that particles directly 'intimate some particular action of its [the mind's] own' without the need for an intermediary idea. See also Jaffro (2018: 180): 'Even though there is an important difference between marks of affirmation and negation (which, as I understand Locke, "signify the connexion that the Mind gives to Ideas") and particles proper (which "connect not only parts of Propositions but whole Sentences to one another") affirmation and negation are terms that Locke puts in the same category as particles, according to the logico-grammatical tradition, which assimilates particles, verbs, and copula to syncategorematic terms, markers of operations and not conceptions.' How particles perform their function and what they actually signify, insofar as they are signifying 'mental actions of [our] own … relating to Ideas,' will be explained shortly.
Since Locke doesn't give us very much else to go on in his discussion of particles and what they signify, we can get some help by looking to the logic that many think most influenced Locke. 24 In Logic or the Art of Thinking, Arnauld and Nicole consider linguistic expressions of mental actions in terms of verbs: I say that the main function of the verb is to signify affirmation, because we shall see below that it is also used to signify other actions of the soul, such as desiring, requesting, commanding, and so on. But this happens only by changing the inflection and the mood, so in this chapter we will consider the verb only in its principle signification, which is what it has in the indicative. Accordingly, we can say that the verb in itself ought to have no other use than to indicate the connection the mind makes between the two terms of a proposition. Only the verb 'to be,' however, called the substantive, retains this simplicity, and only in the third person present, 'is,' and on certain occasions. (1996: 79) In this passage, Arnauld and Nicole are explaining how verbs ('actions words') are used to signify 'actions of the soul.' The main function of any verb in a sentence in which it appears, state Arnauld and Nicole, is to signify affirmation of an action. Finite verbs 25 also have other linguistic functions, such as signifying number, tense, and person (1 st , 2 nd , 3 rd ) through inflection; an ending is joined to the indicative form of the verb, which serves to signify the number, tense, or person. (I should note that only the substantive, the verb 'to be' in the third personal form 'is,' however, signifies an act of affirmation alone as opposed to verbs generally, which principally signify an act of affirming something-some action.) So, when two terms are joined (connected) in a proposition by a verb, there is always also present (either explicitly or implied) an affirmation: for example, X is seeing Y, X is hearing Y, and so on. 
Arnauld and Nicole explain, 'Because people are naturally led to abbreviate their expressions, they almost always join other significations to affirmation in the same word' (1996: 79). For example, we say 'Peter lives' instead of 'Peter is living' mostly out of convenience. Affirming the living of Peter is signified only implicitly in 'Peter lives.' So, all verbs signify affirmation but the signification of the affirming is due to an implicit presence of the indicative, third-personal form of the 'substantive,' or the verb 'to be.' The point is that the mental act of affirming is not always signified on its own, but often as joined implicitly to another word (the verb).
In explaining how verbs function linguistically, Arnauld and Nicole are showing how all verbs implicitly signify the act of affirmation. Importantly, the affirming constitutive of any verb is signified in terms of its exercise or performance, the act in process, rather than as an object of thinking. Gabriel Nuchelmans (1986: 61) explains it this way: That Locke's contemporaries were fully aware of this difference between signification in actu exercito [indirectly] and signification in actu significato [directly] is further confirmed by two remarkable passages in the chapter on the verb (II, 2) that was added to Arnauld and Nicole's La logique ou l'art de penser (1662) in the fifth edition of 1683. The authors, who regard the copula as the only genuine verb, characterize its principal function as consisting in being a mark of affirmation. The finite verb indicates that the discourse in which it occurs is the discourse of someone who does not only conceive of things, but passes judgment on them as well. It is precisely in this respect that the copula-element in the finite verb differs from such names as affirmans and affirmation. The latter words signify affirmation, but only in so far as an actual operation of affirming has become, by a mental act of reflecting, an object of thought. Consequently, they do not signal that the speaker performs an act of affirming; what they signify is the act as a thing conceived of. 26
There are a couple of important points to draw out here. First, Arnauld and Nicole see the copula as the only 'genuine' verb. I take this to mean that the copula (as an affirmation that an action is occurring) signifies the occurrence of that action by being joined implicitly or explicitly to the verb signifying that action. So, for example, when I say 'Peter lives' I am signifying the affirming of the action (living) by Peter by the implicit joining of the copula ('is'). As mentioned before, really what I am saying when I say 'Peter lives' is 'Peter is living.' Essentially, we can take the word 'living' to be modifying the copula, that is, the signification of the act of affirming is the primary function of the verb where the descriptive aspect of the verb serves to describe what action is affirmed.
23 Here again is Locke's IV.i.2 definition of knowledge: Knowledge then seems to me to be nothing but the perception of the connexion and agreement, or disagreement and repugnancy of any of our Ideas. Also, we can see 'distinction' as equivalent to knowing an idea in the first act of perceiving it: '… the Mind clearly and infallibly perceives each Idea to agree with it self, and to be what it is; … And this it does without any pains, labour, or deduction; but at first view, by its natural power of perception and distinction' (IV.i.4). I think this is preliminary evidence that Locke's mention of 'distinction' and 'connexion' as acts signified by particles has to do with acts of the mind involved in knowledge and specifically knowledge of an idea. See also, again, note 7.
Second, the occurrence of a finite verb in a sentence through the implicit joining of the copula signifies a judgment. As Nuchelmans (1986: 61) says, 'The finite verb indicates that the discourse in which it occurs is the discourse of someone who does not only conceive of things, but passes judgment on them as well.' This distinction can be understood as the difference between signifying the occurrence of an action and a later reflective consideration of that action. In the former case, it is the action as performed that is signified, where in the latter case it is the action as considered-considered as an object of thinking-that is signified. 27 Would Locke have been following Arnauld and Nicole with respect to these two aspects of signification-that the copula is the only genuine verb, which all other finite verbs modify (as adjectives or adverbs), and that there is a distinction between the signification of the performance of a mental act and a later reflection or consideration of it as an object of thinking? 28 I think so. 29 The former would explain why Locke omitted any discussion of verbs from his Book III account of language and signification. 30 And given that he has no chapter on verbs, it would explain why he singles out 'Is, and Is not' as 'general Marks of the mind, affirming or denying' in his first paragraph explaining the function of particles. Moreover, the latter (a distinction between a mental operation as an object of thinking and its performance) would explain why Locke is careful to say that we have ideas of mental operations produced only by acts of reflection even though we can be conscious of mental operations prior to being able to reflect (Nuchelmans 1986: 64). Indeed, if we reread the earlier IV.xxi.4 passage about Locke's logic in this context, we find that distinction between the signification of mental actions insofar as they are performed and insofar as they are considered or made objects of reflection.
26 See also Pearce (2016: 377).
27 Ashworth (1984: 52) notes that we find the same distinction in William Ockham's Summa Logicae (1974: 194-95): 'As Ockham has pointed out in the fourteenth century, it is one thing to exercise an act of affirming or denying, and quite another thing to speak about that act. Particles indicate that a mental act is being exercised, they do not signify these acts as objects.' See also Ayers (1991: vol. 1, 204-5) who finds the same similarity (a link between medieval theory and seventeenth century logic) between Locke and John Buridan.
28 John Wilkins (1668: 304), a contemporary in the linguistic tradition of the Royal Society says this: 'By the word Predicate, I mean likewise all that which follows the Copula in the same sentence, whereof the Adjective (if any such there be) immediately next after the Copula, is commonly incorporated with it in instituted Languages, and both together make up that which Grammarians call a Verb.' As Land (1974: 3) explains, 'Verbs may invariably be reduced to the form "adjective + copula" and the copula is an "essential particle".'
29 No copy of the 1683 edition of Logic or the Art of Thinking was found in Locke's library according to the best record we have. But given the amount of evidence offered in the next few pages, it seems reasonable, nonetheless, that at the very least Locke was thinking along the same lines. And given that these ideas can be traced to medieval logicians and contemporaries in the Royal Society (see especially notes 27 and 28), it's not a far stretch to see Locke thinking similarly. Moreover, as Kenneth Pearce (2019: 89-90) has compellingly argued, Locke's reasons for writing Book III of the Essay (on language), after having already completed what are now the surviving drafts that do not include it, parallel Arnauld and Nicole's reasons for adding the chapter on the verb (and other material on language) to the 1683 edition of the Logic. First, there is wide agreement that the original design of the Essay mirrors the structure of the Logic: it moves from an account of the origin and nature of ideas and the distinctions between them, to judgments or propositions, to the nature of reasoning, to a division of the sciences. Second, continues Pearce, in the Logic, 'the material is added at the very beginning of Part II, on judgments, and is said to be useful for understanding judgments because "The mind is accustomed to linking [words and ideas] so closely that we can scarcely conceive one without the other, so that the idea of the thing, prompts the idea of the sound, and the idea of the sound that of the thing" [Logic 1996: 73-74].' Locke tells us, similarly, his reasons for adding the material on language in between his treatment of ideas and his account of propositions/judgments and knowledge: 'There is so close a connection between Ideas and words … that it is impossible to speak clearly and distinctly of our Knowledge, which all consists in propositions, without considering, first, the Nature, Use, and Signification of Language' (II.xxxiii.19). These similarities are striking. I agree with Pearce that the evidence that Locke was familiar with Arnauld and Nicole's material on language added to the 1683 edition of the Logic is not conclusive. But also I agree with Pearce that given his arguments, and now with the addition of mine, surely the conclusion is plausible.
So, when Locke says, 'For since the Things, the Mind contemplates, are none of them, besides it self, present to the Understanding, 'tis necessary that something else, as a Sign or Representation of the thing it considers, should be present to it: And these are Ideas' (IV.xxi.4, my emphasis in bold), we can see him as claiming that mental operations (the mind itself) are not always signified by ideas. Rather, mental operations, although present (conscious) to the mind, are signified by ideas only when they are 'considered,' that is made objects of thought as the result of reflection. 31 So, it seems that the performance of an act of affirming a mental operation can be signified in a verbal (linguistic) proposition even if it is not first signified by an idea. 32 But if an act of affirming a mental action is not signified by an idea, then how is it communicated? Consistent with the III.vii.1 passage already cited, Locke thinks that some mental acts when performed and expressed are signified only linguistically by a particle and not originally by an idea.
To repeat some of that passage, he says, 'The Mind, in communicating its thought to others, does not only need signs of the Ideas it has then before it, but others also, to shew or intimate some particular action of its own, at that time relating to those Ideas' (III.vii.7). That would mean, for Locke, that there are non- or pre-ideational components of mental propositions, namely mental actions-the performing of a mental operation-that are not signified by ideas but only gain signification through expression in language. In contrast, the same pre-ideational content, say a mental operation (action) when considered or reflected upon, is signified by an idea of reflection. Therefore, in communicating the performing of a mental action, the idea rung on the ladder of signification is skipped. For example, when I look outside and then say to you 'I see a cat in the yard' my mental act of affirming my perceiving a cat in the yard (the affirming the occurrent performance of the act of perceiving) is not signified initially by an idea. 33 Such an understanding of the difference between signifying an object of thinking and an act of thinking (or 'posture of the mind in discoursing' [III.vii.3]) harks back to the medieval distinction between 'categorematic' and 'syncategorematic' (or 'non-categorematic') signification.
30 See also Nuchelmans (1986: 63-64): 'It seems to me that the fact that Locke does not discuss verbs as such in the third book of the Essay is explained most satisfactorily by the assumption that he follows, among others, the authors of the Port-Royal Grammar and Logic in regarding the copula as the only genuine verb; and, being a syncategorematic mark of the performance of an act of affirming, the copula is not a sign of an idea.' Kretzmann (1968: 179) agrees that Locke adopts syncategorematic signification 'ruling out' that words signify only ideas. With respect to verbs, Kretzmann (1968: 180) explains, 'Locke does briefly consider the signification of verbs in Book Two and at the beginning of Book Three, but in a way quite detached from the semantic theory developed in Book Three, where verbs are not discussed as such. While he does not explicitly exclude verbs from the scope of his main thesis [that words signify only ideas], he never includes them either; and what he has to say about words in Book Three shows that his governing and perhaps exclusive concern was with nouns and adjectives, or "names."' According to Wilkins (1668: 304) too, a verb has no distinct place amongst integrals in a philosophical grammar: 'By the word Predicate, I mean … all that which follows the Copula in the same sentence, whereof the Adjective (if any such there be) immediately next after the Copula, is commonly incorporated with it in instituted Languages, and both together make up that which Grammarians call the Verb.' See again also Land (1974).
31 Arnauld (1990: 71) makes the same point in his distinction between 'implicit' and 'explicit' reflection. The former, which we can also think of as consciousness, is a reflexive aspect of perception, which accompanies all thought. Explicit reflection is a second order act in which one perception is the object of another: '[O]ur thought or perception is essentially reflective upon itself: or as it is rather better said in Latin, est sui conscia. For I do not think without knowing that I think; I do not know a square without knowing that I know it … I do not imagine I see the sun, without being certain that I imagine I see it… . [A]s well as this implicit reflection which accompanies all our perceptions, there is also something explicit, which occurs when we examine our perception by means of another perception.' See Steven Nadler (1989: 118-22). For more current agreement with Nadler on this particular point, see Weinberg (2016a: 14-15), and Pearce (2019: 90-91). I see Locke's understanding of consciousness as similar to Arnauld's implicit reflection and to views of La Forge and Lamy, in that there is a one level reflexive aspect to consciousness (Arnauld's implicit reflection) and a second (higher) order reflection (Arnauld's explicit reflection). I see Locke as understanding consciousness as a reflexive constituent of every perception. Therefore, consciousness plays an epistemic role. But I do not agree with those who see two different kinds of reflection to make sense of the different ways in which we are aware of mental operations. For those positions see the only difference between implicit and explicit reflection in Locke as a matter of attention-whether or not the perception is attended to, which is contrary to Arnauld's explicit reflection as a second (higher) order act. (See again note 14.) In addition, that Locke was following the Port-Royal logic-the logic of the day-does not mean he would follow Arnauld in everything else. In thinking about what to call what Arnauld termed 'reflexion virtuelle' (implicit reflection), I think it more plausible that Locke looked to Cudworth, who had given the English term 'consciousness' philosophical meaning. See also Thiel (1994;. Given the employment of the new term, and Locke's close relationship to Cudworth, it makes sense that Locke would use the English term 'consciousness' instead of reverting to the distinction between mental states as found in French. In thinking about Locke's relation to Cudworth on consciousness, I am most sympathetic to Pécharman (2014).
32 As Nuchelmans duly notes: Interpretations arguing that particles must be signifying ideas (see note 15) fail to notice this distinction and so mistakenly claim that mental acts even in their performance would be signified by an idea of reflection. Clapp (1967: vol. 4, 496) says this: 'Again a difficulty arises. If "is" and "is not" stand for the mind's act of affirming or denying, then either the mind directly apprehends its own actions in some way or we do have ideas of affirmation or denial. If we do have ideas of the mind's acts, then these words ought to signify the ideas of these acts; if we do not have ideas which these words signify, then either we do not apprehend them or something else besides ideas is the object of the mind when it thinks.' See also Bennett (1971: 20
34 Nuchelmans (1986: 62-63) explains, The peculiar mode of signifying that is typical of non-categorematic words has as subject the speaker, the speaker's mind, or the particular word employed. For the relation itself Locke sometimes uses the general verb 'to signify', but also, strikingly often, 'to show' and 'to intimate'. These latter verbs seem to be exactly right for indicating the way in which the speaker-and thus the word he uses-reveals a mental act or state of his own which he currently performs or experiences. By uttering the appropriate mark the speaker discloses to the hearer what he is effectively doing and how he is feeling. 35 If Locke is following the Port-Royal logic and a traditional view of the difference between categorematic and syncategorematic signification, then mental actions in their performance are not signified by ideas. That is, when I say, 'I see a cat' when looking into the yard the performance of my mental act of seeing (sensing/perceiving)-in the moment of seeing-is not signified by an antecedent idea. 36 Therefore, a verbal proposition (a linguistic expression) communicating the performance of a mental act in its performance expresses non-ideational mental content. Thus, the performance of any perceptual act, a mental action, skips the idea rung of the semantic ladder achieving signification non-categorematically at the moment of utterance. 37
33 See Landesman (1976: 34): 'Particles signify the mind's own actions in bringing its ideas together into bits of connected thinking (III.vii.1, III.vii.6). Locke's remarks about the copula suggest what he has in mind: "Is and is not are the general marks of the mind, affirming, or denying (3.7.1)" … In general, because the same "names" can occur in different speech acts, a further linguistic mechanism is required to indicate which is the intended act.'
34 See again notes 21 and 27.
35 Again here is Nuchelmans (1986: 64): 'Particles, then, in so far as they are actually used, are marks of some action, posture, or feeling exemplified by the speaker's mind at the moment of utterance… . Someone who wants to describe, classify, and explain their use and force in language must enter into his own thoughts and observe nicely the several actions and postures of his mind in discoursing. From the second-order vantage point taken by philosophers and grammarians, the performed acts and felt states of which particles are marks when actually used, are contemplated and examined through acts of reflecting and thus become objects of thought and ideas of reflection. As such, of course, they can no longer be expressed by particles; the appropriate linguistic tools by which they are mentioned and denoted as things conceived of are those words which are names of ideas in the mind.'
36 Bennett (2001: vol. 2, 116-17)
Is there anything else notable that is signified only at the level of linguistic expression? Arnauld and Nicole explain that there is the same sort of non-categorematic signification of the 'I' or the subject of the proposition: Further, in certain cases they have connected it to the subject of the proposition, so that two words, and even a single word, can form a complete proposition. This is possible with two words, as when I say sum homo [I am a man], because sum signifies not only affirmation, but also includes signification of the pronoun ego [I], which is the subject of the proposition… . A single word can form a proposition, for instance when I say vivo, sedeo [I am living, I am seated]. For these verbs contain in themselves both an affirmation and an attribute, as we have already said. Since they are in the first person they also include the subject: 'I am living,' 'I am seated' (1996: 79-80). So in terms of its essence, the verb is a word that signifies an affirmation.
But if we wished to include its primary accidents in the definition of the verb, we could define it as follows: vox significans affirmationem cum designation personae, numeri, et temporis. A word that signifies an affirmation while designating person, number, and tense. (1996: 80-81) Not only does a verb capture an implicit affirmation, but also through the attribution or inflection (the 'accidents') it can express the subject. This tells us that not only is the mental act of affirming the action signified with the finite verb, but also the subject, the 'I,' is expressed implicitly in the verb. For Locke, this would mean that neither must the affirmation of a mental action (its performance) nor the self acting be signified by an idea. Rather, the performance of the action and the subject acting (namely the 'I' in first-personal experience of a mental action) achieve signification only at the level of language-when expressed or reported in language. Thus, mental actions and the first-personal experience of mentally acting are signified by ideas only when considered from a second-order perspective, namely as objects of reflection. That is, only when I turn inward to reflect on my mental operations or on the first-personal way in which I experience my own thinking, are ideas generated of the mental operation and of myself.
OUR KNOWLEDGE OF AN IDEA
How does the foregoing analysis of the function of verbs and particles in a language help Locke out of his problem? Remember that there seems to be a conflict between Locke's two commitments with respect to the account of knowing an idea. We have knowledge of an individual idea, whenever and as soon as it first enters the mind, and all knowledge is propositional. I contend that these claims are made consistent once we realize, following the Port-Royal logic, that the occurrent performance of a mental action and the subject acting are not signified by ideas prior to their signification by words. Ideas do the signifying only when mental operations and the self are considered as objects of thinking, when they are the objects of reflection. Therefore, there can be elements of the mental state that is the knowledge of an idea (i. more than a single Name of any thing, can be said to be true or false,' he does tell us what this other sort of 'truth' consists in. All truths are propositional, so there must be more to the truth of an idea than the idea simply in itself.
Indeed, Locke tells us that the truth of an idea is propositional: Though, I think when Ideas themselves are termed true or false, there is still some secret or tacit Proposition, which is the Foundation of that Denomination: as we shall see, if we examine the particular Occasions, wherein they come to be called true or false. In all which, we shall find some kind of Affirmation, or Negation, which is the Reason of that Denomination. (II.xxxii.1) He continues, Indeed, both Ideas and Words, may be said to be true in a metaphysical Sense of the Word Truth; as all other Things, that any way exist, are said to be true; i.e. really to be such as they exist. Though in Things called true, even in that Sense, there is, perhaps, a secret reference to our Ideas, look'd upon as the Standards of that Truth, which amounts to a mental Proposition, though it be usually not taken notice of. (II.xxxii.2) The truth of an idea consists in a 'secret or tacit Proposition' expressing, as Locke says, 'really to be such as [it] exists,' namely 'the idea really is as it exists in the mind.' Expressing it this way, I suggest, is no different from other locutions Locke uses to express what we know when we have knowledge of an idea. He says, 'The Idea is as I am perceiving it to be' (IV.ii.1) or he says, 'I mean some object in the Mind, and consequently determined, i.e. such as it is there seen and perceived to be' (1975: 13).
What, then, are the components of the tacit proposition known in knowing an idea? Consistent with Locke's verbal propositions (above) expressing that knowledge, when I know an idea there is an agreement between the idea and my conscious perception of it: it really is as 'I perceive it to be,' 'as I perceive it existing in my mind,' 'as it is there seen/perceived and perceived [by me] to be.' Yet, due to the complexity of the perception of the idea, there are other elements (the simple ideas of existence and unity [II.i.7]) contributing to both the structure of the proposition and the truth-making mental content. The agreement, then, is composed of ideational and non-ideational elements internal to the complex mental state, and it is expressed linguistically in the ways just mentioned. That is, there is the idea 'existing' or 'there' (experienced existentially) in my mind due to an agreement between the idea perceived and the simple idea of existence. That perceived idea-the qualitative patterns or features-also agrees to my conscious perception of it, 39 namely to the first-personal way in which the idea appears in my mind-as appearing to me, the subject as I occurrently perceive it, which is a direct epistemic relation between a knowing/conscious subject and an idea (object). 40 (In addition, I suggest, due to the simple idea of unity-also a part of the complexity-the idea perceived is considered as, Locke says, 'one thing' (II.vii.7).) We might just as easily say, 'I know the idea,' 'I see the idea,' or 'I know that object in my mind'-both 'that it is there' and 'what it is' (IV.vii.4)-where the truth-making mental content and the agreements therein (both between ideas and between non-ideational content and an idea) get cashed out propositionally in Locke's logic.
Therefore, our problem is solved, for we can have knowledge of an individual idea that has a propositional structure. Why? For two reasons: First, following the Port-Royal logic, Locke allows that we have non-ideational mental content that achieves signification only at the linguistic level; Second, perceptions of ideas, for Locke, are complex propositional states with agreeing components such that being in that mental state conveys knowledge. We can see knowing an idea to be knowledge of acquaintance: there is a direct epistemic relation between a mental object and a knowing subject. But owing to Locke's logic, at the level of mental content that knowledge has a tacit propositional structure expressing (or affirming) the truth of the idea, which is revealed explicitly only at the level of language.
CONCLUSION
Locke's account of knowing an idea includes two seemingly conflicting commitments: we know individual ideas in the very first act of perceiving them and all knowledge is propositional. The key to the solution, following the Port-Royal logic, is Locke's acceptance of the linguistic function of verbs and particles to signify the performance of mental actions without those actions having first been signified by ideas. Once we see the elements internal to the perception of an idea and that Locke allows non-ideational mental content to do work in that perception, which constitutes our knowledge of an idea, we can see how knowing an idea is a complex mental state with a tacit propositional structure. Thus, Locke's account of knowing an idea can be seen as a kind of propositional knowledge by acquaintance.
ZmPep1, an ortholog of Arabidopsis elicitor peptide 1, regulates maize innate immunity and enhances disease resistance.
ZmPep1 is a bioactive peptide encoded by a previously uncharacterized maize (Zea mays) gene, ZmPROPEP1. ZmPROPEP1 was identified by sequence similarity as an ortholog of the Arabidopsis (Arabidopsis thaliana) AtPROPEP1 gene, which encodes the precursor protein of elicitor peptide 1 (AtPep1). Together with its receptors, AtPEPR1 and AtPEPR2, AtPep1 functions to activate and amplify innate immune responses in Arabidopsis and enhances resistance to both Pythium irregulare and Pseudomonas syringae. Candidate orthologs to the AtPROPEP1 gene have been identified from a variety of crop species; however, prior to this study, activities of the respective peptides encoded by these orthologs were unknown. Expression of the ZmPROPEP1 gene is induced by fungal infection and treatment with jasmonic acid or ZmPep1. ZmPep1 activates de novo synthesis of the hormones jasmonic acid and ethylene and induces the expression of genes encoding the defense proteins endochitinase A, PR-4, PRms, and SerPIN. ZmPep1 also stimulates the expression of Benzoxazineless1, a gene required for the biosynthesis of benzoxazinoid defenses, and the accumulation of 2-hydroxy-4,7-dimethoxy-1,4-benzoxazin-3-one glucoside in leaves. To ascertain whether ZmPep1-induced defenses affect resistance, maize plants were pretreated with the peptide prior to infection with fungal pathogens. Based on cell death and lesion severity, ZmPep1 pretreatment was found to enhance resistance to both southern leaf blight and anthracnose stalk rot caused by Cochliobolus heterostrophus and Colletotrichum graminicola, respectively. We present evidence that peptides belonging to the Pep family have a conserved function across plant species as endogenous regulators of innate immunity and may have potential for enhancing disease resistance in crops.
Peptides regulate diverse processes pertaining to both development and defense in plants (Matsubayashi and Sakagami, 2006). Defensively, peptides can act as molecular messengers during plant interactions with other organisms, alerting the plant to potential attack and inducing defenses. Microbe-associated molecular patterns (MAMPs) are molecular fragments recognized by plants as indicators of potential invasion, and peptide MAMPs derived from microbial proteins, such as flg22, elf18, and Pep13, are bound by specific plant pattern-recognition receptors to elicit a cascade of downstream defense responses (Hahlbrock et al., 1995; Zipfel et al., 2004, 2006). Peptides also warn plants of attack by insect herbivores; the inceptin peptide is one such herbivory-associated molecular pattern (HAMP) that activates downstream defenses in response to herbivory (Schmelz et al., 2006; Mithöfer and Boland, 2008).
In addition to peptide MAMP elicitors that alert plants to the presence of invading organisms, there are several classes of endogenous plant peptides that regulate defenses, acting as internal elicitors. Biotic stress resulting in cellular damage induces expression of the genes encoding endogenous peptide precursor proteins, and the activated peptides then contribute to defense through the amplification of plant responses. Systemin and hydroxyproline-systemin (HypSys) peptides function as endogenous regulators of defense against herbivores (Ryan and Pearce, 2003; Narváez-Vásquez et al., 2007). Signaling by these peptides promotes a myriad of antiherbivore responses, including the accumulation of proteinase inhibitor proteins and of other antinutritive proteins such as polyphenol oxidase, Thr deaminase, and arginase as well as systemic emission of volatiles (Pearce et al., 1991; Howe and Jander, 2008; Degenhardt et al., 2010). Other peptides are endogenous regulators of pathogen defense responses; recently, soybean (Glycine max) has been discovered to produce a unique peptide signal, GmSubPep, which activates the transcription of pathogen defense genes (Pearce et al., 2010). In Arabidopsis (Arabidopsis thaliana), elicitor peptide 1 (AtPep1) belongs to a family of peptides that interact with the PEPR receptors to regulate the expression of pathogen defense genes, including those encoding the PDF1.2 defensin and PR-1 (Huffaker et al., 2006; Yamaguchi et al., 2006, 2010). While systemin and AtPep1 are endogenous defense signals as opposed to MAMP/HAMP exogenous elicitors and indicators of nonself, the signaling similarities shared by these peptide regulators closely resemble aspects of MAMP/HAMP-induced signaling. AtPep family peptides and peptide MAMPs such as flg22 and elf18 activate similar downstream responses using many of the same molecular components (Krol et al., 2010; Postel et al., 2010; Yamaguchi et al., 2010).
Both flg22 and AtPeps bind specific Leu-rich repeat receptors, and both activate downstream defense genes through a myriad of second messenger signals, which in addition to jasmonate and hydrogen peroxide are believed to include ethylene (ET), salicylate, and membrane depolarization (Yamaguchi et al., 2006; Huffaker and Ryan, 2007; Krol et al., 2010). The receptors for both flg22 and AtPep1 associate with an interacting receptor partner, BAK1, and likely activate cyclic nucleotide-gated calcium channels via receptor guanylyl cyclase activity (Ma et al., 2009; Postel et al., 2010). Treatment with flg22 up-regulates the transcription of genes encoding PROPEP family precursors and both PEPR receptors, and AtPep1 treatment induces the transcription of FLS2, the flg22 receptor (Zipfel et al., 2004; Ryan et al., 2007).
The breadth of responses regulated by endogenous peptides indicates their potential utility as a mechanism for manipulating resistance, a strategy that has been demonstrated through experiments with transgenic plant lines. Solanaceous plants constitutively expressing the genes encoding prosystemin or proHypSys accumulate herbivore defense proteins to much higher levels than wild-type plants and are more resistant to insect attack (Bergey et al., 1996; Ren and Lu, 2006). Similarly, Arabidopsis plants constitutively expressing the AtPROPEP1 precursor gene have higher basal expression levels of pathogen defense genes and demonstrate resistance to the necrotrophic pathogen Pythium irregulare (Huffaker et al., 2006). Direct application of peptide to plants is also an effective mechanism to manipulate defense; pretreatment of Arabidopsis plants with either flg22 or AtPep1 peptides prior to inoculation with the hemibiotrophic bacterial pathogen Pseudomonas syringae pv tomato DC3000 enhanced plant resistance (Zipfel et al., 2004; Yamaguchi et al., 2010).
Enhanced disease resistance obtained through peptide pretreatment or transgenic constitutive expression indicates that such methods could have potential use in the field, especially if the mechanisms are conserved across species. However, systemin is not active in nonsolanaceous plants, nor are AtPep peptides capable of signaling in other species (Ryan and Pearce, 2003; Yamaguchi et al., 2006). This species specificity has prevented the functional transfer of peptide-enhanced defense to diverse plant species. While a proHypSys ortholog has been identified in Ipomoea batatas, indicating that the systemin superfamily does exist in other species, homologs have not yet been identified in any other plant families (Chen et al., 2008). Whether this lack of identified systemin homologs is because related peptides evolved only in the Solanaceae or because the amino acid sequence of functional homologs has diverged to the point of being unrecognizable in other species is unknown.
Orthologs of AtPROPEP genes have been identified in other plant species through amino acid sequence comparisons. However, those orthologs share little direct sequence identity to AtPROPEP genes. This lack of sequence identity among species is unsurprising, as Arabidopsis peptides that bind the same receptor have precursor amino acid sequence identity between 12% and 47% (Yamaguchi et al., 2006). All Arabidopsis Pep family precursors do share homologous conserved domains, the combination of which has been used as a means of identification of orthologs in other species. First, all PROPEP family orthologs contain the predicted active peptide sequence at the C terminus of a larger precursor protein, a characteristic also shared by many animal peptide hormone precursors and by prosystemin (McGurl et al., 1992; Huffaker et al., 2006). None of the precursors has a traditional signal sequence for export through the secretory pathway, but each does encode an amphipathic helix motif that is potentially a site of protein-protein interactions (Rhoads and Friedberg, 1997; Huffaker et al., 2006). All predicted peptides are enriched in basic amino acids, and each precursor protein has several repeated EKE motifs, consisting of a high density of Asp/Glu residues interspersed with Lys/Arg (McGurl et al., 1992; Realini et al., 1994; Huffaker et al., 2006). None of the genes designated as AtPROPEP orthologs using the above criteria has been studied for functional homology, and it has been suggested that true AtPep1 homologs likely exist only in species closely related to Arabidopsis (Boller and Felix, 2009).
Our studies present evidence that the gene ortholog in maize (Zea mays), ZmPROPEP1, encodes a peptide, ZmPep1, which is an active signal regulating pathogen defense. The ZmPROPEP1 gene is expressed in response to jasmonic acid (JA) treatment and fungal infection. Treatment of leaves with ZmPep1 promotes production of the hormones JA and ET and induces the expression of genes encoding their biosynthetic enzymes, genes associated with pathogen defense, and the ZmPROPEP1 gene. ZmPep1 activates the biosynthesis of benzoxazinoid defenses and promotes the accumulation of 2-hydroxy-4,7-dimethoxy-1,4-benzoxazin-3-one glucoside (HDMBOA-Glc), a storage form of a highly reactive aglycone hydroxamic acid. Finally, pretreatment with ZmPep1 prior to infection enhances maize resistance to both the foliar pathogen Cochliobolus heterostrophus and the stalk rot pathogen Colletotrichum graminicola.
Maize Transcribes a Pathogen-Inducible Gene Orthologous to AtPROPEP1
Using the AtPROPEP1 sequence to query National Center for Biotechnology Information maize nucleotide sequences, we identified ZmPROPEP1 as a potential homolog. While the amino acid identity between the two precursors is only 14%, both share the modular structural motifs characteristic of the PROPEP family (Fig. 1A). These motifs include the amphipathic helix motif that is potentially a site of protein-protein interactions, multiple EKE repeats, and location of the active peptides at the C terminus of both precursors. The native length of ZmPep1 is predicted to be 23 amino acids, as are both AtPep peptides that have been isolated biochemically (Huffaker et al., 2006; Pearce et al., 2008). Neither AtPROPEP1 nor ZmPROPEP1 has a conventional signal sequence for export through the secretory pathway, and both are predicted to localize to the cytosol.
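The 14% figure above is an aligned-position count. As a minimal sketch of how pairwise percent identity is computed from an existing alignment (the two aligned strings below are toy examples, not the actual AtPROPEP1/ZmPROPEP1 alignment):

```python
# Pairwise percent identity over a pre-computed alignment: identical,
# non-gap positions divided by the number of aligned columns.
# The aligned strings used here are toy placeholders.

def percent_identity(aln1: str, aln2: str) -> float:
    """Return % identical residues across two equal-length aligned strings."""
    assert len(aln1) == len(aln2), "sequences must be pre-aligned"
    matches = columns = 0
    for a, b in zip(aln1, aln2):
        if a == "-" and b == "-":
            continue  # skip columns that are gaps in both sequences
        columns += 1
        if a == b and a != "-":
            matches += 1
    return 100.0 * matches / columns

print(percent_identity("AT-KV", "ATGK-"))  # toy alignment, prints 60.0
```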
The predicted peptide encoded by the ZmPROPEP1 gene has several conserved residues at the C-terminal end as compared with AtPep1 (Fig. 1B), including the Gly (Gly-17) shown to be essential for AtPep1 bioactivity (Pearce et al., 2008). Like AtPep1, the N-terminal end of ZmPep1 is enriched in basic residues and contains five Arg residues compared with the five Lys residues and one Arg in the N-terminal region of AtPep1 (Fig. 1B). The pI of both peptides is very high, 11.22 and 12.18 for AtPep1 and ZmPep1, respectively.
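As a rough illustration of how peptide pI values such as these are derived, the following stdlib-only sketch bisects for the pH of zero net charge. The pKa table is a generic EMBOSS-like set, and the Arg-rich peptide is a made-up placeholder, not the published ZmPep1 sequence (which is not reproduced in this text); it only illustrates how a basic, Arg-rich peptide yields a very high pI.

```python
# Minimal isoelectric-point (pI) estimator via Henderson-Hasselbalch and
# bisection. Assumed generic pKa values; placeholder peptide sequence.

PKA_POS = {"Nterm": 8.6, "K": 10.8, "R": 12.5, "H": 6.5}
PKA_NEG = {"Cterm": 3.6, "D": 3.9, "E": 4.1, "C": 8.5, "Y": 10.1}

def net_charge(seq, ph):
    """Net charge of a peptide at a given pH."""
    pos = 1.0 / (1.0 + 10 ** (ph - PKA_POS["Nterm"]))
    neg = 1.0 / (1.0 + 10 ** (PKA_NEG["Cterm"] - ph))
    for aa in seq:
        if aa in PKA_POS:
            pos += 1.0 / (1.0 + 10 ** (ph - PKA_POS[aa]))
        elif aa in PKA_NEG:
            neg += 1.0 / (1.0 + 10 ** (PKA_NEG[aa] - ph))
    return pos - neg

def isoelectric_point(seq, lo=0.0, hi=14.0, tol=1e-4):
    """Bisect for the pH at which the net charge crosses zero."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if net_charge(seq, mid) > 0:
            lo = mid  # still positively charged: pI lies above mid
        else:
            hi = mid
    return round((lo + hi) / 2.0, 2)

# Arg-rich placeholder, analogous in composition (not sequence) to ZmPep1:
print(isoelectric_point("RRGRRGSGGNGGHN"))
```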
The maize genomic sequence encoding ZmPROPEP1 was cloned from both var Golden Queen (GQ), a commercially grown sweet corn, and var B73. As in the Arabidopsis AtPROPEP1 gene, both GQ and B73 genes contained a single short intron just upstream of the encoded peptide (Supplemental Fig. S1A). The cloned B73 sequence was found to be identical to database sequences, whereas the GQ gene encoded eight amino acid changes, none of which was in the predicted ZmPep1 peptide (Supplemental Fig. S1B). Several cDNAs encoding the ZmPROPEP1 precursor were amplified from young leaf tissue of 1-month-old GQ plants. Sequencing of six independent cDNA clones revealed that three had the intron alternatively spliced such that the transcripts encoded a precursor with five fewer amino acids (Supplemental Fig. S1B). This differential splicing could potentially contribute to the regulation of peptide processing, as the splice site is just upstream of the region encoding ZmPep1, where proteolytic activity likely would release the active peptide from the precursor.
To ascertain whether the ZmPROPEP1 gene responds to pathogen infection, ZmPROPEP1 transcript abundance was analyzed in intact plants that were infected with the fungus C. heterostrophus versus uninfected control plants. ZmPROPEP1 transcript levels increased in the fungus-infected plants (Fig. 1C). Expression of ZmPROPEP1 was also induced in intact leaves treated with either ZmPep1 peptide or JA but not in leaves treated with water (Fig. 1D).
The ZmPep1 Peptide Activates the Production of JA
To confirm that ZmPep1 acts as a defense regulator, we quantified JA concentrations in excised leaves supplied with water or ZmPep1. After 4 h, ZmPep1 induced the accumulation of JA to levels 4.6-fold higher than that of control leaves supplied with water (Fig. 2A). To evaluate the dose dependence of ZmPep1 treatment and subsequent JA accumulation, leaves were treated with increasing concentrations of ZmPep1, ranging from 0.2 to 2,000 pmol g⁻¹ fresh weight. After 4 h, JA levels in control leaves supplied with water averaged around half that of leaves supplied with the lowest concentration of ZmPep1 (Fig. 2C). Average JA levels increased with the application of increasing amounts of peptide, with a maximum 10 times that of water-supplied leaves (Fig. 2C). The concentration of ZmPep1 that induced half-maximal JA accumulation fell between 200 fmol and 2 pmol g⁻¹ fresh weight. The vapor phase extraction method followed by gas chromatography (GC)-mass spectrometry (MS) analysis of JA allowed us to simultaneously measure salicylic acid, levels of which were not observed to change in our experiments.
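A half-maximal dose like the one quoted above can be read off a dose-response series by interpolating on a log dose axis. A sketch with hypothetical numbers (the `doses`/`responses` values are illustrative assumptions loosely shaped like the Figure 2C series, not the measured data):

```python
import math

# Hypothetical dose-response pairs (illustrative only):
doses = [0.2, 2.0, 20.0, 200.0, 2000.0]         # pmol per g fresh weight
responses = [60.0, 180.0, 260.0, 290.0, 300.0]  # e.g. JA, ng per g fresh weight

def half_maximal_dose(doses, responses):
    """Interpolate (on a log dose axis) the dose giving half the maximal response."""
    half = max(responses) / 2.0
    pairs = list(zip(doses, responses))
    for (d0, r0), (d1, r1) in zip(pairs, pairs[1:]):
        if r0 <= half <= r1:
            frac = (half - r0) / (r1 - r0)
            return 10 ** (math.log10(d0) + frac * (math.log10(d1) - math.log10(d0)))
    raise ValueError("half-maximal response not bracketed by the data")

print(half_maximal_dose(doses, responses))
```

With these made-up numbers the interpolated value falls between 0.2 and 2 pmol g⁻¹, mirroring the range reported in the text.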
Figure 1. Comparison of the proteins encoded by the AtPROPEP1 and ZmPROPEP1 genes. A, Conserved precursor motifs are the EKE motif (blue), amphipathic helix motif (purple), and bioactive elicitor peptide (red, underlined). B, Comparison of conserved characteristics within the active AtPep1 and ZmPep1 peptides. Basic residues are blue, and identical amino acids are red. C, Average ± SE (n = 3) induced ZmPROPEP1 gene expression in leaves by the fungal pathogen C. heterostrophus. D, Average ± SE (n = 3) induced expression of the ZmPROPEP1 precursor gene in response to treatment of intact leaves with ZmPep1 or JA. In C, different letters (a-d) represent significant differences within the plot. In D, different letters (a and b) represent significant differences within each time point (all ANOVAs, P < 0.005; Tukey test corrections for multiple comparisons, P < 0.05).

In excised leaves, expression of the gene encoding allene oxide synthase (AOS) was wound inducible;
however, leaves supplied with ZmPep1 exhibited a 3.8-fold greater induction of AOS transcript than did wounded leaves supplied with water (Fig. 2B). Expression of the allene oxide cyclase (AOC) gene was more specifically induced by ZmPep1 treatment. Compared with unwounded leaves at time zero, excised water-supplied leaves displayed modest 5-fold increases in transcript, while ZmPep1-treated leaves exhibited a 30-fold induction (Fig. 2B). Maximal increases in AOC transcript abundance also occurred at 4 h. Similar to JA production, ZmPep1-induced expression of both the AOS and AOC genes was dose dependent. At 4 h, relative transcript levels of both genes showed increases in abundance starting at ZmPep1 applications of 20 pmol g⁻¹ fresh weight (Fig. 2D).
ZmPep1 Induces ET Emission
Given that ET commonly interacts with JA to regulate pathogen defenses, ET production was also investigated. After 2 h, ZmPep1-supplied leaves emitted a 5-fold increase in ET compared with water-supplied leaves (Fig. 3A). ZmPep1-induced ET production was dose dependent, and average emissions increased as the amount of peptide supplied to leaves increased (Fig. 3B). Expression of the gene encoding 1-aminocyclopropane-1-carboxylic acid oxidase (ACC Ox) also responded to ZmPep1 treatment. ZmPep1 induced an 8-fold increase in transcript levels above those detected in water-treated leaves, which showed no measurable change in ACC Ox expression (Fig. 3C). Similar to the AOS and AOC genes, 20 pmol g⁻¹ fresh weight ZmPep1 was observed to be the threshold level for effects on ACC Ox gene expression (Fig. 3D). While peak levels of ET emission occurred after 2 h of treatment, increased expression of the ACC Ox gene was greatest after 4 h. This implies that the early induction of ET in ZmPep1-treated leaves occurs through activation of ACC Ox enzyme activity or through translational activation rather than through increased transcription.
ZmPep1 Regulates the Expression of Pathogen Defense Genes
To examine defense processes associated with ZmPep1-activated production of JA and ET, we examined the expression of established defense marker genes (Doehlemann et al., 2008; Erb et al., 2009). Endochitinase A (ECA), pathogenesis-related 4 (PR-4), pathogenesis-related maize seed protein (PRms), and peroxidase (PEX) genes have been shown to be pathogen inducible in microarray experiments (Doehlemann et al., 2008), whereas SerPIN encodes a Bowman-Birk trypsin inhibitor that is strongly induced by JA treatment, elicitors, and biotic stresses (Erb et al., 2009).
Expression of all five genes was elevated in excised leaves that had been supplied with ZmPep1. Within 4 h, ECA transcript abundance increased 6-fold in ZmPep1-treated leaves as compared with unwounded control leaves (Fig. 4A). After longer treatment times, ECA transcripts were also modestly induced by wounding. PEX transcripts demonstrated a 25-fold increase in ZmPep1-treated leaves at 4 h, remaining elevated at 16 h. At 4 h, PEX was not strongly wound inducible, but at later time points, excision resulted in a gradual increase in transcription (Fig. 4A). Transcription of PR-4 was wound responsive in the excised leaves, and at early treatment times it was induced similarly by both water and ZmPep1 treatment. At 12 h, PR-4 transcripts accumulated to 2-fold higher levels in the ZmPep1-supplied leaves compared with water-supplied controls; however, this response was not statistically significant (Fig. 4A). Expression of the gene encoding PRms was modestly but consistently increased 4-fold higher than water-supplied or unwounded control leaves (Fig. 4A).

Figure 2. ZmPep1 induces JA accumulation and regulates the expression of related biosynthetic genes. A and B, Time-course analysis of JA levels (A) and AOS and AOC gene expression (B) in excised leaves supplied with water or ZmPep1 (2 nmol g⁻¹ fresh weight). Relative transcript abundance levels were examined using semiquantitative PCR with actin as a control. C and D, Dose dependence of JA levels (C) and AOS and AOC gene expression (D) in response to ZmPep1 at 4 h. Each sample was a pool of two leaves (n = 3; ±SE). At the time point of greatest mean change, different letters (a and b) represent significant differences (all ANOVAs, P < 0.02; Tukey test corrections for multiple comparisons where applicable, P < 0.05). FW, Fresh weight.
At 4 h, ZmPep1 treatment resulted in the accumulation of transcript encoding SerPIN, to average levels 50-fold higher than those observed in either excised or unwounded 0-h control leaves (Fig. 4A).
For each defense marker gene studied, the induced magnitude of change in transcript abundance was found to be dose dependent; excised leaves treated with ZmPep1 for 4 h displayed increased defense gene expression with increasing amounts of peptide application (Fig. 4B). Changes in transcriptional abundance of the gene encoding PRms were observed at the lowest ZmPep1 treatment level, 200 fmol g⁻¹ fresh weight, while expression of PR-4 was clearly enhanced at 2 pmol g⁻¹ fresh weight. Transcription of the ECA, PEX, and SerPIN genes was strongly induced in leaves supplied with 20 pmol g⁻¹ fresh weight ZmPep1.
ZmPep1-induced gene expression was also observed in intact plants using 25 pmol of peptide solution applied to a small wound site (Fig. 4C). Transcript abundance of all five defense-associated genes was found to increase in the ZmPep1-treated leaves relative to wounded leaves treated with water, similar to the excised leaf assay. In intact plants, Rhizopus-derived pectinase elicitor also induced increased transcript abundance of each defense gene, to levels comparable to those induced by ZmPep1 (Fig. 4C). Average expression of all five genes was also up-regulated in leaf tissue after 24 h of C. heterostrophus infection (Fig. 4D).
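The fold changes reported in this section were derived by actin normalization, as stated in the figure legends (each gene's signal is divided by the actin signal from the same sample and then expressed relative to an untreated control). A minimal sketch of that calculation, with made-up band intensities:

```python
# Actin-normalized fold change: (gene/actin) in the treated sample over
# (gene/actin) in the untreated control. Intensity values are hypothetical.

def fold_change(gene_treated, actin_treated, gene_control, actin_control):
    """(gene/actin)_treated divided by (gene/actin)_control."""
    return (gene_treated / actin_treated) / (gene_control / actin_control)

print(fold_change(500.0, 100.0, 50.0, 100.0))  # prints 10.0
```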
ZmPep1 Promotes Accumulation of the Defense Precursor Metabolites Anthranilate and Indole
To examine metabolites that fuel the biosynthesis of chemical defenses in maize, we examined benzoxazinoid hydroxamic acid-related precursors (Romero et al., 1995; Frey et al., 2009). ZmPep1 treatment increased the leaf concentrations of both anthranilate and indole after 12 h (Fig. 5A). Anthranilate increased from approximately 20 ng g⁻¹ fresh weight in water-treated leaves to more than 700 ng g⁻¹ fresh weight in peptide-treated leaves. Indole increased 30-fold in wounded leaves and more than 1,300-fold in ZmPep1-supplied leaves. Accumulation of anthranilate and indole in leaf tissue correlated to the amount of ZmPep1 used to treat the leaves (Fig. 5B). Peak induction was achieved at 200 pmol g⁻¹ fresh weight.
To determine whether the increase in anthranilate corresponded with transcriptional regulation of biosynthetic enzymes, expression of the gene encoding anthranilate synthase subunit 2 (ASsub2) was analyzed. Transcript abundance of ASsub2 increased in ZmPep1-treated leaves relative to water-treated leaves and was greatest after 4 h of treatment (Fig. 5C). Induced expression of the gene was dose dependent, and observable increases in transcript abundance were apparent at peptide concentrations as low as 200 fmol g⁻¹ fresh weight (Fig. 5C). Accumulation of both anthranilate and indole occurred in leaves treated with a fungus-derived pectinase elicitor, with observed levels of both metabolites peaking at 12 h (Fig. 5D). Infection with C. heterostrophus only weakly influenced levels of anthranilate and indole in the leaves at the time points examined (Fig. 5E).
ZmPep1 Induces Accumulation of HDMBOA-Glc

Young plants had a 3-fold higher basal total hydroxamic acid content than older plants (Fig. 6A). Total hydroxamic acid content in young plants was modestly increased by ZmPep1 treatment, but in older plants, the total hydroxamic acids more than doubled in response to ZmPep1, indicating that de novo hydroxamic acid synthesis was required (Fig. 6A). When HDMBOA-Glc levels were compared with those of 2,4-dihydroxy-7-methoxy-1,4-benzoxazin-3-one (DIMBOA) and its glucoside DIMBOA-Glc in leaves, HDMBOA-Glc accounted for an increased percentage of hydroxamic acid content in both young and old plants; however, whereas DIMBOA-Glc predominated in young plants, HDMBOA-Glc became the predominant hydroxamic acid in old plants (Fig. 6B).
Production of indole by Benzoxazineless1 (BX1) is the first committed enzymatic reaction leading to benzoxazinoid synthesis. Expression of the gene encoding BX1 is responsive to biotic stresses, and modulation of BX1 expression is a mechanism regulating benzoxazinoid pathway activity (Frey et al., 2009; Niemeyer, 2009). To ascertain whether ZmPep1 might activate metabolic flux through the pathway by inducing BX1 gene expression, BX1 transcript abundance was analyzed in leaves. After 4 h of treatment with ZmPep1, BX1 transcripts accumulated to 30-fold higher levels than were found in time-zero control leaves (Fig. 6C). Excised leaves in water also had increased BX1 expression, but only 10-fold higher than that of time-zero control leaves.
ZmPep1 Enhances Resistance to Southern Leaf Blight Disease
Because ZmPep1 activates the production of JA and ET, the expression of pathogen defense genes, and the accumulation of HDMBOA-Glc, we hypothesized that pretreatment of plants would improve plant disease resistance. To test this hypothesis, intact plants were treated with water or with ZmPep1 at 18 h prior to inoculation with C. heterostrophus, a fungal necrotroph that is the causative agent of southern leaf blight. Chlorotic lesions spread from the wound sites of infected leaves that had been pretreated only with water after 3 d (Fig. 7A). In the ZmPep1-pretreated leaves, lesions were contained at the edge of the wound site and had not spread. C. heterostrophus-induced lesion area in ZmPep1-treated leaves was less than half that of water-treated leaves even at high inoculation loads of C. heterostrophus (Fig. 7B).
Leaves that had been pretreated with water had increased cell death, as estimated by ion leakage, relative to leaves that had been pretreated with ZmPep1 (Fig. 7C). As spore inoculation levels increased, subsequent average ion leakage from the infected leaves also increased. Across all inoculum levels, ZmPep1-pretreated leaves were more resistant to C. heterostrophus-induced cell death. At the lowest concentration of fungal inoculum applied, percentage ion leakage was 20-fold less in ZmPep1-pretreated leaves as compared with water controls.
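Percentage ion leakage is conventionally expressed as the fraction of total electrolytes a tissue can release. A sketch of that index, assuming the standard protocol in which total conductivity is measured after the tissue is fully disrupted (the study's exact procedure is not specified here, and the readings below are hypothetical):

```python
# Conventional ion-leakage index: conductivity of the bathing solution as a
# percentage of total conductivity released after full tissue disruption
# (e.g. by boiling). Both readings must be in the same units.

def percent_ion_leakage(sample_conductivity, total_conductivity):
    """Percentage ion leakage from paired conductivity readings."""
    return 100.0 * sample_conductivity / total_conductivity

print(percent_ion_leakage(30.0, 120.0))  # prints 25.0
```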
ZmPep1 Enhances Resistance to Anthracnose Stalk Rot
To examine ZmPep1 activity in stems, resistance to anthracnose stalk rot was examined. In plants pretreated with water, the progression of lesion spread from the infected node was less than in control plants, indicating that wounding alone enhanced resistance (Fig. 8A). As compared with either untreated or water-treated plants, the nodes of ZmPep1-treated plants displayed very little necrotic spread.
After 4 d, greater than 90% of the stalk area was necrotic in untreated control stems that had been inoculated with 3.3 × 10³ C. graminicola conidia (Fig. 8B). Stalks of plants that were pretreated with water were 45% to 50% necrotic at high inoculation levels, whereas ZmPep1 pretreatment resulted in only 25% stem rot. Measurements of conductivity to ascertain the extent of cell death as indicated by ion leakage revealed a similar trend. ZmPep1-pretreated stalks had less than a 100 mS cm⁻² increase, less than half that of water-pretreated stems and one-quarter that of directly infected controls (Fig. 8C).
ZmPep1 Is an Active Signal in Several Varieties of Maize
To elucidate whether ZmPep1 modulates responses in other maize varieties, excised leaves of pathogen-resistant lines HI-27 and MP313E were supplied with peptide and levels of JA and indole were quantified. JA accumulated in leaves of all three maize lines after 4 h of ZmPep1 treatment (Fig. 9A). Similarly, indole was observed to increase in ZmPep1-treated leaves of the three lines (Fig. 9B). For both MP313E and HI-27, the magnitude of indole production was less than in GQ, but the peptide-treated leaves were observably induced compared with water-treated leaves. It may be that these varieties are less sensitive to ZmPep1 as a signal or that the selection of indole as a defense marker metabolite is not ideal for all maize lines.
DISCUSSION
We demonstrate that ZmPep1 acts as a defense-regulating signal and extend the characterization of this family of peptides beyond Arabidopsis. This work examined the molecular and biochemical defenses induced by ZmPep1 that are collectively associated with resistance against invading microorganisms. The maize ZmPROPEP1 ortholog of AtPROPEP1 is functionally homologous, and the gene is transcribed in response to both JA and pathogen infection. As does AtPep1, the ZmPep1 peptide activates numerous components of the innate immune response. This maize peptide-activated defense response was characterized by the production of defense-related phytohormones, induced expression of pathogen defense genes, accumulation of benzoxazinoid defenses, and enhanced resistance to multiple pathogens.

Figure 5. ZmPep1-induced defense-associated metabolites. A, Average anthranilate and indole accumulation in excised leaves. B, Dose dependence of ZmPep1-induced anthranilate and indole accumulation in excised leaves. C, Time course and dose dependence of ZmPep1-induced changes in expression of the ASsub2 gene. Transcript abundance was normalized via comparison with an actin control and expressed as fold change relative to an untreated leaf. For all experiments, unless otherwise indicated, ZmPep1 was supplied at 2 nmol g⁻¹ fresh weight. Each sample was a pool of two leaves. For graphs in A and C, at the time point of greatest mean change, different letters (a and b) represent significant differences (all ANOVAs, P < 0.02; Tukey test corrections for multiple comparisons, P < 0.05). For graphs in B, asterisks represent significant differences from the water-treated control (P < 0.04). n.s.d. (not statistically different) indicates ANOVA P > 0.05. For all graphs, n = 3 (±SE). FW, Fresh weight.
Like other endogenous peptide regulators of defense, ZmPep1 functions through the activation of oxylipin signaling, inducing both expression of JA biosynthetic genes and JA accumulation (Howe et al., 1996; Huffaker and Ryan, 2007). Furthermore, ET is also a component of ZmPep1 signaling; the peptide activates expression of the gene encoding ACC Ox and promotes ET emission in a dose-dependent manner. Coordinated activity of JA and ET signaling regulates pathogen defense responses in many plants (Rojo et al., 2003; Glazebrook, 2005; Bari and Jones, 2009). While the molecular mechanisms regulating pathogen defense responses in maize are not as well characterized as in plants such as Arabidopsis, evidence is accumulating that cooperative JA/ET signaling is a conserved motif of defense initiation. Both JA and ET are produced by maize in response to biotic stress and insect elicitor treatment (Schmelz et al., 2003, 2009). Additionally, elicitor-modulated JA/ET signaling by Trichoderma virens is proposed as the mechanism by which this beneficial fungus activates induced systemic resistance in maize (Djonovic et al., 2007). Our results demonstrating that ZmPep1 regulates JA, ET, and pathogen resistance support the cooperative role of these hormones as signals for maize pathogen defense.
In addition to mediating the production of JA and ET, ZmPep1 also promoted increased transcript abundance for genes encoding antimicrobial and defense signaling proteins. Consistent with the proposed role of ZmPep1 as an endogenous elicitor, many defense-related transcripts were also induced by infection with C. heterostrophus and by the exogenous fungal elicitor pectinase. The PR-4 and ECA genes regulated by ZmPep1 are activated by pathogen attack and encode chitinase proteins likely to have direct antifungal activity through degradation of fungal cell walls. In germinating maize embryos, PR-4 gene expression is stimulated by inoculation with fungi and by fungal elicitor extracts; it is inducible in leaves by JA, abscisic acid, and wounding (Bravo et al., 2003). Both PR-4 and ECA transcripts also accumulate in Ustilago maydis-infected ears (Bravo et al., 2003; Doehlemann et al., 2008).
In addition to genes encoding antimicrobial PR proteins, ZmPep1 induced expression of the PRms gene, a homolog of the tobacco (Nicotiana tabacum) PR-1 family that is induced by fungal infection (Casacuberta et al., 1992). Rather than having direct antimicrobial activity, PRms acts as a defense regulator. In both rice (Oryza sativa) and tobacco, constitutive PRms gene expression was found to increase basal levels of defense gene transcripts and to confer enhanced resistance to infection by several pathogens (Murillo et al., 2003; Gómez-Ariza et al., 2007). This up-regulation of defense by PRms is proposed to occur through the modulation of Suc-mediated signaling, raising the intriguing possibility that in addition to activating defense through JA/ET hormone signaling, ZmPep1 may promote disease resistance through PRms-mediated sugar signaling events as well (Gómez-Ariza et al., 2007).
ZmPep1-induced PEX may detoxify reactive oxygen species generated through cellular damage or signaling or may cross-link lignin, cellulose, and extensin to strengthen cell walls against attacking organisms (Lagrimini et al., 1987; Hiraga et al., 2001). SerPIN may act in direct defense, since it is a Ser proteinase inhibitor that could inhibit digestive proteases from both insect and microbial invaders (Ryan, 1989). However, serpin family proteins are also regulators of proteolytic signaling cascades required for innate immune responses in mammals and insects (Law et al., 2006). Furthermore, a serpin in Drosophila melanogaster, termed Necrotic, modulates signaling by spätzle, an endogenous peptide signal mediating Drosophila innate immune responses (Levashina et al., 1999). It remains to be determined whether ZmPep1-induced SerPIN acts in direct defense or as a signaling modulator.
While the antimicrobial and signaling-related genes up-regulated by ZmPep1 are likely factors contributing to induced disease resistance, small molecule defenses are also likely to contribute. Benzoxazinoids are indole-derived hydroxamic acid defenses in poaceous plants that are associated with herbivore and pathogen resistance (Niemeyer, 2009). Cellular damage caused by attacking organisms releases reactive benzoxazinoids from their glycosylated precursors (Frey et al., 2009). Maize seedlings and young tissues have relatively high concentrations of DIMBOA and the glucoside DIMBOA-Glc, which are believed to help protect these essential tissues; however, the role of benzoxazinoids in older plants is not as well defined (Niemeyer, 2009).
Neither DIMBOA-Glc nor free DIMBOA was found to accumulate in response to ZmPep1, but HDMBOA-Glc was induced in ZmPep1-treated leaves. The second methoxyl group on HDMBOA renders the molecule less stable and more reactive than DIMBOA (Maresh et al., 2006). With respect to invading organisms, HDMBOA seems to have multiple functions, capable of acting as both a toxin and a negative effector of pathogenicity. HDMBOA-Glc is a component of maize defense against southwestern corn borer, Diatraea grandiosella, in resistant varieties (Hedin et al., 1993). Southwestern corn borer-resistant maize lines are enriched in HDMBOA content compared with susceptible lines, and HDMBOA was shown to be directly toxic to larvae. HDMBOA is also a predominant constituent of maize root exudates and is postulated to generate a continuously maintained defensive zone in the soil surrounding the roots (Zhang et al., 2000). In root studies, HDMBOA did not act to prevent colonization of roots by Agrobacterium tumefaciens, but it was found to decompose into an o-imidoquinone intermediate that inhibited A. tumefaciens virulence gene expression (Maresh et al., 2006).
Specific accumulation of HDMBOA-Glc is inducible in both wheat (Triticum aestivum) and maize by treatment with JA, pathogen infection, and herbivory (Bücker and Grambow, 1990; Oikawa et al., 2001, 2002, 2004). In these studies, accumulation of HDMBOA-Glc seemed to occur in direct correlation to reduced levels of DIMBOA-Glc, implying that HDMBOA-Glc was generated through methoxylation of existing DIMBOA-Glc pools rather than through de novo hydroxamic acid biosynthesis (Oikawa et al., 2001). For ZmPep1-induced HDMBOA-Glc accumulation, the proportion of HDMBOA-Glc relative to DIMBOA-Glc was increased; however, the increase in HDMBOA-Glc did not come at the expense of DIMBOA-Glc. Rather, we observed that total hydroxamic acid content increased in the ZmPep1-treated leaves of older plants, indicating that the peptide activated de novo synthesis of benzoxazinoids that was channeled into HDMBOA-Glc production. ZmPep1-induced expression of the BX1 gene, encoding an indole glycerol lyase that catalyzes the first committed step in benzoxazinoid production, also supports enhanced metabolic flux into the pathway (Melanson et al., 1997; Frey et al., 2009).
Manipulation of innate immune responses by ZmPep1 caused enhanced disease resistance. ZmPep1-treated plants displayed decreases in both lesion size and cell death in leaves challenged with the necrotroph C. heterostrophus and in stems challenged with the hemibiotroph C. graminicola. Mechanisms of maize resistance to both of these pathogens are still poorly understood. C. heterostrophus is divided into two subgroups based upon toxin production: race T produces toxin, and race O does not. Race O is an endemic pathogen in hot and humid climates and continues to cause disease resulting in lost yield, particularly along the south Atlantic coast (Byrnes et al., 1989). Although a single recessive locus, rhm1, exists that can confer resistance to southern leaf blight disease through an unknown mechanism, most maize lines currently grown rely upon additive quantitative traits that confer partial resistance (Simmons et al., 2001; Balint-Kurti and Carson, 2006). Similar to C. heterostrophus, resistance to C. graminicola is also primarily quantitative and polygenic (Venard and Vaillancourt, 2007). C. graminicola is common in maize fields across the United States, and while the fungus is able to infect most maize tissue, it primarily causes yield losses due to anthracnose stalk rot (Bergstrom and Nicholson, 1999; Venard and Vaillancourt, 2007).

Figure 9. ZmPep1 promotes the production of JA and defense-related metabolites in multiple maize varieties. A, Time course of induced JA in excised leaves supplied with water or ZmPep1. B, Indole measured in excised leaves. C, Anthranilate levels in excised leaves. ZmPep1 was supplied at 2 nmol g⁻¹ fresh weight. Samples were pools of two leaves, n = 3 (±SE). For the graphs in A, different letters (a-c) represent significant differences (all ANOVAs, P < 0.001; Tukey test corrections for multiple comparisons, P < 0.05). For the graphs in B, asterisks represent significant differences from the water-treated control (P < 0.05). FW, Fresh weight.
Colletotrichum species are known to actively evade and suppress plant defense, but the fungus was unable to overcome the defense responses preactivated by ZmPep1 treatment (Münch et al., 2008). Transgenic Arabidopsis plants constitutively expressing AtPROPEP1 exhibited increased basal levels of the same genes that were induced in wild-type plants by treatment with AtPep1 (Huffaker et al., 2006). This constitutive induction of basal immunity resulted in increased pathogen resistance (Huffaker et al., 2006). Transgenic maize plants constitutively expressing the ZmPROPEP1 gene may also display higher basal levels of the genes and metabolites that were observed in plants ectopically treated with ZmPep1 peptide. Several molecular studies of maize resistance to attacking organisms have indicated that resistance is associated with increased basal levels of defense gene expression and defense metabolite accumulation similar to those induced by ZmPep1 (Hedin et al., 1993; Niemeyer, 2009; Alessandra et al., 2010). The ability of ZmPep1 to elicit defense signaling and metabolite accumulation in multiple maize lines indicates that constitutive expression through transgenic means could yield results across varieties.
Because endogenous peptide regulators such as ZmPep1 activate multiple defense pathways rather than one gene or metabolite, they may provide a potentially useful strategy to contribute to quantitative resistance through manipulation of a single gene. In crop plants, quantitative disease resistance relies on the additive effects of multiple defenses to provide broad-spectrum partial resistance to many different pathogens (Wisser et al., 2006;Poland et al., 2009). Although quantitative resistance is highly desirable, direct incorporation of this trait into crop development is difficult because of its combinatorial nature; the additive effects that make this resistance robust and versatile also make it difficult to manipulate. Transgenic modulation of peptide signaling has already shown promise as a mechanism for manipulating quantitative resistance. For example, the gene encoding EFR, a Brassicaceae-specific pattern recognition receptor that binds a bacterial peptide MAMP to elicit broad innate immune responses, was ectopically expressed in Nicotiana benthamiana and tomato (Solanum lycopersicum; Lacombe et al., 2010). Transgenic expression of this receptor enhanced resistance to diverse bacterial species by facilitating the recognition of attack and activation of a broad spectrum of defense responses. Constitutive expression of the ZmPROPEP1 gene could similarly confer quantitative resistance effects through simultaneous up-regulation of basal levels of defense responses in maize plants. Furthermore, because orthologs of the precursors to AtPep1 and ZmPep1 have been identified across the plant kingdom, this strategy of endogenous peptide manipulation of defense responses could potentially be used to enhance disease resistance in many diverse plant species.
Plant and Fungal Materials
Maize (Zea mays) varieties used were B73, HI-27, MP313E, and GQ. All were potted in professional grower's soil mix (Piedmont Pacific) blended with 14-14-14 Osmocote (Scotts). All varieties were cultivated in a greenhouse under the following conditions: 12-h photoperiod with a minimum of 300 μmol m⁻² s⁻¹ photosynthetically active radiation supplied by supplemental lighting. Relative humidity was maintained at 70%, and temperature cycled between 24°C at night and 28°C during the day.
Cochliobolus heterostrophus was isolated from leaf material of an infected maize plant growing in Gainesville, Florida. The specimen was streaked on half-strength potato dextrose agar and subcultured until pure isolates were obtained. The fungus was identified by the Florida Extension Plant Disease Clinic at the University of Florida through macroscopic colony appearance, examination of morphology under both dissecting and light microscopy, and PCR analysis of fungal DNA with species-specific primers. Spore suspensions of C. heterostrophus were prepared in 30% glycerol/0.1% Tween and stored at −80°C. For each bioassay, an aliquot of glycerol stock was used to generate a fresh working culture on half-strength potato dextrose agar (Sigma-Aldrich) that was incubated for 2 weeks at 26°C. Colletotrichum graminicola strain M1.001 was acquired from Dr. Jeffrey Rollins (Department of Plant Pathology, University of Florida), and conidial spore stocks were prepared in 30% glycerol/0.1% Tween and stored at −80°C. For each assay, a fresh working culture was prepared by spotting glycerol stock onto V8-agar plates and incubated for 1.5 to 2 weeks at 26°C.
Peptide and Precursor Gene Identification
The previously identified AtPROPEP1 sequence (Huffaker et al., 2006) was used to query GenBank-registered nucleotide sequences from maize through the National Center for Biotechnology Information TBLASTN version 2.2.7 algorithm (Altschul et al., 1997). Alignments with the AtPROPEP1 sequence revealed GenBank accession DY240150, a full-length cDNA that encodes the ZmPROPEP1 precursor in the −1 frame. To determine possible localization of the protein in the cell, the pSORT prediction program was used (Nakai and Kanehisa, 1991).
Peptide Synthesis
A 23-amino acid peptide corresponding to the predicted ZmPep1 active peptide sequence, VRRRPTTPGRPREGSGGNGGNHH, was synthesized by solid-phase peptide synthesis at the Protein Core Chemistry Facility at the University of Florida using N-(9-fluorenylmethoxycarbonyl)-protected amino acids on a 432A Peptide Synthesizer (Applied Biosystems). The peptide was cleaved from the resin with modified reagent K and HPLC purified on an RP-C18 column using a water-acetonitrile gradient in 0.1% trifluoroacetic acid. The peptide was confirmed to be of the expected M r (2,452.63) by MS.
Nucleic Acid Purification and Isolation
DNA was isolated from maize leaves using the genomic DNA isolation reagent DNAzol (Invitrogen) as per the instructions provided with the reagent. For RNA isolation, tissues that had been harvested and frozen in liquid nitrogen were ground to a fine powder, and approximately 100 mg of frozen powdered plant material was extracted in 1 mL of Trizol reagent (Invitrogen). RNA isolation was performed as per the Trizol instructions, supplemented by an acid-phenol-chloroform partitioning step to minimize contaminating DNA.
Cloning of the ZmPROPEP1 Gene and cDNA

RNA isolated from young maize leaves was reverse transcribed using the RETROscript kit (Applied Biosystems) as per kit instructions with random decamer primers. The ZmPROPEP1 open reading frame was amplified from the cDNA with the forward primer 5′-GACCTCAGGAAAGGGGAGACCTGGA-3′ and the reverse primer 5′-AAGGAAGCGAACAAGCTAGGGTCACCGTA-3′ using Phusion Hot Start II DNA Polymerase (New England Biolabs). The amplified cDNA was cloned into the pCR BLUNT II TOPO vector using a Zero Blunt PCR cloning kit (Invitrogen) as per kit instructions and transformed by heat shock into TOP10F′ chemically competent Escherichia coli (Invitrogen). Colonies were screened by PCR using the ZmPROPEP1 primers, and plasmids from positive colonies were sequenced using ABI Prism BigDye terminator (Applied Biosystems). All sequencing reactions were run at the DNA Sequencing Core Facility at the University of Florida.
Leaf Bioassays for Analysis of Transcript and Metabolite Abundance
For excision assays, leaf 5 of 3-week-old maize plants was cut and placed in 4-mL glass vials containing either water or a ZmPep1 solution in water. For each treatment and time point, six leaves of leaf stage 5 were assayed. At the time points indicated, the entire leaf was harvested in liquid nitrogen for RNA and metabolite analysis. Zero-hour control leaves were harvested directly from the plant into liquid nitrogen. For intact leaf assays, wax was gently scraped from leaves at two sites on either side of the midrib on leaf 5 of 3-week-old plants. Five microliters of water or of solutions in water of 25 pmol of ZmPep1, 500 μg of pectinase elicitor, or 5 × 10³ fungal spores was applied to each site. After the time indicated, a 7.5-cm segment of leaf surrounding the wound sites was harvested in liquid nitrogen for RNA and metabolite analysis.
Semiquantitative Reverse Transcription-PCR
RNA was reverse transcribed using RETROscript reagents (Applied Biosystems) with reactions assembled and incubated as per kit instructions. Semiquantitative PCR was performed as follows. Template cDNA was used at 120 ng per reaction. Each 25-μL reaction had 0.5 units of Platinum Taq polymerase diluted into Platinum 10× PCR buffer (Invitrogen) with 1.5 mM Mg²⁺, 200 μM each deoxyribonucleotide triphosphate, and 0.4 μM each primer. All primers were designed to be used at an annealing temperature of 56°C, to amplify regions 150 to 350 bp in length, and to span introns when possible; primer sequences are listed in Supplemental Table S1. The Actin1 gene transcript (GenBank accession no. J01238) was used to permit comparisons of relative transcript abundance from sample to sample (Kirchberger et al., 2007; Erb et al., 2009). PCR was performed as follows: 3 min at 94°C, then cycles of 30 s at 94°C, 30 s at 56°C, and 1 min at 72°C, with a final 10 min at 72°C. Cycle number for each transcript was optimized and ranged between 25 and 38 cycles. The number of amplification cycles used for each is listed in Supplemental Table S1.
A 20-μL aliquot of each reaction product was diluted with 2 μL of DNA blue/orange loading dye (Promega Biosciences) and analyzed electrophoretically on a 1% agarose/Tris-acetate-EDTA gel impregnated with ethidium bromide (Promega Biosciences). The gel was visualized on a Gel Doc XR Imaging System (Bio-Rad) using Quantity One version 4.6.2 software (Bio-Rad). A high-resolution image of the gel was captured, and band intensity was measured using the Quantity One program (Bio-Rad). Band intensity of each transcript was normalized by dividing the measured value by the value obtained from measurement of actin band intensity for the same sample. Values obtained for estimation of relative transcript abundance were then defined as fold change in normalized band intensity for each treatment versus normalized band intensity of an untreated control sample.
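The actin normalization and fold-change calculation described above amounts to two divisions; a minimal sketch follows, where the function names and densitometry values are invented for illustration, not taken from the paper.

```python
# Sketch of the band-intensity normalization described above.
# All values are hypothetical densitometry readings (arbitrary units).

def normalize(band_intensity, actin_intensity):
    """Normalize a transcript band to the actin band from the same sample."""
    return band_intensity / actin_intensity

def fold_change(treated_band, treated_actin, control_band, control_actin):
    """Fold change of normalized intensity, treatment vs. untreated control."""
    return (normalize(treated_band, treated_actin)
            / normalize(control_band, control_actin))

# Hypothetical values: a strongly induced transcript after treatment.
fc = fold_change(treated_band=4200, treated_actin=1500,
                 control_band=900, control_actin=1600)
print(round(fc, 2))  # roughly a 5-fold induction in this made-up example
```

Normalizing to actin first means that loading differences between lanes cancel out before the treatment-versus-control ratio is taken.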
Measurement of Hormones and Metabolites
Levels of JA, indole, and anthranilic acid were measured using the previously described vapor phase extraction method with GC-MS analysis (Schmelz et al., 2004). Quantification of indole levels in each sample was performed by comparison with an external standard curve. ET emitted by leaves was measured by GC as described previously (Schmelz et al., 2009).
Analysis of Benzoxazinoid Phytoalexins
Benzoxazinoids were extracted and analyzed by HPLC as described by Erb et al. (2009). After 24 h, leaf tissue surrounding the treatment sites was harvested in liquid nitrogen, freeze dried, and extracted in 49:1 methanol: acetic acid prior to analysis. Quantities were estimated using 6-hydroxy-2 (3H)-benzoxazolone as an internal standard. HDMBOA-Glc is the predominant hydroxamic acid in 20-d-old maize roots (Cambier et al., 2000); thus, root tissue was used to confirm the HPLC retention time of this ZmPep1-induced metabolite in leaves. HDMBOA is known to be highly unstable (Maresh et al., 2006). Unlike DIMBOA-Glc, even low levels of water in the extracted tissue caused the complete loss of analyzable HDMBOA-Glc.
Leaf Blight Resistance Assays
Intact 2.5-week-old maize plants were infected with C. heterostrophus as follows. On leaves 5 and 6, the wax was gently scraped from each leaf twice on both sides of the midrib, and 5 μL of water or 25 pmol of ZmPep1 was applied to each wound site and allowed to air dry. After 12 to 24 h, 5 μL of C. heterostrophus spores in a 0.1% Tween 20 solution was applied to each wound site and allowed to dry. Each plant was then incubated in open-bottomed glass chambers under greenhouse lights for 3 d with 100% humidified air passed over each plant at 4 L min⁻¹. After 3 d, leaves were photographed and the lesion area measured using ASSESS 2.0 image-analysis software for plant disease quantification by Lakhdar Lamari (American Phytopathological Society). The extent of cell death was estimated through the measurement of ion leakage as described by Torres et al. (2002). Briefly, four leaf disc samples, each with an area of 1 cm², were collected from infected or uninfected tissues, immersed in 4 mL of water, and vacuum infiltrated for 1 min. After shaking gently for 1 h, the conductivity of the samples was measured in μS at 25°C using a YSI 3100 conductivity meter (YSI, Inc.). To measure total potential conductivity of each sample, the leaf discs in water were microwaved for 1 min, and after cooling to 25°C, the conductivity was remeasured. A comparison of initial conductivity to total potential conductivity of the same leaf discs resulted in a number expressed as a percentage of total conductivity for each sample. C. heterostrophus-induced ion leakage was then defined as the difference between the percentage of total conductivity measured for C. heterostrophus-infected samples and that of samples from uninfected leaf tissue in the same assay.
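The ion-leakage arithmetic above (initial conductivity as a percentage of total releasable conductivity, then infected minus uninfected) can be sketched in a few lines; the conductivity readings below are hypothetical, not measurements from the paper.

```python
# Sketch of the ion-leakage calculation described above (after Torres et al., 2002).
# Inputs are hypothetical conductivity readings in microsiemens (uS).

def percent_total_conductivity(initial_uS, boiled_uS):
    """Initial conductivity as a percentage of total releasable conductivity
    (the reading after microwaving the discs releases all ions)."""
    return 100.0 * initial_uS / boiled_uS

def induced_leakage(infected_initial, infected_total,
                    control_initial, control_total):
    """Pathogen-induced leakage: infected minus uninfected percentage."""
    return (percent_total_conductivity(infected_initial, infected_total)
            - percent_total_conductivity(control_initial, control_total))

# Hypothetical assay: infected discs leak 30% of total, controls leak 10%.
print(induced_leakage(30.0, 100.0, 8.0, 80.0))  # 30% - 10% = 20.0
```

Expressing each sample relative to its own boiled total controls for disc-to-disc differences in size and ion content before the infected-versus-control subtraction.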
Stalk Rot Resistance Assays
A 1.0-mm-diameter cork borer was used to bore a hole through the second aboveground node in the stalk of 3.5- to 4-week-old plants. The hole was filled with 10 μL of either water or 50 pmol of ZmPep1 to eliminate any air bubbles. A plastic pipette tip filled with 1 mL of either water or 5 nmol of ZmPep1 was then gently inserted into the hole until it was secure. The plant was allowed to take up the full 1 mL of pretreatment solution, which typically occurred within 2 h. After a 3-h pretreatment, the pipette tips were removed and the hole was inoculated with 10 μL of a C. graminicola spore suspension in sterile water. For untreated control plants, a hole was bored through the second node at the time of inoculation. After 4 d, stems were split open and photographed and lesion area was determined using ASSESS 2.0 image-analysis software. A 12.5-cm segment centered around the inoculated node was cut, immersed in 15 mL of water, and vacuum infiltrated for 1 min, and after 1 h of incubation, ion leakage was measured as described to estimate the extent of cell death (Torres et al., 2002). Values obtained for C. graminicola-induced ion leakage were defined as the ratio of conductivity values measured in infected stem samples compared with conductivity measured in wounded control stem samples.
Statistical Analysis
ANOVAs were performed on the quantified levels of metabolites, transcripts, pathogen lesion size, and ion leakage estimates. Treatment effects were investigated when the main effects of the ANOVAs were significant (P < 0.05). Where appropriate, Tukey tests were used to correct for multiple comparisons between control and treatment groups. t tests were also used in limited specific cases to examine significant differences in treatment groups compared with selected controls. With the exception of percentage data, prior to statistical analysis, all data were subjected to square root transformation to compensate for elevated variation associated with larger mean values. The analysis was accomplished with JMP 4.0 statistical discovery software (SAS Institute).
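As a sketch of the transform-then-test pipeline described above (the paper used JMP; here the one-way ANOVA F statistic is computed by hand on invented data):

```python
import math

# Square-root transformation followed by a one-way ANOVA F statistic.
# Group values below are invented for illustration.

def one_way_anova_F(groups):
    """Return the F statistic for a one-way ANOVA over a list of groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares, weighted by group size.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares around each group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Square-root transform damps variance that grows with the mean:
control = [math.sqrt(x) for x in [4.0, 5.5, 5.0]]
treated = [math.sqrt(x) for x in [20.0, 24.0, 22.0]]
F = one_way_anova_F([control, treated])
print(F > 1.0)  # a large F suggests a real treatment effect
```

In practice the F statistic would be compared against the F distribution with (k − 1, n − k) degrees of freedom to obtain a P value, as JMP does internally.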
Supplemental Data
The following materials are available in the online version of this article.
Supplemental Table S1. Primers used for semiquantitative reverse transcription-PCR analysis.
Severe acute pancreatitis with blood infection by Candida glabrata complicated severe agranulocytosis: a case report
Background: Bloodstream infection with Candida glabrata often occurs during severe acute pancreatitis (SAP). Its complication by severe agranulocytosis has not been reported. Case presentation: We present a case in which a SAP patient developed sudden hyperpyrexia after 19 days of treatment. We monitored her routine blood panel and CRP levels once or twice daily. The results showed that the WBC count decreased gradually, reaching its lowest level on the 21st day of treatment: WBC 0.58 × 10⁹/L (reference 4.0–10.0 × 10⁹/L), neutrophils 0.1 × 10⁹/L [2.20%] (reference 2.5–7.5 × 10⁹/L). During treatment, Candida glabrata was identified as the infecting agent through blood culture, drainage tube culture, and gene detection. During anti-infection therapy, the patient had severe agranulocytosis. With control of the infection, her WBC and granulocyte counts gradually returned to the normal range. Conclusions: Bloodstream infection with Candida glabrata can be complicated by severe agranulocytosis.
Background
Acute pancreatitis (AP) is an acute inflammatory reaction of the pancreas. Most AP cases resolve on their own, but approximately 20% progress to severe AP (SAP). SAP mortality remains high, and clinical diagnosis and treatment continue to pose a considerable challenge [1,2]. Numerous conditions, such as bacteraemia, a high Ranson score, and diabetes, are significantly associated with mortality in SAP patients [3-5]. Infection is a common clinical complication in the latter stages of SAP, and bloodstream infection with Candida glabrata often occurs in such patients. However, complication with severe agranulocytosis has not been reported.
This paper presents the case of a SAP patient who developed sudden hyperpyrexia and chills after 19 days of treatment. Blood culture and high-throughput gene detection indicated C. glabrata infection. On the 20th day of treatment, the patient developed sudden agranulocytosis. She subsequently recovered after 11 days of active anti-infection and symptomatic treatment. The case is reported as follows.
Case presentation
After 5 h of abdominal distention and pain, a 26-year-old Chinese woman presented to hospital at 15:30 on December 3, 2017. The patient had previously been hospitalised for AP due to hyperlipidaemia on May 9, 2017, after which she had discontinued the lipid-lowering drugs prescribed by her doctor. During the 3 months before her admission in December, she resumed a high-fat diet. Approximately 7 h before disease onset, the patient had consumed fatty food, and she ate fatty food again even after the onset of abdominal distention and pain. Her abdominal pain gradually worsened, and she vomited twice. The patient was diagnosed with AP based on her medical history, symptoms, signs, serum amylase, and upper abdominal computed tomography (CT). After 10 h of hospitalisation, her abdominal pain became aggravated, leading to haemodynamic instability. Upper abdominal CT; liver, kidney, and heart function; and electrolyte levels were reviewed. A comprehensive evaluation of the patient's condition revealed a Ranson score of 4, Balthazar CT grade of D, APACHE II score of 17, and SOFA score of 9. The patient was diagnosed with SAP and multiple organ dysfunction syndrome (heart, liver, and kidney). After hospital admission, the patient was treated with aggressive volume expansion, gastrointestinal decompression, and nutritional support, and continuous renal replacement therapy (CRRT) was initiated on the second day. Based on examination of abdominal imaging, intraperitoneal puncture and drainage was administered under ultrasound guidance on days 2, 4, 8, and 15. Subsequently, eight drainage tubes were placed (two of the eight tubes were removed on the 11th day), and jejunal nutrition was administered for 16 days after admission. By day 18 after admission, the patient's renal function had recovered, and intraperitoneal pressure had decreased from 32 mmHg at admission to 13 mmHg. The APACHE II and SOFA scores both fell to 3 on day 18.
On day 19, the patient's temperature was within the healthy range at 06:00. The results of a routine blood examination were as follows: white blood cells (WBCs) 9.61 × 10⁹/L, neutrophils 7.8 × 10⁹/L (81%), C-reactive protein (CRP) 39.61 mg/L. At 17:00 on day 19, the patient experienced chills and high fever, and her body temperature reached a high of 40.2°C. After blood was drawn for culture, linezolid and meropenem were administered immediately as anti-infection treatment. Routine blood examination, procalcitonin (PCT), and CRP levels were also observed (WBCs 4.64 × 10⁹/L, neutrophils 3.4 × 10⁹/L [73.6%], CRP 51.15 mg/L, PCT 2.68 ng/mL). The patient continued to experience high fever on day 20. Thus, we placed a new deep vein puncture tube and removed the remaining six abdominal drainage tubes. All extracted peritoneal drainage tubes were sent for etiological culture. The causative agents were sought through high-throughput gene detection from a venous blood sample, and blood culture was performed again. These tests revealed that the patient had a fungal bloodstream infection. In addition to the conventional treatment, caspofungin was added to the drug regimen. On the 23rd day of treatment, C. glabrata was identified as the infecting agent through blood culture and gene detection.
Eventually, we discontinued the administration of meropenem and linezolid but continued that of caspofungin. The patient's body temperature returned to the normal range on the 25th day of treatment. During treatment, we monitored her routine blood panel and CRP levels once or twice daily. The results showed that her WBC count decreased gradually, reaching its lowest level on the 21st day of treatment (WBCs 0.58 × 10⁹/L, neutrophils 0.1 × 10⁹/L [2.20%]). Her haemoglobin and platelet levels also decreased; however, the duration of this decline was shorter than that of the WBC decline. The levels of other inflammatory markers also increased on the 23rd day of treatment (CRP 235.89 mg/L, PCT 10.85 ng/mL), alongside an increase in creatinine levels. On the 32nd day of treatment, the patient's WBC count was finally restored to the normal range.
In summary, the patient's temperature returned to normal after 10 days of caspofungin treatment. Blood cultures were carried out on the 26th and 30th days after admission, and no growth was found in either. She was discharged on the 33rd day of treatment and was followed up every 10 days for 30 days after discharge; her temperature remained normal. Since then, she has maintained a low-fat diet.
Discussion and conclusions
With the aging of hospitalized patients and the coexistence of multiple underlying diseases, invasive procedures and treatments such as broad-spectrum antibiotics, glucocorticoids, immunosuppressive agents, chemotherapy drugs, central venous catheterization, and haemodialysis are widely used [6,7]. Consequently, the incidence of acquired candidiasis and the associated mortality rate have increased [8,9]. Because it lacks specific clinical manifestations, candidaemia cannot be diagnosed early, and thus its mortality rate is high [10].
C. glabrata is typically a nonpathogenic commensal yeast of the human mucosa and causes opportunistic infections only occasionally. In recent years, the incidence of bloodstream infections due to C. glabrata has increased [11]. Candidaemia often occurs in patients with neutropaenia, but few cases of severe agranulocytosis caused by Candida bloodstream infection have been recorded.
Our patient was admitted to hospital for SAP. During treatment, we performed puncture and drainage under ultrasound guidance to relieve the intraperitoneal exudation. Although active puncture can effectively relieve intestinal injury caused by SAP [12], repeated invasive procedures and other catheter implantations increased the risk of bloodstream infection due to Candida [13]. On the 19th day of treatment, the patient developed sudden chills and hyperpyrexia. Considering the possibility of candidaemia, we preserved the specimen and removed the catheter. During this time, the patient was treated with caspofungin. She was subsequently diagnosed with C. glabrata infection based on blood culture, central venous catheter culture, abdominal puncture drainage tube culture, and high-throughput gene detection. Before the C. glabrata infection, our patient's WBC and neutrophil counts were high or within the normal range. During anti-infection therapy, she exhibited severe agranulocytosis. After the infection had been controlled, her WBC and granulocyte counts gradually returned to within the normal range. Three potential contributing factors were considered. Firstly, some antibiotics, such as linezolid and meropenem, may cause agranulocytosis. However, in this case, the WBC count had already begun to decline before linezolid and meropenem were administered; we therefore conclude that the WBC decline was not directly related to the use of these two antibiotics but rather to the bloodstream infection caused by C. glabrata. Secondly, when the patient developed shivering and high fever, she was mainly administered nutritional support, and drugs used in nutritional support do not cause agranulocytosis. Thirdly, during treatment we cultured the patient's blood and puncture drainage tubes, and no pathogenic bacteria associated with agranulocytosis were grown. Moreover, she had no medical history of agranulocytosis.
After careful exclusion of other potential causes of agranulocytosis, we concluded that her agranulocytosis was related to bloodstream infection with C. glabrata.
Abbreviations

AP: Acute pancreatitis; CRP: C-reactive protein; CRRT: Continuous renal replacement therapy; CT: Computed tomography; PCT: Procalcitonin; SAP: Severe AP; WBC: White blood cells

Acknowledgements

Writing and editorial assistance.

Funding

The National Natural Science Foundation of China played a role in the study design; in the collection, analysis, and interpretation of the data; in the writing of the report; and in the decision to submit the article for publication.
Availability of data and materials
The original data used in the presentation of this case report are available from the corresponding author on reasonable request.
Authors' contributions

XXD and QW designed the treatment plan. RS and RF implemented the treatment program and collected data. QMZ performed the statistical analysis. All authors have read and approved the manuscript.
Ethics approval and consent to participate

Not applicable to case reports. There are no published images that would allow patient identification.
Consent for publication
The study participant has signed a written consent form giving his consent for publication of this case in an academic journal.
Ultra High Energy Neutrinos from Gamma-Ray Burst Afterglows Using the Swift-UVOT Data
We consider a sample of 107 Gamma Ray Bursts (GRBs) for which early UV emission was measured by Swift, and extrapolate the photon intensity to lower energies. Protons accelerated in the GRB jet may interact with such photons to produce charged pions and, subsequently, ultra high energy neutrinos with $\varepsilon_\nu\geq 10^{16}$ eV. We use simple energy conversion efficiency arguments to predict the maximal neutrino flux expected from each GRB. We estimate the neutrino detection rate at large area radio based neutrino detectors and conclude that the early afterglow neutrino emission is too weak to be detected even by next generation neutrino observatories.
INTRODUCTION
Gamma Ray Bursts (GRBs) are the most powerful explosions in the Universe. The widely used phenomenological interpretation of these cosmological sources is the so-called Fireball (FB) model (Piran 2000; Meszaros & Rees 2000). In this model the energy carried by the hadrons in a relativistic expanding jet (fireball) is dissipated internally and distributed between protons, electrons, and the magnetic field in the plasma. Part of the bulk kinetic energy is radiated as γ-rays (i.e. GRBs) by synchrotron and inverse-Compton radiation of (shock-)accelerated electrons. As the jet sweeps up material it collides with its surrounding medium, which could give rise to Reverse Shocks (RS) and Forward Shocks (FS) (Gao & Mészáros 2015). The former may produce an early UV and optical afterglow (Waxman & Bahcall 2000) while the latter is believed to be responsible for the afterglow emission at longer wavelengths (Mészáros & Rees 1997). The same dissipation mechanism responsible for accelerating electrons that produce the prompt and afterglow photons may also accelerate protons to ultra high energies (ε_p ≥ 10¹⁹ eV). The interaction of these protons with radiation at the source during the prompt phase (Waxman & Bahcall 1997) and during the afterglow phase (Waxman & Bahcall 2000) could lead to production of charged pions, which subsequently decay to produce neutrinos.
High energy protons can interact with optical and Ultra-Violet (UV) photons that are radiated by electrons in the reverse shock, leading to ∼10¹⁷ eV neutrinos via photo-meson interactions (Waxman & Bahcall 2000). For afterglow emission that peaks at infra-red energies, neutrinos may be produced with energies up to ∼10¹⁹ eV.
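The order-of-magnitude energies quoted in this paragraph follow from the standard Δ-resonance photomeson relations; the expressions below are the textbook forms, not equations quoted from this paper:

```latex
% Photomeson (\Delta-resonance) threshold in the observer frame, for a jet
% with bulk Lorentz factor \Gamma, and the usual neutrino energy fraction:
\varepsilon_p \,\varepsilon_\gamma \;\gtrsim\; 0.2\,\Gamma^{2}\ \mathrm{GeV}^{2},
\qquad
\varepsilon_\nu \;\approx\; 0.05\,\varepsilon_p .
```

Since the threshold proton energy scales inversely with the target photon energy, eV-scale UV photons select protons around 10¹⁸-10¹⁹ eV and yield neutrinos near 10¹⁷ eV, while softer infra-red target photons raise the interacting proton (and hence neutrino) energies further, consistent with the range stated above.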
These Ultra High Energy Neutrinos (UHENs) would be delayed with respect to the prompt GRB by the time scale of the RS (∼10-100 s). The same energy conversion efficiency arguments made to assess the neutrino flux from GRB 990123 (Waxman & Bahcall 2000) can be used for other GRBs, that have much weaker optical emission, leading to a substantially smaller estimated neutrino flux.
The Swift observatory comprises the γ-ray Burst Alert Telescope (BAT), which triggers the X-Ray Telescope (XRT), and the UV/Optical Telescope (UVOT) (Roming et al. 2008), which provides rapid follow-up observations of GRBs at UV and optical wavelengths. Typical time delays from the BAT trigger to first UVOT observations range from 40 to 200 s, making UVOT a good instrument for measuring the early afterglow optical-UV emission.
Neutrino astronomy has steadily progressed over the last half century, with successive generations of detectors achieving sensitivity to neutrino fluxes at increasingly higher energies. With each increase in neutrino energy, the required detector increases in size to compensate for the dramatic decrease of the flux. IceCube is a Cherenkov detector (Halzen & Klein 2010) designed specifically to detect neutrinos at GeV-PeV energies. Since May 2011 (Abbasi 2011; Aartsen et al. 2013), IceCube has been operating at full capacity with 86 strings, and has measured a flux of astrophysical neutrinos for the first time. So far no point sources of neutrinos have been identified and no correlation with known GRBs has been found (Kurahashi 2012; He et al. 2012; Whitehorn 2012; Aartsen et al. 2015). Antarctic ice allows for efficient area coverage that makes it possible to construct detectors of order tens to hundreds of km², and several small-scale pioneering efforts to develop this approach exist (Kravchenko et al. 2012; Landsman et al. 2009; Gorham et al. 2010). A modular experiment based on radio Cherenkov emission, the Askaryan Radio Array (ARA), was initiated four years ago. The current stage includes two functioning stations out of the planned 37. The complete detector, ARA37, would cover a hexagonal grid of ∼100 km², and is designed to ultimately accumulate hundreds of cosmogenic neutrinos (Karle et al. 2014). The Antarctic Impulsive Transient Antenna (ANITA) experiment, based on a balloon flying over the Antarctic to detect neutrino hits using radio Cherenkov radiation, has already accumulated data in three flights. Neither experiment has yet detected a high energy neutrino signal.
In this work, we exploit the optical and UV data from the Swift/UVOT to infer the neutrino flux from each GRB, and estimate the probability that these neutrinos would be detected by future large scale observatories.
In Sec. 2 we introduce the selected GRB sample. In Sec. 3 we describe the model and assess its parameters. In Sec. 4 we present the resulting neutrino predictions and discuss their consequences for the model.
UVOT SAMPLE
The present UVOT sample includes long GRBs (2 < T_90 < 700 s) detected by Swift from March 2005 to November 2014. We take only UVOT detections that started less than 200 seconds after the BAT trigger, with UVOT exposures T_exp ≤ 300 s. We use only GRBs with known redshifts, and exclude GRBs for which only upper limits are provided. For each GRB we use the filter effective area and measured magnitude to calculate the photon count. We calculate the flux by dividing the photon count by the estimated duration of the reverse shock, or by the total exposure time, whichever is shorter. We also use the BAT fluence (in the 15-150 keV band) and the GRB duration T_90, the time over which 90% of the burst's prompt fluence is accumulated. All data are taken from the Goddard Space Flight Center website 1.
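The magnitude-to-flux bookkeeping described above can be sketched as follows. This is an illustrative sketch, not the authors' pipeline: it assumes AB magnitudes and a top-hat filter band, whereas the actual analysis integrates over the measured UVOT effective-area curves; the function names are hypothetical.

```python
import math

H_PLANCK = 6.626e-27  # erg s

def ab_mag_to_flux_density(m_ab):
    """AB magnitude -> flux density f_nu in erg s^-1 cm^-2 Hz^-1."""
    return 10 ** (-(m_ab + 48.6) / 2.5)

def band_photon_flux(m_ab, nu_center_hz, bandwidth_hz):
    """Approximate photon flux (photons s^-1 cm^-2) through a top-hat band.

    A real analysis would integrate f_nu against the measured filter
    effective-area curve; the top-hat band here is a simplification.
    """
    f_nu = ab_mag_to_flux_density(m_ab)
    return f_nu * bandwidth_hz / (H_PLANCK * nu_center_hz)

def mean_flux(photon_count, t_exp_s, t_rs_s):
    """Divide the photon count by the shorter of the reverse-shock
    duration and the exposure time, as described in the text."""
    return photon_count / min(t_exp_s, t_rs_s)
```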
Our sample includes 107 GRBs (out of ∼900 Swift bursts). The redshift distribution of the present sample is essentially identical to that of the full Swift sample. The two distributions are plotted in Figure 1, along with the mean and median values for the full Swift sample and the chosen subsample. The requirements for early detection and for a redshift measurement reflect observational limitations, and do not bias the sample beyond the Swift field-of-view and sensitivity limitations.
The mean (median) BAT fluence of the present sample is 75% (55%) that of the full sample; see Figure 2. Since our neutrino flux estimate scales with the GRB UV luminosity, the moderate bias towards high-luminosity GRBs in our sample increases the expected mean neutrino flux, so our result should be regarded as an upper limit on the neutrino luminosity of the full GRB population.

MODEL PARAMETERS

The BAT-measured fluence F_BAT can be converted into an isotropic-equivalent γ-ray energy at the source, E_γ = 4π d_L² F_BAT / (1 + z), based on the luminosity distance d_L and the measured redshift z. The luminosity distance is calculated using the cosmological parameters (Lahav & Liddle 2014): Ω_m = 0.3, Ω_Λ = 0.7, and H_0 = 73.8 km s⁻¹ Mpc⁻¹. We adopt the hypothesis of the present model that the total γ-ray energy equals the electron energy E_e, since in the prompt emission phase the electrons cool much faster than the dynamical timescale. We define ξ_e = E_e/(E_p + E_e) to be the fraction of the total energy carried by the electrons, where E_p and E_e represent the total energies in protons and in electrons, respectively. E_p includes all proton energies from ε_p,min = Γ m_p c² up to ε_p,max = 10²² eV. The proton spectrum is assumed to follow a power law with slope α = −2. We assume all GRBs have the same ξ_e ≈ 0.1, so that the total proton energy is determined directly by the BAT fluence measurement, E_p = 9 E_γ. Assuming a single ξ_e value is a simplification; it yields a sample mean of E_p = 10⁵³ erg (for ε_p ≥ 10¹⁹ eV), which allows GRBs to be the source of high-energy cosmic rays (Waxman 1995).

Fig. 3. Two example GRB synchrotron spectra. The slow rise, peak energy ε_γm, and cooling break ε_γc are indicated. The very bright GRB990123 has a peak luminosity only ∼2 times stronger than the example GRB050319 from our sample, but a much lower peak energy ε_γm, and thus a much lower UVOT-band fluence.
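The conversion from BAT fluence to isotropic-equivalent energy, with the luminosity distance computed for the flat ΛCDM parameters quoted above, can be sketched in a few lines. This is a minimal pure-Python illustration; the function names are my own.

```python
import math

OMEGA_M, OMEGA_L = 0.3, 0.7
H0 = 73.8           # km s^-1 Mpc^-1
C_KM_S = 2.998e5    # speed of light, km/s
MPC_CM = 3.086e24   # cm per Mpc

def luminosity_distance_cm(z, n=10000):
    """Flat-LCDM luminosity distance via trapezoidal integration of
    d_C = (c/H0) * int_0^z dz' / sqrt(Om*(1+z')^3 + OL), d_L = (1+z)*d_C."""
    dz = z / n
    integrand = lambda zp: 1.0 / math.sqrt(OMEGA_M * (1 + zp) ** 3 + OMEGA_L)
    s = 0.5 * (integrand(0.0) + integrand(z))
    s += sum(integrand(i * dz) for i in range(1, n))
    d_c_mpc = (C_KM_S / H0) * s * dz
    return (1 + z) * d_c_mpc * MPC_CM

def e_iso_erg(fluence_erg_cm2, z):
    """Isotropic-equivalent energy: E = 4*pi*d_L^2 * F / (1+z)."""
    d_l = luminosity_distance_cm(z)
    return 4 * math.pi * d_l ** 2 * fluence_erg_cm2 / (1 + z)
```

For a burst at z = 1 with a BAT fluence of 10⁻⁵ erg cm⁻², this gives E_γ of a few × 10⁵² erg, in line with the sample mean quoted in the text.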
Both the FS and the RS can contribute to the early (t ≤ 200 s) optical-UV afterglow; it is not clear, however, whether the FS can accelerate protons to energies that could yield ultra-high-energy neutrinos. In any case, the simplifying assumption that the optical-UV flux is due entirely to the RS gives only an upper limit on the neutrino flux.
The photon spectrum can be described as a broken power law, as expected for synchrotron emission (see Figure 3). The energy at which this emission peaks is ε_γm ≈ Γ γ² ħ e B / (m_e c), where γ is the typical electron Lorentz factor in the plasma, B is the magnetic field, and the general expression is boosted by the jet Lorentz factor Γ.
The typical values ξ_e = 0.1 ξ_e,−1 and ξ_B = 0.01 ξ_B,−2 have been used, as well as the isotropic-equivalent energy E_iso = 10⁵³ E_53 erg, the typical RS time T = 10 T_1 s, and the ISM density n_0 in cm⁻³. The Lorentz factor of the unshocked plasma is taken to be Γ_i ∼ 300.
The photon spectrum follows an approximate power law dN/dε ∝ ε α with index α = −2/3 up to the peak energy, beyond which the photon spectrum drops as α = −1.5.
At a break energy ε_γc ≈ 300 eV the spectrum steepens to α = −2, since very energetic electrons cool faster than the dynamical timescale.
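The broken power-law photon spectrum described in the last two paragraphs (index −2/3 below the peak, −1.5 between the peak and the cooling break, −2 above it) can be written down directly. This sketch uses an arbitrary normalization and hypothetical names.

```python
def photon_spectrum(eps, eps_m, eps_c, norm=1.0):
    """Synchrotron photon number spectrum dN/de as a broken power law:
    slope -2/3 below the peak eps_m, -1.5 between eps_m and the cooling
    break eps_c, and -2 above eps_c, with the segments matched
    continuously at both breaks. Energies in consistent units (e.g. eV)."""
    if eps <= eps_m:
        return norm * (eps / eps_m) ** (-2.0 / 3.0)
    if eps <= eps_c:
        return norm * (eps / eps_m) ** (-1.5)
    # above the cooling break: continue from the value at eps_c
    return norm * (eps_c / eps_m) ** (-1.5) * (eps / eps_c) ** (-2.0)
```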
The luminosity density at the synchrotron peak produced by a total of N_e electrons is L_γm ∝ Γ_i N_e B (the single-electron peak synchrotron power summed over the electrons), which again is the general expression boosted by Γ_i. The specific luminosity depends on the total energy of the burst and on model parameters that cannot vary much between GRBs in the sample, so the luminosity at the peak changes only by a factor of a few. The energy at which the flux peaks, however, is treated as a free parameter, and may take very different values for different GRBs.
With UVOT we measure the total energy in the band, so the measured UV luminosity depends on the position of the peak energy (Eq. 2), which we find by extrapolating the spectrum from the UVOT band to lower energies. For a given peak energy, the extinction-corrected model flux is integrated over the UVOT effective-area curve and compared with the measurement. For each GRB the peak energy is adjusted so that the expected and measured fluxes coincide. The resulting ε^ob_γm values for the GRBs in our sample are shown in Figure 4.
NEUTRINOS FROM THE REVERSE SHOCK
Ultra-high-energy neutrinos (UHENu's) may be produced in GRBs through photo-proton interactions that produce charged pions, which in turn decay and emit neutrinos. The fraction of proton energy transferred to pions depends on the availability of photons at the right energy to produce pions, e.g. through the ∆ resonance (Waxman & Bahcall 2000). In each interaction a roughly constant fraction (∼20%) of the proton energy is transferred to the pion, which decays into four particles (three neutrinos and a positron), each carrying ∼5% of the original proton energy.
The position of the synchrotron peak ε_γm determines the pion production efficiency f_π at the relevant proton energy (Waxman & Bahcall 2000), here for protons at ε_p = 10²⁰ ε_p,20 eV, taking a Lorentz factor Γ_s ∼ 250 for the shocked plasma. The efficiency scales linearly with L_γm, but has a very strong dependence on the Lorentz factor. For the typical luminosity density and Lorentz factor of the present sample, L_γm = 10⁵⁸ s⁻¹ and Γ_s = 250, f_π is approximately 10⁻⁴, which suppresses the pion and neutrino yields considerably.
Photons above ε_γc follow a steeper power law, giving a steeper dependence f_π ∝ ε_p for the relevant proton energies. The neutrino spectrum can therefore be described as a broken power law, following the baseline proton spectrum modulated by the photon density at each energy. The neutrino break energy is ε_νc = 2 × 10¹⁸ eV, corresponding to the photon break energy ε_γc = 300 eV, and scales inversely with it. Using these relations, and based on the observed GRB luminosities and redshifts, we calculate the expected neutrino flux for each burst. We estimate the expected quasi-diffuse neutrino flux by multiplying the mean GRB flux by the total number of GRBs over the whole sky per year (1000 yr⁻¹) and dividing by 4π sr (Figure 5, thick black line). When compared with the ANITA sensitivity from Gorham et al. 2010 (pink, full crosses) and the ARA sensitivity from Karle et al. 2014 (blue, empty circles), it is clear that even at high energies this diffuse neutrino flux is at least four orders of magnitude too weak to be detected by any current or planned detector. Changes to the kinematic parameters, e.g. the plasma Lorentz factor, make little difference to the overall neutrino flux, and even for very favorable choices the flux remains too low to be detected.
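The quasi-diffuse scaling used above (mean per-burst fluence × 1000 GRBs yr⁻¹ ÷ 4π sr) is simple bookkeeping; a minimal sketch, with units left to the caller:

```python
import math

SECONDS_PER_YEAR = 3.156e7

def quasi_diffuse_flux(mean_fluence_per_grb, rate_per_year=1000.0):
    """Convert the mean per-burst neutrino fluence (e.g. GeV cm^-2)
    into a quasi-diffuse flux (e.g. GeV cm^-2 s^-1 sr^-1) by multiplying
    by the all-sky GRB rate and dividing by 4*pi steradians."""
    return mean_fluence_per_grb * rate_per_year / (SECONDS_PER_YEAR * 4 * math.pi)
```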
In Figure 5 we also plot, as the red dotted line, the diffuse flux estimated in Waxman & Bahcall 2000. That estimate assumed all GRBs are as bright as the single burst GRB990123, which had a peak luminosity of L_γm = 10⁶⁰ s⁻¹ in the optical band, an order of magnitude higher than the luminosities (at equivalent energies) in the UVOT sample. It therefore cannot represent the sample in this work, or the GRB population at large.
Radio-frequency high-energy neutrino detectors have fairly similar sensitivity to all flavors of neutrinos; hence, neutrino oscillations do not dramatically affect the detection-rate estimates. We estimate the expected detection rate for the combined contributions of neutrinos and anti-neutrinos of all flavors. To obtain the number of neutrinos to be measured on Earth, we fold the model neutrino fluence spectrum of a single GRB, dN_ν/dε_ν/dA (from Sec. 4), with the energy-dependent ARA37 effective area A_eff(ε_ν). The effective area, shown in Figure 6, is the product of the ARA effective volume (Karle A. & ARA collaboration 2013), the ice density, and the cross section for neutrino absorption by a nucleon in the ice (Connolly et al. 2011). Below ε_ν ∼ 10¹⁶ eV the efficiency for radio detection of neutrinos drops very rapidly, while above ε_ν ∼ 10²⁰ eV the detector trigger saturates, and the effective area rises only through the logarithmic increase in cross section. The number of neutrinos we expect to detect for an average GRB in our sample is 8.4 × 10⁻⁷. For 1000 GRBs a year (full sky), this implies a total neutrino detection rate of 6.7 × 10⁻⁵ yr⁻¹; we therefore conclude that early afterglow neutrinos will not be detectable even with the next generation of neutrino observatories.
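The folding of the per-burst neutrino spectrum with the energy-dependent effective area can be sketched as a log-space quadrature. Here `fluence_spectrum` and `a_eff` are caller-supplied placeholder functions (not published ARA parameterizations), and the integration bounds reflect the sensitivity window described in the text.

```python
import math

def expected_events(fluence_spectrum, a_eff, log10_e_min=16.0,
                    log10_e_max=21.0, n=500):
    """N = integral of dN/dE (neutrinos per energy per area, per burst)
    times A_eff(E) (area) over energy, evaluated on a log-uniform grid
    with the midpoint rule."""
    total = 0.0
    step = (log10_e_max - log10_e_min) / n
    for i in range(n):
        log_e = log10_e_min + (i + 0.5) * step
        e = 10 ** log_e
        de = e * math.log(10) * step  # dE for a log-uniform grid
        total += fluence_spectrum(e) * a_eff(e) * de
    return total
```

For an E⁻² fluence spectrum and a linearly rising effective area the integrand is constant per decade, so the result reduces to ln(E_max/E_min), which makes a convenient sanity check.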
Looking at extremely bright GRBs both before and during the Swift era, some bursts show substantial early-time emission that can be attributed to the RS (Gao & Mészáros 2015). The peak synchrotron energy and the neutrino flux can be calculated for these bursts with the same procedure used for the entire sample (supplementing unavailable data with reasonable estimates). For GRB990123 we use the measured magnitude M = 9 and redshift z = 1.6 (Gisler et al. 1999) to recover parameter values similar to those presented in Waxman & Bahcall 2000. The number of neutrinos expected in ARA37 from this single GRB would be N_ν ∼ 2 × 10⁻⁴. If all GRBs had similar parameters, the number of detections per year would be about N_GRBs/4π ≈ 100 times this number, still below the detection threshold for ARA.
For GRB080319B, among the brightest GRBs recorded by Swift, we can only estimate the true magnitude, since UVOT saturated at M = 13.9 in the white filter. At this value the expected number of detections, N_ν ≈ 5 × 10⁻⁶, is not exceptional. Using the brighter magnitude estimated for this burst, which is thought to have peaked at M ∼ 5.3 (Racusin et al. 2008), the number of neutrino detections would be 0.1 ≲ N_ν ≲ 10, depending critically on the values chosen for the prompt ξ_e and the maximum proton energy. Clearly such a burst is not representative, but had it occurred during the operation period of a large-area neutrino detector it may well have been detected.
The data collected by UVOT for GRB130427A are only available starting at T = 358 s, making it ineligible for our sample. It is, however, a very bright GRB, and we can assume its brightness at t ∼ 100 s was similar to the first measurements. For the magnitude of the first measurement in the V filter, M = 12.1 (Maselli et al. 2013), we get N_ν ≈ 5 × 10⁻⁶ expected neutrino detections.
In the present sample, and within the assumptions of the model, we find that the UVOT fluence is a good predictor of the expected neutrino rate. Although the neutrino flux in the model scales with the total GRB energy, estimated from the BAT γ-ray fluence, it is strongly modulated by f_π, which is determined by the intensity of the optical-UV photons available for photon-proton interactions. Figure 7 shows the strong correlation between the fluence measured in UVOT and the total number of expected neutrinos.
DISCUSSION AND CONCLUSIONS
In this work, we calculated the expected ultra high energy neutrino flux from the reverse shock in the early GRB afterglow. We use the observed Swift/UVOT flux as a proxy of the photon energy content in the reverse shock, which is the target for the photon-meson production of neutrinos. The redshift and BAT fluence distributions of the present UVOT sample are representative of the full Swift GRB sample, with only a slight bias in favor of brighter events in the BAT fluence distribution.
Optical-UV measurements taken within the time frame of the early afterglow (40 ≲ t ≲ 200 s) are much lower than anticipated by the RS optical-flash model, suggesting that the peak of the synchrotron emission for most GRBs in the sample lies at 10⁻⁶ ≲ ε_γm ≲ 10⁻¹ eV, or 10 µm ≲ λ_γm ≲ 1 m. Measurements in this band, or even in the infrared, taken within the same time window could confirm or rule out this scenario. Alternatively, the RS may occur at a much earlier time (i.e. ≲ 40 s), so that UVOT measurements cannot probe the temporal peak of the emission. In this case the intensity and peak energy of the RS emission may be much higher, as suggested in Waxman & Bahcall 2000. Further measurements using faster-response detectors, or a detection of UHENu's ≲ 40 seconds after the burst, could confirm this hypothesis. The low intensity may also indicate that the reverse-shock acceleration paradigm does not explain the early afterglow emission at all (Murase 2007). Using the UVOT data presented here we cannot rule out any of these possibilities. A detection of a UHENu, together with a measurement of its energy and time delay, would immediately differentiate between these possibilities.
The neutrino fluxes obtained from the low peak-energy RS result are approximately four orders of magnitude below the detection sensitivity of present and future high-energy neutrino telescopes. This predicted flux from the reverse shock is much lower than that expected from (100 keV) photon-meson production in the prompt phase (Guetta et al. 2004; Yacobi et al. 2014). Moreover, two aspects of our analysis may cause an overestimate of the neutrino flux; hence, the present estimate provides an upper limit, which even in the most optimistic case is well below the detection threshold.
First, we assume that all of the optical-UV emission is due to the reverse shock, and that these photons occupy the same region where the protons are accelerated. This implies that the efficiency for pion production is maximal. If part of the UV flux comes from the forward shock, the expected neutrino flux would be even lower than our estimate. Second, the present sample is somewhat biased towards high-luminosity GRBs, since we miss the weakest UV sources; on average, the present sample is 80% brighter than the full sample. Hence, the average neutrino flux from the full GRB population would be lower than estimated here by a factor of ∼0.8.
We note that the spectrum of neutrinos from the prompt emission phase would peak at ∼10¹⁵ eV, while future radio-based neutrino detectors (e.g., ARA) will be most sensitive above 10¹⁷ eV. For a high neutrino flux at these energies, ∼10¹⁹ eV protons would need to interact with the low-energy (keV) tail of the prompt emission at a sufficiently high rate. The fact that prompt neutrinos from GRBs have not yet been detected (Aartsen et al. 2015), together with the low afterglow fluxes predicted here, implies that ARA may not be optimal for GRB neutrino detection. This conclusion does not change the fact that radio-based neutrino observatories remain well suited for detecting cosmogenic neutrinos.
Screening of transporters to improve xylodextrin utilization in the yeast Saccharomyces cerevisiae
The economic production of cellulosic biofuel requires efficient and full utilization of all abundant carbohydrates naturally released from plant biomass by enzyme cocktails. Recently, we reconstituted the Neurospora crassa xylodextrin transport and consumption system in Saccharomyces cerevisiae, enabling growth of yeast on xylodextrins aerobically. However, the consumption rate of xylodextrin requires improvement for industrial applications, including consumption in anaerobic conditions. As a first step in this improvement, we report analysis of orthologues of the N. crassa transporters CDT-1 and CDT-2. Transporter ST16 from Trichoderma virens enables faster aerobic growth of S. cerevisiae on xylodextrins compared to CDT-2. ST16 is a xylodextrin-specific transporter, and the xylobiose transport activity of ST16 is not inhibited by cellobiose. Other transporters identified in the screen also enable growth on xylodextrins including xylotriose. Taken together, these results indicate that multiple transporters might prove useful to improve xylodextrin utilization in S. cerevisiae. Efforts to use directed evolution to improve ST16 from a chromosomally-integrated copy were not successful, due to background growth of yeast on other carbon sources present in the selection medium. Future experiments will require increasing the baseline growth rate of the yeast population on xylodextrins, to ensure that the selective pressure exerted on xylodextrin transport can lead to isolation of improved xylodextrin transporters.
Introduction
Cellulosic biofuel production from plant biomass requires efficient use of all abundant carbohydrates in the plant cell wall [1,2]. To make the fermentation process economically feasible, engineered yeast should be able to consume simultaneously the mixture of sugars naturally released by enzyme cocktails, including hexose and pentose sugars [3][4][5]. Xylodextrins (XDs), such as xylobiose and xylotriose, are oligomers of β-1,4-linked xylose. They are derived from hemicellulose, one of the major forms of biomass in lignocellulose, and are hydrolyzed to xylose by β-xylosidase. The first example of ethanol production from xylan was reported in 2004, wherein xylanase and β-xylosidase were displayed on the cell surface of S. cerevisiae expressing a xylose consumption pathway [6]. However, the xylan-degrading and xylose-utilization abilities of this recombinant S. cerevisiae strain require further optimization to be industrially useful. Recently, a XD utilization pathway from N. crassa, which requires the transporter CDT-2 along with two intracellular β-xylosidases, GH43-2 and GH43-7, was identified and subsequently engineered into S. cerevisiae, enabling the yeast to grow aerobically on XD, or to co-ferment XDs with xylose or other hexose sugars [7]. However, the consumption rate of XDs remains quite slow and thus needs to be improved for future industrial applications. The transporter CDT-2 belongs to the Major Facilitator Superfamily (MFS), one of the largest and most ubiquitous secondary transporter families [8]. Its members exist in all species from bacteria to mammals, and they range in size from 400 to 600 amino acids, organized into 12 transmembrane α-helices [9]. CDT-2 belongs to the hexose family of MFS transporters, which includes yeast hexose transporters, human glucose transporters, and xylose transporters [10][11][12][13].
For industrial applications, naturally occurring transporters are generally not optimal without further engineering. Protein engineering, including rational design and directed evolution approaches, has been widely used to improve the performance of a wide variety of enzymes and pathways. However, engineering membrane proteins such as MFS transporters has proven to be challenging [14][15][16]. Directed evolution is an important means for carrying out protein engineering [17], but requires developing a high-throughout screening system [18,19]. In 2014, Ryan et al. developed a new method for performing directed evolution experiments in yeast using the CRISPR-Cas9 system [20]. Integrating linear DNA into the genome by homologous recombination mediated CRISPR-Cas9 overcame high levels of copy number variation seen with plasmid-based expression, and allowed directed evolution of CDT-1 to isolate an improved cellodextrin transporter [20].
To improve xylodextrin utilization in S. cerevisiae, we sought to identify potential XD transporters by characterizing a library of CDT-1 and CDT-2 orthologues. We analyzed cellular localization, transport activity and aerobic growth profiles. Moreover, with one of the best performing transporters identified in the screen, we attempted to use CRISPR-Cas9 mediated directed evolution to improve XD consumption in yeast.
Results
To identify XD transporters that could be used to improve utilization of xylodextrins, we carried out a screening of codon-optimized CDT-1 and CDT-2 transporter orthologues, named STX (X spanning from 1 to 17) (S1 Table). Phylogenetic analysis suggested that orthologues of CDT-2 are widely distributed in the fungal kingdom, indicating that many fungi are able to consume xylodextrins derived from plant cell walls (Fig 1). These orthologues are 28-67% identical in amino acid sequence to CDT-2.
We first constructed plasmids expressing ST1-ST17 fused to enhanced GFP at the C-terminus, using the strong TDH3 promoter and the CYC1 terminator, and expressed the transporters in S. cerevisiae strain D452-2. Using epifluorescence microscopy, we found that most transporters localize to the plasma membrane (Fig 2). Transporters ST5, ST8, ST10 and ST14 were dispersed in the cytoplasm or vacuole (Fig 2), suggesting that these transporters do not fold or traffic properly in S. cerevisiae, and they were not analyzed further.
Xylodextrins, including xylobiose and xylotriose, are β-1,4-linked xylose oligomers with different degrees of polymerization. As a direct test of transport activity for cellobiose (G2), xylobiose (X2) and xylotriose (X3), we chose the transporters that localized to the plasma membrane (Fig 2) and conducted a yeast cell-based sugar uptake assay. Notably, we found that ST3, ST15 and ST16 are XD-specific transporters, while the other transporters showing XD transport activity were also able to transport cellobiose, although the overall activity varied (Fig 3). The uptake activity of CDT-2 for both X2 and X3 was more than ten times that of ST9, ST11 and ST12 (Fig 3); these transporters were therefore not studied further. The sugar preferences of the ST transporters are summarized on the phylogenetic tree in S1 Fig. Many transporters are inhibited by the presence of related sugars in the extracellular medium. For example, xylose transporters are inhibited by the presence of glucose, which has led to many efforts to relieve this inhibition [21,22]. We had previously observed that cellobiose inhibits xylobiose transport by CDT-2 [7]. We therefore tested xylobiose transport activity in the presence of increasing molar concentrations of cellobiose, up to 10 times the concentration of X2. Interestingly, we found that xylobiose uptake by ST3, ST15 and ST16 was not inhibited by cellobiose (Fig 4), making them good candidates for co-fermentation with glucose or cellodextrins from lignocellulosic biomass under a number of pretreatment scenarios. For the other transporters, however, less X2 was consumed per OD as increasing molar concentrations of G2 were added to the media, as observed for CDT-2 (Fig 4).
To further correlate the sugar uptake activity with yeast growth, we engineered the xylose-consuming S. cerevisiae strain SR8 [23] using CRISPRm (S2 Fig), by inserting GH43-2 and GH43-7 at the TRP1 and LYP1 loci, respectively. The engineered SR8 strain was named SR8A. SR8A cells carrying plasmids expressing the different transporters were cultured in oMM media under aerobic conditions for 96 hours with xylodextrin as the only carbon source. ST2 and ST16 enabled S. cerevisiae to grow more rapidly on xylodextrin (Fig 5); the areas under the growth curve (AUC) for ST2 and ST16 were 1.16 and 1.26 times that of CDT-2, respectively. Although ST3 is not inhibited by the presence of cellobiose (Fig 4), it did not support better growth of S. cerevisiae than ST2 and ST16 (Fig 5), and was not analyzed further. We also tested the anaerobic growth of strains carrying the ST2 and ST16 transporters on XD plus sucrose, but saw no significant improvement over CDT-2 under anaerobic conditions.

Fig 1. The species from which each transporter was identified is indicated; NCBI Gene IDs and protein sequences are included in S1 Table.

Taken together, three potentially useful xylodextrin transporters were identified in the screen, each with different advantages and disadvantages (S1 Table). Of these, ST16 is a XD-specific transporter, shows faster aerobic growth, and its xylobiose transport activity is not inhibited by cellobiose. We therefore chose ST16, which is derived from Trichoderma virens, as the starting point for setting up a directed evolution experiment to improve XD consumption. Based on previously published experiments, in which XD consumption was rapidly turned on and off in anaerobic conditions [7], we hypothesized that XD transport limits the consumption rate.
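The area-under-the-curve comparison quoted above (e.g. ST16 at 1.26× CDT-2) amounts to a trapezoidal integral of the OD600 time series over a common time window; a minimal sketch, assuming (possibly unevenly spaced) Bioscreen readings:

```python
def growth_auc(times_h, od_values):
    """Trapezoidal area under an OD600 growth curve.

    times_h: time points in hours (increasing); od_values: OD600 readings.
    AUC ratios between strains grown over the same window give the kind
    of comparison reported in the text.
    """
    if len(times_h) != len(od_values):
        raise ValueError("times and OD readings must align")
    auc = 0.0
    for i in range(1, len(times_h)):
        dt = times_h[i] - times_h[i - 1]
        auc += 0.5 * (od_values[i] + od_values[i - 1]) * dt
    return auc
```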
Notably, when we inserted the transporter ST16, under control of the PGK1 promoter and TPS3 terminator, into the yeast genome at the LEU2 locus, there was a dramatic decrease in growth rate compared with overexpression of ST16 from a plasmid. Indeed, the strain with the integrated ST16 grew almost as slowly as the control (Fig 6A). GFP fluorescence emission spectroscopy verified that ST16 was expressed from the chromosome, but at lower levels than from a plasmid (Fig 6B).
To further test whether transport is limiting in XD-utilizing strains, we compared the aerobic growth of the SR8 strain carrying a plasmid expressing CDT-2, GH43-2 and GH43-7 with that of the SR8A strain, which has GH43-2 and GH43-7 expressed from chromosomal copies and carries a plasmid expressing only the CDT-2 transporter. We reasoned that integrating the genes for GH43-2 and GH43-7 into the chromosome would reduce protein expression relative to plasmid overexpression, because of the much higher copy number of plasmids [20,24]. As expected if XD transport is the limiting factor for XD utilization, expressing GH43-2 and GH43-7 from the chromosome did not affect aerobic growth on xylodextrin (S3 Fig). These results further support that the XD transporter is the rate-limiting step of the XD utilization pathway. Next, we attempted to develop a CRISPR-Cas9-mediated directed evolution selection [20] using growth in liquid media followed by screening based on colony size. We first used error-prone PCR to generate a library of 2 × 10³ ST16 variants and transformed the library into the SR8A strain. Pooled strains from YPAD+G418 plates were grown aerobically in liquid synthetic media containing XD for 2 days and plated on synthetic media containing XD to enrich for functional ST16 mutants. Surprisingly, after 4 days of aerobic growth on plates, all of the colonies had similar sizes, and we were not able to identify any individual colony with faster aerobic growth. Furthermore, after conducting a serial dilution of the xylodextrin concentration in the synthetic plates, we found that the major roadblock with this selection strategy is background growth of the strains on contaminating xylose and amino acids (S4 Fig), making it difficult to identify improved ST16 mutants.
Thus, the yeast strain was able to grow on alternative carbon sources rather than XD, circumventing the selective pressure of XD-dependent growth.
Discussion
CDT-1 and CDT-2 are two important cellodextrin transporters used by N. crassa in cellulose degradation [25], of which only CDT-2 is also involved in hemicellulose utilization [7,26]. Previously, the discovery of the xylodextrin consumption pathway, consisting of the transporter CDT-2 along with the two β-xylosidases GH43-2 and GH43-7, provided new modes for utilizing xylose derived from hemicellulose [7]. However, the XD pathway as isolated from N. crassa requires significant improvement for future industrial use. In our study, we confirmed that the XD transporter is the rate-limiting step of the XD utilization pathway (Fig 6A and S3 Fig). By characterizing a library of CDT-1 and CDT-2 transporter orthologues, we identified three transporters from other fungi that could serve as targets for exploring faster xylodextrin utilization (S2 Table). Of these, the transporter ST16 from Trichoderma virens has advantages over CDT-2, including that it is a XD-specific transporter with good membrane localization, and that it enables faster aerobic growth. Furthermore, the xylobiose transport activity of ST16 is not inhibited by the presence of cellobiose. Given these superior properties, we attempted to develop a CRISPR-Cas9-mediated directed evolution experiment, using ST16 as the starting point, to screen for mutants that further improve xylodextrin utilization efficiency. We used liquid medium containing XD for selection, followed by screening for larger colonies on XD-containing plates. Through mutagenesis and chromosomal integration, we screened 2 × 10³ ST16 mutants. However, due to the background growth of these strains on xylose and amino acids, it was difficult to select large colonies, making it hard to identify improved ST16 mutants.
This background growth is problematic because of the slow growth of these strains on XD, and contrasts with the higher initial growth rates on cellobiose in the directed evolution of cellobiose transporters [20]. To overcome the remaining bottlenecks of the current selection strategy, it will be necessary to remove contaminating xylose from the XD preparations and to minimize the amino acid concentrations in the media. In addition, further experiments are needed to increase the expression level of chromosomally integrated ST16, as we found that the PGK1 promoter was not sufficient to drive growth above background levels on XD-containing plates. Although not ideal, it may be necessary to first carry out directed evolution using ST16 expressed from a plasmid, to increase ST16 expression levels. Other strategies might also be explored for directed evolution of XD transporters, including starting with ST2 or ST15. Alternatively, it might prove useful to perform a directed evolution experiment on a cellodextrin transporter such as CDT-2 to improve cellobiose uptake, and then test whether the isolated mutations also facilitate faster XD consumption.
Although neither ST2 nor ST16 improved anaerobic utilization of XD compared with CDT-2, the presence of xylose or other simple sugars could be used to initiate XD consumption, according to our previous results [7]. Based on the rapid turn-off and turn-on of XD consumption in anaerobic conditions [7], we suspect that the XD transporters are internalized in the absence of hexose or xylose sugars [27]. How CDT-2 and ST16 are regulated in xylodextrin-only media currently remains unknown. Systems-level experiments such as transcription and ribosome profiling could also be used to better understand how the yeast strain senses xylodextrins under anaerobic conditions. Finally, the cofactor imbalance of the XR/XDH pathway may lead to the accumulation of xylose in the culture supernatant [28], indicating that the metabolic sensing and xylose assimilation pathways might require additional tuning for optimal xylodextrin fermentation.
Strains and plasmids
The N. crassa transporter genes (cdt-1 and cdt-2) and the codon-optimized versions of all ST transporters were cloned into the pRS316 plasmid (CEN URA3), under the control of the S. cerevisiae TDH3 promoter and the CYC1 terminator, using the In-Fusion HD Cloning Kit (Clontech Laboratories, Inc., Mountain View, CA). The cleavage site for the HRV 3C protease followed by eGFP was fused to the C-terminus. cdt-1 and cdt-2 were PCR-amplified from cDNA synthesized from mRNA isolated from N. crassa (FGSC 2489) grown on minimal media plus Avicel (microcrystalline cellulose) as the sole carbon source [25]. Atum (formerly DNA 2.0) performed gene synthesis and codon-optimization for all ST transporters. S. cerevisiae strain D452-2 [29] was used for XD and cellobiose transport studies. Codon-optimized gh1-1 was cloned into the pRS315 plasmid (CEN LEU) and was expressed under the control of the PGK1 promoter. Details of the codon optimization process for gh1-1 are described in [30].
S. cerevisiae strain SR8 [23] was used for strain engineering. We used CRISPR-Cas9 genome editing to insert GH43-2 and GH43-7 into the TRP1 and LYP1 loci, respectively, and the engineered SR8 strain was named SR8A. GH43-2 was under the control of the TDH3 promoter and the ADH1 terminator, while GH43-7 was driven by the CCW12 promoter and the CYC1 terminator.
For directed evolution experiments, strains were grown at 30˚C in either rich (yeast extract-peptone [YP]) or synthetic (S) medium containing 1% xylodextrin with appropriate nutrient supplements to support growth and with certain nutrients omitted to maintain selection for plasmids. Xylodextrin was purchased from Cascade Analytical Reagents and Biochemicals. The composition of the XD samples was analyzed by Dionex ICS-3000 HPLC (ThermoFisher) as described below. Synthetic media contained 2 g/L yeast nitrogen base without amino acids and ammonium sulfate, 5 g/L ammonium sulfate, 1 g/L CSM.
Aerobic growth assays
All growth assays were performed using the Bioscreen C (Oy Growth Curves Ab Ltd., Helsinki, Finland) with biological triplicates or quadruplicates. Single colonies of S. cerevisiae strains transformed with pRS316 containing the MFS transporter of interest were grown in oMM-Ura plus 2% glucose to late-exponential phase at 30˚C in 24-well plates. Cultures were pelleted at 4000 rpm, the spent media supernatant was discarded, and cells were resuspended in H2O. The OD was measured to calculate the inoculum volume needed for 200 μL cultures at an initial OD ≈ 0.1 or 0.2 in Bioscreen plates in oMM media. The OD at 600 nm was measured at 30 min intervals for 48-96 h at 30˚C.
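The inoculum volume for each 200 μL well follows from the usual C1V1 = C2V2 dilution of the washed cell resuspension; a minimal sketch (the stock OD600 of 8.0 is an invented example value, not from the paper):

```python
def inoculum_volume_ul(stock_od: float, target_od: float, culture_ul: float) -> float:
    """Volume of cell stock giving a culture of `culture_ul` microliters at
    `target_od` (C1 * V1 = C2 * V2); the remainder is made up with oMM media."""
    if stock_od <= target_od:
        raise ValueError("stock must be denser than the target culture")
    return target_od * culture_ul / stock_od

# A hypothetical washed resuspension at OD600 = 8.0, diluted to OD 0.1 in a 200 uL well:
v = inoculum_volume_ul(8.0, 0.1, 200.0)
print(f"{v:.1f} uL cells + {200.0 - v:.1f} uL media")  # 2.5 uL cells + 197.5 uL media
```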
Epi-fluorescence microscopy

D452-2 cells expressing the transporters were grown in oMM-Ura plus 2% glucose media to mid-exponential phase at 30˚C. The cultures were centrifuged, spotted onto glass slides and examined on a Leica DM 5000B Epi-fluorescence microscope at 100x DIC (Leica, Germany). Transporters were visualized using the L5 filter cube; images were captured using the Leica DFC 490 camera and analyzed with the accompanying microscope software.
Yeast-cell based sugar uptake assay
Yeast strain D452-2 cells transformed with pRS316-CDT1, -CDT2 or pRS316-ST were grown to mid-exponential phase. Cells were harvested and washed twice with Transport Assay Buffer (5 mM MES, 100 mM NaCl, pH 6.0) and resuspended to a final OD 600 of 40. 500 μL of cell resuspension was quickly mixed with an equal volume of Transport Assay Buffer containing 400 μM of the respective sugar (final sugar concentration was approximately 200 μM). For the initial time point (t = 0 sec), an aliquot was removed and centrifuged for 1 min at 4˚C at high speed to pellet the cells and the supernatant was removed. The remaining cell resuspension was incubated at 30˚C for 30 min with constant shaking. After incubation, samples were centrifuged for 5 min at 4˚C at 14000 rpm and supernatant was removed. For analysis, 400 μL of supernatant were mixed with 100 μL of 0.5 M NaOH, and sugar concentrations remaining in the supernatant were measured by Dionex ICS-3000 HPLC as described below.
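In this assay, uptake is inferred from sugar disappearance from the supernatant: mixing 500 μL of cells at OD600 40 with an equal volume of sugar-containing buffer gives roughly OD 20 and 200 μM sugar at t = 0. A sketch of the normalization (the HPLC concentrations below are invented for illustration):

```python
def uptake_rate(c0_uM: float, c_final_uM: float, volume_mL: float = 1.0,
                od600: float = 20.0, minutes: float = 30.0) -> float:
    """Sugar removed from the supernatant, in nmol per minute per OD unit.
    For a uM concentration and mL volume, nmol = uM * mL."""
    consumed_nmol = (c0_uM - c_final_uM) * volume_mL
    return consumed_nmol / (od600 * minutes)

# Hypothetical supernatant falling from 200 uM to 140 uM over the 30 min incubation:
print(uptake_rate(200.0, 140.0))  # 0.1 nmol per (min * OD)
```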
HPLC analysis
HPLC analysis was performed on Dionex ICS-3000 HPLC using a CarboPac PA200 analytical column (150 x 3 mm) and a CarboPac PA200 guard column (3 x 30 mm) at room temperature. 25 μL of sample was injected and run at 0.4 mL/min with a mobile phase using 0.1 M NaOH. Acetate gradients were used to resolve xylodextrin samples. For xylobiose and xylotriose, the acetate gradients were as follows: 0 mM for 1 min, increasing in 8 min to 80 mM, then to 300 mM in 1 min, keeping at this concentration for 2 min, followed by equilibration back to 0 mM for 3 min.
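The xylobiose/xylotriose acetate program can be written as piecewise-linear breakpoints (times in minutes and concentrations in mM taken from the schedule above); a minimal sketch:

```python
# (time_min, acetate_mM) breakpoints: hold at 0 mM for 1 min, ramp to 80 mM
# over 8 min, jump to 300 mM in 1 min, hold 2 min, re-equilibrate to 0 over 3 min.
BREAKPOINTS = [(0.0, 0.0), (1.0, 0.0), (9.0, 80.0),
               (10.0, 300.0), (12.0, 300.0), (15.0, 0.0)]

def acetate_mM(t_min: float) -> float:
    """Linearly interpolate the programmed acetate concentration at time t."""
    for (t0, c0), (t1, c1) in zip(BREAKPOINTS, BREAKPOINTS[1:]):
        if t0 <= t_min <= t1:
            return c0 + (c1 - c0) * (t_min - t0) / (t1 - t0)
    raise ValueError("time outside the 15 min program")

print(acetate_mM(5.0))   # 40.0 mM, halfway up the 8 min ramp
print(acetate_mM(11.0))  # 300.0 mM, during the hold
```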
CRISPR-Cas9 genome editing
CRISPR-Cas9 genome editing was performed according to published procedures [20]. The transformation mixtures were incubated for 30 min at 30˚C, and then subjected to heat shock at 42˚C for 17 min. Following heat shock, cells were centrifuged at 5000 rpm for 2 min. The pellet was resuspended in 250 μL YPD and recovered for 2 hours at 30˚C, then the entire contents were plated onto YPAD+G418 plates (20 g/L Peptone, 10 g/L Yeast Extract, 20 g/L Agar, 0.15 g/L Adenine hemisulfate, 20 g/L Glucose and 200 mg/L G418). Cells were grown for 24 hours at 37˚C, then moved to 30˚C to complete growth. Colonies from the YPAD+G418 plates were picked and grown overnight in 1.0 mL of liquid YPAD medium. Genomic DNA was extracted from these cultures using the MasterPure Yeast DNA Extraction Kit (MPY80200, Epicentre). PCR confirmation of the integration allele was performed, and PCR products were submitted for Sanger sequencing at the UC Berkeley Sequencing Facility (Berkeley, CA) to confirm the integration sequence.
Protein expression levels using fluorescence emission spectroscopy
For comparison of ST16 expression from plasmid-encoded and chromosomally-integrated genes, cells were grown in 10 mL of SC-Ura media (Sunrise Science Products) with 1% glucose under aerobic conditions at 30˚C overnight. Glucose was used rather than XD, due to the very slow growth of cells in XD-containing media. Cells were normalized to a total OD of 20.8 in 200 μL and measured for GFP fluorescence signal using the Synergy Mx plate reader (BioTek) with the following filter set: excitation 485/20, emission 528/20. Cells expressing ST16 without the GFP fusion were used as a control to establish the baseline fluorescence signal from the yeast cells.
For comparisons of transporter expression from plasmids (S5 Fig), 50 mL cultures were grown at 30˚C until they reached an OD 600 of 3, at which point they were harvested by centrifugation, resuspended in ~4-5 mL of media and aliquoted into microcentrifuge tubes to yield a total OD 600 of 30. Samples were spun down at 14,000 rpm for 1 min and the supernatant was aspirated. Cell pellets were quickly flash frozen in liquid N 2 . Frozen cell pellets were thawed on ice and 400 μL of Buffer A (25 mM Hepes, 150 mM NaCl, 10 mM cellobiose, 5% glycerol, 1 mM EDTA, 0.2X HALT protease inhibitor cocktail (ThermoFisher, Waltham, MA), pH 7.5) were added for resuspension. Cells were lysed with zirconia/silica beads in a Mini-Beadbeater-96 (Biospec Products, Bartlesville, OK). Cell debris was pelleted at 10,000xg for 10 min at 4˚C, lysates were diluted three-fold with Buffer A, and their GFP fluorescence was measured using a Horiba Jobin Yvon Fluorolog fluorimeter (Horiba Scientific, Edison, NJ). The λEX was 485 nm, and the emission wavelength was recorded from 495-570 nm, with both excitation and emission slit widths set to 3 nm. A fluorescence calibration curve was prepared with eGFP purified from E. coli (>95% purity). The settings and the eGFP protein concentration range were chosen to yield a linear correlation between the fluorescence intensity at the maximum λEM (510 nm) and the protein concentration of the standard. The maximum fluorescence intensity of the samples fell within this range. Target protein concentrations represent the mean from 3 biological replicates. Total protein concentration of the lysate was determined using the Pierce BCA Protein Assay Kit (ThermoFisher).
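Converting a sample's fluorescence into an eGFP concentration uses the linear standard curve described above; a minimal least-squares sketch (the standard points are invented and lie exactly on the line y = 200x + 20, so the fit is exact):

```python
# Hypothetical standards: (eGFP concentration, ug/mL; fluorescence at 510 nm, a.u.)
standards = [(0.0, 20.0), (0.5, 120.0), (1.0, 220.0), (2.0, 420.0)]

def fit_line(points):
    """Ordinary least-squares fit y = m*x + b; plain Python suffices for 4 points."""
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

def concentration(fluorescence: float, m: float, b: float) -> float:
    """Invert the standard curve: concentration = (signal - intercept) / slope."""
    return (fluorescence - b) / m

m, b = fit_line(standards)
print(concentration(320.0, m, b))  # 1.5 (ug/mL) for a 320 a.u. reading
```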
Phylogenetic analysis
The phylogenetic tree of MFS transporters was constructed using the Phylogeny.fr platform [http://www.phylogeny.fr/index.cgi] [31,32] and the amino acid sequences of the transporters used in this study.
Human Aquaporin-5 Facilitates Hydrogen Peroxide Permeation Affecting Adaption to Oxidative Stress and Cancer Cell Migration
Reactive oxygen species (ROS), including H2O2, contribute to oxidative stress and may cause cancer initiation and progression. However, at low concentrations, H2O2 can regulate signaling pathways modulating cell growth, differentiation, and migration. A few mammalian aquaporins (AQPs) facilitate H2O2 diffusion across membranes and participate in tumorigenesis. AQP3 and AQP5 are strongly expressed in cancer tissues and AQP3-mediated H2O2 transport has been related to breast cancer cell migration, but studies with human AQP5 are lacking. Here, we report that, in addition to its established water permeation capacity, human AQP5 facilitates transmembrane H2O2 diffusion and modulates cell growth of AQP5-transformed yeast cells in response to oxidative stress. Mutagenesis studies revealed that residue His173 located in the selective filter is crucial for AQP5 permeability, and interactions with phosphorylated Ser183 may regulate permeation through pore blockage. Moreover, in human pancreatic cancer cells, the measured AQP5-mediated H2O2 influx rate indicates the presence of a highly efficient peroxiporin activity. Cell migration was similarly suppressed by AQP3 or AQP5 gene silencing and could be recovered by external oxidative stimuli. Altogether, these results unveiled a major role for AQP5 in dynamic fine-tuning of the intracellular H2O2 concentration, and consequently in activating signaling networks related to cell survival and cancer progression, highlighting AQP5 as a promising drug target for cancer therapies.
Introduction
Aquaporins, which are expressed in almost every organism and tissue, constitute a highly conserved group of transmembrane proteins that are crucial for cell homeostasis and volume regulation. AQPs are assembled in a homotetrameric structure in membranes, each monomer being a functional channel that facilitates a rapid bidirectional flux of water, and, in some cases, small uncharged solutes like glycerol, in response to osmotic or solute gradients [1]. The thirteen human isoforms (AQP0-AQP12) are expressed in a cell- and tissue-dependent manner, and are subdivided according to their selectivity and sequence homology. Classical or orthodox aquaporins are considered mainly water-selective.
Human AQP5 is Localized and Functional at the Yeast Plasma Membrane
To evaluate human AQP5 function, yeast cells depleted of endogenous aquaporins (aqy-null) were transformed with either the empty plasmid pUG35 (control cells) or the plasmid encoding hAQP5. Prior to functional analysis, expression and localization of AQP5 at the yeast plasma membrane were verified by fluorescence microscopy using GFP-tagging ( Figure 1A). AQP5 function was evaluated by stopped-flow fluorescence. Cells were loaded with the volume-sensitive dye carboxyfluorescein and exposed to a hyperosmotic solution with an impermeant solute, inducing cell shrinkage. Water permeability was evaluated by monitoring the time course of fluorescence output that reflects the transient volume change. As shown in Figure 1B, when exposed to a hyperosmotic solution, cells expressing AQP5 readjust their final volume and reach their new osmotic equilibrium faster than control cells, evidencing water channeling. The water permeability coefficient (P f ) was 10-fold higher for AQP5-transformed yeast cells ((3.70 ± 0.31) × 10 −3 cm s −1 and (0.37 ± 0.03) × 10 −3 cm s −1 for AQP5 and control, respectively). Incubation with 0.5 mM HgCl 2 markedly reduced water permeability of AQP5-transformed yeast cells (≈64%) without affecting control cells ( Figure 1C), indicating that AQP5 is a mercury-sensitive water channel. Finally, the activation energy for water transport (E a ), which distinguishes passive water diffusion through lipid bilayer from AQP-mediated diffusion, was lower for AQP5 cells (5.63 ± 0.36 kcal mol −1) compared to the control (12.66 ± 0.69 kcal mol −1 ) ( Figure 1D). These results show that AQP5 is assembled into a functional water channel in yeast.
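The activation energy quoted above is conventionally extracted from an Arrhenius plot, Ea = −R × slope of ln(Pf) versus 1/T. A two-point sketch with synthetic values (the temperatures and Pf values are invented purely for the round-trip check, not taken from the paper):

```python
import math

R_KCAL = 1.987e-3  # gas constant, kcal mol^-1 K^-1

def activation_energy_kcal(pf_by_temp: dict) -> float:
    """Two-point Arrhenius estimate: the slope of ln(Pf) vs 1/T equals -Ea/R.
    pf_by_temp maps temperature (K) -> Pf (cm/s)."""
    (t1, p1), (t2, p2) = sorted(pf_by_temp.items())
    slope = (math.log(p2) - math.log(p1)) / (1.0 / t2 - 1.0 / t1)
    return -slope * R_KCAL

# Round trip: generate Pf at two temperatures from Ea = 5.6 kcal/mol, then recover it.
Ea, T1, T2, p1 = 5.6, 283.15, 296.15, 3.0e-3
p2 = p1 * math.exp(-(Ea / R_KCAL) * (1.0 / T2 - 1.0 / T1))
print(round(activation_energy_kcal({T1: p1, T2: p2}), 2))  # 5.6
```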
S183 and H173 are Important Residues for AQP5 Gating
Recent evidence supports the idea that human AQPs can be gated via different mechanisms, including pH and phosphorylation [20,21]. Regarding AQP5, regulation was proposed to involve phosphorylation at Ser156 in cytoplasmic loop D to rapidly and reversibly regulate AQP5 plasma membrane abundance [22]. Phosphorylation of AQP5 in its PKA consensus site (S156) induced colon cancer cell proliferation via the Ras/ERK/Rb pathway [23]. In addition, in silico studies suggested a second gating mechanism [24] where the AQP5 monomer undergoes conformational changes varying between an open/close state and wide/narrow state. The authors proposed that the AQP5 channel could change from open to closed by a tap-like mechanism at the cytoplasmic end, induced by translation of the His67 side chain inside the pore, blocking the water passage, and that the selectivity filter (SF) regulates the rate of water flux when the channel is open. In this case, AQP5 channels could exhibit two different conformations (wide and narrow), determined by the proximity of the H173 side chain to S183: when these residues get close (<5.5 Å), the SF turns to the narrow conformation and water passage is restricted. The channel constriction induced by H173 side chain orientation determines the two states, wide/narrow, when the cytoplasmic end gate switches from closed to the open state. In addition, our recent study with rAQP5 indicated that channel widening results from deprotonation when the protein is in the phosphorylated state [6]. Thus, using the same yeast system, here we investigated mechanisms of human AQP5 gating by phosphorylation and pH.
We generated point mutations in the AQP5 aromatic/arginine region and in intracellular loop D ( Figure 2). Mutations probing the wide and narrow states were obtained by substituting histidine (H) 173 with alanine (A) or tryptophan (W), respectively. Mutations preventing phosphorylation of S156 and S183 were obtained by substituting serine (S) with alanine (A). Mutations mimicking the charge state of AQP5 phosphorylated at the same serine residues were obtained by substituting serine (S) with glutamic acid (E). Water permeability of yeast cells expressing wild-type AQP5 (WT) or AQP5 mutants was determined at 23 °C at both pH 5.1 and pH 7.4 ( Figure 3A). Expression and localization of all AQP5 mutants were confirmed at pH 5.1 and pH 7.4 by fluorescence microscopy using GFP-tagging ( Figure S1). All yeast clones displayed similar GFP-fluorescence intensity at the plasma membrane ( Figure S1 and Figure 3B), indicating that the observed differences in permeability cannot be assigned to impairment of AQP5 trafficking due to mutations.

Figure 2 caption: As proposed, when the His67 side chain rotates outside the pore, it allows water passage through the pore (open state) [24]. In such cases, if the His173-Ser183 distance (D1) satisfies 7 Å < D1 < 10 Å, the AQP5 monomer adopts a wide conformation. Structures were generated with Chimera (http://www.cgl.ucsf.edu/chimera) and are based on the AQP5 X-ray structure (PDB code 3D9S).
Permeability experiments with AQP5 WT, mutants, and control cells performed at two pHs (pH 5.1 and 7.4) showed that alteration of external pH did not affect water permeability, and that this is not dependent on protein phosphorylation. A recent study reported that the phosphomimetic mutation S156E increased membrane expression of AQP5 in HEK293 cells [22]. In our work, preventing or mimicking S156 phosphorylation did not alter AQP5 membrane expression ( Figure 3B), nor did it modify AQP5 activity at any pH tested, which may in part be explained by a different signaling pathway for AQP5 trafficking used by yeasts. However, water permeability was fully blocked when histidine was mutated (H173A and H173W). Histidine is a highly conserved residue in the selectivity filter of water-specific aquaporins, and is considered crucial for selectivity and transport [25]. The observed impairment of water permeability by H173 mutation ( Figure 3A) validates the previously reported in silico data.
To experimentally investigate the phosphorylation gating mechanism previously proposed in silico [24], we measured the water permeability of AQP5-S183A and AQP5-S183E mutant yeast cells. Impairment of S183 phosphorylation by mutation to alanine (S183A) did not affect water permeability, suggesting that the AQP5 channel remains in a wide conformation, probably because the distance between the H173 side chain and A183 is above 5.5 Å. However, the phosphomimetic mutation of S183 to glutamate (S183E) impaired permeability, indicating that the proximity of S183 to H173, mainly supported by the negative charge of the phosphate group, is responsible for pore constriction. This new AQP5 gating mechanism, involving phosphorylation of S183 at the selectivity filter (SF), has never been reported.
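The distance criterion underlying this gating model can be restated as a small classifier. The thresholds are those quoted from the in silico study [24]; the label for distances between 5.5 and 7 Å is our own placeholder, since the text does not assign that range:

```python
def sf_conformation(d1_angstrom: float) -> str:
    """Classify the AQP5 selectivity filter from the His173-Ser183 distance D1:
    D1 < 5.5 A -> narrow (water passage restricted); 7 A < D1 < 10 A -> wide."""
    if d1_angstrom < 5.5:
        return "narrow"
    if 7.0 < d1_angstrom < 10.0:
        return "wide"
    return "intermediate/unassigned"

print(sf_conformation(4.8))  # narrow
print(sf_conformation(8.2))  # wide
```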
Human AQP5 Transports Hydrogen Peroxide
Several studies have reported H 2 O 2 transport by a few human AQP isoforms, including AQP3 [4], AQP8 [3,26], and AQP9 [7]. Regarding AQP5, our group revealed that the rat isoform could also mediate H 2 O 2 transport [6]. Sequence alignment of human and rat AQP5 isoforms reveals 91% sequence identity, which led us to investigate whether human AQP5 can facilitate H 2 O 2 permeation through membranes.
Thus, we measured the consumption of external hydrogen peroxide by yeast cells expressing human AQP5 using an electrochemical assay (O 2 electrode). Briefly, hydrogen peroxide was added to cell suspensions, and H 2 O 2 uptake was evaluated by monitoring its conversion into O 2 with a Clark electrode after addition of catalase in excess to samples of the cell suspension. Since we found that human AQP5 is not pH-regulated, we performed the assays at a pH optimal for yeast cells (pH 5.1). As depicted in Figure 4A, the rate constant of H 2 O 2 consumption was three-fold higher for yeast cells expressing AQP5 compared to control cells. In addition, incubation with 0.5 mM HgCl 2 , shown above to inhibit AQP5 water permeability, reduced H 2 O 2 consumption to the basal level corresponding to diffusion through the lipid bilayer, not affecting control cells. These data confirm that AQP5 can facilitate H 2 O 2 membrane permeability. Since H 2 O 2 uptake measured by the Clark electrode relies on its extracellular disappearance, and to assure that the high rate of H 2 O 2 consumption observed for AQP5-yeast cells was due to cellular uptake, we loaded the cells with the ROS-sensitive fluorescent probe H 2 -DCFDA and followed the intracellular increase in fluorescence after incubation with a range of H 2 O 2 concentrations. As depicted in Figure 4B, the rate of ROS accumulation was dependent on H 2 O 2 concentration, largely facilitated by AQP5 expression.
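The three-fold difference in rate constants can be illustrated by assuming first-order disappearance of external H2O2, C(t) = C0·exp(−kt); the concentrations and time interval below are invented for illustration, not measured values:

```python
import math

def rate_constant_per_min(c0_uM: float, ct_uM: float, t_min: float) -> float:
    """First-order rate constant for external H2O2 disappearance,
    assuming C(t) = C0 * exp(-k * t), so k = ln(C0 / Ct) / t."""
    return math.log(c0_uM / ct_uM) / t_min

# Hypothetical readings: control cells drop 100 -> 82 uM in 10 min,
# AQP5-expressing cells drop 100 -> 55 uM over the same interval.
k_ctrl = rate_constant_per_min(100.0, 82.0, 10.0)
k_aqp5 = rate_constant_per_min(100.0, 55.0, 10.0)
print(round(k_aqp5 / k_ctrl, 1))  # ~3-fold, consistent with the reported ratio
```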
AQP5 Regulates Cellular Resistance to Oxidative Stress
The involvement of ROS in initiation, promotion, and progression of cancer and their effect on cell behavior depends on concentration, time of exposure, and cellular antioxidant defense, among other factors [27]. In fact, ROS can contribute to the regulation of cell fate, including anti-cancer actions (e.g., by promoting senescence, apoptosis, necrosis, or other types of cell death, and inhibiting angiogenesis) or pro-cancer actions (promoting proliferation, invasiveness, angiogenesis, metastasis, and suppressing apoptosis) [27]. Recent evidence showed a novel role for AQP3-mediated H 2 O 2 transport in the mechanism of breast cancer cell migration [18]. Therefore, we aimed to investigate the physiological role of AQP5-mediated H 2 O 2 transport in cell resistance and growth.
First, we evaluated the effect of long exposure to oxidative stress ( Figure 5A). Yeast cells were grown on solid medium containing 1 mM H 2 O 2 for two days, and it could be observed that cells expressing AQP5 were much more sensitive to oxidative stress than control cells, which were only slightly affected. Subsequently, cells were exposed to short-term oxidative stress, i.e., in the presence of 1 mM H 2 O 2 for 15 and 60 min ( Figure 5B). Results indicate that, when exposed to H 2 O 2 , cells expressing AQP5 can not only survive oxidative stress, but also grow better (1.5- and 2-fold after 15 and 60 min, respectively, compared to non-treated cells). This "overcoming" effect was not observed in control cells that were depleted of endogenous aquaporins, in which H 2 O 2 slowly diffused through membrane lipids. As a control, catalase activity was also evaluated in non-treated cells and after incubation with 1 mM H 2 O 2 for 60 min; the absence of statistical differences indicates that this antioxidant scavenger is not responsible for the observed cellular resistance of AQP5-yeast cells.
AQP5 Shows High Peroxiporin Activity in Pancreatic Cancer Cells
The BxPC3 pancreatic cancer cell line was tested to confirm the yeast results and further investigate the biological relevance of AQP-mediated H 2 O 2 transport. Evaluation of relative gene expression by quantitative real-time RT-PCR demonstrated that both AQP3 and AQP5 were expressed in BxPC3, although with different levels of expression ( Figure 6A).
Since both AQP5 and AQP3 transport H 2 O 2 , and to distinguish the contribution of each isoform to H 2 O 2 permeability, we used BxPC3 cells silenced for AQP3, for AQP5, or double silenced for AQP3-AQP5 to evaluate H 2 O 2 permeability. The knockdown efficiency was evaluated by RT-qPCR after 48 h and compared with the non-targeting control construct, showing a reduction of AQP3 and AQP5 expression of around 50% ( Figure 6B). H 2 O 2 membrane transport was evaluated by epifluorescence microscopy. The rate of fluorescence increase of individual cells incubated with 10 µM H 2 -DCFDA was monitored before and after addition of 100 µM H 2 O 2 ( Figure 6C). As depicted, the rate of H 2 O 2 uptake was maximal for control cells that expressed both AQP3 and AQP5. Cells silenced for AQP5 (siAQP5) still expressing AQP3, and cells silenced for AQP3 (siAQP3) still expressing AQP5, responded to H 2 O 2 addition with a similar rate of fluorescence increase, approximately half the control ( Figure 6D). In AQP3-AQP5 silenced cells, H 2 O 2 uptake was almost abolished, and a similar effect was seen when cells were treated with the non-specific aquaporin inhibitor HgCl 2 . Moreover, when control cells were incubated with the AQP3 inhibitor Auphen [28], inhibiting AQP3 but not AQP5, the rate of H 2 O 2 uptake was considerably decreased but still significant, due to AQP5 peroxiporin activity.
Interestingly, the level of AQP3 expression in BxPC3 cells was 11-fold higher than AQP5, and the silencing was equally efficient for both AQPs. However, the measured rate of H 2 O 2 permeation was similar for AQP3 and AQP5, suggesting that AQP5 may have a higher capacity for H 2 O 2 flux, resulting in a highly efficient peroxiporin.
Effect of AQP-Mediated H 2 O 2 Transport on Cell Migration
To investigate whether AQP5-mediated H 2 O 2 transport underlies signal transduction in pancreatic cancer progression, we evaluated the rate of cell migration before and upon cell treatment with extracellular H 2 O 2 of BxPC3 cells silenced for AQP3 (siAQP3), for AQP5 (siAQP5), and control cells transfected with a non-targeting construct (siCtrl).
Studies on monolayers of BxPC3 control cells (siCtrl) showed a partial recovery of wound area after 12 h (around 20%; p < 0.01) and significant recovery after 24 h (around 60%; p < 0.001) ( Figure 7A). However, silencing AQP3 completely impaired cell migration, and silencing AQP5 substantially reduced cell migration and wound closure compared to control cells. These results are consistent with previous reports in AQP3-null mice, breast cancer, and sarcoma cells [18,29,30], and indicate that AQP3 and AQP5 play a role in the regulation of cell migration in pancreatic cancer cells, representing a promising target for the treatment of human pancreatic adenocarcinoma.
Prior to the oxidative migration assay, we assessed the effect of an external H 2 O 2 stimulus on the viability of BxPC3 cells. As depicted, the addition of H 2 O 2 up to 200 µM induced a maximal 30% loss of cell viability ( Figure 7B). Thus, in subsequent experiments, we used 100 µM H 2 O 2 , assuring >70% cell viability for migration assays.
The contribution of each AQP to cell migration under oxidative stress was investigated in control cells ( Figure 7B), in siAQP3 ( Figure 7D), or in siAQP5 cells ( Figure 7E) before and after treatment with 100 µM H 2 O 2 . Downregulation of AQP5 or AQP3 gene expression had a strong impact on cell migration of non-treated cells, which, even at 24 h, were not able to decrease the wound area when compared to control cells ( Figure 7D,E). However, in siAQP5 cells that still express AQP3, oxidative treatment induced a significant recovery at 24 h (50% wound area), similar to control cells, possibly due to H 2 O 2 diffusion via AQP3, as previously reported for breast cancer cells [18]. When siAQP3 cells were treated with H 2 O 2 , the wound recovery was detected earlier at 12 h, although to a lesser extent, showing the positive effect of H 2 O 2 diffusion via AQP5 on cell migration. These data demonstrate for the first time that AQP5 expression and peroxiporin activity in pancreatic cancer cells is critical for cell migration and tumor spread.
Discussion
In biological systems, ROS are generated endogenously by the mitochondrial respiratory chain and oxidase enzymes, or in response to extracellular stimuli. ROS products, including H 2 O 2 , contribute to oxidative stress and can lead to the initiation and progression of several chronic diseases, such as atherosclerosis, diabetes, neurodegeneration, and tumorigenesis. However, at low concentrations, ROS can regulate signaling pathways and physiological processes, including cell growth, differentiation, and migration. Recent studies showed that some mammalian AQPs can channel H 2 O 2 across the cell plasma membrane, and reported their involvement in signaling cascades and tumorigenesis [19]. AQP3-mediated H 2 O 2 transport has been linked to cancer cell migration, explaining its overexpression in cancer tissues. Rat AQP5 has been characterized as a peroxiporin, but studies with the human isoform were lacking. Knowing that AQP5 is also highly expressed in human tumors [16], we investigated AQP5 functional regulation and its ability to transport H 2 O 2 , which may account for a role in tumor progression.
Here, we report that human AQP5 facilitates H 2 O 2 uptake in hAQP5-transformed yeast cells, an ability also detected in a cultured pancreatic adenocarcinoma cell line. It is worth mentioning that AQP5 was found to be overexpressed in pancreatic adenocarcinoma biopsies of patients compared with matched normal pancreas tissues, being correlated with tumor stage and aggressiveness [17].
We found that AQP5 expression increases yeast sensitivity to oxidative damage after a long-term insult (48 h), but renders cells more resistant to short-term oxidative stress, evidencing the positive contribution of AQP5-mediated H 2 O 2 diffusion to cell growth and survival. Mutagenesis studies demonstrated that while phosphorylation of S156 at the cytoplasmic end does not affect permeability, residue His173, located in the selectivity filter, is crucial for water permeability and possibly interacts with phosphorylated S183 to regulate permeability, resulting in blockage of the pore. This gating mechanism might be involved in the fine-tuning of cell sensitivity/resistance to external oxidative conditions, where NOX-produced H 2 O 2 is taken up via AQPs, triggering signaling cascades and inducing cell proliferation and migration. In fact, both AQP5 and AQP3 showed measurable peroxiporin activity in pancreatic cancer cells, with AQP5 showing higher efficiency than AQP3. Moreover, both AQP5 and AQP3 were revealed to be crucial for cell migration, as shown here for BxPC3 cells, for which cell migration was drastically reduced when these peroxiporins were silenced.
The recovery of the migration rate by external oxidative stimulus demonstrates that when AQP3 or AQP5 are downregulated, signaling events triggered by H 2 O 2 are blocked by the permeability barrier imposed by biomembranes, and additional H 2 O 2 is needed to force cell migration. It has been reported that during normal conditions (eustress) the gradient between extracellular and intracellular H 2 O 2 is higher than 200-fold [31][32][33]; the presence of aquaporins will decrease this gradient, favoring H 2 O 2 permeability and prompting cellular processes like proliferation and migration [15]. The observation that silenced-AQP3 or -AQP5 cells recover cell migration rate by treatment with H 2 O 2 perfectly agrees with this notion.
Altogether, our findings demonstrate that AQP5 can play an important role in cancer cell survival. By allowing a dynamic fine-tuning of intracellular H 2 O 2 to activate signaling networks related to cell survival and proliferation, AQP5 can regulate cellular resistance to oxidative stress as well as facilitate cancer cell migration, and represents a promising target for the development of cancer therapies.
Yeast Strains and Growth Conditions
Transformed yeast strains were grown in YNB medium (2% (w/v) glucose, 0.67% (w/v) yeast nitrogen base (DIFCO)) supplemented with the adequate requirements for prototrophic growth [34] and maintained on the same medium with 2% (w/v) agar. For all experiments, the same medium was used to grow yeast cells to mid-exponential phase (OD 600 1.0).
Cloning and Heterologous Expression of AQP5 in S. cerevisiae
Two sets of expression plasmids were generated. One set expresses non-tagged AQP5, while the other set expresses AQP5 C-terminally tagged with yeGFP. All plasmids were constructed by homologous recombination in yeast strain YSH1770 by co-transformation of AQP5-derived PCR fragments and BamHI, SalI, HindIII digested pUG35, as described before [35]. Primers used are shown in Table 1. The nucleotide sequence of all constructs was verified by DNA sequencing at Eurofins Genomics, Germany. Non-tagged AQP5 mutations were made by transforming YSH1770 with BamHI, SalI, and HindIII digested pUG35 and two PCR products: one generated by AQP5UG35fw + a mutant rv primer, the other by AQP5UG35rv + the corresponding mutant fw primer. GFP-tagged versions were made in the same way, except for using the AQP5UG35GFPrv primer instead of AQP5UG35rv. Non-tagged and GFP-tagged wild-type AQP5 were generated by transforming YSH1770 with BamHI, SalI, and HindIII digested pUG35 and PCR products generated by AQP5UG35fw + AQP5UG35rv and AQP5UG35fw + AQP5UG35GFPrv, respectively.
Table 1. Nucleotide sequences of the PCR primers used for generating the mutants analyzed in the present study. Nucleotide sequences in turquoise are for homologous recombination with pUG35; the bold sequence is a yeast Kozak sequence, while AQP5 sequences are shown in black. The codon changed in each primer is underlined.
H173WAqp5rv: 5' GATTCCGACAAGCCAGCCCAGGGT 3'
S156AAqp5fw: 5' CGCCGCACCGCACCTGTGGGCT 3'
S156AAqp5rv: 5' AGCCCACAGGTGCGGTGCGGCG 3'
S156EAqp5fw: 5' CGCCGCACCGAACCTGTGGGCT 3'
S156EAqp5rv: 5' AGCCCACAGGTTCGGTGCGGCG 3'
S183AAqp5fw: 5' CACTGGCTGCGCAATGAACCCAGC 3'
S183AAqp5rv: 5' GCTGGGTTCATTGCGCAGCCAGTG 3'
S183EAqp5fw: 5' CACTGGCTGCGAAATGAACCCAGC 3'
S183EAqp5rv: 5' GCTGGGTTCATTTCGCAGCCAGTG 3'
AQP5 Subcellular Localization by Fluorescence Microscopy
For subcellular localization of GFP-tagged AQP5 in S. cerevisiae, yeast cells in the mid-exponential phase were observed using a Zeiss Axiovert 200 fluorescence microscope, at 495 nm excitation and 535 nm emission wavelengths. Fluorescence microscopy images were captured with a digital camera (CoolSNAP EZ, Photometrics, Huntington Beach, CA, USA) using the Metafluor software (Molecular Devices, Sunnyvale, CA, USA).
Cell Culture
Biopsy xenograft of Pancreatic Carcinoma line-3 (BxPC3) was obtained from ATCC (catalog no. CRL-1687) and cultured at 37 °C in 5% CO 2 . Cells were grown in RPMI1640 medium with 10% FBS and 1% penicillin/streptomycin. Medium was changed every 2-3 days and experiments were performed at 70% to 80% cell confluence.
Transfection with siRNA for AQP Silencing
Short interfering RNA (siRNA) targeting human AQP3 (ID: s1521) and human AQP5 (ID: s1527) were purchased from Ambion. Silencer ® Negative Control siRNA #1 (Ambion, ThermoFisher Scientific, Waltham, MA, USA) was employed as the negative control to ensure silencing specificity in all the experiments. Briefly, medium was removed and cells were supplied with Opti-MEM I reduced serum medium without antibiotics (Opti-MEM) (Life Technologies, ThermoFisher Scientific, Waltham, MA, USA). siRNA (30 pmol) was diluted in Opti-MEM and mixed with Lipofectamine™ RNAiMAX transfection reagent (Invitrogen, ThermoFisher Scientific, Waltham, MA, USA) pre-diluted in Opti-MEM according to the manufacturer's instructions. After 5 min incubation at room temperature, the mix was added to the cells and incubated at 37 °C in 5% CO 2 . For double silencing, both AQP3-RNAiMAX and AQP5-RNAiMAX complexes were added to the cells. After 48 h of incubation, the knockdown efficiency was evaluated by quantitative real-time RT-PCR, as described below.
RNA Isolation and Real Time RT-PCR
Total RNA was extracted from cultured cells using TRIzol Reagent (Invitrogen), according to the manufacturer's protocol. The RNA was treated with RNase-free DNase I (Sigma-Aldrich, St. Louis, MO, USA) to avoid contamination with genomic DNA. Extracted RNA was quantified with a Nanodrop™ 2000c spectrophotometer. For template cDNA synthesis, 1 µg of total RNA was reverse transcribed in a 20 µL final volume using random hexamer primers (Roche Applied Science, Penzberg, Germany) and 200 units of M-MLV reverse transcriptase (Invitrogen), as previously described [36].
The relative quantification of gene expression was determined using the 2^(−ΔCt) method (adapted from Reference [37]). Using this method, we obtained the fold variation in AQP gene expression normalized to an endogenous control (β-actin).
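The 2^(−ΔCt) arithmetic can be sketched in a few lines. The Ct values below are invented for illustration only (not measured data); they are chosen so that a ΔCt difference of 3.5 cycles between the two targets yields the roughly 11-fold expression difference of the kind reported above for AQP3 versus AQP5:

```python
def relative_expression(ct_target, ct_reference):
    """Fold expression of a target gene by the 2^(-dCt) method,
    normalized to an endogenous control (here, beta-actin)."""
    return 2.0 ** -(ct_target - ct_reference)

# Hypothetical Ct values (illustration only, not measured data).
ct_actin, ct_aqp3, ct_aqp5 = 18.0, 24.0, 27.5

aqp3 = relative_expression(ct_aqp3, ct_actin)
aqp5 = relative_expression(ct_aqp5, ct_actin)

# A dCt difference of 3.5 cycles between two targets corresponds
# to a 2^3.5 (about 11.3)-fold expression difference.
fold_difference = aqp3 / aqp5
```

Note that a lower Ct means higher expression, which is why the exponent is negated.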
Migration Assay
BxPC3 cells were seeded in six-well microplates at a density of 0.15 × 10 6 cells/well and were allowed to adhere for 24 h prior to AQP silencing. After 48 h incubation with silencing reagent, a wound was made with an even trace in the middle of the monolayer using a sterile 10 µL pipette tip. After washing three times with phosphate buffered saline (PBS) to remove cell debris, cells were incubated with vehicle or with 100 µM H 2 O 2 prepared in low-serum (2% FBS) RPMI medium (Life Technologies, ThermoFisher Scientific, Waltham, MA, USA). The cells were then incubated at 37 °C in a 5% CO 2 incubator, and images of the wound were captured at intervals of 2 h. The distance of the wound was measured under a light microscope and analyzed using the ImageJ software (https://imagej.net). Wound closure was normalized to the original wound area at time 0. All samples were tested in triplicate, and the data are expressed as the mean ± SEM.
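The wound-closure normalization described above reduces to simple arithmetic on the measured areas. A minimal sketch follows; the area values are invented, chosen only so the output mirrors the ~20% (12 h) and ~60% (24 h) recovery figures quoted in the Results:

```python
def wound_closure_percent(areas_by_time):
    """Percent wound closure at each time point, normalized to the
    wound area measured at time 0 (areas e.g. from ImageJ)."""
    area0 = areas_by_time[0]
    return {t: 100.0 * (area0 - a) / area0 for t, a in areas_by_time.items()}

# Hypothetical wound areas (arbitrary units) for one well.
areas = {0: 1000.0, 12: 800.0, 24: 400.0}
closure = wound_closure_percent(areas)  # 0% at t=0, 20% at 12 h, 60% at 24 h
```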
Water Permeability Measurements
For water permeability assays, yeast transformants were grown to OD 600 ≈ 1.0. Permeability assays were performed by stopped-flow fluorescence spectroscopy, as previously described [38], using a HI-TECH Scientific PQ/SF-53 stopped-flow apparatus, which has a 2 ms dead time, at a controlled temperature, interfaced with a microcomputer. Experiments were performed at temperatures ranging from 9 to 34 °C. Four runs were usually stored and analyzed at each experimental condition. In each run, 0.1 mL of cell suspension was mixed with an equal volume of hyperosmotic sorbitol buffer (2.1 M sorbitol, 50 mM K-citrate, pH 5.1 or 7.4), producing an inwardly directed gradient of the impermeant solute sorbitol that induces water outflow and cell shrinkage. Fluorescence was excited using a 470 nm interference filter and detected using a 530 nm cut-off filter. The time course of cell volume change was followed by fluorescence quenching of the entrapped fluorophore (CF). The initial rate constant of volume change (k) was obtained by fitting the time course of fluorescence to a single exponential. The osmotic water permeability coefficient, Pf, was estimated from the linear relationship between Pf and k [38], Pf = k (Vo/A) (1/(Vw × osm_out)), where Vw is the molar volume of water, Vo/A is the initial volume-to-area ratio of the cell population, and osm_out is the final medium osmolarity after the osmotic shock. The osmolarity of each solution was determined from freezing point depression with a semi-micro-osmometer (Knauer GmbH, Germany). The activation energy (Ea) of water transport was evaluated from the slope of the Arrhenius plot (ln Pf as a function of 1/T) multiplied by the gas constant R.
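The Pf formula and the Arrhenius estimate of Ea translate directly into code. The sketch below uses synthetic inputs (the rate constants, V0/A, and osmolarity values are illustrative, not measured data):

```python
import math

def osmotic_permeability(k, v0_over_a, osm_out, vw=18.0e-6):
    """Pf = k * (V0/A) / (Vw * osm_out); Vw is the molar volume of
    water (m^3/mol) and osm_out the final osmolarity (osmol/m^3)."""
    return k * v0_over_a / (vw * osm_out)

def activation_energy(temps_k, pf_values, r_gas=8.314):
    """Ea (J/mol) from the slope of the Arrhenius plot
    (ln Pf versus 1/T), via a least-squares line fit."""
    xs = [1.0 / t for t in temps_k]
    ys = [math.log(p) for p in pf_values]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope * r_gas

# Synthetic Arrhenius data: Pf = A * exp(-Ea / (R T)) with Ea = 15 kJ/mol.
temps = [282.0, 292.0, 302.0, 307.0]
pfs = [1e-4 * math.exp(-15000.0 / (8.314 * t)) for t in temps]
ea = activation_energy(temps, pfs)  # recovers ~15000 J/mol
```

Because the synthetic data are exactly Arrhenius, the fit recovers the input Ea; with real stopped-flow data, scatter in ln Pf would propagate into the slope.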
Hydrogen Peroxide Consumption
The H 2 O 2 consumption was measured in intact yeast cells. Cells were harvested by centrifugation (4000× g; 10 min at RT), resuspended in fresh growth media and incubated at 30 °C with orbital shaking. Hydrogen peroxide (50 µM) was added to intact yeast cells, and the consumption of H 2 O 2 was measured in samples of the cell suspension by following O 2 release with an oxygen electrode (Hansatech Instruments Ltd., Norfolk, UK) after the addition of catalase [39]. H 2 O 2 consumption is reported as a first-order rate constant (s −1 ) obtained from the slope of a semi-logarithmic plot of H 2 O 2 concentration versus time.
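Extracting the first-order rate constant from the semi-logarithmic plot amounts to a line fit on ln[H2O2] versus time. A sketch with a synthetic decay trace (the trace is invented, not measured data):

```python
import math

def first_order_rate_constant(times_s, concentrations):
    """Least-squares slope of ln(concentration) versus time;
    the negative of the slope is the rate constant (s^-1)."""
    ys = [math.log(c) for c in concentrations]
    n = len(times_s)
    mt, my = sum(times_s) / n, sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(times_s, ys))
             / sum((t - mt) ** 2 for t in times_s))
    return -slope

# Synthetic trace: 50 uM H2O2 consumed with k = 0.02 s^-1.
times = [0.0, 30.0, 60.0, 90.0, 120.0]
conc = [50.0 * math.exp(-0.02 * t) for t in times]
k_obs = first_order_rate_constant(times, conc)  # ~0.02 s^-1
```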
Yeast transformants grown up to OD 600 1.0 were harvested by centrifugation (4000× g; 10 min; 4 °C) (Allegra ® 6 Series Centrifuges, Beckman Coulter ® ), washed three times with phosphate buffer 0.1 M pH 5 and resuspended in the same buffer to OD 600 1.4. Cells were then incubated with 5 µM H 2 -DCFDA for 45 min at 30 °C and washed once with phosphate buffer 0.1 M. Cells were transferred to black multi-well plates and incubated with several concentrations of H 2 O 2 (0.5-50 mM). Fluorescence intensity was measured after addition of H 2 O 2 over time up to 60 min in a microplate reader at an excitation/emission of 485/520 nm (FLUOstar Omega, BMG Labtech, Ortenberg, Germany). As a control, H 2 O 2 non-treated cells were incubated with 5 µM H 2 -DCFDA and fluorescence intensity was followed. The intracellular ROS accumulation was calculated from the slope of a plot of fluorescence intensity versus time and normalized to non-treated cells.
BxPC3 cells were seeded in six-well microplates at a density of 0.15 × 10 6 cells/well, and were allowed to adhere for 24 h prior to AQP silencing. After 48 h incubation with silencing reagent, H 2 O 2 transport was measured in individual adherent cells on a coverslip. Briefly, cells were loaded with 10 µM H 2 -DCFDA for 30 min at 37 °C in 5% CO 2 . Next, cells were washed twice with Ringer Buffer (RB) pH 7.4 (140 mM NaCl, 2 mM CaCl 2 , 1 mM MgSO 4 , 1.5 mM K 2 HPO 4 , 10 mM glucose) and the coverslips with the cells were mounted in a closed perfusion chamber (Warner Instruments, Hamden, CT, USA) on the stage of a Zeiss Axiovert 200 inverted microscope, using a 40× epifluorescence oil immersion objective. Fluorescence was excited at wavelength 495/10 nm; emission fluorescence was collected with a 515/10 nm band pass filter. Data were recorded and analyzed using the Metafluor Software (Molecular Devices, Sunnyvale, CA, USA) connected to a CCD camera (CoolSNAP EZ, Photometrics, Tucson, AZ, USA). Cells were equilibrated in RB pH 7.4 for 2 min, and then 100 µM H 2 O 2 , freshly prepared in RB, was added directly to the cells. H 2 -DCFDA fluorescence was scanned every 10 s. For inhibition studies, cells were incubated with 10 µM Auphen for 15 min or 0.1 mM HgCl 2 for 5 min at 37 °C in 5% CO 2 . H 2 O 2 consumption is reported as a first-order rate constant obtained from the slope of a semi-logarithmic plot of H 2 O 2 concentration versus time.
Intracellular ROS Analysis
Qualitative growth assays were performed on solid YNB medium, supplemented with 2% (w/v) glucose, containing hydrogen peroxide. Solid YNB medium with 1 mM H 2 O 2 was freshly prepared at the time of inoculation for oxidative stress experiments. Yeast strains were grown in liquid YNB medium, with orbital shaking, at 30 °C up to OD 600 ≈ 1.0, corresponding to 1 × 10 7 cells/mL. Cells were harvested by centrifugation (4000× g; 10 min; 24 °C) (Allegra ® 6 Series Centrifuges, Beckman Coulter ® ), resuspended to OD 600 1.0 in fresh growth media and incubated with 50 µM curcumin or 50 µM naringenin at 30 °C with orbital shaking for 60 min. Cells were then harvested by centrifugation (4000× g; 10 min; 24 °C) (Allegra ® 6 Series Centrifuges, Beckman Coulter ® ) and resuspended to OD 600 ≈ 10; multi-well plates were prepared with serial 10-fold dilutions of the original concentrated culture up to 10 −8 , and 3 µL suspensions were spotted with a replica platter for 96-well plates onto plates containing YNB solid medium with and without H 2 O 2 and incubated at 28 °C. Differences in growth phenotypes of yeast strains were recorded after 2 days of incubation.
Quantitative growth assays were performed on solid YPD medium. Yeast cells were grown overnight to mid-exponential phase (OD 600 1.0). Cells were harvested by centrifugation (4000× g; 10 min; 24 °C) (Allegra ® 6 Series Centrifuges, Beckman Coulter ® ), resuspended to OD 600 1.0 in fresh growth media and incubated with 1 mM H 2 O 2 at 30 °C with orbital shaking for 15 and 60 min. Multi-well plates were prepared with serial 10-fold dilutions of each strain up to 10 −6 and 3 µL suspensions were spotted on solid YPD plates. As a control for maximum viability, cells without treatment were also diluted and plated as described above. YPD plates were then incubated for 2 days at 28 °C until visible growth was observed, and colonies were counted. Results are expressed as a percentage of the time 0 (non-treated cells) colony number.
Preparation of Cell Lysates for Colorimetric Assay
For antioxidant measurements, yeast transformants grown up to OD 600 1.0 were harvested by centrifugation (4000× g; 10 min; 4 °C) (Allegra ® 6 Series Centrifuges, Beckman Coulter ® ), washed once with 50 mM K+-citrate buffer pH 5.1, resuspended to OD 600 1.0 in fresh growth media and incubated with 1 mM H 2 O 2 at 30 °C with orbital shaking for 60 min. Cells were then harvested by centrifugation (4000× g; 10 min; 4 °C) (Allegra ® 6 Series Centrifuges, Beckman Coulter ® ) and dry pellets were stored at −80 °C until analysis. The dry pellet was dissolved in phosphate buffered saline (PBS) and cells were disrupted mechanically by seven 1-min cycles of vigorous agitation with acid-washed glass beads, with cooling intervals between cycles. After disruption, cell lysates were cleared by centrifugation (7200× g; 10 min; RT) (VWR™ Micro 1207 Centrifuge) and the supernatants were used for the assays. Prior to performing the assays, the protein concentration of cell lysates was determined according to Bradford, using bovine serum albumin as a standard [41].
Catalase Activity Assay
Catalase activity was measured by the modified method of Goth [42]. This method is based on the measurement of H 2 O 2 degradation in cell lysate, which occurs mostly through catalase activity, as catalase has one of the highest turnover numbers among all enzymes. For the catalase activity assay, 40 µL of supernatant was mixed with H 2 O 2 (final concentration 65 mM) to start the reaction. Different dilutions of hydrogen peroxide (0-75 mM) were used as standards. The reaction was stopped after 5 min by addition of ammonium molybdate (final concentration 200 mM), and color development was measured spectrophotometrically in a plate reader at 405 nm (Anthos Zenyth 3100, Beckman Coulter ® ). One unit of catalase activity is defined as the amount of enzyme needed for the degradation of 1 µmol of H 2 O 2 per minute at 25 °C. Catalase activity is expressed as units of catalase per milligram of protein in cell lysate (U mg −1 ).
Statistical Analysis
All the experiments were performed in biological and technical triplicates. Results are expressed as mean ± SEM of at least three independent experiments. Statistical analysis between groups was performed by two-way ANOVA and the non-parametric Mann-Whitney test using GraphPad Prism software (GraphPad Software, La Jolla, CA, USA). p-values < 0.05 were considered statistically significant.
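The Mann-Whitney U statistic underlying the non-parametric comparison can be computed directly by pairwise comparison. This sketch computes only U (for the small samples typical here, the p-value would come from exact tables, as statistics packages do internally); the triplicate values are invented:

```python
def mann_whitney_u(group_a, group_b):
    """U statistic for group_a versus group_b: count of pairs
    where a > b, with ties contributing 0.5."""
    u = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Invented triplicate measurements for two conditions.
control = [10.2, 11.0, 9.8]
treated = [14.1, 13.5, 15.0]
u = mann_whitney_u(treated, control)  # 9.0: complete separation of 3 x 3 pairs
```

Complete separation (U equal to the product of the group sizes) is the most extreme outcome; with n = 3 per group it corresponds to the smallest achievable exact p-value.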
Conclusions
This work unequivocally demonstrates that human AQP5 is involved in redox biology, facilitating the passive diffusion of H 2 O 2 through cell membranes and contributing to cell proliferation and migration. Selective targeting of AQP5 may open new perspectives for anti-cancer drug development.
Atomistic Modeling of Gas Adsorption in Nanocarbons
Carbon nanostructures are currently under investigation as possible ideal media for gas storage and mesoporous materials for gas sensors. The recent scientific literature concerning gas adsorption in nanocarbons, however, is affected by a significant variation in the experimental data, mainly due to the different characteristics of the investigated samples arising from the variety of the synthesis techniques used and their reproducibility. Atomistic simulations have turned out to be sometimes crucial to study the properties of these systems in order to support the experiments, to indicate the physical limits inherent in the investigated structures, and to suggest possible new routes for application purposes. In consideration of the extent of the theme, we have chosen to treat in this paper the results obtained within some of the most popular atomistic theoretical frameworks without any purpose of completeness. A significant part of this paper is dedicated to the hydrogen adsorption on C-based nanostructures for its obvious importance and the exceptional efforts devoted to it by the scientific community.
Introduction
The discovery of novel carbon nanostructures (CNSs) has raised great expectations for their potential impact on gas adsorption, storage, and sensing, thanks to their large surface-to-volume ratio. Gas adsorption research is particularly focused on hydrogen for clean energy sources, and "small-scale" devices for fuel cells, involving either hydrocarbon reforming or hydrogen storage, are currently under study.
The amount of hydrogen storage in solid substrates required for commercial use was estimated at about 9 wt% for the year 2015 by the US Department of Energy (DOE) [1], but none of the major storage media has reached this value so far.
Gas storage is also important for many other technological applications: helium and nitrogen, for example, have many uses in the metallurgical industry.
Nanostructured media are also required for high-sensitivity monitoring of chemical species in many fields, from medical to environmental applications. Monitoring of nitrogen dioxide and carbon mon- and dioxide, for instance, is important for the environment, while the detection of ammonia (NH3) [2,3] and hydrogen sulphide (H2S) [4] is mandatory in industrial, medical, and living environments.
Many experiments on gas adsorption in CNSs have, however, yielded controversial results, with only partial understanding of the processes involved [5].
Adsorption processes in CNSs can in fact be quite tricky, because chemisorption and physisorption phenomena may coexist; moreover, weak interactions are highly sensitive to temperature, pressure, humidity, and so forth, which may vary between experiments [6].
Another source of uncertainty is that CNS samples are often impure, as uncontrolled phenomena and contamination may occur during synthesis [7], resulting in a variety of carbon structures.
All these aspects are strong motivations for reliable atomistic modelling, because the understanding of adsorption/desorption processes is intimately related to the character and strength of the atomistic interactions. The methods chosen to model such systems, however, may vary greatly depending on the required level of accuracy, the number of particles treated, and the specific system considered.
The paper is organised as follows: in the next section we give an overview of the recent literature on gas adsorption in CNSs approached by the main atomistic theoretical schemes, which are introduced in Section 3; in the following three sections, results on gas physisorption and chemisorption in CNSs are presented.
Overview of Gas Adsorption in Nanocarbons
Carbon materials exhibit quite different adsorption properties depending on the valence states. Moreover, stable carbon phases may coexist in amorphous carbon, where "graphite-like" or "diamond-like" short-range order may occur. The other metastable carbon allotropes, such as graphene, fullerenes, carbon nanotubes, carbon nanohorns, and so forth, constitute the backbone of a novel carbon-based chemistry and nanotechnology and exhibit different adsorption properties; in the remainder of this section, the recent literature on atomistic simulations of gas adsorption is briefly introduced for the various allotropes.
Graphene and Activated Carbons.
Starting from graphene, which may be considered a CNS in itself [8,9], several other CNSs can be produced, such as armchair semiconducting or zigzag metallic graphene nanoribbons (GNRs), obtained by standard lithography [10], or graphite nanofibers (GNFs). Nanostructured graphite, either hydrogenated or not, can be synthesized by ball milling in a controlled atmosphere; activated carbons, consisting of a multitude of stacks of disordered graphene planes of various sizes, are also obtained from graphene by steam or chemical-agent processing.
Hydrogen storage in graphene-based NSs has been studied with respect to both physisorption and chemisorption [11,12], revealing that doping or defects affect the storage capacity, as found, for instance, in Li-doped graphene [13].
Thanks to their metallic behavior, graphene layers have also been widely studied for sensing applications of various gas species (NO2, H2O, NH3, O2, CO, N2, and B2), exploiting the charge-carrier density change induced by adsorption [14-16]; some examples of graphene-based "nanodevices" for pH sensors [17] and biosensors [18] can also be found in the literature.
Fullerenes.
Fullerenes and related structures are usually considered ideal adsorbents. C60 (the "bucky-ball") can stably host atoms of the appropriate size, either inside or outside its spherical structure. Hexagonal lattices of C60 molecules can be deposited on a substrate in monolayered or multilayered films while, at low temperatures, cubic C60 lattices (fullerite) are favored; since these fullerene lattices have large lattice constants, they are rather appealing open structures for gas storage [19,20], and different adsorption sites in, for instance, hexagonal C60 monolayers have been studied.
Charged fullerenes have been explored for helium adsorption [21] and as H2 storage media [22]; moreover, it has been shown that "bucky-balls" can also easily bind other gas molecules thanks to their polarization properties. Doping of fullerenes may improve the adsorption of molecular hydrogen, and many examples can be found involving light elements such as fluorine, nitrogen, and boron [23], alkali metals [24-26], transition metals (TMs) [27-30], and silicon [31].
Carbon Nanotubes.
Single-walled carbon nanotubes (SWCNTs) [32,33] are single graphene sheets rolled into a cylindrical shape in various ways (chiralities), resulting in semiconducting or metallic behavior [7].
Defect insertion, structural deformation, or doping are also employed to improve the binding of weakly adsorbing gaseous species on bare CNTs [53]. B- or N-doped CNTs exhibit good adsorption features for H2O and CO molecules [54], while TM-doped zigzag and armchair SWCNTs have been studied for the detection of N2, O2, H2O, CO, NO, NH3, NO2, CO2, and H2S [44,48,55]. Concerning sensing, however, metal-doped SWCNTs are still problematic because their transport properties are only weakly affected by the adsorbed molecules [56]. CNT bundles have also been studied for the storage of noble gases such as He, Ne, Ar, and Xe, and of N2 [36,42,47,57,58].
Other CNSs.
Single-walled carbon nanohorns (SWCNHs) are conically shaped graphene sheets that tend to form spherical aggregates with accessible "internanohorn" and "intrananohorn" pores; hydrogen and nitrogen adsorption in such structures has been studied both experimentally and theoretically [59,60].
During synthesis, one or several fullerenes may become trapped in the internal cavity of a nanotube [61]. Such "peapod" structures are ideal gas "nanocontainers" with enhanced binding properties [62].
Collective gas adsorption phenomena are usually studied with the Metropolis scheme [67,68] in various statistical ensembles.
Other models, based on the continuum theory of fluids, are also used to describe gas adsorption experiments in porous carbon materials [69-72].
In the next section we briefly introduce the theoretical schemes listed above (without any claim of completeness), highlighting their limits of validity and levels of accuracy.
Density Functional Theory ab-Initio Calculations.
DFT ab-initio calculations [73,74] are efficient tools to study atomistic systems and processes [75,76]. According to DFT, the total energy of a system of ions and valence electrons is a functional of the electron density n(r):

E[n(r)] = F[n(r)] + ∫ v_ext(r) n(r) dr,

where v_ext(r) is the ionic potential. The universal Hohenberg-Kohn functional is

F[n(r)] = T[n(r)] + V_e-e[n(r)],

where T[n(r)] and V_e-e[n(r)] are, respectively, the electron kinetic energy and the electron-electron interaction energy; V_e-e[n(r)] contains the Coulomb and exchange-correlation contributions. The total energy is variational with respect to the electron density, and the ground state is obtained self-consistently [75-77].
The key factors affecting the accuracy of DFT calculations are the pseudopotentials, used to replace the ionic potential for computational convenience [78-83], and the scheme adopted to approximate the exchange-correlation potential V_XC[n(r)], which is unknown a priori; the most popular schemes are the local density (LDA) [73,77] and generalized gradient (GGA) approximations [84,85], some of which, such as PBE and B3LYP, are very accurate [65,84-87]. Generally speaking, LDA and GGA are robust for chemisorption but inaccurate for long-range interactions, even though recent studies have shown that LDA results are surprisingly accurate in many cases [88].
Hartree-Fock Based Quantum Chemistry Techniques.
Various strategies are used to include electron correlation in Hartree-Fock (HF) based calculations [63,64,89-92]; in the Configuration Interaction (CI) scheme, the HF ground-state wavefunction is replaced by a linear combination of ground and excited states obtained by populating virtual molecular orbitals (MOs). CI is very accurate but, for computational reasons, limited to very small systems. Various CI schemes are used, namely CIS, CISD, and SF-CISD, including, respectively, single excitations; single and double excitations; and spin-flip excited states [63,64,90,92].
In the Møller-Plesset (MP) method the correlation potential V is treated as a perturbation of the HF Hamiltonian, H = H^(0) + λV, and, for a system of n electrons, N nuclei, and m occupied states, it is formally defined as

V = Σ_{i<j}^{n} 1/r_ij − Σ_{i=1}^{n} Σ_{j=1}^{m} (J_ij − K_ij),

with J_ij, K_ij being the usual HF Coulomb and exchange integrals.
The exact wavefunction is obtained by solving the secular equation

(H^(0) + λV) Ψ = E Ψ,

where both the wavefunctions and the eigenvalues are expanded in a Taylor series of the perturbation parameter λ; the q-th order of the wavefunction expansion in terms of a complete set of HF eigenfunctions is denoted MPq. MP2 is fairly efficient, but the correlation energy can be severely underestimated; MP4 is quite accurate but limited to small systems due to computational cost.
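The order-by-order expansion above is generic Rayleigh-Schrödinger perturbation theory (MP theory being the case where H^(0) is built from Fock operators). A minimal sketch on a toy two-level matrix Hamiltonian, where the second-order sum plays the role of the MP2-like correction and can be checked against exact diagonalization:

```python
import numpy as np

def pt_energy(H0_diag, V):
    """Ground-state energy of H = H0 + V through second order of
    Rayleigh-Schrodinger perturbation theory (the MP2-like correction)."""
    E0 = H0_diag[0]                      # zeroth order: lowest H0 eigenvalue
    E1 = V[0, 0]                         # first order: <0|V|0>
    # second order: sum over excited states of |<n|V|0>|^2 / (E0 - En)
    E2 = sum(V[n, 0] ** 2 / (H0_diag[0] - H0_diag[n])
             for n in range(1, len(H0_diag)))
    return E0 + E1 + E2

# Toy 2-level system with a weak coupling V
H0 = np.array([0.0, 2.0])                # unperturbed energies
V = np.array([[0.0, 0.1],
              [0.1, 0.0]])               # perturbation matrix
E_pt2 = pt_energy(H0, V)
E_exact = np.linalg.eigvalsh(np.diag(H0) + V)[0]
print(E_pt2, E_exact)   # -0.005 vs about -0.004988
```

The truncation error here is tiny because the coupling is weak; for strong coupling the low-order expansion degrades, mirroring the known shortcomings of MP2.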
The Coupled Cluster (CC) theory [63,93] is virtually equivalent to a full-CI approach because the wavefunction is represented as

Ψ_CC = e^T Φ_HF,

where T = T_1 + T_2 + T_3 + T_4 + ... is the "cluster operator" that formally includes all possible excitations, T_i being the operator generating i-fold excitations of the HF ground state [90]. Among the different CC schemes, one of the most popular is CCSD(T), which also includes a singles/triples coupling term [90].
Monte Carlo Sampling Techniques in the Grand-Canonical Ensemble.
The Metropolis algorithm [67] allows the Monte Carlo sampling of an N-particle statistical ensemble, such as the grand-canonical one, which is suitable for studying gas adsorption. Many particles are required in this scheme, so reliable classical interatomic potentials must be used [68]. A physical quantity is measured by averaging it over the ensemble, which is generated by using acceptance rules that depend on the energy and the particle number. Hence, the pressure dependence of the equilibrium gas density in CNSs can be calculated. The grand-canonical Monte Carlo (GCMC) method just described is suitable for large-scale gas adsorption studies provided that chemical events, such as bonding and reactions, are excluded; the key factor affecting the reliability of a GCMC simulation is the accuracy of the interaction potential, and even today the simple Lennard-Jones (LJ) potential (and potentials derived from it) is a popular choice [36,42,94]. Quantum effects are encompassed mainly through the Path Integral Monte Carlo (PIMC) approach, in which the quantum system is mimicked by a classical ring-polymer system whose equilibrium properties return the statistical properties of the quantum system [37,95,96].
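The grand-canonical acceptance rules can be illustrated on the simplest possible case, a non-interacting (ideal) gas, where the exact answer is known: the average particle number equals zV = exp(βμ)V/Λ³. A toy sketch, not production GCMC code (real simulations also displace particles and evaluate interaction energies):

```python
import random

def gcmc_ideal_gas(zV, steps=200000, seed=1):
    """Grand-canonical Metropolis sampling of an ideal gas.
    zV = exp(beta*mu) * V / Lambda^3; exact result is <N> = zV.
    With no interactions, every trial move only changes N."""
    rng = random.Random(seed)
    N, total = 0, 0
    for _ in range(steps):
        if rng.random() < 0.5:                        # attempt insertion
            if rng.random() < min(1.0, zV / (N + 1)):
                N += 1
        elif N > 0:                                   # attempt deletion
            if rng.random() < min(1.0, N / zV):
                N -= 1
        total += N
    return total / steps

avg_N = gcmc_ideal_gas(zV=5.0)
print(avg_N)   # fluctuates around the exact value 5.0
```

In a real adsorption run the acceptance factors gain a Boltzmann weight exp(-beta*dE) computed from the gas-carbon potential, which is exactly where the quality of the LJ parameters enters.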
DFT Nonuniform Fluid Models.
In the spirit of DFT, a variational method to find the ground-state particle density in fluids has been developed, with particular emphasis on fluids close to surfaces [69,70]. For such systems, the intrinsic free-energy functional to be minimized (e.g., by solving the Euler-Lagrange equations) consists of two terms:

F[ρ(r)] = F_HS[ρ(r)] + (1/2) ∬ ρ(r) ρ(r') u(r, r') dr dr',

where F_HS[ρ(r)] is the universal "hard-sphere" free-energy functional containing the repulsive contribution, and u(r, r') is the attractive part of the pairwise interaction potential. As F_HS[ρ(r)] is not known a priori, the Local Density Approximation (LDA) or the Smoothed Density Approximation (SDA) can be employed [71,72]. In the "non-local density functional theory" (NLDFT), the SDA is adopted, the density being smoothed by an appropriate weight function so as to reproduce the Percus-Yevick description of a homogeneous hard-sphere fluid [97]. With this approach, the structural properties and the adsorption isotherms of gases are calculated, and the pore-size distribution of the adsorbent can be extracted.
Physical and Chemical Adsorption of Gaseous Species in CNSs
In the previous sections we emphasized that atomistic modeling of gas adsorption in CNSs should be treated differently depending on whether physical adsorption or chemical bonding is involved. Accordingly, the next two sections are devoted to physisorption (Section 5) and chemisorption (Section 6) phenomena, respectively.
Sometimes, however, classifying the studied phenomena as physical or chemical adsorption is quite difficult, owing to strong polar interactions or weak charge transfer that blur the distinction; in these cases, the calculation of energetic quantities such as the activation energy or the adsorption enthalpy may clarify the picture, because physical adsorption is expected to exhibit lower adsorption enthalpies than chemical bonding.
Gas Physical Adsorption in CNSs
A great deal of the scientific literature of the past twenty years has been devoted to hydrogen physisorption in carbon nanomaterials of different allotropic forms, owing to the potential of nanotechnology to solve this challenging problem, which is still preventing the hydrogen economy from succeeding.
Thus we have chosen to dedicate the next subsection to hydrogen storage and to treat the other gaseous species in the following subsections.
Hydrogen Physical Adsorption.
In order to evaluate the hydrogen storage performance of CNSs, one should always refer to the DOE target as the minimum extractable loading required to address commercial storage needs.
The typical parameters used to quantify the storage are the gravimetric excess (excess hydrogen adsorption),

wt%_exc = 100 (m_H2 − m0_H2) / (m_H2 − m0_H2 + m_ads),

(m0_H2, m_H2, and m_ads being, respectively, the free and the adsorbed molecular hydrogen mass and the mass of the adsorbent nanostructured material) and the analogous volumetric excess.
CNSs can be considered ideal hydrogen adsorption media thanks to their high surface-to-volume ratio, which favours physisorption with fast adsorption/desorption thermodynamic cycles. Atomistic modelling of such phenomena in nanotubes or fullerenes benefits from their well-known atomic arrangements, so possible discrepancies between theory and experiment arise from impurities, sample inhomogeneity, and the limits inherent in the theoretical approach adopted.
Complex CNSs such as activated carbon (AC) or microporous carbon, on the contrary, are particularly challenging because they require a great deal of effort to build reliable atomistic models. In the following, the literature on atomistic modelling of hydrogen adsorption in CNSs is discussed with reference to the various allotropes considered.
Carbon Nanotubes.
Early experiments on CNTs, dating back to the end of the 1990s, indicated these CNSs as ideal candidates to fulfill the DOE requirements [98]. Since then, many controversial theoretical and experimental results have appeared; recent review papers [99-102] have discussed the spread in the experimental data (see Figure 1), which especially affects the early measurements, suggesting that it originates from experimental errors and from sample inhomogeneity and impurity. With the new purification strategies now available, it has become evident that hydrogen storage in CNT media may be problematic.
H2 physisorption in SWCNT or MWCNT systems has been studied mainly by GCMC and molecular dynamics simulations using simple LJ-derived potentials, which have been proven to give realistic results. Apart from early, unconfirmed results [34] supporting the exceptional uptake performance reported in coeval measurements [98], atomistic modelling has evidenced a complicated scenario: H2 uptake can occur either on the external (exohedral) or on the internal (endohedral) surface of a SWCNT, where atomic hydrogen is unstable and only molecular hydrogen can exist [39].
Endohedral storage, however, is limited by steric hindrance, which may cause the breakdown of the tube if excessive.
The LJ parameters of the carbon-gas interactions are usually obtained from the well-known Lorentz-Berthelot rules

σ_gC = (σ_gg + σ_CC)/2,    ε_gC = sqrt(ε_gg ε_CC),

where σ_gg, ε_gg are the gas-gas and σ_CC, ε_CC the carbon-carbon LJ parameters.
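The combining rules above amount to an arithmetic mean of the sizes and a geometric mean of the well depths. A minimal helper, with illustrative (not authoritative) H2 and graphitic-carbon parameters chosen so the cross well depth lands near the ε_C-H2/k_B ≈ 31 K quoted later in the text:

```python
import math

def lorentz_berthelot(sigma_gg, eps_gg, sigma_cc, eps_cc):
    """Combine like-pair Lennard-Jones parameters into gas-carbon
    cross parameters via the Lorentz-Berthelot rules."""
    sigma_gc = 0.5 * (sigma_gg + sigma_cc)   # arithmetic mean of sizes
    eps_gc = math.sqrt(eps_gg * eps_cc)      # geometric mean of well depths
    return sigma_gc, eps_gc

# Illustrative parameters: sigma in nm, epsilon/kB in K
sigma_gc, eps_gc = lorentz_berthelot(0.296, 34.2, 0.340, 28.0)
print(sigma_gc, eps_gc)   # ~0.318 nm, ~30.9 K
```

Because the geometric mean underweights mismatched well depths, deviations from Lorentz-Berthelot are themselves a known source of spread between simulation studies.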
Stan and coworkers [36,42,58,94] have integrated the LJ potential over an ideal CNT surface for different gaseous species, which yields the following potential for a molecule in the vicinity of an ideal CNT:

V(r) = 3π θ ε σ² [ (21/32) (σ/R)^10 M_11(r/R) − (σ/R)^4 M_5(r/R) ],    M_n(x) = ∫_0^π dφ (1 + x² − 2x cos φ)^(−n/2),

where r is the distance from the cylinder axis, θ = 0.38 Å⁻² is the surface density of C atoms, and R is the radius of the ideal cylinder. This model has been used to calculate the uptake of different gases in CNT bundles, showing that, provided the adsorbate molecule is small enough (as in the hydrogen case) and the tubes are properly arranged in a honeycomb structure, the amount of hydrogen in the interstitial regions is comparable to that inside the tubes.
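The same cylindrical averaging can be sketched without the closed form, by numerically integrating the LJ pair potential over an ideal tube surface. The parameters below are illustrative, roughly C-H2-like, and are not the values used in [36,42,58,94]:

```python
import numpy as np

def cnt_surface_potential(r, R=0.68, sigma=0.32, eps_k=31.0, theta=38.0):
    """Potential (in K) felt by a molecule at distance r (nm) from the axis
    of an ideal CNT of radius R (nm), obtained by numerically integrating a
    Lennard-Jones pair potential over the cylindrical surface.
    theta: surface density of C atoms in nm^-2 (0.38 A^-2 = 38 nm^-2)."""
    phi = np.linspace(0.0, 2 * np.pi, 400, endpoint=False)
    z = np.linspace(-5.0, 5.0, 2001)                   # nm, axial cutoff
    P, Z = np.meshgrid(phi, z)
    d2 = r * r + R * R - 2 * r * R * np.cos(P) + Z * Z   # squared distance
    u = 4 * eps_k * ((sigma**2 / d2)**6 - (sigma**2 / d2)**3)  # LJ energy (K)
    dA = R * (phi[1] - phi[0]) * (z[1] - z[0])           # surface element
    return theta * np.sum(u) * dA

# endohedral sites in a ~(10,10)-sized tube: binding deepens toward the wall
print(cnt_surface_potential(r=0.0), cnt_surface_potential(r=0.2))
```

The on-axis value is already attractive, and the well deepens as the molecule approaches the wall, until the repulsive core takes over; this is the qualitative origin of the strong endohedral adsorption discussed below.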
The approximations in this model were, anyway, quite severe (for instance, gas-gas interactions were neglected), and these results have been revised by GCMC simulations [103] with the Silvera-Goldman potential [104] for H2-H2 and the LJ potential for C-H2; it was shown that the adsorption isotherms decrease as the rope diameter increases, because the specific-area uptake at the interstitial and endohedral sites is nearly independent of the rope diameter (see Figure 2). These results agree with recent experiments and show that the DOE requirements are not satisfied at room temperature in the 1-10 MPa pressure range, even for an isolated SWCNT. PIMC simulations for (9,9) and (18,18) SWCNT arrays, implemented with the Silvera-Goldman and the Crowell-Brown [105] potentials, respectively, for H2-H2 and C-H2, have shown that quantum effects lower the GCMC results independently of the CNT chirality [106] and confirm that the earlier optimistic experimental results on bare CNTs [98,107] cannot be explained by physisorption.
GCMC simulations have also been used to study how hydrogen physisorption in CNT media is affected by oxidation or by hydrogen chemisorption [108], showing that oxidation should favor endohedral physical adsorption, thus increasing both the volumetric and the gravimetric densities (see Figure 3). The theoretical limits of hydrogen physisorption in SWCNT systems have been discussed by Bhatia and Myers [109], who recast the problem as a delivery one, involving storage and release. The delivery is defined from the adsorption/desorption Langmuir isotherms at the charge and discharge pressures P1 and P2 as

D = n_m [ K P1 / (1 + K P1) − K P2 / (1 + K P2) ],    K = (1/P0) e^(ΔS0/R) e^(−ΔH0/RT),

where n_m is the adsorption capacity and ΔH0, ΔS0 are the average heat of adsorption and the entropy change. Using GCMC simulations and thermodynamic arguments, the theoretical maximum delivery has been estimated at less than 4.6 wt%, even at the optimal temperature (see Figure 3), given the adsorption heat ΔH0 ≈ 6 kJ/mol. In this context, the authors argued persuasively that the H2 heat of physisorption on the SWCNT side-wall (related to the LJ energy parameter ε_C-H2/k_B ≈ 31 K) makes pure CNTs unfit to satisfy the DOE requirements. Before drawing conclusive statements, however, it should be emphasized that the LJ energy parameter ε_C-H2 used in the simulations discussed so far did not include any curvature correction. To correct this discrepancy, the LJ parameters for endohedral and exohedral adsorption have been calculated using quantum-chemistry methods: for instance, Guan and coworkers [110] have used MP2, evidencing that the curvature of a (10,10) CNT makes the endohedral adsorption stronger than the exohedral one.

Figure 3: Hydrogen absolute delivery (a) and enhancement factor (b) for CNT arrays with a van der Waals gap of 0.9 nm. From [109].
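The delivery expression is straightforward to evaluate. The sketch below uses purely illustrative numbers (a weak ~6 kJ/mol physisorption heat, room temperature, charging at 10 MPa and discharging at 0.1 MPa), with ΔH0 entered as a negative adsorption enthalpy change rather than a positive "heat":

```python
import math

R_GAS = 8.314  # J/(mol K)

def langmuir_delivery(n_m, dH0, dS0, T, P1, P2, P0=1.0e5):
    """Deliverable amount between charge pressure P1 and discharge
    pressure P2 (Pa) for a Langmuir isotherm n(P) = n_m*K*P/(1+K*P),
    with K = (1/P0)*exp(dS0/R)*exp(-dH0/(R*T)); dH0 < 0 for adsorption."""
    K = (1.0 / P0) * math.exp(dS0 / R_GAS) * math.exp(-dH0 / (R_GAS * T))
    n = lambda P: n_m * K * P / (1.0 + K * P)
    return n(P1) - n(P2)

d = langmuir_delivery(n_m=1.0, dH0=-6.0e3, dS0=-80.0, T=298.0,
                      P1=1.0e7, P2=1.0e5)
print(d)   # ~0.069: only a small fraction of capacity is deliverable
```

The exercise makes the trade-off explicit: too weak a binding leaves the isotherm nearly linear (little stored at P1), while too strong a binding fills the sites already at P2; hence the ~15 kJ/mol optimum mentioned below.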
The difference between endohedral and exohedral H2 adsorption has been evaluated in the framework of NLDFT for square-lattice CNT arrays, showing that the outer adsorption, which depends on the van der Waals gap (i.e., the intertubular distance in a bundle of nanotubes), can be improved [41].
The binding energy of physisorbed H2, calculated by accurate DFT for both zigzag and armchair CNTs, is in the range 0.049-0.113 eV due to dipolar interactions [48]; these values are slightly improved in nanotube bundles, where the adsorption energy at the interstitial and groove sites is larger. Dag and coworkers [111] have tried to clarify the nature and strength of H2 adsorption on armchair CNTs by using a hybrid model accounting for GGA-DFT short-range interactions and van der Waals long-range forces [112]. The equilibrium configuration was found at a distance d0 = 0.31 nm with a binding energy of 0.057 eV (almost independent of the tube curvature) which, despite implying a revision of previous results, does not change the overall scenario. Indeed, theoretical calculations have shown that, in order to obtain good delivery properties with efficient charging/discharging cycles, a system with an adsorption heat of about 15 kJ/mol should be considered [109,113].
Many authors have therefore suggested that the adsorbing properties of CNTs could be improved by doping with different species, mostly alkali and transition metals. In a Li-doped SWCNT, lithium atoms donate their 2s valence electrons to the lowest CNT conduction band, so that the semiconducting SWCNT becomes metallic. The equilibrium distance of the physisorbed hydrogen molecule from the Li impurity is d0 = 0.34 nm or d0 = 0.21 nm for Li bonded internally or externally to the tube, respectively [111]. Generally speaking, if the interaction potential and the configuration of the doping alkali-metal species are modeled reliably, the hydrogen adsorption turns out to be enhanced, and SWCNTs could possibly approach the DOE threshold, as shown in Figure 4 [114], where GCMC simulations of Li-doped CNT arrays are reported.
High-capacity hydrogen storage has been reported in B-doped or defective nanotubes with Ca impurities [115]; in this case the empty Ca 3d-orbitals hybridize with the H2 σ-orbitals, enhancing the hydrogen uptake up to 5 wt%. Moreover, in this case Ca impurities do not cluster and remain dispersed on the tube side-wall. The benefit of this strategy is more uncertain, however, if one considers the whole amount of adsorbed gas and the delivery properties of real samples, given their inherent inhomogeneity and complexity.
Chen and coworkers [43] tried to improve the uptake in a peapod structure obtained by encapsulating a fullerene molecule inside a Li-doped SWCNT; in this case a complex charge-transfer process occurs, favoring Li charging and its strong bonding to the CNT surface, which results in a noticeable increase of the H2 binding.
Activated and Microporous Carbons.
For activated carbon (AC) and microporous carbon (MPC) as gas storage media, a severe bottleneck for theoretical predictions is the definition of reliable atomistic models of such disordered materials. These materials too, however, suffer from the limitations of the C-H2 interaction, which make a large amount of H2 storage by physisorption unlikely.
Several potential functions, such as the Mattera, the Taseli [116], and the Steele 10-4 potentials [117], have been employed to treat the graphene-H2 interactions in the context of the "slit-pore" model, where pores are delimited by graphene planes [109]; most of these studies predict similar values of the adsorption heat and an excess gravimetric percentage below the DOE requirements at the operating pressure and temperature. In contrast, about 23 wt% has been obtained by extrapolation to very high pressure conditions, where it has been claimed that the hydrogen density exceeds that of liquid hydrogen [118]; this idea has however been refuted by molecular dynamics simulations at 77 K, which also showed that, differently from the CNT case, oxygenation does not improve the uptake [119].
Recently, well-founded atomistic models of ACs and MPCs have been obtained using the Hybrid Reverse Monte Carlo (HRMC) scheme [120,121], starting from an initial realistic "slit-like" pore configuration built from experimental data on the pore wall thickness and pore size distribution (PSD).
The HRMC algorithm is applied on the basis of the following acceptance criterion:

P_acc = min[1, exp(−ΔE/k_B T − Δχ²/2)],    χ² = Σ_{i=1}^{N} [g_s(r_i) − g_exp(r_i)]² / σ_i²,

where N, g_s(r), g_exp(r), and σ_i are, respectively, the number of experimental points, the simulated and the experimental radial distribution functions, and the errors inherent in the experimental data (treated as adjustable parameters).
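The acceptance rule above is ordinary Metropolis with an extra penalty on the misfit to the experimental radial distribution function. A minimal sketch with made-up g(r) values (the real scheme operates on full atomic configurations and recomputes g_s(r) after each move):

```python
import math
import numpy as np

def hrmc_acceptance(dE, g_sim_old, g_sim_new, g_exp, sigma, kT):
    """Hybrid Reverse Monte Carlo acceptance probability:
    min(1, exp(-dE/kT - d(chi^2)/2)), where chi^2 measures the misfit
    between simulated and experimental radial distribution functions."""
    chi2 = lambda g: np.sum((g - g_exp) ** 2 / sigma ** 2)
    d_chi2 = chi2(g_sim_new) - chi2(g_sim_old)
    return min(1.0, math.exp(-dE / kT - 0.5 * d_chi2))

g_exp = np.array([1.0, 2.5, 1.2, 0.9])        # "experimental" g(r) points
g_old = np.array([1.1, 2.2, 1.0, 1.0])
g_new = np.array([1.0, 2.4, 1.1, 0.95])       # closer to experiment
sigma = np.full(4, 0.1)
p = hrmc_acceptance(dE=0.0, g_sim_old=g_old, g_sim_new=g_new,
                    g_exp=g_exp, sigma=sigma, kT=1.0)
print(p)   # 1.0: an energy-neutral move that improves the fit is accepted
```

Treating the σ_i as adjustable parameters, as the text notes, effectively tunes how strongly the experimental constraint competes with the energy term during the annealing.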
The AC atomistic model is finally obtained (as shown in Figure 5) by simulated annealing in multiple canonical ensembles, gradually decreasing the temperature and the parameters σ_i in order to minimize the energy and χ² simultaneously [120]; an environment-dependent interaction potential (EDIP) [122] or a reactive empirical bond-order potential [123] can be used to this aim. On this basis, GCMC simulations with the Feynman-Hibbs (FH) correction for quantum dispersion effects [96,124] have been performed at cryogenic temperatures [125]; the FH interaction potential is

U_FH(r) = U_LJ(r) + [ℏ²/(24 μ k_B T)] [U_LJ''(r) + 2 U_LJ'(r)/r],

where U_LJ(r) is the classical LJ potential, μ is the reduced mass of the interacting pair, and the C-H2 parameters are defined using the Lorentz-Berthelot rule.
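For an LJ pair potential the FH derivatives are analytic, so the quantum-corrected potential is cheap to evaluate. A sketch with illustrative, roughly C-H2-like parameters (not the ones used in [125]); the correction is repulsive near the well and fades as 1/T:

```python
import math

HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J/K

def lj(r, sigma, eps):
    s6 = (sigma / r) ** 6
    return 4 * eps * (s6 * s6 - s6)

def fh_lj(r, sigma, eps, mu, T):
    """Quadratic Feynman-Hibbs correction to a Lennard-Jones pair
    potential: U_FH = U + hbar^2/(24 mu kB T) * (U'' + 2 U'/r),
    with mu the reduced mass of the pair. SI units throughout."""
    s6 = (sigma / r) ** 6
    u1 = 4 * eps * (-12 * s6 * s6 + 6 * s6) / r          # dU/dr
    u2 = 4 * eps * (156 * s6 * s6 - 42 * s6) / r ** 2    # d2U/dr2
    return lj(r, sigma, eps) + HBAR**2 / (24 * mu * KB * T) * (u2 + 2 * u1 / r)

# sigma = 0.32 nm, eps/kB = 31 K, reduced mass close to the H2 mass
# (the carbon wall treated as effectively heavy)
sigma, eps, mu = 0.32e-9, 31 * KB, 3.35e-27
u_cold = fh_lj(3.4e-10, sigma, eps, mu, T=20.0)
u_warm = fh_lj(3.4e-10, sigma, eps, mu, T=300.0)
print(u_cold / KB, u_warm / KB)   # in K; the correction shrinks as T grows
```

At 20 K the effective well is dramatically shallower than the classical one, which is precisely why quantum corrections matter for cryogenic H2 adsorption isotherms.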
The energy parameter of a curved surface has been obtained by correcting that of a flat surface with the factors C_ε = 1.134 and C_ε² for the surface-fluid and the surface-surface interactions, respectively. The effective FH interaction potential of H2 with an immobile planar carbon wall is then calculated and used in a GCMC context where the H2-H2 interactions are treated with the Levesque parameters [126]; the C-C interactions treated with either the Frankland-Brenner [127] or the Wang et al. [128] parameters give good results, while the Steele parameters [129] underestimate the adsorption. On this basis, reliable RT isotherms for ACs and MPCs have been obtained using new LJ parameters with an enhanced well depth (about ε_C-C^flat = 37.26 K) to account for the increased surface polarizability occurring when H2 molecules approach the carbon surface.
Other Carbonaceous Structures.
Other carbon nanomaterials, such as nanostructured graphite, GNFs, fullerenes, nanohorns, and so forth, are frequently found in the literature as potential materials for hydrogen adsorption, sometimes combined with alkali metal hydrides, where lithium carbide forms after a few adsorption/desorption cycles [130].
The interaction of an H2 molecule with a graphene sheet has been studied by LDA-DFT calculations [11], and the energy curves, obtained by varying the molecular orientation, the adsorption site, and the distance, show the typical van der Waals behaviour of physisorption (see Figure 6). Hydrogen uptake in GNFs has accordingly been simulated with conventional LJ potentials, showing that significant adsorption (in any case below 1.5 wt% at P = 10 MPa and RT [131]) occurs only if the interplanar distance is larger than 0.7 nm.
The use of more accurate potential parameters, fitted to MP2 ab-initio calculations of the vibrational energy or to experimental results [116], has shown that, at cryogenic temperature and ambient pressure, the adsorption capacity of GNFs is about 2 wt%. MP2 results are, however, affected by long-range errors, and a reliable potential well and equilibrium distance can be obtained only with a large basis set. On the basis of the above predictions, the experimental results reporting adsorption excess data of 10-15 wt% at RT and P = 8 MPa [132,133] are most probably due to chemisorbed contaminants such as oxygen or residual particles of the metal catalysts used during synthesis; this circumstance has highlighted the potentially positive role played by contaminants in storage, driving researchers to study metal doping in these systems as well. Some authors have therefore suggested doping with alkali metals, such as Li and K, to increase the uptake. Indeed, Zhu and coworkers [134] have found that the charge transfer occurring from the metal atoms to the graphene layer enhances the hydrogen adsorption at low temperature, while it is significantly weakened at higher temperature. In this case Li is slightly more effective than K because of the higher charge transfer from Li to graphene (0.5e and 0.2e for Li and K, respectively), with an H2 binding energy almost doubled with respect to bare graphene. Because the transferred charge remains localized near the metal atom, the uptake enhancement does not hold if H2 and Li sit on opposite sides of the graphene layer [13].
Ca doping of zigzag GNRs, approached by GGA-DFT, has evidenced an H2 gravimetric capacity of 5% at 0 K, with reduced clustering of the impurities; clustering can be suppressed also in armchair GNRs by B-Ca codoping [135]. B codoping has also been explored in Li-doped graphene to suppress metallic clustering and to fully exploit the enhanced interaction of Li atoms with H2 molecules due to van der Waals forces and hybridization [136]. Other attempts to improve storage in graphene include the use of graphene layers deposited on metallic substrates [137], showing that Ni and Pt substrates behave differently, the first increasing the covalent bonding to graphene. It should be considered, however, that oxygen adsorption is a competing process that strongly suppresses hydrogen adsorption in metal-doped graphene [138], thus making the use of such systems for hydrogen storage unlikely.

Figure (from [22]): charged-fullerene configurations with hydrogen uptakes of 6.67 wt% and 8.04 wt%, respectively.
Similarly to other CNSs, fullerenes show low binding-energy values (a few meV) for molecular hydrogen, resulting in poor uptake. Charged fullerenes could be used to improve the uptake performance, and ab-initio calculations of charged fullerenes C_n^q (−2 ≤ q ≤ 6, 21 ≤ n ≤ 82) have been performed accordingly [26].
As reported in Figure 7(a), the binding energy of a hydrogen molecule adsorbed at the fullerene surface can be increased between two and five times depending on the fullerene charge state, whose polarity also affects the H2 orientation. An uptake of 8.04 wt% has been predicted at best. Figure 8 shows both the electric field generated by the charged fullerene and the charge-density map of a hydrogen molecule in such a field, giving, at least classically, a clear insight into the mechanism responsible for the H2 storage.
Charged fullerenes can be produced by encapsulating a metal atom inside the fullerene cage: for instance, by trapping a La atom inside a fullerene, three electrons are transferred to C_n. In this case, however, the electric field outside the carbon molecule still does not differ significantly from the neutral case, owing to charge localization phenomena [22].
Enhanced adsorption on fullerenes can also be obtained with transition metals (TMs) [29]: according to the Dewar-Chatt-Duncanson model [139], the interaction is caused by a charge transfer from the H2 highest occupied molecular orbital (HOMO) to the empty metal d-states, followed by back-donation from a metal d-orbital to the H2 lowest unoccupied molecular orbital (LUMO). Ti-decorated C60 has been investigated extensively, showing a hydrogen adsorption of up to 7.5 wt% depending on the doping site: if Ti occupies a hollow site, it binds strongly to the cage and no charge transfer to the hydrogen σ* molecular orbitals occurs, so hydrogen is only physisorbed; on the contrary, if Ti atoms occupy other sites, at least one H2 molecule dissociates and bonds to the Ti atom, while the other hydrogen molecules are physisorbed near the impurity.
Sun and coworkers [140] have found, however, that Ti, like other TMs, tends to agglomerate after the first desorption cycle, thus reducing the hydrogen physisorption and storage. The same authors have also demonstrated that Li12C60 molecules can bind up to 60 hydrogen molecules, resulting in a theoretical gravimetric density of 13 wt% with a nearly constant binding energy [25]. This is due to the large electron affinity of C60 (about 2.66 eV), which causes the capture of the Li valence electrons and strengthens the bond; the positively charged Li impurity then polarizes the H2 molecules, resulting in an increased interaction. Moreover, it was also demonstrated that Li12C60 clustering affects the hydrogen-binding properties only moderately.
Alkali metal doping of C60 has also been studied by ab initio B3LYP/3-21G(d,p) calculations [24]: being positively charged with respect to the fullerene, these impurities can bind up to six (Li) or eight (Na and K) H2 molecules. On increasing the number of Na atoms, the average binding energy remains almost constant because each hexagonal ring of the fullerene cage behaves independently, showing a highly localized reactivity at the individual rings. Na8C60 is found to be energetically stable, with a theoretical hydrogen gravimetric ratio of 9.5 wt%. DFT calculations of C60 doping with alkaline earth metals (Ca, Sr) have evidenced that a strong electric field arises owing to the significant chemical activity of the d-orbitals in these species [141], unlike Be and Mg: the fullerene π*-orbital, partially occupied by the electrons of the metal s-orbital, hybridizes with the metal d-states, resulting in a net charge transfer that polarizes the H2 and gives a theoretical hydrogen uptake of 8.4 wt%. In Figure 9, the spin-resolved PDOS (projected density of states) of a single hydrogen molecule on a Ca-coated fullerene shows that the hydrogen σ-orbital, located far below the Fermi level, remains unchanged; the charge density variations induced by the hydrogen adsorption also suggest that polarization of the H2 occurs near the Ca atom.
Carbon nanocones (CNCs) have been investigated as possible alternatives to CNTs for hydrogen storage [142]. The adsorption isotherms at 77 K in CNC structures with different apex angles have been calculated by GCMC simulations [143], where C-H2 interactions are treated with second-order Feynman-Hibbs LJ potentials, showing that molecular hydrogen can be confined in the apex region inside the cone, in agreement with recent findings from neutron spectroscopy of H2 in CNHs [59]. The hydrogen density obtained is reported in Figure 10 as a function of the fugacity. The density behaves
differently in the high- and low-pressure regimes. In any case, the theoretical data demonstrate that the hydrogen uptake is larger in a CNC than in a CNT, a behavior attributed mainly to the highly attractive region close to the apex. Quite recently, Ca-decorated carbyne networks have been considered in the framework of ab initio DFT calculations, suggesting that this system could benefit from a surface area four times larger than that of graphene. Each Ca-decorated site has been predicted to adsorb up to six hydrogen molecules with a binding energy of 0.2 eV, and no clustering was observed in the model [144].
Physical Adsorption of Gaseous Species Other than Hydrogen.
Other gaseous species are considered for adsorption in CNSs; among them, the noble gases are important case studies because they are used in low-temperature adsorption experiments to measure the CNS pore size distribution. However, condensation phenomena occur in these systems; being studied in the context of low-temperature physics, they are beyond the aims of the present review and will be omitted. Modelling of porosimetry experiments on carbon microporous and nanoporous media, in which physical adsorption does not cause condensation, will instead be treated explicitly.
In the following, moreover, special emphasis will be devoted to methane adsorption, which is attracting growing interest for alternative automotive energy sources. Methane, in fact, can be efficiently stored in CNSs because of its high physisorption binding energy, making it attractive for storage at RT and moderate pressure.
Methane Adsorption.
Methane uptake in CNT bundles has been studied by Stan and co-workers for rigid tubes, following the same approach adopted for hydrogen (Section 5.1.1) [36,42]. LJ parameters and Lorentz-Berthelot rules have been employed to calculate the ideal uptake curves σgg = σgg(εgg) (for endohedral and interstitial sites) at low coverage, for a threshold gas density and fixed chemical potential and temperature; in spite of the deep potential energy well (εgg = 145 K) of methane in CNSs, low uptake values at moderate pressure were predicted, mainly because of the methane molecular size.
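The Lorentz-Berthelot combining rules mentioned above build the gas-carbon cross parameters from the like-pair ones: an arithmetic mean for σ and a geometric mean for ε. A sketch using the methane well depth quoted in the text (εgg/kB = 145 K); the methane σ and both carbon parameters are common literature values assumed here for illustration:

```python
import math

def lorentz_berthelot(sigma1, eps1, sigma2, eps2):
    """Cross LJ parameters: arithmetic mean of sigma, geometric mean of eps."""
    return 0.5 * (sigma1 + sigma2), math.sqrt(eps1 * eps2)

def lj(r, sigma, eps):
    """12-6 Lennard-Jones pair potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

# CH4-CH4 well depth from the text (145 K); sigma and C-C values assumed
sig_gc, eps_gc = lorentz_berthelot(3.73, 145.0, 3.40, 28.0)  # Angstrom, K
u_min = lj(2 ** (1 / 6) * sig_gc, sig_gc, eps_gc)  # well bottom = -eps_gc
```

The cross well depth comes out near 64 K, i.e., much shallower than the methane-methane well, which is why corrugation and site geometry dominate the uptake curves.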
GCMC simulations have been performed to calculate the adsorption excess for both the endohedral and interstitial sites of CNTs at different pressures and van der Waals gaps between the tubes [117,145,146] (see Figures 11 and 12). The decreasing behavior of the interstitial excess adsorption reveals that the uptake saturates while the gas density increases linearly under compression. The usable capacity ratio (UCR), which measures the fuel available upon adsorption/desorption cycles with respect to the fuel available in a storage vessel, has been calculated for different loading pressures and van der Waals gaps (see Figure 13).
For each CNT type and loading pressure, it can be found a peculiar arrangement of the CNT array that maximizes the UCR with respect to the volumetric capacity of the compressed natural gas (CNG) (about 200 V/V at 20 MPa) showing that the CNG value is obtained at a much lower pressure in these structures [145].The potential advantage of carbon tubular "worm-like" structures (CNW: carbon nano worm) over CNTs for methane storage was evidenced by calculating the "Langmuir-type" adsorption isotherms of these structures compared to (10,10) armchair CNTs [147].
As expected, the measured isosteric heat of adsorption is maximum for the most corrugated wormlike tube examined, in accordance with the large methane adsorption excess measured.
Using basically the same method, the isosteric heat of methane adsorption at zero loading in various CNT arrays has been calculated focusing on different uptake sites such as interstitial, surface, groove, "intratubular", and so forth [148].If allowed, the interstitial adsorption site is the most favorable followed by intratubular, groove, and surface sites.
Hydrogen and methane mixtures (hythane) are also considered for adsorption in CNT arrays, slitlike carbon nanopores, and mesoporous carbons.This is aimed, for instance, to separate hydrogen, and methane in synthetic gas obtained from steam reforming of natural gas or for storage clean fuels on vehicles [149][150][151].
It has been demonstrated that hythane storage in slitlike pores and CNTs can achieve the volumetric stored energy threshold of 5.4 MJ/cm 3 for alternative automotive fuel systems established by the US Freedom Car Partnership.Moreover GCMC simulations using the Feynman-Hibbs quantum effective potential have evidenced important selectivity properties of CNT matrices.For instance, arrays of CNTs with diameter between 1.2-2.4nm have large volumetric energy storage with respect to compression and evidence methane separation properties at RT and low pressure.Methane storage in CMK-1 nanoporous structures has also been investigated by GCMC in combination with porosimetry.The isosteric heat values measured are within a broad range because of the heterogeneous nature of these materials [151].
Physical Adsorption of Other Gaseous Species.
CNSs have been repeatedly proposed also for sensing exploiting gas chemisorption.However, sometimes the scenario is more complicated and, instead of chemisorption, strong physisorption is observed if accurate quantum chemistry methods are employed to describe the system.Moreover a significant part of the literature concerning physisorption of various gas species has been aimed to support porosimetry, especially concerning AC, MPC, and other disordered porous structures.Porosimetry has been often studied in connection with the storage problem to get reliable adsorption volume measurements.For instance, the adsorption isotherms for nitrogen, argon, carbon dioxide, and so forth, have been fitted by using GCMC or NLDFT using several interaction potentials, sometimes quantum corrected [152][153][154][155], in order to infer reliable pore size distributions from the experiments (see Figure 14) [154].
Nitrogen physical adsorption in CNT arrays has been studied at subcritical (77 K and 100 K) and supercritical (300 K) temperatures showing that type II isotherms at subcritical temperatures can be explained by taking into account the outer surface adsorption sites of the CNT bundles [57].
The rest of this subsection is dedicated to the physisorption of gaseous species, different from H 2 and CH 4 , in graphene NSs and CNTs.
(1) Graphene.Thanks to its high conductivity, graphene is considered ideal for sensing purposes also because adsorbed species cause an enhanced response of this two-dimensional structure.Indeed charge carrier concentration can be varied by adsorption of various gases, even though the adsorbate identification may be problematic and accurate atomistic modelling is mandatory.
The graphene charge carrier concentration may be changed by charge transfer that depends on the HOMO and LUMO energy levels of the adsorbate with respect to the graphene Fermi energy.If the HOMO energy is above the graphene Fermi level, a negative charge is transferred from the molecule to the graphene whereas the opposite occurs if the energy of the LUMO is below the Fermi level.In addition, charge transfer is also partially determined by the mixing of the HOMO and LUMO with the graphene orbitals.In general charge transfer occur through bonding phenomena but sometimes a more complicated mixture of weak chemisorption and strong physisorption is evidenced.
P-type doping of graphene, for example, can be achieved by NO 2 (or its dimer N 2 O 4 ) adsorption [15], at large distances (between 0.34 nm and 0.39 nm) with one distinction: the open shell monomer electron affinity is larger than the dimer one suggesting that paramagnetic molecules may act as strong dopants.This hypothesis has been checked by ab-initio calculations on several species, such as H 2 O, NH 3 , CO, NO 2 and NO [156], evidencing that the charge transfer depends on the adsorbate orientation, it is nearly independent on the adsorption site and that paramagnetic molecules may not behave as strong dopants: indeed, while NO 2 adsorption exhibits both chemisorption and physisorption characters with relatively strong doping (−0.1e) at large equilibrium distance (0.361 nm), no doping occurs for NO with negligible charge transfer (<0.02e at 0.376 nm distance).The different behavior of NO and NO 2 on graphene can be understood looking at the spin-polarized DOS reported in Figures 15 and 16.The NO 2 LUMO (6a1, ↑) is 0.3 eV below the graphene Fermi energy and therefore charge is transferred from graphene to the molecule.In the NO case, on the contrary, the HOMO is only 0.1 eV below the Fermi energy, is degenerate (2π x , 2π y ), and coincides with the LUMO.Thus the charge transfer is weak and the leading phenomenon is the mixing between the NO HOMO/LUMO and the graphene orbitals; as the hybridization above the Fermi energy prevails, the orbital mixing leads to charge transfer to graphene.
The stable configuration of triplet O 2 on graphite has been modeled by accurate quantum chemistry techniques and high level DFT calculations [157,158] evidencing that the choice of the exchange-correlation is crucial (LDA and PBE are inappropriate) and that spin unpolarized schemes are mandatory.A consensus was reached concerning the physisorption binding energy of 0.1 eV at a distance of 0.31 nm and negligible charge transfer.
GNRs have been also functionalized with polar groups (COOH, NH 2 , NO 2 , H 2 PO 3 ) evidencing enhanced adsorption for CO 2 and CH 4 physisorption CO 2 binding is by far preferred over CH 4 in hydrogen passivate GNRs [159].
A comparative study on diatomic halogen molecules on graphene has evidenced the crucial role played by the van der Waals interactions (that is more marked for species with large atomic radii) and the inadequacy of standard GGA-DFT [160].
(2) Carbon Nanotubes.The detection of physisorbed molecules on a SWCNT wall is an open problem of great technological interest and ab-initio calculations have been employed to this aim for several gaseous species such as H 2 O, NH 3 , CO 2 , CH 4 , NO 2 , and O 2 [48].Most of the molecules studied are charge donors with small charge transfer (0.010e ∼ 0.035e per molecule) and exhibit a weak binding energy (E b ≤ 0.2 eV) with no substantial electron density overlap between the adsorbate and the nanotube.On the contrary, acceptors such as O 2 and NO 2 exhibit a significant charge transfer, often accompanied by large adsorption energy, thus indicating that chemical and physical adsorption characters coexist.
Aromatic compounds interacting with CNSs show a similar uncertain nature of the bonding and their weak intermolecular forces, including van der Waals interactions, are often referred to as π-stacking interactions as they originate from the π-states of the interacting systems [50,51].Strictly speaking, π molecular orbitals can be found only in planar systems such as graphene but for a CNT this concept still holds if one considers the π bonds between the p-type orbitals, referred to as POAV (π Orthogonal Axis Vector), that are nearly orthogonal to the three σ bonds between a carbon and its three neighbors.There are different metastable adsorption configurations of benzene on a CNT (see Figure 17), the most stable one in narrow CNTs being with the aromatic group above the middle of a C-C bond (bridge position) that is different from the one on graphene (top position).Therefore the most favorable adsorption geometry should evolve from bridge to top as the nanotube diameter increases.In any case, the electronic structure calculation performed on these systems evidenced that the DOS is a superposition of the ones of the isolated benzene and the CNT, consistently with the fact that the π-stacking is accompanied by a very small binding energy.Consequently, the adsorption of benzene on a CNT is more appropriately classified as physisorption, although van der Waals interactions are not involved and a possible explanation is related to the misalignment of the POAV of neighboring carbon atoms.The adsorption of benzene derived molecules with different dipole moment and electron affinity properties, such as aniline (C 6 H 5 -NH 2 ), toluene (C 6 H 5 -CH 3 ), and nitrobenzene (C 6 H 5 -NO 2 ), on semiconducting (8,0) SWCNT have been compared to the ones of benzene and of the "closedshell" functional groups NH 3 , CH 4 and CH 3 NO 2 [161].The general trend found is that compounds with closed shells are always physisorbed with minor changes of the CNT electronic structure while 
both physisorption and chemisorption are possible for compounds with open shells.Moreover the adsorption is promoted by either the functional groups or the benzene rings depending on the configuration: for perpendicular configuration, the functional groups prevail while for the parallel configuration the interaction occurs through the π electrons.Thus, in the first case the adsorption energies are at least 150 meV smaller.The equilibrium distances are smaller than the C 6 H 6 equilibrium distance and larger than the relevant functional groups ones, with the exception of toluene.
Similarly to the other CNSs, doping has been proposed to improve the physisorption of some molecular species on CNTs; B-and N-doped carbon nanotubes experience a large conductivity when exposed to CO or H 2 O [54]; more specifically, CO molecules are physisorbed onto N-doped CNT because no charge transfer occurs while in the B-doped case chemisorption takes place (see below).
As for graphene, accurate quantum chemistry methods and high level DFT calculations have been performed to study O 2 physisorption at the CNT "side-wall" [157,158].Also in this case the calculation scheme may affect the results and the choice of the exchange-correlation functional is crucial.Using MP2 and other accurate quantum chemistry methods (DFT-B3LYP, DFT-PBE), it has been shown that O 2 in a triplet state is physisorbed at a CNT, independently on the chiral vectors considered, at a distance of nearly 0.32 nm with no charge transfer and low binding energy (about 2.4 kcal/mol).
Gas Chemisorption in CNSs
In this section we treat the systems where the adsorbatesubstrate interaction can be unambiguously ascribed to chemisorption with predominant bonding phenomena.
As in the previous section, we treat separately the case of the hydrogen chemical adsorption on some of the most recurrent carbon nanostructured adsorption media due to its potential importance in new technology and energy sources.
Hydrogen Chemisorption.
Generally speaking, hydrogen chemisorption in carbon nanomaterials is not interesting for storage purposes because of the large binding energy involved that would make the experimental conditions for the adsorption/desorption cycles of little practical use.However, in storage experiments a significant amount of physisorbed hydrogen molecules could be involved in bonding phenomena when the hydrogen molecules get close to the carbons tanks to their thermal energy.Therefore hydrogen chemisorption must be considered explicitly.
6.1.1.Graphene.Chemisorption of atomic hydrogen on graphene leads to the appearance of a magnetic moment [162,163] with a local lattice distortion nearby the adsorption site.The phenomenon gives rise to a strong Stoner ferromagnetism [164] with a magnetic moment of 1 μ B In the "Adsorption" column, C and P stand for chemisorption and physisorption, respectively (a chemisorbed hydrogen molecule involves its dissociation).
In per hydrogen atom, as evidenced by the spin-density in Figure 18.According to the Stoner picture, magnetic ordering is driven by the exchange energy between the p z -orbitals of ther adsorption sites and either ferromagnetism or antiferromagnetism occurs if the H derived bound states are located at equivalent or different lattice sites.The energy difference between different adsorption sites, namely top, bridge and hollow, is small and hydrogen diffusion occurs even at low temperature; as a consequence, two H atoms may easily recombine and form molecular hydrogen that is immediately desorbed from the graphene [165,166].
On the other hand, full hydrogen coverage of both sides of an isolated graphene layer form a stable structure where each carbon undergoes a hybridization transition from sp 2 to sp 3 .The situation is different at intermediate coverage and strongly depends on the overall magnetization as indicated by the linear dependence of the secondary H adsorption binding energy on the "site-integrated" magnetization [167].Therefore, at least at low temperature, it would be possible to control the adsorption dynamics of H atoms by tuning the substrate magnetization.In Table 1 we report some selected data concerning the properties of hydrogen adsorption on graphene.
6.1.2.Fullerenes.Novel fullerene organo-metallic molecules have been deeply studied for hydrogen storage.To this aim, light elements, either in interstitial (Li and F) or in substitutional sites (N, B and Be), have been investigated as doping species of C 36 and C 60 by means of LDA and GGA abinitio total energy calculations [23].Fullerenes doped with B and Be at substitutional sites exhibits large hydrogen binding energies (0.40 and 0.65 eV, respectively) due to the strong interaction between the B (Be) p z -orbital and the hydrogen σ molecular orbital (MO).
The orbital interaction, evidenced in Figure 19, causes the splitting of the H 2 σ MO bonding state below the Fermi level, whereas the B p z -state, that is normally located in the range 1-3 eV above the Fermi energy for B-doped fullerenes, shifts to higher energy values.Similar phenomena occur also for C 35 Be-H 2 .
The charge transfer analysis, performed along the direction orthogonal to the hydrogen axis (see Figure 20), shows that, in the case of B, only few electrons are involved in the formation of a "three-center" bond, in contrast with the Be case; therefore the hydrogen adsorption energy E a for Be is larger and nearly insensitive to the number of adsorbed molecules than the one for B (see Figure 21) confirming that highly localized p z -orbitals are needed for not dissociative adsorption.In the B case, moreover, hydrogen desorption may occur more easily.In spite of the advantages of Be over B, however, a controlled Be doping is difficult to obtain also because of its toxicity while B-doped fullerenes has been already synthesized.In particular, first principles molecular dynamics simulations have revealed that C 54 B 6 hydrogenation is unstable and that the reaction path (see Figure 22) causes the desorption to occur within the picosecond timescale [168].Among the other doping species investigated, Si is interesting because industrial C 60 synthesis is performed on silicon surface [31].H 2 adsorption on the Si site occurs at a distance of d 0 = 0.256 nm with 0.15 eV binding energy that indicates an intermediate state between physical and chemical adsorption.A similar situation is found also in Ni-doped fullerenes [27] where the Ni valence states are depleted by about half an electron resulting in large van der Waals interactions with a gravimetric ratio of 6.8 wt%.From Table 2, where are reported some selected results concerning hydrogen adsorption on fullerenes, it emerges that atomistic simulations predict Si, Li, Ca, and Sr as the doping species that could enhance most of the hydrogen uptake in "fullerene-like" CNSs.
6.1.3.Carbon Nanotubes.Some of the most notable results found in the literature concerning hydrogen adsorption on a CNT are collected in Table 3 including data on both chemical and physical adsorption.Following experimental evidences, hydrogen chemisorption has been treated by DFT total energy calculations studying two energetically favored sites where atomic hydrogen is chemisorbed [39].Both of them are accompanied by an sp 2 to sp 3 hybridization transition, the most stable being characterized by the hydrogen atoms alternating outside and inside on the tube "side-wall" (zigzag type).Hydrogen half-full coverage of CNTs has been investigated with high accurate quantum chemistry models showing that this configuration is more stable with respect to the full coverage case and suggesting that the deformations induced by the adsorption of H atoms can affect the stability of CNTs [38].However, many experimental studies have shown that the chemisorbed hydrogen storage capacity on pure CNT media is less than 0.01 wt% at room temperature, resulting impractical for storage applications.As for fullerenes, CNT doping with metallic impurities can improve the situation, as evidenced using Ti [28].Unpolarized spin density calculations have shown that, while an H 2 molecule approaches a Ti coated zigzag CNT, the energy decreases in two steps, the first one due to a charge overlap resulting in an increased attraction between H 2 and Ti and the second one related to the H 2 molecule dissociation with a final binding energy of 0.83 eV.This scenario is quite different from the case of Ti decorated fullerenes (where H 2 is simply physisorbed) because of the different coordination numbers of Ti in the two cases: in the CNT case, indeed, the larger Ti charge is responsible for the H 2 dissociation and the subsequent chemisorption.The first H 2 chemisorption event is followed by the physisorption of three other hydrogen molecules on the same Ti site.Alternatively four hydrogen molecules 
can Energy (eV) be simply physisorbed at a Ti decorated site in a low energy (0.1 eV lower than the previous one) and high symmetry configuration.In this case, the bonding mechanism is quite similar to the Dewar, Chatt and Duncanson model because it implies the donation of 1e to the 4H 2 σ * antibonding molecular orbital (hybridized with the Ti d-state) followed by the transfer of 0.4 e to an empty Ti d-orbital.The above scenario is schematically drawn in Figure 23.
Because Pt surfaces can adsorb gaseous molecules reversibly, DFT calculations of molecular hydrogen on Ptdoped armchair CNTs have been performed [44,111] showing that chemisorption is accompanied by an oxidative addition to Pt involving its 5d-orbital.However Pt clustering may occur that favors molecular hydrogen dissociation and reversible atomic hydrogen chemisorption [111].
Pd decoration of SWCNT behaves similarly [45] with a storage capacity of about 3 wt%.The most stable configuration exhibits both the physical and chemical adsorption characters with five hydrogen molecules adsorbed onto two adjacent Pd atoms through a partial hybridization between the H 2 s-orbitals and the Pd d-orbitals.
Recent ab-initio molecular dynamics simulations of nitrogen decorated SWCNTs [169] have evidenced that hydrogen chemisorption occurs at 77 K and is stable at 300 K while physisorption is enhanced at both the temperatures.These results, obtained within a DFT-LDA scheme, have also evidenced that 0 K ground state properties of such systems should be revised at higher temperature where desorption or enhanced chemisorption may occur affecting storage.The scenario emerging from the above discussion and the results resumed in Table 3 is that TMs may enhance physisorption at the expense of having chemisorption on the CNT walls.
Gas Chemisorption for Sensing.
As mentioned in the Introduction, gas chemical adsorption in CNSs has been studied focusing on gas sensing.Of course, the computational techniques required are quantum chemistry techniques, DFT calculations, and similar.Due to the amount of the literature found, we have just treated nanostructured graphene and CNTs.
Graphene-Based NSs.
Graphene charge carrier concentration can be strongly modified by gas chemisorption.Therefore, the electronic and magnetic properties of GNRs can be modified by edge functionalization or substitutional doping.However GNRs with well controlled saturated edges without dangling bonds (DBs) are far to be produced; these defects usually enhance the covalent bonding of chemical groups and molecules thus playing a critical role in the feasibility of using such carbon-based nanostructures as gas sensors.Semiconducting armchair GNRs (AGNRs) are preferred with respect to zig-zag GNRs (ZGNRs) since gas molecule adsorption is expected to induce little modifications on the electronic properties of metallic ZGNRs.In the "Adsorption" column C and P stand for chemisorption and physisorption, respectively (a chemisorbed hydrogen molecule involves its dissociation).Blank spaces in the column of binding energies and/or equilibrium distances indicate the missing of the corresponding values in the original paper.Further details are referred in the cited articles.
Adsorption of many gas molecules (CO, NO, NO 2 , O 2 , CO 2 , and NH 3 ) has been studied by spin-polarized GGA-DFT total energy calculations [170]: among the different gaseous species considered, only NH 3 has been found to greatly enhance the AGNRs conductance after chemical adsorption; in this case a semiconducting/metallic transition occurs thus suggesting that, in principle, a "GNR-based" junction can be used to detect NH 3 (see Figure 24) by I-V measurements.Indeed, the GNR sensor exhibits a semiconducting behavior when no gas molecule is adsorbed while, after NH 3 adsorption, the current increases linearly with the applied bias evidencing a metallic behavior.
Molecular adsorption at vacancy sites in nanostructured graphene has been also investigated as a possible sensing mechanism and this system is expected to behave similarly to GNRs.Vacancies and divacancies can be introduced by ion or electron irradiation under vacuum conditions and their passivation is of crucial interest in the development of graphene nanoelectronics.Divacancies in graphene have been passivated using several possible gaseous species, such as O 2 , N 2 , B 2 , CO, and H 2 O, in the context of DFT ab-initio calculations [14].In the particular case of N 2 , for instance, the molecule undergoes dissociation and subsequent chemical adsorption on the graphene layer resulting in substitutional N impurities that introduce extra carriers and change the charge transport properties.A summary of the most important results discussed here can be found in Table 4 where we have included also data from physisorption studies.It is quite evident that chemical adsorption at divacancies is significantly stronger than adsorption at dangling bonds.
Carbon Nanotubes.
As steam reforming of natural gas is employed to produce hydrogen, the interest in CH 4 and hydrocarbons adsorption has grown rapidly.However the chemical functionalization of a CNT with hydrocarbons is difficult due to the low reactivity of these systems.Classical molecular dynamics and ab-initio calculations have been employed to study the adsorption improvement of accelerated CH 4 molecules (with energy in the range 5-100 eV) on CNTs [49].As methane cracking occurs, the obtained radicals (CH 3 , CH 2 and CH) are adsorbed in different ways depending on the incident energy, while no decoration is observed at low energy, CH 4 dissociates into carbon (that is adsorbed and the CNT wall) and hydrogen molecules for incident energy higher than 60 eV.Collisions can also break the tube wall and form structural defects that can be healed through high temperature annealing (2000 K), provided the incident energy is lower than 70 eV.Among the investigated SWCNT structures, the ones with larger radius show lower reactivity.The adhesion of radicals modifies the SWCNT transport properties as evidenced by the calculated DOS where localized energy state appears in the gap for CH 3 and CH adsorption.Weak binding between CH 4 and a (5,5) SWCNT is confirmed at zero temperature [48] while recent tight binding molecular dynamics calculations have evidenced that at room temperature dissociation reaction proceeds with low enthalpy change, provided the thermal energy is sufficient to get the methane and the CNT close enough [52].Doping with metallic particles can enhance the homolytic dissociation of the H-CH 3 bond; indeed, recent DFT calculations [171] have suggested that a zigzag nanotube, decorated with an interstitial C and Mo, can decrease the energy barrier for CH 4 dissociation thanks to the cooperation between the dipole induced in the CNT by the selfinterstitial C atom and the Mo-d-orbitals.
Also simple alkene (C 2 H 4 ) and alkyne (C 2 H 2 ) have been proposed for catalytic hydrogenation on Pt-doped armchair nanotube and studied by DFT [44].The ethylene interaction with a CNT is relatively weak, despite the significant charge transfer in the case of doped SWCNT.In the acetylene case, instead, the interaction is stronger and is presumably related to the observed hybridization transition from sp to sp 2 .Metallic doping is not the unique way to improve the CNT reactivity at 0K; indeed, accurate GGA-DFT calculation performed with a localized basis set (B3LYP/6−311+G * level of theory) have evidenced that nitrogen doping of CNTs enhances the oxygen stability at the CNT sidewall; this circumstance favors the methane cracking at the oxygen impurity at 0 K through the orbitals overlap [172].Then nitrogen doped CNTs can be engineered in order to obtain highly reactive catalysts, comparable to metal ones.
Besides it is only marginally pertaining the theme of gas-CNT interaction, it is worth to mention that CNTs can be functionalized to improve their solubility in water or in organic solvents that is important for, for example, nanomedicine.For instance, the functionalization of a CNT with a carboxylic group or methane-derived radicals (-CH 2 -OH, -CH 2 -Cl, -CH 2 -SH, and -CONH-CH 3 ), has been investigated with regard to the -OH free radical scavenging capability [173].ab-initio calculations, in the framework of B3LYP hybrid HF-density functional and the 3−311+G(d) localized basis set, have shown that the CNT elicity affects the free radical scavenger capacity, armchair tubes being more effective than zig-zag ones.Moreover it is shown that functional groups with the best performance are the ones containing just carbon, hydrogen and nitrogen atoms.Moreover different vacancy defects affect differently the OH addition on the SWCNT while the Stone-Wales point defects show the largest site dependent effect [174].
The chemical reactivity of CNTs for oxygen chemisorption has been addressed by MP2 calculations, to obtain accurate binding energies, and by DFT calculations (at various levels of theory) for larger systems. It is shown that singlet O2 is the most stable chemisorption configuration, but it is not expected to occur at room temperature due to the large activation barrier [158]. It should be emphasized that the choice of exchange-correlation functional strongly affects the O2 ground-state properties found within DFT [157].
Using analogous theoretical schemes, NH3 on (9,0) CNTs has been studied, evidencing no charge transfer and suggesting that no chemisorption occurs [175].
CNTs used as chemical sensors for other gaseous species can take advantage of the change in electrical conductivity induced by the adsorption of functional groups.
Despite recent controversial data [176], theoretical results seem to indicate that donor or acceptor species may change the carrier density of a p-type semiconducting CNT. In the case of a metallic CNT, however, the transport properties show a peculiar dependence on the positions of the adsorbed molecules, with the possible suppression of conductivity [56]. It is known that transport in a metallic CNT occurs through two channels corresponding to the Bloch states at the K and K′ points of the graphene first Brillouin zone. A simple tight-binding picture of the coupling between an impurity level ε0 and the CNT pz-orbitals shows that, in the case of an isolated impurity, one of the two channels is suppressed. Accurate DFT and non-equilibrium Green function (NEGF) transport calculations confirm this simple view, as shown by the transmission curves for different adsorbates, such as H, COOH, OH, NH2, and NO2, reported in Figure 25. If two impurities are adsorbed on the CNT sidewall, tight-binding and DFT-NEGF calculations again agree, showing that the transport behavior depends on the relative positions of the two impurities, ΔR = na1 + ma2 (a1 and a2 being the basis vectors of graphene): if n − m = 3p for some p ∈ Z, the transmission is the same as that obtained with only one impurity; if this condition is not satisfied, the transmission is completely suppressed (see Figure 26). Some molecular species, such as CO, are not chemisorbed on semiconducting SWCNTs; however, the local chemical activity can be changed by applying a uniaxial stress orthogonal to the tube axis so that, for example, the CO molecule can be bonded on the surface [53].
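The two-impurity selection rule described above can be stated compactly: for two adsorbates separated by ΔR = n·a1 + m·a2, one conduction channel survives only when n − m is a multiple of 3. A minimal sketch (the function name is illustrative, not from the cited work):

```python
def surviving_channels(n: int, m: int) -> int:
    """Conduction channels left open for two adsorbates separated by n*a1 + m*a2.

    A single impurity suppresses one of the two K/K' channels; the second
    impurity closes the remaining channel unless n - m is a multiple of 3.
    """
    return 1 if (n - m) % 3 == 0 else 0

print(surviving_channels(6, 3))  # 1: n - m = 3, transmission as with one impurity
print(surviving_channels(5, 3))  # 0: transmission completely suppressed
```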
Impurity inclusion in CNTs can improve the chemical sensing of molecules that are not chemisorbed onto the pristine CNT sidewall. Ab-initio calculations of a B-doped CNT for CO and H2O detection have evidenced an enhanced chemical reactivity, with increased binding energy [54] accompanied by a large charge transfer from the nanotube to the molecule.
Systematic investigations of small molecules adsorbed on a Pt-doped armchair SWCNT [44] have shown chemisorption and significant charge transfer from the nanotube to the adsorbate for most of the examined species, resulting in a change of the CNT conductance. NH3 behaves in the opposite way due to the high LUMO energy of this molecule, which, differently from the other cases, inhibits any back-donation from the nanotube.
TM-doped CNT structures are perhaps the most promising candidates for the detection of small molecules under standard conditions. In recent experiments, CNT samples have been pretreated by irradiation with Ar ion beams to form vacancies where TM atoms are strongly bonded thanks to their partially occupied d-orbitals.
DFT total-energy calculations have shown that substitutional atoms of most of the 3d transition metals (Ti, V, Cr, Mn, Fe, Co, Ni) exhibit a high binding energy in different sites of an armchair carbon nanotube (see Figure 27), whereas Cu and Zn are rather unstable because of their fully occupied d bands [55]. The general trend is that light transition metals can bind several adsorbate molecular species (N2, O2, H2O, CO, NH3, and H2S) with large binding energies. Water molecules are weakly bound to most of the active sites, suggesting that these sensors are robust against humidity. Ni-doped CNT systems appear to be the most promising candidates for CO detection, as indicated by the conductance data reported, together with the adsorption energies, in Figure 28: for this system, the electrical resistance change per active site is greater than 1 Ω. In the same figures, data concerning the adsorption of molecular species at mono- and divacancies on CNTs are also reported. To give a general view of gas sensing in CNTs, some of the most important results discussed above are summarized in Table 5, where the main features concerning both the binding energies and the ground-state configuration distances are provided, together with the level of theory used. For completeness, the same table also reports data concerning physisorption in various CNT structures.
Conclusions
The wide variety of data from atomistic simulations of gas adsorption in CNSs, sometimes affected by dispersion, makes it difficult to draw a general scenario. It must be stressed, however, that "in-silico" experiments play a fundamental role, of increasing importance, in the understanding of the phenomena involved in adsorption. Besides being a fundamental support to experiments, atomistic simulations and total-energy calculations may reveal unexpected phenomena that can guide experimentalists. For instance, while general agreement has been reached concerning the unsuitability of pure CNSs for hydrogen storage, many predictions on alkali or transition-metal doping indicate a new promising route, where, however, the problem of contaminants may represent a challenge.
Concerning methane adsorption in CNSs, instead, atomistic simulations predict storage properties close to the needs of industrial applications.
Ab-initio total-energy modelling is mandatory for impurities, doping, chemisorption, and sensing, due to the inherent complexity of the processes involved. In these cases, simulations give encouraging results and point out new challenges in controlling the local chemistry of CNSs for sensing, which is still under development. Generally speaking, atomistic modelling has shown that TM doping is most probably the right way to engineer the various CNSs in order to obtain valuable materials for sensing devices. A careful choice of the correct scheme is often mandatory to avoid artifacts; this should be evaluated case by case, because even DFT-LDA can be appropriate for selected systems. It should be emphasized, however, that the relationship between in-silico and real experiments is often vitiated by the fact that the most accurate predictions available concern ground-state properties; at higher temperatures (even room temperature), the scenario may change dramatically, because most of the carbon nanostructured materials investigated may behave quite differently due to possible hybridization transitions induced by thermal distortions. In the near future, the enormous increase in computational resources and the improvement of algorithms should play a key role in making RT in-silico experiments, such as accurate ab-initio molecular dynamics, feasible.
Figure 1 :
Figure 1: CNT hydrogen storage capacities from the literature versus the year of publication. From [40].
Figure 8 :
Figure 8: Electric field associated with neutral and charged fullerenes C28q (q = −2, 0, +2) at the center of a hydrogen molecule located on top of a hexagonal ring (a). Hydrogen charge variations induced by an electric field of 2 × 10^10 V/m parallel (b) and perpendicular (c) to the molecule axis. From [22].
Figure 10 :
Figure 10: GCMC calculated isotherms for five cone structures of different sizes: 14.9 nm 3 (a and c) and 29.9 nm 3 (b and d).Plots (c) and (d) show the low fugacity (0-2 bar) details of (a) and (b) respectively.GCMC results for the bulk hydrogen at 77 K are shown as lines.From [143].
Figure 14 :
Figure 14: Differential pore volume (a), surface area distributions (b), cumulative pore volume distribution (c) of carbon B calculated from high-pressure CO 2 adsorption isotherm at 273 K using NLDFT and three center GCMC models.Fit to the experimental isotherm (d) (experimental isotherm: points; theoretical fit: lines).From [154].
Figure 17 :
Figure 17: Energy curve of different benzene adsorption sites at the CNT sidewall (distance of 0.32 nm). From [50]; copyright 2005 by the American Physical Society.
Figure 19 :
Figure 19: The calculated local density of states (LDOS) for C 35 B(Be)-H 2 (a)-(d).The B(Be) and the H 2 LDOS are solid lines and open circles, respectively.Squared differences between the nonbonding and bonding states due to H 2 sorption (e) and (f).From [23]; copyright 2006 by the American Physical Society.
Figure 20 :
Figure 20: Differential planar electron density for B (a) and Be (b), respectively: solid and dotted contours indicate electron accumulation and depletion, respectively.(c) Differential planar electron density along the x-axis for B and Be.The positions of the H 2 and B (Be) are indicated.From [23]; copyright 2006 by the American Physical Society.
Figure 21 :
Figure 21: Binding energy for each added H2 as a function of the number of adsorbed H2 molecules (a) and C54B6 with 6 H2 molecules (b). From [23]; copyright 2006 by the American Physical Society.
Figure 22 :
Figure 22: Minimum-energy path of dissociation of a H2 molecule obtained from the CI-NEB calculation (a)-(e). Numbers are the distances between the hydrogen atoms in angstroms. (f) The calculated minimum-energy path of the dissociation of the H2 molecule. Two activation energy barriers, at 32 and 28 meV, are found from left to right. From [168]; copyright 2008 by the American Physical Society.
Figure 23 :
Figure 23: (a) Two different views of optimized structures of t80Ti-4H2; (b) PDOS at the Γ point; (c) σ* antibonding orbital of four H2 complexes; (d)-(f) isosurface of the state just below E_F at three different values: at Ψ = 0.08 the Ti d-orbital is visible; at Ψ = 0.04 the Ti d-orbital, two carbon π-orbitals, and the 4H2 σ* antibonding orbital are hybridized; at Ψ = 0.02 it emerges that the other four carbon atoms are also involved in the bonding. From [28]; copyright 2005 by the American Physical Society.
Figure 24 :
Figure 24: I-V bias curves for the GNR sensor before and after the adsorption of NH3 and CO2. The inset shows schematics of such a GNR sensor, consisting of one 10-AGNR (detection region) and two metallic 7-ZGNR leads. From [170].
Figure 26 :
Figure 26: Transmission functions of various tubes with two hydrogen atoms adsorbed. The blue (red) solid lines are obtained with n − m = 3p (n − m ≠ 3p). The black dashed line is the transmission function of a pure CNT. From [56]; copyright 2008 by the American Physical Society.
Figure 27 :
Figure 27: Structural schematics and binding energies for a 3d transition metal occupied monovacancy (blue), divacancy I (green) or divacancy II (red) in a (6, 6) carbon nanotube.Binding energies of carbon atoms in the same sites are indicated as horizontal lines with the same color code.From [55]; copyright 2010 by the American Physical Society.
Table 1 :
Hydrogen adsorption on graphene: data selected from recent atomistic simulations.
Table 2 :
Hydrogen adsorption on fullerenes: data selected from recent atomistic simulations. In the "Adsorption" column, C and P stand for chemisorption and physisorption, respectively (a chemisorbed hydrogen molecule involves its dissociation). Blank spaces in the binding energy and/or equilibrium distance columns indicate that the corresponding values are missing from the original paper. Further details can be found in the cited articles.
Table 3 :
Hydrogen adsorption on a CNT: data selected from recent atomistic simulations.
Table 4 :
A selection of recent calculations concerning gas adsorption on graphene.
In the "Gas type" column, the adsorbed gaseous species and the adsorption phenomena encountered are reported; C and P stand for chemisorption and physisorption, respectively. In the second column, T, H, and B stand for Top, Hollow, and Bridge sites, respectively. Blank spaces in the third/fourth columns indicate that the corresponding values are missing from the original paper. Further details can be found in the cited articles.
Table 5 :
A selection of recent calculations regarding gas adsorption on a SWCNT. In the "Gas type" column, the adsorbed gaseous species and the adsorption phenomena encountered are reported; C and P stand for chemisorption and physisorption, respectively. In the third column, T, H, and B stand for Top, Hollow, and Bridge sites, respectively. Blank spaces in the third/fourth columns indicate that the corresponding values are missing from the original paper. Further details can be found in the cited articles.
Estimating Passenger Demand Using Machine Learning Models: A Systematic Review
This article investigated machine learning models used to estimate passenger demand. These models have the potential to provide valuable insights into passenger trip behaviour and other inferences. Research on estimating passenger demand with machine learning models, and the methodologies used, is fragmented. To synchronise these studies, this paper conducts a systematic review of machine learning models used to estimate passenger demand. The review investigates how passenger demand is estimated using machine learning models. A comprehensive search strategy is conducted across three main online publishing databases, locating 911 unique records. Relevant record titles, abstracts, and publication information are extracted, leaving 102 articles. These articles are then evaluated against the eligibility requirements, yielding 21 full-text papers for data extraction. Three thematic research questions, covering passenger data collection techniques, passenger demand interventions, and intervention performance, are reviewed in detail. The results of this study suggest that mobility records, LSTM-based models, and performance metrics play a critical role in passenger demand prediction studies. Model evaluation was mostly restricted to three performance metrics, indicating a need for improved evaluation metrics. Furthermore, the review found an overreliance on the long short-term memory (LSTM) model for estimating passenger demand; minimising the limitations of the LSTM model would therefore generally improve the estimation models. In addition, having an adequate training set is crucial to avoid overfitting, and it is advisable to consider multiple metrics for a more comprehensive evaluation.
Introduction
Getting to a location to participate in activities such as work, recreation, and socialisation is a necessity for human survival, and transport enables these activities to be carried out. Individuals use private transportation, usually self-owned vehicles, while the public uses systems of shared transportation units provided by operators. More people patronise public transport; therefore, its impacts are immediately felt when it is effective. The demand for transportation services continues to increase as a result of increasing urbanisation. In London, more than two billion passenger trips were made in 2009 (W. Wang et al., 2011). Increasing passenger demand threatens the safety and quality of transportation services. Public transport operators must optimise operations by accurately estimating passenger demand, fleet size, and income (Hänseler et al., 2017). Estimating passenger demand is the foundation of an efficient transportation system, and it is challenging if operators do not use modern technologies and mathematical models in managing transportation (Sbai & Ghadi, 2018). The deployment of new transportation-related technology has resulted in an exponential increase in the availability of data on passenger movements. Furthermore, recent strides in machine learning research have resulted in numerous applications of machine learning (Hillel et al., 2021).
Previous passenger demand estimation
Passenger demand estimation studies aim to forecast the number of people who will use a particular transportation system or service in the future. These studies are crucial for transportation planners to determine the appropriate level of service and to guide decisions about infrastructure investments and transportation policies. One popular method for estimating passenger demand is the gravity model, which is widely used in transportation planning. The model assumes that the number of trips between two locations is proportional to the product of their populations and inversely proportional to the distance between them (Becker et al., 2018). A study in Madrid focused on the demand for interurban rail travel. Its analysis is based on the estimation of disaggregated Nested Logit models using information from travellers, analysing the competition between newly built high-speed trains and other modes. An evaluation was conducted using cost-benefit analysis to inform investment decisions. In addition, the study analyses the response of demand to various policy scenarios for high-speed train services, using willingness to pay for improved levels of service as an indicator (Román et al., 2010). In general, passenger demand estimation studies are critical for transportation planning and can provide valuable information on the factors that influence travel behaviour.
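The gravity model described above can be written in a few lines. The calibration constant k and the distance exponent beta below are illustrative assumptions, not values from any cited study:

```python
def gravity_trips(pop_i: float, pop_j: float, distance: float,
                  k: float = 1.0, beta: float = 1.0) -> float:
    """Estimated trips between zones i and j: proportional to the product
    of their populations and inversely proportional to distance**beta."""
    return k * pop_i * pop_j / distance ** beta

# With beta = 1, doubling the separation halves the estimated trips.
near = gravity_trips(50_000, 80_000, distance=10.0)
far = gravity_trips(50_000, 80_000, distance=20.0)
print(near / far)  # -> 2.0
```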
Background
There are ten different types of systematic review; this review adopts an effectiveness review. Effectiveness is the degree to which an intervention, when applied appropriately, produces the desired outcome. When an effectiveness review is adopted, the PICO framework is recommended for developing the review question (Munn et al., 2018).
PICO framework for Question Development
The PICO process uses a case scenario from which a question is constructed: P represents the characteristics of the population, I the study intervention, C the comparator, and O the outcomes. The case statement is as follows: to operate public transport successfully, operators should be able to anticipate passenger demand and infer journeys; however, operation is negatively affected by the absence of automatic data collection in the service. Table 1 shows the PICO framework.
The Question
The question developed for the review is therefore: what machine learning models can be used to estimate travel demand across various data collection systems?
Search Strategy
The purpose of the search strategy is to find papers that will be useful for the review. The Crossref, Scopus, and Google Scholar search engines are used to locate articles, providing comprehensive coverage of the field. The same search procedure is used on each database. This review focuses on papers with trip inferences; therefore, only papers with titles directly related to trip inference are included.
The following initial phrases are tested: travel pattern, trip inference, passenger density, travel demand, and origin-destination. The phrases are limited to the title, which helps the researcher filter out irrelevant papers. To select papers that discuss machine learning techniques, only papers containing one or more selected machine-learning-related phrases are retained.
The following machine-learning phrases are tested: machine learning, neural network, decision tree, computer vision, artificial intelligence, random forest, boosting, support vector, deep vision, and image processing. The initial search was not limited to a specific time frame. A Google Scholar search for papers containing at least one of the above machine-learning-related phrases alongside at least one of the trip-inference phrases across all relevant fields returned 17,700 results as of August 2022. The search in the Scopus database returned more than 1,240 results, while Crossref returned 1,730 results.
Selection Criteria
Setting inclusion and exclusion criteria is a standard and required practice when designing systematic research protocols, as it helps the researcher produce reliable and repeatable results. The following inclusion criteria determine whether articles found in the search are included in the study (Booth Andrew et al., 2016):
Inclusion Criteria:
1. Studies that employ one or more machine learning techniques for predicting trips or demand.
2. Studies on the estimation of passenger demand in transport modes.
3. Studies that investigate passenger densities from deep vision systems or Google data, or that use non-intrusive methods.
4. Studies that investigate passenger data from collection systems.
Exclusion Criteria:
The following exclusion criteria determine whether articles found in the search are excluded from the study:
1. Papers or documents published before 2018.
2. Studies in peer-reviewed journals or conference proceedings not written in English.
The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines are used for paper selection. First, duplicates are eliminated from the search records. Subsequently, the titles and abstracts of the articles are checked against the eligibility criteria.
Finally, the remaining full-text articles are assessed for eligibility (Liberati et al., 2009). Papers with multiple datasets and methodologies are treated as separate studies for analysis.
Data extraction strategy
A list of attributes is created to extract relevant data from each study without bias and to limit subjectivity in the data extraction process. The attributes are specific, objective, and quantifiable or categorical. The studies are carefully reviewed, and every attribute is noted and tallied. The attributes are shown in Table 2.
Study selection
The following search terms are used to carry out the search strategy outlined in Section 3.1.
Scopus: (TITLE ("trip inference" OR "origin destination" OR "travel pattern" OR "passenger density" OR "travel demand")) AND TITLE-ABS-KEY ("machine learning" OR "neural network" OR "decision tree" OR "computer vision" OR "random forest" OR "boosting" OR "support vector" OR "deep vision" OR "image processing")
Google Scholar: ("trip inference" OR "origin destination" OR "travel pattern" OR "passenger density" OR "travel demand") AND ("machine learning" OR "neural network" OR "decision tree" OR "computer vision" OR "random forest" OR "boosting" OR "support vector" OR "deep vision" OR "image processing").
Semantic Scholar: ("trip inference" OR "origin destination" OR "travel pattern" OR "passenger density" OR "travel demand") AND ("machine learning" OR "neural network" OR "decision tree" OR "computer vision" OR "random forest" OR "boosting" OR "support vector" OR "deep vision" OR "image processing").
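The queries above pair every trip-inference phrase with every machine-learning phrase, joining each group with OR and the two groups with AND. A small helper (hypothetical, not from the reviewed paper) shows the construction:

```python
trip_phrases = ["trip inference", "origin destination", "travel pattern",
                "passenger density", "travel demand"]
ml_phrases = ["machine learning", "neural network", "decision tree",
              "computer vision", "random forest", "boosting",
              "support vector", "deep vision", "image processing"]

def boolean_query(group_a, group_b):
    """Join each phrase group with OR, then join the groups with AND."""
    quote = lambda terms: "(" + " OR ".join(f'"{t}"' for t in terms) + ")"
    return quote(group_a) + " AND " + quote(group_b)

query = boolean_query(trip_phrases, ml_phrases)
print(query.startswith('("trip inference" OR'))  # True
```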
Due to the restriction on search length in Google Scholar, this search is divided into two separate searches, with the results combined. The search was carried out on 20/12/2019 on all three databases. Figure 1 shows a PRISMA flow chart of the study selection process.
A total of 2,095 records were returned from the search: 346 from Crossref, 635 from Scopus, and 1,114 from Google Scholar. Duplicates were then removed, leaving 911 records to be screened. The total number of records after removing duplicates is larger than the results obtained from two of the databases individually, showing that Crossref and Scopus returned results that were not found in the Google Scholar search. The remaining records were then screened against the eligibility criteria outlined in Section 2.4.2. During this screening, 809 articles were excluded for relevance based on their titles and abstracts.
The full text was obtained for the remaining 102 articles for further review. Of these, another 81 were excluded based on the selection criteria, leaving 21 selected articles for data extraction.
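The selection counts reported through the PRISMA stages are internally consistent, which simple arithmetic confirms:

```python
# Records returned per database (figures from the text).
crossref, scopus, google_scholar = 346, 635, 1_114
total = crossref + scopus + google_scholar
print(total)  # 2095 records before deduplication

after_dedup = 911                          # unique records to screen
screened_out = 809                         # excluded on title/abstract
full_text = after_dedup - screened_out     # full-text articles reviewed
selected = full_text - 81                  # papers kept for data extraction
print(full_text, selected)  # 102 21
```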
Selected papers
The final number of papers used for data extraction is 21. Table 3 provides a unique identifier and year of publication for each paper. Table 4 summarises the published sources, detailing all the journals and conference proceedings from which the papers were selected; the articles come from a wide spread of publications, with 13 different journals and 8 different conferences featured. Figure 1 shows the distribution of the articles from 2019 to 2022. There is a recent spike in interest in publications on demand estimation using machine learning models: 48% of the articles were published in 2022, with the remaining 52% published before 2022. This demonstrates a rising research interest in demand estimation using machine learning models.
What are the methods for collecting passenger data?
The following sections present an overview of the methods for collecting passenger data used in the selected articles.
Question 1.a: From what type of transport system has passenger data been collected?
Based on the responses to Question 1.a, the transportation systems are grouped into taxis, buses, and metro/rail: 62% of the transportation systems were taxis, 19% were buses, and 19% were metro/rail systems.
Question 1.b: What device/instrumentation was used to collect the data?
From the review, 57% of the data were recovered from mobility records or databases, 19% from smart cards, 19% from automatic fare collection systems, and 5% from image-capture records. All studies conducted on metro/rail systems collected information from smart cards except (Han et al., 2022a; Q. . Researchers resorted to smart cards and automatic fare collection systems because they generate the desired data at lower cost, requiring minimal installation, personnel, and man-hours (Ait-Ali & Eliasson, 2022).
Question 1.c: What type of information was collected for the study?
The information collected from devices or instruments is shown in Table 5. 19% of the studies used the tap-in/tap-out information of smart cards for analysis. Smart cards have a large data-volume capacity, broad coverage, and high authenticity, and their immutable IDs allow researchers to obtain long-term travel information about passengers, which offers the potential to mine travel patterns and make travel predictions. Researchers therefore prefer tap-in/tap-out information because it captures complete trip attributes in addition to enabling real-time prediction (Ye & Ma, 2022; Z. Zhao et al., 2018). A significant number of studies, about 43%, used information on boarding and exiting, drawn from mobility records/databases or smart cards. About 24% used passenger request information, which usually contained the request ID, pick-up time, pick-up coordinates, drop-off time, and drop-off coordinates (Liang et al., 2019). S11, S12, S18, S19, and S20 combine passenger requests with vehicle GPS, meteorological data, or a coded road network; such studies constituted about 25% of the total.
The information was classified into passenger-orientated information and transport-unit-orientated information. Passenger-orientated studies are those in which the primary data used for the analysis were passenger data, while transport-unit-orientated studies based their analysis on the transport unit. Table 5 shows the information collected and the orientation of each study. 90% of the studies were in the passenger-orientated category and 4% were transport-unit orientated; S12 was the only study to cover both categories. Data from transport-unit-orientated studies were applied to taxi services, while data from passenger-orientated studies were mostly applied to metro, rail, and bus services. Researchers were able to access vehicular data because they were available in open-source datasets from the operators (Huang et al., 2022). The availability of passenger data on smart cards influenced the focus of transport system studies. In transport systems such as metro, rail, and bus systems, the desired passenger data were obtained through smart cards, including information such as travel patterns, frequency of use, and duration of trips. These data are analysed to gain insight into passenger behaviour and preferences, which informs decisions related to the design and operation of these systems. As a result, studies on these transport systems tend to be passenger-orientated, as they are based on readily available data. The nature, size, period, and temporal aggregation of the collected data are shown in Table 6. For most studies, the data consisted of transactional requests or smart card records, since obtaining transactional data is less costly than conducting manual surveys, and smart card systems and service requests were readily available at the study locations (Sun et al., 2020). The dataset sizes ranged from 10,000 to more than 700 million trips.
The size of the datasets correlates with the spatial area under study. For S5, 694,000 records were collected on 18 routes with 1,781 bus stops, while S6 had about 769 million records covering 42,000 bus stops and 325 subway stations. S1, S2, S6, S7, S13, and S15 collected more than ten million records from complete networks. S4, S5, S16, S19, and S20 collected fewer than 10 million transaction records; these studies were carried out on selected routes or for a transport service with a limited data-collection time frame.
Although time is continuous, various time intervals were adopted in the studies, and this choice of interval is an essential part of each study. S19 did not state its temporal aggregation and was excluded from the temporal aggregation analysis. Data were temporally aggregated into 5 min, 10 min, 15 min, 20 min, 25 min, 30 min, 60 min, and 24 h intervals; S1 and S7 used 24 h and 10 min, respectively. 21% and 37% of the studies used 15 min and 60 min, respectively, for temporal aggregation, and S21 used all categories of aggregation in its study. The studies adopted varying time intervals based on population dynamics; the adopted interval must also be relatively stable so as not to conceal the time-varying laws of passenger demand. The review showed that 15 min and 60 min were the preferred time intervals for data aggregation (Giraldo-Forero et al., 2019; Yang et al., 2022).
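Temporal aggregation of the kind described amounts to flooring each record's timestamp to the start of its bin and counting records per bin. A stdlib-only sketch (the tap timestamps are invented for illustration):

```python
from collections import Counter
from datetime import datetime

def bin_start(ts: datetime, minutes: int) -> datetime:
    """Floor a timestamp to the start of its aggregation interval."""
    floored = ts.minute - ts.minute % minutes
    return ts.replace(minute=floored, second=0, microsecond=0)

taps = [datetime(2022, 8, 1, 7, 2), datetime(2022, 8, 1, 7, 13),
        datetime(2022, 8, 1, 7, 18), datetime(2022, 8, 1, 7, 44)]
demand = Counter(bin_start(t, 15) for t in taps)
print(demand[datetime(2022, 8, 1, 7, 0)])   # 2 taps in the 07:00-07:15 bin
print(demand[datetime(2022, 8, 1, 7, 15)])  # 1 tap in the 07:15-07:30 bin
```

Passing minutes=60 gives the hourly aggregation most studies preferred.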
The duration of data collection ranged from 5 to 365 days, with most studies collecting their data over 30 days. In studies such as S6, S11, S12, S13, S14, and S19, data were collected over 120 days. The data were readily available from automatic collection systems and open-source databases, which made collection easier; researchers therefore resorted to collecting long-duration data for their studies.
What functions have been used to determine passenger demands and where?
The following sections present an overview of the interventions used to estimate passenger demand in the selected articles.
Question 2.a Which intervention and class of intervention was used in the study?
Based on the responses to Question 2.a, the interventions were categorised according to the type of model in addition to the specifics of the model. The known types of model are supervised, unsupervised, semi-supervised, and reinforcement learning; the identified model classes were further divided into subtypes. Table 7 shows the interventions, the type of model, and comments. 76% of the studies conducted the research using a supervised regression model; the remaining share was split between unsupervised clustering models (19%) and unsupervised dimension-reduction models (5%). Forecasting traffic demand using deep neural networks has attracted widespread interest, and most studies that used neural network models combined several machine learning models in their intervention (Han et al., 2022b). Recurrent neural networks are suitable for forecasting studies but are limited in using information from the distant past and perform poorly for long-term memory (Liyanage et al., 2022). Therefore, S3, S5, S8, S9, S10, S11, S12, S13, S14, S15, and S20 combined long short-term memory (LSTM) models with other models in their studies; hybrid LSTM studies constituted 73% of the supervised regression model studies. LSTM models are generally found to outperform plain RNNs in time-series forecasting (Yeon et al., 2019), so LSTM-based models are preferred for predicting and forecasting series.
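The LSTM-based interventions all frame demand prediction as supervised learning over sliding windows of past counts: each training example is a window of recent demand and the value to predict next. The windowing step, independent of any deep-learning library, can be sketched as follows (the demand figures are invented for illustration, not from the reviewed studies):

```python
def sliding_windows(series, lookback):
    """Turn a demand series into (input window, next value) training pairs,
    as fed to an LSTM or any other sequence regressor."""
    pairs = []
    for i in range(len(series) - lookback):
        pairs.append((series[i:i + lookback], series[i + lookback]))
    return pairs

demand = [120, 135, 150, 160, 140, 130]   # passengers per 15 min interval
X_y = sliding_windows(demand, lookback=3)
print(X_y[0])   # ([120, 135, 150], 160)
print(len(X_y)) # 3 training pairs
```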
Question 2.b
What was the unit of analysis? S1 uses travel patterns to improve the accuracy of passenger travel information prediction; it achieved its objectives by analysing the accuracy of the proposed model. S2 and S7 analyse demand predictions by aggregating data into different time zones and updated transactions; aggregating the data keeps the abundance of results manageable. S3, S8 and S15 analyse short-term demand prediction and the effect of weather conditions, such as air temperature and the air quality index. This was determined from the passenger correlation flow, which showed that people are attracted to the same type of facility; furthermore, the shorter the distance to the bus stop, or the more bus routes serving the stop, the higher the correlation of tourist flow. S4 applies a spatiotemporal dynamic time-warping test in which the correlation between zones is evaluated on short-term passenger demand data. S5 analyses predictions over varying temporal aggregations of short-term data. S6 analyses the variability of travel patterns: probability density functions were developed, and curves representing different groups of passengers were fitted and evaluated. S9, S11, S12, S13, S14 and S16 analyse the number of passenger demands in each region; S11 also incorporates an objective function in its analysis. S10 analyses the prediction of passenger demand over a fixed temporal interval. S17 and S21 perform a spatial analysis that considers the requested demand and attributes its intensity to land use, as well as a temporal analysis in which demand behaviour is analysed over time; in a further analysis, the observed demand in specified periods was critically examined. S18 generates various clusters, using the elbow method to find the optimised number of clusters.
Due to spatial restrictions, a high number of clusters is also used to enable the study to determine hotspots. S19 analysed the distance travelled by passengers and also used an evaluation metric and the model's performance for its analysis. S20 analyses the correlation between travel frequency and travel time by comparing them across different time intervals. It should be noted that articles S4, S8, S9, S10, S11, S12, S13, S14, S15, S16, S17, S20, and S21 analyse the performance of the model implemented in the study and compare it against alternatives. The study objectives were as follows:

Obtain mobility patterns to predict passenger demands from one region to another
S10: Predict passenger demand
S11: Predict passenger travel demands from one region to another
S12: Improve prediction accuracy to capture the characteristics of urban travel demand
S13: Predict temporal variability of taxi demand between region pairs
S14: Investigate origin-destination-based demand prediction
S15: Predict short-term travel demand based on historical data and other information
S16: Exploit heterogeneous information to learn the evolutionary patterns of ridership
S17: Present spatio-temporal and temporal analysis of demand flow in taxi requests
S18: Determine hotspots by measuring the clusters' h-index
S19: Predict the travel destination depending only on the departure time and coordinates
S20: Predict passenger travel demands, as well as manage taxi operations and scheduling
S21: Reflect the inherent time-space correlations and complexity of the passenger flow
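The elbow heuristic that S18 uses to choose the number of clusters can be sketched with a naive k-means. The one-dimensional pick-up coordinates below are synthetic and purely illustrative of the idea, not the study's data or implementation.

```python
import random

def kmeans_inertia(points, k, iters=20, seed=0):
    """Naive 1-D k-means (Lloyd's algorithm); returns the within-cluster
    sum of squares (inertia) for a given number of clusters k."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: (p - centroids[i]) ** 2)].append(p)
        # Keep the old centroid if a cluster ends up empty.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sum((p - centroids[min(range(k), key=lambda i: (p - centroids[i]) ** 2)]) ** 2
               for p in points)

# Synthetic pick-up coordinates with three obvious hotspots (illustrative data).
rng = random.Random(1)
points = ([rng.gauss(0, 0.5) for _ in range(50)]
          + [rng.gauss(10, 0.5) for _ in range(50)]
          + [rng.gauss(20, 0.5) for _ in range(50)])
inertias = {k: kmeans_inertia(points, k) for k in range(1, 7)}
# The "elbow" is the k beyond which adding clusters stops reducing inertia sharply.
```

Plotting `inertias` against k, the curve drops steeply up to the true number of hotspots and flattens afterwards; the bend is the elbow that S18 reads off to fix its cluster count.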
Question 2.c
In what geographical areas have these studies been conducted? Figure 2 shows the geographical locations where the selected studies have been conducted. Most studies have been conducted in Asia (60%) and the United States (30%). This can be attributed to the open data policies in these regions. The data required for these studies are readily available and in the appropriate format. S18 did not state the location where the research was conducted. Public transportation in Africa is usually unregulated and its mode of operation presents a challenge in collecting information on passenger demand (Kumar & Barrett, 2008).
How is the performance of the intervention determined?
The following sections present how the performance of the interventions was determined in the selected articles. Question 3.a What is the data split ratio used in the studies?
To ensure that the model can generalise to unseen data, the model is trained on a diverse dataset. Data are split into sets to identify and correct overfitting issues, thereby improving the overall performance of the machine learning model. In 83% of the selected articles, the training set comprised between 70% and 80% of the data. S3, S10, S13, S17, S19, and S20 allocated more than 80% of their data to the training set and did not use a validation set. S2, S4, S12, S14, and S16, with training sets below 80%, further split the training set into training and validation sets. Splitting a dataset into training and testing sets is an important step in machine learning, as it also helps to evaluate the performance of a model. The 80/20 split is a common choice because it strikes a good balance between having enough data to train a model and having enough data to test it. Choosing a small test set carries the risk that the evaluation will not reflect how well the model generalises to new data, while a large training set lets models learn faster, reducing the overall training time. There is no hard and fast rule on data splitting; different split ratios may be more appropriate depending on the specific problem, the amount of data available, and the complexity of the model. It is good practice to experiment with different split ratios and evaluate the model's performance before making a final decision (Aurélien Géron, 2019; Raschka & Mirjalili, 2019). Question 3.b What performance metrics were used? Performance metrics are useful for evaluating the performance of models because they provide a standardised way to compare different models. Table 9 shows the performance metrics used for the evaluation of the models in the studies. Three discrete performance metrics dominated the selected studies: the root mean square error (RMSE), the mean absolute error (MAE), and the mean absolute percentage error (MAPE) were the metrics commonly used to evaluate the models.
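The train/validation/test splitting discussed under Question 3.a can be sketched as follows. The 70/10/20 ratios and the synthetic records are illustrative choices, not a recommendation drawn from any particular study.

```python
import random

def split_dataset(records, train=0.7, val=0.1, test=0.2, seed=42):
    """Shuffle and split a dataset into train/validation/test subsets.

    The 70/10/20 defaults mirror the common 70-80% training share reported
    in the reviewed studies; the exact ratios are illustrative.
    """
    assert abs(train + val + test - 1.0) < 1e-9, "ratios must sum to 1"
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)   # fixed seed for reproducibility
    n = len(shuffled)
    n_train = int(n * train)
    n_val = int(n * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

# Demo on 1000 synthetic records.
train_set, val_set, test_set = split_dataset(list(range(1000)))
```

Shuffling before splitting matters for non-temporal data, but note that for the time-series forecasting studies reviewed here a chronological split (train on the past, test on the future) is usually more appropriate than a random one.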
They provide a quantitative measure of how well a model predicts the output variable compared to the actual values. RMSE is a measure of the average difference between the predicted and actual values, but with a higher weight given to larger errors; it is useful when you want to penalise larger errors more heavily. MAE is a measure of the average difference between the predicted and actual values, with equal weight given to all errors; it is useful when you want to give equal importance to all errors. MAPE is a measure of the average percentage difference between predicted and actual values; it is useful when you want to evaluate the accuracy of a model in terms of percentage error, which helps in situations where the relative magnitude of the error is important. S1, S2, S6, S7, S9, S13, S14, S19, and S21 employed different metrics or combined them with the commonly used metrics. S1 used only the prediction accuracy as a metric, defined as the number of correctly predicted travel records divided by the total number of travel records, where a prediction counted as correct if the difference between the predicted time and the actual value was less than 1 hour. For S2, the Pearson correlation coefficient (PCC) was added to MAE and RMSE as a metric. S2 and S21 adopted the PCC because their results are linear and the variables are quantitative, normally distributed, and free of outliers. S6 adopted the F1 score, precision and recall as its metrics. Recall and precision are used to evaluate the performance of a classification or information retrieval model. Recall measures the percentage of relevant instances that were correctly retrieved by the model out of all relevant instances in the data set; it evaluates the completeness of the results produced by the model. Precision measures the percentage of retrieved instances that are relevant out of all retrieved instances.
It is used to evaluate the accuracy of the results produced by the model. The F1 score, the harmonic mean of precision and recall, was then applied; it represents both precision and recall symmetrically in one metric. S7 adopted the mean error (ME), a fitness function, and the mean squared error (MSE) as performance metrics for its model. MSE measures the precision of the estimates, since it reflects the magnitude of the error to expect in them: a smaller MSE indicates that the estimates are more precise and have less variability, while a larger MSE suggests the opposite. The MSE values were associated with the neural network training effect, where a lower value indicated better training. The fitness function was adopted because the study used a genetic algorithm; the function guides the optimisation process, and the reciprocal of the mean squared error served as the fitness function. ME sums the errors and divides the result by the sample size, where each error is the difference between the measured value and the true value. S9 adopted the RMSE and the symmetric mean absolute percentage error (SMAPE), which measures accuracy based on relative errors; the study justified the choice of SMAPE because it is not scale-sensitive. S13 and S14, in addition to MAE and RMSE, include the R² score as a performance metric, which indicates how well the data fit the regression model. The R² score is a statistical measure in a regression model that determines the proportion of variance in the dependent variable that can be explained by the independent variable; the value derived from this metric is independent of the context because it measures the goodness of fit of the model. MAE, RMSE, and MAPE were the primary metrics featured in the selected papers, used in 57%, 67% and 52% of the studies, respectively.
This suggests that these metrics are standard choices in model evaluation and machine learning studies. The remaining metrics were selected according to the preferences and objectives of each study.
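As a concrete reference, the three most commonly used regression metrics can be computed as follows. This is a pure-Python sketch; the demand counts are made up for the demonstration.

```python
import math

def rmse(y_true, y_pred):
    # Root mean square error: penalises large errors more heavily.
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    # Mean absolute error: all errors weighted equally.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    # Mean absolute percentage error: relative error, in percent.
    # Note: undefined when a true value is zero -- one reason S9 preferred SMAPE.
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

# Illustrative passenger-demand counts per interval.
y_true = [100, 120, 80, 90]
y_pred = [110, 115, 70, 95]
# mae -> 7.5, rmse -> about 7.91, mape -> about 8.06%
```

Because RMSE squares the residuals, a single badly missed peak interval inflates it much more than MAE, which is why studies typically report both alongside the scale-free MAPE.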
Conclusion
The review investigates how passenger demand is estimated from collection-system data using machine learning models. It examines three thematic research questions that cover passenger data collection techniques, passenger demand interventions, and intervention performance. A comprehensive search strategy was carried out across the three main online publishing databases, locating 911 unique records. Relevant record titles, abstracts, and publication information were screened, leaving 102 articles, which were then evaluated against the eligibility requirements. This procedure yielded 21 full-text papers for data extraction. Regarding data collection techniques, approximately 57% of the data was recovered from mobility records or open databases, because these data were readily available, in an appropriate format, covered a substantial period, and were less expensive to obtain. Traffic data was mostly aggregated in multiples of 15-minute intervals because this provided sufficient time resolution to capture changes in traffic patterns while minimising noise; it also makes the aggregated data more manageable and easier to compare and analyse. 83% of the articles split the training set between 70% and 80% of the data. Splitting a data set prevents overfitting and allows one to evaluate the performance of a model. MAE, RMSE, and MAPE featured as performance metrics in the selected articles, used in 57%, 67% and 52% of the studies, respectively, which suggests that these metrics are standard in model evaluation and machine learning studies. Despite the exhaustive nature of this document, the following limitations must be acknowledged. Only three Internet databases were used to search for relevant materials, and this review did not evaluate grey and unpublished literature. However, it is expected that the selected papers cover the new methodologies.
The approach to the review is intended to be as objective as possible. Data extraction and discussion were carried out by the first author under the supervision of the co-authors. All results have been double-checked, but the authors are responsible for any residual inaccuracies. The results of this study suggest that mobility records, LSTM-based models, and standard performance metrics play a critical role in passenger demand prediction studies. The review also identified an overreliance on the long short-term memory (LSTM) model for estimating passenger demand; mitigating the limitations of the LSTM model would therefore generally improve the estimation models. Additionally, an appropriate training-set share is crucial to avoid overfitting, and it is advisable to consider multiple metrics for a more comprehensive evaluation.
Assessment of the Diagnostic Performance of Fully Automated Hepatitis E Virus (HEV) Antibody Tests
The detection of anti-hepatitis E virus (HEV) antibodies contributes to the diagnosis of hepatitis E. The diagnostic suitability of two automated chemiluminescence immunoassays (CLIAs, LIAISON® MUREX Anti-HEV IgG/Anti-HEV IgM test, DiaSorin) was assessed by comparison with the results of a combination of enzyme immunoassays and immunoblots (recomWell HEV IgG/IgM ELISA, recomLine HEV IgG/IgM, MIKROGEN). Samples with a deviating result were analyzed with the WANTAI ELISAs. Compared to the recomWell ELISAs, the Anti-HEV IgG CLIA had a percentage overall agreement (POA) of 100% (149/149; 95% CI: 97.5–100%) and the Anti-HEV IgM CLIA had a POA of 83.3% (85/102; 95% CI: 74.9–89.3%); considering the recomLine HEV IgM results, the POA was 71.7% (38/53; 95% CI: 58.4–82%). The WANTAI test confirmed 52.9% (9/17) of negative CLIA IgMs; HEV RNA was not detectable. Since acute infection with the Epstein–Barr virus (EBV) or human cytomegalovirus (CMV) may influence the results of other serological assays, HEV antibodies were examined in 17 EBV and 2 CMV patients: One had an isolated and probably unspecific HEV IgM in the CLIA, as HEV RNA was not detectable. Both CLIAs are well suited for HEV diagnostics, but isolated IgM should be confirmed. An acute EBV/CMV infection can influence HEV serodiagnostics.
Introduction
The hepatitis E virus (HEV) belongs to the species Orthohepevirus A within the family Hepeviridae and has a single-stranded positive-sense ribonucleic acid (RNA) genome. HEV is prevalent worldwide and is considered one of the main causes of viral acute hepatitis [1]. To date, eight HEV genotypes (gts) have been distinguished, which differ in their host tropism and epidemiology [1]. In Germany and some other regions of Europe and North America, gt 3 in particular is endemic. Domestic and wild pigs represent important animal reservoirs for this genotype [1,2]. The most important source of infection for humans is the consumption of raw or insufficiently cooked meat. Other transmission routes include direct animal contact, consumption of water or agricultural products contaminated with manure, organ transplants, and blood transfusions [3]. Under immunosuppression, infections caused by gt 3 (and rarely gt 4) can progress to chronic infections [1]. In contrast, gt 1 and 2 are limited to humans as hosts and are rarely detected in industrialized countries. Infections with these types are considered travel-related, especially since major outbreaks have been reported in regions with poor sanitary conditions [1]. The number of HEV infections reported annually is steadily increasing in many industrialized countries, mainly due to increased awareness among medical staff and the use of more sensitive diagnostic tests [4,5].
Laboratory diagnostics play a central role in the detection of HEV infections and provide information on the spread of HEV [6,7]. According to the guidelines of the European Association for the Study of the Liver, a combination of specific antibody and viral genome detection is recommended [8]. While HEV RNA can be detected very early in the acute course of infection, the detection of HEV IgM and IgG antibodies provides information on acute and convalescent infections as well as seroprevalence. In immunocompromised patients, reverse-transcription PCR-based (quantitative) detection of HEV RNA is essential, as antibodies are sometimes not measurable [7,8].
With few exceptions, most of the available tests for the detection of HEV antibodies are performed manually in an enzyme-linked immunosorbent assay (ELISA) format [9]. DiaSorin has recently launched fully automated high-throughput tests for the detection of anti-HEV IgG and IgM antibodies [10]. The aim of this study was to evaluate the performance of the DiaSorin LIAISON ® MUREX Anti-HEV IgG and IgM assays in comparison with the established and widely used recomWell/recomLine HEV IgG and IgM ELISAs/immunoblots from MIKROGEN. To our knowledge, there are no data on this topic yet.
Samples
The study was performed with human sera previously characterized with antibody assays from MIKROGEN, which we defined as the reference for the detection of HEV IgG and IgM (Tables S1-S3). In addition, 17 samples with serological evidence of acute Epstein-Barr virus (EBV) infection and two samples with an antibody constellation suggestive of acute human cytomegalovirus (CMV) infection were included to investigate possible IgM cross-reactivity, which may lead to false-positive HEV IgM results and thus to misdiagnosis of acute infection [11][12][13].
All samples were residual samples and, with the exception of the 19 samples mentioned above, were sent to the laboratory of Dr. Krause und Kollegen MVZ GmbH Kiel for serodiagnosis of HEV infection. The samples came from patients of registered doctors from northern Germany and were mainly sent in 2021. The sera were stored in the refrigerator/freezer for several days/weeks after arrival at the laboratory until all tests were completed. Repeated thawing and freezing cycles were avoided as far as possible. Information on clinical symptoms and liver function was not available, which is a limitation of this study.
HEV Assays
The initial HEV antibody status was determined manually using the recomWell HEV IgG or HEV IgM ELISA (MIKROGEN GmbH, Neuried, Germany; negative < 20 U/mL, borderline 20 to 24 U/mL, positive > 24 U/mL) on a BEP2000 system (Siemens Healthineers AG, Erlangen, Germany); these screening assays serve as a reference here. According to MIKROGEN, these indirect sandwich ELISAs are based on recombinant proteins expressed from the second open reading frame (ORF2) of HEV gt 1 and 3 and should cover antibody responses against the capsid of HEV gt 1 to 4. Their performance was recently analyzed in detail [14].
Sera in which HEV IgG or IgM was detected via these ELISAs were immunoblotted (recomLine HEV IgG/IgM on a Dynablot Plus system from MIKROGEN). The manufacturer spotted different parts of the recombinant capsid protein of gt 1 and 3 and the protein derived from ORF3 on a nitrocellulose strip. Each blot contained a separate cut-off control and was automatically evaluated with a BLOTrix Reader and recomScan software (Version 3.4.166; BioSciTec GmbH, Frankfurt/Main, Germany). This method is designed for the detection of antibodies against HEV gt 1 to 4. The results obtained with a combination of MIKROGEN ELISA and blotting serve here as a second reference.
Samples with discrepant results were subjected to WANTAI HEV-IgG and WANTAI HEV-IgM ELISAs (Beijing Wantai Biological Pharmacy Enterprise Co., Ltd., Beijing, China), which are known to have particularly high assay sensitivity and specificity [15][16][17]. These assays have a grey zone from 0.9 to 1.1 (sample absorbance value/cut-off, i.e., absorbance value of the negative control + 0.16 for IgG or 0.26 for IgM), above which the assay is considered positive. As far as we know from the literature, the WANTAI assays use an antigen derived from a highly conserved region of ORF2 [14]. It was assumed that identical results obtained with the assays of two of the three manufacturers were correct.
Samples with discrepant results were also tested for the presence of HEV RNA using a RealStar ® HEV RT-PCR Kit 2.0 (Altona Diagnostics GmbH, Hamburg, Germany). Depending on the RNA extraction method used, this test has a lower detection limit of 49 to 329 IU/mL plasma [18].
All assays are CE-certified for HEV diagnostics and were performed according to the manufacturer's instructions.
Calculation of LIAISON ® MUREX Anti-HEV Immunoassays Performance
The percentage positive, negative and overall agreement (PPA, PNA, POA) of the DiaSorin tests was determined with the help of a four-field table in comparison with the reference method. These parameters and their 95% confidence intervals (CIs) were calculated using freely available software at https://tools.westgard.com/two-by-twocontingency.shtml (see: https://www.westgard.com/lessons/basic-method-validation/879-qualitative-test-clinical-agreement.html; accessed 13 February 2024).
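The agreement statistics can be reproduced with a short script. Wilson score intervals are used below as one common choice for proportion CIs; the online tool cited above may use a different method. The 2x2 counts are taken from the Anti-HEV IgG comparison reported in the Results.

```python
import math

def agreement_stats(tp, fp, fn, tn, z=1.96):
    """Percentage positive/negative/overall agreement with approximate 95% CIs.

    tp/fp/fn/tn follow a 2x2 table of the candidate assay against the
    reference assay. Wilson score intervals are one common choice here;
    they may differ slightly from the cited online tool's method.
    """
    def wilson(k, n):
        p = k / n
        denom = 1 + z * z / n
        centre = (p + z * z / (2 * n)) / denom
        half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
        return 100 * p, (100 * (centre - half), 100 * (centre + half))

    ppa = wilson(tp, tp + fn)                 # agreement on reference-positives
    pna = wilson(tn, tn + fp)                 # agreement on reference-negatives
    poa = wilson(tp + tn, tp + fp + fn + tn)  # overall agreement
    return ppa, pna, poa

# Anti-HEV IgG comparison from the text: 100/100 positives, 49/49 negatives agree.
ppa, pna, poa = agreement_stats(tp=100, fp=0, fn=0, tn=49)
# poa -> (100.0, (about 97.5, about 100)), matching the reported 95% CI.
```

Note that even with perfect observed agreement, the lower CI bound stays below 100% and widens as the sample shrinks, which is why the PNA interval (n = 49) is wider than the PPA interval (n = 100).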
HEV IgG
The diagnostic performance of the LIAISON ® MUREX Anti-HEV IgG immunoassay was evaluated in comparison with that of the MIKROGEN recomWell HEV IgG, which served as the reference assay. For this purpose, 100 sera previously reactive in the recomWell HEV IgG ELISA (isolated IgG positive/borderline, N = 60; IgG and IgM positive/borderline, N = 40) and 49 sera in which no HEV antibodies were detectable in either the recomWell HEV IgG or the recomWell HEV IgM ELISA were reexamined with the LIAISON ® IgG assay. Table 1 shows the qualitative agreement between the results of both assays. The PPA was 100% (100/100; 95% CI: 96.3-100%), the PNA was 100% (49/49; 95% CI: 92.7-100%), and the POA was 100% (149/149; 95% CI: 97.5-100%). As the negative HEV IgG results of the recomWell HEV IgG test were not rechecked with the MIKROGEN recomLine immunoblot, these missing data cannot be included in the calculation of the percentage agreement. Consideration of the immunoblot data had no influence on the agreement of the positive test results. The linearity of the HEV IgG measurements was demonstrated for three samples in which a very high IgG concentration of approximately 80 U/mL was detected via the recomWell HEV IgG assay. These samples were serially diluted with HEV IgG-negative serum and measured in duplicate using the recomWell IgG and LIAISON ® MUREX Anti-HEV IgG assays. After a serum dilution of 1:8, the HEV IgG concentration in the recomWell ELISA decreased linearly. In contrast, the LIAISON ® MUREX Anti-HEV IgG assay showed a linear decrease in the HEV IgG concentration across all dilution levels tested (Figure 1). The 1:16 dilution was consistently found to be positive in the recomLine HEV IgG assay, while at the 1:32 dilution, only one of the three serum samples was positive in the immunoblot. The coefficient of determination (R²) was calculated both for the regression lines of the three different serum dilution series and for a regression line averaged from these data. When calculating R² in Figure 1a, only the linear range from a dilution level of 1:8 onward was taken into account. The raw data can be found in Table S4.
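The linearity check amounts to regressing the measured concentration on the expected dilution behaviour and computing R². The sketch below uses made-up readings from a hypothetical ~80 U/mL sample, not the study's raw data (Table S4), and fits against the reciprocal dilution as the linear predictor.

```python
def linearity_r2(dilution_factors, measured):
    """R^2 of measured concentration against the reciprocal dilution,
    via ordinary least squares. Illustrative only -- the actual figure
    plots concentration against dilution level directly."""
    x = [1.0 / d for d in dilution_factors]
    n = len(x)
    mx = sum(x) / n
    my = sum(measured) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, measured))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2
                 for xi, yi in zip(x, measured))
    ss_tot = sum((yi - my) ** 2 for yi in measured)
    return 1 - ss_res / ss_tot

# Hypothetical serial dilution of an ~80 U/mL sample (values are made up):
# an ideal assay halves the reading at each doubling of the dilution factor.
dilutions = [2, 4, 8, 16, 32, 64]
readings = [40.1, 19.8, 10.2, 5.1, 2.4, 1.3]
r2 = linearity_r2(dilutions, readings)   # close to 1 for a linear assay
```

An assay that loses linearity at low dilutions, as described for the recomWell ELISA below 1:8, would show inflated readings at the first dilution steps and a visibly lower R² unless those points are excluded from the fit.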
Presence of (Non-Specific) HEV Antibodies in Patients with Acute EBV or CMV Infection
Seventeen samples with serological signs of acute EBV infection and two samples with evidence of CMV infection were retested for HEV IgG/IgM. The same three EBV blood samples (i.e., Nos. 1, 7, 8) were identified as HEV IgG-positive in the assays of the three manufacturers. Among them, sample No. 1 was also positive for HEV IgM in their ELISAs/CLIA. In addition, two different acute EBV serum samples, Nos. 8 and 2, were identified as HEV IgM carriers using the MIKROGEN ELISA (No. 8) or the DiaSorin CLIA (No. 2). However, no HEV IgG was detectable in the latter sample (isolated HEV IgM). For the two CMV serum samples, the MIKROGEN and DiaSorin tests gave identical results. HEV RNA was not detected in any of the EBV/CMV samples that tested positive for HEV antibodies (Table 4).
Table 4. Influence of an acute Epstein-Barr virus (EBV) or human cytomegalovirus (CMV) infection on the results of HEV antibody tests. Samples with serological evidence for an acute EBV (N = 17) or CMV (N = 2) infection were tested for the presence of HEV antibodies. In sample No. 2, isolated IgM (without HEV IgG) was detected with only one test, which is why the result was evaluated as most likely false reactive. The measured raw values are given in brackets. Abbreviations: Infect., infection; No., number; +, positive; (+), borderline; −, negative; n.t., not tested; qual., qualitative.
Discussion
Immunoassays for the detection of HEV IgM and IgG antibodies are widely used due to their ease of use and comparatively low cost. The problem, however, is that the tests have different sensitivities and specificities and give qualitative, semiquantitative, or quantitative results [7,9,12,16,17,19,20]. The differential performance of HEV IgG assays has important implications for seroprevalence estimates [21]. A World Health Organization (WHO) reference serum (NIBSC 95/584) for standardizing HEV antibody tests has been available for several years [22] and could help to improve assay comparability.
In the present study, HEV IgG and IgM tests from two well-known manufacturers were directly compared, and the percentages of positive, negative, and overall agreement of the qualitative test results were calculated. If individual results differed, antibody tests from a third, reputable manufacturer were used to make a decision. All assays are approved for routine diagnostics, and there are extensive data from the manufacturers and from various studies on their diagnostic quality, although not in a direct comparison of MIKROGEN and DiaSorin. We considered the ELISAs and immunoblots from MIKROGEN, which have been used in diagnostics for some time, as the reference tests.
The DiaSorin LIAISON ® MUREX Anti-HEV IgG CLIA showed qualitative results consistent with those of the MIKROGEN recomWell HEV IgG ELISA, which was used as the reference (Table 1). These assays also provide quantitative results. The LIAISON ® MUREX Anti-HEV IgG test is aligned with the WHO standard. In general, both assays appear to be suitable for seroprevalence studies. The CLIA has the advantage of being fully automated and was recently used in a Brazilian HEV seroprevalence study [23].
Good linearity was previously reported for the MIKROGEN IgG test [17]. The HEV IgG assay from DiaSorin has a linear range that extends beyond this (Figure 1). The HEV IgG concentration of samples in which >10 IU/mL HEV IgG was determined during the initial measurement can be reliably determined after sufficient dilution. This could be of interest for the assessment of HEV IgG kinetics in the context of scientific studies.
The HEV IgM antibody test results (Table 2) are more heterogeneous and require detailed discussion. The highest number of HEV IgM-positive samples was found with the MIKROGEN recomWell HEV ELISA. The comparatively high reactivity of the recomWell HEV IgM assay was demonstrated in an earlier study [17]. However, HEV RNA could not be detected in any of the 17 samples that were reactive in this test but not in the DiaSorin assay. In contrast to viral RNA, HEV IgM may not be detectable in the very early stages of infection, while IgM seroconversion may accompany the onset of symptoms and persist for many months thereafter [7,24]. The results of a current study with asymptomatic blood donors demonstrate that only one in four to five viremic donors already has HEV IgM antibodies at the time of the first RNA detection [25]. Therefore, at least in these 17 non-viremic patients, a post-acute infection status, persistent IgM or even false-positive IgM detection is assumed. When the recomWell HEV IgM ELISA was used in combination with the recomLine HEV IgM immunoblot, as recommended by MIKROGEN, the number of IgM detections decreased by approximately 50% (27 out of 53 positive samples were confirmed by immunoblotting). A total of 8 of the 17 samples found to be reactive by the recomWell HEV IgM ELISA were concordantly negative by the recomLine HEV IgM blot and by the DiaSorin and WANTAI IgM assays (Table 3). Recently, good agreement was reported between HEV antibody tests from the latter two manufacturers [10]. The indices of samples No. 13 and No. 21 were close to 1.00, above which the DiaSorin CLIA is categorized as IgM positive (Table 3). The data in Tables 2 and 3 show that negative results of the DiaSorin LIAISON ® MUREX Anti-HEV IgM test do not generally have to be confirmed. Only samples with indices close to 1.00 could provide a reason for confirmatory/follow-up diagnostics. In 4 of the 53 IgM-positive serum samples, no corresponding IgG antibodies were detectable (referred to as isolated HEV IgM). Two of these samples, No. 20 and No. 40, were reactive in the recomWell HEV IgM ELISA, the recomLine HEV IgM immunoblot and the CLIA, while two samples, No. 26 and No. 28, were reactive only in the recomWell HEV IgM test (Table S2 and Table 3). The latter two samples were suspected to be false positives for HEV IgM. However, we do not have information on the clinical picture for which HEV serology was requested. These findings underscore the importance of the recomLine HEV IgM immunoblot for the verification of reactive MIKROGEN HEV IgM ELISA results. In particular, the detection of isolated HEV IgM (without concomitant HEV IgG) should lead to confirmatory and follow-up tests [8,11]. The exclusion of viremia by the application of nucleic acid amplification techniques (NAAT) like PCR may be very useful in these cases [7].
The investigation of a limited number of samples with serologically suspected acute EBV/CMV infection revealed a possible false-reactive HEV IgM (Sample No. 2, Table 4), confirming the results of previous studies [12,13,26]. For the MIKROGEN recomLine HEV IgM assay, for example, Dichtl et al. reported isolated HEV IgM reactivity in 2 of 12 patients with acute EBV infection [12]. This phenomenon is most likely due to the polyclonal B-cell stimulation associated with herpesvirus infection [26]. Therefore, in addition to HEV IgM confirmatory and follow-up testing (including NAAT), other infections should be excluded as appropriate [11].
The significance of this study is limited by the lack of information on the clinical symptoms that led to the request for an HEV antibody test. Furthermore, the number of samples included is comparatively small, but of the same order of magnitude as in several other studies on the performance of various HEV antibody tests [7]. Not all samples were analyzed by HEV PCR. However, at least all samples that were reactive in just one HEV IgM assay were tested free of HEV RNA, so that a very early HEV infection status is unlikely. We therefore consider the comparison with antibody tests, which have long been used in routine serological diagnostics and have been characterized in a number of studies, to be suitable for an indicative evaluation of the diagnostic performance of the new CLIAs, although examination of a larger number of samples would be desirable.
Conclusions
The fully automated DiaSorin LIAISON® MUREX Anti-HEV IgG and IgM assays are sensitive and specific high-throughput tests with good performance. Both tests are useful for the diagnosis of acute and convalescent HEV infections in immunocompetent patients. The HEV IgG CLIA is also suitable for seroprevalence studies. The detection of HEV IgM does not necessarily mean that an acute HEV infection is present. In particular, an isolated HEV IgM should be confirmed by follow-up and alternative tests including NAAT. HEV IgM test results may be biased in patients with acute EBV/CMV infection.
Figure 1. Linearity of HEV IgG determination across multiple serum dilution levels. Three samples in which HEV IgG was detectable at high levels were serially diluted in HEV IgG-negative serum and measured in duplicate. Mean HEV IgG concentrations are given. The 1:64 dilution was negative according to both the MIKROGEN recomWell HEV IgG assay (<20 U/mL, black horizontal line) (a) and the DiaSorin LIAISON® MUREX Anti-HEV IgG assay (<0.3 IU/mL, black horizontal line) (b). The 1:16 dilution was consistently found to be positive in the recomLine HEV IgG assay, while at the 1:32 dilution, only one of the three serum samples was positive in the immunoblot. The coefficient of determination (R²) was calculated for the regression lines of the three different serum samples.
Table 1. Qualitative agreement of the MIKROGEN recomWell/recomLine HEV IgG and the DiaSorin LIAISON® MUREX Anti-HEV IgG immunoassays.
The raw data can be found in Tables S1 and S3.
Table 2. Qualitative agreement of the MIKROGEN recomWell/recomLine HEV IgM and the DiaSorin LIAISON® MUREX Anti-HEV IgM immunoassays.
The raw data can be found in Tables S2 and S3.
* HEV RNA was not detected in these 17 samples by RT-PCR (see Table 3).
Table 3. Characterization of 17 sera with a different result in the DiaSorin LIAISON® MUREX Anti-HEV IgM assay compared to the MIKROGEN recomWell HEV IgM test used as a reference.
These samples were tested in duplicate with both assays (mean values are given for each test) and re-evaluated with the WANTAI HEV-IgM assay. The samples are sorted according to the sample/cutoff values of the WANTAI HEV-IgM ELISA (data from Table S2). Abbreviations: No., number; qual., qualitative; +, positive; (+), borderline; −, negative.
Evaluation, Analysis and Diagnosis for HVDC Transmission System Faults via Knowledge Graph under New Energy Systems Construction: A Critical Review
High voltage direct current (HVDC) transmission systems play a critical role in optimizing resource allocation and stabilizing operation of the modern power grid thanks to their asynchronous networking and large transmission capacity. To ensure the operation reliability of the power grid and reduce outage time, it is imperative to realize fault diagnosis of HVDC transmission systems in a short time. Based on prior research on fault diagnosis methods for HVDC systems, this work comprehensively summarizes and analyzes the existing fault diagnosis methods from three different angles: fault type, fault influence, and fault diagnosis. Meanwhile, with the construction of the digital power grid system, the type, quantity, and complexity of power equipment have considerably increased; thus, traditional fault diagnosis methods can basically no longer meet the development needs of the new power system. Artificial intelligence (AI) techniques can effectively simplify solutions' complexity and enhance self-learning ability, which makes them ideal tools to solve this problem. Therefore, this work develops a knowledge graph technology-based fault diagnosis framework for HVDC transmission systems to remedy the aforementioned drawbacks, in which the detailed principle and mechanism are introduced, as well as its technical framework for intelligent fault diagnosis decisions.
Introduction
At present, the world is experiencing an energy structure reform to gradually form a new power system based on the large-scale integration of various renewable energies [1,2], in which the digital power grid is the core concept and component. A digital power grid can greatly improve the monitoring of the operation characteristics of renewable energy during energy generation, which helps renewable energy technologies, as the main energy source, become involved in the supervision process of the power system. Thus, the digital grid lays the foundation for the new power system to achieve full renewable energy absorption, safe and stable operation, and clean and low-carbon operation.
The digital grid is a new operation mode of power systems put forward by China Southern Power Grid Corporation according to the current energy development trend. The digital power grid is the process of digitalization, intelligence, and internet-enabling of the traditional power grid. Digital transformation of the traditional power grid relies on an advanced digital technology platform to connect all links of the power system and, ultimately, to support fault diagnosis.

However, when constructing the fault mechanism model to simplify the line expression, some system parameters are inevitably discarded. Meanwhile, the method is affected by parameter changes, load changes, and harmonics, which ultimately reduces the accuracy of fault diagnosis. Thus, some researchers put forward fault diagnosis methods based on line signal processing, which do not simplify the circuit model. Reference [21] applies the Fourier transform to decompose the high-frequency signal of the current waveform and classifies and locates faults through spectrum analysis. References [22,23] use the wavelet transform to diagnose HVDC system faults. However, signal analysis methods still need manual determination of the fault threshold, leading to reduced accuracy. In addition, with the advancement of AI technology, AI has also been extensively applied in the field of HVDC system fault diagnosis. The basic characteristics of the three types of fault diagnosis methods are summarized in Table 1. However, the research of AI technology in the field of power system fault diagnosis started late due to the particularity and secrecy of the power system. Meanwhile, there is no prior article that summarizes and analyzes the research achievements of AI technology applied to HVDC transmission system fault diagnosis.

Table 1. Basic characteristics of the three types of fault diagnosis methods.
(1) Fault diagnosis based on the mechanism model. Disadvantages: it is difficult to establish an accurate mathematical model; the simplification of the model can bring negative effects due to the limitations of the data itself. Practicability: general.
(2) Fault diagnosis based on signal processing. Advantages: the difficulty of system modeling is avoided; strong practicability; high sensitivity. Disadvantages: there is a time delay under certain conditions; it is relatively difficult to analyze and interpret the fault. Practicability: relatively strong.
(3) Fault diagnosis based on AI. Advantages: the difficulty of system modeling is avoided; real-time fault detection; high running speed. Disadvantages: large demand for data. Practicability: strong.

The increasing complexity, intelligence, and large-scale integration of power system equipment mean that traditional fault diagnosis methods can basically no longer meet the development needs of the new power system. AI techniques can effectively simplify solutions' complexity and enhance self-learning ability, which makes them ideal tools to solve this problem. As a key branch of AI, the knowledge graph has attracted research interest in recent years with the great progress of 5G technology, big data, and the Internet of Things. Knowledge graph technology is essentially an intelligent database that incorporates AI techniques and traditional databases for large-scale structured knowledge management [24,25]. In 2012, the concept of knowledge graphs was presented by Google for the first time. The core elements such as entities, attributes, and relations are formally described by triples, and large-scale network information is effectively combined at minimum cost to better concatenate and present knowledge [26]. Currently, the research of knowledge graph technology in power systems is in its start-up phase, and the relevant literature mainly focuses on application exploration and macro framework design. In view of the research status of AI technology in the field of HVDC transmission system fault diagnosis, no state-of-the-art article has systematically summarized and discussed the research outcomes in detail in recent years. Therefore, this work aims to comprehensively summarize the current research status of AI technology-based HVDC transmission system fault diagnosis. The main contributions and innovations of this work are outlined as follows:
• A variety of fault types and the corresponding adverse effects of HVDC transmission systems are summarized in detail;
• The main purpose of this work is to provide a one-stop manual for future researchers who may be involved in this field of research.
The overall structure of the work is as follows. In Section 2, the development status of HVDC transmission systems is briefly introduced, and the fault types and fault impact areas of HVDC transmission systems are summarized. Section 3 summarizes the current research achievements of HVDC fault diagnosis technology in recent years, with a focus on the application of AI technology. In Section 4, knowledge graph technology is introduced in detail. Section 5 gives a brief discussion, and a new fault diagnosis framework for HVDC transmission systems based on knowledge graph technology is proposed. Section 6 concludes the whole work and gives some promising perspectives.
Development of HVDC Transmission Technology
High voltage direct current (HVDC) transmission technology has been widely used in the world as a powerful complement to AC transmission because of its outstanding strengths in long-distance transmission, high-capacity transmission, asynchronous networking, and submarine cable transmission. According to the different stages of commutator development, the development of DC transmission technology can be divided into three stages: mercury arc valve, thyristor commutator, and voltage source commutator. The mercury arc valve was successfully developed in 1928. Relying on its features of rectifying and inverting, large-capacity DC power transmission was successfully realized. In 1954, the first 20 MW, 100 kV DC single-wire submarine cable was used for power transmission. However, mercury arc valves are complicated, have low reliability, and are difficult to maintain, so they have not been widely used [27].
Since the 1970s, HVDC transmission technology based on phase-controlled thyristors has become the main method of large-scale and long-distance power transmission. Compared with mercury arc valves, thyristors have a smaller volume, lower cost, and no reverse arc fault. They are simpler and more convenient to manufacture and maintain than mercury arc valves. Nowadays, most HVDC transmission systems are constructed with commutators. The commutator is intended to transfer the current flowing through the commutator from one current path to another by opening and closing the commutator valve. Line-commutated converter HVDC (LCC HVDC) is not only the most mature HVDC transmission technology at present but also the mode mainly used in UHVDC transmission. The main converter device of LCC HVDC is the thyristor. LCC HVDC systems are mainly composed of a rectifying station, DC transmission lines, and inverter stations, where the converters, converter transformers, flat wave reactors, reactive power compensation devices, filters, DC grounding, and AC-DC switching equipment are located in the converter stations on both sides [28]. However, the operation of the LCC HVDC system requires the AC system to provide commutation support, which is limited by the short-circuit ratio of the system [29].
In the 1990s, with the application of insulated gate bipolar transistors as converters and voltage source converters in power systems, voltage source converter-based HVDC (VSC-HVDC) technology was developed and promoted. Compared with the LCC-HVDC system, the VSC-HVDC system does not depend on the AC system and can control active and reactive power independently and quickly. VSC-HVDC, as the third generation of DC transmission technology, combines power electronics, power systems, automatic control, and so on. It combines various advantages of advanced technologies, with good controllability and adaptability, a flexible operation mode, and a wide applicable range. It plays a significant role in large-scale renewable energy integration in new power systems and digital power grid construction [30].
Fault Types of HVDC Transmission Systems
According to the faulty device involved, HVDC transmission system faults can be classified into DC faults and AC faults. DC faults [31] include converter faults, DC line faults, ground pole faults, etc. The converter is the core component of HVDC transmission systems, whose controllability and single-conduction characteristics constitute the important features of the faulty behavior of HVDC transmission systems. Generally, converter faults include control system faults and main circuit faults. The main circuit faults mainly include commutation faults and short circuits inside the converter station. A control system fault mainly refers to a valve being closed or opened by mistake [32]. The main failure points of a typical HVDC transmission system are shown in Figure 1 and Table 2. The most serious converter fault is the short circuit fault; a short circuit makes the valve lose its shut-off ability, or the external insulation between the two ends of the valve is destroyed. When the reverse voltage peak has a large jump, the commutating valve is likely to arc back, resulting in a short circuit of the valve arm. In addition, when the lightning arrester short-circuits or valve insulation is damaged due to cooling system leakage and gasification, this may also cause a short circuit of the valve [33]. Moreover, the inverter valve arm is mostly under forward voltage during the blocking period; if the voltage is too high or the voltage rise rate is too fast, the insulation of the valve arm will be affected and damaged, and the insulation damage will then cause a short circuit of the valve arm.
A commutation failure is a failure to complete the commutation before the commutation voltage reverses. Commutation failures are common in inverters and occur during disturbances such as large DC currents or low AC voltages. At present, the commutation process is mainly analyzed by the commutation voltage integral area theory, as shown in inequality (1) [34]:

S_max ≥ S_demand = L_c [I_d(t_1) + I_d(t_2max)]   (1)

where L_c represents the commutation reactance, t_1 and t_2max are the trigger time and the time when the integral area of the commutation voltage is maximum, I_d(t_1) and I_d(t_2max) are the current values at t_1 and t_2max, respectively, S_demand is the demanded integral area of the commutation voltage, and S_max is the maximum commutation voltage integral area. Commutation is successful only if inequality (1) is satisfied.
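The voltage-time area criterion can be checked numerically. The sketch below is a minimal illustration, not taken from [34]: it integrates an assumed sinusoidal commutation voltage over [t1, t2max] with the trapezoidal rule and compares the available area with the demand area L_c[I_d(t1) + I_d(t2max)]; every numeric value is hypothetical.

```python
import math

def commutation_ok(u, t1, t2max, Lc, Id_t1, Id_t2max, steps=1000):
    """Check the commutation voltage-time area criterion S_max >= S_demand."""
    dt = (t2max - t1) / steps
    # Trapezoidal integration of the commutation voltage over [t1, t2max]
    s_max = sum((u(t1 + i * dt) + u(t1 + (i + 1) * dt)) * dt / 2
                for i in range(steps))
    # Area needed to transfer the DC current through the commutation reactance
    s_demand = Lc * (Id_t1 + Id_t2max)
    return s_max >= s_demand

# Illustrative 50 Hz sinusoidal commutation voltage (all values are assumed)
u = lambda t: 400e3 * math.sin(2 * math.pi * 50 * t)
print(commutation_ok(u, t1=0.001, t2max=0.004, Lc=0.05,
                     Id_t1=2000.0, Id_t2max=2100.0))
```

Increasing the commutation reactance Lc (or the DC current) raises the demand area until the inequality fails, which is exactly the mechanism behind commutation failure under AC voltage dips.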
The DC outlet short circuit of the converter is also a common fault, which is a short circuit fault between the DC terminals of the converter. The main distinction between the short circuit at the DC side of the rectifier and the short circuit at the valve end is that the valve end of the converter can maintain unidirectional conduction after the short circuit occurs at the outlet of the DC side. After the short circuit at the outlet of the DC side of the rectifier, the current in the conducting valve and the converter transformer increases sharply, and they need to withstand a much higher current value than normal. In addition, the fault point of the DC outlet short circuit of the inverter is similar to that of the rectifier. However, under the action of the DC line and the flat wave reactor, the fault current of the DC line and the rate of current rise are small, and the current in each bridge valve of the inverter will be reduced to zero in a short time, so the fault will not harm the inverter and converter transformer. In addition to the above fault types, converter faults also include the AC-side short circuit of the converter, the single-phase ground short circuit of the converter, and so on.
HVDC transmission is mainly used for long-distance transmission, so the failure rate of transmission lines is high. The transmission line fault is a serious fault type that must be considered in the design process of HVDC transmission systems. It has an important impact on equipment parameters, control strategy, and protection configuration [35]. The control system of actual HVDC transmission projects adopts a hierarchical structure, including the master control stage, station control stage, pole control stage, and valve control stage. The control system not only controls the transmission power during normal operation but can also reduce the impact of faults and quickly isolate faults. For the fast transient process of DC lines, only the functional links of the control system with fast dynamic characteristics can affect it [36]. It is a problem that both LCC-HVDC transmission technology and VSC-HVDC transmission technology need to face. Among the fault types of DC transmission lines, short circuit faults account for the largest proportion, and most of them arise from flashover discharge. Usually, the factors that result in transmission line ground flashover include lightning strikes, pollution, DC line air insulation breakdown, tree branches, and other factors leading to a reduction of the insulation level. In addition, the fault current of the line is related to the fault type and the distance from the fault point to the rectifying station [37].
The lightning characteristics of DC lines are specific. The probability of both poles of the system being struck by lightning at the same place at the same time is almost zero. Generally, a DC line is struck by lightning for a short time, and the DC voltage will rise briefly under the effect of the lightning. If the insulation of the DC line cannot withstand the voltage at this time, flashover discharge of the DC line to the ground will occur.
Meanwhile, if the insulation of the tower is damaged, ground flashover will also occur. After flashover occurs on transmission lines, changes in voltage and current will propagate to both ends. According to the traveling wave theory, the voltage and current at both ends are the superposition of forward and backward waves [38]. If a(t) represents the forward traveling wave, b(t) represents the backward traveling wave, and Z represents the wave impedance, then the instantaneous increments are as follows:

Δu(t) = a(t) + b(t),  Δi(t) = [a(t) - b(t)]/Z   (2)

Moreover, interruptions in the DC line can cause open-circuit faults in the system. When a high-resistance ground fault, such as tree contact, appears on the DC line, there is a current difference between the converter stations due to the DC short circuit, but the voltage and current changes caused by the fault cannot be detected by the traveling wave protection. There are also DC switching field faults, ground pole faults, and AC-side faults of converter stations in the DC transmission system. These faults will also affect the operation of the DC transmission system.
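The superposition relation above can be inverted to split measured terminal quantities into their forward and backward wave components, which is the basic operation of traveling-wave protection. The sketch below uses the standard decomposition a = (u + Z·i)/2, b = (u - Z·i)/2; the sample values and the surge impedance are assumed for illustration.

```python
def wave_decompose(u, i, Z):
    """Split terminal voltage/current samples into forward wave a and backward wave b."""
    a = [(uk + Z * ik) / 2 for uk, ik in zip(u, i)]  # forward travelling wave
    b = [(uk - Z * ik) / 2 for uk, ik in zip(u, i)]  # backward travelling wave
    return a, b

u = [10.0, -4.0, 7.5]      # assumed voltage increments (kV)
i = [0.02, -0.01, 0.015]   # assumed current increments (kA)
Z = 300.0                  # assumed line surge impedance (ohm)
a, b = wave_decompose(u, i, Z)
# The superposition is recovered exactly: u = a + b and i = (a - b)/Z
```

Fault-locating schemes time-stamp the arrival of these wave components at both line ends and convert the delay into a distance to the fault.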
Fault Effect of HVDC Transmission Systems
Generally, when a short circuit fault occurs, the DC bus voltage of the rectifier-side converter will drop rapidly, even to 0; the current in the faulty valve arm will increase sharply in the opposite direction; the converter valve and transformer are affected by the sharp increase of the AC-side current, so they need to bear a large fault current; and AC two-phase and three-phase short circuits occur on the AC side of the rectifier [33].
Commutation faults often occur in the inverter. When a commutation fault occurs, the extinction angle is less than the time needed for the switching valve to recover its blocking ability. After the commutation failure, the DC voltage will continue to decrease until it reaches 0. Meanwhile, the DC current increases sharply while the AC-side current decreases, and an open circuit occurs for a short time. In addition, DC current continuously flowing through the converter transformer generates magnetic bias. Magnetic bias refers to the presence of a DC component in the transformer excitation current, leading to an increase of the excitation current and resulting in loss and temperature rise, intensified vibration and noise, and other adverse consequences [39]. Generally, the causes of magnetic bias current include the unbalanced triggering angle of the DC converter valve, the positive-sequence second-harmonic voltage of the AC bus of the converter, the fundamental-frequency current induced on the DC line by a nearby AC line, and the DC current flowing through the transformer neutral point when the DC system operates in single-pole earth-return mode. The DC bias coefficient K_dc describes the magnitude of DC magnetic bias in transformer windings and is defined as the ratio of the peak DC current to the peak no-load current, as shown in Equation (4) [40]:

K_dc = I_dc / I_o   (4)

where I_dc is the peak value of the DC current in the transformer winding and I_o is the peak value of the rated no-load current of the transformer. For other types of faults and their influence on HVDC transmission systems, see Table 3.
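Equation (4) can be evaluated directly once the DC component of the winding current is known. In the sketch below the DC component is estimated as the mean of the current samples over whole fundamental cycles; the waveform and the rated no-load peak are assumed values, not data from [40].

```python
import math

def dc_bias_coefficient(current_samples, no_load_peak):
    """K_dc = DC component of winding current / peak rated no-load current (Eq. (4))."""
    # Averaging over whole cycles cancels the AC part and leaves the DC component
    i_dc = sum(current_samples) / len(current_samples)
    return abs(i_dc) / no_load_peak

# One 50 Hz cycle sampled at 20 points: a 5 A sine riding on a 0.8 A DC offset (assumed)
samples = [0.8 + 5.0 * math.sin(2 * math.pi * k / 20) for k in range(20)]
k_dc = dc_bias_coefficient(samples, no_load_peak=2.0)
```

With these assumed numbers the AC part averages out, so k_dc = 0.8 / 2.0 = 0.4; a larger K_dc signals stronger DC magnetic bias in the winding.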
Table 3. Other fault types of HVDC transmission systems and their effects (fault type | location | effect).

Converter:
Not open | Bridge arm | Voltage decreases and current increases.
Component failure | Valve element | The voltage applied to the elements of the valve increases.
Bridge arm short circuit | Bridge arm | Voltage decreases and current increases.
Outlet short circuit | Bridge arm | Voltage decreases and current increases.

AC system of inverter side:
One-wire ground | AC line | When the AC voltage drops asymmetrically, the commutation may fail and the non-characteristic harmonics may increase.
Two-phase short circuit | AC line | When the AC voltage drops asymmetrically, the commutation may fail and the non-characteristic harmonics may increase.
Three-phase short circuit | AC line | When the AC voltage drops, the commutation may fail.
Fault Diagnosis of HVDC Transmission Systems
Although HVDC transmission systems have many advantages, it is always a challenge for researchers to provide fast and reliable protection. Efficient fault diagnosis helps to adopt reasonable measures to minimize the negative impact of faults. At present, HVDC operation and maintenance face constraints, and effective technical means are lacking. With the increase in HVDC system scale and fault sample points, the operation reliability of HVDC systems is confronted with severe challenges. At present, the fast development of neural networks has strongly promoted research on pattern recognition and data mining [42]. In reference [43], a fault diagnosis method based on spectral kurtosis deconvolution for valve air coolers in UHVDC transmission is proposed, which solves the issue that it is challenging to effectively diagnose early composite faults of air cooler motors in humid, corrosive, and externally vibrating environments. Reference [44] proposes a fault diagnosis approach based on a parallel convolutional neural network (PCNN). Reference [45] uses convolutional neural networks (CNN) to detect and classify DC faults. This method makes full use of the strengths of CNN in image feature extraction, and the recognition accuracy reaches 92.5%. Reference [46] proposes a method based on an artificial neural network (ANN) to achieve DC bus protection and line protection of the power grid. The discrete wavelet transform is used as the feature extractor at the input of the network. The inputs are the frequency-domain and time-domain components, and the output of the network is used to trigger protection. The overall protection flow of this method is shown in Figure 3. However, compared with traditional fault diagnosis technology, deep learning has its own distinctive shortcomings; for instance, it usually requires a lot of data and brings higher computing costs, which makes it difficult to extend to practical applications. Meanwhile, the black-box nature of the deep learning network makes it difficult to explain the fault diagnosis process and to determine whether fault feature extraction is complete. Therefore, current fault diagnosis methods based on deep learning are more suitable as auxiliary judgment means for on-site personnel in practical projects [47]. In addition, to solve the complex data preprocessing problem, a new fault detection and location approach based on the bidirectional gated recurrent unit is proposed in reference [48]. This method has obvious advantages in bidirectional structural feature extraction and simplifies fault data preprocessing. To more accurately extract fault features, it is often necessary to preprocess them, for example with the Fourier transform or the wavelet transform. The function of the Fourier transform is to transform a signal from the time domain to the frequency domain. In a different domain, the perspective for interpreting the same signal also changes, thus making the problem easier to deal with. The Fourier transform is shown in Equation (5):

F(w) = ∫ f(t) e^(-iwt) dt   (5)

where w is the frequency, t is the time, and e^(-iwt) is the complex exponential kernel.
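In practice the integral in Equation (5) is replaced by its discrete counterpart, the DFT, applied to sampled fault waveforms. The sketch below computes per-bin magnitudes for an assumed fault-like waveform (a fundamental plus a 5th harmonic, amplitudes chosen for illustration); the spectrum then concentrates in the corresponding frequency bins, which is the basis of spectrum-analysis fault classification.

```python
import cmath
import math

def dft_magnitudes(x):
    """Discrete counterpart of Equation (5): normalized |X[k]| per frequency bin k."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))) / N
            for k in range(N)]

# Assumed fault-like waveform: fundamental plus a 5th harmonic at 30% amplitude
N = 64
x = [math.sin(2 * math.pi * n / N) + 0.3 * math.sin(2 * math.pi * 5 * n / N)
     for n in range(N)]
mags = dft_magnitudes(x)
# Energy concentrates in bins 1 and 5 (and their mirror images N-1 and N-5)
```

With this normalization a real sine of amplitude A contributes A/2 to its bin, so the fundamental appears at 0.5 and the 5th harmonic at 0.15; an FFT would compute the same result in O(N log N) time.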
The wavelet transform inherits the localization idea of the short-time Fourier transform. In particular, it can deal with the problem that the window does not change with frequency. Wavelet analysis is widely used in fault feature extraction because it can provide localization information of the fault signal in both the time domain and the frequency domain, which remedies the shortcomings that the Fourier transform cannot describe the fault signal in the time and frequency domains simultaneously and has the same resolution over the whole time-frequency plane. Reference [49] studies fault diagnosis based on wavelet packet decomposition. It performs one-dimensional wavelet packet decomposition on the original fault signal of the inverter, extracts the energy value of the fault signal in each frequency band as the feature information, and then forms an effective fault feature vector as the input vector of the fault classifier, which realizes fault location. At present, the main steps of applying the wavelet packet decomposition method to fault feature extraction are as follows:

1. The three-phase current or voltage at the AC side of the inverter over one fundamental wave period is sampled as the fault signal;
2. The fault signal is decomposed by an n-layer wavelet packet; the wavelet packet coefficients at layer j + 1 are given by Equation (6), where h_0(k) and h_1(k) are a pair of conjugate orthogonal filters obtained from the wavelet basis function:

d_{j+1}^{2p}(k) = Σ_m h_0(m - 2k) d_j^p(m),  d_{j+1}^{2p+1}(k) = Σ_m h_1(m - 2k) d_j^p(m)   (6)

3. The wavelet packet coefficients d_j^p at the node (j, p) are reconstructed to obtain the reconstructed wavelet packet coefficients D_{j,p}(k) of the node;
4. The energy value E_{n,p} at the p-th node of the n-th layer is calculated according to Equation (7), where l is the number of data points sampled in one fundamental wave period:

E_{n,p} = Σ_{k=1}^{l} |D_{n,p}(k)|²   (7)

5. The percentage T_{n,p} of the energy value of each frequency interval in the total energy value is obtained according to Equation (8), and the percentages of the energy values of the first s nodes (0 < s < 2^n) of the n-th layer are selected as the fault feature quantity:

T_{n,p} = E_{n,p} / Σ_{q=0}^{2^n - 1} E_{n,q}   (8)
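The steps above can be sketched compactly. The snippet below uses the Haar filter pair as a stand-in for h_0/h_1 and computes band energies directly on the packet coefficients (since the orthonormal Haar pair preserves energy, the reconstruction in step 3 can be skipped in this sketch); the input signal is an assumed one-period sample, not data from [49].

```python
def wp_band_energies(signal, levels):
    """Haar wavelet packet decomposition: per-band energies (Eq. (7))
    and their shares of the total energy (Eq. (8))."""
    r2 = 2 ** 0.5
    nodes = [list(signal)]
    for _ in range(levels):
        nxt = []
        for x in nodes:
            # h0 (low-pass) and h1 (high-pass) analysis steps with downsampling
            nxt.append([(x[2 * k] + x[2 * k + 1]) / r2 for k in range(len(x) // 2)])
            nxt.append([(x[2 * k] - x[2 * k + 1]) / r2 for k in range(len(x) // 2)])
        nodes = nxt
    energies = [sum(c * c for c in node) for node in nodes]  # Equation (7)
    total = sum(energies)
    return energies, [e / total for e in energies]           # Equation (8)

sig = [0.0, 1.0, 2.0, 1.0, 0.0, -1.0, -2.0, -1.0]  # assumed one-period fault signal
energies, shares = wp_band_energies(sig, levels=2)  # 2**2 = 4 frequency bands
```

The share vector (or its first s entries) is exactly the kind of fault feature vector fed to a classifier; with an orthonormal filter pair the band energies sum to the signal energy (Parseval).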
At present, most fault diagnosis methods are often combined with AI to achieve automatic classification and fault diagnosis. Generally, the model is established offline, and the established model is used for online feature recognition. Meanwhile, research on fault feature recognition methods mainly focuses on neural networks, Bayesian networks, fuzzy logic reasoning, data mining, and so on.
Knowledge Graph Technology
With the development of ultra-high voltage (UHV) power grids and renewable energy, power system faults are more complex and diverse, and fault handling of the power system requires increasingly comprehensive professional ability of operation personnel. Meanwhile, with the increasing volume of the power system and the increasing number of elements in it, daily operation is no longer satisfied with the application of data mining alone; it also faces a technical bottleneck in converting data into knowledge. Therefore, the power system needs to turn the experience and operation logic of the staff into knowledge through Internet technology, enrich the means of fault judgment and recovery decision-making, help the regulator actively, rapidly, and comprehensively grasp the key information of fault handling, and provide corresponding auxiliary decision support for fault handling [50].
As a method of organizing and constructing knowledge based on AI technology, knowledge graphs are similar to the way humans cognize the world: they can represent complex associative relationships at the semantic level and provide a higher ability to manage and understand huge amounts of information. By constructing a domain knowledge graph of power grid fault processing, the value of multivariate heterogeneous data in power grid fault processing can be fully explored to solve the problems of low accuracy and poor timeliness of fault processing caused by differences in, and lack of, knowledge reserves of control and operation personnel. The knowledge graph is an effective way to improve the accident-handling ability of power grid regulators [51][52][53]. The triple is the standard representation of the knowledge graph, namely G = (E, R, S), where E is the set of entities in the knowledge base, R is the set of relations, and S is the set of triples. Neo4j, FlockDB, and other graph databases are generally used as storage media.
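The triple formulation G = (E, R, S) can be made concrete with a few lines of code. The sketch below is a minimal in-memory triple store with a one-hop pattern query, illustrating how fault-handling knowledge might be organized; all entity and relation names are made up for the example, and a production system would use a graph database such as Neo4j instead.

```python
class TripleStore:
    """Minimal triple store G = (E, R, S): S is a set of (head, relation, tail)."""

    def __init__(self):
        self.triples = set()

    def add(self, head, relation, tail):
        self.triples.add((head, relation, tail))

    def query(self, head=None, relation=None, tail=None):
        """Return triples matching the given pattern (None acts as a wildcard)."""
        return [(h, r, t) for (h, r, t) in self.triples
                if (head is None or h == head)
                and (relation is None or r == relation)
                and (tail is None or t == tail)]

g = TripleStore()
g.add("commutation failure", "occurs_in", "inverter")
g.add("commutation failure", "caused_by", "AC voltage drop")
g.add("valve short circuit", "caused_by", "insulation damage")

# One-hop query: what causes a commutation failure?
causes = g.query(head="commutation failure", relation="caused_by")
```

A fault-diagnosis assistant would chain such pattern queries (symptom to fault to cause to countermeasure) over a much larger graph built by the extraction, mining, and fusion steps described below.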
According to the application areas, knowledge graphs can be divided into general knowledge graphs and industry knowledge graphs. The general knowledge graph covers a wide range of content, focusing on a large number of entities, mainly for search, question answering, and other fields. The industry knowledge graph is only for a specific domain and, according to the needs and characteristics of the domain, provides business functions or solves specific problems [54]. The architecture of the knowledge graph includes its logical structure and the technical architecture adopted in the construction process, as shown in Figure 4 [55]. The knowledge graph framework consists of the following processes [24,56].
Knowledge representation learning: Knowledge must be reasonably represented before it can be processed by the computer [57]. Many scholars have improved on early representation models and suggested PTransE, RotatE, and RDF attribute graph models. Furthermore, several novel approaches have been proposed to address the difficulties of specific conditions in particular areas, such as the JAPE, ConvE, MTransE, and BootEA models [24].
Knowledge mining: Knowledge mining refers to using link prediction, neural network techniques, and decision tree methods to mine and supplement the implicit knowledge in a knowledge graph; it is the technical foundation of knowledge reasoning and fusion. It is an essential step in large-scale knowledge graph construction and can be divided into three branches: clue mining, inference, and prediction [58].
Knowledge reasoning and fusion: This refers to improving, expanding, and updating knowledge graphs in real time by deeply mining the implicit relationships between old and new knowledge, within a single knowledge graph or across different knowledge graphs; it is the core step in knowledge graph construction. Building a knowledge graph often requires data from multiple sources, and data from different sources may be crossed, overlapping, and repeated. The purpose of knowledge fusion is to extract useful knowledge and insights from massive data and to fuse knowledge from different sources into one knowledge base [59].
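A minimal sketch of the fusion step described above: triples from two hypothetical sources (the entity names and alias table are invented) are normalized to canonical entity names and merged, so that crossed and repeated records collapse into a single knowledge base:

```python
# A minimal sketch of knowledge fusion: triples from two sources are
# merged into one knowledge base after resolving aliases of the same
# entity. The alias table and entity names are hypothetical.

aliases = {"Xfmr_T1": "transformer_T1", "T1": "transformer_T1"}

def canonical(name):
    """Map an entity name to its canonical form (identity if unknown)."""
    return aliases.get(name, name)

def fuse(*sources):
    """Normalize entity names, then union the triple sets; duplicates drop out."""
    fused = set()
    for triples in sources:
        for h, r, t in triples:
            fused.add((canonical(h), r, canonical(t)))
    return fused

scada  = [("Xfmr_T1", "alarm", "overtemperature")]
ledger = [("transformer_T1", "alarm", "overtemperature"),
          ("transformer_T1", "located_in", "station_A")]

kb = fuse(scada, ledger)
print(sorted(kb))  # the two overlapping alarm records fuse into one triple
```

Real entity alignment is harder than a static alias table (string similarity, embedding-based matching, and human review are all used), but the normalize-then-union structure is the same.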
Knowledge Graph Technology in Power Systems
Currently, the application of knowledge graphs in the electric power field has been widely explored. Knowledge graph technology is mainly used in power system regulation, operation, and fault diagnosis [60]. Reference [61] builds a knowledge graph of power equipment from equipment data to improve management efficiency. Reference [62] proposes a knowledge graph construction method for power grid regulation, applied to fault disposal scenarios. Reference [63] proposes a knowledge graph construction method for the topology of low-voltage distribution networks; by integrating and mining data from the information systems of multiple low-voltage distribution networks, it identifies the transformer-to-household relationships in the system. Reference [64] builds a knowledge graph of a dispatching automation system to help operation and maintenance personnel understand its internal structure and business logic.
In the research on fault diagnosis and fault analysis of power systems, reference [65] built a knowledge graph of power equipment defects from defect record texts to enable retrieval of similar fault records. Reference [51] proposes a multi-level, multi-category knowledge graph application framework for power grid fault-processing decision support and preliminarily expounds the key technologies and solutions within the framework. Reference [55] analyzed historical cable fault cases, processed the structured data, extracted the relevant fault characteristics, constructed a cable fault knowledge graph, and then used AI techniques to build a cable fault diagnosis system for rapid analysis and diagnosis of cable line faults. Reference [24] proposes an approach for building a fault knowledge graph from substation alarm information; the constructed grid accident behavior graph is shown in Figure 5. This method applies knowledge graph technology to knowledge mining of substation alarm information. In addition, equipment fault diagnosis based on deep learning models has also received wide attention and research effort in recent years [66,67].
In reference [68], a multi-modal semantic model based on deep learning and knowledge graphs is proposed by combining knowledge graph technology with a deep learning model. The overall framework of the model is shown in Figure 6; it mainly comprises the construction and application of a multi-modal knowledge graph and the application of the YOLOv4 object detection algorithm. Simulation results show that this approach can support intelligent fault diagnosis decisions and improve the efficiency of daily operation, maintenance, and management of power grids. At present, HVDC transmission systems still rely largely on equipment monitoring and manual analysis, and fault anomaly analysis lacks intelligent means. The status assessment, fault analysis, and disposal of DC systems depend heavily on the operation and maintenance experience and skill of on-site personnel [69]. With the intelligent upgrading and digital transformation of power equipment, advanced information transmission, widely deployed sensors, and various management platforms have accumulated a large amount of DC-system-related data. On this basis, an AI-assisted decision service platform for the operation and maintenance of DC equipment, built on such a knowledge base, has been developed. The data are preprocessed to extract fault characteristic signals, which are then judged against the knowledge base to determine whether the equipment is faulty, and the result is fed back to the service platform [70].
Discussion
At present, with the growing scale of HVDC transmission systems in the power system, a variety of challenges arise, such as the lack of effective management of massive data and insufficient intelligent fault analysis methods. To realize knowledge-graph-based intelligent fault diagnosis of HVDC transmission systems, the following requirements must be satisfied. A large knowledge base must be built, in keeping with the characteristics of knowledge graph technology. However, since safe and reliable operation is always the foremost demand of HVDC transmission engineering, actual fault data of HVDC systems are hard to collect in quantities sufficient for such a knowledge base. To obtain enough data, various types of HVDC fault data must be generated through extensive simulation and hardware-in-the-loop (HIL) experiments. When analyzing the behavior of HVDC systems in fault states, an appropriate simulation model helps balance accuracy against complexity. Generally, LCC-HVDC simulation models can be divided into the quasi-steady-state model, the dynamic phasor model, and the electromagnetic transient model. The quasi-steady-state model requires the least computation but has relatively low accuracy; the dynamic phasor and electromagnetic transient models are more accurate but impose a greater computational burden. VSC-HVDC simulation models can be divided into the detailed device model, the simplified device model, the variable resistance model, and the average value model. The detailed device model has the highest accuracy but is too complex for large system scenarios; the simplified device, variable resistance, and average value models all require comparatively less computation and suit most scenarios [71,72].
In addition, as the number of sensors in power systems increases, the monitoring data from power equipment sensors are characterized by unstable sample quality and few fault samples. Digital twin technology can be used to comprehensively monitor equipment operating status [73-75]. A digital twin is a simulation process that makes full use of the physical model, sensor updates, operation history, and other data, integrating multidisciplinary, multi-physics, multi-scale, and multi-probability factors; it then completes a mapping in virtual space that reflects the full life cycle of the corresponding physical equipment [76,77]. Based on the dynamic, real-time physical entities of power equipment, digital twin technology constructs a virtual digital model fully mapped to their spatial range and time scale. Combined with multi-source monitoring data, the virtual model can simulate the dynamic, real-time changes of large-scale power equipment and monitor the operating status of equipment across the power system panoramically, improving fault perception and fault diagnosis under fault conditions [78].
At present, research on HVDC transmission systems based on knowledge graph technology is still largely blank. The application of knowledge graphs in power systems is in its infancy: the application scenarios are not yet clear, and key technologies such as knowledge graph construction, knowledge reasoning, and graph completion lack in-depth research. In this work, a new fault diagnosis method for HVDC transmission systems based on knowledge graph technology is proposed; the overall framework is shown in Figure 7. Based on fault information, the approach studies DC system state identification and fault inference analysis using small-sample learning and graph neural networks, and then builds an interpretable DC fault inference model combined with a DC knowledge package to realize typical fault analysis, fault location inference, risk analysis, and recommendation push. The data used in this framework come from both actual operation data and simulation data. The simulation data are generated mainly with power systems computer-aided design (PSCAD) software, in which the components are modeled in detail according to the actual engineering design structure, number of groups, and parameters. Because a Bayesian network handles uncertainty well and can effectively express and fuse multi-source information, the proposed fault diagnosis framework incorporates the various components of fault diagnosis and their solutions into a Bayesian network structure and performs unified processing and integration based on the relevance of the content [79,80].
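To make the Bayesian fusion idea concrete, the sketch below computes a posterior over fault hypotheses from observed symptoms. It uses a naive-Bayes conditional-independence assumption in place of a full Bayesian network, and all priors, symptoms, and likelihoods are made-up illustrative numbers, not values from the proposed framework:

```python
# Minimal Bayesian fault inference: fuse independent pieces of evidence
# (observed symptoms) into a posterior over fault hypotheses.
# All numbers are invented for illustration; a naive-Bayes assumption
# replaces the full Bayesian network of the framework for brevity.

priors = {"dc_line_fault": 0.02, "commutation_failure": 0.05, "normal": 0.93}

# P(symptom observed | fault hypothesis)
likelihood = {
    "dc_line_fault":       {"dc_voltage_drop": 0.95, "ac_current_surge": 0.20},
    "commutation_failure": {"dc_voltage_drop": 0.60, "ac_current_surge": 0.90},
    "normal":              {"dc_voltage_drop": 0.01, "ac_current_surge": 0.02},
}

def posterior(observed_symptoms):
    """P(fault | symptoms) under naive Bayes, normalized over hypotheses."""
    scores = {}
    for fault, prior in priors.items():
        p = prior
        for s in observed_symptoms:
            p *= likelihood[fault][s]
        scores[fault] = p
    z = sum(scores.values())
    return {f: p / z for f, p in scores.items()}

post = posterior(["dc_voltage_drop", "ac_current_surge"])
print(max(post, key=post.get))  # prints "commutation_failure"
```

A genuine Bayesian network would additionally encode dependencies between symptoms and between faults; the normalization and evidence-product structure, however, carry over directly.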
Conclusions and Prospects
The safe operation of HVDC systems plays a crucial role in addressing the imbalanced distribution of resources and maximizing the efficiency of energy utilization, and fast, accurate fault diagnosis is key to ensuring the continuous power supply of long-distance HVDC transmission lines. This work comprehensively summarizes and analyzes current fault diagnosis methods for HVDC transmission systems; its contributions are as follows: (1) converter station faults and DC line faults of HVDC transmission systems are summarized and analyzed, respectively, and the adverse effects that each fault type can cause are also discussed. With the intelligent upgrading and digital transformation of equipment in power systems, there will be great changes in equipment operation inspection, maintenance, production command, and decision making. Large numbers of intelligent terminals and massive amounts of new data bring new challenges to the technical skills of production personnel, whose roles must change and whose abilities must improve. Future research on fault diagnosis of HVDC systems can be carried out in the following directions.
(1) Knowledge graphs can be used to model the whole system, establishing whole-life-cycle analysis and evaluation of the power grid; (2) digital twin technology can be used to simulate the operating state of the whole system in order to analyze and anticipate potential faults; (3) more equipment state measurement devices can be added to support the construction of transparent power grids; (4) the integration of advanced AI algorithms for fault diagnosis can be accelerated to satisfy specific requirements and support the construction of digital power grids; (5) the industrial knowledge graph needs further breakthroughs in the deep semantic representation of logical, causal, and adversative relations; (6) since AI techniques can effectively reduce solution complexity and enhance self-learning ability, they can be used to handle highly nonlinear, strongly correlated problems; time-series or correlation prediction models can be established to improve the efficiency and accuracy of predicting the operating state of power equipment; (7) an HVDC fault diagnosis network based on a cloud computing service platform is a feasible and promising application for diagnosing fault data "in the cloud", which can improve data and information processing speed.
Figure 1. Major fault points of a typical HVDC system.
and maintenance management. It is difficult to further improve efficiency through traditional manual means; a new driving force is urgently needed to reach high-quality security and development by combining digital transformation with intelligent technology. Meanwhile, HVDC fault analysis lacks intelligent methods: state assessment, fault analysis, and disposal of HVDC systems depend heavily on the operation and maintenance experience and skill of on-site personnel, and effective intelligent analysis tools, expert support systems, and auxiliary decision-making means are seriously lacking. Fault diagnosis technology comprises fault feature extraction and fault feature recognition; the classification structure is shown in Figure 2. Compared to traditional troubleshooting methods, AI-based fault diagnosis methods are widely preferred by researchers for their high reliability and small communication requirements, making them a promising tool for HVDC fault diagnosis [41]. Therefore, this work mainly introduces and summarizes AI-based fault diagnosis methods for HVDC systems.
Figure 2. Classification of fault diagnosis methods.
Figure 3. Flow chart of fault detection.
Figure 4. Schematic of the knowledge graph architecture.
Knowledge extraction: Knowledge extraction comprises three main steps: term extraction, relationship extraction, and concept extraction. Term extraction is mainly implemented with methods based on dictionaries, rules, statistics, or machine learning. The technical difficulty of relationship extraction lies in extracting synonymous relationships. Current mainstream concept extraction methods are based on linguistics or statistics [24,56].
Figure 5. Construction process of the power grid accident behavior graph.
Figure 6. Technical framework of the overall research.
• Data acquisition: (a) reliable data quality; (b) high data accuracy; (c) large data volume; (d) sufficient data types.
• Data transmission: (a) high data transmission speed; (b) low data loss during transmission; (c) low transmission noise.
• Data processing: (a) online processing capability; (b) fast processing rate; (c) secure processing environment.
(2) With the rapid development of AI technology, it has received wide attention and application in fault diagnosis; prior knowledge-graph-based fault analysis and diagnosis work in power systems is comprehensively summarized and analyzed. (3) Finally, a new fault diagnosis framework for HVDC transmission systems based on knowledge graph technology is proposed.
Table 1. Characteristics of various fault diagnosis methods.
• Prior fault diagnosis strategies for HVDC transmission systems in recent years are systematically reviewed, with particular focus on the application of AI technology to fault diagnosis of HVDC transmission systems;
• Knowledge graph technology is introduced in detail, along with its applications in power systems; inspired by this, a new fault diagnosis framework for HVDC transmission systems based on knowledge graph technology is proposed;
• Based on the current technical foundations and research directions of AI technology, the application of AI technology to HVDC transmission systems is prospected.
Table 2. Major fault points of a typical HVDC system.
Table 3. Various faults and effects of HVDC transmission systems.
Shadow Art Revisited: A Differentiable Rendering Based Approach
While recent learning based methods have been observed to be superior for several vision-related applications, their potential in generating artistic effects has not been explored much. One such interesting application is Shadow Art - a unique form of sculptural art where 2D shadows cast by a 3D sculpture produce artistic effects. In this work, we revisit shadow art using differentiable rendering based optimization frameworks to obtain the 3D sculpture from a set of shadow (binary) images and their corresponding projection information. Specifically, we discuss shape optimization through voxel as well as mesh-based differentiable renderers. Our choice of using differentiable rendering for generating shadow art sculptures can be attributed to its ability to learn the underlying 3D geometry solely from image data, thus reducing the dependence on 3D ground truth. The qualitative and quantitative results demonstrate the potential of the proposed framework in generating complex 3D sculptures that go beyond those seen in contemporary art pieces using just a set of shadow images as input. Further, we demonstrate the generation of 3D sculptures to cast shadows of faces, animated movie characters, and applicability of the framework to sketch-based 3D reconstruction of underlying shapes.
Introduction
According to the ancient Roman author Pliny the Elder, the very art of painting originates from tracing the edge of a shadow. If art can be defined as creating visual or auditory elements that express the author's imaginative or technical skill, then shadow art represents those skills in play with shadows. Most of us have seen, or at least heard of, "someone" making "something" interesting out of shadows. However, it is usually limited to people playing with their fingers near a lamp, making shadows of rabbits or horses on the wall. In this work, we show how differentiable rendering can be used to generate 3D sculptures that cast mind-boggling shadows when lit from different directions. Figure 2 (a) shows the cover of the book Gödel, Escher, Bach by Douglas Hofstadter, which features blocks casting shadows of different letters when seen from different sides. Kumi Yamashita, one of the most prominent contemporary artists, demonstrated that seemingly simple objects arranged in a certain pattern cast startling silhouettes when lit from just the right direction. An exclamation point becomes a question mark when lit from its side (Figure 2 (b)), and a bunch of aluminum numbers placed in a 30-story office adds up to an image of a girl overlooking the crowd below (Figure 2 (c)). All of these, and other pieces made by Kumi Yamashita, not only please our eyes but also inspire emotion and pose intriguing questions. Tim Noble and Sue Webster have been presenting this type of artwork since 1997, creating projected shadows of people in various positions (Figure 2 (d)). This specifically arranged ensemble shows how readily available objects can cast the clearest of illusions of recognizable scenes (Figure 2 (e)). Figure 2 (f) shows the aquarium of floating characters by Shigeo Fukuda, where the shadows of the fish reveal their names in kanji characters.
Even after such fascinating effects, the current state of shadow art seems well described by Louisa May Alcott, who says "Some people seemed to get all sunshine, and some all shadow. . . ". Shadow art was first introduced to the vision and graphics community by [9], which formally addressed the problem in an optimization framework. Since then, no significant progress has been observed in this direction. This raises the question: can we develop a method that learns to create, or optimizes for, 3D sculptures that generate such artistic effects through their shadows? In this work, we attempt to answer this question through the use of differentiable rendering: instead of trying to realistically render a scene of our creation, we try to reconstruct a representation of a scene from one or more images of it [2]. Our work is mostly inspired by the examples of shadow art shown in Figure 2. Specifically, our objective is to generate 3D shadow art sculptures that cast different shadows (of recognizable objects) when lit from different directions using a differentiable renderer.
Why differentiable rendering? Most learning-based methods for 3D reconstruction require accurate 3D ground truths as supervision for training. However, all we have is a set of desired shadow images in our case. Differentiable rendering based methods require only 2D supervision in the form of single or multi-view images to estimate the underlying 3D shape, thus, eliminating the need for any 3D data collection and annotation.
Contributions. The following are the major contributions of this work.
• We introduce the creation of 3D shadow art sculptures that cast different shadows when lit from different directions using differentiable rendering, just from the input shadow images and the corresponding projection information.
• We demonstrate the efficacy of deploying differentiable rendering pipeline over voxel and mesh based representations to generate shadow art sculptures.
• We show that the proposed framework can create artistic effects that go beyond those seen in contemporary art pieces by generating 3D sculptures using halftoned face images and its sketches drawn from multiple viewpoints.
• To the best of our knowledge, ours is the first work to address shadow art using differentiable rendering.
Organization. We begin by reviewing the relevant related work in Section 2. We then discuss the problem statement more formally and describe both voxel- and mesh-based differentiable rendering pipelines in Section 3. Section 3.4.1 describes the loss functions deployed for optimization. In Section 4, we perform qualitative and quantitative analyses of the results and compare the performance of the proposed framework with that of the state-of-the-art method. Section 5 describes interesting artistic effects and applications of shadow art before we conclude the work in Section 6.
Related Work
Shadows play an essential role in the way we perceive the world and have been central to capturing the imagination of many artists, including stage performers. Several artists have typically used manual, trial-and-error style approaches to create 3D shadow sculptures. However, with the advent of digital design technology, the need for an automated framework is inevitable.
Shadow Art. Shadows in many computer graphics and computer vision applications have been studied from both perceptual (artist's) and mathematical (programmer's) points of view. This started with studying the effect of shadow quality on the perception of spatial relationships in a computer-generated image [21,22]. Pellacini et al. developed an interface for interactive cinematic shadow design that allows the user to modify the positions of light sources and shadow blockers by specifying constraints on the desired shadows [16]. The idea of computing the shadow volume from a set of shadow images evolved after that. This is similar to the construction of a visual hull used for 3D reconstruction: the visual hull is the intersection volume of a set of generalized cones constructed from silhouette images and the corresponding camera locations [3]. Sinha and Pollefeys [18] studied the reconstruction of closed continuous surfaces from multiple calibrated images using min-cuts with strict silhouette constraints.
Relation with the state-of-the-art method. The work closest to ours is by Mitra et al. [9]. They described shadow art more formally by introducing a voxel-based optimization framework that recovers the 3D shape from arbitrary input images by deforming the input shadow images, and they handled inherent image inconsistencies. In this work, we demonstrate the potential of differentiable rendering in generating 3D shadow sculptures solely from arbitrary shadow images, without any explicit input image deformation. Although the associated 3D object might not exist in the real world, the method still creates shadow sculptures that go beyond those seen in contemporary art pieces, casting physically realizable shadows when lit from appropriate directions.
Differentiable Rendering. We briefly review methods that learn 3D geometry via differentiable rendering. These methods are categorized by the underlying representation of the 3D geometry: point clouds, voxels, meshes, or neural implicit representations. In this work, we primarily focus on voxel- and mesh-based representations.
Several methods operate on voxel grids [7,12,15,20]. Paschalidou et al. [15] and Tulsiani et al. [20] propose a probabilistic ray potential formulation. Although they provide a solid mathematical framework, all intermediate evaluations need to be saved for backpropagation, which limits these approaches to relatively small-resolution voxel grids. On one hand, Sitzmann et al. [19] infer implicit scene representations from RGB images via an LSTM-based differentiable renderer, and Liu et al. [6] perform max-pooling over the intersections of rays with a sparse number of supporting regions from multi-view silhouettes. On the other hand, [13] show that volumetric rendering is inherently differentiable for implicit representations, and hence no intermediate results need to be saved for the backward pass. OpenDR [8] roughly approximates the backward pass of the traditional mesh-based graphics pipeline. Liu et al. [5] proposed Soft Rasterizer, which replaces the rasterization step with a soft, differentiable version using a deformable template mesh for training and yields compelling results in reconstruction tasks. We deploy this in our mesh-based differentiable rendering pipeline for rasterization.
Both voxel- and mesh-based representations have their own strengths and weaknesses. In this work, we describe the differentiable rendering optimization framework for both of these 3D representations and discuss which model fits best in different scenarios to create plausible shadow art sculptures.

Method
Problem Formulation
The key idea of our work is to generate an artistic 3D sculpture S that casts N different shadows when lit from N different directions, using a differentiable rendering based optimization pipeline. The prime focus here is to create interesting shadow art effects using the 3D sculpture S. The input to the pipeline is a set X = {X_1, X_2, ..., X_N} of shadow configurations X_i = (I_i, P_i), where I_i is the target shadow image and P_i is the corresponding projection information.
The shadow of an object can be regarded as its projection on a planar surface. Assuming directional lighting, this projection is orthographic when the surface is perpendicular to the lighting direction and perspective otherwise [1]. Obtaining the shadow of an object is equivalent to finding the corresponding silhouette captured by a camera pointing in the same direction as the light source. Therefore, the shadow image I_i is essentially a silhouette. From here on, we use the terms silhouette image and shadow image interchangeably.
The shadow art problem is similar to a multi-view 3D reconstruction problem [4,10], where we try to estimate the 3D structure of an object given its N silhouette views. However, the key differences in shadow art are that (i) the N views can correspond to arbitrary silhouettes (not necessarily of the same object) and (ii) the learned 3D sculpture may bear no resemblance to any real-world object and may simply be abstract art that casts the desired shadows when lit from appropriate directions. Undoubtedly, there exist multiple 3D shapes that can cast the same set of shadows; our concern, however, is just to learn one such 3D sculpture that creates the desired artistic effects through its shadows.
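The classical construction underlying this connection, the visual hull, can be sketched for axis-aligned orthographic views: a voxel survives only if it projects inside every target silhouette. The silhouettes below are arbitrary illustrative shapes, and mutually inconsistent inputs simply carve away more volume:

```python
import numpy as np

# A toy visual-hull construction under orthographic projection on an 8^3
# voxel grid, carved by three axis-aligned binary silhouettes. The
# silhouettes (a disc, a square, and an unconstrained view) are invented
# purely for illustration.

n = 8
jj, kk = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
sil_x = ((jj - 3.5) ** 2 + (kk - 3.5) ** 2) <= 9       # disc, viewed along x
sil_y = np.zeros((n, n), dtype=bool)
sil_y[2:6, 2:6] = True                                  # square, viewed along y
sil_z = np.ones((n, n), dtype=bool)                     # unconstrained view

# Carve: voxel (i, j, k) is kept iff all three of its projections lie
# inside the corresponding silhouettes; broadcasting intersects the cones.
hull = sil_x[None, :, :] & sil_y[:, None, :] & sil_z[:, :, None]

# Re-projecting the hull (logical OR along a view axis) can never spill
# outside a target silhouette, though it may undershoot when the input
# silhouettes are mutually inconsistent.
assert not (hull.any(axis=0) & ~sil_x).any()
print(int(hull.sum()), "voxels survive the carving")
```

This hard intersection is not differentiable, which is precisely why the paper replaces it with gradient-based optimization of a soft shape representation; the hull is nevertheless a useful mental model for what the silhouette constraints admit.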
System Overview
By providing the shadow configuration X = {X_i = (I_i, P_i) | i = 1, 2, ..., N} as input to the pipeline, the objective is to learn the underlying 3D sculpture S, as described earlier. The projection information P_i corresponds to the camera position (and hence, the light source position) associated with shadow image I_i and comprises R_i and t_i, the 3D rotation and translation of the camera, respectively. We start by initialising S with a standard geometry, which is further optimized by minimizing image-based losses such that the rendered silhouette images satisfy Î_i = I_i for all i = 1, 2, ..., N. The prime reason for using differentiable rendering is that it allows gradients to flow directly from images back to the parameters of S to optimize it in an iterative fashion. In other words, it does not require any explicit 3D supervision and optimizes the 3D shape solely from image-based losses. For further simplicity, let the set of target shadow images and the associated projections be denoted by I and P, respectively. In this work, we consider two common representations for 3D shapes, i.e., voxel and mesh based representations. In the following sections, we elaborate the optimization pipelines for voxel and mesh based representations of the 3D object to create visually plausible shadow art using differentiable rendering.
Voxel Based Optimization
In this section, we look at a differentiable rendering pipeline that uses voxels to represent the 3D geometry. A voxel is a unit cube representation of a 3D space. The 3D space is quantized to a grid of such unit cubes, parameterized by an N-dimensional vector containing information about the volume occupied in 3D space. Additionally, it encodes occupancy, transparency, color, and material information. Even though occupancy and transparency probabilities (in the range [0, 1]) are different, they can be interpreted in the same way in order to maintain differentiability during the ray marching operation [2]. A typical rendering process involves collecting and aggregating the voxels located along a ray and assigning a specific color to each pixel based on the transparency or the density value. All the voxels that are located along a ray projecting to a pixel are taken into account when rendering that pixel. However, our objective is to do the inverse, i.e., to find the 3D geometry associated with silhouettes corresponding to different directions.
We assume that the 3D object S is enclosed in a 3D cube of known size centered at the origin. Hence, S can be defined by a learnable 3D tensor V that stores the density values for each voxel. We initialize V with all ones. The color value for each voxel is set to 1 and is kept fixed in the form of a color tensor C. Next, we render S using a differentiable volumetric rendering method described in [17]. To restrict the voxel density values to the range [0, 1], V is passed through a sigmoid activation function σ to obtain V' = σ(V), as described in Equation 1.
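The density squashing of Equation 1 can be sketched as follows (an illustration in plain Python; the actual pipeline applies `torch.sigmoid` to the whole tensor at once):

```python
import math

def sigmoid(x):
    """Elementwise logistic function, mapping any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def squash_densities(V):
    """Apply the sigmoid elementwise to a nested list of raw voxel densities."""
    return [[sigmoid(v) for v in row] for row in V]

raw = [[0.0, 4.0], [-4.0, 10.0]]       # unconstrained learnable densities
print(squash_densities(raw))           # all values now lie in (0, 1); sigmoid(0) = 0.5
```

Because the sigmoid is smooth and monotone, gradients from the rendered images can flow back through it to the raw densities during optimization.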
We then pass V' through the differentiable volume renderer R_vol along with the fixed color tensor C and the associated projection information P to obtain the set of corresponding rendered images Î = R_vol(V', C, P), as described in Equation 2.
The voxel densities V are optimized by minimizing the image-level loss between the set of rendered shadow images Î and the corresponding target shadows in I. The image-level loss L_img is a weighted combination of L_1 and L_2 losses, L_img = λ_1 L_1(Î, I) + λ_2 L_2(Î, I), as described in Equation 3.
Here, λ_1 = 10.0 and λ_2 = 10.0 are the weights associated with the L_1 and L_2 losses, respectively. The resulting voxel based representation of S can finally be converted to a 3D mesh, making it suitable for 3D printing. One simple way to achieve this is by creating faces around each voxel having a density greater than a certain threshold value (as described in [17]).
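A minimal sketch of the image-level loss in Equation 3, assuming L_1 is the mean absolute error and L_2 the mean squared error over pixel values (the exact reductions are our assumption; the paper only fixes λ_1 = λ_2 = 10):

```python
def image_loss(rendered, target, lam1=10.0, lam2=10.0):
    """Weighted combination of L1 (mean absolute) and L2 (mean squared) errors
    between a rendered silhouette and its target, both given as flat pixel lists."""
    n = len(rendered)
    l1 = sum(abs(r - t) for r, t in zip(rendered, target)) / n
    l2 = sum((r - t) ** 2 for r, t in zip(rendered, target)) / n
    return lam1 * l1 + lam2 * l2

# Identical images incur zero loss; any mismatch is penalised by both terms.
print(image_loss([1.0, 0.0, 1.0], [1.0, 0.0, 1.0]))  # 0.0
```

The L_1 term keeps gradients informative for near-binary silhouettes, while the L_2 term penalises large pixel errors more strongly.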
Mesh Based Optimization
In this section, we propose to use mesh based differentiable rendering to meet our objective. The entire workflow is described in Figure 3. The 3D object S can be represented as a mesh M(V, F), where V is a set of vertices connected by a set of triangular faces F that define the surface of S.
We start by initializing a source mesh S_src = M(V_src, F_src) with an icosphere consisting of |V_src| vertices and |F_src| faces. The idea is to learn the per-vertex displacements V_d that deform S_src into the final desired mesh, which casts the desired shadows (silhouettes) when lit from appropriate directions. This is achieved by rendering the deformed mesh S_def = M(V_def, F_def), where V_def = V_src + V_d, through a mesh-based differentiable silhouette renderer R_silh (as described in [17]) from the associated projections P, as described in Equation 4.
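The deformation step above keeps the face list fixed and learns only per-vertex displacements; it can be sketched as follows (the helper is illustrative, not the paper's code):

```python
def deform_vertices(v_src, v_d):
    """Compute V_def = V_src + V_d for lists of 3D vertex coordinates.
    The mesh connectivity (faces) is unchanged by the deformation."""
    return [[s + d for s, d in zip(vs, vd)] for vs, vd in zip(v_src, v_d)]

v_src = [[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]]    # two vertices of the icosphere
v_d   = [[0.1, 0.0, 0.0], [0.0, -0.25, 0.0]]  # learned displacements
print(deform_vertices(v_src, v_d))            # [[0.1, 0.0, 1.0], [0.0, 0.75, 0.0]]
```

Because the topology never changes, gradients of the image losses with respect to V_d are well defined throughout optimization.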
Loss Function
The source mesh is optimized by minimizing the image-level loss L_img (described in Equation 3) and a normal consistency loss, and by imposing Laplacian and edge-length regularisation.
Normal consistency. We use a normal consistency loss to ensure smoothness in the resulting 3D sculpture. For a mesh M(V, F), let e = (v_x, v_y) be the connecting edge of two neighboring faces f_x = (v_x, v_y, a) and f_y = (v_x, v_y, b), such that f_x, f_y ∈ F, with normal vectors n_x and n_y, respectively. If E is the set of all such connecting edges e, the normal consistency over all such neighbouring faces f_x and f_y is given by Equation 5:

L_norm = (1 / |E|) Σ_{e ∈ E} (1 - cos(n_x, n_y)).

Laplacian regularisation. In order to prevent the model from generating large deformations, we impose uniform Laplacian smoothing [11], as described by Equation 6:

L_lap = (1 / |V|) Σ_{v_i ∈ V} || v_i - (1 / |N(v_i)|) Σ_{v_j ∈ N(v_i)} v_j ||.

Here, |V| is the number of vertices in the mesh M and N(v_i) is the neighbourhood of vertex v_i.
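Uniform Laplacian smoothing penalises each vertex by its distance to the centroid of its one-ring neighbours, averaged over all vertices. A plain-Python sketch (the adjacency encoding is our assumption; PyTorch3D's `mesh_laplacian_smoothing` does this on tensors):

```python
def laplacian_loss(vertices, neighbours):
    """Mean distance of each vertex to the centroid of its neighbours.
    `neighbours` maps a vertex index to the indices of its one-ring."""
    total = 0.0
    for i, v in enumerate(vertices):
        ring = [vertices[j] for j in neighbours[i]]
        centroid = [sum(c) / len(ring) for c in zip(*ring)]
        total += sum((a - b) ** 2 for a, b in zip(v, centroid)) ** 0.5
    return total / len(vertices)

# A vertex sitting exactly at the centroid of its neighbours contributes 0.
verts = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]]
nbrs = {0: [1, 2], 1: [0], 2: [0]}
print(laplacian_loss(verts, nbrs))
```

Minimizing this term pulls every vertex toward the average of its neighbours, discouraging large local deformations without changing the mesh topology.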
Edge-length regularisation. Edge-length regularisation is included to prevent the model from generating flying vertices and is given by Equation 7:

L_edge = (1 / |E|) Σ_{(v_x, v_y) ∈ E} || v_x - v_y ||².
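A sketch of the edge-length regularisation, assuming it penalises the mean squared length of mesh edges (this squared form mirrors PyTorch3D's `mesh_edge_loss` and is our assumption about Equation 7):

```python
def edge_length_loss(vertices, edges):
    """Mean squared edge length over a list of (i, j) vertex-index pairs."""
    total = 0.0
    for i, j in edges:
        total += sum((a - b) ** 2 for a, b in zip(vertices[i], vertices[j]))
    return total / len(edges)

verts = [[0.0, 0.0, 0.0], [3.0, 4.0, 0.0]]
print(edge_length_loss(verts, [(0, 1)]))  # 25.0 (squared length of the single edge)
```

A vertex that "flies" away from the surface stretches all of its incident edges, so this term grows quadratically with the excursion and suppresses the spikes seen in the ablation (Figure 5 (a)).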
Finally, the overall loss function is as described in Equation 8:

L = λ_a L_img + λ_b L_norm + λ_c L_lap + λ_d L_edge.

Here, λ_a = 1.6, λ_b = 2.1, λ_c = 0.9, and λ_d = 1.8 are the weights associated with the losses L_img, L_norm, L_lap, and L_edge, respectively.
Implementation Details
The aforementioned differentiable rendering pipelines are implemented using PyTorch3D [17]. As an initialisation for the mesh, we use a level 4 icosphere composed of 2,562 vertices and 5,120 faces. For the voxel based rendering pipeline, we assume that the object is inside a cube (a grid of 128 × 128 × 128 voxels) centered at the origin with a side length of 1.7 world units. We train the voxel based optimization pipeline with custom silhouette images of size 128 × 128 for 2000 epochs with a learning rate of 1 × 10^-4, and the mesh based pipeline for 500 epochs with a learning rate of 1 × 10^-2. The training is performed on an NVIDIA Quadro RTX 5000 with 16 GB memory.
Experimental Analysis
In this section, we perform an extensive analysis over the results obtained using voxel and mesh based differentiable rendering pipelines to create plausible shadow art effects. We start by discussing the evaluation metrics and perform ablation studies to understand the effect of various loss terms in the design.
Evaluation Metrics
Following our discussion in Section 3.1, we assess the quality of the silhouettes (shadow images) obtained through the 3D sculpture S as per the projections P. To compare the rendered silhouette images with the target silhouette images (representing shadows), we use Intersection over Union (IoU) and Dice Score (DS). Additionally, we need to quantify the quality of the 3D sculpture S obtained after optimization. Since we do not have any ground truth for the 3D shape and this is an optimization framework, we need a "no reference" quality metric. Therefore, we use the normal consistency evaluated over S to assess the quality of the mesh.

Figure 5 depicts the qualitative effect of the different loss terms used in the optimization pipeline. The underlying mesh in this figure corresponds to the arrangement shown in Figure 4 (c). The image based loss L_img alone is not sufficient for generating plausible 3D sculptures, as they are expected to suffer from distortions due to flying vertices (spike-like structures in Figure 5 (a)) or large deformations. Since we do not have any ground truth for explicit 3D supervision, we examine the effect of including regularisation in the objective function. Figure 5 (b) shows that the spikes are reduced by introducing edge-length regularisation. Further, as shown in Figure 5 (c), Laplacian smoothing prevents the sculpture from experiencing very large deformations. Finally, the normal consistency loss ensures further smoothness in the optimized surface. Figure 5 (d) shows the result obtained by applying all the aforementioned regularisation terms along with the image based loss. The resulting quality of the mesh validates our choice of loss terms.
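The silhouette-overlap metrics above can be sketched for binary masks stored as flat 0/1 lists (an illustration, not the paper's evaluation code; nested image layouts can be flattened first):

```python
def iou(a, b):
    """Intersection over Union of two binary masks of equal length."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union if union else 1.0

def dice(a, b):
    """Dice Score: 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    total = sum(a) + sum(b)
    return 2.0 * inter / total if total else 1.0

pred   = [1, 1, 0, 0]
target = [1, 0, 1, 0]
print(iou(pred, target), dice(pred, target))  # 1/3 and 0.5
```

Both metrics equal 1 for a perfect match; Dice weights the overlap more generously than IoU, which is why the two are reported together.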
Qualitative and Quantitative Analysis
In this section, we perform a qualitative and quantitative evaluation on a wide variety of shadow images, including those used in [9], to illustrate the versatility of our approach in generating 3D shadow art sculptures represented using both voxels and meshes. For every result in Figure 4 (a)-(d), we show the learned 3D sculptures (voxel and mesh based) along with the respective shadows cast from the different specified directions. We could not include the optimized 3D sculpture from [9] as the associated object file was not downloadable through their optimization tool. We have been able to incorporate both orthogonal (Figure 4 (a, b, c)) and non-orthogonal views (Figure 4 (d) and Figure 1 (b)) to obtain shadows that are consistent with the desired target shadow images. For a quantitative comparison, we also report IoU and Dice score. As depicted in Figure 4, the IoU and Dice Score are comparable for both voxel and mesh based renderings. However, the corresponding voxel based 3D sculptures are not as smooth (lower normal consistency value) as the mesh based 3D sculptures. It is important to note that the underlying voxel representation has been converted to a mesh representation to compute normal consistency values. While [9] have focused only on foreground inconsistencies (marked in orange color), we also show the background inconsistencies (marked in blue color) that appear in some of the rendered shadow images. Ours is an end-to-end optimization approach without any additional editing tool to prune the generated 3D sculpture. In some cases, the mesh based approach is found to produce certain discontinuities near nonconvex regions (Figure 4 (b, d)) for at least one view. This is mainly attributed to the inability of the icosphere to handle sharp discontinuities in the desired shape, especially when regularisation has been imposed (Equation 8).
The voxel based approaches may contain a few outliers (voxels outside the desired 3D shape, as marked in blue in Figure 4 (d)), which is generally not the case with mesh based approaches. However, the mesh based differentiable rendering method lags in handling sharp discontinuities and holes present in the shadow images. While these shortcomings are handled effectively by voxel based methods, they tend to generate discretized 3D sculptures and are often associated with high memory and computational requirements. Overall, the differentiable rendering based optimization for both approaches has been able to generate plausible 3D shadow art sculptures and is observed to outperform [9] in handling shadow inconsistencies to a large extent, without having to explicitly deform the desired shadow images.
Comparison with the State-of-the-art method
We show a qualitative comparison of the results obtained using our voxel based differentiable rendering pipeline and the voxel based optimization tool presented in [9], without any deformation of the target shadow image. In Figure 6, we observe that the shadows rendered using the proposed pipeline are far more consistent with the desired target shadows than those produced by [9]. The authors of [9] argue that finding a consistent configuration for a given choice of input images might be impossible and hence propose to introduce deformations in the input images so as to achieve consistency of the rendered shadow images with the desired ones. However, differentiable rendering based optimization can handle these inconsistencies without causing any explicit change in the target shadow images.
Applications
In this section, we show some additional artistic shadow art creations and an extension to yet another application that can also benefit from our optimization approach. Figure 7 depicts the creation of the faces of well-known scientists and of movie characters like the Minions and Ironman, demonstrating the strength of the differentiable rendering based optimization approach in handling complex objects or scenes with consistency. In addition to binary silhouette images, half-toned images can also be used to generate 3D shadow art sculptures, as shown in Figure 1. Another interesting extension is towards sketch-based modeling [14], where we use hand-drawn sketches of a shape from different viewpoints to automatically create the underlying 3D object. We demonstrate the creation of a flower vase (Figure 8 (a)), a pen-stand (Figure 8 (b)), and a coffee mug (Figure 1 (c)) solely from hand-drawn sketches from three different views.
Conclusion
We have introduced an optimization framework for generating 3D shadow art sculptures from a set of shadow images and the associated projection information. The key idea is to explore the strength of differentiable rendering in creating visually plausible and consistent shadows of rigid objects, faces, and animated movie characters by generating the associated 3D sculpture. We have discussed both voxel and mesh based rendering pipelines and have identified the benefits of each of them for the task at hand. Additionally, we have demonstrated the applicability of the proposed framework in reconstructing 3D shapes from sketches drawn from three different viewpoints. At present, we have primarily considered shadows that are associated with static sculptures and hence are themselves static in nature. Dynamic shadow art can be explored in the near future.
CPEB alteration and aberrant transcriptome-polyadenylation lead to a treatable SLC19A3 deficiency in Huntington’s disease
Description Altered CPEBs and mRNA polyadenylation lead to thiamine deficiency in the brains of patients and mice with Huntington’s disease. Supplementing striatal thiamine in Huntington’s disease Although the underlying mutation causing Huntington’s disease (HD) has been elucidated, treatments for the disease are needed. Pico et al. studied cytoplasmic polyadenylation element binding proteins (CPEBs), finding that altered CPEB1 and CPEB4 proteins in the striatum of patients and mice with HD led to a shift in polyadenylation affecting 17.3% of the transcriptome, including SLC19A3, a thiamine transporter that is associated with biotin-thiamine–responsive basal ganglia disease (BTBGD). Patients and mice with HD had decreased striatal thiamine pyrophosphate. High-dose biotin and thiamine supplementation improved radiological, motor, and neuropathological phenotypes in HD mice, suggesting that the treatment might be useful for patients with HD. Huntington’s disease (HD) is a hereditary neurodegenerative disorder of the basal ganglia for which disease-modifying treatments are not yet available. Although gene-silencing therapies are currently being tested, further molecular mechanisms must be explored to identify druggable targets for HD. Cytoplasmic polyadenylation element binding proteins 1 to 4 (CPEB1 to CPEB4) are RNA binding proteins that repress or activate translation of CPE-containing transcripts by shortening or elongating their poly(A) tail. Here, we found increased CPEB1 and decreased CPEB4 protein in the striatum of patients and mouse models with HD. This correlated with a reprogramming of polyadenylation in 17.3% of the transcriptome, markedly affecting neurodegeneration-associated genes including PSEN1, MAPT, SNCA, LRRK2, PINK1, DJ1, SOD1, TARDBP, FUS, and HTT and suggesting a new molecular mechanism in neurodegenerative disease etiology. 
We found decreased protein content of top deadenylated transcripts, including striatal atrophy–linked genes not previously related to HD, such as KTN1 and the easily druggable SLC19A3 (the ThTr2 thiamine transporter). Mutations in SLC19A3 cause biotin-thiamine–responsive basal ganglia disease (BTBGD), a striatal disorder that can be treated with a combination of biotin and thiamine. Similar to patients with BTBGD, patients with HD demonstrated decreased thiamine in the cerebrospinal fluid. Furthermore, patients and mice with HD showed decreased striatal concentrations of thiamine pyrophosphate (TPP), the metabolically active form of thiamine. High-dose biotin and thiamine treatment prevented TPP deficiency in HD mice and attenuated the radiological, neuropathological, and motor HD-like phenotypes, revealing an easily implementable therapy that might benefit patients with HD.
INTRODUCTION
Huntington's disease (HD) is a devastating hereditary neurodegenerative disorder characterized by atrophy of the basal ganglia, particularly the striatum, and prominent motor symptoms (1). The underlying mutation is an expansion of a polyglutamine (polyQ)-encoding CAG repeat in the Huntingtin (HTT) gene (1), which is ubiquitously expressed, affecting additional brain areas beyond the striatum, as well as other parts of the body (2,3). Although HTT-lowering strategies currently in clinical trials are promising therapeutic strategies (4,5), their use may be limited by the mode of delivery to the affected brain areas and by side effect issues (5,6). It is therefore important to continue investigating the molecular mechanisms by which the triggering mutation elicits toxicity to identify easily druggable targets.
Similar polyQ-encoding CAG mutations in different genes cause spinal-bulbar muscular atrophy, dentatorubral-pallidoluysian atrophy, and multiple dominant spinocerebellar ataxias (SCAs) (7), and there is evidence of toxicity being mediated by both the expanded CAG-containing mRNAs and the polyQ-containing proteins (8,9), the latter showing the propensity to self-aggregate (10,11). One of the few genes able to act as a dual modifier of the toxicities induced by either CAG-repeat mRNA or polyQ in Drosophila models of SCA-3 is Orb2 (9), the ortholog of mammalian CPEB2-4.
Cytoplasmic polyadenylation element binding proteins 1 to 4 (CPEB1 to CPEB4) are RNA binding proteins that recognize transcripts that harbor CPE sequences in their 3′ untranslated region (3′UTR), about 40% of the transcriptome (12,13). CPEBs repress or activate their translation by inducing shortening or elongation of their polyadenine [poly(A)] tail (14). This CPEB-dependent regulation of transcriptome polyadenylation occurs in the cytoplasm and confers an additional layer of posttranscriptional regulation of gene expression (14,15). CPEBs play a key role in early development (14), and they also act in adult neurons to enable synaptic plasticity through prion-like mechanisms (14,16).
Altered CPEBs and subsequent alterations in transcriptome polyadenylation have been associated with various diseases such as cancer (17,18), chronic liver disease (19), epilepsy (20), and autism (13), leading to the identification of new possible therapeutic targets among CPEB-dependent dysregulated genes. However, a potential role of CPEBs in neurodegenerative disorders has not been fully explored.
We noticed that HD-related genes are prevalent among genes that are mistranslated in the absence of CPEB1 (21). This, together with the known ability of CPEBs to modulate CAG/polyQ toxicity in flies (9), led us to characterize the status of CPEBs and of global mRNA polyadenylation in HD as a way to deepen our understanding of the molecular pathogenesis of HD and to explore new possible therapeutic targets.
Striatum of patients and mouse models with HD shows CPEB1/4 protein imbalance
To explore the status of CPEBs in HD, we performed Western blot analysis on postmortem striatal tissue from patients with HD and control subjects. This revealed markedly increased CPEB1 (303%, P = 6 × 10 −3 ) and decreased CPEB4 (51%, P = 1.4 × 10 −5 ) in the striatum of patients with HD (Fig. 1A), whereas no significant changes were observed regarding CPEB2 or CPEB3 (fig. S1A). We then explored whether a similar alteration of CPEBs takes place in mouse models of HD. We first analyzed the widely used R6/1 mouse model, which overexpresses exon1-mutant Htt, resulting in a robust, yet slowly progressing, motor phenotype. Similar to human samples, striatal homogenates from fully symptomatic R6/1 mice showed increased CPEB1 and decreased CPEB4 (Fig. 1B) without changes in CPEB2 or CPEB3 (fig. S1B). Next, we analyzed zQ175 mice, a heterozygous knock-in HD model with CAG expansion in the endogenous Htt gene that better resembles the human HD mutation but does not develop an overt motor phenotype within the maximal (about 2.5 years) life span of a mouse. In this model of premanifest HD, we only observed the decrease in striatal CPEB4 ( fig. S1C). This prompted us to analyze presymptomatic and early symptomatic R6/1 mice. This revealed a tendency to decrease CPEB4 in 3-week-old R6/1 mice, which reaches significance (P = 4 × 10 −3 ) in 6-week-old or older R6/1 mice, whereas the increase in CPEB1 protein content reaches significance (P = 1.4 × 10 −3 ) at the age of 3 months (fig. S1D). These results demonstrate that, at least in R6/1 mice, CPEB4 decrease precedes CPEB1 increase.
To test whether the marked CPEB1/CPEB4 protein imbalance in symptomatic patients and mice with HD is due to matching changes in gene transcription, we performed real-time quantitative polymerase chain reaction (RT-qPCR) analysis (fig. S1, E and F). Regarding CPEB1, we observed a trend toward increased transcript expression in human HD striatum and a significant (P = 5 × 10 −3 ) increase in R6/1 striatum that might account for the increased CPEB1 protein. However, CPEB4 mRNA was unaltered in the striatum of patients with HD and R6/1 mice, suggesting that the observed CPEB4 decrease may be due to posttranscriptional mechanisms. Given the ability of CPEBs to act as physiological prions (16,22), we performed CPEB4 immunohistochemistry in R6/1 mice to test whether the decrease in soluble CPEB4 was due to sequestration into the characteristic nuclear inclusion bodies. R6/1 mice showed homogeneous decrease of the cytoplasmic neuronal CPEB4 staining (Fig. 1C) without colocalization with the Htt-positive intranuclear inclusions (Fig. 1D). We then reasoned that a possible explanation for decreased CPEB4 could be its degradation by calpain, which is up-regulated in HD (23) and degrades its paralog CPEB3 (24). We observed a calpain-dependent decrease of CPEB4 in HD mouse primary neurons stimulated with kainic acid ( fig. S1G). In summary, these results demonstrate that CPEBs are altered in HD, with CPEB1 being increased and CPEB4 being decreased in striatum of symptomatic patients and mice with HD.
Altered transcriptome polyadenylation affects genes linked to major neurodegenerative diseases
We then tested whether the CPEB alteration observed in the striatum of patients and mice with HD correlates with changes in transcriptome polyadenylation. For this, we performed poly(U) chromatography followed by gene chip analysis of a pool of total RNAs from the striata of four R6/1 and wild-type mice. This revealed that R6/1 mice show increased transcript poly(A) tail length in 8.7% of the analyzed genes and decreased transcript poly(A) tail length in 8.6% (Fig. 2A and table S1). We performed gene ontology (GO) analysis [using Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways] of the 1467 genes with absolute poly(A) fold change (FC) above 2 and found that the three terms with significant Benjamini-Hochberg corrected P values were as follows: HD (P = 3.3 × 10^-2), Alzheimer's disease (AD; P = 4.3 × 10^-2), and Parkinson's disease (PD; P = 4 × 10^-2) (Fig. 2B and table S2). This suggests that altered polyadenylation may contribute to the pathogenesis not only of HD but also of other common neurodegenerative disorders. We also performed Ingenuity Pathway Analysis on the genes showing shortening and lengthening of the poly(A) tail separately (fig. S2). Regarding diseases and disorders, the lowest P values were found for the genes with shortened poly(A) tails and corresponded to the terms "Neurological Disease" and "Developmental Disorder." Regarding molecular and cellular functions, shortened-poly(A) genes again corresponded to the terms with the lowest P values, and these included "Cellular Assembly and Organization," "Cellular Function and Maintenance," "Cell Morphology," and "Cell Death and Survival," the latter also being found among lengthened-poly(A) genes. Genes playing key roles in neurodegeneration, such as those mutated in familial forms of AD/tauopathies, PD, or amyotrophic lateral sclerosis (including PSEN1, MAPT, SNCA, LRRK2, PINK1, DJ1, SOD1, TARDBP, and FUS), and HTT itself, showed altered polyadenylation (Fig. 2C and table S1), thus strengthening the notion that altered polyadenylation might play a role across the main neurodegenerative disorders. This may explain why some neurodegeneration-related genes, such as MAPT and GSK3B, have been reported to display detrimental altered protein expression in the brains of patients with HD and in mice without matching alterations in transcription (25,26). Together, we have found that the striatum of symptomatic HD mice shows an alteration of poly(A) tail length in 17.3% of the transcriptome that markedly affects neurodegeneration-associated genes, thus suggesting a new molecular mechanism in the etiology of HD and possibly also of other major neurodegenerative diseases.
Top deadenylated genes include striatal atrophy-linked genes and show decreased protein expression
We analyzed the presence of CPE sequences in the UTR of the genes showing altered polyadenylation, and we observed an enrichment selectively in the genes showing deadenylation (Fig. 3A). Among the most markedly deadenylated genes (FC < -4.0), the percentage of CPE-containing genes was 93% (Fig. 3B). Shortening of the poly(A) tail is associated with diminished translation and decreased protein content (27), thus suggesting a possible CPEB-dependent decrease in the protein expression of deadenylated transcripts. We confirmed reduced protein content of top deadenylated genes such as Autism susceptibility candidate 2 (AUTS2), Rho-associated coiled-coil containing protein kinase 1 (ROCK1), and Kinectin 1 (KTN1) in both HD and R6/1 striatal tissue, despite unaltered transcription (Fig. 3, B and C). Decreased KTN1 may be relevant to the striatal atrophy in HD because a genome-wide association study of common variants affecting the volume of subcortical regions revealed that the size of the striatum is proportional to KTN1 gene expression (28). In addition, among the top deadenylated (FC < -4) genes was SLC19A3 (Fig. 3B), mutation of which causes biotin-thiamine-responsive basal ganglia disease (BTBGD; Online Mendelian Inheritance in Man (OMIM) #607483), a devastating neural disorder with prominent striatal involvement that can, however, be treated with a combination of the vitamins biotin and thiamine (29,30).
Patients with HD show a BTBGD-like thiamine deficiency

SLC19A3 encodes the transmembrane thiamine transporter 2 (ThTr2), one of the transporters of thiamine (vitamin B1) (31). Individuals with BTBGD have decreased cerebrospinal fluid (CSF) thiamine content despite normal thiamine in blood (32,33); bilateral atrophy in the head of the caudate nucleus and of the putamen; and a variety of neurological symptoms, including lethargy, irritability, dystonia,
spasticity, tremor, and chorea, among others. All these symptoms improve upon administration of thiamine (34), to compensate for its decreased transport, and of biotin (vitamin B7) (34), which is believed to increase SLC19A3 transcription because individuals depleted of biotin show decreased SLC19A3 expression in peripheral blood cells (35,36). In view of the observed marked deadenylation of SLC19A3 transcripts (Fig. 3A), we hypothesized that HD might, in part, phenocopy BTBGD due to a decrease in ThTr2 expression. We next confirmed a marked ThTr2 decrease in both striatum and cortex of individuals with HD despite a tendency to increased transcript expression (Fig. 4A). This was mirrored by strongly decreased staining of the protein by immunohistochemistry in both striatum and cortex, which, in agreement with The Human Protein Atlas (proteinatlas.org), revealed neuronal and endothelial localization (Fig. 4B). In the CSF of individuals with HD, we observed decreased content of thiamine monophosphate (TMP), the prevailing form of thiamine in CSF, despite unaltered concentrations of thiamine in blood (Fig. 4C), resembling what has been reported for individuals with BTBGD (32,33). In brain tissue, the predominant form of thiamine is thiamine pyrophosphate (TPP) (37), which is the bioactive form, acting as an enzyme cofactor for several mammalian enzymes in cellular metabolism (30). It is assumed that CSF thiamine deficiency in patients with BTBGD correlates with decreased brain content of the intracellular cofactor TPP, ultimately leading to neuronal dysfunction. To our knowledge, no data on brain TPP content are available regarding patients with BTBGD. However, we analyzed thiamine in postmortem HD striatum and found a marked decrease in TPP concentrations (Fig. 4D). Together, these results demonstrate a thiamine deficiency in HD brain and suggest that individuals with HD might benefit from thiamine and/or biotin supplementation therapy.
HD mice show a BTBGD-like thiamine deficiency
To preclinically test the potential of vitamin supplementation for HD, we first aimed to confirm that HD mouse models also show BTBGD-like features. Slc19a3 transcript expression in mice is essentially restricted to brain endothelium (vastdb.crg.eu), and we observed that ThTr2 protein was mostly absent from the brain vessels isolated from 5-week-old R6/1 mice (Fig. 5A) and its abundance was also reduced in brain vessels from 3-week-old R6/1 mice (fig. S3A) and from 13-week-old zQ175 mice (Fig. 5B). We found, in R6/1 mice, indicators of thiamine deficiency similar to those observed in patients with BTBGD or in experimental rodent models of thiamine deficiency. For instance, the pyrithiamine-induced rat model of thiamine deficiency is characterized by decreased Glucose transporter 1 (GLUT1) in brain vessels and by altered immunostaining with endothelial markers suggestive of vascular fragmentation (38), and we observed both alterations in R6/1 mice (Fig. 5, C and D). Similarly, patients with BTBGD show increased lactate and branched chain amino acids (32,33), which are indicative of diminished activity of the TPP-dependent enzymes pyruvate dehydrogenase and the branched chain α-keto acid dehydrogenase complex, respectively. Increased lactate has already been reported in brains of R6 mice (39), and regarding branched chain amino acids, we found increased isoleucine and leucine and a trend toward increased valine in the striata of R6/1 mice (fig. S3B). Because other biochemical alterations reported in patients with BTBGD include increased content of certain organic acids such as 3-hydroxybutyric acid, glutaric acid, or 4-hydroxyphenyllactic acid (33), we analyzed these and other organic acids in the striata of R6/1 mice and detected increased 3-hydroxybutyric and glutaric acids and a trend toward increased 4-hydroxyphenyllactic acid (fig. S3C).
Next, we investigated whether the striata of HD mice showed decreased TPP and whether this reverts upon thiamine and biotin supplementation. We also tested whether, as postulated (35,36), chronic administration of biotin at high dose results in increased SLC19A3 transcript. This was confirmed in the striata of 5-month-old R6/1 mice that received biotin (5 mg/kg per day) in the drinking water starting at the age of 3 weeks (fig. S3D). We also confirmed that, similar to patients with HD, R6/1 and zQ175 mice also showed decreased striatal concentration of TPP (Fig. 5, E and F). We also found that combined therapy of biotin and thiamine in the drinking water (B+T; see Materials and Methods) at doses similar to those used for patients with BTBGD (34), starting at the age of 3 weeks, when striatal CPEB4 decrease begins to be observed (fig. S1D) and the decrease of ThTr2 is already noticeable in vessels (fig. S3A), prevented the decreased striatal content of TPP in both R6/1 and zQ175 mice (Fig. 5, E and F).
High B+T improves radiologic, neuropathologic, and motor phenotypes of HD mice
Then, we tested whether B+T treatment, which normalizes TPP in striatum of HD mice, was also able to improve any of their HD-like phenotypes. Although zQ175 mice do not show an overt motor phenotype, they do display striatal atrophy and phosphocreatine spectroscopy alteration (40). We performed magnetic resonance imaging (MRI) on untreated 17-week-old zQ175 mice to verify striatal atrophy (Fig. 6A) and on treated and untreated 24-week-old zQ175 mice to test the possible efficacy of B+T treatment on the striatal atrophy. We found that B+T prevented the additional striatal atrophy experienced by untreated zQ175 mice (Fig. 6B). The increase in striatal phosphocreatine signal seen by magnetic resonance spectroscopy in untreated zQ175 mice was not seen in B+T-treated zQ175 mice (Fig. 6C).
We then tested the effect of B+T treatment on the overt motor phenotype and neuropathology of R6/1 mice. The motor coordination deficit detected with the rotarod test in 13- and 18-week-old untreated R6/1 mice did not take place in B+T-treated R6/1 mice (Fig. 6D). The limb strength deficit of R6/1 mice detected in the inverted grid test at the age of 18 weeks was prevented by B+T (Fig. 6E). Body weight and life span of R6/1 mice were not substantially affected by B+T
(fig. S3, E and F). Last, we assessed the effect of B+T on different neuropathological readouts of R6/1 mouse brains. We immunostained sagittal sections of treated and untreated 4.5-month-old R6/1 mice for dopamine- and cyclic adenosine monophosphate-regulated phosphoprotein 32 (DARPP32), a striatal marker previously used to analyze atrophy in HD mice (41), and for cleaved caspase 3 to detect apoptotic cells (42). Immunostaining revealed that B+T prevented both the striatal atrophy (Fig. 6F) and the increased number of apoptotic neurons (Fig. 6G) seen in untreated R6/1 mice. Together, these results demonstrate that a B+T treatment similar to the one that ameliorates disease in patients with BTBGD is able to prevent the brain thiamine deficiency observed in HD mice and to attenuate their radiological, neuropathological, and motor HD-like phenotypes, thus supporting the idea that patients with HD might benefit from biotin and thiamine supplementation therapy.
CPEB4 overexpression attenuates R6/1 mouse HD-like phenotypes and ThTr2 deficit
To obtain mechanistic evidence that the observed pathogenic decrease of ThTr2 in R6/1 mice is related to the described alterations in CPEB1 and CPEB4, we performed mouse genetic analyses, taking advantage of previously generated CPEB1-deficient [CPEB1 heterozygous knockout (KO), CPEB1 +/− ] (19) and CPEB4-overexpressing (CamkII-tTA:TRE-CPEB4, TgCPEB4) (13) mice. These mice were bred with R6/1 mice to generate R6/1:CPEB1 +/− and R6/1:TgCPEB4 mice. We first verified the attenuation of the CPEB1 increase and CPEB4 decrease seen in R6/1 mice in R6/1:CPEB1 +/− and R6/1:TgCPEB4 mice, respectively (fig. S4, A and B). We then explored whether any of these attenuations affect the HD-like motor phenotype of R6/1 mice. We found that R6/1:TgCPEB4 mice did not demonstrate the motor coordination deficit and the hypoactivity observed in the rotarod and open field tests seen in R6/1 mice with respect to wild type (Fig. 7, A and B). Of note, the decrease of ThTr2 (fig. S4F) precedes CPEB1 alterations, with CPEB4 changes coinciding with or preceding both the decrease of ThTr2 and the appearance of symptoms. In agreement with the ability of Orb2 (the Drosophila ortholog of CPEB2 to CPEB4) to act as a modifier of CAG/polyQ toxicity in fly models (9), we found that the striatal atrophy and the number of apoptotic cells observed in R6/1 mice were attenuated in R6/1:TgCPEB4 mice (Fig. 7, C and D). Together, these results demonstrate a positive effect of attenuating the CPEB4 deficit of R6/1 mice, perhaps because this prevents the pathogenic abnormal transcript polyadenylation and altered protein expression of multiple genes, including SLC19A3. To further explore this, we took advantage of the available data on altered transcriptome polyadenylation in the cortex and striatum of CPEB4-modified mice (13) and verified that depletion of CPEB4 resulted in decreased poly(A)-tail length, whereas CPEB4 overexpression resulted in increased poly(A)-tail length of the SLC19A3 transcript (Fig. 7E).
Last, we confirmed that CPEB4 overexpression in R6/1:TgCPEB4 mice attenuated the decrease of ThTr2 observed in vessels isolated from R6/1 mice (Fig. 7F). Together, these results suggest that the decreased CPEB4 is pathogenic, at least in part, because it results in diminished polyadenylation of SLC19A3 mRNA and subsequent ThTr2 decrease.
DISCUSSION
By analyzing the status of CPEBs and of global transcriptome polyadenylation in brains of patients with HD and mouse models of the disease, we identified a decrease of CPEB4 and an increase of CPEB1 that correlate with altered polyadenylation of neurodegeneration-linked genes, thus unveiling a possible molecular mechanism across neurodegenerative diseases. Western blot analysis of CPE-containing
deadenylated transcripts allowed us to detect decreased protein expression of key striatal atrophy effector genes not previously associated with HD, such as KTN1 and SLC19A3. The latter led us to find that HD involves a thiamine deficiency that resembles that of BTBGD. In HD, the thiamine deficiency is due to diminished ThTr2 as a result of decreased protein synthesis from SLC19A3 transcripts (instead of the SLC19A3-inactivating mutations that cause BTBGD). These findings suggest an easily implementable vitamin-based therapy for HD, supported by the reported behavioral and neuroanatomical improvements in HD mice upon biotin and thiamine supplementation. Because patients with BTBGD recover with early biotin and thiamine supplementation, patients with HD might also improve, particularly if treated early in the disease course. Biotin and thiamine therapy has multiple advantages such as safety and full central nervous system accessibility. The safety of the combined high doses of both biotin and thiamine has already been reported for patients with BTBGD (34), and both vitamins are Food and Drug Administration approved. Another advantage of this therapy is its low cost. This, together with easy over-the-counter accessibility, may however become a double-edged sword at the time of clinical testing. Placebo-controlled clinical trials should be launched in the short term to prevent self-prescription that might obscure interpretation of clinical trials, including those aiming to test other therapeutic agents.
Beyond syndromes caused by mutations in genes encoding transporters of thiamine [SLC19A2 (ThTr1) and SLC19A3 (ThTr2)] (33) or biotin (SLC5A6, the sodium-dependent multivitamin transporter SMVT) (43), thiamine and biotin have also been tested or are used for certain neurodegenerative disorders. Wernicke-Korsakoff syndrome (a form of amnesia in long-term alcoholics who rely mainly on alcohol for nutrition) can be stopped by an injection of high-dose thiamine, and there have been suggestions that thiamine may have a beneficial effect in AD (44,45), as supported by a recent clinical trial (46). The therapeutic potential of high-dose biotin, by inducing changes in gene expression (47), has been explored for multiple sclerosis but failed to demonstrate efficacy in the largest cohort of patients tested (48).
Having established here that HD is associated with thiamine deficiency, it would be worth reevaluating in HD some of the biochemical and tissue aspects that are characteristic of thiamine deficiencies, such as alterations in metabolites related to thiamine-dependent enzymes [for example, the increased lactate that has been reported for HD (49)(50)(51)] or the neurovascular alterations that are also known to occur in HD (52). Another aspect that deserves further consideration is whether this thiamine deficiency is brain-restricted or whether it may also affect other organs, and how diet and the change in gut microbiota that takes place in HD (53, 54) might affect age of onset. In this regard, what patients with BTBGD have in common with patients with HD is the fact that thiamine content is decreased in CSF but unaltered in blood. This suggests that intestinal absorption of thiamine is normal in both HD and BTBGD (probably as a consequence of high ThTr1 expression in intestine). However, according to the Human Protein Atlas (proteinatlas.org), ThTr1 is undetectable in brain endothelium, which is the tissue with the highest Slc19a3 protein expression in mice (vastdb.crg.eu). Accordingly, and given that transcript abundance and polyadenylation of Slc19a2 and Slc5a6 are unaltered in R6/1 mice, our model for the efficacy of B+T treatment in HD mice is that thiamine administered at high dose is efficiently absorbed in the intestine and its access from blood to brain parenchyma is also facilitated by the biotin-induced increase of SLC19A3 transcription in brain endothelium that we also demonstrate here.
The observed phenotypic improvement in HD mice upon CPEB4 overexpression suggests that alteration of CPEBs, especially the decrease in CPEB4, leads to the aberrant polyadenylation and subsequently altered protein expression of numerous etiology-relevant genes, such as SLC19A3. Theoretically, therapeutic strategies to correct the decrease in CPEB4 might lead to amendment of pathogenic gene misexpression beyond that of SLC19A3 and might therefore have additional positive effects with respect to those seen with B+T administration. However, pharmacological modulation of CPEB activity is challenging, particularly to counteract the decreased expression and/or activity seen in HD. A more efficient strategy to identify additional therapeutic targets from this study would be a systematic screening of all the neurodegeneration-associated misadenylated transcripts to see which ones might be druggable and then to verify altered protein content in patients with HD and mouse model tissue before preclinical testing, similar to what we have already done for SLC19A3.
There are some limitations of our study and its implications that deserve discussion. For instance, apart from motor symptoms, R6/1 mice also display some of the cognitive and psychiatric symptoms (55,56) of patients with HD, and we have not explored whether these are also affected by the B+T therapy. In addition, given the importance of biotin and thiamine for the correct function of multiple mitochondrial enzymatic activities and the well-documented mitochondrial dysfunction in both HD (57) and BTBGD (33), a nonspecific attenuation of oxidative phosphorylation defects or structural mitochondrial abnormalities by the high-dose B+T therapy may also be contributing to the observed beneficial effects in HD mice, beyond the identified SLC19A3-related deficits. In the absence of disease-associated mitochondrial deficits, high-dose vitamins might also boost some functional and morphological parameters, as evidenced by a trend to increased striatal volume in treated wild-type mice.
SLC19A3-unrelated aspects of the toxicity triggered by expanded CAG-repeat RNA and/or expanded polyQ will not be treated by B+T. There are also experimental therapeutic strategies based on preventing the toxic and early self-aggregation of polyQ (58) that can have a pleiotropic positive effect. Such anti-aggregation-related therapies could be combined with therapies aiming to correct particular alterations, such as the B+T treatment that we demonstrate here, because such combinations may result in synergistic effects.
In summary, this study reveals that alteration of CPEBs and of global mRNA polyadenylation emerges as a possible molecular mechanism in neurodegeneration. This study has pinpointed diminished ThTr2 as a pathogenic effector and revealed a brain thiamine deficiency in patients with HD, suggesting that vitamin supplementation regimes similar to those that benefit patients with BTBGD might be beneficial to individuals with HD.
MATERIALS AND METHODS
Study design
The objectives of this study were to (i) investigate the status of CPEBs in patients with HD and HD mice, (ii) analyze global mRNA polyadenylation in HD using the R6/1 mice HD model, (iii) study the status of SLC19A3 and thiamine in patients with HD, and (iv) determine whether B+T treatment would alleviate HD pathology in HD mouse models. Sample size was determined by availability and previous experience with biochemical and behavioral characterization of the mouse models. A minimum of three individuals (human/mice) per group were used for studies involving statistical analyses, and the n for individual experiments is indicated in the figures. Treated/nontreated mice (see the "Mouse biotin and thiamine treatments" section) were randomly allocated to experimental and control groups at weaning. Blinding was performed during data collection and analysis. Outliers were excluded using SPSS 26.0 (see statistical analysis). For all human studies, sampling was approved by the local ethics committee, and all subjects signed informed consent. For mouse studies, all experiments were performed according to the guidelines of the Animal Ethics Committee and were approved by the government authorities.
Human tissue samples
Human tissue samples used in immunoblot and immunohistochemistry were provided by the Institute of Neuropathology Brain Bank (HUBICO-IDIBELL, Hospitalet de Llobregat, Spain), the Neurological Tissue Bank of the IDIBAPS Biobank (Barcelona, Spain), the Banco de Tejidos Fundación CIEN (BT-CIEN, Madrid, Spain) and the Netherlands Brain Bank (Amsterdam, The Netherlands). Written informed consent for brain removal after death for diagnostic and research purposes was obtained from brain donors and/or next of kin. CSFs were collected in sterile tubes. Total blood was collected in K2E (EDTA) tubes (368801, BD Biosciences). CSF and blood were collected according to Hospital Universitario Ramón y Cajal (Madrid, Spain), Hospital Universitario Virgen del Rocío (Sevilla, Spain), and HUVR-IBiS Biobank (Andalusian Public Health System Biobank and ISCIII-Red de Biobancos PT13/0010/0056) guidelines. All the human samples were sex and age matched (table S3).
Animals
Different mouse models were used, which have been previously reported. Except for the R6/1 mice transgenic for the human exon-1-Htt gene (59), which were used in the B6CBAF1 background, all other mouse lines were used in a pure B6 (C57BL/6J) background: heterozygous knock-in of an expanded CAG tract in exon 1 of the huntingtin gene, zQ175 mice (40), HD94 mice with a tetracycline-conditional transgene encoding exon-1-Htt with an interrupted (CAG)94 repeat (60), CPEB1-deficient mice (CPEB1 heterozygous KO, CPEB1 +/− ) (19), and CPEB4-overexpressing mice (CamkII-tTA:TRE-CPEB4, TgCPEB4) (13). When R6/1 mice are bred with any of the other genetically modified strains, all resulting genotypes present an equivalent mixed B6CBA genetic background (as a result of the B6CBAF1xB6 cross). In the resulting mixed background, the contribution of B6 is expected to be close to 75%, and the disparity from the expected background affects all experimental groups, including the wild-type controls, equally because they are all littermates. All mice were housed in the Center for Molecular Biology "Severo Ochoa" (CBMSO) animal facility, four per cage. Food and water were available ad libitum, and mice were maintained in a temperature-controlled environment on a 12-hour light/12-hour dark cycle with light onset at 08:00. Thiamine content in the chow diet food pellets was 7 mg/kg, according to the manufacturer (SAFE 150, Safe-diets, France). Animal housing and maintenance protocols followed the local authority guidelines. Animal experiments were performed under protocols approved by the CBMSO Institutional Animal Care and Utilization Committee (Comité de Ética de Experimentación Animal del CBMSO, CEEA-CBMSO) and Comunidad de Madrid PROEX 293/15 and PROEX 247.1/20.
Mouse biotin and thiamine treatments
Mice were housed four per cage and given thiamine and/or biotin in the drinking water ad libitum. In a pilot group of mice, the volume of water intake was monitored per cage every 3 days; this revealed that each mouse drank, on average, 4 ml of water per day, regardless of whether water was supplemented with thiamine and/or biotin. To achieve the desired intake of vitamin/kg per day, the concentration of vitamins in the drinking water was calculated assuming that each mouse weighs 25 g. Controls for treatment and genotype were mice receiving plain water and nontransgenic littermates, respectively.
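As a worked example, the drinking-water concentration implied by the text (target dose per kg, assumed 25-g body weight, measured 4-ml/day intake) reduces to simple arithmetic; the function below is an illustrative sketch, not code from the study:

```python
def vitamin_water_concentration(dose_mg_per_kg_day, weight_kg=0.025, intake_l_per_day=0.004):
    """Drinking-water concentration (mg/L) needed to deliver a target dose,
    assuming a 25-g mouse drinking 4 ml of water per day, as stated in the text."""
    daily_dose_mg = dose_mg_per_kg_day * weight_kg
    return daily_dose_mg / intake_l_per_day

# B+T treatment for R6/1 mice: biotin 5 mg/kg per day, thiamine 100 mg/kg per day
biotin_mg_per_l = vitamin_water_concentration(5)      # 31.25 mg/L
thiamine_mg_per_l = vitamin_water_concentration(100)  # 625.0 mg/L
```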
For experiments with R6/1 mice and their wild-type littermates, treatment began at 3 weeks of age, just after weaning. Biotin-only treatment consisted of 10 mg/kg per day; thiamine-only treatment started at a dose of 200 mg/kg per day, which was decreased to 50 mg/kg per day from 18 weeks, and combined biotin and thiamine (B+T) treatment consisted of biotin at 5 mg/kg per day and thiamine at 100 mg/kg per day, the latter being reduced to 25 mg/kg per day at the age of 18 weeks. The reason for decreasing thiamine doses from week 18 is that, in the pilot group of mice (for which the dose was not reduced), a possible toxicity in R6/1 mice was detected from 24 weeks of age, evidenced by an increase in volume drunk and excessive urination. These effects were not observed in the following groups for which thiamine concentrations were reduced to a quarter of the initial concentration. Biotin-only and thiamine-only treatments were analyzed in the pilot experiment, and no evidence of attenuation of the phenotype was observed in the motor coordination test (rotarod) or in the locomotor activity test (open field). For zQ175 mice and their wild-type littermates, B+T treatment consisted of biotin at 5 mg/kg per day with thiamine at 100 mg/kg per day starting at 5 or 18 weeks of age. For RT-qPCR analysis of Slc19a3 transcript, R6/1 and control mice were treated with biotin at 5 mg/kg per day, starting at the age of 4 weeks. The number of animals included in each group is indicated in Results and in the figures.
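The R6/1 B+T schedule described above (biotin constant at 5 mg/kg per day; thiamine 100 mg/kg per day, reduced to 25 mg/kg per day from week 18) can be encoded as a small sketch; the function name and return shape are our own, not from the study:

```python
def bt_doses_r6_1(age_weeks):
    """B+T doses (mg/kg per day) for R6/1 mice as described in the text:
    treatment starts at weaning (week 3); biotin stays at 5 mg/kg per day,
    and thiamine is reduced from 100 to 25 mg/kg per day at week 18 after a
    possible toxicity was seen in a pilot group kept on the initial dose."""
    if age_weeks < 3:
        return None  # no treatment before weaning
    thiamine = 100.0 if age_weeks < 18 else 25.0
    return {"biotin": 5.0, "thiamine": thiamine}
```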
Statistical analysis
Statistical analysis was performed with SPSS 26.0 (SPSS Statistics IBM) and GraphPad Prism version 6.01. Data are represented as means ± SEM with 95% confidence interval. Outliers were plotted individually or not plotted; the exclusion criterion was applied when a point was further than 1.5 × the interquartile range away from the mean. The normality of the data was analyzed by the Shapiro-Wilk or Kolmogorov-Smirnov tests. Homogeneity of variance was analyzed by the Levene test. For comparison of two independent groups, two-tailed unpaired Student's t test (data with normal distribution), Mann-Whitney-Wilcoxon, or Kolmogorov-Smirnov tests (non-normal distribution) were performed. To compare dependent measurements, we used a paired t test (normal distribution) or Wilcoxon signed-rank tests (non-normal distribution). For multiple comparisons, data with a normal distribution were analyzed by one- or two-way analysis of variance (ANOVA) followed by a Tukey's or a Games-Howell post hoc test. Statistical significance of nonparametric data for multiple comparisons was determined by Kruskal-Wallis ANOVA. Enrichment tests were carried out with one-sided Fisher's exact test. Life span was analyzed by log-rank (Mantel-Cox) test and represented with a Kaplan-Meier plot. A cutoff value for significance of P < 0.05 was used throughout the study.
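A minimal sketch of this two-group decision flow using SciPy (the study itself used SPSS and GraphPad): it encodes the stated outlier rule (points further than 1.5 × IQR from the mean; note the classical Tukey rule measures from the quartiles instead) and the normality-based choice between Student's t test and Mann-Whitney U:

```python
import numpy as np
from scipy import stats

def exclude_outliers(x):
    # Exclusion rule as stated in the text: drop points further than
    # 1.5 * interquartile range away from the mean.
    x = np.asarray(x, dtype=float)
    iqr = np.percentile(x, 75) - np.percentile(x, 25)
    return x[np.abs(x - x.mean()) <= 1.5 * iqr]

def compare_two_groups(a, b, alpha=0.05):
    """Shapiro-Wilk for normality; if both groups look normal, Student's
    t test (Levene test deciding equal_var), otherwise Mann-Whitney U."""
    a, b = exclude_outliers(a), exclude_outliers(b)
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    if normal:
        equal_var = stats.levene(a, b).pvalue > alpha
        res = stats.ttest_ind(a, b, equal_var=equal_var)
    else:
        res = stats.mannwhitneyu(a, b, alternative="two-sided")
    return res.pvalue
```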
Impact of Coronary Collateral Circulation on Perioperative Myocardial Damage in High-Risk Patients Undergoing Coronary Artery Bypass Grafting Surgery
Background: Coronary collateral circulation (CCC) is a small vascular formation that allows connection between different parts of an epicardial vessel or other vessels. The presence of collateral circulation contributes positively to the course of coronary artery disease (CAD). The aim of this study was to investigate the effect of collateral circulation on myocardial injury and clinical outcomes during coronary artery bypass grafting (CABG) in a high-risk patient group. Methods: 386 patients who underwent isolated CABG under cardiopulmonary bypass (CPB) were included in the study. Patients were divided into two groups according to their Rentrop scores (n = 225, poor CCC group; n = 161, good CCC group). Myocardial injury and postoperative clinical results were evaluated as endpoints. Results: The mean age was 62.9 ± 7.5 years, and 61.6% of all patients were male. The postoperative 30-day mortality rate was significantly higher in the poor CCC group (4 [1.7%] and 1 [0.6%], P < .001). The frequency of postoperative intraaortic balloon pump (IABP) use (5 [2.2%] and 1 [0.6%], P < .001), low cardiac output syndrome (LCOS) (28 [12.4%] and 10 [6.2%], P < .001), and postoperative atrial fibrillation (35 [15.6%] and 16 [9.9%], P = .038) were significantly higher in the poor CCC group. The 12th- and 24th-hour CK-MB and cTn-I values were found to be significantly lower in the good CCC group. Conclusion: It is inevitable that the CPB circuit and operation have devastating effects on the myocardium in CABG operations. The presence of CCC reduces postoperative myocardial injury, low cardiac output syndrome, and mortality rates.
INTRODUCTION
Any cardiac surgery performed under cardiopulmonary bypass (CPB), including coronary artery bypass grafting surgery (CABG), is associated with cardiac cell damage to a certain extent, regardless of how it is performed. Myocardial damage caused by the created and controlled ischemia-reperfusion periods during the operation is one of the important causes of postoperative morbidity and mortality. In CABG especially, the severity of coronary disease, incomplete revascularization, recent myocardial infarction, and early graft occlusions are possible causes of myocardial injury. Systemic and topical hypothermia is used to minimize the energy consumption of tissues, and therefore myocardial damage, while under CPB. In addition, cardioplegia given antegradely and/or retrogradely before the coronary anastomosis and immediately after placement of the cross-clamp provides myocardial diastolic arrest and transfers the components that the tissue needs through the coronary circulation to the cells. The aim is thus to minimize oxygen consumption and, consequently, to keep cardiac necrosis at a minimum. Although minimizing this damage with innovative methods is the goal, adequate distribution of the cardioplegia solutions throughout the heart tissue may not be possible due to the nature of coronary artery disease (CAD). In this case, cardioplegia solutions cannot efficiently achieve the desired effect on the myocardium. Cardiac troponin I (cTnI) and creatine kinase-myocardial band fraction (CK-MB) are the laboratory tests frequently preferred to detect damage in the myocardial tissue [Aldous 2013; Adabag 2007].
Coronary collateral circulation (CCC) is an important adaptive mechanism for the protection from the ischemic myocardium. Morbidity and mortality are known to be lower in CAD patients with a well-developed CCC [Meier 2007]. We believe that this collateral system may have a significant effect on the preservation of vitality and function of myocardial tissue during coronary artery surgery.
The aim of this study is to evaluate the relationship between preoperatively evaluated CCC, and postoperative myocardial injury and clinical outcomes in high-risk patients who underwent CABG operation.
MATERIALS AND METHODS
The data of 1318 patients who underwent isolated CABG under CPB due to CAD between 2014 and 2018 were analyzed. Among these patients, the preoperative, intraoperative, and postoperative data of 435 patients who were found to be at high risk with reference to the European System for Cardiac Operative Risk Evaluation (EuroSCORE) II scoring system and whose coronary angiography images could be retrieved were analyzed. Patients who had an acute myocardial infarction (AMI) within the 10 days before the operation, who had elevated CK-MB and/or cTnI levels when taken into the operation, or who underwent urgent operation were excluded from the study. In conclusion, a total of 386 patients were included in this retrospective study.
All patients underwent coronary angiography within 6 months prior to the operation, had transthoracic echocardiographic examinations, and had blood samples taken. The laboratory results of the blood samples taken before and after the operation, the durations of CPB and cross-clamping, intraoperative data, hemodynamic measurements obtained during intensive care and ward follow-up, and clinical data were obtained from the database of our institute. Angiography images of the patients were evaluated by 2 cardiac surgeons and an interventional cardiologist. The Rentrop classification was used for CCC, while the SYNTAX scoring system was used for the severity and complexity of CAD [Rentrop 1988; Sianos 2005]. According to the Rentrop scoring system, grade 0: no collateral filling is observed; grade 1: filling of the side branches of the artery supplied by collateral vessels without visualization of the epicardial segment; grade 2: partial filling of the epicardial artery by collateral circulation; grade 3: complete filling of the epicardial artery by the collateral vessels. If more than one collateral system was detected in a patient, the higher CCC score was accepted. Patients were divided into two groups as poor (grade 0-1) and good (grade 2-3) CCC according to the Rentrop scores. The same team that performed the angiographic examination also determined the extent of CAD using the online calculator at www.syntaxscore.com.
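The grading-to-group rule above (highest Rentrop grade across collateral systems; grades 0-1 poor, 2-3 good) can be sketched as follows; the function name is hypothetical:

```python
def rentrop_group(grades):
    """Classify a patient's coronary collateral circulation from one or more
    Rentrop grades (0-3). When several collateral systems are present, the
    highest grade is used, as in the study; 0-1 -> 'poor', 2-3 -> 'good'."""
    if any(g not in (0, 1, 2, 3) for g in grades):
        raise ValueError("Rentrop grades must be integers 0-3")
    return "good" if max(grades) >= 2 else "poor"
```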
All operations were performed under general anesthesia and CPB via median sternotomy, in accordance with our routine clinical practice. Following premedication, anesthesia was induced by giving the patient fentanyl, midazolam, and sodium thiopental, respectively, and volatile anesthetic agents (sevoflurane or desflurane) were used to maintain anesthesia. After systemic heparinization (300 U/kg), the activated clotting time (ACT) was increased to over 480 seconds, and additional doses of heparin were applied to maintain this value. CPB was initiated following arterial and venous cannulation. The patient was cooled to 32-34°C, into moderate hypothermia. Following aortic cross-clamping, diastolic arrest was achieved by administering 12-14 mL/kg of warm blood cardioplegia. Additional cardioplegia doses were given from the aortic root every 15-20 minutes during the operation until the coronary anastomoses were completed. CPB was then stopped, and protamine sulphate was administered to antagonize heparin activity. All patients were admitted to the intensive care unit and were extubated after full clinical stability and adequate wakefulness were achieved. Blood samples were taken for CK-MB and cTnI values at the postoperative 6th, 12th, and 24th hours. All patients underwent a transthoracic echocardiographic examination to check postoperative myocardial function. The patients, divided into two groups according to the Rentrop scores, were evaluated for myocardial injury with reference to CK-MB and cTnI values. Postoperative clinical results were also evaluated as secondary endpoints.
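The weight-based doses in this protocol (heparin 300 U/kg; warm blood cardioplegia 12-14 mL/kg) reduce to simple arithmetic; a hedged sketch, with function names of our own:

```python
def initial_heparin_units(weight_kg):
    # Systemic heparinization at 300 U/kg, before cannulation;
    # further doses are titrated to keep ACT > 480 s.
    return 300 * weight_kg

def cardioplegia_volume_ml(weight_kg, ml_per_kg=13):
    # Induction dose of warm blood cardioplegia, 12-14 mL/kg
    # (midpoint used as the default here).
    if not 12 <= ml_per_kg <= 14:
        raise ValueError("cardioplegia dose is 12-14 mL/kg in this protocol")
    return ml_per_kg * weight_kg
```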
Statistical Analysis
SPSS 25.0 for Windows (SPSS, Chicago, IL, USA) was utilized for the statistical analysis. All data were evaluated with regard to normal distribution using the Kolmogorov-Smirnov test. Continuous variables were compared with the Student t test or Mann-Whitney U test according to the distribution of the data. Categorical variables were compared using the chi-square test. Categorical variables were expressed as percentages and continuous variables as means ± standard deviations. Univariate logistic regression analysis was performed to evaluate the relationship between baseline values and LCOS. Odds ratios (OR) are given with 95% confidence intervals (CI). Baseline variables that were found to be significantly associated with LCOS (P < .05) in univariate regression analysis were determined. Multiple logistic regression analysis was used to assess the independent predictors of LCOS.
RESULTS
A total of 386 patients who underwent CABG under CPB and under elective conditions in our department were included in the study. The mean age was 62.9 ± 7.5 years, and 61.6% of all patients were male. After calculating the Rentrop scores, 225 patients were in the poor CCC group and 161 patients were in the good CCC group. SYNTAX score values were significantly higher in the good CCC group (25.3 ± 10.2 and 30.7 ± 9.5, P = .036). The left ventricular ejection fraction (LVEF) was significantly lower in the good CCC group (57.4 ± 9.9 and 51.8 ± 10.2, P < .001). On the other hand, no significant differences were observed between the groups in terms of age, sex, body mass index (BMI), diabetes, dyslipidaemia, chronic obstructive pulmonary disease (COPD), chronic kidney failure, smoking, and EuroSCORE II values (Table 1).
When the intraoperative data were compared between the groups, cross-clamping and CPB durations were found to be similar (Table 2). Although the number of distal anastomoses was lower in the poor CCC group, the difference was not statistically significant (2.8 ± 1.4 and 3.3 ± 1.6, P = .197). The incomplete revascularization rates were significantly higher in the poor CCC group (14 [6.2%] and 5 [3.1%], P = .025). The frequency of postoperative intraaortic balloon pump (IABP) use was also significantly higher in the poor CCC group (5 [2.2%] and 1 [0.6%], P < .001) (Figure 1 and Figure 2).
DISCUSSION
This study showed that a well-developed coronary collateral system in high-risk patients has a positive effect on the preservation of myocardial viability during the CABG operation. Many previous studies have shown that a well-developed CCC reduces mortality and morbidity in coronary artery disease [Yaylak 2015; Seiler 2013]. According to our literature review, this is the first study examining the effects of CCC on the preservation of myocardial viability in CABG operations.
Myocardial damage is not an unexpected condition in cardiac surgery, and the magnitude of the injury negatively affects medium- to long-term morbidity and mortality [Paparella 2014]. The clinical situation in which the most destructive effects of myocardial damage are observed is postoperative myocardial infarction (PMI). In our study, no significant difference was found between the two groups in terms of PMI occurrence. Even though the collateral circulation contributes to the coronary circulation, its deficiency or weakness is not an independent variable for PMI occurrence, because PMI mainly develops due to early graft failure [Laflamme 2012]. If postoperative myocardial injury is not associated with early graft failure, it is usually clinically asymptomatic and silent [Flu 2010].
CK-MB and cTn-I are the enzymes usually preferred to determine the extent of cardiac damage [Croal 2006]. In a meta-analysis of seven studies, Domanski et al demonstrated that CK-MB or troponin elevations within the first 24 hours after a CABG operation were independently associated with increased medium- and long-term mortality [Domanski 2011]. In our study, the values within the first 24 hours were evaluated, and the statistically significant differences observed between the two groups at the 12th and 24th hours supported our hypothesis. Consistent with these laboratory parameters, similar differences were observed in the clinical results. Postoperative LCOS is another important finding of myocardial injury. This pathology, which is quite challenging in the postoperative period, has several causes, such as advanced age, low LVEF, and incomplete revascularization [Ding 2015]. In our study, LCOS development was significantly lower in the good CCC group than in the poor CCC group, and the higher rate of incomplete revascularization in the poor CCC group reinforces this finding. Although we could not obtain objective data to explain the higher rate of incomplete revascularization in this study, we believe that the vascular structure of the target coronary artery is better protected by a well-developed CCC, which encourages the surgeon to perform the anastomosis. When 30-day mortality was assessed, patients in the good CCC group had significantly lower mortality rates.
A well-developed CCC is known to reduce the size of the affected ischemic area and the associated complications in ischemic heart disease, especially in acute myocardial infarction [McMurtry 2011]. However, there are few studies of CCC in CABG operations. In a study by Nathoe et al, CCC reduced PMI rates in off-pump CABG operations, but the same effect was not observed in on-pump CABG operations [Nathoe 2004]. In contrast to the idea supported by our hypothesis, they stated that the protective effect of collateral circulation may be reduced by cardiac arrest during on-pump surgery. In another report, Caputo et al examined patients who underwent off-pump CABG operations in two groups, with or without collateral circulation. They reported no difference in early to midterm clinical outcomes, a result attributed to fewer risk factors in the group without collateral circulation [Caputo 2008]. In our study, by contrast, we think that evaluating Rentrop grade 1 patients together with Rentrop grade 0 patients, who have no collateral circulation, provides a more homogeneous distribution; indeed, no difference was observed between the two groups in terms of preoperative risk factors. It is not yet fully understood how CCC develops through angiogenesis or, more specifically, arteriogenesis, or which factors play a role in the formation of this adaptive system. Although tissue ischemia is widely believed to trigger the formation of collateral circulation, there is no clinical study proving this hypothesis. Several studies report that hypertension, the severity of coronary stenosis, the location of the lesion, and the duration of its presence correlate with the development of CCC [De Marchi 2011; Piek 1997; Werner 2001]. Similarly, in our study, the incidence of hypertension and the mean syntax score were higher in the good CCC group.
Although the importance of collateral circulation has been repeatedly emphasized by this and similar studies, the question to be asked is whether this phenomenon is being exploited effectively as a therapeutic tool. Coronary collaterals are believed to be present from embryonic life but are considered non-functional structures until the circulation is compromised, as in CAD; nevertheless, a functional coronary collateral structure without CAD was demonstrated angiographically in a study of patients with normal coronary arteries [Wustmann 2003]. In an animal study, Guan et al also reported that natural cell-cycle processes, such as proliferation and apoptosis, are constantly active in this collateral vascular tissue [Guan 2016]. Activation of the collateral circulation within a potentially short period of time may provide additional benefits in the management of stable CAD and CABG operations in the future.
Remineralizing potential of CPP-ACP in white spot lesions – A systematic review
Objective: The aim of this systematic review was to assess the long-term remineralizing potential of casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) in paste form only, compared with fluoride varnish and/or placebo, in both naturally occurring and postorthodontic white spot lesions in vivo. Data Sources: The literature search covered the electronic databases PubMed and Google Scholar from 2005 to 2016. Only articles published in English were included. Randomized controlled trials in which CPP-ACP was delivered in paste form were included. All studies which met the inclusion criteria underwent two independent reviews. Study Selection: Two hundred and ninety-five articles were identified from the search after excluding duplicates. Abstracts of forty-one articles were reviewed independently, and twenty-nine articles were excluded after reading the abstract. Full-text articles were retrieved for fifteen relevant studies; after independent review, three articles were excluded after full-text reading. Finally, twelve studies were selected based on the eligibility criteria. The remineralizing effect of CPP-ACP was compared with placebo, fluoridated toothpaste, and fluoride varnish in randomized controlled trials. Conclusion: A high level of evidence of the remineralizing potential of CPP-ACP on naturally occurring white spot lesions and WSLs after orthodontic treatment was found in comparison with placebo/fluoridated toothpaste and fluoride varnish, without any statistically significant difference. Well-designed RCTs are, therefore, required to improve the level of evidence in this area.
Introduction
Dental caries is a multifactorial disease, the result of a complex interaction between host, agent, environment, and time. It is an infectious disease caused by acidogenic bacteria, leading to dissolution of enamel and dentin (coronal caries) or cementum and dentin (root caries). [1] It is a common dental problem worldwide, affecting 60%-90% of children and the majority of adults. [2] The worldwide contribution of dental caries to the burden of oral diseases is about 10 times higher than that of periodontal diseases and other common oral conditions. Owing to its globally high prevalence, dental caries is termed a "pandemic" disease, characterized by a high percentage of untreated carious cavities causing pain, discomfort, and functional limitations. [3] Dental caries results in the dissolution of apatite crystals and the loss of calcium, phosphate, and other ions, which eventually leads to demineralization of the tooth substrate. [4] Enamel decalcification, or the formation of white spot lesions (WSLs), is the first sign of dental caries, usually appearing as chalky white areas on the tooth surface. The subsurface porosity caused by demineralization gives the lesion a milky appearance that can be found on the smooth surfaces of teeth. [5] The prevalence of WSLs related to orthodontic treatment ranges from 2% to 96%, and 24% of WSLs may progress to cavitated lesions if left untreated. [6] Enamel crystal dissolution begins with subsurface demineralization, creating pores between the enamel rods. The alteration of the enamel refractive index in the affected area of a carious WSL is a consequence of surface roughness, loss of surface shine, and altered internal reflection, all resulting in visual enamel opacity, because porous enamel scatters more light than sound enamel.
[7] If suitable treatment is provided to these lesions, enamel caries can arrest, reharden, and revert to a healthy condition through a remineralization process involving the diffusion of minerals into the defective tooth structure. For this purpose, remineralizing agents such as fluorides, xylitol, bio-active glass, casein phosphopeptide-amorphous calcium phosphate (CPP-ACP), tricalcium phosphate, and self-assembling peptides have been used. [8] CPP-ACP application is one of the many techniques that have been proposed for simultaneously enhancing remineralization and reducing the occurrence of WSLs and dental caries. [9] The clinical use of calcium and phosphate ions for remineralization has not been successful in the past due to the low solubility of calcium phosphates, particularly in the presence of fluoride ions. Insoluble calcium phosphates are not easily applied and do not localize effectively at the tooth surface. In addition, acid is required to produce ions capable of diffusing into enamel subsurface lesions. In contrast, soluble calcium and phosphate ions can be used only at very low concentrations due to the intrinsic insolubility of the calcium phosphates. Hence, soluble calcium and phosphate ions do not substantially incorporate into dental plaque or localize at the tooth surface to produce effective concentration gradients to drive diffusion into the subsurface enamel. [10] To overcome these difficulties, a new calcium phosphate remineralization technology has been developed based on CPP-ACP, in which CPP stabilizes high concentrations of calcium and phosphate ions, together with fluoride ions, at the tooth surface by binding to pellicle and plaque. This calcium phosphate-based remineralization technology has been shown to be a promising adjunct to fluoride therapy in the management of early caries lesions. [11] Casein phosphopeptide forms nanoclusters with amorphous calcium phosphate, thus providing a pool of calcium and phosphate which can maintain the supersaturation of saliva.
Since CPP-ACP can stabilize calcium and phosphate in solution, it also helps buffer plaque pH, and the calcium and phosphate levels in plaque are increased. The calcium and phosphate concentration within subsurface lesions is therefore kept high, which results in remineralization. [12] Systematic reviews in the past have assessed naturally occurring WSLs and postorthodontic WSLs separately. In addition, orthodontic appliances are known to impede oral hygiene, which may also affect the clinical efficacy of CPP-ACP on WSLs. The aim of this systematic review was to assess the long-term remineralizing potential of CPP-ACP in paste form only, compared with fluoride varnish and/or placebo, in both naturally occurring and postorthodontic WSLs.
Structured question
• Does CPP-ACP possess remineralizing potential in WSLs in vivo?
• PICO analysis

The eligibility criteria were set as follows: the review included studies from 2005 to 2016 concerning populations of all age groups. Only randomized controlled trials involving human populations were considered. Studies assessing WSLs due to postorthodontic treatment were also taken into consideration. Case reports, abstracts, editorials, review articles, and non-English articles were excluded, as were animal studies and in vitro studies. Other formulations of CPP-ACP, such as sugar-free gums, lozenges, fluoridated gels, mouth rinse formulations, and antibacterial gels, were also excluded.
Search strategy
The literature search covered the electronic databases: PubMed and Google Scholar. To search databases, strings of search (MeSH) terms, consisting of relevant text words and Boolean links, were constructed. The string of English search terms: "Incipient caries lesion OR early enamel caries OR CPP-ACP OR Tooth mousse OR recaldent AND Fluoride varnish OR remineralization". Our search strategy attempted to identify all trials that could be considered for possible inclusion in this review. The reference lists of all eligible studies were also hand searched for additional relevant studies.
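The Boolean search string quoted above can be assembled programmatically from its component terms; the following is a minimal sketch (the variable names are ours, for illustration only):

```python
# Terms combined with OR, plus the AND-linked tail, reproducing the
# English search string quoted in the Search strategy section.
terms_any = ["Incipient caries lesion", "early enamel caries", "CPP-ACP",
             "Tooth mousse", "recaldent"]
terms_tail = ["Fluoride varnish", "remineralization"]

query = " OR ".join(terms_any) + " AND " + " OR ".join(terms_tail)
print(query)
```

Building queries this way keeps the term lists editable in one place when the search is rerun against each database.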
Data collection and analysis
Two calibrated reviewers independently screened the titles and abstracts (when available) of all identified studies. Once a publication was considered by either author to meet the inclusion criteria, the full-text article was obtained and reviewed. Any disagreement during study selection and data extraction was resolved by discussion and consensus after consulting a third reviewer. Two hundred and ninety-five articles were identified from the search after excluding duplicates. A total of 251 articles were excluded after reading the titles, and abstracts of forty-one articles were reviewed independently. A total of 29 articles were excluded after reading the abstract. Full-text articles were retrieved for fifteen relevant studies; after independent review, three articles were excluded after full-text reading. Finally, twelve studies were selected based on the eligibility criteria.
Data extraction
Data extraction was completed independently by the two reviewers using a specifically designed data extraction form. Quality assessment criteria to evaluate the studies were decided by two review authors in accordance with CONSORT guidelines. The following data were collected:
Quality assessment
Each study was assessed using the evaluation method described in the Cochrane Handbook for Systematic Reviews (Higgins and Green, Cochrane Reviewers' Handbook, 2009). The quality assessment of the included trials was undertaken independently by two reviewers. The domains evaluated were randomization method, allocation concealment, assessor blinding, dropouts, and risk of bias. Each domain was classified as having a low, high, or unclear risk of bias. The overall level of risk for each study was subsequently classified as "High Risk" of bias (if three or more of the four main categories did not record a "Yes"), "Moderate Risk" of bias (if two out of four categories did not record a "Yes"), "Low Risk" (if all four categories were recorded as adequate), or "Unclear" (unclear risk of bias for one or more domains).
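The overall classification rule above can be expressed as a short function. This is an illustrative sketch only: the function name is ours, and since the text does not state how a single inadequate domain is scored, grouping one "no" with "Moderate" is our assumption:

```python
def overall_risk(domains):
    """Classify overall risk of bias from four domain judgements
    ('yes' = adequate, 'no' = inadequate, 'unclear').
    Follows the quality-assessment scheme described in the text;
    treating a single 'no' as Moderate is an assumption of ours,
    since the source leaves that case unspecified."""
    judgements = [d.lower() for d in domains]
    if len(judgements) != 4:
        raise ValueError("expected judgements for exactly four domains")
    if "unclear" in judgements:
        return "Unclear"          # unclear risk in one or more domains
    misses = judgements.count("no")
    if misses == 0:
        return "Low"              # all four categories adequate
    if misses >= 3:
        return "High"             # three or more categories not 'yes'
    return "Moderate"             # one or two categories not 'yes'

print(overall_risk(["yes", "yes", "yes", "yes"]))
```

For example, a trial judged adequate on randomization and blinding but inadequate on allocation concealment and dropout reporting would be classified as Moderate.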
Results
Twelve studies which met the inclusion criteria were taken for the present systematic review [ Table 1].
Study characteristics
Assessment of the age groups included in the studies revealed that nine out of twelve studies considered subjects aged between 12 and 20 years. Three studies included age groups of 12-36 months, [13] 2½-3½ years, [14] and 6-14 years. [15] Of the twelve studies, five were double-blinded, three single-blinded, and one triple-blinded; the rest of the trials did not explicitly mention blinding in the methodology. [16][17][18] Six of the studies reported the concentration of CPP-ACP used, [14][15][16][22][23][24] while the remaining six made no mention of it. [13,[17][18][19][20][21]

Primary outcome

The primary outcome for assessment was the remineralization potential of CPP-ACP on naturally occurring caries and WSLs after orthodontic treatment.
In the present systematic review, among the twelve studies, four evaluated the effect on naturally occurring caries and eight evaluated the effect on WSLs after orthodontic treatment. The measurement methods used among the twelve studies included a combination of the following: International Caries Detection and Assessment System (ICDAS) II (5 studies), DIAGNOdent (3 studies), decayed-missing-filled surfaces (DMFS) (2 studies), qualitative laser fluorescence (3 studies), digital photographs (3 studies), and enamel decalcification index (2 studies) [Table 1].
Among the studies evaluating the effect of CPP-ACP on naturally occurring caries lesions, three showed a significant reduction in caries increment with CPP-ACP compared with placebo. There were significant differences between groups in the mean DMFT index, and a decrease in the mean WSL area was also reported. DIAGNOdent values were also significantly reduced. However, one study reported no significant difference in enamel caries lesion transition with CPP-ACP compared to fluoridated toothpaste.
The remaining eight trials assessed the effect of CPP-ACP on WSLs associated with postorthodontic treatment. These studies reported no significant difference between the intervention and comparison groups. In four studies there was significant improvement in lesion depth over time, but no significant difference between the groups. DIAGNOdent values decreased in one study, and the enamel decalcification index decreased in two studies. One study lacked adequate information regarding the primary outcome.
Secondary outcome
One or more nonserious adverse events, such as minor gastrointestinal symptoms, were recorded in the trial conducted by Bailey et al. [16] Only two studies reported information about the side effects of using CPP-ACP. [22,23] The rest of the studies included in this systematic review did not present information on the incidence of adverse events.
Risk of bias
The risk of bias of the studies included in this review is summarized in Tables 2 and 3. Of the twelve studies which met the eligibility criteria, seven had a low risk of bias, three were judged to have a moderate risk, and two a high risk of bias. The main sources of bias in these studies included inadequate sample size, unexplained allocation concealment, and lack of mention of attrition rates.
Discussion
The primary objective of this systematic review was to determine the remineralizing effect of CPP-ACP by studying the published clinical trials. The present review has highlighted a lack of relevant research with low risk of bias on the effect of CPP-ACP on carious lesions, while suggesting that CPP-ACP has a remineralizing effect on early caries lesions in vivo compared with placebo, fluoridated toothpaste, and fluoride varnish. WSLs resulting from orthodontic treatment were also included in our literature search because they represent the preliminary stage of subsurface enamel demineralization and are generally considered the early stage of the carious process. [25] In contrast to the normal population, however, the oral microecology of orthodontic patients seems to change following the placement of fixed appliances, and the inability to maintain proper oral hygiene may influence the effect of CPP-ACP in this population group. [26] WSLs, which represent the earliest, reversible phase of the caries process, can be treated by conventional approaches, which have the disadvantage of being invasive. [27] Therefore, remineralizing agents can be used to promote an ion-exchange mechanism instead of invasive techniques. [18] CPP-ACP has been reported to promote remineralization by maintaining calcium and phosphate at a supersaturated level relative to saliva and preserving them in proximity to the enamel lesion, thereby decreasing demineralization and enhancing remineralization of enamel lesions. [28] Assessment of the age groups included in the studies revealed that nine out of twelve studies considered subjects aged between 12 and 20 years. The CPP-ACP products used in the included studies were marketed under the trade names "GC Tooth Mousse", "MI Paste", "MI Paste Plus", and "Topacal C-5".
The highest concentration of CPP-ACP currently available in commercial dental products is 10% w/w (e.g., 5% w/w in "Topacal C-5" and 10% w/w in "GC Tooth Mousse", "MI Paste", and "MI Paste Plus"). The strength or concentration used is an important requirement in clinical trials, since CPP-ACP shows a promising dose-related increase in enamel remineralization. [6] The twelve studies included in this systematic review utilized the following methods to assess the primary outcome measures: clinical assessment using the ICDAS criteria or decayed surface/DMFS index, clinical or photographic assessment using the enamel decalcification index, bitewing radiography for proximal caries increment, and readings from fluorescence-based devices (QLF/DIAGNOdent); no single method provides adequate reliability for caries assessment on its own. Four of the studies found that CPP-ACP promotes remineralization of enamel subsurface lesions in the postorthodontic WSL population.
The follow-up time of the studies varied from 1 month to 2 years. Evidence from studies on CPP-ACP suggests that a follow-up period of more than 3 months is usually needed to observe changes in demineralization/remineralization; moreover, a relatively long follow-up is required to determine the efficacy of CPP-ACP.
Reporting of adverse effects due to the use of CPP-ACP was lacking in most of the studies. Safety assessment should always be considered an important and necessary part of a well-designed randomized controlled trial. [29] Quality assessment showed that seven of the twelve trials had a low risk of bias. However, differences in the concentration of CPP-ACP, measurement methods, outcome assessments, follow-up periods, and randomization and blinding methods could have affected the trial results.
Conclusion
Within the limitations of this systematic review, a high level of evidence of the remineralizing potential of CPP-ACP on naturally occurring WSLs and WSLs after orthodontic treatment was found in comparison with placebo/fluoridated toothpaste and fluoride varnish, without any statistically significant difference. Reporting of such trials should follow the CONSORT statement and, in particular, employ blinding to reduce the risk of bias influencing the outcome. Well-designed randomized controlled trials are, therefore, required to improve the level of evidence in this area of research.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
Ornithoglossum pulchrum (Colchicaceae: Colchiceae), a new species from southern Namibia
We describe a new species in the sub-Saharan genus Ornithoglossum Salisb. from southern Namibia. Ornithoglossum pulchrum, from near Aus, is remarkable in having bright to dark pink flowers, a feature previously unknown in the genus. The perigone is almost concolorous apart from a contrasting, pale yellow nectary region, narrowly outlined with darker red, near the base of each tepal. The undulate leaves together with the long filaments, which are nearly as long as the tepals, suggest a relationship with O. undulatum, a widespread species in the western parts of southern Africa, and O. zeyheri from Namaqualand and the northwestern Cape.

* Compton Herbarium, South African National Biodiversity Institute, Private Bag X7, 7735 Cape Town. E-mail: d.snijman@sanbi.org.za. ** Swedish Museum of Natural History, Box 50007, SE-10405 Stockholm, Sweden. *** P.O. Box 193, Windhoek, Namibia. E-mail: manfam@iafrica.com.na. MS. received: 2011-03-15. Bothalia 41,2 (2011).

erecto-patent pedicels 20–35 mm long, actinomorphic, ± widely campanulate, ± 40 mm diam.; tepals equally spreading, 27–32 × 4–5 mm; claw tubular-flattened, ± 4.0 × 1.5 mm; blade lanceolate, faintly 7–9-veined, slightly canaliculate, bright to dark pink, with a pale yellow nectary region narrowly outlined with darker red, becoming paler with age; nectary concave, wide-mouthed, ventral margin simple. Stamens slightly spreading; filaments straight, slightly curved distally, ± filiform but slightly thickened in proximal half, 25–28 mm long, uniformly dark pink; anthers oblong, 2.5 × 0.5 mm, slightly curved, dull yellow. Ovary oblong-globose, 5.0 × 3.5 mm, dark pink; styles free from base, spreading, straight proximally, slightly curved distally, ± 25 mm long, dark pink; stigma capitate, minutely papillate. Capsule elliptical-oblong, shortly and bluntly lobed, 7 × 4 mm (when immature; not known when mature), erect, coriaceous. Seeds unknown. Flowering time: June to early August. Figures 1; 2.
Distribution and ecology: Ornithoglossum pulchrum is currently known from just one locality in the pro-Namib, a broad, undulating plain in southern Namibia (Figure 3). The species has been recorded in ephemeral watercourses in the uplands near Aus, which lie just below the inland escarpment, at approximately 1 380 m. The plants grow in coarse gravel, close to gneiss outcrops of the Namaqua Metamorphic Complex. Lying on the border of the winter and summer rainfall zones, Aus has bimodal rainfall, averaging 85 mm per year. Most rain falls in late summer (January to April), with a second, lower peak in June (Pallett 1995). Precipitation also occurs in the form of occasional fog which moves in from the coast, as well as rare snowfalls. Winds in the region are a powerful climatic force which can severely limit plant growth. Like most other Ornithoglossum species from southern Africa's winter rainfall region, O. pulchrum flowers in winter, usually from mid to late June into August.

FIGURE 1.—Ornithoglossum pulchrum. A, plant in habitat showing sheathing, undulate leaves and compact inflorescence; B, detail of campanulate flower showing long stamens and six large, concave nectaries, each with a simple margin, near base of tepals. Photographs: C. Mannheimer.

FIGURE 2.—Ornithoglossum pulchrum. A, type specimen, Mannheimer CM2710 (WIND); B, specimen on left shows shallowly bilobed corm, specimen on right shows undulate, crispulate-edged leaves and young, upright capsules, Mannheimer CM4004a (WIND). Scale bars: A, B, 100 mm.

Diagnosis and relationships: Ornithoglossum pulchrum is distinguished by its flowers, which are suberect to spreading in a compact raceme that barely exceeds the leaves, and in which the ± filiform filaments are almost as long as the tepals. The flowers are easily distinguishable from those of other Ornithoglossum species by their bright to dark pink colour (47C, 48C in R.H.S. Colour Chart 1966).
This colouring is almost unbroken apart from a pale yellow patch, narrowly outlined with dark red, near the base of each tepal in the region of the nectary. O. pulchrum shares the character of undulate, crispulate-edged leaves with four other taxa in the genus, viz. O. undulatum, O. gracile B.Nord., O. zeyheri B.Nord., and O. parviflorum var. namaquense B.Nord. Compared with the floral characters, however, this vegetative feature is of less taxonomic importance, as quite plane-edged leaves are known in some populations of O. undulatum (Nordenstam 1979, 1982). Nevertheless, the undulate leaves together with the long filaments, which nearly equal the length of the tepals, suggest a relationship with both O. undulatum, a widespread species in the western parts of southern Africa, and O. zeyheri, which is confined to Namaqualand between the Steinkopf region and the lower Olifants River Valley. O. undulatum differs from O. pulchrum in its markedly asymmetric flowers, in which one or two tepals point downwards and the other tepals flare upwards at anthesis. These are slightly smaller than those of O. pulchrum (16–30 × 2–5 mm vs 27–32 × 4–5 mm) and are bicoloured, white in the centre with reddish purple tips. In contrast, the flowers of O. pulchrum are actinomorphic and campanulate, features that are shared with O. zeyheri. Unlike O. pulchrum, however, this species has few and inconspicuous flowers, typically produced in May, with tepals that are short and narrow (12–15 × 1–3 mm) and coloured pale greenish with a purplish tinge towards the base and tip. The regular symmetry of the flowers of O. pulchrum prevents possible confusion with any other species from southern Namibia even in the early fruiting stages, since its withered tepals remain evenly spread around the developing suberect to spreading capsule, unlike those of O. undulatum, which are distinctly reflexed from a markedly down-turned capsule. Both O. zeyheri and O. undulatum share a simple-margined nectary with O. pulchrum, although this feature is variable in O. undulatum, sometimes taking the form of an entire or bifid lobe. Ornithoglossum pulchrum has only been collected twice so far, once in flower and again in the early stages of fruiting; as yet, mature fruits and seeds are not available for comparison with other species. Species from other families that are narrowly endemic to the Aus area are Moraea graniticola Goldblatt (Iridaceae) and Oxalis ausensis R.Knuth (Oxalidaceae), both geophytic herbs, and the succulent shrub Juttadinteria ausensis (L.Bolus) Schwantes (Aizoaceae). M. graniticola and J. ausensis flower only after winter rain and when temperatures begin to rise. O. ausensis is one of four autumn-flowering Oxalis species in the vicinity of Aus.
INTRODUCTION
Ornithoglossum Salisb. is a small, sub-Saharan genus in the family Colchicaceae (Nordenstam 1998). All but one of the eight species recognized by Nordenstam (1982) are concentrated in the western half of southern Africa, mostly in the winter rainfall region. Only O. vulgare B.Nord. is found in southern Africa's eastern parts and as far north as East Africa. Previously the genus was placed in the tribe Iphigenieae (Buxbaum 1936), but in a new classification, based on an analysis of molecular data of the family, the genus now falls within the tribe Colchiceae, together with Colchicum L., Gloriosa L., Hexacyrtis Dinter and Sandersonia Hook. (Vinnersten & Manning 2007).
Considering its small size, the genus shows remarkable diversity in floral morphology. The perianth is actinomorphic or zygomorphic in form and coloured cream to attractive yellow, green, brown or purple, sometimes almost black, and often bicoloured. In species such as Ornithoglossum undulatum Sweet, the flowers are large, showy and sweetly scented, but in O. parviflorum B.Nord. they are small, inconspicuous, dull and unscented (Manning et al. 2002). Other variable features are the shape of the nectaries on the basal part of the tepals and the length and thickness of the filaments. Several micromorphological differences in the pollen and seeds also help to distinguish groups of species. All the species are reported to be highly toxic to stock due to their colchicine-type alkaloids, which accounts for their common name 'slangkop' (Watt & Breyer-Brandwijk 1962).
Ornithoglossum pulchrum
Distribution and ecology: Ornithoglossum pulchrum is currently known from just one locality in the pro-Namib, a broad, undulating plain in southern Namibia (Figure 3). The species has been recorded in ephemeral watercourses from the uplands near Aus, which lie just below the inland escarpment, at approximately 1 380 m. The plants grow in coarse gravel, close to gneiss outcrops of the Namaqua Metamorphic Complex. Lying on the border of the winter and summer rainfall zones, Aus has bimodal rainfall, averaging 85 mm per year. Most rain falls in late summer (January to April) with a second, lower peak in June (Pallett 1995). Precipitation also occurs in the form of occasional fog which moves in from the coast, as well as rare snowfalls. Winds in the region are a powerful climatic force which can severely limit plant growth. Like most other Ornithoglossum species from southern Africa's winter rainfall region, O. pulchrum flowers in winter, usually from mid to late June into August (Nordenstam 1979, 1982). Nevertheless, the undulate leaves together with the long filaments, which nearly equal the length of the tepals, suggest a relationship with both O. undulatum, a widespread species in the western parts of southern Africa, and O. zeyheri, which is confined to Namaqualand between the Steinkopf region and the lower Olifants River Valley. O. undulatum differs from O. pulchrum in its markedly asymmetric flowers in which one or two tepals point downwards and the other tepals flare upwards at anthesis. These are slightly smaller than those of O. pulchrum (16-30 × 2-5 mm vs 27-32 × 4-5 mm) and are bicoloured, white in the centre with reddish purple tips. In contrast, the flowers of O. pulchrum are actinomorphic and campanulate, features that are shared with O. zeyheri. Unlike O.
pulchrum, however, this species has few and inconspicuous flowers which are typically produced in May, with tepals that are short and narrow (12-15 × 1-3 mm) and coloured pale greenish with a purplish tinge towards the base and tip. The regular symmetry of the flowers of O. pulchrum prevents possible confusion with any other species from southern Namibia even in the early fruiting stages, since its withered tepals remain evenly spread around the developing suberect to spreading capsule, unlike those of O. undulatum, which are distinctly reflexed from a markedly down-turned capsule. Both O. zeyheri and O. undulatum share a simple-margined nectary with O. pulchrum, although this feature is variable in O. undulatum, sometimes taking the form of an entire or bifid lobe.
Ornithoglossum pulchrum has only been collected twice so far, once when in flower and again in the early stages of fruiting. As yet, mature fruits and seeds are not available for comparison with other species.
Species from other families that are narrowly endemic to the Aus area are Moraea graniticola Goldblatt (Iridaceae) and Oxalis ausensis R.Knuth (Oxalidaceae), both geophytic herbs, and the succulent shrub Juttadinteria ausensis (L.Bolus) Schwantes (Aizoaceae). M. graniticola and J. ausensis flower only after winter rain and when temperatures begin to rise. O. ausensis is one of four autumn-flowering Oxalis species in the vicinity of Aus.
FIGURE 1.-Ornithoglossum pulchrum. A, plant in habitat showing sheathing, undulate leaves and compact inflorescence; B, detail of campanulate flower showing long stamens and six large, concave nectaries each with a simple margin near base of tepals. Photographs: C. Mannheimer.
Ornithoglossum pulchrum (Colchicaceae: Colchiceae), a new species from southern Namibia
.A. SNIJMAN*, B. NORDENSTAM** and C. MANNHEIMER***

Keywords: Colchicaceae, Colchiceae, new species, Ornithoglossum Salisb., southern Namibia, taxonomy

ABSTRACT
We describe a new species in the sub-Saharan genus Ornithoglossum Salisb. from southern Namibia. Ornithoglossum pulchrum, from near Aus, is remarkable in having bright to dark pink flowers, a feature previously unknown in the genus. The perigone is almost concolorous apart from a contrasting, pale yellow nectary region, narrowly outlined with darker red, near the base of each tepal. The undulate leaves together with the long filaments, which are nearly as long as the tepals, suggest a relationship with O. undulatum, a widespread species in the western parts of southern Africa, and O. zeyheri from Namaqualand and the northwestern Cape.

* Compton Herbarium, South African National Biodiversity Institute, Private Bag X7, 7735 Cape Town. E-mail: d.snijman@sanbi.org.za.
** Swedish Museum of Natural History, Box 50007, SE-10405 Stockholm, Sweden.
|
2018-12-07T19:35:35.165Z
|
2011-12-17T00:00:00.000
|
{
"year": 2011,
"sha1": "43484da5d4011be311c3d71dc475a858f23eb751",
"oa_license": "CCBY",
"oa_url": "https://journals.abcjournal.aosis.co.za/index.php/abc/article/download/54/54",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "43484da5d4011be311c3d71dc475a858f23eb751",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
}
|
78712541
|
pes2o/s2orc
|
v3-fos-license
|
Antiretroviral Therapy that Includes, Protease Inhibitors-Induced Hepatotoxicity: A Review
Introduction
HIV is a retrovirus known to be the primary cause of Acquired Immune deficiency Syndrome (AIDS). Due to the large scale of morbidity and mortality it causes, HIV is fast becoming a major threat in developing countries including the Indian sub-continent. Infection with HIV is associated with prolonged latent period during which the virus continues to actively replicate, usually resulting in symptomatic illness. (1)
Antiretroviral Therapy
The introduction of combination of antiretroviral therapy has led to significant reduction in morbidity and mortality associated with HIV infections. (2) There are different combination therapies presenting activity against both wild-type and multidrug resistant HIV.
Pharmaceutical agents that can be combined to make up highly active antiretroviral therapy (HAART) can be divided into three categories based on their mechanism of action: nucleoside reverse transcriptase inhibitors (NRTIs), non-nucleoside reverse transcriptase inhibitors (NNRTIs) and protease inhibitors (PIs).

Human immunodeficiency virus (HIV) is a retrovirus known to be the primary aetiological agent of Acquired Immunodeficiency Syndrome (AIDS). It is reported that about 39 million people globally are living with HIV. HIV-infected patients frequently present with elevated levels of serum alanine aminotransferase (ALT) and aspartate aminotransferase (AST). This has often been attributed to the hepatic effects of antiretroviral protease inhibitor (PI) drugs. A review of cohort studies investigating the incidence of hepatotoxicity among patients receiving antiretroviral protease inhibitor drugs suggests that the overall rate of ALT and AST elevations is similar among all protease inhibitor drugs. Considering the importance of drug-induced hepatotoxicity as a major cause of liver damage, this review also throws light on the protease inhibitor drugs that induce hepatotoxicity, their mechanisms of liver damage and the clinical scenario.
With the inception of highly active antiretroviral therapy (HAART), the quality of life of HIV-infected individuals is gradually improving. The number of people contracting new infections has been declining globally, and the number with access to HAART is increasing.
Hepatotoxicity
Hepatotoxicity is a general term for liver damage. Medications, including those used to treat HIV infection, may cause hepatotoxicity. Drug-induced hepatic injury is currently responsible for < 50% of cases of acute liver failure in the United States. (3) The mechanisms of antiretroviral-induced hepatic toxicity include dose-dependent toxicity, idiosyncratic reactions, hypersensitivity reactions, mitochondrial toxicity and immune reconstitution. (4) Antiretroviral drugs, particularly nevirapine and ritonavir-boosted protease inhibitors, may cause hepatitis. (5-10) The risk of development of hepatitis is higher in patients with pre-existing liver problems, especially those with HBV and HCV co-infection. (11)(12)(13)
Classification of Hepatotoxicity
Liver toxicity is defined as an increase in Aspartate Aminotransferase (AST) or Alanine Aminotransferase (ALT) levels 5 times above the upper limit of normal (corresponding to WHO grade 3-4 toxicity). There has been discrepancy in the classification of hepatotoxicity.
A standard toxicity grade scale is available and is used by AIDS research groups. Patients with normal AST and ALT levels before treatment are classified with respect to changes relative to the upper limit of normal (ULN).
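To make the grading concrete, the classification just described can be sketched as a small helper. The cut-off values below follow a commonly used DAIDS/ACTG-style grading table and are assumptions for illustration; exact boundaries vary between published scales:

```python
def alt_toxicity_grade(alt, uln):
    """Grade an ALT (or AST) elevation relative to the upper limit of normal (ULN).

    Illustrative DAIDS/ACTG-style thresholds (assumed, not the specific
    scale cited in the text):
      grade 1: 1.25-2.5 x ULN, grade 2: 2.5-5 x ULN,
      grade 3: 5-10 x ULN,     grade 4: >= 10 x ULN.
    Grades 3-4 correspond to the 'severe hepatotoxicity' definition above
    (> 5 times the upper limit of normal).
    """
    ratio = alt / uln
    if ratio >= 10:
        return 4
    if ratio >= 5:
        return 3
    if ratio >= 2.5:
        return 2
    if ratio >= 1.25:
        return 1
    return 0
```

For example, an ALT of 240 U/L against a ULN of 40 U/L (a six-fold elevation) would be classified as grade 3.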
Protease Inhibitors
Ritonavir: Ritonavir has been the most frequently implicated PI to cause hepatotoxicity. Sulkowski et al. recorded severe hepatotoxicity, defined as a rise in transaminases greater than 5 times normal, in 27.3% of patients treated with ritonavir. (14) Cytochrome P450 inhibition is an important factor in ritonavir hepatotoxicity.
Indinavir: Severe acute hepatitis has been reported with indinavir therapy, and this may occur early or late in therapy. (15,16) However, the most common laboratory finding in patients on indinavir is unconjugated hyperbilirubinaemia, seen in up to 40% of patients. (17) This occurs due to inhibition of the enzyme uridine diphosphate (UDP)-glucuronosyltransferase, the enzyme affected by the Gilbert's syndrome mutant allele. (18) Despite this frequency, severe hepatotoxicity has been reported in only 3.4% of patients, in whom therapy must be ceased.
Saquinavir: Severe hepatitis due to saquinavir is uncommon. In clinical trial NV 14256, less than 1% of patients developed hypertransaminasaemia. Levels of saquinavir, when used in combination with ritonavir, may rise 20 times, and the combination is discouraged. (19)

Nelfinavir: It appears to be safer than the other protease inhibitors. A retrospective study of 118 HCV/HIV co-infected patients receiving PI therapy for longer than 3 months was performed. (20) 38% of patients received NFV therapy and the remainder received other protease inhibitors: indinavir 32%, saquinavir 16%, ritonavir 13% and amprenavir 1%. The rate of grade 3-4 hepatotoxicity was 3% in the NFV-treated group compared to 8% in the non-NFV group.
Amprenavir: There are few reports of amprenavir hepatotoxicity in the literature. In a review of data from 358 adults and 268 children enrolled in phase II and III studies severe hepatotoxicity related to amprenavir was rare. (21)
Mechanism of PIrelated Hepatotoxicity
Most PIs are substrates of P-glycoprotein, an ATP-dependent efflux membrane multidrug resistance transporter that can limit their absorption. For example, oral administration of saquinavir, indinavir, or nelfinavir in knockout mice lacking this transporter resulted in two- to fivefold increases in plasma drug concentrations. (37) Higher plasma drug concentrations can therefore produce toxicities in human patients who lack P-glycoprotein. While drug interactions should be examined closely whenever prescribing medication in combination with PIs, this is a particularly important consideration with ritonavir, given its powerful inhibition of cytochrome P450 (CYP) 3A4 and its effects on several other mechanisms of drug interactions. (38) These can lead to increased levels of many coadministered medications, and consequently ADRs. Moreover, there is a potential for interaction with nutritional supplements. (39) Physicians should also be aware that patients with chronic viral hepatitis coinfection have additional impairment of CYP3A activity in the presence of ritonavir, compared to HIV patients without viral hepatitis, even at the low doses of 100 mg/day typically used for pharmacokinetic boosting. (40)
Liver function tests (LFTs)
These tests measure whether the liver is being damaged. (Things that can damage the liver are viral hepatitis, alcohol, medications, and street drugs.) These tests measure alkaline phosphatase, ALT, AST, albumin and bilirubin. It is important to have a baseline measure of liver health, because patients may need to take HIV medications in the future, and some of these medications can cause liver damage.
Hepatitis A, B, and C
The liver is an organ that processes almost everything put into the body, including drugs. In conclusion, the hepatotoxicity of ritonavir, indinavir, saquinavir and nelfinavir became more evident after the introduction of highly active antiretroviral therapy, which initially invariably included protease inhibitors.
Based on this review of drug-induced hepatotoxicity, none of the studies has been able to prove a higher potential for liver toxicity of this particular family of drugs.
Among the PIs, full-dose ritonavir (RTV) has been found in some studies to be more hepatotoxic, (41) although these results have not been confirmed by others. (42,43) In certain cases, RTV has caused fatal acute hepatitis. (44) Several cases of liver toxicity associated with the use of indinavir (IDV) and saquinavir (SQV) have also been reported. (45) Nelfinavir was found to be less hepatotoxic than the other PIs analyzed (RTV, IDV, SQV, APV) in a study evaluating 1052 patients. (46)
|
2019-03-16T13:11:49.883Z
|
2017-01-15T00:00:00.000
|
{
"year": 2017,
"sha1": "4fca9f49e1c658df0655bae7890337e70d52de98",
"oa_license": null,
"oa_url": "https://www.ijcmas.com/6-1-2017/V.S.%20Chopra,%20et%20al.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "58449b70aaa2bd4469339d5767fcf592cba842ce",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
221498755
|
pes2o/s2orc
|
v3-fos-license
|
High sensitivity of Bering Sea winter sea ice to winter insolation and carbon dioxide over the last 5500 years
Anthropogenic CO2 emissions and long-term winter insolation forcing drive Bering Sea ice extent to lowest in the last 5,500 years.
SI Tables
[Fig. S1 caption:] (A) Age-depth model: the grey shaded area represents the likely range of calibrated ages, the grey stippled lines are the 95% confidence intervals, and the red curve is the single 'best' age produced by the model, based on the weighted mean of each depth; (B) sedimentation rates calculated from the 2-sigma median age model. [Separate figure caption fragment:] ...showing spatial patterns of isotope anomalies for "high composite" and "low composite" years.
SI discussion
The age model suggests consistent deposition over the history of this peatland (table S1; fig. S1), but as fig. S1B suggests, deposition decreased during certain periods of the record. These decreases in deposition also increase the age-model uncertainty from multi-decadal and century-scale to multi-century to over half a millennium between dated intervals. The use of plant macrofossils gives us high confidence in the dates obtained, and Bayesian age modeling (38) produces the most likely age for a given depth.
The δ18Oc values primarily reflect the environmental water at the time of plant cellulose synthesis and therefore should record synoptic changes in atmospheric circulation, as mediated by ocean circulation changes and sea-ice extent, and to a lesser degree temperature (17,18,43).
Relative humidity can also impact the stable isotopic signature of the plant cellulose, leading to more enrichment in δ18O of the plant cellulose with lower humidity. This is likely to occur with mosses growing on hummocks, but a recent study from a transect across an Alaskan peatland on a hydrological gradient found that the isotopic differences between the mosses closest to the water table near a stream were not statistically different from those located 40 m from the stream and largely above the water table, when corrected for the local water source (44). Differences emerged between the non-vascular (moss) and vascular (sedge) components where peat cellulose oxygen isotope analysis has previously been carried out, which were attributed to the roots of the vascular plants accessing water from deeper depths as well as stomatal regulation of water loss (44), similar to previous studies (45). However, modern analysis of mosses and sedges on St.
Matthew Island found no statistically significant difference (P<0.0001) between the two (fig. S3), suggesting that under the maritime climate of this small island, relative humidity is high and plant evaporative stress is low. The offset of water to modern plant cellulose was on average -32‰ (table S2; fig. S5). Most of the variability also correlates with the Bering Sea sea-ice extent for that same time period (fig. 4). The one-year apparent lag in the oxygen isotope values compared to the sea-ice extent could suggest that atmosphere-ocean conditions in other seasons could also play a role, or that ocean warming precedes atmospheric circulation changes. The one-year lag correlation between these two records yields a correlation of -0.733 (P<0.00001), suggesting that when sea ice decreases the δ18O of FMAM precipitation increases. The one-year lag may be explained by ocean warming resulting in atmospheric circulation changes, but these atmospheric patterns then become a positively reinforcing feedback that promotes further warming and winter sea-ice loss (48). Surface warming has been shown to be primarily amplified in autumn, when ocean warming from thinner or absent sea ice is transferred to the atmosphere (49). The range of modern δ18O interannual variability is ~3‰ for the annually averaged values and ~7‰ for the FMAM interannual variability, both of which are smaller than the 22‰ inferred precipitation
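The lagged correlation used above can be reproduced on annual series with a small helper; the series in the test below are synthetic, for illustration only, not the paper's data:

```python
import numpy as np

def lagged_correlation(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag].

    A positive `lag` tests whether x leads y. On annual series, the
    one-year lag correlation between sea-ice extent (x) and d18O of
    FMAM precipitation (y) described in the text corresponds to lag = 1.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return float(np.corrcoef(x, y)[0, 1])
```

A perfectly anti-correlated series shifted by one step returns -1.0 at lag = 1, matching the sign convention of the -0.733 value quoted above.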
|
2020-09-05T13:05:41.182Z
|
2020-09-01T00:00:00.000
|
{
"year": 2020,
"sha1": "78dd383466a19b3279b152c733baf0b9dc86b994",
"oa_license": "CCBYNC",
"oa_url": "https://www.science.org/doi/pdf/10.1126/sciadv.aaz9588?download=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f91d7009d5ee184318c702027b6f4cb72b98b475",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Environmental Science"
]
}
|
112806075
|
pes2o/s2orc
|
v3-fos-license
|
DEVELOPMENT AND IMPACT BEHAVIORS OF FRP GUARDER BELT FOR SIDE COLLISION OF AUTOMOBILES
Carbon fiber reinforced plastic (CFRP) laminates are used in a wide range of fields because of their excellent specific strength and specific stiffness. In automobiles, CFRP offers a possibility of weight reduction in automotive structures, which can contribute to improved mileage and thereby reduced carbon dioxide emissions. On the other hand, collision safety must also be clarified when employing CFRP in automotive structures. In this paper, a CFRP guarder belt equipped in the automotive door is developed and examined by experiment and numerical analysis for replacing the conventional steel door guarder beam. In the numerical analysis, a commercial FEM solver (ANSYS) was employed, and laminated shell elements were used for the CFRP guarder belts. The contact between the impactor and the upper surface of the CFRP guarder belt, and between the supporters and the inner surface of the belt, was modeled with Contact element 173 and Target element 170. The experimental relation of impact load to displacement for the CFRP guarder belt agreed well with the numerical result. From the comparison of both results, the numerical method developed here is quite useful for estimating the impact behavior of the CFRP guarder belt.
Introduction
It is well known that CO2 emissions, which are among the greenhouse gases emitted from passenger vehicles such as automobiles, are a major cause of global warming. In the automobile industry, the most effective way to reduce CO2 emissions is to produce fuel-efficient automobiles. To increase the fuel efficiency of an automobile, the most effective approach is to reduce its weight by using lightweight materials such as composites. Carbon fiber reinforced plastics (CFRP) have been widely used in aerospace, industrial goods and other application fields because of their high specific strength and high specific modulus compared with metals. This means that CFRP can contribute greatly to lightening automobiles. At the same time, the safety of automobiles is also very important, and the collision safety of an automobile is evaluated by full-overlap frontal crash, offset frontal crash and side impact tests. In the frontal crash tests, it is possible to absorb the energy by large deformation of the front and rear parts. With increasing interest in lightening automobiles and in securing the safety of passengers, much research on these topics has been performed [1][2][3][4][5]. However, in the side impact test, it is hard to absorb the energy in the same way, because the survival space for passengers is very narrow. Inside the door, a reinforcement member, namely a door guarder beam made of steel as shown in Figure 1, has been installed to absorb impact energy, and its deformation is limited to about 150 mm.
In this study, a CFRP door guarder belt is developed for the purpose of designing impact energy absorption members under side collision, as shown in Figure 2. Figure 2 shows the schematic diagram of impact energy absorption by the CFRP guarder belt. The impact energy is effectively absorbed by installing the CFRP belt between two fulcrums that are free to rotate, converting the vertical impact load of the falling weight into a tensile load. In order to prevent fracture at the support edge of the specimen due to stress concentration, a belt-shaped specimen was adopted. Thin CFRP belt specimens were manufactured from unidirectional prepregs (T800H/Epoxy) by the sheet winding method, and their thickness, width and length were 0.23 mm, 50 mm and 1642 mm, respectively. Figure 3 shows an actual CFRP guarder belt.
Tower drop weight impact test
In order to evaluate the capacity for crash energy absorption and to observe the micro and macro fracture behavior of the CFRP guarder belt, a large drop-tower facility for the impact test was designed, as shown in Figure 3. The CFRP guarder belt received the impact load generated by a freely dropped weight of 100 kg from a height of 12 m. Therefore, the impact speed was approximately 55 km/h just before impact. The impactor was a half cylinder of 100 mm radius and 200 mm width. The impact load on the specimen was measured by a load cell installed behind the rotary pin. In order to investigate the fracture mechanism of the CFRP guarder belt, a high-speed camera was used, and the dynamic strain of the specimen from collision to fracture was measured by strain gauges. The strain gauges were attached at three places: the center of the specimen at the collision point (T1), near the rotary pin (T3) and the middle point between them (T2), as shown in Figure 3. The CFRP guarder belt specimen was supported at both ends on rotary pins of 40 mm diameter. Figure 4 shows the measured horizontal load and the variation of the longitudinal strain at the center of the specimen after impact. The horizontal impact load of the CFRP belt specimen increased nonlinearly, reached its maximum and then instantly dropped to almost zero, as shown on the left side of Figure 4. Within 5.0-5.4 ms, the impact load recovered to 38-40 kN and became zero at a displacement of about 85 mm. The longitudinal strain of the specimen at the impact location also increased nonlinearly from the moment of impact and reached its maximum value just before fracture of the CFRP belt specimen. Figure 5 shows the observed fracture modes and the fracture location of the CFRP guarder belt specimen. In the experiment, the observed fracture mode was fiber breakage across the entire width of the CFRP belt specimen due to the tensile load acting on the whole specimen.
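The quoted impact speed of approximately 55 km/h follows directly from free-fall kinematics; a quick check (neglecting air resistance and guide friction; the 100 kg mass of the weight does not affect the speed):

```python
import math

# Free-fall impact speed for the drop-tower test described above.
g = 9.81                      # gravitational acceleration, m/s^2
h = 12.0                      # drop height, m
v = math.sqrt(2 * g * h)      # impact speed just before contact, m/s (~15.3)
v_kmh = v * 3.6               # converted to km/h (~55)
```

This confirms that a 12 m drop reproduces the roughly 55 km/h side-impact condition targeted by the test.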
Almost all specimens fractured at location T3, suggesting that a larger tensile stress occurred around the rotary pin. Figure 6 shows the comparison of experimental load-displacement curves between the vertical impact and the impact with an offset angle of 15°. The horizontal load for the 15° offset impact increases nonlinearly until a displacement of 80 mm, with almost the same tendency as the vertical impact case. The maximum impact load for the offset impact case is 20% smaller than for the vertical impact case.
Numerical analysis
To simulate the impact response behavior and absorbed energy of the CFRP guarder belt under impact loading, a finite element model was developed using the commercial FEM solver (ANSYS). In this FE analysis, the laminated shell element (SHELL_63) was used for the CFRP guarder belt, and the solid element (SOLID_45) for the impactor and rotary pins. Details of the finite element model are shown in Figure 7. The contact between the impactor and the upper surface of the CFRP guarder belt, and between the rotary pins and the inner surface of the belt, was modeled with Contact element 173 and Target element 170, with a friction coefficient of 0.25. A half model with a symmetry condition was adopted to reduce calculation time. Table 1 shows the material properties of the CFRP guarder belt used for the analysis.
Prediction of the Fracture time of CFRP guarder belt
Next, the fracture time of the CFRP guarder belt was predicted from the results of the numerical analysis by using a fracture criterion for composites. In the case of unidirectionally reinforced CFRP, the material property in the transverse direction is the same as that in the thickness direction; therefore the Tsai-Hill fracture criterion was used in this analysis:

(σ_L / F_L)² − (σ_L σ_T) / F_L² + (σ_T / F_T)² + (τ_LT / F_LT)² = 1    (1)

where F_L, F_T and F_LT are the strength of the CFRP belt along the fiber direction, the strength in the transverse direction and the in-plane shear strength, respectively. Since F_L is much larger than the transverse stress σ_T for the thin CFRP belt specimen, the coupling term can be neglected and equation (1) can be rewritten as

(σ_L / F_L)² + (σ_T / F_T)² + (τ_LT / F_LT)² = 1    (2)

Substituting the occurring stresses in the fiber direction σ_L, the transverse direction σ_T and membrane shear τ_LT, together with the corresponding strengths, into equation (2), the CFRP belt specimen can be considered to fracture when the value of the left-hand side reaches 1.
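The evaluation of this failure index at each time step of the FE results can be sketched as a small function; the numerical strength values in the usage below are illustrative assumptions, not the paper's T800H/epoxy data:

```python
def tsai_hill_index(sigma_L, sigma_T, tau_LT, F_L, F_T, F_LT):
    """Simplified Tsai-Hill failure index for a unidirectional lamina
    under plane stress, with the sigma_L*sigma_T/F_L**2 coupling term
    dropped (valid when F_L >> sigma_T, as for the thin belt specimen).

    Fracture is predicted at the first time step where the index
    reaches 1.
    """
    return (sigma_L / F_L) ** 2 + (sigma_T / F_T) ** 2 + (tau_LT / F_LT) ** 2
```

For instance, with an assumed transverse strength F_T of 50 MPa, a pure transverse stress of 50 MPa gives an index of exactly 1 (fracture), while 25 MPa gives 0.25 (safe).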
Calculation of the absorbed energy for CFRP guarder belt
The absorbed energy of the CFRP guarder belt can be calculated as the area under the curve of the impact load P versus the displacement δ of the impact location of the specimen, up to fracture: E = ∫ P dδ. Figure 8 shows the experimental and numerical relations of impact load to displacement of the CFRP guarder belt for vertical impact. The experimental and numerical impact loads increase nonlinearly with increasing displacement of the CFRP belt specimen and reach their maximum values just before fracture. The experimental and numerical relations of impact load to displacement for the CFRP guarder belt agreed well in general.
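Numerically, this integral is simply the area under the sampled load-displacement curve; a minimal sketch using the trapezoid rule (with load in kN and displacement in mm, so the result is in joules, since kN x mm = J):

```python
def absorbed_energy(load_kN, disp_mm):
    """Absorbed energy E = integral of P d(delta) up to fracture,
    evaluated with the trapezoid rule over sampled (load, displacement)
    pairs. Load in kN and displacement in mm give energy in joules."""
    energy = 0.0
    for i in range(1, len(disp_mm)):
        energy += 0.5 * (load_kN[i] + load_kN[i - 1]) * (disp_mm[i] - disp_mm[i - 1])
    return energy
```

As a sanity check, a triangular load history rising linearly from 0 to 40 kN over 85 mm (roughly the scale of the test above) absorbs 0.5 x 40 x 85 = 1700 J.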
Results and discussions
Next, Table 2 compares the predicted fracture time calculated by equation (2), the location of fracture in the specimen and the absorbed energy obtained by the FE analysis with the experimental values. The fracture time and the location of fracture are predicted almost exactly, and the numerical absorbed energy is also close to the experimental value.
The experimental strain variations at the three locations in the specimen are compared with the numerical ones in Figure 9. The strain at each location increases nonlinearly with time.
The measured maximum impact strain at each location is almost the same, although the numerical strain variation at location T3 is larger than the experimental one. This is because the numerical model did not consider the slight looseness between the CFRP guarder belt and the rotary pin. The strain variations at locations T1 and T2 agreed well between both results. From the comparison of both results, the numerical method developed here is quite useful for estimating the impact behavior of the CFRP guarder belt.
In the case of side collision, impacts from a diagonal direction also occur in practice. Therefore, the impact response behavior of the CFRP guarder belt for side collision with an offset angle was examined. The experiment was performed by inclining the support base of the specimen so that the impactor hit the specimen diagonally, as shown in Figure 10. A drop weight impact test with an offset angle of 15° was carried out, and the difference in impact response behavior for the side collision was examined. Figure 11 shows the experimental and numerical relations of impact load to displacement of the CFRP guarder belt for impact with an offset angle of 15°. Although the maximum impact load from the numerical result is slightly larger than the experimental one, both relations of impact load to displacement agreed well in general. Table 3 compares the predicted fracture time, the location of fracture in the specimen and the absorbed energy obtained by the FE analysis with the experimental values for the impact with an offset angle of 15°. The fracture time and the absorbed energy are predicted almost exactly. The fracture time for the offset impact test is slightly longer than that for vertical impact, but the absorbed energy is almost the same in both cases.
The experimental strain variations at the three locations in the specimen are compared with the numerical ones in Figure 12. The strain at each location increases nonlinearly with time. The strain variations at locations T1 and T2 agreed well between both results.
Conclusions
The CFRP guarder belt was developed for the purpose of designing an impact energy absorption member for side collision. Drop weight impact tests were carried out, and the impact response behavior and absorbed energy of the CFRP door guarder belt under impact loading were examined using the numerical analysis and the experimental results. From these results, the following conclusions can be drawn.
1. The CFRP guarder belt absorbed crash energy along its entire length, and tensile stress acted on both the upper and lower sides of the belt.
2. From the comparison of the FEM results with the experimental ones for the specimens, the proposed numerical method using the ANSYS code is considered useful for analyzing the CFRP guarder belt.
3. The fracture time of the CFRP guarder belt can be predicted by using FEM together with fracture criteria of composite materials, and it agreed well with the experimental fracture time.
4. The impact response behavior and impact strength of the CFRP guarder belt were obtained by the tower drop weight impact test. The CFRP guarder belt is expected to contribute greatly to the lightweighting and improved safety of the car body.
|
2019-04-14T13:03:46.846Z
|
2014-07-28T00:00:00.000
|
{
"year": 2014,
"sha1": "e22b3b7fc63cae583f5d45e757ce66954d4f0435",
"oa_license": "CCBYNC",
"oa_url": "http://www.davidpublisher.org/Public/uploads/Contribute/550fe346bed48.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "a7dcc4e0c860ba4bf8632dfbeda6b8b7528ffdd0",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
}
|
118368741
|
pes2o/s2orc
|
v3-fos-license
|
Influence of a magnetic guide field on wakefield acceleration
Enhancement of the trapping and optimization of the beam quality are two key issues of Laser Wake Field Acceleration (LWFA). The influence of stochastic acceleration on the trapping of electrons is compared to the one of cold injection. It is shown that when considering a high intensity wave perturbed by a low intensity counter-propagating wave, in the non-linear blowout regime, the influence of the colliding pulses polarizations (either parallel linear or positive circular) on the beam quality seems weak when the electron density is below $\sim 10^{-3}$ critical density. The effect of a homogenous constant magnetic field $B_0$, parallel to the direction of propagation of the pump pulse, is studied in the blowout regime. Transverse currents are generated at the rim of the bubble, which results in the amplification of the $B_0$ field at the rear of the bubble. Without $B_0$ field the beam periodically explodes and re-confines, this phenomenon is suppressed when $B_0$ reaches some threshold, which is a function of the laser pulses parameters (intensity, waist, duration). Therefore the dynamics of the beam is modified, its maximum energy is slightly boosted and above all transverse emittance reduced. Moreover the low energy tail, observed in the non magnetized case, can be completely suppressed leading to very sharp mono-energetic beam when $B_0$ is applied. If the available $B_0$ field is limited then one has to fine-tune the spatio-temporal shape and intensity of the colliding pulse in order to get an acute control on the beam quality.
I. INTRODUCTION
In laser-wakefield acceleration (LWFA) [1][2][3][4], a laser creates a plasma wave wakefield with a phase velocity close to the speed of light (c). The acceleration gradients in these wakefields can easily exceed 100 GeV/m, hence a cm-long plasma based accelerator can produce GeV-energy electron beams. A particle injected in such a wave gains energy from the longitudinal component of the electric field, as long as the pump pulse is not depleted and the dephasing length is not reached. These wakefields have ideal properties for accelerating electrons: the transverse focusing field increases linearly with the radial distance and the accelerating longitudinal field is independent of the radial coordinate [5,6]. LWFA can be split into three different options. The first corresponds to a plasma density $n_e \approx 10^{19}$ cm$^{-3}$, a pulse length ($c\tau$) matching half of a plasma period and a spot size ($w_0$) roughly equal to the bubble radius, $w_0 \approx c\tau \approx \sqrt{a_0}\,c/\omega_p$, where $a_0$ is the normalized vector potential of the laser. This is the idea of the bubble regime [7,8]. For these conditions, a hundred-joule class laser would have an intensity of the order of $\sim 10^{21}$ W cm$^{-2}$. In this regime, the electrons are continuously injected; this results in tremendous beam loading and the loaded wake is noisy. In this paper we explore different techniques to improve the beam quality of LWFA when electrons are injected in the wake with a colliding pulse. Hence, the bubble regime is not appropriate. We rather select moderate laser intensity $I \le 10^{19}$ W cm$^{-2}$ and plasma density $n_e \le 10^{18}$ cm$^{-3}$, according to the guidelines proposed by Lu et al. [9], to achieve a more controlled and stable blowout of the electrons. Self-injection of electrons can occur when the pump pulse intensity is high, but the accelerating structure is almost the same. In this regime, beam-loading effects clamp further injection, leading to beams with a smaller energy spread.
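As a quick numerical illustration of these scalings, the plasma skin depth $c/\omega_p$ and a matched spot size of order $\sqrt{a_0}\,c/\omega_p$ can be evaluated for the densities discussed here. A minimal sketch (the SI constants and helper names are ours, not from the paper):

```python
import math

# Physical constants (SI)
c = 2.998e8          # speed of light [m/s]
e = 1.602e-19        # elementary charge [C]
m_e = 9.109e-31      # electron mass [kg]
eps0 = 8.854e-12     # vacuum permittivity [F/m]

def plasma_frequency(n_e_cm3):
    """Electron plasma frequency omega_p [rad/s] for a density given in cm^-3."""
    n_e = n_e_cm3 * 1e6  # convert to m^-3
    return math.sqrt(n_e * e**2 / (eps0 * m_e))

def matched_spot_um(n_e_cm3, a0):
    """Matched spot estimate w0 ~ sqrt(a0) * c/omega_p, in micrometers."""
    skin_depth = c / plasma_frequency(n_e_cm3)  # [m]
    return math.sqrt(a0) * skin_depth * 1e6     # [um]

# Density and a0 used in the blowout-regime runs later in the paper
print(matched_spot_um(4.4e17, 4.0))  # ~16 um; the simulations use an 18 um fwhm spot
```

This back-of-the-envelope estimate lands close to the 18 µm spot size used in the simulations described below.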
In order to limit the computational requirements of our PIC simulations the propagation of the pump pulse will not exceed 1 cm. We tend to avoid self-injection into the wake by adjusting the pump pulse intensity and the electron density. Many different combinations of polarizations can be chosen for both waves; each of these possibilities results in a particular force acting on the plasma electrons when the two waves collide [10]. The first point of this article is to summarize the dependence of this force on the pump and colliding pulse intensities and on the plasma density. The relative influence of stochastic heating and of the beat-wave force on the injection mechanism, and later on the beam quality, will be discussed. After the choice of polarization in the blowout regime is clarified, we focus on the study of wakefield acceleration in the presence of an external, homogeneous magnetic field and study its influence through simulations. The mechanisms leading to the enhancement of the beam quality will be examined. In a third part, fine-tuning of the counter-propagating low intensity pulse will be considered in order to limit the intensity of the external field. This situation will be illustrated in the case when the intensity of the pump pulse is raised to $a_0 = 10$. Then we will conclude. The wakefield propagates in the plasma at the group velocity of the laser, $\beta_g = \sqrt{1 - \omega_p^2/\omega_0^2}$, where c is the speed of light and $\omega_p$ and $\omega_0$ respectively denote the plasma and laser frequencies. We use the quasi-static approximation and assume that the potential $\phi$ created by the pump pulse only depends on $\xi = x - \beta_g t$, where x and t denote the space and time coordinates normalized by $c/\omega_0$ and $\omega_0^{-1}$ respectively. Then, the Hamiltonian of an electron in the wakefield potential $\phi$, created by the pump pulse, reads

$H(\xi, p_x) = \sqrt{\gamma_\perp^2 + p_x^2} - \beta_g p_x - \phi(\xi), \qquad (1)$

where $\gamma_\perp^2 = 1 + p_\perp^2$. The normalized transverse momentum is defined by $p_\perp = u_\perp/(m_e c)$, where $u_\perp$ is the transverse momentum and $m_e$ denotes the electron mass.
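Assuming the standard quasi-static form $H(\xi, p_x) = \sqrt{\gamma_\perp^2 + p_x^2} - \beta_g p_x - \phi(\xi)$ (the explicit expression is our reconstruction of the text's Eq. (1)), the group velocity and the Hamiltonian can be evaluated directly. A minimal sketch:

```python
import math

def beta_g(ne_over_nc):
    """Normalized laser group velocity beta_g = sqrt(1 - omega_p^2/omega_0^2),
    using omega_p^2/omega_0^2 = n_e/n_c."""
    return math.sqrt(1.0 - ne_over_nc)

def hamiltonian(p_x, p_perp, phi, bg):
    """Quasi-static Hamiltonian H = sqrt(gamma_perp^2 + p_x^2) - bg*p_x - phi,
    with gamma_perp^2 = 1 + p_perp^2 (momenta normalized to m_e*c)."""
    gamma_perp2 = 1.0 + p_perp**2
    return math.sqrt(gamma_perp2 + p_x**2) - bg * p_x - phi

bg = beta_g(2.5e-4)                      # blowout-regime density used in the paper
print(bg)                                # very close to 1
print(hamiltonian(0.0, 0.0, 0.0, bg))    # electron at rest outside the wake: H = 1
```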
The Hamiltonian of a particle is an invariant, $H(\xi, p_x) = H_0$; using (1) we deduce two solutions for the longitudinal momentum:

$p_x^\pm = \beta_g \gamma_g^2 (H_0 + \phi) \pm \gamma_g \sqrt{\gamma_g^2 (H_0 + \phi)^2 - \gamma_\perp^2}, \qquad (2)$

where $\gamma_g = (1 - \beta_g^2)^{-1/2}$. The separatrix between trapped and untrapped orbits is given by a critical value of the Hamiltonian, $H_s = \gamma_\perp/\gamma_g - \phi_{min}$. Replacing $H_0$ by $H_s$ in (2) and retaining $p_x^\pm$ we get the two branches of $p_x^{sep}(\xi)$. Let us now comment on the dynamics of an electron in vacuum, in the presence of two counter-propagating laser pulses.
The Hamiltonian reads $H(u_x, x) = (1 + u_x^2 + u_\perp^2)^{1/2} = \gamma$, and the longitudinal force acting on the electron is given by

$F_x = \frac{du_x}{dt} = -\frac{\partial \gamma}{\partial x} = -\frac{1}{2\gamma}\frac{\partial u_\perp^2}{\partial x}. \qquad (3)$

Taking into account only the influence of the lasers, denoted by their potential vectors $A_0$ and $A_1$, we have $u_\perp = A_0 + A_1$. We will consider parallel linear polarization (P linear polarization) and positive circular polarization, that is

$u_\perp = a_0 \cos(\omega_0 t - k_0 x)\, e_y + a_1 \cos(\omega_0 t + k_0 x)\, e_y$

and

$u_\perp = \frac{a_0}{\sqrt{2}}\left[\cos(\omega_0 t - k_0 x)\, e_y + \sin(\omega_0 t - k_0 x)\, e_z\right] + \frac{a_1}{\sqrt{2}}\left[\cos(\omega_0 t + k_0 x)\, e_y + \sin(\omega_0 t + k_0 x)\, e_z\right]$

respectively. Substituting the above expressions in Eq. (3) yields

$F_x = \frac{k_0 a_0 a_1}{\gamma}\sin(2 k_0 x) + \frac{k_0}{2\gamma}\left[a_1^2 \sin(2\omega_0 t + 2 k_0 x) - a_0^2 \sin(2\omega_0 t - 2 k_0 x)\right] \qquad (4)$

for the P linear polarization case, and

$F_x = \frac{k_0 a_0 a_1}{\gamma}\sin(2 k_0 x) \qquad (5)$

for the positive circular polarization case. When P linear or positive circular polarizations are used, Eqs. (4) and (5) show the existence of a force $F_{bw}$ spatially oscillating with a $\lambda_0/2$ period. This force is not time-dependent; it is usually interpreted as a ponderomotive force associated with the beat-wave [10][11][12][13]. We should compare the beat-wave force $F_{bw}$ to the longitudinal ponderomotive force; the latter scales as $F_p = (2\gamma)^{-1} a_0^2/(c\tau)$, where $\tau$ is the pulse duration. Taking the maximum value of $F_{bw}$, the ratio $F_{bw}/F_p$ becomes

$F_{bw}/F_p = 2 k_0 c\tau\, a_1/a_0. \qquad (6)$

When $F_{bw}/F_p > 1$ electrons are trapped inside the $\lambda_0/2$-long beat-wave buckets and cannot be wiped out by the longitudinal ponderomotive force. On the whole, for both polarizations electrons will undergo the beat-wave force, therefore when the separatrix is such that $\min(u_x) < 0$ electrons will be trapped in the wakefield as a bunch [14]. However, in the P linear polarization case other terms are added to the force and the equations of motion are no longer integrable: electron trajectories become chaotic [15]. This phenomenon, known as stochastic heating, can provide very large momenta to some electrons [16][17][18][19][20][21][22]. When $\min(u_x) \geq 0$ the beat-wave force is not efficient and electrons can hardly be trapped with positive circular polarizations.
In this case, P linearly polarized colliding waves are necessary to give electrons the appropriate momentum in order to bridge the gap between trapped and untrapped orbits. In the following two subsections, the sensitivity of the injection process to the polarization and intensity of the waves will be summarized.
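The trapping criterion $F_{bw}/F_p > 1$ can be checked for the pulse parameters used in the simulations below. Assuming the maximum beat-wave force scales as $k_0 a_0 a_1/\gamma$ (our estimate from the cross term of $u_\perp^2$), the ratio reduces to $2 k_0 c\tau\, a_1/a_0$, with the $\gamma$ factors cancelling. A sketch:

```python
import math

def fbw_over_fp(a0, a1, lam_um, tau_fs):
    """Ratio of the maximum beat-wave force (~ k0*a0*a1/gamma, our estimate)
    to the longitudinal ponderomotive force F_p = a0^2/(2*gamma*c*tau):
    F_bw/F_p = 2*k0*c*tau*a1/a0."""
    k0 = 2.0 * math.pi / (lam_um * 1e-6)   # laser wavenumber [1/m]
    c_tau = 2.998e8 * tau_fs * 1e-15       # pulse length c*tau [m]
    return 2.0 * k0 * c_tau * a1 / a0

# 30 fs, 0.8 um pulses as in the simulations below
print(fbw_over_fp(a0=2.0, a1=0.1, lam_um=0.8, tau_fs=30.0))   # >> 1: beat-wave dominates
print(fbw_over_fp(a0=2.0, a1=0.01, lam_um=0.8, tau_fs=30.0))  # < 1: ponderomotive force wins
```

For the $a_1 = 0.1$ colliding pulse used later, the ratio is about 7, consistent with the beat-wave force ruling the injection at low perturbing intensity.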
B. Low density: Strong dependence on polarization and wave intensity
When the plasma density is about $\sim 10^{-3} n_c$, a weak variation of the pump pulse is sufficient to modify the mechanisms allowing trapping of a bucket in the wakefield. To illustrate this strong dependence on laser parameters we launched three sets of simulations with the PIC code CALDER [23]. The simulation setup consists of two 30 fs waves with wavelength $\lambda = 0.8\,\mu$m, having their electric fields either linearly or circularly polarized (orthogonal linear polarization is denoted by S). The pump pulse, which creates the accelerating wakefield, is focused to an 18 µm full width at half maximum (fwhm). The peak normalized vector potential of the main pulse is related to the laser intensity I by the formula $a = 0.853 \times 10^{-9}\, \lambda(\mu\mathrm{m})\,[I(\mathrm{W\,cm^{-2}})]^{1/2}$; we considered $a = 1.5$ and $a = 2$. The low intensity pulse is counter-propagating and is focused to a 31 µm focal spot at a peak normalized vector potential $a_1 = 0.1$ or 0.4. The waves interact with a mm-size plasma with a density $n_e = 4.3 \times 10^{-3} n_c$. The fluctuations between the two regimes are illustrated by Fig. 1. When $a = 1.5$ and $a_1 = 0.4$, the 1D separatrix between trapped and open orbits lies above $p_x = 0$; in this case the beat-wave force cannot provide enough momentum to electrons to push them into the wakefield. Nevertheless, stochastic heating due to P linear polarizations is a way to bridge the gap and inject a bunch in the wakefield (Fig. 1(a)). When $a = 2$ the 1D separatrix is lowered ($\min(p_x) < 0$); in this case the beat-wave force is enough to trap electrons in the wake, therefore we can accelerate a beam using either P linear or positive circular polarizations. Note that no trapping occurs with negative circular polarizations, which is consistent with the theory introduced in subsection II A. Indeed, straightforward algebra gives $F_x = 0$, so the ponderomotive force hinders trapping of electrons.
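The conversion formula between the normalized vector potential and the laser intensity quoted above can be inverted to see what intensities the $a = 1.5$ and $a = 2$ runs correspond to. A small sketch (helper names are ours):

```python
def a_from_intensity(I_Wcm2, lam_um):
    """a = 0.853e-9 * lambda[um] * sqrt(I[W/cm^2]), as quoted in the text."""
    return 0.853e-9 * lam_um * I_Wcm2 ** 0.5

def intensity_from_a(a, lam_um):
    """Inverse relation: intensity required to reach a given a."""
    return (a / (0.853e-9 * lam_um)) ** 2

# Intensities behind the runs at 0.8 um
for a in (1.5, 2.0, 4.0):
    print(a, intensity_from_a(a, 0.8))   # a = 2 -> ~8.6e18 W/cm^2
```

Note that all the values stay below the $10^{19}$ W cm$^{-2}$ bound chosen in the introduction for $a \le 2$.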
The quality of the accelerated beam will depend on the force which dominates during the collision of the pulses. A relatively high intensity of the counter-propagating laser ($a_1 = 0.4$) will foster stochastic heating [21,22], and a higher charge will be injected with P linear polarizations compared to positive circular polarizations (Fig. 1(b)). On the contrary, if we reduce the intensity of the counter-propagating pulse ($a_1 = 0.1$) the beat-wave force is favored and rules the injection mechanism (Fig. 1(c)). In this latter case, the beam quality is better without stochastic heating.

C. Very low density: Weak dependence on laser polarization

We now choose parameters relevant for the study of the blowout regime [6,[24][25][26]. The plasma density was set to $n_e = 2.5 \times 10^{-4} n_c$ and the pump pulse intensity to $a = 4$; the rest of the simulation setup was not modified. Considering two circularly polarized waves rotating in the same direction (positive circular polarizations) is as efficient as considering P linear polarizations (Fig. 2). It means that in this case cold injection [14,27] due to the beating force is the key mechanism governing electron injection. Stochastic acceleration weakly changes the number and the energy of electrons trapped in the wakefield. The electron momentum distribution along the direction of propagation of the waves during their collision shows that the effects of the two polarizations on electron dynamics are close; as a result the electron energy distributions are almost identical (Fig. 2). This dominant influence of the beating force upon injection was not clearly stated by Davoine et al. [14]; here we underline that no relevant difference is triggered through the use of P linear polarizations or positive circular polarizations in the blowout regime. The next section is devoted to the study of a new means to enhance beam quality after colliding pulse injection, in the blowout regime.
III. INFLUENCE OF A MAGNETIC GUIDE FIELD
The influence of a constant homogeneous guide magnetic field on LWFA is studied in this part. This magnetic field is assumed to be parallel to the direction of propagation of the waves. Electrons are still externally injected using a colliding counter-propagating laser pulse [13]. The idea is to guide the electrons in order to improve the quality of the beam which is trapped in the wakefield. This mechanism was first proposed in the context of LWFA with a single pump laser ($a = 3.5$) and an electron density $n_e = 3 \times 10^{-3} n_c$ prone to self-injection into the wakefield [28]; with these parameters self-injection can be dramatically enhanced and the beam quality degraded. Here we aim at studying the influence of a magnetic guide field in the blowout regime [9] with colliding pulse injection of the electrons, hence we chose $a = 4$ and $n_e = 4.4 \times 10^{17}$ cm$^{-3} = 2.5 \times 10^{-4} n_c$, thus abiding by the $a_0 \geq 4$ and $2 \leq a_0 \leq 2\omega_0/\omega_p$ criteria proposed by Martins et al. [25]. The required magnetic field necessary to curve electron trajectories is about a hundred teslas [28]; such values are particularly strong but still available from current pulsed magnet technology [29]: the most advanced magnets can reach 90 T for tens of ms durations and centimeter-size lengths [30]. The simulation setup consists of two 30 fs linearly polarized counter-propagating waves with $\lambda = 0.8\,\mu$m wavelength. They propagate along a constant homogeneous guide field $B_0$ in a cm-long plasma; the normalized value of $B_0$ is $e B_0/(m_e \omega_0)$. Their electric fields are in the same plane (P linear polarizations). The pump pulse, which creates the accelerating wakefield, is focused to an 18 µm full width at half maximum. The peak normalized vector potential for this pulse is still $a = 4$. The low intensity pulse is focused to a 31 µm focal spot at a peak normalized vector potential $a_1 = 0.1$.
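The normalization $e B_0/(m_e \omega_0)$ quoted above can be evaluated for the field strengths used in the runs; in these units a hundred-tesla field is of order $10^{-2}$. A minimal sketch (constants and helper names are ours):

```python
import math

e, m_e, c = 1.602e-19, 9.109e-31, 2.998e8   # SI constants

def b0_normalized(B0_tesla, lam_um):
    """Normalized guide field e*B0/(m_e*omega_0), with omega_0 = 2*pi*c/lambda."""
    omega0 = 2.0 * math.pi * c / (lam_um * 1e-6)
    return e * B0_tesla / (m_e * omega0)

for B0 in (125.0, 250.0):
    print(B0, b0_normalized(B0, 0.8))   # ~1e-2 in units of m_e*omega_0/e
```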
The plasma frequency in the presence of a magnetic field can be approximated by $\omega_m = (\omega_e^2 + \Omega^2)^{1/2}$, where $\omega_m$ and $\omega_e$ represent the frequencies of the magnetized and unmagnetized plasma, respectively. The cyclotron frequency is defined by $\Omega = e B_0/m_e$. When $B_0 = 125$ T and $B_0 = 250$ T, one has $\Omega^2/\omega_e^2 = 0.34$ and $\Omega^2/\omega_e^2 = 1.38$ respectively, which indicates that the magnetization of the plasma may have some effect on the deformation of the wakefield, as will be further discussed. Before the collision of the waves the magnetization has no influence on self-injection: no electron is trapped in the wake.
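The quoted values of $\Omega^2/\omega_e^2$ can be reproduced from the density $n_e = 4.4 \times 10^{17}$ cm$^{-3}$ used in the magnetized runs. A small sketch (SI constants and helper names are ours):

```python
import math

e, m_e, eps0 = 1.602e-19, 9.109e-31, 8.854e-12   # SI constants

def omega_ratio_sq(B0_tesla, n_e_cm3):
    """(Omega/omega_e)^2 with Omega = e*B0/m_e and
    omega_e = sqrt(n_e * e^2 / (eps0 * m_e))."""
    Omega = e * B0_tesla / m_e
    omega_e = math.sqrt(n_e_cm3 * 1e6 * e**2 / (eps0 * m_e))
    return (Omega / omega_e) ** 2

n_e = 4.4e17   # cm^-3, density of the magnetized runs
print(omega_ratio_sq(125.0, n_e))   # ~0.34, matching the value quoted above
print(omega_ratio_sq(250.0, n_e))   # ~1.38
```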
A. Electron density and transverse currents induced at the rim of the bubble

Let us first identify the differences brought by the addition of a magnetic guide field to the electron distribution in the vicinity of the bubble boundaries (Fig. 3). After the electron beam injection into the wakefield, the strong ponderomotive force due to the main pulse still repels electrons and thus provides them with longitudinal and transverse momenta. In the absence of a magnetic field, electrons are submitted to the restoring electric field induced by the bubble, but the balance between this force and the ponderomotive force is favorable to the latter; as a result electrons flee along straight-line trajectories (Fig. 3(a)). When a longitudinal magnetic field is added, electrons start to revolve around the bubble as a result of the magnetic force; this force, combined with the electric force induced by the bubble, completely modifies the dynamics of the electrons. The gyro-radius of the electrons with $p_y = 0$ is reduced when $B_0$ is raised. Therefore the corresponding flight paths in the (x, y) plane appear to be bent, and the trajectories are even more curved as $B_0$ increases. The rotation of these electrons around the bubble generates a net transverse current at its rim. This current will act as a small solenoid, and thus the longitudinal magnetic field will be amplified. This feature and its influence on the beam dynamics will be detailed in the next subsection.
B. A new mechanism to enhance the beam quality
The intensity of the magnetic field is almost doubled locally (Fig. 5) compared to the initial (t = 0) uniform map of $B_x$. We shall underline that this pattern is stable, as we obtain quasi-identical maps of $B_x$ in this region of the bubble when the pump pulse has just entered the plasma, around $\omega_0 t = 2280$. Moreover, we note that the geometry of the magnetic field lines is weakly altered by the electron density modulations induced by the propagating bubble: magnetic field lines stay almost parallel to the propagation direction. Let us now examine the effect of the magnetic field on the dynamics of the accelerated beam. To evidence the differences between the magnetized and the unmagnetized regimes, we plotted kinetic energy density maps showing the evolution of the trapped beam at the rear of the bubble. In the unmagnetized case (Fig. 6), the beam alternately explodes (due to space charge effects) and refocuses (due to the focusing effect of the transverse electric field). This behavior has a typical $\sim 2000\,\omega_0^{-1}$ period. Note that the beam acceleration is degraded because the components of the split beam do not see the maximal value of the longitudinal electric field (which is located on axis). The dynamics is completely different in the magnetized case (Fig. 7): the longitudinal magnetic field is strong enough to curve the trajectories and hinder the explosion of the beam. As a result the beam is almost concentrated on axis, the transverse emittance is reduced, and the main part of the beam, which also corresponds to the region where the magnetic field is the strongest, always sees the maximum value of the electric field. Next we will quantify the enhancement of the beam quality through the evolution of the energy distribution functions of the beam.
C. Enhancement of the beam quality
As already mentioned, in the magnetized case the beam is submitted to a more uniform accelerating field; this pattern boosts the particle acceleration, leading to a slightly higher maximum kinetic energy when compared to the unmagnetized case (Fig. 8).
The focusing magnetic field reduces the low energy tail of the energy spectrum, as can be seen in Fig. 8. Obviously, this trend is enhanced when the guide field rises. With no guide field, the relative variation of the energy at full width at half maximum (fwhm), $\Delta E_{fwhm}/E_{max} \approx 1\%$, is excellent, but the rms value of the energy spread has small variations and reaches 7% at the end of the simulation (Table I). When $B_0 = 125$ T, on the one hand the spread of the low energy tail of the distribution is reduced, as shown by Fig. 8(a) and confirmed by the rms value of $\sim 4\%$, but on the other hand $\Delta E_{fwhm}/E_{max}$ is slightly degraded. When $B_0 = 250$ T, untrapped electrons carrying energies of about 10 MeV concentrate ($n_e$ locally reaches $1.5 \times 10^{-3} n_c$) at the rear of the bubble. These low energy (i.e. $0 < E_K < 25$ MeV) electrons are evidenced by bumps in the beam energy distribution (Fig. 8(b)). This low energy bump slowly slides out of the simulation box, as these electrons are not injected in the wakefield, and therefore should not be considered in the interpretation of the diagnostics concerning the accelerated beam. According to this comment, we note that a 250 T guide field is enough to completely suppress the low energy tail during the whole simulation. A clear enhancement of the beam quality is obtained: first, the final rms value is below 3% and $\Delta E_{fwhm}/E_{max} < 1\%$ (Table I), thus providing a very sharp control on the final energy of the beam; and second, the number of electrons at the highest energies does not vanish, as in the unmagnetized case, but on the contrary grows up to $60 \times 10^6$ particles/MeV, nearly twice the value of the unmagnetized case! To our knowledge, such an acute mono-energetic electron beam production, with complete extinction of the low energy tail, has never been demonstrated. For a given pump pulse, two well-known controls of the low intensity laser can be used to optimize the beam quality.
The low intensity pulse duration can be adjusted and the transverse fwhm of the spatial envelope tuned; these two parameters together usually make it possible to get a quasi-mono-energetic electron beam in the blowout regime [14,25]. There are other ways to enhance the beam quality. For example, one can resort to a longitudinal gradient of the electron density to enhance trapping [14,31,32]. In an alternate approach, assuming an initially homogeneous plasma, one can slowly evolve the laser pulse shape to alternate periods of expansion and contraction of the bubble, to respectively trigger and stop self-injection [33] of the electrons into the bubble. However this technique seems hard to adjust to get a unique mono-energetic bunch. In this paper, injected electrons are confined by using a magnetic guide field, but we note that the intensity of the field required to substantially enhance the beam quality depends on the laser pulse shapes and durations. In this section we did not pay attention to the tuning of the low intensity colliding pulse; we have shown that the guiding induced by $B_0$ is enough beyond some threshold, which is a function of the parameters of the simulation. However, if we decide to lower the blowout stability by increasing the main pulse intensity, the required intensity of the guide field grows to values largely out of reach of current technology; fine-tuning of the colliding pulse then becomes necessary. The next section is devoted to this issue.
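The two figures of merit used throughout, the relative rms energy spread and $\Delta E_{fwhm}/E_{max}$, can both be computed from a binned energy spectrum. A minimal sketch on a toy, hypothetical spectrum (the bin values below are illustrative, not simulation data):

```python
import math

def spread_metrics(energies, weights):
    """Relative rms energy spread and Delta E_fwhm / E_max for a binned
    spectrum -- the two figures of merit reported in the tables."""
    wtot = sum(weights)
    mean = sum(w * E for E, w in zip(energies, weights)) / wtot
    var = sum(w * (E - mean) ** 2 for E, w in zip(energies, weights)) / wtot
    rms_rel = math.sqrt(var) / mean
    half = max(weights) / 2.0                       # half of the peak height
    above = [E for E, w in zip(energies, weights) if w >= half]
    fwhm_rel = (max(above) - min(above)) / max(energies)
    return rms_rel, fwhm_rel

# Toy quasi-mono-energetic spectrum
E = [100, 110, 120, 130, 140]    # MeV bins
w = [1, 6, 10, 6, 1]             # particles per bin
print(spread_metrics(E, w))
```

A sharp peak with a long low-energy tail drives the rms value up while leaving the fwhm figure small, which is why the two diagnostics can move in opposite directions, as observed for the $B_0 = 125$ T case.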
IV. INFLUENCE OF A MAGNETIC GUIDE FIELD AT HIGHER INTENSITIES
A very high intensity wave is considered now: the pump pulse, which is linearly polarized, is assumed to have a peak normalized intensity $a = 10$ and a duration of 30 fs. It is focused to a 35.65 µm focal spot. The colliding pulse has a peak normalized intensity $a_1 = 0.1$, a 30 fs duration, and three focal spot sizes were considered: $D = 60\,\mu$m, 36 µm and 10 µm. The wavelength of the waves is $\lambda = 0.8\,\mu$m. In these simulations, the plasma density is still $n_e = 2.5 \times 10^{-4} n_c$. Unless otherwise mentioned, the following simulations were run with $B_0 = 125$ T. When considering $a = 10$ and a counter-propagating wave focused to a 10 µm focal spot, the magnetic field has a paramount effect on the distribution function (Table II, Fig. 9). The electron energy distribution becomes almost mono-energetic. After the laser pulse has propagated through the plasma by 3.8 mm, the electron energy distribution is still quite mono-energetic and the maximum electron energy exceeds one GeV (Fig. 9(b)). Table II shows the evolution of the energy spread of the electron energy distribution with time. When no field is applied the quality decreases, whereas we get acute control on the beam energy with $B_0$. It must be pointed out that the accelerated charge is small (close to 50 picocoulombs). The electron energy distributions corresponding to the different focal spots are compared at a given time. Figure 10 shows that the distribution becomes much more mono-energetic when the focal spot of the perturbing wave is smaller than that of the main pulse: the magnetic field is much more efficient when considering a small value of D. We have also checked that this peak exists in a very small range of values close to $a_1 = 0.1$. When $a_1 = 0.08$ the distribution function shows a lower magnitude peak, and the charge accelerated in the first bubble is about 10 picocoulombs. The peak still exists when $a_1 = 0.102$.
But in the case of a higher value of $a_1$ (for instance $a_1 = 0.15$) the energy distribution no longer shows a very thin high-energy peak, because the injected charge is higher and the magnetic field is too weak to concentrate the beam efficiently. No high-energy peak was seen in the distribution function for larger values of $a_1$; the same kind of distribution is obtained when $a_1 = 0.2$, $a_1 = 0.5$ and $a_1 = 1$. Figure 11 shows that the electron energy distribution becomes more mono-energetic when the magnitude of the magnetic guide field is increased to 250 T, in agreement with the results obtained with a lower intensity of the pump pulse ($a_0 = 4$) in section III C. Figure 12 shows that a shorter pulse duration ($\Delta t = 10$ fs) for the counter-propagating wave also makes the electron energy distribution more mono-energetic. Then, in order to obtain a more mono-energetic distribution when $a = 10$ and $a_1 = 1$, a very strong magnetic field $B_0 = 250$ T and a short duration for the counter-propagating wave, $\Delta t = 10$ fs, were considered (Fig. 13). As expected, a high energy peak is obtained. One should point out that the charge accelerated in the first bubble is still close to 50 picocoulombs. To summarize: when $a_1$ is too high, that is, when the charge injected in the wakefield exceeds some threshold, we cannot prevent a drop in beam quality by imposing an external field alone; at the very least, the main parameters (focal spot size and duration) of the colliding pulse must be reduced.
V. CONCLUSIONS
In the first part of this paper, we summarized some essential results about the most efficient choice of polarizations for injection in the bubble, in the colliding pulse scheme. At rather high electron density ($\gtrsim 10^{-3} n_c$) and moderately relativistic electromagnetic wave intensity ($a_0 \lesssim 2$), more particles are accelerated to high energies in the case of P linear polarizations, that is to say when electrons undergo the action of the beat-wave force and all the others. For higher intensities and lower densities ($\sim 10^{-4} n_c$) the beat-wave force, that is cold injection, can be more efficient.

FIG. 13: Electron energy distribution from 2D PIC simulations at $\omega_0 t = 30000$ with $a = 10$, $a_1 = 1$, $n_e = 2.5 \times 10^{-4} n_c$ and $B_0 = 250$ T. The spatio-temporal shape of the laser envelope is defined by $D = 10\,\mu$m and $\Delta t = 10$ fs.

The second and main part of this paper has been devoted to the study of the influence of an external static magnetic field on the wakefield acceleration process, within the colliding pulse scheme. To our knowledge this idea had never been explored. The magnetic field is assumed parallel to the direction of propagation of the two counter-propagating waves. It has been shown that the $B_0$ field creates a transverse current at the rim of the bubble, and that this current can induce an increase of $B_x$ at the rear bottleneck of the bubble. Therefore the beam dynamics is substantially modified, as the beam is constrained to stay in the maximum acceleration region of the bubble. The beam emittance is considerably reduced and the maximum kinetic energy slightly boosted compared to the unmagnetized case. This mechanism provides a means to dramatically enhance the beam quality in the blowout regime. We achieved a tremendous improvement with the setup $a = 10$, $a_1 = 0.1$ and $n_e = 2.5 \times 10^{-4} n_c$: after roughly 4 mm of wakefield acceleration without the $B_0$ field, the electron energy distribution is noisy, $\Delta E_{fwhm}/E_{max} \sim 100\%$, whereas we get $\Delta E_{fwhm}/E_{max} \lesssim 3\%$ when the plasma is magnetized with a 125 T field. Nevertheless, the intensity of the $B_0$ field may be limited by technological considerations [30]; thus acute control of the beam quality may require some fine-tuning of the colliding pulse parameters. For a given pump pulse, one should adapt the intensity, duration and focal spot size of the counter-propagating laser pulse.
|
2012-02-24T07:06:11.000Z
|
2012-02-23T00:00:00.000
|
{
"year": 2012,
"sha1": "4fd57dce25e895ee60c7ed981a9c254400c2fe36",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "4fd57dce25e895ee60c7ed981a9c254400c2fe36",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
17397527
|
pes2o/s2orc
|
v3-fos-license
|
G-protein inwardly rectifying potassium channel 1 (GIRK1) gene expression correlates with tumor progression in non-small cell lung cancer
Background G-protein inwardly rectifying potassium channel 1 (GIRK1) is thought to play a role in cell proliferation in cancer, and the GIRK1 gene expression level may define a more aggressive phenotype. We detected GIRK1 expression in tissue specimens from patients with non-small cell lung cancers (NSCLCs) and assessed their clinical characteristics. Methods Using reverse transcription-polymerase chain reaction (RT-PCR) analyses, we quantified the expression of GIRK1 in 72 patients with NSCLCs to investigate the relationship between GIRK1 expression and clinicopathologic factors and prognosis. Results In 72 NSCLC patients, 50 (69%) samples were evaluated as having high GIRK1 gene expression, and 22 (31%) were evaluated as having low GIRK1 gene expression. GIRK1 gene expression was significantly associated with lymph node metastasis and stage (p = 0.0194 for lymph node metastasis; p = 0.0207 for stage). The overall and stage I survival rates for patients whose tumors showed high GIRK1 gene expression were significantly worse than for those whose tumors had low GIRK1 expression (p = 0.0004 for the overall group; p = 0.0376 for stage I). Conclusions These data indicate that GIRK1 may contribute to tumor progression and that GIRK1 gene expression can serve as a useful prognostic marker in overall and stage I NSCLCs.
Background
Lung cancer is one of the leading causes of cancer death in North America [1]. Lung cancer is divided into two morphological types: small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC). The results of surgery remain unsatisfactory: even in stage I NSCLC (no lymph node metastasis and no distant metastasis), about 30% of patients die of disease recurrence within 5 years after curative resection [2]. Despite major advances in cancer treatment in the past decades, the prognosis of patients with NSCLC has improved only minimally [1]. New knowledge of the molecular pathogenesis of cancer has emerged from investigative advances in the field of molecular biology [3]. Increased knowledge of the biologic role of genetic changes has provoked an intriguing search for clinical applications of these alterations [4], and has made it possible to distinguish the more aggressive tumors from the less aggressive ones [5].
G-protein inwardly rectifying potassium channels (GIRK) are found in both the heart and the brain, where they are associated with a slowing of the heart rate and suppression of neuronal response [6]. The channel found in the sinoatrial node and on atrial myocytes is formed from the homologous channel subunits GIRK1 and GIRK4 [7]. Although the function of GIRK1 remains unclear apart from its suggested role in cell proliferation in cancer, GIRK1 gene expression has been found to correlate with lymph node metastasis in breast carcinomas [8]; the correlation between GIRK1 gene expression and prognosis, however, has never been analyzed in NSCLC. To our knowledge, this is the first report to analyze the prognostic influence of GIRK1 gene expression in NSCLC and the possible associations between this parameter and other clinical factors. In this study, we used RT-PCR to detect GIRK1 in tumor tissues. We compared GIRK1 gene expression with autocrine motility factor receptor (AMF-R) gene expression, known as a marker of lymph node metastasis and tumor progression, and we investigated how GIRK1 gene expression is related to tumor progression and prognosis in a series of 72 cases of curatively resected NSCLC.
Tissue specimens
Tumor tissue was collected from 72 patients with NSCLC who underwent curative surgery between 1993 and 1995 at the Department of Surgery, Teikyo University School of Medicine. Patients who died within one month after surgery and patients with a past history of another cancer were excluded from the study. Of the 72 patients included, 52 were men and 20 were women, and their ages ranged from 34 to 80 years (mean, 66 years). With regard to histological type, 41 were adenocarcinomas, 28 were squamous cell carcinomas and 3 were large cell lung carcinomas. The lesions of these 72 patients were staged on the basis of both operative and pathologic findings according to the UICC TNM classification (1997) [9]. There were 24 patients with stage IA, 11 with stage IB, 1 with stage IIA, 13 with stage IIB, and 23 with stage IIIA. All patients underwent curative operation with lymph node dissection. The mean follow-up time was 52.0 months (range, 2.7-120.0 months). Freshly removed pulmonary cancer tissues for RNA extraction were immediately frozen in liquid nitrogen and stored at -80°C until further use. In five of the cases, adjacent normal pulmonary tissue from the same patient was also used in this study. Tissue samples for hematoxylin and eosin staining were fixed in formalin and paraffin-embedded.
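As a quick consistency check, the reported stage counts can be tallied against the cohort size:

```python
# Stage breakdown reported for the 72 resected NSCLC patients
stages = {"IA": 24, "IB": 11, "IIA": 1, "IIB": 13, "IIIA": 23}
total = sum(stages.values())
print(total)                                   # 72: matches the cohort size
stage_I = stages["IA"] + stages["IB"]
print(stage_I, round(100 * stage_I / total))   # 35 stage I patients (~49%)
```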
Reverse transcription-polymerase chain reaction (RT-PCR) analysis
Total RNA was purified from fresh soft tissues by the acid guanidinium-thiocyanate procedure [10]. The human pulmonary adenocarcinoma cell line PC-14 (Riken Gene Bank Co., Ltd., Tokyo, Japan) was used as a positive control. Total RNA (5 µg) was used for cDNA synthesis, and the first-strand cDNA solution was then used for the PCR, with primers designed to amplify a 230 bp sequence (sense primer sequence: 5'-GGGATTTGGACATGGCTAAGTC-3'; antisense primer sequence: 5'-GGCCTGTTTTCATTCTCTTAACTGATAC-3').
The reaction mixture was overlaid with 20 µl of mineral oil. PCR was performed for forty cycles (10 s at 95°C, 60 s at 60°C, and 120 s at 72°C) as previously described [8]. S14 cDNA amplification using the same temperature profile for 30 cycles served as the internal control [11]; the sense and antisense primers for S14 cDNA amplification were 5'-GGCAGACCGAGATGAATCCTC-3' and 5'-CAGGTCCAGGGGTCTTGGTCC-3'. The amplified DNA samples were electrophoresed on 1% agarose gels and photographed with a Polaroid camera. Densitometric analysis of the photographic negatives was used for band quantification.
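For reference, basic properties of the quoted GIRK1 primers (length and GC fraction) can be computed directly from the sequences; a small sketch (line-break hyphens in the quoted sequences removed, helper name is ours):

```python
def gc_content(seq):
    """Fraction of G/C bases in a primer sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

# GIRK1 primers quoted in the Methods
sense = "GGGATTTGGACATGGCTAAGTC"
antisense = "GGCCTGTTTTCATTCTCTTAACTGATAC"
print(len(sense), gc_content(sense))           # 22-mer, GC fraction 0.5
print(len(antisense), gc_content(antisense))   # 28-mer
```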
Specimen classification based on RT-PCR results
The densitometric value obtained for the GIRK1 band of a given tumor tissue sample was divided by the corresponding S14 value and was referred to as the GIRK1 gene expression ratio. The level of GIRK1 mRNA expression in the PC-14 cell line is elevated. The expression ratio of the tumor was then divided by that of the human pulmonary adenocarcinoma cell line PC-14 to obtain the GIRK1 conservation rate. When the conservation rate of a given specimen was ≥ 0.8, it was considered to indicate high expression of the GIRK1 gene; if the rate was < 0.8, it was defined as low expression.

Figure 1 Agarose gel electrophoresis of RT-PCR-amplified 230 bp GIRK1 cDNA and 143 bp S14 DNA as internal PCR control. Lane 1, size marker; Lane 2, human pulmonary adenocarcinoma cell line PC-14 (positive control); Lane 3, pulmonary adenocarcinoma with high GIRK1 expression; Lane 4, pulmonary adenocarcinoma with low GIRK1 expression; Lane 5, squamous cell carcinoma with high GIRK1 expression; Lane 6, squamous cell carcinoma with low GIRK1 expression; Lane 7, large cell carcinoma with high GIRK1 expression; Lane 8, large cell carcinoma with low GIRK1 expression; Lanes 9 and 10, normal pulmonary tissue with low GIRK1 expression.
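The normalization and two-group classification described above amount to two divisions and a threshold. A minimal sketch follows; the function name and example densitometric values are hypothetical, while the PC-14 normalization and the 0.8 cutoff come from the text.

```python
def classify_girk1(girk1_band, s14_band, pc14_ratio, cutoff=0.8):
    """Classify one specimen as 'high' or 'low' GIRK1 expression.

    girk1_band, s14_band: densitometric band values for the specimen.
    pc14_ratio: GIRK1/S14 expression ratio of the PC-14 positive control.
    """
    expression_ratio = girk1_band / s14_band           # GIRK1 gene expression ratio
    conservation_rate = expression_ratio / pc14_ratio  # normalize to PC-14
    return "high" if conservation_rate >= cutoff else "low"
```

For instance, with a PC-14 ratio of 1.0, a specimen with GIRK1/S14 = 0.9 would be classified as "high" and one with GIRK1/S14 = 0.5 as "low"; a specimen exactly at the cutoff counts as "high", matching the ≥ 0.8 rule.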
Comparison with GIRK1 gene expression and AMF-R gene expression
To ascertain whether GIRK1 gene expression is linked to the expression of another gene known as a marker of lymph node metastasis and tumor progression in NSCLC, we examined the relationship between GIRK1 gene expression and AMF-R gene expression in 46 cases [12]. The analysis of AMF-R gene expression was performed as described previously [12]. The expression ratio of the tumor was divided by that of the cell line PC-14; if the conservation rate of a given specimen was larger than the mean ratio, it was considered to indicate high expression of the AMF-R gene, and if the rate was lower than the mean ratio, it was defined as low expression of the AMF-R gene.
Statistical analysis
All data regarding the clinical and histopathological variables were stored in a Macintosh computer. The StatView program (Abacus Concepts, Berkeley, CA, USA) was used for all statistical analyses. The relationships between the incidence of GIRK1 expression and clinicopathologic factors, and between GIRK1 and AMF-R gene expression, were examined by the chi-squared test with Fisher's exact correction. Survival curves were calculated using the Kaplan-Meier method and compared by the log-rank test. Statistical significance was defined as p < 0.05.
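For reference, a two-sided Fisher's exact test for a 2 × 2 table can be computed directly from the hypergeometric distribution. The sketch below is a generic textbook implementation, not the StatView routine the authors used.

```python
from math import comb

def fisher_exact(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]].

    Sums the probabilities of all tables with the same margins whose
    hypergeometric probability is at most that of the observed table.
    """
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    total = comb(n, c1)

    def prob(x):  # probability of a table with x in the top-left cell
        return comb(r1, x) * comb(r2, c1 - x) / total

    p_obs = prob(a)
    return sum(prob(x)
               for x in range(max(0, c1 - r2), min(r1, c1) + 1)
               if prob(x) <= p_obs + 1e-12)

p = fisher_exact(3, 1, 1, 3)  # ≈ 0.486
```

The small tolerance (1e-12) guards the "as or more extreme" comparison against floating-point rounding when two tables have mathematically equal probabilities.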
Detection of GIRK1 using RT-PCR in NSCLC tissues
To determine the number of PCR cycles appropriate for quantification, 20 to 50 cycles of PCR were performed in 5-cycle increments. The expression ratios of GIRK1 to S14 were reasonably constant from 35 to 45 cycles (data not shown). Therefore, in the subsequent experiments, the values obtained at 40 cycles were defined as the expression of the target genes. Using 40 RT-PCR cycles, we found that the ratio of GIRK1 to cell line PC-14 expression ranged from 0 to 2.2 (mean, 0.8) in tumor specimens (Fig. 1). Of the 72 NSCLCs studied, 50 (69%) were classified as having high GIRK1 gene expression, and 22 (31%) as having low GIRK1 gene expression. The ratios for the five adjacent normal pulmonary tissues ranged from 0 to 0.1 (mean, 0.1), and all were classified as having low GIRK1 gene expression.
Relationship between GIRK1 gene expression and clinicopathological factors
The relationships between GIRK1 gene expression and various clinicopathological factors are shown in Table 1. There were no statistically significant relationships between gene expression and age, gender, T factor, or histology. In contrast, GIRK1 gene expression was associated with N factor and stage (p = 0.0194 for lymph node metastasis; p = 0.0207 for stage).
The relationships between GIRK1 gene expression and AMF-R gene expression
Of the 46 NSCLCs studied, 33 (72%) were classified as having high AMF-R gene expression, and 13 (28%) as having low AMF-R gene expression. In most cases with high GIRK1 gene expression, the AMF-R gene was also expressed. As shown in Table 2, GIRK1 gene expression agreed significantly with AMF-R gene expression, with no discrepancy in 72% of cases (p = 0.0303).
Association of tumor GIRK1 gene expression and survival
Survival was compared between the high and low GIRK1 gene expression groups overall, in stage I, and in stage II/III. Figures 2 and 3 show significant differences in survival between the high and low GIRK1 gene expression groups overall and in stage I (p = 0.0004 for the overall group; p = 0.0376 for the stage I group). Although 5-year survival in the low GIRK1 gene expression group was better than that in the high GIRK1 gene expression group in stage II/III, there was no significant difference in survival between the two groups (Figure 4).
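The survival comparisons above are based on Kaplan-Meier estimates. The product-limit calculation can be sketched as follows; the follow-up times in the example are hypothetical, not the study data.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.

    times: follow-up times (e.g., months); events: 1 = death, 0 = censored.
    Returns a list of (time, survival probability) at each death time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = removed = 0
        while i < len(data) and data[i][0] == t:  # group all subjects tied at time t
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:
            survival *= 1 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= removed
    return curve

curve = kaplan_meier([2, 3, 3, 5, 7], [1, 1, 0, 1, 0])
```

For these toy data, survival drops to about 0.8, 0.6, and 0.3 at times 2, 3, and 5; the subject censored at time 3 is counted as at risk for the death at time 3, the usual convention.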
Discussion
In spite of significant advances in surgery and the use of new, more effective chemotherapeutic regimens, the overall 5-year survival of patients with NSCLC is 17% [13]. Identification of new prognostic factors might be of value in directing therapy and intensifying follow-up for a select group of patients. Lymph node metastasis and stage are the most powerful prognostic markers for NSCLC. Identifying new genes that are associated with tumor growth, metastasis, and prognosis is very important in advancing the understanding of cancer biology.
Human carcinomas exhibit hyperpolarized membrane potentials compared with surrounding normal tissue [14,15]. GIRK1 conducts potassium ions into the cell rather than out of the cell and plays a role in maintaining the membrane potential. Although GIRK1 acts to hyperpolarize the cell membrane, its function in cancer remains completely unclear. Cell proliferation and the density of intracellular potassium are controlled at specific stages of the cell cycle [16], and the cell membrane potential indeed changes during the cell cycle [14]. In addition, GIRK1 is reported to play a role in cell proliferation [3].

Figure 2 Overall survival of 72 lung cancer patients according to GIRK1 amplification. Survival curves were calculated by the Kaplan-Meier method, and statistical evaluation was determined by the log-rank test (p = 0.0004).

Figure 3 Survival curves of the patients with stage I NSCLC on the basis of GIRK1 amplification. A significant difference was seen between the 2 groups (p = 0.0376).

Figure 4 Survival curves of the patients with stage II/III NSCLC on the basis of GIRK1 amplification. A significant difference was not seen between the 2 groups.
Receptors known to activate GIRK1 belong to the family of G-protein-coupled receptors, which have been reported to induce cell proliferation and to activate a pathway leading to tumor angiogenesis [17]. GIRK1 overexpression has been reported to follow a general trend of increasing expression when lymph node metastasis is involved in breast carcinomas [8]. Although the mechanistic role of GIRK1 in lymph node metastasis in cancer is not clear, angiogenesis has been reported to correlate with lymph node metastasis, tumor progression, and poor prognosis in most solid tumors [18,19]. Therefore, GIRK1 may act not only in cell proliferation but also as an angiogenesis activator, like other G-protein-coupled receptor pathways. S-phase kinase-associated protein 2 (Skp2) plays a critical role in regulating cell cycle progression, and human factor VIII-related antigen (F8RA) is used to assess angiogenesis by microvessel density (MVD). We therefore determined whether expression of GIRK1 mRNA correlated with immunohistochemical assays of Skp2 and F8RA. Patients with high GIRK1 mRNA expression tended to show high MVD and positive Skp2 expression, although without statistical significance (data not shown). Depending on further functional studies, GIRK1 could be a candidate pharmaceutical target.
In this study, we used the human pulmonary adenocarcinoma cell line PC-14 as a positive control for GIRK1 gene expression. The patients were classified into two groups according to a cutoff at the mean ratio of GIRK1 to cell line PC-14 expression in tumor specimens; the mean has been widely used as a cutoff point for dividing patients into two groups [20,21]. GIRK1 was expressed at higher levels in cancer tissue than in adjacent normal lung tissue. High GIRK1 gene expression was detected in 69% of the tumor samples in our patient population with NSCLC. Furthermore, GIRK1 gene expression was associated with nodal status and tumor stage. The correlation with nodal status is similar to a previous report on breast carcinomas [8]. We examined the relationship between GIRK1 gene expression and AMF-R gene expression, a known marker of lymph node metastasis and tumor progression in NSCLC [12], and GIRK1 gene expression agreed significantly with AMF-R. Statistical associations between GIRK1 expression and clinicopathological variables (age, T factor, histology, and AMF-R) were also examined by regression analysis, which likewise showed that GIRK1 was correlated with AMF-R (data not shown). Patients with high GIRK1 expression showed an unfavorable prognosis overall compared with those whose tumors had low GIRK1 expression. Because more patients with stage II/III disease had high than low GIRK1 expression, the poorer overall survival could have been due to stage. We therefore compared patients within each stage and found an association between GIRK1 expression and surgical outcome in stage I cancer but not in stage II/III disease.
Our results suggest that high GIRK1 gene expression was strongly associated with an increased recurrence rate in stage I cancer, and that patients with high GIRK1 gene expression may be prone to metastasis, or may already harbor occult micrometastases in the lymph nodes in stage I cancer. On the other hand, GIRK1 expression does not seem to be a prognostic predictor for individuals with stage II/III disease. The GIRK1 gene expression level may play a key role in the biology of lung cancer and define a more aggressive tumor phenotype. Further studies are needed to evaluate the mechanism of GIRK1 action, and studies with a larger group of patients will be necessary to substantiate these data. Real-time quantitative PCR is now the standard approach and is more sensitive and accurate than semi-quantitative RT-PCR; we will use real-time quantitative PCR instead of RT-PCR to estimate gene expression in future studies.
In conclusion, the present study suggests that GIRK1 may contribute to tumor progression and could be a useful prognostic marker in patients with NSCLC overall and in stage I. Thus, the current findings provide evidence to support the potential utility of this gene in developing a diagnostic test for NSCLC patients.
Mixed Methods Evaluation of Satisfaction with Two Culturally Tailored Substance use Prevention Programs for American Indian/Alaska Native Emerging Adults
American Indian/Alaska Native (AI/AN) communities are disproportionately affected by the opioid epidemic. AI/AN emerging adults (ages 18–25) in urban areas are at particularly high risk, with the overdose death rate among urban-dwelling AI/AN people 1.4 times higher than rural-dwelling AI/AN people. Despite these challenges, there are no evidence-based culturally tailored prevention or intervention programs to address opioid, alcohol and other drug use among urban AI/AN emerging adults. This study focused on understanding AI/AN emerging adults’ experiences with two culturally tailored programs addressing opioid, cannabis, and alcohol use as part of the randomized controlled trial for Traditions and Connections for Urban Native Americans (TACUNA) in order to enhance feasibility of this intervention. Using a convergent mixed methods design at 3-month follow-up, we collected satisfaction and experience ratings and written narratives (total n = 162; intervention n = 77; control n = 85) from a sample of urban-dwelling AI/AN emerging adults who participated in both programs. We analyzed data through simultaneous examination of qualitative and quantitative data. The quantitative ratings show that both programs were rated highly. The qualitative data contextualized these ratings, illustrating pathways through which specific components were perceived to cause desired or observed behavioral change in participants. Among the elements that mattered most to these participants were the convenience of the virtual format, having a comfortable and safe space to share personal stories, and learning new information about their social networks. Negative comments focused on workshop length and inconvenient scheduling. This is one of the first studies to explore participant satisfaction and experience with culturally tailored substance use programming among a historically marginalized and understudied population. 
It is important to consider the voices of urban-dwelling AI/AN people in program development because hidden factors, such as limited financial resources, limited time, and misalignment with cultural values may prevent existing programs from being feasible. Supplementary Information: The online version contains supplementary material available at 10.1007/s11121-023-01612-3.
Background
Communities across the USA have been affected by the opioid epidemic, with urban American Indian/Alaska Native (AI/AN) communities disproportionately affected. For example, AI/AN individuals had the second highest overdose rates from all opioids in 2017 (15.7 deaths/100,000 population) (Wilson et al., 2020). In addition, for this population, the overdose rate is 1.4 times higher in urban compared to rural areas (Joshi et al., 2018). According to the 2020 Census, 87% of those who identify as AI/AN alone or in combination live outside tribal lands, with 60% of that 87% based in metropolitan areas (HHS, 2022).
AI/AN communities have historically thrived in networks of immediate, extended, and communal families that play an important role in practical and spiritual support (Palimaru et al., 2022). This fabric of AI/AN life was undermined (directly and indirectly) by a combination of government-enforced relocations from tribal lands to urban areas; purposeful assaults against family, social, and cultural traditions; and other political and economic structural barriers that have fueled traumatic experiences and economic disenfranchisement for decades across multiple generations (Brave Heart & DeBruyn, 1998; Dickerson et al., 2020). Prior research found that historical and intergenerational trauma are key drivers of stressful and challenging social circumstances (Gibbs et al., 2018).
As a result of challenges at multiple levels of historical and social ecology, AI/AN emerging adults in urban areas face distinct and complex pressures around social and geographical fragmentation and limited opportunities for cultural involvement, which may in turn put some youth at risk for substance use (Besaw et al., 2004; Palimaru et al., 2022), poorer mental health (CDC, 2017), and death by suicide (Serfaini et al., 2017). Another reason why AI/AN emerging adults may be vulnerable to substance use is the influence that occurs in their social networks, i.e., their families and peers, which may create normative pressure to take risks (Kennedy et al., 2022). Furthermore, national data show that alcohol and cannabis are the substances most frequently used by emerging adults (Patrick et al., 2022). Likewise, our focus groups with AI/AN emerging adults, parents, and providers during the development of this program highlighted the importance of not only addressing opioid use as part of the program, but also discussing how to make healthy choices around alcohol and cannabis (Dickerson et al., 2022).
Framework for Program Development
In the face of these challenges, there are no evidence-based culturally tailored prevention programs to address alcohol and other drug use among urban AI/AN emerging adults (Venner et al., 2018). Given the understandable historical hesitation of some AI/AN communities to engage with established or US-government-linked institutions and research projects, it is important to develop such programs using a community-based participatory research (CBPR) approach (Crump et al., 2020; Gittelsohn et al., 2020; Whitesell et al., 2020). CBPR is a research approach centered on partnerships between scientific researchers and community members to investigate and address issues that affect minority communities disproportionately (Crump et al., 2020; Gittelsohn et al., 2020; Whitesell et al., 2020).
The benefits of CBPR to AI/AN communities are multifaceted. For instance, CBPR can help strengthen community-level identity and capitalize on collective strengths (Israel et al., 1998; Walters et al., 2020). Collaborative partnerships across all stages of a study can also promote mutual learning and assist with revitalizing and preserving traditional culture and knowledge (LaVeaux & Christopher, 2010). Ultimately, one of the key benefits of aligning the rigors of research with community values and needs relates to building trust, which in turn could improve program implementation (Moran, 2001; Olson, 1999; Patel et al., 2022; Whitesell et al., 2020). In taking this approach, we worked with a community partner, Sacred Path Indigenous Wellness Center, to develop a culturally grounded opioid, cannabis, and alcohol use prevention program for urban AI/AN emerging adults (D'Amico et al., 2021). We relied extensively on qualitative data and engaged throughout with our Elder Advisory Board and the broader urban AI/AN community (Dickerson et al., 2022).
In addition to drawing on quantitative data to develop interventions, it is equally important to capitalize on qualitative and mixed methods to evaluate intervention implementation. This helps ensure that interventions are continually responsive to community feedback. Qualitative data have often been poorly and inconsistently utilized in the evaluation of randomized controlled trials, and few trials use a convergent approach as part of the evaluation to understand participants' experience with the intervention and ways to improve it while the trial is ongoing (Davis et al., 2019). And yet, when used properly, qualitative data (such as participant narratives) can shed light on individual and contextual dynamics, and can help address many of the complex challenges faced by randomized controlled trials of preventive interventions (Davis et al., 2019). Such information can help explain why programs do not achieve their full potential, and can be especially useful in adapting the design (Flemming et al., 2008) or cultural components of programs (Montgomery, 2016; Pallmann et al., 2018).
Much of the existing work in this area and for this population underscores the need to integrate traditional practices into prevention programming (Blue Bird Dickerson et al., 2020; Jernigan et al., 2020). For example, a previous study developed by our team (Motivational Interviewing and Culture for Urban Native American Youth, or MICUNAY) combined traditional practices with motivational interviewing to address substance use among urban AI/AN adolescents (Dickerson et al., 2016). At that time, we found that this approach helped promote resilience, and adolescents enjoyed the program (D'Amico et al., 2020; Dickerson et al., 2016). As a result, we took a similar approach to the development of the two prevention programs in this study (Traditions and Connections for Urban Native Americans and the Health and Wellness Program), by conducting formative focus groups and building the intervention around motivational interviewing, a counseling method that includes careful listening and empowering discussions to encourage behavior change if adolescents are ready and willing (Miller & Moyers, 2017). The details of that development process, including how we used findings from focus group data to create program content, are discussed at length in another manuscript (Dickerson et al., 2022). In addition, there is a significant body of work on the benefits of coupling social network visualizations with motivational interviewing in substance use prevention programs (Martinez et al., 2015; Rees et al., 2014; Tingey et al., 2016). Similarly, the feasibility and acceptability of incorporating social network visualizations into a culturally tailored motivational network intervention are described at length in a separate manuscript (Kennedy et al., 2022). That prior manuscript draws on formative focus group data collected before intervention implementation (November 2019 to February 2020) to examine social network aspects as part of program development. This paper uses mixed methods
and focuses on data from the 3-month follow-up after implementation of the two cultural programs (collected from April 2021 to July 2022) to understand social networks and other aspects of satisfaction from an implementation perspective.
This paper is one of the first to address participant satisfaction and experience during a randomized controlled trial among urban-dwelling AI/AN emerging adults. It is extremely important to consider the voices of this population in prevention program development, as hidden structural and socio-cultural factors, such as limited financial resources, competing demands on participant time, limited privacy, consequences of historical trauma, linguistic considerations, and misalignment with cultural values, may prevent existing interventions from being feasible. We therefore describe pathways through which specific components of a culturally and developmentally tailored intervention can affect motivation for or actual behavioral change in participants. This study also provides a methodological advancement in the use of mixed methods to elicit early participant feedback on how to improve an intervention, by using joint displays of qualitative and quantitative data on similar topics. When used early during randomized controlled trials, this approach can help refine intervention design and implementation.
Study Goals
This study focused on understanding AI/AN emerging adults' experiences of two culturally tailored substance use prevention programs during the RCT so that we could use this information to make improvements in format and content as needed. We focused on three research questions: How satisfied were participants with the workshops? How well did the workshops address mechanisms that prevent risk and enhance protection?
What actionable recommendations for program improvement did participants suggest?
TACUNA Study
Traditions and Connections for Urban Native Americans (TACUNA) is a new opioid, cannabis, and alcohol use prevention program designed for urban AI/AN emerging adults. We are testing TACUNA as part of a longitudinal, mixed-methods clinical trial that draws on both quantitative (survey) and qualitative (focus groups, survey narrative elicitations) data (D'Amico et al., 2021). This study is comprised of two phases. Phase I focused on developing a culturally appropriate substance use prevention program that addressed opioid, alcohol, and other drug use (Dickerson et al., 2022). We are following a community-based participatory research (CBPR) approach, an equity-focused approach to the scientific process in which communities, researchers, and other stakeholders collaborate and partake in the decision making and dissemination process (Crump et al., 2020). Phase II consists of a randomized controlled trial comparing the benefits of TACUNA to a culturally tailored control condition (D'Amico et al., 2021). For Phase I, we conducted 13 focus groups across California, involving 32 emerging adults, 33 providers, and 26 parents (Dickerson et al., 2022). Findings from these focus groups are reported in detail elsewhere (Brown et al., 2022; Dickerson et al., 2022; Kennedy et al., 2022; Palimaru et al., 2022). Overall, the groups provided valuable information about the context of urban life, life challenges and aspirations, perceptions of social networks, and concrete ideas on how to execute the workshops. Our team tailored the workshop components based on these findings, for example adapting the cultural elements to preferences expressed by focus group participants (D'Amico et al., 2021). The initial plan for Phase II was to test in-person workshops in California. However, because of the COVID-19 pandemic, we shifted to a virtual format, which in turn allowed us to expand recruitment to urban areas across the USA.
TACUNA is comprised of three separate 2-h virtual workshops based on the Medicine Wheel, which is often included in traditional healing approaches (Dickerson et al., 2022). The first workshop focuses on healthy choices for the brain and a discussion of Native American identity; the second covers healthy choices for the body and Native American cooking; and the third workshop focuses on making healthy choices and improving spiritual health with a sage burning ceremony. See Table 1 for more details about what each workshop covered.
The TACUNA workshops included a novel social network component. To do this, we used a personal network interview platform, EgoWeb 2.0, an open-source survey software customized for social network data collection and visualization in interventions (see egoweb.info). Immediately after answering a series of questions about their social networks, participants in TACUNA viewed three visualizations of their network generated by EgoWeb 2.0. We also provided workshop participants with links to visualizations of their networks during the workshops. Workshop facilitators then used motivational interviewing, which is a goal-oriented style of communication that uses language focused on change (Miller & Rollnick, 2012), to generate group discussions about how social relationships relate to risk and resilience and help people make healthy choices in life (Kennedy et al., 2022). All workshops were piloted and refined based on feedback sessions that lasted approximately one hour (D'Amico et al., 2021). See Supplementary Fig. 1 for an overview of the social network visualization output.
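As an illustration of the kind of personal-network summary such visualizations support, one can compute the share of a participant's alters who use a substance and the density of ties among them. This sketch is purely illustrative and does not reflect EgoWeb 2.0's actual data format or API; the alter names and attributes are hypothetical.

```python
def ego_network_summary(alters, ties):
    """Summarize one participant's personal (ego) network.

    alters: dict mapping alter name -> True if that alter uses alcohol/drugs.
    ties: set of frozensets, each a tie between two alters.
    Returns (share of substance-using alters, density of alter-alter ties).
    """
    n = len(alters)
    using_share = sum(alters.values()) / n if n else 0.0
    possible = n * (n - 1) / 2
    density = len(ties) / possible if possible else 0.0
    return using_share, density

alters = {"A": True, "B": False, "C": False, "D": True}
ties = {frozenset(p) for p in [("A", "B"), ("B", "C"), ("C", "D")]}
share, density = ego_network_summary(alters, ties)  # 0.5, 0.5
```

Here half the alters use substances and half of the six possible alter-alter ties are present; summaries like these are the sort of quantity a facilitator could anchor a motivational interviewing discussion around.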
For our control condition, we developed a 2-h culturally tailored opioid education workshop, hereafter referred to as the Health and Wellness Program. We included this active control condition for ethical reasons based on feedback from the community and our Elder Advisory Board. Specifically, the board felt that all participants should be given culturally appropriate programming relevant to opioids, in order to properly address the risks and disparities faced by AI/AN communities. The information in this workshop is based on prevention and education materials recommended by the National American Indian & Alaska Native Addiction Technology Transfer Center, which is funded by the Substance Abuse and Mental Health Services Administration (NA-ATTC, 2019). The Health and Wellness Program differed in that it was more didactic and included a general overview of opioids, a discussion of the effects of the epidemic on AI/AN communities, as well as discussion of treatment options, physical wellness, and cultural traditions (D'Amico et al., 2021). See Supplementary Fig. 2 for an overview of the Health and Wellness Program content.
Sample and Recruitment
Potential participants were eligible for TACUNA if they were: (1) between the ages of 18-25; (2) currently living
Demographics
Participants provided their age, gender, race/ethnicity, education level, and state of residence.
Alcohol, Cannabis, and Opioid Use
Separate items assessed the number of times in the past 3 months participants reported drinking a full drink, drinking 5 or more drinks (defined as heavy drinking), and using marijuana/cannabis or opioids (none, 1 time, 2 times, 3-5 times, 6-9 times, 10-19 times, 20-30 times, and 31+ times). More than half the sample reported alcohol use (77%) and cannabis use (52%) in the past 3 months. Close to half reported heavy drinking (48%). Few participants reported using opioids (2%).
Workshop Quality and Satisfaction
At the 3-month follow-up, participants were asked to rate the quality of the workshop they attended ("How would you rate the quality of the workshops?"), with answer options ranging from poor (1) to excellent (4). Satisfaction was measured with both an overall item ("Generally, I am satisfied with the workshop I attended") and with a scale that included items about satisfaction with overall content, the workshop facilitator, learning new skills, understanding AOD use in one's social network, and motivation to make changes to one's social network (D'Amico et al., 2020). Answers ranged from strongly disagree (1) to strongly agree (5).
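Scoring such a scale typically means averaging each respondent's 1-5 item ratings, with agreement often summarized as the share of ratings of 4 or 5. The sketch below uses hypothetical responses and a hypothetical agreement threshold; only the 1-5 response range is taken from the text.

```python
def scale_score(item_responses):
    """Mean of a respondent's 1-5 Likert item ratings."""
    return sum(item_responses) / len(item_responses)

def percent_agreeing(overall_ratings, threshold=4):
    """Share of respondents rating 'agree' (4) or 'strongly agree' (5)."""
    return sum(r >= threshold for r in overall_ratings) / len(overall_ratings)

mean_score = scale_score([5, 4, 4, 5, 5])       # 4.6
agree_rate = percent_agreeing([5, 4, 3, 2, 5])  # 0.6
```

Reverse-coded items, if a scale includes any, would need to be recoded (e.g., 6 minus the rating on a 1-5 scale) before averaging.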
Peer Influence
We gauged peer influence on substance use by asking participants how much time they spent around others who use alcohol and other drugs (D'Amico et al., 2008). Questions asked "How often are you with people who are… (drinking alcohol, using marijuana, or smoking cigarettes)?" with response options ranging from 0 (never) to 3 (often). In this paper, we focus on time spent with peers drinking alcohol, as it had one of the higher frequencies.
Cultural Identity
To assess AI/AN cultural identity, we used the Multigroup Ethnic Identity Measure (MEIM) (Phinney, 2016). The scale consists of 12 questions rated from 1 (strongly disagree) to 5 (strongly agree). For the purposes of our work with Indigenous communities, we modified MEIM items to focus on AI/AN heritage (e.g., "I have a clear sense of my AI/AN identity and what it means to me") (Brown et al., 2019). For the mixed methods analysis, we examined cultural themes along with the first item in the scale: "I have spent time trying to find out more about my American Indian/Alaska Native identity, such as its history, traditions, and customs," which was the item closest conceptually to the themes with which it was overlaid in the mixed methods analysis (see Stage 2 analysis below).
Qualitative Data Collection
We supplemented quantitative data with six open-ended questions about participants' workshop experience in both conditions: "Please describe how you feel about your experience"; "What did you like most?"; "What did you like least?"; "How might you improve the workshops?"; "How did you feel about the virtual experience?"; and "How did you feel the workshops addressed your experiences as an urban Native American young adult?" In addition, TACUNA workshop participants answered four open-ended questions addressing the social network component: "Please describe what you thought about seeing the picture of your social network and the discussion of social networks"; "How did the discussion of social networks help you think about drug and alcohol use in your own social network?"; "How did seeing the social network visualization and the discussion help you understand traditional practices and Native American culture in your social network?"; and "Describe any changes you made to your social network, or relationships to people in your network, that were the result of seeing the visualization and the discussion." There were no limits on the length of the comments participants could write in response to any question.
Mixed Methods Analysis
Both qualitative and survey data were uploaded to NVivo, a mixed methods software for coding and organizing survey and qualitative data (QSR, 2018).
Stage 1: Qualitative Analysis
First, we conducted manifest content analysis on all text responses to open-ended questions, using 30 codes focused on experiences during the workshops, 10 codes focused on the social network visualization and discussion, and 15 codes describing actionable recommendations for improving the intervention (Kleinheksel et al., 2020). Codes were developed inductively (Cho & Lee, 2014) by one person (first author) and reconciled with another team member (second author). Both coders were trained in qualitative methods in the context of health services research and anthropology, and both have considerable prior experience with the methodology employed, the subject matter, and Indigenous communities. Thus, the analytic process may have occasionally drawn on assumptions and expectations associated with prior work. Neither coder is AI/AN; however, both participated in the formative focus groups as moderators, and have previous experience partnering with AI/AN communities across the USA to ensure the research process reflects community traditions, values, and preferences. Furthermore, numerous discussions were held with the entire research team regarding these data, including author DLD, who is a Native American addiction psychiatrist working in the Native American community, and author CLJ, who is CEO of our community partner SPIWC, who is also Native American and has worked with AI/AN communities for over two decades.
In some cases, we developed codes based on the topical focus of each question. For example, many of the comments in response to the question "How did you feel about the virtual experience?" were coded with a Virtual Format parent code, with relevant subcodes captured under that code, such as "technical challenges," "liked virtual format overall," and "disliked virtual format overall." The same applies for the question about recommended improvements. For broader questions, such as "How did the discussion of social networks help you think about drug and alcohol use in your own social network?", the codes were based entirely on the comments, which include themes focused on network size, impact of network relationships, isolation, and so on.
We applied some codes to multiple questions, because sometimes answers went beyond the scope of the immediate question. For instance, participants described changes to their social networks in response to the prompt "Describe any changes you made to your social network, or relationships to people in your network, that were the result of seeing the visualization and the discussion." But content about network changes also occurred in response to other questions, such as "How did seeing the social network visualization and the discussion help you understand traditional practices and Native American culture in your social network?" Some codes contained content that was exclusively positive in valence, some were exclusively negative, and others had both positive and negative content. Also, the experience codes and the suggestions-for-improvement codes were kept separate because we did not want to attribute recommended changes to respondents solely on the basis of their experience comments, especially as improvement suggestions were elicited with a distinct prompt. Given that narratives were rich, and that we coded segments of text that were sufficiently long and coherent to be interpretable on their own, some segments were assigned multiple codes. The full codebook is available online as a Technical Supplement.
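The theme proportions reported later (e.g., in Tables 4, 6, and 9) follow from counting how many commenters received each code. A minimal sketch of that tally, assuming a coded-comments structure like those produced by qualitative-analysis software exports; the participant IDs and code names here are hypothetical:

```python
from collections import defaultdict

# Sketch: after manifest content coding, each participant's comments carry a
# set of codes; a theme's proportion is the share of commenters assigned that
# code. Participant IDs and code names below are hypothetical placeholders.

coded = {
    "p1": {"liked_virtual_format", "learned_new_information"},
    "p2": {"liked_virtual_format", "safe_space"},
    "p3": {"technical_challenges"},
    "p4": {"liked_virtual_format"},
}

def theme_proportions(coded_comments):
    """Map each code to the fraction of commenters who received it."""
    counts = defaultdict(int)
    for codes in coded_comments.values():
        for code in codes:
            counts[code] += 1
    n = len(coded_comments)
    return {code: count / n for code, count in sorted(counts.items())}

for code, share in theme_proportions(coded).items():
    print(f"{code}: {share:.0%}")
```

Because a participant's comment can carry multiple codes, the proportions across themes need not sum to 100%, which matches how the tables in this study report themes.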
Stage 2: Analysis of Themes by Survey Answers
Next in the analysis, we followed a "convergent" mixed methods approach wherein we examined qualitative experiential themes sorted by categorical survey ratings (Creswell, 2015; Fetters, 2019). The convergent approach was chosen because it would provide multiple pictures of the concept of interest, i.e., satisfaction, from several angles. Gauging only closed-ended ratings would preclude narrative content about dimensions of experience that may relate to participant satisfaction ratings but are not captured with the survey questions. Likewise, relying only on narratives may not exhaust all the dimensions of satisfaction within the closed-ended scale. This convergent approach allowed the authors to iterate and draw "meta-inferences," i.e., to find linkages between qualitative and quantitative data, and to interpret both types of data relative to each other (Creswell, 2015; Fetters, 2019).
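A joint display of this kind can be sketched as grouping each respondent's qualitative themes under the closed-ended rating category they selected, so narratives are read alongside the ratings they accompany. The records and category labels below are hypothetical, not study data:

```python
# Sketch of a "joint display": collect each respondent's qualitative themes
# under their closed-ended quality rating, so narrative content can be
# examined within each rating category. All records are hypothetical.

records = [
    {"rating": "Excellent", "themes": ["liked_virtual_format", "safe_space"]},
    {"rating": "Good", "themes": ["liked_virtual_format"]},
    {"rating": "Excellent", "themes": ["learned_new_information"]},
    {"rating": "Fair", "themes": ["technical_challenges"]},
]

def joint_display(rows):
    """Group themes by rating category, preserving respondent order."""
    display = {}
    for row in rows:
        bucket = display.setdefault(row["rating"], [])
        bucket.extend(row["themes"])
    return display

for rating, themes in joint_display(records).items():
    print(rating, "->", sorted(set(themes)))
```

The same grouping can be repeated against any categorical survey item (e.g., the MEIM identity item or drinking-exposure frequency), which is how the later tables crossing themes with ratings can be assembled.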
Results
In total, 162 respondents provided ratings to the survey items (TACUNA n = 77; Health and Wellness Program n = 85), of whom 152 provided at least one comment.
Demographic Characteristics and Other Descriptive Information
Table 2 summarizes demographics for both TACUNA and Health and Wellness Program participants. Overall, participants were 18-26 years old (mean = 22.2, SD = 2.19) and were predominantly female (85%). Ninety-eight percent of participants (all but two) identified as AI/AN. Of those 160 participants, 42% endorsed AI/AN alone, 32% identified as AI/AN in combination with Hispanic ethnicity (and in some cases an additional racial category as well), and 26% endorsed AI/AN plus another race (but not Hispanic ethnicity). These racial and ethnic categories are consistent with Census 2020 data, where respondents identified as AI/AN alone or in combination, and with prior evidence (Brown et al., 2016). We do not provide tribal affiliation to protect participant confidentiality.
More than half of respondents graduated from high school and nearly a third had a Bachelor's degree. The two groups were comparable with regard to age and education, with no statistically significant differences. Participants resided in 22 different states. Eighty-six percent of participants in each group provided comments in response to the open-ended questions.
Satisfaction and Quality Ratings
We present descriptive information on satisfaction and quality ratings in Table 3. Within a range of 1 to 4, the mean quality rating for TACUNA was 3.2, with 81% rating it as "excellent" or "good." In the Health and Wellness Program group, the mean quality rating was 3.1, with 79% rating it "excellent" or "good." Within a range of 1 to 5, the average satisfaction rating of TACUNA participants was 4.34 (77% "somewhat agreed" or "strongly agreed" with the statement), and 4.45 for the Health and Wellness Program group (83% agreed "somewhat" or "strongly").
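Each of these summaries reduces to two statistics per item: the mean rating and the share of responses falling in the top categories (e.g., "excellent"/"good", or "somewhat"/"strongly" agreed). A minimal sketch of that reduction, with hypothetical ratings rather than the study's raw data:

```python
def summarize_likert(ratings, top_levels):
    """Return (mean rating, share of responses in the top categories)."""
    mean = sum(ratings) / len(ratings)
    top_share = sum(r in top_levels for r in ratings) / len(ratings)
    return round(mean, 2), top_share

# Hypothetical 1-4 quality ratings (4 = excellent, 3 = good, 2 = fair, 1 = poor)
quality = [4, 3, 4, 2, 3, 4, 3, 4, 3, 4]
print(summarize_likert(quality, top_levels={3, 4}))
```

Reporting the "top-box" share alongside the mean, as the study does, guards against a mean that looks moderate simply because a small number of very low ratings pulled it down.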
Quality Ratings Matched Diverse Qualitative Experiences
Table 4 lists the proportion of participants across both positive and negative themes. Most participants liked the virtual format (primarily due to its convenience) and enjoyed learning new information. TACUNA participants also indicated they enjoyed meeting and connecting with AI/AN emerging adults, whereas Health and Wellness Program participants did not mention this theme. Participants from both groups felt it was a comfortable and safe space to share their views and felt validated in their experiences. Several other positive dimensions of satisfaction were present only for the TACUNA group, such as appreciating the traditional practice and the cultural grounding of the content. Negative themes related to not enjoying the virtual format (mostly because of technical challenges), and some felt the workshops were too long. Also, some participants commented on inconvenient scheduling. Negative comments also indicated that some groups were perceived to be too small, with limited opportunities to interact with others.
Table 5 displays quality ratings along with the three most common positive themes and illustrative examples, both for the TACUNA workshop and the Health and Wellness Program. Of all TACUNA participants who offered both ratings and comments (n = 66), 49% rated it as "Excellent" and 43% rated it as "Good." Among these, 92% said they enjoyed the virtual format, writing, for example: "It was convenient, since I didn't have to go anywhere far for it and it made the length of it more manageable." Thirty-nine percent felt TACUNA was a comfortable and safe space, writing, for example: "I feel it was a great safe space to talk about the experiences I have dealt with growing up as an urban native."

Survey items endorsed (TACUNA / HWP):
- I feel that the things I did in the workshops will help me to make the changes that I want: 70% / 71%
- I could use information from the workshops in my daily life: 77% / 80%
- I could understand the information from the workshops: 82% / 87%
- I developed new friendships as a result of participating in the workshops: 38% / 35%
- The workshops helped me better understand the connections between alcohol, drug use, and people in my social network: 70% / 74%
Social Network Awareness Motivated Change
Survey items endorsed (TACUNA / HWP):
- The workshops helped me better understand the connection between traditional practices and people in my social network: 68% / 69%
- The workshops inspired me to make changes to my own social network: 55% / 61%
- Participating in the TACUNA cultural activities can help me lead a healthier life: 74% / n/a
- I enjoyed the discussion of social networks: 69% / n/a
- I enjoyed seeing pictures of my social network: 62% / n/a

Table 6 lists the proportion of TACUNA participants who mentioned social network themes, along with quotes that illustrate how the workshops addressed mechanisms that prevent risk and enhance protection inherent in social relationships. These themes were mentioned only by the TACUNA participants, because only they received the social network component. Notably, more than half of respondents (53%) indicated that they understood how their social network relationships influenced their alcohol and other drug use, as well as participation in traditional practices. They also described either real or desired changes in their social networks (52%). Table 7 displays social network themes by the frequency of being around people who drink alcohol. Of all TACUNA participants who offered ratings (n = 65), 25% were "often" and 45% were "sometimes" around people who are drinking. Of these, almost half (49%) described real or desired changes to their networks, as this comment illustrates: "The visualization helped me think in the future about my choices of who I am hanging out with and more specifically what we are doing. I am more interested in doing activities and things sober and want to try to bring that to my friend groups." Thirty-one percent of TACUNA participants were "hardly ever" or "never" around people who are drinking. Of these, 60% were motivated to make real or desired changes to their network; for example, "I changed my social network by hanging out with different people and expanding my friend groups, but also drifted away from some friends."
Culturally Adapted Segments Validated Urban Native Experience
Among TACUNA participants, 39% said they liked the workshops because they were able to meet and connect with other AI/AN emerging adults; 27% felt validated in their experiences as young Native American people; 26% felt that TACUNA addressed their urban experience; and 24% enjoyed the traditional practice components. Others appreciated TACUNA's cultural grounding (17%), and 8% were motivated to learn about their culture, often through reaching out to their community.

Table 5 content (theme, proportion, and illustrative quote):

TACUNA (of all workshop participants who offered both ratings and comments, n = 66, 49% rated it "Excellent" and 43% rated it "Good"):
- 92% enjoyed the virtual format: "It was convenient, since I didn't have to go anywhere far for it and it made the length of it more manageable."
- 52% learned new information: "Learning more about sage and food. Also learning more about the medicine wheel."
- 39% found it a comfortable and safe space to share: "I feel it was a great safe space to talk about the experiences I have dealt with growing up as an urban Native."

Health and Wellness Program (of all participants who offered both ratings and comments, n = 73, 37% rated it "Excellent" and 47% rated it "Good"):
- 74% enjoyed the virtual format: "I greatly enjoyed the virtual workshop and was grateful for the caution that it applied to prevent the spread of Coronavirus."
- 67% learned new information: "I've had a good experience with everything including the workshop over zoom where we talked about opioids and the various effects of the drug and how hard it is to stop doing an opioid due to the overwhelming withdrawal cycle."
- 32% found it a comfortable and safe space to share: "I felt more connected, given that each person was most likely in a space that they were already comfortable in, and with this it provided a sense of safeness and allowed the discussions be more open and really went in-depth about the topics."

Table 8 displays prominent culture and identity themes for the majority of TACUNA participants, i.e., those who indicated they have sought information on Native identity. Of all TACUNA participants who offered both ratings and comments (n = 54), 57% "Strongly Agreed" and 43% "Agreed" with the statement "I have spent time trying to find out more about my AI/AN identity, such as its history, traditions, and customs." Of these, 47% appreciated that they were able to meet and connect with other AI/AN emerging adults during the workshops, as illustrated by this quote: "I loved the opportunity to speak with other Indigenous young adults about topics that aren't easily brought up." Thirty-five percent suggested they felt validated.
Actionable Recommendations for Improvement
Table 9 shows the proportion of participants by each improvement theme. Nearly a fifth of the TACUNA participants suggested improving facilitation techniques and increasing participant interaction. For example, one respondent wrote, "I would improve the workshops by maybe doing more icebreakers so it does not feel as awkward and there is a greater sense of connection with the other participants." Another suggested, "I feel there should be discussion questions given to us where we can talk amongst ourselves regarding how we may improve our Native community." Sixteen percent also recommended more tailored Native cultural content, as illustrated by the following quote: "I would improve the workshops by breaking down information by region. I know traditions for Plains tribes are very different than say those on the East coast, so maybe I would take the time to elaborate on that." Among Health and Wellness Program participants, 20% recommended having larger groups and more participant interaction; for example, "I would include more areas to discuss the material, case studies etc. to dive more deeply into the material. Also, larger groups to interact if possible."
Discussion
This study describes urban AI/AN emerging adults' satisfaction with two culturally tailored programs addressing opioid, cannabis, and alcohol use. Results from this community-based study highlight the importance of analyzing satisfaction levels and feedback from participants during the randomized controlled trial. This feedback can help to address implementation issues early in the research process. We utilized a convergent mixed-methods design to elicit actionable information about implementation, feasibility, and acceptability. The quantitative ratings show that both programs were rated highly, and the qualitative data helped contextualize the ratings and illuminate how the programs worked. We expected that participants would report high satisfaction with both programs, as content was developed with extensive input from the community. Methodologically, this study shows the utility of garnering both quantitative and qualitative satisfaction and experience data early on in randomized controlled trials, which can flag implementation issues early. For instance, even participants who rated the TACUNA workshop highly offered suggestions for improvement regarding workshop duration, size, and scheduling convenience. Without qualitative data, such important actionable details might have been missed.
Overall, participants in this study reported high satisfaction levels with both interventions. Participants liked the convenience of the virtual format, the comfortable and safe space to share personal stories, and learning new information. The narratives also provided insights on mechanisms that prevent risk and enhance protection. Participants in the TACUNA workshops reported that the social network component helped raise awareness of their own social networks, inspired motivation to change their social networks, and inspired motivation to connect to culture. Participants' comments illustrated how seeing illustrations of their social networks helped them think about who was around them, how they interacted with others, and whether they needed to make changes, find support, or take other action. Respondents also noted the importance of the cultural practice components, saying they enjoyed learning about traditional practice and history, with some signaling motivation to connect with the community more. Overall, findings help substantiate our approach of incorporating social network discussions and AI/AN traditional practices within the TACUNA program.
Moreover, the qualitative data offered actionable information regarding implementation, such as the positive regard for the virtual format and requests for more regionally focused traditional information. We found pathways through which specific components of TACUNA were perceived to increase motivation for, or actual, behavioral change in participants. Our team has used these comments to further enhance implementation of the intervention. For example, responding to negative comments about the duration of the intervention, we reduced the workshop length from two hours to one hour. We used the findings relating to perceptions of facilitators in our facilitator training sessions, for example to help better pace the sessions. We also plan to enhance the final manual and intervention approach to reflect the need for more local cultural information; for example, we plan to preface the Native American cooking component with historical overviews of Native plants and local or regional preferences for seeds and other ingredients. Finally, our work reinforces the importance of using community-based participatory research throughout the entire study. Many of the 3-month respondent observations in this study aligned with insights from our formative focus groups and pilot tests, wherein emerging adults indicated that the social network component was helpful as it created an understanding of how their networks may influence them, and many felt motivated to make healthy connections (Brown et al., 2022; Kennedy et al., 2022; Palimaru et al., 2022). We actively engaged members of the community at key steps along the way, including the design and content of the intervention, ensuring culturally appropriate recruitment, and dissemination of results (Dickerson et al., 2022). This is especially important in under-represented communities that have faced historic abuses in the name of research.
There are a few limitations to note. First, recall bias may be an issue, as participants responded 3 months after the intervention. The narratives included occasional comments such as "I don't know" or "I don't remember." Moving forward, study designs that examine satisfaction and experience at multiple points in time, such as immediately after intervention and at 3 months, could offer more insights into the optimal time to elicit such feedback. Also, we had a slightly smaller qualitative sample compared to the survey ratings sample; there were respondents who answered the closed-ended questions but not the open-ended qualitative questions. Finally, a majority of our sample reported female identity; this aligns with prior findings in prevention research, showing that females typically have higher participation rates (Reed et al., 2022). Thus, these findings may overrepresent female perspectives and sensitivities relating to substance use and social networks, while underrepresenting other gender identities.

Table 9 quote fragments on the virtual format: "Not everyone has good wifi service. I wish the cameras would stream everyone's faces." (TACUNA); "Train staff on how to work the platform they are on." (HWP)
Conclusion
This is one of the first studies to examine participant satisfaction and experience with substance use prevention programming among a historically marginalized population. This study elicited actionable information about feasibility and acceptability of two culturally tailored programs that were developed through community-based participatory research. Overall, findings highlight the importance of engaging communities throughout the intervention development process as part of a continuous dialogue on how to ensure programs are relevant and grounded in community priorities and needs. Collecting and analyzing participant ratings and narratives during the implementation process provided a deeper understanding of the workshops, including successful and less helpful aspects, which can aid in future development and refinement of programs.
Table 1
This study occurred during the COVID-19 pandemic, from December 2020 to October 2021; therefore, recruitment occurred online via social media across the USA, and participants completed surveys online (D'Amico et al., 2021). Participants completed an online screener, and those who were eligible were contacted by staff from our Survey Research Group and consented to be part of the study. They were then asked to complete a baseline survey and were randomized to receive either one virtual workshop or three virtual workshops and a Wellness Circle (D'Amico et al., 2021). Eligibility criteria included living in an urban area (i.e., not on a rancheria, reservation, or other tribal lands), self-identifying as AI/AN, having no opioid use disorder, and speaking English (D'Amico et al., 2021). Procedures were approved by the institution's Internal Review Board and the project's Urban Intertribal Native American Review Board. This study has been preregistered with Clinical Trials, registration NCT04617938, and the study protocol has been published (D'Amico et al., 2021). In addition to the baseline data, participants complete 3-, 6-, and 12-month follow-up surveys. The current analysis draws on the 3-month follow-up survey and open-ended comment data from AI/AN emerging adults across the USA who completed 3-month follow-up surveys between April 2021 and July 2022 (total n = 162; TACUNA n = 77; Health and Wellness Program n = 85).
Table 2
Sample Demographics (N = 162). *Racial and ethnic groups do not add up to 100% as participants could report "all that apply." TACUNA stands for the intervention group, Traditions and Connections for Urban Native Americans. HWP stands for the control group, Health and Wellness Program. Table 6 lists TACUNA participants' social network themes with illustrative quotes.
Table 3
Proportion of respondents by quality and satisfaction ratings. *For quality, percent reflects participants who reported excellent or good.
Table 4
Positive and negative themes among participants (N = 152). TACUNA stands for the intervention group, Traditions and Connections for Urban Native Americans. HWP stands for the control group, Health and Wellness Program.
Table 5
Overall satisfaction themes by workshop quality ratings. TACUNA stands for the intervention group, Traditions and Connections for Urban Native Americans. HWP stands for the control group, Health and Wellness Program.
Table 6
Proportion of TACUNA participants with Social Network-specific themes.

Participant feedback can help with community-based delivery of interventions and development of interventions that can be applied nationwide. It is important to consider participants' voices as they can reveal hidden structural and socio-cultural factors that may undermine program effectiveness. This study also uses mixed methods to elicit early participant feedback on how to improve the intervention, by using joint displays of qualitative and quantitative data.
Table 8
Prominent culture and identity themes by pursuit of information on Native identity among TACUNA participants
Table 9
Improvement themes among participants (N = 152). TACUNA is the intervention, Traditions and Connections for Urban Native Americans; HWP is the control, Health and Wellness Program.

- Improve facilitation and discussion: "I would say having the facilitators be open to answering the questions themselves and talking about their personal experiences, if comfortable, since it felt more like a lecture than an open dialogue between two people." (TACUNA); "The way the speakers deliver information. No room for real discussion." (HWP)
- More tailored cultural content: "I would improve the workshops by breaking down information by region. I know traditions for Plains tribes are very different than, say, those on the East coast, so maybe I would take the time to elaborate on that." (TACUNA); "I wanted to hear more about the difficulties of being a reconnecting Native." (HWP)
- Offer better scheduling options (TACUNA 12, HWP 2): "More weekends workshops or earlier workshops during the week." (TACUNA); "Make it more accessible for people with various schedules." (HWP)
- Improve pace (TACUNA 10, HWP 4): "I may improve it by not letting the silence go on too long or spending too much time on a topic." (TACUNA); "Maybe send the videos out for people to watch before hand." (HWP)
- Shorter duration (TACUNA 10, HWP 11): "Maybe make them shorter. It was a bit hard for me to attend them because of my schedule." (TACUNA); "Maybe make the workshop more concise, taking maybe 45 min to an hour." (HWP)
- Larger groups and more interaction: "Have larger groups than just the three plus one moderator so that there could be more discussion in the meeting." (TACUNA); "I would have added more people if possible." (HWP)
- Offer in-person options (TACUNA 6, HWP 4): "I think the only thing that could've made these workshops any better would be the opportunity to meet in person." (TACUNA); "I think in-person would have been nice." (HWP)
Paeniclostridium sordellii uterine infection is dependent on the estrous cycle
Human infections caused by the toxin-producing, anaerobic and spore-forming bacterium Paeniclostridium sordellii are associated with a treatment-refractory toxic shock syndrome (TSS). Reproductive-age women are at increased risk for P. sordellii infection (PSI) because this organism can cause intrauterine infection following childbirth, stillbirth, or abortion. PSI-induced TSS in this setting is nearly 100% fatal, and there are no effective treatments. TcsL, or lethal toxin, is the primary virulence factor in PSI and shares 70% sequence identity with Clostridioides difficile toxin B (TcdB). We therefore reasoned that a neutralizing monoclonal antibody (mAB) against TcdB might also provide protection against TcsL and PSI. We characterized two anti-TcdB mABs: PA41, which binds and prevents translocation of the TcdB glucosyltransferase domain into the cell, and CDB1, a biosimilar of bezlotoxumab, which prevents TcdB binding to a cell surface receptor. Both mABs could neutralize the cytotoxic activity of recombinant TcsL on Vero cells. To determine the efficacy of PA41 and CDB1 in vivo, we developed a transcervical inoculation method for modeling uterine PSI in mice. In the process, we discovered that the stage of the mouse reproductive cycle was a key variable in establishing symptoms of disease. By synchronizing the mice in diestrus with progesterone prior to transcervical inoculation with TcsL or vegetative P. sordellii, we observed highly reproducible intoxication and infection dynamics. PA41 showed efficacy in protecting against toxin in our transcervical in vivo model, but CDB1 did not. Furthermore, PA41 could provide protection following P. sordellii bacterial and spore infections, suggesting a path for further optimization and clinical translation in the effort to advance treatment options for PSI infection.
Introduction
Human infections caused by the toxin-producing, anaerobic and spore-forming bacterium Paeniclostridium sordellii are associated with a treatment-refractory toxic shock syndrome (TSS) and are typically lethal [1]. Reproductive-age women are at increased risk for P. sordellii infection (PSI) because this organism can cause intrauterine infection following childbirth or abortion [1]. Clinical indications of disease include a marked leukemoid reaction, i.e., a vast increase in white blood cells, increased vascular permeability, hemoconcentration, and, in most cases, the absence of a fever [1]. When women present with PSI-TSS, very little is known about how to treat the patient [1]. In most cases, a hysterectomy is performed along with definitive antibiotic therapy. However, even if antibiotics are successful in killing the bacteria, bacterial toxins can continue circulating in the body to cause disease.
P. sordellii secretes two cytotoxins that are similar in structure and function to toxins generated by the pathogen Clostridioides difficile: lethal toxin (TcsL), similar to C. difficile TcdB, sharing 76% sequence identity, and hemorrhagic toxin (TcsH), similar to C. difficile TcdA, sharing 78% sequence identity. Both TcsL and TcsH, like the C. difficile toxins, are glucosyltransferases that inactivate host GTPases. Some TcsL-positive isolates lack the gene encoding TcsH and are rapidly lethal in an animal model, indicating that TcsH is not essential for virulence [2]. Genetically derived TcsL-mutant strains were nonlethal in a mouse infection model, providing evidence that TcsL is an essential virulence factor responsible for disease in PSI [3]. Neutralizing the cytopathic effect of TcsL might protect humans against toxic shock caused by TcsL-expressing P. sordellii.
Two anti-TcdB monoclonal antibodies, PA41 and CDB1, have been characterized and shown to neutralize TcdB in cell culture and animal models [4][5][6]. PA41 binds the glucosyltransferase domain (GTD) of TcdB, inhibiting the delivery of the enzymatic cargo into the host cell [4]. CDB1, a mAB whose Fab sequence is identical to that of Bezlotoxumab, neutralizes TcdB by blocking binding of TcdB to mammalian cells [6]. Given the high levels of sequence identity between TcsL and TcdB, we wondered if these anti-TcdB antibodies would also neutralize TcsL, and if so, if they would provide protection in an animal model of infection. We were particularly interested in the potential of CDB1, as the mAB Bezlotoxumab is an FDA approved therapeutic for the prevention of CDI recurrence [7]. To have a clinically available mAB that also targets TcsL, the key virulence factor in PSI, could represent a significant tool in the limited therapeutic arsenal when faced with human PSI.
Developing effective interventions against PSI (and TSS) is stymied by a lack of animal models and an incomplete understanding of how P. sordellii induces disease. Some investigators have used intraperitoneal injection of toxin [8,9], but this model is not optimal in terms of physiological relevance. To increase relevancy, a uterine mouse model was established to study PSI-associated TSS [10]. This intrauterine infection involves survival surgery to allow for ligation at the cervical junction and direct introduction of bacteria into the uterine lumen [10]. This model, however, provides additional pain and stress to the animals and increases the risk of infecting the blood stream directly. To address this problem, we developed an innovative mouse model system in which to study PSI using a transcervical (TC) inoculation method. Such a model allows for a non-surgical transfer of inoculum through the vaginal orifice, past the cervix, and directly into the uterus. This method eliminates the need for intensive survival surgery and produces a disease that more closely represents the nature of PSI in postnatal and post-abortive women.
Neutralization of recombinant TcsL by monoclonal antibodies, PA41 and CDB1, in vitro
To assess the effects of anti-TcdB antibodies PA41 and CDB1 on TcsL in vitro, cell neutralization assays were performed. First, Vero cells were treated with serial dilutions of TcsL in the presence and absence of 100 nM concentrations of the mAbs for 72 hours and then assayed for ATP levels as an indicator of viability (Fig 1A). In panel B of Fig 1, we display the same data as a bar graph, indicating the concentrations of TcsL at which we observed a statistically significant difference. At 100 fM TcsL, PA41, but not CDB1, provided a statistically significant improvement in viability, although TcsL is not very toxic at this concentration. The cells, however, were very sensitive to 1 pM TcsL, and PA41 completely neutralized the cytotoxic activity of the toxin at this concentration. CDB1 also showed statistically significant neutralization at this TcsL concentration, though to a lesser extent than PA41. Vero cells were not viable following intoxication with 10 pM or 100 pM TcsL alone. In the presence of PA41, cells were completely or partially viable when intoxicated with 10 pM or 100 pM TcsL, respectively. CDB1 was not able to protect at 10 pM or 100 pM TcsL. Altogether, both antibodies neutralized the cytotoxicity of TcsL on Vero cells, though PA41 appeared more effective than CDB1 in all conditions. We also performed a dose titration of both antibodies at a cytotoxic dose of TcsL (1 pM) to assess their relative potencies (Fig 1C and 1D). Strong neutralization of TcsL was observed with 100 pM PA41. Titration of CDB1 revealed a sharp drop in potency below 100 nM.
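For concreteness, the luminescence-to-viability normalization implied by this assay can be sketched in a few lines of Python. The well values below are hypothetical, and the helper function is ours, not part of the CellTiter-Glo workflow:

```python
# Convert raw ATP-luminescence readings into percent viability relative to
# the mean of untreated control wells. All numbers are hypothetical.

def percent_viability(raw_signal, control_signals, blank=0.0):
    """Normalize one raw luminescence value against untreated controls."""
    control_mean = sum(control_signals) / len(control_signals) - blank
    return 100.0 * (raw_signal - blank) / control_mean

# Hypothetical plate: three untreated control wells and one TcsL-treated well.
controls = [52000, 50000, 48000]
treated = 25000
viability = percent_viability(treated, controls)  # 50% of control signal
```

A blank (medium-only) reading can be subtracted from both signals via the `blank` argument when background luminescence is non-negligible.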
PA41 and CDB1 neutralization of TcsL, in vivo, following intraperitoneal injection
To assess the effects of PA41 and CDB1 on TcsL in vivo, it was first necessary to determine the lowest lethal intraperitoneal (IP) dose of TcsL. Female, 9-12-week-old C57BL/6J mice were IP injected with 1 ng or 2.5 ng TcsL in 100 μL PBS (Fig 2A). In this study, 2.5 ng TcsL was the lowest lethal dose administered, with all animals succumbing to intoxication within 24 hours of administration. Consistent with previous findings [9], all mice intoxicated with this amount of TcsL or higher had a buildup of fluid in the thoracic and peritoneal cavities (S1 Fig). Mice intoxicated with 1 ng TcsL survived the study and showed no signs of disease. Next, we performed a survival study with 2.5 ng TcsL IP alone or in the presence of PA41 or CDB1 (Fig 2B). Based on our in vitro assays, we determined that a 10,000-fold excess (0.75 mg/kg) of antibody over the lethal TcsL dose (2.5 ng) would be a reasonable dose for our in vivo IP intoxications. We found that PA41 effectively neutralized TcsL, and all animals survived the study with no signs of disease. CDB1 was not able to completely neutralize TcsL when given at 0.75 mg/kg, resulting in 66% survival. This result led us to increase the antibody dose ten-fold to 7.5 mg/kg for all subsequent in vivo studies, to give both antibodies the best opportunity for neutralization. To determine how long the antibody circulates in the bloodstream following a single administration of PA41, we performed a three-day study in which three animals were given 7.5 mg/kg PA41 on Day 0; on each of Days 1, 2, and 3, one animal was euthanized and whole blood was collected. By western blot analysis of the serum using an anti-human Fab antibody, we consistently detected PA41 in serum each day for up to three days without signs of depletion (S2 Fig).
From these in vivo studies, both C. difficile monoclonal antibodies were able to neutralize 2.5 ng TcsL following IP injection, with PA41 appearing more effective than CDB1. In addition, antibody administration protected mice from TcsL-induced pleural effusion.
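As a sanity check on the fold-excess quoted above, the mass doses can be converted to moles. The mouse body mass (~20 g) and the molecular weights (IgG ≈ 150 kDa, TcsL ≈ 270 kDa) are assumed typical values for illustration, not figures reported in this study:

```python
# Back-of-the-envelope molar ratio of antibody to toxin for the IP study.
# Assumed values: mouse mass 20 g, IgG MW ~150 kDa, TcsL MW ~270 kDa.

MOUSE_KG = 0.020                # assumed adult mouse body mass
MAB_DOSE_MG_PER_KG = 0.75       # antibody dose from the first IP study
MAB_MW_G_PER_MOL = 150e3        # typical IgG molecular weight
TCSL_NG = 2.5                   # lowest lethal IP dose of TcsL
TCSL_MW_G_PER_MOL = 270e3       # large clostridial toxin, ~270 kDa

mab_mol = (MAB_DOSE_MG_PER_KG * MOUSE_KG * 1e-3) / MAB_MW_G_PER_MOL
tcsl_mol = (TCSL_NG * 1e-9) / TCSL_MW_G_PER_MOL
fold_excess = mab_mol / tcsl_mol  # on the order of 10^4
```

Under these assumptions the molar ratio works out to roughly 10^4, consistent with the ~10,000-fold excess cited in the text.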
PLOS PATHOGENS
Paeniclostridium sordellii uterine infection is dependent on the estrous cycle
In vivo neutralization of P. sordellii vegetative bacteria following intraperitoneal injection
Our next steps were to infect mice with P. sordellii vegetative bacteria and assess whether PA41 and CDB1 offered protection. We chose to infect with a highly virulent P. sordellii reference strain, ATCC 9714, that lacks the gene for TcsH. Intraperitoneal injections of 10^6, 10^7, or 10^8 CFUs of vegetative bacteria were administered to determine the number of bacteria to use in neutralization studies (Fig 2C). We found that injection of 10^7 CFUs produced a penetrant survival curve, with 11 of the 12 infected mice dying over the course of 60 hours. Injection of 10^8 CFUs resulted in death by 12 hr, a time we predicted to be too short to allow for mAb neutralization. A lower inoculum of 10^6 CFUs resulted in only 50% survival, but with only two animals tested.
Next, we tested antibody neutralization of 10^7 CFUs of P. sordellii vegetative bacteria. Since PA41 showed the greatest efficacy in TcsL neutralization in vitro and in vivo, these studies were done with PA41 (7.5 mg/kg). It is plausible that with vegetative bacteria, TcsL is already being produced and may overwhelm the antibody when both are administered at the same time. To reduce this possibility, antibody was administered by IP injection 18 hr prior to IP injection of both vegetative bacteria and a second dose of antibody. PA50, a monoclonal antibody against C. difficile TcdA, was used as a negative control [11]. In this experiment, the PA41-treated mice had a marginally higher survival rate than control-treated mice (Fig 2D). This result, however, was not statistically significant.
Transcervical instillation of recombinant TcsL or vegetative P. sordellii to study uterine PSI
Having observed some efficacy with the antibodies using the IP models, we next wanted to test their effectiveness in a more physiologically relevant animal model. We developed an innovative mouse model system in which to study PSI using a transcervical (TC) inoculation method. The model allows for a non-surgical transfer of inoculum through the vaginal orifice, past the cervix, and directly into the uterus (Fig 3A). For TC instillation, a speculum was inserted into the vaginal cavity to allow for dilation and passage of a gel loading pipette tip through the cervix and transfer of inoculum directly into the lumen of the uterine horn. This method enabled the simple instillation of vegetative bacteria, spores, or recombinant protein into the uterus. Following instillation, a cotton plug applicator was inserted into the vagina, and the cotton plug was expelled from the applicator and into the vaginal cavity using a blunt needle. This cotton plug was used as an absorptive material to keep inoculum in the reproductive tract and to minimize any leakage into the environment.
While IP injection of 2.5 ng TcsL caused death before 24 hours post-intoxication (Fig 2A), TC instillation of 5, 25, or 50 ng TcsL did not result in any signs of disease or death (Fig 3B). This suggested that TcsL alone is not cytotoxic to the epithelium of the reproductive tract, but perhaps requires assistance from other P. sordellii virulence factors. To test this hypothesis, TC inoculations of P. sordellii strain ATCC 9714 vegetative bacteria were performed. However, instillation of either 10^7 or 10^8 CFUs resulted in minimal death and signs of infection compared to IP infection (Fig 3C).
The murine reproductive cycle determines the pathogenic outcome of P. sordellii uterine intoxications and infections
We next tested whether manipulation of the host hormonal environment influences murine susceptibility to TcsL intoxication and P. sordellii infection. For estrous cycle synchronization, medroxyprogesterone acetate was administered subcutaneously five days prior to intoxication/infection to prolong diestrus, and beta-estradiol was administered subcutaneously two days prior to intoxication/infection to prolong estrus. Immediately prior to instillation, we confirmed via vaginal lavage analysis that the mice were in the expected stage of the reproductive cycle. Animals were weighed and monitored daily for six to eight days (Fig 4A). To begin, animals in diestrus or estrus were transcervically instilled with 50 ng TcsL. All animals in diestrus succumbed to intoxication by 24 h (Fig 4B). Conversely, all animals in estrus survived the study with no signs of disease or sickness. Diestrus animals were then subjected to 5, 10, 20, 50, and 500 ng TcsL, and the resulting survival curves revealed increasing severity with each increase in dose (Fig 4C). Next, animals in diestrus or estrus were transcervically inoculated with 10^7 CFUs of P. sordellii 9714 vegetative bacteria. A statistically significant difference was found between animals in diestrus and those in estrus, with animals in diestrus having a more adverse outcome to infection (Fig 4D). A bacterial titration of vegetative bacteria was administered TC to animals in diestrus, and the resulting survival curves revealed 10^7, 10^6, and 10^5 CFUs to be similar in severity (~15-30% survival), 10^4 CFUs to be less severe (80% survival), and 10^2 CFUs to cause no detectable signs of sickness in the animals (Fig 4E). From these experiments, we conclude that the mouse reproductive cycle can influence the pathogenic outcome of TcsL intoxications and uterine P. sordellii infections.
PA41 and CDB1 neutralization studies of TcsL, in vivo, following transcervical instillation of animals in diestrus
We next sought to determine the efficacy of PA41 and CDB1 in neutralizing TcsL intoxication in our hormone-synchronized transcervical instillation model. All animals were administered medroxyprogesterone acetate five days prior to instillation to induce diestrus. For antibody administration, since we knew TcsL is rapidly lethal (Fig 4C), we wanted a higher amount of PA41 in the bloodstream prior to intoxication. We tested a single IP injection of 15 mg/kg PA41, however, and found that the animals had a more severe outcome following a vegetative bacterial IP infection (S3 Fig), presumably due to an immune reaction to a high amount of foreign material. Instead, to achieve a higher amount of circulating antibody, we performed sequential antibody dosing of 7.5 mg/kg on days -5, -3, and -1 to allow the animals time to acclimate to the antibody administrations. Then, on day 0, animals were intoxicated with 10 or 50 ng TcsL and weighed and monitored for seven days (Fig 5A). PA41, but not CDB1, neutralized the cytotoxic activity of 10 ng TcsL (Fig 5B). PA41 also showed efficacy in neutralizing up to 50 ng TcsL, and all animals survived the study (Fig 5C). Uterine tissues were harvested upon euthanasia and processed for histology. H&E-stained tissue (S4 Fig) was scored from mild to severe for edema, acute inflammation, and epithelial injury by a pathologist blinded to the experimental conditions (Fig 5D). Scores of moderate to severe were assigned to animals that had been transcervically instilled with 50 ng TcsL. When mice were pre-treated with PA41 before 50 ng TcsL, scoring was reduced in all criteria. CDB1 administration did not improve the scoring in mice that had been treated with 10 ng TcsL. Complete blood counts were performed on blood collected at time of death or at end of study. Total white blood cell (WBC) counts for 10 and 50 ng TcsL showed no statistically significant difference compared to PA41-, CDB1-, and PBS-treated mice (Fig 5E). However, in a WBC differential analysis, 50 ng TcsL alone showed an increase in neutrophils (NE) and a decrease in lymphocytes (LY) compared to PBS control animals (Fig 5F). Additionally, hematocrit (HCT) levels, i.e., the proportion of red blood cells in the blood, were increased in animals instilled with 50 ng TcsL.
Animals administered PA41/TcsL (50 ng) had NE, LY, and HCT levels similar to PBS control mice. WBC differential analysis of animals treated with 10 ng TcsL in the presence or absence of CDB1 was similar to PBS control mice (Fig 5G).
Prophylactic administration of PA41 in treatment of transcervical P. sordellii infection
All animals were administered medroxyprogesterone acetate five days prior to instillation to induce diestrus. Animals were intraperitoneally injected with 7.5 mg/kg PA41 or PBS one day prior to TC infection with 10^5 CFUs of vegetative P. sordellii (Fig 6A). Although not statistically significant, we did see a delay in mouse death at 36 h post infection, with 80% survival of animals treated with PA41 compared to 40% survival of PBS-treated mice. At the end of the study at 96 h, however, there was only a slight, non-significant difference between PA41- and PBS-treated animals, with 40% and 30% overall survival, respectively (Fig 6B).
PA41 in treatment of transcervical P. sordellii spore infection
In addition to evaluating vegetative bacteria, we wanted to test whether the TC instillation model would be responsive to P. sordellii spores. Indeed, TC inoculation of 10^5 and 10^6 CFUs of P. sordellii 9714 spores in diestrus mice was lethal, with 10^5 CFU spores producing a delayed onset of disease and animal mortality beginning after Day 6 (Fig 6C).
Finally, we wanted to assess whether PA41 could be used in the treatment of TC P. sordellii 9714 spore infections. To test this, all animals were induced into diestrus five days prior to transcervical inoculation of 10^6 spores. One, three, and five days following infection, animals were intraperitoneally injected with 7.5 mg/kg PA41 or PBS. Animals were weighed daily and monitored for 7-10 days (Fig 6D). We found that animals administered PA41 following PSI had higher survival rates than PBS-treated mice (Fig 6E). The differences fell short of statistical significance using the log-rank (Mantel-Cox) multiple comparison test (p = 0.06) but were significant using the Gehan-Breslow-Wilcoxon test, which gives more weight to earlier timepoints (p = 0.04). These data support our overall conclusion that PA41 can reduce the impact of PSI in a mouse model of uterine infection.
Discussion
Reproductive-age women are at increased risk for PSI because this organism can cause a nearly 100% fatal intrauterine infection following childbirth or abortion [1]. When women present with PSI-associated TSS, there is very little information to guide treatment [1]. Antibiotics can be used to treat the P. sordellii infection, but the toxins remain active. Antitoxin preparations against lethal toxin were shown to prevent cytotoxicity of culture supernatants of P. sordellii, as well as C. difficile, in cell culture and lethality in mouse studies [8]. However, there is no commercially available antitoxin for treatment of human infection. A therapeutic drug that targets TcsL, the key virulence factor in PSI, could significantly reduce mortality in these patients.
In this study, we tested the C. difficile anti-TcdB mAbs PA41 and CDB1 for their capacity to provide protection against TcsL. TcsL and TcdB share not only a high level of sequence identity (~76%) and structural homology but also similar mechanisms of intoxication. Antibody cross-reactivity was reported when TcsL was first purified and characterized: an antibody that neutralized TcsL was also able to recognize and bind TcdB [8]. That report is consistent with our finding that both anti-TcdB mAbs significantly protected Vero cells from the cytotoxic activity of TcsL, though PA41 appeared to have better efficacy (Fig 1). The interactions of PA41 and Bezlotoxumab (which shares epitope binding sequences with CDB1) with TcdB have been characterized, and their epitopes and modes of neutralization are known [4][5][6]. PA41 binds the GTD and prevents the translocation of the enzymatic domain into the host cell [4]. Bezlotoxumab, on the other hand, blocks TcdB binding to the CSPG4 host cell receptor [6,12]. Given their high sequence identity, it is presumed that TcdB and TcsL have similar antibody epitopes. For example, the high sequence identity between TcdB and TcsL at the known TcdB/PA41 interface suggests a similar mechanism of antibody neutralization (S5 Fig). It is likely that, in the presence of PA41, TcsL is able to bind and enter the host cell, but the enzymatic domain cannot be translocated into the cytosol and is thereby unable to inactivate host GTPases. In the case of CDB1, it is perhaps unsurprising that this mAb was less effective than PA41 in TcsL neutralization, as the TcsL receptors of the Sema6 family bind the TcsL delivery domain at a location distinct from the CSPG4 and Bezlotoxumab binding sites [13,14].
Nevertheless, we wanted to test both mAbs for their neutralization efficacy against TcsL in vivo. Using a murine IP intoxication model, we observed that both PA41 and CDB1 were able to neutralize a lethal IP dose of TcsL (2.5 ng) and protect the animals from death (Fig 2B). Similarly, PA41 increased mouse survival following IP injection of vegetative P. sordellii bacteria (Fig 2D).
To test PA41 and CDB1 neutralization of TcsL and P. sordellii in a more physiologically relevant animal model, we developed a transcervical inoculation method. The method allows for a non-surgical transfer of inoculum through the vaginal orifice, past the cervix, and directly into the uterus. By eliminating the surgical laparotomy used in a prior model [10], we expected to minimize the risk of introducing an undesired infection and eliminate a surgical recovery period. However, we initially found that TcsL was not able to cause disease when given transcervically (Fig 3B), and vegetative P. sordellii infection resulted in minimal, inconsistent disease in the mice (Fig 3C).
We tested whether female sex hormones play a role in the pathogenesis of infection. We used estrogen and progesterone to induce prolonged stages of estrus (ovulation) and diestrus (sexual quiescence), respectively. Animals in diestrus had a more adverse outcome following transcervical TcsL intoxications (Fig 4B) and P. sordellii infections (Fig 4D) compared to animals in estrus. Our data complement findings of other investigators who have reported that uterine ascending infections of Neisseria gonorrhoeae, Chlamydia trachomatis, and Herpes simplex virus type 2 in mice show profoundly different disease outcomes at different stages of the reproductive cycle [15][16][17]. For example, in the case of Neisseria gonorrhoeae, under the influence of progesterone, significant epithelial remodeling allows gonococcal entry into the underlying stroma [15]. We speculate that the epithelial remodeling associated with diestrus allows TcsL to access endothelial cells and the bloodstream. In addition, it is known that during estrus there is increased production of mucus in the uterus; presumably, this could provide a protective layer that prevents toxin from reaching the uterine epithelium. It is also possible that other factors, such as P. sordellii toxin receptors, are differentially expressed under differing hormone treatments. Additional studies are needed to understand how different hormonal environments impact P. sordellii pathogenesis in the genital tract.
Having established a hormone-dependent transcervical inoculation method, we again tested PA41 and CDB1 for their capacity to neutralize TcsL toxicity. In this model, we found that PA41 was able to protect mice from TcsL lethality but CDB1 was not (Fig 5B and 5C). Although it would have been exciting to see protection in CDB1-treated mice, given the clinical availability of the related Zinplava mAb, it is reasonable that the difference in receptor specificity between TcsL and TcdB accounts for this lack of efficacy.
We further analyzed the uterine tissues from PA41-treated mice that were intoxicated with TcsL and found reduced levels of edema, inflammation, and epithelial damage when compared to PBS-treated, TcsL-intoxicated mice (Fig 5D). A characteristic of PSI is the onset of a leukemoid reaction (LR), i.e., a significant increase in white blood cells. We did not observe a significant difference in white blood cell numbers between TcsL- and PBS-instilled mice, suggesting that TcsL alone is not responsible for the LR (Fig 5E). In TcsL-instilled mice, compared to PBS-control mice, we did, however, observe a shift in WBC differential counts, with lymphocytes (LY) decreased and neutrophils (NE) increased (Fig 5F). We also observed an accumulation of fluid in the thoracic cavity of these mice, suggesting increased permeability of the vascular system. This increased permeability could account for the increased hematocrit (HCT) found in animals instilled with TcsL, where the blood becomes concentrated with RBCs (Fig 5F). Animals administered PA41/TcsL had NE, LY, and HCT levels similar to those of PBS control mice (Fig 5F).
We next moved forward with PA41 to determine its efficacy against vegetative P. sordellii, beginning with prophylactic administration 24 hr prior to transcervical infection. Although the resulting survival difference was not statistically significant, there does appear to be a delay in mortality in PA41-treated animals compared to PBS-treated animals at approximately 36 h post infection (Fig 6B). At this timepoint, PA41 neutralization appeared to be occurring, but this was rapidly followed by a decline in survival. With only a single mAb administration, it is possible that PA41 is capable of neutralization at 36 h but is depleted and cannot keep up with continued bacterial production of TcsL. Perhaps additional administrations of PA41 and/or the incorporation of antibiotic therapy to halt bacterial replication would demonstrate further efficacy of PA41. In addition, the organism produces several other virulence factors, e.g., a sialidase and a phospholipase C, that play unknown roles in PSI. These are ideas we plan to explore in future studies.
Lastly, we show that P. sordellii spores can germinate and cause disease when given transcervically to mice in diestrus (Fig 6C) and that PA41 treatment can lead to increased survival relative to PBS-treated mice (Fig 6E). There are several variables and questions for follow-up study. For example, what are the germinant and environmental conditions within the host that affect the efficiency of spore germination, and is this influenced by the reproductive cycle of the mice? Would the level of protection improve if using a mouse-derived mAb or the addition of antibiotics? We hope that the availability of this relatively easy uterine infection model will provide a system to address these fundamental questions and facilitate the work needed to advance candidate therapeutics for addressing human PSI.
Ethics statement
This study was approved by the Institutional Animal Care and Use Committee at Vanderbilt University Medical Center (VUMC) and performed using protocol M1700185-01. Our laboratory animal facility is AAALAC-accredited and adheres to guidelines described in the Guide for the Care and Use of Laboratory Animals. The health of the mice was monitored daily, and severely moribund animals were humanely euthanized by CO2 inhalation.
Recombinant P. sordellii toxin purification
TcsL was amplified from P. sordellii strain JGS6382 and inserted into a BMEG20 vector (MoBiTec) using BsrGI/KpnI restriction digestion sites in the vector, as reported previously [18]. Plasmids encoding His-tagged TcsL (pBL552) were transformed into Bacillus megaterium according to the manufacturer's protocol (MoBiTec). Six liters of LB medium supplemented with 10 mg/liter tetracycline was inoculated with an overnight culture to an optical density at 600 nm (OD600) of ~0.1. Cells were grown at 37˚C and 220 rpm. Expression was induced with 5 g/liter of D-xylose once cells reached an OD600 of 0.3 to 0.5. After 4 h, the cells were centrifuged and resuspended in 20 mM HEPES (pH 8.0), 500 mM NaCl, and protease inhibitors. An EmulsiFlex C3 microfluidizer (Avestin) was used to generate lysates. Lysates were then centrifuged at 40,000 × g for 20 min. Supernatant containing toxin was run over a Ni-affinity column. Further purification was performed using anion-exchange chromatography (HiTrap Q HP, GE Healthcare) and gel filtration chromatography in 20 mM HEPES (pH 6.9), 50 mM NaCl.
Monoclonal antibodies
PA41 and PA50 were supplied by AstraZeneca (previously MedImmune). CDB1 DNA constructs for the light chain and heavy chain equivalents of Bezlotoxumab were synthesized and cloned into custom plasmids encoding the heavy chain IgG1 constant region and the corresponding kappa light chain region (pTwist 314 CMV BetaGlobin WPRE Neo vector, Twist Bioscience). The antibodies were transiently expressed in Expi-293F mammalian cells with PEI transfection reagent. Cells were cultured in FreeStyle F17 expression Medium supplemented with 10% Pluronic F-68 and 10% GlutaMAX until expression was terminated 5-7 days post-transfection. The mAb was isolated by protein A affinity (HiTrap Protein A HP, 17-0403-01, GE Healthcare) according to the manufacturer's instructions. All mAb administrations were given via IP.
Vero cell culture and viability assays
Vero cells were maintained in DMEM supplemented with 10% fetal bovine serum and cultured at 37˚C with 5% CO2. Cells were seeded into 96-well plates at 1,500 cells per well and allowed to grow overnight. For intoxication, toxin and mAbs were diluted in DMEM/FBS and incubated together for 1 hr at 37˚C. The toxin/mAb mix was incubated on the cells for 72 hr at the concentrations indicated, and viability was measured using the CellTiter-Glo luminescent cell viability assay (catalog number G7573; Promega). Dose-response curves were plotted and fit to a sigmoidal function (variable slope) to determine EC50 values using Prism software (GraphPad Prism Software).
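The Prism fit described above uses a variable-slope sigmoid. As a dependency-free alternative for a quick estimate, the EC50 can be approximated by log-linear interpolation between the two measured doses that bracket 50% viability. The dose-response values below are synthetic, not data from this study:

```python
import math

def ec50_by_interpolation(doses, responses):
    """Approximate the dose giving a 50% response by interpolating, in
    log10(dose) space, between the two measured doses that bracket 50%.
    Assumes responses decrease monotonically (e.g. percent viability)."""
    pairs = list(zip(doses, responses))
    for (d_lo, r_lo), (d_hi, r_hi) in zip(pairs, pairs[1:]):
        if r_lo >= 50.0 >= r_hi:
            frac = (r_lo - 50.0) / (r_lo - r_hi)
            log_ec50 = math.log10(d_lo) + frac * (math.log10(d_hi) - math.log10(d_lo))
            return 10 ** log_ec50
    raise ValueError("response curve does not cross 50%")

# Synthetic viability data (percent) over a hypothetical TcsL dilution series (pM).
doses = [0.01, 0.1, 1.0, 10.0, 100.0]
responses = [98.0, 90.0, 50.0, 5.0, 1.0]
ec50 = ec50_by_interpolation(doses, responses)  # 1.0 pM for this synthetic set
```

A full variable-slope (four-parameter logistic) fit, as performed in Prism, would additionally estimate the Hill slope and the upper/lower plateaus.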
P. sordellii vegetative and spore preparation
P. sordellii strain ATCC 9714 was obtained from David Aronoff and cultured at 37˚C in an anaerobic chamber (90% nitrogen, 5% hydrogen, 5% carbon dioxide). For vegetative bacteria, a single colony was picked to inoculate Reinforced Clostridial Medium (RCM) [BD, 21081], followed by incubation overnight. A 10 mL RCM subculture (OD600 = 0.05) was prepared and allowed to grow for 2-3 h. The OD600 was measured, and CFUs were determined from a previously established growth curve. The culture was centrifuged and washed three times with PBS to remove any secreted toxins, and the bacterial pellet was resuspended to the desired CFUs/mL. For spore preparation, a single colony was picked to inoculate a 10 mL RCM culture, followed by incubation overnight at 37˚C. The next day, 2 mL of that culture was inoculated into 2 mL Columbia broth for overnight growth at 37˚C. The following day, 4 mL of that culture was inoculated into 40 mL of Clospore medium, followed by growth for 7 days. The culture was centrifuged and washed three times in cold sterile water. Spores were suspended in 1 mL of sterile water and heat-treated at 65˚C for 20 min to eliminate vegetative cells. Viable spores were enumerated by CFU counts on RCM plates. Spore stocks were stored at 4˚C until use.
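The OD600-to-CFU conversion described above can be sketched as a small calculation. The conversion factor and the instillation volume below are hypothetical placeholders for the strain-specific values that would come from the growth curve:

```python
# Prepare an inoculum at a target CFU count from a measured OD600, using a
# strain-specific OD600-to-CFU/mL conversion factor from a previously
# established growth curve. The factor below is hypothetical.

OD_TO_CFU_PER_ML = 4.0e8   # hypothetical: CFU/mL per 1.0 OD600 unit

def resuspension_volume_ml(od600, culture_ml, target_cfu, instill_ml):
    """Volume (mL) in which to resuspend the washed pellet so that a dose of
    `instill_ml` delivers `target_cfu` bacteria."""
    total_cfu = od600 * OD_TO_CFU_PER_ML * culture_ml
    if total_cfu < target_cfu:
        raise ValueError("culture does not contain enough bacteria")
    needed_cfu_per_ml = target_cfu / instill_ml
    return total_cfu / needed_cfu_per_ml

# e.g. a 10 mL subculture at OD600 0.5, aiming for 1e7 CFU in a 0.05 mL dose
vol = resuspension_volume_ml(0.5, 10.0, 1e7, 0.05)  # 10.0 mL under these numbers
```

In practice the factor is calibrated per strain and medium by plating serial dilutions alongside OD600 readings.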
Animals and housing
All mouse experiments were approved by the Vanderbilt Institutional Animal Care and Use Committee (IACUC). C57BL/6J mice (all females, age 9 to 12 weeks) were purchased from Jackson Laboratories and were housed five to a cage in a pathogen-free room with clean bedding and free access to food and water. Mice had 12h cycles of light and dark.
Virulence studies
For intraperitoneal intoxications and infections, mice were anesthetized and intraperitoneally injected with recombinantly purified TcsL or vegetative P. sordellii bacteria alone or in the presence of mAb. For transcervical instillation, mice were anesthetized, and a speculum was inserted into the vaginal cavity to allow for dilation and passage of a flexible gel-loading pipette tip through the cervix and transfer of recombinant protein, vegetative bacteria, or spores directly into the uterus. Following instillation, a cotton plug applicator was inserted into the vagina, and a cotton plug was expelled from the applicator and into the vaginal cavity using a blunt needle. Mice were monitored daily for morbidity and signs of sickness. Mice were humanely euthanized by CO 2 inhalation when moribund or at end of study. In some cases, the uterus was harvested, fixed, paraffin-embedded, and processed for histology.
Hormone administration and estrous cycle staging
Mice were subcutaneously injected with water-soluble beta-estradiol (0.5 mg/mouse, Sigma Aldrich) two days prior to infection to prolong estrus, or medroxyprogesterone acetate (2 mg/mouse, Amphastar Pharmaceuticals) five days prior to infection to synchronize in diestrus. Immediately prior to infection, the estrous stages of the animals were confirmed via vaginal lavage. To accomplish this, the vagina was washed with 20 μL saline using a 20 μL micropipette. Wet smears were examined under a 40x objective, and the stage of the estrous cycle was determined based on cytology [19].
Statistical analysis
Statistical testing and graphical representations of the data were performed using GraphPad Prism. Statistical significance was set at P ≤ 0.05 for all analyses (*, P ≤ 0.05; **, P ≤ 0.01; ***, P ≤ 0.001; ****, P ≤ 0.0001). The log-rank (Mantel-Cox) multiple comparison test was used for survival curve comparisons. The Gehan-Breslow-Wilcoxon test, which gives more weight to earlier timepoints, was used for the Fig 6E survival curve comparison. The Mann-Whitney-Wilcoxon rank sum (Mann-Whitney) test was used to compare two groups, and the Kruskal-Wallis test with Dunn's test was used when two groups were compared within multiple comparisons. We thank the Translational Pathology Shared Resource for assistance with tissue embedding and blood analyses. The Translational Pathology Shared Resource is supported by NCI/NIH Cancer Center Support Grant P30CA068485.
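As an illustration of the two survival tests named above, both the Mantel-Cox log-rank and the Gehan-Breslow-Wilcoxon statistic can be computed with the same weighted machinery (weights of 1 versus the number at risk at each event time). The survival data below are synthetic, not the study's:

```python
import math

def weighted_logrank(times1, events1, times2, events2, wilcoxon=False):
    """Two-sample weighted log-rank test. With wilcoxon=False the weights are
    1 (Mantel-Cox log-rank); with wilcoxon=True each event time is weighted by
    the number at risk (Gehan-Breslow-Wilcoxon), emphasizing early deaths.
    Returns (chi-square statistic, p-value) for 1 degree of freedom."""
    data = [(t, e, 1) for t, e in zip(times1, events1)] + \
           [(t, e, 2) for t, e in zip(times2, events2)]
    event_times = sorted({t for t, e, _ in data if e})
    num, den = 0.0, 0.0
    for t in event_times:
        n1 = sum(1 for tt, _, g in data if tt >= t and g == 1)  # at risk, group 1
        n2 = sum(1 for tt, _, g in data if tt >= t and g == 2)  # at risk, group 2
        d1 = sum(1 for tt, e, g in data if tt == t and e and g == 1)  # deaths
        d2 = sum(1 for tt, e, g in data if tt == t and e and g == 2)
        n, d = n1 + n2, d1 + d2
        w = n if wilcoxon else 1.0
        expected1 = d * n1 / n
        var = d * (n1 / n) * (n2 / n) * (n - d) / (n - 1) if n > 1 else 0.0
        num += w * (d1 - expected1)
        den += w * w * var
    chi2 = num * num / den
    p = math.erfc(math.sqrt(chi2 / 2.0))  # chi-square survival function, 1 df
    return chi2, p

# Synthetic data: days to death, with 1 = death observed for every animal.
grp_a = ([1, 1, 2, 2, 3], [1, 1, 1, 1, 1])     # early deaths (diestrus-like)
grp_b = ([8, 9, 9, 10, 10], [1, 1, 1, 1, 1])   # late deaths (estrus-like)
chi2, p = weighted_logrank(*grp_a, *grp_b)
```

Because the Wilcoxon variant up-weights early event times, the two tests can disagree near the significance threshold when deaths cluster early, as seen for the spore-infection comparison in Fig 6E.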
pyOptSparse: A Python framework for large-scale constrained nonlinear optimization of sparse systems
Summary
pyOptSparse is an optimization framework designed for constrained nonlinear optimization of large sparse problems and provides a unified interface for various gradient-free and gradient-based optimizers. By using an object-oriented approach, the software maintains independence between the optimization problem formulation and the implementation of the specific optimizers. The code is MPI-wrapped to enable execution of expensive parallel analyses and gradient evaluations, such as when using computational fluid dynamics (CFD) simulations, which can require hundreds of processors. The optimization history can be stored in a database file, which can then be used both for post-processing and restarting another optimization. A graphical user interface application is provided to visualize the optimization history interactively.
pyOptSparse considers optimization problems of the form

    minimize         f(x)
    with respect to  x
    such that        l ≤ (x, Ax, g(x)) ≤ u

where x is the vector of design variables and f(x) is a nonlinear objective function. A is the linear constraint Jacobian, and g(x) is the set of nonlinear constraint functions, with l and u the corresponding lower and upper bounds. At the time of writing, the latest released version of pyOptSparse is v2.2.0.
Features Support for multiple optimizers
pyOptSparse provides built-in support for several popular proprietary and open-source optimizers. Each optimizer usually has its own way to specify the problem: It might require different constraint ordering, have different ways of specifying equality constraints, or use a sparse matrix format to represent the constraint Jacobian. pyOptSparse provides a common Python interface for the various optimizers that hides these differences from the user. By isolating the optimization problem definition from the optimizer, the user can easily switch between different optimizers applied to the same optimization problem. The optimizer can be switched by editing a single line of code.
Although pyOptSparse focuses primarily on large-scale gradient-based optimization, it provides support for gradient-free optimizers as well. Discrete design variables, multi-objective problems, and population-based optimizers are also supported. Because of the object-oriented programming approach, it is also straightforward to extend pyOptSparse to support additional optimizers that are not currently available. All of the features within pyOptSparse, including problem scaling and optimization hot-start, are automatically inherited when new optimizers are added.
String-based indexing
Unlike many other publicly available optimization frameworks, pyOptSparse is designed to handle large-scale optimizations, with a focus on engineering applications. With thousands of design variables and constraints, it is crucial to keep track of all values during optimization correctly. pyOptSparse employs string-based indexing to accomplish this. Instead of using a single flattened array, the related design variables and constraints can be grouped into separate arrays. These arrays are combined using an ordered dictionary, where each group is identified by a unique key. Similarly, the constraint Jacobian is represented by a nested dictionary approach. This representation has several advantages:

• The design variable and constraint values can be accessed without knowing their global indices, which reduces possible user error.

• The global indices are also often optimizer-dependent, and this extra level of wrapping abstracts away potentially-confusing differences between optimizers.

• The constraint Jacobian can be computed and provided at the sub-block level, leaving pyOptSparse to assemble the whole Jacobian. This mimics the engineering workflow where different tools often compute different sub-blocks of the Jacobian. The user only has to ensure that the indices within each sub-block are correct, and the rest is handled automatically.
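The nested-dictionary idea can be illustrated with a short, library-free sketch. The group names, sizes, and values below are hypothetical, and this is not pyOptSparse's internal code, only a minimal demonstration of assembling a global Jacobian from string-keyed sub-blocks where missing blocks are treated as zero:

```python
# Design-variable and constraint groups identified by string keys,
# with Jacobian sub-blocks supplied only where they are nonzero.
dv_groups = {"x": 2, "y": 4}               # group name -> number of variables
con_groups = {"con": 2, "lin": 1}          # group name -> number of constraints
blocks = {                                 # (constraint, variable) -> dense sub-block
    ("con", "x"): [[1.0, 2.0], [3.0, 4.0]],
    ("lin", "y"): [[1.0, 1.0, 1.0, 1.0]],
}

def assemble_jacobian(con_groups, dv_groups, blocks):
    """Stack the sub-blocks into one dense Jacobian; absent blocks are zero."""
    rows = []
    for cname, csize in con_groups.items():
        for i in range(csize):
            row = []
            for vname, vsize in dv_groups.items():
                block = blocks.get((cname, vname))
                row.extend(block[i] if block is not None else [0.0] * vsize)
            rows.append(row)
    return rows

jac = assemble_jacobian(con_groups, dv_groups, blocks)
```

Each tool only fills in its own (constraint group, variable group) sub-block using local indices; the assembly loop handles the global layout.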
Support for sparse linear and nonlinear constraints
One prominent feature of pyOptSparse is the support for sparse constraints. When defining constraints, it is possible to provide the sparsity pattern of the Jacobian. This can be done at the global level by specifying which constraint groups are independent of which design variable groups, thereby letting pyOptSparse know that the corresponding sub-blocks of the Jacobian are always zero. For nonzero sub-blocks, it is also possible to supply the sparsity pattern of that sub-block, again using local indexing, such that the actual derivative computation can use sparse matrices as well.
pyOptSparse also provides explicit support for linear constraints since some optimizers provide special handling for these constraints. In these cases, only the Jacobian and the bounds of the constraint need to be supplied. The values and gradients of these constraints do not need to be evaluated every iteration, since the optimizer satisfies them internally.
Automatic computation of derivatives
If analytic derivatives for the objective and constraint functions are not available, pyOptSparse can automatically compute them internally using finite differences or the complex-step method (Martins, Sturdza, & Alonso, 2003). For finite differences, the user can use forward or central differences, with either an absolute or relative step size. Computing derivatives using finite differences can be expensive, requiring n extra evaluations for forward differences and 2n for centered differences. Finite differences are also inaccurate due to subtractive cancellation errors under finite precision arithmetic. The complex-step method, on the other hand, avoids subtractive cancellation errors. By using small enough steps, the complex-step derivatives can be accurate to machine precision (Martins et al., 2003). The user must make sure that the objective and constraint functions can be evaluated correctly with complex design variable values when using this feature.
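The accuracy gap between the two approaches is easy to demonstrate on a scalar function. This stand-alone sketch is not pyOptSparse code; it only illustrates why the complex-step method reaches machine precision while a forward difference does not:

```python
def f(x):
    # any smooth function that also accepts complex input
    return x**3 + 2.0 * x

def forward_difference(f, x, h=1e-6):
    # suffers from subtractive cancellation as h shrinks
    return (f(x + h) - f(x)) / h

def complex_step(f, x, h=1e-30):
    # no subtraction, so h can be tiny without cancellation error
    return f(complex(x, h)).imag / h

x0 = 1.5
exact = 3.0 * x0**2 + 2.0            # analytic derivative: 3x^2 + 2
fd_error = abs(forward_difference(f, x0) - exact)
cs_error = abs(complex_step(f, x0) - exact)
```

Here the forward-difference error is on the order of the truncation term f''(x)h/2, while the complex-step result agrees with the analytic derivative to machine precision.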
Optimizer-independent problem scaling
pyOptSparse offers optimizer-independent scaling for individual design variables, objective, and constraints. By separating the optimization problem definition from the particular optimizer, pyOptSparse can apply the scaling automatically and consistently with any supported optimizer. Since the optimization problem is always defined in the physical, user-defined space, the bounds on the design variables and constraints do not need to be modified when applying a different scaling. Furthermore, for gradient-based optimizers, all the derivatives are scaled automatically and consistently without any effort from the user. The user only needs to pass in a scale option when defining design variables, objective, and constraints. This is particularly useful in engineering applications, where the physical quantities can sometimes cause undesirable problem scaling, which leads to poor optimization convergence. pyOptSparse allows the user to adjust problem scaling for each design variable, constraint, and objective separately, without needing to change the bound specification or derivative computation.
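The chain rule behind this kind of automatic scaling is straightforward. The sketch below, with made-up numbers and not pyOptSparse's implementation, shows how scale factors on a design variable and the objective transform the gradient consistently:

```python
def scale(x, f, dfdx, x_scale, f_scale):
    """Map physical quantities into the scaled space seen by the optimizer.

    x_s = x * x_scale and f_s = f * f_scale, so by the chain rule
    df_s/dx_s = (f_scale / x_scale) * df/dx.
    """
    return x * x_scale, f * f_scale, dfdx * f_scale / x_scale

# physical problem: f(x) = x^2 evaluated at x = 1000 (badly scaled quantities)
x, f, dfdx = 1000.0, 1000.0**2, 2.0 * 1000.0
x_s, f_s, dfdx_s = scale(x, f, dfdx, x_scale=1e-3, f_scale=1e-6)
```

The user keeps defining bounds and derivatives in physical units; only the scale factors change, and both the function values and gradients the optimizer sees remain mutually consistent.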
Parallel execution
pyOptSparse can use MPI to execute function evaluations in parallel, in three distinct ways. Firstly and most commonly, it can perform parallel function evaluations when the functions themselves require multiple processors. This is usually the case when performing large-scale optimizations, where the objective and constraint functions are the result of a complex analysis, such as computational fluid dynamic simulations. In this scenario, pyOptSparse can be executed with multiple processors, where all processors perform the function evaluation, but only the root processor runs the optimizer itself. That way, we avoid the scenario where each processor runs an independent copy of the optimizer, potentially causing inconsistencies or processor locking.
Secondly, it is possible to perform parallel gradient evaluation when automatic finite-difference or complex-step derivatives are computed. If the function evaluation only requires a single processor, it is possible to call pyOptSparse with multiple processors so that each point in the finite-difference stencil is evaluated in parallel, reducing the wall time for derivative computations.
Lastly, some population-based optimizers may support parallel function evaluation for each optimizer iteration. In the case of a genetic algorithm or particle swarm optimization, multiple function evaluations are required at each optimizer iteration. These evaluations can be done in parallel if multiple processors are available and the functions only require a single processor to execute. However, the support and implementation of this mechanism is optimizer-dependent.
Leveraging the history file: visualization and restart
pyOptSparse can store an optimization history file using its own format based on SQLite. The history file contains the design variables and function values for each optimizer iteration, along with some metadata such as optimizer options. This file can then be visualized using OptView, a graphical user interface application provided by pyOptSparse. Alternatively, users can manually post-process results by using an API designed to query the history file and access the optimization history to generate plots.
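Because the history format is SQLite-based, the core idea can be sketched with the standard library alone. The schema and values below are a toy stand-in, not pyOptSparse's actual format:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")   # a real history would live in a file on disk
conn.execute(
    "CREATE TABLE history (iteration INTEGER PRIMARY KEY, x TEXT, obj REAL)")

# record a few optimizer iterations (design variables stored as JSON)
iterates = [([0.0, 0.0], 4.00), ([0.5, 0.5], 2.50), ([0.9, 1.1], 2.02)]
for i, (x, obj) in enumerate(iterates):
    conn.execute("INSERT INTO history VALUES (?, ?, ?)", (i, json.dumps(x), obj))

# post-processing query: the best objective seen so far
best_x_json, best_obj = conn.execute(
    "SELECT x, obj FROM history ORDER BY obj LIMIT 1").fetchone()
best_x = json.loads(best_x_json)
```

Storing one row per iteration is what makes both plotting convergence histories and replaying a previous run simple queries rather than custom file parsing.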
The history file also enables two types of optimization restarts. A cold start merely sets the initial design variables to the previous optimization's final design variables. A hot start, on the other hand, initializes the optimizer with the full state by replaying the previous optimization history. For a deterministic optimizer, the hot start generates the same sequence of iterates as long as the functions and gradients remain the same. For each iteration, pyOptSparse retrieves the previously-evaluated quantities and provides them to the optimizer without actually calling the objective and constraint functions, allowing us to exactly retrace the previous optimization and generate the same state within the optimizer in a non-intrusive fashion. This feature is particularly useful if the objective function is expensive to evaluate and the previous optimization was terminated due to problems such as reaching the maximum iteration limit. In this case, the full state within the optimizer can be regenerated through the hot start process so that the optimization can continue without performance penalties.
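The replay mechanism can be mimicked with a small wrapper that serves cached evaluations until the recorded history is exhausted. This is a conceptual sketch of the hot-start idea, not the pyOptSparse implementation:

```python
class HotStart:
    """Replay a recorded (x, result) history before calling the real function."""

    def __init__(self, func, history):
        self.func = func
        self.history = history   # list of (x, result) pairs from the previous run
        self.calls = 0           # count of real (expensive) evaluations
        self._idx = 0

    def __call__(self, x):
        # While the optimizer retraces the old iterates, answer from the cache.
        if self._idx < len(self.history) and self.history[self._idx][0] == x:
            result = self.history[self._idx][1]
            self._idx += 1
            return result
        # History exhausted (or diverged): fall back to a real evaluation.
        self.calls += 1
        return self.func(x)

def expensive(x):
    return x * x

recorded = [(3.0, 9.0), (2.0, 4.0)]
hot = HotStart(expensive, recorded)
replayed = [hot(3.0), hot(2.0)]   # served from history, no real calls
fresh = hot(1.5)                  # beyond the recorded history: one real call
```

A deterministic optimizer fed these replayed values regenerates its internal state exactly, which is why the continued run incurs no repeated function evaluations.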
Simple optimization script
To highlight some of the features discussed above, we present the pyOptSparse script to solve a toy problem involving six design variables split into two groups, x and y. We also add two nonlinear constraints, one linear constraint, and design variable bounds. The sparsity structure of the constraint Jacobian, whose columns correspond to the design variable groups x (2) and y (4) and whose rows correspond to the constraint groups, including the nonlinear constraints con (2), contains only two nonzero sub-blocks. This allows us to specify derivatives only for the two nonzero sub-blocks. For simplicity, we supply the linear Jacobian explicitly and use the complex-step method to compute the derivatives for the nonlinear constraints automatically.
The linear Jacobian for this problem is constructed as jac and passed to pyOptSparse. For large optimization problems, the Jacobian can be constructed using sparse matrices.
Finally, we set up SLSQP (Kraft, 1988) as the optimizer and solve the optimization problem.
Statement of Need
pyOptSparse is a fork of pyOpt (Perez, Jansen, & Martins, 2012). As the name suggests, its primary motivation is to support sparse linear and nonlinear constraints in gradient-based optimization. This sets pyOptSparse apart from other optimization frameworks, such as SciPy (Virtanen et al., 2020) and NLopt (Johnson, 2020), which do not provide the same level of support for sparse constraints. By using string-based indexing, different sub-blocks of the constraint Jacobian can be computed by separate engineering tools and assembled automatically by pyOptSparse in a sparse fashion. In addition, other frameworks do not offer convenience features such as user-supplied optimization problem scaling, optimization hot-start, or post-processing utilities. Although pyOptSparse is a general optimization framework, it is tailored to gradient-based optimization of large-scale problems with sparse constraints.
|
2020-10-29T09:04:12.788Z
|
2020-10-24T00:00:00.000
|
{
"year": 2020,
"sha1": "dc02a0c116664371df24b550cc7ea75db17d9c6f",
"oa_license": "CCBY",
"oa_url": "https://joss.theoj.org/papers/10.21105/joss.02564.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "3f57044d68fa6888e789155b3a2ab097bcaae36b",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
2877648
|
pes2o/s2orc
|
v3-fos-license
|
Weight enumerator of some irreducible cyclic codes
In this article, we show explicitly all possible weight enumerators for every irreducible cyclic code of length $n$ over a finite field $\mathbb F_q$, in the case which each prime divisor of $n$ is also a divisor of $q-1$.
Introduction
A code of length n and dimension k over a finite field F_q is a linear k-dimensional subspace of F_q^n. A [q; n, k]-code C is called cyclic if it is invariant under the shift permutation, i.e., if (a_1, a_2, ..., a_n) ∈ C then the shift (a_n, a_1, ..., a_{n−1}) is also in C. The cyclic code C can be viewed as an ideal in the group algebra F_q C_n, where C_n is the cyclic group of order n. We note that F_q C_n is isomorphic to R_n = F_q[x]/(x^n − 1), and since the cyclic codes are precisely the ideals of R_n and every ideal of R_n is principal, it follows that each such code is generated by a polynomial g(x) ∈ R_n, where g is a divisor of x^n − 1.
Codes generated by a polynomial of the form (x^n − 1)/g(x), where g is an irreducible factor of x^n − 1, are called minimal cyclic codes. Thus, each minimal cyclic code is associated in a natural way with an irreducible factor of x^n − 1 in F_q[x]. Examples of minimal cyclic codes include the Golay code that was used on the Mariner Jupiter-Saturn Mission (see [6]), the BCH codes used in communication systems such as VoIP telephones, and the Reed-Solomon codes used in two-dimensional bar codes and in storage systems such as compact disc players, DVDs, and disk drives (see [5, Sections 5.8 and 5.9]). The advantage of cyclic codes over other linear codes is that they have efficient encoding and decoding algorithms (see [5, Section 3.7]).
For each element g ∈ R_n, ω(g) is defined as the number of non-zero coefficients of g and is called the Hamming weight of the word g. Denote by A_i the number of codewords of weight i and by d = min{i > 0 | A_i ≠ 0} the minimal distance of the code. A [q; n, k]-code with minimal distance d will be denoted by [q; n, k, d]-code. The sequence {A_i}_{i=0}^{n} is called the weight distribution of the code and A(z) := Σ_{i=0}^{n} A_i z^i is its weight enumerator. The importance of the weight distribution is that it allows us to measure the probability of the code failing to detect an error: for instance, the probability of an undetected error in a binary symmetric channel with crossover probability p is Σ_{i=1}^{n} A_i p^i (1 − p)^{n−i}. The weight distribution of irreducible cyclic codes has been determined for a small number of special cases. For a survey of this subject see [3], [4] and their references.
In this article, we show all the possible weight distributions of irreducible cyclic codes of length n over a finite field F_q in the case in which every prime divisor of n divides q − 1.
Preliminaries
Throughout this article, F_q denotes a finite field of order q, where q is a power of a prime, n is a positive integer such that gcd(n, q) = 1, θ is a generator of the cyclic group F_q^* and α is a generator of the cyclic group F_{q^2}^* such that α^{q+1} = θ. For each a ∈ F_q^*, ord_q a denotes the minimal positive integer k such that a^k = 1. For each prime p and each integer m, ν_p(m) denotes the largest exponent e such that p^e divides m, and rad(m) denotes the radical of m, i.e., if m = p_1^{α_1} p_2^{α_2} ··· p_l^{α_l} is the factorization of m into prime factors, then rad(m) = p_1 p_2 ··· p_l. Finally, a ÷ b denotes the integer a/gcd(a, b). Since each irreducible factor of x^n − 1 ∈ F_q[x] generates an irreducible cyclic code of length n, a fundamental problem of coding theory is to characterize these irreducible factors. The problem of finding a "generic algorithm" to split x^n − 1 in F_q[x], for any n and q, is an open one and only some particular cases are known. Since x^n − 1 = ∏_{d|n} Φ_d(x), where Φ_d denotes the d-th cyclotomic polynomial ([7], Theorem 2.45), it follows that the factorization of x^n − 1 strongly depends on the factorization of the cyclotomic polynomials, which has been studied by several authors (see [9], [8], [11] and [2]).
In particular, a natural question is to find conditions under which all the irreducible factors are binomials or trinomials. In this direction, some known results are the following.
Weight distribution
Throughout this section, we assume that rad(n) divides q − 1 and that m, m′, l, l′ and r are as in Lemmas 2.1 and 2.2. The following results characterize all the possible cyclic codes of length n over F_q and show explicitly the weight distribution in each case.
and its weight enumerator is
Proof: As a consequence of Lemma 2.1, every irreducible factor of x^n − 1 is of the form x^t − a, where t|n and a^{n/t} = 1, so every irreducible code C of length n is generated by a polynomial of the form g(x) = (x^n − 1)/(x^t − a), and {g, xg, ..., x^{t−1}g} is a basis of the F_q-linear subspace C. Thus, every codeword in C is of the form a_0 g + a_1 xg + ··· + a_{t−1} x^{t−1} g, with a_j ∈ F_q, and since ω(g) = n/t, it follows that ω(a_0 g + a_1 xg + ··· + a_{t−1} x^{t−1} g) = (n/t) · #{j | a_j ≠ 0}.
Clearly A_k = 0 for all k not divisible by n/t. On the other hand, if k = j·(n/t), then exactly j elements of this basis have non-zero coefficients in the linear combination, and each non-zero coefficient can be chosen in q − 1 distinct ways, hence A_k = C(t, j)·(q − 1)^j, where C(t, j) denotes the binomial coefficient. The weight distribution is therefore as we wanted to prove.
Remark 3.2. The previous result generalizes Theorem 3 in [10] (see also Theorem 22 in [4]).
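The count A_{j·n/t} = C(t, j)(q − 1)^j derived above can be checked by brute force on a small example. The following stdlib-only sketch (illustrative, not part of the paper) takes q = 5 and n = 8, where rad(8) = 2 divides q − 1 = 4, and uses the irreducible factor x^2 − 2 of x^8 − 1 over F_5, giving a minimal cyclic code with t = 2:

```python
from itertools import product
from math import comb

q, n, t = 5, 8, 2                     # rad(8) = 2 divides q - 1 = 4

def divide(num, den):
    """Quotient of num by monic den over F_q (coefficient lists, low degree first)."""
    num, dd = num[:], len(den) - 1
    quot = [0] * (len(num) - dd)
    for i in range(len(num) - 1, dd - 1, -1):
        c = num[i] % q
        if c:
            quot[i - dd] = c
            for j, dj in enumerate(den):
                num[i - dd + j] = (num[i - dd + j] - c * dj) % q
    return quot

# generator g = (x^8 - 1) / (x^2 - 2); its weight is n/t = 4
x_n_minus_1 = [(-1) % q] + [0] * (n - 1) + [1]
g = divide(x_n_minus_1, [(-2) % q, 0, 1])

# enumerate all codewords a0*g + a1*x*g and tally their Hamming weights
distribution = {}
for coeffs in product(range(q), repeat=t):
    word = [0] * n
    for shift, c in enumerate(coeffs):
        for d, gd in enumerate(g):
            word[(d + shift) % n] = (word[(d + shift) % n] + c * gd) % q
    w = sum(1 for v in word if v)
    distribution[w] = distribution.get(w, 0) + 1

# predicted distribution from the theorem: A_{j*n/t} = C(t, j) * (q - 1)^j
predicted = {j * n // t: comb(t, j) * (q - 1) ** j for j in range(t + 1)}
```

For this example the enumeration yields A_0 = 1, A_4 = 8, and A_8 = 16, matching the formula and summing to q^t = 25 codewords.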
Remark 3.3. As a direct consequence of Lemma 2.1, for every positive divisor t of m there exist (ϕ(t)/t)·gcd(n, q − 1) irreducible cyclic [q; n, t, n/t]-codes.
In order to find the weight distribution in the case in which q ≡ 3 (mod 4) and 8|n, we need some additional lemmas.
On the other hand, the resulting polynomial has degree n − 2t and every non-null monomial has degree divisible by t. Now, suppose that there exist 1 ≤ i < j ≤ n/t − 2 such that the coefficients of the monomials x^{n−t−jt} and x^{n−t−it} in the polynomial g_λ := g(x) − λx^t g(x) are simultaneously zero. In the case in which λ ≠ 0, this is equivalent to a^{(q−1)(j−i)} = 1, i.e., ord_{q^2} a divides (q − 1)(j − i). In the case λ = 0, we obtain that ord_{q^2} a divides (q − 1)j and (q − 1)i by the same argument; thereby, we can treat this case as a particular case of the previous one by taking i = 0. It follows that 2^r gcd(q − 1, n)/gcd(2^r (q − 1), n, u) divides (q − 1)(j − i). So, by Equation (3.1), the condition ord_{q^2} a | (q − 1)(j − i) is equivalent to 2^{r−ν_2(u)} | (j − i).
In other words, if the coefficient of the monomial of degree n − t − it is zero, then all the coefficients of the monomials of degree n − t − jt with j ≡ i (mod 2^{r−ν_2(u)}) are zero. Thus, if λ ∉ Λ_u, then no coefficient of the form x^{tj} is zero and the weight of g_λ is n/t. Otherwise, exactly (n/t) · 1/2^{r−ν_2(u)} of the coefficients of the monomials of the form x^{tj} are zero, and the weight of g_λ is (n/t)(1 − 1/2^{r−ν_2(u)}), as we wanted to prove. Corollary 3.5. Let g be a polynomial satisfying the conditions of Lemma 3.4. Then Proof: If μ = 0 and λ ≠ 0, then ω(λx^t g(x)) = (n/t)(1 − 1/2^{r−ν_2(u)}) and we have (q − 1) ways to choose λ.
and its weight enumerator is
In particular, if n/(t·2^{r−ν_2(u)}) ∤ k, then A_k = 0.
|
2014-05-08T14:13:33.000Z
|
2014-04-27T00:00:00.000
|
{
"year": 2014,
"sha1": "d34488d7c687b067dfff7d458cb24af8e5e9eebd",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1404.6851.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d34488d7c687b067dfff7d458cb24af8e5e9eebd",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
}
|
85501147
|
pes2o/s2orc
|
v3-fos-license
|
The Meaning of Aggression Varies Across Culture: Testing the Measurement Invariance of the Refined Aggression Questionnaire in Samples From Spain, the United States, and Hong Kong
Abstract Cultural differences in aggression are still poorly understood. The purpose of this article is to assess whether a tool for measuring aggression has the same meaning across cultures. Analyzing samples from Spain (n = 262), the United States (n = 344), and Hong Kong (n = 645), we used confirmatory factor analysis to investigate measurement invariance of the refined version of the Aggression Questionnaire (Bryant & Smith, 2001). The measurement of aggression was more equivalent between the Chinese and Spanish versions than between these two and the U.S. version. Aggression does not show invariance at the cultural level. Cultural variables such as affective autonomy or individualism could influence the meaning of aggression. Aggressive behavior models can be improved by incorporating cultural variables.
To improve its structural stability, Bryant and Smith (2001) shortened the original AQ to 12 items (AQ-R). This version allows for efficient administration and maintains high standards of validity and reliability (Gallardo-Pujol et al., 2006). It has also been translated into Chinese (Maxwell, 2007) and Spanish (Gallardo-Pujol et al., 2006). Yet, its measurement invariance across culture remains unknown.
Measurement invariance or measurement equivalence consists of different levels (Kankaraš et al., 2010; Van de Vijver & Leung, 1997). Structural or configural invariance exists when the given construct shows the same factor structure across different cultures. Metric invariance exists when factor loadings (which reflect the meaning of the construct) are equal across different cultures. Finally, scalar invariance exists when the intercepts of the indicators are the same across groups. This implies that mean differences across cultures might reflect actual mean differences in the latent constructs.
Many studies have explored the configural invariance of the AQ-R (Fossati et al., 2003;Gallardo-Pujol et al., 2006;Maxwell, 2007;Nakano, 2001;Vigil-Colet, Lorenzo-Seva, Codorniu-Raga, & Morales, 2005), confirming the same set of factors in all adaptations so far. Yet, its full measurement invariance (configural, metric, and scalar) across cultures has not been investigated. Establishing metric invariance is the first step in showing that cross-cultural differences in mean aggression scores reflect differences in aggression levels rather than unknown factors. Indeed, directly comparing mean scores (scalar invariance) without establishing metric invariance could produce distorted conclusions. Hence, the aim of this study was to evaluate measurement invariance across three different versions of the AQ-R: Spanish, U.S. English, and Chinese (Hong Kong).
The reasons for choosing these three cultures are not trivial. Benet-Martínez (Aaker, Benet-Martínez, & Garolera, 2001; Benet-Martínez, 2007) proposed an approach for evaluating cultural differences based on a triangulation of three cultures that vary with respect to at least two explanatory constructs (Benet-Martínez, 2007). Hence, we selected samples from these three cultures because they vary on two sociocultural dimensions (Schwartz & Bilsky, 1987). These dimensions describe preferences for one state of affairs over another that distinguish countries (Hofstede, 2001; Hofstede & McCrae, 2004). In this case, we evaluated individualism (the United States vs. Spain and Hong Kong) and affective autonomy (Hong Kong vs. Spain and the United States). Individualism (vs. collectivism), defined as the preference for a framework in which individuals are expected to take care of themselves (Hofstede, 2001; Hofstede & McCrae, 2004), has been linked to violence and aggression in Western societies (Menzer & Torney-Purta, 2012). Affective autonomy refers to the independent pursuit of affectively positive experiences (Schwartz & Bilsky, 1987); high affective autonomy is related to leading a pleasant, happy, and exciting life. Hence, low affective autonomy might be related to unhappiness, poor emotion regulation, frustration, and therefore proneness to exhibit aggressive behaviors (Matsumoto, Yoo, & Nakagawa, 2008).
This analysis differs from earlier work in two ways: (a) it is the first study to test the measurement invariance of the AQ-R across Eastern and Western cultures; and (b) it systematically selected three cultures that differ in terms of the possible explanatory or mediating variables responsible for observed structural differences.
Participants and procedure
The Spanish sample, taken from Gallardo-Pujol et al. (2006), consisted of 262 students from Catalonia (154 females, 99 males, and 9 who did not report gender). Mean age was 21.68 (SD = 2.84). Further details are available in Gallardo-Pujol et al. (2006).
The U.S. sample, taken from Bryant and Smith (2001), consisted of 344 U.S. undergraduates (250 females and 94 males) at a private Midwestern metropolitan university. Mean age was 18.49 (SD = 1.26). Further details are available in Bryant and Smith (2001).
The Hong Kong sample, taken from Maxwell (2007), consisted of 645 undergraduate Hong Kong Chinese students (372 females, 272 males, and 1 who did not report gender) at the University of Hong Kong. Mean age was 19.71 (SD = 1.26). Further details are available in Maxwell (2007).
For all samples, participation was voluntary and anonymous, and all participants provided informed consent for the inclusion of their data. The analyses conducted in this study are secondary to already existing data. Secondary analyses involve reanalyzing data collected with different purposes to pursue a new research question not addressed by the original study.
Measures
The AQ-R (Bryant & Smith, 2001) is a short self-report questionnaire that consists of 12 Likert-type items rated on a 5-point scale ranging from 1 (never) to 5 (always). The AQ-R is organized in four scales of three items each: Physical Aggression (PA), Verbal Aggression (VA), Anger (ANG), and Hostility (HO). All versions showed good psychometric properties (Bryant & Smith, 2001; Gallardo-Pujol et al., 2006; Maxwell, 2007).
Statistical analysis
Multigroup confirmatory factor analysis was conducted using polychoric correlations with diagonally weighted least squares (WLSMV) with a mean- and variance-adjusted chi-square test as implemented in Mplus 7.2 (Muthén & Muthén, 2016). For model identification, factor loadings of the first item for each factor were freely estimated, but all factor variances were fixed at 1 to avoid the use of a marker item (Kim & Yoon, 2011). Factors were allowed to intercorrelate. Factorial invariance across the three samples was tested with the chi-square test (Asparouhov & Muthén, 2006) for nested models (Byrne, 2011; Vandenberg & Lance, 2000) estimated using mean- and variance-corrected statistics. This is the DIFFTEST procedure implemented in Mplus. We started with a configural model (Model 1), in which all parameters were freely estimated across samples but the same theoretical model was specified across populations. Then, full metric invariance was tested (Model 2) by equating factor loadings across populations and freeing factor variances in the second and third groups (which had been fixed at 1 in the first group for model identification, as in Ezpeleta & Penelo, 2015). The metric invariance model across the three populations was rejected. Then, we tested full metric invariance across two of the populations (Models 3-5). Then, we examined partially invariant models (Models 6-9) in which the parameters of one item were relaxed sequentially using a backward procedure (Kim & Yoon, 2011). Finally, should metric invariance have been met, scalar invariance would have been explored. Goodness of fit was assessed using the χ2 (Jackson, Gillaspy, & Purc-Stephenson, 2009), comparative fit index (CFI), Tucker-Lewis Index (TLI), and root mean square error of approximation (RMSEA), using conventional thresholds (Marsh, Hau, & Wen, 2004). To compare all three questionnaires, we decided to recode values of 6 into values of 5 for the U.S.
sample, given that the frequencies of 6 responses were extremely low (median frequency = 4%) compared with the total sample. Converting the 6-point scale to a 5-point scale by recoding 6s as 5s produced item scores that were virtually identical (rs > .988) to those produced by subtracting 1 from 6-point-scale scores, multiplying the result by 0.8, and adding 1 to the product to obtain a 5-point scale. Additionally, the Spain and Hong Kong samples retained the original AQ 5-point rating scale that was modified in the U.S. AQ-R. To make sure that recoding category data did not affect the results obtained, we repeated all measurement invariance analyses using the original coding (6-point scale for the U.S. sample and 5-point scale for the China and Spain samples). The results obtained are consistent with those reported here: only partial measurement invariance holds, and for the same items and combinations of countries reported in this brief report. Table 1 reports descriptive statistics for each item and subscale, and internal consistency for each dimension in each of the three samples. Table 2 summarizes the results for the tests of measurement invariance across the three samples. 2 Full metric invariance did not hold across all three samples (Model 2) or between any of the three pairs of samples (Models 3-5). Partial metric invariance held across pairs of samples as follows: six factor loadings equivalent for Spanish and U.S. samples (Model 6), eight factor loadings equivalent for Spanish and Hong Kong samples (Model 7), and six factor loadings equivalent for U.S. and Hong Kong samples (Model 8). Finally, analysis of partial metric invariance across the three samples was conducted simultaneously. Partial metric invariance could not be rejected, Δχ2(6) = 12.3, p = .05. Fit statistics for the final, partially invariant model (Model 9) were χ2(157) = 554.7, CFI = .96, TLI = .94, and RMSEA = .080.
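As a rough check on the reported difference test, the p-value for the Δχ2 of 12.3 on 6 degrees of freedom can be reproduced with the closed-form chi-square survival function for even degrees of freedom. This is an illustrative stdlib computation, treating the adjusted difference statistic as approximately chi-square distributed, and is not part of the original analysis:

```python
import math

def chi2_sf_even_df(x, df):
    """P(X > x) for a chi-square variable with even df:
    exp(-x/2) * sum_{k=0}^{df/2 - 1} (x/2)^k / k!
    """
    assert df % 2 == 0 and df > 0
    half = x / 2.0
    return math.exp(-half) * sum(
        half ** k / math.factorial(k) for k in range(df // 2))

p_value = chi2_sf_even_df(12.3, 6)   # ~ .056, borderline at the .05 threshold
```

The result, about .056, is consistent with the reported p = .05 and shows how close the partial-invariance model comes to rejection.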
Each model always included a multigroup approach, assessing all three groups, but fixing parameters across only two of the samples and freeing the third, not-involved sample (detailed results of sequential analyses are available on request). Figure 1 shows standardized factor loadings (unstandardized factor loadings are available on request) and factor correlations for the final partially invariant model (Model 9). Equivalent factor loadings between samples were as follows: five items (two for PA and one for each of the other factors) across Spanish and U.S. samples, five items (all three for PA and one each for VA and HO) across U.S. and Hong Kong samples, and seven items (two each for PA, VA, and AN, and one for HO) across Spanish and Hong Kong samples. Of these, four items showed equivalent factor loadings across the three samples: two for PA, one for VA, and one for AN. In contrast, two items did not have equivalent factor loadings
Results
2 Gender invariance was also tested within each country, given the asymmetry between males and females in terms of aggression. We found absolute gender invariance in the United States, χ2(8) = 7.123, p = .5234, and Hong Kong, χ2(8) = 3.887, p = .8672. There was partial gender invariance (20% freed parameters) for the Spanish sample, χ2(6) = 6.938, p = .3266. The two items involved were one from the VA scale (My friends say that I'm somewhat argumentative./Mis amigos/as dicen que soy discutidor/ra), and another one from the HOST scale (My friends say that I'm somewhat argumentative./Mis amigos/as dicen que soy discutidor/ra). In both cases, females had larger factor loadings than males.
across any of the three samples: one for VA and one for AN, both showing lower loadings in the Spanish sample. That HO was the only AQ factor with no equivalent loadings across all three samples suggests that culture influences the meaning of hostility more than the meaning of physical or verbal aggression or of anger, a conclusion consistent with cross-cultural research using the 29-item AQ (Vigil-Colet et al., 2005, p. 607).
Discussion
Our aim was to assess metric invariance across three versions of the AQ-R. Of the 12 AQ-R items, 7 (58.3%) were metric invariant for the Spanish and Chinese samples, whereas only 5 (41.6%) were invariant for the Spanish and U.S. samples, and 5 (41.6%) for the U.S. and Chinese samples. This pattern of results suggests that aggression is closer in meaning between the Chinese and Spanish versions than between each of these two versions and the U.S. version.
One potential explanation for these discrepancies is the use of an imposed-etic approach. This approach refers to the widespread practice of translating and adapting items originally developed within one culture for use in another, in contrast to an emic approach, which relies on items originally developed from within the target culture (Berry, 1980; Berry, Poortinga, Breugelmans, Chasiotis, & Sam, 2011). Although imposed-etic instruments allow for quick comparisons across cultures, measurement was not metric equivalent in all three countries, suggesting that the meaning of aggression differs across cultures. However, this does not explain the similarities between the two adaptations from English into Spanish and Chinese. These cross-cultural similarities could be attributed to certain values present in each of these societies (Schwartz, 1992). In particular, the similarity between Spain and Hong Kong with respect to the PA and VA subscales might be explained by the similarity between the two cultures with respect to individualism (Menzer & Torney-Purta, 2012); collectivistic societies report fewer episodes of violence at schools (Menzer & Torney-Purta, 2012). The Spanish and U.S. adaptations are closer on the AN and HO subscales. Thus, it is reasonable to think that these societies conceive of and promote both aspects of aggression in a similar way, given that Spain and the United States show similar levels of affective autonomy compared to Hong Kong society (Aaker et al., 2001, p. 494). However, the variables studied here cannot explain the high degree of variation that remains across all three cultures with respect to self-reported manifestations of aggression.
Future research should include comparisons among cultures differing on other cultural dimensions (Schwartz, 1992). Such research would complement current aggression models (e.g., the general aggression model) that do not go beyond proximal causes of aggression (Anderson & Bushman, 2002). Moreover, because contemporary models of aggression are culturally centered within the perspective of Western societies (Henrich, Heine, & Norenzayan, 2010), it is important to develop cross-cultural models. Additionally, an important avenue of research could be using item-response theory analyses to study cross-cultural differences in the AQ-R (and other measures) with respect to differential item functioning or differential test functioning; such analyses would enable fine-grained comparisons (e.g., Hambrick et al., 2010). This work is not exempt from limitations that should be addressed in further studies. Mean age differed across the three samples, which could affect the composition of the samples and thus weaken the robustness of our findings. However, there is evidence that by the age of our subjects, aggression has already peaked in late adolescence and is slowly and steadily declining at similar rates (Liu, Lewis, & Evans, 2013; Moffitt, 1993). With respect to gender, we conducted separate analyses to explore gender invariance (see footnote 2) within each country. We found only partial invariance in Spain, but at the threshold for accepting it in practical applications (Dimitrov, 2010), as is intended for this questionnaire (Gallardo-Pujol et al., 2006).
All in all, our results have shown that (a) metric invariance should be tested before proceeding to direct comparisons of national and cultural mean levels of aggression, and (b) certain cultural variables, such as individualism and affective autonomy, could influence the meaning of aggression across culture (Schwartz, 1992). As has typically been the case in previous comparative cross-cultural research on the AQ, this study did not assess criterion measures as correlates of AQ-R subscales across multiple countries. However, because such criterion measures are crucial for establishing cross-cultural construct validity, future international work on the AQ-R should include criterion measures. Our results suggest that this future research should be careful to address potential cross-cultural differences in factor structure, which could otherwise produce misleading evidence about the generalizability of construct validity across culture.
Implementation of the Randomized Embedded Multifactorial Adaptive Platform for COVID-19 (REMAP-COVID) trial in a US health system—lessons learned and recommendations
Background The Randomized Embedded Multifactorial Adaptive Platform for COVID-19 (REMAP-COVID) trial is a global adaptive platform trial of hospitalized patients with COVID-19. We describe implementation at the first US site, the UPMC health system, and offer recommendations for implementation at other sites. Methods To implement REMAP-COVID, we focused on six major areas: engaging leadership, trial embedment, remote consent and enrollment, regulatory compliance, modification of traditional trial management procedures, and alignment with other COVID-19 studies. Results We recommend aligning institutional and trial goals and sharing a vision of REMAP-COVID implementation as groundwork for learning health system development. Embedment of trial procedures into routine care processes, existing institutional structures, and the electronic health record promotes efficiency and integration of clinical care and clinical research. Remote consent and enrollment can be facilitated by engaging bedside providers and leveraging institutional videoconferencing tools. Coordination with the central institutional review board will expedite the approval process. Protocol adherence, adverse event monitoring, and data collection and export can be facilitated by building electronic health record processes, though implementation can start using traditional clinical trial tools. Lastly, establishment of a centralized institutional process optimizes coordination of COVID-19 studies. Conclusions Implementation of the REMAP-COVID trial within a large US healthcare system is feasible and facilitated by multidisciplinary collaboration. This investment establishes important groundwork for future learning health system endeavors. Trial registration NCT02735707. Registered on 13 April 2016.
A Randomized Embedded Multifactorial Adaptive Platform (REMAP) trial combines features of adaptive platform and pragmatic point-of-care trials to simultaneously evaluate multiple treatment strategies and maximize trial conduct efficiency [1,2].
REMAP-CAP is a global adaptive platform trial of patients with severe community-acquired pneumonia (CAP) admitted to the intensive care unit (ICU) that was launched in 2016 [3]. The trial design and rationale have been previously published [4]. Briefly, REMAP-CAP is governed by an International Trial Steering Committee (ITSC) and uses a core protocol that defines broad eligibility criteria, outcomes, and the statistical analysis plan. Multiple investigational treatments are layered within the core protocol as "domains". Domains test investigational treatments with similar mechanisms of action, and multiple domains are simultaneously tested such that patients are randomized within each domain for which they are eligible. Testing of interactions between domains can be predefined or performed post hoc. As the number of domains increases, the likelihood a patient receives at least one investigational treatment increases accordingly. Randomization adapts as the trial evolves such that subjects are preferentially randomized to receive better performing arms based on interim analyses-termed "response adaptive randomization" [5].
Adaptations occur approximately monthly and use data from patients from all sites [4]. Domains are flexible such that additional investigational treatments can be introduced in a rolling fashion (or dropped, as appropriate). Thus, the trial is built to run perpetually as long as the disease and ideas to treat it exist.
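Response adaptive randomization as described above can be illustrated with a simplified Thompson-sampling sketch. This is not the trial's actual algorithm (REMAP-CAP's adaptations come from formal Bayesian interim analyses run approximately monthly); the arm names and outcome counts below are hypothetical:

```python
import random

# Hypothetical arms within one domain; (successes, failures) counts
# would come from interim analyses pooling patients across all sites.
arm_outcomes = {
    "control": (40, 60),
    "drug_a": (55, 45),
    "drug_b": (48, 52),
}

def thompson_assign(outcomes):
    """Sample a plausible success rate for each arm from its Beta
    posterior and assign the next patient to the best draw, so that
    better-performing arms are preferentially randomized."""
    draws = {arm: random.betavariate(s + 1, f + 1)
             for arm, (s, f) in outcomes.items()}
    return max(draws, key=draws.get)

random.seed(0)
assignments = [thompson_assign(arm_outcomes) for _ in range(1000)]
# drug_a, with the best observed success rate, receives the most patients
print(max(set(assignments), key=assignments.count))
```

Note that every arm keeps a nonzero assignment probability, so randomization continues to learn about apparently inferior arms rather than dropping them outright; formal dropping decisions are made at interim analyses.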
In light of challenges conducting research relevant to pandemic infection uncovered by the 2009 H1N1 influenza experience, the REMAP-CAP trial was initially drafted with a pre-specified Pandemic Appendix to be activated in the event of an emergent pandemic. In February 2020, as the Coronavirus Disease 2019 (COVID-19) pandemic spread, the Pandemic Appendix to REMAP-CAP was activated and the trial labeled REMAP-COVID [4]. REMAP-COVID uses the same core design as REMAP-CAP, expands enrollment to include all hospitalized patients with clinically diagnosed or microbiologically confirmed COVID-19, adds COVID-19-specific treatment domains (Table 1), and enables the addition of new sites and regions. In this manuscript, we describe the implementation of the REMAP-COVID trial in the first US site, the UPMC health system, lessons learned, and recommendations for implementation at other sites.
To implement REMAP-COVID, we focused on six major areas (Table 2): engaging leadership, embedment into routine care processes and the electronic health record (EHR), remote consent and enrollment, regulatory compliance, modification of traditional trial management procedures, and alignment with other COVID-19 studies. We identified these areas by reviewing the major tasks we had completed to launch the trial, proposing a finite list of conceptual areas that captured these tasks and would communicate to sites the work required to join the trial, and iteratively editing the list into its final form.
Leadership engagement
The UPMC system comprises multiple community hospitals, regional tertiary referral centers, and one quaternary referral institution, all predominantly located in western and central Pennsylvania. For several years, the system has worked to develop a learning health system and previously launched a REMAP trial [6] as part of an overall "Learning While Doing" program. This program seeks to accelerate development of a learning health system by encouraging synergy between the clinical research and clinical practice enterprises [7]. In January 2020, UPMC leadership decided to primarily keep patients with COVID-19 at the hospital to which they initially presented, with remote critical care support via telemedicine [8]. To ensure trial availability to all patients regardless of location, we engaged UPMC leadership to support implementation of REMAP-COVID across the system. We conducted meetings with administrative leaders, department chairs, informatics groups, the pharmacy and therapeutics committee, blood bank, and others to describe the vision of REMAP-COVID and propose specific implementation steps. Due to the UPMC commitment to become a learning health system and to test novel therapies within trials, support was obtained, primarily in the form of access to existing infrastructure resources and institutional willingness to engage. In parallel, we engaged University of Pittsburgh leadership to collate and prioritize the multiple COVID-19 studies that were being proposed. The University of Pittsburgh and UPMC jointly designated the University of Pittsburgh Clinical and Translational Science Institute as the central hub for collecting information and providing resources related to COVID-19 studies, and investigators were asked to register their studies to optimize coordination.
Embedment into routine care processes
A key REMAP design philosophy is for both the clinical care and clinical research enterprises to "lean in" towards one another [7], such that streamlined research processes become embedded within routine care processes, and informing future best practices becomes a part of daily care. To operationalize this philosophy, we identified opportunities to integrate REMAP-COVID into existing clinical care processes.
First, to promote trial awareness and enthusiasm, we worked with systemwide UPMC administrative groups to disseminate information via posting educational materials in the COVID-19 resources section of an internal UPMC website, adding trial announcements to COVID-19 communications from UPMC leadership, and identification of stakeholders at each hospital. We also presented the trial at Grand Rounds and other venues for virtual dissemination, to aid in reaching UPMC hospitals with limited prior research participation.
Second, we partnered with the pharmacy and therapeutics committee to set policy prioritizing use of experimental therapies within clinical trials, determine which REMAP-COVID investigational treatments were best suited for UPMC, and embed the trial into routine pharmacy operations.

The UPMC REMAP-COVID Group, on behalf of the REMAP-CAP Investigators. Trials (2021) 22:100

For example, the clinical pharmacies at each hospital dispense drugs with FDA Investigational New Drug (IND) exemptions, rather than an investigational pharmacy. To speed study launch, we began the trial only with interventions with IND exemptions. We educated pharmacists at the system level for each investigational treatment and emphasized medication order verification differences for COVID-19 versus non-COVID-19 indications. Third, UPMC deployed a telemedicine program to facilitate local patient care and minimize COVID-19 transmission and provided tablet computers to help hospitalized patients communicate. We used these resources to aid remote connection with patients.
Embedment into the electronic health record (EHR)
Embedment into the EHR facilitates low operational complexity at the bedside, despite complex internal trial machinery. We launched the trial at the 20 UPMC adult acute care hospitals that use Cerner (Cerner Corporation, North Kansas City, MO). To facilitate both clinical care and clinical research for COVID-19, we created a "one-stop shop" for operational efficiency-a COVID-19 tab in the provider facing interface of the patient's electronic record. Physicians and advanced practice providers can use the tab to access treatment protocols and order sets, as well as complete a COVID-19 intake form (Fig. 1). The technology team programmed automated EHR alerts which prompt clinicians to complete the form when a SARS-CoV-2 test is ordered, a COVID positive isolation flag is entered, or a COVID order set is initiated, to capture basic data for surveillance reporting to public health authorities. Clinicians can suppress the form if they do not consider their patient to have active COVID-19.
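The intake-form prompting described above is essentially a disjunction of trigger events with a clinician suppression override. A minimal sketch of that predicate, with invented event names standing in for the actual Cerner events:

```python
def should_prompt_intake_form(events, form_suppressed, form_completed):
    """Prompt clinicians to complete the COVID-19 intake form when any
    triggering event fires, unless the form was suppressed or already
    completed. Event names are invented for illustration."""
    triggers = {
        "sars_cov_2_test_ordered",
        "covid_isolation_flag_entered",
        "covid_order_set_initiated",
    }
    if form_suppressed or form_completed:
        return False
    return any(event in triggers for event in events)

print(should_prompt_intake_form(
    ["sars_cov_2_test_ordered"], form_suppressed=False, form_completed=False))  # True
print(should_prompt_intake_form(
    ["sars_cov_2_test_ordered"], form_suppressed=True, form_completed=False))   # False
```

The suppression check runs first, mirroring the described workflow in which a clinician can silence the form for patients not considered to have active COVID-19.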
The intake form also assists with identifying REMAP-COVID eligibility criteria and provides the only route for entry into the trial by requesting providers ask their patient or legally authorized representative (LAR) if they would be interested in hearing about potential additional therapies for COVID-19. An affirmative response represents assent to be approached about research, and intake form completion generates an automated email to research staff, which triggers the informed consent and enrollment process (described in the next section). After obtaining informed consent, randomization and in-trial notification processes are embedded within the EHR. Research staff complete a web-based application Enrollment Form, linked within Cerner, and hosted on a local server behind the health system firewall. The application references a response adaptive randomization table to generate treatment arm assignments for each domain (a trial "recipe"), which are then written by the application to a custom-built table within Cerner. Each unique trial recipe triggers a custom series of pop-up Cerner Discern alerts, which appear the next time after enrollment a provider enters a patient order. The alerts contain guidance for clinicians regarding eligibility criteria and protocol procedures. Where appropriate, the alerts pre-populate investigational treatment orders corresponding to the assigned trial recipe, which clinicians are requested to sign unless previously undetected exclusion criteria are present, or an investigational treatment is not thought to be in the patient's best interest (Fig. 2). To further ensure in-trial notification, enrollment automatically triggers an icon in the banner bar that indicates the patient's status as a trial participant, and a display of the assigned recipe on the COVID-19 tab in the patient's record. Finally, research staff document an enrollment note in the EHR. 
If a patient declines participation or is ineligible, research staff document a trial note accordingly.
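The "trial recipe" concept above (one assigned arm per domain for which the patient is eligible) can be sketched as a simple data structure. Domain and arm names below are illustrative only, and plain random assignment stands in for the trial's response-adaptive randomization table:

```python
import random

# Hypothetical sketch of a per-patient trial recipe. In the actual
# system the recipe is written by a web application to a custom Cerner
# table, where it drives the pop-up alerts; here it is a plain dict.
DOMAINS = {
    "corticosteroid": ["no_steroid", "moderate_dose", "high_dose"],
    "antiviral": ["no_antiviral", "drug_x"],
}

def build_recipe(eligible_domains, choose=random.choice):
    """Assign one arm per eligible domain using the supplied
    randomization function (a response-adaptive sampler in practice)."""
    return {d: choose(DOMAINS[d]) for d in eligible_domains if d in DOMAINS}

random.seed(1)
recipe = build_recipe(["corticosteroid", "antiviral"])
print(recipe)  # e.g. {'corticosteroid': 'high_dose', 'antiviral': 'no_antiviral'}
```

Representing the recipe as one record per patient is what makes the downstream steps (alerts, pre-populated orders, banner icons) straightforward to drive from a single table lookup.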
Remote consent and enrollment
We developed a centralized remote consent and enrollment process (Fig. 3) applicable across all UPMC hospitals, to minimize COVID-19 transmission risk to research staff and preserve personal protective equipment.
Both the state of Pennsylvania and the University of Pittsburgh Institutional Review Board (IRB) require physician consent for clinical trials [9], with the University IRB requiring face-to-face consent, and a signed informed consent form is a clinical research standard [10]. Trial personnel execute four key steps in the remote consent process: (1) eligibility determination via chart review (with consultation of the treating clinician as needed); (2) a brief phone call to the patient's personal or hospital room phone to introduce the trial; (3) a face-to-face consent discussion between the patient or legally authorized representative (hereafter, collectively referred to as "patient"), research staff, and physician investigator via secure video conference; and (4) signature capture via a software platform (e.g., DocuSign) that allows the patient to electronically sign their name using a smart device. We provide the software platform by texting or emailing a link to the patient's smart device, investigators use the same platform to sign, and the final signed consent form is electronically sent to the patient. Bedside clinicians are engaged to assist with the process on an ad hoc basis, including providing patients who do not have a personal smart device a hospital phone or tablet, and assisting in the videoconferencing process if needed. Clinicians follow UPMC infection control procedures when assisting. On rare occasions, we have used paper consent, in which case a provider or investigator in personal protective equipment provides the paper forms and sends pictures of the signed forms to the research team. A team of coordinators and multidisciplinary physician investigators trained in the protocol, informed consent, and Cerner currently screen and enroll 7 days a week, approximately 12 h a day. Once the informed consent signature is obtained, research staff then complete the web-based Enrollment Form, which triggers the randomization and alerts as described above.
Regulatory compliance and oversight
UPMC and the University of Pittsburgh serve as both a REMAP-COVID site and the US Regional Coordinating Center. As most UPMC hospitals are under the purview of the University of Pittsburgh IRB, we obtained local approval to launch the trial. In parallel, we worked with the US sponsor, the Global Coalition for Adaptive Research (GCAR) to centralize protocol review at the Western Institutional Review Board (Puyallup, WA) for implementation across the USA. We presented each IRB the core protocol and appendices for each domain making the approval process modular and adaptable. Repurposed FDAapproved drugs are frequently granted an exemption from the Investigational New Drug (IND) application process, whereas experimental agents require IND approval. In partnership with GCAR, we submit documentation to the FDA for approval and adverse event monitoring where necessary. Trial registration at www.clinicaltrials.gov is updated with each newly activated domain (NCT02735707).
Modification of traditional trial management procedures
We regularly update trial management procedures as the adaptive design necessitates frequent updates to our internal work environment [11], workflow [12], and data processes [13]. We work closely with UPMC information technology groups to add and drop study arms as the trial adapts, generate automated screening and eligibility logs, and create automated email alerts for all positive COVID-19 tests as a backup screening tool. To comply with COVID-19 travel restrictions, we remotely oversee trial execution at each hospital. To maintain trial awareness, we continuously reach out to hospitals across the system to promote the trial, identify local champions, and address questions. In addition, we recognize local champions for outstanding contributions and share trial updates through established UPMC information distribution mechanisms.
Serious adverse event monitoring and protocol adherence
We use a combination of automated EHR detection alerts, traditional coordinator oversight via manual monitoring of enrolled patients, and input from clinical staff to monitor adverse events and protocol adherence. A REMAP goal is to rely on automated EHR detection alerts where possible as the trial progresses, and to optimize alerts by comparing them to manually adjudicated events. Medical monitoring is managed in a two-tiered system, with a local investigator providing oversight of IND-exempt interventions and a central medical monitor contracted by the sponsor providing oversight of IND domains requiring FDA oversight. Because UPMC serves as the coordinating center for the US region of REMAP-CAP, essential documents are collected and filed with the UPMC program management team.
Data collection and export
Trial-relevant data are continuously and automatically extracted from the Cerner database and curated into a dataset suitable for export to the international data coordinating center at Monash University in Australia. Data are stored in a MySQL Server database, processed using a scheduled combination of SQL statements and Python scripts, and formatted and exported for trial reporting. An unblinded investigator reviews initial exports in detail to ensure data accuracy. EHR data are reviewed for completeness and validity at the time of entry into the EHR, the time of extraction into a trial database, and following transformation into trial-relevant elements. Validation and process refinement are ongoing, to account for impurities in EHR data as well as frontline context of care and documentation practices that have changed due to the pandemic. A key challenge has been the need to rework both workflow and analytic models to keep pace with the adaptive nature of the trial and the evolution of the pandemic.
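The extract-transform-export flow described above can be sketched in miniature. The table, columns, and derived element below are invented for illustration and are not the trial's actual schema:

```python
import csv
import sqlite3

# Toy stand-in for the scheduled SQL + Python pipeline: extract raw EHR
# rows, derive a trial-relevant element, and write an export file.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ehr_obs (patient_id TEXT, spo2 INTEGER)")
con.executemany("INSERT INTO ehr_obs VALUES (?, ?)",
                [("p1", 97), ("p2", 88)])

rows = con.execute(
    "SELECT patient_id, spo2 FROM ehr_obs ORDER BY patient_id").fetchall()

def transform(patient_id, spo2):
    """Derive a trial element (hypoxemia flag) and drop raw detail."""
    return {"patient_id": patient_id, "hypoxemic": spo2 < 92}

export = [transform(pid, s) for pid, s in rows]
with open("export.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["patient_id", "hypoxemic"])
    writer.writeheader()
    writer.writerows(export)
print(export)
```

The real pipeline adds the validation layers described above (at EHR entry, at extraction, and after transformation), which is where impurities in EHR data and pandemic-driven documentation changes are caught.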
To support adaptive randomization, relevant patient characteristics and outcomes are iteratively transferred to the international data coordinating center as a Health Insurance Portability and Accountability Act limited dataset using Globus (University of Chicago, Chicago, IL, USA) secure file transfer services. Post-hospital discharge outcomes are collected by a dedicated follow-up team via phone calls and query of national mortality databases. Secondary outcomes during hospitalization are captured from the EHR.
Alignment with other COVID-19 studies
The COVID-19 pandemic sparked intense interest within academia and industry. To minimize competition for protocols, temper inconvenience to patients, regulate biospecimen volumes, and promote the "Learning While Doing" philosophy, UPMC and the University of Pittsburgh centralized all COVID-19 clinical trial recruitment through the REMAP infrastructure. As previously reported [4], the REMAP-COVID core protocol is broadly encompassing with basic eligibility criteria. As such, the REMAP-COVID core protocol and trial coordination team provide a gateway for entry into all COVID-19 clinical research, whereby trial personnel aid investigators of studies operating alongside REMAP in connecting with subjects. For example, consent for biospecimen collection and storage is coordinated with REMAP consent and allocation of specimens to individual investigators is centralized through a university assigned committee review; observational study data collection is integrated with REMAP EHR data extraction; and local investigator partnerships with industry are supported.
Lessons learned and recommendations for implementation in other health systems
The pandemic challenged all REMAP-CAP sites to respond with innovation and efficiency. As a new site, we had limited preparation time before trial launch. We went live April 9, 2020, with two domains comprising three investigational treatment arms (hydroxychloroquine, and moderate-dose and high-dose hydrocortisone). We enrolled our first patient on April 12, added an immunoglobulin (convalescent plasma) domain on May 8, discontinued the initial two domains in response to emerging international data in mid-June [14-16], added Vitamin C and anticoagulation domains on July 23, and added an immunomodulation domain on October 21. The immunomodulation domain is under an IND and required coordination with the UPMC investigational pharmacy and additional steps to comply with FDA regulations. Additional domains are pending (Table 1). On September 21, we also went live with the seven adult acute care UPMC hospitals that use Epic (Epic Systems Corporation, Verona, WI), with an analogous EHR embedment process based on the one created for Cerner. As of December 14, 2020, we have enrolled 319 patients of 2005 screened (16% enrollment) at 21 hospitals (Fig. 4), after excluding those who were not candidates due to age < 18 years, not suspected to have COVID-19, unlikely to be admitted for more than 24 h due to either discharge or death, or having previously participated in REMAP-COVID in the previous 90 days. We are encouraged that enrollment has occurred in community hospitals with limited prior research participation. Of the hospitals that have not yet enrolled a patient, most are low-volume, transfer patients to a regional center, or had ineligible patients. We offer the following implementation recommendations, outlined in Table 2. As with the design transition of REMAP-CAP to the pandemic mode (REMAP-COVID), proactive and agile planning at the implementation level is essential.
All six implementation areas are necessary, with leadership engagement and, when applicable, alignment with other COVID-19 studies, the first priorities.
Leadership engagement
We found an inclusive and transparent approach effective, while in retrospect we should have created and disseminated a preliminary organogram at the outset to rapidly communicate the complex trial structure. Although we had advantages of institutional desire and scale, in practice, implementing a learning health system (and REMAP-COVID) is achieved at the bedside and therefore local challenges with daily workflow and staffing require individualized attention. Similar conditions likely exist in other large health systems. Alignment of institutional and trial goals is imperative. Thus, simultaneously engaging leadership while keeping daily bedside realities in the forefront is essential. The optimal ways to engage leadership will vary by institution, as will unification needs, barriers, and solutions. Institution-level incentives for individual and group participation may increase bedside engagement and will also vary by institution. Participation in REMAP-COVID can help institutions lay the groundwork for future development as learning health systems. Smaller institutions may lack scale, but can likely move faster, and in the authors' experience can match or exceed trial enrollment performance compared to larger institutions.
Trial embedment
Embedment into routine care processes is essential as an implementation philosophy and design. We found partnering with existing institutional information dissemination, pharmacy, and telemedicine structures effective, and recommend this approach for efficient embedment, and as a means of engaging local leadership and front-line personnel.
While adding REMAP messages into existing information channels is relatively straightforward, constant and consistent re-education of frontline personnel is also required, particularly as investigational treatments are added or dropped. For drugs that require IND approval and are not available in clinical pharmacies, engagement of an investigational pharmacy is required, which may not be available in all hospitals. We have embedded the trial into the investigational pharmacies at the UPMC flagship academic hospitals and are expanding access to community hospitals using a combination of flagship hospital investigational pharmacy outreach and community hospital pharmacies. We are also considering expanding intake form completion ability to pharmacists.
(Fig. 4 legend, continued: patients were excluded if aged < 18 years, not suspected to have COVID-19, unlikely to be admitted for more than 24 h due to either discharge or death, or previously enrolled in REMAP-COVID in the previous 90 days. The one patient enrolled at UPMC Children's Hospital was older than 18 years of age but maintained longitudinal care with clinicians at Children's.)

Many health systems have increased their telemedicine capabilities in response to the pandemic. This shift to telemedicine will likely be sustained, and opportunities to embed trials within existing telemedicine infrastructure should be sought. Our initial intent was to deploy telemedicine critical care nurses and physicians for informed consent and enrollment discussions. However, we abandoned this plan due to the complexity of ensuring all were trained in research ethics and protocol nuances, and uncertainty as to whether trial duties could coincide with telemedicine duties both operationally and ethically under current clinical research regulations. Nonetheless, it is our long-term goal to further embed trial operations into routine care processes and rely less on research personnel. Embedment into the EHR is an optional feature, and traditional trial processes are used effectively in most regions of the world enrolling in REMAP-COVID. If feasible, though, EHR embedment, while requiring significant upfront investment, renders trial operations more efficient than traditional processes, reduces both clinician and research staff burden, and positions institutions for future embedded research. In addition, EHR embedment allows enrollment at hospitals geographically distant from a centralized research team. Therefore, we recommend leveraging a system's existing EHR to implement the trial if possible.
Customization of EHR-based workflow to support trial infrastructure, as has been accomplished within Cerner and Epic at UPMC, can conceivably be adapted to a range of other non-Cerner EHR systems. Most large health systems likely have the requisite information technology resources and expertise, while smaller systems may need to rely on traditional trial processes until standardized EHR solutions can be established. To this end, interactive web response system (IWRS) and electronic data capture (EDC) platforms have been established for REMAP-COVID to support trial deployment and software training resources are available from the coordinating center. Seamless to sites, data are collated, verified, and cleaned at the data coordinating center for analysis by the REMAP-CAP statistical analysis committee. In addition, data mapping across traditional IWRS/EDC systems and EHR-based approaches are handled at the coordinating center level.
Remote consent and enrollment
We found remote consent and enrollment most effective when combined with engaged bedside care providers. Thus, as with implementation of any hospital-based project, stakeholder engagement cannot be overemphasized. Regulatory requirements for consent vary by state and institution. Videoconferences should be secure and electronic signature software should be compliant with part 11 of Title 21 of the Code of Federal Regulations [17]. If telephone consent is allowed, we recommend still having a video conference option should a patient or representative desire it. If coordinator consent is allowed, we recommend maintaining a physician investigator call pool, should a patient or representative wish to speak to a physician, and to support coordinators. Technological challenges including internet bandwidth for video connection, institutional firewalls, and patient smart device availability and familiarity require creative solutions and partnership with bedside providers. Thus, we recommend testing the remote consent and enrollment process prior to launch, including mock enrollments with a variety of individuals with varying familiarity with the protocol and with videoconferencing and electronic signature software. Similarly, intermittent competency training of research personnel to support real-time troubleshooting thought processes is recommended. Combining technological simplicity for the patient with regulatory compliance is essential.
Regulatory compliance and oversight
We have streamlined the regulatory compliance and oversight processes to simplify onboarding of new sites in the US region in partnership with GCAR and the Western IRB. We recommend interested sites engage their local IRB to partner with the Western IRB for centralized ethics approval, and our group for protocol compliance, contract, and other key areas to facilitate rapid onboarding and activation of participant enrollment.
Modification of traditional trial management procedures
Frequent adaptive trial updates and the lack of traditional milestones such as target enrollment numbers can be challenging for classically trained frequentist investigators and trial personnel. As recommended by other platform trial groups, we have found that identifying, pausing at, and briefly celebrating natural points of progress and contributions from local champions aid morale [11]. Virtual town halls can efficiently augment outreach and trial awareness. Efficient adherence to trial conduct standards, including Good Clinical Practice (GCP), and data quality must be maintained. Protocol adherence, adverse event monitoring, screening and enrollment logs, and data collection and export for multiple hospitals can be facilitated by automated EHR extraction, combined with traditional manual oversight and validation. Sites can implement REMAP trial procedures using entirely traditional methods with research coordinator and investigator "boots on the ground" using IWRS-based randomization and electronic data capture, while developing more efficient EHR processes if resources are available.
Alignment with other COVID-19 studies
Finally, implementation of the REMAP-COVID core protocol has enabled coordinated and collaborative clinical research focused on COVID-19 at UPMC. Other institutions have similarly centralized COVID-19 research [18,19]. Establishing an integrated platform for clinical research will facilitate coordinated efforts not only during the pandemic but also into the future. For systems with minimal other COVID-19 research activity, such integration may be unnecessary, while for other large hospital systems integration is key.
Future
We envision using the REMAP-COVID trial infrastructure to develop a learning healthcare system approach beyond COVID-19. The infrastructure can serve to implement best practices, facilitate quality improvement, and expedite clinical research. Participation of community hospitals is essential, expanding enrollment and generalizability beyond quaternary academic centers, as well as facilitating enrollment of vulnerable populations, who often disproportionately seek care locally rather than at large academic centers. Enrollment of patients by bedside clinicians, as is done in the UK for the RECOVERY trial [20], is a long-term goal and will require enabling and incentivizing bedside clinicians to do so. REMAP-COVID provides an intermediate step to familiarize providers with research. Lastly, expansion of enrollment and data extraction protocols into non-Cerner EHR systems within UPMC can provide a framework for other US sites to facilitate EHR embedment.
In conclusion, implementation of the REMAP-COVID trial within a large US healthcare system is feasible and facilitated by multidisciplinary collaboration. This investment establishes important groundwork for future learning health system endeavors.
Scattering on a square lattice from a crack with a damage zone
A semi-infinite crack in an infinite square lattice is subjected to a wave coming from infinity, thereby leading to its scattering by the crack surfaces. A partially damaged zone ahead of the crack tip is modelled by an arbitrarily distributed stiffness of the damaged links. While an open crack, with an atomically sharp crack tip, in the lattice has been solved in closed form with the help of the scalar Wiener–Hopf formulation (Sharma 2015 SIAM J. Appl. Math., 75, 1171–1192 (doi:10.1137/140985093); Sharma 2015 SIAM J. Appl. Math. 75, 1915–1940. (doi:10.1137/15M1010646)), the problem considered here becomes very intricate depending on the nature of the damaged links. For instance, in the case of a partially bridged finite zone it involves a 2 × 2 matrix kernel of formidable class. But using an original technique, the problem, including the general case of arbitrarily damaged links, is reduced to a scalar one with the exception that it involves solving an auxiliary linear system of N × N equations, where N defines the length of the damage zone. The proposed method does allow, effectively, the construction of an exact solution. Numerical examples and the asymptotic approximation of the scattered field far away from the crack tip are also presented.
Introduction
Among other distinguished as well as popular works [1], Peter Chadwick made several contributions to wave propagation problems in anisotropic models with different kinds of symmetries, as well as to those applicable to the theory of lattice defects [2,3,4,5,6,7,8]. The research on elastic cubic crystals is especially relevant in the context of the present paper, as a discrete counterpart of the square lattice arises naturally when one considers waves interacting with a crack tip [9,10,11,12,13,14].
Indeed, the role of discrete models in the description of the mechanics and physics of crystals [15] and related structures has been dominant in studies of several critical phenomena, such as dislocation dynamics, dynamic fracture and phase transition, bridged-crack effects, and resonant, primitive, localised and dissipative waves in lattices, among others [16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32].

Figure 1: Schematic of the incident wave on a crack-tip with damage.

The concomitant issues dealing with the propagation of waves interacting with stationary cracks and rigid constraints, as well as surface defects, have been explored in [33,12,13,14,34,35,36,37,38,39,40,41,42]. It is noteworthy that the continuum limit, that is, the low-frequency approximation, of the scattering problem for a single crack [13,43] recovers the well-known solution of Sommerfeld [44,45]. With respect to the crack-tip geometry, note that the discrete scattering problems have been solved in [12,13,38,40,35] for atomically sharp crack tips. Typically, such situations of discrete scattering by crack surfaces are further complicated when the crack tip is endowed with some structure, as shown schematically in Fig. 1, due to the presence of a cohesive zone, partial bridging of bonds, etc., as commonly used in continuum mechanics [46,47,48]. The notion of a cohesive zone used in this paper is considered in a wider sense than in fracture mechanics (it is not aimed at eliminating any singularities, which do not arise in the discrete formulation). The zone simply emphasizes the fact that different links, subjected to a high-amplitude vibration near the crack tip, may undergo phase transition, damage and/or even breakage at different times, depending on the material properties (as manifested by the respective damage/fracture criteria [49]). As a result, a naturally created partial bridging and/or forerunning zone can be observed during crack propagation (see, for example, [30,50]).
The problem considered in this paper, in fact, becomes much more intractable when compared with the scattering due to an atomically sharp crack tip, which has been solved in [12,13] using the scalar Wiener-Hopf factorization [51,11]. As an example, it is shown that in the case of a partially bridged finite zone, the corresponding Wiener-Hopf problem becomes vectorial, as it involves a 2 × 2 matrix kernel which belongs to a formidable class [52,53,54,55]. In this paper, it is shown that a reduction to a scalar problem is possible, with the additional clause that it involves solving an auxiliary linear system of N × N equations, where N represents the size of the cohesive zone. Such a reduction resembles the one proposed for Wiener-Hopf kernels with exponential phase factors in the continuum case [52,53,54,55], and its recently investigated discrete analogue of scattering due to a pair of staggered crack tips [56,34]. It is also relevant to recall, for such kernels, an alternative but approximate approach based on asymptotic factorization [57,34].
Overall, the method proposed in the paper does allow, effectively, the construction of an exact solution, even in the general case of an arbitrary set of damaged links. The paper presents some numerical examples to demonstrate the effect of certain kinds of damaged links on the pattern of the scattered field. The expression obtained after an asymptotic approximation of the scattered field far away from the crack tip is also presented, as a perturbation over and above that for the atomically sharp crack tip obtained earlier in [12]. A careful analysis of the continuum limit [43] in the presence of damaged links, which demands adoption of a proper scaling, is relegated to future work. The question of the behavior of edge conditions vis-à-vis sharp cracks [58,51] is anticipated to be crucial in such an exercise.
As a summary of the organization and presentation of the main aspects of this paper, §1 gives the mathematical formulation of the scattering problem. §2 provides the exact solution of the Wiener-Hopf equation, modulo the reduction to an auxiliary linear system of N × N equations. §3 presents some special scenarios of the distribution of the damaged links, which either allow an immediate solution of the auxiliary equation or demonstrate the difficulty and richness of the problem by mapping it to a known class of problems. §4 gives the far-field behaviour away from the crack tip, as a perturbation in addition to that for a sharp crack tip, as well as some numerical examples. §5 concludes the findings of this paper. One appendix appears at the end, giving technical details of the application of the Wiener-Hopf method. For details of the theory of scattering and the Wiener-Hopf method we refer to [59,51], whereas the mathematical aspects of convolution integrals and Fourier analysis can be found in [60,61,62,63,64,65,66]. For the issues dealing with the difficult cases of matrix Wiener-Hopf problems, the reader is referred to [67,68,69,70,57,71,72,73].

1 Problem formulation

Let us consider a square lattice structure containing a semi-infinite crack with an additional structural feature near the crack tip. The bulk lattice is constructed with identical masses, m, situated at the points (x, y), x ∈ Z, y ∈ Z, and connected by elastic springs with stiffness c > 0 (see Fig. 1). The space coordinates are dimensionless and define the position of the corresponding mass, (x, y) = (x̃/a, ỹ/a) (normalised by the length a of the links between neighbouring masses). The displacement of the mass at each point is denoted by u_{x,y}(t).
The bonded interface between the two half-planes consists of a finite segment of distributed springs of stiffness {c_{−x}}_{x=−1}^{−N} (c_{−x} ≥ 0), connecting masses on the two sides of the interface at the corresponding values of the variable x (see Fig. 2). Note that some of the links can also be considered fully destroyed; thus, the geometry of the damage zone can be rather complex.
In the following, we will use the standard notation. We assume that an incident wave imposes an out-of-plane small deformation of the lattice. Here k_x, k_y ∈ R are wave numbers; also, sometimes we use k_x = k cos Θ, k_y = k sin Θ with k > 0 and Θ ∈ [−π, π]. The symbol A ∈ C is the complex dimensional amplitude of the wave. It is further assumed that ω = ω_1 + iω_2 (where ω_2 > 0 is an arbitrarily small number). The latter guarantees that the causality principle is addressed. Note that this implies k = k_1 + ik_2, where k_2 is small when ω_2 is small. We seek the harmonic solution to the problem in the form u^t_{x,y} = u^s_{x,y} + u^i_{x,y}, where u^s_{x,y} and u^i_{x,y} are the scattered part and the incident part, respectively. The following set of equations is valid in each part of the lattice structure outside the interphase (y ≥ 1 and y ≤ −2). Here ∆ is the discrete Laplace operator, with ∆u_{x,y} = u_{x+1,y} + u_{x−1,y} + u_{x,y+1} + u_{x,y−1} − 4u_{x,y} (see [74,11,12]), and in the following u_{x,y}(t) is written as u^t_{x,y}. The interphase consists of the two lines y = 0 and y = −1 (see Fig. 1). Let the damaged portion be denoted by the values of the coordinate x lying in D. Let us denote the Kronecker delta by the symbol δ; it is equal to unity when x ∈ D and zero otherwise. We also denote the discrete Heaviside function by H. As a result, for x ∈ Z, the conditions linking the top part of the lattice with the bottom one can be written down accordingly. The skew symmetry follows immediately, i.e., in general u^s_{x,−y−1} + u^s_{x,y} = 0, y ∈ Z_+. Hence, it is enough to look at y = 0, or at a difference of the equations (7) and (8). Let A be an appropriate annulus in the complex plane, the same as that stated in [12]. Taking into account the skew symmetry of the problem under consideration (see [12] and (9)), and applying the Fourier transform to equation (4) for the scattered waves in the upper half-space y ≥ 0, we obtain the transformed relation following Slepyan [11] and Sharma [12]. An analogous result can be obtained in the lower half-space (y ≤ −1). The details are identical to those for a crack without a damage zone, as provided in [12].
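The dispersion structure behind the bulk equation (4) follows from the action of the discrete Laplacian on a lattice plane wave: for u_{x,y} = exp(i(k_x x + k_y y)) one has ∆u = (2 cos k_x + 2 cos k_y − 4)u. A small numerical check of this identity (an illustrative sketch, not part of the paper) is:

```python
import numpy as np

def discrete_laplacian(u):
    """Five-point discrete Laplacian Delta u = u_{x+1,y} + u_{x-1,y}
    + u_{x,y+1} + u_{x,y-1} - 4 u_{x,y}, evaluated at interior points."""
    return (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
            - 4.0 * u[1:-1, 1:-1])

# Plane wave u_{x,y} = exp(i(kx*x + ky*y)) on a small grid.
kx, ky = 0.7, 1.1
x, y = np.meshgrid(np.arange(20), np.arange(20), indexing="ij")
u = np.exp(1j * (kx * x + ky * y))

lap = discrete_laplacian(u)
# The plane wave is an eigenfunction of the discrete Laplacian:
sigma = 2.0 * np.cos(kx) + 2.0 * np.cos(ky) - 4.0
assert np.allclose(lap, sigma * u[1:-1, 1:-1])
```

The same identity, applied after a Fourier transform in x, is essentially what leads to the function λ appearing in the transformed relations.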
Taking into account the condition (9) as well as (13) (in particular, u^F_1 = −u^F_{−2} = u^F_0 λ, with λ given by (14)), and simplifying further for z ∈ A, we arrive at the relation (19), where L(z) = h(z)/r(z), while P^s (and P^i) is a polynomial in z. Equation (19) is the Wiener-Hopf equation for the Fourier transforms of the bonds v^± in the cracked row (x ∈ Z^±). Inspection and comparison with the results for a single crack without a damage zone obtained in [12] reveal that the kernel remains the same, but an extra unknown polynomial is present on the right-hand side of the Wiener-Hopf equation.
Solution of the Wiener-Hopf equation
The relevant multiplicative factorization of the kernel L in (19) on the annulus A, i.e., L = L_+L_−, has been obtained in explicit form in Equation (2.27) of [12]. Using this fact, (19) can be rewritten in factorized form. Note that L_+ is analytic and non-vanishing for |z| > R_+, that L_− is analytic and non-vanishing for |z| < R_−, and that |z_P| > R_− in (10); recall also that D is defined in (5). Using the expressions from [13], L_+^{−1} can be expanded in a series whose first part is analytic outside a circle of radius R_+, while the second is analytic inside a circle of radius R_− in the complex plane. In the context of (27), this naturally yields an additive splitting of L_+^{−1}P^t into parts (31) which are analytic outside and inside the circles of radius R_+ and R_−, respectively. As a final step, following the analysis in [12] and using the expressions (24), (31) and (19), one arrives at (34), where χ is an arbitrary polynomial in z and z^{−1}; as in [12], the behaviour as z → ∞ fixes χ. Due to (34), the total field v (the total oscillatory field along the symmetry axis) is given by (37). In particular, expanding the second relation of (34) further, with |z| < R_−, and re-arranging (38), we get (39). Let P_D denote the projection of the Fourier coefficients of a typical f_−(z), |z| < R_−, onto the set D; then equation (39) leads to (40), which yields an N × N system of linear algebraic equations for {v^t_x}_D, i.e., for the unknowns {v^s_x}_D, since {v^i_x}_D are known in terms of the incident wave (2). Indeed, with the notation C_κ(p) denoting the coefficient of z^κ for a polynomial p of the form C_1 z + C_2 z^2 + …, the above equation can be written in the symbolic form Aχ = b (42), with A and b defined accordingly. Formally applying the inversion of the coefficient matrix in (42), i.e., χ = A^{−1}b, gives {v^t_x}_{x∈D}; substituting this expression back into (38), via (26), as well as into (34), leads to the complete solution of the Wiener-Hopf equation. Let ã_{νκ} denote the components of the inverse of A; the resulting expression is (44). The expression (44) has been verified using a numerical solution (based on the scheme described in Appendix D of [12]) of the discrete Helmholtz equation (4), with the assumed conditions on the crack faces, for several choices of the damaged links; we omit the graphical plots of the comparison as they are indistinguishable on the considered graph scale. Remark 1: When ω_2 (= Im ω) is positive, it follows from the Krein conditions that there exists a unique solution in square-summable sequences, since only a finite number N of damaged links is present. This is a statement along the lines of that provided by Sharma for the sharp crack tip [12,13] and of the rigorous results of Ando [75]. The limiting case as N → ∞ can be a different story altogether and is not pursued here.
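Once the coefficient matrix A and the right-hand side b of the symbolic system have been assembled, the remaining work is a standard dense linear solve. A schematic numpy sketch (the entries below are random placeholders, not the actual kernel coefficients of the paper):

```python
import numpy as np

def solve_auxiliary_system(A, b):
    """Solve the N x N auxiliary system A chi = b for the unknown
    coefficients {v^t_x} on the damage zone D. The matrix A and vector b
    are assumed to have been assembled elsewhere from the Wiener-Hopf
    kernel coefficients; placeholders are used here for illustration."""
    A = np.asarray(A, dtype=complex)
    b = np.asarray(b, dtype=complex)
    # Remark 1 guarantees a unique solution for Im(omega) > 0, i.e. A is
    # nonsingular; np.linalg.solve raises LinAlgError otherwise.
    return np.linalg.solve(A, b)

# Toy example with N = 3 (illustrative numbers only).
N = 3
rng = np.random.default_rng(0)
A = np.eye(N) + 0.1 * rng.standard_normal((N, N))
b = rng.standard_normal(N)
chi = solve_auxiliary_system(A, b)
assert np.allclose(A @ chi, b)
```

Substituting the solved coefficients back into the half-transform expressions then yields the complete field, as described above.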
Examples of specific damage zones.
Choosing different values of the coefficients c_{−j}, j ∈ [1, N], one can consider various damage zones. Some of them are discussed below.
Completely destroyed zone.
Consider the simplest case, when c_{−x} ≡ 0 (in fact, a degenerate choice of the left end of the cohesive zone). Then (40) reduces, using (29), to a special case of the complete exact solution given in [12] (see also Eq. (4.1b) in [13]). The detailed analysis and expressions of the solution based on the latter appear in [13], where the single crack was considered. This is natural, as this special case corresponds to a single (slightly longer) crack.
"Healthy" (no damage) zone.
For the case c_{−x} ≡ c, the extra equation (40) again arises, due to a "bad choice" of the origin (the cohesive crack tip) used to define the half-range Fourier transforms. Evidently, this case coincides with the previous one, c_{−x} ≡ 0, except for a shift in the origin from (0, 0) to (−N, 0) (a single, slightly shorter crack). Then (40) reduces, using (29), to an equation which, with the substitution z → z^{−1}, x → −x, agrees with the reference expressions from [12] and [13].
A zone with continuously distributed damage.
Let us consider a relatively general case that models real damage accumulation in the damage zone. In this case, one can reasonably assume that at the crack tip the stiffness of the interfacial zone is minimal (the damage is most pronounced), then increases monotonically and, finally, at the other end of the zone, takes the same magnitude as the non-damaged lattice. A typical representative of such an interface is the exponential distribution c_{−x} = c exp(αx/N), x ∈ D. The parameter α regulates the rate of damage accumulation. Note that α ≫ 1 and α ≪ 1 correspond to parts (a) and (b) of this section, respectively. In Fig. 3 we present an illustration of v^t_x given by (44) for N = 100. It is emphasized here that the graphical results for the same choice can be obtained using the numerical scheme (described in Appendix D of [12]), and these are found to coincide with the plot in Fig. 3(b).
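The exponential stiffness profile can be tabulated directly; a short sketch (illustrative only, with the normalisation c = 1 replaced by an explicit parameter):

```python
import math

def damage_stiffness(c, alpha, N):
    """Stiffness c_{-x} = c * exp(alpha * x / N) of the damaged link at
    site x, for x in D = {-N, ..., -1} (cf. Fig. 3(a))."""
    return {x: c * math.exp(alpha * x / N) for x in range(-N, 0)}

c, alpha, N = 1.0, 5.0, 100
prof = damage_stiffness(c, alpha, N)
values = [prof[x] for x in range(-N, 0)]

# Most damaged next to the crack tip, nearly intact at the other end.
assert abs(values[0] - c * math.exp(-alpha)) < 1e-12
assert values == sorted(values)  # monotonically increasing toward x = 0
```

Large α concentrates the stiffness drop near the crack tip, while α → 0 recovers the uniform, undamaged case discussed above.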
As one can see, the presence of a high gradient in the elastic properties of the cohesive zone significantly amplifies the local scattered field near the tip of the zone. As a result, pronounced damage should be expected exactly here, which is consistent with the assumptions. However, when α is close enough to zero, an opposite phenomenon occurs, as the gradient is now small while the jump in the material properties attains its maximum value (in fact, this is equivalent to the second case above). It is thus important to compare which part of the damage zone may be subjected to a higher risk of further damage. It is also evident that the angle of the incident wave θ may essentially influence the discussed effect. The respective graphical results for the ratio v_{−1}/v_{−N} are presented in Fig. 4, showing the impact of the incident wave frequency for two different normalised values, ω = 0.6 and ω = 1.2. As expected, large and small values of the parameter α, which determines the damage gradient inside the zone, change the effect significantly. Namely, for small values of α the left-hand end of the damage zone is impacted by higher amplitudes, and vice versa. At the right-hand end of the zone (in contact with the undamaged part of the lattice) the effect is less straightforward. Also, for incident waves parallel to the crack (θ = 0 and θ = π) the results are different; the first type can in fact be interpreted as the so-called feeding waves (see, for example, [30]) in the dynamic case.
In Fig. 5 we show in more detail the influence of large and small values of the parameter α. Exact values of the parameters are given in the captions of the respective figures.
Damage represented by a bridge crack.
Let N be even. In the following, we will use the standard notation Z_e and Z_o for the even and odd subsets of the set of integers. Consider the case when the intact links occupy the even sites of the zone (see Fig. 6). Here max |D ∩ Z_e| is N, which is replaced by 2M for convenience; thus the intact bonds on even sites in the cracked row begin at x = −2M. The difference of (7) and (8) becomes (51). Applying the Fourier transform (12) to equation (51), and taking into account the corresponding representations of the functions u and v, we obtain (58), where we have defined new plus and minus vector functions v^±(z) = (v^±_e, v^±_o). The components of the matrices A(z), B(z) and the right-hand side of equations (58)-(60) involve d(z), which has been defined already above in (56).

Plots for a range of α, with darker shade for smaller α.
Equation (58) can be rewritten in the equivalent form (61), where C(z) = A^{−1}(z)B(z). The matrix C possesses a structure which, in general, does not admit factorization by standard techniques for arbitrary N (except, perhaps, for N = 1).
On the other hand, as has been proven above, this special case can be reduced to the solution of N linear algebraic equations (see also [56]). For example, the problem with a cohesive zone of similar geometry in the continuous formulation [76] cannot be reduced to a scalar Wiener-Hopf problem and requires the application of other numerical techniques [57,71,72,77].
In Fig. 7 we show the ratio of the amplitudes at the two last points on the left-hand side of the damage zone to that on the right-hand side of the zone (x = 0). Exact values of the parameters are given in the captions of the respective figures, where we examine in more detail the impact of the frequency of the incident waves. In the context of the matrix kernel (61), with the distinguished presence of the off-diagonal factors z^{−2M} and z^{2M}, the reduction to linear algebraic equations obtained above is reminiscent of that proposed for Wiener-Hopf kernels with exponential phase factors, which appear in several continuum scattering problems in fluid mechanics and fracture mechanics [52,53,54,55], and of their discrete analogues in the form of scattering due to a pair of staggered cracks and rigid constraints [78,34,56], both based on an exact solution of the corresponding staggerless case [79,80,81,35].
Reconstruction of the scattered field
Let C be a contour in the annulus A. The scattered field u^s_{x,y} is recovered by the inverse Fourier transform along C, where v^F is given by (35). For y = 0, 1, 2, …, u^s_{x,−y−1} = −u^s_{x,y}, x ∈ Z, due to skew-symmetry, and the total wave field follows accordingly. Concerning the effect of the damage, using the decomposition v^F(z) = v^F_a(z) + v^F_P(z) in (35), it is easy to see that v^F_a(z) coincides with the solution given in [12], i.e., it describes the scattering due to the undamaged crack tip; thus, the effect of the damage zone is represented by the second term, v^F_P(z), in (35). The perturbation in the scattered field (63) induced by the damage zone is given by the contour integral (65), involving Λ_m as defined in (66). For ξ√(x² + y²) ≫ 1 and ω/c in the lower part of the pass band, where ξ ∼ ω/c is related to the wave number of the incident wave, a far-field approximation of the exact solution (63) can be constructed; an analogous result holds for ω/c ∈ (2, 2√2). It is sufficient for our purpose to focus on the effect of the damage zone D, so we investigate the far-field approximation of (65), i.e., mainly associated with the expression of Λ_m given by (66) for each m ∈ D. Following [12], the far-field approximation can be obtained using the stationary phase method [82]. The substitution z = e^{−iξ} maps the contour C into a contour C_ξ. In terms of polar coordinates (R, θ), the lattice point (x, y) is expressed as (x, y) = (R cos θ, R sin θ). Let Φ(ξ) = η(ξ) sin θ − ξ cos θ, with η(ξ) = −i log λ(e^{−iξ}). The function Φ in (68) possesses a saddle point [83,84] at ξ = ξ_S on C_ξ, with Φ′(ξ_S) = η′(ξ_S) sin θ − cos θ = 0 and Φ″(ξ_S) = η″(ξ_S) sin θ ≠ 0, which is the same as that discussed in [12]. Omitting the details of the calculations, it is found that

û_{x,y} ∼ (1/(2√π)) · ((1 + i sign η″(ξ_S))/(2c)) · λ^y(e^{−iξ_S}) e^{−iξ_S(x−1)} (R|η″(ξ_S)| sin θ)^{−1/2} Σ_{m∈D} c_{−m} v^t_m Λ_m(e^{−iξ_S}),   (69)

as ωR/c → ∞. The expression (69) has been verified using a numerical solution of the discrete Helmholtz equation (based on the scheme described in Appendix D of [12]); a graphical demonstration of the same is omitted in the paper.
Concluding remarks
We have shown how the scattering problem in a square lattice with a semi-infinite crack having a damage zone of arbitrary properties near the crack tip can be effectively solved by the Wiener-Hopf technique; we were able to reduce it to a scalar Wiener-Hopf problem. We have used a new method that utilises specific discrete properties of the system under consideration. It consists of solving an auxiliary N × N system of linear equations with a unique solution (Remark 1). The effectiveness of the method has been highlighted by some numerical examples and by the constructed asymptotic expression of the scattered field at infinity. Analysis of the solution near the two ends of the damage zone and at infinity can be used in non-destructive testing procedures, among other applications. The method may also be useful for solving other matrix Wiener-Hopf problems appearing in the analysis of the dynamics of discrete structures with defects. Indeed, the discrete scattering problem for the bridged damage zone has been written as a vectorial problem with a 2 × 2 matrix kernel and simultaneously transformed, by the aforementioned approach, into a scalar one (modulo the accompanying linear algebraic equation). This gives hope for building a closed-form standard procedure that allows effective factorisation of similar matrices of arbitrary size.
Figure 2: (a) Schematic of the incident wave parameters relative to the typical contours for the square lattice dispersion relation. (b) Geometry of the square lattice structure and the notation, for the number of damaged sites N = 5.
Figure 3: (a) Illustration of c_{−x}, with c_{−x} = c exp(αx/N), x ∈ D. (b) Illustration of the (total) v_x given by (37) for N = 100. The curves in blue and red correspond to the minimum and maximum values of α, respectively.
Figure 6: Geometry of the square lattice structure with a partially open crack tip and N = 2M = 6.
Long-Range Capture and Delivery of Water-Dispersed Nano-objects by Microbubbles Generated on 3D Plasmonic Surfaces
The possibility of investigating small amounts of molecules, moieties, or nano-objects dispersed in solution constitutes a central step for various application areas in which high sensitivity is necessary. Here, we show that the rapid expansion of a water bubble can act as a fast-moving net for molecules or nano-objects, collecting the floating objects in the surrounding medium in a range up to 100 μm. Thanks to an engineered 3D patterning of the substrate, the collapse of the bubble could be guided toward a designed area of the surface with micrometric precision. Thus, a locally confined high density of particles is obtained, ready for evaluation by most optical/spectroscopic detection schemes. One of the main relevant strengths of the long-range capture and delivery method is the ability to increase, by a few orders of magnitude, the local density of particles with no changes in their physiological environment. The bubble is generated by an ultrafast IR laser pulse train focused on a resonant plasmonic antenna; due to the excitation process, the technique is trustworthy and applicable to biological samples. We have tested the reliability of the process by concentrating highly dispersed fluorescent molecules and fluorescent beads. Lastly, as an ultimate test, we have applied the bubble clustering method on nanosized exosome vesicles dispersed in water; due to the clustering effect, we were able to effectively perform Raman spectroscopy on specimens that were otherwise extremely difficult to measure.
S.I. 1 Optical setup
Sketches of the optical setups for the white light and fluorescence images:
S.I. 2 Trapping of objects during the rapid expansion of the bubble
It is well known that colloidal particles and bacterial strains tend to be trapped at the air-water interface. 1,2 Interfacial accumulation of colloidal particles results from surface-tension effects. In fact, as long as two liquids or fluids are immiscible, it is thermodynamically favorable for a particle to adsorb to the interface, no matter whether the particle is hydrophobic or hydrophilic (although hydrophobic interactions promote the accumulation). 1,2 Once a particle has been located at an infinitely large interface, the energy gain depends on r_p, the radius of the particle, γ_wa, the interfacial energy of the water-air interface, and θ, the wetting angle of the particle. In the particular case of a micrometer-sized expanding bubble, there are three forces at play, namely the component of the surface-tension force along the radius of the bubble, F_s, the Laplace-pressure force from inside the bubble, F_p, and the drag force F_d. At equilibrium we have the condition F_s + F_p + F_d = 0. From the Laplace pressure, it can be shown 3,4 that F_p depends on the bubble radius R and on α, the half-angle between the particle's center and the particle-liquid-gas line of contact. The drag force F_d due to the rapid bubble expansion can be calculated from Stokes' law, F_d = 6πηr_p v, where η is the viscosity of water and v is the relative velocity between the particle and the fluid. Considering the values r_p ≈ 2 · 10⁻⁷ m, v ≈ 100 μm/s and R ≈ 10⁻⁵ m from our experiments, it results that F_d ≈ 4 · 10⁻¹³ N, significantly smaller than the Laplace-pressure force F_p ≈ 1.8 · 10⁻⁹ N and the surface-tension force F_s. Therefore the equilibrium is given by the condition F_s + F_p ≈ 0. The Laplace-pressure force F_p pushes the particle out of the bubble, and the surface-tension force F_s acts as a restoring force directed toward the center of the bubble, bringing the particle back to equilibrium.
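The order-of-magnitude comparison above can be reproduced directly from Stokes' law. In the sketch below, the particle radius r_p = 2 · 10⁻⁷ m is an assumed value, chosen to be consistent with the quoted F_d ≈ 4 · 10⁻¹³ N; the other values are as quoted in the text:

```python
import math

# Order-of-magnitude comparison of the forces in S.I.2.
eta = 1.0e-3   # Pa*s, dynamic viscosity of water
r_p = 2.0e-7   # m, particle radius (assumed; consistent with quoted F_d)
v   = 100e-6   # m/s, relative particle-fluid velocity (quoted)

F_d = 6 * math.pi * eta * r_p * v   # Stokes drag
F_p = 1.8e-9                        # N, Laplace-pressure force (quoted)

assert 3e-13 < F_d < 5e-13   # matches the quoted F_d ~ 4e-13 N
assert F_d / F_p < 1e-3      # drag is negligible in the force balance
```

Because F_d is some four orders of magnitude below F_p, dropping it from the equilibrium condition, as done in the text, is well justified.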
For a very large bubble radius R, when also the Laplace pressure can be neglected (F_p = 0), the radial component of the surface tension force F_s = 0 and α = θ.
As explained above, the drag force F_d cannot contribute to the detachment of particles; it actually contributes to the accumulation. In fact, in the case of a static interface, the accumulation of particles is mainly a diffusion-limited process, depending on the particle arrival rate at the interface. In the presence of a moving interface, as for the rapidly expanding bubble investigated in this manuscript, the particle arrival rate is strongly increased. The effectiveness in trapping particles increases with the bubble expansion velocity v₀ and the particle radius r_p. If we assume that the particle is subject only to the Stokes force F_S = 6πη r_p (u(r,t) − ṙ), with η the dynamic viscosity and r_p and ṙ the particle radius and velocity, the equation of motion of the particle can be written as:

m r̈ = 6πη r_p (u(r,t) − ṙ)

To evaluate the dynamics of particles in the proximity of the expanding bubble, we can assume r ≈ R(t) and u(r,t) ≈ v₀. The solution is:

r(t) = r(0) + v₀ t − (v₀/a)(1 − e^(−a t)), with a = 9η/(2ρ_p r_p²)

where r(0) is the particle position at t = 0 and ρ_p is the particle density.
For the rapidly expanding bubble to reach and trap the particle, R(t) = v₀ t > r(t), which leads to the simple relation r(0) < v₀/a = 2v₀ρ_p r_p²/(9η) ≡ r_trap. Particles starting below r_trap are reached and trapped, while those above r_trap are accelerated to v₀.
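The trapping criterion can be verified against a direct integration of the particle's equation of motion, using the closed-form solution r(t) = r(0) + v₀t − (v₀/a)(1 − e^(−at)). A minimal sketch in nondimensional units (a = v₀ = 1, so r_trap = 1; these rescaled values are illustrative choices, not experimental parameters):

```python
import math

# Nondimensional units (illustrative, not experimental values):
# a = 1 (inverse Stokes relaxation time), v0 = 1 (front speed),
# so the trapping radius r_trap = v0/a = 1.
a, v0 = 1.0, 1.0
r_trap = v0 / a

def analytic(r0, t):
    # Closed-form solution r(t) = r(0) + v0*t - (v0/a)*(1 - exp(-a*t))
    return r0 + v0 * t - (v0 / a) * (1.0 - math.exp(-a * t))

def integrate(r0, t_end, dt=1e-4):
    # Explicit integration of r'' = a*(v0 - r') for a particle starting at rest
    r, rdot, t = r0, 0.0, 0.0
    while t < t_end:
        rdot += a * (v0 - rdot) * dt
        r += rdot * dt
        t += dt
    return r

t_end = 10.0
front = v0 * t_end  # bubble front position R(t) = v0*t
for r0 in (0.5, 2.0):
    status = "trapped" if front > analytic(r0, t_end) else "escaped"
    print(f"r(0)={r0}: analytic={analytic(r0, t_end):.4f}, "
          f"numeric={integrate(r0, t_end):.4f} -> {status}")
# r(0)=0.5 < r_trap: the front overtakes the particle ("trapped")
# r(0)=2.0 > r_trap: the particle stays ahead of the front ("escaped")
```

At long times the analytic solution lags the front by exactly v₀/a, so the front catches the particle if and only if r(0) < r_trap.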
S.I. 3 Analytical simulation details
To simulate numerically the collapse of a gas bubble in liquid water and the fluid flow in the vicinity of a micrometer-sized structure, we use the classical equations of fluid dynamics coupled to the "level set method" for the dynamics of the interface, implemented in the OpenFOAM free CFD software. We perform two-dimensional simulations assuming the liquid to be an incompressible Newtonian fluid, and take into account the compressibility of the gas in the bubble by regulating the bubble pressure in agreement with the ideal gas law, once the initial external pressure P_ext, bubble pressure P₀, and volume V₀ are fixed.
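The ideal-gas closure for the bubble pressure can be sketched as follows, assuming an isothermal process (P·V = P₀·V₀) and, for simplicity, a spherical bubble; the initial pressure of one atmosphere is an assumed value, not one stated in the text:

```python
import math

def bubble_pressure(P0, V0, V):
    """Isothermal ideal-gas closure: P*V = P0*V0, so P = P0*V0/V."""
    return P0 * V0 / V

r0 = 3e-6                      # m, initial bubble radius from the text
P0 = 101325.0                  # Pa, an assumed initial bubble pressure
V0 = (4.0 / 3.0) * math.pi * r0**3

# Pressure falls as the bubble expands and rises as it collapses
for r in (r0, 2 * r0, 0.5 * r0):
    V = (4.0 / 3.0) * math.pi * r**3
    print(f"r = {r * 1e6:.1f} um -> P = {bubble_pressure(P0, V0, V):.0f} Pa")
```

Doubling the radius multiplies the volume by eight and so divides the bubble pressure by eight; the solver would re-evaluate this closure at each time step from the tracked bubble volume.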
The initial bubble radius is r₀ = 3 μm, and the antenna and deflecting wall are both 4 μm high. We initially set the system at equilibrium with P₀ =
Phosphate depletion modulates auxin transport in Triticum aestivum leading to altered root branching
Summary When grown in a low-phosphate environment Triticum aestivum showed reduced basipetal auxin transport and altered root PIN and AUX/IAA expression profiles, with a concurrent reduction in root branching density.
Introduction
The plasticity of root system architecture in response to environmental cues is a crucial component of a plant's nutrient foraging capacity. The production of lateral root branches is genetically controlled and may increase root surface area in nutrient-rich soil (Drew, 1975;Linkohr et al., 2002), or enable the exploration of a greater soil volume by lateral growth through the topsoil in nutrient-poor soil (Linkohr et al., 2002;Zhu and Lynch, 2004). An example of this process is the acquisition of inorganic phosphate (Pi), in which the production of lateral roots is crucial for Pi accumulation in some plants (Lynch, 2011). The factors controlling root branching to form new lateral roots are therefore of great interest, and are the focus of this study.
Phosphate is an essential plant nutrient required for photosynthesis and a key building block in biological molecules such as nucleic acids and phospholipids. The concentrations of Pi in soil solution are, however, typically very low, due to Pi's propensity to bind strongly to soil surfaces or to form insoluble complexes with cations (Norman and Hemwall, 1957). This means that Pi is often a limiting factor in plant growth and development, and has resulted in a large number of developmental traits amongst plant species that can enhance Pi uptake. Physiologically these include the modulation of root elongation (Sánchez-Calderón et al., 2005), branching (Linkohr et al., 2002; López-Bucio et al., 2002), and root hair density (Ma et al., 2001). The root system may also act to enhance Pi uptake by exuding protons (Hinsinger, 2001), organic acid anions (Ryan et al., 2001), and phosphatases (Tadano and Sakai, 1991) into the rhizosphere, or by the formation of symbioses with arbuscular mycorrhizas or ectomycorrhizas (Péret et al., 2011; Smith et al., 2011). Understanding the mechanisms controlling these traits is therefore of great importance in the pursuit of improved crop Pi uptake. The wheat crop is a major cereal source for the world's expanding population, and this work investigates the response of the root system of the crop plant spring wheat (Triticum aestivum) to Pi deficiency.

Work on the model plant Arabidopsis thaliana has been very successful in determining the sequence of molecular and cellular processes behind lateral root production. Primed pericycle founder cells, formed in the basal root meristem and located opposite xylem poles (Dolan et al., 1993), undergo several rounds of ordered asymmetric cell division to form dome-shaped lateral root primordia (LRP), which then emerge from the parent root (Dubrovsky et al., 2001, 2008, 2011; De Smet et al., 2006; De Rybel et al., 2010; Moreno-Risueno et al., 2010).
The spatial distribution of lateral root production is a tightly controlled process, in which the phytohormone auxin plays a key role.
At the root apex, auxin distribution is tightly controlled by the differential expression and subcellular localization of the AUXIN RESISTANT (AUX) and PIN-FORMED (PIN) auxin carrier proteins, which mediate influx and efflux, respectively, in a process known as polar auxin transport (PAT) (Palme and Gälweiler, 1999). The protein AtPIN1 unloads leaf-derived auxin from the vascular tissue into the root apical meristem (RAM), where AtPIN3, AtPIN4, and AtPIN7 proceed to create auxin maxima in both the quiescent centre cells at the heart of the RAM and in the columella root cap distal to it (Friml et al., 2002a, b, 2003). The expression of AtPIN2 and AtAUX1 in lateral root cap cells and of AtPIN2 in epidermal cells then drives a basal flow of auxin on the root periphery. This basipetal transport of auxin in the lateral root cap and epidermis is crucial for auxin accumulation in the basal portion of the RAM. This is the driver of both gravitropism (Abas et al., 2006) and, importantly for this study, lateral root formation (Casimiro et al., 2001; De Smet et al., 2006). In the basal RAM, basipetally transported auxin accumulates in groups of pericycle cells, which have been specified by oscillating gene expression, to form primed pericycle founder cells (Dubrovsky et al., 2001, 2008, 2011; De Smet et al., 2006; De Rybel et al., 2010; Moreno-Risueno et al., 2010). These founder cells retain many cytological features characteristic of meristematic cells (dense cytoplasm, large nuclei, and small vacuoles) and maintain a level of multipotency whilst the remainder of the root tissue differentiates around them (Dubrovsky et al., 2008; Parizot et al., 2008). Genes related to the cell cycle are subsequently triggered in these founder cells, and so the cell division events which drive the formation of LRPs are also induced by auxin (Himanen, 2002; Himanen et al., 2004; Dubrovsky et al., 2008).
The majority of auxin signal transduction is known to require three major protein components: AUX/IAA transcriptional repressors (Abel and Theologis, 1996), AUXIN RESPONSE FACTOR (ARF) transcriptional activators (Guilfoyle and Hagen, 2007), and the Skp1-cullin-F box protein E3 ubiquitin ligase (SCF) and its F box component TRANSPORT INHIBITOR RESPONSE 1 (TIR1) (Dharmasiri et al., 2005; Kepinski and Leyser, 2005). Briefly, in the absence of auxin, AUX/IAAs bind to ARFs and prevent them from activating transcription of auxin-responsive genes. Auxin acts as a molecular glue, stabilizing the direct interaction between TIR1 and the AUX/IAA (Dharmasiri et al., 2005; Kepinski and Leyser, 2005). This enables the SCF complex to ubiquitinate the AUX/IAA, targeting it for degradation (Gray et al., 2001), and thus allows the ARF to activate transcription (Dharmasiri et al., 2005; Kepinski and Leyser, 2005). Auxin signalling also involves a negative feedback loop, whereby AUX/IAA genes are among those whose expression is activated by ARFs: thus auxin signal transduction is a tightly restricted process, and AUX/IAA expression correlates well with increased auxin concentrations (Vanneste and Friml, 2009). A number of AUX/IAA genes have been linked to lateral root initiation and development in Arabidopsis: AtIAA28 regulates founder cell specification (De Rybel et al., 2010), AtIAA14 regulates the asymmetric divisions that form the first committed steps in lateral root production (De Smet et al., 2010), and AtIAA12 and AtIAA13 also participate in lateral root development subsequent to AtIAA14 (Goh et al., 2012).
Greater production of lateral roots has been shown to increase Pi acquisition efficiency substantially (Zhu and Lynch, 2004), and there is variation amongst plant species in how lateral root production is used to maximize Pi uptake in low-Pi environments (Niu et al., 2013). Arabidopsis, Brassica nigra, and Hordeum vulgare root systems have been reported to respond to homogeneously low-Pi environments by promotion of lateral growth at the expense of vertical growth (Linkohr et al., 2002; Huang et al., 2008). Here the primary RAM terminally differentiates (Sánchez-Calderón et al., 2005), resulting in the cessation of root growth, and an increase in the frequency of lateral root initiation and lateral root elongation (Linkohr et al., 2002; López-Bucio et al., 2002). However, conflicting reports demonstrate that, subsequent to longer term exposure to low-Pi conditions, Arabidopsis, H. vulgare, and Phaseolus vulgaris root systems show reductions in lateral root branching density (Drew, 1975; Borch et al., 1999; Nacry et al., 2005). In Arabidopsis, this temporal contrast is proposed to be caused by low-Pi conditions stimulating the emergence of existing LRPs, yet reducing the overall number of primordia generated (Nacry et al., 2005). A contrast can, however, be drawn between Arabidopsis and H. vulgare in their reactions to localized areas of high soil Pi: Arabidopsis shows no branching response to these Pi patches (Linkohr et al., 2002), whereas H. vulgare responds by significantly increasing branching frequency (Drew, 1975). This difference between the branching responses of these dicot and monocot species to Pi supply highlights the potential hazards of extrapolating developmental responses to nutrient availability between species that differ in their morphology, physiology, and phylogenetic history. Monocot cereals have a fine fibrous root system composed of multiple seminal and crown roots, rather than a tap root.
This results in greater exploration of the topsoil than in the tap root system of the model plant Arabidopsis; therefore, the cereal root system as a whole encounters a more diverse range of nutritional environments (Hodge, 2009). This is especially important for Pi given its lack of mobility in soil solution.
This study uses spring wheat (Triticum aestivum) as a model to investigate how cereal root systems respond to variable Pi availability at a molecular and physiological level. This crop was selected because of its agronomic importance, the inaccuracy of extrapolating responses between species, and the lack of studies focused on the molecular mechanisms behind such processes in cereals. Despite its crucial role in global food production, the complex nature of the T. aestivum genome means that, until recently, very few studies have focused on the molecular basis of its developmental plasticity.
Growing conditions
Triticum aestivum L. (cv. Paragon) seeds were surface-sterilized for 5 min in a solution containing 10% Na hypochlorite and 0.01% Tween-20 (w/v). These seeds were then germinated on autoclaved tissue paper, moistened with sterile de-ionized water, for 3 d. The resulting seedlings were then planted in 50 ml polypropylene tubes filled with autoclaved, washed quartz sand, and the whole system was watered to field capacity with an adapted Hoagland's nutrient solution (Hoagland and Arnon, 1950). Water losses due to plant uptake and evaporation reduced the water content of the sand by around one-third over 24 h; therefore, it is unlikely that flooding of the root system could be a confounding factor. Sand culture was used to minimize solid phase phosphorus interactions and the release of native phosphorus from soil organic matter, and to facilitate recovery of intact root systems. The Hoagland's solution contained: 5 mM KNO₃; 5 mM Ca(NO₃)₂; 2 mM MgSO₄; 765 nM ZnSO₄; 320 nM CuSO₄; 46.3 μM H₃BO₃; 497 μM Na₂MoO₄; 9.14 μM MnCl₂; 1 mM NH₄NO₃; 38.7 μM Fe.EDTA; and either 500 μM KH₂PO₄ (high-Pi) or 5 μM KH₂PO₄ and 495 μM KCl (low-Pi) (all Sigma Aldrich, Poole, UK). The solution pH was adjusted to 6.0 and autoclaved before use. Solutions containing the synthetic auxin 2,4-dichlorophenoxyacetic acid (2,4-D) and the PAT inhibitor 2,3,5-triiodobenzoic acid (TIBA) (both Sigma Aldrich) were filter-sterilized (0.2 μm) and then applied to the nutrient solution after autoclaving. 2,4-D was used rather than the endogenously produced auxin indole acetic acid (IAA) due to its increased stability over the time span of the experiment. The tubes were kept in a climate-controlled cabinet at 20 °C, 70% humidity, 16 h/8 h day/night cycle, and light intensity (PAR) of 500 μmol m⁻² s⁻¹, with a randomized layout. The tubes were re-watered daily to field capacity with the relevant nutrient solution.
Seven days after planting (10 d after germination), the tubes were emptied and the roots were washed to remove any sand prior to subsequent analysis. Depending upon the experiment, the root systems were then either assayed for their lateral root branching density and longest lateral root length, or 3 cm lengths of five seminal roots and five lateral roots per sample were frozen in liquid N 2 for future quantitative PCR (qPCR) analysis. Root branching frequency was assayed on washed roots by first measuring the distance between the oldest and the newest, emerged, lateral root, and then counting the frequency of lateral root branches from the seminal root axis. Only the longest seminal root of the 3-5 present per plant was used. The longest lateral root length was determined using a ruler. The initial seminal root growth rate was measured under the same conditions as described above: each seedling's seminal roots were, however, measured prior to planting, and then seedlings were harvested 24 h later and the seminal roots re-measured to find the daily growth rate. All statistical significance testing was performed using Student's t-test on MS Excel.
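The branching-density assay and significance test described above amount to a short calculation; the following is a minimal Python sketch using hypothetical lateral-root counts and branching-zone lengths (illustrative only, not measured data). The equal-variance two-sample t statistic mirrors what the Student's t-test in Excel computes:

```python
from statistics import mean, variance

def branching_density(n_laterals, zone_length_cm):
    """Emerged lateral roots per cm of branching zone (oldest to newest lateral)."""
    return n_laterals / zone_length_cm

def t_statistic(a, b):
    """Two-sample Student's t statistic with pooled (equal) variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1.0 / na + 1.0 / nb)) ** 0.5

# Hypothetical (lateral count, branching-zone length in cm) pairs per plant
high_pi = [branching_density(n, l) for n, l in [(24, 6.0), (30, 7.5), (27, 6.5)]]
low_pi = [branching_density(n, l) for n, l in [(12, 6.2), (15, 7.0), (11, 6.0)]]

print("high-Pi densities:", [round(d, 2) for d in high_pi])
print("low-Pi densities: ", [round(d, 2) for d in low_pi])
print("t =", round(t_statistic(high_pi, low_pi), 2))
```

The resulting t statistic would then be compared against the t distribution with n_a + n_b − 2 degrees of freedom to obtain the P-value, as the spreadsheet test does internally.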
Quantitative reverse transcription-PCR (RT-PCR)
RNA was extracted from the liquid N₂-frozen harvested roots. Briefly, the first 1 cm of root tip was excised using a scalpel and 10 root tips were pooled per extract. Each root tip was excised from a separate plant, and each pool of 10 was treated as one biological replicate. These were flash-frozen in liquid N₂ and then the RNA was extracted, using a GeneMATRIX RNA/miRNA purification kit (Roboklon, Berlin, Germany) as per the manufacturer's instructions. The dART RT kit (Roboklon) was then used to construct cDNA from this RNA extract using oligo d(T) primers. The target genes for qPCR analysis were obtained by performing a tblastn search of both the NCBI and the TIGR online databases using the protein sequences of AtPIN2 and AtIAA2. These sequences are referred to herein as: IAA2 (GenBank: CK213604), IAA3 (GenBank: CK170519), IAA4 (GenBank: CK163783), IAA5 (GenBank: BI751049), IAA6 (GenBank: AK332471), IAA7 (GenBank: AK331670), IAA8 (GenBank: AK330790), PIN3 (GenBank: CK208792), and PIN4 (GenBank: CK208849). These candidates, alongside the previously uploaded sequences of IAA1 (GenBank: AJ575098), PIN1 (GenBank: AY496058), and PIN2 (GenBank: BK005137), were assessed to ensure that they all contained characteristic PIN or AUX/IAA domains (Supplementary Figs S1, S2 available at JXB online) by first using the ExPASy translate tool (web.expasy.org) to determine their amino acid sequences, aligning them using MUSCLE (www.ebi.ac.uk), then using TMPred (Hofmann and Stoffel, 1993) to predict transmembrane helices. Primers were designed using the NCBI primer blast tool (Supplementary Table S1), produced by Eurofins (Eurofins MWG-Operon, Ebersberg, Germany), and tested for specificity by performing standard PCR on cDNA extracts and performing electrophoresis on agarose gels.
Quantitative RT-PCR was performed using a thermocycler (Applied Biosystems, Life Technologies Ltd, Paisley, UK) and SYBR Green qPCR mix (Roboklon), and normalized to actin (GenBank: AB181991) and tubulin (GenBank: U76558) controls performed using primer pairs published by Teng et al. (2013) and Zhang et al. (2012). Normalization was performed by dividing the relative expression values for each sample by the square root of the product of that sample's actin and tubulin relative expression values. A further set of quantitative RT-PCR assays was performed on cDNA extracted from the root tissue of plants submerged in high-Pi media ±1 μM 2,4-D for 1 h, having been grown to 10 d after germination as above, to ensure that their expression showed the auxin-responsive increase in transcription expected of AUX/IAA genes (Supplementary Fig. S3). All statistical significance testing was performed using Student's t-test on MS Excel.
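The reference-gene normalization described above is simply a division by the geometric mean of the two reference values; a minimal sketch with hypothetical relative-expression values (illustrative only):

```python
from math import sqrt

def normalize(target, actin, tubulin):
    """Divide the target's relative expression by the geometric mean
    (square root of the product) of the actin and tubulin reference values."""
    return target / sqrt(actin * tubulin)

# Hypothetical relative-expression values for one sample
print(normalize(8.0, 2.0, 2.0))  # references agree: 8.0 / 2.0 = 4.0
print(normalize(8.0, 1.0, 4.0))  # geometric mean of 1 and 4 is 2: also 4.0
```

Using the geometric rather than arithmetic mean of the reference genes keeps the normalization symmetric in the two references and less sensitive to one reference drifting high or low.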
[14C]IAA transport assays
PAT assays were conducted by a method adapted from Mishra et al. (2009). Seeds were surface-sterilized as previously described, and then germinated in Petri dishes containing the high-Pi Hoagland's solution (described above) solidified with 10% Agar agar (Sigma Aldrich). Environmental conditions were as described earlier. Split Petri dishes were then created, with one half containing the high- or low-Pi Hoagland-agar medium previously described, and the other half containing either high- or low-Pi Hoagland-agar medium supplemented with 50 nM of the endogenous auxin IAA labelled with 14C (American Radiolabelled Chemicals Inc., St. Louis, MO, USA). Two days after germination, seedlings were transferred to these split plates so that the first 1 mm of the longest seminal root's tip was in contact with the agar containing [14C]IAA, with a 1 mm gap between the [14C]IAA-containing agar and the non-radioactive agar with which the remainder of the root system was in contact; the [14C]IAA-containing agar did not contact the non-radioactive agar. These seedlings were left for 1 h at 20 °C. The roots were then dissected so that 2 × 2 mm sections were taken from immediately behind the 1 mm that had been in contact with the [14C]IAA-containing agar. These sections were oven-dried at 105 °C for 24 h, and their 14C content was then determined with an OX-400 Biological Sample Oxidizer (RJ Harvey Instrument Corp., Hillsdale, NJ, USA), with the evolved 14CO₂ collected in Oxosol scintillation fluid (National Diagnostics, Hessle, UK). Four root sections were pooled per replicate to ensure a sufficient 14C signal, with three such replicates performed per treatment. 14C was then quantified using a Wallac 1404 scintillation counter (Wallac EG&G, Milton Keynes, UK). The ratio of [14C]IAA content between the 2 mm section closest to the root tip and the 2 mm section immediately basal to it was used as an approximate estimate of relative auxin flow.
These values were scaled to be proportionate to the value for the high tip-Pi, high basal-Pi environment. Replicates exposed only to agar containing no added [14C]IAA displayed no measurable 14C signal. Statistical significance testing was performed using Student's t-test on MS Excel, and two-way analysis of variance (ANOVA) in SPSS.
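The transport measure and its scaling reduce to a pair of ratios; a minimal sketch with hypothetical scintillation counts (the numbers are illustrative only, not measured data):

```python
def relative_transport(tip_dpm, basal_dpm):
    """Ratio of 14C in the 2 mm section nearest the tip to the 2 mm
    section immediately basal to it (a proxy for relative auxin flow)."""
    return tip_dpm / basal_dpm

# Hypothetical scintillation counts (dpm) per pooled section
control = relative_transport(900.0, 600.0)    # high tip-Pi / high basal-Pi
treatment = relative_transport(600.0, 600.0)  # a second Pi combination

# Each treatment is expressed as a proportion of the control ratio,
# so the control scales to 1.0 by construction
print(round(control, 3), round(treatment / control, 3))
```

Expressing every treatment relative to the high-Pi/high-Pi control removes between-batch differences in absolute label uptake from the comparison.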
Triticum aestivum branching frequency reduces in low-Pi environments yet remains auxin sensitive
As previously demonstrated for H. vulgare root systems (Drew, 1975), T. aestivum seminal roots produced a lower frequency of lateral roots in low-Pi environments than when exposed to high concentrations of Pi (Fig. 1A, B). Alongside this observation, initial seminal root growth rates were unaffected by environmental Pi supply ( Fig. 2A, B, E), whereas low-Pi conditions resulted in a significant limitation in maximum lateral root length (Fig. 2C, D, F).
Seedlings grown in low-Pi media supplemented with 1 μM 2,4-D showed a significant recovery in root branching frequency, demonstrating that they retained the capacity to respond to exogenous auxin (Fig. 1C, D). Interestingly, seedlings grown under high-Pi and at this dosage of 2,4-D demonstrated a drastic reduction in lateral root elongation, a characteristic of auxin application, whereas the low-Pi+2,4-D seedlings showed levels of lateral root elongation more similar to the no-auxin controls (Fig. 1A, B). The inclusion of 100 μM TIBA (an auxin transport inhibitor) in the growth media showed that inhibition of auxin transport could severely reduce lateral root outgrowth (Fig. 1E), a similar response to that found in other plant species (Karabaghli-Degron et al., 1998). The 1, 5, and 50 μM TIBA treatments allowed lateral root outgrowth, whilst also showing no significant effect of environmental Pi concentration on lateral root density. Therefore these data suggest that unimpeded PAT is required for Pi-mediated modulation of lateral root density.
Expression of putative AUX/IAA genes is perturbed in response to environmental Pi
Bioinformatic analyses identified eight matches whose predicted protein sequences showed high similarity to the AtIAA2 probe used. These sequences all demonstrated domains III and IV characteristic of AUX/IAA sequences, and either also possessed domains I and II or were incomplete sequences (Supplementary Fig. S2 at JXB online). Expression levels of three of the seven identified potential AUX/IAA genes were significantly altered by the Pi status of the growth media (Fig. 3). The expression of IAA1, IAA4, and IAA7 was significantly up-regulated under low-Pi conditions (Fig. 3), which contrasts with the reduced sensitivity of root elongation to exogenous auxin (Fig. 1C, D). However, the expression of IAA3 was significantly reduced under low Pi.
Basipetal auxin flow is reduced under phosphorus starvation, as is the expression of putative PINs
Radiolabelled [ 14 C]IAA was used to assess the root's capacity to transport auxin basipetally from the root apex. The results in Fig. 4A show that there was a significant reduction in basipetal auxin flow when the root tip was in contact with low-Pi medium compared with that containing high-Pi, whatever the basal medium Pi content. Furthermore, two-way ANOVA shows that both root tip Pi supply and basal root Pi supply have significant impacts on this measure of basipetal auxin flow, with a significant interaction between the two factors (P<0.001).
The database searches for PIN auxin transporter sequences identified two new sequences, alongside the previously annotated TaPIN1 and TaPIN2, whose predicted amino acid sequences were highly similar to the AtPIN2 probe sequence used. Quantitative RT-PCR measurements made on these sequences also showed reduced expression of PIN3 and PIN4 in seedlings grown in low-Pi media (Fig. 4B; Supplementary Fig. S1 at JXB online). This down-regulation of PIN gene expression, coupled with the reduction in [14C]IAA flow, provides evidence that auxin transport capacity was significantly altered in T. aestivum roots in response to low-Pi environments.
Auxin fluxes in the root tip are affected by Pi availability, potentially driving alterations in root branching
The results presented herein shed new light on how T. aestivum roots integrate phosphorus availability into the processes driving lateral root production. Auxin is well established as a key component in the control of lateral root production. The basal flow of auxin in the lateral root cap and epidermis, and its subsequent accumulation in pericycle founder cells, is thought to drive lateral root branching and elongation (Dubrovsky et al., 2001(Dubrovsky et al., , 2008De Smet et al., 2006), with disruption of this process inhibiting lateral root production ( Fig. 1E; Casimiro et al., 2001).
The results in Fig. 4A demonstrate that when T. aestivum roots are in a low-Pi environment the basal auxin flow is greatly reduced, and this potentially causes the reduced lateral root density observed in Fig. 1A. Root tip contact with low-Pi environments has previously been shown to have the capacity to drive the remodelling of a plant's root system architecture (Svistoonoff et al., 2007). The alterations in the root expression profile of AUX/IAA genes caused by the level of Pi supply shown in the present study (Fig. 3) point to a remodelling of the auxin response profile within the root system. The experiments performed here demonstrate a modulation of basipetal PAT, and the transcriptional regulation of TaPIN genes, which could potentially produce this altered auxin response profile. There are several downstream steps where PAT could be modulated further, such as PIN endosomal cycling (Geldner et al., 2001; Huang et al., 2010) or MDR/PGP-PIN interaction (Blakeslee et al., 2007). However, the results in Fig. 4A provide evidence that there is a net effect of environmental Pi level on PAT when perceived both at the root apex and in basal portions of the root. These experiments do not provide evidence of how this effect on PIN transcription is enacted. However, given that a measurable difference in auxin flow occurs in previously Pi-sufficient plants within 1 h, the implication is that a signalling process produces this effect rather than a longer term nutrient-shortage response.

Fig. 4. Polar auxin transport and expression of candidate genes for PIN-FORMED (PIN) auxin carrier proteins are down-regulated in low Pi. (A) Relative IAA transport rates measured using a [14C]IAA label for Triticum aestivum seedlings with root tips in contact with either low-Pi (5 μM) or high-Pi (500 μM) media, and the rest of the root system in contact with non-[14C]IAA-containing low-Pi (5 μM) or high-Pi (500 μM) media. The 14C content of 2 mm root sections starting 1 mm from the root apex, divided by the 14C content of 2 mm root sections starting 3 mm from the root apex, expressed as a proportion of the high-Pi controls. Values are the average of three biological replicates (four pooled roots from distinct organisms per replicate). Error bars are the standard errors (n=3). Letters indicate values significantly different from each other using Student's t-test (P<0.05); two-way ANOVA determined that tip phosphorus, basal phosphorus, and the interaction between the two all had significant effects on the results (P<0.001). (B) Relative expression levels of PIN gene candidates in T. aestivum seedlings grown in either low-Pi (5 μM) or high-Pi (500 μM) media. Each sample was normalized for actin and tubulin expression as detailed in the Materials and methods, and plotted as a value relative to each gene's high-Pi value. Values are the averages of three biological replicates (10 pooled root tips per replicate), with each of these being a pooled average of three experimental replicates. Error bars are the SEM (n=3). Asterisks indicate where the low-Pi value is significantly different from the high-Pi value within each gene using Student's t-test (P<0.05).
The Pi-PAT interaction demonstrated in the present study adds to the Pi-auxin interactions previously documented in other species. Auxin sensitivity modulation in response to Pi status has previously been demonstrated in Arabidopsis roots by up-regulation of TIR1 auxin receptor expression (Pérez Torres et al., 2009). This is proposed to cause the increased lateral root and root hair density and the reduced primary root growth that characterize the low-Pi response in Arabidopsis (Ma et al., 2001). Given the importance of root hair production in phosphorus uptake (Bates and Lynch, 2001; Zhu et al., 2010), it would be beneficial for the plant's nutrition for the control of root hair plasticity to continue unabated under low-Pi conditions. In T. aestivum a reduction of basipetal PAT in low-Pi conditions is shown (Fig. 4A), yet previous experiments have demonstrated no effect of varying Pi conditions on root hair density (Ewens and Leigh, 1985). This could be explained by the spatial separation of the basal meristem, where lateral root founder cells are specified, from the differentiated tissues where root hairs are produced. However, the scarcity of information on Pi effects on T. aestivum root hair density, due to the large variability in root hair production between cultivars (Wu and He, 2011), hinders definitive conclusions.
Triticum aestivum Pi scavenging responses differ from those of the model plant Arabidopsis
These experiments also highlight the imprecision of extrapolating nutrient-scavenging responses from the model plant Arabidopsis to other species. Previous studies using Arabidopsis and H. vulgare have shown that under low-Pi conditions the primary root meristem undergoes a process of terminal differentiation, whilst the maturation rate of LRPs is enhanced (Linkohr et al., 2002; López-Bucio et al., 2002; Sánchez-Calderón et al., 2005; Huang et al., 2008). Following this, PAT declines after ~11 d, which could potentially be related to the terminal differentiation of the meristem and root cap, producing a reduction in the density of LRPs and the continued elongation of the remaining emerged lateral roots (Nacry et al., 2005). Ten days after germination in the present study, growth of young T. aestivum seminal roots continued unabated (Fig. 2A, B, E), with significant limitation to maximum lateral root length (Fig. 2C, D, F) and lateral root density (Fig. 1), which is consistent with observations in long-term studies of other crop species (Borch et al., 1999).
Pi is usually found in largest quantities in the topsoil, and therefore enhanced exploration of this area is beneficial to a plant subject to Pi deficiency (Zhu et al., 2005). A short-term enhancement in lateral root production can be an effective method of increasing topsoil exploration, and this is reflected in the increased lateral root production and lateral root growth relative to that of primary roots in low-Pi conditions observed in some studies (Linkohr et al., 2002;Huang et al., 2008). However, in plants with fibrous root systems, such as T. aestivum, the production of a multitude of seminal and crown roots at varying angles from the seed/hypocotyl affords an alternative method of topsoil exploration. This has been demonstrated in Solanum lycopersicum, where low Pi conditions caused a significant increase in the number of adventitious roots in a process mediated by ethylene (Kim et al., 2008). Unfortunately, within the timeline of this study, the number of seminal roots was still very low (3-4) and therefore did not show any significant alterations in number. Nevertheless their presence possibly de-emphasizes the importance of the explorative function of lateral roots, and means that their chief benefit is to modulate the root surface area in response to more local environmental stimuli. A switch to a root system dominated by lateral roots has been shown to enhance Pi uptake efficiency greatly (Zhu and Lynch, 2004); therefore, a deeper understanding of the molecular mechanisms behind the production of lateral roots is potentially of great importance for both targeted crop breeding and localizing application of fertilizers to improve uptake efficiency.
PIN candidates
This study also presents new insights into the PIN gene family within T. aestivum, identifying candidates from published cDNA libraries and marking expression locations for two family members. PIN proteins are characterized by two hydrophobic domains, each containing five transmembrane helices, connected by a hydrophilic domain presumed to protrude into the cytoplasm (Křeček et al., 2009). The predicted amino acid sequences of the genes used in this study, TaPIN1, TaPIN2, TaPIN3, and TaPIN4, contain the N-terminal hydrophilic domain and the hydrophobic domain, complete with five transmembrane helices, and PIN1 and PIN2 also contain the C-terminal hydrophobic domain (Supplementary Fig. S1 at JXB online). The absence of a C-terminal hydrophobic domain in the PIN3 and PIN4 cDNA sequences was attributed to the incomplete nature of the sequences. The amino acid sequences also each contain two di-acidic motifs, involved in trafficking from the endoplasmic reticulum, and a tyrosine-based internalization motif, for recruitment into clathrin-dependent vesicles (Supplementary Fig. S1). Both of these features are characteristic of PIN genes in other species (Chawla and DeMason, 2004; Schnabel and Frugoli, 2004; Křeček et al., 2009; Zhou et al., 2011; Watanabe et al., 2012), and so lend further credence to the notion that these sequences encode T. aestivum PIN proteins. As only a limited portion of these cDNA sequences is available, it remains unclear which subgroup of PIN proteins TaPIN3 and TaPIN4 belong to.
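The motif screening described above can be approximated with simple pattern matching. The sketch below is illustrative only: the regular expressions are loose consensus patterns assumed for demonstration (a D/E-x-D/E di-acidic ER-export motif and a YxxΦ tyrosine-based internalization motif, with Φ taken here as L/M/F/I/V), and the sequence is an invented toy string, not an actual TaPIN fragment.

```python
import re

# Loose consensus patterns assumed for illustration -- real motif calling
# would rely on alignment against characterized PIN sequences.
DI_ACIDIC = re.compile(r"[DE].[DE]")           # di-acidic ER-export motif (D/E-x-D/E)
Y_INTERNALIZATION = re.compile(r"Y..[LMFIV]")  # tyrosine-based YxxPhi motif

def scan_motifs(aa_seq):
    """Return 0-based start positions of candidate motif matches in a protein sequence."""
    return {
        "di_acidic": [m.start() for m in DI_ACIDIC.finditer(aa_seq)],
        "yxx_phi": [m.start() for m in Y_INTERNALIZATION.finditer(aa_seq)],
    }

# Invented toy sequence (not a real TaPIN fragment)
toy = "MSLYDAELKNYTQLVDNEPD"
hits = scan_motifs(toy)
```

In practice such short-motif hits would only flag candidates for closer inspection, since patterns this short also match by chance in many proteins.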
AUX/IAA candidates
A study identifying various family members of the AUX/IAA gene family in T. aestivum has already been published (Singla et al., 2006), and the AUX/IAA candidate cDNAs used in Fig. 3 add to this. The methodology used here identified the complete sequence of IAA1 published by Singla et al. (2006) alongside the candidate sequences, but the other results identified in their study did not score as highly using the present methodology. This is perhaps due to the differences between using Oryza sativa AUX/IAA and Arabidopsis AUX/IAA amino acid sequences as the query sequence in the BLAST search. There are four conserved domains that are characteristic of AUX/IAA proteins identified in other organisms (Dharmasiri and Estelle, 2004; Jain et al., 2006) and in T. aestivum (Singla et al., 2006). The amino acid sequences predicted from the candidate cDNAs used in this study all contained domains III and IV and a STOP codon at the C-terminus. These sequences either also contained domains I and II, or were incomplete sequences missing the N-terminal portion (Supplementary Fig. S2). The modulation of AUX/IAA expression shown in Fig. 3 is the first example of a Pi-modulated auxin response shift demonstrated in T. aestivum. The conclusion from these data, when viewed in conjunction with the [14C]IAA transport data in Fig. 4A, is that the alteration in PAT auxin flow causes a corresponding alteration in auxin responses, and therefore in AUX/IAA expression levels. Figure 3 shows that IAA3 expression appears to be positively correlated with Pi supply. As Fig. 4A shows that Pi supply significantly influences the basipetal flow of auxin, this may indicate that IAA3 expression is localized to the basal regions of the RAM auxin maximum. However, as mapping the specific locations of AUX/IAA expression is not covered in this study, further work is required to verify this.
Conclusions
The results presented here illustrate that the Pi-dependent modulation of auxin transport, driven by putative PINOID auxin export carrier gene expression, alters the auxin responses at the root tip. This is corroborated by a corresponding alteration in the root tip AUX/IAA expression profile, providing a potential mechanism for the decreased root branching observed in T. aestivum grown in low-Pi environments (Fig. 5). This significantly advances our understanding of the mechanism by which the developmental plasticity of the T. aestivum root system exploits heterogeneous soil environments. It is a potential mechanism for the widely observed phenomenon of localized branching in response to localized hotspots of soil phosphorus (i.e. as would occur with banded Pi fertilization). Beyond advancing knowledge of plant biology, these findings have implications for the agricultural sector. Improved understanding of the mechanisms underpinning nutrient-stimulated root branching could improve targeting of agricultural fertilizers to regions where dense root branching is more probable, and highlights molecular mechanisms that could be exploited through plant breeding to improve existing varieties. There has also been a recent trend towards inoculation of agricultural plants with plant growth-promoting microorganisms, including auxin producers (Lugtenberg and Kamilova, 2009). Further understanding of the consequences of exogenous auxin application in crop species is therefore highly desirable. In conclusion, the present findings provide an understanding of the role of auxin in regulating root nutrient responses which should permit the more effective design of agricultural systems through a combination of crop breeding and Pi fertilization regimes targeted at enhanced food security and the sustainable intensification of cropping systems.
Supplementary data
Supplementary data are available at JXB online.
Figure S1. Sequence alignment of TaPIN3 and TaPIN4 amino acid sequences, displaying functional PIN protein motifs.
Figure S2. Alignment of the amino acid sequences of TaIAA candidates, displaying functional AUX/IAA domains.
Figure S3. AUX/IAA candidate sequence expression is elevated in response to 1 h induction with auxin.
Table S1. Primer pairs used for qPCR analysis.
|
2016-05-12T22:15:10.714Z
|
2014-08-02T00:00:00.000
|
{
"year": 2014,
"sha1": "0e1fb54da9f946fcfd685a7eb6ffa8ad0a2e19f6",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/jxb/article-pdf/65/17/5023/17136058/eru284.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "0e1fb54da9f946fcfd685a7eb6ffa8ad0a2e19f6",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
}
|
105573313
|
pes2o/s2orc
|
v3-fos-license
|
Impact of common reed and complex organic matter on the chemistry of acid sulfate soils
Acid sulfate soils (ASS) are naturally occurring soils or sediments formed under reducing conditions that either contain sulfuric acid or have the potential to form it, in an amount that can have adverse impacts on the environment. The negative impacts of ASS are associated with the release of the acidity produced and the release of toxic metals and metalloids from solubilised soil matrices into the environment. It has been shown recently that addition of dead plant material as organic matter to ASS creates microenvironments for soil microbes that ameliorate sulfuric soil and prevent sulfidic soil oxidation. Initial breakdown of the organic matter creates an oxygen demand that generates anaerobic conditions conducive to the reduction of sulfate to sulfides by sulfate-reducing bacteria, which use the residual organic material as a carbon source and cause the pH to rise. There is also evidence that live plants increase acidification, potentially by aerating the soil. In nature, plants shed dead material as they grow, so that live and dead organic matter co-exist. It is not known what happens to ASS chemistry, particularly pH, under such natural conditions. In this study, Phragmites australis was used to examine the combined effect of growing plants and incorporated organic matter on ASS chemistry (pH, redox potential and sulfate content) under aerobic and anaerobic conditions. In almost all cases, live plants enhanced sulfuric soil acidity and sulfidic soil oxidation. The mechanism for these changes in ASS chemistry appears to be the facilitation of oxygen penetration into the soil via aerenchymatous tissues in the plant roots.
Introduction
In aerobic soils, cellular respiration of plant roots is supported by oxygen that reaches the rhizosphere as soils are loosened by root growth (Tinh et al., 2001). Under anaerobic soil conditions, plants use specialized aerenchymatous structures to transport oxygen from the shoots to support root respiration (Armstrong, 1979). The presence of oxygen in the root zone presents problems when oxidisable sulfides are present, because of the formation of sulfuric acid (H2SO4). Under anaerobic, reduced soil conditions, ASS pose no problem unless the oxidisable sulfides are exposed and react with oxygen to form H2SO4 (Nordmyr et al., 2008). Release of the H2SO4 in turn dissolves the soil matrices in which iron species (Fe2+, Fe3+), aluminium (Al3+) and other potentially toxic contaminants (elements, metals or metalloids) are held, and these are released into soil and water systems (Ljung et al., 2009; Ljung et al., 2010; Åström et al., 2007). Production and propagation of H2SO4, and mobilisation and leaching of toxic contaminants, are the major processes through which ASS exert adverse ecological impacts on the environment (Michael, 2013).
Acid sulfate soils are of two types: sulfuric soil with pH<4 and sulfidic soil with pH>4 (Melville and White, 2012), when measured in water (1:5 soil:water, w/v) (Sullivan et al., 2010). In sulfuric soil, plants with aerenchymatous tissues would facilitate oxygen movement and maintain sulfuric soil acidity (Michael et al., 2017). In sulfidic soil, excessive oxygen penetration would lead to oxidation of oxidisable sulfides, generating sulfuric acidity (Michael et al., 2012). Sulfuric soil acidity has severe negative impacts on the general use and management of the environments in which ASS are present (Fitzpatrick et al., 2008; Michael, 2013). Of the estimated 17-24 million ha of ASS (Simpson and Pedini, 1985; Ljung et al., 2009), 6.5 million occur in Asia, 4.5 million in Africa, 3 million in Australia, 3 million in Latin America, 260 000 in Finland, 225 000 in Sweden and 100 000 in North America (Andriesse et al., 2006; Beucher et al., 2015).
In planted soil, turnover of organic matter and secretion of organic substances influence microbial activity (Muhammad et al., 2016) and alter the chemistry of soils (Shamshuddin et al., 2004;Lin et al., 2017).
Several studies have shown that the addition of organic matter can ameliorate sulfuric soil and stabilise the pH of sulfidic soil (e.g. Michael et al., 2015; Michael et al., 2016; Jayalath et al., 2016). On the other hand, Reid and Butcher (2011) found that live plants increased acidification of sulfidic soil. Under natural soil use and management conditions, plant turnover adds decaying organic matter, so that live and dead plant material co-exist (Yan et al., 1996). It was therefore of interest to see how the combined effects of live and dead plant material would influence the main chemical parameters that characterise ASS: pH, Eh and sulfate content.
In this study, investigations on the changes in ASS chemistry (pH, Eh and sulfate content) caused by addition of dead plant material were extended using a common reed (Phragmites australis), which is often found in inland and wetland soils (Marks et al., 1994).
It is known that Phragmites possesses aerenchyma, which can transport oxygen into the soil (Tornberg et al., 1994) and cause problems such as oxidation of sulfidic soil. In addition, the plant has an extensive rooting system and a self-mulching effect due to rapid turnover of organic matter (Dubey and Sahu, 2014), making it ideal for assessing the effects of co-existing live plants and organic matter on ASS chemistry.
Soil
The origin of the ASS was described in Michael et al. (2015, 2016, 2017). The sulfidic soil was collected from a "sulfuric subaqueous clayey soil" (Fitzpatrick, 2013) at a depth of approximately 1 m in the Finniss River in South Australia (35˚24'28.28''S; 138˚49'54.37''E), shown in Figure 1. Details on soil classification using the Australian ASS Identification key and Soil Taxonomy (Soil Survey Staff, 2014) are given in Table 1, as per Michael et al. (2016). In addition, a comprehensive list of references containing further information on the soil morphology and geochemistry prior to rewetting (i.e. sites AA26.3 and FIN26) and after reflooding (e.g. Baker et al., 2013) is given in the same table.
Journal of Soil Science and Plant Nutrition, 2018, 18(2), 542-555.
Figure 1. Locality of samples from the Finniss River site at Wally's Landing (Michael et al., 2015).
Table 1. Classification of acid sulfate soil materials from the Finniss River used in the study, as per Michael et al. (2016) and Fitzpatrick et al. (2008).

The pH of the freshly collected sulfidic material measured in water 1:5 (pHw) was 6.7 (Table 2) and the water holding capacity was estimated to be 49%. The residual organic matter content, estimated using the weight loss-on-ignition method (Schulte and Hopkins, 1996), was 10.6%. The pH following peroxide treatment (pHox) was 1.4. To manufacture "sulfuric horizon material" by oxidising the sulfides, the sulfidic soil was spread thinly on plastic sheets and kept moist until pHw was less than 4. The manufactured sample is henceforth referred to as "sulfuric soil" (pHw <4) and the freshly sampled sulfidic material as "sulfidic soil" (pHw >4); the initial sulfate contents of both soils are given in Table 2.

Experiments and treatments

Three experiments were conducted as described below, with P. australis (common reed) plants established with organic matter incorporated in the soils (80:1, soil:organic matter w/w) by bulk mixing. Bulk mixing was done by weighing out the amount of soil and organic matter needed using a portable scale at 80:1 (w/w) and mixing thoroughly in 20 L troughs using a spade. All the experiments were conducted in 50 cm tall (9 cm diameter) stormwater tubes whose bottom ends were tightly capped. In all the tubes, the bottom 22 cm was filled with sand and the top 22 cm with 1300 g of the ASS; the ASS used in each experimental treatment was weighed so that the exact amount was added to each tube. Treatments in all the experiments were replicated three times and set out in a completely randomized design under glasshouse conditions in polythene crates. In all the treatments, measurements were made only from the top 22 cm of ASS.

Organic matter and plantlets

To use as organic matter, the first three youngest fully open leaves of P. australis were collected and prepared as previously described (Michael et al., 2016). All the leaves were chopped into pieces, air-dried overnight at room temperature and then oven-dried at 60 °C for three days. The dry pieces were finely chopped using an electric blender to pass through a ≈0.5 mm sieve. The nitrogen content of the organic matter, analysed by ICP-OES using 0.5 g samples (n=3), was estimated to be 3.7%. The carbon content can be approximated as similar to that of grass (leaf) clippings from the data in Kamp et al. (1992). The Phragmites plants were initially raised as shoots (plantlets) by rooting root stocks in a rooting medium (compost:sandy loam 2:1 w/w). The well-rooted plantlets used in setting up the experiments were approximately 8-12 weeks old. In each treatment, two plantlets were transplanted, each of which produced multiple shoots throughout the experiment.
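For reference, the 80:1 (w/w) incorporation rate translates into a small mass of organic matter per tube of soil; a minimal helper (the function name is ours, not from the paper, and the ratio is assumed to apply on a dry-mass basis):

```python
def organic_matter_mass(soil_mass_g, ratio=80.0):
    """Mass of organic matter (g) to add for a soil:organic-matter ratio of ratio:1 (w/w)."""
    return soil_mass_g / ratio

# Each tube held 1300 g of ASS mixed at 80:1 (w/w)
om_per_tube = organic_matter_mass(1300.0)  # 16.25 g of dried, ground leaf material
```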
Although the 'aerobic treatments' were watered daily with approximately 100 ml of tap water, the moisture was probably unevenly distributed over time, with the upper parts remaining aerobic and the lower parts of the profile becoming waterlogged.
The anaerobic treatments were kept flooded at all times, with an adequate amount of water ponding on the surface maintained by regular addition of water (once in the morning and once in the evening).
Measurements
Changes in redox potential (Eh), pH and sulfate content were measured at the surface (0-2 cm), middle (5-10 cm) and deep (15-20 cm) profiles as described previously (Michael, 2015). Sulfate content was quantified using soil samples obtained from the three profile depths. Redox potential was measured using a single Ag/AgCl reference and platinum (Pt) electrode combination with an automated data logger.
To measure the Eh, a handheld electric drill with a bit the size of the Pt electrode was used to make holes through the tubes, with care taken to avoid disturbing the soil. The Pt electrode was inserted into these holes and the reference electrode inserted into the soil from the surface.
The electrode was allowed to equilibrate for 10 min, and Eh was then measured at 1 min intervals for the next 10 min and averaged (Rabenhorst et al., 2009). These values were corrected for the reference offset, to be relative to the potential of a standard hydrogen electrode, by adding 200 mV (Fiedler et al., 2007). The stability and accuracy of the electrodes were maintained as per Fiedler et al. (2007). pH was measured using 2 g soil (1:5, soil:water) with a pre-calibrated Orion pH meter (model 720SA).
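The Eh workflow above (average the ten 1-min readings, then add the +200 mV Ag/AgCl offset to convert to the standard hydrogen electrode scale) is simple to script; a minimal sketch with illustrative readings, not the study's data:

```python
def corrected_eh(readings_mV, reference_offset_mV=200.0):
    """Average Pt-electrode readings (e.g. ten 1-min interval values) and convert
    to the standard hydrogen electrode scale by adding the Ag/AgCl reference
    offset (+200 mV, after Fiedler et al., 2007)."""
    return sum(readings_mV) / len(readings_mV) + reference_offset_mV

# Ten 1-min readings from one profile depth (illustrative values only)
readings = [-105, -98, -101, -99, -103, -100, -97, -102, -100, -95]
eh_she = corrected_eh(readings)  # mean -100 mV + 200 mV offset = +100 mV vs SHE
```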
To quantify the root biomass, the tubes were marked out at 0-2, 5-10 and 15-20 cm (the profiles from which the changes in Eh, pH and sulfate content in the presence of plants were measured) and cut into small sections. Soil from these sections was placed in a sieve (0.05 mm) and held under gently running tap water, and the soil was carefully broken up to free the roots with the aid of forceps. The loose soil particles were allowed to drain through, and the roots, both those trapped by the sieve and those floating free during washing, were collected. These roots were gently washed again to remove soil material, placed in weighing boats and oven-dried for 48 h. The dry weights of the replicates were pooled and averaged to give the final data.
Sulfate was extracted according to the method of Hoeft et al. (1973) for soluble soil sulfate. Replicate samples (0.5 g each) were placed in tubes with 1.5 ml of an extraction solution (0.2 g CaH2PO4, 12 g glacial acetic acid and 88.5 g deionised water). After 30 min, the soil was sedimented by centrifugation for 5 min, and duplicate aliquots from the three replicates were transferred into 4 ml cuvettes and diluted with 1.5 ml of the extraction solution. The samples were mixed with 0.7 ml of 0.5 M HCl, then 0.7 ml of 0.1 M barium chloride-polyethylene glycol reagent was added and mixed again. After 10 min, the samples were mixed once more and the absorbance read at 600 nm using a spectrophotometer. The readings were compared to standard solutions of 0-2 mM Na2SO4 (Michael, 2015).
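Converting A600 readings to sulfate concentrations against the 0-2 mM Na2SO4 standards implies a linear calibration curve; a sketch with hypothetical, perfectly linear standard readings (illustrative numbers, not the study's data):

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Hypothetical Na2SO4 standards (mM) and their A600 readings
std_conc = [0.0, 0.5, 1.0, 1.5, 2.0]
std_abs = [0.02, 0.27, 0.52, 0.77, 1.02]  # perfectly linear toy data
slope, intercept = linear_fit(std_conc, std_abs)

def absorbance_to_mM(a600):
    """Invert the calibration line to recover sulfate concentration."""
    return (a600 - intercept) / slope

sample_mM = absorbance_to_mM(0.52)  # 1.0 mM for this toy calibration
```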
To help interpret the changes in pH in relation to the changes in Eh, an Eh-pH range for surface environments is shown in Figure 3, and the approximate Eh range at which microbial reduction of sulfate occurs under various soil conditions is shown in Figure 2 (Fiedler et al., 2007).
Statistical analysis
The Eh values obtained over each 10 min period were averaged. Treatment means were compared by ANOVA over all combinations, using Tukey's HSD (honest significant difference) pairwise comparisons.
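The group-comparison logic can be illustrated with the simpler one-way case; a minimal pure-Python F statistic on toy replicate values (not the study's measurements, which used two-way ANOVA in JMPIN):

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over a list of groups of replicate values."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    # Between-group sum of squares (weighted by group size)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Three toy "treatments" of three replicates each
F = one_way_anova_F([[1, 2, 3], [2, 3, 4], [3, 4, 5]])  # F = 3.0
```

The resulting F would then be compared against the F distribution with (k-1, n-k) degrees of freedom; post-hoc pairwise tests such as Tukey's HSD follow only when the omnibus test (or an interaction) is significant.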
Effects of Phragmites plants on the chemistry of sulfuric soil
Experiment 1: This experiment examined the changes in ASS chemistry induced by plants on sulfuric soil with incorporated organic matter under aerobic conditions (regular watering) in tubes without bottom drainage. From the Eh profile of the control soil (Figure 4c), it is clear that the soil remained aerobic down to at least 10 cm but became increasingly anaerobic at greater depth, conditions that would favour sulfate reduction. The pH increases down the profile (Figure 4b) are consistent with this. In the treatment with both added organic matter and roots, the pH at the surface was notably higher than the control but decreased down the profile (Figure 4b), which correlates with both the increased root mass (Figure 4a) and the maintenance of aerobic conditions even at depth. In both treatments there were significant reductions in sulfate content, which would be expected where pH increases are observed, as occurred here (Figure 4d).
Experiment 2:
The second experiment examined the effects of plants on soil chemistry in sulfuric soil with incorporated organic matter under anaerobic (flooded) conditions. The anaerobic conditions at all depths in the control soil (Eh near or less than 0 mV) resulted in pH increases of around 2 units (Figure 5b, c). In the planted treatment, the root biomass was greater towards the surface (Figure 5a). Compared to the control, Eh was much higher and pH significantly more acidic (Figure 5b, c). Again, there were large reductions in sulfate content in both treatments, consistent with the increasing pH (Figure 5d).
Effects of Phragmites plants on the chemistry of sulfidic soil
Experiment 3: This study assessed the impact of organic matter and live plants on neutral sulfidic soil under conditions similar to those described for Experiment 2. In the control soil, the Eh (Figure 6c) was highly reduced, becoming more negative at depth. These changes correlate well with the observed increase in pH down the profile (Figure 6b). In the planted treatment, roots were evenly distributed (Figure 6a), and the Eh remained significantly higher than in the control soil, consistent with the lower pH (Figure 6b, c). The sulfate content of the control soil was markedly lower than in the planted soil, and roughly correlates with the differences in pH (Figure 6d).
Acid sulfate soils can have adverse impacts on the environment unless carefully managed (Michael, 2013).
The two most common management strategies are either to neutralise the sulfuric (actual) acidity by application of mineral lime, or to prevent oxidation of sulfidic soil (which has the potential to form sulfuric soil) by surface and ground water management (Baldwin and Fraser, 2009). Under general soil use and management conditions, application of lime to manage sulfuric acidity is considered expensive, taking into consideration the area of land to be treated (Shamshuddin et al., 2004), and keeping sulfidic soil flooded to prevent oxidation is not desirable, as very few crops can be cultivated under flooded conditions (Hanhart et al., 1997).
In trying to establish alternative management strategies, we recently showed that addition of organic matter in the form of chopped Phragmites leaves effectively increases the pH of sulfuric soil and prevents sulfidic soil oxidation (Michael et al., 2016). It was not clear from those studies whether the observed increases in pH could be sustained if living plants were also present. The limited data from Reid and Butcher (2011) showed that significant acidification could result from the growth of Phragmites roots into sulfidic soil, the acidification resulting from oxygen pumped into the rhizosphere of the Phragmites plants through the specialised aerenchymatous tissues (Marks et al., 1994). The formation (genesis) of sulfuric acid from sulfides exposed to oxygen is essentially an abiotic inorganic chemical oxidation process (e.g. Lin et al., 2000), of the form:

FeS2 + 15/4 O2 + 7/2 H2O → Fe(OH)3 + 2 SO4^2- + 4 H+ (1)

At very acidic pH (<3), this reaction can be accelerated by bacteria such as Acidithiobacillus ferrooxidans (Valdés et al., 2008). The reverse process, reduction of sulfate to sulfides, occurs naturally very slowly, but is greatly accelerated by sulfate-reducing bacteria in the absence of oxygen and with sufficient organic carbon and nitrogen for metabolism, according to equation 2 (Bloomfield and Coulter, 1973):

SO4^2- + 2 CH2O → H2S + 2 HCO3^- (2)
Sulfate-reducing bacteria grow optimally around pH 6 but are still able to reduce sulfate at appreciable rates down to at least pH 3 (Luo et al., 2017).
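The stoichiometry of pyrite oxidation underlies simple acid-base accounting: complete oxidation yields about 2 mol H+ per mol of pyritic S, which 1 mol of CaCO3 can neutralise. A hedged sketch of the standard %S-to-lime conversion (assuming these stoichiometric ratios; not a calculation from this paper):

```python
M_S, M_CACO3 = 32.06, 100.09  # molar masses, g/mol

def lime_requirement_kg_per_tonne(percent_S):
    """Approximate CaCO3 (kg per tonne of soil) needed to neutralise the acidity
    from complete oxidation of the given pyritic sulfur content (%S, w/w).
    Assumes 2 mol H+ per mol S, and 1 mol CaCO3 neutralising 2 mol H+."""
    mol_S_per_tonne = percent_S * 10_000 / M_S  # 1% S = 10 kg S per tonne
    return mol_S_per_tonne * M_CACO3 / 1_000    # kg CaCO3 per tonne

lime = lime_requirement_kg_per_tonne(1.0)  # roughly 31 kg CaCO3 per tonne per %S
```

This is the kind of first-pass calculation that makes liming look expensive at field scale, as noted above; in practice a safety factor is usually applied on top of the stoichiometric estimate.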
The growth experiments demonstrated that Phragmites is very adaptable in terms of its ability to grow in soils of variable pH and variable oxygen availability (Figures 4-6). In the experiment described in Figure 4, the initial pH was close to 4, a level of acidity that would challenge most plants. Root biomass actually increased with depth, where the control soil Eh was close to 0 mV. The most obvious impact of the roots on the soil was the difference in Eh, with the planted soil remaining aerobic throughout the profile (Figure 4c). Clearly, this can only occur if access to atmospheric oxygen is retained (Armstrong, 1979).
In non-flooded soil, the root growth itself could exert a loosening effect, creating channels for oxygen diffusion (Michael et al., 2017). Perhaps more likely, though, is oxygen diffusion down the channels inside the root created by aerenchymatous tissues (Marks et al., 1994; Michael et al., 2016). The fact that high Eh was also observed under flooded conditions (e.g. Figure 5c), especially within the surface soils, tends to favour the idea that aerenchyma are the primary pathway of oxygen to the rhizosphere (Tornberg et al., 1994). It is important to note that the Eh is probably more representative of the bulk soil than of the soil immediately in contact with the root, where the actual Eh values may be considerably higher.
In sulfuric soil, plants did not actually increase the acidity; they simply did not increase the pH as much as in the unplanted treatment. In sulfuric soil flooded with tap water and with incorporated organic matter alone, Michael et al. (2016) demonstrated pH increases from less than 4 to 7.5. By comparison, the combination of organic matter and live plants only increased the pH from around 4 to 5.6 (Figure 4b). Except in the upper profile of the sulfuric soil under aerobic (non-flooded) conditions, the pH of both the sulfuric (Figure 5b) and sulfidic (Figure 6b) soils containing growing plants was always more acidic than the bare control soil. This can be caused by plant roots cracking the soil, which facilitated oxygen penetration, in addition to the oxygen pumped into the soil via the aerenchyma pathway (Michael et al., 2017).
Under flooded conditions, the lower pH was consistently correlated with a higher Eh in the planted treatment (Figures 5 and 6). This was also true for the control soils, where the lower Eh correlated with a higher pH. These results are consistent with previous findings, where reduced soil conditions of low Eh (Figure 4) resulted in higher pH and lower sulfate contents (Michael et al., 2015, 2016), confirming that sulfate reduction occurs under reduced soil conditions (0 to −100 mV), increasing the soil pH (Lin et al., 2003; Johnson and Hallberg, 2005).
In sulfidic soil under flooded conditions, the pH changes were small, with a slight alkalinisation (increase in pH) observed in the control and a slight acidification when plants were present (Figure 6).
The dominating effect here appears to be the low Eh created by the flooded conditions which prevented significant oxidation (Michael et al., 2015).
Nevertheless, the differences in Eh between the treatments could explain the differences in pH, as shown in Figure 3, and the greater decrease in sulfate concentration in the control, where the lower Eh would favour sulfate reduction, as per Figure 2. One factor that has not been considered in this study is the effect of oxidation and reduction of Fe, processes that would also affect Eh and potentially the changes in the states of sulfur compounds in the soil (Li et al., 2012).
In all the experiments shown in Figures 4-6, almost all the results tend to agree with the Eh-pH ranges of surface environments (Figure 3) and the redox range of microbial reduction of sulfate (Figure 2). In the control treatments with no plants, either under aerobic or anaerobic conditions, soil profiles with acidic, oxidising conditions had high sulfate contents (e.g. Figure 4), whereas profiles with basic, reducing conditions had lower sulfate contents (Figures 5 and 6). In the planted treatment under aerobic conditions, soil profiles with higher root mass showed acidic, oxidising conditions with high sulfate content (Figure 4), whereas under anaerobic conditions no clear relationship was established between the root mass and the soil chemical properties, or between the pH, Eh and sulfate content measured (Figures 5 and 6). The main reason for this is that the oxygen pumped into the rhizosphere via the aerenchyma pathway (which would otherwise have created acidic, oxidising conditions, Eh above ~300 mV; Figure 3) was consumed by the reduction reactions under the anaerobic conditions created by the continuous flooding (Michael et al., 2012, 2015, 2016).

Figure 3. Eh-pH ranges of surface environments: (i) acidic-oxidising, (ii) basic-oxidising, (iii) acidic-reducing and (iv) basic-reducing (adapted with slight modifications from Krauskopf, 1967, as per Delaune and Reddy, 2005; Poch et al., 2009). The lower and upper Eh limits are shown by the red dotted lines. The purple dotted line shows the break between aerobic and anaerobic conditions (Fiedler et al., 2007).
The Eh values for each treatment were averaged, and a treatment average was obtained by taking the mean of the three replicates. Similarly, treatment-average pH and sulfate contents were obtained by taking the mean of the three replicates. To compare the treatment means, significant differences (P<0.05) between treatment means at each profile depth were determined by two-way ANOVA using the statistical software JMPIN (SAS Institute Inc., SAS Campus Drive, Cary, NC, USA 27513). If an interaction between the treatments and profile depths was found, one-way ANOVA was performed.

Figure 2. Approximate redox ranges for microbial energy metabolism for different electron acceptors.

Figure 4. (a) Fresh weight of Phragmites roots at different depths and their effects on (b) pH, (c) redox and (d) sulfate contents of sulfuric soil with organic matter maintained by regular watering for 12 months (closed symbols), compared to control soil with no plants and organic matter (open symbols). Values are means ± s.e. of three measurements (n=3). The dotted line is the initial pH. An asterisk indicates a significant difference (P<0.05) between treatment and control at the same depth.

Figure 5. (a) Fresh weight of Phragmites roots at different depths and their effects on (b) pH, (c) redox and (d) sulfate contents of sulfuric soil with organic matter maintained under anaerobic (flooded) conditions for 12 months (closed symbols), compared to control soil with no plants and organic matter (open symbols). Values are means ± s.e. of three measurements (n=3). The dotted line is the initial pH. An asterisk indicates a significant difference (P<0.05) between treatment and control at the same depth.

Figure 6. (a) Fresh weight of Phragmites roots at different depths and their effects on (b) pH, (c) redox and (d) sulfate contents of sulfidic soil with organic matter maintained under anaerobic (flooded) conditions for 12 months (closed symbols), compared to control soil with no plants and organic matter (open symbols). Values are means ± s.e. of three measurements (n=3). The dotted line is the initial pH. An asterisk indicates a significant difference (P<0.05) between treatment and control at the same depth.

Conclusions

The management implication is that Phragmites is a vigorous plant capable of growing in aerobic soil and also partially submerged. It is commonly associated with both inland and coastal ASS. Results from recent studies suggested that slashing large stands of Phragmites to provide surface mulch, or to integrate dead shoot material into ASS, would be a realistic strategy to increase the pH of sulfuric soil and to reduce or prevent acidification of neutral sulfidic soils. The results presented here, however, are consistent with our recent findings, indicating that the growth of roots of live plants has an acidifying effect and may negate the positive impacts of the dead organic matter. More work needs to be done to determine how the balance between alkalinisation by dead organic matter and acidification by live plants is influenced by the relative rates of organic matter addition and the density of growing plants. The other concern is that even if the Phragmites shoots were removed at ground level, the existing culms may still provide a conduit for oxygen diffusion into the deeper soil layers. The suitability of this strategy is also challenged by the fact that the efficacy of organic amendments is predicated on the need to keep the soil as anaerobic as possible to prevent sulfide oxidation, a condition that is not conducive to most types of agriculture.

Acknowledgement

This research was funded by the Commonwealth of Australia through an ADS scholarship provided to Patrick S. Michael. The authors thank Prof. Robert W. Fitzpatrick, Sonia Grocke and Nathan Creeper for their generous assistance in soil sampling and use of the redox probe. We are grateful too to the anonymous reviewers whose comments led to improvements in the manuscript.
Table 2. Descriptions of the soils from Finniss River used in the experiments. Acid sulfate soil classification used in this paper is based on the Australian Acid Sulfate Soil Identification Key.
Experienced Meditators Show Multifaceted Attention-Related Differences in Neural Activity
Mindfulness meditation (MM) is suggested to improve attention. Research has explored this using the "attentional blink" (AB) task, where stimuli are rapidly presented, and a second target stimulus (T2) is often missed if presented ~300 ms after an initial target stimulus (T1). Previous research has shown improved task accuracy during the AB task and altered neural activity following an intensive 3-month MM retreat. We tested whether these results replicated in a community sample of typical meditators. Thirty-one mindfulness meditators and 30 non-meditators completed an AB task while electroencephalography (EEG) was recorded. Between-group comparisons were made for task accuracy, event-related potential activity (posterior-N2 and P3b), theta and alpha oscillatory phase synchronisation to stimuli presentation, and alpha-power. The primary aim was to examine effects within the time windows reported in previous research. Additional exploratory aims assessed effects across broader time windows. No differences were detected in task accuracy or neural activity within our primary hypotheses. However, exploratory analyses showed posterior-N2 and theta phase synchronisation (where the phase of theta oscillations was synchronised to stimuli onset) effects indicating that meditators showed a priority towards attending to T2 stimuli (p < 0.01). Meditators also showed more alpha phase synchronisation, and lower alpha-power (with smaller amplitudes of activity in the alpha frequency), when processing T2 stimuli (p < 0.025). Our results showed multiple differences in neural activity that suggested enhanced attention in meditators. The neural activity patterns in meditators aligned with theoretical perspectives on activity associated with enhanced cognitive performance.
These include enhanced alpha “gating” mechanisms (where alpha activity acts as a filter between sensory and higher order neural processes), increased oscillatory synchronisation to stimuli, and more equal allocation of neural activity across stimuli. However, meditators did not show higher task accuracy, nor were the effects consistent with our primary hypotheses or previous research. This study was not preregistered.
breath, bodily sensations, awareness) (Crane et al., 2017; Van Dam et al., 2018). Over recent decades, MM has been taught as part of mindfulness-based interventions (MBIs), which attempt to alleviate symptoms of depression, pain, and addiction (Hayes, 2012; Kuyken et al., 2008). Our understanding of the mechanisms of MM is rapidly improving, with studies replicating mechanistic relationships between mindful attention, emotional regulation, and well-being outcomes with moderate consistency (Britton et al., 2018; Chambers et al., 2009; Kiken et al., 2015). However, there is an array of theoretical perspectives regarding the neurophysiological mechanisms that underpin the effects of MM, and not enough empirical evidence to draw strong, comprehensive, or specific conclusions about the accuracy of the proposed mechanisms (Hölzel et al., 2011; Tang et al., 2015; Van Dam et al., 2018). A better mechanistic understanding of MM is thus required. Specifically, there is a need to elucidate the neurophysiological changes that underlie the benefits of the practice to well-being. This might allow the design of MM interventions with enhanced efficacy by specifically targeting the effective mechanisms.
One promising psychological mechanism that may underlie the effects of MM could be improved attentional function (Kiken et al., 2015; Tang et al., 2007), with meta-analyses indicating that mindfulness meditation and mindfulness-based programs are associated with improved performance in a range of attention tasks (Sumantry & Stewart, 2021; Verhaeghen, 2021; Yakobi et al., 2021). Meta-analysis of functional magnetic resonance imaging research has also indicated that the improved attentional function is underpinned by altered neural activity in the default mode, salience, and executive attention networks of the brain (Ganesan et al., 2022). The suggestion that MM improves attention is also supported by controlled behavioral studies showing that MM practice increases sustained and executive attention (Jha et al., 2007; Lutz et al., 2009; Slagter et al., 2009; Tang et al., 2007) and improves performance on various attentional tasks (Atchley et al., 2016; Bailey et al., 2022; Bailey, Freedman, et al., 2019a; Van Dam et al., 2018).
One sophisticated approach to measuring potential MM-related changes in attention is to examine the limited temporal capacity of attention using the attentional blink (AB) phenomenon (Martens & Wyble, 2010; Shapiro et al., 1997). In a typical AB task, individuals are presented with a rapid stream of ~20 distractor stimuli. Within that rapid stream of stimuli, two targets (T1 and T2) are presented in close temporal succession, with T1 typically appearing randomly after 2-8 stimuli have already been presented and the T2 stimulus appearing 200 to 700 ms after T1 (Ward et al., 1996). The AB phenomenon refers to a reduction in accuracy at recalling the T2 stimulus when it is presented within 200-500 ms after T1, with AB trials presenting T2 stimuli at this brief delay often referred to as "short interval" attentional blink trials (Shapiro et al., 1997). A number of cognitive models have been proposed to explain the AB phenomenon (for a review, see Martens & Wyble, 2010). Capacity-based models suggest competition between stimuli for attentional resources, such that T1 induces a drain on limited attentional resources and insufficient attentional resources remain available to successfully process T2 (Potter et al., 1998; Shapiro et al., 1997). In contrast, selection-based models consider the role of attentional control to be more important in explaining the attentional blink effect, where the magnitude of an individual's AB is affected by the extent to which distracting information is suppressed (Di Lollo et al., 2005; Olivers & Meeter, 2008). However, it is worth noting that thus far, evidence supporting one analytical model of the AB phenomenon does not necessarily negate the explanations provided by other models, and it is possible that the AB phenomenon involves mechanisms and functional processes proposed by multiple models (further discussion of this point is available in the Supplementary Materials, Section 1).
The neurophysiological mechanisms that underpin the AB phenomenon have been explored using EEG (Slagter et al., 2007; Vogel et al., 1998). This research has focused on an event-related potential (ERP) known as the P3b, which is a positive voltage occurring maximally in parietal electrodes around 350 to 600 ms following stimulus presentation, and which has been associated with voluntary attention when examined in healthy non-meditators (Falkenstein et al., 1991, 1993). Research in healthy non-meditators has found the P3b time-locked to the T2 stimuli to be entirely suppressed in trials in which the second target is "blinked" (not consciously perceived) and ultimately not recalled (Dell'Acqua et al., 2015). A reduced AB effect (i.e., increased accuracy at detecting T2 stimuli) has also been associated with an earlier onset and smaller amplitude of the T1-induced P3b, suggesting that when less neural activity is devoted to the T1 stimulus, more neural resources are available to detect and encode the T2 stimulus (Sergent et al., 2005; Slagter et al., 2007). In addition to the P3b AB effect, research in healthy non-meditators has also suggested that short interval AB trials reduce the amplitude of the visual-processing-related posterior-N2, an ERP peaking approximately 200 ms after stimuli presentation with posterior-maximal negative voltages (Zivony et al., 2018). This is thought to reflect the lack of engagement of attention processes time-locked to T2 stimuli (Zivony et al., 2018). In addition to the ERP AB findings, research in healthy non-meditators has suggested that theta oscillations (rhythmic brain activity occurring between 4 and 8 Hz) are related to a range of cognitive processes, including attention (Mizuhara & Yamaguchi, 2007). Within research on meditators, a positive relationship between the successful detection of T2 stimuli and theta phase synchronisation (TPS) to the onset of the T2 stimuli has also been identified (Slagter et al., 2009).
An increase in phase synchronisation reflects an increase in the consistency of the angle of ongoing oscillatory cycles within neural activity relative to stimuli presentation (Slagter et al., 2009). Finally, decreased synchronisation of alpha to the onset of the distractor stimuli (which are presented prior to T1) and increased alpha-power (8-13 Hz) just prior to T1 stimuli presentation have also been associated with improved performance in the AB task after a 3-month meditation retreat (Slagter et al., 2009). Alpha oscillations are thought to be related to the functional inhibition of brain regions when examined in healthy non-meditators (Klimesch, 2012). As such, it is possible that desynchronisation of alpha oscillations around the time of the stimulus presentation and increased alpha-power just prior to the target stimulus onset inhibit processing of the distractors. This may be followed by a release of any inhibitory processes ongoing in brain regions responsible for processing the target AB stimuli, resulting in better AB performance.
Perhaps unsurprisingly, MM training and experience have been shown to reduce the AB phenomenon, with increased accuracy at the detection of T2 stimuli, both in long-term meditators, following a 3-month meditation retreat, and after an 8-week mindfulness-based stress reduction program (Slagter et al., 2007; van Leeuwen et al., 2009; Wang et al., 2022). However, to date, only one study (Slagter et al., 2007) has measured neural activity while meditators perform the AB task. They compared EEG activity from non-meditator controls and experienced mindfulness meditators (with an average of 2967 hr of meditation experience) before and after the experienced meditators underwent an intensive 3-month meditation retreat, while the non-meditators practised MM for 20 min per day for 1 week. Following the retreat, the experienced meditators were better at identifying the T2 AB stimuli compared to the controls (demonstrating a reduced AB effect) (Slagter et al., 2007; Slagter et al., 2009). The improved accuracy in responding as to which number was presented as the T2 stimulus was correlated with a reduced P3b following T1 stimuli, as well as increased T2-locked TPS (Slagter et al., 2007; Slagter et al., 2009). Slagter et al. (2007) suggested that the reduction in T1-elicited P3b in meditators may reflect "decreased mental capture by any stimulus" in the meditators, whereas the elevated TPS may reflect an increased capacity to process experience from moment to moment. They also found a reduction in alpha phase synchronisation (APS) to the distractor stimuli (prior to the onset of T1) in meditators, potentially implicating a release of the alpha inhibition of distractor processing before T1 presentation (Slagter et al., 2009). Notably, these findings were obtained only after an intensive 3-month retreat, meditation training that is not typical of many mindfulness training programs, and it is unclear if more typical daily MM practice will produce similar effects. Exploring a community sample of MM practitioners may provide findings that are more generalisable to a typical (and increasingly popular) MM practice (Cramer et al., 2016). Additionally, while Slagter et al. (2007) has been cited over 1000 times, no replications of their study have been attempted.
Given this background, the primary aim of the study was to compare brain activity related to the AB phenomenon (P3b, TPS, APS, and alpha-power) between a cross-sectional sample of experienced community meditators and healthy non-meditator controls, in order to assess whether the findings demonstrated following intensive meditation retreats translate to more typical meditation practice. The present study also utilised advanced EEG analysis methods, which can separately detect differences in overall neural response strength and differences in the distribution of brain activity. Following the research by Slagter et al. (2007, 2009), our primary hypotheses were that (PH1) compared to non-meditator controls, meditators would show a smaller allocation of attention-related neural resources to T1, as indexed by a lower amplitude T1-elicited P3b during short interval trials; (PH2) meditators would show more consistency in the timing of theta oscillatory neural activity (higher TPS) in response to T2 during short interval trials but not long interval trials, indexed by higher T2-locked TPS values; and (PH3) meditators would show greater alpha-power around stimuli presentation in short and long interval T1 trials compared to controls. Finally, the AB task presented stimuli every 100 ms (at 10 Hz), which is within the alpha frequency. This is likely to produce alpha synchronisation to the task stimuli, an effect that may be modified in the meditation group, which has undergone considerable training in an attention-based practice. Slagter et al. (2009) reported a reduction in APS during the presentation of the distractor stimuli prior to T1 presentation after the meditation retreat (in contrast to the increased alpha-power). As such, we had one further primary hypothesis: (PH4) APS would be reduced in the meditation group during the presentation of the distractor stimuli prior to T1 stimuli. Additionally, while we tested these primary hypotheses within the time windows reported by Slagter et al. (2007, 2009), to ensure we did not miss significant effects that appeared outside these specific windows, we conducted additional exploratory analyses for the ERP, TPS, APS, and alpha-power variables, which included all time points in the EEG epochs for each of these measures (exploratory hypotheses are explained below), while employing data-driven multiple comparison controls. Additionally, since behavioral research using a cross-sectional design has previously shown that meditators show a reduced AB effect compared to non-meditator controls, we had a non-primary replication hypothesis (RH1): that our meditation group would show a reduced AB effect, as indicated by meditators showing higher accuracy than controls in short interval T2 trials. Further, while Slagter et al. (2007) focused on the P3b in response to T1 only, our view is that it is sensible to hypothesise that (EH1) ERPs to T2 would be increased in meditators, or (EH2) the relationship between ERP amplitude to T1 and T2 would differ in meditators, perhaps reflecting an increased ability to attend to the T2 stimulus as a result of a reduced focus on the T1 stimuli. Additionally, since previous research has not examined potential differences in the topographical distribution of neural activity in meditators during the AB task, four non-directional exploratory hypotheses were that: (EH3) meditators would show differences in the scalp distribution of ERPs, (EH4) meditators would show differences in the scalp distribution of TPS, (EH5) meditators would show differences in the scalp distribution of alpha-power, and (EH6) meditators would show differences in the scalp distribution of APS.
Participants
A sample of 39 experienced community meditators and 36 healthy non-meditator controls were recruited after responding via phone call or email to community advertising at universities, meditation organisations, and on social media. To meet the eligibility criteria for classification as an experienced meditator, participants were required to have had at least 2 years of meditation experience and to have practised meditation for a minimum of 2 hr per week over the last 3 months. Meditation was defined by Kabat-Zinn's definition: "paying attention in a particular way: on purpose, in the present moment, and nonjudgmentally" (Kabat-Zinn, 1994). This definition included participants who practice both open-monitoring meditation, which involves simple awareness without a specific focus besides awareness itself, and focused-attention meditation, which involves deliberate attention on a specific object, such as the breath (Cahn & Polich, 2009; Lutz et al., 2008). Trained MM researchers (OB, JEP, GH, HG) interviewed and screened participants to ensure the participants' practices fit the criteria, and screening uncertainties were resolved through discussion and consensus between the principal investigator (NWB) and one other researcher. Eligibility as a non-meditator control required participants to have less than 2 hr of lifetime meditation experience.
Participants were considered ineligible to participate if they were currently taking psychoactive medication; had experienced brain injury; had previously been diagnosed with a psychiatric or neurological condition; or met the criteria for any drug, alcohol, or neuropsychiatric disorders as measured by the Mini International Neuropsychiatric Interview (MINI) (Sheehan et al., 1998). Participants who scored above the moderate range (greater than 25) on the Beck Anxiety Inventory (BAI) (Beck et al., 1988) or the mild range (greater than 19) on the Beck Depression Inventory-II (BDI-II) (Beck et al., 1961) were also excluded to reduce potential confounds, as depression and anxiety are associated with alterations to brain activity (Bailey et al., 2014; Miljevic et al., 2023; Murphy et al., 2019).
Ethical approval of the study was provided by the ethics committees of the Alfred Hospital and Monash University. All participants provided written informed consent prior to participation in the study. Before participants underwent EEG recording, they provided their gender, age, years of education, and meditation experience (total years of practice, frequency of practice, and the usual length of a meditation session). Participants also completed the Five Facet Mindfulness Questionnaire (FFMQ; Baer et al., 2006), the BAI, and the BDI-II. Example items from these scales are provided in the Supplementary Materials. Two controls were excluded from the study due to scoring above the moderate anxiety range on the BAI. Two controls and one meditator were excluded after scoring in the mild depression range on the BDI-II. Another control was excluded after revealing a history of meditation. Two meditators were excluded due to a previous history of seizures, substance abuse, or mental illness, and another three were excluded from the analysis due to not completing the AB task. Lastly, two meditators and one control were excluded from the study as their performance on the AB task was near chance.
The final sample included 31 meditators aged between 20 and 64 years and 30 healthy controls aged between 20 and 60 years. The two groups did not differ on any demographic or self-report measure except the FFMQ score (all p > 0.05, except for the FFMQ, where p < 0.001). Table 1 summarises all measures (note that one participant did not complete the BAI, and another did not complete the FFMQ, so their data were excluded from those measures). The final sample of meditators had a mean of 6.44 (SD = 4.25) years of meditation experience, 7.65 hr (SD = 2.21) of current practice per week, and a mean of 55.65 min (SD = 44.90) of meditation per session.
Procedure
The current study was a single component of a larger research program that assessed the associations between mindfulness practice and a number of cognitive functions. As such, participants completed multiple cognitive tasks within the EEG session, the results of which have been or will be reported in separate publications. Participants first performed a Go/Nogo task and an auditory oddball task (Payne et al., 2020), followed by the AB task. The AB task was a replication of the task used by Slagter et al. (2007). The task involved 12 practice trials followed by four blocks of 90 trials, where the participants viewed a stream of 19 stimuli (letters and numbers) presented for 66 ms each, with a 33-ms blank screen between each stimulus. Before the task began, participants were instructed that there could be one or two numbers in each trial. They were instructed to enter the number/s they observed on a number pad once each trial ended. Each new trial began after the participant pressed the Enter key to continue, and participants were offered the option of a short break between each of the four blocks. T1 occurred at a random position from 3 to 9 in the stream, after 2-8 distractor stimuli had already been presented. In trials with two numbers, T2 could occur either 300 ms (short interval) or 700 ms (long interval) after T1. Each block contained 54 short interval trials, 18 long interval trials, and 18 T1-only trials (where no T2 stimulus was presented). The order of the trials within each block was randomised. The number of correct trials (both T1 and T2 correct), the number of trials where T1 was incorrect, and the number of trials where T2 was incorrect were recorded for each participant. The total task time was approximately 45 min (see Fig. 1 for a visual depiction of the task). After the AB task, participants were administered transcranial magnetic stimulation concurrent with EEG to assess for potential meditation-related differences in cortical reactivity to magnetic stimulation.

Fig. 1 Visual representation of the procedure for the attentional blink (AB) task. Each trial presented a fixation cross, followed by 19 items in the centre of the screen. The majority of the items were letters, presented for 66 ms each with a 33-ms blank screen between each stimulus. Target stimuli (T1 and T2) were numbers presented within the stream of letters. T1 appeared after between 2 and 8 letters had been presented, and T2 appeared either 300 ms after T1 (short interval) or 700 ms after T1 (long interval), unless it was a T1-only trial (in which case T2 was not presented).
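The block composition and timing above can be sketched in code. This is a hypothetical reconstruction for illustration only (the original task was not run in Python, and all names and the seeding below are our own); stimulus lags are expressed in stream positions, at the task's ~100 ms stimulus onset asynchrony.

```python
import random

# Parameters from the task description: 19 stimuli per trial, 66 ms on
# plus 33 ms blank (~100 ms SOA), T1 at stream position 3-9, and T2
# either 3 positions (~300 ms) or 7 positions (~700 ms) after T1.
STIM_MS, BLANK_MS = 66, 33
SOA_MS = STIM_MS + BLANK_MS  # ~100 ms stimulus onset asynchrony

def make_block(seed=None):
    """Return one randomised block of 90 trials
    (54 short interval, 18 long interval, 18 T1-only)."""
    rng = random.Random(seed)
    trials = ['short'] * 54 + ['long'] * 18 + ['t1_only'] * 18
    rng.shuffle(trials)
    block = []
    for kind in trials:
        t1_pos = rng.randint(3, 9)               # after 2-8 distractors
        lag = {'short': 3, 'long': 7}.get(kind)  # positions after T1
        t2_pos = t1_pos + lag if lag else None   # None for T1-only trials
        block.append({'kind': kind, 't1_pos': t1_pos, 't2_pos': t2_pos})
    return block

block = make_block(seed=1)
short = [t for t in block if t['kind'] == 'short']
print(len(block), len(short))  # → 90 54
```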
Measures
Electrophysiological Recording and Pre-Processing

EEG data from 64 channels were recorded continuously during the tasks using a Quick-Cap containing Ag/AgCl electrodes and a SynAmps 2 amplifier (Compumedics, Melbourne, Australia). Data were recorded by Neuroscan Acquire software, with samples obtained at 1000 Hz and an online bandpass filter from 0.05 to 200 Hz (24 dB/octave roll-off). Each electrode was connected to a reference electrode positioned between CPz and Cz. Prior to the start of the recording, all electrode impedances were reduced to < 5 kΩ. EEG recordings were pre-processed offline in MATLAB R2018b (The MathWorks, Inc.) using the RELAX EEG cleaning pipeline (Bailey, Biabani, et al., 2023a; Bailey, Hill, et al., 2023b), which calls EEGLAB (Delorme & Makeig, 2004) and fieldtrip functions (Oostenveld et al., 2011). Within the RELAX pipeline, data were first bandpass filtered with a fourth-order Butterworth filter from 0.25 to 80 Hz and bandstop filtered from 47 to 53 Hz to reduce line noise. Next, the default RELAX settings were used to reject extreme outlying channels using multiple validated methods (Bailey, Biabani, et al., 2023a; Bailey, Hill, et al., 2023b; Bigdely-Shamlo et al., 2015), followed by the marking of extreme outlying EEG periods for exclusion from the Multiple Wiener Filter cleaning and deletion before independent component analysis (see Bailey et al., 2022 for details). Three sequential Multiple Wiener Filters were used to reduce (1) muscle activity (Fitzgibbon et al., 2016), (2) eye blinks, and then (3) horizontal eye movement and electrode drift (Somers et al., 2018). Finally, data were re-referenced to the robust average reference (Bigdely-Shamlo et al., 2015), and the remaining artifacts were cleaned using wavelet-enhanced independent component analysis (ICA) (Castellanos & Makarov, 2006) to reduce artifactual components identified by ICLabel (Pion-Tonachini et al., 2019) after ICA decomposition using cudaICA (Raimondo et al., 2012). Full details of the pre-processing pipeline are available in Bailey, Biabani, et al. (2023a) and Bailey, Hill, et al. (2023b).
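As a rough illustration of the first filtering step, the fourth-order Butterworth bandpass and line-noise bandstop can be reproduced with SciPy. This is a sketch only, not the RELAX code itself (the study used RELAX in MATLAB, whose exact filter implementation may differ), and the demo signal is hypothetical.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 1000  # Hz sampling rate, as in the recording

# Fourth-order Butterworth bandpass (0.25-80 Hz) plus a 47-53 Hz
# bandstop for line noise, applied zero-phase (forward and backward).
bp_sos = butter(4, [0.25, 80], btype='bandpass', fs=FS, output='sos')
bs_sos = butter(4, [47, 53], btype='bandstop', fs=FS, output='sos')

def filter_eeg(data):
    """data: (n_channels, n_samples) array; returns a filtered copy."""
    out = sosfiltfilt(bp_sos, data, axis=-1)
    return sosfiltfilt(bs_sos, out, axis=-1)

# Demo: a 10 Hz component survives, 50 Hz line noise is suppressed.
t = np.arange(0, 5, 1 / FS)
raw = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 50 * t)
clean = filter_eeg(raw[np.newaxis, :])[0]
```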
After cleaning, EEG activity was epoched to the onset of the AB task stimuli, from −200 to 1000 ms surrounding the T1 or T2 stimuli for ERP analysis and from −2000 to 2000 ms for oscillation analyses. The fieldtrip "ft_freqanalysis" function was used with Morlet wavelet analysis settings and a cycle width of 5 to compute frequency power.
ERP data were baseline corrected using the baseline subtraction method, with the baseline taken as the average activity in the −200 to 0 ms period prior to target stimulus onset, as per the methods of Slagter et al. (2007). To test our first primary hypothesis (PH1) for the P3b ERP, we averaged data within the 350 to 600 ms time window following the stimuli.
TPS and APS were quantified through the calculation of a phase-locking factor (PLF) value within the theta range (4 to 8.5 Hz) and alpha range (8.5 to 15 Hz, in replication of Slagter et al., 2009) (Lachaux et al., 1999; Ueno et al., 2009). PLF values range from 0 to 1, where 1 represents perfectly correlated phase differences between trials, and 0 represents completely uncorrelated phase differences (Ueno et al., 2009; Varela et al., 2001). The methods for this computation are described in more detail in the Supplementary Materials (Section 2b). To test hypothesis PH2, TPS data were averaged within the 121 to 501 ms window after T2 stimuli. To test hypothesis PH4, APS data were averaged within the −414 to −214 ms window prior to T1 stimuli.
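The PLF computation described above can be sketched as follows (a minimal illustration, not the study's actual MATLAB/fieldtrip code): for one channel and frequency, each trial's complex time-frequency coefficient is normalised to unit length, and the length of the across-trial mean vector gives the PLF at each time point. The demo data are hypothetical.

```python
import numpy as np

def phase_locking_factor(tf_coeffs):
    """
    tf_coeffs: complex array (n_trials, n_times) of time-frequency
    coefficients for one channel at one frequency (e.g. Morlet output).
    Returns the PLF per time point: 1 = identical phase across trials,
    ~0 = uniformly random phases (after Lachaux et al., 1999).
    """
    phases = tf_coeffs / np.abs(tf_coeffs)  # unit-length phase vectors
    return np.abs(phases.mean(axis=0))      # resultant vector length

# Demo with hypothetical data: phase-locked vs random-phase trials.
rng = np.random.default_rng(0)
n_trials, n_times = 200, 50
locked = np.exp(1j * np.full((n_trials, n_times), 0.3))  # same phase everywhere
random_ = np.exp(1j * rng.uniform(0, 2 * np.pi, (n_trials, n_times)))
```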
For alpha frequency power analyses, trials were baseline corrected to oscillatory power across the entire epoch (in replication of Slagter et al., 2009). While this means a potential signal reduction in potentially "active" periods (as the data from those periods are contained within the baseline subtraction), this approach prevents spurious conclusions about differences in active periods in fact being driven by an arbitrarily selected baseline period. As such, significant differences at any time point in the epoch reflect an increase or decrease of oscillatory power at those time points relative to the ongoing oscillatory power across the entire epoch. Baseline correction of frequency power data was performed using the relative method ([all active period datapoints − the mean baseline activity] / mean baseline activity). To test hypothesis PH3, alpha-power was averaged within the −31 to 160 ms time window following T1. Only epochs from target stimuli that participants responded to correctly were used in the EEG analysis (for epochs locked to T1, this meant trials where T1 was responded to correctly, while for T2-locked epochs, this meant trials where participants correctly identified both T1 and T2 stimuli). Each condition was averaged separately within each participant for ERP and oscillation analyses (note that the conditions were: short vs long interval, and T1 vs T2 stimuli).
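The relative baseline formula above amounts to the following (a toy sketch with hypothetical numbers; the actual analysis applied it per channel and frequency across each full epoch):

```python
import numpy as np

def relative_baseline(power):
    """Whole-epoch relative baseline correction:
    (power - mean over epoch) / mean over epoch.
    power: array with time as the last axis."""
    base = power.mean(axis=-1, keepdims=True)
    return (power - base) / base

# Hypothetical single-channel power values across four time points:
power = np.array([2.0, 2.0, 4.0, 4.0])  # epoch mean = 3.0
corrected = relative_baseline(power)    # → [-1/3, -1/3, 1/3, 1/3]
```

By construction, the corrected values average to zero over the epoch, so any significant deviation marks power above or below the epoch-wide level rather than relative to an arbitrary pre-stimulus window.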
Data Analyses
EEG data comparisons of ERPs, TPS, alpha-power, and APS between meditators and non-meditators were performed using the randomization graphical user interface (RAGU) method (Koenig et al., 2011). RAGU compares scalp field differences over all epoch time points and electrodes using rank-order randomisation statistics, with no preliminary assumptions about which time windows and electrodes to analyse (Koenig et al., 2011). Prior to conducting primary tests, a topographical consistency test (TCT) was conducted to confirm the consistent distribution of scalp activity within each group and condition. A significant TCT result suggests that potential between-group differences in the global field power (GFP) and topographic analysis of variance (TANOVA) tests (described later in this paragraph) are due to real group differences instead of variation within one of the groups (Koenig & Melie-García, 2010). RAGU allows for comparisons of global neural response strength (independent of the distribution of activity) with the GFP test. The GFP is an index of the total voltage differences across all channels, regardless of the specific locations of the activity; it is equivalent to the standard deviation across all channels at each time point (Habermann et al., 2018). The GFP test compares differences between Groups or Conditions in the real data against randomised permutation data to identify specific time periods following a stimulus where Groups or Conditions significantly differed in neural response strength. RAGU also allows for comparisons of the distribution of neural activity with the TANOVA (with the recommended L2 normalisation of the amplitude of neural activity, which transforms the data such that the overall GFP = 1 within each individual, providing distribution comparisons that are independent of differences in global amplitude). Note that there are currently no Bayesian statistical approaches analogous to the TANOVA.
TPS, alpha-power, and APS values were compared with root mean square (RMS) and TANOVA tests (to separately compare overall neural response strength and the distribution of neural activity, respectively). The RMS is computed in the same manner as the GFP, but without implementing an average re-referencing across the data prior to its computation. This is the recommended approach when oscillatory power or phase synchronisation comparisons are computed with RAGU, as the average reference was computed prior to the oscillation measurement transforms. As such, the RMS test is a comparison of the RMS between Groups rather than the GFP, a measure which is a valid indicator of neural response strength in the power or phase synchronisation domain (Habermann et al., 2018). In other respects, the statistic used to compare RMS between Groups is identical to the GFP test described in the previous paragraph.
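The distinction between the GFP and RMS measures can be sketched as follows (an illustrative reconstruction, not RAGU's own implementation, and the data are hypothetical). For average-referenced data the two measures coincide; RMS simply skips the re-referencing step, which is why it suits power and phase values that cannot meaningfully be re-referenced:

```python
import numpy as np

def gfp(data):
    """Global field power: the standard deviation across channels at
    each time point (data: n_channels x n_times)."""
    return data.std(axis=0)

def rms(data):
    """Root mean square across channels at each time point, computed
    without average re-referencing (as used for power/phase measures)."""
    return np.sqrt((data ** 2).mean(axis=0))

rng = np.random.default_rng(0)
data = rng.normal(size=(64, 500))                  # hypothetical EEG segment
avg_ref = data - data.mean(axis=0, keepdims=True)  # average reference
```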
RAGU controls for multiple comparisons in space by using only a single value representing all electrodes for the GFP/RMS and TANOVA tests (the GFP/RMS value for the GFP/RMS test and the global dissimilarity value for the TANOVA). RAGU also controls for multiple comparisons across time points in the epoch using global duration statistics (referred to as the "global duration control"), which identify periods of significant effects within the epoch that are longer than 95% of the significant periods in the randomised data, with the alpha level at 0.05 (Koenig et al., 2011). However, because the computation of measures of oscillatory power or phase consistency introduces a dependence in values across neighbouring time points, RAGU's global duration control method is only appropriate for ERP analyses. For our oscillatory power and phase measures, we implemented the same duration controls as Slagter et al. (2009). Because our primary hypotheses were obtained from Slagter et al. (2007, 2009), we averaged data within specific windows of interest for our primary analyses. However, to explore potential effects outside of these windows, we also used RAGU for whole epoch analyses (from −100 to 800 ms for ERPs and from −500 to 1500 ms for oscillatory analyses), with multiple comparison controls implemented using the global duration statistics. The recommended 5000 randomisation permutations were conducted with an alpha of p = 0.05. For more in-depth information about RAGU and its analyses, please refer to Koenig et al. (2011), Koenig and Melie-García (2010), and Habermann et al. (2018). The p-values from our primary hypotheses (with data averaged within a priori hypothesised time windows of interest) were submitted to false discovery rate (FDR) multiple comparison controls (Benjamini & Hochberg, 2000) to control for experiment-wise multiple comparisons (referred to as FDR-p). For the sake of brevity, only main effects and interactions involving Group are reported in the manuscript, while other results of interest are reported in the Supplementary Materials (Section 3), and the full details of all statistical analyses are reported in the Supplementary Materials (Section 2). However, we note here that some time windows of interest occurred prior to the presentation of T1 stimuli, in line with Slagter et al. (2009). These time windows were analysed because the results from Slagter et al. (2009) suggested differences in the meditation group in the synchronisation of neural activity to the distractor stimuli that were presented prior to T1, perhaps suggesting less reactivity to those stimuli in preparation for processing the target.
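The logic of the randomisation test and the global duration control described above can be sketched as follows (a conceptual illustration under simplified assumptions of our own — a two-group mean-difference statistic — not RAGU's code):

```python
import numpy as np

rng = np.random.default_rng(42)

def timepoint_p_values(group_a, group_b, n_perm=1000):
    """Permutation p-value for a group difference at each time point,
    using the absolute difference of group means as the statistic.
    Inputs have shape (subjects, time points)."""
    data = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    observed = np.abs(group_a.mean(axis=0) - group_b.mean(axis=0))
    exceed = np.zeros_like(observed)
    for _ in range(n_perm):
        perm = rng.permutation(data)                  # shuffle group labels
        stat = np.abs(perm[:n_a].mean(axis=0) - perm[n_a:].mean(axis=0))
        exceed += stat >= observed
    return (exceed + 1) / (n_perm + 1)

def longest_run(significant):
    """Longest run of consecutive significant time points. The duration
    control keeps only observed runs longer than the 95th percentile of
    such runs obtained from the randomised data."""
    best = current = 0
    for s in significant:
        current = current + 1 if s else 0
        best = max(best, current)
    return best
```

In RAGU proper, the per-timepoint statistic is the GFP/RMS or the global dissimilarity rather than a simple mean difference, and the run-length threshold is derived from the permuted datasets themselves.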
To test our hypotheses for ERPs (PH1, EH1, EH2, and EH3), global field power (GFP) and topographical analysis of variance (TANOVA) tests were averaged between 350 and 600 ms (the P3b period) (Polich, 1997) after T1 onset to make direct comparisons with Slagter et al. (2007). For this averaged activity, GFP and TANOVA tests were used to conduct repeated measures ANOVA design statistics, examining 2 Groups (meditators vs controls) × 2 Conditions (short and long interval). To test our exploratory hypotheses that differences might be present outside of this specific time window or might be present following T2 (EH1, EH2, and EH3), GFP and TANOVA tests were used to conduct the repeated measures ANOVA design statistics, examining 2 Groups (meditators vs controls) × 2 Conditions (short and long interval) × 2 Targets (T1 and T2) for event-related potential (ERP) data across the entire −100 to 800 ms interval after T1 onset.
To test our hypotheses for TPS (PH2 and EH4), we compared TPS between the Groups; root mean squared (RMS) and TANOVA tests were used to conduct repeated measures ANOVA design comparisons, examining 2 Groups (meditators vs controls) × 2 Conditions (short and long interval) for TPS data surrounding T2 onset. To make comparisons with Slagter et al. (2009), RMS and TANOVA tests were averaged within the 121 to 501 ms window (where Slagter et al., 2009 detected an effect that was maximal at electrodes FC6 and Fz) and the 309 to 558 ms window (where Slagter et al., 2009 detected an effect that was maximal at electrode T8) after the T2 stimuli. An additional exploratory analysis was performed on TPS data from −500 to 1500 ms around the stimuli, to determine if any effects were missed by the analysis focused only on T2. This analysis included T1 stimuli in a repeated measures ANOVA design examining 2 Groups (meditators vs controls) × 2 Conditions (short and long interval) × 2 Targets (T1 and T2).
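Phase synchronisation across trials is conventionally quantified as inter-trial phase clustering: the length of the mean unit phase vector across trials. The following minimal sketch (our illustration; the study's exact transform may differ) shows the idea:

```python
import numpy as np

def inter_trial_phase_clustering(phases):
    """Phase synchronisation across trials: the magnitude of the mean of
    unit-length phase vectors. Returns 1 for perfectly phase-locked trials
    and values near 0 for uniformly distributed phases.
    `phases`: array of shape (n_trials, n_times), in radians."""
    return np.abs(np.exp(1j * np.asarray(phases)).mean(axis=0))

# Phase-locked trials vs uniformly spread phases at a single time point.
locked = np.zeros((8, 1))                                       # identical phases
spread = np.linspace(0, 2 * np.pi, 8, endpoint=False)[:, None]  # evenly spread
print(inter_trial_phase_clustering(locked)[0],   # 1.0
      inter_trial_phase_clustering(spread)[0])   # ~0.0
```

The same quantity computed on theta-band or alpha-band phase estimates corresponds conceptually to the TPS and APS measures compared here.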
To test our hypotheses related to alpha-power and APS (PH3, PH4, EH5, and EH6), RMS and TANOVA tests were used to conduct repeated measures ANOVA design comparisons of alpha-power and APS (separately), examining 2 Groups (meditators vs controls) × 2 Conditions (short and long interval) for data averaged within a −31 to 160 ms period for alpha-power and within a −414 to 214 ms period for APS. Similar to the ERP and TPS tests, we also performed a whole epoch analysis from −500 to 1500 ms surrounding T1 onset to test for effects outside those reported by Slagter et al. (2009).
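For reference, single-channel oscillatory power at a target frequency such as 10 Hz alpha is commonly estimated by convolving the signal with a complex Morlet wavelet and squaring the magnitude of the result. A minimal sketch of this standard approach follows (not necessarily the exact transform used in this study):

```python
import numpy as np

def morlet_power(signal, fs, f0=10.0, n_cycles=5):
    """Time course of power at frequency f0 (Hz) estimated by complex
    Morlet wavelet convolution. `fs` is the sampling rate in Hz."""
    t = np.arange(-1.0, 1.0, 1.0 / fs)
    sigma = n_cycles / (2 * np.pi * f0)                 # Gaussian width in s
    wavelet = np.exp(2j * np.pi * f0 * t) * np.exp(-t ** 2 / (2 * sigma ** 2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))    # unit energy
    analytic = np.convolve(signal, wavelet, mode="same")
    return np.abs(analytic) ** 2

# A 10 Hz oscillation yields far more alpha power than a 40 Hz one.
fs = 250
time = np.arange(0, 2, 1 / fs)
alpha_sig = np.sin(2 * np.pi * 10 * time)
gamma_sig = np.sin(2 * np.pi * 40 * time)
print(morlet_power(alpha_sig, fs).mean() > morlet_power(gamma_sig, fs).mean())  # True
```

The phase angle of the same complex convolution output is what feeds the phase synchronisation measures.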
In addition to the RAGU analyses, traditional single electrode comparisons were conducted for comparison with previous research, using time windows and electrodes that showed significant results in Slagter et al. (2007, 2009). Methods and results for these comparisons are reported in the Supplementary Materials (Sections 2 and 3, respectively).
Between-group comparisons of the demographic and behavioral data were performed using SPSS v25 or the robust statistics WRS2 package from R where parametric assumptions were not met (Field and Wilcox, 2017). Independent samples t-tests compared age, BAI, BDI-II, FFMQ, and years of education. A three-way repeated measures ANOVA was planned to analyse the behavioral data, with Interval (short or long) and Target (T1 or T2) as within-subjects factors and Group (meditators vs controls) as the between-subjects factor. The dependent variable was AB accuracy, defined as the percentage of correctly responded to trials (T1 and T2 identified correctly). This tested hypothesis RH1, with post hoc tests planned to assess the specific hypothesis that meditators showed a reduced AB effect (defined by increased short interval T2 accuracy) if an interaction between Group, Target, and Interval were present. Where possible, Bayesian analyses were also performed using JASP (Love et al., 2019) to provide the strength of evidence for either the null or alternative hypotheses (for all of the behavioral, demographic, and EEG comparisons), and a small number of follow-up exploratory linear mixed models were used to test our explanations for significant results (described in full in the Supplementary Materials, Section 3). For these Bayesian analyses, Bayes factor (BF) values were provided to indicate the strength of evidence. BF10 indicates the strength of support for the alternative hypothesis, and BF01 the strength of support for the null hypothesis. BFincl indicates the strength of support for the positive hypothesis of an interaction (the support for the alternative hypothesis when the interaction was included in the model compared to when it was excluded), and BFexcl indicates the strength of support for the null hypothesis of an interaction (the support for the null hypothesis when the interaction was excluded from the model compared to when it was included).
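The relationships between these Bayes factor quantities can be made concrete with a small numerical sketch (the marginal likelihood values below are invented for illustration and are not the study's data; equal prior model probabilities are assumed):

```python
# Hypothetical marginal likelihoods for four candidate ANOVA models.
marginal_likelihood = {
    "null": 1.0,
    "Interval": 3.0,
    "Group": 2.0,
    "Group + Group x Interval": 4.5,
}

# BF10 compares one model directly against the null; BF01 is its reciprocal.
bf10 = marginal_likelihood["Group"] / marginal_likelihood["null"]   # 2.0
bf01 = 1.0 / bf10                                                   # 0.5

# BFincl averages over all models containing the effect of interest versus
# all models excluding it (valid under equal prior model probabilities);
# BFexcl is its reciprocal.
with_group = [v for k, v in marginal_likelihood.items() if "Group" in k]
without_group = [v for k, v in marginal_likelihood.items() if "Group" not in k]
bf_incl = (sum(with_group) / len(with_group)) / (sum(without_group) / len(without_group))
bf_excl = 1.0 / bf_incl
print(bf10, bf01, round(bf_incl, 3), round(bf_excl, 3))  # prints 2.0 0.5 1.625 0.615
```

This is why a BF01 of 6.520, for example, can be read directly as the data being 6.520 times more likely under the null model than under the alternative.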
ERP Comparisons
To test our first primary hypothesis (PH1) that meditators would show a smaller allocation of attention-related neural resources to T1, reflected by a lower amplitude of the P3b neural response strength to T1 stimuli in meditators compared to controls, the GFP test was performed on the P3b time window (from 350 to 600 ms following T1, consistent with Slagter et al., 2007). No difference was detected for the main effect of Group in GFP averaged across the P3b period (p = 0.798, FDR-p = 0.798, ηp² = 0.001, see Table 2 and Fig. 2), nor was there a significant interaction between Group and Interval (p = 0.732, ηp² = 0.004). To test the strength of evidence for the null hypothesis, averaged P3b GFP values from within the time window of interest (350 to 600 ms) were tested with a Bayesian repeated measures ANOVA. This analysis showed that the null hypothesis was more likely than the alternative hypothesis for both the Group factor and the interaction between Group and Interval. Comparing models including Group and a Group by Interval interaction to the model only including Interval provided BF01 = 6.520, while comparing the main effect of Group independently to equivalent models stripped of the Group effect and excluding higher-order interactions provided BFexcl = 1.835, and for the interaction between Group and Interval, BFexcl = 3.553. Our single electrode analyses, which focused on time windows and electrodes reported to be significant by Slagter et al. (2007), showed similarly null results (Supplementary Materials, Section 3b).
As mentioned in our hypotheses, while Slagter et al. (2007) focused on the P3b in response to T1 only, our view is that it is sensible to hypothesise that effects might occur in components other than the P3b, that ERP amplitudes time-locked to T2 might be increased in meditators (EH1), or that the relationship between ERP amplitudes time-locked to T1 and T2 might differ in meditators (EH2). To test these exploratory hypotheses (EH1 and EH2), a GFP test was performed across the entire epoch (−100 to 800 ms), including all conditions (both T1 and T2 targets and short/long intervals). This test showed a significant interaction between Group and Target from 214 to 258 ms following the stimuli (averaged across this time window: p = 0.002, ηp² = 0.0914, see Fig. 3), which survived multiple comparison controls for duration (global duration control = 41 ms). This effect falls within the typical posterior-N2 time window. Within this interaction, controls showed significantly higher GFP amplitudes in response to T1 compared to T2 (p = 0.022, ηp² = 0.1657), while meditators showed no difference between T1 and T2 (p = 0.279, ηp² = 0.0403). When Group comparisons were restricted to short interval T1 stimuli only (averaged within the 214 to 258 ms window), meditators showed significantly lower posterior-N2 GFP amplitudes than controls (p = 0.029, ηp² = 0.0784, see Figs. 3 and 4). To determine the strength of evidence for this significant interaction between Group and Target, averaged GFP values for each participant across both short and long intervals were calculated for T1 and T2 targets separately and submitted to a repeated measures Bayesian ANOVA design. When comparing the interaction effect against models that did not include the interaction effect, the Bayes factor showed moderate evidence for the effect (BFincl = 3.411). As such, while hypothesis EH1 was not supported (as meditators did not show larger amplitude ERPs following T2 stimuli), hypothesis EH2 was supported, as meditators showed a more equal distribution of ERP amplitudes between T1 and T2 than controls (although not within the P3b window). Finally, in our test of the exploratory hypothesis that the distribution of ERPs would differ between meditators and controls (EH3), the TANOVA showed no significant main effect of Group or interaction involving Group that exceeded multiple comparison controls for the number of comparisons across the epoch (all p > 0.05).
In the Supplementary Materials, we report exploratory linear mixed models and generalised linear mixed models examining the potential associations between single trial GFP values within the posterior-N2 effect and whether single trials were responded to correctly, to assess potential explanations for this result (Supplementary Materials, Section 3b). In brief, these exploratory analyses showed that correct identification of short interval T2 stimuli was associated with lower posterior-N2 GFP time-locked to T1 (similar to the pattern shown by the meditators) (Supplementary Materials, Fig. S3). This suggests that when fewer attentional resources were devoted to processing T1, T2 could be more accurately identified. Additionally, in the single trial analysis, the relationship between T2 posterior-N2 GFP, trial number, and response accuracy differed between the Groups. To begin with, both meditators and controls were less likely to identify T2 stimuli if their T2 posterior-N2 GFP was high. Controls showed the same pattern throughout the task. However, by the end of the task, this pattern reversed for the meditators, who were more likely to identify T2 targets when they showed high posterior-N2 GFP values.
Theta Phase Synchronisation (TPS) Comparisons
The TCT for TPS showed consistent neural activity across groups and conditions from −280 ms across the first 600 ms after stimulus presentation, with TCT inconsistency in controls locked to the T1 stimuli prior to this time that did not overlap with any of our significant effects in the RMS test, but did overlap with some of the significant effects within the TANOVA tests. This demonstrated that our RMS TPS results were not driven simply by inconsistent topographical activation within a single group or condition (Supplementary Materials, Fig. S5). For our test of hypothesis PH2, that meditators would show higher TPS following short interval T2, RMS TPS was averaged within short interval T2 trials across the 121 to 501 ms window for direct comparison with Slagter et al. (2009) (who found an effect within this window, maximal at Fz and FC6). No significant difference was detected, indicating that meditators did not show higher TPS following short interval T2 stimuli within the 121 to 501 ms window (p = 0.086, FDR-p = 0.173, ηp² = 0.0482, BF01 = 1.104). Similarly, for the 309 to 558 ms period (where Slagter et al., 2009 found an effect that was maximal at electrode T8), no significant difference was detected (p = 0.118, ηp² = 0.0418, BF01 = 1.373).
However, when all conditions and time points were included in an exploratory analysis of RMS TPS, a significant interaction between Group, Target, and Interval was present from 117 to 295 ms (averaged across the significant window: p = 0.0002, ηp² = 0.2358, Fig. 5). This effect lasted longer than the duration controls for multiple comparisons over time used by Slagter et al. (2009) (175.1 ms). When RMS TPS was averaged within the significant window (117 to 295 ms), Bayesian analysis of the interaction indicated strong support for the alternative hypothesis (BFincl = 41.612), and the model including this Group, Target, and Interval interaction effect as well as the nested comparisons was 5.502e+9 times more likely than the null model (BF10 = 5.502e+9). In assessing the cause of the 3-way interaction with reduced ANOVA designs (where data was averaged across one of the original factors prior to re-analysis to enable easier interpretation), our results indicated it was driven by two features. Firstly, controls showed larger RMS TPS during long interval T2 trials than short interval T2 trials, while meditators showed very little difference in RMS TPS between the short and long interval conditions (p = 0.0094, ηp² = 0.1718, BFincl = 29.574). Secondly, the interaction was also driven by an effect where meditators showed a more even distribution of RMS TPS between T1 and T2, in comparison to controls, who showed higher RMS TPS values to T1 compared to T2 (short interval T1 vs short interval T2) (p = 0.0022, ηp² = 0.1626, BFincl = 25.192). However, counter to the results of Slagter et al. (2009), the interaction was not driven specifically by a difference between Groups in short interval T2 TPS (averaged within the 117 to 295 ms window showing the significant interaction, there was no significant difference between the groups in short interval T2 RMS TPS, p = 0.136, ηp² = 0.0373). Single electrode analyses replicating the electrode and window of interest used by Slagter et al. (2009) showed the same pattern of results as the effect we detected within the 117 to 295 ms window, with Bayesian evidence supporting the alternative hypothesis for the interaction between Group and Interval for T2 stimuli (BFincl = 4.621 within the time window used by Slagter et al. (2009), and BFincl = 35.908 when restricted to the significant time period detected in our exploratory analysis, reported in full in the Supplementary Materials, Fig. S6).

Fig. 4 Averaged event-related potentials (ERPs) averaged within fronto-central (top) (F1 Fz F2 FC1 FCz FC2) and parietal-occipital (bottom) electrodes (PO7 PO5 PO6 PO8 O1 Oz O2), time-locked to T1, with the significant period marked (red dashed lines). Note that our analyses were based on the GFP, so while the averaged electrodes demonstrate the difference (with N2 ERPs showing smaller amplitudes in meditators regardless of polarity), our significance tests were not based on these values. Note also the oscillatory pattern in the alpha frequency, synchronised to the stimulus presentation rate.

Fig. 5 Root mean squared (RMS) comparisons of Group, Target, and Interval for theta phase synchronisation. Left: p-graphs for the main effect of Group and interactions involving Group. The black line reflects the p-value, white areas reflect significant time points, and the light blue area indicates the effect that passed the duration control used by Slagter et al. (2009). Right: Theta phase synchronisation RMS showing the significant interaction of interest between Group, Interval, and Target from the averaged activity within the 117 to 295 ms window (p = 0.004, ηp² = 0.2526, BFincl = 41.612).
To assess whether these differences in TPS might have behavioral relevance, we performed Pearson's correlations between TPS and percentage correct from short interval T2 trials across both groups together. These results indicated that TPS from all conditions correlated with short interval T2 accuracy (statistics reported in full in Table 3, and scatterplots for these comparisons can be viewed in Fig. 6). We also conducted the same correlations within each group separately. While these separate within-group correlations are lower in statistical power, they suggest that TPS correlates with short interval T2 accuracy more strongly in the control group than in the meditator group, and that within the control group, TPS time-locked to T1 correlates more strongly with short interval T2 accuracy (see Table 3). However, the 95% confidence intervals of the Pearson r values (based on 1000 bootstrap replications) overlapped between the two groups, so we cannot be confident that the difference in correlation strength between the groups represented a statistical difference.
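The bootstrapped confidence intervals for these correlation strengths can be obtained as in the following sketch (a generic percentile bootstrap; the function and variable names are our own):

```python
import numpy as np

rng = np.random.default_rng(7)

def pearson_r(x, y):
    """Pearson correlation coefficient."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

def bootstrap_ci(x, y, n_boot=1000, alpha=0.05):
    """Percentile bootstrap CI for Pearson's r, resampling participants
    with replacement (1000 replications, as in the analysis above)."""
    n = len(x)
    replicates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)      # resample participant indices
        replicates.append(pearson_r(x[idx], y[idx]))
    lo, hi = np.percentile(replicates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```

When the two groups' intervals overlap, as found here, the data do not license a conclusion that the correlation strengths differ between groups.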
There was also an interaction between Group and Interval from 455 to 560 ms (averaged across the significant window: p = 0.0218, ηp² = 0.1520). However, this period did not survive the 175.1 ms minimum duration control used by Slagter et al. (2009). No other main effect or interaction involving Group was significant for any part of the epoch (all p > 0.10).
With regard to our exploratory hypothesis that the scalp distribution of TPS would differ between Groups (EH4), the TANOVA including all conditions and all time points showed an interaction between Group and Target during the presentation of the distractor stimuli from −385 to −100 ms prior to T1, which lasted longer than the duration control for multiple comparisons (175.1 ms) used by Slagter et al. (2009). When averaged across the significant window, the statistics were as follows: p = 0.001, ηp² = 0.0609 (Fig. 7). When the interaction was explored by averaging TPS within the significant period and performing TANOVA comparisons between the groups for T1 and T2 stimuli separately, the effect was shown to be driven by a difference in TPS distribution between the Groups prior to T1 stimuli (p = 0.018, ηp² = 0.0336), with meditators showing more TPS in occipital electrodes (meditator minus control t-max at Oz = 2.908) and less TPS in right frontal electrodes (meditator minus control t-min at F6 = −3.384). Groups did not differ in TPS locked to T2 stimuli (p = 0.1358). It is worth noting that the period that showed the significant result overlapped with a period of topographical inconsistency in T1-locked TPS in the control group (with inconsistent topographical distributions across the control group prior to −280 ms). This suggests that at least part of the interaction may have been driven by an inconsistent topographical pattern in the control group (rather than a between-group difference during that time period). No other main effects or interactions involving Group were present at any time point in the epoch (all p > 0.05).
Alpha-power Comparisons
The TCT for RMS alpha-power showed consistent neural activity across all groups and conditions from −400 ms until the end of the epoch, indicating our alpha-power results were not driven simply by inconsistent topographical activation within a single group or condition (details are reported in the Supplementary Materials, Fig. S7). When RMS alpha-power was averaged across the −31 to 160 ms window for direct comparison with Slagter et al. (2009) and the test of our third primary hypothesis (PH3 - that meditators would show greater alpha-power around T1 presentation), no significant difference was detected (p = 0.2976, FDR-p = 0.3968, ηp² = 0.0189, BF01 = 2.379). The exploratory RMS test for alpha-power, including all time points within the epoch time-locked to T1 stimuli, showed a significant main effect of Group from 475 to 685 ms, in which meditators showed less alpha-power (averaged within this window: p = 0.023, ηp² = 0.0844, see Fig. 8). This effect was longer than the duration control criterion implemented by Slagter et al. (2009) (83.5 ms for alpha). No interaction was detected between Interval and Group in RMS alpha-power that lasted longer than the duration controls used by Slagter et al. (2009), nor was there any Group main effect or interaction between Group and Interval in the alpha-power TANOVA (all p > 0.05). This provided a null result for hypothesis EH5 (that there would be differences between the groups in the scalp distribution of alpha-power).
To explore potential explanations for these results, we performed a number of additional tests of the pattern of relationships between trial number, single trial accuracy (to assess potential learning across the task), and alpha-power within this significant period (these are reported in the Supplementary Materials, Section 3e). In brief, the baseline-corrected RMS alpha-power within the 475 to 685 ms window decreased across trials as participants completed the task, which was concurrent with improved performance across the task, suggesting participants may have been learning attention-based strategies to enable improved short interval T2 detection. However, across all participants, averaged baseline-corrected alpha-power RMS within the 475 to 685 ms window after T1 did not correlate with the accuracy of short interval T2 detection. Further, an exploratory linear mixed model indicated that incorrect responses were associated with slightly, yet significantly, lower short interval RMS alpha-power than correct responses (Supplementary Materials, Fig. S11).
Fig. 6 Scatterplots depicting the correlations between root mean squared (RMS) theta phase synchronisation (TPS) averaged within the significant window (117 to 295 ms) from each condition and accuracy at detecting the second target stimuli (T2) in short interval trials. Note the common pattern across all groups and conditions. The grey and light green areas reflect relative variance from the line of best fit at each point on the x-axis.

However, lower short interval trial RMS alpha-power within a later 685 to 1050 ms window was strongly associated with correct responses (Fig. S12). Short interval alpha-power RMS was also strongly correlated between these two periods (between the 475 to 685 ms period and the 685 to 1050 ms period). This relationship was stronger within incorrect trials than for correct trials, and long interval RMS alpha-power increased in the later 685 to 1050 ms window compared to the earlier period in both groups. This suggests that lower short interval RMS alpha-power in the later 685 to 1050 ms window was required to identify the T2 stimuli. As such, perhaps the lower RMS alpha-power in the earlier period might have been a compensatory mechanism on trials when participants noticed their attention waning, reflecting an attempt to regulate alpha-power in the later period, during which low alpha-power was more important for stimulus processing.

Fig. 7 Topographical analysis of variance (TANOVA) test results for the theta phase synchronisation (TPS). Left: p-graphs for the main effect of Group and each interaction involving Group. The black line reflects the p-value, the white areas reflect significant time points, and the light blue periods reflect windows where the effect passed the duration control used by Slagter et al. (2009). Right top: a multi-dimensional scaling graph depicting the differences between each group's TPS topographies in response to the first (T1) and second (T2) target stimuli, averaged during the window of the significant Group × Target interaction (−390 to −85 ms around the target). Within the multi-dimensional scaling graph, the topography maps indicate the ends of the eigenvector spectrum on each of the x- and y-axes, and the points on the graph indicate where each group and condition's mean topography lay on that spectrum (for both the x- and y-axes) relative to the other points in the graph (note that the topographies along the x- and y-axes do not represent the actual topography for a group/condition). As such, the interaction between Group and Target in topographical activation is demonstrated by the graph. Right bottom: the t-map for the meditator minus control theta phase synchronisation topography for T1 stimuli (averaged from −390 to −85 ms around T1), after normalisation for overall amplitude (so that all individuals had a GFP = 1). Red indicates areas where meditators showed higher values, blue indicates areas where controls showed higher values (indicating that topographical differences were present, without suggesting that TPS was higher in the control group in a specific electrode, due to the normalisation for amplitude).
Alpha Phase Synchronisation Comparisons
In testing our fourth primary hypothesis (PH4 - that APS would be reduced in the meditation group during the presentation of the distractor stimuli prior to T1 stimuli), we conducted an RMS test of APS time-locked to T1 stimuli, averaged across the period where distractor stimuli were presented prior to T1 (within the −414 to −214 ms window, for comparison with Slagter et al., 2009). Our results indicated a non-significant main effect of Group, where meditators showed higher APS, which is in the opposite direction to the findings provided by Slagter et al. (2009) (p = 0.061, FDR-p = 0.173, ηp² = 0.0586). Additionally, our exploratory analysis of APS across the entire epoch showed a significant main effect of Group from −258 to −90 ms, and from 288 to 1500 ms (both of which survived the duration controls of 83.5 ms used by Slagter et al., 2009; see Fig. 9). Within both the shorter pre-stimulus and longer post-stimulus periods, meditators showed larger RMS APS (averaged within the −258 to −90 ms period: p = 0.031, ηp² = 0.072, BFincl = 2.089; averaged within the 288 to 1500 ms period: p = 0.018, ηp² = 0.092, BFincl = 1.752, with the best model including the main effect of Group and the main effect of Interval, BF10 = 42.23 for the averaged Interval from 288 to 1500 ms). Our results also indicated a brief significant interaction between Group and Interval in APS RMS (706 to 786 ms), which did not pass the duration controls used by Slagter et al. (2009).

Fig. 9 (caption, in part) Top right and middle: the p-graphs for the main effect of Interval (orange, middle left), Group (blue), and interaction between Interval and Group (yellow, middle right). The black line reflects the p-value, white areas reflect significant time points, and light blue periods reflect windows where the effect passed the duration controls used by Slagter et al. (2009). Bottom: Mean root mean squared alpha phase synchronisation (RMS APS) from each group in response to T1 long (LIT1) and short (SIT1) interval trials, averaged within the significant window from the RMS APS test.
With regard to the TANOVA test of APS (which tested exploratory hypothesis EH6 - that meditators would show a different scalp distribution of APS), a significant Group main effect was detected from 990 to 1500 ms, where meditators showed higher APS values in fronto-central and parieto-occipital electrodes and lower APS values in lateral central electrodes (p = 0.007, ηp² = 0.0408, with a meditator minus control t-max of 3.417 at PO5 and t-min of −3.035 at C5, see Fig. 10). This effect passed the duration control (83.5 ms) used by Slagter et al. (2009). There was also a Group main effect in the TANOVA from −244 to −2 ms (p = 0.030, ηp² = 0.0329) and a brief significant interaction between Group and Interval in the APS TANOVA (150 to 280 ms, p = 0.011, ηp² = 0.0364), both of which passed the duration control (83.5 ms) used by Slagter et al. (2009).

Fig. 10 (caption, in part) The black line reflects the p-value, white areas reflect significant time points, and light blue periods reflect windows where the effect passed the duration controls used by Slagter et al. (2009). Bottom: Topography maps for APS averaged within the 990 to 1500 ms period for each group, and the t-map of meditator APS minus control APS after normalisation for overall amplitude (so that all individuals had a GFP = 1). Red indicates areas where meditators showed higher values, blue indicates areas where controls showed higher values (indicating that topographical differences were present, without suggesting that APS was higher in the control group in a specific electrode, due to the normalisation for amplitude).
RMS APS averaged within the 282 to 1500 ms period significantly correlated with percentage correct for short interval T2 trials, in both short interval and long interval trials - for the correlation between APS RMS during short interval T1 trials and T2 short interval percentage correct: Pearson's r = 0.314, p = 0.014, BF10 = 3.093, and for the correlation between APS RMS during long interval T1 trials and T2 short interval percentage correct: Pearson's r = 0.307, p = 0.016, BF10 = 2.717. Scatterplots depicting these correlations can be viewed in the Supplementary Materials (Fig. S14). These correlations may indicate that participants who synchronised their alpha oscillations more consistently with the stimulus stream (which was presented at 10 Hz, within the alpha frequency) were better able to perceive and correctly identify the T2 stimuli. It is worth noting that the T2 stimuli in short interval trials were presented at 300 ms, just after the point at which the meditation group showed higher alpha synchronisation to the stimuli.
Behavioural and EEG Epoch Inclusion Comparisons
Levene's test indicated the assumption of equality of variances was met for all conditions within the analysis of the behavioral data (all p > 0.15). However, the Shapiro-Wilk test indicated significant deviations from normality for 8/10 of the variables included in the Condition × Group combinations, so robust statistics were implemented in R, using the mixed ANOVA (bwtrim function) from the WRS2 package (Field and Wilcox, 2017). Violations of the assumptions of traditional parametric ANOVAs (including normality violations) do not affect these robust statistics. However, only Group × Condition designs are currently available (rather than Group × Condition × Condition), so this analysis was restricted to a Group × Interval comparison for T2 responses only (as the primary comparison of interest), and the originally planned parametric statistical analyses are reported in the Supplementary Materials (Section 3a). Means and standard deviations, as well as both parametric and robust statistics, are presented in Table 4, and the data can be viewed in Fig. 11.
In testing our first replication hypothesis (RH1 - that our meditation group would show a reduced AB effect, with more correct responses to short interval T2 stimuli), the robust statistics showed no main effect of Group for percent correct in response to T2 (value(1, 33.997) = 0.325, p = 0.572) and no interaction between Group and Interval (value(1, 33.898) = 0.220, p = 0.642). The parametric statistics showed the same pattern of null results. The Bayesian statistical model that included Group or interactions involving Group as a factor was 259.326 times less likely than the model that only included Target, Interval, and the interaction between Target and Interval (BF01 = 259.326). These results suggest it is highly unlikely that the meditation group showed a higher percentage correct in any condition compared to the control group.
No main effects or interactions involving Group were significant for the number of epochs provided by each participant for each Condition (all p > 0.1). The TCT for the ERP data also showed mostly consistent neural activity across Groups and Conditions, with a brief period of inconsistency in the pattern of topographical activation within some Group × Condition combinations that did not overlap with any of our significant effects. These two tests indicate our ERP results were not driven simply by differences in the number of epochs included in ERP averages or inconsistent topographical activation within a single group or condition (details of these tests are reported in the Supplementary Materials, Table S1 and Fig. S1).
Discussion
This study aimed to comprehensively examine whether neurophysiological markers of attention differed between community-meditators and non-meditator controls. In our sample of meditators with typical daily MM practice, our results did not show support for our primary hypotheses regarding the neurophysiological markers obtained from within our time windows of interest (the P3b, TPS, alpha-power, and APS, with the windows of interest overlapping with the significant effects reported by Slagter et al., 2007, 2009). No differences were found between meditators and non-meditators in the amplitude or distribution of the P3b neural response following T1 or T2 stimuli in the attentional blink task. Nor were there any differences between meditators and non-meditators in TPS, alpha-power, or APS within our a priori selected time windows of interest. Frequentist statistics provided null results, and Bayesian statistics provided weak to moderate evidence against these primary hypotheses, suggesting we can be slightly to moderately confident there was no difference between the groups in TPS, alpha-power, or APS within our a priori selected time windows of interest.
However, our exploratory analyses (which included all time points within the epochs around all T1/T2 and short/long interval conditions) did show significant effects, which were further supported by very strong Bayesian evidence in favour of the alternative hypothesis. In particular, meditators showed more equal posterior-N2 amplitudes across T1 and T2 stimuli than non-meditators (who showed larger posterior-N2 amplitudes to T1 than T2). Similarly, meditators showed more equal TPS values between the first and second target in short interval trials, and meditators showed similar TPS values to T2 in both short and long interval trials, in comparison to controls, who showed higher TPS following the first target and higher TPS to T2 in long interval compared to short interval trials. Meditators also showed lower alpha-power than controls during a period where short interval T2 stimuli would be processed, and increased APS to T1 stimuli. These effects are aligned with theoretical perspectives on the effects of mindfulness on attention function and with the explanation that Slagter et al. (2007, 2009) provided for their results: that meditators distribute their neural activity more equally across stimuli, rather than biasing responses towards T1 (although our results did not align with the time windows of significant results reported by Slagter et al., 2007, 2009). Each pattern of neural activity shown by the meditation group was also associated with higher performance, either correlated with percentage correct across all participants, or associated with correct rather than incorrect responses in single trial analyses, suggesting the activity shown by meditators might reflect functionally relevant attentional mechanisms. However, unexpectedly, our analyses of behavioral performance provided non-significant frequentist results, and our results showed strong overall Bayesian evidence against any main effect or interaction that involved Group. Combined with our null results for our primary analyses, this suggests caution is warranted in the interpretation of our results, and conclusions drawn from our exploratory analysis should be considered tentative. We discuss the details and implications of these findings in the following.
Our primary analysis did not detect a difference in the P3b following T1 stimuli in our sample of community-meditators. However, our exploratory analyses showed that the meditator group generated an equal amplitude posterior-N2 response across T1 and T2 stimuli, while controls showed higher posterior-N2 responses to T1 stimuli than T2 stimuli. As such, while our study did not replicate the findings reported by Slagter et al. (2007) with regard to the P3b, our result is conceptually similar, suggesting that meditators distributed attentional resources more equally across the two stimuli. While a frontally distributed N2 is often detected in tasks requiring cognitive control (Folstein & Van Petten, 2008), our study indicated the AB task generated a posterior-N2 instead, similar to previous research using the AB task (Zivony et al., 2018). Previous research in healthy control individuals has also demonstrated a reduced posterior-N2 to T2 stimuli following short interval trials, which has been suggested to reflect a lack of attentional engagement to enable stimulus processing (Zivony et al., 2018). As such, our results suggest that meditators more equally distribute the engagement of attentional resources across the two AB stimuli. In support of this, an exploratory single trial analysis of the posterior-N2 GFP showed that correct identification of short interval T2 stimuli was associated with a smaller posterior-N2 GFP time-locked to T1, suggesting that when fewer attentional resources were devoted to processing T1, T2 could be more accurately identified. As such, although the meditation group did not show higher task performance overall, their neural activity averaged within each condition showed the same pattern that was associated with higher performance.
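The GFP measure used in this single-trial analysis is simply the spatial standard deviation of the average-referenced voltage across electrodes at each time point. A minimal sketch with simulated data (the electrode and sample counts are hypothetical):

```python
import numpy as np

def global_field_potential(eeg):
    """GFP: spatial standard deviation across electrodes at each time point.

    eeg: array of shape (n_electrodes, n_times).
    """
    eeg = np.asarray(eeg, float)
    avg_ref = eeg - eeg.mean(axis=0, keepdims=True)   # re-reference to the average
    return np.sqrt(np.mean(avg_ref ** 2, axis=0))     # one value per time point

rng = np.random.default_rng(1)
n_electrodes, n_times = 64, 500
data = rng.normal(0.0, 1.0, size=(n_electrodes, n_times))
gfp = global_field_potential(data)
print(gfp.shape)   # (n_times,)
```

Because GFP collapses the full montage into a single reference-free time course, it gives one amplitude value per trial and time point, which is convenient for single-trial comparisons like the one described above.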
It is not clear, however, why our study detected differences in the posterior-N2 rather than the P3b, given that Slagter et al. (2007) reported a difference in the P3b. This inconsistency might be explained by a progressive change of neural activity during the AB task with more intensive meditation experience. The sample tested by Slagter et al. (2007) underwent a 3-month intensive retreat, while our participants were experienced meditating members of the lay public (although with an average of 6 years of meditation experience, and an average of approximately 7 hr per week of practice at the time of the study). However, if the difference in meditation experience explains the conflict between our finding of a difference in the posterior-N2 and the P3b difference reported by Slagter et al. (2007), it is not clear why the less experienced meditators in our study would show altered T1 processing at a shorter delay following T1 presentation than the more experienced meditators tested by Slagter et al. (2007). Despite the ambiguities in interpreting our results, the characterisation of meditators as showing a more equal distribution of posterior-N2 amplitudes between rapidly presented stimuli that compete for attentional resources aligns with previous research demonstrating that mindfulness enhances the distribution of attentional resources (Bailey et al., 2018; Slagter et al., 2007, 2009; Wang et al., 2020).
Our primary analysis of TPS to short interval T2 trials showed no difference between meditators and controls. In contrast, our exploratory test of TPS showed strong Bayesian evidence of an interaction between TPS and long/short interval trial condition. This interaction indicated that while controls showed higher TPS to T2 for long interval trials than short interval trials, meditators showed similar TPS to T2 for both short and long interval trials. Strong Bayesian evidence also indicated that meditators showed a more even distribution of TPS between the first and second target in short interval trials, in comparison to controls, who showed higher TPS following the first target. Multiple validation checks of this test demonstrated the same result. These validation checks included single electrode analyses averaged within our a priori time window of interest, and a repeat of the test that excluded participants who provided fewer epochs (ensuring the test possessed maximal validity). These results align with the interpretation proposed by Slagter et al. (2009) that theta synchronisation reflects increased consistency of neural processes, allowing increased attention as a result of meditation training. Our results also support this interpretation, indicating that theta synchronisation was higher following T2 in long interval trials than short interval trials (suggesting theta synchronisation to T2 is disrupted by T1 processing in short interval trials) and that higher theta synchronisation was related to performance.
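Phase synchronisation measures such as TPS are commonly computed as inter-trial phase clustering: band-pass filter each trial in the theta band, extract the instantaneous phase, and take the magnitude of the mean unit phase vector across trials at each time point. The sketch below uses simulated single-electrode trials; the filter settings, trial count, and sampling rate are our assumptions, not the study's exact pipeline:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def phase_synchronisation(trials, fs, band=(4.0, 8.0)):
    """Inter-trial phase clustering within a frequency band.

    trials: array (n_trials, n_times) from a single electrode.
    Returns (n_times,) values in [0, 1]; 1 = identical phase on every trial.
    """
    sos = butter(4, band, btype="band", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, trials, axis=1)        # theta band-pass
    phase = np.angle(hilbert(filtered, axis=1))        # instantaneous phase
    return np.abs(np.mean(np.exp(1j * phase), axis=0))

fs = 500.0
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(2)

# Phase-locked trials: the same 6 Hz phase on every trial, plus noise.
locked = np.sin(2 * np.pi * 6 * t) + 0.3 * rng.normal(size=(40, t.size))
# Jittered trials: a random 6 Hz phase on each trial.
offsets = rng.uniform(0, 2 * np.pi, size=(40, 1))
jittered = np.sin(2 * np.pi * 6 * t + offsets) + 0.3 * rng.normal(size=(40, t.size))

itpc_locked = phase_synchronisation(locked, fs).mean()
itpc_jittered = phase_synchronisation(jittered, fs).mean()
print(f"phase-locked ITPC ~{itpc_locked:.2f}, jittered ITPC ~{itpc_jittered:.2f}")
```

Stimulus-locked trials produce values near 1, while trials with random phase produce values near zero, which is why higher TPS is read as more consistent (less temporally jittered) stimulus processing across trials.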
However, despite the association between increased theta synchronisation and performance and our finding of higher TPS in our meditation group, we found evidence against increased AB task accuracy in our meditation group. Our significant result also only overlapped with the first half of the window in which Slagter et al. (2009) detected increased TPS in their meditators after the retreat, and unlike Slagter et al. (2009), our TPS result was not present when the analysis was focused specifically on the difference between meditators and controls in TPS following short interval T2 trials. This may suggest that while typical community-meditation is associated with an effect on theta synchronisation attentional mechanisms, the theta synchronisation after stimulus presentation is not as prolonged as in post-intensive-retreat meditators. Additionally, the effect may be weaker, only appearing relative to the non-short interval T2 conditions (in which theta synchronisation is perhaps less vital for task performance than it is in the commonly attentional blinked short interval T2 condition). However, the more equal distribution of TPS to short interval T2 stimuli in meditators in our study may suggest that the meditation group is distributing limited attentional resources to better encode the T2 stimuli, as suggested by Slagter et al. (2009). The efficacy of this neural strategy seems to be reflected in the correlation between higher TPS and higher accuracy in identifying short interval T2 stimuli. However, when correlations between TPS and performance were conducted within each group separately, only the correlations between TPS locked to T1 stimuli and short interval T2 accuracy remained significant. Additionally, these correlations were only significant within the control group. As such, it may be that TPS reflects a general mechanism enabling attentional focus on the task in the control group (with higher TPS to T1 reflecting an increase in overall attentional focus on the task, rather than accurate identification of T2 depending on TPS specifically locked to T2). In contrast, the relationship between TPS to a single target stimulus and performance in the meditation group may have been weakened, perhaps due to an alteration in the relationship between TPS to both stimuli (with meditators showing a more equal distribution of TPS across both T1 and T2 stimuli), or the influence of the posterior-N2 and alpha activity differences in the meditation group. As such, the functional interpretation of this result is not clear, more research is required to elucidate the finding, and the result should be interpreted with caution; we note that these within-group correlations had reduced statistical power compared to the correlations across both groups, and that the confidence intervals for the correlation strengths from the two groups overlapped.
Our exploratory analysis of the distribution of TPS also indicated that meditators showed more TPS in occipital electrodes prior to T1 stimuli than controls. There was also a more consistent topographical distribution of activity within the meditation group than within the control group, perhaps indicating a consistent synchronisation of oscillations to the target stream in a functionally relevant brain region in preparation for the detection of the relevant stimuli. Similar to our findings for the posterior-N2, the pattern whereby meditators showed a more equal distribution of theta activity between rapidly presented stimuli that compete for attentional resources provides further support for research indicating that mindfulness enhances the functional allocation of attentional resources (Bailey et al., 2018; Slagter et al., 2007, 2009; Wang et al., 2020). However, if this interpretation is correct, it is not clear why the meditation group did not show higher accuracy than the control group. As such, our exploratory results should be viewed with caution and require replication. It may be that ultimately research will show there is no significant difference in TPS between meditators and non-meditators.
The current study did not find a significant difference in our primary analyses focused on specific time windows within which we analysed alpha-power and alpha phase synchronisation (with time windows of interest derived from Slagter et al., 2009). However, in our exploratory analysis, the meditation group showed a larger reduction in the level of ongoing alpha-power from 475 to 685 ms following T1 stimuli (relative to the alpha-power across the rest of the epoch). Higher alpha-power has been associated with the inhibition of non-relevant brain regions during attention tasks, with the suggestion that this allows the brain to prioritise processing in brain regions that are relevant to the task, without the relevant brain regions being "distracted" by processing in non-relevant regions (Klimesch et al., 2007). In contrast, lower alpha-power is found in brain regions where active processing is required to complete the task, such that alpha-power can be increased to inhibit processing or decreased to enable processing in specific brain regions (Klimesch et al., 2007). In support of this interpretation of the function of alpha-power, previous research has shown higher levels of brain region-specific alpha-power modulation in experienced meditators when attention is required to either tactile oddball or visual working memory stimuli (Wang et al., 2020). Results in that study indicated that alpha-power increased or decreased in specific task-relevant regions dependent on the specific task demands, and that meditators produced stronger task-relevant increases or decreases (Wang et al., 2020). The results also indicated that, alongside the differences in alpha-power, meditators performed the task more accurately (Wang et al., 2020). The current study provides further support for the interpretation of alpha as an inhibitory mechanism, with alpha-power remaining high during distractor stimuli presentation but decreasing (releasing inhibition) earlier in short interval trials, in alignment with short interval T2 processing, and decreasing later in long interval trials, in alignment with long interval T2 processing (see Figs. S8 and S9 in the Supplementary Materials Section 3e for a complete explanation and evidence in support of this point). This decrease in alpha-power during short interval T2 stimuli processing, and increase in alpha within long interval trials during the same time period, likely reflects a "gating" mechanism. In particular, the decrease in alpha power might reflect a release of inhibition to process target stimuli, while the increase in alpha power might reflect an increase of inhibition to reduce distractor processing. Indeed, lower alpha-power RMS within a 685 to 1050 ms window was strongly associated with short interval T2 correct responses (Supplementary Materials, Fig. S12).
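An alpha-power RMS time course of the kind analysed here can be sketched as the RMS of the alpha-band Hilbert envelope within a chosen window. Everything below (sampling rate, window times, amplitudes, the simulated "suppression") is illustrative only, not the study's parameters:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def alpha_power_rms(signal, fs, window, band=(8.0, 13.0)):
    """RMS of the alpha-band amplitude envelope within a time window.

    signal: 1-D array; window: (start_s, end_s) relative to signal onset.
    """
    sos = butter(4, band, btype="band", fs=fs, output="sos")
    envelope = np.abs(hilbert(sosfiltfilt(sos, signal)))   # alpha envelope
    i0, i1 = int(window[0] * fs), int(window[1] * fs)
    return np.sqrt(np.mean(envelope[i0:i1] ** 2))

fs = 500.0
t = np.arange(0, 2.0, 1 / fs)
# Hypothetical epoch: 10 Hz alpha that is suppressed ("desynchronised") in the
# second half, mimicking a release of inhibition to enable target processing.
amplitude = np.where(t < 1.0, 1.0, 0.3)
epoch = amplitude * np.sin(2 * np.pi * 10 * t)

early_rms = alpha_power_rms(epoch, fs, (0.2, 0.8))
late_rms = alpha_power_rms(epoch, fs, (1.2, 1.8))
print(f"early RMS = {early_rms:.2f}, late RMS = {late_rms:.2f}")
```

In this toy epoch the late window returns a much smaller RMS than the early window, which is the signature the gating account predicts when inhibition is released during target processing.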
As such, the results of the current study might suggest that the reduction in alpha-power immediately following the timing of the presentation of short interval T2 stimuli in the meditation group reflects an attentional mechanism. This attentional mechanism might enable increased neural processing during the period where processing of the short interval T2 stimuli would be required. This appears to occur regardless of whether the short interval T2 stimulus was presented or not, perhaps reflecting the fact that participants were unable to determine whether the trial would be a short or long interval trial at the time they would have to engage this attentional mechanism (so they engaged the mechanism regardless of the trial type). Two possible interpretations of the fact that meditators showed this prolonged alpha-power reduction to enable short interval T2 processing even for long interval trials are possible. The first is that it may reflect a neural activity pattern prioritising awareness in general. The second is that it may reflect increased carefulness. The increased processing of stimuli, regardless of whether they might be task-relevant, might reflect increased general awareness. Alternatively, the increased processing of the time period during which T2 might be present may indicate increased carefulness in anticipation of a potential T2 stimulus being presented. Some previous research has reported results that suggest the "increased awareness" interpretation is more likely: research using mathematical modelling of performance in a behavioral task has suggested that the improved attention function from mindfulness is related to enhancements in an individual's ability to extract higher information quality during a working memory task rather than increased caution in responding (Van Vugt & Jha, 2011), a finding supported by neuroimaging research showing earlier activation of working memory-related brain regions in meditators (Bailey et al., 2020). Our task did not require participants to respond quickly, so it did not provide the ability to assess reaction times. However, previous results have indicated that meditators show increased performance without reaction time slowing (Van Vugt & Jha, 2011) and increased accuracy across both fast and slow reaction times (van den Hurk et al., 2010). In contrast, other research has indicated that meditators perform better in a movement task when the action required to meet the task goals is ambiguous and changing, and that they achieve this by performing a speed-accuracy trade-off for slower but more accurate responses (Naranjo & Schmidt, 2012). Trait mindfulness has also been shown to reduce the accelerating but accuracy-reducing effects of worry on performance (Hallion et al., 2020), supporting the "increased carefulness" interpretation. Further research may be able to elucidate the reasons for this pattern.
While this pattern, whereby meditators may have shown prolonged alpha-power reduction to enable short interval T2 processing even for long interval trials, and our suggested interpretations of it would have had no effect on task-relevant stimulus perception and, therefore, could not lead to improved task performance, the pattern does align with the "non-judgemental" aspect of mindfulness practice: maintaining awareness of the present moment as it is, without evaluation. This contrasts with the pattern shown by the controls, which indicates they reduced the processing of non-target distractor stimuli within the short interval T2 period, eliminating the distractor stimuli from awareness. As might be expected, given the lack of relevance of this neural strategy to task performance, across all participants, averaged alpha-power within the time window where meditators showed reduced alpha activity did not correlate with the accuracy of short interval T2 detection. In fact, our exploratory analysis indicated that incorrect responses on short interval trials were associated with slightly, but significantly, lower alpha-power within this window than correct responses (Supplementary Materials Section 3e, Fig. S11). This might provide support for a conjecture that the careful or non-judgemental neural strategy of the meditators prioritised present moment awareness at the expense of accurate task performance. However, alpha-power RMS was also strongly correlated between the earlier (during-T2 processing) and later (post-T2 processing) alpha-power time periods. This relationship was also stronger within incorrect trials than within correct trials. As such, it may be that the alpha-power reduction during the earlier (during-T2 processing) period reflects a preparatory mechanism that attempted to engage attention when attention had drifted, so that the neural activity required for successful task performance in the later (post-T2 processing) window would be present. We note that at this stage, these explanations are conjecture. Alternatively, it may simply be that the lower alpha-power in meditators during the earlier (during-T2 processing) period reflects non-optimal neural activation in the context of the task. Further research is required to test whether our exploratory results can be replicated, and to determine which explanation is correct.
Similar to the alpha-power results, our study did not find a significant difference in our primary analysis focused on specific time windows, within which we analysed alpha phase synchronisation in replication of the results reported by Slagter et al. (2009). However, in contrast with the lower alpha-power during the short interval T2 stimuli time window, the meditation group showed a prolonged period of higher alpha synchronisation to T1. Meditators also showed a different scalp distribution of alpha synchronisation to T1, with more parietal and frontal APS than controls. While alpha-power has been associated with the inhibition of brain regions that are not relevant for processing the current attention task (Klimesch et al., 2007), the same relationship has not been reported for APS. Indeed, the correlation between APS and task performance in our study, along with the more occipital distribution in the meditation group, suggests that inhibition of non-relevant brain regions (in our visual task) is not likely to be the explanation for the higher APS in our meditation group. Instead, we suspect the increased APS in our meditation group reflects synchronisation to the timing of the ongoing stream of stimulus presentation (as stimuli were presented at 10 Hz, within the alpha frequency). Previous research has suggested that the synchronisation of ongoing endogenous neural oscillations to external stimuli may increase the likelihood of neurons firing in response to those stimuli, which is then related to the increased encoding of those stimuli into working memory (Buzsáki & Moser, 2013; Fujisawa & Buzsáki, 2011; Lisman & Buzsáki, 2008; O'Neill et al., 2013). This process is likely to reflect a mechanism underlying attention function, and a similar phenomenon may underlie the alpha synchronisation to stimuli in the current study. As such, it may be that the attentional training the meditation group had undertaken increased their ability to time-lock their alpha oscillations to stimuli in occipital regions responsible for processing the visual stimuli, and frontal regions responsible for attending to the stimuli. We note here that it might be valuable to analyse connectivity between these regions in future research.
While our results suggest differences in neural activity in meditators that align with improved attention function, the meditator and control groups did not differ in task performance. There are a number of potential explanations for this null result, as well as the null results for our primary analyses. For the sake of brevity, these are summarised here and explained in full in the Supplementary Materials (Section 4). Firstly, the behavioral effects of meditation in the AB task may be dependent on a meditation-induced mindful state, or on particular types of meditative practices, which may not have been sampled in our study. Secondly, it may be that more meditation experience is required before differences in AB task performance are detected, or that the AB task we used was not sensitive enough to detect differences between our groups. On this point, we note that the effects of meditation on attention function reported in meta-analyses are small (Sumantry & Stewart, 2021), so they may be easily "washed out" by variations in context, such as the use of a task with lower sensitivity, a factor that may explain the not uncommon null results reported by studies of mindfulness and attention (Bailey et al., 2018; Osborn et al., 2022; Payne et al., 2020). Age may have also been a factor: perhaps meditation protects against age-related decline in AB performance, and our young meditation group had not aged enough to show this effect. Indeed, the median age of the participants tested by Slagter et al. was 41, whereas the median age of our meditation group was 35, and ERP latency is known to increase with age (Polich, 1997). However, these explanations seem unlikely given that our meditators were more experienced than those included in many studies, our task replicated a number of previous AB task studies that did detect differences, and some research has indicated that older meditators showed improved AB task performance compared to both age-matched controls and a younger control group (van Leeuwen et al., 2009). Next, our study design differed from that of Slagter et al. (2007, 2009), most notably in that their study involved the repetition of the AB task before and after an intensive retreat, whereas our study focused on community-meditators. It may be that MM is not associated with generalised better performance in the AB task, but rather with an increased ability to learn the task and, as a result, increased performance on the second repetition of the task following meditation practice. This feature meant that the within-subject design used by Slagter et al. (2007, 2009) controlled for interindividual variability, while our between-groups study did not. Overall, there are a number of potential explanations for our null result with regard to our behavioral measures, and it may be useful for future research to systematically explore variations in task parameters, participant ages, test-retest performance, and other factors to determine the parameters under which meditators do show improved AB task (or attention task) performance.
Our study also used updated EEG analysis methods compared to Slagter et al. (2007, 2009). Most notably, the current study used a high-pass filter of 0.25 Hz, whereas Slagter et al. (2007) used a high-pass filter of 1 Hz. The amplitude of ERPs, including the P3b, has been shown to be produced at least in part by < 1 Hz activity, and ERPs are adversely affected by high-pass filtering out < 1 Hz data (Rousselet, 2012; Tanner et al., 2016). As such, the P3b data Slagter et al. (2007) analysed may have had considerable signal removed, and their analysis may have been adversely affected. Lastly, it may be that either our results or those reported by Slagter et al. (2007, 2009) are spurious, reflecting a sampling bias, chance-like effect, or similar "non-effect of interest." However, we note that a spurious chance-like result is less likely in studies with a larger sample size, as per the current study (Stevens, 2017).
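The practical impact of the high-pass cutoff choice on a slow ERP component can be demonstrated with a minimal simulation: a synthetic Gaussian "P3b-like" deflection loses substantially more peak amplitude under a 1 Hz high-pass filter than under a 0.25 Hz one (all parameters below are illustrative, not the study's filter implementation):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def highpass(signal, fs, cutoff, order=2):
    """Zero-phase Butterworth high-pass filter."""
    sos = butter(order, cutoff, btype="highpass", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

fs = 500.0
t = np.arange(-2.0, 4.0, 1 / fs)
# A slow P3b-like positive deflection peaking at 400 ms (~300 ms wide),
# so much of its spectral energy lies below 1 Hz.
erp = 5.0 * np.exp(-0.5 * ((t - 0.4) / 0.15) ** 2)

peak_raw = erp.max()
peak_025 = highpass(erp, fs, 0.25).max()
peak_100 = highpass(erp, fs, 1.0).max()
print(f"peak amplitude: raw {peak_raw:.2f}, "
      f"0.25 Hz HP {peak_025:.2f}, 1 Hz HP {peak_100:.2f}")
```

Running this shows the 1 Hz filter attenuating the simulated component's peak more strongly than the 0.25 Hz filter, consistent with the concern that aggressive high-pass filtering removes genuine slow ERP signal.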
As such, our results indicate that the specific alterations detected by previous research, including those to the P3b (within a specific window of interest), increased T2-locked TPS, and improved performance on short interval AB trials, are not necessarily markers of regular mindfulness meditation practice. Despite the potential explanations outlined in the previous sections for the differences between the meditator and control groups in our study, these findings were exploratory and were not controlled for experiment-wise multiple comparisons. As such, it is possible that there are simply no differences between groups and that, ultimately, previous mindfulness experience may not result in behavioral improvements in the AB task (although this seems unlikely given the number of positive findings, even if those findings were exploratory). Although our EEG findings are uncertain, our behavioural results provide confidence in the null result for differences in task performance. This was surprising, as it conflicted with previous findings (Slagter et al., 2007, 2009). It was especially surprising considering that the meditators in the current study reported at least 2 years of meditative practice, which we expected would be sufficient to produce differences in attention performance if MM did indeed affect attention. From our perspective, the most likely explanation for the difference between our results and those of Slagter et al. (2007, 2009) is that our participants were regular meditators, whereas theirs were tested before and after a 3-month retreat. As such, when viewing both studies together, our results suggest that differences in AB performance among meditators may be exclusively present following intensive meditation interventions.
It may be that the type of attention captured by the AB task is less relevant to the attention trained through mindfulness meditation practice. This interpretation is supported by our alpha-power findings, which suggested meditators may not have engaged alpha to inhibit distractor processing when short interval T2 stimuli were absent as strongly as the controls did. Other EEG markers or neuroimaging methods using different attention tasks may be better suited to detecting differences between meditation and control groups, and the null results for behavioral analyses in the current study may help refine our understanding of exactly which mechanisms are altered (and which are not altered) by meditation practice. With the AB literature suffering from a lack of published replications, the present study also underscores the importance of replication studies in different populations and contexts, as some of the effects of meditation may be specific to certain populations only (Bailey, Raj, et al., 2019b; Osborn et al., 2022; Vago et al., 2019; Van Dam et al., 2018). Slagter et al. (2007) has been cited over 1000 times, yet this is the first even partial replication attempt, which, despite using a larger sample size, revealed null results for our replication of the outcome measures reported by Slagter et al. (2007, 2009).
Limitations and Future Directions
The most obvious limitation of our study is that it utilised a cross-sectional design. A longitudinal approach, assessing participants before and after meditation practice, may allow for the determination of causality. However, we note that this is difficult to achieve with the level of meditation experience tested in the current study. Another limitation of this study was that it utilised a broad definition of meditation (Kabat-Zinn, 1994) and included both "focused attention" and "open monitoring" practitioners. The meditation literature is unclear on the direct impact of different varieties of meditation practice on AB performance, with research suggesting both focused attention and open monitoring meditation affect AB performance (van Leeuwen et al., 2009), other research suggesting AB performance is exclusively impacted by open monitoring meditation (Colzato et al., 2015), and some studies suggesting neither practice affects AB performance (Sharpe et al., 2021). While delineating between the different MM practices and their potential impacts may be valuable, the conclusions that can be drawn from our broad sample may be more reflective of everyday mindfulness meditators in the community. It would also be interesting to assess the potential dose-response relationship between mindfulness practice and the differences in neural activity we have reported. Unfortunately, our sample size was likely too small to provide a good test of a potential dose-response relationship, and the measures of meditation experience we obtained are not likely to provide a robust assessment of meditation experience, so we did not conduct this analysis in our study. It would be interesting for future research to consider potential dose-response relationships. Finally, it is important to emphasize that the significant results detected in our study were only from our exploratory analyses, and our primary analyses replicating the effects demonstrated by Slagter et al. (2007, 2009) did not show significant results. Furthermore, there was no difference in behavioral accuracy between the groups, and this was unlikely to be due to a ceiling effect (with a mean short interval T2 accuracy of 67.1% for meditators and 63.9% for controls). As such, it is not clear whether the potential meditation-related differences in neural activity are meaningful, and replication is required to test our interpretations of the potential functional relevance of differences in neural activity in our meditator group (for additional strengths and limitations of the study, see the Supplementary Materials).
Trust Small Grant Scheme (T11801). ATH was supported by an Alfred Deakin Postdoctoral Research Fellowship. PBF is supported by a National Health and Medical Research Council of Australia Practitioner Fellowship (6069070).
Fig. 2 Event-related potential data time-locked to T1 stimuli, averaged within the 350 to 600 ms time window for direct comparison with Slagter et al. (2007). Left: grand averaged ERP data from Pz time-locked to short (top) and long (bottom) interval T1 stimuli (error shading reflects 95% confidence intervals). Right top: global field potential

Fig. 3 Left: p-value graphs for the main effect of Group and interactions involving Group for the whole-epoch comparisons of the event-related potential (ERP) global field potential (GFP). The black line reflects the p-value, white areas reflect significant time points, and green periods reflect windows where the effect passed global duration controls. Top right: GFP activity in response to the first target (T1)

Fig. 8 Root mean squared (RMS) alpha-power comparisons time-locked to T1 stimulus onset. Top left: the cumulative variance explained (ηp²) at each time point across the epoch by each main effect and condition, with each colour reflecting the ηp² from the effect being tested, colour coded to match the p-graphs. Top right and middle: the p-graphs for the main effect of Interval (orange, middle left), Group (blue), and the interaction between Interval and Group (yellow, middle right)

Fig. 9 Root mean squared (RMS) alpha phase synchronisation (APS) comparisons time-locked to T1 stimulus onset. Top left: the cumulative variance explained (ηp²) at each time point across the epoch by each main effect and condition, with each colour reflecting the ηp² from the effect being tested, colour coded to match the p-graphs. Top right and middle: the p-graphs for the main effect of Interval (orange, middle left), Group (blue), and the interaction between Interval and Group (yellow, middle right)

Fig. 10 Alpha phase synchronisation (APS) topographical analysis of variance (TANOVA) comparisons time-locked to the onset of the first target stimuli (T1). Top left: the cumulative variance explained (ηp²) at each time point across the epoch by each main effect and condition, with each colour reflecting the ηp² from the effect being tested, colour coded to match the p-graphs. Top right and middle: the p-graphs for the main effect of Interval (orange, middle left), Group (blue), and the interaction between Interval and Group (yellow, middle right). The black line reflects the p-value, white areas reflect significant time points, and light blue periods reflect windows where the effect passed global duration controls

Fig. 11 Attentional blink performance, measured in percentage correct for each group and condition. Long interval refers to conditions in which the T2 stimulus was presented 700 ms after T1; short interval refers to conditions in which the T2 stimulus was presented 300 ms after T1. Figures on the left (T1) indicate the percentage of T1 stimuli correctly identified by each participant, whilst figures on the right (T2) indicate the percentage of T2 stimuli correctly identified by each participant. The single trial T1 label refers to T1-only trials (where no T2 stimulus was presented)

Table 1 Demographic and self-report means (M), standard deviations (SD), and statistics. BAI Beck Anxiety Inventory, BDI-II Beck Depression Inventory II, FFMQ Five Facet Mindfulness Questionnaire

Table 2 Global field potential (GFP) values averaged across the P3b period of interest

Table 3 Pearson's correlations between percent correct responses to the second target stimulus (T2) in short-interval trials and the averaged root mean squared (RMS) theta phase synchronisation (TPS) within the 117 to 295 ms period in response to both the first target stimulus (T1) and T2

Table 4 Attentional blink behavioural performance means (M), standard deviations (SD), and statistics for each group and condition. T1 the first target stimulus, T2 the second target stimulus
Prevalence, antimicrobial susceptibility, serotyping and virulence genes screening of Listeria monocytogenes strains at a tertiary care hospital in Tehran, Iran
Background and Objectives: Listeria monocytogenes is the etiological agent of listeriosis, a highly fatal infection which causes miscarriage or stillbirth in pregnant women. The objective of this study was to detect the prevalence, serotypes, antimicrobial susceptibility and virulence factors of L. monocytogenes isolated from pregnant women with vaginitis at a tertiary care hospital in Tehran, Iran. Materials and Methods: During September 2015 to February 2017, a total of 400 vaginal swabs were collected from pregnant women. The presumptive isolates were characterized biochemically. All L. monocytogenes isolates were further analyzed by serotyping and antimicrobial susceptibility tests. All samples positive for L. monocytogenes were analyzed for the presence of virulence genes (hlyA, actA, inlA, inlC, inlJ and prfA). Results: Twenty-two (5.5%) of the samples were found positive for the presence of L. monocytogenes. Most isolates were resistant to trimethoprim/sulfamethoxazole (81.82%) and chloramphenicol (54.55%). The majority of tested isolates (59.10%) belonged to serotype 4b, followed by 1/2a (22.73%), 1/2b (13.63%), and 3c (4.54%). The hlyA, actA and inlA genes were detected in all of the 22 L. monocytogenes isolates, but two, three and five isolates were found to lack inlC, inlJ and prfA, respectively. Only one isolate lacked all three inlC, inlJ and prfA genes, and two isolates simultaneously lacked both inlJ and prfA genes. Conclusion: Evaluation of virulence factors and antimicrobial susceptibility can be highly helpful to develop effective treatment strategies against L. monocytogenes infections. This study is noteworthy in that it documents the prevalence, virulence characteristics, and antimicrobial resistance of L. monocytogenes.
INTRODUCTION
Listeria monocytogenes is a foodborne pathogen that can cause life-threatening disease in fetuses, newborns, elderly and immunocompromised people (1). It has been stated that pregnant women account for 20-30% of listeriosis cases and listeriosis in pregnant women can lead to bacteremia, amnionitis and infection of the fetus, resulting in premature delivery, miscarriage, stillbirth and other serious health problems for neonates (2,3). Listeriosis has a mortality rate of about 20% (3).
L. monocytogenes includes a spectrum of strains with a wide variation in virulence and pathogenicity. Although numerous strains of L. monocytogenes are naturally virulent and capable of producing high morbidity and mortality, others are non-virulent and unable to cause an infection within hosts (4). Distinction between virulent and avirulent strains is of great importance in assessing the potential implications of these bacteria in food safety and public health (5).
L. monocytogenes infection is mediated by many virulence factors. Diverse Listeria determinants, which are well known as important factors in the pathogenicity of L. monocytogenes, include listeriolysin O (encoded by hlyA gene), actin (encoded by actA gene), internalins (encoded by inlA, inlC and inlJ genes) and virulence regulator (encoded by prfA gene) (6). The quick and reliable diagnosis of listeriosis has been recommended to be preferably based on the recognition of virulence determinants of L. monocytogenes via molecular techniques (7). The objectives of the present study included the detection and characterization of L. monocytogenes using cultural and biochemical tests, antimicrobial susceptibility, serotyping and survey of its hlyA, inlA, inlC, inlJ, actA and prfA virulence genes in isolates obtained from pregnant women using conventional and molecular methods.
Samples.
During September 2015 to February 2017, a total of 400 vaginal swabs were collected from pregnant women with vaginitis. These women had a complicated obstetric history, such as spontaneous and repeated abortions, stillbirths and pre-term labor, and were hospitalized at a tertiary care hospital in Tehran, Iran.

Isolation and identification.
Initially, the specimens were inoculated in Buffered Listeria Enrichment Broth (BLEB, Merck, Germany) and were incubated at 4°C for 2 weeks to 1 month. The inoculum was then plated on PALCAM agar (Merck, Germany), Oxford agar (Difco, USA) and CHROMagar Listeria (Paris, France) plates. After 48 h of incubation at 37°C, colonies morphologically resembling Listeria were submitted for confirmatory examination using Gram staining, catalase and oxidase tests, motility and sugar fermentation tests (xylose, rhamnose, mannitol, α-methyl D-mannopyranoside), hemolysis on 5% sheep blood agar and the CAMP test (8, 9). In the CAMP test, the L. monocytogenes isolates were streaked perpendicular to Staphylococcus aureus on 5% sheep blood agar plates, and zones of hemolysis were investigated after 24-48 h of incubation at 35°C (10).
Molecular detection of virulence genes.
Genomic DNA was isolated from pure cultures of the selected L. monocytogenes strains using Qiagen RNA/DNA kits (Qiagen, USA). All isolates were screened for the hlyA, inlA, inlC, inlJ, actA and prfA genes. The primers described by Liu et al. (2007) were used for detection of inlA/C/J and actA (4, 15), while the hlyA and prfA primers were designed in this study (Table 1). The PCR mixture contained 12.5 μL of PCR master mix, 1 μL of each primer, and 50 ng of DNA in a 25-μL final volume. PCR amplification was performed in a thermal cycler (MJ Research Inc., MA, USA), with an initial denaturation at 94°C for 5 min, followed by 30 cycles of amplification (denaturation at 95°C for 1 min, annealing at a primer-specific temperature for 30-60 s, and extension at 72°C for 30 s) and a final extension step at 72°C for 10 min. Amplicons were separated by gel electrophoresis (70 min at 90 V) on a 1% agarose gel in 0.5X TBE buffer and visualized under UV light after staining with ethidium bromide.
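For clarity, the thermal-cycling program described above can be written down as a small data structure. The Python sketch below is illustrative only: the annealing temperature and duration are primer-specific in the protocol, so the values passed in here are hypothetical placeholders, and ramp times between steps are ignored.

```python
# Thermal-cycling program as described in the text. Annealing temperature
# and time vary per primer pair, so they are parameters rather than constants.
def pcr_program(anneal_temp_c, anneal_secs):
    return {
        "initial_denaturation": (94, 5 * 60),          # 94 °C for 5 min
        "cycles": 30,
        "per_cycle": [
            ("denaturation", 95, 60),                  # 95 °C for 1 min
            ("annealing", anneal_temp_c, anneal_secs), # 30-60 s, primer-specific
            ("extension", 72, 30),                     # 72 °C for 30 s
        ],
        "final_extension": (72, 10 * 60),              # 72 °C for 10 min
    }

def total_seconds(prog):
    """Approximate run time of the program, ignoring ramp times."""
    per_cycle = sum(step[2] for step in prog["per_cycle"])
    return (prog["initial_denaturation"][1]
            + prog["cycles"] * per_cycle
            + prog["final_extension"][1])

# Hypothetical annealing settings (55 °C for 45 s):
prog = pcr_program(55, 45)
print(total_seconds(prog) / 60)  # → 82.5 minutes, excluding ramp times
```

Writing the program this way makes it easy to check how sensitive the total run time is to the primer-specific annealing step.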
Statistical analysis. All data were analysed using SPSS version 23, and significance was assessed with the Chi-square test. A value of p ≤ 0.05 was considered statistically significant.
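As an illustration of the kind of test applied, the chi-square statistic for a 2×2 contingency table can be computed with the standard library alone. The counts below are hypothetical and do not come from the study; in practice SPSS (or scipy) would be used, and a continuity correction may be applied for small samples.

```python
# Minimal chi-square test of independence for a 2x2 table [[a, b], [c, d]],
# without Yates' continuity correction. Counts here are hypothetical.
def chi_square_2x2(a, b, c, d):
    n = a + b + c + d
    # Expected counts from the row and column marginals.
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical example: positive/negative isolates across two groups.
stat = chi_square_2x2(10, 40, 20, 30)
print(round(stat, 3))  # → 4.762, above the 3.841 critical value for p = 0.05, df = 1
```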
RESULTS
A total of 400 samples were screened for the presence of L. monocytogenes. Twenty-two (5.5%) of the samples were found positive for the presence of L. monocytogenes. All the 22 isolates showed characteristic enhancement of the hemolytic zone with S. aureus in the CAMP test.
In total, all the L. monocytogenes isolates were resistant to three or more antimicrobial agents. Among the resistant isolates, two, five, nine and three isolates, respectively, were resistant to three, four, five and six antibiotics. Also, one isolate was resistant to 8 antibiotics and one isolate was resistant to 9 antibiotics. Surprisingly, an isolate was resistant to all antimicrobials.
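These tallies can be checked quickly. The dictionary below simply restates the counts from the text (number of antimicrobials resisted mapped to number of isolates, with the single pan-resistant isolate kept as a separate entry) and confirms they sum to the 22 positive isolates.

```python
# Multidrug-resistance tallies as reported in the text:
# key = number of antimicrobials an isolate resisted ("all" = the one
# pan-resistant isolate), value = number of isolates with that profile.
resistance_profile = {3: 2, 4: 5, 5: 9, 6: 3, 8: 1, 9: 1, "all": 1}

total_isolates = sum(resistance_profile.values())
print(total_isolates)  # → 22, matching the number of positive samples
```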
Twenty-two isolates of L. monocytogenes obtained from vaginal samples were screened for the presence of the hlyA, actA, inlA, inlC, inlJ and prfA genes. The hlyA, actA and inlA genes were detected in all 22 L. monocytogenes isolates (Fig. 1), but two, three and five isolates were found to lack inlC, inlJ (Fig. 2) and prfA, respectively. Only one isolate simultaneously lacked all three inlC, inlJ and prfA genes, and two isolates lacked both inlJ and prfA genes (Table 2).
DISCUSSION
Serotyping is an additional effective tool for identifying L. monocytogenes isolates (16). Although most clinical isolates belong to serotype 4b, the majority of food isolates belong to serotype 1/2a or 1/2b. Thus, it is likely that serotype designation is related to virulence potential (17). The majority of tested isolates (13, 59.10%) belonged to serotype 4b, followed by 1/2a, 1/2b and 3c.
Recently, there have been reports of increased resistance to most commonly used antibiotics among L. monocytogenes strains, causing serious problems in the management of human listeriosis cases. Multidrug-resistant (MDR) L. monocytogenes related to human listeriosis has been described from food and the environment (18). Some studies conducted in Iran have described the resistance of L. monocytogenes to tetracycline, penicillin G, streptomycin, sulfamethoxazole, gentamycin, erythromycin, and ciprofloxacin (19). Dehkordi et al. (20), Rahimi et al. (21), and Jamali et al. (6) isolated MDR L. monocytogenes from veterinary, food, environmental and clinical samples. Like these studies, the present study showed that most isolates of L. monocytogenes are resistant to three or more antibiotics.
Rapid isolation and confirmation techniques for L. monocytogenes are still required. Some non-pathogenic strains are phenotypically very similar to pathogenic strains (22), and many strains of L. monocytogenes differ in pathogenic potential and virulence (23). A number of L. monocytogenes strains are naturally virulent, yielding high morbidity and mortality, while others are avirulent and produce no obvious disease (24). PCR-based tests for the key virulence-associated genes yield quick and reproducible results (18, 25). In a study by Eslami et al., 16.7% of tested samples were positive for L. monocytogenes (26). In Sadeghi Kalani's study, the incidence of L. monocytogenes in clinical samples was reported as 8.23% (27). In a study conducted by (2015), the incidence of L. monocytogenes-associated abortion and stillbirth ranged from 0 to 8.39% throughout 1989 to 2009 (31). Also, Shindang et al. (2013) isolated L. monocytogenes from 8.04% of blood and placenta samples (32). In a study carried out in India by Kaur et al. in 2007 on spontaneous abortion, Listeria spp. and L. monocytogenes were isolated from 14.8% and 3.3% of specimens, respectively. In that research, the plcA, prfA, actA, hlyA and iap genes were also studied (33).
Probably, the best results were achieved through evaluation of several genes; therefore, it is recommended that numerous major virulence factors of L. monocytogenes should be investigated.
CONCLUSION
In Iran, the true prevalence of L. monocytogenes is uncertain, and only a few studies have been conducted on listeriosis. Moreover, listeriosis is not a reportable disease in the Iranian health system. Therefore, further attention and studies are required to investigate and determine the accurate listeriosis status in Iran.
Regarding the high sensitivity and specificity of molecular techniques, we suggest using these methods for the identification of virulence genes and for differentiating between virulent and avirulent strains of L. monocytogenes. In conclusion, the evaluation of virulence factors and antimicrobial susceptibility can be highly helpful in developing effective treatment strategies against L. monocytogenes infections.
Transcendental Happiness in the Thought of Ibn Sīnā and Ibn 'Arabī
This article explores the concept of transcendental happiness in the philosophies of arguably the two most important figures in Islamic intellectual thought, Abū 'Alī ibn Sīnā (d. 428/1037) and Muḥyī al-Dīn ibn 'Arabī (d. 638/1240). The most striking parallel between the philosophy of Ibn Sīnā and that of Ibn 'Arabī is their agreement on the Aristotelian principle of transcendental happiness as the comprehension of God, combined with their emanationist cosmologies. Based on Neoplatonist emanationism, especially as it is put forth by Plotinus, Ibn Sīnā and Ibn 'Arabī argue that there is a necessary emanation from God that results in the existence of the universe. As corollaries of the divine emanative process, those endowed with rationality seek to return to the divine in a reciprocal upward motion that aims to 'reverse' the downward motion of the original divine descent. The impetus for the two-way process incorporating divine descent through emanation and the longing for ascent found in humans is love. Despite these points of confluence, there are others of divergence. Ibn 'Arabī disagrees with his predecessor that transcendental happiness is found in absolute annihilation in the divine, while still maintaining that annihilation of the self is a necessary first step in the attainment of transcendental happiness. Transcendental happiness, argues Ibn 'Arabī, is ultimately the realization of human potentiality to become a complete locus of divine manifestation. This is carried out through the body for Ibn 'Arabī, whereas for Ibn Sīnā, transcendental happiness requires the divestment of materiality.
Introduction
This article explores transcendental happiness in the philosophies of arguably the two most important figures in Islamic intellectual thought, Abū 'Alī ibn Sīnā (d. 428/1037) and Muḥyī al-Dīn ibn 'Arabī (d. 638/1240). The Thomistic definition of transcendental happiness is adopted, in which true happiness can only be found with God (Theron 1985, p. 361; Wang 2007). It is known that St. Thomas Aquinas' conception of happiness was influenced by Ibn Sīnā (Dudley 2018, pp. 180-81); however, this article argues that traces of Ibn Sīnā's conception of happiness can also be found in Ibn 'Arabī's works, in which he retrofits his theological ideas to those of his predecessor in his own way. It is the parallels in terms of what constitutes happiness, how it is attained, and what relationship it has with the overarching metaphysics of these two thinkers that constitute the unique contribution this work makes to the existing literature on transcendental happiness. This study, therefore, builds on the work of Seyyed Hossein Nasr, 'Happiness and the Attainment of Happiness', in the Islamic tradition (2014). More specifically, it reveals the ways in which Ibn 'Arabī's meditation on the concept of transcendental happiness has many of the same features as that of Ibn Sīnā, as delineated by Shams Inati in her work, 'The Relevance of Happiness to Eternal Existence' (Inati 1995).
Aristotle then observes that since everyone has the perception that whatever they honor is best, the thing that is truly 'honorable and pleasant is what is so to the excellent person' (Aristotle, book 10, chp. 6, p. 162). This is because: to each type of person, the activity that accords with his own proper state is most choiceworthy; hence the activity in accord with virtue is most choiceworthy to the excellent person [and hence is most honorable and pleasant] (Aristotle, book 10, chp. 6, p. 162). Ibn Sīnā echoes this sentiment when he writes: Indeed, food and [the possibility of] sex could be presented before a person who is virtuous [but] . . . due to the presence of modesty, he withdraws his hand from them in order to observe modesty (ḥishma) (Ibn Sīnā 1968, book 4, tenet 8, chp. 1, p. 8).
Since the virtue of modesty is 'most choiceworthy to the excellent person', it is more pleasurable, and the person then 'withdraws his hand' from food or sex in order to act in accordance with that virtue.
Having shown that happiness is found in acting in accordance with virtue, Aristotle argues that it therefore stands to reason that the greatest happiness is to act in accordance with the greatest virtue: If happiness is activity in accord with virtue, it is reasonable for it to accord with supreme virtue, which will be the virtue of the best thing. The best is understanding . . . and to understand what is fine and divine, by being itself either divine or the most divine element in us. Hence, complete happiness will be its activity in accord with its proper virtue, and we have said that this activity is the activity of study (Aristotle, book 10, chp. 6, p. 163).
'Complete happiness', then, is found in the activity that is most in accordance with 'the most divine element in us', which, he tells us, is understanding. This means that the activity that leads to complete happiness is 'the activity of study'. Accepting this premise, Ibn Sīnā explains that this is why the pleasures of the internal senses, such as victory, are preferred to those of the external senses, such as food and sex, and that the internal senses themselves are lower than the theoretical intellect that carries out 'the activity of thinking'. Consequently, after proving that the internal senses trump the external ones, he asks, rhetorically, 'So if the internal pleasures (al-ladhdhāt al-bāṭina) are greater than the external ones (al-ẓāhira), even though they are not intellectual ('aqliyya), then what do you think about the intellectual [pleasures]?' (Ibn Sīnā 1968, book 4, tenet 8, chp. 1, p. 9). Thus, Ibn Sīnā delineates a 'pecking order' when it comes to pleasures that is based on Aristotle's works, with the exercise of contemplation resulting in the greatest happiness and the pleasures of the external senses constituting the least (McGinnis 2010, p. 218). In his Treatise on Happiness (Risāla fi'l-sa'āda), Ibn Sīnā draws a direct equivalence between the pleasure and joy that a person experiences and the happiness that they feel (Ibn Sīnā n.d.b., pp. 259-80; Khademi 2014). Ibn Sīnā goes on to say, 'Surely, pleasure is to perceive (idrāk), and to obtain what is needed to arrive at (nayl li wuṣūl), what the person who perceives (mudrik) to be perfection (kamāl) and good (khayr)' (Ibn Sīnā 1968, book 4, tenet 8, chp. 3, p. 11). This means that pleasure has two constituents, as he makes clear when he writes, 'Every pleasure is related to two things' (Ibn Sīnā 1968, book 4, tenet 8, chp. 3, p. 15). These are: (1) perceiving that something is good, and (2) attaining that thing.
Now, the thing that is perceived to be good and is, therefore, sought, says Ibn Sīnā, 'is the perfection that is particular to it (yakhtaṣṣ bih), and in the direction of which it goes with its first preparedness (isti'dād awwal)' (Ibn Sīnā 1968, book 4, tenet 8, chp. 3, p. 15). Thus, each thing desires its own perfection, that which is bespoke to it because of its 'first preparedness'. Naṣīr al-Dīn al-Ṭūsī (d. 672/1274), who is regarded as a faithful interpreter of Ibn Sīnā's doctrine (Adamson and Noble 2022), elaborates that by 'first preparedness', Ibn Sīnā refers to the fact that: A thing can have two preparednesses (isti'dādān), where one overtakes the other, but the thing to which something moves towards with its second preparedness (isti'dād thānī) cannot be better than the relation it has to its essence (dhāt) (Ibn Sīnā 1968, book 4, tenet 8, chp. 3, p. 15).
Each thing, then, has a first preparedness, which is related to the essence of that thing and allows it to actualize, along with a second preparedness that allows the function of the thing to actualize (Inati 1996, pp. 10-11). Therefore, there is an essential preparedness, comprising the first preparedness, and a functional preparedness, comprising the second. Ibn Sīnā is adamant that the true perfection that is desired by all things is the perfection of its essence, or its essential or first preparedness. This, in turn, means that there is nothing that desires a perfection that is not in accordance with its own essence. Ibn Sīnā bases this argument on the Aristotelian view that: What is proper to each thing's nature is supremely best and most pleasant for it, and hence, for a human being, the life in accord with understanding will be supremely best and most pleasant (Aristotle, book 10, chp. 7, p. 165).
Since the proper perfection for the theoretical soul is the contemplation of the divine, this is what it (1) perceives as good, and, therefore, (2) seeks to attain. Fakhr al-Dīn al-Rāzī (d. 606/1209), one of the most influential of the Ash'arite theologians (Griffel 2007), explains in his commentary on this work that: The things perceived by the senses (mudrikāt al-ḥawās) are only the particulars like colors, tastes, smells, hotness, and coldness, whereas the things that the intellect perceives are the essence of the Originator (Al-Bārī), the Exalted, His attributes (ṣifātih), and His actions. It is therefore known [from this perception] that there is no relation (nisba) between one and the other in terms of honor. So, if it is proven that . . . the things that the intellect perceives are more honorable than that which the senses perceive, then [it is known] . . . that intellectual pleasure (al-ladhdha al-'aqliyya) is more perfect than sensual pleasure (al-ladhdha al-ḥissiyya) (vol. 2, p. 92).
The theoretical intellect, al-Rāzī elucidates, perceives 'the Originator' Himself, as opposed to the particulars that are grasped by the senses, which is why the theoretical intellect is so superior to the external (and internal) senses. However, not all theoretical intellects are the same. Ibn Sīnā writes: The perfection of the intellectual substance (al-jawhar al-'āqil) is such that the lucidity (jaliyya) of the First Truth (Al-Ḥaqq al-awwal) is represented (tatamaththal) in it, so far as it is possible for it to attain the splendor (bahā') that is particular to it (Ibn Sīnā 1968, book 4, tenet 8, chp. 9, p. 22).
Thus, the theoretical intellect becomes the locus in which the divine is represented. Ibn Sīnā is careful to attach the proviso that the 'lucidity' with which the divine is manifested in the theoretical intellect is commensurate with what is possible. This means that not only is there a hierarchy when it comes to the pleasures, with the intellectual pleasures being at the summit, there is also a hierarchy within the intellectual pleasures, in terms of the lucidity with which the divine is represented. The more lucid the representation of the divine, the greater the pleasure.
At this point, it is generally assumed that Ibn Sīnā transitions to the Plotinian model because he identifies this contemplation of the divine with union with Him. Inati explains: We find that the Aristotelian notion of contemplation is transformed into the notion of union or being with the object. A human is no longer to seek knowledge as such; he is now to seek union or being with the object (Inati 1995, p. 13).
While there is no denying that Ibn Sīnā's major influence in articulating union with the divine is the writings of Plotinus (see below), it would be a little hasty to attribute this notion entirely to Plotinus; even in the Nicomachean Ethics, we find the intimation of some kind of deep association with the divine when Aristotle says: If understanding is something divine in comparison with a human being, so also will the life in accord with understanding be divine in comparison with human life. We ought not to follow the makers of the proverbs and 'Think human, since you are human', or 'Think mortal, since you are mortal'. Rather, as far as we can, we ought to be pro-immortal and go to all lengths to live a life in accord with our supreme element (Aristotle, book 10, chp. 7).
He may not go as far as to assert union with the divine, in the Plotinian sense, but the contemplation of the divine, says Aristotle, does imbue us with 'pro-immortality' and takes us beyond our mere humanity to something approaching divinity, which he claims brings us the greatest happiness. Nevertheless, as stated, Ibn Sīnā commits more fully to Plotinian emanationist cosmology.
The Influence of Plotinus
Ibn Sīnā believes there is an 'ontological descent from God by necessary emanation' and the 'ascent of the creature in a movement of love' (Houben 1956, p. 217). This 'emanationistic doctrine' (Morewedge 1971, p. 469) has ostensible similarities with Plotinus' model of the cosmos as an emanation from 'the One' (Peters 1968, p. 14). However, there are significant differences between Ibn Sīnā's conception of God and Plotinus' idea of 'the One'; whereas Ibn Sīnā's God is self-aware (Ibn Sīnā 1993, book 3, tenet 4, chp. 28, p. 53), Plotinus' figure of 'the One' is not (Plotinus 2018, pp. 880-98). Indeed, this was one of the explicit attacks leveled against Aristotle by Plotinus, whose Self-Thinking Intellect was clearly conscious of Itself (Plotinus 2018, pp. 880-98). Furthermore, Ibn Sīnā seems to view God as a determination of 'being-qua-being'; Plotinus, in contrast, places 'the One' above 'being' since it is the generative force behind 'being' (Morewedge 1972, p. 11). In light of these key differences, it would be more accurate to assert that Ibn Sīnā's conception is Neoplatonised Aristotelianism (Wolfson 1976, pp. 444-48; Dastagir 2001-02, pp. 1-14). Indeed, as Dimitri Gutas observes, Ibn Sīnā's theological system: [ . . . ] is unique in comparison to the theology and philosophy that came before him, since he synthesized in a comprehensive fashion Aristotelianism, Islamic theology, and Islamic tenets into his metaphysical scheme (Gutas 1988, p. 252).
Yet it may be argued that he was unique only in the manner in which he synthesized these myriad trends, for his predecessor, Abū Yūsuf al-Kindī (d. 259/873?), had synthesized Aristotelianism, Neoplatonism, Islamic theology, and Islamic tenets in his own way and to serve his own ends (Al-Kindī 1948; Adamson 2016, pp. 26-32). Ibn Sīnā was, thus, operating in a tradition that was inaugurated by al-Kindī, but he made a highly original contribution to that tradition through his unique mode of synthesis.
The impetus for the two-way process of divine descent through emanation and the longing for ascent by humans, Ibn Sīnā tells us, is love (Ibn Sīnā 1899, pp. 1-27; Anwar 2003, pp. 331-45). Ibn Sīnā elaborates that the innate love humans have for God, and the longing that they feel to return to Him, is a direct corollary of His effulgent emanation, which caused the emergence of the cosmos (Ibn Sīnā 1986, pp. 106-8). God, the Highest Being (al-Mawjūd al-'ālī), is the object of love and the cause of its dissemination and reciprocity in sensible existence (Ibn Sīnā 1899, pp. 1-27). It is God's self-love that is manifested in humans' longing for Him, and it is that which galvanizes them to seek a return to Him. This is their supreme happiness. Goichon explains that God's self-love necessitates the emanative process, and it also provides the impetus for the reverse ascent of the soul. Quoting from the Najāt, she writes: The Necessary Being is, thus, He Himself for Himself, the greatest Lover and the greatest Beloved, the One who enjoys the greatest bliss and is the greatest object of bliss. His love for his essence is, therefore, the most perfect and the most faithful. (Goichon 1956, pp. 116-17) Ibn Sīnā is influenced by the Aristotelian tradition when he underscores the idea that God is the supreme object of love, as mentioned previously (Dudley 1983, pp. 126-37); however, in his theoeroticization of this love, he is drawing on the Sufi tradition in which the love of the human for the divine takes on a distinctly individualistic and personal connotation, wherein happiness can only be attained with the divine (Massignon 1982; Anwar 2003, p. 341). Unification with the divine, says Ibn Sīnā, is the utmost perfection of the human soul, and thus, its greatest happiness (Ibn Sīnā 1899, pp. 1-27). Deep parallels between Ibn Sīnā's conception of human perfection and the Sufi concept of annihilation in the divine (fanā') are perceptible here (Sells 1996; Karamustafa 2007).
Since a human being can achieve perfection and complete transcendental happiness through their annihilation in the divine, Ibn Sīnā asserts that the yearning for the divine is simultaneously a yearning for self-perfection (Anwar 2003, p. 343).
Delineating this emanatory process in the Ishārāt, Ibn Sīnā writes: Think about how existence (wujūd) began, from the noblest (al-ashraf) to what is noble, until it ended up in matter. Then, it returned from what is basest (al-akhass) to the less base, then to what is nobler, then to what is noblest, until it reached the rational soul (al-nafs al-nāṭiqa) and the acquired intellect (al-'aql al-mustafād) (Ibn Sīnā 1993, book 3, tenet 7, chp. 1, pp. 241-42).
God is the noblest (al-ashraf), says Ibn Sīnā, and existence began with Him and descended to the less noble intellects, which are also not sensible, until it ended up in matter, which is the basest point. As one would expect, he provides more detail about this in the Ilāhiyyāt: Since existence begins from the One, every existent being proceeding from Him is of a lower level (adwan martaba) than the first, and the ranks continue to fall. The first of these is purely spiritual angels (al-malā'ika al-rūḥāniyya al-mujarrada) that are called 'intellects' ('uqūl). Then come the levels of the spiritual angels called the 'souls' (nufūs) . . . then the levels of the heavenly bodies. . . . Then, after these, begins the existence of matter (al-mādda) (Ibn Sīnā 1998, book 10, chp. 1, p. 435).
The resultant existents seek to return to the perfection whence they came, which is why, according to Ibn Sīnā, 'the fundamental principle is that everything that exists desires its perfection; some sort of an ontological love', as Louis Gardet observes (El-Bizri 2001, pp. 762-63). It is, thus, their ontological love for and need to meet God, the source of perfection, that drives their ascent upward toward transcendental happiness. Taking his cue from Plotinus, Ibn Sīnā states that, through a process of contemplative emanationism, from the One are derived the intellects, the souls, the heavenly bodies, then the sensible realm, which means that all existents are the result of this divine emanative process: It is proven for us, through that which we have adduced, that the Necessary Existent (Wājib al-wujūd) in itself is one, and that He is not a body (laysa bi-jism), nor in a body (fī jism), and that He cannot be divided in any way. The existence of all existent beings (mawjūdāt), thus, comes from Him (Ibn Sīnā 1998, book 9, chp. 4, p. 402).
Because everything comes from Him by means of the emanative process, it seeks a return to Him, its divine source, and this return constitutes absolute transcendental happiness. Ibn Sīnā precludes intentionality as the impetus for the emanative process when he says: It is not possible that there is, for Him, a principle (mabda') in any way, nor a cause (sabab), not in that which comes from Him, and not in that in which something comes to exist in it or by it, or because of it. It is due to this that it is not possible for the being of everything (kawn al-kull) to come from Him through an intention (qaṣd), like our intention for creating everything, and for the existence of everything because, in that case, He would be intending [this] for the sake of something other than Him (Ibn Sīnā 1998, book 9, chp. 4, p. 402). Ibn Sīnā believes that God, inasmuch as He is the Necessary Existent, is knowable, unlike Ibn 'Arabī's absolute essence of God. Thus, he regards the emanation of the universe from Him to be nothing but a manifestation of Himself, in the same way that Ibn 'Arabī regards the emanation of the universe as a manifestation of God's divine Names (see next section). Ibn Sīnā argues that, because it is impossible that there should be any cause for God's existence, it likewise follows that all things emanating from Him would neither have another cause, because no other cause exists, nor that God would intend the emanation of all things for some other reason, since there is no other existent but Him. In other words, it is God's self-love that drives the emanative process, and it is the reversal of this process that constitutes transcendental happiness. Ibn Sīnā continues: His essence (dhāt) knows that His perfection (kamāl) and His exaltedness ('uluww) are such that from them emanates goodness, and that this is one of the requisites (lawāzim) of His majesty (jalāl) that the object of love is in itself (Ibn Sīnā 1998, book 9, chp. 4, pp. 402-3).
In Ibn Sīnā's ontology, the levels of existence are the corollaries of this self-love, and they are numerous before they reach our sensible world. Ibn Sīnā explains that the first product of the emanation from God cannot be sensible and must, therefore, be immaterial: The first of the existent beings (mawjūdāt) that comes from the First Cause (Al-'Illa al-ūlā) is one, and its essence (dhāt) and quiddity (māhiyya) is one, but its matter (mādda) is not. So nothing of [sensible] bodies (ajsām), or the forms, which are the perfections of the bodies, is a proximate effect (ma'lūl qarīb) of God. The first effect (al-ma'lūl al-awwal) is a pure intellect ('aql maḥḍ) because it is a form (ṣūra), and it does not have matter (Ibn Sīnā 1998, book 9, chp. 4, p. 404).
The first product of the emanation from God is as one, in terms of its essence and quiddity, says Ibn Sīnā. He uses the terms essence and quiddity synonymously here, in opposition to existence (wujūd). This bifurcation constitutes one of the most basic distinctions in Islamic philosophy, as Toshihiko Izutsu notes: The distinction between 'quiddity' and 'existence' is undoubtedly one of the most basic philosophical theses in Islamic thought. Without exaggeration, the distinction may be said to constitute the first step in ontologico-metaphysical thinking among Muslims (Izutsu 1969, p. 49).
The essence and quiddity of the first existent to proceed from God, reasons Ibn Sīnā, is immaterial because God, the First Cause of the universe, is a pure intellect and is thus entirely free from matter. Through careful argumentation, Ibn Sīnā proves that if matter were to proceed from God and if it were the first product of the emanative process, this would mean that matter is the cause of further products of emanation, but this cannot be the case since it is only a recipient of emanation and not the cause of it (Ibn Sīnā 1998, book 9, chp. 4, pp. 404-5). This means that 'it is necessary (wājib) for the first effect [of emanation] to be in a form that is not material (ṣūra ghayr māddiyya): in fact, it is an intellect ('aql)' (Ibn Sīnā 1998, book 9, chp. 4, p. 405).
Ibn Sīnā answers the question of how unity leads to multiplicity by appealing to the intrinsic activity of the intellect. As an emanation from God, the first intellect is one and has 'pure unity' (waḥda maḥḍa); however, because it is an intellect, it understands that it has necessary existence through God, and has possible existence (mumkinat al-wujūd) in itself. Thus, 'the multiplicity (kathra) is not from the First [Cause]', says Ibn Sīnā (Ibn Sīnā 1998, book 9, chp. 4, pp. 405-6). Ibn 'Arabī has a completely different answer to this question, one that involves the multiplicity of the divine Names, which is the way in which God is known by His creation, but not as He truly is in His essence (see Section 3). Nevertheless, Ibn 'Arabī agrees with the essential simplicity and immateriality of the first product of divine emanation (Chittick 1982, p. 113).
When the first intellect thinks about God, it causes the emanation of another intellect, and when it thinks about itself as the product of God's thought, it causes the emergence of a celestial sphere (falak) with only its matter and form, which Ibn Sīnā calls its soul (nafs).
Finally, when it thinks about itself as having a possible existence, it brings forth the body of the celestial sphere. Ibn Sīnā writes, There is, under every intellect, a celestial sphere, with its matter (mādda) and form (ṣūra), which is its soul (nafs), and an intellect under it. This means that, under every intellect, there are three things in existence (Ibn Sīnā 1998, book 9, chp. 4, p. 406).
The three things in existence comprise the celestial intellect (the result of the intellect thinking about God as the cause of its coming into being), the celestial soul (the product of its thinking about its necessity, inasmuch as it is the necessary corollary of God's thought), and the celestial body (the outcome of its thinking of its intrinsic possibility (imkān)) (Inati 1995, p. 14). These, then, are the intellectual, the spiritual, and the celestial levels of existence, which Ibn Sīnā delineates more concisely in the Ishārāt (Ibn Sīnā 1998, book 10, chp. 1, p. 435). He dubs the first level the 'purely spiritual angels' (al-malā'ika al-rūḥāniyya al-mujarrada), which corresponds to Ibn 'Arabī's 'angelic world of determinations' (Corbin 1997, p. 225). Ibn Sīnā calls the second level the rank of the souls, which correlates with Ibn 'Arabī's 'determinations of the souls' (ta'ayyunāt rūḥiyya) (Corbin 1997, p. 225); the third level he calls the ranks of the heavenly bodies, which Ibn 'Arabī dubs the 'world of the Idea-Images' ('ālam al-mithāl) (Corbin 1997, p. 225). These levels would come to be formalized by his followers as the divine presences (ḥaḍarāt) (Chittick 1982).
These three levels pertain to each of the ten intellects (besides God), souls, and celestial bodies, says Ibn Sīnā, the last of which is the lunar sphere, when there is the emergence of the corporeal realm: There is always the necessary [emanation of] an intellect after every intellect until the sphere of the moon (kurrat al-qamar) comes into existence, and then the elements come into existence (Ibn Sīnā 1998, book 9, chp. 4, p. 409).
This rounds off the levels of existence, according to both scholars. However, because it is so many emanations, or differentiations, away from the level of divinity, this material realm is the least perfect of the levels of existence. Ibn 'Arabī agrees with Ibn Sīnā on this point, not only because the material realm is furthest removed from the divine Essence (just as the lower intellects are further removed from the First Cause than the higher ones, and the physical world is the product of the intellect most remote from God), but also because the sensible world is dependent on the pre-sensible world, in the same way that lower intellects are dependent on those intellects that are above them. Ibn Sīnā explains that: [ . . . ] every intellect is higher in level (martaba) [to others] because of a 'meaning' (ma'nā) that it has, which is that because it thinks about the First [Cause], there is necessarily the existence of an intellect under it, and because it thinks about its own essence, there is necessarily a celestial sphere (falak) from it, with its own soul and body (jirm) (Ibn Sīnā 1998, book 9, chp. 4, p. 409).
It is the emanative process that, in its furthest differentiation from the divine, brings about sensible reality, and it is the undoing of this process that constitutes transcendental happiness, according to Ibn Sīnā. He adds that since the soul is eternal, after the body passes away, it unites with God; that is its absolute transcendental happiness. Here, again, Ibn Sīnā parts ways with Aristotle, for although Aristotle maintains that humankind has the capacity for the divine activity of contemplation, he denies that humankind can ever be 'identifiable with God' (Morewedge 1972, p. 8).
Transcendental Happiness as Union with the Divine
Many scholars argue that, for Ibn Sīnā, union with the active intellect is what constitutes transcendental happiness for the rational soul, and that there is no union with the divine (Gutas 2014a; Rapoport 2019, p. 180). Dmitri Gutas, for instance, writes that Ibn Sīnā 'saw the supreme happiness in the contact of the human intellect with the active intellect during the split-second of hitting upon the middle term' (Gutas 2014b, p. 10), which is something that the rational soul of the philosopher would continually achieve after detachment from the body (Gutas 2014b, p. 62). It is fair to say that Ibn Sīnā defines transcendental happiness as both of these things (Fakhry 1976), which would make it rather plausible that, while both constitute transcendental happiness, this conjunction (ittiṣāl) with the active intellect represents a lower level than the supreme transcendental happiness that is achieved by union with the divine (Inati 1995, pp. 15-16). This is because Ibn Sīnā sees no barrier to union with the divine; he writes in numerous works that he has proven 'the Necessary Existent . . . is, in His essence (dhāt), the act of intellecting ('aql), the One who intellects ('āqil), and the article of intellection (ma'qūl)' (Ibn Sīnā n.d.a., p. 200; n.d.b., p. 248; 2007, p. 131), going so far as to dedicate a chapter to this topic in his Najāt (Ibn Sīnā n.d.a.).
This being the case, supreme transcendental happiness would be found in union with the divine, in a reversal of the emanative process. Indeed, Ibn Sīnā makes many references to union with the divine in the mystical section of his Ishārāt: The one who deems it permissible to make God an intermediary is the recipient of mercy (marḥūm) only from a certain perspective (min wajh) because he is not nourished with (yuṭ'am) the pleasure of magnificence in Him (ladhdhat al-bahja bih), so he can seek this attachment (yata'ṭafah). His knowledge of pleasure is deficient (mukhdaja), so he yearns for it (ḥanūn ilayh), oblivious to what is beyond it (Ibn Sīnā 1968, book 4, tenet 9, chp. 6, p. 74).
There are many points worthy of note in this passage. First, Ibn Sīnā mentions that God should not be an intermediary but the absolute purpose, and that supreme transcendental happiness only lies in this. Ibn Sīnā then defines what he means by this kind of happiness and says that it is 'the pleasure of magnificence in Him'. He, therefore, seems to articulate that it can only be union with the divine that can afford a person supreme transcendental happiness. He follows this up by stating that those who make God an intermediary do not 'seek this attachment' with God; again, he is intimating that it is an absolute attachment to God that yields transcendental happiness. Furthermore, he chastises those who do not have this conception of pleasure and happiness, describing their view as 'deficient' (mukhdaja). It is significant that the term he uses denotes 'the young one of a camel brought forth imperfectly formed, even if the period of gestation have [sic] been completed' (Lane 2003, vol. 2, p. 707). What Ibn Sīnā insinuates is that despite having had the same length of time as the philosophers to know what transcendental happiness is, those who do not realize that it is found in union with the divine have an incomplete or deficient understanding of it. This is why they 'yearn for' their own deficient conception of happiness and do not seek 'what is beyond it'.
Later, Ibn Sīnā says of the advanced knower of God ('ārif), who has reached the penultimate stage in his path, that: If he crosses over from spiritual exercise (riyāḍa) to attainment (nayl), his essence [lit. his secret, sirruh] becomes a polished mirror (mir'āt majluwwa), through which he faces the direction (shaṭr) of God, and the exalted pleasures (al-ladhdhāt al-'alī) flow copiously on him (darrat 'alayh). So he rejoices within himself due to the traces of God that are in it/them (Ibn Sīnā 1968, book 4, tenet 9, chp. 16, p. 91).
After perfecting his spiritual exercise, says Ibn Sīnā, the knower can achieve 'attainment' of the divine, whereby his essence becomes a polished mirror in which the divine is reflected. Therefore, he rejoices because the traces of the divine are present in it/them. The pronoun can refer to two things here: either it refers to what flows copiously on him from the exalted pleasures or, as is more fitting, it refers to his soul (nafs). It would, therefore, mean that the traces of the divine are present in the soul of the knower, due to his union with it. Indeed, this is how al-Ṭūsī seems to understand it, writing in his commentary: The knower ('ārif), if he has perfected his spiritual exercises, and if he does not need them to arrive (wuṣūl) at what he seeks, which is his permanent conjunction (ittiṣāl) with God, his essence becomes void (khālī) of everything that is not God, like a polished mirror . . . so the traces of God are represented (yatamaththal) in it (Ibn Sīnā 1968, book 4, tenet 9, chp. 16, p. 91).
Al-Ṭūsī speaks of the knower having a 'permanent conjunction with God', which means that he unambiguously interprets Ibn Sīnā's view as championing union with the divine. He further consolidates this position with the statement that, in this station, the essence of the knower 'becomes void of everything that is not God', which is why it becomes a 'polished mirror' in which the divine is faithfully reflected. The astronomer and philosopher Shams al-Dīn al-Samarqandī (d. 710/1310?), who was instrumental in promulgating the ideas of Ibn Sīnā, as well as making some original contributions, especially in the field of Avicennan logic (Faydei 2020), also seems to favor this interpretation. He writes in his commentary on this passage that in the soul of the knower, 'the traces of God are represented, and true pleasures (al-ladhdhāt al-ḥaqīqiyya) are poured on him, as well as the trace of divine perfections (athar al-kamālāt al-ilāhiyya)' (Al-Samarqandī 1979, vol. 3, p. 411). Al-Samarqandī identifies the acquisition of 'divine perfections' and the 'traces of God', as represented in the rational soul, with experiencing 'true pleasures'. Transcendental happiness, then, only occurs when there is union with the divine.
In the following stage, which represents the final step in the progress of the knower, Ibn Sīnā explains that there is absolute conjunction with the divine, such that the knower: [ . . . ] withdraws (yaghīb) from his soul, and observes (yalḥaẓ) only the side (jānib) of sacredness (quds). And if he observes his soul, it is only in the sense that he notices [the divine], not in the sense that his soul is bedecked with it. And, at this point, the arrival (wuṣūl) is complete (Ibn Sīnā 1968, book 4, tenet 9, chp. 17, pp. 92-93).
In this final stage of the knower's progress, the soul no longer observes itself as being a separate entity from the divine; it observes only the divine and does not even realize that it is 'bedecked' with divinity, according to Ibn Sīnā. This represents absolute union with the divine, or complete arrival, which is when the soul experiences supreme transcendental happiness. Al-Ṭūsī draws an equivalence between this stage and the Sufi terms of 'effacement' (maḥw) and 'annihilation' (fanā') in God (Ibn Sīnā 1968, book 4, tenet 9, chp. 17, p. 92), when there is absolute union with the divine, according to some Sufi writers (Massignon 1982). Al-Rāzī employs the same term of 'annihilation' (fanā') to describe this stage, writing: It is the first of the stations of 'absolute arrival' (al-wuṣūl al-tāmm) to God, and it is the complete annihilation (fanā') of everything besides God, and the complete subsistence in Him (baqā' bih) (vol. 2, p. 119).
According to al-Rāzī, at this stage, everything besides God vanishes, and the only existence that is left for the soul is its subsistence in the divine.
These commentators of the Ishārāt are clearly of the view that Ibn Sīnā believes that union of the rational soul with the divine, and not just with the active intellect, represents the highest level of transcendental happiness. Ibn Sīnā provides more detail on this issue in Al-Shifā', when he speaks of the exalted rank of the rational soul: The perfection that is particular to (khāṣṣ bih) the rational soul is that it becomes an intellectual realm ('ālam 'aqliyy) wherein the form of everything (ṣūrat al-kull) is inscribed, as well as the arrangement (niẓām) of everything that is comprehended, and the good (khayr) that pours forth to everything. This starts with the basis of everything (mabda' al-kull), then goes to the exalted, purely spiritual substances (al-jawāhir al-sharīfa al-muṭlaqa), then to the spiritual substances that are related to bodies in some way (al-muta'alliqa naw' mā bi'l-abdān), then to elevated bodies (al-ajsām al-'ulwiyya) with their formations and their faculties, then it carries on until it exhausts in itself the formation of all of existence (wujūd). So it turns into an intellectual realm that completely corresponds to the existing realm. It therefore bears witness to absolute excellence (al-ḥusn al-muṭlaq), absolute goodness (al-khayr al-muṭlaq), and the beauty of the absolutely existent Truth (Al-Ḥaqq al-muṭlaq) and is united with it (muttaḥida bih) (Ibn Sīnā 1998, book 9, chp. 7, vol. 1, pp. 425-26).
In this long passage, Ibn Sīnā clearly states that the rational soul becomes an intellectual realm that mirrors the sensible world, which is an idea that Ibn 'Arabī makes extensive use of in his conception of transcendental happiness (see Section 3). He elaborates that this mirroring starts with 'the basis of everything', which is an unambiguous reference to God, and then proceeds to 'the exalted, purely spiritual substances', which are the celestial intellects that have no connection to matter. After this, the rational soul reflects 'the spiritual substances that are related to bodies in some way', which are the celestial souls. Next, it reflects the 'elevated bodies', which are the celestial bodies. Finally, 'it carries on until it exhausts in itself the formation of all of existence', which refers to the sensible world. When it has completed this reflection, it becomes an 'intellectual realm that completely corresponds to the existing realm'; therefore, it bears witness to 'the beauty of the absolutely existent Truth, and is united with it'. This means that once the rational soul has become a complete mirror for the whole of existence, it witnesses the beauty of God and is united with God. Ibn Sīnā writes a virtually identical passage in the Najāt (Ibn Sīnā n.d.a., pp. 240-41), underscoring his commitment to the idea that supreme transcendental happiness for the rational soul lies in becoming a microcosmic mirror for all of sensible reality, and, ultimately, in union with the divine. Although Ibn 'Arabī does not agree with union with the divine as a source of supreme transcendental happiness, he is conspicuously influenced by Ibn Sīnā's general conception of transcendental happiness.
The Influence of Aristotle
Ibn 'Arabī is known to have been influenced by Ibn Sīnā (Inati 1996, p. 62), and his exposition of transcendental happiness bears the hallmarks of the Aristotelian contemplative perfection and Plotinian emanationism by which Ibn Sīnā's conception is characterized. Ibn 'Arabī declares that all happiness lies only in comprehending God (Chittick 1989, p. 151; Nasr 2014, p. 84). This declaration has ostensible similarities with Aristotle's notion of happiness in terms of exercising the activity of our 'most divine element' (the rational soul), that is, understanding (see above). In his most detailed exploration of this topic, comprising chapter 167 of his magnum opus, Al-Futūḥāt al-makkiyya, and entitled On the esoteric knowledge of the alchemy of happiness (Fī ma'rifat kīmiyā' al-sa'āda), he explains the concurrence between alchemy and happiness: Alchemy is a term for knowledge that relates to [things] . . . that have the capacity for transformation (istiḥāla), I mean, to change the states (taghayyur al-aḥwāl) of one essence (al-'ayn al-wāḥida) (Ibn 'Arabī n.d., vol. 2, p. 270).
The main point of confluence between alchemy and happiness, Ibn 'Arabī reveals, is the capacity to change from one state to another, even though the essence remains the same. It is the transformation from potentiality to actuality that is the definition of happiness, according to Ibn 'Arabī. Deep resonances with Aristotle's notion of happiness are felt here (Blumenfeld 2022; also, see above). Ibn 'Arabī elaborates that: [ . . . ] all minerals (ma'ādin) come from one base. This base seeks, in its essence (bi dhātih), to attain the rank of perfection (darajat al-kamāl), which is 'goldness' (dhahabiyya) (Ibn 'Arabī n.d., vol. 2, p. 270).
'Goldness', then, is the full actualization of the potentiality that is present in the essence of all minerals, according to Ibn 'Arabī; this is their 'rank of perfection', which they seek to attain. In the same way, humans seek to attain their own rank of perfection, in which lies their supreme transcendental happiness. However, much as humans face the obstacle of materiality that impedes their path to the perfection of universal intellection, according to Aristotle (Inati 1995, p. 13), minerals encounter similar obstacles because they are a 'natural affair' (amr ṭabī'ī) (Ibn 'Arabī n.d., vol. 2, p. 270). Ibn 'Arabī gives examples of the effects of nature, from the ravages of time to the fluctuations of temperature and moisture, etc., which hamper their path to the full actualization of their potentiality of perfection (Ibn 'Arabī n.d., vol. 2, p. 270). The analogy of alchemy and transcendental happiness, therefore, is an apt one.
He emphasizes the parallel between the human quest for perfection and transcendental happiness and that of minerals when he says: In the same way as the bodies of minerals (ajsād al-ma'ādin) are [arranged] in levels (marātib) due to causes ('ilal) that affect them while they are being created, even though they all seek the rank of perfection (darajat al-kamāl), on account of which their essences (a'yān) are manifest, so, too, is humankind created for perfection. Thus, the only things that can turn it away from that (ṣaraf 'an dhālik) are the deficiencies ('ilal) and diseases (amrāḍ) that affect it, either in the essences themselves (aṣl dhawātihim), or because of accidental ('araḍī) causes (Ibn 'Arabī n.d., vol. 2, p. 272).
Ibn 'Arabī explains that the potentiality of humans, as with minerals, is to achieve actuality, which is perfection. However, despite both humans and minerals seeking perfection, not all attain it. He has already mentioned the external impediment to this attainment in the case of minerals, when he spoke of the effects of nature. Likewise, for humans, the external impediments are those causes that prevent them from pursuing contemplation of the divine. Nevertheless, he adds another cause in this passage, which is what is found 'in the essences themselves'. Whereas Aristotle regards the desires of the body as a corollary of materiality that impedes full actualization, Ibn 'Arabī effects a bifurcation in which parts of the essence are one of the obstacles to perfection and transcendental happiness, while the natural effects of the world represent the other. In the characteristic emphasis that he places on homonymy (Lala 2019, 2023a), he states that the 'causes' ('ilal) that negatively affect the minerals during their stages of formation are the same as the 'deficiencies' ('ilal) that afflict the essences of humankind. This, then, is their intrinsic preparedness (isti'dād), which, in addition to the natural effects of the world, determines whether they can achieve the perfection of transcendental happiness (Lala 2023b).
Ibn 'Arabī elaborates on this preparedness when he says: Know that souls, in terms of their essence, are made ready (muhayya') to accept the preparedness (isti'dād) that emanates for them from what the divine carries out (al-tawqī'āt al-ilāhiyya). So, among them are those who just obtain the preparedness for carrying out sainthood (isti'dād tawqī' al-wilāya) and do not go past that. And among them are those who are given the preparedness for all or some of the stations (maqāmāt) that we have mentioned (Ibn 'Arabī n.d., vol. 2, p. 272).
Every person, therefore, has a preparedness that is divinely imbued. For Ibn 'Arabī, as for Aristotle, the preparedness that each person possesses to achieve transcendental happiness lies in their contemplation of the divine. Ibn 'Arabī makes this clear when he writes: God has given mastery (mallaka) to particular souls (al-nufūs al-juz'iyya) over conducting the affairs of (tadbīr) the body, and He has appointed them as vicegerents (istakhlaf) of them. He has, therefore, made it apparent to the bodies that they are their vicegerents, in order for them [i.e., the souls] to alert them (tatanabbah) [i.e., the bodies] to the fact that they have an Originator (Mūjid) who has appointed them as vicegerents, so it is their duty (yata'ayyan 'alayhā) to seek knowledge of Him who appointed them [i.e., the souls] as vicegerents of them [i.e., the bodies] (Ibn 'Arabī n.d., vol. 2, p. 272).
Ibn 'Arabī, as does Aristotle, asserts that the practical intellect of the rational soul has the function of managing the body. He argues that the only reason God appointed the rational soul as a vicegerent over the body was so that this would lead to the realization that there must be someone who gave the rational soul this power. The rational soul, thus, alerts the body that it has an Originator who gave it this power and that it is the raison d'être of humankind to seek knowledge of this divine Originator; it is only in this search that its transcendental happiness resides.
In much the same way as Ibn Sīnā, who discusses the role of the prophet-legislator in terms of the individual pursuit of transcendental happiness, asserting that obedience to him is necessary and is in accordance with the dictates of the rational soul to contemplate God (Ibn Sīnā 1968, book 4, tenet 9, chp. 4, pp. 60-67), Ibn 'Arabī argues that the prophet-legislator: [ . . . ] prescribes laws that make apparent the path that allows one to attain the rank of perfection and happiness (darajat al-kamāl wa'l-sa'āda), in keeping with what contemplation necessitates (Ibn 'Arabī n.d., vol. 2, p. 273).
The role of the prophet-legislator, therefore, is to make clear the path to transcendental happiness, which lies in the contemplation of the divine. Ibn 'Arabī writes that the prophet-legislator 'makes clear the path of knowledge (ṭarīqat al-'ilm) that leads to Him, on which lies their happiness' (Ibn 'Arabī n.d., vol. 2, p. 273). This, argue Ibn Sīnā and Ibn 'Arabī, is why the laws of the prophet-legislator are in conformity with the essential activity of the rational soul. However, not everyone will attain transcendental happiness. Revealing the rationale behind calling the chapter 'The Alchemy of Happiness', Ibn 'Arabī writes: It is on account of there being no happiness except in it. Furthermore, there is nothing that people, from among the people of God (ahl Allāh), have that is better than it. And it is that He gives you the rank of perfection (darajat al-kamāl) that behooves humankind to attain. This is due to the fact that not every person who is happy (ṣāḥib al-sa'āda) is given perfection, so that all those who have perfection (ṣāḥib al-kamāl) are happy, and not all who are happy are perfect. Happiness is a term denoting the attainment of a lofty rank (darajat al-'ulyā), which is imitation (tashabbuh) of the Cause (Ibn 'Arabī n.d., vol. 2, p. 272).
There are many topics of interest in this passage. Ibn 'Arabī asserts that transcendental happiness only lies in full actualization, 'which it behooves humankind to attain'. However, he then goes on to explicate that while it is axiomatic that everyone who has achieved full actualization possesses transcendental happiness, there are also those who are happy but who have not attained perfection. There are similarities here with Ibn Sīnā's classification of people into seven classes, of which three classes are afforded transcendental happiness, two classes are given relative happiness or suffering, and three classes are doomed to absolute suffering (Inati 1996, pp. 18-27). This is because, according to Ibn Sīnā: [ . . . ] eternal happiness or eternal suffering . . . are caused by theoretical perfection and theoretical imperfection, respectively. It is obvious, though, that not all theoretical imperfection leads to suffering, but only that which is accompanied by knowledge of one's perfection (Inati 1996, p. 27).
Ibn Sīnā says that one requires theoretical and moral perfection in order to achieve supreme transcendental happiness without undergoing any suffering in the hereafter. Those who attain moral perfection, but who do not attain theoretical perfection because they were unaware of what the latter entailed, will only attain relative happiness in the second life (Inati 1996, p. 19). Ibn 'Arabī, likewise, accords those who achieve moral perfection but who are not aware of the true reality of things a state of relative happiness, but they do not have the supreme transcendental happiness that is the preserve of the spiritual elite. These are the people who have attained the 'lofty rank (darajat al-'ulyā), which is imitation (tashabbuh) of the Cause'. It is in this aspect of imitating the Cause, or God, that Ibn 'Arabī is most influenced by the writings of Plotinus.
The Influence of Plotinus
Ibn 'Arabī adheres to the Plotinian notion of ontological love as a downward motion from the divine to the creation, along with an upward motion that seeks to return to Him, as espoused by Ibn Sīnā. However, for Ibn 'Arabī, God is a being in its most unrestricted sense, not as a determination of it, as William Chittick explains when he says that: [ . . . ] anything that exists is a particular mode, within which the One Being displays Itself. But being is not any thing that exists, for, if it were one thing, it could not be, at the same time, another thing. Being is the 'thing in every respect', not in one respect or another (Chittick 1982, p. 111).
Chittick makes it clear that, for Ibn 'Arabī, everything that exists is a manifestation of the One Being that is God. God is not, as Ibn Sīnā asserts, a type of being. He is all being. As being itself, which is what God is in His absoluteness (Izutsu 1983), God is beyond human understanding, according to Ibn 'Arabī. However, since Ibn Sīnā views God as a determination of 'being-qua-being' (Morewedge 1972, p. 11), he proceeds from a level below that espoused by Ibn 'Arabī. This means that, unlike Ibn 'Arabī, Ibn Sīnā believes that the Necessary Being is humanly comprehensible; however, he argues, much like Ibn 'Arabī, that all existents are nothing but God. There is also a difference in the overall purpose served by emanationism. Ibn Sīnā conscripts the emanatory process as a justification for the denial of creation ex nihilo (Morewedge 2001, p. 79), in opposition to Ibn 'Arabī, for whom the divine yearning for self-expression does not contradict its temporal unfolding (Ibn 'Arabī 2002, p. 48).
Ibn 'Arabī begins his most popular work, Fuṣūṣ al-ḥikam, by delineating the impetus for the emanation coming from Him. In one of the most well-known and often-translated passages, he states: God, be He exalted, desired to see the essences of His most beautiful Names (Al-Asmā' al-ḥusnā), which cannot be counted, or, you could say, He wanted to see His essence. So, He chose to do this through a comprehensive creation (kawn jāmi') that encapsulates the whole matter through being characterized by existence (wujūd). God's secret would, thus, be manifest to Him via this creation because seeing something in itself is not the same as seeing it in something else that becomes like a mirror for it (Ibn 'Arabī 2002, p. 48).
The cause of this emanation from the divine was the love that God had to see Himself manifested in the Other, or as Ibn 'Arabī puts it, God wanted to see the essences of 'His most beautiful Names' (Al-Asmā' al-ḥusnā), which are 'His essence . . . in a comprehensive creation' (kawn jāmi'). Ibn 'Arabī bases this opinion on a tradition in which God declares, 'I was a hidden treasure (kanz makhfiyy), and I wished to be known, so I brought forth the creation so that through it they would know Me' (Ibn 'Arabī n.d., vol. 2, p. 303). Ibn 'Arabī offers a commentary on this tradition, in which he says: So, God wished to be manifested in the forms of existence (ṣuwar al-wujūd), and He wished for Himself to be known to Himself in the mirrors of contingency (marāyā al-mumkināt), just as humans observe their forms in the mirror so as to attain something that they could not attain in themselves without the existence of this form. So that is the love that is the cause ('illa) of the creation of the world, and it is the true basis (al-asās al-ḥaqīqī) for which He brought forth existence (Ibn 'Arabī n.d., vol. 2, p. 303).
Ibn 'Arabī explicitly declares that the 'cause of the creation of the world' is God's self-love, which is the 'true basis' for His bringing forth existence. This ontological love results in the existence of the universe as the disparate loci of divine manifestation and imbues them with a love to return to Him. For Ibn 'Arabī, then, because divine ontological love is a love for self-manifestation in the form of His most beautiful Names (as mentioned in the Qur'an), it is by manifesting these Names most precisely that this proximity to the divine is achieved (Lala 2021;Nettler 1978, pp. 219-29;Nettler 2003, pp. 17-22).
This means that even though Ibn 'Arabī and Ibn Sīnā agree on Plotinian emanationism, they disagree about the essential impetus behind it, in addition to viewing transcendental happiness as perfection in different ways. Ibn 'Arabī's explanation suggests that there was a divine 'yearning' to be known, which is the reason why God brought about existence. Even though Ibn 'Arabī does not accept that this 'yearning' implies a lack in the way that humans yearn for self-perfection, wherein lies their transcendental happiness, because they do not possess it, yet, here, he differs from Ibn Sīnā, who rejects the idea that there could ever be divine yearning because that would mean that God does not possess something, as Inati explains: God does not, and cannot, yearn for anything because . . . yearning implies some lack, and God does not lack anything. Even if no other beings conceive the presence of His essence and, therefore, love Him, He would still not lack anything (Inati 1996, p. 28). Therefore, even though divine self-love brings to pass the emanative process for Ibn Sīnā and Ibn 'Arabī, the latter's conception admits of some form of 'yearning', whereas the former's does not. In addition, for Ibn Sīnā, perfection and transcendental happiness can be found in an upward motion in which the rational souls (al-nufūs al-nāṭiqa) become more and more perfect as they acquire perfections. One of these is the perfection of the acquired intellect (al-'aql al-mustafād), which enables the rational soul to have the intelligibles, so that it can perceive the intelligibles, which are universal concepts, whenever it wishes (Inati 2014, p. 201). At this stage, Ibn Sīnā repeatedly asserts that the rational soul becomes 'like a polished mirror upon which are reflected the forms of things as they are in themselves [i.e., the intelligibles without any distortion]' (Gutas 2012, p. 424).
Ibn 'Arabī agrees with the essential notion of acquiring perfections, but rather than believing that it is an upward motion in which the baseness of materiality is divested, he views it in completely the opposite way. Since the purpose of the universe is so that God could see Himself in the Other, in something that is not Him, materiality is not something that is base, according to Ibn 'Arabī, and thus an impediment to perfection, as it is for Aristotle and Ibn Sīnā. Instead, it is the opposite: it is the way in which the divine purpose for the universe is achieved. This is because, as Ibn 'Arabī clarifies in his commentary of the tradition in which God likens Himself to a hidden treasure, what is gained by the form of the divine Names that exist in sensible reality cannot be gained from the self in itself, much as the form of a person that exists in a mirror cannot be perceived without the mirror.
It is in this sense of manifesting the divine Names, and in this sense alone, that Plotinian union with the divine occurs for Ibn 'Arabī, and it is this that constitutes supreme transcendental happiness. Ibn 'Arabī rules out absolute union with the divine that Ibn Sīnā seems to allow. Indeed, he regards annihilation (fanā') as an initial stage in which the person annihilates their creaturely traits and takes on divine traits. This is why he pairs annihilation with subsistence (baqā') (Al-Ḥakīm 1981, p. 203), as al-Ḥakīm elaborates: Annihilation (fanā') is when the blameworthy characteristics (al-khiṣāl al-madhmūma) are annihilated from a person. And subsistence (baqā') is that praiseworthy characteristics (al-khiṣāl al-maḥmūda) are maintained and made firm in a person. So, the seekers on the spiritual path (sālikūn) differ about annihilation and subsistence: some of them annihilate their base desires, that is, what they desire of worldly things, so when their desires are annihilated, their [pure] intention (niyya) and sincerity (ikhlāṣ) in servanthood ('ubūdiyya) remain. And whoever annihilates their blameworthy traits, like envy, pride, hatred, and others, will be left with magnanimity and sincerity (Al-Ḥakīm 1981, p. 202).
Considering that one needs to divest oneself of creaturely traits before divine traits are adopted, annihilation precedes subsistence, and subsistence represents a higher level than annihilation. However, there is also another reason why subsistence is superior, as Ibn 'Arabī explains: The connection (nisba) of subsistence, in our opinion, is more exalted in the spiritual path than the connection of annihilation . . . for annihilation is that which annihilates in you [creaturely traits] . . . and subsistence is your connection to God (Ibn 'Arabī n.d., vol. 2, p. 515).
Since annihilation is simply breaking free from the shackles of creaturely desires, whereas subsistence is a state in which the connection to the divine is maintained, the latter represents a higher level than the former. Taking on divine traits, then, or subsistence, is the actualization of human potentiality and, therefore, constitutes transcendental happiness. However, this is not a divestment of materiality, as it is for Aristotle and Ibn Sīnā. Quite the contrary. For Ibn 'Arabī, materiality is a conduit for transcendental happiness, for it is only when the physical form of a person becomes a locus of manifestation of all of God's most beautiful Names that they attain full actualization and transcendental happiness. This is the rank of the Perfect Man (Al-Insān al-kāmil) (Al-Jīlī 1997;Morrissey 2020).
Transcendental Happiness as the Perfect Man
Ibn 'Arabī asserts that the reason for the creation of the whole universe was so that God could see His knowable aspect-as represented by His most beautiful Names-in something other than Himself. The whole of the universe, therefore, is a manifestation of God's most beautiful Names, as Ibn 'Arabī elaborates in his commentary of Q45:37.
For Him is all majesty (kibriyā') in the heavens and the earth, and that is the essence of God, so it is not possible for His essence not to be a locus because all that is in the heavens and earth is a locus (maḥall) for Him. And His being praised in the universe itself is what 'majesty' means, for He is too exalted for anything to be not Him (Ibn 'Arabī n.d., vol. 3, p. 538).
Everything in the universe is a locus of divine manifestation because that was the very purpose for His bringing it into existence. Ibn 'Arabī states that God is far too exalted for there to be anything besides Him that exists in the universe. Therefore, all that exists is a locus of manifestation of one of His most beautiful Names.
Even though all individual things in the world represent individual Names from the list of God's most beautiful Names, the rank that humankind, with Ādam as its representative, occupies is different, as Ibn 'Arabī explicates: God, the Exalted, brought forth the whole universe in a form of existence (wujūd) that was vague and undifferentiated, which had no soul; that is why it was like an unpolished mirror (mir'āt ghayr majluwwa). And it is the nature of the divine decree (al-ḥukm al-ilāhī) that it only prepares a locus if it is to receive the divine spirit (rūḥ ilāhī) . . . so Ādam was the very polish (jalā') of this mirror and the soul of this form (Ibn 'Arabī 2002, p. 49).
Ādam specifically, and humankind more generally, holds a special rank because of being the polish of the mirror in which God sees Himself, in something other than Himself (Sells 1988, pp. 121-49). Ibn 'Arabī then explains what it means to be the polish of the mirror: 'All the divine forms that are the [divine] Names are manifest in the formation (nash'a) of humankind, so it has attained the degree (rutba) of completeness and all-inclusiveness' (Ibn 'Arabī 2002, p. 50). Thus, humankind has the potential to be the locus of divine manifestation for all of God's most beautiful Names, which represents its 'degree of completeness and all-inclusiveness'. However, it is only when humankind fulfills this potentiality that it reaches the rank of the Perfect Man, who has the right to be called the vicegerent (khalīfa) of God, according to Nūr al-Dīn al-Jāmī (d. 898/1492) (Al-Jāmī 2005, p. 79), one of the most important disseminators of Ibn 'Arabī's philosophical thought (Rizvi 2006). It is the fulfillment of this potentiality that represents perfection and supreme transcendental happiness, as Ibn 'Arabī clarifies when he states that happiness is: [ . . . ] the perfection (kamāl) that is sought, which is the reason humankind was created to be a vicegerent, that Ādam, peace be upon him, attained by divine providence (al-'ināya al-ilāhiyya) (Ibn 'Arabī n.d., vol. 2, p. 272).
Transcendental happiness, says Ibn 'Arabī, lies in becoming a manifestation of all the divine Names in a single locus, which represents the actualization of our potentiality. Al-Jāmī elaborates that this is why Ādam (and humankind more generally) has a 'divine form' (ṣūra ilāhiyya) (Al-Jāmī 2005, p. 74). It is the true meaning, he continues, of the prophetic tradition: 'Surely God created Ādam in His form ('alā ṣūratih)' (Muslim n.d., vol. 4, p. 2017; 'Abd al-Razzāq 1983, vol. 9, p. 444; Ibn Ḥibbān 1988, vol. 12, p. 420; Al-Bazzār 1988-2009; Ibn Ḥanbal 2001, vol. 12, p. 275). The potentiality of this form can only be fulfilled when all the creaturely traits are divested, and all the divine traits are adopted. This is the point when one becomes a mirror for the divine, and this can only be achieved through orthopraxy.
To emphasize his fidelity to orthopraxy as the only vehicle by which to attain this level (Addas 1993;Chittick 1992, pp. xii-xiii;De Cillis 2014, p. 169), Ibn 'Arabī states: In the same way as your happiness is secured from your actions, likewise, the divine Names (al-asmā' al-ilāhiyya) are only affirmed through His actions, which are you and are originated. Thus, in terms of His traces (āthār), He is called 'God', and in terms of your actions, you are called 'happy' (Ibn 'Arabī 2002, p. 95).
It is only by following the formalistic aspects of religion through the body and by being cognizant of one's inner reality that one can attain the rank of the Perfect Man, in which one becomes a mirror for the divine and achieves transcendental happiness. God is so named in terms of the manifestation of His actions in the universe, but it is only through these actions that humankind achieves transcendental happiness. The commentators of the Fuṣūṣ are in complete agreement with Ibn 'Arabī on this issue. The influential early commentator, Mu'ayyid al-Dīn al-Jandī (d. 700/1300?), whose commentary became a model that subsequent generations would emulate (Dagli 2016, pp. 95-104), states in his commentary on this passage: There is no doubt that your following the commands of God are your actions and that in respecting His commands and prohibitions resides your happiness. . . . So, it is only your actions that lead to your happiness, which are only Him in reality because the actions of God are originated and established by the most beautiful Names (Al-Jandī 2007, p. 330).
Al-Jandī explains that because humans are merely the loci of manifestation of all the divine Names, their actions are the actions of God. It is in this respect that the actions of God are 'originated' because they are nothing but the actions carried out by the manifestation of the divine Names, which are originated in themselves. Therefore, it is only through these acts, and through realizing their true reality, that humans can achieve transcendental happiness.
Following the writings of al-Jandī, 'Abd al-Razzāq al-Qāshānī (d. 736/1335?), whose formalization of Ibn 'Arabī's philosophical thought exerted an abiding influence on the reception of the former's ideas (Lala 2019), articulates that it is only actions that lead to happiness because 'happiness is an attribute that you possess, and this attribute is only achieved by your actions, so your happiness is derived from your actions because every action is voluntary (ikhtiyārī) and inevitably produces an effect in the agent' (Al-Qāshānī 1951, p. 125). He concludes by echoing the sentiment of his predecessor that these actions are only performed by a locus of the divine Names and so, are in that sense, divine (Al-Qāshānī 1951, p. 125). Al-Qāshānī's disciple and author of the most widely circulated commentary on the Fuṣūṣ in the Ottoman era, Dawūd al-Qayṣarī (d. 751/1350) (Rustom 2005), clarifies that this does not mean that 'actions are the causes of the Names since it is the Names that are the causes of the actions and their source. But as the Names are the divine realities hidden within creation, their manifestation is only achieved through their traces and actions.' This is the source of happiness for humankind because it is through this that the 'fixed essences' (a'yān thābita), or intrinsic preparedness to achieve transcendental happiness, can be realized (Al-Qayṣarī 1955, p. 669). Al-Jāmī stresses that it is not only adherence to formalistic worship that enables one to achieve transcendental happiness; rather, it is achieved by the full realization of the body as a locus of the divine, and its actions as manifestations of the effects of the divine Names in phenomenality (Al-Jāmī 2009, p. 210). The important early-modern commentator, 'Abd al-Ghanī al-Nābulusī (d. 1143/1731), is even more explicit when he declares that the perfection (kamāl) of the divine Names can only be expressed through actions, which means that 'actions are . . . from His perfection' (Al-Nābulusī 2008, vol. 1, p. 338).

Ibn 'Arabī and his commentators agree, then, that transcendental happiness is only attained through the fulfillment of one's potentiality through the physical body and the actions that it performs. In other words, it is only through materiality that transcendental happiness is achieved. Ibn Sīnā speaks of transcendental happiness occurring when the rational soul becomes an intellectual realm that mirrors the sensible world, which corresponds to Ibn 'Arabī's notion of the Perfect Man becoming the 'microcosmic universe' (al-'ālam al-saghīr), along with the universe being 'the macrocosmic man' (al-insān al-kabīr) (Ibn 'Arabī n.d., vol. 3, p. 11). Both become mirrors for the divine when they attain transcendental happiness. Nevertheless, the rational soul, according to Ibn Sīnā, becomes polished when it divests its materiality since it is materiality that is an impediment to transcendental happiness. For Ibn 'Arabī, the opposite is true. As God only achieves His purpose of manifesting His Names in the Other through materiality, it is only through materiality that transcendental happiness is attained. It is only when the physical self, the divine form, fulfills its potential of manifesting all the divine Names through orthopraxy that this occurs. It is in the sense of taking on all the divine traits and divesting all the creaturely traits that true contemplation of, and 'union' with, the divine takes place.
Conclusions
There are many parallels between Ibn Sīnā and Ibn 'Arabī's notions of transcendental happiness. Both writers agree with the Aristotelian conception of happiness as an understanding of the divine. They also agree on the Plotinian idea of divine emanation; for both writers, this is driven by divine self-love, as is the reciprocal upward motion that seeks to 'reverse' it. This upward motion, both writers maintain, is propelled by love for God. However, they disagree as to how transcendental happiness is attained. For Ibn Sīnā, when the rational soul is completely liberated from materiality, it becomes a mirror for the divine and the soul is then able to unite with it. This is its supreme, transcendental happiness. While agreeing that to become a mirror for the divine is the realization of the potentiality of humanity, and that in this lies transcendental happiness, Ibn 'Arabī makes materiality a necessary ingredient for the attainment of that happiness. Thus, it is in the acceptance of divine traits and the divestment of creaturely traits that transcendental happiness resides. It is also only in this sense that humans can unite with the divine.

Funding: This project has been supported by Gulf University for Science and Technology under project code: ISG-CAase 14.
Data Availability Statement: Not applicable.
Conflicts of Interest:
The authors declare no conflict of interest.
1 All translations from the Arabic are our own, unless otherwise indicated.
2 'The Truth' is commonly used to refer to God by the Sufis (Al-Jurjānī 1845, p. 96).
3 A key figure in Shi'ite philosophy, Mullā Ṣadrā (d. 1045/1636), synthesizes the ideas of Ibn Sīnā and Ibn 'Arabī in his conception of transcendental happiness as the point at which the virtuous soul meets God (Murtada'i 2012). Mullā Ṣadrā underscored the principality of existence (wujūd) over quiddity (māhiyya); he asserted that change in the phenomenal world was not just accidental change but also existential change, which he called 'trans-substantial motion' (al-ḥarakat al-jawhariyya) (Nasr 2014). This change is the cause of the gradations of existence. Therefore, just as there are gradations of existence, there are gradations of happiness; indeed, the former is the cause of the latter (Kalin 2010). The lowest level, as Ibn Sīnā and Ibn 'Arabī delineate, is the happiness that derives from the body, followed by intellectual happiness, and culminating in the transcendental happiness of meeting with the divine (Kalin 2010; Nasr 2014).
Recognising the threat of vision loss in people living with HIV on antiretroviral therapy without retinitis
Should we examine the impact of this virus, and of its treatment, on the eyes of people living with HIV (PLHIV), until such time as a vaccine is found? The 'test and treat' approach makes it possible to preserve the lives of PLHIV with universal access to antiretroviral therapy (ART). It is the driving force for reduced episodes of retinitis in PLHIV by elevating cluster of differentiation 4 (CD4) counts and reducing viral loads. All of this is possible as a result of the Joint United Nations Programme on HIV/AIDS (UNAIDS) 90-90-90 goal, which targets 90% of PLHIV being diagnosed, on ART, and achieving viral suppression by 2020. South Africa has the largest ART programme in the world, with 68% of its 7.7 million PLHIV on ART, which raised their life expectancy to 67.7 years in 2015. 1 This translates to 62% of all PLHIV on ART worldwide.
Background
Should we examine the impact of this virus, and of its treatment, on the eyes of people living with HIV (PLHIV), until such time as a vaccine is found? The 'test and treat' approach makes it possible to preserve the lives of PLHIV with universal access to antiretroviral therapy (ART). It is the driving force for reduced episodes of retinitis in PLHIV by elevating cluster of differentiation 4 (CD4) counts and reducing viral loads. All of this is possible as a result of the Joint United Nations Programme on HIV/AIDS (UNAIDS) 90-90-90 goal, which targets 90% of PLHIV being diagnosed, on ART, and achieving viral suppression by 2020. South Africa has the largest ART programme in the world, with 68% of its 7.7 million PLHIV on ART, which raised their life expectancy to 67.7 years in 2015. 1 This translates to 62% of all PLHIV on ART worldwide.
Ophthalmic clinicians need to be aware that although PLHIV on ART may not show clinical signs of ocular deterioration, usually indicated by an absence of retinitis as a result of the benefits of ART, this does not mean that the retinal structure is healthy. Presently, ophthalmic clinicians manage HIV only when the eye shows clinical signs of deterioration. When vision is threatened, the retina may be affected, showing retinitis, in particular cytomegalovirus (CMV) retinitis. All of this usually occurs when PLHIV have CD4 counts of less than 50 cells/mm³. As the current standard practice is for PLHIV to be initiated onto ART upon diagnosis, there is a reduced prevalence of opportunistic infections, including those that affect the eyes. However, this review of relevant research argues that there is evidence that ophthalmic clinicians need to adopt a proactive approach in managing the eyes of PLHIV who are on ART, until such time as a vaccine is found. Understanding this will help clinicians to better manage the threat of blindness in PLHIV.
This review attempts to highlight the structural integrity of the retinal nerve fibre layer (RNFL) in PLHIV who are on ART without retinitis. The RNFL is a fundamental layer of the inner retina, as it conducts most of our visual functions including visual fields (perimetry), contrast sensitivity (CS), colour vision and visual electrophysiology. The integrity of these visual functions will be explored, including the existence of a structural relationship with the RNFL to assess its impact in the absence of retinitis.
The review will first discuss studies on RNFL thickness in PLHIV, then describe studies focusing on visual function assessment in PLHIV, and then describe studies that have assessed the RNFL in the presence of various visual functional changes in PLHIV, as an attempt to demonstrate a direct relationship between the RNFL and visual changes. Finally, it will explore neuro-ophthalmic associations in PLHIV and conclude with the effects of ART itself on the retina. The value of this review is to alert practitioners to the structural and functional integrity of the retina and its impact on vision in this focus group of PLHIV on ART without retinitis.
Methods
The literature search for this review was performed using the PubMed, Google Scholar and EBSCOhost databases for publications up to September 2019. The search terms used were HIV and RNFL, HIV and perimetry, HIV and contrast sensitivity, HIV and colour vision, HIV and visual evoked potentials, HIV and electroretinograms, HIV and brain, HIV and ART, and retina. The inclusion criteria were all types of peer-reviewed article related to PLHIV on ART without retinitis. The exclusion criteria were all grey literature. The methodology included title screening followed by abstract screening and full-text screening for each database. Literature was then categorised according to its focus on issues related to the purpose of the study. The categorisations and publications that were found and reviewed are presented in Tables 1, 2, 3 and 4.
Ethical considerations
The study has received ethical clearance from the Biomedical Research Ethics Committee, University of KwaZulu-Natal (ethical clearance number BE359/17). Informed consent was obtained from all participants prior to any form or part of the data collection.
Retinal nerve fibre layer in people living with HIV
The RNFL is the innermost layer of the retina, comprising axons of the retinal ganglion cells, with a mean RNFL thickness in adults of approximately 100 microns. However, variations in RNFL thickness are associated with population genotype differences; in short, with race, sex and age. Thinning of the RNFL, of the ganglion cell layer (GCL) and of the retina generally is a physiological change of ageing. A 2007 study 4 also reported a reduction in ppRNFL thickness in subjects with CD4 counts of less than 100 cells/mm³. Besada et al. 5 also observed a reduction of ppRNFL thickness, but in subjects with CD4 counts of greater than 100 cells/mm³. All of this growing evidence suggests that, in the absence of clinically significant retinal signs, the RNFL is affected in PLHIV on ART, but with CD4 counts considered to indicate an immunocompromised state. Table 1 shows the RNFL thickness measurements using different tools from the above-mentioned studies. It specifically shows the effects on mean ppRNFL thickness and furthermore highlights the thinning in superior and inferior ppRNFL thickness. The different tools used make it challenging to directly compare the measurements, but two studies, performed by Plummer et al. 2 and Besada et al. 5 using HRT, showed identical thinning, with a mean ppRNFL of 230 microns in PLHIV with minimum CD4 counts of 100 cells/mm³ while on ART when compared to controls. However, this empirical evidence of the integrity of the RNFL is limited, even while on ART, because it occurs at low immunocompetence. Accordingly, these studies do not focus on higher immunocompetence (higher CD4 counts), which may be more applicable at present, as PLHIV are living longer and in a healthier state as a result of adhering to ART, which elevates the CD4 count and suppresses the viral load.
More recent research has indicated a different threat to the structural integrity of the eye in PLHIV by drawing attention, for the first time, to a thickening at the macula in contrast to ppRNFL thinning in PLHIV. Arcinue et al., 6 in a comparative study of HIV-positive and HIV-negative persons, noted a reduction in cone density of 20%-30% at the macula in six subjects with CD4 counts of less than 100 cells/mm³ while on ART. Moreover, an increase in thickness of 24.4 microns in the total macula, of 8.6 microns in the inner retina and of 9.7 microns in the ganglion cell layer was also found. The suggested cause was inner retinal oedema secondary to the retinovascular disease in HIV. The importance of the macula to central vision is fundamental, and the suggestion of macula oedema does imply an impact on central vision; however, the extent of this effect is not discussed.
There is debate about the underlying causes of these structural changes to the eye amongst PLHIV. It has been suggested that HIV disrupts the microvascular circulation in the eye which otherwise maintains RNFL integrity. 5 There is a propensity with HIV infection for damage towards the central anatomical structures of the retina, specifically the posterior pole (optic nerve and macula) and major vascular arcades. 4 This is supported by the studies shown in Table 1 that evaluate ppRNFL changes and macula thickness. A competing explanation for RNFL changes in PLHIV is the stimulation of cytokine release syndrome, which leads to the death of retinal ganglion cells. 2 Ongoing replication of HIV itself causes apoptotic changes in specific neural tissues by stimulating cytokine release (CR) by the immune system.
There is evidence that provides tangential support for the CR hypothesis; autopsies of HIV patients revealed axonal loss in the optic nerve. 7 Secondary axonal loss results from damage to retinal ganglion cells due to the release of neurotoxins. Loss of myelinated optic nerve fibres is perhaps secondary to inner retinal damage, directly or indirectly resulting from HIV. The axonal degeneration may be a part of a wider neuroretinal degeneration in HIV, which postulates this form of pathogenesis of HIV as a neuroretinal disorder and not an isolated retinal disorder.
Proposed pathogenesis suggests that PLHIV may have accelerated ageing in comparison to HIV-negative persons.8,9 The hypothesis is that the final common pathway of accelerated ageing (immunosenescence) occurs as a result of mitochondrial toxicity, para-inflammation (a low-grade inflammatory response to tissue stress or dysfunction associated with ageing), microvasculopathy and rheological abnormalities (caused by HIV, ART, genetic factors and associated risk factors such as injection drug use, co-infection and race).10 The use of ART in these studies also requires consideration. Colas et al.11 showed an increase in RNFL thickness in PLHIV after 1 year of ART, while Kalyani et al.12 reported that thinning of the inferior and nasal RNFL was associated with the length of time an individual had been on ART, notably a minimum of 180 months, therefore suggesting that a longer duration on ART may cause changes. All studies reviewed of PLHIV on ART showed RNFL changes, but these two studies highlighted the impact of ART on the RNFL, factoring in its duration, which, according to these studies, may be inversely related to RNFL thickness. Going forward, in studies evaluating PLHIV and the retina, it is necessary to take ART into account.
At present it can be stated with some confidence that changes to the RNFL are evidence that HIV infection itself forms part of the pathogenesis of neuroretinal dysfunction in PLHIV.3,12,13 The RNFL thins in PLHIV without retinitis and, as we cannot stop ART, we should not ignore the threat of RNFL loss over time in PLHIV in light of the extended lifespan afforded by ART.
Visual function in people living with HIV
This review will now describe results from studies that have investigated changes in visual functionality in PLHIV. This covers visual electrophysiology and aspects of visual function that communicate physical changes occurring in the vision of PLHIV, referred to as psychophysical visual changes. The RNFL-focused research described thus far is the fundamental research to date providing evidence of a structural change in the eye for understanding vision changes amongst PLHIV, particularly vision loss that is present when there is no active retinitis. The evidence presented below shows, on electrophysiology, that there are changes in visual function in PLHIV on ART without retinitis. Table 2 shows the aspects of visual function affected, including visual evoked potential (VEP) and multifocal electroretinography (mERG), as well as psychophysical visual changes, namely CS, perimetry and colour vision.
Visual electrophysiology in people living with HIV
Electrophysiological techniques such as VEP and electroretinograms convey the integrity of the transmissions of the signals from the eye to the brain at different levels of transmission. Any changes in these electrical transmissions independently and objectively affirm clinical signs of damage in the retina and visual pathway, which may aid in a timeous diagnosis.
The pattern visual evoked potential (PVEP) is designed to establish the functional integrity of the entire visual pathway, while the pattern electroretinogram (PERG) assesses the functional integrity of the inner retina. These are observed at a sensitivity beyond funduscopic examination, which refers to the examination of the physical integrity of the inner eye, namely the retina and optic nerve. Axonal pathology in the optic nerve can be recognised by VEP independent of clinical signs and detectable morphology. 14 The full field monocular PVEP P100 peak shows activation of the primary visual cortex (V1) and illustrates integrity of the pre-chiasmatic pathway. 14 Both VEP and PERG have shown defects in activity of the ganglion cell layer in PLHIV without retinitis. 15 This suggests that beyond psychophysical visual tests, the integrity of the visual function in PLHIV on ART without retinitis is affected.
Iragui et al.16 found that in PLHIV with a CD4 count of less than 200 cells/mm³ there was reduced P1 transient PERG amplitude, implying ganglion cell dysfunction, as well as latency delay of the transient PVEP (P100), implying that this extends to the visual pathway. Furthermore, PLHIV with a CD4 count of greater than or equal to 200 cells/mm³ showed a reduced amplitude of the transient PERG P1 potential. This establishes the dysfunctional changes at low CD4 counts in the visual pathway in PLHIV on ART without retinitis, as shown in Table 2.
Multifocal ERG is valuable in assessing visual function when the retina appears normal, as in this case of no retinitis in PLHIV. It can further distinguish retinal from optic nerve changes and most specifically focal retinal damage. In mERG studies, the first-order kernel represents outer retinal function whilst the second-order kernel is representative of inner retinal function.
Studies using mERG found reduced amplitudes and delayed latencies; whilst this may not translate into a direct impact on quality of life, it does show impaired function of the inner retina and the visual pathway at low CD4 counts in PLHIV.
Falkenstein et al.,15 using only mERG, suggested that the outer retina, which contains the photoreceptors, is quiescent in PLHIV on ART without retinopathy. Falkenstein et al.17 went on to show that HIV's effects on amplitudes are subtle between PLHIV with nadir CD4 counts below and above 100 cells/mm³ for both kernels, but found widespread delays in implicit times (latent periods) in both kernels irrespective of whether the nadir CD4 count was below or above 100 cells/mm³. Goldbaum et al.18 went further to show that, apart from second-order kernels in mERG being a sensitive detector of inner retinal abnormalities, they separated HIV-positive eyes from normal eyes by showing delayed implicit times in the b-latency of the second-order kernels of mERG, thus showing impaired transmission of electrical stimuli. All this highlights that PLHIV on ART without retinopathy had retinal-processing abnormalities both after prolonged immunosuppression and in PLHIV who have never been immunologically compromised.
Psychophysical visual changes (perimetry, colour vision and contrast sensitivity) in people living with HIV
Exploratory studies in the last two decades have sought to find the underlying causes of visual changes amongst PLHIV. These studies first reported associations between HIV infection and the erosion of visual functionalities amongst PLHIV.
Psychophysical assessments refer to the analysis of visual function changes such as visual fields (perimetry), contrast sensitivity (CS) and colour vision.
Perimetry (visual field) in people living with HIV:
As early as 1997, Mueller et al.19 found abnormal changes using achromatic automated perimetry in PLHIV. These changes were at a mean CD4 count of 173 cells/mm³ in PLHIV with 20/20 visual acuity and no retinitis. Specifically, it was shown that 21.6% of PLHIV had a mean deviation of less than -2.10 dB and that 27.5% had a glaucoma hemifield test (GHT) result 'outside normal limits'. These findings suggest global and local changes in visual fields, where global or general loss of the visual field is represented by mean deviation scores, and sectoral loss of the visual field by the GHT. In 2008, Freeman et al.13 went on to find that 39% of PLHIV had abnormal mean deviations of less than -2.63 dB and 33% had abnormal pattern standard deviation (PSD) of greater than 2.57 dB, in a sample with a median current CD4 count of 180 cells/mm³. Despite the mean versus median CD4 counts in these studies, both agree that in immunocompromised PLHIV (with a CD4 count of less than 200 cells/mm³) there are losses, albeit subtle, demonstrated by perimetry.
In 2015, Jabs et al.20 defined HIV as a neuroretinal disorder (HIVNRD) in PLHIV if mean deviations on the 24-2 programme (Humphrey's perimeter) were worse than -3 dB and CS was worse than log 1.5. Using this criterion, in a sample with a median CD4 count of 178 cells/mm³, 16% were classified with HIVNRD. The median mean deviation for those with HIVNRD was -4.9 dB. These perimetric changes support the argument that HIV is an immune disorder, resulting in the addition of a specific neuroretinal disorder in the absence of retinal or central nervous system (CNS) infections.
The evidence summarised in Table 2 suggests that perimetric changes in PLHIV without retinopathy occur at a CD4 count of less than 200 cells/mm³. This manifests as reduced mean deviations of at least 2 dB in the central 24 degrees of the visual field, with local field changes manifesting in the PSD and GHT. This suggests that visual fields are both generally depressed and affected by local defects in specific areas of vulnerability, as sub-served by the GHT. The magnitude of perimetric changes may appear subtle but over time this may threaten vision loss. Studies of PLHIV with CD4 counts above 500 cells/mm³ are not included in this review.
Contrast sensitivity in people living with HIV:
The value of assessing CS over plain visual acuity is that CS measures visual performance at varying contrasts and spatial frequencies, while visual acuity is limited to high spatial frequencies (superior vision) at a single contrast level. This makes CS assessment a more sensitive evaluation of vision. Contrast sensitivity in PLHIV has been investigated primarily using two techniques: the Vistech CS Test, which was mostly used in the 1990s, and, more recently, the Pelli-Robson Chart.
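To make the log units concrete: a Pelli-Robson score is the base-10 logarithm of contrast sensitivity, so the corresponding threshold contrast is 10 raised to the negative score. A minimal sketch of this standard conversion, applied to the cut-off and median values cited in this review:

```python
def threshold_contrast(log_cs: float) -> float:
    """Convert a Pelli-Robson log contrast sensitivity score to the
    corresponding threshold contrast, expressed as a fraction."""
    return 10 ** (-log_cs)

# The log 1.5 cut-off used to flag abnormal CS corresponds to a
# threshold contrast of roughly 3.2%; the median of log 1.65 reported
# in these studies corresponds to roughly 2.2%.
print(round(threshold_contrast(1.5) * 100, 1))   # ~3.2 (% contrast)
print(round(threshold_contrast(1.65) * 100, 1))  # ~2.2 (% contrast)
```

Lower scores (worse CS) thus mean a markedly coarser contrast threshold: a drop from log 1.65 to log 1.5 raises the threshold from about 2.2% to about 3.2% contrast.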
The Mueller et al. 19 study, described earlier, recorded a general loss of CS amongst individuals and showed that amongst PLHIV there was a correlation between CS and perimetry and between CS and colour vision.
The 2000s marked the start of the common use of the Pelli-Robson CS Test. Studies13,21 on HIV and CS using the Pelli-Robson Chart found a median CS of log 1.65 and concluded that in PLHIV the CS was abnormal for scores of less than log 1.5, benchmarking and providing empirical evidence for CS in PLHIV for subsequent studies. Using this criterion, Shah et al.21 found 7% of PLHIV with abnormal CS and, furthermore, reported that abnormal colour vision and CS can occur independently in these patients. Freeman et al.13 found 12% of PLHIV with CD4 counts of less than 200 cells/mm³ to have abnormal CS. In agreement with Shah et al.,21 that study also found a correlation between diminishing CS and decreasing CD4 count (median 180 cells/mm³).
In 2015, Jabs et al.20 reported findings from the Longitudinal Study of the Ocular Complications of AIDS (LSOCA) between 1998 and 2013, and reported on CS abnormalities amongst PLHIV. These researchers asserted that the abnormalities should be defined as HIVNRD when visual field assessments showed mean deviations that were worse than -3 dB and CS tests produced scores that were worse than log 1.5. Using these terms of reference, 16% of the study sample could be classified as exhibiting HIVNRD and, notably, the visual field functionality of these individuals was more reduced. Importantly, the study reported that the risk of HIVNRD doubled when PLHIV had CD4 counts of less than 100 cells/mm³.
In 2012, Holland et al.22 concluded that abnormal CS was associated with increased mortality and is an independent risk factor for death amongst PLHIV, in view of its association with microvascular pathology, similar to diabetes mellitus. Eight percent of the sample was assessed to have abnormal CS, using a median CS of log 1.65. Table 2 provides quantitative evidence that CS is affected in PLHIV on ART without retinopathy and with CD4 counts of less than 200 cells/mm³, and that this can be evaluated when the Pelli-Robson score is less than log 1.5. There is a dearth of evidence amongst PLHIV with high immunocompetence in an age where ART is lengthening the lifespan of PLHIV, which raises the question of whether these observed changes will increase over time.
Colour vision in people living with HIV:
The evidence for the prevalence of colour vision loss amongst PLHIV is limited and broad in terms of its findings. This may be owing to differences in the types of tool used to test colour vision and the varied ranges of CD4 counts and immunocompetence. The tools used include the anomaloscope, which is a colour-matching test that utilises illumination of three spectral sensitivities, viz. red, green and yellow, based on the long-, medium- and short-wavelength photoreceptor cones in the eye to detect normal matches. The other test is the Farnsworth-Munsell (FM) Test, which is a pigment colour arrangement test that utilises caps that need to be arranged sequentially according to the visible spectrum of light. The studies using the Farnsworth-Munsell 100 Hue (FM100) version of the test analysed the root mean square of the total error score to gauge severity, whereby an increase in this magnitude is regarded as poor colour vision. Both techniques together can assess various colour defects and colour acuity.
Sommerhalder et al.,23 using the anomaloscope, found no significant colour vision anomalies in PLHIV with no retinitis at a median CD4 count of 70 cells/mm³. In contrast, Mueller et al.,19 using the FM100 Test, reported that 29.4% of individuals had abnormal colour vision, with a root mean square of the total error score of 8.31 (normal = 5.16) at a median CD4 count of 173 cells/mm³. However, Shah et al.,21 in the mid-2000s, using the same test on a similar sample with a median CD4 count of 330 cells/mm³, reported that 9.9% of the sample had abnormal colour vision, with a root mean square of the total error score of 10.20. These FM100 Test studies indirectly show that as the CD4 count increases, the prevalence of colour vision loss is reduced, but without an axis of defect, therefore suggesting no predilection to a particular cone dysfunction but a general depression in retinal function.24 The changes in colour vision suggest involvement of the photoreceptors, which extends retinal involvement to the outer retina and beyond the RNFL. Kozak et al.24 refuted the categorical involvement of the inner retina alone, as shown with the RNFL, by stating that photoreceptor dysfunction also implied outer retinal involvement in PLHIV. Table 2 quantitatively allows for the conclusion that colour vision is affected in PLHIV without retinitis, manifesting as root mean square total error scores above 8 on the FM100.
In summation, it is clear that among PLHIV on ART without retinitis, psychophysical visual function is affected, but the findings are limited to CD4 counts of between 200 cells/mm³ and 500 cells/mm³. The identified gap in the literature is whether or not these changes extend to a higher level of immunocompetence, where there is viral suppression and CD4 counts of greater than 500 cells/mm³. The question still to be answered is whether the maintenance of ART and the elevation of CD4 count actually guard against further deterioration, or whether the compounded presence of the virus over time affects the tissues of the eye and threatens vision loss in light of the extended lifespan expectation of PLHIV.
Retinal nerve fibre layer and visual function in people living with HIV
There is an association between RNFL loss and visual function loss in PLHIV on ART without retinitis. Table 3 shows studies that relate the structural (RNFL) and functional (vision) relationship in PLHIV on ART without retinitis. The glaucoma model aptly shows that in early disease states there are small retinal ganglion cell (RGC) changes and larger RNFL changes, corresponding to small changes in the mean deviation on automated perimetry, while in advanced states RGC changes correspond to larger changes in mean deviation and the further RNFL changes are smaller. This relationship supports the premise that diseases affecting the RNFL directly affect vision. HIV, albeit through a different mechanism, as shown in Tables 1 and 2, demonstrates the structural retinal changes and visual functional changes that occur in PLHIV. The magnitude of these changes may not translate into quality-of-life changes; however, the studies in Table 3 show concurrent RNFL thinning in the presence of visual loss on perimetry, CS and colour vision in PLHIV on ART without retinitis.
Retinal nerve fibre layer and perimetry in people living with HIV
Despite limited studies, the attempt to relate visual field changes with RNFL changes is evident. Studies in glaucoma have shown that there is an association between RNFL, the ganglion cell layer and perimetric changes. Perimetric assessment using either Humphrey's frequency doubling technique (FDT) or short wavelength automated perimetry (SWAP) is influenced by RNFL and changes in ganglion cell thickness.
Arantes et al.25 found RNFL thinning nasally, with FDT perimetric loss, in PLHIV with CD4 counts of less than 100 cells/mm³. The low CD4 count group showed a reduction in average ppRNFL thickness, with the exception of the temporal zone, as well as thinning in the temporal and inferior outer macula zones. All this occurred with concurrently reduced values of the mean deviation (MD) and the PSD. It was further shown that eyes with affected PSD and GHT outside normal limits had a thinner average ppRNFL in PLHIV on ART without retinitis. Arantes et al.26 went on in 2012 to specifically show independent thinning of the ppRNFL in the nasal, inferior and temporal zones, respectively. Independent defects in the nasal visual field zones also occurred. An association between the PSD and the average ppRNFL was shown, and the strongest correlations occurred between the superior ppRNFL and the inferior visual field, and between the nasal ppRNFL and the temporal visual field, in PLHIV on ART without retinitis.
A noteworthy observation is the use of the FDT perimeter in these studies, which has been shown to target ganglion cells of the magnocellular pathway and can predict field loss before standard automated perimetry. This is significant in light of the fact that PLHIV on ART without retinitis are the focus of this review. Table 3 provides quantitative evidence for the conclusion that MD, PSD and GHT are all affected concurrently with thinning of the ppRNFL in PLHIV on ART without retinitis at reduced CD4 counts of 100 cells/mm³.
Retinal nerve fibre layer and contrast sensitivity/visual acuity/colour vision in people living with HIV
Kalyani et al.12 noted temporal ppRNFL loss and reduced CS in PLHIV on ART for more than 180 months, showing a positive correlation. Pathai et al.27 specifically showed an association of lower CS at log 1.70 with a thinner temporal ppRNFL. Most recently, Paul et al.28 also found a positive correlation of reduced CS at log 1.33 with the temporal ppRNFL.
Gender bias may not be an influence, as the sample profile used by Kalyani et al.12 was predominantly male and that of Pathai et al.27 was predominantly female, and both results showed favourable agreement. Furthermore, the study by Pathai et al.27 was geographically located in South Africa, while Paul et al.28 studied a sample in India and also found similar results, suggesting that race and geography may not be influencers. Table 3 provides evidence and allows for the conclusion that the temporal ppRNFL is associated with reduced CS in PLHIV on ART without retinitis. However, the agreement of the magnitudes of the reduced CS and the temporal ppRNFL still requires more investigation.
The review of the assessment of visual acuity and the RNFL in PLHIV on ART without retinitis is limited to Barteselli et al.,29 who showed, for patients with CD4 counts of less than 100 cells/mm³, a loss of inferior-temporal ppRNFL under varying contrasts and illumination. The study suggests that HIV status independently predicts visual performance under varying contrasts of 100%, 64% and 43%. Accordingly, visual acuity using the Early Treatment Diabetic Retinopathy Study (ETDRS) charts (logMAR acuity) at varying contrasts behaves in a similar way to CS, and both showed temporal RNFL associations in PLHIV on ART without retinitis.
The review of the assessment of colour vision and the ppRNFL in PLHIV on ART without retinitis is limited to Kalyani et al.12 This study showed an inverse correlation of the temporal ppRNFL with the colour confusion index (CCI) for the Lanthony D15. Higher CCI indices suggest poorer colour acuity, which occurred with the thinning of the temporal RNFL in PLHIV on ART without retinitis. It is important to remember that colour vision involvement extends beyond the RNFL because the photoreceptors lie in the outer retina.
Neuro-ophthalmological associations in HIV in an era of antiretroviral therapy
Table 4 groups studies that show the neuro-ophthalmological relationship between the retina and the brain in HIV in the era of ART. Although some observations are not statistically significant, there are patterns of relationships. They may not be clinically significant but do show changes in the absence of prospective studies.
The ophthalmic arm of the AGEhIV study (neuroretinal changes in HIV-positive adults) has most recently reported that adults with prolonged suppression of viraemia whilst on ART do not appear to be at risk of developing HIV neuroretinal disorders, and suggested that there is an undisrupted physiological retina-brain correlation. However, the study could not conclude that retinal OCT may be a useful marker to specifically indicate HIV-related neuroretinal degeneration.
Demirkaya et al.30 related the retina to grey and white matter changes in HIV in the era of ART. Total grey matter volume was positively correlated with foveal GCL thickness; with pericentral total retinal, RNFL and GCL thickness; and with peripheral total retinal, RNFL and inner plexiform layer (IPL) thickness. Cortical white matter was positively correlated with foveal and pericentral GCL, and peripheral RNFL thickness. Further to this, on magnetic resonance diffusion imaging they showed significant positive correlations between pericentral retinal thickness, in particular of the inner layers, and fractional anisotropy (FA), and negative associations between the same layers and mean diffusivity (MD). All these were observed in virally suppressed PLHIV on ART and are findings of the AGEhIV study, which demonstrates that the retina and brain are disrupted in the age of ART.
However, the AGEhIV study assessed virally suppressed men older than 45 years and concluded that retinal thickness and cerebral parameters were similar to those of normal people. Haddow et al. 31 also attempted to biomark retinal vessels for cerebrovascular ageing in PLHIV; however, no differences in retinal vascular indices in HIV-positive and HIV-negative men over 50 years were noted.
Blokhuis et al.32 studied children with HIV on ART who were virally suppressed and found that reduced neuroretinal thickness was associated with microstructural white matter injury. Thereafter, Blokhuis et al.33 found blood plasma levels of IL-6, MCP-1 and sICAM-1 to be inversely correlated with foveal GCL thickness. Neuronal biomarkers such as cerebrospinal fluid (CSF) tTau levels inversely correlated with pericentral total retinal thickness and with ONL/IS thickness in both the foveal and pericentral regions. These findings highlight biomarkers associated with inflammatory processes and neurodegenerative disease, and may help to explain observations in virally suppressed PLHIV. All these are findings of the NOVICE study (neuroretinal changes in HIV-positive children) and also demonstrate neuro-ophthalmological compromise in the present age of ART.
The NOVICE study looked at HIV-positive children between 8 and 18 years of age and reported that a decrease in foveal thickness was associated with a higher peak viral load. No significant changes occurred in colour vision, central visual field or CS function, but total foveal thickness was reduced, caused by a thinning of the outer nuclear layer and inner segment. They attributed this to perinatal infection disturbing foveal maturation without disturbing visual function. Furthermore, retinal thickness was associated with microstructural white matter injury in HIV-positive children, shown by lower FA, higher MD and higher radial diffusion on diffusion-weighted imaging. However, work by Crowell et al.34 did establish that virologic suppression during infancy or early childhood was associated with improved neurocognitive outcomes in school-aged children.
Much earlier, in 2012, Jesus-Acosta et al.35 associated the RNFL with brain dysfunction and neurodegeneration in HIV (HAND) and highlighted OCT as a tool that could represent a biomarker for HIV-associated neurocognitive disorder. Together with the AGEhIV and NOVICE studies, these make a case that in the era of ART there still remains a threat from the virus or, more controversially, the treatment, as PLHIV live longer.
The discrepancy in the NOVICE and AGEhIV findings was attributed to the continuous development of children, and that unsuppressed HIV in their early years may affect maturation. The findings of these studies may not be clinically relevant; however, the trace of the virus from the retina to the brain did show changes. As a result of the extended lifespans of PLHIV on ART, if followed prospectively, these findings may become clinically relevant. This may be the beginning of the recognition of the threat to vision and the brain in virally suppressed PLHIV.
Antiretroviral therapy and the retina
There is evidence that ART is associated with retinal toxicity. Case reports identify ritonavir as a cause of retinal toxicity presenting as a bull's eye maculopathy and, in some cases, mimicking retinitis pigmentosa.36,37 Faure et al.37 found that the toxicity continues despite cessation of ritonavir. Bull's eye retinopathy has also been identified in PLHIV with elevated CD4 counts and low viral loads using efavirenz, lamivudine and zidovudine.38 Systemic effects of ART may also lead to retinal changes, as a consequence of ART-related atherosclerotic vascular changes that elevate lipid levels and in some cases cause occlusion of the central retinal artery.39 Antiretroviral therapy has also been found to cause metabolic toxicity to beta cell function, affecting insulin function and glucose levels,40 which may then affect the vascular integrity of the retina. The vascular impact of ART on the retina should be factored into future studies of PLHIV on ART who are virally suppressed, especially in light of the extended lifespan these persons are now being afforded.
Conclusion
The decrease in visual function and the ppRNFL loss flagged in this review, in PLHIV on ART without retinitis, allow us to recognise the threat of vision loss. The affected visual functions include perimetry, CS, colour vision and visual electrophysiology in PLHIV on ART without retinitis. The structural indicator affected is the mean or overall ppRNFL thickness, with the superior and inferior ppRNFL zones both showing thinning. Over time, this can be interpreted as atrophy that may affect function. These changes may be attributed to the disruption of microvascular circulation, cytokine release syndrome (apoptosis) and accelerated ageing. The RNFL involvement is part of a wider neuroretinal disorder that involves axonal loss. Studies on neuro-ophthalmological changes in an era of ART did find statistically significant changes in retinal layers and cerebral volume, which did not manifest clinically but established neuro-ophthalmic disruption anatomically. Further to this, one cannot discount the role of ART itself on the retina, which may be a contributing factor and must be accounted for in future studies. All these changes flagged in the literature should be recognised as a threat to vision and the brain when factoring in the longer lives that PLHIV are expected to be afforded with ART.
NOTE ON MISMODELLING OF POLICYHOLDER’S AGE IN CLAIM FREQUENCY MODEL: A MATTER OF GENDER IN VEHICLE INSURANCE
Using the motor hull insurance data of a Czech insurer, the paper deals with how mismodelling of the policyholder's age can induce misleading conclusions about gender differences in claim frequency within vehicle insurance. This study is based on individual data with unit policy duration and puts the emphasis on correct modelling of the functional form of age, to show that mismodelling as well as categorization yield misleading conclusions; finally, we demonstrate how the inferences depend on the categorization itself. Thus, we show that the linear form as well as categorization increases the type I error to detect the obvious interaction between gender and age. By involving fractional polynomials, the results partially support the judgement of the European Court of Justice to ban using gender as a rating factor, in particular for young policyholders. We conclude that, if other relevant data are not available, gender as well as its interaction with age should be considered in the claim frequency model, although such a model cannot be used for setting premiums.
Introduction
The paper deals with how mismodelling of the policyholder's age can induce misleading conclusions about gender differences in claim frequency within vehicle insurance. In fact, these differences have been observed across the policyholder's age, and several studies have dealt with this phenomenon. However, their conclusions were mostly drawn from average or expected frequencies and involved grouped data. By contrast, this study is based on individual data with unit policy duration and puts the emphasis on correct modelling of the functional form of age, to show how mismodelling suppresses the obvious interaction and how the inferences depend on the categorization of continuous variables. In addition, the study also verifies as well as validates the 'gender' effect across the policyholder's age at a given level of confidence.
The preference for a dataset in which each policy lasts 1 year is motivated by the potential bias when grouped data are collected over different time exposures. Suppose two policies, each with one observed claim. One terminates at the end of the year, while the other ends in the middle of the year. Although the claim count is the same, the annual claim frequency of the latter policy is twice that of the former because the policy was terminated early.
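The exposure bias described above can be made concrete numerically; the two policies below are the hypothetical example from the text, not real data:

```python
def annual_claim_frequency(claims: int, exposure_years: float) -> float:
    """Annualize an observed claim count by the policy's time exposure."""
    return claims / exposure_years

# One claim on a full-year policy versus one claim on a policy
# terminated after half a year: same claim count, doubled frequency.
full_year = annual_claim_frequency(1, 1.0)   # 1.0 claims per year
half_year = annual_claim_frequency(1, 0.5)   # 2.0 claims per year
print(full_year, half_year)
```

With unit policy duration every exposure equals 1, so the annualized frequency coincides with the raw count and this distortion cannot arise.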
Both the policyholder's age and gender are relevant risk factors from the statistical point of view. However, gender might be a proxy for other driver characteristics. For instance, it is accepted that male drivers pose a higher risk than female drivers because of increased aggression, lower tolerance of limits and other psychological reasons; see for instance (Cestac et al., 2011) or, recently, (Harbeck and Glendon, 2018). Further, different claim frequencies are also observed within various subgroups of policyholders' age, as are differences between males and females across ages. This represents effect modification, which requires involving an interaction between gender and age in the claim frequency model.
From this perspective, the prohibition on using gender for setting premiums appears to run counter to empirical observations and statistical results. However, drawing conclusions by comparing average or expected claim frequencies is insufficient and must be supported and confirmed by proper statistical verification. Our results partially support the judgement of the European Court of Justice that banned the use of gender as a rating factor as of 21 December 2012.
To model the claim frequency, we adopted the framework of generalized linear models (GLMs), where the relationship between the frequency and individual characteristics is expressed via a link function transforming the claim frequency into a linear combination of given individual risk factors. However, the linearity assumption may yield incorrect assessments of the effect of the policyholder's age on the claim frequency. In addition, as we show in this paper, the obvious interaction with gender is then rejected at the 95% confidence level. Therefore, we used fractional polynomials (FPs) to specify the functional form correctly. On one hand, these functions are more sensitive to influential observations than the linear form, because such observations drive not only the values of the estimated parameters but also the degree and powers of the FP. On the other hand, this is a very flexible approach that improves the fit to the data and helps to avoid mismodelling when the true function is non-linear.
Further, to avoid estimating FPs while still respecting potential non-linearity, continuous variables are sometimes categorized into several groups, which allows a linear model to be used. The non-linearity is then handled by the non-proportional effects of the successive categories.
However, categorization can be controversial. First, there is the problem of how to determine the number of cutpoints and where to place them. The best choice is to use recognized cutpoints, which are mostly unavailable. More naturally, the cutpoints are given by percentiles (e.g., quartiles), but this hampers comparison between studies, each of which is based on a different dataset and different percentiles. Second, categorization entails a loss of information, and examining the interaction between a categorical variable and a categorized continuous variable makes the model difficult to interpret due to the many interaction terms involved. In addition, as we show in this paper, the conclusions about the "gender" effect depend on the categorization itself, which increases the type II error of failing to detect the obvious interaction between gender and age.
Thus, using the motor hull insurance data of a Czech insurer, we subject the interaction between the policyholder's age and gender to further analysis and verify it statistically. We show that mismodelling the functional form of the policyholder's age, as well as its categorization, yields misleading conclusions, and we demonstrate how these inferences depend on the categorization itself. The remainder of this paper is organized as follows. Section 2 summarizes the literature in this field of study. A claim frequency model based on the negative binomial (NB) distribution is described in Section 3, together with fractional polynomials and effect modification. Section 4 shows how mismodelling and categorization affect the conclusions about the "gender" effect on claim frequency and presents the conclusions drawn from a correctly specified form of the policyholder's age. Section 5 concludes the study.
Literature review
Although the Poisson and negative binomial distributions had been known for a long time, only the development of GLMs by (Nelder and Wedderburn, 1972) put the emphasis on distributional properties and non-linear models that incorporate explanatory predictors.
The first application of GLMs was to model the claim frequency for marine insurance and the claim size for motor insurance in (McCullagh and Nelder, 1983). More applications of GLMs appeared mostly after 1990, when the insurance market was being deregulated in many countries and GLMs were used to undertake tariff analyses, for example (Andrade-Silva, 1989), (Brockman and Wright, 1992) or (Renshaw, 1994). GLMs are also used for premium optimization, for example (Zaks et al., 2006) or (Branda, 2014), and for the estimation of the solvency capital requirement, which appeared recently in (Valecký, 2017).
The first natural choice for modelling count data is the Poisson regression model, but it is mostly insufficient because of overdispersion. Therefore, various types of mixed Poisson models are applied. The negative binomial model was derived from the Poisson-gamma mixture distribution, now commonly denoted NB2 (Cameron and Trivedi, 1986). A comparison of the NB model with the Poisson model can be found in (David, 2015). A common alternative for overdispersed data is the quasi-Poisson model; see for instance (Ver Hoef and Boveng, 2007) for a comparison with the NB model. In addition, the negative binomial model has appeared in many extensions, for instance as a zero-inflated model (Kim et al., 2016) or as a generalized model (Greene, 2008). For a review of variations of the negative binomial model, we refer to (Hilbe, 2011).
To obtain a well-fitted model, it is crucial to identify the relevant factors, as emphasized by (Kafková and Křivánková, 2014), while (Valecký, 2016) summarized the modelling issues necessary for a good claim frequency model and highlighted the non-linear effects of specific risk factors.
Generalized additive models (GAMs) represent one of the methods to handle the non-linearity. However, we prefer fractional polynomials, used extensively by (Royston and Altman, 1994), because of their better interpretability and higher transportability.
The EU Gender Directive of December 13th, 2004 (Council Directive 2004/113/EC) provided for equal treatment between men and women in the access to and supply of goods and services. However, an exception in Article 5(2) allowed proportional differences in insurance premiums. Then the judgement of the EU's Court of Justice prohibited the use of gender as a rating factor as of 21 December 2012, although several studies pointed out a potential increase in the danger of adverse selection as well as moral hazard, e.g. (Oxera, 2010, 2011). For all that, gender is still involved as a relevant risk factor in models of claim frequency in vehicle insurance, e.g. (David, 2015), (Hsu et al., 2016) or (Summun et al., 2018), even though (Ayuso et al., 2016) and (Verbelen et al., 2018) showed that gender is a proxy for other driver characteristics, such as experience or driving habits.
Negative binomial model
Because the Poisson model does not accommodate overdispersion, we used the negative binomial model. Suppose a negative binomial distribution with probability mass function of the form

$$f(y_i;\mu_i,\alpha) = \frac{\Gamma(y_i + 1/\alpha)}{\Gamma(1/\alpha)\,\Gamma(y_i+1)} \left(\frac{1}{1+\alpha\mu_i}\right)^{1/\alpha} \left(\frac{\alpha\mu_i}{1+\alpha\mu_i}\right)^{y_i},$$

where $\alpha$ is the negative binomial heterogeneity or overdispersion parameter, $w_i$ is the exposure, $y_i$ is the observed claim frequency, and $\mu_i$ is the mean response. In terms of the exponential dispersion model, whose cumulant function is assumed to be twice continuously differentiable with an invertible first derivative (since it determines the mean and the variance function), this yields the non-canonical negative binomial model referred to as NB2, with

$$\mathrm{E}(y_i) = \mu_i, \qquad \mathrm{Var}(y_i) = \mu_i + \alpha\mu_i^2,$$

and with the log link $\ln \mu_i = \ln w_i + \eta_i$, where

$$\eta_i = \sum_{j} \beta_j x_{ij} \qquad (5)$$

represents the systematic component, $x_{ij}$ is the observed value of variable $x_j$ for each policy $i = 1,\dots,n$, and $\beta_j$ are the unknown parameters to be estimated.
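To make the NB2 parameterization concrete, the following sketch (our own, standard library only; the function name is an assumption) evaluates the NB2 probability mass for a given mean and overdispersion, with variance exceeding the mean whenever the overdispersion parameter is positive:

```python
import math

def nb2_pmf(y, mu, alpha):
    """NB2 probability mass with mean mu and variance mu + alpha * mu**2.
    Uses r = 1/alpha and log-gamma terms for numerical stability."""
    r = 1.0 / alpha
    log_p = (math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
             + r * math.log(r / (r + mu))
             + y * math.log(mu / (r + mu)))
    return math.exp(log_p)
```

Summing `y * nb2_pmf(y, mu, alpha)` over `y` recovers the mean, and the implied variance exceeds the mean for any `alpha > 0`, which is exactly the overdispersion the plain Poisson model cannot accommodate.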
To obtain the estimates of $\beta$, a maximum likelihood Newton-Raphson type algorithm is preferred to the IRLS method, because the observed and expected information matrices are not equivalent in the NB2 model, and for non-canonical models it is generally assumed that observed standard errors are less biased than expected standard errors.
Further, the log-likelihood function is obtained by substituting the mean $\mu_i$ into the probability mass function and summing its logarithm over all policies. The goodness-of-fit test is performed using the deviance statistic

$$D = 2\left[\ell(y;y) - \ell(\hat{\mu};y)\right],$$

where $\ell(y;y)$ is the log-likelihood of the saturated model and $\ell(\hat{\mu};y)$ is the log-likelihood of the fitted model. The deviance statistic is also used to perform the likelihood ratio (LR) test for model comparison; the statistic is calculated as

$$LR = D_{\text{restricted}} - D_{\text{full}} = 2\left[\ell_{\text{full}} - \ell_{\text{restricted}}\right],$$

which is asymptotically chi-squared distributed with degrees of freedom equal to the number of restricted parameters.
Fractional polynomials
The effect of a risk factor on the systematic component is not necessarily linear, and some transformation may be required. One technique used to handle the non-linearity involves fractional polynomials. Let the expression (5) be rewritten, for a single positive covariate $x$, as a fractional polynomial of degree $m$,

$$\eta = \beta_0 + \sum_{k=1}^{m} \beta_k x^{(p_k)},$$

where $x^{(p_k)} = x^{p_k}$ for $p_k \neq 0$ and $x^{(0)} = \ln x$. The powers could be any numbers, but estimating general powers requires non-linear optimization, which may cause convergence problems. Therefore, (Royston and Altman, 1994) restrict the powers to the set $S = \{-2, -1, -0.5, 0, 0.5, 1, 2, 3\}$, where 0 denotes the log of the variable. For repeated powers ($p_k = p_{k-1}$), the corresponding term is defined as $x^{(p_{k-1})}\ln x$, for $k = 1,\dots,m$, restricting the powers $p_k$ to those in $S$.
Thus, for a given degree of FP, all candidate models are estimated and the best model is the one with the highest value of the log-likelihood function. Note that, for one variable, the set $S$ generates 8 models for FP1, 36 models for FP2, and so on, yielding great flexibility. However, functions of degree $m > 2$ are rarely used. Next, the variables entering the model can influence each other as well as the degrees and powers of the FPs; therefore, a routine for FP selection in a multivariable framework was developed, called the MFP algorithm, with the following steps (see the original reference for the full version):
1. Fit the initial model with all variables entered linearly and order the variables by significance.
2. Set the cycle counter c = 1.
3. Set the variable index j = 1.
4. For variable j, select the best FP function, adjusting for the current functions of all other variables, and update the model.
5. Let j = j + 1. If j is smaller than or equal to the number of variables, process the next factor (step 4).
6. Let c = c + 1 and repeat the whole procedure until convergence, i.e., until the selected functions no longer change.
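The counts of candidate models quoted above (8 for FP1, 36 for FP2) follow directly from the restricted power set; a small sketch of our own (not the authors' code) enumerates them:

```python
import math
from itertools import combinations_with_replacement

# Royston-Altman restricted power set; 0 denotes log(x).
S = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]

def fp_transform(x, p):
    """Single fractional-polynomial term x^(p), with x^(0) = log(x)."""
    return math.log(x) if p == 0 else x ** p

fp1_models = [(p,) for p in S]                          # 8 candidate FP1 models
fp2_models = list(combinations_with_replacement(S, 2))  # 36 candidate FP2 models
# For a repeated power pair (p, p), the second term is x^p * log(x).
```

In practice each candidate power (pair) is fitted and the one with the highest log-likelihood is retained, as described in the text.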
Evaluation of effect modifiers
Some risk factors in the systematic component (5) may modify the effects of others. Including an interaction term between two factors $x_1$ and $x_2$, the systematic component becomes

$$\eta = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_{12} x_1 x_2,$$

so the conditional effect of $x_1$ is $\beta_1 + \beta_{12} x_2$. Clearly, this coefficient, and therefore the z-statistics and the significance themselves, depend on the level of $x_2$, which means that the interaction may be statistically significant only for some observations. Therefore, the LR test based on the deviance statistic is preferred to the z-test when comparing the model including the interaction with the model without it.
Assessment of "gender" effect across the policyholder's age
In this section, using the motor hull insurance data of a Czech insurer, we demonstrate how the conclusions may be manipulated by the choice of methodology and how important it is to model the functional form correctly. We first present a model with a linear systematic component and a model with categorized policyholder's age, in which we show how the inferences depend on the categorization itself. Finally, we evaluate the effect of gender across age properly with a model that involves FPs. We also verify statistically that age modifies the effect of gender and vice versa, and we additionally perform a partial internal validation to support the importance of the conclusions.
We used individual data encompassing the characteristics of policies during the years 2004-2010 (74,721 observations), and the following risk factors were considered: age of car (agecar); engine displacement in cm3 (volume) and engine power in kW (kw); policyholder's age (ageman); car value (value); number of citizens in a region (nocit); gender of policyholder (gender; 0 - male, 1 - female); district area (district; 14 regions in the Czech Republic); and type of fuel (fuel; 0 - petrol, 1 - diesel). Recall that each policy had unit duration.
Finally, note that engine displacement is also one of the key determinants of engine power. Therefore, volume and kw cannot be used together in the model because of their high mutual dependence, indicated by a correlation coefficient of 0.8348. Thus, we defined a new variable (kwvol) that combines volume and kw as the ratio of engine power in kW to engine displacement: kwvol = kw / volume × 1000.
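The derived variable is a simple power-to-displacement ratio; as a sketch (the function name is ours, not from the paper):

```python
def kwvol(kw, volume_cm3):
    """Engine power per litre of displacement: kW / cm3 * 1000."""
    return kw / volume_cm3 * 1000.0

# e.g. a 74 kW engine with 1600 cm3 displacement gives roughly 46.25 kW/litre
ratio = kwvol(74, 1600)
```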
Model with linear systematic component
First, we show how conclusions may be influenced when the policyholder's age is mismodelled. We estimated the model with a linear systematic component and thereafter extended the model by adding the interaction gender × ageman to assess the effect of gender across age.
where the indicator function I(·) equals 1 if the condition is true and 0 otherwise.
The coefficients -0.0109 and -0.0085 represent the conditional effects of ageman for males and females, respectively, indicating that the systematic component, and hence the claim frequency, decreases as age increases. In addition, the male slope is more negative than the female slope, indicating that the difference by gender increases with age.
Note that the positive coefficient 0.1104 on gender does not by itself indicate that a claim is generally more likely from females than from males, because this also depends on the age itself. Therefore, we calculated the "gender" effect, which represents the varying difference between the two gender categories across age: gender effect(ageman) = 0.1104 + (-0.0085 - (-0.0109)) × ageman = 0.1104 + 0.0024 × ageman. This confirms the increasing difference in claim frequency between the genders as age increases. The next figure shows the estimated conditional effects as well as the "gender" effect with a 95% confidence interval.
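Using the coefficients reported above for the linear-age model (0.1104 on gender, conditional age slopes of -0.0109 for males and -0.0085 for females), the "gender" effect can be evaluated at any age; a small sketch of our own (the function name is an assumption):

```python
def gender_effect(age, b_gender=0.1104, slope_male=-0.0109, slope_female=-0.0085):
    """Difference in the systematic component between females and males
    at a given age, under the linear-age model with a gender x age interaction."""
    return b_gender + (slope_female - slope_male) * age

# The effect grows with age because the female slope is less negative
# than the male slope, matching the increasing difference in the text.
```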
Source: Own based on STATA 12
Clearly, the function representing the conditional effect of age for males has a steeper negative slope, which yields an increasing difference in claim frequency relative to female drivers. This coincides with the increasing "gender" effect shown in the right panel of the figure.
Although female drivers are evaluated as riskier in general, the "gender" effect suggests that the difference by gender is smaller for young drivers than for middle-aged ones. However, considering the confidence interval of that effect, it does not differ significantly from the main effect of the model without the interaction, indicating that the interaction is not statistically significant. This was also confirmed by the likelihood ratio test, which yielded a chi-squared value, with one degree of freedom, of 1.03 and a corresponding p-value of 0.3105.
Thus, using the model with a linear systematic component, we would conclude that females report claims more often than males regardless of age and that the difference in claim frequency by gender does not vary significantly across age. In this case, the variable gender would be considered a confounder rather than an effect modifier. However, we show further that the effect of age is in fact mismodelled, suppressing this effect modification.
Model with categorized age
The categorized model replaces the linear age term with indicator variables for the age categories and their interactions with gender. Note that the "gender" effect in the first age category is determined by the estimated constant and the coefficient on gender. Thus, the systematic component, and hence the claim frequency, for males in the first age category is given by the constant -3.0703, and for females of the same category by -3.0703 + 0.1701, indicating that a claim is more likely from females than from males at this age. The other estimated coefficients indicate that a claim is less likely as the age category increases, regardless of gender (except for the last age group of females). However, the gender difference across age varies. Regarding the conditional effect of age for the two genders, note that all interaction coefficients are positive, implying that females are riskier for the insurer in each age category. In addition, each coefficient on a succeeding age category is higher than the coefficient on the preceding group, indicating an increasing difference by gender.
Unfortunately, although we tried to handle the non-linear effect of age, the estimated effect of gender is not statistically significant, and the conclusions coincide with those drawn from the model with the linear component. The next figure presents the "gender" effect for all age categories, including the 95% confidence interval.
The figure shows that using different categorization patterns yields different conclusions. In contrast to the right panel, the left panel shows clearly that the interaction is insignificant at the 95% confidence level, while the second categorization pattern indicates that the claim frequency is lower than the frequency given by the main effect with 95% confidence. This was also confirmed by the LR test, providing p-values of 0.1723 and 0.0172, corresponding to chi-squared values (with 2 degrees of freedom) of 1.86 and 5.67, respectively.
In addition, using these two categorization patterns, we found that there is no statistical difference by gender in claim frequency for the first age category: males and females are statistically equally risky for the insurer, whereas there was a significant difference by gender when we considered categorization by quartiles. Obviously, the conclusions should not depend on the categorization, which proves that categorization may yield misleading conclusions.
Model involving fractional polynomials
Finally, we involved FPs to handle the non-linearity and to avoid the categorization that incurs a loss of the information available in the dataset. We applied the MFP algorithm and obtained FP powers of (3, 3) for kwvol, (-1, 3) for ageman, (0.5) for value, (2) for agecar, and a linear term for nocit, yielding gender-specific FP terms for age. Next, we tested the model against the nested model without the interaction using the likelihood ratio test, which yielded a chi-squared value of 50.22 with two degrees of freedom. The corresponding p-value was less than 0.00001, indicating that the interaction gender × ageman is significant.
Because of the non-linear transformation of age, as well as centring and scaling, the interpretation of the conditional effects and of the "gender" effect is represented by the next figure. The left panel shows clearly that claims are less likely from young women than from young men in the 18-26 years interval, whereas they are more likely above 26 years. However, the "gender" effect plot indicates that the effect of gender changes from negative to positive and that it differs significantly from the main effect in the 18-32 years interval. More importantly, there is no statistical difference between males and females within the 18-30 years interval or above 75 years, while claims are more likely from women in the 31-74 years interval. Thus, although we observed a higher frequency for young men, there is no statistical difference from young women; a significant distinction is confirmed only in middle age, i.e., above 31 years.
Finally, although the interaction gender × ageman was statistically confirmed, a spurious interaction may be a concern because the effect modification might be data-driven. Because we did not have external data, we performed a partial internal validation by bootstrap resampling. We generated 100 bootstrap samples and estimated the "gender" effect for each. The next figure compares the original and bootstrap "gender" effect functions.
Figure 5 Mean and 95% confidence interval of the gender effect function from 100 bootstrap replications, and the gender effect function on the original data with its 95% confidence interval.
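The resampling step can be sketched as follows (a minimal version of our own; in the study each replicate would be refitted with the full MFP model, which is outside the scope of this sketch):

```python
import random

def bootstrap_samples(data, n_reps=100, seed=1):
    """Draw n_reps samples of the same size as `data`, with replacement.
    In the validation described in the text, the model would be refitted on
    each sample and the "gender" effect function re-estimated, yielding its
    bootstrap distribution and pointwise confidence bands."""
    rng = random.Random(seed)
    return [[rng.choice(data) for _ in data] for _ in range(n_reps)]
```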
Source: Own based on STATA 12
Clearly, even when the data were changed randomly with replacement, a significant "gender" effect was detected. The original and bootstrap "gender" effects, as well as both 95% confidence intervals, differ only for ageman above 60 years, indicating that the "gender" effect function is data-driven on this interval. However, the bootstrap results also confirmed no difference in claim frequency by gender in the 18-30 years interval, while a significant difference appeared above 30 years.
Conclusion
Using the motor hull insurance data of a Czech insurer, this paper demonstrated how the conclusions may be manipulated by the choice of methodology and how important it is to model the functional form correctly.
The study showed that mismodelling of the policyholder's age induces misleading conclusions about the gender differences in claim frequency. Assuming a linear form for age, gender was not identified as an effect modifier for age with 95% confidence. It was also confirmed that categorization incurs a loss of information and that the interaction between age and gender was not detected even though the non-linear effect of age was treated as a non-proportional effect of age groups. This implies that the linear form, as well as the categorization, increases the type II error of failing to detect the obvious interaction between gender and age. In addition, using different categorization patterns, we showed that the significance of the effect modification may depend on the categorization and yield contrary conclusions about the effect modification.
Involving fractional polynomials confirmed that gender is a significant effect modifier for age, in particular for young policyholders. In addition, the study also validated the "gender" effect across the policyholder's age with 95% confidence. From this perspective, the EU's ban on using gender for setting premiums appears reasonable at least for young drivers. On the other hand, significant differences in claim frequency by gender appeared and were validated for ages above 30.
However, this does not necessarily imply a gender disparity. As others have shown, gender represents a proxy for other characteristics, such as experience, driving habits, etc., and the question is how the insurer will deal with this problem. For instance, one insurance company in the Czech Republic started to set the premium for vehicle insurance according to the annual mileage.
Thus, we may conclude that gender should enter the claim frequency model at least as a proxy if other relevant data are not available; however, such a model cannot be used for setting premiums. It also implies that the interaction between gender and age should be considered in such a frequency model. In addition, the linear form of the policyholder's age must be carefully verified; otherwise, a proper technique to handle the non-linearity of age, such as fractional polynomials, should be involved. Finally, categorization should be avoided because its approximation of the non-linear effect is insufficient.
Effect of short-term heat acclimation on endurance time and skin blood flow in trained athletes
Background To examine whether short-term, ie, five daily sessions, vigorous dynamic cycling exercise and heat exposure could achieve heat acclimation in trained athletes and the effect of heat acclimation on cutaneous blood flow in the active and nonactive limb. Methods Fourteen male badminton and table tennis athletes (age = 19.6 ± 1.2 years) were randomized into a heat acclimation (EXP, n = 7) or non-heat acclimation (CON, n = 7) group. For 5 consecutive days, the EXP group was trained using an upright leg cycle ergometer in a hot environment (38.4°C ± 0.4°C), while the CON group trained in a thermoneutral environment (24.1°C ± 0.3°C). For both groups, the training intensity and duration increased from a work rate of 10% below ventilatory threshold (VT) and 25 minutes per session on day 1, to 10% above VT and 45 minutes per session on day 5. Subjects performed two incremental leg cycle exercise tests to exhaustion at baseline and post-training in both hot and thermoneutral conditions. Study outcome measurements include: maximum oxygen uptake (VO2max); exercise heart rate (HR); O2 pulse; exercise time to exhaustion (tmax); skin blood flow in the upper arm (SkBFa) and quadriceps (SkBFq); and mean skin temperature (Tsk). Results The significant heat-acclimated outcome measurements obtained during high-intensity leg cycling exercise in the high ambient environment are: (1) 56%–100% reduction in cutaneous blood flow to the active limbs during leg cycling exercise; (2) 28% drop in cutaneous blood flow in nonactive limbs at peak work rate; (3) 5%–10% reduction in heart rate (HR); (4) 10% increase in maximal O2 pulse; and (5) 6.6% increase in tmax.
Conclusion Heat acclimation can be achieved with five sessions of high-intensity cycling exercise in the heat in trained athletes, and redistribution of cutaneous blood flow in the skin and exercising muscle, and enhanced cardiovascular adaptations provide the heat-acclimated athletes with the capability to increase their endurance time in the hot environment.
Introduction
It has been shown that heat acclimation enhances human ability to perform physical work in a hot ambient environment. [1][2][3] Without heat acclimation, aerobic capacity and performance are impaired in high ambient temperatures (ie, earlier onset of fatigue and heat-related exhaustion). 2,4,5 One mechanism of heat acclimation is increased skin blood flow and sweat volume for dissipating heat to regulate body temperature during vigorous exercise in the heat. [6][7][8][9] It has been reported that when a healthy person attains the "complete" effects of heat acclimation, a series of physiological adaptations or adjustments may occur during exercise in a hot environment: maintained body temperature through increased convective heat loss via distribution of blood flow to the cutaneous tissue and through sustained sweating rates; 2,7 increased plasma volume, maintaining cardiac-filling pressure and thus restoring arterial pressure and cardiac output; 1,10 reduced cardiovascular stress, lowering heart rate and enhancing exercise capacity and endurance; 2,3,11,12 induced compensatory reflexes limiting the blood volume contained in the splanchnic vascular bed and enhancing maximal cutaneous volume; 2,10 reduced electrolyte loss (ie, sodium chloride) in the sweat; 11,13 and increased heat-shock protein synthesis in response to cellular stress. 14 All of these enhance exercise performance 11,12 and prevent the negative effects of heat-related illnesses, such as exertional heat stroke, heat exhaustion, and heat cramps. 15 There is a scarcity of studies investigating short-term (ie, ≤5 days of physical exercise in high ambient environments) heat acclimation in well-trained competitive athletes.
These athletes, when preparing to compete in hot environments, often have to rely on research information on heat acclimation obtained from moderately trained or untrained populations using long-term (ie, 12-14 days of physical exercise in high ambient environments) heat acclimation protocols. 3,4,12 For competitive athletes, disruption of quality training time close to competitions with long-term heat acclimation could interfere with their training schedule, resulting in a negative effect on subsequent sport performance. Furthermore, competitive athletes also have to deal with environmental concerns when traveling from a mild climate region to a hot climate region (eg, countries with tropical climates) for competition, with the possibility of physical detraining (and jet lag) due to the travel time and distance to the site of competition. It is critical to know the extent to which trained athletes may adapt to short-term heat acclimation to prevent negative thermoregulatory effects on physical performance. Thus, for competitive athletes, the use of a short-term heat acclimation protocol of five daily sessions would be more economical and efficient than long-term protocols. The precise mechanisms responsible for impaired performance in a high thermal environment in trained athletes are not yet fully understood.
During exercise in a high thermal environment, the cutaneous circulation undergoes periodic changes through increased local blood flow to meet the metabolic demands of muscular activity and to dissipate body heat. 6,7 The mechanism of peripheral vascular reflex responses during exercise in the heat maintains adequate cardiac output and blood pressure during periods of decreased cardiac-filling pressure, suggesting that during exercise in the heat, blood flow in the nonactive regions of the tissue is lowered. 10 The effect of exercise in the heat on the distribution of skin blood flow (SkBF) in the active and nonactive limbs of highly trained athletes has not been fully investigated. Thus, it is unclear whether skin blood flow to the active region of the working tissue after heat acclimation decreases, remains unchanged, or increases during exhaustive leg cycle exercise in high ambient temperature in trained athletes. It would be of great interest for trained athletes to examine the extent of heat acclimation that could occur with respect to cardiovascular adaptations and body heat dissipation via SkBF in the active and nonactive limbs. Therefore, the purpose of this study was to determine the effect of five daily sessions of leg cycling exercise with an incremental work rate in high ambient temperature on physiological responses, as well as SkBF distribution in the active and inactive limb, in trained athletes.
Methods and materials Subjects
Fourteen male athletes were recruited from active members of the national all-star table tennis and the national all-star badminton teams from Taiwan. The athletes recruited for this study were six badminton and eight table tennis players. (See exclusion criteria below.) The athletes were full-time college students at the National Taiwan Sport University in Taoyuan, Taiwan, not heat acclimated, and had been residing at the same location (altitude of 250 meters above sea level) for 3-4 years. During the study period, the months of November through March, the average monthly ambient temperature and relative humidity were: 19.9°C and 68% (November); 16.5°C and 67% (December); 15.3°C and 79% (January); 14.4°C and 77% (February); and 17.5°C and 82% (March), respectively. Summer month ambient temperature and relative humidity averaged 28°C-30°C and 80%-88%, respectively. The age, height, body mass, body surface area, and maximum oxygen uptake (VO2max) of the subjects were (mean ± standard deviation [SD]): 19.6 ± 1.2 years; 171.6 ± 4.7 cm; 64.7 ± 5.0 kg; 1.78 ± 0.08 m2; and 53.0 ± 5.3 mL·kg−1·min−1 (with a cycle ergometer test), respectively. Before randomization, the participants were fully informed of the potential risks and anticipated discomforts associated with the experimental procedure and were allowed two familiarization sessions: one for the cycle ergometer test protocol and one for the training protocol. All participants gave their written, informed consent to participate as study subjects. This research protocol was conducted according to the principles of the Declaration of Helsinki on human research and approved by the National Taiwan Sport University Ethics Committee.
Study design
All subjects underwent pre-participation screening using a Health and Exercise History Questionnaire and a blood chemistry profile test. 15 Subjects were excluded from the study if they: (1) had a cardiovascular, liver, kidney, metabolic, or pulmonary disease; (2) were recovering from musculoskeletal injuries or respiratory tract infections that could affect the outcomes of the maximal leg cycle exercise test (GXT) and aerobic training in the heat; (3) had been heat acclimatized during physical training within the past 6 months; (4) had been diagnosed with Raynaud's disease, a vasospastic disease of small arteries and arterioles affecting circulation to the digits of the hands; (5) currently used any tobacco product; (6) were heavy caffeine users, ie, >500 mg per day; or (7) used vasodilator drugs. We excluded subjects with Raynaud's disease because intermittent occlusions of circulation to the small arteries and arterioles affect arterial blood flow to the skin region and would thus compromise skin blood flow measurements. The subjects were randomly assigned to an experimental (EXP, n = 7; three badminton players and four table tennis players) or control (CON, n = 7; three badminton players and four table tennis players) group. After randomization, the participants followed the experimental procedure outlined in Figure 1. Two separate laboratory visits were required for baseline study outcome measurements in thermoneutral and hot conditions, separated by at least a 24-hour rest. The outcome measurements were performed using a leg cycle ergometer (Model 818E, Monark Exercise AB, Vansbro, Sweden) and an incremental leg cycle protocol with a 3-minute increment per stage until volitional fatigue or exhaustion was reached. The Monark cycle ergometer was calibrated daily before the measurements.
The GXTs were conducted at the same time of day on each visit (ie, ±30 minutes), and the test order was randomly allocated and counterbalanced. Twenty-four hours after the baseline measurements, the EXP began to train in a hot environment (mean ± SD room temperature = 38.4°C ± 0.4°C, relative humidity [RH] = 52.0% ± 4.6%), while the CON began to train in a thermoneutral environment (mean ± SD room temperature = 24.1°C ± 0.3°C, RH = 51.5% ± 4.5%), both groups on upright cycle ergometers. Both the EXP and CON trained using the following cycling exercise protocol from day 1 to day 5. On day 1, the training duration was 25 minutes at a work rate 10% below the individual's ventilatory threshold (VT) obtained from the GXT in the thermoneutral condition. The training work rates for days 2, 3, 4, and 5 were increased to 5% below the VT, equal to the VT, 5% above the VT, and 10% above the VT, respectively (Figure 1). The training duration was increased by 5 minutes each day, reaching 45 minutes per day on day 5. Within 24 hours of the conclusion of the training period, subjects repeated the same baseline measurements. All subjects completed the 5-day training sessions without any incident of heat-related illness or muscle soreness. The thermally controlled room used for exercise training and testing was 122 square meters with a 2.44-meter ceiling. Temperature in the testing room was manually controlled by an investigator using a wall-mounted thermostat outside the testing room. One investigator (TIC) accompanied the subject in the room at all times to perform the measurements. The temperature of the hot room condition was significantly higher than that of the thermoneutral condition.
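The five-day progression above (work rate stepping from 10% below to 10% above the VT, duration increasing 5 minutes per day) can be sketched programmatically. The function below is illustrative only; the name, dictionary layout, and rounding are my own, assuming a VT expressed in watts:

```python
def training_schedule(vt_watts):
    """Sketch of the 5-day protocol: work rate moves from 10% below to
    10% above the ventilatory threshold (VT), while duration increases
    by 5 minutes per day, from 25 to 45 minutes."""
    offsets = [-0.10, -0.05, 0.0, 0.05, 0.10]  # day 1..5, fraction of VT
    return [
        {"day": day + 1,
         "work_rate_w": round(vt_watts * (1 + offset)),
         "duration_min": 25 + 5 * day}
        for day, offset in enumerate(offsets)
    ]
```

For a hypothetical VT of 200 W, this yields 25 minutes at 180 W on day 1 and 45 minutes at 220 W on day 5.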
Outcome measurements Determination of VO 2max and time to reach exhaustion
The subjects' VO 2max was measured on a cycle ergometer in a thermoneutral room (mean ± SD, 24.1°C ± 0.3°C; RH, 51.5% ± 4.5%) and in a hot room (38.4°C ± 0.4°C; RH, 52.0% ± 4.6%). The GXT protocol consisted of a 5-minute warm-up at a work rate of 30 watts (W) and 3 minutes of stretching exercises. During the VO 2max measurement, the initial cycling work rate was 60 W for 3 minutes; thereafter, the work rate was increased by 30 W every 3 minutes until volitional fatigue was reached. VO 2max was determined when the subject met at least two of the following criteria: (1) failure to maintain the prescribed pedaling frequency of 60 revolutions per minute for more than 5 seconds despite repeated verbal encouragement (from researcher TIC); (2) a rating of perceived exertion (RPE) ≥19 (Borg 6-20 scale); (3) a respiratory exchange ratio (RER) >1.10; or (4) an exercise heart rate >95% of the predicted maximal heart rate. Expired gases were collected and recorded breath-by-breath using an automated metabolic cart system (Sensor Medics System, Model 2900, Yorba Linda, CA, USA), which was calibrated 2 hours before all testing. All subjects reached peak work rates between 210 W and 240 W before stopping the test. During the VO 2max measurement, exercise time to exhaustion (t max ) was determined by the same researcher (TIC). Time to exhaustion was recorded in seconds and determined when the subject stopped pedaling the cycle ergometer despite repeated verbal encouragement given by the same researcher. In addition, one of the following criteria had to be attained: (1) an RPE ≥19 (Borg 6-20 scale); or (2) an exercise heart rate >95% of the age-predicted maximal heart rate. Note that during the t max measurement, all subjects met the above end-point criteria.
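The two-of-four decision rule for accepting a VO 2max measurement can be written as a small check. This is a sketch of the criteria as stated in the text; the function and argument names are hypothetical:

```python
def vo2max_attained(cadence_maintained, rpe, rer, hr, hr_max_pred):
    """Return True when at least two of the four stated VO2max criteria
    are met: lost cadence, RPE >= 19, RER > 1.10, HR > 95% predicted."""
    criteria = [
        not cadence_maintained,   # failed to hold 60 rpm for >5 s
        rpe >= 19,                # Borg 6-20 scale
        rer > 1.10,               # respiratory exchange ratio
        hr > 0.95 * hr_max_pred,  # >95% of predicted maximal HR
    ]
    return sum(criteria) >= 2
```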
In addition to VO 2max and t max , study outcome measurements also included: maximal minute ventilation (V E ); oxygen consumption (VO 2 ); rate of carbon dioxide elimination (VCO 2 ); VT; heart rate (HR); maximal oxygen pulse; skin blood flow in the quadriceps (SkBF q ); skin blood flow of the upper arm (SkBF a ); mean T sk ; and total body sweat loss. HR was recorded during all training sessions using a Polar heart rate monitor (Model #1901201, Polar Electro, Kempele, Finland). The equipment used for the outcome measurements was calibrated daily, 2 hours before testing or data collection. Over the 5-day period, average training HR did not differ significantly between the EXP in the heat (163 ± 5.2 beats per minute [bpm]) and the CON in the thermoneutral condition (164 ± 3.0 bpm). The exercise intensity ranged from 75% of maximum heart rate on day 1 to 89% on day 5. Core temperature was not obtained because more than half of the subjects experienced esophageal discomfort or gastroesophageal reflux while swallowing the esophageal thermistor probe during cycling exercise in the hot environment. Twenty-four hours prior to the GXT, the athletes were instructed to hydrate adequately, to abstain from physical training and from alcohol, caffeine, and tea, and to take no food in the 2 hours before testing or training. Each subject was provided with ten 450 mL bottles of water daily and instructed to drink three bottles of fluid in the 2 hours before the GXT or exercise training and at least three to four bottles of fluid following the GXT or exercise training. Subjects maintained their regular diet by consuming cafeteria-prepared standard meals for breakfast, lunch, and dinner at the campus dining hall.
During the study period, the standard meals prepared by the cafeteria consisted of approximately 3500 kilocalories per day, with 55%-60% of calories from carbohydrate, 15%-20% from protein, and 20%-25% from fat.
Determination of VT level
During the GXT in the thermoneutral and hot conditions, VT was determined according to Wasserman et al, 16 using the criterion of a systematic increase in the ventilatory equivalent for oxygen (V E /VO 2 ) with no concomitant increase in V E /VCO 2 . The same investigator (TIC) determined from the measured parameters the point at which these two values crossed.
Skin blood flow measurement
SkBF a and SkBF q during the GXT in both thermal conditions were measured concurrently using two calibrated laser Doppler flowmeters (LDF) (Vasamedics Inc, model BPM2; St. Paul, MN, USA) and two hard-tip pencil probes (model BPM2, Vasamedics Inc). The subjects' skin sites were chosen and marked with a small circle, using a waterproof permanent black marker, for attachment of the LDF probe head. 17 To obtain the resting LDF value, the subject entered the thermally controlled room for 10 minutes before three resting LDF measurements were taken, one every 20 seconds, from which the averaged value was used for data analysis. These same skin sites were used for the subsequent LDF measurements during the GXT. The following LDF recorder settings were used: speed, 100 mm per minute; sensitivity, X20; and averaging time, 10 seconds. 17,18 During the GXT, the highest and lowest blood perfusion values were recorded for 10 seconds. These five 10-second SkBF values were averaged and reported as a relative perfusion unit (% RU). 19 The SkBF values were used as indexes for estimating continuous SkBF of the region being measured. The LDF can be applied to any region of the skin, with a measuring depth limited to cutaneous tissue, and is not affected by the underlying muscle blood flow. 20 The LDF instrument used for these outcome measurements was calibrated daily, at least 2 hours before testing or data collection.
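The averaging of the five 10-second LDF readings might be sketched as follows. Note that expressing the mean relative to the resting value as a percentage is my assumption about how the % RU was derived; the text does not state the reference value explicitly:

```python
def relative_perfusion(samples, resting):
    """Mean of five 10-second LDF readings, expressed as a percentage
    of the resting LDF value (assumed definition of % RU)."""
    if len(samples) != 5:
        raise ValueError("expected five 10-second readings")
    return 100.0 * (sum(samples) / len(samples)) / resting
```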
Skin temperature measurement
During the GXT in both thermal conditions, T sk was measured at the same time as SkBF, in an upright position. The following skin sites were recorded: chest (T ch ), thigh (T th ), forearm (T fa ), and medial calf (T ca ), using a Tele-Thermometer (YSI Tele-Thermometer model 34, YSI Inc, Yellow Springs, OH, USA) and four attachable surface temperature probes (YSI model 409B). 17 Whole-body mean T sk was estimated on the basis of the regional area and thermal sensitivity of each site as follows: 21

T sk = 0.3 (T ch + T fa ) + 0.2 (T th + T ca )    (1)
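Equation 1 translates directly into code; the weights (0.3, 0.3, 0.2, 0.2) sum to 1.0. A minimal sketch, with temperatures in °C:

```python
def mean_skin_temp(t_chest, t_forearm, t_thigh, t_calf):
    """Four-site weighted mean skin temperature (Equation 1); weights
    reflect the regional area and thermal sensitivity of each site."""
    return 0.3 * (t_chest + t_forearm) + 0.2 * (t_thigh + t_calf)
```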
Estimation of total body sweat loss
Total body sweat loss was estimated using the pre- to post-exercise body weight difference divided by the individual's body surface area, expressed as kg per square meter per hour. Before each weighing, subjects were asked to void their bladder completely. Weighing was performed with the subject wearing only thin athletic shorts, no shoes, and no socks, using an automated electronic weight-height scale (model NK-3000, Nakamura Medical Industry, Tokyo, Japan) measured to the nearest 0.05 kg. After termination of the GXT, subjects towel-dried completely, changed into freshly dried athletic shorts, and then their post-exercise body weight was measured. Note that during the GXT and all training sessions, subjects were allowed to drink water ad libitum (ie, it was not required). The weight of the fluid drunk was accounted for by weighing the water prior to ingestion and subtracting it from the body weight obtained. From this procedure, whole-body sweat loss was calculated.
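A minimal sketch of this calculation, assuming the ingested fluid mass is added back to the body-mass deficit and the result is normalized to body surface area and exercise duration (function and argument names are my own):

```python
def sweat_loss_rate(pre_kg, post_kg, fluid_kg, bsa_m2, hours):
    """Whole-body sweat loss in kg per square meter per hour:
    body-mass deficit, corrected for ingested fluid, divided by
    body surface area and elapsed time."""
    return (pre_kg - post_kg + fluid_kg) / bsa_m2 / hours
```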
Statistical procedures
All statistical analyses were performed using the Statistical Product and Service Solutions statistical software package (SPSS version 20, IBM Corporation, Armonk, NY, USA). A two-way (group × time point) analysis of variance (ANOVA) with repeated measures on the time factor, and a two-way (group × work rate) ANOVA with repeated measures on the work-rate factor, were used to determine significant main effects for the study outcomes. When there was a significant main effect, Tukey's test was used to locate the source of the difference. A probability of P < 0.05 was taken to indicate statistical significance. Based on the sample size calculation for the SkBF study outcomes, with a statistical power of 0.80 (β = 0.2), α = 0.05, and effect sizes of 1.75 and 1.5 for leg SkBF and arm SkBF, respectively, a sample size of seven per group would provide sufficient statistical power for this study.
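The stated inputs (power 0.80, α = 0.05, effect sizes 1.5-1.75) can be checked with a standard normal-approximation sample-size formula for a two-sided, two-sample comparison. This is a textbook approximation, not necessarily the exact calculation the authors performed:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided,
    two-sample comparison of means: n = 2 * ((z_a + z_b) / d)^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided alpha quantile
    z_b = NormalDist().inv_cdf(power)          # power quantile
    return ceil(2 * ((z_a + z_b) / effect_size) ** 2)
```

With these inputs the formula gives 7 per group for an effect size of 1.5 and 6 per group for 1.75, consistent with the stated sample size of seven.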
Results
At baseline, the physical and physiological characteristics of the subjects, including age, were not significantly different between the EXP and CON. Figure 2A and B depict leg SkBF of the EXP and CON in the hot and thermoneutral conditions, respectively. During incremental leg cycling exercise in hot conditions, post-heat-acclimation leg SkBF of the EXP was significantly lower than pre-heat-acclimation leg SkBF: 71% at 120 W; 56% at 180 W; and 100% at 240 W (all P < 0.05) (Figure 2A). However, when the leg cycling exercise was performed in the thermoneutral condition, post-heat-acclimation leg SkBF of the EXP was significantly higher than pre-heat-acclimation values: 65% at 60 W; 63% at 120 W; 80% at 180 W; and 73% at 240 W (all P < 0.05) (Figure 2B). For the EXP, the rise in leg SkBF during incremental leg cycling exercise measured post-heat acclimation in the thermoneutral condition was not significantly higher than the rise in leg SkBF measured in hot conditions, except at the 180 W work rate (70%, P < 0.05) (Figure 2A and B). None of the above changes in leg SkBF during incremental leg cycling exercise was observed in the CON. Figure 3 depicts arm SkBF of the EXP and CON. When leg exercise was performed in hot conditions after heat acclimation, the EXP exhibited a steady increase in arm SkBF during incremental leg exercise. The increase in arm SkBF at 180 W was significant (P < 0.05) compared with all other work rates (Figure 3A). This finding in arm SkBF was not observed when leg cycling exercise was performed in the thermoneutral condition (Figure 3B). Again, the above changes were not observed in the CON, except that when the cycling work rate reached 180 W, arm SkBF dropped 28% (P > 0.05) (Figure 3A).
We performed a regression analysis of SkBF during leg cycling exercise to exhaustion for the EXP and CON and observed that arm SkBF increased linearly during leg exercise at pre-test (correlation coefficient [r] = 0.996, P < 0.001) and post-test (r = 0.996, P < 0.001) in both thermal conditions. Table 1 depicts the study outcome measurements for time to exhaustion: VO 2max ; maximal heart rate; maximal O 2 pulse; total sweat loss; and peak T sk . After heat acclimation, in the hot environment, the EXP showed a lowered HR at rest (−2%, P > 0.05) and during leg cycling exercise at 60 W (−2.5%, P > 0.05), 120 W (−10%, P < 0.05), 180 W (−7.7%, P < 0.05), and 240 W (−5.1%, P < 0.05). In the thermoneutral condition, the reduction in exercise HR was also observed in the EXP and CON. For the O 2 pulse measurement, after heat acclimation, the EXP showed significant increases during incremental leg cycling exercise (13.3%-15.7%, P < 0.05) in the thermoneutral condition, and only modest increases (1.2%-10%, P < 0.05) in the hot environment. However, at the maximal cycling work rate in the hot environment, the EXP showed a 10.4% (P < 0.05) increase in peak O 2 pulse (Table 1). The observed changes in peak O 2 pulse were not seen in the CON at post-test relative to pre-test in either thermal condition (Table 1). No change in mean skin temperature (T sk ) was observed in the EXP or the CON at post-test in either thermal condition (Table 1). Also, no change in sweat loss was observed in the EXP at post-test in either thermal condition, relative to the pre-test values. Note that the sweat loss of the CON at post-test was lowered (P < 0.05) in hot conditions, but not in the thermoneutral condition (Table 1).
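The linear fit of arm SkBF against work rate reduces to a Pearson correlation coefficient; a self-contained sketch (the function name is my own):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length
    sequences, as used to summarize the SkBF-vs-work-rate relationship."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5
```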
Discussion
After five daily sessions of upright leg cycling exercise in the heat, the EXP showed a 6.6%, or 1.1-minute (P < 0.05), gain in endurance time in the hot environment, while the CON exhibited no improvement in this outcome measurement (Table 1). Our results agree with those of Hales et al 22 and Lorenzo et al, 2 who reported increased endurance time after heat acclimation in trained athletes in hot conditions. Regarding the HR response during incremental exercise in the hot environment, our results agree with those of Garrett et al 4 and Nielsen et al. 8 In light of the VO 2max and HR responses during incremental leg exercise obtained for the EXP and CON in both thermal conditions, it is reasonable to suggest that the training protocol used in the present study was effective for inducing heat acclimation and a training effect, ie, enhancing cardiovascular adaptations. For the EXP group, the pre-heat-acclimation leg SkBF response to incremental cycling work was significantly higher than the post-heat-acclimation response in the hot condition (Figure 2A). The mechanism(s) responsible for this phenomenon is unclear. These observations may be interpreted to mean that before heat acclimation, leg SkBF in the hot environment was largely directed to the skin region, due to the cutaneous vasodilator drive in response to exercise and heat stress. 10 The reduction in leg SkBF observed after the 5-day heat acclimation process at the 120 W, 180 W, and 240 W work rates (all P < 0.05; Figure 2A) in the hot condition may be explained as follows. Before heat acclimation, during dynamic exercise in the heat, blood flow in the exercising muscle increases to meet the increased metabolic demand, accompanied by cutaneous vasoconstriction and substantial vasodilation in the active muscle until exhaustion is reached. 6,7 In this case, we hypothesize that after heat acclimation, cutaneous blood flow during dynamic leg exercise was lowered so that blood flow in the active muscle could be maintained in the high thermal environment, thus increasing the time to exhaustion.
In the thermoneutral environment, after heat acclimation, leg skin blood flow increased steadily with exercise work rate, due to cutaneous vasodilation, until the work rate reached 180 W, after which leg SkBF dropped (Figure 2B). This interpretation is based on: (1) the fact that during leg exercise in the hot environment, internal body temperature also increases; and (2) the assumption of a constant cardiac output and cardiac filling pressure. 7 The above observations were not detected in the CON.
The arm SkBF response during leg cycling exercise in the thermoneutral condition supports Smolander's findings 9,23 that skin blood flow measured at the forearm increases linearly in physically active men. We observed a plateau in arm SkBF during leg exercise in the hot (Figure 3A) and thermoneutral (Figure 3B) conditions. In the hot environment, our results did not support those of Hales et al, 22 who observed a remarkable 80% decrease in SkBF in the nonactive arm during leg cycle exercise at an intensity of 60% VO 2max . Both the EXP and CON groups showed a larger and steeper increase in arm SkBF in response to increased work rate in hot conditions than in the thermoneutral environment. However, in both thermal conditions, arm SkBF plateaued or dropped when the exercise work rate reached 180 W or higher (Figure 3A and B). The observed plateau in arm SkBF may be due to reflex effects of skin temperature that suppress the cutaneous vasodilator response to rising internal body temperature by raising the threshold for vasodilation. 7,24 We did not measure body temperature in this study and can only assume that during incremental leg cycling exercise in the hot environment, internal temperature was elevated and thus raised the cutaneous vasodilation threshold. Another possible mechanism is that when arm SkBF approaches its peak value, cutaneous blood flow begins to be redirected away from nonexercising tissues (including blood volume in the visceral vascular bed) to meet the metabolic demands of the exercising skeletal muscle. 7 To meet the high metabolic demands and optimize cardiac output during heavy exercise in a high ambient temperature, a constraint must be placed on the thermoregulatory drive for cutaneous vasodilation. 7 Stephens et al 25 suggested that cutaneous blood flow is controlled by sympathetic vasoconstrictor nerves through the release of norepinephrine and a vasoconstricting cotransmitter. Other researchers suggest that in the exercising skeletal muscles, increased blood flow may be achieved via the release of local vasodilating substances, such as increased hydrogen ion concentration (H + ) and carbon dioxide partial pressure (Pco 2 ), 6,8,10 and via nitric oxide (through nitric oxide synthase), 7 overcoming the sympathetic vasoconstrictor drive in those exercising muscles. These observations of redistribution of SkBF away from the skin, presumably to the exercising muscle in the active limb, lead us to believe that after heat acclimation, vasodilation in the exercising muscle provides the capability for the EXP to increase their endurance time in the hot environment.
After heat acclimation, the EXP group significantly increased O 2 pulse during incremental leg cycle exercise in the hot and thermoneutral conditions (10.4% and 6.4%, respectively, both P < 0.05) (Table 1). Wasserman et al 16 suggested that when the exercise work rate reaches a critical intensity, the O 2 pulse increases primarily because of an increasing arterial-mixed venous O 2 difference, suggesting that after heat acclimation, active tissue receives adequate arterial blood perfusion during exercise in the heat. One benefit of heat acclimation is the redistribution of blood flow away from the skin toward the active muscles, or increased blood perfusion of the working muscle through enhanced peripheral O 2 extraction. 5,6 This may account for the increase in O 2 pulse, the lowering of heart rate, and the longer endurance time of the EXP after heat acclimation.
Throughout the study, our subjects were encouraged to drink fluid during the incremental leg cycle exercise in the hot and thermoneutral conditions, to drink three 450 mL bottles of fluid following each training session, and to continue drinking three to four 450 mL bottles of fluid throughout the day. It is possible that these athletes developed mild degrees of dehydration because of the ad libitum (ie, not required) water intake. If severe dehydration had occurred, due to incomplete fluid replacement from previous training sessions, the exercise time to exhaustion would have been affected. 1,5,26 This could be the case for the CON at the post-test measurement, because sweat loss of the CON was reduced by approximately 43% (P < 0.05) in the thermoneutral condition and 23% (P > 0.05) in hot conditions. Thus, the subjects' hydration status may have influenced the skin blood flow outcome measurements.
The present study was novel in that we focused on training of the lower limbs to separate training effects on SkBF in the active and nonactive limbs, with and without heat exposure. This study was also unique with regard to the short-term (five daily sessions) training protocol and the exercise intensities at or above the individual's ventilatory threshold with heat exposure, findings that can be generalized to other athletic groups. Together, the present study contributes to the body of knowledge in the field of sports medicine, particularly given the dearth of studies investigating this population.
Summary and conclusion
The purpose of this study was to examine whether short-term heat exposure, ie, five daily sessions of vigorous dynamic cycling exercise in the heat, could achieve heat acclimation in trained athletes, and to examine the effect on cutaneous blood flow in the active skeletal muscle region during all-out leg cycle exercise in the heat-acclimated athletes. The significant heat-acclimation outcomes during incremental leg cycling exercise in a high ambient temperature were: (1) a 56%-100% reduction in cutaneous blood flow to the active limbs during leg cycling exercise; (2) a 28% drop in cutaneous blood flow in the nonactive limb at peak work rate; (3) a 5%-10% reduction in exercise heart rate; (4) a 10% increase in maximal O 2 pulse; and (5) a 6.6% increase in endurance time to exhaustion. Based on these findings, we conclude that: (1) heat acclimation can be achieved in trained athletes with five sessions of dynamic cycling exercise in the heat; and (2) the reduction of cutaneous blood flow in the exercising limb, the lower exercise heart rate, and the increased oxygen pulse during cycling exercise provided the heat-acclimated athletes the capability to increase their endurance time in the hot environment. Considering the importance of the cutaneous circulation in body temperature regulation and blood flow distribution to the working muscle and skin, heat acclimation should be considered advantageous for trained, competitive athletes. Future research may consider using endurance athletes who compete outdoors in long-distance events, with a VO 2max >65 mL per kg per minute, and using laser Doppler imaging to estimate regional skin blood perfusion with heat acclimation.
Effect of Transcutaneous Acupoint Electrical Stimulation on Urinary Retention and Urinary ATP in Elderly Patients After Laparoscopic Cholecystectomy: A Prospective, Randomized, Controlled Clinical Trial
Purpose To investigate the effect of transcutaneous electrical acupoint stimulation (TEAS) on urinary retention after laparoscopic cholecystectomy in elderly patients, and to explore the relationship between TEAS and urinary ATP. Patients and Methods The TEAS group received active TEAS at specific acupuncture points from before induction of anesthesia until 45 mins after surgery. In the control group, participants received a sham stimulus at the same acupoints, with no output current delivered because the device's output line was disconnected. Urine samples were collected and evaluated at the first spontaneous voiding after surgery. In this study, postoperative urinary retention (POUR) was the primary outcome, diagnosed on the basis of clinical symptoms, ultrasound assessments, and the need for bladder catheterization. Secondary outcomes included urinary ATP, postoperative spontaneous urination, urination symptoms, catheter-related bladder discomfort (CRBD), delirium, and the duration and costs of hospitalization. Results The study involved 598 patients recruited and randomized between August 2018 and June 2020. Among these patients, 547 (91.5%) completed the study and were analyzed. There were 64 cases of POUR, including 23 (8.4%, 95% confidence interval [CI]: 6.4-9.9%) in the TEAS group and 41 (15.0%, 95% CI: 9.3-13.4%) in the control group (p = 0.017). A significant difference was observed between the TEAS and control groups in the urinary ATP concentration of the first spontaneous urine postoperatively (344 nmol/L versus 233 nmol/L, p = 0.001). The TEAS group also showed a shorter spontaneous voiding recovery time, a smaller voiding threshold, less postoperative catheterization, less CRBD, and lower hospitalization costs compared with the control group. Conclusion TEAS reduces the incidence of POUR in elderly patients undergoing laparoscopic cholecystectomy, which may be related to an increase in bladder ATP release.
Introduction
Urinary retention is a common complication after anesthesia, often accompanied by painful vesical distention and cardiovascular responses, and can lead to motility and atony problems, especially in older patients. 1,2 In addition, in the elderly, urinary retention is associated with restlessness, confusion, and the possibility of delirium. 3 Under general anesthesia, stress responses are attenuated, and both sympathetic and vagal nerve activity are inhibited. 4 As a result of reduced parasympathetic activity, baroreceptor function may be impaired and the recovery of spontaneous urination may be delayed. 5 Aging is a physiological process associated with dysfunction of the autonomic nervous system. 6,7 The physiological response to anesthesia may be unpredictable and paradoxical if the autonomic nervous system is dysfunctional. ATP, the principal messenger released from urothelial cells, is a co-transmitter together with noradrenaline in sympathetic nerves and acetylcholine in parasympathetic nerves, and is considered a potential therapeutic target for functional bladder disorders. 8 In recent years, acupuncture has gained popularity as a method of improving urination control. Acupuncture and related methods have been shown to reduce the incidence of urinary retention after orthopedic, obstetric and gynecological, and gastrointestinal procedures. [9][10][11] Transcutaneous electrical acupoint stimulation (TEAS), which applies electrical stimulation to acupuncture points instead of needles, has shown benefits in studies of voiding regulation, but its use in elderly patients remains controversial. 12,13 In addition, the effect of TEAS on urinary ATP in patients following surgery is unclear. Therefore, we conducted a double-blind randomized trial and hypothesized that TEAS is effective in reducing urinary retention following laparoscopic cholecystectomy and is associated with increased urinary ATP release.
Materials and Methods Design
The study was approved by the ethics committee of Tianjin Nankai Hospital and is in accordance with the Helsinki Declaration. The trial was registered at ClinicalTrials.gov (registration number: NCT03631160). Written informed consent was obtained from all participating patients prior to randomization.
Patients
Inclusion criteria were: (1) age above 65 years; (2) elective laparoscopic cholecystectomy under general anesthesia; and (3) no preventive or intraoperative indwelling catheter. The exclusion criteria were: (1) urinary tract obstruction or infection, such as moderate or severe benign prostatic hyperplasia (International Prostate Symptom Score [I-PSS] greater than 7) or a urinary tract infection in the last 3 months; (2) contraindications for TEAS, such as skin allergy, infection, itching, or a pacemaker; (3) previous experience with multiple acupuncture sessions; (4) preoperative American Society of Anesthesiologists (ASA) grade ≥ IV; (5) severe renal impairment requiring kidney replacement therapy; and (6) participation in other clinical investigations within the previous 3 months.
Eligible participants were randomly assigned to the TEAS and control groups at a 1:1 ratio using block randomization. Random numbers were sealed in sequentially numbered, opaque envelopes, which were opened by the acupuncturist before administration. Participants, outcome assessors, and statisticians were blinded to the treatment allocation.
TEAS Administration
Four acupoints were selected as the stimulation points: Ciliao (BL32, bilateral), Sanyinjiao (SP6, bilateral), Zhongji (CV3), and Guanyuan (CV4). These acupoints are located on the lower back, lower leg, and lower abdomen; their positioning is based on the National Standard of the People's Republic of China: acupoint position (Figure 1). TEAS was administered by a licensed acupuncturist with ≥8 years of clinical acupuncture experience. A low-frequency pulse electroacupuncture stimulator (Hwato brand model no. SDZ-IV, Suzhou Medical Appliances Co., Ltd, Suzhou, China) was attached to the selected acupoints. Stimulation started 30 mins before induction of anesthesia and lasted until 45 mins after surgery. The frequency of stimulation was 4/20 Hz, and the intensity was adjusted to the patient's tolerance, until the local skin twitched slightly and the sensation remained tolerable. 14 Participants in the control group received a sham stimulus at the same points with sham electrode lines; the output wires were cut off, so that no current passed to the patient despite the device being connected. We also informed each patient before stimulation that they might not feel tingling in the lower extremities and lower back when the stimulator was working. In addition, in order to make the simulation as realistic as possible, the stimulator with sham
Anesthetic and Surgical Procedure
All patients completely emptied their bladders before entering the operating room. The anesthesia method was standardized for the two groups. Anesthesia was induced with 0.2-0.6 mg/kg etomidate and 0.2-0.5 μg/kg sufentanil and maintained with remifentanil and propofol infusion. Intraoperative monitoring included electrocardiogram, noninvasive blood pressure, peripheral capillary oxygen saturation, end-tidal carbon dioxide, and the bispectral index. Intra-abdominal pressure was maintained between 8-12 mmHg. Patients received Ringer's lactate solution infused at a rate of 10 mL/kg/h before and during the procedure; this was continued after surgery until oral fluid intake was allowed. Patient-controlled intravenous analgesia (PCIA) with sufentanil combined with flurbiprofen was administered for 24-48 h postoperatively, as necessary.
Data Collection and Urinary ATP Measurements
The primary outcome of this study was the incidence of POUR after surgery. Postoperative urinary retention (POUR) was diagnosed by the attending surgeon based on clinical symptoms, ultrasound assessments, and bladder catheterization requirements (Supplementary Table S1). 2 The secondary outcomes included urinary ATP concentration, spontaneous voiding ability, urination symptoms (such as voiding difficulty, oliguria, or nocturia), catheter-related bladder discomfort (CRBD), delirium, early ambulation, length of hospital stay, and hospital costs. CRBD was evaluated by the anesthesiology resident: none = no complaints of CRBD symptoms; mild = reported only upon direct inquiry; moderate = spontaneous complaints by the patient without behavioral responses (eg, pulling out the catheter, flailing limbs, or a loud vocal response); and severe = spontaneous complaints by the patient with behavioral responses. 15 An assessment of postoperative delirium was made by trained personnel using the Confusion Assessment Method for the Intensive Care Unit (CAM-ICU) and the Richmond Agitation-Sedation Scale (RASS) from the first to the third postoperative day. 16 A patient was considered delirious if he or she was awake (RASS of −3 or greater) and positive on either of the CAM-ICU assessments. Early ambulation was defined as any partial or full weight-bearing activity conducted two to three times outside the ward, with a total walking distance of 250 to 500 m. 17 During the first spontaneous void after surgery, urine samples were collected from thirty patients in each group and tested for urinary ATP levels. The urine samples were centrifuged at 2500 × g for 5 minutes and stored at −80°C. Urine ATP concentrations were measured using a luminometer and a luciferin-luciferase assay (Sigma-Aldrich).
In brief, the colorimetric reaction absorbance was measured using a spectrophotometer at 460 nm, and the ATP concentration was estimated by applying the luminescence value to a standard curve of known ATP concentrations.
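The standard-curve step can be sketched as a least-squares line fit through known standards, inverted to map a measured signal to a concentration. All calibration values below are hypothetical, purely for illustration; they are not the assay's actual calibration data.

```python
# Minimal sketch: estimate ATP concentration from a signal reading via
# an ordinary least-squares fit to known standards. Values hypothetical.

standards = [(0.0, 5.0), (100.0, 210.0), (200.0, 405.0), (400.0, 810.0)]
# each pair: (ATP concentration in nmol/L, instrument reading)

n = len(standards)
sx = sum(c for c, _ in standards)
sy = sum(r for _, r in standards)
sxx = sum(c * c for c, _ in standards)
sxy = sum(c * r for c, r in standards)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def atp_from_signal(signal):
    """Invert the fitted line: concentration = (signal - intercept) / slope."""
    return (signal - intercept) / slope

sample_conc = atp_from_signal(500.0)   # estimate for an unknown sample
```

In practice a plate reader's software performs this fit; the sketch only shows the arithmetic behind "applying the value to a standard curve".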
Statistical Analysis
We conducted all analyses using IBM SPSS 22.0 (IBM Corp., Armonk, NY, USA). The Kolmogorov-Smirnov test was used to assess the normality of continuous data (normality rejected at p < 0.05), with the mean and SD reported for normally distributed data and the median (IQR) for non-normal data. As appropriate, t-tests or Mann-Whitney U-tests were used to compare the two groups. Categorical variables were analyzed with the chi-squared test or Fisher's exact test and summarized as numbers (%). Statistical significance was set at p < 0.05.
Results
The study assessed 712 patients who underwent elective laparoscopic cholecystectomy, of whom 598 met the inclusion/exclusion criteria. The 598 patients were enrolled and randomly assigned to two groups, the TEAS group (n = 298) and the control group (n = 300). There were 22 subjects who did not receive allocated intervention due to cancellation of surgery or preventive indwelling catheter (10 subjects from the TEAS group and 12 subjects from the control group); 29 subjects discontinued interventions due to another surgery or an intraoperative indwelling catheter (15 subjects from the TEAS group and 14 subjects from the control group). This resulted in the analysis of data on 547 subjects, including 273 patients in the TEAS group and 274 patients in the control group. A detailed patient flowchart is shown in Figure 2.
As can be seen in Table 1, the demographics and baseline characteristics of the patients were similar between the two groups and no significant differences were observed between the two groups (p >0.05).
Sixty-four patients were diagnosed with POUR, including 23 (8.4%, 95% confidence interval [CI]: 6.4-9.9%) in the TEAS group and 41 (15.0%, 95% CI: 9.3-13.4%) in the control group, a statistically significant difference (p = 0.017, Table 2). Compared with the control group, patients in the TEAS group had a shorter time to recovery of spontaneous voiding (5.1 h versus 8.1 h, p = 0.005) and a lower voiding threshold (414.9 mL versus 504.8 mL, p = 0.048). The concentration of urinary ATP was measured in the first spontaneous void after surgery, and patients in the TEAS group had a significantly higher ATP concentration (387.6 nmol/L versus 244.5 nmol/L, p = 0.001) than those in the control group (Figure 3).
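As an arithmetic check, the reported POUR comparison (23/273 with TEAS versus 41/274 controls) can be reproduced with a plain Pearson 2x2 chi-squared test without continuity correction; for one degree of freedom the p-value follows from the complementary error function, so no statistics package is needed. This is an illustrative sketch, not the SPSS procedure used by the authors.

```python
# Pearson chi-squared test on the 2x2 table of POUR counts.
# For 1 df, the survival function of chi-square is erfc(sqrt(x/2)).
import math

a, b = 23, 273 - 23     # TEAS: POUR, no POUR
c, d = 41, 274 - 41     # control: POUR, no POUR
n = a + b + c + d

chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
p_value = math.erfc(math.sqrt(chi2 / 2.0))
```

The resulting p-value is approximately 0.017, matching the value reported above.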
Compared with the control group, fewer patients in the TEAS group experienced voiding difficulty. During the study, six patients experienced adverse events, including four cases of gross hematuria or lower urinary tract infection and two cases of localized acupuncture point redness or pruritus; no statistically significant difference was found between the TEAS and control groups in terms of these adverse events (Table 3). There were no serious adverse events reported in either group. All adverse events occurring during the trial were mild and resolved within a short period of time.
Discussion
According to our findings, TEAS can reduce the incidence of urinary retention in elderly patients following laparoscopic surgery. The mechanism may be related to improved recovery of the spontaneous urination reflex through increased ATP release, which is consistent with our hypothesis.
POUR is defined as the inability to void when the bladder is full, along with pain and discomfort in the lower abdomen. 2 The incidence of POUR in the general surgical population varies widely; it is estimated to be 6-10% in laparoscopic cholecystectomy. 18,19 In older patients, changes resulting from physiological aging, as well as comorbidities and multiple drug therapies, may result in lower urinary tract symptoms such as urinary retention and incontinence. The risk of POUR increases with age, with patients over 60 years of age being at double the risk. 20 In this population of elderly patients with poor mobility and cognitive impairment, combined with susceptibility to both infection and urinary bacterial overgrowth, a high rate of urinary tract infections is prevalent. 21 According to a three-year surveillance study of 134,637 patients, urinary tract infections are the second most common healthcare-associated infections among the elderly, and catheter-associated urinary tract infections are especially prevalent. 22 In the present study, a significant reduction in the incidence of POUR after TEAS treatment was observed as compared to the baseline incidence of 15% in the elderly, which is consistent with previous research. Furthermore, no urinary tract infections occurred in patients treated with TEAS, suggesting that TEAS has a protective effect on urinary tract susceptibility in older adults.

The storage and elimination of urine are essential aspects of daily life and involve intricate neural signaling pathways that require coordination between the urinary bladder and the urethra. 23 Both continence and micturition require that the autonomic and somatic nerves cooperate to control the bladder and urethra. During the storage phase of the micturition cycle, the sympathetic nervous system inhibits the contraction of the smooth muscle in the detrusor. As a result, the bladder is able to relax and expand.
As the urinary bladder empties, ATP and acetylcholine act on smooth muscle P2X purinoceptor 1 (P2X1) and muscarinic receptors (M2 and M3) to mediate contractions. A switch from the filling phase to the emptying phase occurs when tension in the bladder stimulates stretch receptors (which are slowly adapting mechanoreceptors). 24 Inhibition of the parasympathetic nerves that innervate the bladder is the mechanism by which anticholinergics, opioids, and anesthetics cause urinary retention. 25,26 Numerous signaling molecules secreted by the urinary epithelium (including acetylcholine, nitric oxide, neuropeptides, neurotrophins, and prostaglandins) form the "uroepithelium-associated sensory network", with ATP serving as the main messenger for the voiding reflex and pain. 27 In experiments, administration of intravesical ATP increased afferent firing induced by bladder distension, decreased the voltage threshold for electrical stimulation, and increased the area of action potentials. 28 In rats with bladder dysfunction caused by acute urinary retention (AUR), ATP-sensitive potassium channel openers prevent the need for catheterization resulting from the dysfunctional bladder. 29 The present study found elevated urinary ATP levels with TEAS in laparoscopic cholecystectomy patients, which may be associated with lower urinary retention rates. An elderly person's ability to adapt to environmental or intrinsic visceral stimuli is compromised due to significant changes in autonomic nervous system function with age. There is evidence that the elderly have altered peripheral and central nervous system activity, as well as reduced neurotransmitter receptor function. As a result, bladder capacity is reduced and bladder volume sensation is lost. 30 Simple strategies that increase sympathetic, parasympathetic, and central cholinergic activity can be implemented to enhance autonomic function and increase cortical blood flow.
By inhibiting ATP release from the urothelium and attenuating bladder afferent nerve firing, onabotulinumtoxinA exerts its clinical effects on urinary urgency in overactive bladder syndrome. 31 Conversely, acupuncture-like stimulation of specific body areas causes an increase in bladder tone and a decrease in periurethral electromyogram activity when the bladder is full and contracting. 32 This response is mediated through the pelvic efferent (parasympathetic) nerve that drives the bladder and depends on the bladder's filling state. 33 Research has shown that acupuncture increases ATP and adenosine concentrations in the body, with adenosine A1 receptors mediating its local anti-nociceptive effects. 34 In the present study, patients receiving TEAS treatment had higher urinary ATP levels and significantly faster recovery of voluntary voiding after surgery. Thus, increased urinary ATP levels appear to be related to the regulation of urination by TEAS.
This study has several limitations. First, we evaluated only the incidence of urinary retention following laparoscopic cholecystectomy and short-term outcomes after catheter removal, without long-term follow-up of voiding relief and functional outcomes. Second, despite the careful blinding procedure, there may be a potential placebo effect. Third, this study demonstrated that patients with elevated ATP concentrations had a reduced bladder voiding threshold, although the specific mechanism remains to be determined.
Conclusion
TEAS can reduce the incidence of POUR and increase ATP release in elderly patients who have undergone laparoscopic cholecystectomy. Future studies should examine specific mechanisms by which ATP participates in TEAS to reduce bladder dysfunction caused by urinary retention.
Data Sharing Statement
The datasets generated during and/or analyzed during the current study will be available upon reasonable request from the corresponding author. Email: 30717008@nankai.edu.cn.
Ethical Approval
The study was approved by Tianjin Nankai Hospital Ethics Committee (ethical register number: NKYY_YX_IRB_2017_032_01). Trial Registration: NCT03631160.
Wave extreme characterization using self-organizing maps
The self-organizing map (SOM) technique is considered and extended to assess the extremes of a multivariate sea wave climate at a site. The main purpose is to obtain a more complete representation of the sea states, including the most severe states that would otherwise be missed by a SOM. Indeed, it is commonly recognized, and herein confirmed, that a SOM is a good regressor of a sample if the frequency of events is high (e.g., for low/moderate sea states), while it fails if the frequency is low (e.g., for the most severe sea states). Therefore, we have considered a trivariate wave climate (composed of significant wave height, mean wave period and mean wave direction) collected continuously at the Acqua Alta oceanographic tower (northern Adriatic Sea, Italy) during the period 1979–2008. Three different strategies derived from SOM have been tested in order to capture the most extreme events. The first involves pre-processing of the input data set aimed at reducing redundancies; the second, based on post-processing of the SOM outputs, consists in a two-step SOM where the first step is applied to the original data set, and the second step is applied to the events exceeding a given threshold. A complete graphical representation of the outcomes of a two-step SOM is proposed. Results suggest that the post-processing strategy is more effective than the pre-processing one for representing the wave climate extremes. An application of the proposed two-step approach is also provided, showing that a proper representation of the extreme wave climate leads to an enhanced quantification of, for instance, the alongshore component of the wave energy flux in shallow water. Finally, the third strategy focuses on the peaks of the storms.
Introduction
The assessment of wave conditions at sea is fruitful for many research fields in marine and atmospheric sciences and for human activities in the marine environment. In the past decades, the observational network (mostly relying on buoys, satellites and other probes) has been integrated with numerical model outputs, allowing one to obtain the parameters of sea states over wider regions. Apart from the collection of wave parameters, the technique adopted to infer the wave climate at those sites is a crucial step in order to provide high-quality data and information to the community. In this context, several statistical techniques have been proposed to provide a reliable representation of the probability structure of wave parameters. While univariate and bivariate probability distribution functions (PDFs) are routinely derived, multivariate PDFs that represent the joint probability structure of more than two wave parameters are not straightforward. For individual waves, for instance, the bivariate joint PDF of wave height and period was derived by Longuet-Higgins (1983) and the bivariate joint PDF of wave height and direction was obtained by Isobe (1988). A trivariate joint PDF of wave height, wave period and direction is due to Kwon and Deguchi (1994). For sea states, attempts have been made to model the joint probability structure of the integral wave parameters. For instance, a joint PDF of the significant wave height and the average zero-crossing wave period was derived by Ochi (1978) and Mathisen and Bitner-Gregersen (1990). De Michele et al. (2007) exploited the "copula" statistical operators to describe the dependence among several random variables, e.g., significant wave height, storm duration, storm direction and storm interarrival time, deriving their joint probability distributions.

F. Barbariol et al.: Wave extreme characterization using self-organizing maps

The same approach was applied by Masina et al. (2015) to the significant wave height and peak water level in the context of coastal flooding.
Recently, the self-organizing map (SOM) technique has been successfully applied to represent the multivariate wave climate around the Iberian Peninsula (Camus et al., 2011a, b) and the South American continent (Reguero et al., 2013). SOM (Kohonen, 2001) is an unsupervised neural network technique that classifies multivariate input data and projects them onto a uni- or bi-dimensional output space, called map. The SOM technique was originally developed in the 1980s, and has been largely applied in various fields, including oceanography (Liu et al., 2006; Solidoro et al., 2007; Morioka et al., 2010; Camus et al., 2011a; Falcieri et al., 2013). Typical applications of SOM are vector quantization, regression and clustering. SOMs gained credit among other techniques with the same applications due to their visualization capabilities, which allow one to get multi-dimensional information from a two-dimensional lattice. The SOM also has the advantage of unsupervised learning; therefore, vector quantization is performed autonomously. However, the quantization is strongly driven by the input data density. Indeed, the SOM is principally forced by the most frequent conditions, while the rarest (i.e., the extreme events) are often missed. Consequently, it is highly unlikely to find extremes properly represented on a SOM.
In the context of ocean waves, drawing upon the works of Camus et al. (2011a, b) and Reguero et al. (2013), the SOM input is generally constituted by a set of wave parameters measured or simulated at a given location and evolving over the time t, e.g., the triplet composed by significant wave height H_s(t), mean wave period T_m(t) and mean wave direction θ_m(t), even if other variables can be added (examples of five- or six-dimensional inputs can be found in Camus et al., 2011a). Several activities in the wave field could benefit from the SOM outcomes, such as the selection of typical deep-water sea states for propagation towards the coast to study the longshore current regime and coastal erosion, and the identification of typical sea states for wave energy resource assessment and wave farm optimization. In addition, the empirical joint and marginal PDFs can be derived from SOMs. As accurately shown in Camus et al. (2011b), besides interesting potentials, especially in visualization, some drawbacks in using the SOM for wave analysis have emerged with respect to other classification techniques. Indeed, the largest H_s are missed by SOMs because such extreme events are both rare (few comparisons in the "competitive" stage of the SOM learning) and distant from the others in the multi-dimensional space of input data (poorly influenced during the "cooperative" stage).
Moving from this evidence, the scientific question being asked is how we can employ SOM, with its visualization capabilities, to improve the representation of the extremes of a multivariate wave climate at a location. To answer this question we have followed three different strategies. First, we have pre-processed the SOM input data using the maximum-dissimilarity algorithm (MDA) in order to reduce the redundancies of the frequent low and moderate sea states, as done by Camus et al. (2011a). Indeed, MDA is a technique that reduces the density of inputs by preserving only the most representative (i.e., the most distant from each other in a Euclidean sense). In doing so, the most severe sea states are expected to gain weight in the learning process. We have called this strategy MDA-SOM. Then, we have focused on the post-processing of the SOM outputs. In this context, we have applied a two-step SOM approach (herein called TSOM), by firstly running the SOM to get a reliable representation of the low/moderate (i.e., the most frequent) wave climate, and then by running a second SOM on a reduced input sample. This new sample has been obtained by taking from the first-step SOM results the events exceeding a prescribed threshold (e.g., the 97th percentile of H_s). To present the results of two-step SOMs, we have proposed a double-sided map, showing on the left the SOM with the reliable representation of the low/moderate sea states, and on the right the map with the most severe sea states (i.e., the extremes). Then, we have applied a SOM to the peaks of the storms identified by means of a peak-over-threshold analysis (calling this strategy POT-SOM) and we have represented the results using the double-sided map. An application of the proposed TSOM approach is finally reported: we have exploited the TSOM results to compute the longshore component of the wave energy flux, showing that a more proper representation of the extreme wave climate leads to an enhanced quantification of the energy approaching the shore.
Data
The data set employed for the SOM analysis consists of wave time series gathered at the Acqua Alta oceanographic tower, owned and operated by the Italian National Research Council - Institute of Marine Sciences (CNR-ISMAR). Acqua Alta is located in the northern Adriatic Sea (Italy, northern Mediterranean Sea), approximately 15 km off the Venice coast at 17 m depth (Fig. 1), and is a preferential site for marine observations (wind, wave, tide, physical and biogeochemical water properties are routinely retrieved), with a multi-parameter measuring structure on board (Cavaleri, 2000) upgraded over the years. For this study, we have relied on a 30-year data set of 3-hourly significant wave height H_s, mean wave period T_m and mean wave direction of propagation θ_m (measured clockwise from the geographical north), observed using pressure transducers. Preliminarily, data have been pre-processed in order to remove occasional spikes. To this end, at first the time series have been treated with an ad hoc despiking algorithm (Goring and Nikora, 2002). The complete data set is therefore constituted of three variables and 50 503 sea states.
Basic statistics of the time series (Table 1) point out that sea states at Acqua Alta have on average low intensity (mean H_s = 0.62 m), though occasionally they can reach severe levels: the most intense event (H_s = 5.23 m, T_m = 5.36 s, θ_m = 242° N) occurred on 9 December 1992 during a storm forced by winds coming from the north-east. Such severe events are not frequent, as confirmed by the 99th percentile of H_s, which is 2.68 m. Nevertheless, they populate the wave time series at Acqua Alta and constitute the most interesting part of the sample, for instance for extreme value analysis. The mean wave period is on average 4.1 s, while the mean wave direction is 260° N; indeed, most of the waves propagate towards the western quadrants. This is represented in more detail by the histogram of the PDF of θ_m (Fig. 2, bottom panel), which shows that the most frequent directions of propagation are in the range 180° < θ_m < 360° N (western quadrants), with peaks at 247.5° and 315° N. Directions associated with the most intense sea states (H_s > 4.5 m) can be obtained from the bivariate histogram representing the joint PDF of H_s and θ_m (Fig. 2, top panel): 247.5°, 270° and 315° N.
Mild sea states and calms (H_s < 1.5 times the mean H_s, following Boccotti, 2000) are the most frequent conditions at Acqua Alta, with 80 % occurrence during the 30 years of observations. They mainly propagate towards the western quadrants too, though the principal propagation direction of such sea states is north-west. In this context, the most frequent sea states at Acqua Alta are represented by {H_s, θ_m} = {0.25 m, 315° N}. Storms in the area (denoted as sea states with H_s ≥ 1.5 times the mean H_s) are generated by the dominant winds, i.e., the so-called Bora and Sirocco winds (Signell et al., 2005; Benetazzo et al., 2012). Bora is a gusty katabatic and fetch-limited wind that blows from the north-east; it generates intense storms along the Italian coast of the Adriatic Sea characterized by relatively short and steep waves. Sirocco is a wet wind that blows from the south-east; it is not fetch limited and it generates longer and less steep waves than Bora, which come from the southern part of the basin. Denoting conventionally as Bora the events with 180° ≤ θ_m ≤ 270° N, and as Sirocco the events with 270° < θ_m ≤ 360° N, it follows that Bora storms have an occurrence of 12 % and Sirocco storms an occurrence of 8 %. The most frequent {H_s, T_m} occurring in the Bora and Sirocco quadrants, shown in the bivariate (H_s − T_m) histogram (Fig. 3), are {0.15 m, 3.6 s} and {0.35 m, 3.8 s}, respectively, Sirocco being the most frequent of the two. The associated marginal histograms (Fig. 3) point out that Sirocco winds are responsible for most of the calms, in particular for sea states with H_s < 1 m, while Bora is responsible for the most energetic sea states. Nevertheless, the histogram of H_s shows that Sirocco events with H_s in the range of 4-5 m can occur as well as Bora events. Bora is also associated with the shortest period waves observed: indeed, the histograms of T_m almost coincide for waves shorter than 5.5 s, while for longer waves the probability level of Bora mean periods abruptly drops to values much smaller than those of Sirocco (which remain at non-negligible levels until 9 s). The consequence of shorter and higher Bora waves, with respect to Sirocco, is steeper waves (3 % against 2 % on average, respectively).
Theoretical background
In this section, we recall SOM features that are functional to the study. For more comprehensive readings we refer to Kohonen (2001) and other references cited in the following.
The SOM is an unsupervised neural network technique that classifies multivariate input data and projects them onto a uni- or bi-dimensional output space, called map. Typically a bi-dimensional lattice is produced as the output map. The global structure of the lattice is defined by the map shape, which can be sheet, cylindrical or toroidal. The local structure of the lattice is defined by the shape of its elements, called units, which are typically either rectangular or hexagonal. The output map produced by a SOM on wave input data (e.g., as in Camus et al., 2011a) furnishes an immediate picture of the multivariate wave climate and allows one to identify, among others, the most frequent sea states along with their significant wave height, mean direction of propagation and mean period.
The core of SOM is represented by the learning stage. Therefore, the choice of functions and parameters that control learning is crucial to obtain reliable maps. In SOM, the classification of input data is performed by means of competitive-cooperative learning: at each iteration, the elements of the output units compete among themselves to be the winning or best-matching units (BMUs), i.e., the closest to the input data according to a prescribed metric (competitive stage), and they organize themselves due to lateral inhibition connections (cooperative stage). Usually, given that the chosen metric is a Euclidean distance, inputs have to be normalized before learning (e.g., by imposing unit variance or a [0, 1] range for all the input variables) and de-normalized once finished. The lateral inhibition among the map units is based upon the map topology and upon a neighboring function that expresses how much a BMU affects the neighboring ones at each step of the learning process. During the learning process, the neighboring function reduces its domain of influence according to the decrease of a radius, from an initial to a final user-defined value. Learning can be performed sequentially, i.e., presenting the input data one at a time to the map, as done by the original incremental SOM algorithm. A more recent algorithm performs batchwise learning, presenting the input data set all at once to the map (Kohonen et al., 2009). While the sequential algorithm requires the accurate choice of a learning rate function, which decreases during the process, the batch algorithm does not. At the beginning of the learning stage, the map has to be initialized: randomly or, preferably, as an ordered two-dimensional sequence of vectors obtained from the eigenvalues and eigenvectors of the covariance matrix of the data. In both SOM algorithms the learning process is performed over a prescribed number of iterations that should lead to an asymptotic equilibrium. Even if Kohonen (2001) argued that convergence is not a problem in practice, the convergence of the learning process to an optimal solution is however an unsolved issue (convergence has been formally proved only for the univariate case; Yin, 2008). The reason is that, unlike other neural network techniques, a SOM does not perform a gradient descent along a cost function that has to be minimized (Yin, 2008). Hence, in order to achieve reliable maps, the degree of optimality has to be assessed in other ways, e.g., by means of specific error metrics. The most common ones are the mean quantization error and the topographic error (Kohonen, 2001). The former is the average of the Euclidean distances between each input data point and its BMU, and is a measure of the goodness of the map in representing the input. The latter is the percentage of input data that do not have first and second best-matching units adjacent in the map, and is a measure of the topological preservation of the map.

[Fig. 3 caption fragment: 3 % for Bora, 2 % for Sirocco, g being the gravitational acceleration; red solid lines denote the wave breaking limit (7 %); resolutions ΔH_s = 0.2 m and ΔT_m = 0.2 s.]
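The two error metrics can be sketched as follows for a toy map stored as a dict from grid coordinates to codebook vectors. Adjacency is taken here as Chebyshev distance ≤ 1 on a rectangular grid; the hexagonal lattice used in the paper differs only in the adjacency test.

```python
# Sketch of the mean quantization error and topographic error, pure Python.
import math

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def two_best_units(x, som):
    # rank map units by distance to x; first two are the BMUs
    ranked = sorted(som, key=lambda k: dist(x, som[k]))
    return ranked[0], ranked[1]

def quantization_error(data, som):
    # mean distance between each input and its best-matching unit
    return sum(dist(x, som[two_best_units(x, som)[0]]) for x in data) / len(data)

def topographic_error(data, som):
    # fraction of inputs whose first and second BMUs are NOT adjacent
    bad = 0
    for x in data:
        (r1, c1), (r2, c2) = two_best_units(x, som)
        if max(abs(r1 - r2), abs(c1 - c2)) > 1:
            bad += 1
    return bad / len(data)

som = {(0, 0): (0.0, 0.0), (0, 1): (1.0, 0.0),
       (1, 0): (0.0, 1.0), (1, 1): (1.0, 1.0)}
data = [(0.0, 0.0), (1.0, 1.0), (0.1, 0.0)]
qe = quantization_error(data, som)
te = topographic_error(data, som)
```

Lower values of both metrics indicate a better map; the sensitivity analysis described in the next section minimizes exactly these two quantities.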
SOM setup
In this paper, the SOM technique has been applied by means of the SOM toolbox for MATLAB (Vesanto et al., 2000), which allows for most of the standard SOM capabilities, including pre- and post-processing tools. Among the techniques available, we have chosen the batch algorithm because, together with a linear initialization, it permits repeatable analyses; i.e., several SOM runs with the same parameters produce the same result (Kohonen et al., 2009). This is not a general feature of SOM, as the non-unique character of both the random initialization and the selection of the data in the sequential algorithm leads to always different, though consistent, SOMs (Kohonen, 2001).
Parameters controlling the SOM topology and batch learning have been accurately examined, and their values have been chosen as the result of a sensitivity analysis aimed at attaining the lowest mean quantization and topographic errors. Therefore, we have chosen bi-dimensional squared SOM outputs that are sheet shaped and with hexagonal cells. This kind of topology has been preferred to others (e.g., rectangular lattice, toroidal shape, rectangular cells) because the maps produced this way had the best topological preservation (low topographic error) and visual appearance. The map's size is 13 × 13 (169 cells); hence, each cell represents approximately 300 sea states on average, if the complete data set is considered. The lateral inhibition among the map units is provided by a cut-Gaussian neighborhood function that ensures a certain stiffness of the map (Kohonen, 2001) during the batch learning process (1000 iterations). At the same time, to allow the map to widely span the data set, the neighborhood radius has been set to 7 at the beginning, i.e., more than half the size of the map, and then linearly decreased to 1 during a single-phase learning process.
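A minimal, deterministic sketch of the batch learning described above, reduced to a 1-D map on scalar data to keep it short (the 13 × 13 sheet with a cut-Gaussian neighborhood is the same idea in two dimensions): linear initialization, a linearly shrinking neighborhood radius, and batch (weighted-mean) codebook updates. All numbers are illustrative, not the paper's configuration.

```python
# Toy batch SOM: 1-D map, Gaussian neighborhood, shrinking radius.
import math

def batch_som_1d(data, n_units=5, iters=50, r0=3.0, r1=0.5):
    lo, hi = min(data), max(data)
    # linear initialization across the data range
    units = [lo + (hi - lo) * i / (n_units - 1) for i in range(n_units)]
    for t in range(iters):
        radius = r0 + (r1 - r0) * t / (iters - 1)   # linearly shrinking radius
        num = [0.0] * n_units
        den = [0.0] * n_units
        for x in data:
            # competitive stage: find the best-matching unit
            bmu = min(range(n_units), key=lambda j: abs(x - units[j]))
            # cooperative stage: Gaussian neighborhood weights
            for j in range(n_units):
                h = math.exp(-((j - bmu) ** 2) / (2 * radius ** 2))
                num[j] += h * x
                den[j] += h
        # batch update: each unit becomes a neighborhood-weighted mean
        units = [num[j] / den[j] if den[j] > 0 else units[j] for j in range(n_units)]
    return units

data = [0.1, 0.2, 0.3, 2.0, 2.1, 2.2, 5.0]   # frequent lows, one rare "extreme"
units = batch_som_1d(data)
qe = sum(min(abs(x - u) for u in units) for x in data) / len(data)
```

Because every unit is a weighted mean of the data, the largest codebook value stays strictly below the single extreme datum, which is the underestimation of extremes discussed in the following sections.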
Input data have been normalized so that the minimum and maximum distances between two realizations of a variable are 0 and 1, respectively. To this end, according to Camus et al. (2011a), the following normalizations have been used:

H = (H_s − min H_s) / (max H_s − min H_s),
T = (T_m − min T_m) / (max T_m − min T_m),
θ = θ_m / 180°.  (1)

Therefore, H and T range in [0, 1], while θ ranges in [0, 2].
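In code, the normalization above can be sketched as min-max scaling for H_s and T_m and division by 180° for θ_m, consistent with the stated ranges ([0, 1] for H and T, [0, 2] for θ, so that the largest possible circular distance between two directions equals 1).

```python
# Normalization of the trivariate wave climate before SOM learning.
def normalize_climate(hs, tm, theta_deg):
    h_lo, h_hi = min(hs), max(hs)
    t_lo, t_hi = min(tm), max(tm)
    H = [(x - h_lo) / (h_hi - h_lo) for x in hs]    # [0, 1]
    T = [(x - t_lo) / (t_hi - t_lo) for x in tm]    # [0, 1]
    TH = [x / 180.0 for x in theta_deg]             # [0, 2]
    return H, T, TH

# illustrative values spanning the ranges reported for Acqua Alta
H, T, TH = normalize_climate([0.05, 2.5, 5.23], [0.5, 5.0, 10.1], [0.0, 242.0, 360.0])
```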
To take into account the circular character of θ, the Euclidean-circular distance between the normalized sea state {H_i, T_i, θ_i} and the SOM unit {H_j, T_j, θ_j} is defined as

d_ij = [(H_i − H_j)² + (T_i − T_j)² + min(|θ_i − θ_j|, 2 − |θ_i − θ_j|)²]^(1/2).  (2)

The Euclidean-circular distance has therefore been implemented in the scripts of the SOM toolbox for MATLAB where distance is calculated.
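A sketch of the Euclidean-circular distance of Eq. (2): the direction term uses the shortest arc on the [0, 2] circle, so that two normalized directions on either side of the wrap-around point, such as 1.95 and 0.05, are treated as close.

```python
# Euclidean-circular distance between two normalized sea states {H, T, theta}.
import math

def circ_diff(a, b):
    d = abs(a - b)
    return min(d, 2.0 - d)      # shortest arc on the [0, 2] circle

def ec_distance(si, sj):
    (h1, t1, th1), (h2, t2, th2) = si, sj
    return math.sqrt((h1 - h2) ** 2 + (t1 - t2) ** 2 + circ_diff(th1, th2) ** 2)

d = ec_distance((0.5, 0.5, 1.95), (0.5, 0.5, 0.05))   # direction term wraps: 0.1
```

A plain Euclidean distance would instead return 1.9 for the direction term, wrongly separating nearly aligned directions.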
SOM strategies to characterize wave extremes
In this section, the results of the standard SOM approach (applied one time, hence called single-step SOM) and the results of the different strategies proposed to improve the representation of extremes are presented. The performances of the single-step SOM, MDA-SOM and TSOM are assessed by comparing the wave parameter time series and their empirical marginal PDFs to the time series reconstructed from the results of the different strategies and the relative PDFs, respectively. POT-SOM is treated separately because a direct comparison with the other strategies using the described methods is not possible.
Single-step SOM
A single-step SOM has been applied using the setup illustrated in Sect. 3.2. The SOM output is shown in Fig. 4. According to the map, the most frequent sea states are represented by the triplet {0.17 m, 3.5 s, 323° N}, which substantially resembles the information that one could have gathered from the bivariate (H_s − T_m) and (H_s − θ_m) histograms (Fig. 3), though these are not formally related to one another. Most cells show wave propagation directions pointing towards the western quadrants, as also displayed in the joint and marginal histograms of θ_m (Fig. 2). The cells denoting sea states forced by land winds (pointing towards the east) are clustered in the top-left corner of the map and have low frequencies of occurrence (individual and cumulated). The frequency of occurrence of calms is 80 %, while that of Bora storms is 12 % and that of Sirocco storms is 8 % (using the definitions of calms, Bora and Sirocco storm events given in Sect. 2). Hence, the integral distribution of the observed events over H_s and θ_m is retained by SOMs. Sea states with the longest wave periods are clustered in the top-right corner of the map.
The most severe sea states are clustered in the top-right part of the map, but are limited to H_s values smaller than 2.75 m. Indeed, the triplet with the highest H_s produced by the SOM is {2.75 m, 5.9 s, 270° N}. However, the tables and histograms in Sect. 2 have shown that H_s can exceed 5.0 m at Acqua Alta. Therefore, sea states with H_s > 2.75 m are represented by cells with lower H_s. This is clear in Fig. 5, where a sequence of observed events, including one with H_s > 4.0 m, has been compared to the sequence reconstructed after SOM; i.e., for each sea state of the sequence the triplet assumes the values of the corresponding BMU. In Fig. 5, sea states with H_s > 2.75 m are represented by the cell with the highest H_s, i.e., cell no. 118 (first row, 10th column, assuming the cell numbering starts at the top-left cell and proceeds from top to bottom over map rows and then from left to right over map columns); hence, H_s is limited to 2.75 m, whereas the peak of the most severe storm in Fig. 5 has {4.46 m, 6.7 s, 275° N}. Quantitatively, for this particular event, the single-step SOM underestimates the peak by 32 % in H_s, 12 % in T_m and 2 % in θ_m. Although H_s appears to be the most affected (T_m and θ_m after a SOM are in better agreement with the original data), all the variables processed by SOM experience a tightening of their original ranges of variation, as shown in Fig. 6, which displays the marginal empirical PDFs of H_s, T_m and θ_m after SOM. Generally, the PDFs provided by SOMs are in good agreement with the original ones. However, the range of variation of H_s is reduced from [0.05, 5.23] to [0.17, 2.75] m, the range of T_m from [0.5, 10.1] to [2.4, 7.4] s, and the range of θ_m from [0, 360] to [41, 323]° N.
The maximum H s value given by the SOM (2.75 m) is quite close to the 99th percentile value (2.68 m), pointing out that the SOM provides a good representation of the wave climate up to approximately the 99th percentile. Nevertheless, the remaining 1 % of events not properly described (extending up to 5.23 m) is, for some applications, the most interesting part of the sample. This confirms that a single-step SOM provides an incomplete representation of the wave climate.
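To make the classification step concrete, the sketch below trains a toy SOM on {H s , T m , θ m } triplets and maps each sea state to its best-matching unit (BMU). This is a minimal pure-NumPy illustration, not the implementation used in the paper: the paper's setup (Sect. 3.2) uses a Euclidean-circular distance for the direction, whereas here θ m would have to be pre-encoded, e.g. as (sin θ m , cos θ m ), to approximate the circular metric; all function names are hypothetical.

```python
import numpy as np

def train_som(data, rows=13, cols=13, n_iter=5000, seed=0):
    """Minimal online SOM trainer (toy sketch, not the authors' code).

    data: (n_samples, n_features) array, features assumed pre-normalized.
    Returns the (rows, cols, n_features) codebook of unit weights."""
    rng = np.random.default_rng(seed)
    codebook = rng.uniform(data.min(0), data.max(0), (rows, cols, data.shape[1]))
    # grid coordinates used by the Gaussian neighborhood function
    gy, gx = np.mgrid[0:rows, 0:cols]
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        # best-matching unit (BMU): unit with the smallest Euclidean distance
        d = np.linalg.norm(codebook - x, axis=2)
        by, bx = np.unravel_index(d.argmin(), d.shape)
        # learning rate and neighborhood radius both decay over iterations
        lr = 0.5 * (1 - t / n_iter)
        sigma = max(rows, cols) / 2 * (1 - t / n_iter) + 0.5
        h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
        codebook += lr * h[..., None] * (x - codebook)
    return codebook

def bmu_index(codebook, x):
    """Map a sample to its best-matching unit (row, col)."""
    d = np.linalg.norm(codebook - x, axis=2)
    return np.unravel_index(d.argmin(), d.shape)
```

In practice the features would be standardized first, so that H s , T m and the direction components weigh comparably in the distance.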
Maximum-dissimilarity algorithm and SOM (MDA-SOM)
In order to reduce redundancy in the input data and to enable a wider variety of represented sea states, in previous studies (e.g., Camus et al., 2011a) authors applied the MDA before the SOM process. In doing so, a new input data set for the SOM is constituted by sampling the original data so that the chosen sea states have the maximum dissimilarity (herein measured with the Euclidean-circular distance) from one another. As a result of MDA, a reduction of the number of sea states with low/moderate H s , i.e., the most frequent at Acqua Alta, is observed. Hence, MDA-SOM is expected to provide a better description of the extreme sea states. Nevertheless, as pointed out by Camus et al. (2011a), the reduction of the sample size leads to lower errors in the 99th percentile of H s (chosen to represent extremes) but also to higher errors in the average of H s . Therefore, in terms of percentage reduction of the original input data set, an optimum balance has to be found in order to obtain good descriptions of both the average and the extreme wave climate.
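The greedy selection underlying MDA can be sketched as follows. This is an illustrative sketch of the idea in Camus et al. (2011a), using a plain Euclidean distance instead of the Euclidean-circular one adopted in the paper: starting from a seed sea state, at each step the sample whose minimum distance to the already-selected subset is largest is added.

```python
import numpy as np

def mda_select(data, n_keep, seed_index=0):
    """Greedy maximum-dissimilarity subsampling (illustrative sketch).

    data:      (n, d) array of sea states.
    n_keep:    size of the reduced subset.
    Returns the indices of the selected, maximally dissimilar samples."""
    selected = [seed_index]
    # minimum distance from every sample to the current subset
    dmin = np.linalg.norm(data - data[seed_index], axis=1)
    while len(selected) < n_keep:
        nxt = int(dmin.argmax())          # most dissimilar remaining sample
        selected.append(nxt)
        dmin = np.minimum(dmin, np.linalg.norm(data - data[nxt], axis=1))
    return np.array(selected)
```

Because each new pick maximizes the distance to the subset, rare (extreme) sea states are retained preferentially, which is exactly the effect described above: the crowded low/moderate region is thinned out.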
In the MDA-SOM application, we have pre-processed the input data set by applying MDA, as described in detail in Camus et al. (2011a). Looking for the best reduction coefficient, the original data set has been reduced by means of MDA from the initial 50 503 sea states (100 %) to 5050 (10 %), in steps of 10 %. The absolute errors on the average and on the 99th percentile of H s after MDA-SOM, relative to the original data set, are summarized in Table 2. The error on the average H s , initially 2 %, monotonically increases up to 57 %, while the error on the 99th percentile of H s , initially 9 %, decreases down to 3 % at 50-60 % and then increases up to 27 %. Aiming principally at widening the variables' ranges (hence at a better description of extremes) without losing quality in the average climate description, we chose the 80 % reduction (7 % error on the average H s , 4 % error on the 99th percentile of H s ). The corresponding MDA-SOM output, displayed in Fig. 7, is topologically equivalent to that produced by the single-step SOM (Fig. 4), except for minor differences in the location of some sea states. However, the most frequent sea state has {H s , T m , θ m } = {0.28 m, 2.8 s, 328° N}, which still resembles what has emerged from the histograms of Sect. 2, even if T m is in poorer agreement than for the single-step SOM. Also, the sea state with the highest H s has the triplet {2.8 m, 6.0 s, 275° N}; hence, even if the input data set has been reduced, the representation of extremes is still unsatisfactory. This is confirmed by the comparison of the original and the reconstructed (after MDA-SOM) time series.
In Fig. 8, the comparison has been extended to the results of 60 % MDA-SOM (smaller error on the 99th percentile of H s , see Table 2) and 10 % MDA-SOM (maximum input data set reduction), in order to investigate whether MDA-SOM can enhance the extreme wave climate representation even at the cost of worsening the average one. Actually, 60 % MDA-SOM performs only slightly better than 80 % MDA-SOM in describing the chosen events; indeed, the highest H s triplet, which represents the sea states at the peak of the most severe storm, is {2.93 m, 5.8 s, 258° N}. The marginal empirical PDFs after MDA-SOM are compared in Fig. 9 to the PDFs of the original data set. The distributions are in good agreement and the representation is more complete with respect to the single-step SOM, especially concerning H s . Nevertheless, the 10 % MDA-SOM distribution for H s exhibits a larger departure from the original distribution at 1.7 m with respect to the single-step SOM. Also, the 10 % MDA-SOM distributions, which provide the widest ranges, locally depart from the reference distributions, in particular for T m and θ m . The frequency of occurrence of calms is 81 %, while that of Bora storms is 12 % and that of Sirocco storms is 7 %. Hence, except for a minor change in the frequency of calms and Sirocco events, the overall statistics resemble those directly derived from the Acqua Alta data set.
Two-step SOM (TSOM)
A TSOM has then been applied to provide a more complete description of the wave climate at Acqua Alta. To this end, the SOM algorithm has been run a first time on the original data set, without reductions (first step). Then, the outputs have been post-processed: a threshold H * s has been fixed, and the cells having H s > H * s have been used to constitute a new input data set, composed of the sea states represented by the cells exceeding the threshold. Hence, a second SOM has been run on the new data set (second step). Using the same SOM setup as in the first step, we have obtained a two-sided map (Fig. 10): the first map (left panel) provides a good representation of the low/moderate wave climate but fails in the description of the most severe sea states, which are described in the second map (right panel), focusing on the climate over H * s . Three thresholds have been tested, corresponding to the 95th, 97th and 99th percentiles of H s : 1.80, 2.12 and 2.68 m, respectively. In the following, we focus on the results with the 97th percentile threshold, since they have turned out to be more representative of the extreme wave climate than the others.
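The post-processing step of the TSOM can be sketched as follows: given the first-step codebook, fix H * s as a quantile of H s , find the units exceeding it, and collect the sea states mapped to those units as the input of the second SOM. The function below is an illustrative sketch under simplifying assumptions (plain Euclidean BMU search, H s stored in a known column); it is not the authors' code.

```python
import numpy as np

def tsom_subset(data, codebook, hs_col=0, q=0.97):
    """Second-step input selection for a two-step SOM (illustrative sketch).

    data:      (n, d) sea states, with H_s in column `hs_col` (assumption).
    codebook:  (rows, cols, d) unit weights from a first-step SOM.
    q:         H_s quantile defining the threshold H*_s.
    Returns the samples whose BMU exceeds the threshold; these feed the
    second SOM, which re-maps the extreme climate at full map resolution."""
    hs_star = np.quantile(data[:, hs_col], q)
    units = codebook.reshape(-1, data.shape[1])
    # BMU of every sample (flat unit index, plain Euclidean distance)
    d2 = ((data[:, None, :] - units[None, :, :]) ** 2).sum(-1)
    bmu = d2.argmin(1)
    extreme_units = np.flatnonzero(units[:, hs_col] > hs_star)
    return data[np.isin(bmu, extreme_units)]
```

Running the same SOM trainer again on the returned subset yields the right-hand map of the two-sided representation.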
Figure 10 depicts the TSOM results with H * s = 2.12 m (97th percentile). The first map, on the left, is the map shown in Fig. 4, representing the whole wave climate at Acqua Alta. On that map, the six cells with H s > 2.12 m have been encompassed by a black solid line. Without such cells, the map on the left represents the low/moderate sea states, i.e., the 97 % of the whole original data set constituted by events with H s below or equal to the 2.12 m threshold. The remaining 3 % of events, represented by the encompassed cells, are the most severe events at Acqua Alta. The first-step SOM associates to such events 2.12 ≤ H s ≤ 2.75 m, 5.0 ≤ T m ≤ 6.5 s and 249 ≤ θ m ≤ 299° N. Hence, according to the SOM, the most severe sea states pertain to a rather narrow directional sector (50°), hardly allowing one to discriminate between Bora and Sirocco conditions. A more detailed representation of such extremes is provided by the second map in Fig. 10, on the right, where extreme Bora and Sirocco events are more widely described by cells. Indeed, a sort of diagonal (from the top-right corner to the bottom-left corner of the map) divides the cells: Bora events are clustered on the left of this diagonal (top-left part of the map), while Sirocco events lie on the right of it (bottom-right part of the map). On the diagonal, cells represent sea states traveling towards the west. This configuration somehow resembles the one observed in the left map, except for the land sea states in the top-left corner. The most severe sea states are clustered in the top-right corner of the map and also, though to a smaller extent, in the bottom-left part of it. The resulting ranges of H s , T m and θ m are 1.94 ≤ H s ≤ 4.26 m, 4.4 ≤ T m ≤ 8.3 s and 224 ≤ θ m ≤ 316° N, respectively.
The widened ranges of the wave parameters provided by a TSOM allow for a more complete description of the sea states at Acqua Alta, including the most severe ones, as shown in Fig. 11. There, for the sequence of events presented in the previous sections, the reconstructed TSOM time series is compared to the original one. Results with the 95th and 99th percentile TSOMs are also plotted, and it clearly appears that the differences among the three tests (i.e., TSOMs with the H s threshold at the 95th, 97th and 99th percentiles) are very small, in particular for what concerns θ m . Nevertheless, the 95th percentile TSOM yields a smaller estimate of the highest H s peak with respect to the others, and the 99th percentile TSOM deviates more than the others from the original T m .
Such differences are also found in the marginal empirical PDFs of the wave parameters, shown in Fig. 12. Indeed, p(H s ) and p(T m ) locally differ among the three thresholds and also from the original PDF, in particular for the largest values of H s and T m . As expected, the higher the threshold, the more the H s range widens, extending to higher values. Hence, the 99th percentile TSOM provides the most complete representation of the wave climate, at least concerning H s . However, the widest T m range is obtained with the 97th percentile TSOM and the narrowest with the 99th percentile TSOM. Instead, p(θ m ) is equally represented by the three thresholds and is in excellent agreement with the original PDF, though the θ m range is limited with respect to the complete circle. In addition, local departures from the original PDFs are still observed, especially for H s and T m . The frequency of occurrence of calms is 81 %, while that of Bora storms is 11 % and that of Sirocco storms is 8 %. Hence, except for a minor change in the frequency of calms and Bora events, the overall statistics resemble those observed at Acqua Alta.
Peak-over-threshold SOM (POT-SOM)
As an additional strategy to provide a more complete representation of the wave climate through SOMs, we tested a third approach: a SOM was applied initially to the whole data set, and then to the peaks of the storms defined by means of the peak-over-threshold technique. Storms were identified according to the definition of Boccotti (2000): a storm is a sequence of H s that remains for at least 12 h above a given threshold H * s corresponding to 1.5 times the mean H s . We computed the mean H s over the whole record and then, with H * s = 0.93 m, we identified 710 storms. The peaks of the storms constitute a new data set that has been analyzed by means of a SOM. In the end, we obtained a double-sided map that represents, at the same time, the whole wave climate (on the left) and the "stormy" part of it (on the right).
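Boccotti's storm definition lends itself to a simple scan over the H s series. The hypothetical helper below (plain Python, an illustrative sketch rather than the authors' code) returns the threshold and the storm peaks that would feed the second map:

```python
import math

def storm_peaks(hs, dt_hours=1.0, min_dur_hours=12.0, factor=1.5):
    """Identify storms with Boccotti's (2000) rule and return their peaks.

    hs: sequence of significant wave heights at a fixed time step.
    A storm is a run of consecutive H_s values above factor * mean(H_s)
    lasting at least `min_dur_hours`; the storm peak is the run maximum.
    (In the paper, H*_s = 0.93 m over the Acqua Alta record.)"""
    threshold = factor * sum(hs) / len(hs)
    peaks, run = [], []
    for h in list(hs) + [-math.inf]:       # sentinel flushes the last run
        if h > threshold:
            run.append(h)
        else:
            if len(run) * dt_hours >= min_dur_hours:
                peaks.append(max(run))
            run = []
    return threshold, peaks
```

Runs shorter than 12 h are discarded, so isolated exceedances do not count as storms.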
The POT-SOM output map is shown in Fig. 13. As expected, the stormy events are Bora and Sirocco events: the former are clustered in the upper and middle parts of the map, the latter in the lower part of it. The most severe storms, concentrated on the right side of the map, are both Bora and Sirocco events. The triplet with the highest H s is {4.27 m, 6.32 s, 265° N}, and the maximum H s value is very close to the 99th percentile of H s of the new data set, i.e., 4.28 m. Hence, 99 % of the stormy events are included within the represented range, resembling what was observed for the original data set analyzed with a single-step SOM.
Discussion
A summary of the performances of the different SOM strategies is given in Table 3. There, the single-step SOM, the MDA-SOM with 80 % reduction and the TSOM with the H s threshold at the 97th percentile are compared in their capability of representing the wave climate at Acqua Alta by means of the cells. The POT-SOM is not directly comparable to the other strategies, since the data set used for its second map is composed of the storm peaks only. As done in the previous sections, the performances are assessed by comparing the reconstructed time series from each strategy with the original ones, and the resulting marginal PDFs with the PDFs of the original data. Here, however, the performances are quantified by statistical parameters (see the caption of Table 3 for the nomenclature). Generally, the reconstructed time series are in agreement with the original ones, as shown by the high r av (over 0.98) and r SD (over 0.89), as well as the high CC (over 0.95) and low RMSE (below 0.19 m for H s , 0.37 s for T m and 23° for θ m ). Nevertheless, the highest ratios and correlation coefficients, and the lowest RMSE, pertain to the TSOMs. Similar conclusions can be drawn for the PDFs, which are reproduced with very high CC PDF (over 0.95) and low RMSE PDF (below 0.04) by all the approaches, but to a greater extent by the TSOMs. As expected, the widest variability among the different strategies concerns the H s range. With the only exception of θ m , whose widest range is provided by MDA-SOM, the TSOM turned out to be the most efficient in providing the most complete representation among the tested strategies.
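The Table 3 measures follow directly from their definitions in the caption. The sketch below (an illustrative helper, not the authors' code) evaluates them for one reconstructed series against the original one:

```python
import math

def series_metrics(orig, recon):
    """Performance measures of Table 3, from the caption's definitions:
    ratio of averages (r_av), ratio of standard deviations (r_SD),
    cross-correlation coefficient (CC) and root mean square error (RMSE)."""
    n = len(orig)
    mo, mr = sum(orig) / n, sum(recon) / n
    so = math.sqrt(sum((x - mo) ** 2 for x in orig) / n)
    sr = math.sqrt(sum((x - mr) ** 2 for x in recon) / n)
    cc = sum((x - mo) * (y - mr) for x, y in zip(orig, recon)) / (n * so * sr)
    rmse = math.sqrt(sum((x - y) ** 2 for x, y in zip(orig, recon)) / n)
    return {"r_av": mr / mo, "r_SD": sr / so, "CC": cc, "RMSE": rmse}
```

A perfect reconstruction gives r av = r SD = CC = 1 and RMSE = 0; the Table 3 values quantify how far each strategy falls short of that.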
We verified that a larger single-step SOM (e.g., 25 × 25, not shown here) can produce a wider range of extremes with respect to the one used in the study (i.e., 13 × 13): the units' maximum H s is 3.56 m instead of 2.75 m. With the same map configuration (i.e., 25 × 25), MDA preselection can further widen this range towards the extremes: 3.63 m is the units' maximum H s obtained with an 80 % reduction of the sample (using MDA), and 3.66 m the units' maximum H s with a 40 % reduction. This has the effect of reducing the absolute error on the 99th percentile of H s (1 % with the 80 % reduction and 11 % with the 40 % reduction). However, the most extreme sea states are still far from being properly represented (we recall that the most extreme sea state observed had H s = 5.23 m). In addition, and most importantly, while a larger number of elements in the map can improve the SOM performance shown in the paper, it certainly worsens the readability of the map and the possibility of extracting quantitative information from it. Indeed, considering, for instance, the 25 × 25 map, the sea states at a site would be represented by 625 typical sea states: a huge number that is hardly manageable for a practical classification of the wave conditions.
Application of TSOM
An application of the TSOM is proposed to show that a more detailed representation of the extreme wave climate can enhance the quantification of the longshore component of the shallow-water wave energy flux P (per unit shore length), expressed as (Komar and Inman, 1970)

P = Ec g sin α cos α,

where E = ρgH s ²/16 is the wave energy per unit crest length (ρ being the water density), c g is the group celerity and α is the mean wave propagation direction measured counterclockwise from the normal to the shoreline. P is a driving factor for the potential longshore transport, and its dependence upon the wave energy E (which in turn depends on the square of H s ) suggests that an accurate representation of H s is crucial. As the shoreline in front of the Acqua Alta tower is almost parallel to the 20° N direction (i.e., orthogonal to the 290° N direction), the longshore transport is directed towards the southwest when P is positive, and towards the northeast when P is negative. Given the wave energy flux Ec g , P is maximized when α = ±45°, which corresponds to θ m = 245° N and θ m = 335° N, respectively.
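As a numerical illustration of the formula, the sketch below evaluates P for a single sea state, using the shallow-water approximation c g ≈ √(gd). This is a simplification for illustration only (the paper propagates the sea states with the full linear-theory transformations described below), and the function name and default density are assumptions.

```python
import math

def longshore_flux(hs, depth, alpha_deg, rho=1025.0, g=9.81):
    """Longshore component of the wave energy flux (Komar & Inman, 1970):
    P = E * c_g * sin(alpha) * cos(alpha), with E = rho*g*Hs^2/16 and,
    as a simplifying assumption, the shallow-water group celerity
    c_g = sqrt(g*d). alpha is measured from the shore-normal; the sign
    of P gives the direction of the potential longshore transport."""
    E = rho * g * hs ** 2 / 16.0          # wave energy per unit crest length
    cg = math.sqrt(g * depth)             # shallow-water group celerity
    a = math.radians(alpha_deg)
    return E * cg * math.sin(a) * math.cos(a)  # W per metre of shore
```

The sin α cos α factor makes P vanish for shore-normal waves and peak at α = ±45°, consistent with the θ m = 245° N and 335° N directions quoted above.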
In order to obtain the shallow-water values of the wave parameters, following Reguero et al. (2013), we propagated the Acqua Alta sea states resulting from the TSOM (see maps in Fig. 10) from 17 to 5 m depth (a typical closure depth in the region), approximately accounting for the wave transformations, i.e., shoaling, refraction and wave breaking. In doing so, we assumed straight and parallel bottom contour lines, we neglected wave energy dissipation prior to wave breaking, and we allowed H s to reach at most 80 % of the water depth (depth-induced wave breaking criterion). Roughly, shoaling mostly affects the Sirocco sea states, which are typically associated with longer wavelengths than the Bora sea states. In shallow water, refraction tends to reduce the difference between Bora and Sirocco directions with respect to Acqua Alta, as the shore-normal direction, which waves tend to align to, is very close to the 270° N boundary that we assumed in order to discriminate between the two conditions. Sea states forced by land winds (20° N < θ m < 200° N) were excluded from the analysis.
The longshore component of the wave energy flux P at 5 m depth is shown in Fig. 14. It is worth noting that the left map represents the longshore component of the wave energy flux resulting from the single-step SOM technique (i.e., the left panel of Fig. 10). Here, P ranges between −2 and 8 kW m −1 , and the highest values are mainly due to Bora events, which are responsible for potential longshore transport towards the southwest (even if a few Sirocco events with θ m close to 270° N have the same effect). According to the left map, the transport towards the northeast is due to Sirocco events that, however, cause a less intense potential transport. The highest P values are associated with the highest H s events, clustered in the cells at the top of the Fig. 10 left map. The right map of Fig. 14 describes the longshore flux component due to the Acqua Alta sea states represented by the SOM cells exceeding the 97th percentile H s threshold (i.e., the six cells bounded by the black line in the left map). The range of P variation widens considerably when the extreme sea states are considered, with values ranging from −20 to 20 kW m −1 . As observed in the right map of Fig. 10, the sea states exceeding the 97th percentile threshold on H s are Bora and Sirocco events. The Bora events in the top-left part of the map (except for two cells in the bottom-right corner) contribute to the positive, i.e., south-westward, transport, while the Sirocco events in the bottom-right part contribute to the negative, i.e., north-eastward, transport. The most intense transport is associated with the highest H s cells at the bottom-left, bottom-right and top-right corners of the Fig. 10 right map. The major difference with respect to the single-step SOM estimate concerns the Sirocco sea states, associated with negative P , whose most intense value extends from −2 to −20 kW m −1 .
The mean longshore wave energy flux in shallow water, P , i.e., the average of P weighted by the frequencies of occurrence F over the 30 years of observations, was obtained by taking the absolute value of P from the maps of Fig. 14 and is 0.57 kW m −1 (Table 4). In order to support this estimate, we compared our 1.71 kW m −1 estimate of the mean wave energy flux Ec g at Acqua Alta against the 1.5 kW m −1 value obtained at the same site over 1996-2011 by Barbariol et al. (2013). The contributions to P from Bora (P + ) and Sirocco (P − ) are 0.45 and −0.12 kW m −1 , respectively, pointing out the predominant effect of Bora on the longshore transport over the western side of the Gulf of Venice. For comparison, P was also computed using the single-step SOM results (see Table 4): in this case, P is 0.52 kW m −1 , P + is 0.41 kW m −1 and P − is −0.11 kW m −1 . Hence, with respect to the TSOM, the single-step SOM estimate of the mean longshore energy flux is 9.0 % lower for P , 7.5 % lower for P + and 16.5 % lower for P − .
Conclusions
In this paper, we have tested different strategies aimed at improving the characterization of the multivariate wave climate using SOMs. Indeed, we have verified that, besides a satisfactory description of the low/moderate wave climate (in agreement with the usual uni- and bivariate histograms), the single-step SOM approach misses the most severe sea states, which are hidden in SOM cells with H s even considerably smaller than the extreme values.
For our purpose, we used the 1979-2008 trivariate wave climate {H s , T m , θ m } recorded at the Acqua Alta tower, and we showed that, for instance, the single-step SOM assigned most of the sea states with H s > 2.75 m to the {2.75 m, 5.9 s, 270° N} class. Hence, the most interesting part of the wave climate was condensed within a few cells of the map, also hindering the distinction between Bora and Sirocco events, i.e., the prevailing meteorological conditions in the northern Adriatic Sea. To increase the weight of the most severe and rare events in the SOM classification, we tested a strategy based on the pre-processing of the input data set (i.e., MDA-SOM) and a strategy based on the post-processing of the SOM outputs (i.e., TSOM). The results presented in the study showed that the post-processing technique is more effective than the pre-processing one. Indeed, the TSOM allowed a more accurate and complete representation of the sea states than the one furnished by MDA-SOM, because it provided a wider range of the wave parameters (particularly H s ) and more reliable a posteriori reconstructions of the time series and of the empirical marginal PDFs. Nevertheless, some deviations from the original PDFs were observed and the range of θ m was not complete, such that sea states traveling towards the north were not properly described. This calls for further studies to improve SOM applications to wave analysis, which are rather promising thanks to the well-recognized visualization capabilities of SOMs. In this context, we proposed a double-sided map representation, which provides (on the left) a description of the whole wave climate that is particularly reliable for the low/moderate events and is completed (on the right) by the description of the extreme wave climate. This novel representation was also employed to provide a SOM classification of the storm peaks, based on the peak-over-threshold approach, on the right (POT-SOM).
Finally, a TSOM was applied to the assessment of the potential longshore wave energy flux to show how practical oceanographic and engineering applications can benefit from this novel SOM strategy. Indeed, the mean flux in front of the Venice coast was found to be 9 % higher when evaluated after a TSOM instead of a single-step SOM.
Figure 1 .
Figure 1.Acqua Alta (AA) oceanographic tower location in the northern Adriatic Sea, Italy (left panel).The tower is depicted in the right panel.
Figure 2 .
Figure 2. Observed bivariate wave climate at Acqua Alta: histograms representing the joint PDF of H s and θ m (top panel) and the marginal PDF of θ m (bottom panel). Resolutions are ΔH s = 0.5 m and Δθ m = 22.5°.
Figure 3 .
Figure 3. Observed bivariate wave climate at Acqua Alta: histograms representing the joint PDFs of H s and T m for Bora (top-left panel) and Sirocco (top-right panel) sea states and the corresponding marginal PDFs of H s (bottom-left panel; blue for Bora, red for Sirocco) and T m (bottom-right panel; blue for Bora, red for Sirocco). Black solid lines in the top panels denote the average wave steepness 2πH s /(gT m ²) (3 % for Bora, 2 % for Sirocco, g being the gravitational acceleration); red solid lines denote the wave breaking limit (7 %). Resolutions are ΔH s = 0.2 m and ΔT m = 0.2 s.
Figure 5 .Figure 6 .
Figure 5. Single-step SOM: BMU cells (top panel) and comparison between original (blue solid lines) and reconstructed (red dashed lines) time series of H s (central-top panel), T m (central-bottom panel) and θ m (bottom panel), for a chosen sequence of events.
Figure 9 .
Figure 9. MDA-SOM: comparison between original (black solid lines) and resulting histograms representing the PDFs of H s (top panel), T m (central panel) and θ m (bottom panel), for the whole period of observations.Data set reduction: 80 % (blue dashed-squares line), 60 % (red dashed-circles line) and 10 % (green dashed line).
Figure 10 .Figure 11 .
Figure 10.TSOM output map with threshold H * s = 2.12 m (97th percentile of H s ).H s : inner hexagons' color, T m : vectors' length, θ m : vectors' direction, F : outer hexagons' color.Wave climate after a single-step SOM (left panel) and TSOM extreme wave climate (i.e., over the threshold, right panel and cells within black solid line in the left panel).For the right panel map, mean quantization error: 0.04; topographic error: 6 %.
Figure 12 .
Figure 12.TSOM: comparison of original (black solid line) and resulting histograms representing the PDFs of H s (top panel), T m (central panel) and θ m (bottom panel), for the whole data set.Thresholds: 95th (blue dashed-squares line), 97th (red dashedcircles line) and 99th (green dashed line) percentile of H s .
Figure 14 .
Figure 14. Application of the TSOM: assessment of the longshore flux of wave energy P in shallow water, after the single-step SOM (left panel) and resulting from the TSOM extreme wave climate (right panel and cells within the black solid line in the left panel). Mean wave directions at the Acqua Alta tower (blue arrows) indicate the contributions of different meteorological conditions: positive mainly due to Bora (180 ≤ θ m ≤ 270° N), negative to Sirocco (270 < θ m ≤ 360° N). Land wind events (white cells) have been excluded, and the direction of the shoreline (270° N) is shown as gray dashed lines.
Table 4 .
Application of the TSOM: assessment of the longshore flux of wave energy in shallow water, P . P is the mean over the 1979-2008 period accounting for the absolute value of P , P + is the mean of the positive P , and P − is the mean of the negative P ; Δ TSOM−SOM is the relative difference of the values computed after the TSOM with respect to the values computed after the SOM.

        SOM (kW m −1 )   TSOM (kW m −1 )   Δ TSOM−SOM
P         0.52             0.57             9.0 %
P +       0.41             0.45             7.5 %
P −      −0.11            −0.12            16.5 %
Ocean Sci., 12, 403-415, 2016 (www.ocean-sci.net/12/403/2016/) — F. Barbariol et al.: Wave extreme characterization using self-organizing maps

The single-step SOM output map (Fig. 4) merges all the information about the trivariate wave climate at Acqua Alta (H s : inner hexagons' color, T m : vectors' length, θ m : vectors' direction), including the frequency of occurrence (F : outer hexagons' color) of each {H s , T m , θ m } triplet. Hence, one can gain an immediate overview of the wave climate features and of the empirical joint PDF thanks to the visual capabilities of the SOM output. The gradual and continuous change in the wave parameters over the cells points out that the topological preservation is quite good, as confirmed by the 22 % topographic error.
Table 2 .
MDA-SOM: absolute errors of average and 99th percentile of H s after MDA-SOM relative to the original data set (%).
The widest H s range is provided by the 10 % MDA-SOM, though the maximum is nevertheless missed and, in its proximity, the original data are overestimated. Indeed, the 60 % and 10 % MDA-SOMs locally overestimate H s in the low/moderate sea states.
Table 3 .
Performance summary of the different SOM approaches, through the comparison of the reconstructed to the original time series, and of the resulting to the original PDFs. r av : ratio of the time series averages, r SD : ratio of the time series standard deviations, CC: time series cross-correlation coefficient, RMSE: time series root mean square error, CC PDF : PDF cross-correlation coefficient, RMSE PDF : PDF root mean square error.
A real-world experience of venetoclax combined with hypomethylating agents vs. monotherapy hypomethylating agents in patients with myelodysplastic syndromes and chronic myelomonocytic leukemia patients
Introduction: Current clinical research has reported the effectiveness and safety of venetoclax in combination with hypomethylating agents (VEN-HMA) in patients with myelodysplastic syndromes (MDS) and chronic myelomonocytic leukemia (CMML). Thus, this study aimed to examine the effectiveness and safety of VEN-HMA therapy in patients with MDS and CMML and compared its short-term and long-term therapeutic effects with HMA monotherapy. Method: We analyzed data from our center, comprising 19 patients with MDS and CMML who received VEN-HMA therapy, compared to 32 patients treated with HMA monotherapy. Results: The overall response rate (ORR) in the VEN-HMA group was 73.7%, compared to 59.4% in the HMA group. The survival analysis revealed that the median overall survival (mOS) time in the VEN-HMA group was 16 months, with a median progression-free survival (mPFS) time of 9 months, both of which were longer than those observed in the HMA group (p < 0.05). Key adverse events (AEs) included grade 3–4 neutropenia (89.5% in VEN-HMA group vs. 87.5% in HMA group), grade 3–4 thrombocytopenia (73.7% vs. 71.9%), and anemia (73.7% vs. 90.6%). Infection of grade 3 or higher occurred in 63.2% of patients in the VEN-HMA group and 65.6% of patients in the HMA group. Discussion: Our study has confirmed the effectiveness and safety of the combined treatment of HMAs and venetoclax, which offers significant advantages to patients due to the relatively high and rapid response rates.
Introduction
Myelodysplastic syndromes (MDS) are clonal myeloid neoplasms characterized by refractory cytopenias and morphologic dysplasia of hematopoietic cells, with a high risk of progression to acute myeloid leukemia (AML) (Sekeres and Taylor, 2022). Currently, clinical diagnosis mainly follows the 2016 revision of the World Health Organization (WHO) classification of myeloid neoplasms and acute leukemia, which is based on identified genetic and morphological abnormalities, emphasizing the importance of genetics in defining the disease (Arber et al., 2016). The annual incidence of MDS is approximately four per 100,000 people, predominantly affecting middle-aged and older individuals, with a higher prevalence in men than in women (Rollison et al., 2008). The most common clinical prognostic stratification system for MDS is the Revised International Prognostic Scoring System (IPSS-R), which incorporates cytogenetics, bone marrow blasts, hemoglobin, platelets, and absolute neutrophil counts to predict the clinical outcomes of untreated MDS patients (Greenberg et al., 2012).
Chronic myelomonocytic leukemia (CMML) is a rare clonal hematopoietic malignancy characterized by clinical features of both myelodysplasia and myeloproliferation, which also progresses to AML with a high risk of mortality (Patnaik and Tefferi, 2022). The clinical manifestations of CMML are highly diverse and characterized by persistent peripheral blood monocytosis, as well as dysplasia and proliferation of one or more blood cell lineages (Arber et al., 2016). The overall prognosis of CMML is generally poor, with a median OS time of 17 months (Guru Murthy et al., 2017). Prognosis is stratified according to the CMML-specific Prognostic Scoring System (CPSS) (Tremblay et al., 2021).
According to the 2023 NCCN guidelines, hypomethylating agents (HMAs) are recommended for lower-risk MDS patients with clinically relevant thrombocytopenia or neutropenia, as well as for higher-risk patients who are ineligible for allogeneic hematopoietic stem cell transplantation (allo-HSCT) (Greenberg et al., 2022). To date, there is a lack of standardized CMML treatment strategies that can significantly improve patient prognosis; thus, CMML therapy primarily mirrors that of MDS. Patients with low-risk CMML should undergo close clinical monitoring and receive supportive care. Patients with high-risk CMML presenting obvious symptoms should be monitored and given cytotoxic therapy, HMAs, or allo-HSCT. Notably, HMAs are the sole agents approved by the U.S. Food and Drug Administration (FDA) for CMML treatment (Tremblay et al., 2021).
Venetoclax is a novel oral BCL-2 inhibitor that kills tumor cells primarily by inducing the intrinsic apoptotic pathway. In November 2018, the FDA granted accelerated approval to venetoclax in combination with HMAs for treating AML. Given the high risk of transformation to AML, several clinical trials have explored the effectiveness of VEN-HMA in patients with MDS, yielding promising therapeutic outcomes (Ball et al., 2020). Recently, a phase 1-2 trial conducted at a single center confirmed the effectiveness and safety of azacitidine plus venetoclax in patients with high-risk MDS or CMML (Bazinet et al., 2022). However, real-world evidence on this combination remains limited. Thus, this study aimed to examine the effectiveness and safety of VEN-HMA therapy in patients with MDS and CMML and to compare its short-term and long-term therapeutic effects with HMA monotherapy.
Study design and patients
We retrospectively analyzed 51 patients with MDS or CMML who received either VEN-HMA therapy or HMA monotherapy at our institution. The diagnostic criteria followed the 2016 revision of the WHO classification of hematolymphoid tumors. Each patient received at least one administration of either the combination or monotherapy between May 2019 and February 2023 and underwent at least one evaluation after therapy. Patients with secondary MDS or prior exposure to HMAs were included, while patients who progressed or died within one treatment cycle were excluded. Survey data were collected from electronic medical records. Complete blood counts, blood biochemistry, electrocardiograms, and CT scans of the lungs were performed before the first treatment. Additionally, baseline assessments included morphology, immunophenotype, cytogenetics, and next-generation sequencing (NGS) of bone marrow. Patients were stratified into lower- or higher-risk groups based on IPSS-R and CPSS criteria, and all patients provided informed consent.
Treatment and response criteria
Patients with MDS or CMML who were newly diagnosed, or those with prior treatment failure, received azacitidine (75 mg·m−2·d−1, 7 days) or decitabine (20 mg·m−2·d−1, 5 days), either with or without venetoclax (100 mg, day 1; 200 mg, day 2; 400 mg, days 3-14). Three patients were treated with venetoclax at reduced doses due to prolonged myelosuppression. Certain patients underwent bone marrow examination post-treatment. The last follow-up was conducted in May 2023. The failed treatment regimens before venetoclax included D-CAG and single-agent HMA therapy.
Given the absence of international consensus criteria for CMML, researchers typically refer to the adult MDS/MPN International Working Group (IWG) 2006 criteria. Complete response (CR) is defined as bone marrow myeloblasts <5% and full recovery of peripheral blood counts. Marrow CR (MCR) refers to bone marrow myeloblasts less than 5% and a ≥50% decrease from pretreatment. Hematologic improvement (HI) indicates specific responses of the three peripheral blood lineages. Partial response (PR) meets the criteria of CR except that bone marrow blasts are reduced by ≥50% but still exceed 5%. Stable disease (SD) is defined as not meeting the minimum criteria for a PR, with no evidence of progression for at least 8 weeks. Treatment failure (TF) refers to progression of disease or death during treatment, while progressive disease (PD) includes scenarios where bone marrow myeloblasts increase by >50% or meet specific criteria: neutrophil or platelet decrease (>50%), hemoglobin decrease (>20 g/L), or transfusion dependence (Cheson, 2006).
The ORR encompasses the total rates of CR, MCR, HI, and PR. The median duration of response (mDOR) measures the time from when 50% of patients first achieved CR or PR to disease progression. OS refers to the time from the first venetoclax dose to death from any cause. Given the challenges in reaching OS endpoints, which require longer follow-up, PFS was also selected as a study endpoint, referring to the time from the first venetoclax dose to disease progression or death. All AEs in our study were assessed using the Common Terminology Criteria for Adverse Events 5.0 (CTCAE) and categorized into grades 1 to 5. Any AEs occurring after venetoclax treatment were documented as drug-related.

Notes to Table 1: The lower-risk group includes the IPSS-R very-low-risk, low-risk, and intermediate-risk (≤3.5 points) groups and the CPSS low-risk and intermediate-1-risk groups. The higher-risk group includes the IPSS-R intermediate-risk (>3.5 points), high-risk, and very-high-risk groups and the CPSS intermediate-2-risk and high-risk groups. a One patient's initial material was missing.
Statistics
Statistical analyses were conducted with Statistical Product and Service Solutions (SPSS) 20.0 and GraphPad Prism 8. Chi-squared tests and a non-parametric test (Mann-Whitney test) were used for categorical and continuous variable comparisons, respectively. Kaplan-Meier curves were used to describe the survival characteristics by estimating OS and PFS. Statistical differences were compared using the log-rank test. Hazard ratios (HRs) and 95% confidence intervals (CIs) were recorded. All statistical tests were two-sided, and a value of p < 0.05 was considered statistically significant.
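As an illustration of the Kaplan-Meier estimation used above, the following is a minimal, generic sketch with made-up follow-up times; it is not the study's patient data, nor the SPSS/GraphPad implementation:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.

    times  : follow-up time (e.g., months) for each patient
    events : 1 if the event (death) was observed, 0 if censored
    Returns a list of (event_time, survival_probability) steps.
    """
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    surv = 1.0
    curve = []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = ties = 0
        # group all patients sharing the same follow-up time
        while i < len(pairs) and pairs[i][0] == t:
            deaths += pairs[i][1]
            ties += 1
            i += 1
        if deaths:  # censored-only times do not produce a step
            surv *= 1 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= ties
    return curve

def median_survival(curve):
    """First time at which the survival probability drops to <= 0.5."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None  # median not reached within follow-up

# Hypothetical follow-up data (months); event 0 marks a censored patient
times  = [3, 5, 5, 8, 12, 16, 20, 24]
events = [1, 1, 0, 1, 1,  0,  1,  0]
curve = kaplan_meier(times, events)
print(curve)
print("median OS:", median_survival(curve), "months")
```

Each step multiplies the running survival probability by (1 − deaths/at-risk) at every observed event time, which is exactly why censored patients lower the at-risk count without producing a step.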
Results
The basic characteristics of patients, evenly distributed between the two study groups, are shown in Table 1. A total of 51 patients completed the study, with 19 in the VEN-HMA group and 32 in the HMA group. In the VEN-HMA group, there were 11 patients with MDS, while the remainder had CMML. Conversely, the HMA group consisted of 20 patients with MDS and 12 with CMML. The median follow-up times were 16 and 24 months for VEN-HMA and HMA, respectively. As seen in Table 1, four patients in the VEN-HMA group underwent allo-HSCT, while none in the HMA group received transplantation (p = 0.007). A total of 41 patients underwent NGS before treatment. Table 1 lists the genes with mutation frequencies >10%. The two most commonly mutated genes in the VEN-HMA group were ASXL1 and U2AF1, while in the HMA group they were U2AF1 and SETBP1. However, no statistical significance was observed between the two groups for any of the genes.
Figure 1A presents the overview of treatment response on the basis of the IWG 2006 criteria. The ORR was 73.7% in the VEN-HMA group and 59.4% in the HMA group (p = 0.301). The treatment responses in the VEN-HMA group included CR (9/…).

Figures 1B and 1C display the survival data of the VEN-HMA and HMA groups. The mOS was significantly longer for patients treated with VEN-HMA than with HMA monotherapy (16 months vs. 7 months, p = 0.023). The PFS time between the two groups also showed significant differences (p = 0.021). In the VEN-HMA group, we performed a separate survival analysis focused on the treatment response and treatment cycle. Figure 1D shows that patients who achieved CR or MCR survived significantly longer than those who did not (p = 0.030). Patients who received more cycles of venetoclax had a longer OS than those who received <2 cycles of venetoclax (p = 0.045).
Any AEs observed during VEN-HMA therapy or HMA monotherapy are listed in Table 2. Grades 3-4 hematological AEs were frequently observed. Between the VEN-HMA group and the HMA group, the occurrence of grades 3-4 neutropenia, anemia, and thrombocytopenia was 89.5% vs. 87.5%, 73.7% vs. 90.6%, and 73.7% vs. 71.9%, respectively. In the VEN-HMA group, four and two patients died from hemorrhage and infection, respectively, during treatment. In the HMA group, two patients and six patients died from hemorrhage and infection, respectively. Regarding non-hematological AEs, 12 patients (63.2%) and 21 patients (65.6%) in the two groups developed grade 3 or higher infections. The most common hematological side effects were neutropenia and anemia, with 17 and 18 cases, respectively. The top two electrolyte disorders were hypocalcemia (63.2% vs. 56.3%) and hypokalemia (52.6% vs. 18.8%). Several patients experienced abnormal liver and kidney function, which was managed with symptomatic treatment, resulting in recovery from the electrolyte disorders and recovery of liver and kidney function.
Discussion
Based on previous phase 1 results of azacitidine plus venetoclax in patients with high-risk MDS or CMML, the effectiveness and safety of this regimen were evaluated, showing an ORR of 87%, and venetoclax was well tolerated by patients with MDS and CMML (Bazinet et al., 2022). Our study was the first real-world research that retrospectively analyzed data collected from patients with MDS or CMML who received venetoclax and HMA treatment at our center, comparing the short-term and long-term effects with patients undergoing HMA monotherapy. CMML is a relatively rare disease, with a median age at diagnosis of approximately 73-75 years, and patients are predominantly male (Patnaik and Tefferi, 2022). The epidemiology of MDS is consistent with that of CMML, often occurring in older people (Li et al., 2022). In our study, the median age at diagnosis and sex ratio were consistent with previous studies. Most patients were in the higher-risk group (68.4% and 68.8% in the two groups), while the remaining lower-risk patients, treated because of symptomatic thrombocytopenia or agranulocytosis, received VEN-HMA treatment or HMA monotherapy. The three most common gene mutations in our study were ASXL1, U2AF1, and SETBP1. Numerous studies have shown that U2AF1 is associated with poor prognosis in MDS, especially in terms of OS (Wang et al., 2019; Wang et al., 2020). The presence of SETBP1 and ASXL1 mutations usually indicates high white blood cell counts, extramedullary lesions, and poor prognosis (Patnaik et al., 2014).
The ORR was 73.7% in the VEN-HMA group vs. 59.4% in the HMA group in our study. Although the difference was not statistically significant, the ORR and CR/MCR rates of the VEN-HMA group were higher than those of HMA monotherapy in our real-world data. Several relevant clinical trial results are as follows. In one of the most important clinical trials in MDS, AZA-001, the mOS of the azacitidine group was 24.5 months and the ORR was 29% (Fenaux et al., 2009). According to the CALGB 9221 trial, patients with low-risk MDS who were treated with azacitidine had an ORR of 59% and a median OS of 44 months (Silverman et al., 2006). Another clinical trial, SWOG S1117, compared the efficacy of azacitidine-based regimens and azacitidine monotherapy in patients with MDS and CMML; the ORR of azacitidine was 38%, and the median OS was 15 months (Sekeres et al., 2017). Our study found that the ORR of the VEN-HMA group was much higher than the clinical trial outcomes of azacitidine monotherapy.
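For context, the reported p = 0.301 for the ORR comparison (73.7% vs. 59.4%) is reproducible with a Pearson chi-squared test without continuity correction on the implied 2×2 table (14/19 responders with VEN-HMA vs. 19/32 with HMA; these counts are back-calculated from the percentages and group sizes, and the choice of test variant is an assumption, not stated in the paper). A stdlib-only sketch, using the identity that for 1 degree of freedom the chi-squared tail probability equals erfc(√(x/2)):

```python
from math import erfc, sqrt

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared test (no continuity correction) for the 2x2 table
    [[a, b], [c, d]]; returns (statistic, p-value), where for df = 1 the
    survival function is sf(x) = erfc(sqrt(x / 2))."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, r, col in ((a, row1, col1), (b, row1, col2),
                        (c, row2, col1), (d, row2, col2)):
        exp = r * col / n          # expected count under independence
        stat += (obs - exp) ** 2 / exp
    return stat, erfc(sqrt(stat / 2))

# responders vs. non-responders: VEN-HMA 14/19, HMA 19/32
stat, p = chi2_2x2(14, 5, 19, 13)
print(round(stat, 3), round(p, 3))  # p comes out close to the reported 0.301
```

With roughly 14 vs. 19 responders expected under the null, the statistic is small (about 1.07), which is why the 14-point ORR difference does not reach significance at this sample size.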
Comparing long-term curative effects, our results demonstrated that the mOS and mPFS of VEN-HMA therapy were significantly longer than those of HMA monotherapy. The mOS and mPFS in our VEN-HMA group were 16 months and 9 months, respectively, which aligns with the results observed in the phase 1 clinical trial (Bazinet et al., 2022). The OS in the VEN-HMA group did not exhibit an obvious advantage over the HMA monotherapy clinical trials described above. This could be because most of the patients in our study were classified as high risk or very high risk. However, in cases where the baseline information was similar, our VEN-HMA group showed a survival advantage over our HMA group. These results suggest that the addition of venetoclax may potentially extend the survival of patients with higher-risk MDS or CMML. Certainly, a larger sample and longer follow-up time would better support our results. In addition, we found that achieving CR/MCR and receiving >2 cycles of venetoclax resulted in significantly longer survival in the VEN-HMA group. This finding highlights the importance of achieving CR at the earliest possible time and receiving an adequate number of treatment cycles to maximize the benefits for patients. However, it should be noted that a subset of patients who were within two cycles of VEN-HMA treatment had to discontinue the regimen due to disease progression or death. Unfortunately, these patients had a relatively short OS.
The most common grade 3-4 events in patients with MDS who received azacitidine were peripheral blood cytopenias, including neutropenia in 90% of patients, thrombocytopenia in 85%, and anemia in 57% (Fenaux et al., 2009). In the VEN-HMA group, nearly all patients experienced grade 3-4 hematological toxicity (neutropenia and thrombocytopenia), which is consistent with the above study; however, the incidence of anemia was relatively low. Therefore, in terms of safety, AEs with VEN-HMA were predominantly myelosuppressive, with no obvious advantage over traditional therapy. All patients generally tolerated the treatment well, with most of the AEs resolving with symptomatic treatment. In comparison with the HMA group, the incidence of hematologic AEs and mortality was essentially the same, and the incidence of non-hematologic AEs was slightly higher. A phase 3 study of patients with MDS treated with azacitidine indicated an incidence of grade 3-4 hematologic AEs that was basically consistent with our observations of both groups (Fenaux et al., 2009). A retrospective single-center study reported that the most common grade 3-4 AE was neutropenia (90%), with the most common non-hematologic AE being infection (60%) (Mei et al., 2023). The results observed in our study were within this range.
Allo-HSCT is the only available cure for MDS (De Witte et al., 1990). Indications for allo-HSCT in MDS include patients aged <65 years in the higher-risk group, or patients aged <65 years in the lower-risk group with severe cytopenia, failure of other treatments, or poor prognostic genetic abnormalities. For several reasons, <10% of patients with MDS undergo allo-HSCT (Li et al., 2022). In our study, only four patients in the VEN-HMA group underwent allo-HSCT. In our center, most patients with MDS and CMML are unable to undergo allo-HSCT due to old age, poor baseline condition, or financial constraints.
Although few similar studies have been reported, our study has certain limitations. First, our findings were based on a single center, and the sample size was relatively small. A larger sample size and data from multiple centers are needed to validate our findings. Second, our data may be affected by confounding variables, but we did not perform multivariate adjustment because of the limited sample size. Third, our follow-up time was relatively short (median follow-up time: 20 months), and survival outcomes for some patients have not yet been observed.
Our findings have confirmed the effectiveness and safety of HMA and venetoclax combination therapy. VEN-HMA therapy demonstrates an advantage over HMA monotherapy in treating patients with MDS or CMML. This combination therapy allows patients to achieve complete remission more rapidly, offering a promising new approach for patients. However, continuous exploration of the dosage and duration of this regimen is necessary to reduce the risk of AEs. We believe that more patients can benefit from and tolerate this therapy.
Data availability statement
The datasets presented in this article are not readily available because the data are not publicly available due to privacy or ethical restrictions. Requests to access the datasets should be directed to LZ, zhangludan1998@sina.com.
FIGURE 1 (A) Best treatment response rates in the VEN-HMA and HMA groups. CR, complete remission; MCR, marrow CR; HI, hematologic improvement; PR, partial response; SD, stable disease; TF, treatment failure; PD, progressive disease. (B) Overall survival in the VEN-HMA and HMA groups. (C) Progression-free survival in the VEN-HMA and HMA groups. (D) Comparison of overall survival between CR/MCR patients and non-CR patients in the VEN-HMA group. (E) Comparison of overall survival between patients who received more than two cycles of venetoclax and those who did not, in the VEN-HMA group.
TABLE 1
Basic characteristics of patients.
TABLE 2
Adverse events.
Fertilization Regulates Grape Yield and Quality by Altering Soil Nutrients and the Microbial Community
Rational fertilization is a win-win strategy for rural incomes and environmental restoration in ecologically fragile regions. However, the long-term cumulative grape productivity response to soil fertility has rarely been quantified. Here, long-term fertilization experiments (over 15 years) in the desert–oasis transitional zone of Sinkiang, China, were used to evaluate the interactions among grape yield, quality, fertilization, soil nutrients, and microbial communities. There were five treatments, as follows: CK0 (no planting and no fertilizing); NP (synthetic nitrogen and phosphorus); M (manure only); NPM1 (0.25 times NP and 0.33 times M); and NPM2 (NP and 0.5 times M). The grape yield increased with the application of total nitrogen. The soluble solids and reducing sugar contents had significant positive linear correlations with grape yield, but the opposite trend was found between grape yield and the titratable acidity and tannin contents. The redundancy analysis showed that fertilization, soil nutrients (soil organic carbon, available nitrogen, and dissolved organic nitrogen), and microbial communities (ratio of fungi to bacteria, ratio of Gram-negative to Gram-positive bacteria, and total phospholipid fatty acids) accounted for 31.9%, 19.7%, and 26.8% of the variation in grape yield and nutritional ingredients, respectively. The path analysis identified that fertilization, soil nutrients, and the microbial communities were significantly positively associated with grape yield, soluble solids, and reducing sugars, while their associations with titratable acidity, tannins, and phenols were significantly negative. These results suggest that fertilization is a viable strategy for regulating grape yield and quality because it alters soil fertility in ecologically fragile regions.
Introduction
The demand for fruit production is continuously increasing with improvements to human living standards. Globally, fruit production increased from 0.48 billion tons in 2000 to 0.72 billion tons in 2017, an annual increase rate of 2.56% [1]. Grapes, as the second most produced fruit, are becoming increasingly popular due to their nutritional and medicinal values [2,3]. Another major use of grapes is in making wine, especially in Europe, Australia, and America. Wine is considered to be a healthy and hygienic drink [3], and the nutritional ingredients of grapes, including reducing sugars, titratable acidity, and tannin contents, determine the composition and quality of the wine [4]. Obtaining high yields and superior nutritional ingredients requires large temperature differences, the avoidance of rain and dew, and a fine sandy loam soil, but regions with these characteristics are considered to be ecologically fragile [4,5]. Furthermore, these regions are not suitable for grape growth without human intervention. Therefore, it is important to understand how to achieve high grape yields and good nutritional ingredients in ecologically fragile regions.
Rational fertilization is a win-win strategy that achieves high crop productivity and improves soil quality in ecologically fragile regions [6]. Fertilization matched to crop requirements not only directly provides extra nutrients but also indirectly improves soil properties for better crop growth [7,8]. Many researchers have explored the response of grape productivity to different fertilizers based on short-term experiments. For instance, Martínez, Ortega, Janssens, and Fincheira [8] reported that a combination of organic and synthetic fertilizers was better at increasing grape yields than no fertilizer, synthetic fertilizer, or organic fertilizer alone. Brunetto et al. [9] showed that synthetic nitrogen fertilizer did not influence the nutritional contents of grapes, while manure decreased the soluble solids content and increased the total acidity content of grapes. Although the short-term responses of grape yields and nutritional ingredients to different fertilizers are well known, the long-term cumulative responses of grape productivity to different fertilizers may be complicated. To achieve high grape yields, a large amount of fertilizer is often applied [10]. However, it is not clear whether grape yield increases are synchronized with increases in nutritional ingredients over the long term.
Fertilization can improve soil productivity factors, such as soil nutrients and microbial communities, both of which play a vital role in determining crop productivity [11,12]. For example, a combination of organic and synthetic fertilizer significantly increased soil carbon (C) and nitrogen (N) concentrations compared to a synthetic fertilizer alone or no fertilizer [7]. The soil C and N concentrations represent the soil's ability to supply utilizable nutrients, and many studies have shown that high soil organic carbon (SOC) and N concentrations lead to high crop yields and quality [11,13]. Soil biological fertility is a good indicator of the soil microbial responses to different fertilizers and has been widely studied [12,14]. A global meta-analysis based on 1408 paired observations reported that N addition significantly inhibited soil microbial growth, composition, and function [15]. The ratio of fungi to bacteria usually decreases with N fertilization due to the low N demands of fungi [14]. Compared to synthetic fertilizer, organic fertilizer affects soil microbial communities by providing C-rich organic compounds to the C-limited microbial communities in ecologically fragile regions, while also directly supplying nutrients [16]. Bacteria and fungi are readily affected by available and complex C compounds, respectively [17,18]. Furthermore, reducing C inputs can decrease Gram-negative and increase Gram-positive bacterial abundance [17,19]. Soil fertility is associated with soil microbial communities because the nutrients mineralized at different soil fertility levels can actively change soil microbial communities [20]. Overall, fertilization, soil nutrients, and soil microbial communities have complex (direct, indirect, and interacting) effects on crop productivity. Therefore, an increased understanding of how these variables directly and/or indirectly affect crop productivity is urgently needed to improve crop growth.
Sinkiang, China, is located in the hinterland of the Eurasian continent, in a desert–oasis transitional zone. This area is vulnerable to damage and has difficulty recovering due to the large temperature differences, long sunshine hours, and low precipitation characteristic of the area. To prevent environmental degradation and ensure sustainable livelihoods, this naturally fragile ecological region was transformed into an area for growing grapes. Therefore, the objectives of this study were to (1) quantify the long-term cumulative responses of grape yields and nutritional ingredients to different fertilization treatments and (2) identify the links among grape productivity, soil nutrients, and soil microbial communities under different fertilization treatments.
Experimental Site
The research was conducted at the Shanshan monitoring site (43°06′ N, 90°33′ E), Sinkiang, China. The region is a desert–oasis transitional zone with an average temperature of 11.3 °C, an effective accumulated temperature of 5035 °C (>10 °C), and 3100 h of sunshine. The average annual precipitation is 25 mm and the evaporation is 3200 mm. The soil type is Haplic Cambisol and irrigated desert soil according to the Food and Agriculture Organization (FAO) [21] and the Chinese soil classification system, respectively. The initial (1987) topsoil (0-20 cm) at the Shanshan site had an SOC level of 7.88 g kg−1, a bulk density of 1.15 kg m−3, a soil pH of 8.10, and soil available N (AN), phosphorus (AP), and potassium (AK) levels of 78, 12.7, and 178 mg kg−1, respectively.
Experimental Design
To determine differences in grape yields, grape nutritional ingredients, soil nutrients, and soil microbial communities, five long-term fertilization regimes were established using a random design, as follows: (1) CK0 (no planting and no fertilizing); (2) NP (synthetic nitrogen and phosphorus); (3) M (manure only); (4) NPM1 (0.25 times NP and 0.33 times M); and (5) NPM2 (NP and 0.5 times manure). The synthetic N and P were from urea and diammonium phosphate, respectively. The manure was pure sheep manure, with average C, N, and P contents (during the experiment) of 336, 20.1, and 4.96 g kg−1 dry weight, respectively. The CK0 regime involved no planting and no fertilizing; NP was first applied in 1991, with synthetic N at 540 kg ha−1 year−1 and synthetic P at 300 kg ha−1 year−1; NPM1 was first applied in 1996, with 0.25 times the synthetic fertilizer of the NP treatment and manure at 12 × 10^3 kg ha−1 year−1; M was first applied in 2000, at 36 × 10^3 kg ha−1 year−1; and NPM2 was first applied in 2001, with one application of synthetic NP fertilizer and manure at 15 × 10^3 kg ha−1 year−1. This information is shown in Table 1.
Table 1. Description of the site information and the annual rates (kg ha−1) of synthetic nitrogen (Sy-N) and phosphorus (Sy-P) fertilizer additions and organic nitrogen (Or-N) and phosphorus (Or-P) fertilizer applied under the various fertilization treatments. Notes: CK0, no fertilizer and no crops; NP, synthetic nitrogen and phosphorus; M, manure; NPM1, 0.25 times NP and 0.33 times M; NPM2, NP and 0.5 times manure; Sy-N was added as urea and diammonium phosphate, and Sy-P was added as diammonium phosphate. These experimental treatments were on land that had previously been a natural grape ecosystem (CK0). Three years before the start of fertilization, the field was cleared to ensure uniform soil nutrients and microbial communities. The grape variety Thompson Seedless was planted at a density of 7500 plants per hectare (row spacing of 0.8 m × 1.7 m) and harvested in mid-August every year. Irrigation was applied because of the substantial evaporation and low precipitation. Flood irrigation was applied seven times during the germination, flowering, and berry expansion stages and during four periods after harvest (total irrigation amount: 27 × 10^3 m³ per hectare). Lime sulfur was applied to prevent powdery mildew, and no herbicide was applied. The grapes were harvested manually and pruned in summer to leave 1-2 spikes on the fruiting branches.
Sample Collection
Samples were collected from the topsoil (0-20 cm) and subsoil (20-40 cm) in July 2016. Five to ten random soil core samples were collected using the "S" curve method. Plant residues and rocks were removed to ensure the accuracy of the analysis indicators. Then, the soil samples were thoroughly mixed and divided into three parts. The first part was air-dried and crushed to pass through a 0.25 mm sieve to measure the soil nutrients, the second part was stored at 4 °C for the soil microbial biomass carbon (MBC) and nitrogen (MBN) analyses, and the third part was freeze-dried and stored at −70 °C for the soil microbial community analysis.
Additionally, five major nutritional ingredients in grapes were measured to represent grape quality. The soluble solids, reducing sugars, titratable acidity, tannins, and total phenols were measured using a refractometer, Fehling's reagent, point titration, potassium permanganate oxidation, and the Folin-Ciocalteu method, respectively.
The annual relative change rate (AR, %) for soil nutrients and the microbial communities was adopted so that datasets with different experimental durations could be compared. The AR was calculated from S_treatment, S_control, and T, where S_treatment represents the soil nutrients and microbial communities under the fertilization treatments, S_control represents the soil nutrients and microbial communities under the CK0 treatment, and T is the experimental duration (years).
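A minimal sketch of the AR computation, assuming the simple linear form AR = (S_treatment − S_control)/S_control/T × 100 that is commonly used for annual relative change rates; the exact functional form is an assumption here, and the example values are hypothetical:

```python
def annual_relative_change(s_treatment, s_control, years):
    """Annual relative change rate (AR, %), assuming a linear form:
    relative change versus the unfertilized control, averaged over
    the experimental duration in years."""
    return (s_treatment - s_control) / s_control / years * 100

# Hypothetical example: SOC of 10.5 g/kg under NP after 25 years
# (1991-2016) versus the initial CK0-like level of 7.88 g/kg
ar = annual_relative_change(10.5, 7.88, 25)
print(round(ar, 2))
```

Dividing by the duration is what makes the NP (25-year), NPM1 (20-year), M (16-year), and NPM2 (15-year) treatments comparable despite their different start dates.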
Data Analyses
One-way ANOVA was used to explore the differences in grape yield, soil nutrients, and the microbial communities under the different fertilization treatments. A redundancy analysis (RDA) in Canoco 5 was used to select and quantify the factors driving the effects of the soil nutrients (SOC, TN, AN, AP, AK, DOC, and DON) and microbial communities (MBC, MBN, fungi, bacteria, ratio of fungi to bacteria, AMF, ACT, G+, G−, ratio of G−/G+, and PLFAs) on grape yield and nutritional ingredients. A path analysis using the Amos 17.0 package was employed to identify the relationships among yield, nutritional ingredients, soil nutrients, and the microbial communities. The driving factors and grape nutritional ingredients were divided into the following four latent variables: soil nutrients (SOC, AN, and DON), microbial communities (ratio of fungi to bacteria, G−/G+, and PLFAs), grape quality 1 (soluble solids and reducing sugars), and grape quality 2 (titratable acidity, tannins, and phenols). The following hypothetical paths were developed. First, fertilization, soil nutrients, and the microbial communities had a direct effect on grape yield and quality; second, fertilization indirectly affected grape yield and quality via its effects on soil nutrients and the microbial communities. Finally, the indirect effects of soil nutrients on grape yield and quality through the microbial communities were measured.
Grape Yield and Nutritional Ingredient Responses to Long-Term Fertilization
The grape yield increased with the application of TN (inorganic and organic) (NPM2 > M > NP > NPM1, Figure 1). Significantly higher soluble solids and reducing sugar contents were recorded in the NPM2 treatment (22 g L−1 and 199 g L−1, respectively). A comparison of the M, NP, and NPM1 treatments showed that NP led to the highest titratable acidity (7.20 g L−1), tannin (0.35 g L−1), and total phenol (0.52 g L−1) contents. The lowest titratable acidity, tannin, and total phenol contents occurred in the M treatment. The soluble solids and reducing sugar contents had a significant positive linear correlation with grape yield (Figure 2), but the titratable acidity and tannin contents significantly decreased with increasing grape yield. There was no significant linear relationship between grape yield and total phenol content.
Changes in Soil Nutrients under Long-Term Fertilization
The soil nutrient levels under the CK0 treatment and the ARs under the NP, NPM1, NPM2, and M treatments are shown in Table 2. The different fertilization treatments improved the SOC, TN, AN, AP, DOC, and DON concentrations in the topsoil (0-20 cm) compared to those in the CK0 treatment, except for the DOC and DON concentrations under the NP and NPM1 treatments.
The soil nutrient levels under the CK0 treatment and the ARs under the NP, NPM1, NPM2, and M treatments are shown in Table 2.The different fertilization treatments improved the SOC, TN, AN, AP, DOC, and DON concentrations in the topsoil (0-20 cm) compared to those in the CK0 treatment, except for the DOC and DON concentrations M treatments, but did significantly increased compared to the NP and NPM 1 treatments.The AN (2.64-3.55%)AR under the NP, NPM 2 , and M treatments was higher than that under the NPM 1 treatment.Finally, the NPM 2 and M treatments improved the DOC and DON concentrations compared to the CK 0 treatment, with AR values of 2.86-6.97%(DOC) and 2.29-4.29%(DON).
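The AR values above are relative changes against the unfertilized control. The exact AR formula is not given in this excerpt, so the sketch below assumes AR is simply the percentage change of a treatment mean relative to CK0, and the concentrations used are hypothetical:

```python
def accumulation_rate(treatment, control):
    """Percentage change of a treatment mean relative to the control.

    Assumed AR formula; the paper's exact definition is not in this excerpt.
    """
    return (treatment - control) / control * 100.0

# Hypothetical topsoil SOC means (g kg-1): control CK0 vs. manure treatment M
ck0_soc = 8.0
m_soc = 9.5
ar = accumulation_rate(m_soc, ck0_soc)
print(round(ar, 2))  # relative SOC increase, in percent
```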
Mechanisms Driving Grape Yield and Quality
The RDA suggested that fertilization, SOC, AN, DON, F/B, G−/G+, and PLFAs were the driving factors regulating grape yield and nutritional ingredients (Figure 3). Fertilization had the most influential impact on grape yield and nutritional ingredients among the seven selected variables (31.9%). In addition, soil nutrients (SOC, AN, and DON) and the microbial communities (F/B, G−/G+, and PLFA) accounted for 19.7% and 26.8%, respectively. The driving factors and grape nutritional ingredients were divided into four latent variables (soil nutrients, microbial communities, grape quality 1, and grape quality 2; see Figure 4). The standardized loading scores suggested that SOC and G−/G+ were more powerful indicators of soil nutrients and the microbial communities, which was consistent with the RDA results. The path analysis explained 73-77% of the variance in grape yield and nutritional ingredients (Figure 5). Fertilization, soil nutrients, and the microbial communities directly affected grape yield and nutritional ingredients, and fertilization strongly and positively affected soil nutrients. Soil nutrients also affected grape yield and nutritional ingredients indirectly via their effects on the microbial communities. Overall, fertilization, soil nutrients, and the microbial communities were significantly and positively associated with grape yield and quality 1, while their association with grape quality 2 was significantly negative. The standardized total effects on grape yield and quality occurred in the following order: fertilization > soil nutrients > microbial communities (Figure 6).
Effect of Long-Term Fertilization on Grape Yield and Quality
Fertilization provides nutrients for plant growth when the existing soil nutrients are not sufficient, especially in ecologically fragile regions. Our results showed that grape yield was higher in the NPM2 and M treatments than in the NP and NPM1 treatments. One potential reason was the different amount and type of N input (fertilizer TN: NPM2 (842 kg ha−1 year−1) > M (624 kg ha−1 year−1) > NP (540 kg ha−1 year−1) > NPM1 (kg ha−1 year−1); Table 1). Another important potential reason may be that the manure also indirectly enhanced the soil properties, which led to improved crop growth, as the extent of soil quality improvement largely depended on the type of fertilizer used (Figure 5) [7,19]. The grape yield increased with the application of TN (inorganic and organic) (Figure 1 and Table 2). Nitrogen is a main component of phospholipids, nucleic acids, and proteins in plant tissues. Therefore, N additions have a strong effect on grape yields [19]. Interestingly, the relationships between grape yield and the individual nutritional ingredients differed, which means that fertilization not only impacted the grape yield but also had a considerable effect on the grape nutritional ingredients. Reducing sugars, titratable acidity, soluble solids, tannins, and total phenols are the main nutritional ingredients in grapes [4]. The short-term effects of different fertilization schemes on grape nutritional ingredients have been extensively studied. For example, Thomidis et al. [31] reported a large sugar content with high N input treatments, and the soluble solids content with a manure treatment was higher than that with a synthetic fertilizer treatment [9]. These results were consistent with our findings. However, Brunetto et al. [9] found that a manure treatment produced a higher titratable acidity content than a synthetic fertilizer treatment. The tannin and total phenolic contents were also significantly higher under a manure treatment [32]. These results are inconsistent with our findings, and the difference was probably due to the amount of N fertilizer and different accumulative effects (short-term and long-term). Overall, the soluble solids and reducing sugar contents had significant positive correlations with grape yield, but the opposite trend was found between grape yield and the titratable acidity and tannin contents in the different experimental fertilization treatments. Therefore, achieving higher grape yields can also lead to reductions in some nutritional ingredients.
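The positive and negative yield-quality relationships summarized above can be illustrated with a plain Pearson correlation. This is a minimal sketch on hypothetical per-plot values, not the study's data:

```python
import numpy as np

# Hypothetical per-plot values (not the study's data)
yield_t_ha = np.array([18.0, 20.5, 22.0, 24.5, 26.0, 28.5])
soluble_solids = np.array([18.5, 19.0, 20.0, 21.0, 21.5, 22.0])  # rises with yield
titratable_acid = np.array([7.2, 7.0, 6.8, 6.5, 6.3, 6.0])       # falls with yield

r_ss = np.corrcoef(yield_t_ha, soluble_solids)[0, 1]
r_ta = np.corrcoef(yield_t_ha, titratable_acid)[0, 1]
print(r_ss > 0, r_ta < 0)  # → True True
```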
Effects of Long-Term Fertilization on Soil Nutrients and Microbial Communities
Soil fertility is a well-studied subject that affects human living standards and environmental quality. Fertilization is the most effective management method for improving soil fertility, especially soil nutrients and microbial communities. Previous studies have shown that synthetic fertilizer and manure increased the SOC and AN concentrations compared to CK0, and that this was due to exogenous nutrient and C inputs [33,34]. The SOC, AN, and DON under the NPM1 treatment were lower than those under the NP, NPM2, and M treatments (Tables 1 and 2). The fertilizer types were as follows: NPM2 (Sy-N 540, Or-N 302), M (Sy-N 0, Or-N 624), NP (Sy-N 540, Or-N 0), and NPM1 (Sy-N 135, Or-N 240) (kg ha−1 year−1). A large amount of synthetic fertilizer could promote the growth of plant roots, bringing more root exudates to the soil [35]. Although manure can directly add exogenous substances to soil, the effect of a small amount of manure on soil fertility was not as significant as adding a large amount of synthetic fertilizer. Several studies have explored the effects of N addition on soil microbial communities at local and global scales [15,36]. Most studies have reported that excessive application of nitrogen fertilizer (urea) will reduce the number of soil microorganisms, which may be related to the toxicity of ammonia and excess microbial nutrition [14,15]. Previous studies have suggested that the ratio of fungi to bacteria decreased with the application of N fertilizer because the N requirement of bacteria is higher than that of fungi [37]. However, our results were inconsistent with this finding, for two potential reasons. First, our experiment was carried out using gray desert soil, which has a high sand content [38]. Most nitrogen fertilizer is leached and lost when external irrigation is applied, which also meant that there were no toxic effects on the microorganisms [39]. Second, our site was located in a desert-oasis transitional zone that is extremely deficient in soil nutrients and has a low soil microbial biomass [40]. Following soil N enrichment, the lower N requirement of plants reduced the AMF content (Table 3). Furthermore, a high AMF content promoted more C transfer from plant roots to soil, which increased the G+ bacterial abundance (Table 3) [41,42]. Overall, the effects of fertilization on soil nutrients and microbial communities depended on the amount and type of fertilization.
Grape Yield and Quality Responses to Soil Nutrient, Microbial Community and Soil Fertility
The standardized loading scores suggested that SOC and G−/G+ were powerful indicators of soil fertility (Figure 4). The SOC content represents the capacity of soil to improve nutrient levels [16]. Furthermore, G− and G+ bacteria rely on readily available and recalcitrant C sources, respectively [43]. Therefore, the G−/G+ ratio indicates the level of newly formed SOC, namely soil fertility. As expected, fertilization influenced grape yield and nutritional ingredients more strongly than soil nutrients and microbial communities did (Figure 3).

Fertilization not only directly provides extra nutritional matter but also indirectly improves the soil properties, soil microbial communities, and soil microenvironment, which regulate plant growth [44]. For example, based on a 25-year fertilization experiment, Cai et al. [7] reported that fertilization increased the crop yield by enhancing soil nutrients. Using 223 Arctic and Antarctic soil samples, Siciliano et al. [20] found that soil nutrients were associated with soil microbial communities. Based on previous studies, a relationship among grape yields and nutritional ingredients, fertilization, soil nutrients, and soil microbial communities has been hypothesized. Our results showed that fertilization, soil nutrients, and the microbial communities were significantly and positively associated with grape yield, soluble solids, and reducing sugars, while their association with titratable acidity, tannins, and phenols was significantly negative. Soil nutrients also affect grape yield and nutritional ingredients indirectly via the microbial communities. Soil nutrients provide energy for soil microbial activity and determine the microbial composition, and this might be an important mechanism by which soil fertility regulates grape productivity [20]. Additionally, there was a greater direct effect of the soil microbial communities on the grape nutritional ingredients. Overall, our results further verified that fertilization, soil nutrients, and soil microbial communities interact and jointly affect grape yield and nutritional ingredients (Figure 5). These results suggest that fertilization is a viable strategy for regulating grape yield and quality in ecologically fragile regions because it alters soil fertility.
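The direct-versus-indirect decomposition that the path analysis estimates can be illustrated with ordinary least squares on standardized variables. This is a sketch of the decomposition idea only, not the authors' structural equation model; all data below are simulated, and the variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic standardized chain: fertilization -> soil nutrients -> yield,
# plus a direct fertilization -> yield path (all coefficients hypothetical)
fert = rng.normal(size=n)
nutrients = 0.8 * fert + rng.normal(scale=0.5, size=n)
yield_ = 0.3 * fert + 0.5 * nutrients + rng.normal(scale=0.3, size=n)

def zscore(x):
    return (x - x.mean()) / x.std()

f, nu, y = map(zscore, (fert, nutrients, yield_))

# Path a: fertilization -> nutrients (simple standardized regression)
a = np.linalg.lstsq(f[:, None], nu, rcond=None)[0][0]

# Paths c' (direct) and b: regress yield on both predictors at once
X = np.column_stack([f, nu])
c_direct, b = np.linalg.lstsq(X, y, rcond=None)[0]

# Indirect effect is the product of the two mediated paths;
# total effect = direct + indirect
indirect = a * b
total = c_direct + indirect
print(round(total, 3))
```

For standardized OLS estimates, the total effect recovered this way equals the coefficient of a simple regression of yield on fertilization alone, which is the identity path analysis relies on.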
Conclusions
Significantly different grape yields, nutritional ingredients, soil nutrients, and soil microbial communities were found among the long-term fertilization treatments. The soluble solids and reducing sugar contents had a significantly positive linear correlation with grape yield, but the opposite trend was found between grape yield and the titratable acidity and tannin contents. The effects of fertilization on soil nutrients and the microbial communities depended on the amount of fertilization. Fertilization accounted for 31.9% of the variance in grape yield and nutritional ingredients, followed by the microbial communities at 26.8% and soil nutrients at 19.7%. Fertilization, soil nutrients, and the microbial communities were significantly and positively associated with grape yield, soluble solids, and reducing sugars, while their association with titratable acidity, tannins, and phenols was significantly negative. Our results suggested that fertilization was a viable strategy for regulating grape yields and quality because it alters soil fertility. The results also showed that a high grape yield did not imply that all nutritional ingredients would also be high.
Figure 1. Grape yields (a), soluble solids (b), reducing sugars (c), titratable acidity (d), tannins (e), and total phenols (f) under the various fertilization treatments based on four fertilization experiments. Notes: see Table 1 for the abbreviations of the fertilization treatments; bars represent standard deviations; and different letters indicate significant differences (p < 0.05) among the various fertilization treatments.

Figure 2. Relationship between the grape yield and grape quality in terms of soluble solids (a), reducing sugars (b), titratable acidity (c), tannins (d), and total phenols (e).
Figure 3. Redundancy analysis (RDA) for the multivariate effects of fertilization, soil nutrients, and the soil microbial community on the grape yield and quality. The soil nutrients included soil organic carbon (SOC), soil available nitrogen (AN), and dissolved organic nitrogen (DON); the soil microbial community included phospholipid fatty acids (PLFAs), the ratio of fungi to bacteria (F:B), and the ratio of Gram-positive to Gram-negative bacteria (G−:G+); the grape quality included soluble solids (SS), reducing sugars (RS), titratable acidity (TA), tannins, and phenols.

Figure 4. Latent variables with their indicators that were considered in the path analysis. The numbers in parentheses show the loading scores. (a) The soil nutrients included soil organic carbon (SOC), soil available nitrogen (AN), and dissolved organic nitrogen (DON); (b) the soil microbial community (MC) included the phospholipid fatty acids (PLFAs), ratio of fungi to bacteria (F:B), and ratio of Gram-positive to Gram-negative bacteria (GP:GN); (c) grape quality 1 included soluble solids (SS) and reducing sugars (RS); and (d) grape quality 2 included titratable acidity (TA), tannins, and phenols. The data for DON were logarithmically transformed.

Figure 5. Path analysis results regarding the direct and indirect effects of fertilization, soil nutrients, and the soil microbial community (MC) on the grape yield and quality (chi/df = 1.3, p = 0.43). The numbers show the path coefficients. The gray paths and numbers indicate that the effect is statistically significant, and the dashed paths and associated numbers indicate that the effect is negative. See Figure 4 for the indicators of the four latent variables (nutrients, microbial community, quality 1, and quality 2).
Figure 6. Standardized total effects of fertilization, soil nutrients, and the microbial community on the grape yield and quality. See Figure 4 for the indicators of the four latent variables (nutrients, microbial community, quality 1, and quality 2).
|
2022-09-03T15:06:36.647Z
|
2022-08-31T00:00:00.000
|
{
"year": 2022,
"sha1": "6626ac5ecd5a099deaad96301839f5387b78be64",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/14/17/10857/pdf?version=1661941876",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "757186d760e92c3bf1a07747c408e1a2d9ad5274",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
}
|
219424935
|
pes2o/s2orc
|
v3-fos-license
|
A Formalised Ontology of Musical Instruments
A formal ontology is an aspect of knowledge representation that deals with the conceptualization of domains. As an ontology enhances the sharing of a common understanding of the structure of information, it can be used as a training tool. Hence, this article develops a formalised ontology of musical instruments that is based on the Hornbostel and Sachs classification scheme. The ontology provides information about the families, groups, and characteristics of musical instruments, such as their shape and how they are played, among others. It also recommends which instrument to learn based on the user's preference. The concepts of the ontology and the relationships between them are formalised using predicate logic and implemented with the Prolog programming language.
INTRODUCTION
An ontology is usually focused on an explicit specification of conceptualisation of the objects, concepts, and entities that are presumed to exist in certain area of interest and the relationships that hold among them. It has the meaning of a standardized terminological framework in terms of which the information is organized [1,2,3]. Ontologies ensure efficient retrieval by enabling inferences based on domain knowledge, which is gathered during the construction of the knowledge base [4]. It also helps to spell out the facts in a domain and hence, creates rules and constraints that bind the relationships between the concepts together [5,6,7]. Formalized ontology constructs a formal codification for the knowledge elicitation of the concerned domain [1].
Music is a collection of coordinated sound or sounds [8]. It brings about melody and hence helps brings positive effects on health by improving mood, reducing stress, lessoning anxiety and improving memory among others [9]. Musical instruments are used to make musical sounds. Musical instruments evolved in step with changing applications and technologies hence, different musical instruments have evolved over the years [10]. Musical ontology is the study of the kinds of musical things that exist and the relations that hold between them [11]. The ontology developed in this study is based on the Hornbostel and Sach's classification scheme [12]. Since reasoning aims at extracting information not directly represented and ontologies can be complemented with reasoning capabilities, mainly through the application of rules on the given facts, prolog programming language is chosen to implement this ontology [13,14].
RELATED WORK

2.1 Knowledge Representation Issues in Musical Instrument Ontology Design:
This paper presents preliminary work on musical instruments ontology design, and investigates heterogeneity and limitations in existing instrument classification schemes. The authors developed representations using the Ontology Web Language (OWL), and compared terminological and conceptual heterogeneity using SPARQL queries. They found evidence to support that traditional designs based on taxonomy trees lead to ill-defined knowledge representation, especially in the context of an ontology for the Semantic Web.
In order to overcome this issue, they suggested an instrument ontology that exhibits a semantically rich structure [15].
The Music Ontology:
This work develops a distributed music information system that aims at gathering music-related information held by multiple databases or applications. Semantic Web technologies were used to create a unified information environment. The author developed a formal ontology for the music domain which allowed a wide range of structured music-related data to be published and interlinked on the Web. Different technologies were also developed, along with an algorithm that automatically relates music-related datasets to each other. Three of the applications developed were described using the distributed information environment [16].
Towards the Automatic Generation of A Semantic Web Ontology for Musical Instruments:
This article presents a novel hybrid ontology generation system for musical instruments. The work focuses on automatic instrument taxonomy generation in Ontology Web Language (OWL). The hybrid system consists of three main units which are musical instrument analysis, Formal Concept Analysis and lattice pruning and hierarchical form generation [17].
METHODOLOGY

3.1 The Ontology Template
The conceptual modelling of the musical instruments developed in this study involves defining classes in the ontology, arranging the classes in a taxonomic hierarchy, defining slots, describing the allowed values for these slots, filling in the slot values for instances, and modelling the ontology. This is followed by writing the axioms in predicate logic for the various facts and rules in the ontology. The predicate logic axioms were translated into Prolog axioms, and competency questions were used to draw inferences from the ontology. There are four main classes defined in this ontology: Chordophones, Aerophones, Percussion, and Keyboard. The class hierarchy is shown in Fig. 1.
Fig. 1: Class Hierarchy of the Musical Instruments
Some of the properties used are: isPlayed, isBowed, isPlucked, is_A, isWooden, isMetal, familyOf, hasPegs, hasBow, hasStrings, isClapped, typeOf, plays-with, and shapeOf, among others. A few of the facts and rules in the ontology are given below: i. Instruments can be in the different families of Chordophones, Aerophones, Percussion, and Keyboard.
ii. A string instrument can be bowed or plucked.
iii. An instrument is a string instrument if it has strings, pegs and is wooden.
iv. A plucked string instrument is usually played by plucking.
v. Guitar is a plucked instrument.
vi. A violin has pegs, strings and is wooden.
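Rules ii-vi above translate almost directly into executable clauses. The paper implements them in SWI-Prolog; the sketch below expresses the same facts and rules in Python for illustration (the dictionary layout and function names are mine, not the authors'):

```python
# Ground facts, mirroring facts v and vi from the list above
has = {
    "violin": {"pegs", "strings", "wooden", "bow"},
    "guitar": {"pegs", "strings", "wooden"},
}
plucked = {"guitar"}

def is_string_instrument(name):
    """Rule iii: an instrument with strings and pegs that is wooden is a string instrument."""
    return {"strings", "pegs", "wooden"} <= has.get(name, set())

def played_by(name):
    """Rules ii and iv: a string instrument is either plucked or bowed."""
    if not is_string_instrument(name):
        return None
    return "plucking" if name in plucked else "bowing"

print(is_string_instrument("violin"), played_by("guitar"))  # → True plucking
```

A Prolog query such as `isStringInstrument(violin).` would succeed for the same reason: the rule body matches the stored facts.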
Axioms
Some of the axioms written in predicate logic are shown below: Axiom 1: The musical instruments are classified into Chordophones, Aerophones, Percussion, and Keyboard.
Competency Questions
Some of the questions used to test the scope and competency of the ontology are given below:
IMPLEMENTATION AND RESULTS
All the predicate logic axioms were translated into Prolog axioms. SWI-Prolog (Sociaal-Wetenschappelijke Informatica, "Social Science Informatics") was used to implement this ontology. The predicates denote the relations in the domain. The following figures show the results for some queries. The upper part of the SWI-Prolog window is the Prolog editor, while the lower part of the window is where the queries are answered. Fig. 4 shows the answer to the query for the instruments that exist in the ontology. Only part of the instrument list could be shown, as the list is longer than the interface.
CONCLUSION
The musical instrument ontology developed in this study gives a good classification and the characteristics of the wind (also known as aerophone), string (also known as chordophone), percussion, and keyboard instruments. It can be used as a learning tool for beginners who want to start learning music or musical instruments.
|
2020-05-21T09:14:26.709Z
|
2020-05-15T00:00:00.000
|
{
"year": 2020,
"sha1": "4a0946c553479bbf1e6fe7afd55ee630d1aef32a",
"oa_license": null,
"oa_url": "https://doi.org/10.5120/ijca2020920235",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "3c247e757ccceedd9a0164298b6d7e7fa948d63d",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
13319623
|
pes2o/s2orc
|
v3-fos-license
|
Lattice thermal conductivity of Bi, Sb, and Bi-Sb alloy from first principles
Using first principles, we calculate the lattice thermal conductivity of Bi, Sb, and Bi-Sb alloys, which are of great importance for thermoelectric and thermomagnetic cooling applications. Our calculation reveals that the ninth-neighbor harmonic and anharmonic force constants are significant; accordingly, they largely affect the lattice thermal conductivity. Several features of the thermal transport in these materials are studied: (1) the relative contributions from phonons and electrons to the total thermal conductivity as a function of temperature are estimated by comparing the calculated lattice thermal conductivity to the measured total thermal conductivity, (2) the anisotropy of the lattice thermal conductivity is calculated and compared to that of the electronic contribution in Bi, and (3) the phonon mean free path distributions, which are useful for developing nanostructures to reduce the lattice thermal conductivity, are calculated. The phonon mean free paths are found to range from 10 to 100 nm for Bi at 100 K.
I. INTRODUCTION
Bi and Bi-Sb alloys have long been studied for their promising low-temperature thermoelectric applications. Bi and Sb have a rhombohedral crystal structure, which is a Peierls distortion of the simple cubic crystal. The small structural distortion results in Brillouin zone folding and a small overlap between conduction and valence bands, thereby causing semimetallic behavior and conduction by both electrons and holes. Since the semimetallic behavior causes cancellation of the hole and electron contributions to the power factor, bulk Bi is not a good thermoelectric material. However, Bi has a large thermomagnetic effect and a large thermomagnetic figure of merit (ZT) [1]. The thermomagnetic effect is particularly pronounced below 10 K due to the extremely long mean free path of the electrons in Bi [2]. In addition, Bi nanowires become semiconducting as their diameters approach several nanometers, thereby exhibiting a large thermoelectric power factor [3,4]. As a conventional bulk thermoelectric material, Bi 1−x Sb x has drawn more attention than Bi, since alloying with a small amount of Sb causes Bi 1−x Sb x to become a narrow-gap semiconductor, which is advantageous for high thermoelectric efficiency. Currently, Bi 1−x Sb x (x ≈ 0.12) is the best n-type thermoelectric material below 200 K [5].
Before discussing the lattice thermal transport, we emphasize that electrons, in addition to phonons, carry a considerable amount of heat in Bi, Sb, and Bi-Sb alloys. Therefore, both phonons and electrons contribute to the total thermal conductivity, which can be expressed as κ tot = κ ph + κ e, where κ tot and κ ph are the total thermal conductivity and the lattice thermal conductivity, respectively. The term κ e includes the thermal conductivity of electrons and holes, as well as the bipolar contribution. The κ e of Bi, Sb, and Bi-Sb alloys is expected to contribute substantially to κ tot, since these materials are either semimetals or semiconductors with a very narrow band gap. Accurate methods to obtain κ ph and κ e separately are crucial to developing better thermoelectric materials, but separating κ ph and κ e is experimentally nontrivial. κ ph can be directly measured under a high magnetic field, because such fields largely suppress electron transport. Previous measurements [6,7] in practical temperature ranges (100-300 K) utilized this method, but the prior measurements are mainly limited to transport along the binary crystallographic direction. We could not find any reports on κ ph along the trigonal direction, which is expected to have a greater ZT than the binary direction and thus is of more interest. Another way to separate κ ph and κ e is to estimate κ e using either the Wiedemann-Franz law or other electron transport properties, such as the electrical conductivity and Seebeck coefficient [8]. Such an approach provides a reasonable qualitative analysis, but the validity of the Wiedemann-Franz law and the simple electron transport models used in the estimation of κ e is sometimes questionable for quantitative purposes [9].
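The Wiedemann-Franz route mentioned above reduces to a one-line product, κ e = L0 · σ · T. A minimal sketch follows; the Lorenz number is the standard Sommerfeld value, but the conductivity is a placeholder, not a measured Bi value:

```python
# Wiedemann-Franz estimate of the electronic thermal conductivity:
# kappa_e = L0 * sigma * T. sigma below is illustrative only.
L0 = 2.44e-8      # Sommerfeld Lorenz number [W*Ohm/K^2]
sigma = 8.0e5     # electrical conductivity [S/m] (placeholder)
T = 300.0         # temperature [K]
kappa_e = L0 * sigma * T
print(kappa_e)    # -> 5.856 W/m-K for these inputs
```

Subtracting such an estimate from a measured κ tot gives κ ph, but, as noted in the text, the result inherits any error in the assumed electron transport model.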
In this paper, we study the lattice dynamics and quantify κ ph for Bi, Sb, and Bi-Sb alloys from first principles and the Boltzmann transport equation. As shown in recent papers [10][11][12][13][14][15][16], this approach provides excellent agreement with experimental data for many pair-bonded materials, such as Si, GaAs, and Si-Ge alloys. We follow the same approach but pay special attention to the range of interatomic interactions. This is because Bi, unlike pair-bonded materials, has significant interaction strength out to large-number neighbors, such as the ninth-nearest neighbor [17,18].
II. SECOND-AND THIRD-ORDER FORCE CONSTANTS
In this paper, we calculated the second- and third-order force constants using density functional theory. The calculation of the second-order force constants of Bi and Sb is based on the real space approach [19]. We calculated the force exerted on each atom when we displace one or multiple atoms in a 4 × 4 × 4 supercell (128 atoms). For the supercell calculation, we used 30 Ry for the cutoff energy of the plane wave basis and a 4 × 4 × 4 k-point mesh for Brillouin zone sampling, both of which were carefully checked for convergence. The calculation was performed with the ABINIT package [20] and Hartwigsen-Goedecker-Hutter pseudopotentials [21]. The valence electrons in the pseudopotential are 6s 2 6p 3 and 5s 2 5p 3 for Bi and Sb, respectively. The spin-orbit interaction is included in all calculations because of the strong spin-orbit interaction in Bi and Sb [22]. The second-order force constants are then fitted to the calculated displacement-force data set while enforcing translational and rotational invariance. In the fitting process, we considered up to the 14th neighbors to include the previously reported long-ranged interaction occurring at the ninth neighbor [17,18]. The ninth neighbors are shown by the atom labeled C in Fig. 1, where the origin atom is labeled A. Bi and Sb both have a slightly distorted simple cubic crystal structure. Due to this small crystallographic distortion, the six first neighbors in the cubic structure become three first neighbors and three second neighbors. In Fig. 1, atom B is the first neighbor to atom A and the second neighbor to atom C. The almost collinear chain consisting of AB and BC forms the ninth-neighbor relation, and atom C is the ninth neighbor to atom A.
In the following discussions, the fourth and ninth neighbors are frequently mentioned to discuss the range of the force constants. The fourth and ninth neighbors in the rhombohedral crystal structure of Bi correspond to the second neighbor (separated by √2 a) and the fourth neighbor (separated by 2a), respectively, in the undistorted cubic structure, where a is the lattice constant of the simple cubic structure.
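This correspondence can be checked numerically by enumerating the neighbor shells of an undistorted simple cubic lattice; a short sketch (with the lattice constant set to 1 for convenience) follows:

```python
import itertools, math

# Enumerate neighbor shells of a simple cubic lattice (a = 1).
# The 2nd and 4th shells sit at sqrt(2)*a and 2*a, the distances that
# map onto the 4th and 9th neighbors of the rhombohedral cell.
sites = [s for s in itertools.product(range(-3, 4), repeat=3) if s != (0, 0, 0)]
shell_d2 = sorted({i * i + j * j + k * k for i, j, k in sites})
shells = [math.sqrt(d2) for d2 in shell_d2[:4]]  # first four shell distances
print(shells)  # [1.0, 1.414..., 1.732..., 2.0]
```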
The third-order force constants were calculated by taking finite differences of the second-order force constants [23]. We built a 3 × 3 × 3 supercell consisting of 54 atoms, and we displaced one of the two basis atoms along the +R 1 direction in Fig. 1 by 0.04 Å. The displacement value of 0.04 Å was chosen after carefully checking the convergence of the third-order force constants with respect to the displacement value. The size of the supercell was large enough to include the significant ninth-neighbor interaction. In addition, the large size of the supercell minimizes the effect from the periodic images of the displaced atom due to periodic boundary conditions. For the calculation, a cutoff energy of 30 Ry and a 3 × 3 × 3 k-point mesh are used. We then calculate the second-order force constants using density functional perturbation theory [24,25]. All of the procedures are repeated for another supercell with the displacement along the −R 1 direction. By taking the finite differences of the second-order force constants of the two different supercells, the third-order force constants with respect to the R 1 direction are calculated. Rotational invariance with respect to the trigonal direction is then applied to calculate the third-order force constants with respect to the R 2 and R 3 directions. Translational invariance is applied to the third-order force constants by modifying the self-interaction terms.
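The finite-difference scheme just described — third-order constants from the change of second-order constants under small ± displacements — can be illustrated with a one-dimensional toy potential; the coefficients k and c below are made-up values, not Bi force constants:

```python
# Toy 1D version of the central-difference scheme. For
# U(x) = 0.5*k*x**2 + (c/6)*x**3 the second derivative is U''(x) = k + c*x,
# so differencing U'' at +/-d recovers the cubic (third-order) coefficient c.
k, c = 5.0, 1.3   # harmonic and cubic force constants (hypothetical)

def second_order(x):
    return k + c * x  # analytic U''(x) for the toy potential

d = 0.04          # displacement amplitude, as used in the paper (0.04 A)
phi3 = (second_order(+d) - second_order(-d)) / (2 * d)
print(phi3)       # recovers c (= 1.3) up to rounding
```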
We calculated phonon dispersions and mode Grüneisen parameters to validate the calculated second- and third-order force constants. In Fig. 2(a), we plot the second-order force constant tensors versus distance. Both Bi and Sb have interactions of significant magnitude occurring at the ninth neighbors, which agree well with the previous reports [17,18]. In Fig. 3, the calculated phonon dispersions for Bi and Sb are compared with the experimental values. Both calculated phonon dispersions are similar to the experimental data, confirming the accuracy of the calculated second-order force constants.
Since the ninth neighbors in Bi and Sb have significant second-order force constants, the third-order force constants at the ninth neighbors should also be of interest. In Fig. 2(b), we plot the two-body third-order force constants as a function of distance. Each dot represents a third-order force constant. As seen in Fig. 2(b), the third-order force constants have substantial values at the ninth neighbors. The importance of the ninth-neighbor interaction for crystal anharmonicity can be checked with the mode Grüneisen parameters. The mode Grüneisen parameters are calculated with two different sets of third-order force constants: one includes up to the fourth neighbors and the other includes up to the 10th neighbors. To validate the third-order force constants, reference mode Grüneisen parameters are also calculated. For the reference mode Grüneisen parameters, we used density functional perturbation theory to calculate the phonon frequencies for two different crystal volumes: a crystal at equilibrium and one with the volume increased by 1%. We then take the finite differences of the two phonon frequencies and calculate the mode Grüneisen parameters from the definition γ = −dlnω/dlnV, where ω and V are a phonon frequency and a crystal volume, respectively. Shown in Fig. 4 are the calculated acoustic mode Grüneisen parameters. Figure 4 shows that the acoustic mode Grüneisen parameters are underestimated over a range of wave vectors when the third-order force constants are considered only up to the fourth neighbors. Even after considering up to the eighth neighbors, the mode Grüneisen parameters are relatively unchanged. This is consistent with the negligible third-order force constants at the fifth, sixth, seventh, and eighth neighbors, as shown in Fig. 2(b). However, when extending the range up to the 10th neighbors, the calculated acoustic mode Grüneisen parameters agree reasonably well with the reference Grüneisen parameters. This confirms that the ninth-neighbor interaction plays a significant role in the anharmonic properties. The optical mode Grüneisen parameter was also determined from third-order force constants that included up to the fourth- and 10th-neighbor interaction terms. Both cases yielded similar values for the optical Grüneisen parameter.
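The finite-difference definition γ = −dlnω/dlnV used for the reference values can be evaluated in a few lines; the frequencies and volumes below are toy numbers, not DFPT output:

```python
import math

# Finite-difference mode Grüneisen parameter from frequencies at the
# equilibrium volume and at a volume increased by 1% (toy values).
V0, V1 = 1.000, 1.010          # volumes (arbitrary units)
w0, w1 = 2.500, 2.480          # mode frequency at each volume (toy, THz)
gamma = -(math.log(w1) - math.log(w0)) / (math.log(V1) - math.log(V0))
print(gamma)                   # positive: the mode softens on expansion
```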
The significant interaction at the ninth-nearest neighbors can be explained by the resonant bonding in Bi and Sb [26]. Bi and Sb have very weak sp hybridization, and the s band is well below the p band [27]. Therefore, s electrons do not participate in the chemical bonding, and we can consider only p electrons forming the chemical bonds. For the three p electrons per atom in Bi or Sb to meet the six-fold coordination requirement of the cubiclike crystal structure, the electrons alternate their positions among six chemical bonds, leading to the type of chemical bonding called resonant bonding [28]. This resonant bonding picture implies two important features: (1) electrons are highly delocalized and are therefore easily polarized upon external perturbations, and (2) the chemical bonds in Bi and Sb are almost collinear due to the cubiclike crystal structure. The almost collinear bonding can be seen in Fig. 1, as explained earlier. These two features result in the significant interaction at the ninth-nearest neighbors. The electron polarization induced by the displacement of the origin atom is long ranged along the collinear bonding direction due to the large electronic polarizability and the almost collinear bonding. This long-ranged electron polarization reaches the ninth-nearest neighbors, giving rise to the significant interatomic interaction between the origin and the ninth-nearest-neighbor atoms.
To study the effects of alloying on κ ph, the virtual crystal approximation is used [29]. The atomic mass and the force constants of the virtual crystal were linearly interpolated between Bi and Sb, weighted by the composition ratio of the constituents. The lattice constant of the virtual crystal is also averaged according to the composition ratio, which is well justified by the fact that the Bi-Sb alloy follows Vegard's law [30]. Three-phonon scattering is calculated using the virtual crystal approximation, while the atomic mass disorder is treated as an additional elastic scattering mechanism. This approach was successful in predicting the Si-Ge alloy thermal conductivity [16].
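The virtual-crystal averages amount to simple composition-weighted interpolation. A minimal sketch for Bi 88 Sb 12, using standard atomic masses in amu, is shown below; the mass-variance factor g is the Tamura-type quantity used for the disorder scattering rate:

```python
# Virtual-crystal approximation for Bi(1-x)Sb(x): linearly interpolate the
# atomic mass by composition (the full calculation also interpolates force
# constants and the lattice constant), and compute the mass-variance
# factor g = sum_i f_i * (1 - m_i / m_avg)**2.
m_bi, m_sb = 208.980, 121.760   # atomic masses [amu]
x = 0.12                        # Sb fraction (Bi88Sb12)
m_avg = (1 - x) * m_bi + x * m_sb
g = (1 - x) * (1 - m_bi / m_avg) ** 2 + x * (1 - m_sb / m_avg) ** 2
print(m_avg, g)                 # m_avg ~ 198.5 amu, g ~ 0.02
```

The sizable g reflects the large Bi-Sb mass contrast, consistent with the strong alloy-scattering suppression of κ ph discussed later.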
III. SCATTERING RATE AND LATTICE THERMAL CONDUCTIVITY
The lattice thermal conductivity can be calculated from the distribution function of the phonon modes. We calculate the distribution function by solving the linearized Boltzmann equation with the scattering rates due to the three-phonon process and mass disorder. The scattering rate of the three-phonon process is obtained by summing, over all pairs of modes satisfying energy and momentum conservation, terms weighted by |V 3|² and δ functions of the phonon frequencies, where 1, 2, and 3 denote the phonon modes in the three-phonon process, while n 0 and ω indicate the Bose-Einstein equilibrium distribution function and the phonon frequency, respectively. The three-phonon scattering matrix element V 3 is built from the third-order force constants Φ αβγ (with Cartesian coordinates αβγ and Rb representing the lattice vector and basis atom) contracted with the phonon eigenvectors. Here, e αb denotes the phonon eigenvector component of the basis atom b along direction α, while N is the total number of wave vectors in the first Brillouin zone. The mass disorder scattering rate scales with the mass variance factor g, defined by g = Σ i f i (1 − m i /m̄)², where f i is the fraction of element i and m̄ is the average atomic mass. Putting both scattering rates above into the Boltzmann equation, we obtain a linear equation for Ψ, the linearized deviation of the distribution function from equilibrium, defined as Ψ = (n 0 − n)/(∂n 0/∂β) with β = ℏω/k B T. We solve the linearized Boltzmann equation iteratively to find Ψ/∇T. The detailed procedure is provided in other papers [12]. In contrast to the iterative method mentioned above, a more commonly used method to solve the Boltzmann equation is to neglect Ψ 2 and Ψ 3 in Eq.
(6), and this approximation is known as the single-mode relaxation time (SMRT) approximation. The SMRT assumes only one phonon mode is ever out of equilibrium. The time for the nonequilibrium mode to relax to equilibrium is then calculated. We used both the full iterative method and the SMRT approximation to calculate the lattice thermal conductivity from the Boltzmann equation, and we compare the results from the two methods. After solving the Boltzmann equation, the lattice thermal conductivity tensor κ αβ can be obtained by summing the heat-flux contributions of all phonon modes, where α and β are Cartesian directions, V is the crystal volume, and v is the phonon group velocity. One of the numerical uncertainties in this calculation occurs in the energy conservation of the scattering rate calculation. Due to computational limitations, the Brillouin zone is sampled with a relatively coarse mesh. To find sets of three phonons satisfying energy and momentum conservation, each point in the coarse mesh is usually broadened by a Gaussian function. However, in this case, numerical uncertainties arise from the tuning of two adjustable parameters (mesh size and Gaussian width). To avoid this artifact, a tetrahedron method is utilized for the Brillouin zone integrations of the δ functions [31]. With this method, the mesh size is the only adjustable parameter; consequently, the calculation should converge as the mesh size is increased. For our calculation, a mesh size of 16 × 16 × 16 was suitable for convergence.
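In the SMRT limit, the thermal conductivity reduces to a sum of per-mode contributions of the form c·v²·τ divided by the crystal volume. The sketch below uses a handful of made-up modes rather than a real phonon spectrum:

```python
# SMRT-style lattice thermal conductivity:
# kappa = (1/V) * sum over modes of c * v**2 * tau.
# The mode list is illustrative only (not Bi phonon data).
modes = [               # (mode heat capacity [J/K], |v| [m/s], tau [s])
    (1.0e-23, 2000.0, 5e-12),
    (1.0e-23, 1500.0, 8e-12),
    (0.5e-23, 1000.0, 2e-11),
]
V = 1.0e-27             # crystal volume [m^3] (placeholder)
kappa = sum(c * v * v * tau for c, v, tau in modes) / V
print(kappa)            # W/m-K
```

The full iterative solution replaces the per-mode τ with the self-consistent deviation function Ψ, but the structure of the final sum over modes is the same.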
IV. RESULTS AND DISCUSSION
In Fig. 5(a), we show that the ninth-neighbor interaction has a significant effect on the lattice thermal conductivity. We compare the κ ph in the binary direction calculated with two different force constant sets: one set includes up to the 10th neighbor, and the other includes up to the fourth neighbor for the third-order force constants. In both cases, the second-order force constants include up to the 14th neighbor; otherwise, the phonon dispersion is not stable, and the phonon frequencies of some modes have imaginary values. As shown in the mode Grüneisen parameter plot (Fig. 4), when the ninth-neighbor interaction is not included in the third-order force constants, the crystal anharmonicity is largely underestimated. Figure 5(a) explicitly shows that κ ph is significantly overestimated when the ninth-neighbor interaction is not included in the third-order force constants. When the third-order force constants include up to the 10th-neighbor interactions, the calculated κ ph is half of the value obtained when including only third-order force constants up to the fourth neighbor.
The calculated results with the ninth-neighbor interaction are validated by comparing them to the previously reported experimental data [6,7]. Figure 5(a) shows that our calculation results with the ninth-neighbor interaction agree well with the experimental data of Uher and Goldsmid [6]. Our calculation is further confirmed by comparison to another measurement by Kagan and Red'ko [7], showing κ ph ≈ 5 W/m-K around 250 K. In contrast, another reported value for κ ph by Gallo et al. [8], which is calculated from the difference between the measured κ tot and the calculated κ e, as briefly discussed later, shows disagreement with our calculation near room temperature. Our calculated κ ph is twice the reported value [8] at room temperature. This disagreement could stem from the simple electron transport model used in the referenced paper [8]. Instead of directly measuring the lattice thermal conductivity, Gallo et al. obtained the electronic thermal conductivity from an electron transport model using a parabolic band structure and an electron scattering rate that obeys a simple power law. The measured Seebeck coefficient and electrical resistivity determine the electron contribution to the thermal conductivity, and then the lattice thermal conductivity is calculated by subtracting the deduced electronic thermal conductivity from the measured total thermal conductivity. To reiterate, our calculation near room temperature is well validated by Kagan and Red'ko's direct measurement [7].
We also see in Fig. 5(a) that the results from the SMRT approximation are similar to the calculations from the full iterative solution of the Boltzmann equation. This is because the temperatures in our calculations are large compared to the Debye temperature of Bi (120 K). When the temperature is not significantly smaller than the Debye temperature, umklapp scattering is dominant over normal scattering. In this case, the SMRT is usually a good approximation.
In Fig. 5(b), we compare the binary (⊥) and the trigonal (∥) directions of Bi in terms of κ ph. The previous paper based on estimating the electronic thermal conductivity [8], mentioned above, estimates that κ ph,∥ is half of the value of κ ph,⊥ in Bi at room temperature. Our calculation shows that κ ph,∥ is smaller than κ ph,⊥, but the difference is less than 10%. The relative value of κ ph,∥ compared to κ ph,⊥ can be explained by the fact that the rhombohedral structure of Bi is close to the cubic structure but slightly stretched along the trigonal direction. Therefore, the atomic bonding is slightly softer in the trigonal direction than in the binary direction, resulting in the lower lattice thermal conductivity in the trigonal direction. However, the distortion from the exact cubic structure is very small: the rhombohedral angle of Bi (α in Fig. 1) is 57°30, similar to the 60° of the exact cubic structure [30]. This very small distortion explains the almost isotropic κ ph of Bi shown in Fig. 5(b). The almost isotropic lattice thermal conductivity of Bi is in contrast with its well-known highly anisotropic electron transport properties [5]. This shows that the small distortion of the crystal structure of Bi affects the electron and the phonon transport to very different extents. Even though the distortion of the Bi crystal structure from the exact cubic structure is very small, this small distortion causes highly anisotropic shapes in the very small electron and hole pockets responsible for its electronic transport properties, giving rise to largely anisotropic electron transport behavior. However, the small distortion does not much affect the lattice vibrational properties; thus, κ ph is observed to be almost isotropic.
We also compare the κ ph and κ tot of Bi in Fig. 5(b) to estimate the relative contributions from phonons and electrons to κ tot. In the binary direction, κ ph,⊥ is ≈60% of κ tot,⊥ at 100 K, and its contribution decreases with temperature. In the trigonal direction, the phonon contribution κ ph,∥ to κ tot,∥ is more significant than in the binary direction, with a contribution of ≈75% at 100 K. Based on this large contribution of κ ph,∥ to κ tot,∥, we conclude that it should be possible to reduce the thermal conductivity effectively by enhancing phonon scattering, as recently demonstrated in Bi 1.4 Sb 0.6 Te 3 and PbTe [32,33]. In particular, the large lattice contribution in the trigonal direction is interesting, because the electron transport in this direction of Bi has a favorable feature for a high thermoelectric power factor. The electrons in the trigonal direction of Bi have an extremely large value for the product of the mobility and the density-of-states effective mass, μ(m*/m)^(3/2), due to the high anisotropy in its electronic structure, which is directly related to the thermoelectric power factor [5].
Many features of κ ph in Sb, presented in Fig. 6, show strong similarities to the thermal conductivity of Bi. The ninth-neighbor interaction in Sb is also significant, and κ ph is significantly overestimated without including this contribution in the calculation. The SMRT is a good approximation for Sb since its Debye temperature is also small (≈200 K). The distortion from the exact cubic structure is also small for Sb, as it is in Bi, resulting in an almost isotropic κ ph. The most noticeable difference between Bi and Sb is the contribution of κ ph to κ tot, comparing Figs. 5(b) and 6(b). The κ ph contribution is comparable to κ e in Bi, but κ ph contributes only a small portion of κ tot in Sb. In other words, κ e is significant in Sb, because the carrier density in Sb is two orders of magnitude larger than that of Bi [34].
The κ ph of Bi, Sb, and Bi-Sb alloys is presented in Fig. 7. Our calculation for Bi 88 Sb 12 agrees well with the experimental data for κ ph by Kagan and Red'ko [7], showing ≈3 W/m-K around 100 K and ≈2 W/m-K around 250 K for Bi 87 Sb 13. Figure 7(a) shows that the κ ph of Bi can be significantly reduced by alloying with small concentrations of Sb. The composition Bi 88 Sb 12, which has the highest ZT among the Bi-Sb alloys, has a κ ph four times smaller than that of Bi at 100 K. In order to study the anisotropy of phonon transport, we compare the κ ph in the binary and trigonal directions. The Bi-Sb alloy, like its Bi and Sb constituents, has a smaller κ ph in the trigonal direction, but the difference between the trigonal and the binary directions is very small, indicating a predominantly isotropic κ ph.
The comparison of κ tot and κ ph in Figs. 7(b) and 7(c) shows that the κ tot of Bi 88 Sb 12 comes predominantly from lattice contributions at low temperature. Around 75 K, the calculated κ ph of Bi 88 Sb 12 is comparable to the measured κ tot for either the trigonal or the binary direction. The κ e, in this case, is expected to be small due to the positive electronic band gap (≈30 meV) of Bi 88 Sb 12 [34]. The number of charge carriers in Bi 88 Sb 12 is much smaller than in Bi and Sb, resulting in the smaller κ e. However, comparing the measured κ tot and the calculated κ ph shows that κ e increases with temperature. This can be explained by the increasing charge carrier density and increasing bipolar thermal transport as temperature increases. From Fig. 7(c), κ e becomes comparable to κ ph near room temperature. Another noticeable feature of the κ ph of Bi 88 Sb 12 is its insensitivity to temperature variation. This is because mass disorder scattering, a temperature-independent process, is the dominant phonon scattering mechanism in this alloy.
Finally, we show in Fig. 8 the accumulated thermal conductivity versus phonon mean free path. The accumulated thermal conductivity is defined as κ acc(Λ) = Σ qλ κ qλ χ(Λ qλ; Λ) [35,36], where κ qλ represents the thermal conductivity of the phonon mode with wave vector q and polarization λ. Here, Λ is the phonon mean free path, and χ is a step function: χ = 1 when Λ qλ < Λ, and χ = 0 otherwise. The accumulated thermal conductivity shows the range of mean free paths of the phonon modes that significantly contribute to thermal transport [35,36]. From Fig. 8(a), we see that most of the heat is carried by phonons with mean free paths ranging from 10 to 100 nm at 100 K. However, the phonon mean free path range of the Bi 88 Sb 12 alloy is slightly different from that of Bi in the 50- to 100-nm region: in Fig. 8(b), the distribution of the alloy is extended to longer mean free paths compared to Bi. This is because the alloy scattering is effective for high-frequency phonons but not as effective for low-frequency phonons. If the alloy scattering is approximated by a point defect scattering mechanism, the Rayleigh scattering model shows that the scattering rate is proportional to the fourth power of the phonon frequency.
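The accumulation just defined is simply a thresholded sum over modes. A minimal sketch, using placeholder (κ, mean free path) pairs rather than calculated Bi modes, is:

```python
# Accumulated thermal conductivity: for a cutoff mean free path L, sum the
# kappa contributions of all modes whose mean free path is below L.
# The (kappa [W/m-K], mfp [m]) pairs below are placeholders.
modes = [(0.5, 12e-9), (1.5, 40e-9), (2.0, 90e-9), (1.0, 300e-9)]

def kappa_accumulated(cutoff):
    return sum(k for k, mfp in modes if mfp < cutoff)

print(kappa_accumulated(100e-9))  # contributions below 100 nm -> 4.0
print(kappa_accumulated(1e-6))    # all modes -> total kappa = 5.0
```

Sweeping the cutoff traces out curves like those in Fig. 8 and indicates which mean-free-path range a nanostructure of a given size would suppress.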
Figure 8(a) shows that nanostructures on the 10- to 100-nm length scale can significantly contribute to phonon scattering, ultimately resulting in a greatly reduced thermal conductivity in both Bi and the Bi-Sb alloys. In addition to the reduction in κ ph, it is known that Bi nanowires become semiconducting and exhibit a high power factor when the diameter is on the order of 10 nm [3,4]. If the harmonic and anharmonic force constants of Bi nanowires are not drastically different from those of bulk-phase Bi, the phonon mean free path distribution from bulk Bi calculations can guide the design of Bi nanowires for high ZT. To provide a strategy for reducing κ ph through nanostructuring, we present the phonon mean free path distributions of Bi at various temperatures in Fig. 8(c). From Fig. 8(c), we see that nanostructures having characteristic sizes of ≈10 nm would be effective for suppressing κ ph in the temperature range of 100 to 300 K, because they are expected to reduce the lattice thermal conductivity by a factor of 10 at 100 K to a factor of 3 at 300 K if boundary scattering is assumed to be completely diffuse.
V. CONCLUSIONS
We calculate the lattice thermal conductivities of Bi, Sb, and Bi-Sb alloys from first principles. We explicitly show that the significant ninth-neighbor interaction is important for the anharmonic interatomic force constants, phonon scattering, and the lattice thermal conductivity. Our calculation agrees well with the experimental lattice thermal conductivity values for the binary direction. We also provide the lattice thermal conductivity values for the trigonal direction, which has not been directly measured. From our calculation, the lattice thermal conductivity is almost isotropic in Bi, showing a significant contrast with its highly anisotropic electron transport. This implies that the small distortion in the crystal structure can affect the electron and the phonon transport to very different extents. By comparing our calculated lattice thermal conductivity to the measured total thermal conductivity, we compare the relative thermal conductivity contributions from phonons and electrons. The lattice thermal conductivity is comparable in magnitude to the electronic thermal conductivity in Bi. In Sb, however, the electronic contribution to the thermal conductivity is much more dominant because of the larger charge carrier concentration. In Bi 88 Sb 12, the lattice thermal conductivity is the dominant contributor below 75 K, but it becomes less significant as the temperature increases. Finally, we calculate the phonon mean free path distributions at various temperatures, providing a useful guide in determining appropriate nanostructure sizes for achieving significant lattice thermal conductivity reduction.
FIG. 1 .
FIG. 1. Crystal structure of Bi and Sb. The void and filled atoms represent the two basis atoms. R 1, R 2, and R 3 are primitive lattice vectors, and α is the rhombohedral angle. The values of α are 57°30 for Bi and 57°84 for Sb, which are close to the 60° of the simple cubic structure.
FIG. 3 .
FIG. 3. (Color online) Phonon dispersion of (a) Bi and (b) Sb. Dots are experimental values from [37] for Bi and [38] for Sb. (c) The high-symmetry points in the Brillouin zone.
FIG. 4 .
FIG. 4. (Color online) Acoustic mode Grüneisen parameters of (a) Bi and (b) Sb, comparing inclusion up to the fourth and 10th neighbors to the references. The reference Grüneisen parameters are calculated using the difference of the phonon frequencies of the two different crystal volumes.
FIG. 5 .
FIG. 5. (Color online) Thermal conductivity of Bi (a) in the binary direction and (b) in comparison between the binary and the trigonal directions. κ ph in (b) is calculated with the SMRT approximation and using third-order force constants up to the 10th neighbors. The solid lines and dots represent our first-principles calculation results and the experimental data from the literature, respectively.
FIG. 6. (Color online) Thermal conductivity of Sb (a) in the binary direction and (b) in comparison between the binary and the trigonal directions. The solid lines and dots in (b) represent our first-principles calculation results and the experimental data from the literature, respectively.
FIG. 7 .
FIG. 7. (Color online) Thermal conductivity of the Bi-Sb alloys: (a) the effect of Sb content on the lattice thermal conductivity of Bi-Sb alloys; (b) comparison between the total and the lattice thermal conductivities of Bi, Sb, and Bi 88 Sb 12 ; and (c) an enlarged plot for the Bi 88 Sb 12 data along the binary and trigonal directions.
FIG. 8 .
FIG. 8. (Color online) Phonon mean free path distribution of (a) Bi, Bi 99 Sb 1 , Bi 88 Sb 12 , and Sb at 100 K; (b) Bi and Bi 88 Sb 12 at 100 K; and (c) Bi at 50, 100, 200, and 300 K for the binary and trigonal directions.In (b) and (c), the accumulated thermal conductivity is normalized by the lattice thermal conductivity value.
|
2016-04-23T08:45:58.166Z
|
2014-02-01T00:00:00.000
|
{
"year": 2014,
"sha1": "586306813f8aab77118bbe8074d2a0251b2412d1",
"oa_license": "CCBYNC",
"oa_url": "https://dspace.mit.edu/bitstream/1721.1/88767/2/Lee-2014-Lattice%20thermal%20conductivity.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "586306813f8aab77118bbe8074d2a0251b2412d1",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
}
|
222175423
|
pes2o/s2orc
|
v3-fos-license
|
Use of Higher-Harmonic and Intermodulation Generation of Ultrasonic Waves to Detecting Cracks due to Steel Corrosion in Reinforced Cement Mortar
The aim of this work was to provide further confirmation of the possible use of non-linear ultrasonic techniques for detecting the cracking due to corrosion of steel reinforcements in concrete. To this end accelerated steel corrosion tests have been conducted on model reinforced cement mortar specimens, while monitoring the appearance and width evolution of visible surface cracks, and performing non-linear ultrasonic measurements based on the phenomena of harmonic distortion and intermodulation. A new parameter, based on the difference between the amplitude of the fundamental frequency and the sum of the amplitudes of all the first-order and second-order intermodulation products, has been proposed in this work. The results confirm that the appearance of visible surface micro-cracks are preceded and accompanied by the observation of strong non-linear features in the received signal. Furthermore, the new parameter proposed in this work is as efficient as the relative non-linearity parameters, classically used in harmonic distortion non-linear ultrasonic studies, for detecting the non-linear features associated with the critical events of the cracking of cement mortar due to embedded steel corrosion. A hypothesis has been developed considering the possible effect of the filling of the void space by liquid containing rust products after the formation of new cracks or the enlargement of its width. This filling process, which might be particularly enhanced by net convective transport of liquid, would explain the evolution of the values of all the parameters used for putting in evidence the non-linear elastic features after the critical events of the cracking process.
Page 2 of 17 Climent-Llorca et al. Int J Concr Struct Mater (2020) 14:52
One of these parameters is the corrosion rate of steel in the reinforced concrete structure, which can be used as input in theoretical models devoted to estimating its remaining service life (EHE-08 2010). Unfortunately, these techniques do not provide information about the cracks induced by the corrosion phenomena in the cementitious matrix of the composite material. Ultrasonic techniques are one of the most widespread approaches for the detection of cracks and defects in a wide range of materials, including cementitious materials, due to their non-destructive character and direct applicability to on-site studies (Blitz and Simpson 1995). In this context, there are passive techniques for damage monitoring in concrete, based on acoustic emission phenomena (Ohtsu 2010; Zaki et al. 2015), and active ones that imply the generation and propagation of ultrasonic waves through the material under analysis. These active techniques use the impact-echo method (Liang and Su 2001) or the ultrasonic pulse velocity (UPV) test. Nevertheless, even though there is a standardized procedure to carry out the latter measurements (ASTM C597-16 2016), it has been shown that the sensitivity and accuracy of these linear techniques are to a certain extent limited for the detection of micro-cracks compared to non-linear techniques (Jhang 2009; Shah and Ribakov 2009a; Antonaci et al. 2012, 2013). In recent years, many research efforts have been devoted to the development of these non-linear ultrasonic (NLU) techniques so as to increase the detection capabilities of the tests. Some of these techniques are based on changes in the resonance frequency (spectroscopy) (Van den Abeele et al. 2000), on the generation of higher-order harmonics (Shah and Ribakov 2009b), or on the emergence of intermodulation products (Burrascano et al. 2019).
The appearance or increase of the amplitude of harmonics or intermodulation products (see Fig. 1) is considered a symptom of deterioration and a loss of quality of the device or material studied. For example, in the case of loudspeakers, a catalog of non-linearities associated with defects or deterioration of specific parts could be defined. A previous publication (Klippel 2006) addressed the relationship between non-linear distortion measurements and the non-linearities that are the physical causes of signal distortion in loudspeakers, headphones, microspeakers and other transducers. The application of exponential swept-sine excitation signals deserves special mention, since they allow the simultaneous deconvolution of the linear impulse response of the system and of separate impulse responses for each harmonic distortion order (Farina 2000; Novak et al. 2010, 2015; Burrascano et al. 2019). A new, simple and easy-to-implement method, termed the Scaling Subtraction Method (SSM), has been proposed for enhancing the capability to detect the non-linear elastic response of a system (Scalerandi et al. 2008; Antonaci et al. 2010, 2013). Figure 1 graphically shows the concepts of harmonic distortion (HD) and intermodulation distortion (IMD) (Price and Goble 1993). HD takes place when a system is excited by a signal of frequency f1. In this case, higher-order frequencies appear at the output (f2 = 2f1, f3 = 3f1, and so on), a phenomenon termed higher-order harmonic generation (Fig. 1a).
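The exponential swept-sine excitation mentioned above can be sketched numerically. The sample rate below matches the acquisition setup described later in the paper, but the sweep duration and frequency span are illustrative assumptions, not values from the study:

```python
import numpy as np

# Sketch of an exponential swept-sine excitation (Farina method).
fs = 2_000_000                    # sample rate (Hz), as in the acquisition setup
T = 0.05                          # sweep duration (s) -- illustrative
f_start, f_stop = 5_000, 90_000   # span covering fundamental and harmonics -- illustrative
L = np.log(f_stop / f_start)

t = np.arange(int(T * fs)) / fs
phase = 2 * np.pi * f_start * T / L * (np.exp(t * L / T) - 1)
sweep = np.sin(phase)

# The instantaneous frequency (phase derivative / 2*pi) rises exponentially
# from f_start to f_stop; this property is what lets each harmonic-distortion
# order be deconvolved as a separate, time-shifted impulse response.
f_inst = np.gradient(phase, t) / (2 * np.pi)
```

Because the instantaneous frequency grows exponentially, the n-th harmonic of the sweep is itself a time-advanced copy of the sweep, which is why convolution with the inverse filter separates the distortion orders in time.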
IMD occurs at frequencies that are the sum and/or the difference of integer multiples of the fundamental frequencies. If a non-linear system is excited by two signals (f1 and f0), the non-linearity gives rise to additional output components at (f1 + f0) and (f1 − f0), known as the first-order intermodulation products. At the same time, the second-order products mix with the original signals, giving components at frequencies (f1 + 2f0), (f1 − 2f0), and so on (see Fig. 1b). Some reviews on the application of NLU techniques for non-destructive assessment of micro-damage in materials can be found in previous publications (Jhang 2000, 2009). These techniques have been used to characterize the damage of granite samples subjected to compressive loadings (Chen et al. 2014) and to investigate thermal damage in sandstone (Chen et al. 2017). Several works have been devoted to the study of damage of concrete induced by loading (Antonaci et al. 2010; Kim et al. 2018). One problem associated with ultrasonic inspections of concrete structures is the high degree of signal attenuation due to absorption and scattering (including backscattering) by the cement paste and aggregates. These phenomena lead to severe difficulties when only a single concrete surface is available (pitch-catch inspection mode). The combined use of pulse-compression, low-frequency coded waveforms and piezo-composite transducers has allowed these problems to be successfully overcome (Battaglini et al. 2014; Mohamed and Laureti 2015). For instance, the technique allowed detecting a steel rebar at a cover depth of 55 mm below the tested concrete surface in pitch-catch inspection mode (Laureti et al. 2018).
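The HD and IMD mechanisms of Fig. 1 can be illustrated numerically: a weak memoryless quadratic non-linearity (a hypothetical stand-in for distributed micro-damage, not a model of the mortar) excited by the two tones used in this work produces exactly the harmonic and sideband components described above. A minimal sketch:

```python
import numpy as np

fs = 2_000_000                  # sample rate (2 MS/s, as in the acquisition setup)
f1, f0 = 30_000, 2_000          # the two excitation tones used in the IMD tests (Hz)
t = np.arange(20_000) / fs      # 10 ms window -> 100 Hz bins, all tones on exact bins

x = np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f0 * t)
y = x + 0.05 * x**2             # hypothetical weak quadratic distortion term

spec = 2 * np.abs(np.fft.rfft(y)) / len(t)   # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(t), 1 / fs)

def amp(f_hz):
    """Spectral amplitude of the component nearest to f_hz."""
    return spec[np.argmin(np.abs(freqs - f_hz))]
```

With the 0.05 quadratic coefficient chosen here, the sidebands at f1 ± f0 and the second harmonic at 2f1 all appear with amplitude 0.025, while a purely linear system (y = x) leaves those bins empty.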
Only a few NLU studies of reinforcing steel corrosion in concrete can be found in the literature (Kwun et al. 1993;Woodward and Amin 2008;Korenska 2009;Antonaci et al. 2013).
Given the potential of the NLU techniques in terms of early damage detection and sensitivity to distributed micro-damage in materials, it was considered worthwhile to gain more experience on the applicability of higher-harmonic and intermodulation product generation to the study of micro-cracking induced by steel corrosion processes in cement-based materials. For this purpose, several prismatic reinforced cement mortar specimens were prepared and subjected to accelerated corrosion tests using an imposed electric field, while performing NLU measurements. The corrosion tests were conducted in conditions typical of experiments aimed at studying the evolution of the surface cracking due to steel corrosion. The results presented in this work indicate that the appearance of visible surface micro-cracks seems to be preceded and accompanied by the observation of strong non-linear features in the received signal: harmonic distortion and intermodulation phenomena are clearly observed. However, a possible influence of the filling of the cracks with liquid containing steel corrosion products on the evolution of the results of the NLU measurements is hypothesized. This possible effect had not been pointed out before.
Sample Preparation
The experimental tests were performed on a set of three prismatic reinforced cement mortar specimens having the same dimensions and composition, designated as specimens 9, 10 and 11. Cement mortar was chosen instead of concrete out of interest in a more homogeneous and simple model material: avoiding the presence of coarse aggregate prevents the higher heterogeneity of the composite, and the possibly higher wave attenuation and scattering (Battaglini et al. 2014), that it would produce compared to the siliceous sand (maximum size 4 mm) used as the aggregate in the mortar mix.
First, the cement mortar was prepared by mixing a standard siliceous sand aggregate and a sulphate-resisting ordinary Portland cement, CEM I 52.5 R-SR 3, in accordance with the UNE-EN 197-1 standard (Asociación Española de Normalización y Certificación 2011). The cement mortar was kneaded with water containing dissolved sodium chloride (NaCl), with a water/cement mass ratio (w/c) of 0.5. The admixed NaCl yielded a content of 2% Cl− relative to the cement weight in the hardened mortar (Climent et al. 2004), thus ensuring that the current efficiency of the electrically accelerated corrosion process was close to 100% (Nossoni and Harichandran 2012). In a following step, the fresh cement mortar mix was poured into 100 × 100 × 350 mm³ plastic molds before being compacted (manual compaction) and cured for 7 days in a humidity chamber at 20 °C and 95% relative humidity. Each of these molds allowed a steel rebar of 12 mm in diameter to cross each sample along its center, 10 mm beneath its upper surface. The steel bars were previously cleaned of native corrosion products following a recommended procedure (ASTM G1-03 2004) and weighed, and their ends were covered with vinyl electric tape to avoid the steel-mortar-air interface. This layout was chosen so as to favor the micro-cracking produced by the corrosion process emerging on the upper surface of the samples, and thus facilitate the monitoring of the micro-crack width growth over time using a microscope (Alonso et al. 1998). More details of the experimental procedure can be found in a previous publication (Climent et al. 2019). The composition data of the cement mortar mix are given in Table 1. Figure 2 shows the layout and dimensions of the reinforced cement mortar specimens, and a photo of one of them before starting the corrosion test.
Accelerated Corrosion Test
The accelerated corrosion tests were conducted using a potentiostat-galvanostat (Model 362, EG&G Instruments, Princeton NJ, USA). A constant anodic current density of 40 µA/cm² was applied between the steel rebar (anode) and an external galvanized steel grid (cathode) placed at the bottom of the specimens. To keep an appropriate electrical conductivity throughout the cement mortar, the samples were partially submerged (5 mm height) in a container filled with tap water, and a polypropylene sponge was placed between the mortar specimen and the steel grid (Climent et al. 2006). The duration of the accelerated corrosion test was different for each of the tested specimens; see Sect. 3.1. Given that the galvanostat provided a constant current density, it was possible to corrode the three specimens simultaneously by connecting them in series. The tests were performed under nearly controlled climatic conditions, with a relative humidity of 84% ± 4% and a temperature of 23 ± 1 °C, to minimize the influence of additional factors on the measurements, i.e., to avoid uncontrolled drying of the reinforced mortar specimens (Payan et al. 2010). Figure 3 shows an image of the accelerated corrosion test.
In this work, the physical damage of the cement mortar due to corrosion of the embedded steel bars has been followed by detecting the appearance of the first surface micro-crack and by monitoring the growth of the crack width over time. The monitoring process was possible because of the chosen setup and geometric conditions of the experiments, in which the cracks produced by steel corrosion appeared at the upper surface of the mortar specimen. A periodic inspection (daily measurements) of the mortar sample surfaces was carried out throughout the whole experiment using a microscope (magnification 40×, model 58-C0218, Controls, Milan, Italy). The limit of detection of the microscopic observations was approximately 10 µm (half the minimum division of the scale bar of the microscope: 20 µm). For each measurement, the whole upper surface of the mortar specimens was inspected in order to detect the first observable crack, or to record the maximum value of the crack width. Most of the time the maximum crack width was recorded at the same place where the first crack was observed; only on a few occasions was the maximum crack width recorded at a different position. Microscopic photographs were also taken daily at selected positions on the upper surface of the samples. In this way it was possible to plot the maximum crack width growth as a function of time; see Sect. 3.1. According to observations in previous publications (Alonso et al. 1998), the crack width growth should be approximately linear during the propagation period of the process of mortar cracking due to steel rebar corrosion. Another objective of this part of the research was to inspect the morphology of the steel-mortar interface and that of the open micro-cracks produced by the corrosion process.
To this end, the tests of the three reinforced mortar specimens were interrupted at different times, transversal cuts were made in the tested specimens, and photographs were taken showing the aspect of the steel-mortar interface and that of the cracked mortar covering the bar.
Experimental Setup and Procedure of the Non-linear Ultrasonic Measurements
The experimental setup for the NLU measurements is shown in Fig. 4. The measurements were conducted by means of a NI-USB 6361 multifunction I/O device with a sample frequency of 2 MS/s and an ADC resolution of 16 bits. For the HD experiments, a 30 kHz sinusoidal signal with a length of 10,000 cycles was sent to an FS WMA-100 voltage amplifier and then to the emitter transducer. Amplitudes from 120 to 200 V, in 10 V steps, were used for the excitation signal. The received signal was amplified using a signal conditioner 2693-A (Brüel & Kjaer, Naerum, Denmark) and then sent to the acquisition platform. For the IMD experiments, the emitter transducer was supplied with two tones at the same time (f1 = 30 kHz, f0 = 2 kHz). IDK09 transducers (Dakel 2019), with an isolated contact membrane (pure Al2O3 ceramic) and a piezoelectric ceramic active element (PbZrTiO3 (PZT)-modified ceramic, class 200), were used for both the emitter and receiver elements. This kind of PZT material is suitable for active (exciter) and passive (sensor) applications, and it is recommended for wide-band, non-resonant uses due to its low mechanical quality factor (Qm < 100). Both transducers were glued permanently (at the beginning of the NLU measurement series) to the reinforced cement mortar specimens using a quick-setting cyanoacrylate glue as the coupling agent. In this way, any possible defect or air void in the coupling interface is thought to have affected all the NLU measurements equally. Taking into account that in this work the interpretation of the NLU data is always done on a comparative basis (no absolute value is considered), it is believed that the influence of the coupling between sample and transducers is negligible. Previous works (Antonaci et al. 2013) have also shown that the effects due to small differences in coupling turned out to be negligible compared to the effects induced by corrosion of steel embedded in concrete.
The NLU measurements were conducted in a "direct transmission" mode (Antonaci et al. 2013), with fixed positions of the ultrasonic transducers on the specimen relative to the steel bar under test. The NLU measurements always began 1 day before the corresponding accelerated corrosion test. Two daily measurements (at intervals of 12 h) were taken in the course of the experiments. In order to carry out the measurements, a custom-made application was implemented using the system-design development software LabVIEW® (see Fig. 6). The application allowed the emission and signal acquisition by automatically stepping through the different voltage levels according to the configuration indicated by the user.
A rectangular window was applied to the steady-state interval of the received signal. The frequency spectrum of the windowed signal was obtained using the Fourier transform method (FFT algorithm). Then, the amplitudes of the fundamental (corresponding to 30 kHz), second harmonic (60 kHz) and third harmonic (90 kHz) were determined. A similar process was carried out for the intermodulation products (first-order intermodulation products are expected at 28 and 32 kHz, and second-order products near 26 and 34 kHz).
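The spectral-analysis step just described (rectangular window over the steady state, FFT, amplitude pick-up at the fundamental, harmonic and sideband frequencies) can be sketched as follows. The signal here is synthetic and its component amplitudes are arbitrary placeholders; a real measurement would replace the `sig` array:

```python
import numpy as np

fs = 2_000_000
t = np.arange(20_000) / fs   # 10 ms steady-state window -> 100 Hz bins

# Synthetic "received" signal containing the components the analysis looks for.
sig = (1.0   * np.sin(2*np.pi*30_000*t)    # fundamental
       + 0.02  * np.sin(2*np.pi*60_000*t)  # 2nd harmonic
       + 0.005 * np.sin(2*np.pi*90_000*t)  # 3rd harmonic
       + 0.01  * np.sin(2*np.pi*28_000*t)  # first-order sidebands
       + 0.01  * np.sin(2*np.pi*32_000*t)
       + 0.004 * np.sin(2*np.pi*26_000*t)  # second-order sidebands
       + 0.004 * np.sin(2*np.pi*34_000*t))

spec = 2 * np.abs(np.fft.rfft(sig)) / len(sig)   # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(sig), 1 / fs)

def amplitude(f_hz):
    """Amplitude of the spectral component nearest to f_hz."""
    return spec[np.argmin(np.abs(freqs - f_hz))]

A1, A2, A3 = amplitude(30_000), amplitude(60_000), amplitude(90_000)
sidebands = [amplitude(f) for f in (28_000, 32_000, 26_000, 34_000)]
```

Because all the frequencies of interest fall on exact FFT bins for this window length, the rectangular window introduces no spectral leakage at those components.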
Assessment of Material Damage Through Non-linear Elastic Wave Features
In the framework adopted here, the degradation of a material is linked to a non-linear mechanical behavior (Jhang 2000). As a result, when an ultrasonic wave propagates through the material and interacts with the microstructural defects, non-linear terms linked to these higher-order waves are generated. As the micro-damage grows, the non-linearity should increase. Four different strategies have been used in this work to evidence the appearance of non-linear features due to the micro-damage of the specimens under study. The first is the observation of the frequency spectrum of the received signal, which may allow direct observation of the higher-harmonic generation or intermodulation distortion features; see Fig. 1. The second is the derivation of non-linearity parameters from the amplitudes of the fundamental and harmonic frequencies.
Assuming that changes in the wave propagation velocity and the attenuation are small, and in experimental conditions like those used in this work (NLU measurements done with an input signal of fixed frequency, and the transducers always located at the same positions), the non-linearity parameters can be approximated (Raichel 2006) as Bn = A2/A1² and Bpn = A3/A1³, where A1, A2 and A3 are the amplitudes of the fundamental, second and third harmonic waves, respectively. In practice, the variation of these ratios relative to the initial undamaged conditions is used as an indicator of the microstructural damage (Shah and Ribakov 2009b).
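In code, the approximate parameters and their variation relative to the undamaged state can be expressed as follows (a sketch of the definitions above, not the authors' implementation):

```python
def nonlinearity_params(A1, A2, A3):
    """Approximate non-linearity parameters: Bn = A2/A1^2 and Bpn = A3/A1^3."""
    return A2 / A1**2, A3 / A1**3

def relative_params(A1, A2, A3, A1_ref, A2_ref, A3_ref):
    """Ratios of Bn and Bpn to their values in the reference (undamaged) state."""
    bn, bpn = nonlinearity_params(A1, A2, A3)
    bn_ref, bpn_ref = nonlinearity_params(A1_ref, A2_ref, A3_ref)
    return bn / bn_ref, bpn / bpn_ref
```

For instance, a 20 dB drop of the fundamental (A1 from 1.0 to 0.1) with unchanged harmonic amplitudes gives relative values of 100 for Bn and 1000 for Bpn, which is the kind of behavior exploited by the tenfold-threshold criterion discussed later in the paper.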
As for specific parameters that quantify the damage by incorporating intermodulation products, the intensity modulation ratio, R, has been used; it is defined as

R = (A′1,l + A′1,r + A″1,l + A″1,r) / A1,   (1)

where A′1,l, A′1,r, A″1,l and A″1,r are the amplitudes of the first- and second-order left and right sidebands, respectively, and A1 is the amplitude of the high-frequency acoustic wave; see Fig. 1. In this work the use of a new parameter is proposed for evidencing the micro-damage through the intermodulation non-linear features. The new index, termed DIFA (difference of amplitudes), Eq. (2), is based on the difference between the amplitude of the fundamental frequency and the sum of the amplitudes of all the first-order and second-order intermodulation products (see Fig. 1):

DIFA = A1 − (A′1,l + A′1,r + A″1,l + A″1,r).   (2)

The parameter DIFA, like R, is thought to be sensitive both to the non-linear effects on the fundamental frequency (decrease of amplitude) and to those on the intermodulation products (increase of amplitudes). However, it can also be considered to represent, in some way, the redistribution of elastic energy among the various generated frequencies of the signal (Scalerandi et al. 2008). If DIFA shows a high value, close to the value corresponding to a reference (non-damaged) state, it indicates a low transfer of energy to the intermodulation products. Conversely, a very low value of DIFA, as compared to the reference value, might be representative of a high degree of redistribution of elastic energy due to the appearance of critical micro-defects able to enhance the non-linear features of the elastic response.
Observations of the Evolution of the Damage due to Steel Corrosion
In the experimental conditions of these tests, the penetration of the steel corrosion process can be considered linear with time, the corrosion rate being equal to the anodic current density passing through the electric circuit, i.e., the current efficiency is close to 100% (Nossoni and Harichandran 2012; Climent et al. 2019). Hence, the loss of effective radius of the steel bar, x, can be calculated as

x = 0.0319 · Icorr · t,   (3)

where x is expressed in µm, Icorr is the constant anodic current density expressed in µA/cm² (in this work 40 µA/cm²), and t is the time elapsed since the beginning of the accelerated corrosion test, in days. The value 0.0319 contains all the relevant physical constants and unit conversion factors needed for the calculation. It must be recalled here that the accelerated corrosion test (current passing on) always started 1 day after beginning the series of NLU measurements, in order to have a reference value regarding these variables.

Table 2 contains the most relevant data of the accelerated corrosion tests conducted on the three reinforced cement mortar specimens. Data in parentheses are the days elapsed from the onset of current passing.

Table 2:
Sample | First crack (days) | x0 (µm) | End of the test (days)
9      | 10 (9)             | 11.5    | 31 (30)
10     | 9 (8)              | 10.2    | 15 (14)
11     | 8 (7)              | 8.9     | 8 (7)

The second column of the table contains the days elapsed when the first micro-crack was observed on each of the specimens; see Sect. 2.2. The third column shows the values of the corrosion penetration necessary to produce the first visible crack (x0), calculated using Eq. (3) and the times indicated in parentheses in the second column of the table.
Finally, the fourth column contains the time at which each test was finished: at these times the NLU measurements were discontinued and the specimens were cut for observation of the steel-mortar interface and of the cracked mortar covering the rebar.
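As a consistency check, Eq. (3) with Icorr = 40 µA/cm² reproduces the x0 values of Table 2 from the days of current passing shown in parentheses (a sketch, not the authors' code):

```python
def attack_penetration(i_corr_uA_cm2, t_days):
    """Eq. (3): loss of effective rebar radius x, in micrometres,
    assuming ~100% current efficiency."""
    return 0.0319 * i_corr_uA_cm2 * t_days

# x0 column of Table 2: (sample, days of current passing, reported x0)
for sample, days, x0_reported in [(9, 9, 11.5), (10, 8, 10.2), (11, 7, 8.9)]:
    print(sample, round(attack_penetration(40, days), 1), x0_reported)
```

At 40 µA/cm² the attack penetration advances by about 1.28 µm per day, which is why one extra day of current passing separates the x0 values of the three specimens by roughly that amount.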
The first micro-crack in sample 11 was detected on the eighth day (7 days of current passing). Then, the specimen was disconnected from the power supply. A day later, a visible crack appeared in sample 10, and 2 days later in sample 9 (see Table 2). The test of sample 10 was stopped on the 15th day, and after disconnecting the current the specimen was cut in order to inspect the fissure and the path of the corrosion products; see Fig. 7. On that 15th day the maximum crack width on the upper surface of sample 10 reached the value of 70 µm. Finally, the test of sample 9 was concluded after 31 days of testing, when the surface of the specimen showed a crack with a maximum width of 320 µm. Figure 7 clearly shows the accumulation of steel corrosion products at the steel-mortar interface, especially on top of the bar, and the partial filling of the open crack produced by the corrosion process. Figure 8 shows a series of images corresponding to the evolution of the fissure that appeared on top of mortar specimen number 9. The figure also includes the measured values of the crack width.
Another remarkable observation regarding the accelerated corrosion tests is that, after the appearance of the open crack on top of the mortar specimens, a reddish viscous liquid, also containing solid steel corrosion products, leached out through the cracks; see Fig. 9. The pH of this liquid was 3, because of the hydrolysis reaction caused by the iron ions in solution (Climent et al. 2019). This implies that a net mass transport process is taking place through the mortar specimens. It is known that transport phenomena through porous media, for instance ion diffusion, are greatly enhanced in case of exposure to a humid environment (Climent et al. 1998). Furthermore, a convective transport process can take place due to liquid wick action through concrete (Aldred et al. 2004) under experimental conditions like those of this work: contact with liquid water at the lower part of the mortar specimen and exposure of the rest of the surfaces to an atmosphere of relative humidity lower than 100%. The net transport of water from the bottom of the specimen to the upper surfaces drags the steel corrosion products, first filling the cracks and finally making them appear at the openings of the cracks on the top surface; see Figs. 7, 9. However, it was observed that the leaching phenomenon was of different intensity for the various specimens under test. Figure 9 clearly shows that the leaching appeared earlier and with a much higher intensity for specimen number 9 than for specimen number 10; see the upper surfaces of both specimens on the 12th and 14th days of testing, and the much higher accumulation of reddish iron corrosion products in the plastic tray supporting specimen 9. The most likely explanation for this different intensity of the leaching of corrosion products is a possible difference in the compaction of the reinforced cement mortar specimens. It must be recalled that the compaction was done manually; see Sect. 2.1.
It is possible that, if the steel-mortar interface was less compact and more imperfect in some case (specimen 9), the dragging of the corrosion species formed at the steel surface may have been more efficient, thus leading to a quicker filling of the cracks and to a more intense leaching out of corrosion products. Figure 10 depicts the evolution of the crack width for the mortar specimens as a function of time. The cracking of concrete due to stresses caused by corrosion of embedded steel has been classically described as consisting of a short period during which no crack is visible (the generation step), followed by an approximately linear increase of the crack width (the propagation period of the cracking process) (Alonso et al. 1998; Andrade et al. 1993; Molina et al. 1993; Pedrosa and Andrade 2017). Nevertheless, some critical events can be considered in this process. The first appearance of an open crack at one of the exterior faces of the concrete specimen means that the incipient micro-cracks that may have been developing within the cementitious matrix, probably starting at the steel-cement paste interface, have coalesced to form a continuous crack which finally opens to the concrete surface (Antonaci et al. 2013). This implies an event in which the composite material has considerably increased its heterogeneity (creation of new void space), thus considerably changing its non-linear elastic properties. In this sense it must be considered that, for the three specimens, the non-linear features should have increased considerably between the 8th and 10th days; see Fig. 10 and Table 2. Furthermore, even though the propagation period is described as an approximately linear increase of the crack width, some changes of slope are clearly visible in Fig. 10; see for instance the sudden increase of the maximum crack width between the 21st and 22nd days for specimen 9.
These sudden changes are also probably due to the creation of new void space, be it the expansion of an existing crack or the creation of a new open crack. Hence, increased non-linear features associated with these changes of slope in Fig. 10 should also be expected. The corrosion penetration necessary to produce the first visible crack (x0) can be calculated using a previously proposed empirical equation (Alonso et al. 1998):

x0 (µm) = 7.53 + 9.32 · c/Ø,   (4)

where c is the concrete cover depth over the rebar and Ø is the diameter of the rebar. Using Eq. (4) with the corresponding values of the geometric parameters of this work, a value of 15 µm is obtained for x0. This calculated value is slightly higher than the experimental values shown in the third column of Table 2, whose mean value is 10.2 µm ± 1.3 µm. However, it must be considered that Eq. (4) was derived by linear fitting of a data set of experimental results obtained in accelerated corrosion tests with a current density of 100 µA/cm² (Alonso et al. 1998), while a current density of 40 µA/cm² has been applied in this work. The authors of the above-mentioned study (Alonso et al. 1998) confirmed the influence of the applied current density: the lower the current density, the faster the first crack appeared (less attack penetration needed) and the faster it developed. This may be interpreted mechanically by considering that a slower "load" application induces higher deformations (Alonso et al. 1998; Pedrosa and Andrade 2017). As a consequence, the results presented in this work can be considered to be in fairly good agreement with the observations of previous authors.
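The x0 estimate from Eq. (4) can be verified numerically with the geometry of this work, a 10 mm cover over a 12 mm rebar (sketch, not the authors' code):

```python
def x0_empirical(cover_mm, diameter_mm):
    """Eq. (4) (Alonso et al. 1998): corrosion penetration (in um) needed to
    produce the first visible crack, as a function of the cover/diameter ratio."""
    return 7.53 + 9.32 * cover_mm / diameter_mm

x0 = x0_empirical(10, 12)   # ~15.3 um, the "15 um" quoted in the text
```

Note that Eq. (4) depends only on the dimensionless ratio c/Ø, so doubling both cover and bar diameter leaves the predicted x0 unchanged.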
Ultrasonic Tests
This section presents the results of the NLU measurements performed on the three reinforced cement mortar specimens under test. These measurements always started 1 day before the beginning of the corresponding accelerated corrosion test, i.e., 1 day before switching on the current; see Sect. 2.3. Some of the figures in this section show a vertical line indicating the onset of the corrosion test (t = 1 day). Figure 11 shows the results derived from the analysis of the frequency spectra of the received signals corresponding to specimen 10. The results correspond to day 0 (before the onset of the corrosion test) and to days 3, 7 and 10. The relative amplitude of the components of each frequency spectrum was always calculated taking the reference amplitude (Aref) as equal to 1 V. Hence, the relative value of any amplitude (A) depicted in Fig. 11 has been calculated, in dB, as 20·log10(A/Aref). It is apparent from Fig. 11 that the amplitudes of the second and third harmonics (A2 and A3) are always far lower (more than 30 dB lower) than the fundamental amplitude (A1). At this point it should be recalled that the first visible crack due to steel corrosion was detected on the surface of specimen 10 on the ninth day (Fig. 10 and Table 2). Regarding the evolution of the relative amplitudes in Fig. 11, it is noticeable that on the third day only small differences were found in comparison with day 0 (before the onset of the corrosion test): the fundamental amplitude decreased slightly, and the harmonics' amplitudes also showed small differences with respect to their amplitudes on day 0. However, on the 7th day, 2 days before the visible appearance of the micro-crack at the surface of the mortar specimen, a strong decrease of the fundamental amplitude (about 20 dB) was observed, together with comparatively lower (about 10 dB) reductions of the amplitudes of the harmonics. These differences can be clearly ascribed to harmonic distortion phenomena.
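The dB convention used for Fig. 11 (Aref = 1 V) is a one-liner; the exact formula is an assumption based on the standard amplitude-ratio definition of the decibel:

```python
import math

def relative_amplitude_db(a_volts, a_ref=1.0):
    """Relative amplitude in dB with respect to a_ref (here 1 V)."""
    return 20 * math.log10(a_volts / a_ref)
```

On this scale, harmonics "more than 30 dB lower" than a 1 V fundamental have amplitudes below about 0.032 V, and the 20 dB drop of the fundamental observed on the 7th day corresponds to a factor-of-10 reduction in amplitude.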
Three days later (on the 10th day) the relative values of the amplitudes returned to values close to those found on the 3rd day. The observations of the 7th day are compatible with a strong increase of the non-linear elastic features: previous publications have clearly shown that the effects of a non-linear elastic feature on the amplitude of the fundamental frequency component of the signal are much stronger than those on the second- and third-order harmonic components (Scalerandi et al. 2008). It is reasonable to interpret that on the 7th day, a few days before the observation of the first visible surface fissure, the micro-cracks were actively developing in the cement mortar cover region over the corroding steel rebar. This observation points to the possibility that the NLU measurements may provide an early warning of the cracking due to steel corrosion. Figure 12 shows the variations of the relative values of the parameters Bn and Bpn during the tests of the three reinforced mortar specimens. Every value in the figure is normalized to its corresponding value at t = 0. It must be recalled that the values of these Bn and Bpn parameters are proportional to those of the rigorously defined non-linearity parameters; see Sect. 2.4. A tenfold increase relative to the initial values of these parameters was previously proposed as an arbitrary threshold for considering a critical increase of the non-linear elastic features, which in turn would be indicative of the presence of significant defects or damage in the material medium (Climent et al. 2019). This threshold is indicated as dotted horizontal lines in the plots of Fig. 12. Regarding the results corresponding to specimen 11 (blue triangle points in Fig. 12), the relative parameters clearly exceeded the arbitrary threshold for the measurements taken during the interval between the 6th and the 8th days, when a visible crack was first observed at the upper surface of specimen 11; see Table 2 and Fig. 10.
Fig. 11 Relative amplitude evolution of the main components of the frequency spectrum of the received signal for specimen 10 (transducer emitter excited with 180 V). A1: relative amplitude of the fundamental frequency (f1 = 30 kHz). A2 and A3: relative amplitudes of the second and third harmonics, respectively (f2 = 60 kHz and f3 = 90 kHz).
Page 12 of 17 Climent-Llorca et al. Int J Concr Struct Mater (2020) 14:52
As for the results of specimen 10 (red square points in Fig. 12), the relative parameters were clearly higher than the tenfold threshold in the period between the 7th and the 10th days of measurements. Looking at Fig. 10, it is appreciable that this period corresponds to the few days around the moment of first observation of the surface crack on top of specimen 10 (9th day). However, after the 10th day the relative values of Bn and Bpn returned to the region below the tenfold threshold, except for a single measurement obtained on the 13th day. Finally, the results corresponding to specimen 9 (black star points in Fig. 12) show a different behavior: the relative values of Bn and Bpn did not exceed the threshold during the days around the first observation of a crack on top of specimen 9 (10th day). Instead, the parameters reached values much higher than the tenfold threshold during the period between the 15th and 21st days. It should be noted from Fig. 10 that these days preceded a clear change of slope in the evolution of the surface crack width growth, observed between the 21st and 22nd days for specimen 9. Hence, it is highly probable that all the observed strong increases of the relative values of Bn and Bpn above the arbitrary threshold may be correlated with damage of the material related to the corrosion process: creation of a new open crack, or creation of new void space by considerable enlargement of the width of an existing crack (Shah and Ribakov 2009b; Climent et al. 2019). Another observation appreciable from Fig. 12 is that the parameter Bpn (related to the third harmonic) seems to be comparatively more sensitive than the parameter Bn (related to the second harmonic) for detecting the non-linear features associated with damage due to embedded steel corrosion in cement mortar.
As for the experiments of intermodulation, the parameters R and DIFA were calculated through Eq. (1) and Eq. (2), respectively. Regarding the intensity modulation ratio, Fig. 13 shows rather clear progressive increases of the values of R during the critical time periods related to the cracking due to corrosion: for specimens 9 and 10, R started to increase just after the onset of the accelerated corrosion test, from the 1st to the 9th days, i.e., during the period when the micro-cracks are thought to be developing before coalescing into an open crack, which was visible on the upper surface on the 10th and 9th days for specimens 9 and 10, respectively (Table 2 and Fig. 10). The results of specimen 11 were less clear in this sense. Furthermore, another sustained progressive increase of R is observed for specimen 9 during the 17th to 21st days, in parallel with the observation of high values of the relative Bpn parameter of specimen 9 in Fig. 12. It should also be noted that after these periods of progressive increase of R, its values decrease to lower values, making Fig. 13 look like a saw-tooth graph. It seems that the IMD phenomena might be present long before the first observation of cracking, although some uncertainty remains regarding some results showing small increases of R (specimen 11). Figure 14 shows the variations of the parameter DIFA as a function of time for the tests of the reinforced mortar specimens. It is appreciable that after the onset of current passing, slight decreases of the parameter were observed for the three specimens. However, the most relevant variations were recorded between the 6th and 8th days for specimens 10 and 11, and between the 15th and 21st days for specimen 9. During these latter periods the parameter DIFA dropped rather abruptly to very low values, close to 0, thus indicating a strong non-linear elastic response.
This feature is not so evident when analyzing the evolutions of the dimensionless relative parameter R with time (Fig. 13). An excellent correlation may be noted between the periods of very low values of DIFA (Fig. 14), and those of high values of the relative parameters Bn and Bpn exceeding the arbitrarily chosen threshold (Fig. 12).
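Based on the definition given later in the Conclusions (fundamental amplitude minus the sum of the amplitudes of all first- and second-order intermodulation products), a hedged sketch of computing DIFA for a two-tone excitation might look as follows. The probe/pump frequencies and the amplitude-modulation model of a cracked medium are assumptions chosen for illustration only, not the paper's experimental setup.

```python
import numpy as np

def tone_amplitude(signal, fs, f):
    """Peak amplitude (volts) of the spectral component nearest to frequency f."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal)) * 2.0 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return spec[np.argmin(np.abs(freqs - f))]

def difa(signal, fs, f1, f2):
    """DIFA = A(f1) - sum of 1st- and 2nd-order intermodulation sideband amplitudes."""
    sidebands = [f1 - f2, f1 + f2, f1 - 2 * f2, f1 + 2 * f2]
    return tone_amplitude(signal, fs, f1) - sum(
        tone_amplitude(signal, fs, f) for f in sidebands)

fs = 1_000_000
t = np.arange(0, 0.01, 1 / fs)
f1, f2 = 30e3, 5e3                   # assumed probe / pump frequencies
# Intact medium: the two tones pass without mixing, so no sidebands appear
intact = 0.2 * np.sin(2 * np.pi * f1 * t) + 0.2 * np.sin(2 * np.pi * f2 * t)
# Cracked medium sketch: the pump amplitude-modulates the probe (IMD)
cracked = 0.2 * np.sin(2 * np.pi * f1 * t) * (1 + 0.9 * np.sin(2 * np.pi * f2 * t))
print(difa(intact, fs, f1, f2), difa(cracked, fs, f1, f2))
```

In this toy model the intact medium gives DIFA ≈ A(f1), whereas strong intermodulation in the cracked case drives DIFA toward 0, reproducing the abrupt drops reported for the critical periods in Fig. 14.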
Again, it is noticeable in Fig. 14 that after the periods of very low values, the parameter DIFA returns to intermediate values (about 0.1 V in these experiments) typical of the situation previous to the critical events of the cracking process. Two issues remain unsolved from the preceding paragraphs. The first is the different behavior observed for specimen 9 in Figs. 12 and 14 during the 6th to 10th days: the values of the relative parameters Bn and Bpn corresponding to this specimen did not exceed the tenfold threshold, and its values of the parameter DIFA did not drop to values close to 0. The second question is related to the ubiquitous observation that after the critical events of the cracking process, put in evidence by strong non-linear features in the received ultrasonic signals (considerable decrease of the fundamental amplitude in Fig. 11, large increases of the relative parameters Bn and Bpn in Fig. 12, progressive increases of R in Fig. 13, and abrupt drops of DIFA to values close to 0 in Fig. 14), all the parameters returned to values typical of the pre-critical situation. A possible explanation may be hypothetically formulated taking into account previous knowledge and the observations and discussion of the preceding Sect. 3.1. It is generally accepted, in studies of concrete cracking due to steel reinforcement corrosion, that once the corrosion process has started there are three propagation stages: a first stage in which the corrosion products penetrate the porous network around the steel rebar, filling the steel/concrete interface; a second stage characterized by stress initiation, since the corrosion accommodation region is completely filled with rust products that start to exert stress; and a third stage identified by the formation of cracks when the stress reaches the tensile strength of the concrete, with rust filling the cracks as they are created (Bazán et al. 2018).
The experimental conditions of the tests in this work induce a continuous net liquid transport from the bottom of the cement mortar specimen (in contact with liquid water) to the upper surface, due to wick action (Aldred et al. 2004); see Sect. 3.1. This convective transport may drag the solid corrosion products formed at the surface of the steel bar, and may give rise to a more efficient filling of the new void space created by cracking with liquid containing steel corrosion products. This might explain the return of all the values of the parameters depicted in Figs. 11, 12, 13 and 14 to values typical of the pre-crack situation, after having reached values indicative of strong non-linear elastic features. It is also possible that this circumstance might be especially important in cases where the steel-cement paste interface is somehow less compact, due for instance to defective compaction; see Fig. 9 and the related discussion in Sect. 3.1 regarding the differences observed in relation to the leaching out of liquid containing rust products through the open cracks. This would be a plausible explanation for the different behavior shown by specimen 9 in Figs. 12 and 14. It is known that when the rust products penetrate efficiently the porous network of the concrete, the pressure exerted by the expansion of the oxides is partially mitigated (Bazán et al. 2018). A careful observation of Fig. 10 shows that the first observed crack on top of specimen 9 appeared slightly later (10th day) than those observed on specimens 10 and 11 (9th and 8th days, respectively). Also, the first visible crack on specimen 9 had a width half of the value recorded for specimens 10 and 11. These observations seem to indicate that the formation of the first visible crack on top of specimen 9 may have developed in a different way than those corresponding to specimens 10 and 11.
A more efficient filling of the crack would explain the delayed observation of the non-linear features associated with the cracking of specimen 9: these features were not observed at the time of formation of the first visible crack (10th day), but they were clearly appreciable during the period between the 15th and 21st days, when the micro-cracks may have been developing and coalescing to give rise to a considerable enlargement of the maximum width of the surface crack; see Fig. 10 between the 21st and 22nd days. The effect of filling the newly formed cracks with liquid containing rust products, and its possible impact on the results of the NLU measurements, may be of diverse magnitude when a different experimental approach is adopted for the accelerated corrosion test (Antonaci et al. 2013). It is widely accepted in ultrasonic research that there are many sources of non-linearity. Some doubts might be raised regarding the modification of the microstructure of the mortar specimens due to the progressive cement hydration and mechanical strength gain process: in this work the reinforced mortar specimens were tested after a curing period of only 7 days (Sect. 2.1). However, it must be noted that the mortar was prepared with a cement of high speed of mechanical strength gain, CEM I 52.5 R-SR 3 (Asociación Española de Normalización y Certificación 2011). A standardized cement mortar using this type of cement reaches a minimum initial compressive strength of 30.0 MPa at 2 days, and a minimum nominal compressive strength of 52.5 MPa at 28 days. If one also considers the known accelerating effect of admixed chloride salts on the mechanical strength gain of cement-based materials, it is reasonable to admit that the cement mortar had reached a considerable degree of cement hydration and mechanical strength during the 7-day curing, so that drastic changes in the microstructure that could give rise to relevant non-linear effects are not to be expected.
Taking into account the good temporal correlation found between the critical events of mortar cracking recorded in Fig. 10 and Table 2, and the observations of relevant non-linear elastic features in the received ultrasonic signals (Figs. 11, 12, 13 and 14), it is highly probable that these non-linear effects are mainly due to the damage produced by corrosion of the steel bars in the form of micro-cracks, which eventually coalesce to create an open crack visible at the upper mortar surface.
The non-linear elastic features can be efficiently detected both through HD (Bn and Bpn parameters) and IMD (R and DIFA parameters) experiments. The presented results suggest that the new parameter DIFA proposed in this work is as efficient as the relative parameters Bn and Bpn for detecting the strong non-linear features associated with the critical events of the cracking of cement mortar due to embedded steel corrosion. During these critical events it is likely that new micro-cracks are being formed, or other pre-existing cracks are actively developing before coalescing into an open surface crack or giving rise to a considerable and sudden enlargement of the maximum width of the surface crack. Hence, it is reasonable to admit that in these circumstances the ultrasonic waves travel through a highly defective medium, leading to a relevant transfer of energy from the fundamental frequency component of the waveform to higher-order harmonics or to the intermodulation products (Scalerandi et al. 2008). These energy transfers manifest as highly increased values of the Bn and Bpn parameters or very low values of the DIFA parameter. The evolution of the values found for the intensity modulation ratio (R) seems to indicate that the non-linear features may appear some time before the observation of an open crack, thus giving support to the idea that NLU techniques might be used as tools for the early warning of incipient damage due to steel reinforcement corrosion in concrete, before the appearance of visible symptoms of damage (cracking or delamination). More research is necessary to confirm the findings and interpretations of this work, and to advance the proposal of practical procedures for applying the NLU techniques to the detection of cracks due to steel corrosion in reinforced concrete, both for research purposes and for routine engineering surveys of damaged structures.
Conclusions
The results obtained in this work provide further confirmation that it is possible to use NLU techniques for the detection of cracking due to the corrosion of steel reinforcements in cement mortar or concrete. The appearance of visible surface micro-cracks seems to be preceded and accompanied by the observation of strong non-linear features in the received signal: harmonic distortion and intermodulation phenomena are clearly observed. A new parameter (DIFA), based on the difference between the amplitude of the fundamental frequency and the sum of the amplitudes of all the first-order and second-order intermodulation products, has been proposed in this work. The results suggest that the parameter DIFA is as efficient as the relative non-linearity parameters, classically used in harmonic distortion NLU studies, for detecting the strong non-linear features associated with the critical events of the cracking of cement mortar due to embedded steel corrosion.
A recurrent observation in this work is that after the critical events of the cracking process, all the parameters used to put in evidence the non-linear elastic features returned to values typical of the pre-critical situation. A hypothetical explanation for this fact has been developed in this work, considering the possible effect of the filling of the void space by liquid containing steel corrosion products after the formation of new cracks or the enlargement of their width. This filling process might be particularly enhanced by the net convective transport of liquid (wick action) under the experimental conditions of this work. More research is necessary to confirm the findings and interpretations of this work, and to advance the proposal of practical procedures for applying the NLU techniques to the detection of cracks due to steel corrosion in reinforced concrete, both for research purposes and for routine engineering surveys of damaged structures.
|
2020-10-06T21:16:18.779Z
|
2020-10-06T00:00:00.000
|
{
"year": 2020,
"sha1": "4615389bec175d6d445b2497a07d83cca1af5c4d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s40069-020-00432-x",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "4615389bec175d6d445b2497a07d83cca1af5c4d",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
147747099
|
pes2o/s2orc
|
v3-fos-license
|
Students' Involvement in Campus Activities: Implications to Levels of Sociability
Students' involvement and sociability in campus activities provide college students with ample opportunities to have a greater magnitude of student-to-student interactions. As such, they are more likely to perceive their educational experiences as having high quality compared to those of non-participants. This descriptive study utilized a sample of 300 students of WVSU-Janiuay Campus who were chosen through stratified purposive sampling. The results indicated that students often participated in campus activities, and as to extent of involvement, students were always involved in campus activities. The reported levels of sociability were very high. Significant differences existed in the campus activities that students participated in and in the levels of sociability, while no significant differences existed in students' involvement in campus activities. There was a positive and significant relationship among the campus activities that students participated in, the extent of students' involvement, and the levels of sociability. Implications and recommendations for future research were discussed.
Participation in non-academic pursuits is not only beneficial to student development, but is known to be highly valued by teachers and staff. It may seem like a small change, but demonstrating to students that we view these activities as equally important to academic study can make a real difference (Pascarella et al., 2001).
Building an interactive campus is an integral component of universities' educational mission. Perhaps this vision is best characterized by an image of students, faculty, and staff helping one another day by day to cultivate aspirations, nurture commitments, and practice what they profess.
Seen in this light, being part of the West Visayas State University (WVSU) system is not ultimately about personal gratification, "doing one's own thing," or peaceful co-existence, although WVSU-Janiuay is certainly an academe where its constituents can enjoy considerable freedoms, excel, and build lasting friendships by participating in various activities.
Research was performed on the claim that students involved in extracurricular activities receive higher grades than those not involved in activities. This topic was studied because budgets for school activities are meager, and school administrators want to spend the money efficiently. This report examines the correlations among the activities that students participated in, the extent of participation in campus activities, and levels of sociability.
Many components contribute to the reasons why extracurricular activities benefit students academically. One of these reasons is that students learn character-building lessons that they can apply to their study habits and to their lives. Activities such as athletics, music, theater, and organizations teach students how to discipline themselves through drills, practices, or rehearsals (Astin, 1993). The students have a responsibility to the activity and must perform the tasks assigned to them, whether it be to run, sing, act, or organize an event. By participating and persevering in any of these activities, the students gain a sense of self-respect, self-esteem, and self-confidence. Extracurricular activities give them pride in their accomplishments, and they learn that if an activity is worth doing, it is worth doing well.
Through extracurricular activities, students learn life skills that benefit their studies. Matt Craft, president of the Iowa State University Government of the Student Body, stated that being involved teaches students organization and time management skills. Because activities take time out of the students' schedules, the involved students must plan their time wisely and efficiently to complete the assigned tasks.
It is believed that given the right tools, students will thrive in taking charge of their own development, but to help them do this, we need to reassess our role as higher education providers. We should not just provide the opportunities for students to achieve good academic results but actively promote the benefits of a wider curriculum to students. After all, university should be seen as a transformative experience through which students can prepare themselves to succeed in the many and varied roles they will undertake in future life (Trevino, 2002).
That is why, over the last few years, West Visayas State University-Janiuay Campus has not conducted research on campus activities and levels of sociability, nor on the importance and value of both the academic curriculum and co-curricular activities in developing the range of skills and attributes that are important for graduates. Armed with a better sense of the student journey, the university has designed a development plan to support students' transition toward independence and competencies in future work, and to enable them to take responsibility for their own development.
This study aimed to assess the students' involvement in campus activities and its implications to levels of sociability. Specifically, this study sought to answer the following questions:
1. How frequently do students participate in campus activities as an entire group and when classified as to sex, course and year level?
2. What is the extent of students' involvement in campus activities as an entire group and when classified as to sex, course and year level?
3. What are the levels of sociability in campus activities as an entire group and when classified as to sex, course and year level?
4. Is there a significant difference in the campus activities that students participated in when classified as to sex, course and year level?
5. Is there a significant difference in the extent of students' involvement in campus activities when classified as to sex, course and year level?
6. Is there a significant difference in the levels of sociability in campus activities when classified as to sex, course and year level?
7. Is there a significant relationship among the campus activities that students participated in, the extent of students' involvement and the levels of sociability?

Methodology

This study determined the students' involvement in campus activities and its implications to levels of sociability at West Visayas State University-Janiuay Campus. It also aimed to ascertain whether the students' involvement in campus activities and its implications to levels of sociability could be influenced by personal factors.
Descriptive research involves gathering data about events and then organizing, tabulating, depicting, and describing the data collected (Garcia, 2003).
The respondents of the study were the three hundred (300) students of West Visayas State University-Janiuay Campus. They were taken through stratified purposive sampling. As an initial step, the researcher identified the respondents. Identification of respondents was based on sex, course and year level.
The researchers constructed a rating scale designed to determine the level of students' involvement in campus activities and its implications to levels of sociability.
The tentative draft of the questionnaire on students' involvement in campus activities was submitted for validation to a panel of jurors who are experts in the field of student affairs. A questionnaire adapted from David et al. was used for the levels of sociability.
After the questionnaire was revised and finalized, permission to conduct the study was secured from the campus administrator and the instrument was distributed to the respondents at West Visayas State University-Janiuay Campus.The researcher gathered the accomplished instruments as soon as the respondents finished answering them.
The data gathered were subjected to certain statistical analysis to determine the levels of students' involvement in campus activities and its implications to levels of sociability.
In determining the responses of students on the campus activities that they participated in, the following numerical weights and responses were used:
Discussion
The respondents often participated in campus activities when taken as an entire group, while as to sex, the male responses were "often" and those of the females were "always". Willms (2000) stated that most students participated in academic and non-academic activities at school to develop a sense of belonging with their friends, have good relations with teachers and other students, and identify with and value schooling outcomes.
The Bachelor of Science in Elementary Education (BEEd), Bachelor of Science in Secondary Education (BSEd) and Bachelor of Caregiving Management (BCM) students always participated, the BS Information Technology (BS Infotech) and Bachelor of Science in Hotel and Restaurant Technology (BSHRST) students often participated, while the Bachelor of Science in Industrial Technology (BSIT) students seldom participated in campus activities. The result supports the idea of Kuh (1995), which states that participation in extracurricular activities provides opportunities for students to apply classroom knowledge to real-world settings and develop skills that will assist in the practical realities of living after graduation.
As to year level, 1st year and 2nd year students always participated, while 3rd year and 4th year students seldom participated in campus activities. The results, as cited by Burton (2001), could be due to the fact that college-sponsored activities do not receive the full participation of all students despite the opportunities associated with extracurricular involvement.
In determining students' involvement in campus activities when taken as an entire group and when classified as to sex, course and year level, their responses were "always", corresponding to a very high level. The result encompasses the idea of Astin (1993), which stated that having an active college social life by participating in college student organizations could influence how one perceives his or her own college experience. He added that students with more opportunities to be involved in the overall student life of the institution could have more student-to-student interactions. Consequently, student interactions were found to cultivate a more active social life in college.
The levels of sociability as an entire group were very high. When classified as to sex, the males were very high while the females were high. As to course, the BEEd, BSEd, BS Infotech and BCM students were very high, while those of BSHRST and BSIT were high. As to year level, 1st year, 2nd year and 4th year students were very high while 3rd year students were high. Significant differences existed when students were classified as to sex, course and year level. Baxter (1992) found that college sociability and affiliation cultivate students' intellectual development by initially teaching them responsibility and independence in regard to meeting new people who are becoming knowledgeable about the campus environment.
As to inferential statistics, the t-test result showed that there was a significant difference in the responses on the campus activities participated in when respondents were classified as to sex, because the p value was less than the 0.05 level of significance. This may be contrasted with the finding of Trevino (1991), who found that extracurricular involvement was not significantly influenced by selected demographic data such as age, sex, and GPA.
The one-way ANOVA test revealed that significant differences existed in the campus activities that students participated in when classified as to course and year level, because the two-tailed probability was less than the set 0.05. This can be related to the idea of Abrahamowicz (1988), who, using the College Student Experiences Questionnaire (CSEQ) to assess these variables, found that significant differences existed between the college experiences of undergraduate students who were members of organizations compared to students who were not.
Employing the t-test for independent samples, the result revealed that no significant differences existed in the extent of students' involvement in campus activities when they were classified as to sex, since the two-tailed probability was greater than the set 0.05 level of significance. Baxter (1992) postulated that a student's learning and development were directly proportional to the quality and quantity of a student's involvement in the academic experience.
The one-way ANOVA test revealed that no significant difference existed in the students' involvement in campus activities when respondents were classified as to course and as to year level, since the two-tailed probability was greater than the set 0.05 level of significance. Pascarella (1991) cited that the greater the student's involvement in college, the greater the amount of student learning and personal development.
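For illustration, the two tests described above (independent-samples t-test for sex, one-way ANOVA across year levels) could be run as follows. The 5-point scores below are randomly generated stand-ins, not the study's actual data, and the group sizes are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 5-point Likert involvement scores for 150 males and 150 females
male = rng.integers(1, 6, size=150).astype(float)
female = rng.integers(1, 6, size=150).astype(float)

# Independent-samples t-test (sex): significant if p < 0.05 (two-tailed)
t_stat, p_sex = stats.ttest_ind(male, female)

# One-way ANOVA across four year levels (75 hypothetical students each)
y1, y2, y3, y4 = (rng.integers(1, 6, size=75).astype(float) for _ in range(4))
f_stat, p_year = stats.f_oneway(y1, y2, y3, y4)

print(f"t-test p = {p_sex:.3f}, ANOVA p = {p_year:.3f}")
print("sex difference significant:", p_sex < 0.05)
print("year-level difference significant:", p_year < 0.05)
```

With purely random scores the p-values will usually exceed 0.05; the study's real decisions, of course, come from the collected questionnaire data.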
There was a positive and significant relationship among the campus activities that students participated in, the extent of students' involvement, and the levels of sociability. Terenzini (1991) found that, when thinking in retrospect, college graduates perceived their extracurricular involvement as having substantial impact on the development of interpersonal and leadership skills important to general occupational success. Extracurricular activities involvement enhanced interpersonal and leadership skills, allowing students to explore their goals and to identify steps to achieve such goals.
Conclusions
In view of the findings, the following conclusions were deduced: Almost every school offers some type of extracurricular activity, such as music, academic clubs, and sports. These activities offer opportunities for students to learn the values of teamwork, individual and group responsibility, physical strength and endurance, competition, diversity, and a sense of culture and community. Extracurricular activities provide a channel for reinforcing the lessons learned in the classroom, offering students the opportunity to apply academic skills in a real-world context, and are thus considered part of a well-rounded education.
Participation of students in various activities can be considered a meta-construct that includes behavioral, emotional and cognitive engagement. What makes participation unique is how it can draw on involvement in academic, social and extra-curricular activities, and it is considered crucial for improving positive academic outcomes. It must focus on the extent of positive reactions to teachers, students and the academic community.
If, indeed, participation in extracurricular activities can lead to success in school, then the availability of these activities to students of all backgrounds becomes an important equity issue across backgrounds and school settings.
Recommendations
The West Visayas State University-Janiuay Campus must look into the responses of the respondents as to the campus activities that students participated in, their extent of involvement and levels of sociability in order to include in the campus calendar the activities that will maximize students' attendance and promote students' achievement.
There must be continued monitoring and evaluation of the campus activities that can increase the levels of sociability among students.
The administration must consider the conduct of activities with relevance to the students' welfare and must promote integral development among students.
The Office of Student Affairs must encourage students to become involved and stay involved in various campus activities. An effort to improve attendance of all students in all campus activities, as an integral part of the larger school reform effort, must also be given emphasis by the OSA.
Strong administrative support for the conduct of relevant and enriching campus activities must also be given priority.
Further research must be conducted in order to widen the perspectives along this line. If possible, variables not being studied must be taken into account to make this study more comprehensive, and other dimensions of students' participation and its implications to levels of sociability must also be explored by future researchers.
C. Instructions: Check the column that corresponds to the frequency of your involvement in the campus activities.
Table 5
Responses of the Students in Campus Activities that they Participated In
Differences in the Responses of the Students in Campus Activities that they Participated In using t-Test
In determining the responses on levels of sociability, this scale with its interpretation was used.
Table 5
Differences in the Responses of the Students in Campus Activities that they Participated In using One-Way ANOVA
Table 6
Differences in the Extent of Students' Involvement in Campus Activities using t-Test
Table 7
Differences in the Extent of Students' Involvement in Campus Activities using One-Way ANOVA
Table 9
Differences in the Levels of Sociability using One-Way ANOVA

I am involved in campus activities that…
… my sense of belonging
8. improve my leadership skills
9. are essential to my long-term well-being
10. open doors for other opportunities that will help me become successful
11. reinforce my high expectations for social responsibilities
12. help me familiarize myself with the learning environment
13. provide me with an avenue to meet my future life-partner
14. help me establish commonalities with others and establish friendships
15. provide me with rewarding and challenging activities
16. help promote my feeling of support and relatedness
17. motivate me to do well in school
18. make me proud of my school
19. help me perceive that the rules the school enforces are fair
20. help my friends look forward to going to school
21. help me participate in decision making
22. ease my feeling of loneliness
23. help me feel close to or valued by teachers and school staff
24. set standards and help us students meet them
25. reinforce explicit expectations for our positive behavior and academic success as students
26. create a welcoming environment for us students
27. create a common vision of success for us students
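The group comparisons announced in these tables (t-test for two-group splits, one-way ANOVA for three or more groups) can be sketched as follows, assuming SciPy is available. The scores and group labels are hypothetical illustrations, not the study's data.

```python
from scipy import stats

# Hypothetical 5-point sociability scores for two groups
# (e.g., male vs. female respondents); invented for illustration.
male = [3.8, 4.1, 3.5, 4.4, 3.9, 4.0]
female = [4.2, 4.5, 4.1, 4.6, 4.3, 4.4]

# Two-group comparison, as in the t-test tables
t_stat, p_two_groups = stats.ttest_ind(male, female)

# Three-or-more-group comparison (e.g., by year level), as in the ANOVA tables
first_year = [3.6, 3.9, 3.7, 4.0]
second_year = [4.0, 4.2, 4.1, 4.3]
third_year = [4.4, 4.5, 4.3, 4.6]
f_stat, p_three_groups = stats.f_oneway(first_year, second_year, third_year)
```

If the resulting p-value falls below the chosen significance level (typically 0.05), the null hypothesis of no difference between groups is rejected.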
Involvement of Noradrenergic Transmission in the PVN on CREB Activation, TORC1 Levels, and Pituitary-Adrenal Axis Activity during Morphine Withdrawal
Experimental and clinical findings have shown that administration of adrenoceptor antagonists alleviated different aspects of drug withdrawal and dependence. The present study tested the hypothesis that changes in CREB activation and phosphorylated TORC1 levels in the hypothalamic paraventricular nucleus (PVN) after naloxone-precipitated morphine withdrawal as well as the HPA axis activity arises from α1- and/or β-adrenoceptor activation. The effects of morphine dependence and withdrawal on CREB phosphorylation (pCREB), phosphorylated TORC1 (pTORC1), and HPA axis response were measured by Western-blot, immunohistochemistry and radioimmunoassay in rats pretreated with prazosin (α1-adrenoceptor antagonist) or propranolol (β-adrenoceptor antagonist). In addition, the effects of morphine withdrawal on MHPG (the main NA metabolite at the central nervous system) and NA content and turnover were evaluated by HPLC. We found an increase in MHPG and NA turnover in morphine-withdrawn rats, which were accompanied by increased pCREB immunoreactivity and plasma corticosterone concentrations. Levels of the inactive form of TORC1 (pTORC1) were decreased during withdrawal. Prazosin but not propranolol blocked the rise in pCREB level and the decrease in pTORC1 immunoreactivity. In addition, the HPA axis response to morphine withdrawal was attenuated in prazosin-pretreated rats. Present results suggest that, during acute morphine withdrawal, NA may control the HPA axis activity through CREB activation at the PVN level. We concluded that the combined increase in CREB phosphorylation and decrease in pTORC1 levels might represent, in part, two of the mechanisms of CREB activation at the PVN during morphine withdrawal.
Introduction
Opiate withdrawal is associated with hyperactivity of central noradrenergic neurons, and it has been proposed that noradrenergic afferents to the extended amygdala and to the hypothalamic paraventricular nucleus (PVN) are critically involved in the aversive properties (such as conditioned place aversion) as well as in the somatic symptoms of opiate withdrawal (teeth chattering, piloerection, lacrimation, rhinorrhea and ptosis) [1][2][3]. These noradrenergic afferents originate in the nucleus of the solitary tract (NTS) and ventrolateral medulla (VLM) noradrenergic A2 and A1 cell groups [1,4].
Clinical and experimental findings have shown that administration of α1- and/or β-adrenoceptor antagonists reduced certain aspects of drug withdrawal and dependence, such as the negative emotional status, self-administration and relapse [2,[5][6][7]. Furthermore, clonidine, an α2-adrenoceptor agonist, has been reported to attenuate withdrawal symptoms in humans and animals [8,9]. A prime candidate for the central actions of the adrenoceptor antagonists is the PVN, a structure with remarkably dense noradrenergic innervation [10,11]. Altered neuronal activity has been found in the PVN after naloxone-induced opiate withdrawal, as evidenced by increased activation of the immediate early gene product c-Fos and an enhanced hypothalamus-pituitary-adrenocortical (HPA) axis response (as reflected by plasma levels of corticosterone, a marker of HPA axis activity). These alterations were markedly decreased by systemic adrenoceptor antagonists [2,12]. Previous work has also shown that lesion of ascending axons of the ventral noradrenergic bundle markedly reduced opiate withdrawal-induced place aversion [1]. Furthermore, pretreatment with adrenoceptor antagonists attenuated heroin self-administration in rats [13], suggesting that the noradrenergic system may contribute to mechanisms that promote dependence.
On the other hand, morphine dependence exerts long-lasting effects on gene expression [14,15]. Precipitated morphine withdrawal has been shown to alter several indices of cAMP Response Element Binding protein (CREB) function within the PVN, including elevated c-Fos expression in rats [12,16]. It has been proposed that changes in CREB activity may be important for the development and expression of opioid dependence [17,18]. Recently, we found increased phosphorylated CREB (pCREB) expression within CRF-immunoreactive neurons in the PVN and within tyrosine hydroxylase (TH)-positive neurons in the nucleus of the solitary tract (NTS) in morphine-withdrawn rats, which paralleled the elevation of plasma corticosterone levels [19].
The purpose of the present series of experiments was to test the hypothesis that CREB activation in the PVN and the enhanced response of the HPA axis during naloxone-precipitated morphine withdrawal arise from the activation of α1- and/or β-adrenoceptors. Specifically, prazosin (an α1-adrenoceptor antagonist) and propranolol (a β-adrenoceptor antagonist) were evaluated for their ability to modulate both CREB activation in the PVN and the pituitary-adrenocortical response to precipitated morphine withdrawal. Phosphorylation of CREB has been used as a marker for the activation of CREB-mediated gene transcription. However, there is recent evidence showing that some extracellular stimuli that cause CREB phosphorylation fail to induce CREB-dependent transcription [20]. These findings suggest that there must be additional CREB co-activators that control the kinetics of CREB-target gene expression. This led to the discovery of a family of coactivators called transducers of regulated CREB activity (TORCs) [21], which facilitate CREB-mediated gene transcription [22]. TORCs are maintained in an inactive state in the cytoplasm as a result of phosphorylation. Different stimuli lead to TORC dephosphorylation and subsequent nuclear accumulation, whereby it can freely associate with CREB. The second aim of the present study was to assess the possibility that activation of the CREB coactivator TORC1 in the PVN arises from activation of α1- and/or β-adrenoceptors.
Results
In accordance with previous findings, Student's t-test showed that rats receiving long-term morphine treatment had significantly lower weight gain (1.64 ± 2.61 g; t(76) = 6.308; p < 0.001; n = 42) than the placebo control group (22.72 ± 1.93 g; n = 36), which might be due to the reduced food intake observed during chronic morphine treatment [23]. The body weight loss after saline or naloxone injection to placebo-pelleted and morphine-dependent rats was recorded as a sign of opiate withdrawal. Two-way ANOVA revealed that chronic pretreatment, acute injection, and the interaction between chronic pretreatment and acute treatment had a significant effect on body weight loss [morphine treatment: F(1,37) = 37.60, p < 0.0001; naloxone injection: F(1,37) = 36.91, p < 0.0001; interaction: F(1,37) = 16.31, p = 0.0003]. In agreement with our previous results [24,25], post hoc analysis showed (Table 1) that naloxone injection to morphine-dependent animals significantly increased (p < 0.001) body weight loss measured 60 min after injection compared with the placebo-pelleted group also receiving naloxone and with the morphine-treated rats receiving saline. However, administration of naloxone to control rats resulted in no significant changes in body weight loss compared with control rats receiving saline. In animals pretreated with prazosin (1 mg/kg i.p.), two-way ANOVA revealed significant effects of chronic pretreatment [F(1,26) = 106.87; p < 0.0001] and acute drug injection [F(1,26) = 8.21; p = 0.0081]. There was a significant (p < 0.01) decrease in body weight loss during morphine withdrawal in animals receiving prazosin 20 min before naloxone injection compared with morphine-withdrawn rats receiving vehicle. In animals pretreated with propranolol (3 mg/kg i.p.), two-way ANOVA showed a significant effect of chronic pretreatment [F(1,25) = 66.53; p < 0.0001].
In contrast to prazosin, the post hoc test revealed that pretreatment with propranolol did not significantly attenuate the increase in body weight loss in morphine-withdrawn animals compared with morphine-withdrawn rats receiving vehicle instead of propranolol. Neither prazosin nor propranolol modified the weight loss in placebo-pretreated rats compared to placebo-treated rats receiving vehicle.
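The weight-loss analyses above use a balanced two-factor design (chronic pretreatment × acute injection). For such a design, the main-effect and interaction F ratios can be computed directly from cell means. The sketch below uses made-up weight-loss values, not the study's data, chosen so that, as reported, the loss is concentrated in the morphine + naloxone cell.

```python
import numpy as np

def two_way_anova(cells):
    """Balanced two-way ANOVA.

    cells maps (factor_a_level, factor_b_level) -> equal-length list of
    observations. Returns {effect: (F, df_effect, df_error)}.
    """
    levels_a = sorted({a for a, _ in cells})
    levels_b = sorted({b for _, b in cells})
    data = np.array([[cells[(a, b)] for b in levels_b] for a in levels_a],
                    dtype=float)                      # shape (A, B, n)
    A, B, n = data.shape
    grand = data.mean()
    mean_ab = data.mean(axis=2)                       # cell means
    mean_a = data.mean(axis=(1, 2))                   # factor-A level means
    mean_b = data.mean(axis=(0, 2))                   # factor-B level means

    # Sums of squares for main effects, interaction, and within-cell error
    ss_a = n * B * ((mean_a - grand) ** 2).sum()
    ss_b = n * A * ((mean_b - grand) ** 2).sum()
    ss_ab = n * ((mean_ab - mean_a[:, None] - mean_b[None, :] + grand) ** 2).sum()
    ss_err = ((data - mean_ab[:, :, None]) ** 2).sum()

    df_a, df_b = A - 1, B - 1
    df_ab, df_err = df_a * df_b, A * B * (n - 1)
    ms_err = ss_err / df_err
    return {
        "A": (ss_a / df_a / ms_err, df_a, df_err),
        "B": (ss_b / df_b / ms_err, df_b, df_err),
        "AxB": (ss_ab / df_ab / ms_err, df_ab, df_err),
    }

# Illustrative (made-up) weight-loss values in grams:
cells = {
    ("placebo", "saline"):    [1.0, 2.0, 1.0, 2.0],
    ("placebo", "naloxone"):  [1.0, 2.0, 2.0, 1.0],
    ("morphine", "saline"):   [1.0, 1.0, 2.0, 2.0],
    ("morphine", "naloxone"): [8.0, 9.0, 8.0, 9.0],
}
result = two_way_anova(cells)
```

With the F ratios and degrees of freedom in hand, p-values follow from the F distribution; in practice a statistics package would report them directly.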
Effects of naloxone-induced morphine withdrawal on NA and MHPG levels and NA turnover in the PVN

Post hoc analysis indicated that groups rendered dependent on morphine and injected with saline showed significantly (p < 0.01) higher levels of NA than the placebo-pelleted groups also injected with saline (Fig. 1A). By contrast, morphine-dependent rats receiving naloxone showed a significant (p < 0.001) decrease in NA levels 60 min after naloxone injection. The ANOVA for MHPG production showed a significant effect of chronic pretreatment [F(1,28) = 13.61; p = 0.0010]. Post hoc analysis showed that MHPG levels increased significantly (p < 0.01) in the naloxone-precipitated morphine withdrawal group, as compared with the placebo-treated group injected with naloxone and with the morphine-dependent rats receiving saline instead of naloxone (Fig. 1B). Results of the two-way ANOVA for NA turnover (as revealed by the MHPG/NA ratio) in the PVN showed a significant effect of chronic pretreatment [F(1,30) = 14.03; p = 0.0008] and a significant interaction between pretreatment and acute treatment [F(1,30) = 12.53; p = 0.0013]. As shown in Fig. 1C, rats rendered dependent on morphine and injected with naloxone showed significantly higher NA turnover in the PVN than the placebo group injected with naloxone (p < 0.001) and than the morphine-pelleted group receiving saline instead of naloxone (p < 0.001).
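The NA turnover index above is simply the metabolite-to-transmitter ratio (MHPG/NA). A minimal sketch, using hypothetical tissue concentrations rather than the measured values:

```python
def na_turnover(mhpg, na):
    """NA turnover index as the MHPG/NA ratio (same units, e.g. ng/mg protein)."""
    if na <= 0:
        raise ValueError("NA content must be positive")
    return mhpg / na

# Hypothetical PVN concentrations (ng/mg protein), not the study's data:
control = na_turnover(mhpg=0.8, na=4.0)
withdrawal = na_turnover(mhpg=1.5, na=2.5)   # higher ratio = higher turnover
```

Because the ratio rises when the metabolite accumulates or the transmitter is depleted, it captures both of the changes reported here (increased MHPG, decreased NA) in a single index.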
Effects of adrenergic antagonists on morphine withdrawal-induced CREB phosphorylation in the PVN as determined by Western blot and immunohistochemistry
In previous studies, Western blot analysis revealed strong activation (phosphorylation) of CREB in the PVN after naloxone injection to morphine-dependent rats, which was dependent on protein kinase C activation [19]. In the present work we examined whether noradrenergic neurotransmission is necessary for the morphine withdrawal-induced CREB phosphorylation. Two-way ANOVA for rats pretreated with the selective α1-adrenoceptor antagonist prazosin revealed that acute prazosin administration [F(1,22) = 23.61; p < 0.0001] and the interaction between morphine treatment and prazosin injection [F(1,22) = 13.81; p = 0.0012] had a significant effect on pCREB immunoreactivity in the PVN. As shown in Fig. 2A, the Newman-Keuls post hoc test showed that naloxone injection to morphine-dependent rats pretreated with vehicle produced a significant (p < 0.01) increase in pCREB levels compared with the placebo-pelleted group also receiving naloxone, which was blocked (p < 0.001) in rats pretreated with prazosin 20 min prior to naloxone. The results were confirmed by immunohistochemical procedures. As shown in Fig. 2B, high levels of pCREB immunoreactivity were observed in the PVN 60 min after naloxone injection to morphine-dependent rats, whereas the PVN from rats pretreated with prazosin showed discrete staining for pCREB (Fig. 2C). According to the Western blot analysis, there was a decrease (t(8) = 3.035; p < 0.05) in pCREB immunoreactivity 60 min after naloxone administration to morphine-dependent rats pretreated with prazosin (Fig. 2D). Two-way ANOVA for rats pretreated with the β-adrenoceptor antagonist propranolol revealed a significant effect of pretreatment [F(1,22) = 26.44; p < 0.0001] on pCREB immunoreactivity in the PVN. The Newman-Keuls post hoc test showed that administration of propranolol 20 min prior to naloxone injection produced an increase (p < 0.01) in pCREB levels similar to that seen in morphine-dependent rats pretreated with vehicle instead of propranolol (Fig. 3A).
These results were also confirmed by immunohistochemical procedures. As shown in Fig. 3B, C, the PVN from rats pretreated with propranolol showed staining for pCREB similar to the PVN from rats receiving vehicle instead of propranolol. No significant differences (t(8) = 1.060) were seen between the morphine-dependent rats receiving vehicle plus naloxone and those injected with propranolol plus naloxone (Fig. 3D).
Induction of CREB phosphorylation in CRF-positive neurons in the PVN is attenuated by prazosin
To explore the specificity of the decrease in pCREB levels observed in the parvocellular part of the PVN during morphine withdrawal in animals pretreated with prazosin, sections from the different treatments were immunohistochemically double-labeled for pCREB and CRF (Figure 4A-C). ANOVA revealed significant differences in the number of CRF neurons expressing pCREB in rats pretreated with prazosin [F(2,14) = 21.69; p < 0.001]. As shown in Figure 4D (left panel), post hoc comparisons showed a significant (p < 0.01) decrease in the number of CRF neurons containing pCREB after naloxone-induced morphine withdrawal in prazosin-pretreated rats compared with those receiving vehicle instead of prazosin. Additionally, ANOVA also revealed significant differences [F(2,14) = 19.28; p < 0.001] in CRF immunoreactivity in rats pretreated with prazosin. As shown in Figure 4D (right panel), there was a significant (p < 0.01) decrease in the total number of CRF neurons after naloxone-induced morphine withdrawal in prazosin-pretreated rats.

Figure 2. Morphine withdrawal-induced CREB activation in the PVN is dependent on α1-adrenoceptor stimulation. Quantitative analysis and representative immunoblots (A) of pCREB in PVN tissue isolated from placebo or morphine-dependent rats pretreated with prazosin before saline or naloxone injection. Post hoc analysis revealed that the increase in CREB phosphorylation during morphine withdrawal was blocked by prazosin (1 mg/kg i.p.). Each bar represents mean ± SEM (% of controls); p: placebo pellets; m: morphine pellets; veh: vehicle; n: naloxone; praz: prazosin. **p < 0.01 vs. control pellets (placebo) + vehicle + naloxone; +++p < 0.001 vs. morphine-treated rats + vehicle + naloxone. The PVN was also processed for pCREB immunohistochemistry. (B, C) Immunohistochemical detection of pCREB in the PVN from morphine-treated rats receiving vehicle and naloxone (B) or prazosin plus naloxone (C). 3V: third ventricle. Scale bar: 100 µm. (D) Quantitative analysis of pCREB immunoreactivity in the PVN. Data correspond to mean ± SEM. Post hoc analysis revealed a significant decrease in pCREB immunoreactivity in prazosin-pretreated rats. *p < 0.05 versus morphine + vehicle + naloxone. doi:10.1371/journal.pone.0031119.g002

Effects of prazosin on the decrease in pTORC1 expression during morphine withdrawal

Two-way ANOVA revealed a significant effect of acute injection [F(1,19) = 11.89; p = 0.0027] and an interaction between chronic pretreatment and acute treatment [F(1,19) = 7.87; p = 0.0113] on pTORC1 levels in the PVN. As shown in Fig. 5, Western blot analysis revealed a significant (p < 0.01) decrease in pTORC1 immunoreactivity in morphine-withdrawn rats compared with the control group receiving naloxone. To determine the effect of α1-adrenoceptor blockade on pTORC1 expression, control- and morphine-treated rats were pretreated with prazosin 20 min before saline or naloxone injection. Post hoc analysis showed that prazosin reversed (p < 0.001) the decreased pTORC1 levels observed during morphine withdrawal.
Effects of adrenergic antagonists on morphine withdrawal-induced HPA axis activation
We measured plasma corticosterone concentrations (as a marker of HPA axis activation) in blood samples obtained from morphine-dependent or control rats 60 min after injection of saline or naloxone. Two-way ANOVA for corticosterone revealed significant effects. As shown in Fig. 6C, in morphine-withdrawn rats administered propranolol, plasma corticosterone levels increased significantly (p < 0.001). In contrast to prazosin pretreatment, pretreatment with propranolol did not modify the morphine withdrawal-induced increase in corticosterone levels. Neither prazosin nor propranolol induced any significant modification in plasma levels of corticosterone in control rats receiving saline or naloxone, or in morphine-pretreated rats receiving saline.
Discussion
For many years, studies have focused on the role of the dopaminergic reward system in drug abuse. However, although the role of NA in stress is well known, its involvement in drug addiction has received less attention. It has been demonstrated that opiate withdrawal results in marked activity of central noradrenergic neurons [1,26]. Thus, several biochemical and electrophysiological changes induced by opiate abstinence have been reported, including an increase in firing rate in response to application of opiate antagonists after chronic morphine treatment [27,28]. Furthermore, NA caused a marked increase in the frequency of postsynaptic potentials of the parvocellular neurons of the PVN [29]. There is also evidence that increased NA is involved in various aspects of the withdrawal response [23,26].
The PVN, a component of the HPA stress axis, has a high density of noradrenergic inputs [1,30] and is anatomically connected with other brain areas implicated in drug abuse, such as the extended amygdala (the brain stress system) and the NTS-A2. We therefore hypothesized that the HPA axis may be an important site for the actions of NA during withdrawal. Previous studies from our group indicate that NA turnover is increased in the PVN 30 min after naloxone administration to morphine-dependent rats [2]. The present findings show that morphine withdrawal also enhances noradrenergic activity in the PVN at the 60 min time-point, as revealed by increased MHPG production and NA turnover in this nucleus (Fig. 1). These effects have been shown to be accompanied by increased CRF hnRNA and TH mRNA expression and tyrosine hydroxylase (TH) enzymatic activity in the PVN and are induced via a mechanism involving phosphorylation of TH at Ser31 [2,23]. The present study focused on the impact of noradrenergic modulation in the context of the withdrawal-induced CREB phosphorylation and HPA axis activation observed in morphine-withdrawn rats.
As reported recently [19], the data depicted in Fig. 2 indicate that naloxone-induced morphine withdrawal produced robust CREB activation in the hypothalamic PVN. These effects of morphine abstinence are mediated through the activation (phosphorylation) of CREB, but not through the up-regulation of its expression in the PVN, as previously shown by Martín et al. [18,19]. CREB regulates the transcription of over 10,000 genes, including those implicated in stress and addiction, such as CRF [31]. The present work showed that the increase in pCREB immunoreactivity co-localized with CRF neurons of the parvocellular part of the PVN (Fig. 4), consistent with morphine withdrawal-induced transcriptional regulation of CRF in the PVN. Thus, using probes complementary to intronic sequences of the gene encoding CRF in the parvocellular neurosecretory neurons of the PVN, we had found robust increases in the precursor mRNA (hnRNA) for CRF in morphine-dependent rats during naloxone-precipitated morphine withdrawal [16]. In addition, previous findings showed that the induction of c-Fos expression during morphine withdrawal occurs predominantly in hnRNA CRF-expressing neurons of the parvocellular part of the PVN, consistent with transcriptional regulation of CRF neurons by morphine withdrawal [16]. Taken together, the present results suggest that activation of CREB could contribute to the increased CRF gene transcription during morphine withdrawal. Supporting this hypothesis are previous findings indicating that CREB is a potent activator of CRF transcription [20,32]. Furthermore, Itoi et al. [31] showed that injection of antisense oligodeoxynucleotides to CREB blocked the increase in CRF mRNA caused by stress and drug exposure [33]. In line with these findings, in the present study we have shown that morphine withdrawal increased CRF immunoreactivity in the PVN, as Fig. 4 depicts. The increase in the number of CRF-positive neurons could result from an increase in the synthesis of the peptide and hence in the peptide content of the CRF perikarya of the PVN. Although indirect, all these results suggest that activation of CREB could contribute to increased transcription of the CRF gene during morphine withdrawal.

Figure 5. Noradrenergic activity is required for morphine withdrawal-induced TORC1 activation in the hypothalamic PVN. Quantitative analysis and representative immunoblots of phosphorylated TORC1 in PVN tissue isolated from placebo or morphine-dependent rats pretreated with vehicle or prazosin before saline or naloxone injection. Post hoc analysis revealed that the decrease in TORC phosphorylation induced by morphine withdrawal was reversed by prazosin. Each bar represents mean ± SEM (% of controls); p: placebo pellets; m: morphine pellets; veh: vehicle; n: naloxone; praz: prazosin. **p < 0.01 vs. control pellets (placebo) + vehicle + naloxone; +++p < 0.001 vs. morphine-treated rats + vehicle + naloxone. doi:10.1371/journal.pone.0031119.g005

Figure 6. Hypothalamus-pituitary-adrenal (HPA) axis activation during morphine withdrawal is attenuated by α1- but not β-adrenoceptor blockade. Placebo and morphine-dependent rats were pretreated with prazosin or propranolol and plasma levels of corticosterone (a marker of HPA axis activity) were determined 60 min after naloxone injection. Praz: prazosin; prop: propranolol; sal: saline; nx: naloxone. Each bar represents mean ± SEM. Post hoc analysis revealed a significant increase in plasma corticosterone concentration after naloxone-induced morphine withdrawal, which was attenuated in prazosin- but not in propranolol-pretreated rats. ***p < 0.001 versus placebo + naloxone; +++p < 0.001 versus morphine + saline; &&&p < 0.001 versus placebo + prazosin + naloxone; $$$p < 0.001 versus morphine + prazosin + saline; ##p < 0.01 versus morphine + naloxone; @@@p < 0.001 versus placebo + propranolol + naloxone; %%%p < 0.001 versus morphine + propranolol + saline. doi:10.1371/journal.pone.0031119.g006
In the present study, the effects of α1- (prazosin) and β- (propranolol) adrenoceptor antagonists were evaluated for their ability to modify CREB phosphorylation, CRF immunoreactivity and corticosterone release in morphine-dependent rats. The present findings clearly show that, at the PVN level, only the α1-adrenoceptor stimulates CREB phosphorylation, since prazosin but not propranolol significantly decreased morphine withdrawal-induced CREB phosphorylation (Figs. 2, 3). Furthermore, noradrenergic α1-receptor blockade by prazosin significantly attenuated the morphine withdrawal-induced CREB activation in CRF-positive neurons (Fig. 4). We showed that this response was associated with a reduction of both CRF-containing neurons and corticosterone release, as Figs. 4 and 6 depict. These findings suggest that the ability of morphine withdrawal to stimulate CREB activation and stress axis activity is under the control of the noradrenergic system via α1-adrenoceptor stimulation. This is in accordance with several reports describing the modulatory action of the noradrenergic system on the hypothalamic stress axis. Indeed, NA neurons arising from the NTS-A2 provide excitatory inputs to the CRF neurons in the PVN, and activation of these neurons during precipitated morphine withdrawal or during stress was blocked by prazosin [12,34]. Since NA was found to stimulate CREB phosphorylation [35,36], and considering that CREB phosphorylation is critical for CRF transcription, it is reasonable to hypothesize that inhibition of morphine withdrawal-induced CREB activation by prazosin may be responsible for the inhibitory effect of this adrenoceptor antagonist on HPA axis activity, whereas the β-adrenoceptor seems not to be involved in these actions.
Electrical stimulation of the ventral ascending noradrenergic bundle and intracerebroventricular injection of NA increase pituitary-portal plasma levels of CRF [37]. Moreover, NA injection directly into the PVN has a similar effect, which was prevented by α1- but not β-adrenoceptor antagonists [38]. Taken together, these results suggest a positive correlation between noradrenergic terminals innervating the PVN and HPA axis activity. In addition to the proposed direct effects of NA on adrenoceptors located on the CRF neurons, it has been shown that NA can also influence the activity of the HPA axis through activation of adrenergic receptors located in the bed nucleus of the stria terminalis (BNST) [39] in response to stress. Daftary et al. [29] reported that CRF release may also be evoked through intrahypothalamic glutamatergic interneurons expressing α1-adrenoceptors, indicating the complexity of the interaction between the noradrenergic system and CRF neurons.
CREB is classically considered to be the mediator of cAMP/PKA-mediated effects. According to the conventional model, cAMP activates PKA, leading to sequential phosphorylation of CREB, binding of phospho-CREB to the cAMP Response Element (CRE) in the CRF promoter, and activation of transcription [40]. Consistent with our results, it has been shown that noradrenergic neurons stimulate CRF cells via α1-adrenoceptors and hence the HPA axis [41][42][43]. The α1 receptor is coupled to the phospholipase C/PKC signal transduction pathway. Thus, it is possible that stimulation of phospholipid signaling by NA would lead to CREB phosphorylation and subsequent activation of the CRF neurons in the PVN of morphine-withdrawn rats. Supporting this possibility, PKC antagonists prevented the morphine withdrawal-induced CREB phosphorylation in CRF neurons in the rat PVN [19]. In addition, activation of calcium/phospholipid-dependent pathways by the phorbol ester PMA also activated CREB phosphorylation in hypothalamic neurons [20]. In line with all these findings, the results of the present study strongly suggest the relevance of the α1-adrenoceptor in mediating the CREB phosphorylation seen after naloxone-induced morphine withdrawal. In addition, our findings support a facilitatory influence of NA on morphine withdrawal-induced HPA activation. However, since prazosin attenuated, but did not block, the HPA axis response to morphine withdrawal, other receptor systems may be activated in addition to α1-adrenoceptors, such as CRF2 receptors and orexin receptors [44,45]. An additional explanation for the present findings is that, although CRF is thought to be the major secretagogue stimulating ACTH secretion, AVP and other factors also play a role [46,47].
Previous studies suggest that the phosphorylation site of CREB is a convergence point for multiple kinases and acts as a molecular switch for controlling gene activation kinetics [48]. CREB activity can also be regulated by a new family of transcriptional coactivators, the TORCs [49]. It has recently been shown that CREB is essential but not sufficient for activation of CRF transcription, suggesting that translocation of TORCs to the nucleus, acting as CREB coactivators, is required for CRF transcription [50]. TORC phosphorylation by specific kinases increases its affinity for the scaffolding protein 14-3-3, thus preventing nuclear translocation [51]. The present study focused on investigating the effects of morphine withdrawal and adrenoceptor blockade on phosphorylated TORC1 (pTORC1) levels. Our results show that morphine withdrawal produced a decrease in pTORC1, the inactive form of this CREB coactivator (Fig. 5), which suggests that TORC1 was dephosphorylated (activated) in response to morphine abstinence. The mechanism regulating TORC activation is under current investigation [50]. Our findings also show that pretreatment with prazosin antagonized the morphine withdrawal-induced decrease in pTORC1 levels in the PVN. Therefore, all this evidence indicates that morphine withdrawal-induced activation of TORC1 would require activation of the α1-adrenoceptor, suggesting that phospholipid-dependent pathways might be involved in TORC1 activation and providing a mechanism by which morphine withdrawal and α1-agonists induce a stimulatory effect on CRF neurons.
It has been shown that ligands targeting noradrenergic receptor subtypes can attenuate both the physical and motivational components of the enhanced drug ingestion observed in opiate-, alcohol-, and nicotine-dependent animals and humans [6,7,13,52,53]. The PVN appears to be a very important site of action for α1-ligand-mediated effects on feeding [13]. The observation that prazosin but not propranolol attenuated the weight loss during morphine withdrawal indicates that noradrenergic pathways may participate in a subset of somatic withdrawal signs through stimulation of the α1-adrenoceptor subtype, as has been previously shown [2].
In summary, our results suggest that NA and α1-adrenoceptors may control HPA axis activity through CREB activation at the PVN during acute morphine withdrawal. The combination of CREB phosphorylation and pTORC1 dephosphorylation (activation) might represent, in part, two of the mechanisms of CREB activation at the PVN during morphine withdrawal.
Materials and Methods

Animals
Male Sprague-Dawley rats (220-240 g; Harlan, Barcelona, Spain) were housed two to three per cage in a room with controlled temperature (22 ± 2 °C) and humidity (50 ± 10%), with free access to water and food. Animals were adapted to a standard 12-h light-dark cycle for 7 days before the beginning of the experiments. All surgical and experimental procedures were performed in accordance with the European Communities Council Directive of 24 November 1986 (86/609/EEC) and the local Committees for animal research (REGA ES300305440012). The study was approved by the University of Murcia bioethics committee (RD 1201/2005) and the Ministerio de Ciencia y Tecnología (SAF2009-07178), Spain.
Drug treatment and experimental procedure
Groups of rats were rendered dependent on morphine by s.c. implantation of morphine base pellets (75 mg): one on day 1, two on day 3 and three on day 5, under light ether anesthesia. Control animals were implanted with placebo pellets containing lactose instead of morphine on the same time schedule. This morphine treatment paradigm has been shown to produce profound states of tolerance and dependence and to result in characteristic biochemical adaptations within the paraventricular nucleus and behavioral alterations [23,54]. On day 8, when rats were morphine-dependent, animals were injected i.p. with vehicle, prazosin (1 mg/kg) or propranolol (3 mg/kg) and 20 min later received saline s.c. or naloxone (2 mg/kg s.c.). On the basis of our initial experiments on prazosin-induced inhibition of behavioral signs of morphine withdrawal and HPA axis activity [2,12], the 1 mg/kg dose of prazosin was chosen for our experiments. The dose of propranolol was selected on the basis of previous findings [12]. The weight gain of the rats was checked during treatment to ensure that morphine was released correctly from the pellets, because chronic morphine treatment is known to induce a decrease in body weight gain due to lower caloric intake [55]. In addition, on the day of the experiment, weight loss was determined as the difference between the weight measured immediately before saline or naloxone injection and a second measurement made 60 min later, immediately before killing.
Western blot analysis
Sixty min after administration of naloxone or saline, rats were killed by decapitation. The hypothalamic tissue containing the PVN was dissected according to the technique of Palkovits [56]. PVN samples were placed in homogenization buffer [23], homogenized and sonicated for 30 s before centrifugation at 6,000 × g for 10 min at 4 °C. Samples containing 40 μg of protein were loaded on a 10% SDS/polyacrylamide gel, electrophoresed and transferred onto polyvinylidene difluoride (PVDF) membranes (Millipore, Bedford, MA, USA). Western analysis was performed with the following primary antibodies: 1:750 polyclonal anti-phospho-CREB-123-136 (pCREB; Millipore, Temecula, CA); 1:375 polyclonal anti-phospho-TORC1 (pTORC1; Cell Signaling); 1:5000 polyclonal anti-α-tubulin; and 1:1000 polyclonal β-actin (Cell Signaling). After extensive washing with TBST, the membranes were incubated with peroxidase-labeled secondary antibodies. We used α-tubulin or β-actin (depending on the molecular weight of the protein measured) as our loading controls for all the experiments. Before re-probing, blots were stripped by incubation with stripping buffer (glycine 25 mM and SDS 1%, pH 2) for 1 h at 37 °C.
Blots were subsequently reblocked and probed with anti-α-tubulin or β-actin. Quantification of immunoreactivity corresponding to pCREB (43 kDa), pTORC1 (82 kDa), α-tubulin (52 kDa) and β-actin (45 kDa) bands was carried out by densitometry. The integrated optical density of the bands was corrected by subtraction of the background values. The ratios of pCREB/α-tubulin and pTORC1/β-actin were calculated and expressed as a percentage of the average of controls in each blot.
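The quantification described above (background subtraction, normalization to a loading control, and expression as a percentage of the control-group mean) amounts to simple arithmetic, sketched here in Python. All densitometry values and the choice of control lanes are hypothetical.

```python
import numpy as np

def normalize_blot(target_iod, target_bg, loading_iod, loading_bg, control_idx):
    """Background-correct band densities, take the ratio to the loading
    control (e.g., pCREB/alpha-tubulin or pTORC1/beta-actin), and express
    each lane as a percentage of the control-group mean on the same blot."""
    target = np.asarray(target_iod, float) - np.asarray(target_bg, float)
    loading = np.asarray(loading_iod, float) - np.asarray(loading_bg, float)
    ratio = target / loading
    control_mean = ratio[list(control_idx)].mean()
    return 100.0 * ratio / control_mean

# hypothetical integrated optical densities; first two lanes are placebo controls
pct = normalize_blot(
    target_iod=[120, 130, 260, 250],
    target_bg=[20, 30, 60, 50],
    loading_iod=[210, 205, 215, 210],
    loading_bg=[10, 5, 15, 10],
    control_idx=[0, 1],
)
# pct -> [100., 100., 200., 200.]: treated lanes show twice the control signal
```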
Immunohistochemical detection of pCREB
Sixty min after naloxone or saline injections, rats were deeply anesthetized with pentobarbital (100 mg/kg i.p.) and quickly perfused through the ascending aorta with saline followed by ice-cold fixative. Brains were post-fixed in the fixative for 3 h and then placed in PBS containing 10% sucrose overnight. Series of 30 μm frontal sections were cut on a freezing microtome, collected in cryoprotectant and stored at −20 °C until processing. After blocking with H2O2 and normal goat serum, sections were incubated for 60 h at 4 °C with a rabbit anti-pCREB antibody (Upstate; 1:750). This was followed by application of a biotinylated anti-rabbit IgG (Vector Laboratories, Burlingame, CA, USA), and then with the avidin-biotin complex. Visualization of the antigen-antibody reaction sites was performed using 3,3′-diaminobenzidine (DAB, Sigma). Sections were mounted onto chrome-alum gelatin coated slides, dehydrated through graded alcohols, cleared in xylene and coverslipped with dibutyl phthalate (DPX).
Double-labeling immunohistochemistry of pCREB-immunoreactive nuclei and CRF-positive neurons
For pCREB and CRF double-label immunohistochemistry, tissue sections from each rat in each treatment group were processed as follows: pCREB was developed with DAB intensified with nickel, and CRF was revealed with DAB. pCREB immunohistochemistry was performed as described above, and the pCREB antibody-peroxidase complex was visualized by using a mixture of DAB and nickel.

Quantification of pCREB immunoreactivity

pCREB immunostaining within sections of the PVN was quantified bilaterally for each rat and for all treatment groups by an observer blinded to the treatment protocol. The density of pCREB-like immunoreactivity was determined using a computer-assisted image analysis system (QWIN, Leica, Madrid, Spain). This system consists of a light microscope (DM4000B; Leica) connected to a video camera (DFC290, Leica) and the image analysis computer.
Quantification of pCREB-positive/CRF-positive neurons

pCREB-positive CRF cells were identified as cells with brown cytosolic deposits for CRF-positive staining and blue/dark nuclear staining for pCREB. A square field (195 μm) was superimposed upon the captured image to use as a reference area. The number of double-labeled pCREB neurons observed bilaterally was counted in three to four sections from each animal in the PVN. The total number of CRF cells (with or without a visible nucleus) was also counted.
HPLC
NA and its metabolite in the central nervous system, MHPG, were determined by HPLC with electrochemical detection as described previously [57]. PVN samples were frozen in liquid nitrogen, weighed, placed in perchloric acid (0.1 M), homogenized and centrifuged, and the supernatants were filtered through 0.22 μm GV filters (Millipore) and taken for analysis. Ten μL of each sample were injected into a 5-μm C18 reversed-phase column (Waters, Milford, MA, USA) through a Rheodyne syringe loading injector (Waters). Electrochemical detection was accomplished with an electrochemical detector (Waters 2465). NA and MHPG were quantified by reference to calibration curves run at the beginning and the end of each series of assays. The content of NA and MHPG in the PVN was expressed as ng·g⁻¹ wet weight of tissue. NA turnover was determined as the NA ratio, calculated as NA ratio = MHPG/NA.
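The calibration-curve quantification and the NA turnover calculation (NA ratio = MHPG/NA, content in ng per g wet tissue) reduce to the following arithmetic, sketched in Python. All standard and sample peak areas, and the tissue weight, are hypothetical.

```python
import numpy as np

def quantify(peak_area, std_areas, std_ng):
    """Linear calibration curve fit through the standards by least squares;
    returns ng of analyte injected for a sample peak area."""
    slope, intercept = np.polyfit(std_areas, std_ng, 1)
    return slope * peak_area + intercept

def na_turnover(na_area, mhpg_area, std, tissue_mg):
    """Content of NA and MHPG as ng per g wet tissue, plus NA ratio = MHPG/NA."""
    na_ng = quantify(na_area, std["na_areas"], std["ng"])
    mhpg_ng = quantify(mhpg_area, std["mhpg_areas"], std["ng"])
    tissue_g = tissue_mg / 1000.0
    na = na_ng / tissue_g      # ng per g wet weight
    mhpg = mhpg_ng / tissue_g
    return na, mhpg, mhpg / na

# hypothetical calibration standards (ng injected vs. peak area) and a PVN sample
std = {"ng": [1, 2, 4], "na_areas": [100, 200, 400], "mhpg_areas": [50, 100, 200]}
na, mhpg, ratio = na_turnover(na_area=300, mhpg_area=75, std=std, tissue_mg=10)
# na -> 300 ng/g, mhpg -> 150 ng/g, NA ratio -> 0.5
```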
Radioimmunoassay
Sixty min after saline or naloxone injection, rats were decapitated. Plasma levels of corticosterone were measured by a commercially available kit for rats (¹²⁵I-corticosterone RIA; MP Biomedicals, Orangeburg, NY). The sensitivity of the assay was 7.7 ng·mL⁻¹.
Materials
Pellets of morphine (75 mg morphine base/pellet; Alcaliber Labs., Madrid, Spain) or lactose (placebo) were prepared by the Department of Pharmacy and Pharmaceutics Technology (School of Pharmacy, Granada, Spain); naloxone HCl, prazosin HCl and DL-propranolol HCl were purchased from Sigma Chemical Co. (St Louis, MO). Naloxone HCl and propranolol were dissolved in sterile 0.9% NaCl (saline); prazosin was dissolved in sterile distilled water and administered in volumes of 0.1 ml/100 g body weight. A phosphatase inhibitor cocktail set was obtained from Calbiochem (Germany) and protease inhibitors from Boehringer Mannheim (Germany). HPLC reagents were purchased from Sigma.
Statistical analysis
Data are presented as mean ± S.E.M. Data were analyzed using one- or two-way analysis of variance (ANOVA) followed by a post hoc Newman-Keuls test. Student's t-test was used when comparisons were restricted to two experimental groups. Differences with a P-value <0.05 were considered significant.
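This analysis pipeline (one-way ANOVA across multiple groups, Student's t-test for two-group comparisons, significance at P < 0.05) can be sketched with SciPy on hypothetical corticosterone data. Note that SciPy has no built-in Newman-Keuls post hoc test; in practice a dedicated routine (or a common alternative such as Tukey HSD) would follow a significant ANOVA. Group names, means, and sample sizes below are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical plasma corticosterone values (ng/mL) for four treatment groups
groups = {
    "placebo+saline":             rng.normal(100, 15, 8),
    "morphine+saline":            rng.normal(110, 15, 8),
    "morphine+naloxone":          rng.normal(250, 30, 8),
    "morphine+naloxone+prazosin": rng.normal(150, 25, 8),
}

# one-way ANOVA across all four groups
f_stat, p_anova = stats.f_oneway(*groups.values())

# Student's t-test, as used when comparisons are restricted to two groups
t_stat, p_t = stats.ttest_ind(groups["morphine+saline"],
                              groups["morphine+naloxone"])

alpha = 0.05
significant = p_anova < alpha  # proceed to post hoc pairwise tests if True
```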
Host Genetics and Environment Shape the Composition of the Gastrointestinal Microbiome in Nonhuman Primates
ABSTRACT The bacterial component of the gastrointestinal tract microbiome is comprised of hundreds of species, the majority of which live in symbiosis with the host. The bacterial microbiome is influenced by host diet and disease history, and host genetics may additionally play a role. To understand the degree to which host genetics shapes the gastrointestinal tract microbiome, we studied fecal microbiomes in 4 species of nonhuman primates (NHPs) held in separate facilities but fed the same base diet. These animals include Chlorocebus pygerythrus, Chlorocebus sabaeus, Macaca mulatta, and Macaca nemestrina. We also followed gastrointestinal tract microbiome composition in 20 Macaca mulatta (rhesus macaques [RMs]) as they transitioned from an outdoor to indoor environment and compared 6 Chlorocebus pygerythrus monkeys that made the outdoor to indoor transition to their 9 captive-born offspring. We found that genetics can influence microbiome composition, with animals of different genera (Chlorocebus versus Macaca) having significantly different gastrointestinal (GI) microbiomes despite controlled diets. Animals within the same genera have more similar microbiomes, although still significantly different, and animals within the same species have even more similar compositions that are not significantly different. Significant differences were also not observed between wild-born and captive-born Chlorocebus pygerythrus, while there were significant changes in RMs as they transitioned into captivity. Together, these results suggest that the effects of captivity have a larger impact on the microbiome than other factors we examined within a single NHP species, although host genetics does significantly influence microbiome composition between NHP genera and species. 
IMPORTANCE Our data point to the degree to which host genetics can influence GI microbiome composition and suggest, within primate species, that individual host genetics is unlikely to significantly alter the microbiome. These data are important for the development of therapeutics aimed at altering the microbiome within populations of genetically disparate members of primate species.
that contribute to host diversity is necessary in order to guide the development of therapeutics aimed at correcting insufficiencies.
Several factors contribute to the composition of the gut microbiome (4). The GI tract microbiome is acquired during birth and is shaped by immunoglobulins, enzymes, and complex oligosaccharides transferred to the offspring from the mother and her microbiome (2). The introduction of solid food further shapes the development of the microbiome, transitioning the bacterial community to one better capable of processing and extracting energy from a diet high in fiber and protein (4). Adult dietary intake continues to shape the composition of the microbiome, with the types and relative amounts of fats, sugars, fibers, and proteins having significant effects on the abundances of different bacterial phyla within the GI tract microbiome (4).
Host genetics is also thought to shape the composition of the GI tract microbiome, with certain genetic loci associated with particular microbes (5). Both individual and genome-wide associations have been described between bacterial frequencies and gene abundance (5), single nucleotide polymorphisms (6), and gene functionality (7), and similarly, several loci are associated with variations in β diversity (8). Although adjusted to control for variables including age, sex, and ethnicity, it remains unclear from these large studies whether genetics (absent specific disease-associated polymorphisms) contributes to diversity of the microbiome more than diet and environment. Under the control of host genetics are more direct potential mediators that act on the bacteria living in the GI tract and allow the host to shape the composition of the microbiome there. Other studies have identified a range of such mediators: mucus production along the GI tract, secretion of antimicrobial peptides, production of immunoglobulin A, regulation of electron acceptor and donor availability in the gut lumen, and secretion of microRNAs (miRNAs) (9)(10)(11).
To investigate how host genetics may influence the composition of the GI microbiome in primates, we profiled the GI tract resident bacteria of 15 Chlorocebus pygerythrus (vervet African green monkeys), 7 Chlorocebus sabaeus (sabaeus African green monkeys), 49 Macaca mulatta (rhesus macaques [RMs]), and 6 Macaca nemestrina (pig-tailed macaques [PTMs]) under controlled dietary and environmental conditions. In addition, we assessed changes in the composition of the GI tract microbiome of 20 RMs as they were moved from a provisioned outdoor environment to indoor research facilities.
RESULTS
Microbiome variation across all animals. To assess the degree to which host genetics can shape the composition of the gut microbiome, stool samples were collected from two genera and four species of nonhuman primates (NHPs) (Chlorocebus, n = 22 [sabaeus, n = 7; vervet, n = 15]; Macaca, n = 55 [PTMs, n = 6; RMs, n = 49]), and fecal DNA was assessed by 16S sequencing. Weighted UniFrac analysis revealed that a significant difference in β diversity exists between genera (Adonis, P = 0.001), with principal-coordinate analysis (PCoA) demonstrating that genera clustered away from each other irrespective of species, facility, and birth location (Fig. 1A). When analysis is controlled for facility by excluding facility 1 animals (because all Chlorocebus animals were housed at facility 2) or by excluding wild-born vervets, the significance of the comparisons remains unchanged (results not shown). Unlike β diversity, α diversity was not significantly different between groups as measured by Shannon diversity (Fig. 1B). Differences between the microbiomes of NHP genera are further seen in the relative abundances of all represented phyla among the cohort (Fig. 1C). Bacteroidetes, Firmicutes, and Proteobacteria were the 3 main phyla in both NHP genera. These phyla and members of the phyla Actinobacteria, Epsilonbacteraeota, Fibrobacteres, Lentisphaerae, Planctomycetes, Tenericutes, and Verrucomicrobia showed significant differences by LEfSe (linear discriminant analysis [LDA] effect size) (Fig. 1D).
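As a self-contained illustration of these diversity analyses: weighted UniFrac requires a phylogenetic tree, so the sketch below substitutes Bray-Curtis dissimilarity and a simple permutation test of group separation as a stand-in for Adonis/PERMANOVA, alongside the Shannon index used for α diversity. All count tables are hypothetical; in practice dedicated tools (e.g., scikit-bio or vegan) would be used.

```python
import numpy as np

def shannon(counts):
    """Shannon diversity index (natural log) of a vector of taxon counts."""
    p = np.asarray(counts, float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two count vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.abs(x - y).sum() / (x + y).sum())

def permutation_pvalue(samples, labels, n_perm=999, seed=0):
    """Permutation test on mean between-group minus within-group distance,
    a simplified stand-in for Adonis/PERMANOVA on a distance matrix."""
    n = len(samples)
    d = np.array([[bray_curtis(samples[i], samples[j]) for j in range(n)]
                  for i in range(n)])
    labels = np.asarray(labels)

    def separation(lab):
        between = d[lab[:, None] != lab[None, :]]
        within = d[(lab[:, None] == lab[None, :]) & ~np.eye(n, dtype=bool)]
        return between.mean() - within.mean()

    observed = separation(labels)
    rng = np.random.default_rng(seed)
    hits = sum(separation(rng.permutation(labels)) >= observed
               for _ in range(n_perm))
    return observed, (hits + 1) / (n_perm + 1)

# hypothetical OTU count tables: three fecal samples per "genus"
genus_a = [[90, 5, 5], [85, 10, 5], [80, 10, 10]]
genus_b = [[10, 80, 10], [15, 75, 10], [10, 70, 20]]
obs, p = permutation_pvalue(genus_a + genus_b, ["A"] * 3 + ["B"] * 3)
alpha_div = [shannon(s) for s in genus_a + genus_b]
```

With only six samples the permutation p-value is bounded below by the number of distinct label arrangements, which is why small-cohort comparisons in the paper lean on larger group sizes.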
We looked further within the 3 main phyla, which account for ~90% of total bacterial abundance in our samples, using LEfSe to determine significant differences, and used Krona plots to aid in the visualization of these differences (Fig. 1E to G). Within Bacteroidetes, the class Bacteroidia, order Bacteroidales, and family Bacteroidaceae were significantly higher in the NHP genus Chlorocebus (Fig. 1E). Among Firmicutes, the classes Clostridia and Erysipelotrichia, the orders Clostridiales and Erysipelotrichales, and the families Clostridiaceae_1, Lachnospiraceae, and Erysipelotrichaceae were significantly higher in Chlorocebus, while within the class Bacilli, the order Lactobacillales and families Lactobacillaceae, Streptococcaceae, Peptococcaceae, Peptostreptococcaceae, and Veillonellaceae were significantly higher in Macaca (Fig. 1F). For Proteobacteria, the classes Deltaproteobacteria and Gammaproteobacteria, the orders Desulfovibrionales and Betaproteobacteriales, and the families Desulfovibrionaceae, Succinivibrionaceae, and Burkholderiaceae were significantly higher in Chlorocebus, with the order Pasteurellales and family Pasteurellaceae significantly higher in Macaca (Fig. 1G). The remaining phyla make up less than 10% of the total bacterial abundance in our samples, with significant differences at the phylum level for Epsilonbacteraeota, Fibrobacteres, Planctomycetes, Tenericutes, and Verrucomicrobia as assessed by LEfSe. Full results of significant differences down to the genus level are available in Fig. S1 in the supplemental material, and all operational taxonomic units (OTUs) examined are available in Table S1.
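The Krona-style summaries used here report subtaxa comprising at least 5% of their phylum. A minimal sketch of that relative-abundance-and-threshold step, with hypothetical family-level counts:

```python
def relative_abundance(counts_by_taxon):
    """Convert raw counts per taxon to fractions of the total."""
    total = sum(counts_by_taxon.values())
    return {t: c / total for t, c in counts_by_taxon.items()}

def krona_filter(family_counts, threshold=0.05):
    """Keep families comprising >= threshold of their phylum (as in the
    Krona plots); pool everything else into 'other'."""
    rel = relative_abundance(family_counts)
    kept = {t: f for t, f in rel.items() if f >= threshold}
    other = 1.0 - sum(kept.values())
    if other > 1e-12:
        kept["other"] = other
    return kept

# hypothetical family-level counts within a single phylum
counts = {"Bacteroidaceae": 600, "Prevotellaceae": 300,
          "Rikenellaceae": 60, "Marinifilaceae": 40}
profile = krona_filter(counts)
# Marinifilaceae (4%) falls below the 5% threshold and is pooled into 'other'
```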
Microbiome variation across members of Chlorocebus. To determine if the GI tract microbiomes of NHP species of the same genus were disparate, we performed weighted UniFrac analysis to examine β diversity in a subset of the original cohort, sabaeus monkeys (n = 7) and vervets (n = 15). Sabaeus monkeys and vervets significantly differed in β diversity (Adonis, P = 0.002) and clustered away from each other by PCoA (Fig. 2A). While β diversity showed significant differences between sabaeus monkeys and vervets, α diversity was not significantly different between groups (Fig. 2B). Differences between the microbiomes of NHP species can further be seen in relative abundances for all represented phyla among the subset (Fig. 2C). LEfSe analysis revealed significantly different OTU counts among subtaxa of the phyla Actinobacteria, Bacteroidetes, Fibrobacteres, Firmicutes, Proteobacteria, and Spirochaetes. Only Fibrobacteres significantly differed at the phylum level (Fig. 2D).
[FIG 1 legend, displaced here by text extraction (partial): (A) significance between NHP genera in fecal β diversity was assessed by Adonis; lines represent the distance from each sample to the group's centroid. (B) Shannon diversity (α diversity) comparison of fecal microbiomes between NHP genera; lines denote means; significance between groups was determined by unpaired, two-way t test. (C) Relative abundance of bacterial families in NHP genera measured by 16S rRNA gene sequencing; color is by phylum, and line divisions are by family. (D) LEfSe cladogram representing all taxa in the fecal microbiome down to the genus level, with red (greater in Chlorocebus) or green (greater in Macaca) nodes indicating significant differences and gold nodes indicating no significant difference; labels were restricted to the phylum level for ease of visualization. (E to G) Krona plots representing relative frequency of fecal Bacteroidetes (E), Firmicutes (F), and Proteobacteria (G) subtaxa comprising ≥5% of phylum composition to the family level; shown taxa are collapsed to the lowest common taxon, and the percentage given after each phylum is its share of total bacteria in the group.]

A deeper analysis of differences in OTU counts by LEfSe revealed fewer differences between sabaeus monkeys and vervets (Fig. 2E to G) than between Chlorocebus and Macaca in the three main phyla of Bacteroidetes, Firmicutes, and Proteobacteria. Within the phylum Bacteroidetes, the families Muribaculaceae and Rikenellaceae were significantly higher in sabaeus monkeys, while the order Flavobacteriales and family Flavobacteriaceae were significantly higher in vervets (Fig. 2E). In Firmicutes, the families Christensenellaceae, Clostridiaceae_1, and Eubacteriaceae were higher in sabaeus monkeys, whereas the families Aerococcaceae and Lachnospiraceae were significantly higher in vervets (Fig. 2F).
Among Proteobacteria, the class Deltaproteobacteria, the order Desulfovibrionales, and the family Desulfovibrionaceae were significantly higher in sabaeus monkeys, with the orders Aeromonadales and Pseudomonadales and the families Succinivibrionaceae and Moraxellaceae higher in vervets (Fig. 2G). Full results of significant differences down to the genus level are available in Fig. S1, and all OTUs examined are available in Table S1.
Microbiome variation across vervets. Environmental conditions surrounding birth and early life are proposed to have lasting effects on immunity and the commensal microbiome (4). To investigate if being born in captivity or the wild could have lasting effects on the composition of the GI tract microbiome, we performed weighted UniFrac analysis to examine β diversity in subdivided groups of the vervets from the original cohort, those born in captivity (n = 9) and those born in the wild (n = 6). Vervets did not significantly differ (Adonis, P = 0.744) and did not cluster away from each other by birth status (Fig. 3A). α diversity was also not significantly different between the vervet subsets (Fig. 3B). Comparable relative abundances for all represented phyla among the subset of vervets can be seen in Fig. 3C. LEfSe analysis of OTU counts revealed no significant differences from the phylum level down to the family level between the vervets by birth status; only 3 genera within the Firmicutes phylum were significantly different between the subsets of vervets (Fig. 3D to G). Full results of significant differences down to the genus level are available in Fig. S1, and all OTUs examined are available in Table S1.
[FIG 2 legend, displaced here by text extraction (partial): (A) significance between NHP species in fecal β diversity was assessed by Adonis; lines represent the distance from each sample to the group's centroid. (B) Shannon diversity (α diversity) comparison of fecal microbiomes between NHP species; lines denote means; significance between groups was determined by unpaired, two-way t test. (C) Relative abundance of bacterial families in NHP species measured by 16S rRNA gene sequencing; color is by phylum, and line divisions are by family. (D) LEfSe cladogram representing all taxa in the fecal microbiome down to the genus level, with red (greater in sabaeus monkeys) or green (greater in vervets) nodes indicating significant differences and gold nodes indicating no significant difference; full results of significant differences down to the genus level are available in Fig. S1, and all OTUs examined are available in Table S1. (E to G) Krona plots representing relative frequency of fecal Bacteroidetes (E), Firmicutes (F), and Proteobacteria (G) subtaxa comprising ≥5% of phylum composition to the family level.]

Related individuals have been shown to have similar microbiomes, sharing features that are conserved irrespective of lifelong cohabitation; however, it remains unclear whether these similarities are the result of genetic relatedness or early life exposure events (5,12). To determine if genetic relatedness significantly contributes to the development of the microbiome, we assessed β diversity by weighted UniFrac analysis on the same data subset, stratified by groups of related vervets (family 1, n = 2; family 2, n = 2; family 3, n = 3; family 4, n = 6; n = 2 vervets unrelated to others in the cohort).
Certain captive- and wild-born vervets belonged to the same family unit due to a small-scale breeding program conducted at their housing facility to maintain a population of vervets after the initial group was imported. The wild-born animals were more than 20 years old; thus, understanding how their transition from outdoor to indoor facilities led to microbiome alterations was not possible. Family groups comprised individuals directly related to each other (parent-child relationships); family 4 includes an additional generation, hence its larger n. Vervets did not differ significantly by family group (Adonis, P = 0.923), nor did they cluster away from each other by PCoA (Fig. 3H). Thus, neither early life environment nor family relatedness significantly contributed to the differences in the fecal microbiomes of outbred animals within the same species.
[FIG 3 legend, displaced here by text extraction (partial): (E to G) Krona plots representing relative frequency of fecal Bacteroidetes (E), Firmicutes (F), and Proteobacteria (G) subtaxa comprising ≥5% of phylum composition to the family level; shown taxa are collapsed to the lowest common taxon, and the percentage given after each phylum is its share of total bacteria in the group. (H) Principal-coordinate analysis of weighted UniFrac distances of gut microbiota in vervets grouped by those related to one another (family 1, n = 2; family 2, n = 2; family 3, n = 3; family 4, n = 6; 2 vervets were not related to any others in the cohort); significance between these groups was assessed by Adonis; lines connect related vervets.]

Microbiome variation across Macaca species. Within the Macaca genus, pig-tailed macaques exhibit a higher degree of gastrointestinal pathologies and elevated immune activation compared to rhesus macaques, and yet a detailed comparison of the fecal microbiomes is lacking (13,14). To determine if differences in the gut microbiomes of the macaque species are present, we assessed β diversity by weighted UniFrac analysis in a subset of the original cohort, PTMs (n = 6) and RMs (n = 49). PTMs and RMs differed significantly in β diversity (Adonis, P = 0.047) and showed a modest separation from each other by PCoA (Fig. 4A). While β diversity showed significant differences between PTMs and RMs, α diversity was not significantly different between groups (Fig. 4B). Differences between the microbiomes of PTMs and RMs can further be seen in relative abundances for all represented phyla among the subset (Fig. 4C). Members of the Actinobacteria, Bacteroidetes, Elusimicrobia, Epsilonbacteraeota, Fibrobacteres, Firmicutes, Patescibacteria, Proteobacteria, and Spirochaetes phyla were shown to be significantly different between the two species by LEfSe analysis of OTU counts (Fig. 4D).
Full results of significant differences down to the genus level are available in Fig. S1, and all OTUs examined are available in Table S1.
A closer look at these differences determined by LEfSe, with Krona plots to provide clearer visualization, shows fewer significant differences among the three major phyla of Bacteroidetes, Firmicutes, and Proteobacteria in the GI microbiome of PTMs versus RMs (Fig. 4E to G) compared to the differences seen in Chlorocebus versus Macaca. Among the members of the phylum Bacteroidetes, only the family Tannerellaceae was significantly higher in PTMs (Fig. 4E). For Firmicutes, the family Lactobacillaceae was significantly higher in PTMs, whereas the class Clostridia, the order Clostridiales, and the families Planococcaceae, Clostridiaceae_1, and Family_XIII were higher in RMs (Fig. 4F). In Proteobacteria, the order Pasteurellales and the family Pasteurellaceae were significantly higher in PTMs, while the class Deltaproteobacteria, the orders Desulfovibrionales and Pseudomonadales, and the families Desulfovibrionaceae and Moraxellaceae were significantly higher in RMs (Fig. 4G).
Microbiome variation across indoor facilities. To investigate if individual housing facilities of NHPs can have effects on the composition of the GI tract microbiome, we performed weighted UniFrac analysis to examine β diversity in subdivided groups of RMs, those housed in facility 1 (n = 21) and those housed in facility 2 (n = 28). RM β diversity did not significantly differ by housing facility (Adonis, P = 0.126), and samples did not cluster away from each other by facility (Fig. 5A). α diversity was significantly different between the RM subsets, with a P value of 0.009 (Fig. 5B). Comparable relative abundances for all represented phyla among the subset of RMs can be seen in Fig. 5C. By LEfSe analysis of OTU counts, members of the phyla Actinobacteria, Bacteroidetes, Firmicutes, Proteobacteria, and Spirochaetes were significantly different in abundance between facilities (Fig. 5D). Full results of significant differences down to the genus level are available in Fig. S1, and all OTUs examined are available in Table S1.
Between the two housing facilities, there were no significantly different OTU counts, as determined by LEfSe, down to the family level for either Bacteroidetes or Firmicutes (Fig. 5E and F). For Proteobacteria, the order Rickettsiales and family Anaplasmataceae were significantly higher in facility 2 RMs (Fig. 5G).
Microbiome variation from provisioned outdoor environment to captivity for research. To determine if NHPs undergo significant and durable changes in their GI tract microbiomes as they move from a provisioned outdoor environment into controlled, indoor research facilities, we performed weighted UniFrac analysis to examine β diversity in a group of RMs moved from a free-ranging Indian-origin rhesus breeding colony to facility 2 (n = 20). Transfer included deworming with ivermectin and fenbendazole and movement from a social setting to an individual caged setting where animals were unable to physically interact, forage, or otherwise encounter environmental immunogens. The four time points studied (days 0, 7, 18, and 63 of transfer [D0, D7, D18, and D63, respectively]) significantly differed in β diversity (Adonis, P = 0.001) and clustered away from each other by PCoA (Fig. 6A). While β diversity showed significant differences between RM time points across all days, α diversity was only significantly different between D7 versus D18 (P = 0.003) and D7 versus D63 (P = 0.009) (Fig. 6B). Differences between the microbiomes of the RMs over the four time points can further be seen in relative abundances for all represented phyla among the subsets (Fig. 6C).
Krona plots were used to create a more in-depth visualization of differences in abundance (Fig. 6D to F), and limma was used to determine significantly altered amplicon sequence variants (ASVs) (Fig. 6G). The phylum Bacteroidetes decreased slightly from D0 (3%) to D7 (2%) before it expanded greatly by D18 (28%) in its contribution to the overall composition of the fecal microbiome, then contracted again by D63 (10%) (Fig. 6D), with 62 significantly altered ASVs.
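As a rough, hedged stand-in for the limma step (which fits linear models with empirical-Bayes moderation), the sketch below flags significantly altered ASVs using per-ASV paired t-tests on log-transformed counts with Benjamini-Hochberg FDR control. The D0/D63 count matrices are simulated, with the first 5 ASVs truly enriched after the transition.

```python
import numpy as np
from scipy import stats

def bh_fdr(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: boolean mask of discoveries."""
    p = np.asarray(pvals, float)
    order = np.argsort(p)
    ranked = p[order]
    m = len(p)
    thresh = q * (np.arange(1, m + 1) / m)
    below = ranked <= thresh
    discoveries = np.zeros(m, bool)
    if below.any():
        cutoff = ranked[np.nonzero(below)[0].max()]
        discoveries = p <= cutoff
    return discoveries

rng = np.random.default_rng(1)
n_animals, n_asvs = 20, 50
d0 = rng.poisson(100, (n_animals, n_asvs)).astype(float)   # day 0 counts
d63 = d0.copy()
d63[:, :5] *= 3                      # 5 ASVs truly enriched at day 63
d63 += rng.normal(0, 5, d63.shape)   # measurement noise

# paired test per ASV on log-transformed counts, then FDR correction
pvals = [stats.ttest_rel(np.log1p(d0[:, j]), np.log1p(d63[:, j])).pvalue
         for j in range(n_asvs)]
sig = bh_fdr(pvals, q=0.05)
```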
DISCUSSION
Here, we studied the compositions of the fecal microbiomes in two genera of NHPs, comprising four species, in two different facilities where diet was controlled across the groups. We found that the microbiomes of the four NHP species we studied are distinct from each other by measures of β diversity despite controlled dietary and environmental conditions. The birth location of vervets did not significantly contribute to α or β diversity (Fig. 3), while housing facility among RMs did influence fecal microbiome composition and α diversity but not β diversity (Fig. 5). This difference in α diversity among the RMs depending on housing facility, despite similar base diets, may be due to differences in caretaking, water supply, or the types of treats supplied to animals between facilities.
Within a cohort of RMs moved from a semiprovisioned outdoor environment to an indoor research facility, β diversity of the gut microbiome significantly changed, while α diversity was only transiently altered (Fig. 6). Based on phylum abundances over the four time points studied, the compositions of the RM GI microbiomes appear, by day 63 after the transition, to be returning toward a state resembling that before the move to our research facilities. However, when examined down to the ASV level, this is not a process of restoration. The types of Firmicutes that make up the GI microbiome at D63 are significantly enriched for members of the families Erysipelotrichaceae, Lactobacillaceae, Lachnospiraceae, Ruminococcaceae, and Streptococcaceae compared to precaptivity abundance, along with the Bacteroidetes component being enriched for Prevotellaceae. While ivermectin, fenbendazole, and other anthelminthics have been shown to have effects on the GI microbiome (15)(16)(17), previously observed shifts in GI microbiome composition induced by anthelminthics do not align with the changes we observed. The enrichment of Lactobacillaceae, Ruminococcaceae, and Prevotellaceae, the lower abundance of Clostridiaceae, and the higher Bacteroidetes/Firmicutes ratio at D63 are more in line with the GI microbiome observed in humans following a vegetarian or vegan diet (18,19). Thus, the observed changes in the GI microbiome of the RMs that transitioned from the semiprovisioned outdoor environment to our research facilities may be more reflective of the dietary changes they underwent, as the monkey chow they receive in our research facilities is plant based.
While the same monkey chow is given to the animals in the semiprovisioned outdoor environment, those animals also have access to plant species not found in the monkey chow, in addition to insects and small animals found naturally in the environment, which RMs are known to consume as part of their diet in the wild. However, since all RMs in this cohort received ivermectin and fenbendazole, we cannot definitively conclude whether anthelminthic treatment or dietary change was the driving factor behind the shifts in GI microbiome composition. Alternatively, these changes could have been caused by other factors associated with captivity, which has been shown in other studies to have significant effects on the composition of the GI microbiome (20)(21)(22). Given all the changes that occur during the transition from the wild to captivity, it is unclear which specific factor(s) may be driving the observed changes. It is clear from our study, though, that this transition into captivity induces more significant changes in the GI microbiome than are induced by genetics within a single host species provided the same base diet.
Another limitation of the study was the inability to examine potential sex-based differences, owing to a lack of female NHPs in our cohort (females are usually kept for breeding): only 2 of the 97 sampled animals were female. Comparisons between age-matched individuals were also not always possible, a limitation likewise imposed by animal availability and by uncertainty around the ages of wild-caught animals. We also acknowledge the differences in sample sizes between NHP species, again imposed by animal availability. When the analysis is repeated after randomly selecting the same number of animals from each species, the significant differences in β diversity between NHP genera and species persist (results not shown). Additional work is certainly merited.
A diverse gut microbiome synthesizes vitamins, essential amino acids, and short-chain fatty acids (SCFAs), which contribute to the health and integrity of the intestinal epithelial barrier (23). Components of commensal taxa such as lipopolysaccharide (LPS) and peptidoglycan, as well as secreted factors such as SCFAs, can also directly influence local immunity by supporting the differentiation and maintenance of antigen-presenting cells, lymphocytes, and innate immune cells (23). In turn, the host can mediate changes in the GI microbiome through various secreted proteins, miRNAs, and microvesicles, enhancing or inhibiting the growth of particular bacteria (9)(10)(11). When the gut microbiome is dysbiotic, various disease states can result, with inflammation being a common observation. Associations have been found between gut microbiome composition and inflammatory bowel disease, Crohn's disease, type 2 diabetes, and obesity (23). Infectious diseases have also been associated with the gut microbiome: Clostridium difficile infections often develop after perturbations to the microbiome (24), and human immunodeficiency virus type 1 (HIV-1) infections have been associated with decreased intestinal abundances of Firmicutes and Bacteroides and increased abundances of Proteobacteria and Prevotella (25). In humans, confounding variables contribute to the dysbiosis observed in different disease states, variables that can be assessed or controlled for in the nonhuman primate model (26,27).
Many studies have found associations between host genetics and the shaping of the composition of the GI tract microbiome (5)(6)(7)(8). Our study demonstrates that host genetics contributes to the composition of the GI tract microbiome, although not to an overly large degree. The host's genetic contribution to GI tract microbiome composition is clear when comparing between genera under controlled dietary conditions, but these differences become less apparent when comparing within genera, and even less so when comparing microbiome compositions within species under controlled conditions.
In summary, we found that the gut microbiomes of four NHP species were significantly different from one another despite highly controlled dietary and environmental conditions. These findings could better inform the interpretation of microbiome data from NHP species, as viewing studies through this lens may allow for better understanding of what is a typical composition for an NHP on a species-specific basis, accounting for the contribution of host genetics to the final gut microbiome environment. These data highlight the utility of NHP studies where environmental variables can be more tightly controlled and provide a benchmark against which studies of outbred human populations can be measured.
MATERIALS AND METHODS
Animals. More than 1 mL of feces was collected from 7 sabaeus monkeys (Chlorocebus sabaeus), 15 vervets (Chlorocebus pygerythrus), 6 pig-tailed macaques (PTMs) (Macaca nemestrina), and 49 rhesus macaques (RMs) (Macaca mulatta) for single-time-point analysis as previously described (28). All NHPs were male, except for two vervets. Among the vervets, six were imported from outdoor environments in Tanzania and nine were captive born from six parental couples among these animals. Only the parents living at study initiation were sampled (29). Among the RMs, 21 were housed in facility 1 at the National Institutes of Health (NIH) in Bethesda, MD, USA, and 28 were housed at facility 2 at the NIH. Longitudinal stool samples were taken from 20 male RMs at the National Institute of Allergy and Infectious Diseases (NIAID) free-ranging Indian-origin rhesus breeding colony; these 20 RMs are not among the RMs included in the cross-genera/cross-species comparisons. These 20 RMs were given one dose of ivermectin (0.2 mg/kg of body weight by subcutaneous injection) following their initial exam, as well as fenbendazole (50 mg/kg by oral administration) once a day for 3 days after the initial exam. Samples were taken the day of the initial exam (D0) and then 7 days (D7), 18 days (D18), and 63 days (D63) post-initial exam. The other RMs in this study received similar treatment, although several years before study initiation. Stool was collected from the bottom of individual animals' cages, placed inside polypropylene tubes, then flash frozen on dry ice before being stored at −80°C.
The NIAID institutional animal care and use committee, as part of the NIH intramural research program, approved all experimental procedures pertaining to NHPs (protocol LVD 26E). The animals in this study were housed and cared for under the supervision of the Association for the Assessment and Accreditation of Laboratory Animal Care (AAALAC)-accredited Division of Veterinary Resources and as recommended by the Office of Animal Care and Use nonhuman primate management plan. Care at these facilities met the standards set forth by the Animal Welfare Act, animal welfare regulations, United States Fish and Wildlife Services regulations, as well as the 8th edition of the Guide for the Care and Use of Laboratory Animals (30). The physical conditions of the animals were monitored daily. The animals were provided continuous access to water and offered commercial monkey biscuits twice daily as well as fresh produce, eggs and bread products, and a foraging mix consisting of raisins, nuts and rice. Enrichment to stimulate foraging and play activity was provided in the form of food puzzles, toys, cage furniture, and mirrors. All animals had the same base food of monkey diet (LabDiet, St. Louis, MO, USA).
Microbiome analysis. Total DNA was extracted from stool and sequenced using the Illumina MiSeq platform with primers for the V4 region of the 16S rRNA gene (515F to 806R) as previously described (28). Single-time-point samples from the four species of NHPs were extracted and sequenced together to avoid batch effects, as were the longitudinal RM samples. Illumina FASTQ files were analyzed using a custom R script. Paired-end FASTQ reads were filtered and processed using the dada2 package (version 1.18.0) in R (version 4.1.0). Reads were trimmed to 225 bp (forward) and 200 bp (reverse) and filtered to exclude sequences with degenerate bases (N), more than 2 expected errors (maxEE), or chimerism. Before filtering, 12.02 million reads for single-time-point analysis were included in 83 samples, with an average of 144,800 reads per sample. After filtering and quality trimming, 7.1 million reads were included across all single-time-point samples, with an average of 85,500 reads per sample. Five samples with fewer than 1,000 reads were omitted from further analysis. Before filtering and quality trimming, 5.93 million reads for longitudinal analysis were included in 80 samples, with an average of 74,100 reads per sample. After filtering and quality trimming, 2.64 million reads were included across all longitudinal time point samples, with an average of 32,900 reads per sample. Six samples with fewer than 1,000 reads were omitted from further analysis. Reads were binned into amplicon sequence variants (ASVs), and taxonomies were annotated with the SILVA taxonomic framework (release 132) at a 99% identity threshold and then analyzed using PhyloSeq (version 1.36.0) in R. ASVs identified as non-Bacteria, Cyanobacteria, or mitochondria (Rickettsiales mitochondria) were removed from further consideration, as were resultant genera at less than 3% prevalence or phyla with no genus-level diversity. (Archaea were excluded under these criteria for rarity and inconsistency of sequences.)
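The filtering criteria above (fixed-length trimming, rejection of reads containing degenerate bases, and a maxEE expected-error cutoff) can be sketched as follows. This is an illustrative Python sketch, not the authors' R/dada2 pipeline; the read lengths and quality values are invented for the example.

```python
# Illustrative sketch of per-read quality filtering: trim to a fixed length,
# reject reads with degenerate bases (N), and reject reads whose summed
# expected errors exceed maxEE. Not the authors' dada2 pipeline.

def expected_errors(quals):
    """Sum of per-base error probabilities from Phred quality scores."""
    return sum(10 ** (-q / 10) for q in quals)

def filter_read(seq, quals, trim_len=225, max_ee=2.0):
    """Return the trimmed read if it passes the filters, else None."""
    if len(seq) < trim_len:
        return None                      # too short to trim to target length
    seq, quals = seq[:trim_len], quals[:trim_len]
    if "N" in seq:
        return None                      # degenerate base present
    if expected_errors(quals) > max_ee:
        return None                      # too many expected errors
    return seq

# A 240 bp read of uniform Q30 (error probability 0.001/base) easily passes:
read, quals = "ACGT" * 60, [30] * 240
print(filter_read(read, quals) is not None)  # True
```

The expected-error criterion is stricter than a mean-quality cutoff because it accumulates the actual error probability across every base of the trimmed read.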
In our study, genus was used as the operational definition of an operational taxonomic unit (OTU). Identified ASVs were grouped by genus and summed to yield OTU counts using dplyr (version 1.0.10).
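The genus-level grouping described here (summing ASV counts within each annotated genus, done in the paper with dplyr in R) can be illustrated with a minimal Python sketch; the ASV identifiers, counts, and genus labels below are invented.

```python
from collections import defaultdict

def asv_to_otu(asv_counts, asv_genus):
    """Sum ASV counts within each annotated genus to give genus-level OTU
    counts. asv_counts maps ASV id -> count; asv_genus maps ASV id -> genus."""
    otu = defaultdict(int)
    for asv, count in asv_counts.items():
        otu[asv_genus[asv]] += count
    return dict(otu)

counts = {"ASV1": 120, "ASV2": 30, "ASV3": 55}
genus = {"ASV1": "Prevotella", "ASV2": "Prevotella", "ASV3": "Lactobacillus"}
print(asv_to_otu(counts, genus))  # {'Prevotella': 150, 'Lactobacillus': 55}
```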
Statistical analysis. All statistical analyses were run at the OTU level unless otherwise noted. Weighted UniFrac and Shannon diversity analyses were performed using the PhyloSeq package (version 1.36.0) in R. Adonis analysis was performed on weighted UniFrac values using the vegan package (version 2.5-7) in R. A separate analysis was run for each subset of the data to generate weighted UniFrac values and PCoA plots for the comparisons between NHP genera, between NHP species within the two genera we examined, between vervets by birth status, between RMs by facility, and between all time points for the longitudinal samples. Unpaired, two-tailed t tests comparing Shannon diversity values were performed in R between NHP genera, between NHP species within the two genera analyzed, between vervets by birth status, and between RMs by facility. OTU counts were uploaded to the Huttenhower lab Galaxy server for LEfSe (linear discriminant analysis [LDA] effect size) analysis and then used to construct LEfSe cladograms and bar graphs (31,32). Logarithmic LDA scores were set to a threshold of 2.0, with α set to 0.05 for the factorial Kruskal-Wallis test among classes and the pairwise Wilcoxon test between subclasses. OTU counts were exported from R, averaged within groups, and then used to construct Krona plots (version 1.3) (33). The voom function within the R package limma (version 3.48.3) was used to determine significantly altered ASVs between time points for the longitudinal samples (34).
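As one concrete piece of the alpha-diversity analysis above, the Shannon index H' = −Σ p_i ln p_i can be computed directly from OTU counts. The study performed this with PhyloSeq in R; the minimal Python sketch below uses invented counts purely for illustration.

```python
import math

def shannon(counts):
    """Shannon diversity index H' = -sum(p_i * ln p_i) over nonzero OTUs."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

even = [25, 25, 25, 25]          # maximally even community of 4 OTUs
skewed = [97, 1, 1, 1]           # community dominated by a single OTU
print(round(shannon(even), 4))   # ln(4) = 1.3863, the maximum for 4 OTUs
print(round(shannon(skewed), 4)) # 0.1677, much lower diversity
```

Higher values reflect both richness (more OTUs) and evenness (more uniform proportions), which is why a dominance shift can lower H' even when richness is unchanged.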
Data availability. The data sets generated and analyzed during the current study, including FASTQ files and metadata, are available in the NCBI Sequence Read Archive under accession no. PRJNA772263.
Development and Validation of IBSA Photographic Scale for the Assessment of Neck Laxity
Objective To describe the development and validation of the 5-grade photographic IBSA Neck Laxity Scale. Methods The scale was developed from 2 real images, which led to the creation of 5 morphed images representing different degrees of severity of laxity of the neck. For validation, a set of 50 images (25 real and 25 morphed) was created and sent for evaluation to 6 trained raters (physicians) in 2 rounds, 1 month apart. Raters had to assess each image according to the 5-image scale. Inter-rater and intra-rater reliability in both rounds was evaluated. Results As to intra-rater reliability, single-rater kappa scores between 0.69 and 0.87 and a global kappa score of 0.78 were observed. Inter-rater agreement was measured by means of the intra-class correlation coefficient, and scores higher than 0.85 were reported, indicating excellent reliability. Conclusion The IBSA Neck Laxity Scale is a validated and reliable scale.
Introduction
Being a particularly exposed area with early and evident extrinsic and intrinsic age-related textural changes, the anatomical region of the neck is the target of many aesthetic rejuvenation procedures. As the years go by, wrinkled and sagging skin, platysmal bands and Venus rings, and/or blunting of the cervicomental angle begin to appear. 1 The current emerging need is for a younger overall appearance, a fresh look that includes all the most visible body areas, the neck among them, and this growing demand requires physicians to be prepared to recognize different clinical scenarios and to suggest and perform the most appropriate treatment.
A youthful neck is characterized by an acute cervicomental angle, a firm and well-defined jawline, smooth skin with minimal melanin or vascular lesions, no horizontal or vertical neck lines, absence of any platysmal band, no visible submandibular gland, and no hypertrophic masseter muscles. 2 The neck region is characterized by the passage of vascular and nervous structures crucial for the entire craniofacial district and by a sophisticated musculoaponeurotic network. From an anatomical point of view, different layers can be recognized; from the most superficial to the deepest, it is possible to identify the skin, the subcutaneous tissue, the superficial musculofascial plane, and finally the sub-platysmal structures (Figure 1). 2 The ageing process involves all these structures through different mechanisms.
The cutaneous layer of the neck consists of a relatively thin epidermis and dermis, which have to bear several tensile and compressive stresses; moreover, the frequent anterior-posterior and lateral movements of the neck, made possible by contraction of the underlying platysma, can be the cause of the so-called "necklace lines". These horizontal neck wrinkles are linear depressions on the anterior side of the neck. Whereas facial wrinkles are certainly caused by skin aging, horizontal neck wrinkles are not rarely seen in children and young adults: repeated bending of the neck to look at cell phones, tablets, or books can lead to wrinkle development even in younger subjects. However, age-related skin laxity can make these wrinkles more evident. 3 Moreover, the skin of the neck often shows significant extrinsic photoaging, characterized by increased epidermal thickness, degeneration of collagen and elastin, and deposits of loops of elastotic collagen in the deep dermis. 2,3 Subcutaneous or adipose tissue is located deep to the epidermal-dermal plane, and its amount varies greatly among individuals.
The cervical platysmal layer consists of wide, strap-shaped skeletal muscles which span from the dermal attachments along the mandibular border to the clavicle. The superficial layer of the deep cervical fascia invests the cervical platysma and stretches upwards, where it is known as the superficial musculoaponeurotic system (SMAS). With age, the retaining ligaments that keep the free medial edges anatomically close to the deep cervical fascia become weaker, and the medial edges descend, leading to platysmal bands. 4 With the muscle flaccidity and atrophy typical of aging, platysmal bands can worsen cervical laxity and make it more evident, resulting in a sagging, adynamic, and obtuse neck. 2 As the subcutaneous fullness of the mandible recedes, the fat of the jowl, previously concealed by the surrounding soft tissues, is revealed. Ptosis of unsupported skin, coupled with the downward pull of the platysma muscle, leads to the development of the characteristic "turkey neck" deformity. In addition, contraction of the platysma muscle, caused in part by the need to support the deeper structures of the neck and floor of the mouth, gives rise to vertical fibrous bands on the neck, whereas laxity in the overlying skin can create horizontal rhytides. As aging progresses, the hyoid bone and larynx gradually descend, resulting in loss or blunting of the cervicomental angle. 5 Finally, the deep plane includes the sub-platysmal fat, the anterior bellies of the digastric muscles, and the submandibular glands, which can become ptotic or hypertrophic with ageing, leading to a visible bulge that disrupts the smooth, planar surface of a youthful-appearing neck. 4
Neck rejuvenation treatments range from minimally or non-invasive methods to invasive surgical techniques: poor texture, fine lines, dyschromia, and photoaging can be improved by fractional non-ablative and ablative lasers, as well as ablative fractional radiofrequency devices; several injectables, such as botulinum toxin, can help improve the appearance of the neck and jawline and soften the appearance of neck banding secondary to muscle action; hyaluronic acid (HA) fillers are widely known as a good option to treat horizontal neck wrinkles and can be used to sculpt the jawline and create improved contours and balance; excessive preplatysmal fat can be addressed by suction-assisted lipoplasty when skin elasticity is fairly preserved, otherwise a neck lift is a better option. 6,7 Obviously, the most invasive treatments pose a series of safety concerns, as the anatomical region of the neck is both a pivotal passageway and the location of important vascular, nervous, and glandular structures; moreover, as the skin of this region is particularly thin, fibrotic and scarring outcomes may complicate invasive treatments.
The choice of one type of intervention over another depends on a series of variables closely linked to the patient's characteristics. In cases of severe tissue deterioration and laxity, surgery, despite its possible complications, is usually the best option for a good aesthetic result. On the other hand, there are a number of intermediate stages in which filler treatment could be useful and effective. Currently, the discrimination of individual cases is a matter of the physician's personal assessment, experience, and aesthetic judgement. Hence, it is not always easy for clinicians to determine the best intervention to implement.
In order to better identify individuals who could benefit more from one treatment than another, and to support physicians in their therapeutic choices, IBSA has designed a photographic scale, validated by physicians for their peers, which consists of 5 grades that differ according to the laxity of neck tissues. The objective is to help physicians overcome some clinical dilemmas by providing them with an objective, clear, and easy-to-use tool that allows a quick initial assessment of the patient's situation. According to Italian law, this scale validation was exempt from ethics committee approval because no human beings were involved. The scale development and validation were conducted according to the ethical principles of the Declaration of Helsinki. The information and data were generated, recorded, documented, and processed in accordance with a specific procedure based on the International Council for Harmonization of Technical Requirements for Pharmaceuticals for Human Use (ICH) Good Clinical Practice (GCP). All validators signed an informed consent form at the time of enrolment in order to allow collection of the clinical data. The process that led to the development and validation of this innovative photographic tool is the topic of this paper.
Scale Development
The scale was developed and validated following a method already used for photonumeric scales adopted for other parts of the body; this previously tested methodology was then customized for the purposes of the scale described here (Figure 2). [8][9][10][11] A team of three experts (ie a plastic surgeon, the scientific lead for dermoaesthetics at IBSA, and the head of research and development at QuantifiCare) selected two photographs from the database of a clinician with the aim of identifying two images, one representative of a neck with a healthy aspect and one of the most severe degree of laxity still considered eligible for filler treatment. This last photograph was selected to represent a borderline stage, beyond which the use of filler would not be recommendable, as surgery is currently the only option available.
A total of 50 photos of necks of patients between 40 and 79 years of age was considered for this selection. Once this first selection was made, QuantifiCare, in close collaboration with the 3 experts involved, created a photographic scale with a morphing program. This scale is composed of 5 images that correspond to different degrees of severity of neck laxity (Figure 3). The images show the frontal aspect of the neck, from the lower edge of the mandible to the upper edge of the clavicle, with the chin in the rest position. Taking into account the tone and degree of laxity of the tissues, the 5 grades are identified as follows: Grade 1: normal trophism of the tissues of the neck; Grade 5: severe laxity of the tissues of the neck. Subsequently, a set of 50 photographs (25 real and 25 morphed) was created for the validation of the scale by 6 selected raters. Before this set of images was presented to the 6 validators and before their training on the software, the 5 morphed images were presented to the raters in a ranking exercise to carry out a further and final assessment of the grades and their corresponding descriptors: the 5 selected grades plus a set of other, non-selected morphed images were shown, and the validators, given only the descriptors, had to match each descriptor with the image that best represented the condition described. They performed this ranking exercise twice, with a break of a week between the two sessions. The validators confirmed the scale that had previously been developed.
Software Characteristics
The software was developed and validated following the GAMP5 methodology, compliant with 21 CFR Part 11; it used pseudonymized photographic data sorted in a folder architecture to generate first a Microsoft Access database file and finally a comma-separated values (CSV) file. Photographic data were acquired using a Phase One XF IQ150 camera.
As to software, Windows 10 and DirectX 9 or later were required; as to hardware, a minimum of 2 GB of free space on the hard drive and 4 GB of RAM were required, along with a screen resolution of at least 1024 × 768.
The software was divided into pages, each containing 1 photo for evaluation. On each page, the user could give a score before submitting it to move on to the next page.
As to user requirements, a Webex training session was scheduled. During this initial phase, experts were presented with a demo of the tool to ensure proper evaluation of the system and that no questions would arise during the rating. Validators could restart and go through the training module at any time.
All users were given a unique login for authentication. Five score options were available (Grade 1, Grade 2, Grade 3, Grade 4, and Grade 5), and experts could select only one. The application displayed one photo per patient in the center of the window; the image size updated as the window was resized, and each photo could be enlarged to full-screen visualization by clicking on it. Users were allowed to stop the evaluation at any time and resume it later. The application did not allow users to proceed to the next subject without having scored the current one; however, it was possible to re-evaluate previous patients and change any of the scores already given.
Assigned scores were saved during navigation among patients, and there was no time limit. Scores were provided to IBSA in a CSV file, along with a report containing the results for each subject and the names of the evaluators.
Scale Validation
The 50 photographs were sent to the 6 raters, who had 30 days to complete each of two evaluation rounds, one month apart. In the two assessments the images were the same, but presented in a different order.
Finally, data were collected by QuantifiCare and subjected to statistical analysis by IBSA to assess responses of the same rater on the same photo (intra-rater evaluation) and scores of the same photo among the different raters (inter-rater evaluation) (Table 1). For scale validation, intra-rater reliability between the first and the second evaluation performed by the same expert was calculated using weighted kappa scores with Fleiss-Cohen weights (the evaluation is a rating, while the categories are ordinal).
Intra-Rater Reliability
This analysis was performed both for each expert alone and for all the available first and second evaluations considered together.
Kappa scores between 0.69 and 0.87 were observed for the individual expert analyses, with a global kappa score of 0.78 when all 6 expert evaluations were analyzed together.
These results indicate substantial/almost-perfect agreement between the first and the second evaluation performed on the same image (kappa scores range between 0 and 1; 0.61-0.80 indicate substantial agreement; 0.81-1.00 indicate almost-perfect agreement).
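The Fleiss-Cohen weighted kappa used for these intra-rater comparisons penalizes disagreements by the squared distance between ordinal grades, so being one grade off costs far less than being four grades off. A minimal Python sketch with invented ratings (not the study's actual scoring data):

```python
def weighted_kappa(round1, round2, k=5):
    """Fleiss-Cohen (quadratic) weighted kappa for two rating passes over the
    same images, with ordinal categories 1..k."""
    n = len(round1)
    obs = [[0.0] * k for _ in range(k)]                # observed proportions
    for a, b in zip(round1, round2):
        obs[a - 1][b - 1] += 1 / n
    p1 = [sum(row) for row in obs]                     # marginals, pass 1
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]  # pass 2
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            w = (i - j) ** 2 / (k - 1) ** 2            # quadratic disagreement weight
            num += w * obs[i][j]                       # observed weighted disagreement
            den += w * p1[i] * p2[j]                   # chance-expected disagreement
    return 1 - num / den

# Identical grades on both passes give perfect agreement:
print(weighted_kappa([1, 2, 3, 4, 5], [1, 2, 3, 4, 5]))  # 1.0
```

A single one-grade slip (eg Grade 5 re-rated as Grade 4) would still yield a kappa well above zero, consistent with the substantial-agreement band described above.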
Inter-Rater Reliability (Table 2)
Inter-rater agreement was measured by calculating the intra-class correlation coefficient (ICC(2,1), to be used when all images are rated by the same raters, who are assumed to be a random subset of all possible raters) as described by Shrout and Fleiss.
The analysis was performed considering the first and the second evaluation separately, and analyzing the two evaluations together.
These results indicate good/excellent reliability (intraclass correlation coefficient ranges between 0 and 1; values less than 0.5 are indicative of poor reliability, values between 0.5 and 0.75 indicate moderate reliability, values between 0.75 and 0.9 indicate good reliability, and values greater than 0.90 indicate excellent reliability).
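For reference, ICC(2,1) is computed from the two-way ANOVA mean squares as (MSR − MSE) / (MSR + (k − 1)MSE + k(MSC − MSE)/n). The sketch below applies this to the classic worked dataset from Shrout and Fleiss (1979), not to ratings from this validation study:

```python
def icc_2_1(scores):
    """ICC(2,1): two-way random effects, single rater (Shrout & Fleiss 1979).
    scores is a list of n targets, each a list of k ratings."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between targets
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Six targets rated by four judges (Shrout & Fleiss worked example):
ratings = [[9, 2, 5, 8], [6, 1, 3, 2], [8, 4, 6, 8],
           [7, 1, 2, 6], [10, 5, 6, 9], [6, 2, 4, 7]]
print(round(icc_2_1(ratings), 2))  # 0.29, as in the published example
```

Unlike ICC(3,1), this form charges systematic differences between raters against agreement, which suits a scale intended for use by many interchangeable physicians.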
Discussion
The anatomical region of the neck is particularly exposed to environmental insults and is undoubtedly one of the first to show signs of premature ageing, which makes rejuvenation of this area an increasingly desirable goal for a growing number of patients. As this is a particularly delicate body area from an anatomical and structural point of view, the evaluation of aesthetic defects in the skin and underlying tissues is a crucial moment for discriminating between an invasive treatment, with all its possible complications, and a less invasive intervention than surgery. Hence the need to provide the aesthetic physician with a reliable tool for assessment among the different cases. This 5-image scale was developed using digital techniques starting from real photos; it is a simple and immediate aid that makes it possible to quickly recognize different clinical circumstances. The scale has been validated by doctors who based their assessment on their clinical experience, and the positive results of the validation phase highlight its reliability.
The tool described here identifies a deficit in the volume and shape of the neck region due to aging, and each grade is represented by an iconographic and a verbal reference, ie the image and the descriptor, respectively. These two indicators are influenced by individual variables, which may represent a limitation to the use of this scale. However, an iconographic and verbal classification such as the one presented here never perfectly mirrors, but rather tries to best describe, a possible real situation; the rest is entirely up to the physician's clinical instinct and experience. This is precisely why the only way to truly evaluate the advantages of this scale is to see how it performs in a real-life scenario.
Conclusions
Overall, the inter-rater and intra-rater data indicate that the scale created is consistent and reliable.
Funding
This project has been sponsored by IBSA Farmaceutici Italia.
Disclosure
AL is an employee of IBSA SA Switzerland. ML & ND are employees of Quantificare SA France. GB is an employee of IBSA Farmaceutici Italia. The authors report no other conflicts of interest in this work.
Evaluation of softening ability of Xylene & Endosolv-R on three different epoxy resin based sealers within 1 to 2 minutes - an in vitro study
Objectives This study evaluated the efficacy of Endosolv-R and Xylene in softening epoxy resin based sealers after 1 to 2 min of exposure. Materials and Methods Sixty Teflon molds (6 mm × 1.5 mm, inner diameter × depth) were equally divided into 3 groups of 20 each. AH 26 (Dentsply/De Trey), AH Plus (Dentsply/De Trey), and Adseal (Meta-Biomed) were manipulated and placed in the molds allotted to each group and allowed to set at 37℃ in 100% humidity for 2 wk. Each group was further divided into 2 subgroups according to the solvent used, i.e. Xylene (Lobachemie) or Endosolv-R (Septodont). Specimens in each subgroup were exposed to the respective solvents for 1 and 2 min, and the corresponding Vickers microhardness (HV) was assessed. Data were analysed by Mauchly's test, two-way analysis of variance (ANOVA) with repeated measures, and one-way ANOVA. Results Initial hardness was significantly different among the three sealers, with AH Plus having the greatest and Adseal the least. After 2 min, Xylene softened AH Plus and Adseal to 11% and 25% of their initial microhardness, respectively (p < 0.001), whereas AH 26 was least affected, maintaining 89.4% of its initial microhardness. After 2 min, Endosolv-R softened AH 26, AH Plus, and Adseal to 12.7%, 5.6%, and 8.1% of their initial microhardness, respectively (p < 0.001). Conclusions Endosolv-R was a significantly more effective short-term softener for all the tested sealers after 2 min, whereas Xylene was an effective short-term softener against AH Plus and Adseal but less effective against AH 26.
Introduction
Teeth with pulpal and periradicular involvement are commonly treated with root canal treatment. Although the success rate of endodontic treatment is as high as 86 to 93%, failure can still be expected. 1 The main causes of endodontic failure are insufficient cleaning, inadequate obturation, untreated or missed root canals, lack of an efficient hermetic seal, and survival of bacteria. 2 These make nonsurgical endodontic retreatment necessary. For effective results, retreatment requires thorough debridement of the former root canal filling materials, including sealers. 3 Debridement is especially difficult with resin based sealers, which strongly adhere to the root canal dentin. 4,5 The bulk of the root canal filling material can be easily removed with hand and rotary instruments, leaving a small amount of residue attached to the root canal dentin. Recently, Duncan and Chong have suggested the use of solvents to remove this root canal residue. 5 The aim of this study was to evaluate the softening ability of two solvents, Xylene and Endosolv-R, on three epoxy resin based endodontic sealers, which would facilitate their effective mechanical removal.
Materials and Methods
Three epoxy resin based root canal sealers, AH 26 (Group I, Dentsply/De Trey, Konstanz, Germany), AH Plus (Group II, Dentsply/De Trey), and Adseal (Group III, Meta-Biomed, Cheongwon, Korea), were tested. The composition of each sealer is described in Table 1. A well of 6 × 1.5 mm (diameter × depth) was prepared in each of sixty Teflon disks of 12 × 2 mm (diameter × height). These sixty molds were randomly and equally divided into three groups, each containing 20 Teflon molds (n = 20). Each sealer was then mixed according to the manufacturer's instructions and placed into the well of the mold. The sealer specimens were allowed to set for 2 weeks at 37℃ and 100% humidity. The design of the sealer specimen was such that only one surface of the specimen was exposed to solvent. The twenty set sealer specimens in each group were further randomly and equally divided into two subgroups (n = 10) based on the solvent to which they were exposed: Xylene (Subgroup A, Lobachemie, Mumbai, India) or Endosolv-R (Subgroup B, Septodont, Cedex, France).
The initial Vicker's microhardness (HV) of each fully set sealer specimen was measured using a Mitutoyo microhardness testing machine (Instrument No. 810-117E, Mitutoyo, New Delhi, India) with a Vicker's microhardness indenter. The indenter was applied at three predetermined points on the specimen surface with a load of 10 grams for 10 seconds. The indentations in the sample surface were measured under 100× magnification with the microscope attached to the same machine. The mean of the three readings was taken for each sample.
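The Vickers number itself is computed from the test load and the mean indentation diagonal via the standard relation HV = 1.8544·F/d² (F in kgf, d in mm). A minimal sketch of this calculation, using hypothetical diagonal readings since the raw indentation data are not reported here:

```python
# Vickers microhardness from indentation measurements -- standard formula,
# with hypothetical diagonal readings (the actual measurements are not
# reported in the text).

def vickers_hv(load_gf, diagonal_um):
    """HV = 1.8544 * F / d^2, with F in kgf and d (mean diagonal) in mm."""
    load_kgf = load_gf / 1000.0        # the 10 gf test load, converted to kgf
    d_mm = diagonal_um / 1000.0        # mean indentation diagonal in mm
    return 1.8544 * load_kgf / (d_mm ** 2)

def mean_of_three(readings):
    """The study averages three indentations per specimen."""
    return sum(readings) / len(readings)

# Three hypothetical indentation diagonals (micrometres) on one specimen:
diagonals = [16.0, 16.4, 15.8]
hv_values = [vickers_hv(10.0, d) for d in diagonals]
print(round(mean_of_three(hv_values), 1))
```

With these illustrative diagonals, the specimen's mean hardness comes out near HV 72, i.e. in the range reported for hard epoxy sealers before solvent exposure.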
Each sealer specimen was then immersed in a petri dish containing the corresponding solvent (i.e. Xylene or Endosolv-R) for 1 minute. Each specimen was then retrieved from the solvent and air dried. At this point, the microhardness of each sealer specimen was reassessed by the same procedure as described above. Each sealer specimen was then exposed to the corresponding solvent for an additional 1 minute (i.e., 2 minutes in total), and the same microhardness procedure was followed. In this way, ten sealer specimens from each group were tested for reduction in Vicker's microhardness (HV) after 1 and 2 minutes of exposure to solvent. Vicker's microhardness (HV) for each sealer at the initial stage and after 1 and 2 minutes of exposure to solvents was summarized (Tables 2 and 3) in terms of means and standard deviations. The reduction in the microhardness of each sealer at one and two minutes was expressed as a percentage of the respective initial hardness. To determine the effect of solvents and sealer types on the microhardness over time, two-way analysis of variance (ANOVA) with repeated measures was performed. The assumption of sphericity was assessed using Mauchly's test. Additionally, to determine the significance of the difference in mean hardness across groups at each time point, one-way ANOVA was used followed by Tukey's post-hoc pairwise comparison. The analysis was performed using SPSS 16.0 (SPSS Inc., Chicago, IL, USA) software.
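The group comparison at a single time point can be illustrated with a plain one-way ANOVA F statistic. The sketch below uses hypothetical hardness values and a pure-Python implementation rather than SPSS; the sample size (n = 5 here, versus n = 10 per subgroup in the study) and all numbers are illustrative only:

```python
# Pure-Python one-way ANOVA F statistic for hardness values of the three
# sealer groups at a single time point. The data are hypothetical; the
# study used SPSS with Tukey's post-hoc test on the real measurements.

def one_way_anova_f(groups):
    """Return (F, df_between, df_within) for a list of sample lists."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(x for g in groups for x in g) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Hypothetical HV readings after 1 minute in solvent (n = 5 per group here):
ah26   = [88.1, 87.5, 89.0, 88.4, 87.9]
ahplus = [23.0, 24.1, 22.5, 23.6, 23.2]
adseal = [35.2, 34.8, 36.0, 35.5, 34.9]
f_stat, df_b, df_w = one_way_anova_f([ah26, ahplus, adseal])
print(f"F({df_b},{df_w}) = {f_stat:.1f}")   # a large F -> group means differ
```

A large F against the F(df_between, df_within) distribution corresponds to the small p-values reported in the Results; pairwise differences would then be located with a post-hoc test such as Tukey's.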
Results
The descriptive statistics in terms of mean and standard deviation were obtained for each sealer type over time, for each solvent Xylene and Endosolv-R (Tables 2 and 3). It was evident that the mean Vicker's hardness reduced significantly with time for all sealer and solvent combinations. However, the extent of reduction differed among them. To assess the statistical relevance of the change in hardness due to sealer type and solvent exposure, two-way ANOVA with repeated measures was performed independently for the two solvents.
Solvent-Xylene (Subgroup A)
Mauchly's sphericity test for each of the three effects in the model, i.e. the two main effects (group and time) and the interaction effect (group × time), revealed that the assumption of sphericity was met by group (p = 0.32), and hence no correction of F-ratios was required. However, time and the interaction violated the assumption (p < 0.05). The estimates of sphericity (ε) for these effects were 0.543 and 0.385, respectively, and hence the Greenhouse-Geisser correction was applied to the degrees of freedom of the F-statistics. The ANOVA revealed a highly significant main effect of group (F(2,18) = 591.48, p < 0.0001) when exposed to Xylene. The main effect of time and the interaction effect (group × time) were also significant, indicating that the hardness (HV) attained at different time points depends on the type of sealer material when exposed to Xylene (Table 2). The exposure time to Xylene had a noticeable effect on AH Plus followed by Adseal (Figure 1).
Solvent-Endosolv-R (Subgroup B)
Mauchly's sphericity test revealed that the two main effects, i.e. group and time, as well as the interaction effect (group × time) met the assumption of sphericity, with p values of 0.729, 0.389 and 0.174 (p > 0.05), respectively. Hence, no correction of F-ratios was needed. The ANOVA revealed that the main effect of group was highly significant (F(2,18) = 114.44, p < 0.0001). The main effect of time was also highly significant (F(2,18) = 11,208, p < 0.0001), indicating that the marginal mean of hardness (HV) differed across the three time points. The interaction effect (group × time) was also significant (F(4,36) = 51.22, p < 0.0001), suggesting that the hardness (HV) depends on both the sealer type exposed to Endosolv-R and time (Table 3, Figure 2). The mean Vicker's microhardness (HV) at the different time points for AH 26 was higher than that of Adseal. However, AH Plus contributed mainly to the interaction effect. The mean hardness (HV) for this sealer when exposed to Endosolv-R was higher than that of the other two sealers. However, after two minutes of exposure, the mean hardness (HV) of AH Plus dropped remarkably to a mean of 9.23 ± 0.44 (a 94.4% reduction) and was very close to that of Adseal (9.88 ± 1.20). In short, the effect of Endosolv-R on AH Plus was noticeable after two minutes.
The above analyses revealed that AH Plus exposed to either of the solvents had the maximum reduction in mean hardness (HV) compared with the other two sealers; in particular, the effect was pronounced for Endosolv-R on AH Plus. Additionally, one-way ANOVA in subgroup A showed that the mean initial hardness (HV) differed significantly across the three groups; the same was observed after 1 and 2 minutes. In subgroup B, the mean initial hardness (HV) also differed significantly among the three groups, and the finding was consistent after 1 minute. However, after 2 minutes, the mean hardness (HV) of group I differed significantly from groups II and III, while the means of groups II and III showed no significant difference.
Discussion
For endodontic retreatment to be successful, it is necessary to completely remove all previous obturation material. 3,6 An ideal root canal sealer should be easily removable if retreatment becomes necessary, to allow access for antimicrobial agents and medicaments to all root canal ramifications. 7 Shin et al. advocated the use of Gates Glidden and Profile systems for retrieval of resin based root canal sealers. 8 Recently, Duncan and Chong suggested many methods for removal of root filling materials, including the use of hand files, rotary files, ultrasound, heated pluggers and solvents. 5 Hand and rotary instruments are commonly used for effective removal of root canal fillings. 2,3 The bulk of these filling materials can be removed within 2 to 3 minutes, but remnants of gutta percha and sealer commonly remain attached to the root canal dentin. 6,9 With better sealing and bonding, resin based root canal sealers are of interest to many clinicians. 4,10,13 The complete debridement of the remnants of resin based sealers, which attach strongly to the dentin, is a prolonged, tedious task. To remove these fillings and sealers from the fins and aberrations of the root canal system, the literature has suggested that a 'wicking action' is necessary, which can be provided by solvents. 5,11,12 Therefore, it is helpful to use solvents along with hand and rotary files to remove root canal debris.
Three epoxy resin based sealers, AH Plus, AH 26, and Adseal, were used because they are mechanically harder and more difficult to remove than zinc oxide eugenol based ones. 5,10,13 Lee et al. have mentioned that resin based sealers attach more strongly to both dentine and gutta percha than zinc oxide eugenol and calcium hydroxide based ones. 10 Mamootil et al. have stated that resin based sealers have deeper and more consistent penetration into dentinal tubules than other sealers, both in vitro and in vivo. 15 Furthermore, microleakage has been found to be least in the case of resin based sealers. 16 Cho et al. have also mentioned in their study that the bond strength of the final restoration was least affected by resin based sealers compared with the zinc oxide eugenol root canal sealer. 17 Kim et al. have stated that resin based sealers (AH 26, EZ fill and AD Seal) are more biocompatible and have an advantage in terms of radiopacity. 18 In the paint industry, solvents are often used to soften resin coating materials to allow their easy removal. 19 Thus, solvents used in paint removal can be considered for root canal retreatment to remove strongly adhering resin based sealers from the root canal walls. 19 Chloroform and Xylene have been studied as solvents for root canal sealers, but the U.S. Food and Drug Administration has banned chloroform due to its potential carcinogenicity and cytotoxicity. 5,[19][20][21] The use of D-Limonene (refined orange oil) in endodontics is becoming popular due to its confirmed biocompatibility, safety, and noncarcinogenic properties, but Martos et al. and Mushtaq et al. have mentioned that the performance of orange oil as a solvent was inferior to xylene and chloroform. [22][23][24] Because of concerns about Chloroform, clinicians and researchers have developed a renewed interest in finding an alternative solvent. Xylene is an aromatic hydrocarbon commonly considered as a gutta percha solvent. 25 It may also soften or dissolve sealers and could potentially facilitate their mechanical removal. 19 The use of Endosolv-R for removal of resin based sealers has been suggested by Cohen, Duncan and Chong. 5,11 It contains 66.5 grams of formamide and 33.5 grams of phenyl ethylic alcohol. 26 In this study, only one surface of the sealer was exposed to the solvent to simulate root canal conditions. Softening was defined as the reduction in hardness that resulted from exposure to the solvent. 19 After 1 minute, Xylene was most effective against AH Plus (86.1%), followed by Adseal (65.1%), and least effective against AH 26 (8.5%). After 1 minute, Endosolv-R was most effective against Adseal (69.2%), followed by AH 26 (62.5%) and AH Plus (61.4%) (p < 0.001). After 2 minutes, Xylene was most effective against AH Plus (89.1%), followed by Adseal (75.1%), but least effective against AH 26 (10.6%). These results are in partial agreement with the study of Kfir et al. 19 After 2 minutes, Endosolv-R was most effective against AH Plus (94.4%), followed by Adseal (91.9%) and AH 26 (87.3%) (p < 0.001). In other words, after 2 minutes of exposure, Endosolv-R was found to be a more effective short term softener for all three sealers, and its effect was most pronounced for AH Plus.
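The percentage reductions quoted above follow directly from the definition in the Materials and Methods: reduction(%) = (HV_initial − HV_after)/HV_initial × 100. A minimal sketch with hypothetical hardness values:

```python
# Percentage softening relative to initial hardness, as used in the study:
# reduction(%) = (HV_initial - HV_after) / HV_initial * 100.
# The values below are hypothetical illustrations, not the study's raw data.

def percent_reduction(hv_initial, hv_after):
    return (hv_initial - hv_after) / hv_initial * 100.0

# e.g. a specimen dropping from HV 72.0 to HV 10.0 after solvent exposure:
print(round(percent_reduction(72.0, 10.0), 1))  # 86.1
```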
Setting of epoxy resin sealers involves polymerization and cross linking of their monomers, resulting in a 3D lattice. 19 This set polymer is unaffected by saline or water. Hydrophobic organic solvents such as Xylene and Endosolv-R may have the ability to penetrate this 3D lattice, causing swelling of the lattice and a reduction in strength and hardness. 19 The resulting softening facilitates removal by the scrubbing effect provided by files. 19 Ramzi et al. have stated that Endosolv-R combined with rotary files most effectively removed filling materials from the root canals, especially in the apical third. 6 Vranas et al. have reported that Endosolv-R has a significant softening effect on resorcinol-formalin pastes after 2 minutes. 4,25 Gambrel et al. concluded in their probe penetration study that the softening effect of Endosolv-R after 20 minutes was superior to the other tested solvents. 4,26 Shokubinejad et al. mentioned that Endosolv-R does not affect the bond strength of newer obturation materials to root canal dentin, whereas Laxmi Narayan et al. showed that Xylene causes a significant reduction in enamel and dentin microhardness and thus may reduce the bond strength of newer endodontic sealers. 27,28 Also, according to Occupational Safety and Health Administration (OSHA) guidelines, Xylene causes irritation of the eyes and mucous membranes, gastrointestinal distress and toxic hepatitis when ingested, chemical pneumonitis and hemorrhages in air spaces when inhaled, and cytotoxic reactions when extruded periapically. 5,20,29 However, Chutich et al. have suggested that the amount of Xylene extruded periapically is too small to cause toxicity. 30 Even less information is available regarding the biocompatibility of Endosolv-R, and it has been suggested to have fetotoxic properties. 31 The results of this study may therefore vary under in vivo conditions, depending on the setting characteristics of the sealer in the root canal system and the availability of the solvent to the sealer in curved and ramified canals.
Conclusions
Within the limitations of this study, it can be concluded that after 2 minutes, Endosolv-R was a significantly more effective short-term softener than Xylene for all the tested sealers; thus, Endosolv-R can be viewed as a better substitute for Chloroform for softening and removing epoxy resin based sealers.
Statistical-QoS Guaranteed Energy Efficiency Optimization for Energy Harvesting Wireless Sensor Networks
Energy harvesting, which offers a never-ending energy supply, has emerged as a prominent technology to prolong the lifetime and reduce costs for the battery-powered wireless sensor networks. However, how to improve the energy efficiency while guaranteeing the quality of service (QoS) for energy harvesting based wireless sensor networks is still an open problem. In this paper, we develop statistical delay-bounded QoS-driven power control policies to maximize the effective energy efficiency (EEE), which is defined as the spectrum efficiency under given specified QoS constraints per unit harvested energy, for energy harvesting based wireless sensor networks. For the battery-infinite wireless sensor networks, our developed QoS-driven power control policy converges to the Energy harvesting Water Filling (E-WF) scheme and the Energy harvesting Channel Inversion (E-CI) scheme under the very loose and stringent QoS constraints, respectively. For the battery-finite wireless sensor networks, our developed QoS-driven power control policy becomes the Truncated energy harvesting Water Filling (T-WF) scheme and the Truncated energy harvesting Channel Inversion (T-CI) scheme under the very loose and stringent QoS constraints, respectively. Furthermore, we evaluate the outage probabilities to theoretically analyze the performance of our developed QoS-driven power control policies. The obtained numerical results validate our analysis and show that our developed optimal power control policies can optimize the EEE over energy harvesting based wireless sensor networks.
Introduction
Energy harvesting offers a promising solution to prolong the lifetime of battery-powered wireless sensor networks. Different from conventional energy supplies that suffer from a limited lifetime, energy harvesting can provide a never-ending supply of energy for wireless sensor networks [1][2][3][4]. A large number of renewable energy sources, e.g., radio frequency (RF) signals, thermoelectric generators, and vibration absorption devices [5,6], can be exploited to harvest energy for wireless sensor nodes. Due to the random distribution and mobility of harvested-energy powered sensor nodes, energy harvesting often occurs intermittently, resulting in very low energy efficiency for wireless sensor networks [7,8]. Therefore, it is very important to significantly increase the energy efficiency of energy harvesting based wireless sensor networks.
Recently, the energy efficiency of energy harvesting based wireless communications and networks has been studied [9][10][11]. The authors of [9] developed a power allocation scheme to maximize the energy efficiency of orthogonal frequency division multiple access (OFDMA) based wireless powered networks. Depending on the delay QoS requirement, our developed QoS-driven power control policy ranges from the Truncated energy harvesting Water Filling (T-WF) scheme (under the very loose QoS requirement) to the Truncated energy harvesting Channel Inversion (T-CI) scheme (under the very stringent QoS requirement). For battery-finite energy harvesting based wireless sensor networks, we derive and analyze the statistical QoS-driven power control policies under the following three scenarios: (i) the average harvested energy constraint dominated optimal power control policy, (ii) the battery capacity constraint dominated optimal power control policy, and (iii) both the average harvested energy constraint and the battery capacity constraint dominated optimal power control policy. Furthermore, we analyze the outage probability for our developed optimal power control policy. The obtained numerical results validate our analyses and show that our proposed QoS-driven power control policies can maximize the EEE for energy harvesting based wireless sensor networks, thus enabling efficient and QoS-guaranteed energy harvesting wireless communications in wireless sensor networks.
The rest of this paper is organized as follows. Section 2 gives our QoS-guaranteed energy harvesting based wireless sensor network model and introduces the principle of effective energy efficiency. Sections 3 and 4 develop the QoS-driven power control policies to maximize the effective energy efficiency for battery-infinite and battery-finite energy harvesting based wireless sensor networks, respectively. Section 5 analyzes the energy outage probabilities and the data-transmission outage probabilities. Section 6 numerically evaluates our developed QoS-driven power control polices for battery-infinite and battery-finite energy harvesting based wireless sensor networks, respectively. The paper concludes with Section 7.
System Model
We consider an energy harvesting based wireless sensor network model, as shown in Figure 1, where the energy harvesting enabled sensor nodes (SNs) communicate with the access point (AP). We concentrate on a discrete time system with a point-to-point link between the SN and the AP. Time division multiple access (TDMA) is employed for the SN-AP communications. In such a scenario, energy is incrementally harvested by the SN from the ambient energy sources and stored in the battery for data transmission.
A first-in-first-out (FIFO) data queue buffer is implemented at the SN, which contains the data packets from the upper-protocol-layer, as illustrated in Figure 1. The packets are divided into frames at the data-link layer and split into bit-streams at the physical layer. The channel state information (CSI) is estimated at the AP and reliably fed back to the SN. The SN needs to find the optimal power control policy based on the QoS constraint requested by the service, the CSI fed back from the AP, and the available energy harvested from the environments.
We denote by B, E_H, and P[i] the total bandwidth of one SN-AP link, the average harvested energy, and the instantaneous transmit power, respectively, where i is the time index of the frame. The additive white Gaussian noise (AWGN) power spectral density is denoted by N_0. The channel power gains, denoted by g[i], follow the stationary block fading channel model, where they remain unchanged within the duration of one frame but vary independently across different frames. The instantaneous channel signal-to-noise ratio (SNR), denoted by γ[i], can be expressed as γ[i] = E_H g[i]/(N_0 B). Moreover, we employ the Nakagami-m fading channel model, which is very general and often best fits land-mobile and indoor mobile multi-path propagation. The probability density function (PDF) of the instantaneous channel SNR, denoted by p_Γ(γ), can be expressed as p_Γ(γ) = (m/γ̄)^m γ^(m−1) exp(−mγ/γ̄)/Γ(m), where Γ(·) denotes the Gamma function, m represents the fading parameter of the Nakagami-m distribution, and γ̄ is the average received SNR.
The Statistical Delay-Bounded QoS Guarantees
Based on the large deviation principle (LDP), the author of [29] showed that, for a queueing system with stationary and ergodic arrival and service processes, the queue length process Q(t) (t ≥ 0) converges in distribution to a finite random variable Q(∞) that satisfies lim_{x→∞} (1/x) log Pr{Q(∞) > x} = −θ, which states that the probability of the queue length exceeding the queue length bound x decays exponentially as the bound x increases. The parameter θ (θ > 0), called the QoS exponent [21], indicates the exponential decay rate imposed by the queue length bound. A large θ leads to a fast decay rate, which implies that a stringent QoS demand can be supported. A small θ corresponds to a slow decay rate, which means that the system can only provide a loose QoS guarantee [30].
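The role of the QoS exponent can be sketched numerically: under the large-deviation approximation Pr{Q(∞) > x} ≈ e^(−θx), a larger θ makes the violation probability fall off much faster. The values of θ and x below are illustrative:

```python
# Large-deviation tail approximation of the queue-length violation
# probability: Pr{Q(infinity) > x} ~ exp(-theta * x). A larger QoS
# exponent theta means a faster decay, i.e. a more stringent QoS.
import math

def violation_prob(theta, x):
    """Approximate Pr{queue length > bound x} for QoS exponent theta."""
    return math.exp(-theta * x)

for theta in (0.01, 0.1, 1.0):      # loose -> stringent QoS exponent
    print(theta, violation_prob(theta, 50.0))
```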
The log-moment generating function of the service process is a convex function differentiable for all real θ [29]. The instantaneous service rate R[i] can be written as R[i] = T_f B log_2(1 + µ[i]γ[i]) [31], where T_f is the frame duration and µ[i] is the power control policy. We define the power control policy as the proportion of the transmit power to the average harvested energy. Thus, the instantaneous transmit power can be written as P[i] = µ[i]E_H. When the service rate sequence R[i] is stationary and time-uncorrelated, we can derive the effective capacity as E_C(θ) = −(1/θ) log E[e^(−θR[i])] [21].
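The effective capacity E_C(θ) = −(1/θ) log E[e^(−θR[i])] can be estimated by Monte Carlo simulation. The sketch below assumes a Rayleigh channel (the m = 1 special case of Nakagami-m), a fixed power fraction, and the normalization T_f·B = 1; all parameter values are illustrative assumptions:

```python
# Monte-Carlo estimate of the effective capacity
#   E_C(theta) = -(1/theta) * ln E[exp(-theta * R)]
# for i.i.d. service rates R = Tf*B*log2(1 + mu*gamma) over a Rayleigh
# channel (m = 1 Nakagami-m). Parameters below are assumed for illustration.
import math, random

random.seed(1)
TF_B = 1.0                  # Tf * B, normalized (assumed)
MU, GAMMA_BAR = 1.0, 10.0   # fixed power fraction and average SNR (assumed)

def service_rate():
    """One i.i.d. draw of the per-frame service rate."""
    gamma = random.expovariate(1.0 / GAMMA_BAR)   # exponential SNR (m = 1)
    return TF_B * math.log2(1.0 + MU * gamma)

def effective_capacity(theta, n=100_000):
    rates = [service_rate() for _ in range(n)]
    if theta < 1e-9:        # theta -> 0 recovers the ergodic capacity
        return sum(rates) / n
    mgf = sum(math.exp(-theta * r) for r in rates) / n
    return -math.log(mgf) / theta

# E_C is non-increasing in theta and bounded by the ergodic capacity:
print(effective_capacity(0.0), effective_capacity(1.0), effective_capacity(5.0))
```

As θ grows, the supportable rate under the delay constraint shrinks toward the worst-case channel, which is exactly the loose-to-stringent QoS trade-off exploited by the power control policies derived below.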
Effective Energy Efficiency in Energy Harvesting Based Wireless Sensor Networks
The SN harvests energy from the environment and stores it in the battery. The energy arrives at discrete time intervals with varying amounts. We assume that the energy arrival process is stationary and ergodic, and thus it can be modeled as a Poisson process with arrival rate λ_e [4,32]. Therefore, according to the Poisson process based energy arrival [4,32], the average harvested energy, denoted by E_H, is equal to the energy arrival rate, i.e., E_H = lim_{N→∞} (1/N) Σ_{i=1}^{N} H[i] = λ_e, where H[i] is the harvested energy during the ith time frame. We aim to maximize the energy efficiency under statistical delay-bounded QoS provisioning for energy harvesting based wireless sensor networks. Thus, we define the effective energy efficiency (EEE), denoted by E_e, as the achieved effective capacity per unit of harvested energy, i.e., E_e = E_C(θ)/E_H. Without loss of generality, we normalize the observation time interval; thus, the terms power and energy can be used interchangeably.
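The Poisson-arrival average E_H = λ_e and the resulting EEE E_e = E_C/E_H can be checked with a short simulation; the arrival rate, the hypothetical effective capacity value, and the units below are assumptions for illustration:

```python
# Empirical check that the time-averaged harvested energy of a Poisson
# arrival process equals the arrival rate lambda_e, and the resulting
# effective energy efficiency EEE = E_C / E_H. All values are illustrative.
import math, random

random.seed(7)

def poisson_sample(lam):
    """Knuth's multiplication method for one Poisson(lam) draw."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

LAMBDA_E = 2.0              # energy arrival rate (assumed units: energy/frame)
frames = 100_000
# Time-averaged harvested energy converges to the arrival rate lambda_e:
e_h = sum(poisson_sample(LAMBDA_E) for _ in range(frames)) / frames

e_c = 1.5                   # hypothetical effective capacity E_C(theta)
eee = e_c / e_h             # effective energy efficiency E_e = E_C / E_H
print(round(e_h, 2), round(eee, 2))
```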
QoS-Driven Optimal Power Control Policy with Infinite Battery Capacity
In this section, we assume that the battery capacity is large enough to store the harvested energy without energy overflow. Conventionally, power control schemes are functions of the instantaneous SNR γ[i]. However, for battery-infinite energy harvesting based wireless sensor networks, our QoS-driven power control policy, denoted by µ(η[i]), needs to be adaptive to the instantaneous SNR γ[i], the QoS exponent θ, and the energy arrival rate λ_e. The variable η[i] ≜ (θ, λ_e, γ[i]) is defined as the QoS and energy based state information (QSI).
Average Harvested Energy Constraint
We assume that the harvested energy is only used for transmission, i.e., the energy required for processing is not taken into account [3,4]. Then, the instantaneous transmit power in energy harvesting based wireless sensor networks cannot exceed the available harvested energy, which can be formulated as Σ_{i=1}^{t} P(η[i]) ≤ H[0] + Σ_{i=1}^{t−1} H[i], where P(η[i]) is the transmit power during the ith frame and H[0] denotes the amount of energy available in the battery at the initial time. The right-hand side of Equation (7) is the sum of the harvested energy from the initial time to the (t−1)th frame, because the energy harvested in the tth frame cannot be used for transmission at the same time. Since the discrete-time channel and the energy arrival process are both stationary and ergodic, the time average equals the statistical average of the harvested energy [33], i.e., lim_{t→∞} (1/t) Σ_{i=1}^{t} H[i] = E[H] = λ_e. In the following, we omit the time index i for simplicity. When t is large enough, substituting Equation (8) into Equation (7), we can rewrite Equation (7) as E_γ[P(η)] ≤ E_H, which shows that the power control policy is constrained by the average harvested energy.
The Effective Energy Efficiency Maximization for Battery-Infinite Energy Harvesting Based Wireless Sensor Networks
We formulate the energy efficiency optimization problem, denoted by P1, to maximize the EEE in battery-infinite energy harvesting based wireless sensor networks by using Equations (4) and (6): P1: max_{µ(η)} E_e = E_C(θ)/E_H, subject to Equation (9) and µ(η) ≥ 0. Since log(·) is a monotonically increasing function, the numerator of the objective function in problem P1 can be simplified. Due to the monotonicity of the log(·) function and the linearity of (1 + µ(η)γ) in µ(η), the numerator of the objective function in problem P1 is strictly concave with respect to µ(η). However, problem P1 is still a non-convex optimization problem because of the variable in the denominator. In order to convert problem P1 into a convex optimization problem, we assume the energy arrival rate λ_e to be fixed for the energy harvesting based wireless sensor network. This assumption is practical because the energy sources for energy harvesting based wireless sensor networks are relatively stable over short periods and variable across the whole energy harvesting process. Therefore, we solve problem P1 with fixed λ_e, and the solution of problem P1 can be adopted for energy harvesting based wireless networks with different values of λ_e. Since log(·) is a monotonically increasing function, problem P1 can be simplified to the new problem P2: min_{µ(η)} E_γ[(1 + µ(η)γ)^(−β)], subject to Equation (9) and µ(η) ≥ 0, where β = (θT_f B)/log 2 is defined as the normalized QoS exponent. It is clear that the objective function of P2 is strictly convex and the term E_γ[P(η)] in Equation (9) is linear with respect to µ(η). Thus, problem P2 is a strictly convex optimization problem, and its optimal solution is given by the following Theorem 1.
Theorem 1.
The optimal power control policy for battery-infinite energy harvesting based wireless sensor networks, denoted by µ*(η), which is the solution of problem P2, is determined by Equation (13), where γ_in is defined as the cut-off SNR threshold in the battery-infinite energy harvesting based wireless sensor networks and can be numerically obtained by substituting µ*(η) into the constraint in Equation (14). Proof. The Lagrangian function of problem P2 is formulated as in Equation (15), where κ is the Lagrange multiplier. Then, the Karush-Kuhn-Tucker (KKT) conditions of problem P2 can be written as in Equation (16) [34]. Defining γ_in ≜ κ/β and solving Equation (16), we obtain the optimal power control policy shown in Equation (13), where γ_in can be numerically obtained from Equation (14).
Theorem 1 gives the QoS-driven power control policy for battery-infinite energy harvesting based wireless sensor networks. To better understand the insights of Theorem 1, we plot the instantaneous transmit power control policy in Figure 2. Observing Figure 2, we have: (i) for a given energy arrival rate, when the QoS exponent is very small, more power is assigned to the better channel and less power to the worse channel; however, when the QoS exponent is very large, more power is assigned to the worse channel and less power to the better channel. (ii) The allocated power increases as the energy arrival rate increases. In addition, we can observe that the cut-off SNR threshold depends on λ_e. Furthermore, we discuss two specific cases of Theorem 1 in the following Remarks 1 and 2, which give the optimal power control policies under the very loose and the very stringent QoS constraints, respectively, for battery-infinite energy harvesting based wireless sensor networks.
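Since the closed form of Equation (13) is not reproduced in this text, the sketch below uses the classical effective-capacity water-filling shape µ(γ) = [1/(γ₀^(1/(β+1)) γ^(β/(β+1))) − 1/γ]⁺, which is consistent with the E-WF and E-CI limits in Remarks 1 and 2; this form, the Rayleigh SNR model, and all parameter values are assumptions. The cut-off γ₀ is found by bisection so that the average power fraction satisfies E[µ] = 1 (i.e., E[P] = E_H), mirroring the role of Equation (14):

```python
# Assumed QoS-driven water-filling sketch: power fraction
#   mu(gamma) = [ 1/(gamma0^(1/(b+1)) * gamma^(b/(b+1))) - 1/gamma ]^+,
# with the cutoff gamma0 tuned by bisection so that E[mu] = 1.
import random

random.seed(3)
GAMMA_BAR = 10.0  # average SNR (assumed)
# Deterministic Monte-Carlo sample of Rayleigh-faded SNRs (m = 1 Nakagami):
SNRS = [random.expovariate(1.0 / GAMMA_BAR) for _ in range(10_000)]

def mu(gamma, gamma0, beta):
    """Water-filling-style power fraction with normalized QoS exponent beta."""
    level = 1.0 / (gamma0 ** (1.0 / (beta + 1.0)) * gamma ** (beta / (beta + 1.0)))
    return max(0.0, level - 1.0 / gamma)

def mean_mu(gamma0, beta):
    return sum(mu(g, gamma0, beta) for g in SNRS) / len(SNRS)

def find_cutoff(beta, lo=1e-6, hi=1e3, iters=40):
    """Bisection on gamma0: E[mu] is strictly decreasing in gamma0."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if mean_mu(mid, beta) > 1.0:   # still over the energy budget
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

for beta in (0.01, 1.0, 10.0):         # loose -> stringent normalized QoS
    g0 = find_cutoff(beta)
    print(beta, round(g0, 4), round(mean_mu(g0, beta), 3))
```

Small β reproduces the water-filling behavior (more power to good channels), while large β pushes the allocation toward channel inversion, matching the two regimes visible in Figure 2.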
Remark 1.
Under the very loose QoS constraint (θ → 0), the optimal energy harvesting power control policy µ*(η) converges to the form in Equation (17), which is referred to as the Energy harvesting Water-Filling (E-WF) scheme. When the QoS constraint is very loose, our developed optimal power control policy converges to the E-WF scheme, where the water levels are dominated by the energy arrival rate and the cut-off SNR threshold. The conventional staircase water-filling scheme [3] is a special case (θ = 0) of the E-WF scheme.
Remark 2.
Under the very stringent QoS constraint (θ → ∞), the optimal power control policy for energy harvesting based wireless networks µ*(η) converges to the form in Equation (18). We call the power control policy specified in Equation (18) the Energy harvesting Channel Inversion (E-CI) scheme. As shown in Figure 2, when θ varies from 0 to ∞, reflecting different delay-bounded QoS constraints, our developed QoS-driven energy harvesting power control policy swings between the E-WF scheme and the E-CI scheme. Using our developed optimal power control policy for battery-infinite energy harvesting based wireless sensor networks, we can derive the maximum EEE, denoted by E*_e(θ, λ_e), as given in Equation (19).
QoS-Driven Optimal Power Control Policy with Finite Battery Capacity
In this section, we aim to maximize the EEE of energy harvesting based wireless sensor networks with finite battery capacity. Let µ̃(η[i]) denote the QoS-driven power control policy in the ith frame and P̃(η[i]) = µ̃(η[i])E_H denote the transmit power in the ith frame for SNs with finite battery capacity.
The Effective Energy Efficiency Maximization for Battery-Finite Energy Harvesting Based Wireless Sensor Networks
We denote by B_max the maximum battery capacity of the SN. Then, the causality constraint for battery-finite energy harvesting based wireless sensor networks is formulated in Equation (20a,b) [4]. Based on Equation (20a,b), we can obtain the condition that P̃(η[t]) needs to satisfy in Equation (21). Thus, when t approaches ∞, we can further simplify Equation (21) to the average harvested energy constraint and the battery capacity constraint in Equation (22). Now, we formulate the effective energy efficiency maximization problem for battery-finite energy harvesting based wireless sensor networks as problem P3, subject to Equation (22). It is hard to solve problem P3 directly since it is a non-convex optimization problem. Thus, we convert problem P3 into the equivalent problem P4, which is a convex optimization problem, subject to Equation (22). Since the average harvested energy E_H is variable in energy harvesting based wireless sensor networks, in order to solve problem P4, we need to analyze the cases in which the optimal policy is determined by only the average harvested energy constraint (E_γ[P̃(η)] ≤ E_H), only the battery capacity constraint (P̃(η) ≤ B_max), or both constraints specified in Equation (22).
The Optimal Power Control with QoS Provisioning in Battery-Finite Energy Harvesting Based Wireless Sensor Networks
If the battery capacity is large enough to store the harvested energy without overflow, the optimal power control policy is not limited by the battery capacity. We denote by f̃_θ(λ_e) the threshold that determines whether the battery capacity constraint is always satisfied (we derive the closed-form expression for f̃_θ(λ_e) in Section 4.3). For fixed θ, if B_max ≥ f̃_θ(λ_e) holds, the battery capacity constraint is always satisfied. In the case of B_max ≥ f̃_θ(λ_e), the optimal power control policy is determined only by the average harvested energy constraint. Thus, the effective energy efficiency maximization problem P3 reduces to problem P1. Then, we give the following Proposition 1.
Proposition 1.
If B_max ≥ f̃_θ(λ_e) is satisfied, the optimal power control policy in battery-finite energy harvesting based wireless sensor networks coincides with the battery-infinite policy µ*(η) given in Theorem 1. Proof. The proof of Proposition 1 is very similar to that of Theorem 1; we omit the details here.
If the transmitter always harvests more energy than the battery capacity, the overflowing energy will be wasted. In this case, the optimal power control policy is determined only by the battery capacity constraint. Thus, we have the following Proposition 2.
Proposition 2.
If B_max ≤ λ_e, the optimal power control policy in battery-finite energy harvesting based wireless sensor networks is µ̃*(η) = B_max/λ_e. Proof. If the optimal power control policy is determined only by the battery capacity constraint, the maximum available instantaneous power, P̃(η) = B_max, is always optimal. Thus, in this case, the optimal power control policy is µ̃*(η) = B_max/λ_e. For the region λ_e < B_max < f̃_θ(λ_e), the optimal power control policy is the solution of problem P4. In this case, we solve problem P4 and obtain the following Theorem 2.
Theorem 2.
If λ e < B max < f θ (λ e ) is satisfied, the optimal power control policy in battery-finite energy harvesting based wireless sensor networks is given by

μ̃ * (η) = min{ f (η), B max }/λ e for γ ≥ γ fn , and μ̃ * (η) = 0 otherwise, (27)

where f (η) ≜ λ e ^{β/(β+1)} /(γ fn ^{1/(β+1)} γ^{β/(β+1)}) − λ e /γ is defined for simplicity of expression and γ fn is the cut-off SNR in battery-finite energy harvesting based wireless sensor networks. The parameter γ fn can be numerically obtained by substituting Equation (27) into: Proof. We formulate the Lagrangian function of problem P4 as follows: where κ 1 and κ 2 are the Lagrange multipliers corresponding to the constraints specified in Equation (22). Then, the corresponding KKT conditions can be expressed as follows: Solving Equation (30), we can obtain the optimal power control policy in Equation (27), where γ fn ≜ κ 1 /β and can be determined by the constraint in Equation (28).
Theorem 2 gives the QoS-driven power control policy for battery-finite energy harvesting based wireless sensor networks. According to the optimal power control policy given by Theorem 2, we plot the instantaneous power control policy corresponding to Equations (27) and (28) in Figure 3. As illustrated in Figure 3, for fixed energy arrival rate, the power control policy allocates more power to the better channel and less power to the worse channel when the QoS exponent is very small. When the QoS exponent is very large, the power control policy allocates more power to the worse channel and less power to the better channel. The allocated power increases as the energy arrival rate increases. Meanwhile, the cut-off SNR threshold γ fn varies as the energy arrival rate varies. However, the maximum power is limited by the battery capacity. To further analyze the effect of QoS exponent on the optimal power control policy in battery-finite energy harvesting based wireless networks, we discuss two special cases of Theorem 2 in Remarks 3 and 4, which correspond to the optimal energy harvesting power control policies under the very loose QoS constraint and the very stringent QoS constraint, respectively.
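The swing between water-filling-like and channel-inversion-like behavior described above can be illustrated numerically. The expression for f(η) used below is reconstructed from the surrounding derivation and should be read as an assumption, and the parameter values (λ e = 1, γ fn = 0.1) are arbitrary.

```python
def alloc(gamma, beta, lam=1.0, gamma_fn=0.1, b_max=float("inf")):
    """Per-frame energy allocation under the reconstructed policy
    f(gamma) = lam^(b/(b+1)) / (gamma_fn^(1/(b+1)) gamma^(b/(b+1))) - lam/gamma,
    clipped at the battery capacity; zero below the cut-off SNR."""
    if gamma < gamma_fn:
        return 0.0
    f = (lam ** (beta / (beta + 1))
         / (gamma_fn ** (1 / (beta + 1)) * gamma ** (beta / (beta + 1)))
         - lam / gamma)
    return max(0.0, min(f, b_max))

# Loose QoS (small beta): more energy goes to the better channel
# (water-filling-like behavior).
assert alloc(1.0, beta=0.01) < alloc(5.0, beta=0.01)
# Stringent QoS (large beta): more energy goes to the worse channel
# (channel-inversion-like behavior).
assert alloc(1.0, beta=10.0) > alloc(5.0, beta=10.0)
print("ordering flips with the QoS exponent")
```

The same code capped with a finite `b_max` reproduces the truncation effect of the battery capacity noted in the text.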
Remark 3.
Under the very loose QoS constraint (θ → 0), the optimal power control policy μ̃ * (η) in Theorem 2 converges to

μ̃ * (η) = 1/(λ e γ fn ) − 1/γ for γ fn ≤ γ < γ̃, μ̃ * (η) = B max /λ e for γ ≥ γ̃, and μ̃ * (η) = 0 otherwise,

where γ̃ = λ e γ fn /(1 − B max γ fn ) is the solution of 1/(λ e γ fn ) − 1/γ = B max /λ e . As θ approaches 0, the optimal power control policy in battery-finite energy harvesting based wireless sensor networks converges to the Truncated energy harvesting Water-Filling (T-WF) scheme. In the T-WF scheme, both the energy arrival rate and the cut-off SNR threshold dominate the water level while the power is constrained by the battery capacity. The traditional directional water-filling scheme [4] is the special case (θ = 0) of the T-WF scheme.
As depicted in Figure 3, when the QoS exponent θ varies between 0 and ∞, the corresponding optimal power control policy for battery-finite energy harvesting based wireless sensor networks swings between the T-WF scheme and the T-CI scheme. Substituting Equations (27) and (28) into Equation (6), we can derive the maximum effective energy efficiency for battery-finite energy harvesting based wireless sensor networks, denoted by Ẽ * e (θ, λ e ), as follows: where {a, b} + ≜ max{a, b}.
The Analysis for the Threshold of Energy Constraints f θ (λ e )
Based on the analyses of Section 4.2 for battery-finite energy harvesting based wireless sensor networks, if the optimal power control policy is only determined by the average harvested energy constraint, it needs to satisfy B max ≥ f (η) for all γ. (34)

To derive the maximum value of f (η), which is f θ (λ e ), we first check the convexity of the function f (η) = λ e ^{β/(β+1)} /(γ fn ^{1/(β+1)} γ^{β/(β+1)}) − λ e /γ by setting its second derivative with respect to γ to zero: ∂ 2 f (η)/∂γ 2 = 0. (35)

Solving Equation (35), we can obtain γ = (2(β+1) 2 /(β(2β+1)))^{β+1} λ e γ fn . For the region γ < (2(β+1) 2 /(β(2β+1)))^{β+1} λ e γ fn , ∂ 2 f (η)/∂γ 2 is less than zero, corresponding to the low SNR region. When γ ≥ (2(β+1) 2 /(β(2β+1)))^{β+1} λ e γ fn , ∂ 2 f (η)/∂γ 2 is larger than or equal to zero, corresponding to the high SNR region. Thus, f (η) is concave in the low SNR region and convex in the high SNR region. We set the first derivative to zero, solving which we can obtain the stationary point as follows: γ = ((β+1)/β)^{β+1} λ e γ fn . (37)

Because ((β+1)/β)^{β+1} λ e γ fn < (2(β+1) 2 /(β(2β+1)))^{β+1} λ e γ fn , the stationary point falls into the low SNR region. Therefore, the maximum of f (η) in the low SNR region corresponds to the stationary point γ = ((β+1)/β)^{β+1} λ e γ fn . Then, substituting Equation (37) into the function f (η) specified in Equation (34), we can obtain that, in the low SNR region, f (η) needs to satisfy f (η) ≤ β^β /((β+1)^{β+1} γ fn ). (38)

In the high SNR region, since f (η) is convex, the maximum of f (η) can be obtained at one of the two boundary points of the region: the inflection point γ = (2(β+1) 2 /(β(2β+1)))^{β+1} λ e γ fn and γ → ∞. (39) Substituting the two boundary points of Equation (39) into Equation (34), respectively, we can derive the bound that f (η) needs to satisfy in the high SNR region. (40)

Then, based on Equations (38) and (40), the upper bound of f (η) is given as follows: f (η) ≤ β^β /((β+1)^{β+1} γ fn ), (41) where the equality holds because the function f (η) is continuous and, in the low SNR region, the value at the stationary point is larger than the value at the inflection point. Therefore, we can obtain the closed-form expression of f θ (λ e ) as: f θ (λ e ) = β^β /((β+1)^{β+1} γ fn ). (42)

As a result, if B max ≥ β^β /((β+1)^{β+1} γ fn ) holds, the battery capacity constraint is always satisfied.
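The stationary point and the closed-form threshold can be checked numerically. The expression for f(η) below is reconstructed from the derivation above (an assumption), and the parameter values β = 0.5, λ e = 2, γ fn = 0.2 are arbitrary.

```python
import math

def f(gamma, beta, lam, gamma_fn):
    # Reconstructed f(eta) as a function of the instantaneous SNR gamma.
    return (lam ** (beta / (beta + 1))
            / (gamma_fn ** (1 / (beta + 1)) * gamma ** (beta / (beta + 1)))
            - lam / gamma)

beta, lam, gamma_fn = 0.5, 2.0, 0.2
g_star = ((beta + 1) / beta) ** (beta + 1) * lam * gamma_fn   # stationary point, Eq. (37)
f_max = beta ** beta / ((beta + 1) ** (beta + 1) * gamma_fn)  # claimed threshold, Eq. (42)

# f attains the claimed maximum at the stationary point...
assert math.isclose(f(g_star, beta, lam, gamma_fn), f_max, rel_tol=1e-9)
# ...and a log-spaced grid search finds no larger value elsewhere.
grid = [g_star * 10 ** (k / 200) for k in range(-400, 401)]
assert max(f(g, beta, lam, gamma_fn) for g in grid) <= f_max + 1e-9
print("threshold check passed")
```

Note that the maximum is independent of λ e , matching the closed form in Equation (42), which depends only on β and γ fn .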
Outage Probability Analyses
For energy harvesting based wireless networks, there exist the energy outage probability and the data-transmission outage probability [35,36]. The energy outage probability is the probability that the harvested energy is not sufficient to sustain the power consumption, i.e., The data-transmission outage probability is the probability that the instantaneous service rate cannot support the required target data rate. Let P e out and P d out denote the energy outage probability and the data-transmission outage probability, respectively. In the following, we analyze the energy outage probability and the data-transmission outage probability, respectively, to theoretically evaluate the performance of energy harvesting based wireless sensor networks.
Energy Outage Probability
For energy harvesting based wireless sensor networks, we have the following Lemma 1 regarding the energy outage probability. Lemma 1. When t approaches ∞, P e out converges to 0.
Proof. Using our developed optimal power control policies, the energy outage probability for energy harvesting based wireless sensor networks can be derived as follows: where P * (η[i]) = µ * (η[i])λ e denotes the optimal power allocation in the i-th frame. According to Equations (14) and (28), P * (η[i]) needs to satisfy Thus, when t approaches ∞, the expectations can be written as follows: Based on Equations (43) and (45), and Chebyshev's law of large numbers [37], we can obtain which shows that the energy outage probability converges to zero as t approaches ∞. Now, we have derived that P e out converges to zero when t approaches infinity. Next, when t is finite, we can derive the upper bound for the energy outage probability according to the Chebyshev inequality [37] as follows: where D[a] represents the variance of a.
Observing Equation (48), we find that P e out decreases as H[0] increases. Moreover, according to Lemma 1, P e out converges to 0 as t approaches ∞. In practice, it always takes a relatively long time to accumulate energy from the energy sources before starting communications. Therefore, the energy outage probability can be regarded as zero in reality by charging the battery for a while.
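The qualitative claim above, that a sufficiently charged battery makes energy outages negligible, can be illustrated with a small Monte Carlo sketch. The discrete-frame model and exponential arrivals are illustrative assumptions, and the two initial-charge values are chosen so that the outcomes are deterministic (with no initial charge the first frame always fails; with a 500 mJ charge, even zero harvest cannot drain 1.9 mJ per frame for 100 frames).

```python
import random

def outage_fraction(runs, t, spend, b0, mean_arrival=2.0, seed=7):
    """Fraction of simulation runs in which the node ever lacks the
    energy `spend` required in a frame (an energy outage), over a
    horizon of t frames. Illustrative model, not the paper's notation."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(runs):
        stored = b0
        for _ in range(t):
            if stored < spend:          # energy outage in this run
                outages += 1
                break
            stored += -spend + rng.expovariate(1 / mean_arrival)
    return outages / runs

# Initial charge below the per-frame demand: every run fails immediately.
print(outage_fraction(200, 100, spend=1.9, b0=1.0))    # 1.0
# Large initial charge: 500 - 100 * 1.9 > 1.9, so no run can fail.
print(outage_fraction(200, 100, spend=1.9, b0=500.0))  # 0.0
```

Intermediate initial charges interpolate between these extremes, mirroring the dependence on H[0] in Equation (48).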
Data-Transmission Outage Probability
Using our developed optimal power control policies, the data-transmission outage probability for energy harvesting based wireless sensor networks can be formulated as follows [38]: where R th is the required target service rate. Based on the work of [39,40], the data-transmission outage probability in Equation (49) can be converted as follows: where α is the parameter controlling the severity or the diversity of the channel fading. Then, we analyze the data-transmission outage probabilities in battery-infinite and battery-finite energy harvesting based wireless networks, respectively.
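As a numerical cross-check of the outage formulation in Equations (49) and (50), the sketch below estimates the data-transmission outage probability by Monte Carlo under Nakagami-m fading, using the standard fact that the instantaneous SNR is then Gamma-distributed with shape m and mean equal to the average SNR. The rate threshold, the per-frame rate model R = T f B log2(1 + γ), and the chosen values are illustrative assumptions; the closed-form comparison is for the Rayleigh special case m = 1.

```python
import math
import random

def mc_outage(r_th, mean_snr, m, t_f=2e-4, bw=1e6, n=100_000, seed=3):
    """Monte Carlo estimate of P(T_f * B * log2(1 + SNR) < R_th) with
    SNR ~ Gamma(m, mean_snr / m), i.e., Nakagami-m fading."""
    rng = random.Random(seed)
    snr_th = 2 ** (r_th / (t_f * bw)) - 1      # SNR needed to reach r_th
    bad = sum(rng.gammavariate(m, mean_snr / m) < snr_th for _ in range(n))
    return bad / n

mean_snr = 10 ** 0.5                            # 5 dB average SNR
p_hat = mc_outage(200.0, mean_snr, m=1)         # 200 bits per 0.2 ms frame
p_exact = 1 - math.exp(-1 / mean_snr)           # Rayleigh closed form (snr_th = 1)
print(abs(p_hat - p_exact) < 0.02)              # agreement well within sampling error
```

With n = 100,000 samples the sampling error is on the order of 0.002, so the estimate falls comfortably within the 0.02 tolerance of the Rayleigh closed form.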
Battery-Infinite Energy Harvesting Based Wireless Sensor Networks
The optimal power control policy for battery-infinite energy harvesting based wireless networks has been given in Theorem 1. Plugging Equation (13) into Equation (50), we can obtain the data-transmission outage probability, denoted by P d i out , for battery-infinite energy harvesting based wireless sensor networks as follows: To further evaluate the data-transmission outage probability, we obtain Lemma 2 regarding P d i out under two specific cases, i.e., when the QoS constraint is very loose and when it is very stringent.
Lemma 2.
When the QoS constraint is very loose (θ → 0), the data-transmission outage probability for battery-infinite energy harvesting based wireless sensor networks converges to When the QoS constraint is very stringent (θ → ∞), the data-transmission outage probability for battery-infinite energy harvesting based wireless sensor networks converges to Proof. Based on Equation (51), we analyze P d i out in the following two cases corresponding to the data-transmission outage probabilities, under the very loose QoS constraint and the very stringent QoS constraint, respectively.
Case I: Under the very loose QoS constraint (θ → 0), the data-transmission outage probability for battery-infinite energy harvesting based wireless sensor networks converges to the limit stated in Lemma 2. In this case, P d i out converges to 1 as γ approaches zero.
Case II: Under the very stringent QoS constraint (θ → ∞), the data-transmission outage probability for battery-infinite energy harvesting based wireless sensor networks becomes Equation (55). Observing Equation (55), we find that P d i out converges to zero as γ approaches zero. Meanwhile, P d i out becomes 1 as γ approaches ∞. Therefore, comprehensively considering both Cases I and II, we have Lemma 2.
Based on the proof of Lemma 2, we can also obtain that under the very loose QoS constraint, P d i out decreases as γ increases. Meanwhile, under the very stringent QoS constraint, P d i out increases as γ increases.
Battery-Finite Energy Harvesting Based Wireless Sensor Networks
Substituting Equation (27) into Equation (50), we can obtain the data-transmission outage probability, denoted by P d f out , for battery-finite energy harvesting based wireless sensor networks as follows: Then, we obtain the upper and lower bounds of P d f out under the very loose QoS constraint and the very stringent QoS constraint, respectively, in Lemma 3.
Lemma 3.
When the QoS constraint is very loose (θ → 0), the data-transmission outage probability for battery-finite energy harvesting based wireless sensor networks converges to When the QoS constraint is very stringent (θ → ∞), the data-transmission outage probability for battery-finite energy harvesting based wireless sensor networks converges to Proof. The expression of P d f out has been specified in Equation (56). Then, we analyze the data-transmission outage probability for battery-finite energy harvesting based wireless sensor networks in two specific cases corresponding to the data-transmission outage probabilities under the very loose QoS constraint and the very stringent QoS constraint, respectively.
Case 1: Under the very loose QoS constraint (θ → 0), the data-transmission outage probability for battery-finite energy harvesting based wireless sensor networks converges to Observing Equation (59), we find that P d f out turns to 1 when γ approaches to zero. P d f out converges when γ approaches to ∞. Case 2: Under the very stringent QoS constraint (θ → ∞), the data-transmission outage probability for battery-finite energy harvesting based wireless sensor networks converges to: Based on Equation (60), we can obtain that P Since the energy outage probability can be treated as zero, the outage probability for energy harvesting based wireless sensor networks can be entirely determined by the data-transmission outage probability, which is calculated based on Equations (51) and (56). Both Equations (51) and (56) show that the outage probabilities are functions of instantaneous SNR γ, QoS constraint θ, and energy arrival rate λ e . Based on Equations (51) and (56), we can derive the outage probability corresponding to the specified instantaneous SNR, QoS constraint, and energy arrival rate.
Performance Evaluation
In this section, we conduct numerical analyses to evaluate the performance of our proposed QoS-driven power control policies for energy harvesting based wireless sensor networks. Throughout the simulations, we use the normalized effective energy efficiency and the normalized effective capacity (EC), defined as the EEE and EC per Hz per second, respectively, to evaluate the performance of the energy harvesting based wireless networks. We set the bandwidth, the time frame length, the maximum battery capacity, the average SNR, and the parameter of the Nakagami-m channel model to B = 1 MHz, T f = 0.2 ms, B max = 2 mJ, γ = 5 dB, and m = 2, respectively.
In order to numerically analyze the threshold f θ (λ e ) for the energy constraints, we plot the transmit power curves versus the instantaneous SNR in Figures 4 and 5, where the QoS constraint θ is set to 0.01 and 0.1, respectively. Observing Figures 4 and 5, we find that the transmit power curves are concave when γ is very small and convex when γ is very large. This validates our analyses for the threshold f θ (λ e ) of the energy constraints in Section 4.3. The maximum value of the transmit power, which corresponds to the threshold f θ (λ e ), can be obtained at the stationary points in Figures 4 and 5; i.e., when θ = 0.01 and λ e = 2, f 0.01 (2) = 1.406, which means that if B max ≥ 1.406, the optimal power control policy is dominated only by the average harvested energy constraint under this circumstance. Figures 4 and 5 also illustrate that, for different energy arrival rates and under different QoS constraints, we obtain different thresholds f θ (λ e ) for the energy constraints. This verifies that f θ (λ e ) depends on the energy arrival rate λ e and the QoS constraint θ.

Figures 6 and 7 depict the normalized EEE and the normalized EC of our developed optimal power control policy versus the energy arrival rate λ e . As illustrated in Figures 6 and 7, the EEE decreases as the energy arrival rate increases while the EC increases as the energy arrival rate increases. This indicates that there is a trade-off between the EEE and the EC. As also illustrated in Figures 6 and 7, for λ e ≤ λ e1 (under the QoS constraint θ = 10 −3 ) and λ e ≤ λ e2 (under the QoS constraint θ = 10 −2 ), respectively, the optimal power control policies in both battery-infinite and battery-finite energy harvesting based wireless sensor networks achieve the same EEE and EC. This is because the instantaneous power control policy given by Proposition 1 is only limited by the average harvested energy in the low energy arrival rate region.
Therefore, when λ e ≤ λ e1 (under the QoS constraint θ = 10 −3 ) and λ e ≤ λ e2 (under the QoS constraint θ = 10 −2 ), the EEE and EC are not limited by the battery capacity. However, the battery capacity limits the EEE and EC in the high energy arrival rate region. For this reason, the optimal power control policy for battery-infinite energy harvesting based wireless sensor networks achieves much larger EEE and EC than that for battery-finite energy harvesting based wireless sensor networks when λ e > λ e1 (under the QoS constraint θ = 10 −3 ) and λ e > λ e2 (under the QoS constraint θ = 10 −2 ). We can also observe from Figures 6 and 7 that, under the QoS constraint θ = 10 −1 , both the battery-infinite and battery-finite energy harvesting based wireless sensor networks have the same EEE and EC when λ e is less than 4. This indicates that, when the QoS constraint is very stringent, the optimal power control policy for battery-finite energy harvesting based wireless sensor networks is not limited by the battery capacity until the networks have a relatively large energy arrival rate.

Figure 8 depicts the normalized EEE of the optimal power control policy versus the QoS exponent, where the energy arrival rate λ e is fixed to 2 and 3, respectively. As shown in Figure 8, the normalized EEE decreases as the QoS exponent θ increases. This indicates that the looser the traffic QoS constraint is, the larger the EEE we can achieve. In addition, the optimal power control policy in battery-infinite energy harvesting based wireless sensor networks can achieve larger EEE than that in battery-finite energy harvesting based wireless sensor networks when the QoS constraint is very loose or very stringent.
This is because the QoS-driven power control policy in battery-finite energy harvesting based wireless sensor networks is limited by the battery capacity in the high SNR region when the QoS requirement is very loose, and in the low SNR region when the QoS constraint is very stringent. When the QoS constraint is neither very loose nor very stringent, the QoS-driven power control policies for the battery-infinite and battery-finite energy harvesting based wireless sensor networks achieve the same EEE. This is because the maximum instantaneous transmit power is always less than the battery capacity when the QoS constraint is neither very loose nor very stringent.

Figure 9 compares the performance of our developed optimal power control policy with other existing schemes, i.e., the related scheme in [25], the E-WF scheme, and the constant power allocation scheme. We find that both the power control policies with QoS provisioning, specified in this paper and in [25], can achieve better performance than the power control policies without QoS provisioning, i.e., the E-WF scheme and the constant power allocation. In addition, Figure 9 also shows that our developed optimal power control policy in Theorem 1 can achieve larger EC than the power control policy in [25]. This is because [25] considers the data rate QoS requirement, which is a deterministic QoS guarantee, while our developed optimal power control policy provides statistical QoS guarantees, which are adaptive to diverse delay-bounded QoS constraints, thus achieving the maximum EC. To further verify the analyses in this paper, we plot the normalized EEE of the optimal power control policies developed in Theorems 1 and 2, the constant power allocation, the E-WF scheme, the T-WF scheme, the E-CI scheme, and the T-CI scheme in Figure 10.
We can observe that our developed QoS-driven power control policies, which are the solutions of Theorems 1 and 2, can achieve larger EEE than the other schemes for energy harvesting based wireless sensor networks. When the QoS constraint is very loose, our developed QoS-driven power control policy for battery-infinite energy harvesting based wireless sensor networks converges to the E-WF scheme and our developed QoS-driven power control policy for battery-finite energy harvesting based wireless sensor networks converges to the T-WF scheme. When the QoS requirement is very stringent, our QoS-driven optimal power control policy for battery-infinite energy harvesting based wireless sensor networks converges to the E-CI scheme and the QoS-driven power control policy for battery-finite energy harvesting based wireless sensor networks converges to the T-CI scheme.

Figure 9. The comparison between our developed QoS-driven optimal power control policy, the existing related scheme, the E-WF scheme, and the constant power allocation scheme.

Figure 10. The comparison between our developed QoS-driven optimal power control policies, the constant power allocation scheme, the E-WF scheme, T-WF scheme, E-CI scheme, and T-CI scheme.

Figures 11 and 12 illustrate the outage probabilities of our developed optimal power control policies. As depicted in Figure 11, when the QoS exponent θ is very small, the outage probability for battery-infinite energy harvesting based wireless sensor networks converges to 1 in the low SNR region and P 1 in the high SNR region, while the outage probability for battery-finite energy harvesting based wireless sensor networks converges to 1 in the low SNR region and P 2 in the high SNR region.
In addition, when the QoS exponent θ is very large, the outage probability for battery-infinite energy harvesting based wireless sensor networks converges to zero in the low SNR region and 1 in the high SNR region, while the outage probability for battery-finite energy harvesting based wireless sensor networks converges to P 2 in the low SNR region and 1 in the high SNR region. Note that the corresponding lower bounds P 1 and P 2 = 1 − exp[−((2^{R th T f B} − 1)λ e /B max )^{α/2}] can be obtained from Lemmas 2 and 3, respectively. In Figure 12, we plot the outage probability curves versus the instantaneous SNR under the QoS constraint θ = 10 −4 , where the energy arrival rate is set to 1, 2, and 3, respectively. As depicted in Figure 12, when the energy arrival rate is 1, the battery-infinite outage probability is the same as the battery-finite outage probability. When the energy arrival rate is 2 or 3, the battery-infinite energy harvesting based wireless sensor networks achieve a smaller outage probability than the battery-finite energy harvesting based wireless sensor networks. This is because the optimal power control policy is not constrained by the battery capacity when the energy arrival rate is 1. Thus, both battery-infinite and battery-finite energy harvesting based wireless sensor networks have the same outage probability. When the energy arrival rate is 2 or 3, the optimal power control policy is limited by the battery capacity in battery-finite energy harvesting based wireless sensor networks. Thus, the battery-finite energy harvesting based wireless sensor networks have a larger outage probability than the battery-infinite energy harvesting based wireless sensor networks.
Figure 11. The outage probability of our developed optimal power control policy with α = 4, λ e = 3 mJ, and B max = 1.5 mJ.
Conclusions
In this paper, we developed the statistical delay-bounded QoS-driven power control policies for energy harvesting based wireless sensor networks to maximize the effective energy efficiency. First, we analyzed the available energy constraints for the battery-infinite and battery-finite energy harvesting based wireless sensor networks, respectively. Then, we formulated the EEE maximization problems, solving which, we derived the optimal power control policies. Our analyses identified the key fact that, under various QoS constraints, the optimal power control policy for battery-infinite energy harvesting based wireless sensor networks varies between the E-WF scheme and E-CI scheme while the optimal power control policy for battery-finite energy harvesting based wireless sensor networks varies between the T-WF scheme and T-CI scheme. We also derived the threshold of the energy arrival rate to judge whether the EEE is limited by the battery capacity constraint or not. In addition, we analyzed the outage probabilities for energy harvesting based wireless sensor networks using our developed optimal power control policies. The obtained numerical results validated our analyses and showed that our developed QoS-driven power control policies can achieve the maximum EEE for energy harvesting based wireless sensor networks.
Consideration of Qualitative Changes in Agricultural Settlements Due to Land Consolidation: A Case Study Based on the Perceptions of Non-Farmers
Land consolidation (LC) is implemented as a public project that contributes to the improvement of agricultural productivity, and its effect is evaluated mainly by labour productivity and land productivity. However, to maintain both agricultural production and the social community, understanding the impact on non-farmers in the community as one aspect of LC is extremely important. In this study, we surveyed rural areas about eight years after LC was implemented, using a posted questionnaire, and analysed the difference between farmers' and non-farmers' perceptions of multifaceted evaluation items on the policy effect. The evaluation points for the LC include the following: [1] Impact on farming and farmland preservation, [2] Impact on community activation, and [3] Impact on collaboration between farmers and non-farmers. The results can be summarized as follows: First, it was confirmed that non-farmers' attachment to the area tends to be reduced because of LC. Second, non-farmers' evaluations of LC's ability to attract young farmers were also low. However, this opinion was much more noticeable in non-farmers who had quit agriculture recently than in the generation that had left agriculture because of LC. In other words, LC is a useful policy for improving agricultural conditions and agricultural structure. However, in some cases, the connections between farmers and non-farmers are weakened. Thus, cooperative activities that actively prevent this weakening are important.
About Land Consolidation
In this report, we first describe the formation process of land consolidation (LC) projects, which are one of the main methods used in today's agricultural land development policy. In a study on policy evaluation in Japan, it became clear that the evaluation index was biased toward improving agricultural productivity. This was not a problem while Japan was experiencing a population increase and there were plenty of workers to maintain the communities in rural areas. However, today's population is rapidly decreasing. Therefore, it was hypothesized that if only agricultural productivity is regarded as important, the sustainability of rural communities could be undermined. A survey on the sustainability of rural communities focusing on non-farmers was carried out using a questionnaire designed based on this hypothesis. The results supported the hypothesis, especially based on the recognition of non-farmers who had retired from agriculture. In the following, a series of case studies whose findings can be used to inform future policy formation processes are presented, and then some countermeasures are considered.
LC projects have been used as part of public policy for rural development. They seek to comprehensively improve agricultural land conditions by applying soil improvements and compartmentalization, area expansion, and irrigation and drainage capacity to farmland with poor workability. There is evidence that LC is carried out voluntarily by adjacent villages (Bonner, 1987). The prototype of LC projects similar to the current type dates back to the nineteenth century, and projects were practiced in each country as a policy in the 1950s (Food and Agriculture Organization of the United Nations, 2003). Against this backdrop, the progress of LC is said to have been hampered by the green revolution (Bullard, 2007). In the past, the possibility of cultivating multiple types of produce simultaneously was argued to be an advantage derived from the fragmentation of agricultural land (Hardjono, 1987). Even today, this is recognized as an advantage of agricultural land fragmentation (Kawasaki, 2010, 2011), which has gradually spread; further, there is greater concern about the current environmental burden and improving the quality of the environment (Fourie, 2004). Until now, policy details have been widely adopted as a method of rural development while being localized to specific regions in Europe (Thomas, 2006), Central Asia (Gun, 2003), and Africa (Lawry, 1989). LC has been useful as a method of rural development in East Asia (Long, 2014), and even in Japan, the subject of this study, it was institutionalized through the implementation of the post-war Land Improvement Act (Horiguchi & Taketani, 2012). Later, large-scale agricultural land of several hectares or more was targeted (Ishii, 2005), and in recent years, it has been applied to the regeneration of agricultural land damaged by the Great East Japan Earthquake (Hattori, Shimizu, & Saito, 2018).
According to Japanese national agricultural and forestry statistics, which are based on the same statistical method, the number of agricultural management entities with a farm size greater than 2 ha has gradually increased over the past 10 years (Figure 1). Thus, in addition to the improvement of the physical condition of agricultural land, the current agricultural policy promotes the concentration of villages' agricultural capital (labour or agricultural machinery) into the hands of a few influential farmers or corporate entities. The reasons the selection and concentration of agricultural capital are packaged in LC are as follows. First, because it was clear that the population on Japan's agricultural land would continue to decline in the period immediately following the war, a breakaway from an agricultural structure based on many individual farmers was targeted as quickly as possible. LC was expected to be the driving force behind this change in structure; however, in the 1980s, there was nationwide criticism that voluntary aggregation was not progressing in the regions that had adopted LC (Motosugi, 2008). As a countermeasure, it was advanced through incentives in the form of subsidies. Hashimoto & Nishi (2016) provide a useful account of the policies relating to LC from the post-war period to recent times.
Second, owing to the concentration of agricultural resources, many small farmers (most of them elderly) are retiring from independent farm management. It has been shown that effective utilization of this kind of surplus labour for auxiliary work, such as weeding and wastewater management, improves the sustainability of regional agriculture (Takayama, Horibe, & Nakatani, 2018; Yamashita & Hoshino, 2006). Voluntary cooperation of residents who have retired from the independent farming business and those who do not have a history of farming is expected; however, in practice, cooperation is promoted through incentive policies targeting a series of activities, including environmental conservation, around the agricultural land and rural society in general. This kind of comprehensive agricultural policy has seen full-scale implementation since 2000; however, its results are still being evaluated (Hashiguchi, 2011; Komiyama & Ito, 2017; Takayama & Nakatani, 2014).
Research Background
Our awareness of the issues is influenced by the scarcity of objective evaluations of whether the surplus labour generated by LC can be smoothly redirected toward progress. Evaluations of LC primarily use indices relating to agricultural production; this is true not only in Japan (Arimoto, 2011; Hoshino, 1992; Kunimitsu, Nakata, & Toshima, 2005) but also overseas (Bizimana, Nieuwoudt, & Ferrer, 2004). A study was also conducted to evaluate the strengthening of regional social capital through LC by using large-scale statistical materials as data (Takayama et al.). However, because these studies conducted statistical analyses, the specific opinions of residents who had retired from the agricultural business were treated abstractly. Previous research on the conflicts of farmers or interest adjustment related to LC projects in a broad sense focused on, for example, the difference in agricultural land conditions before and after the project in one case study (Wójcik-Leń et al., 2018) and the consensus-building leading to project implementation in another case study (Haldrup, 2015). One needs to show the legitimacy of the incentive policy's aim to effectively return the surplus labour force created by LC to the area; thus, it is necessary to prove the hypothesis that negative changes in village society will impede voluntary cooperation between farmers and non-farmers.
This case study seeks signs of disharmony that can occur in rural society because of LC. The aim of this paper is to understand the intentions of non-farmers who have given up farming by using a survey and describing the changes in agricultural village communities.
Concept Definitions
First, we define some concepts used in this study. Our research was based on a questionnaire survey. Although other methods, such as online surveys, have been developed in recent years, we conducted a postal survey in which completed questionnaires were returned by mail. The survey is briefly described below.
"LC beneficiary" refers to all inhabitants who owned farmland within the construction area prior to the implementation of LC. The condition for being a beneficiary was not whether one was a farmer but whether one owned farmland within the LC zone. Under the legal procedure for LC in Japan, the amount of money that local inhabitants must bear and the criteria for subsidy eligibility differ according to whether the project is managed by the prefecture or the national government and according to the purpose of the project. All LC beneficiaries were obliged to pay the locally borne share of the project cost not covered by the public subsidy.
"Agricultural workers" refers to all residents who were involved in farming to any degree at the time; there was no lower limit on the number of days worked in agriculture. In the survey conducted in this study, "farmer" and "agricultural worker" were synonymous because only one respondent was selected per household. Strictly speaking, farming households may include both agricultural and non-agricultural workers; however, for the sake of simplification, we defined a farmer as an agricultural worker and a non-farmer as a non-agricultural worker.
"Years retired" was defined as the number of years between a non-farmer's retirement from farming and the time of this survey. Notably, the meaning of "retirement" may not be uniform across respondents: some may have withdrawn from agriculture completely, while others may have retired as managers but remain partly involved in agricultural activities. In this study, however, the interpretation of retirement was left to the respondents, given the constraints on survey time and the substantial amount of information already requested of them.
Analytic Framework
By cross tabulation, using data on the intentions of residents, we compared individual differences in non-farmers' positive and negative perceptions of the current situation and trend in regional agriculture. In this study, taking an LC project in which construction was completed 12 years ago as an example, we examined all the beneficiaries living in that area. Then, separating beneficiaries into farmers and non-farmers, we calculated the number of years since retirement in the case of non-farmers. In addition, to measure the subjective influence LC has exerted on the area and the difference in the evaluations of farmers and non-farmers, we confirmed a difference of opinion in the non-farmer groups classified by retirement year.
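As an illustration, the cross tabulation at the core of this framework amounts to counting co-occurrences of two categorical answers. The sketch below uses hypothetical respondent data, since the raw survey responses are not reproduced here.

```python
from collections import Counter

def cross_tab(rows, cols):
    """Count co-occurrences of two categorical variables
    (e.g. farmer status vs. answer to one survey question)."""
    counts = Counter(zip(rows, cols))
    row_labels = sorted(set(rows))
    col_labels = sorted(set(cols))
    return {r: {c: counts[(r, c)] for c in col_labels} for r in row_labels}

# Hypothetical respondents: (farmer status, evaluation of one question)
status = ["farmer", "farmer", "non-farmer", "non-farmer", "non-farmer"]
answer = ["agree", "disagree", "disagree", "disagree", "agree"]

table = cross_tab(status, answer)
# table["non-farmer"]["disagree"] == 2
```

In practice the same counting is applied question by question, with non-farmers further split by years retired.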
Recent surveys in rural areas of Japan have empirically shown that the response rate among elderly people tends to be low. In this study, sequential examination by cross tabulation, rather than multivariate analysis, was adopted as the fundamental method, because collecting the data needed for multivariate analysis was expected to be difficult.
The target for investigation was Town A in Ishikawa Prefecture, which is located on the Sea of Japan coast in central Japan. Town A lies almost in the centre of the Noto Peninsula in Ishikawa Prefecture (Figure 2). Ishikawa Prefecture has a greater proportion of abandoned agricultural land than the Japanese average (Figure 3). The total population of Town A is decreasing at a rate that greatly exceeds that of Ishikawa Prefecture as a whole, which in turn is slightly higher than the national average. Furthermore, although the total number of farmers in Town A is decreasing at a slightly lower rate than in Ishikawa Prefecture overall, it has decreased continuously (Figure 4). Thus, an improvement of agricultural land conditions was desired. Town A's administration began explicit investigations in 1998, following requests for LC from residents. The project began in 2000, and the construction period concluded in 2006, when the current layout of the agricultural land was created (Figure 5). Prior to the beginning of the project in 2000, only 2.3 ha of agricultural land, corresponding to about 2% of the project's target area, was under the control of influential farmers and voluntary organizations. Following the completion of LC, 91.5 ha, corresponding to about 70% of the project area, was under the control of influential farmers or systematically created agricultural organizations.
In this process, we clarified changes in the evaluation of local communities spread among general small-scale farmers. There were six evaluation items based on the survey questions shown in Table 1.
Data Collection
A questionnaire survey sheet was mailed to all beneficiaries' houses in December 2014. By the middle of January 2015, 94 responses had been received (a response rate of 31%). The questionnaire included items on personal attributes such as sex, age, presence or absence of a successor, and whether the respondent was a farmer; non-farmers were asked how many years had passed since they quit farming. Because the number of samples collected was small, the analysis did not consider sex, age, or the presence or absence of a successor.
The items for evaluating the effects of LC were categorized as follows: 1) items relating to substantial agriculture and agricultural land; 2) items relating to the local community; and 3) items regarding cooperation between farmers and non-farmers. There were two questions listed for each item, giving a total of six questions. The six questions and the answer choices are shown in Table 1. Figure 6 shows the cross tabulation of the sex and age of respondents. All respondents were over the age of 50, and male respondents constituted about 70% of the sample. Moreover, while there were 51 farmers and 39 non-farmers, only 36 non-farmers entered the number of years elapsed since their retirement from farming. The number of years since retirement was 1-2 years for two respondents, 3-5 years for five of them, 6-10 years for 14 respondents, and over 11 years for 15 of them. Of the 14 who had retired 6-10 years ago, 12 said that they had retired because of the implementation of LC.
Figure 6. Cross tabulation of gender and age of the respondents. Note: Five respondents did not select a gender.
Among the current farmers, 41 in total answered the question on the existence of a successor. Of these, four (9.8%) answered "Definitely have a successor", 14 (34.1%) answered "No successor", and 23 (56.1%) answered "I don't know (undetermined)". These survey results show that it is difficult for individual farmers to maintain the region's farmland. Table 2 shows whether there were significant differences between farmers and non-farmers in their answers to the six questions and three items of Table 1. Because each question included some non-responses, the sample sizes for farmers and non-farmers are not uniform.
Considerations for the Differences between Farmers and Non-farmers
First, the items relating to the convenience of farmland and farm roads resulting from LC (Questions 1 and 2) evoked positive evaluations. This was the stated purpose of the LC project and is an expected result. Next, the items on cooperation between farmers and non-farmers received lower evaluations from non-farmers, although a majority of the evaluations from both farmers and non-farmers were positive. Conversely, evaluations of the effects that LC exerted on the community were relatively low, as seen from Questions 3 and 4 in Table 2.
Next, we evaluated the significant differences between farmers and non-farmers. The Cramér's V coefficient shown in Table 2 can be interpreted as indicating a moderate association when above 0.1 and a definite association when above 0.2 (Cohen, 1988). On this basis, it was inferred that there was a significant difference for every question except Question 2. In particular, opinions about the effect of LC on the local attachment of beneficiaries showed the clearest and most significant difference between farmers and non-farmers.
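For reference, Cramér's V can be computed from a contingency table as sqrt(chi2 / (n * (k - 1))), where chi2 is the Pearson chi-square statistic and k is the smaller of the number of rows and columns. A minimal pure-Python sketch follows; the cell counts are hypothetical, not the survey's.

```python
import math

def cramers_v(table):
    """Cramér's V for a 2-D contingency table (list of rows)."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    # Pearson chi-square statistic against independence expectations
    chi2 = sum((table[i][j] - row_tot[i] * col_tot[j] / n) ** 2
               / (row_tot[i] * col_tot[j] / n)
               for i in range(len(table)) for j in range(len(col_tot)))
    k = min(len(table), len(col_tot))
    return math.sqrt(chi2 / (n * (k - 1)))

# Hypothetical 2x2 table: farmers vs non-farmers, agree vs disagree
v = cramers_v([[30, 15], [12, 20]])  # ~0.289, a "definite" association
```

For a 2x2 table this reduces to the absolute value of the phi coefficient.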
The above analysis did not isolate the impact of LC on the attitudes of non-farmers, because the stage of life at which non-farmers retired from agriculture was not considered. Therefore, in the following analysis, differences are evaluated after dividing the non-farmers by the number of years since their retirement from agriculture.
The Effect of the Number of Years Retired on Non-farmers' Evaluations of LC
In the previous analysis, because the evaluations of non-farmers for Questions 1 and 2 were high, they were not treated as serious concerns. Here, looking only at Questions 3-6, we examined differences in the evaluations of non-farmers based on the number of years since retirement. Based on the start year of LC, three categories of years retired were used: 5 years or fewer (little relation to LC), 6-10 years (strong relation to LC), and 11 years or more (no relation to LC). The question was whether characteristic results could be seen in the 6-10 years group. Table 3 shows the non-farmers' answers according to the number of years retired. The percentage shown on the left side in parentheses in the aggregate column is the share of the frequency of each option within the same number of years retired; the percentage on the right side is the share of the frequency of the specific number of years retired in the total responses for that option.
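The two percentages in Table 3 can be reproduced as row shares (each option's share within one retirement-year category, the left figure) and column shares (each category's share within one option, the right figure). The frequency matrix below is hypothetical.

```python
def row_shares(table):
    """Share of each cell within its row, as percentages."""
    return [[100 * v / sum(row) for v in row] for row in table]

def col_shares(table):
    """Share of each cell within its column, as percentages."""
    col_tot = [sum(col) for col in zip(*table)]
    return [[100 * v / col_tot[j] for j, v in enumerate(row)] for row in table]

# Rows: years retired (<=5, 6-10, 11+); columns: two answer options
freq = [[1, 4], [6, 8], [7, 8]]
left = row_shares(freq)   # left-hand percentage in Table 3
right = col_shares(freq)  # right-hand percentage in Table 3
```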
From the results, it can be seen that non-farmers who had been retired for five or fewer years had few positive evaluations in response to Questions 3 and 4, which measured the effects LC has had on the local community. On the other hand, no discernible difference was present between non-farmers who retired 6-10 years ago and those who retired more than 11 years ago.
Regarding Questions 5 and 6, seeking opinions on the effects of LC on cooperation between farmers and non-farmers, there was no significant difference across the number of years retired.
Interpretation of Results and Discussion
Based on these results, it seems that LC is a useful policy for improving agricultural conditions and agricultural structure; however, in some cases, the connections between farmers and non-farmers are weakened. Thus, it is important to promote cooperative activities to actively prevent this weakening. In the absence of an environmental policy that offers sufficiently meaningful incentives, there is the possibility that non-farmers' local attachments will gradually weaken.
The Japanese government's "multifunctional payment" policy-which financially supports the cooperative activity of farmers and non-farmers in rural areas-is being enforced (MAFF, 2015). However, sudden changes in such a policy have occurred frequently in the past due to shifts in government and national fiscal constraints. Therefore, there is no confirmation that the present subsidy system for collective resource management activities in rural areas will continue in the future.
Therefore, the implementation of LC projects should be premised on a regional agricultural plan that encourages non-farmers not to withdraw from agricultural activities and that supports consensus building in the local community.
CONCLUSION
In this study, we evaluated LC, a mainstay policy usually assessed through its direct effects on agricultural management and agricultural productivity, from the perspective of the sustainability of the local community. We then searched for secondary negative effects resulting from LC, motivated by the need for an environmental policy, implemented alongside LC, that guides the progress of cooperation between farmers and non-farmers. The results were as follows.
First, it was confirmed that the proportion of non-farmers whose local attachment was lowered by LC was slightly higher than that of farmers. Second, non-farmers' evaluations that LC attracts young farmers were low. However, this opinion was much more noticeable in non-farmers who had quit agriculture recently than in the generation that left agriculture because of LC. Our knowledge of the relationship between agricultural land size and community empowerment is not sufficient; however, there have been some case studies on the subject (Li, Leng, & Yuan, 2019).
There are also many other problems worthy of attention, such as the abolition of the subsidy for paddy farming, the decrease in rice prices, and the crisis of management continuation caused by the shrinking labour force of large-scale agricultural management entities. We interviewed some large-scale management farmers in other areas of this prefecture and confirmed that there were concerns about the expansion of the management area. However, this remains an estimate at present, because sufficient data are not available to clarify the problem structure and to identify causal relationships. These subjects should be approached through qualitative research, such as interview investigations, in future work.
An Analytical Approach for Estimating Fossil Record and Diversification Events in Sharks, Skates and Rays
Background
Modern selachians and their supposed sister group (hybodont sharks) have a long and successful evolutionary history. Yet, although selachian remains are considered relatively common in the fossil record in comparison with other marine vertebrates, little is known about the quality of their fossil record. Similarly, only a few works based on specific time intervals have attempted to identify major events that marked the evolutionary history of this group.

Methodology/Principal Findings
Phylogenetic hypotheses concerning modern selachians' interrelationships are numerous but differ significantly and no consensus has been found. The aim of the present study is to take advantage of the range of recent phylogenetic hypotheses in order to assess the fit of the selachian fossil record to phylogenies, according to two different branching methods. Compilation of these data allowed the inference of an estimated range of diversity through time, and evolutionary events that marked this group over the past 300 Ma are identified. Results indicate that with the exception of high taxonomic ranks (orders), the selachian fossil record is by far imperfect, particularly for generic and post-Triassic data. Timing and amplitude of the various identified events that marked the selachian evolutionary history are discussed.

Conclusion/Significance
Some identified diversity events were mentioned in previous works using alternative methods (Early Jurassic, mid-Cretaceous, K/T boundary and late Paleogene diversity drops), thus reinforcing the efficiency of the methodology presented here in inferring evolutionary events. Other events (Permian/Triassic, Early and Late Cretaceous diversifications; Triassic/Jurassic extinction) are newly identified. Relationships between these events and paleoenvironmental characteristics and other groups' evolutionary history are proposed.
Introduction
Modern selachians (Neoselachii) represent a diversified clade of marine vertebrates encompassing all living sharks (about 500 described species) and batoids (rays and skates, about 630 described species) as well as some extinct groups. Known with certainty since the Early Permian [1], neoselachians have developed a wide range of lifestyles, modes of reproduction and feeding strategies throughout their long and successful evolutionary history [2][3][4]. However, preservation of neoselachian remains in the fossil record is limited by the cartilaginous nature of their skeleton. In fact, neoselachian fossil remains mainly consist of isolated oral teeth (vertebrae, scales, fin spines and rostral teeth are also occasionally encountered) as a result of their polyphyodonty (continuous shedding and replacement of teeth), although exceptionally preserved skeletons are known from a few localities [5]. Thus, taxonomic identifications and classifications rest almost solely on dental morphologies, and the attribution of some taxa to higher taxonomic ranks is sometimes difficult as a consequence of the reduced number of characters available (compared with whole skeletons) and of morpho-functional convergences. Nevertheless, it is accepted that teeth generally provide a set of morphological characters that frequently allows their identification at lower taxonomic ranks [5][6], commonly down to the species level.
Although selachian fossil remains are regarded as relatively common in comparison with other marine vertebrates, no attempt has been made to assess the quality of the selachian fossil record reported in various works [5,[24][25][26]. In addition, little is known about the events that have marked the evolutionary history of this group: recent studies either measured these over large geological periods using standing diversities [27] or focused on particular time intervals (Jurassic, K/T boundary), using evolutionary rates and/or subsampling methods [28][29]. The aim of the present study is to take advantage of the various recent molecular phylogenetic hypotheses (years 2003-2012) in order to assess the fit of the selachian (i.e. neoselachians and hybodonts, as used here) fossil record (recently updated by one of us [26]) to phylogenies according to different branching methods. This allowed an estimate of the diversity patterns of this group for orders, families and genera through time to be gained, along with the identification of various evolutionary events that marked this group over the past 300 Ma.
Phylogenetic Framework
The taxonomic level used is critical for the results obtained from palaeobiodiversity studies, and the consequences of the taxonomic level used on resulting patterns, along with the corresponding issues, have been discussed [30][31][32]. Although high taxonomic levels (orders, families) are little affected by preservation-related problems, they are less informative for congruence-testing methods, and the use of higher taxa obscures a large part of the phyletic and diversity patterns that would be observed at lower taxonomic levels. On the other hand, the use of species databases raises numerous issues concerning the fossil species concept (fossil species infrequently represent true biological species [33]) and synonymies, in addition to the fact that described fossil species represent only a small fraction of the genuine fossil species diversity [34]. Moreover, it has been shown that the genus is a much more reliable rank on which to base biodiversity analyses [32,35]. Consequently, the genus level is the lowest taxonomic rank used in the study presented here, and higher taxonomic levels (orders, families) are used for more general considerations.
Due to the scarcity of comprehensive genus-based phylogenies of modern elasmobranchs (resolving the phyletic relationships of sharks, rays and skates altogether), the phylogenetic framework used here is a compilation of different phylogenies found in the literature for each order, family and genus based on living taxa (see File S1). These intra-ordinal phylogenetic relationships were included within six cladistic trees corresponding to the six main phylogenies tested (Fig. 1) that comprise inter-order relationships found in recent molecular works devoted to neoselachians: DOU [11], MAI [3], HUM [21], HEI [15], MAW [14] and MAW-m [14] (with modification of shark interrelationships after [22]). A seventh recent molecular phylogenetic framework (NAY [23]) was considered as it provides genus-level neoselachian interrelationships. Following the common hypothesis that hybodont sharks are the sister group to all neoselachians [16][17], the former were included in the phylogenetic hypotheses in stem position, with intrarelationships following those of [17]. These order-level phylogenies, reduced to cladistic trees, were combined with fossil taxa that were added in unresolved positions (polytomies) according to their systematic position [5,26]; taxa of uncertain affinities were left in basal position within their order, super-order or higher taxonomic rank. Therefore, seven phylogenetic hypotheses were available for each taxonomic level (genus, family and order). Data on the selachian fossil record, comprising first and last occurrences of the recent and extinct taxa (513 genera, 89 families and 14 orders) included in this analysis, are from [26]. See File S1 for details on the phylogenetic hypotheses used, File S2 for the stratigraphic framework and File S3 for the observed and inferred fossil record.
Branching Methods
Two branching approximations can be considered for plotting the fossil record on phylogenetic relationships drawn from a cladogram (Fig. 2A). The first one, referred to here as the 'Conventional Branching Method' (CBM) (Fig. 2B), strictly respects cladistic rules and is broadly used in studies dealing with congruence-testing methods. According to this method, sister groups originate from a common ancestor from which they subsequently diverge, thus implying a coeval origination age for the two lineages. Consequently, when such a condition is not observed in the fossil record (the origination age of one of the lineages is very often older than the other), the time gap between the two lineages must be 'filled in' in order to fit the cladistic hypothesis (Fig. 2A), and the fossil range added to the youngest lineage is referred to as a 'ghost range' (Fig. 2B). In addition to the CBM, we considered another branching method (Fig. 2C) that respects exactly the same phylogenetic relationships drawn from the same cladogram (Fig. 2A) but is thriftier in terms of added ghost range. Contrary to the CBM, this method considers that the divergence age of a lineage can be younger than the first occurrence date of its sister group, and that the former can descend directly from the latter. The taxa considered (branches) are regarded as pools representing a number of taxonomic entities of intergrading morphs that vary through time but are grouped together according to the taxonomy and classification rules used in paleontological studies (typological concept). Accordingly, taxon A is branched directly from taxon B (Fig. 2C, 'node' 4) with no added stratigraphic range for the former. However, although all representatives of pool B are grouped together in the systematic conception, those of the oldest forms (grey box) can be considered as belonging to either A or B.
Although this method seems to contradict the concept of a clade by introducing paraphylies, it is justified here for supraspecific-level fossil taxa because it provides a lowest estimated diversity, especially considering the quality and nature of the selachian fossil record. Moreover, this branching method does not artificially increase the amount of ghost range when dealing with groups with poorly resolved phylogenies (numerous polytomies), as in elasmobranchs. Similarly, although E diverges before D in the phylogenetic hypothesis, E seems to emerge from D (Fig. 2C, 'node' 2) even though the first occurrence of the former is younger than that of the latter. It is necessary, however, that 'node' 2 be older than (or at the very least contemporaneous with) 'node' 3 and younger than 'node' 1, in order to respect the phylogenetic hypothesis induced by the cladogram, thus implying the addition of a ghost range at the base of E. This branching method is referred to here as the 'Direct Descendence Branching Method' (DDBM). Although it may allow paraphylies sensu stricto, this branching method takes account of the variability of taxonomic classification and respects the divergence order of each node proposed in the phylogenetic hypothesis, as opposed to the conventional method, which retains and induces polytomies. Conversely, the CBM gives more credit to the phylogenies and requires the addition of numerous stratigraphic ranges for each clade to the detriment of the observed fossil record.
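The contrast between the two methods can be reduced to a toy calculation of the ghost range added to a taxon. The ages and the `min_divergence_age` constraint below are hypothetical simplifications of the situations in Fig. 2; ages are in Ma, larger values being older.

```python
def cbm_ghost(first_a, first_b):
    """CBM: sister lineages share an origination age, so the lineage
    with the younger first occurrence gains a ghost range back to
    the older first occurrence."""
    return abs(first_a - first_b)

def ddbm_ghost(first_taxon, min_divergence_age):
    """DDBM: a taxon may branch directly from within its sister pool,
    so a ghost range is added only when the divergence order of the
    cladogram forces its origin to predate its first occurrence
    (as for taxon E at 'node' 2 in Fig. 2C)."""
    return max(0, min_divergence_age - first_taxon)

# Hypothetical sister pair: B first appears at 250 Ma, A at 210 Ma.
assert cbm_ghost(250, 210) == 40   # A gains a 40 Ma ghost range
assert ddbm_ghost(210, 0) == 0     # A branches from inside B: no ghost
assert ddbm_ghost(180, 200) == 20  # divergence order forces a 20 Ma ghost
```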
It is then possible to infer diversity curves (Fig. 2D) by compiling the number of taxa per time bin, including corresponding ghost ranges, according to the overestimating (CBM) and underestimating (DDBM) branching methods. Thus, a domain constrained by the lowest estimated diversity values using the DDBM (lower border) and by the highest estimated diversity values using the CBM (upper border) can be identified (grey zone). This zone can be considered a Genuine Diversity Domain (GDD), which should include 'true' taxic diversity values, and the gap between the standing diversity and the GDD borders is indicative of the quality of the selachian fossil record. Thus, when such a gap is large, the fossil record is likely to be relatively incomplete, particularly if the lower border is much higher than the standing diversity (suggesting a lack of overlap between the last and first occurrences of two sister taxa). In addition, when the DDBM and CBM diversity curves are superimposed, branching methods are likely to have little influence on index values, i.e. the first occurrences of sister taxa are roughly coeval. Conversely, strongly diverging DDBM and CBM diversity curves suggest a large number of taxa of limited temporal distribution.
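A minimal sketch of how such curves are assembled: count, per time bin, the taxa whose stratigraphic range (observed, or extended by ghost ranges) overlaps the bin. All ranges and bin edges below are hypothetical; ages are in Ma, larger values being older.

```python
def diversity_per_bin(ranges, bins):
    """Count taxa whose (first, last) occurrence range overlaps
    each time bin given as (old_edge, young_edge)."""
    return [sum(1 for first, last in ranges
                if first >= young and last <= old)
            for old, young in bins]

# Hypothetical observed ranges, and the same ranges with CBM ghost
# ranges extending some origins back in time.
observed = [(250, 200), (210, 150), (180, 100)]
with_ghosts = [(250, 200), (250, 150), (200, 100)]

bins = [(260, 220), (220, 180), (180, 140), (140, 100)]
lower = diversity_per_bin(observed, bins)     # DDBM-like lower bound
upper = diversity_per_bin(with_ghosts, bins)  # CBM upper bound
# The Genuine Diversity Domain lies between `lower` and `upper`.
```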
We used the Relative Completeness Index (RCI) metric [36] for estimating the completeness of the selachian fossil record under both branching methods, in order to compare estimated diversities according to the seven phylogenetic hypotheses considered. Because this metric considers the relative ratio of ghost to observed ranges, the lower the RCI score, the better the fit of the fossil record to the phylogenetic hypothesis. We use the terms RCI and RCI' to differentiate calculations using the conventional branching method (RCI sensu [36]) from those using the direct descendence branching method, respectively. For each hypothesis and taxonomic level, diversity curves and scores (RCI, RCI') were computed for unresolved trees that retain all the polytomies induced by phylogenetic uncertainty and/or the addition of fossil taxa with unresolved phylogenetic relationships. Subsequently, RCI and RCI' values were also computed for resolved trees in which polytomies were randomly resolved using routine replications in order to reduce the amount of ghost range. We retained the lowest RCI and RCI' scores, indicating the best fit of resolved phylogenetic relationships to the fossil record.
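Read as described here (a relative ratio of summed ghost-range length to summed observed-range length, with lower scores meaning a better fit), the index can be sketched as follows. The ranges and ghost lengths are hypothetical, and the exact normalisation used in [36] may differ; any monotone rescaling preserves the ranking of hypotheses.

```python
def rci_score(observed_ranges, ghost_lengths):
    """Ratio of summed ghost-range length to summed observed-range
    length, as a percentage; lower score -> better fit of the fossil
    record to the phylogenetic hypothesis."""
    total_obs = sum(first - last for first, last in observed_ranges)
    return 100 * sum(ghost_lengths) / total_obs

observed = [(250, 200), (210, 150), (180, 100)]  # Ma, hypothetical
ghosts_cbm = [0, 40, 30]   # ghost ranges implied under the CBM
ghosts_ddbm = [0, 0, 20]   # ghost ranges implied under the DDBM

rci = rci_score(observed, ghosts_cbm)         # ~36.8 -> worse fit
rci_prime = rci_score(observed, ghosts_ddbm)  # ~10.5 -> better fit
```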
All data analyses were made using the R Statistical software [37] with the package APE [38]. Programming codes are available upon request to the authors.
Fit of Phylogenies to Stratigraphy
RCI and RCI' scores computed for unresolved trees (Table S1) according to each hypothesis and taxonomic level are, as expected, significantly higher than the corresponding RCI and RCI' values for resolved trees. Similarly, RCI scores are lower for data using higher taxonomic ranks as a result of the smaller number of taxa encompassed, thus limiting the amount of ghost lineages (e.g. compared with generic data). The routine replications performed to randomly resolve polytomies produce a great number of RCI and RCI' values (with minimal values reached for few of the computed resolved trees; see Figure S1). For each analysis, we retained the lowest RCI and RCI' values for resolved trees, bearing in mind that minimal values depend on the fixed number of replications. However, the computed phylogenetic relationships that best fit the fossil record have no real biological meaning and only attest to the minimal value allowed by the phylogenetic framework and fossil record (as opposed to the calculation of the GER, where only the fossil record determines the minimal value of gaps [39]). Moreover, the minimal RCI and RCI' values for resolved trees depend on the number of replications performed and must be considered with caution when the number of possible resolved trees reaches very large values (i.e. generic data). Thus, only RCI and RCI' values for unresolved trees are retained for further comparisons and discussion.
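The random resolution of polytomies described above can be sketched as repeatedly pairing two randomly chosen children of a multifurcating node until it is binary, then retaining the minimum score over many replicates (the paper used R with the APE package; the Python version below is an illustration). Trees here are nested lists, and the scoring function is a hypothetical stand-in for the RCI calculation.

```python
import random

def resolve(node):
    """Randomly resolve polytomies into a binary subtree by
    repeatedly pairing two children, recursing into subtrees."""
    if not isinstance(node, list):
        return node  # leaf taxon
    children = [resolve(c) for c in node]
    while len(children) > 2:
        a, b = random.sample(range(len(children)), 2)
        pair = [children[a], children[b]]
        children = [c for i, c in enumerate(children) if i not in (a, b)]
        children.append(pair)
    return children

def best_score(tree, score, replicates=1000):
    """Retain the lowest score over random resolutions, mirroring
    the routine replications described in the text."""
    return min(score(resolve(tree)) for _ in range(replicates))

# A four-taxon polytomy, resolved to one random binary topology:
binary = resolve(["A", "B", "C", "D"])
```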
Results suggest that three out of the seven phylogenies stand out: the MAI [3], HUM [21] and DOU [11] phylogenies received the best RCI/RCI' scores, depending on the taxonomic level considered. No correlations were evidenced by statistical tests (see Table S2) between RCI and RCI' values for order and family data, suggesting that branching methods influence the distribution of RCI scores at higher taxonomic levels. Conversely, RCI and RCI' values appear correlated for genus data. The other correlation observed (RCI scores for family data vs. RCI scores for order data) may be explained by a roughly similar amount of ghost range between family and order data for a given tree topology (the fossil record for a given order is often represented by a single family). One might expect the resolution of the tested phylogenetic hypotheses (number of tree nodes) to seriously influence RCI and RCI' values. However, RCI and RCI' values do not appear correlated with the number of tree nodes in most cases (see Table S2), with the exception of two correlations, although the one in which the number of tree nodes is positively correlated with RCI scores is not strongly statistically supported. In addition, the branching method used does not appear to influence this relationship, as such a correlation was found for both branching methods. Given the inconsistency in their distribution, these correlations remain difficult to interpret.
Quality of the Selachian Fossil Record
All generated diversity curves were plotted (Fig. 3) for the three datasets used (orders, families, genera; see File S3) along with the modern taxic and observed diversity (standing diversity) curves. Figure 3A illustrates these results for genus data. These indicate that the GDD is rather well constrained (close upper and lower borders) from the Permian until the Jurassic/Cretaceous boundary and, to a lesser extent, in the Cenozoic, suggesting that the appearances of sister taxa in the fossil record are more or less contemporaneous, whereas the Cretaceous interval (particularly the Early Cretaceous) shows the greatest uncertainty (depending on the branching methods and phylogenetic hypotheses). This suggests that a large number of phylogenetic relationships of Cretaceous genera are unresolved, that numerous genera must be reconsidered and/or that their stratigraphic occurrences are too restricted in time. With the exception of the Early Jurassic interval, the gap between standing generic diversity and GDD values is large from the Triassic until the Recent. This is particularly marked for the Late Cretaceous-Middle Eocene interval, where observed fossil data represent as little as nearly 30% of the estimated diversity (considering values of the lower GDD boundary). Patterns obtained from the family-based dataset (Fig. 3B) differ in some ways. While the GDD is narrow in the Permian-Early Jurassic interval, the gap between the curves resulting from the CBM and DDBM is marked throughout most of the Early Jurassic-Neogene interval. Consequently, the family-level selachian fossil record can be regarded as globally complete if one considers the curves generated under the DDBM hypothesis (lower GDD boundary), with the exception of the latest Cretaceous-earliest Paleogene interval. However, the CBM suggests that family diversity is poorly known throughout the Jurassic-Paleogene interval, with up to about half of the diversity yet to be discovered.
With the exception of the late Jurassic-lower Cretaceous interval (Fig. 3C), the gap between the lower GDD boundary and standing ordinal diversity is reduced, indicating a reasonably good fossil record for this taxonomic rank (although this gap is greater in the pre-mid Jurassic if one considers the upper GDD boundary). The thickness of the GDD, although important in the Early-Middle Jurassic interval, indicates rather well constrained estimated diversity values regardless of the branching method used, a pattern that is possibly attributable to the limited branching possibilities for orders in comparison to lower taxonomic ranks, due to the smaller number of taxa considered. Despite the effort put into inferring diversity patterns for high taxonomic ranks (orders, families), these results should be taken with caution because high taxonomic ranks are less informative than the genus level (see the Method section and references therein). This is particularly true for some early Mesozoic selachian taxa whose affinities at higher taxonomic levels are uncertain.
Major Events in the Selachian Evolutionary History
Percentages of diversity variation through time were computed (Fig. 4) using the observed and inferred fossil record, with the usual formula: 100 × (N(t+1) − N(t)) / N(t), where N(t) is the number of taxa at time t. Diversity events are represented by positive or negative peaks, with corresponding boxes indicating the range of variation across the phylogenetic hypotheses for a given branching method. Thus, when the observed diversity variation falls below the inferred values, the diversity event is expected to be underestimated by the observed standing diversity, and conversely. Again, the timing and amplitude of these events vary according to the branching method used and the phylogenetic hypothesis considered, respectively. Basically, inferred diversification events are coeval between CBM and DDBM when first occurrences are simultaneous in the fossil record, whereas when they are gradual, diversification ages inferred from CBM are older than those inferred from DDBM.
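The variation formula above translates directly into code; the taxon counts below are hypothetical:

```python
def diversity_variation(counts):
    """Per-interval percentage change: 100 * (N[t+1] - N[t]) / N[t].
    Positive peaks mark diversifications, negative peaks extinctions."""
    return [100.0 * (counts[t + 1] - counts[t]) / counts[t]
            for t in range(len(counts) - 1)]

# Hypothetical taxon counts over four successive time intervals
print(diversity_variation([10, 25, 25, 15]))  # -> [150.0, 0.0, -40.0]
```

Note that the measure is undefined when N(t) is zero, which is why rates cannot be computed at the very start of a record (the edge effect mentioned below).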
Three major diversification events (arbitrarily, above 50%) can be identified from each of the three datasets (genera, families and orders). The first (1) is a marked diversification around the Permian/Triassic boundary, the three datasets indicating a minimum diversity increase of 100% (over 1000% with the DDBM for generic data). However, inferred ordinal CBM values do not support a diversification at this time, but earlier (this branching method assumes a coeval origination for most neoselachian clades). Indeed, a mid Permian diversification is suggested by all datasets, but diversification rates cannot be calculated for it as a result of the edge effect [40]. A second diversification event (2) is observable in the earliest Jurassic (Hettangian/Sinemurian). Although this is marked for all datasets and branching methods, the observed diversity increase appears underestimated for both ordinal and familial data, but overestimated for generic data. The last diversification event (3) (Pliensbachian/Toarcian) is of higher or similar amplitude depending on the dataset, and is recognized regardless of the datasets and methods considered. Two additional diversification events are observable from observed familial and generic data: the Lower/Middle Triassic boundary and the Middle/Upper Triassic boundary. However, although the former is supported by inferred values, the branching methods diverge concerning its amplitude. Similarly, the latter event appears overestimated by observed values, as DDBM and CBM values suggest either a minor diversification event or no event at all. The Cretaceous period is characterized by a series of moderate to minor evolutionary events rather than a single major one. The tempo of these evolutionary events differs slightly according to the branching method. The CBM suggests a marked diversification (genera and families) in the earliest Cretaceous, followed by a series of diversity peaks in the Cretaceous, including a peak in the Cenomanian.
However, generic diversity values inferred from the DDBM indicate a gradual increase until the earliest Late Cretaceous (according to the phylogenetic hypotheses considered), where a marked diversity peak is present (Cenomanian; 30-40% diversity increase), followed by another distinct peak in the Campanian/Maastrichtian (20% increase). The DDBM on the family dataset suggests a succession of stepwise increases throughout the Cretaceous, with a marked peak at the Santonian/Campanian boundary (around 25% increase). In addition, two diversity peaks represented in observed data (Albian and late Paleocene) appear overestimated when phylogenetic hypotheses are considered. It should be noted that, although the Cretaceous/Paleogene extinction is represented in the three datasets, its amplitude appears overestimated in observed data (e.g. 12-15% inferred vs. 40% observed diversity drop for generic data), owing to the higher overall diversity estimated by the phylogenies. This extinction is thus comparable, at genus and family levels, to another diversity drop observed for both datasets at the Triassic/Jurassic boundary. Other observed diversity drops remain largely overestimated and/or represent minor events. Finally, inferred Cenozoic ordinal and familial diversities stagnate or increase slightly, as opposed to inferred generic diversity values, which show a series of minor late Eocene-Oligocene drops followed by roughly constant values until the Recent. Obviously, the peak observed for genus and family data in recent times appears largely overestimated, regardless of the branching method used.
Discussion
A common hypothesis concerning neoselachian phylogenetic interrelationships is the basal dichotomy between the shark clades Galeomorphii (Orectolobiformes, Carcharhiniformes, Lamniformes, Heterodontiformes) and Squalomorphii (Hexanchiformes, Squaliformes, Pristiophoriformes, Squatiniformes). Globally, RCI and RCI' values do not clearly indicate whether the fossil record supports this basal dichotomy of sharks, which is represented in four of the seven hypotheses (e.g. MAI [3], HEI [15], MAW-m [14,22], NAY [23]). Even though one of them, MAI [3], received some of the best scores at the family and order levels, DOU [11], which only supports the Squalomorphii clade, received the best RCI and RCI' scores for genus data. This incongruence remains unclear and may reflect: (1) the incompleteness of the fossil record, especially during the first steps of the neoselachian radiation; (2) irrelevant resolutions of phylogenetic relationships; or (3) a combination of these two factors. Actually, the main difference between the latter phylogenies and the former lies in the resolution of interrelationships among batoids. While this group is well constrained in HEI [15] and MAW-m [14,22], there is an important polytomy in MAI [3]. Similarly, the other phylogenetic hypothesis that best fits the fossil record (RCI for family and order data) is HUM [21]. It considers the Hexanchiformes to be the sister group of all other sharks, and the clades Galeomorphii and Squalomorphii are fragmented into a clade gathering Carcharhiniformes and Lamniformes and a second clade with Heterodontiformes, Squaliformes, Pristiophoriformes and Squatiniformes, the Orectolobiformes being placed in polytomy between the two.
Again, although this shows a good fit to stratigraphy, a possible reason for the low RCI values attributed to this phylogeny, along with the position of the Orectolobiformes, is the relationships within the clade Batomorphii, with the Rhiniformes, pristids and Torpediniformes being placed in an unresolved position. It is thus likely that this topology received low RCI scores with the CBM because it follows the stratigraphic order of appearance of shark clades while leaving a degree of freedom concerning batoid interrelationships. In addition, there is a possible relationship between branching methods and phylogenies at high taxonomic ranks: the tree topologies including the groups Galeomorphii and Squalomorphii are concordant with the DDBM, whereas the topology of HUM [21] is in agreement with the CBM. However, DOU [11] received the best RCI and RCI' scores for genus data (although other phylogenies received very close index values), suggesting that branching methods do not influence congruence values at lower taxonomic ranks. Again, the large polytomy within the batoid clade is most likely responsible for these low congruence index values.
However, although the fossil record for orders appears rather complete with the exception of the Middle-Late Jurassic interval, the results indicate that the standing diversity of lower taxonomic ranks (families and, to a greater extent, genera) is far from complete. This may be regarded as contradictory to the fact that selachian fossil remains have been collected and studied for over two centuries. However, such a gap between observed and inferred diversities is conceivable, as studies based on bulk-sampling and washing-sieving techniques became common only about fifty years ago, whereas older studies relied exclusively on surface-sampling of conspicuous remains. On the basis of the high number of Lazarus taxa, the Mesozoic is considered a period of correspondingly poor fossil record for neoselachians [27], a pattern also observed here for selachians in general (here including neoselachians and hybodonts). This is particularly true for the Jurassic interval, where both branching methods on family and genus data indicate large gaps between standing and inferred diversities. This is likely the result of the geographical restriction of studies on Jurassic selachians, which are almost exclusively limited to European localities (mainly Germany and England). Similar remarks can be made, to a lesser extent, for Early Cretaceous genus and family data (with the exception of family data using the DDBM), but the low diversity of sampled paleoenvironmental facies may also be responsible for such a gap. These two parameters, along with the uncertain affinities of numerous taxa, are also likely to influence the results for Triassic diversity. Causes for the differences between observed and inferred generic diversities in the Late Cretaceous and mid-Eocene are less straightforward, as numerous corresponding faunas have been reported from a variety of facies, geographical areas and environmental realms.
However, this period corresponds to the highest inferred selachian diversity, and thus sampling effort may simply not have been sufficient to cover such diversity.
Few recent studies, restricted to the Mesozoic interval, have attempted to identify key events in selachian evolutionary history. Recent studies suggested a diversification peak in the Toarcian (late Early Jurassic) [29], as well as a significant neoselachian diversification in the Early Jurassic and possibly Middle Jurassic, and a second, larger phase of diversification through the mid to Late Cretaceous [27,41]. The results presented here, however, indicate a phase of radiation in the mid Permian. This should be taken with caution, as its timing depends only on the age of the oldest neoselachian remains known to date ('Synechodus' antiquus [1]), and further discoveries may modify this assumption. The second diversification event inferred at both genus and family levels takes place around the Permian/Triassic boundary. Little is known about pre-Triassic neoselachian and hybodont evolution patterns, but it appears that this period corresponds to the first major radiation for these groups, and it is noteworthy that most of the early shark groups went extinct by the late Permian (e.g. bransonelliforms, stethacanthids, symmoriids, petalodontiforms). It is thus likely that this radiation was an opportunistic response to the extinction of early shark groups, with neoselachians and hybodonts probably filling ecological niches previously occupied by the former. The Triassic shark diversity plateau is followed by two major Early Jurassic diversification events (Hettangian and Toarcian) that have been reported in previous studies using different methods. Neoselachian diversifications in the Rhaetian [42] and Hettangian [27] have been reported, as well as a diversity peak in the Toarcian [29]. In these cases, the authors suggest a correlation between rising sea-levels [29,42], warm climatic periods [29] and increasing diversity.
In addition, innovations in body plans and reproduction strategies (oviparity) were mentioned as possible adaptations that favored Early Jurassic selachian radiations [29]. Although these parameters are likely to have played a role in these Early Jurassic selachian evolutionary events, the fact that these events are contemporaneous with the radiation of actinopterygian bony fishes in the Late Triassic and Early Jurassic is striking [43]. Neoselachian adaptations to active predation (jaw suspension, sensory system, vertebra morphology) are key characters/structures that allowed predation on the diversifying early Mesozoic ray-finned bony fishes. It is difficult to assess whether the Cretaceous diversifications took place during a rapid pulse in the Early Cretaceous followed by minor diversification events in the Late Cretaceous (as suggested by the CBM), or during the Early, mid and latest Late Cretaceous (as suggested by the DDBM). The reason for this is that very little is known about (particularly pre-Aptian) Early Cretaceous fully marine selachian faunas. Despite these uncertainties, three time intervals (Berriasian-Hauterivian, Cenomanian and Santonian-Campanian) can be recognized as periods of diversification for selachians, including the appearance of numerous modern selachian clades (many lamniform, squaliform and batoid families). It is noteworthy that the Cenomanian stage also corresponds to an important radiation event for ray-finned fishes [44], which shared similar marine environments with selachians. The Cretaceous/Paleogene boundary is often regarded as the first major extinction event in selachian evolutionary history. During this extinction event, eleven families and one order (Hybodontiformes) went extinct [28] (see also [41,45]).
Although not considering data on hybodont sharks (the authors argued that only a single Maastrichtian species was known), it has been suggested [28] that extinction levels were similar among ecological selachian groups, with the exception of benthopelagic and deep-sea taxa, which were less affected. Whatever the causes of the mass disappearance of selachian taxa at the K/T boundary, it is certain that this extinction event affected the group, but probably to a lower order of magnitude than expected when standing diversity is considered. However, our results suggest a Triassic/Jurassic diversity drop of similar amplitude to the K/T extinction. The vast majority of families and genera concerned by this extinction are hybodont sharks or selachians of doubtful affinities (e.g. Pseudodalatiidae, Homalodontidae, Hueneichthys), as neoselachian groups remain poorly represented. Although numerous works carried out on the Triassic/Jurassic boundary suggest an important extinction (the end-Triassic extinction) for a number of terrestrial (e.g. Therapsida, early amphibians) and marine groups (e.g. most ammonoids, conodonts, most bivalves, xenacanthimorph sharks), no studies on selachians have reported an impact of the end-Triassic extinction on this group. Even if the proposed causes of this mass extinction are numerous [46], the eruption of the Central Atlantic magmatic province [47,48] associated with the break-up of Pangea is likely. The combination of the extinction of a number of shark groups (particularly among hybodont sharks) and the appearance of new ecological niches probably favored the diversification of new shark groups filling these vacant niches, as indicated by the subsequent major Hettangian diversification identified here (see above).
However, no major biotic/abiotic crises corresponding to the series of late Eocene-Oligocene selachian generic diversity drops following the slight early Paleogene recovery have been reported, and no recent studies on Cenozoic selachian diversity have been carried out yet (but see [41]). Studies on Tertiary paleoclimates [49,50] indicate that the early Paleogene corresponds to a period of high atmospheric temperatures, including the Paleocene/Eocene Thermal Maximum, the Early Eocene Climatic Optimum and the mid-Eocene Climatic Optimum, whereas temperatures dropped dramatically in the Bartonian (along with the onset of Antarctic ice sheets) and stayed low in the Oligocene. Thus, a positive correlation between temperatures and selachian diversity may explain the patterns observed for this time interval. Such a correlation has moreover been reported for numerous living marine organisms [51], and it is likely that this relationship also prevailed in the Cenozoic. Similarly, inferred generic diversity keeps decreasing afterwards (familial and ordinal diversities stagnate) until the Recent, as paleotemperatures do [50].
Conclusion
This study presents an innovative methodology for combining phylogenetic hypotheses and stratigraphy using two branching methods (CBM and DDBM), with the purpose of inferring the highest and lowest boundaries of the true selachian taxic diversity and evolutionary history. This was applied to three taxonomic ranks (orders, families and genera) and seven phylogenetic hypotheses over a period encompassing 300 Myr. For the first time, the selachian fossil record is quantitatively assessed, suggesting a globally poor record at lower taxonomic ranks (genera, families) when phylogenetic relationships are considered. We also present the first comprehensive analysis of the major events that are likely to have marked selachian evolutionary history. Some of them were mentioned in previous works using alternative methods (Early Jurassic, mid-Cretaceous, K/T boundary and late Paleogene diversity drops), thus reinforcing the efficiency of the methodology presented here in inferring such evolutionary events. Other events (Permian/Triassic, Early and Late Cretaceous diversifications; Triassic/Jurassic extinction) are identified for the first time.

Figure S1 Example of RCI scores obtained for an unresolved tree and for corresponding resolved trees with replications.
(TIF)
Table S1 RCI (using CBM) and RCI' (using DDBM) scores for the various phylogenetic hypotheses considered. 'Unresolved' corresponds to the best RCI and RCI' scores considering trees with polytomies; 'resolved' indicates best RCI and RCI' scores for trees with resolved polytomies and corresponding number of possible trees. Values in bold indicate best scores for each taxonomic rank. (XLS)
Failure Prognosis of High Voltage Circuit Breakers with Temporal Latent Dirichlet Allocation †
The continual accumulation of power grid failure logs provides a valuable but rarely used source for data mining. Sequential analysis, aiming at exploiting the temporal evolution and exploring the future trend in power grid failures, is an increasingly promising alternative for predictive scheduling and decision-making. In this paper, a temporal Latent Dirichlet Allocation (TLDA) framework is proposed to proactively reduce the cardinality of the event categories and estimate future failure distributions by automatically uncovering the hidden patterns. The aim was to model the failure sequence as a mixture of several failure patterns, each of which was characterized by an infinite mixture of failures with certain probabilities. This state space dependency was captured by a hierarchical Bayesian framework. The model was temporally extended by establishing long-term dependency with new co-occurrence patterns. Evaluation on high voltage circuit breakers (HVCBs) demonstrated that the TLDA model achieved fidelities of 51.13%, 73.86%, and 92.93% in the Top-1, Top-5, and Top-10 failure prediction tasks, respectively, outperforming the baselines. In addition to the quantitative results, we show that the TLDA can be successfully used for extracting time-varying failure patterns and capturing failure associations with a cluster coalition method.
Introduction
With the increasing and unprecedented scale and complexity of power grids, component failures are becoming the norm instead of exceptions [1][2][3]. High voltage circuit breakers (HVCBs) are directly linked to the reliability of the electricity supply, and a failure or a small problem with them may lead to the collapse of a power network through chain reactions. Previous studies have shown that traditional breakdown maintenance and periodic checks are not effective for handling emergency situations [4]. Therefore, condition-based maintenance (CBM) is proposed as a more efficient maintenance approach for scheduling action and allocating resources [5][6][7].
CBM attempts to limit consequences by performing maintenance actions only when evidence is present of abnormal behaviors of a physical asset. Selection of the monitoring parameters is critical to its success. Degradation of the HVCB is caused by several types of stress and aging, such as mechanical maladjustment, switching arc erosion, and insulation level decline. The existing literature covers a wide range of specific countermeasures, including mechanism dynamic features [8][9][10], dynamic contact resistance [11], partial discharge signals [12,13], decomposition gas [14], vibration [15], and spectroscopic monitoring [16]. Furthermore, numerous studies applied neural networks [8], support vector machines (SVM) [17], fuzzy logic [18], and other methods [19] to introduce more automation and intelligence into the signal analysis. However, these efforts were often limited to one specific aspect in their diagnosis of failure conditions. In addition, the requirements for dedicated devices and expertise restrict their ability to be implemented on a larger scale. Outside laboratory settings, field recordings, including execution traces, failures, and warning messages, offer another easily accessible data source with broad failure category coverage. The International Council on Large Electric Systems (CIGRE) recognizes the value of event data and has conducted three world-wide surveys on the reliability data of circuit breakers since the 1970s [20][21][22]. Survival analysis, aiming at reliability evaluation and end-of-life assessment, also relies on the failure records [2,23].
Traditionally, the event log is not considered as an independent component in the CBM framework, as statistical methodologies were thought to be useful only for average behavior predictions or comparative analysis. In contrast, Salfner [24] viewed failure tracking as being of equal importance to symptom monitoring in online prediction. In other fields, such as transactional data [25], large distributed computer systems [26], healthcare [27], and educational systems [28], the event that occurs first is identified as an important predictor of the future dynamics of the system. The classic Apriori-based sequence mining methods [29] and new developments in nonlinear machine learning [27,30] have had great success in their respective fields. However, directly applying these predictive algorithms is not appropriate for HVCB logs for three unique reasons: weak correlation, complexity, and sparsity.
(1) Weak correlation. The underlying hypothesis behind association-based sequence mining, especially for rule-based methods, is a strong correlation between events. In contrast, the dependency between failures on HVCBs is much weaker and probabilistic.
(2) Complexity. The primary objective of most existing applications is a binary decision: whether a failure will happen or not. However, accurate life-cycle management requires information about which failure might occur. The increasing complexity of encoding categories into sequential values can impose serious challenges on the design of analysis methods, which is called the "curse of cardinality".
(3) Sparsity. Despite the cardinality problem, the number of failure types occurring on an individual device is relatively small. Some events in a single case may never have been observed before, which makes establishing statistical significance challenging. The inevitable truncation aggravates the sparsity problem to an even higher degree.
Attempts to construct semantic features of events, by transforming categorical events into numerical vectors, provide a fresh perspective for understanding event data [31,32]. Among the latent space methods, Latent Dirichlet Allocation (LDA) [33], which represents each document as a mixture of topics, each of which ejects words with certain probabilities, offers a scalable and effective alternative to standard latent space methods. In our preliminary work, we introduced LDA into failure distribution prediction [34]. In this paper, we further extend the LDA model with temporal association by establishing new time-attenuation co-occurrence patterns, and develop a temporal Latent Dirichlet Allocation (TLDA) framework. The techniques were validated against data collected in a large regional power grid with regular records over a period of 10 years. Top-N recalls and failure pattern visualization were used to assess the effectiveness. To the best of our knowledge, we are the first to introduce advanced sequential mining techniques into the area of HVCB log data analysis.
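As a rough, hypothetical sketch of the time-attenuation idea (the exponential decay form, the day-based timestamps, the decay constant, and the failure codes below are illustrative assumptions, not the paper's actual TLDA formulation), each ordered pair of failures can be weighted by its temporal separation:

```python
import math
from collections import defaultdict

def decayed_cooccurrence(events, tau=365.0):
    """Time-attenuated co-occurrence weights for one failure sequence.

    events: list of (day, failure_code) in chronological order.
    Each ordered pair (earlier -> later) contributes exp(-dt / tau),
    so failures close in time are coupled more strongly.
    """
    w = defaultdict(float)
    for i, (ti, fi) in enumerate(events):
        for tj, fj in events[i + 1:]:
            w[(fi, fj)] += math.exp(-(tj - ti) / tau)
    return dict(w)

seq = [(0, "motor"), (30, "relay"), (400, "relay")]
w = decayed_cooccurrence(seq)
# ("motor", "relay") accumulates exp(-30/365) + exp(-400/365);
# the nearby pair dominates the distant ("relay", "relay") pair
print(w[("motor", "relay")] > w[("relay", "relay")])  # -> True
```

In a full model, weights of this kind would replace the raw co-occurrence counts that plain LDA implicitly uses, so that long-separated failures contribute less to a shared latent pattern.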
The rest of this paper is organized as follows. The process necessary to transform raw text data into chronological sequences is introduced in Section 2. Section 3 provides details of the proposed TLDA model. The criteria presented in Section 4 are not only for performance evaluation but also show the potential applications of the framework. Section 5 describes the experimental results on the real failure histories of the HVCBs. Finally, Section 6 concludes the paper.
Description of the HVCB Event Logs
To provide the necessary context, the format of the collected data is described below. The HVCB failure dataset was derived from 149 different types of HVCBs from 219 transformer substations in a regional power grid in South China. The voltage grades of the HVCBs were 110 kV, 220 kV, and 500 kV, and the operation time ranged from 1 to 30 years. Most of the logs were retained for 10 years, aligned with the use of the computer management system. Detailed attributes of each entry are listed in Table 1. In addition to the device identity information, the failure description, failure reason, and processing measures fields contain key information about the failure.
Failure Classification
One primary task of pre-processing is to reduce unavoidable subjectivity and compress redundant information. Compared to automatically generated logs, the failure descriptions of HVCBs are created manually by administrators and contain an enormous amount of text information. Therefore, the skill of the administrators significantly influences the results. Underreporting or misreporting the root cause may reduce the credibility of the logs. Only by consolidating multiple information sources can a convincing failure classification be generated. An illustrative example is presented in Table 2. The useful information is hidden in the last part and can be classified as an electromotor failure. Completing this task manually is time-consuming and highly prone to error. Automatic text classification has traditionally been a challenging task. Straightforward procedures, including keyword searches or regular expressions, cannot meet the requirements. Due to progress in deep learning technology, establishing an end-to-end text categorization model has become easier. In this study, the Google seq2seq [35] neural network was adopted, with the decoder part replaced by an SVM. The text processing procedure was as follows: (1) an expert administrator manually annotated a small set of examples with concatenated texts, including the failure description, failure reason, and processing measures; (2) after tokenization and conjunction removal, the labeled texts were used to train a neural network; (3) another small set of failure texts was predicted by the neural network, and wrong labels were corrected and added to the training set; (4) steps (2) and (3) were repeated until the classification accuracy reached 90%; (5) the trained network was used to replace manual work. The preferential classification taxonomy was the exact component location that broke the operation; the failure phenomenon was recorded when no failure location was available. Finally, 36 kinds of failures were extracted from the man-machine interaction.
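The iterative labeling loop of steps (1)-(5) can be sketched as follows; a toy token-count classifier stands in for the seq2seq network with SVM decoder, and all texts and labels are hypothetical:

```python
# Bootstrap labeling loop: train on a small labeled set, predict a new batch,
# fold (expert-corrected) predictions back into the training set, retrain.
from collections import Counter, defaultdict

def train(labeled):
    # per-label token counts act as a stand-in "model"
    model = defaultdict(Counter)
    for text, label in labeled:
        model[label].update(text.split())
    return model

def predict(model, text):
    tokens = text.split()
    return max(model, key=lambda lab: sum(model[lab][t] for t in tokens))

labeled = [("electromotor stalled during closing", "electromotor"),
           ("sf6 gas pressure low alarm", "gas_leak")]
model = train(labeled)
batch = ["closing electromotor burnt", "low sf6 pressure"]
for text in batch:
    guess = predict(model, text)      # an expert would correct wrong labels here
    labeled.append((text, guess))     # corrected items extend the training set
model = train(labeled)                # repeat until the accuracy target is met
print(predict(model, "electromotor failure while closing"))  # -> electromotor
```

The point of the loop is that each round of expert correction is cheap relative to labeling the whole corpus, and the growing training set steadily reduces the correction burden.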
The numbers of different failures were ranked in descending order and plotted on log-log axes, as shown in Figure 1. The failure numbers follow a long-tail distribution [36], making it hard to recall the failures with a lower occurrence frequency.
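A long-tail (roughly power-law) pattern like the one in Figure 1 can be checked by the slope of log(count) against log(rank); the category counts below are hypothetical stand-ins for the 36 extracted failure classes:

```python
import math
from collections import Counter

# Rank-frequency check: a power law appears as a straight line in log-log
# space, with a clearly negative slope (Zipf-like tails sit near -1 or below).
counts = sorted(Counter(
    ["leak"] * 40 + ["motor"] * 20 + ["relay"] * 10 + ["seal"] * 5
).values(), reverse=True)

xs = [math.log(r) for r in range(1, len(counts) + 1)]  # log(rank)
ys = [math.log(c) for c in counts]                     # log(count)
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))  # least-squares slope
print(round(slope, 2))
```

A steep negative slope confirms that a few failure types dominate the log while most types are rare, which is what makes low-frequency failures hard to recall.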
Sequence Aligning and Spatial Compression
The target outputs of the sequence pre-processing are event chains in chronological order. As mentioned earlier, access to the failure data was limited to the last 10 years. Therefore, the visible sequences were bilaterally truncated, creating new difficulties for comparing different sequences. Instead of using the actual failure times, the time origin of each HVCB was set to its installation time to align the different sequences. To mitigate the sparsity problem, spatial compression was applied by clustering failure events from the same substation and the same machine type, as they often occur in bursts. Finally, of the 43,738 raw logs, 7637 items were HVCB-related. After sequence aligning and spatial compression, 844 independent failure sequences were extracted, with an average length of nine. A sequence example can be found in Figure 2: different failures that broke the device operation occurred continually along the time axis.
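The aligning and compression steps can be sketched as follows; the record fields (substation, machine type, installation year) are assumed for illustration and are not the paper's actual schema:

```python
# Sketch of the alignment/compression step: failure times are re-expressed
# relative to installation (aligning bilaterally truncated sequences), and
# events from the same substation and machine type are merged into one
# sequence, since such failures often occur in bursts.
from collections import defaultdict

records = [  # (substation, machine_type, install_year, failure_year, code)
    ("S1", "T100", 2000, 2003, "relay"),
    ("S1", "T100", 2000, 2007, "motor"),
    ("S2", "T100", 2004, 2005, "relay"),
]

sequences = defaultdict(list)
for sub, mtype, installed, failed, code in records:
    sequences[(sub, mtype)].append((failed - installed, code))  # age at failure
for key in sequences:
    sequences[key].sort()  # chronological order by device age

print(sequences[("S1", "T100")])  # -> [(3, 'relay'), (7, 'motor')]
```

Expressing times as device age rather than calendar date is what lets sequences with different observation windows be compared on a common axis.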
Proposed Method
The key idea behind all failure tracking predictions is to obtain the probability estimations using the occurrence of previous failures. The problem is unique because both the training sets and the test sets are categorical failure data. A detailed expression of the sequential mining problem studied in this paper can be summarized as follows: the HVCB failure prognosis problem is a topic of sequential mining concerned with estimating the future failure distribution of a HVCB, based on the failure
Sequence Aligning and Spatial Compression
The target outputs of the sequence pre-processing are event chains in chronological order.As mentioned earlier, the accessibility to the failure data was limited to the last 10 years.Therefore, the visible sequences were bilaterally truncated, creating new difficulties for comparing different sequences.Instead of using the actual failure times, the times of origin of the HVCBs were changed to their installation time to align different sequences.To mitigate the sparsity problem, spatial compression was used by clustering failure events from the same substation of the same machine type, as they often occur in bursts.Finally, of the 43,738 raw logs, 7637 items were HVCB-related.After sequence aligning and spatial compression, 844 independent failure sequences were extracted, with an average length of nine.A sequence example can be found in Figure 2. Different failures that break the device operation continually occurred along the time axis.
Energies 2017, 10, 1913

component location that broke the operation. The failure phenomenon was recorded when no failure location was available. Finally, 36 kinds of failures were extracted from the man-machine interaction.
The numbers of different failures were ranked in descending order and plotted on log-log axes, as shown in Figure 1. The failure numbers satisfy a long-tail distribution [36], making it hard to recall the failures with a lower occurrence frequency.
Sequence Aligning and Spatial Compression
The target outputs of the sequence pre-processing are event chains in chronological order. As mentioned earlier, the accessibility to the failure data was limited to the last 10 years. Therefore, the visible sequences were bilaterally truncated, creating new difficulties for comparing different sequences. Instead of using the actual failure times, the times of origin of the HVCBs were changed to their installation time to align different sequences. To mitigate the sparsity problem, spatial compression was used by clustering failure events from the same substation of the same machine type, as they often occur in bursts. Finally, of the 43,738 raw logs, 7637 items were HVCB-related. After sequence aligning and spatial compression, 844 independent failure sequences were extracted, with an average length of nine. A sequence example can be found in Figure 2. Different failures that break the device operation continually occurred along the time axis.
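A minimal sketch of the aligning and compression steps above; the log field names and dictionary layout are illustrative assumptions, not the utility's actual schema.

```python
from collections import defaultdict

def build_sequences(logs):
    """Align failure logs to installation time and compress bursts.

    Each log is a dict with (illustrative) keys: substation, machine_type,
    failure_code, time, install_time. Events from the same substation and
    the same machine type are clustered into one sequence (spatial
    compression); times are shifted so every sequence starts at its HVCB's
    installation time (t = 0), which aligns the truncated sequences.
    """
    groups = defaultdict(list)
    for log in logs:
        key = (log["substation"], log["machine_type"])
        groups[key].append((log["time"] - log["install_time"], log["failure_code"]))
    # one chronological event chain per (substation, machine type) group
    return [sorted(events) for events in groups.values()]

logs = [
    {"substation": "A", "machine_type": "SF6", "failure_code": "leak",
     "time": 2015, "install_time": 2010},
    {"substation": "A", "machine_type": "SF6", "failure_code": "relay",
     "time": 2012, "install_time": 2010},
    {"substation": "B", "machine_type": "SF6", "failure_code": "leak",
     "time": 2014, "install_time": 2011},
]
sequences = build_sequences(logs)
```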
Proposed Method
The key idea behind all failure tracking predictions is to obtain the probability estimations using the occurrence of previous failures. The problem is unique because both the training sets and the test sets are categorical failure data. A detailed expression of the sequential mining problem studied in this paper can be summarized as follows: the HVCB failure prognosis problem is a topic of sequential mining concerned with estimating the future failure distribution of an HVCB, based on the failure history of itself and the failure sequences of all the other HVCBs, under the limitations of short sequences and multiple categories. This section will present how the TLDA provides a possible solution to this problem by embedding the temporal association into the LDA model.
Latent Dirichlet Allocation Model
LDA is a three-level hierarchical Bayesian model originally used in natural language processing. It posits that each document is modeled as a mixture of several topics, and each topic is characterized by an infinite mixture of words with certain probabilities. An LDA example is shown in Figure 3. A document consists not only of words but also of the topics assigned to the words, and the topic distribution provides a sketch of the document subject. LDA introduces topics as a fuzzy skeleton to combine the discrete words into a document. Meanwhile, the shared topics provide a convenient indicator to compare the similarity between different documents. LDA has had success in a variety of areas by extending the concepts of document, topic, and word. For example, a document can be a gene [37], an image [38], or a piece of code [39], with a word being a feature term, a patch, or a programming word. Likewise, a failure sequence can be treated as a document, and a failure can be recognized as a word. The topics in LDA can be analogous to failure patterns that represent the kinds of failures that cluster together and how they develop with equipment aging. Two foundations of LDA are the Dirichlet distribution and the idea of a latent layer.
Dirichlet Distribution
Among the distribution families, the multinomial distribution is the most intuitive for modeling a discrete probability estimation problem. The formulation of the multinomial distribution is described as:

p(x_1, …, x_k | n, p_1, …, p_k) = (Γ(n + 1) / ∏_{i=1}^{k} Γ(x_i + 1)) ∏_{i=1}^{k} p_i^{x_i} (1)

which satisfies ∑_{i=1}^{k} x_i = n and ∑_{i=1}^{k} p_i = 1. The multinomial distribution represents the probability of k different events for n experiments, with each category having a fixed probability p_i of happening x_i times. Γ is the gamma function. Furthermore, the Maximum Likelihood Estimation (MLE) of p_i is:

p̂_i = x_i / n (2)

which implies that the theoretical basis of the statistic method is MLE estimation of a multinomial distribution. Effective failure prognosis methods must balance the accuracy and details of the adequate grain information. However, suppose that the dataset has M sequences and N kinds of failures. Modeling a multinomial distribution for each HVCB will result in a parameter matrix with the shape of M × N. These per-individual statistics will cause most elements to be zero. Taking the failure sequence in Figure 1 as an example, among the 36 kinds of failure, only 7 have been seen, making it impossible to provide a reasonable probability estimation for the other failures. This is why much of the statistical analysis relies on a special classifying standard to reduce the types of failure, or ignores the independence of the HVCBs. Two solutions are feasible for alleviating the disparities: introduce a priori knowledge or mine associations among different failures and different HVCBs.
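As a numeric illustration of the sparsity problem, the MLE p̂_i = x_i/n assigns exactly zero probability to every failure type a sequence has never shown (the toy counts below are made up):

```python
def mle_multinomial(counts, n_types):
    """Maximum likelihood estimate p_i = x_i / n for a multinomial,
    where counts maps a failure-type index to its observed count."""
    n = sum(counts.values())
    return [counts.get(i, 0) / n for i in range(n_types)]

# a short sequence has seen only failure types 0 and 3 out of 36
counts = {0: 5, 3: 2}
p_hat = mle_multinomial(counts, 36)
```

Every entry except indices 0 and 3 comes out as a hard zero, so the 34 unseen failure types are judged impossible.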
One possible way to introduce a priori knowledge is based on Bayes' theorem. Bayesian inference is a widely used method of statistical inference to estimate the probability of a hypothesis when insufficient information is available. By introducing a prior probability on the parameters, Bayesian inference acts as a smoothing filter. A conjugate prior is a special case where the prior and posterior distribution have the same formulation. The conjugate prior distribution of the multinomial distribution is the Dirichlet distribution, which is:

Dir(→p | →α) = (1 / Δ(→α)) ∏_{i=1}^{k} p_i^{α_i − 1} (3)

with the normalization coefficient being:

Δ(→α) = ∏_{i=1}^{k} Γ(α_i) / Γ(∑_{i=1}^{k} α_i) (4)

similar to the multinomial distribution. Due to the Bayesian rule, the posterior distribution of →p with new observations →x can be proven to be:

p(→p | →x, →α) = Dir(→p | →α + →x) (5)

with the mean being:

E[p_i] = (α_i + x_i) / ∑_{j=1}^{k} (α_j + x_j) (6)

From Equation (6), even the failures with no observations are assigned a prior probability associated with α_i. The conjugate relation can be described as a generative process, shown in Figure 4a:
(1) Choose →θ_i ∼ Dirichlet(→α);
(2) Choose a failure f_ij ∼ Multinomial(→θ_i), where j ∈ {1, 2, 3, …, N_i}.
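The smoothing effect of the posterior mean in Equation (6) can be checked in a few lines; with a symmetric prior, every unseen failure keeps a small non-zero probability (toy counts):

```python
def dirichlet_posterior_mean(counts, alpha, n_types):
    """Posterior mean E[p_i] = (alpha_i + x_i) / sum_j (alpha_j + x_j)
    for a Dirichlet-multinomial model with a symmetric prior alpha."""
    total = sum(counts.get(i, 0) + alpha for i in range(n_types))
    return [(counts.get(i, 0) + alpha) / total for i in range(n_types)]

counts = {0: 5, 3: 2}  # toy counts: 7 observations over 36 failure types
p_post = dirichlet_posterior_mean(counts, alpha=0.5, n_types=36)
```

Here the total pseudo-count is 7 + 36 × 0.5 = 25, so the seen type 0 gets 5.5/25 = 0.22 while each unseen type still gets 0.5/25 = 0.02 instead of zero.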
Latent Layer
Matrix completion is another option for solving the sparsity problem by establishing global correlations among units. The basic task of matrix completion is to fill the missing entries of a partially observed matrix. In sequential prediction with limited observations, predicting the probabilities of failures that have never appeared is such a problem. Using the recommender system as an example, for a sparse user-item rating matrix R with m users and n items, each user has only rated several items.
To fill the unknown space, R is first decomposed into two low-dimensional matrices P ∈ R^{m×f} and Q ∈ R^{n×f} satisfying:

R ≈ PQ^T = R̂ (7)

with the aim of making R̂ as close to R as possible. Then, the rating of user u to item i, R̂(u, i) = r̂_ui, can be inferred as:

r̂_ui = →p_u · →q_i = ∑_{k=1}^{f} P(u, k) Q(i, k) (8)

Many different realizations of Equation (7) can be created by adopting different criteria to determine whether the given matrices are similar. The spectral norm or the Frobenius norm creates the classical singular value decomposition (SVD) [40], and the root-mean-square error (RMSE) creates the latent factor model (LFM) [41]. In addition, regularization terms are useful options to increase the generalization of the model.
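A minimal latent-factor sketch in the spirit of Equations (7) and (8): P and Q are fitted by stochastic gradient descent on the squared error with L2 regularization, one common realization of the LFM; all hyper-parameters and ratings below are arbitrary toy values.

```python
import random

def fit_lfm(ratings, m, n, f=2, lr=0.01, reg=0.02, epochs=1000, seed=0):
    """Factorize a sparse m x n rating matrix as R ≈ P Q^T (Eq. 7).

    ratings: list of observed entries (u, i, r). Returns P (m x f), Q (n x f).
    """
    rng = random.Random(seed)
    P = [[rng.uniform(-0.1, 0.1) for _ in range(f)] for _ in range(m)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(f)] for _ in range(n)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(P[u][k] * Q[i][k] for k in range(f))
            err = r - pred
            for k in range(f):
                pu, qi = P[u][k], Q[i][k]
                P[u][k] += lr * (err * qi - reg * pu)  # gradient step on P
                Q[i][k] += lr * (err * pu - reg * qi)  # gradient step on Q
    return P, Q

def predict(P, Q, u, i):
    """Inferred rating r̂_ui = p_u · q_i (Eq. 8)."""
    return sum(P[u][k] * Q[i][k] for k in range(len(P[0])))

# toy 3 x 3 rating matrix with six observed entries
ratings = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 1), (2, 1, 2), (2, 2, 5)]
P, Q = fit_lfm(ratings, m=3, n=3)
```

After fitting, `predict` also produces estimates for the three entries that were never observed, which is exactly the completion behavior the text describes.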
Analogously, a latent layer with L elements can be introduced between the HVCB sequences and the failures. For M sequences with N kinds of failures, instead of the M N-parameter multinomial distributions described above, M L-parameter multinomial models and L N-parameter multinomial models are preferred, where L failure patterns are extracted. A schematic diagram of the comparison is shown in Figure 5. No direct observations exist to fill the gap between s1 and f3; the connections s1-z1-f3, s1-z2-f3, and s1-z3-f3 provide a reasonable suggestion.
Latent Dirichlet Allocation
The combination of Bayesian inference and matrix completion creates the LDA. Two Dirichlet priors are assigned to the two-layer multinomial distributions. A similar idea is shared by the LFM, where the regularization items can be theoretically deduced from the assumption of Gaussian priors. The major difficulties in realizing LDA lie in the model inference. In LDA, it is assumed that the jth failure in sequence m, f_mj, comes from a failure pattern z_mj, making f_mj satisfy a multinomial distribution parameterized with →ϕ_{z_mj}. In addition, the failure pattern z_mj also originates from a multinomial distribution whose parameters are →θ_m, where m ∈ {1, 2, 3, …, M} and j ∈ {1, 2, 3, …, N_m}. N_m is the failure number in sequence m, and M is the total sequence number. Finally, from the perspective of Bayesian statistics, both →θ_m and →ϕ_k are sampled from two Dirichlet priors with parameters →α and →β. The original Dirichlet-multinomial process can evolve into a three-layer sampling process as follows:
(1) Choose →ϕ_k ∼ Dirichlet(→β), where k ∈ {1, 2, 3, …, K};
(2) Choose →θ_m ∼ Dirichlet(→α), where m ∈ {1, 2, 3, …, M};
For each failure f_mj:
(3) Choose a latent pattern z_mj ∼ Multinomial(→θ_m);
(4) Choose a failure f_mj ∼ Multinomial(→ϕ_{z_mj}).
The probabilistic graphic of LDA is shown in Figure 4b, and the joint probability distribution of all the failures under this model is given by:

p(→f, →z | →α, →β) = p(→f | →z, →β) p(→z | →α) (9)

The learning targets of LDA include →θ_m and →ϕ_k. They can both be inferred from the topic assignment →z. The posterior distribution of →z cannot be directly solved; Gibbs sampling is one possible solution. First, the joint probability distribution can be reformulated as:

p(→f, →z | →α, →β) = ∏_{k=1}^{K} (Δ(→n_k + →β) / Δ(→β)) ∏_{m=1}^{M} (Δ(→n_m + →α) / Δ(→α)) (10)

where →n_k = {n_k^w}_{w=1:V} and →n_m = {n_m^k}_{k=1:K} are the statistics of the failure counts under topic k and the topic counts under failure sequence m, respectively. V is the number of failure types. The conditional distribution of the Gibbs sampling can be obtained as:

p(z_mj = i | →z_−mj, →f) ∝ (n^{f_mj}_{i,−mj} + β_{f_mj}) / ∑_{w=1}^{V} (n^w_{i,−mj} + β_w) × (n^i_{m,−mj} + α_i) (11)

where n^i_{k,−mj} is the number of failures with the index i assigned to topic k, excluding the failure f_mj, and n^i_{m,−mj} is the number of failures in sequence m with topic i, excluding the failure f_mj. After certain iterations, the posterior estimations of →θ_m and →ϕ_k can be inferred with:

ϕ^w_k = (n^w_k + β_w) / ∑_{v=1}^{V} (n^v_k + β_v) (12)

θ^k_m = (n^k_m + α_k) / ∑_{i=1}^{K} (n^i_m + α_i) (13)

Finally, the posterior failure distribution of the ith HVCB can be predicted with:

p(f = w | HVCB i) = ∑_{k=1}^{K} θ^k_i ϕ^w_k (14)
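A compact collapsed Gibbs sampler following the conditional in Equation (11) and the posterior means in Equations (12) and (13); here failure sequences play the role of documents and failure codes the role of words, and all sizes and hyper-parameters are toy values.

```python
import random

def lda_gibbs(docs, V, K, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Collapsed Gibbs sampling for LDA. docs is a list of sequences,
    each a list of failure indices in [0, V). Returns the posterior
    means theta (sequence-pattern) and phi (pattern-failure)."""
    rng = random.Random(seed)
    z = [[rng.randrange(K) for _ in doc] for doc in docs]
    nkw = [[0] * V for _ in range(K)]   # failure counts per pattern
    nmk = [[0] * K for _ in docs]       # pattern counts per sequence
    nk = [0] * K                        # total count per pattern
    for m, doc in enumerate(docs):
        for j, w in enumerate(doc):
            k = z[m][j]
            nkw[k][w] += 1; nmk[m][k] += 1; nk[k] += 1
    for _ in range(iters):
        for m, doc in enumerate(docs):
            for j, w in enumerate(doc):
                k = z[m][j]                      # remove current assignment
                nkw[k][w] -= 1; nmk[m][k] -= 1; nk[k] -= 1
                weights = [(nkw[t][w] + beta) / (nk[t] + V * beta)
                           * (nmk[m][t] + alpha) for t in range(K)]
                k = rng.choices(range(K), weights)[0]   # Eq. (11)
                z[m][j] = k
                nkw[k][w] += 1; nmk[m][k] += 1; nk[k] += 1
    theta = [[(nmk[m][t] + alpha) / (len(docs[m]) + K * alpha)
              for t in range(K)] for m in range(len(docs))]   # Eq. (13)
    phi = [[(nkw[t][w] + beta) / (nk[t] + V * beta)
            for w in range(V)] for t in range(K)]             # Eq. (12)
    return theta, phi

docs = [[0, 0, 1], [1, 0, 0], [2, 3, 3], [3, 2, 2]]  # four toy sequences
theta, phi = lda_gibbs(docs, V=4, K=2)
```

Each row of `theta` is a sequence's pattern mixture and each row of `phi` a pattern's failure distribution; multiplying them as in Equation (14) yields the predicted failure distribution.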
Introducing the Temporal Association into LDA
Even with the promising advantage of finding patterns in categorical data, directly borrowing LDA to solve the sequence prediction problem has some difficulties. LDA assumes that data samples are fully exchangeable: the failures are assumed to be independently drawn from a mixture of multinomial distributions, which is not true here. In the real world, failure data are naturally collected in time order, and different failure patterns evolve. So, it is important to exploit the temporal characteristics of the failure sequences.
To introduce the time attributes into LDA, we first assumed that a future failure is most related to the other failures within a time slice. Instead of using the failure sequences of the full life cycle, the long sequences were divided into several sub-sequences by a sliding time window with width W. The sub-sequences may overlap with each other. Under this assumption, a simple way to use LDA on a time series is to directly exploit the pattern distributions →θ_m in different time slices. However, this approach does not consider the dependence among different slices. In the LDA model, the dependence among different sub-sequences can be represented by the dependency among the pattern distributions. A modified probabilistic graph is shown in Figure 6, where S_m is the number of sub-sequences in sequence m, N_ms is the number of failures in sub-sequence s, and →u_ms is the topic distribution of a specified sub-sequence.

Due to the lack of conjugacy between Dirichlet distributions, the posterior inference of Equation (15) can be intractable. Simplifications, such as the Markov assumption and specified conditional distributions, can make the posterior distribution tractable [42,43]. However, the formulation does not need to be Markovian, and the time dependency can still be complicated. To overcome this problem, an alternative method of creating a new co-occurrence mode is proposed to establish the long-term dependency among different sub-sequences. Specifically, from Equations (12) and (13), the failures that occur together are likely to have the same failure pattern. In other words, co-occurrence is still the foundation for deeper pattern mining in LDA. Therefore, instead of specifying the dependency among the topic distributions, as shown by the dotted line in Figure 6, a direct link was constructed between the current and earlier failures by adding the past failures into the current sub-sequence with certain probabilities. Additionally, the adding operation should embed the temporal information by assigning a higher probability to the closer failures. Based on these requirements, a sampling rate conforming to exponential decay is implemented as follows:

p(x) = exp(−(T − x)/∆) (16)

where the attenuation coefficient ∆ controls the decreasing speed of p(x) along the time interval x, and T is the time at the left edge of the current time window. Figure 7 shows the schematic diagram of the process for constructing new co-occurrence patterns. To predict the future failure distribution, the failures ahead of the current time window are also included. Each iteration generates new data combinations to augment the data. An outline of the Gibbs sampling procedure with the new data generation method is shown in Algorithm 1.

Algorithm 1. Gibbs sampling with temporal co-occurrence construction.
1: Randomly initialize the topic assignment →z;
2: Compute the statistics n^w_k, n^k_m in Equation (11) for each sub-sequence;
3: for iter in 1 to MaxIteration do
4: foreach sequence in Sequences do
5: foreach sub-sequence in sequence do
6: Add new failures into the current sub-sequence based on Equation (16);
7: foreach failure in the new sub-sequence do
8: Draw a new topic assignment from Equation (11);
9: Update the statistics in Equation (11);
10: end for
11: end for
12: end for
13: Compute the posterior means of →θ and →ϕ based on Equations (12) and (13);
14: end for
15: Compute the mean of →θ and →ϕ over the last several iterations.

Based on the above premise, the TLDA framework for extracting the semantic characteristics and predicting the failure distribution is shown in Figure 8. After preprocessing and generating the sub-sequences, an alternating renewal process was implemented between the new co-occurrence pattern construction and the Gibbs sampling. The final average output reflects the time decay presented in Equation (16) due to the multi-sampling process. Finally, Equation (14) provides the future distribution prognosis using the learned parameters of the last sub-sequence of each HVCB.
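The co-occurrence construction step of Algorithm 1 (adding past failures with a time-decaying probability) can be sketched as follows; the exact decay form exp(−(T − t)/∆) is an assumption read off the description of Equation (16).

```python
import math
import random

def augment_subsequence(current, past, T, delta, rng=None):
    """Add each past failure (t, code) into the current sub-sequence with
    probability exp(-(T - t) / delta): the closer a past failure lies to
    the left edge T of the current time window, the more likely it is
    kept. The decay form is a reconstruction of Equation (16)."""
    rng = rng or random.Random(0)
    augmented = list(current)
    for t, code in past:
        if rng.random() < math.exp(-(T - t) / delta):
            augmented.append((t, code))
    return augmented

past = [(t, "leak") for t in range(50)]                      # times 0..49
keep_most = augment_subsequence([], past, T=50, delta=1e9)   # slow decay
keep_none = augment_subsequence([], past, T=50, delta=1e-3)  # fast decay
```

Because the sampling is repeated in every Gibbs iteration, the averaged output inherits the exponential weighting without changing the sampler itself.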
Evaluation Criteria
The output of the proposed system is the personalized failure distribution for each HVCB. However, directly verifying the prediction result is impossible due to the sparsity of the failure sequences. Therefore, several indirectly quantitative and qualitative criteria are proposed as follows.
Quantitative Criteria
Instead of verifying the entire distribution, the prognosis ability of the model was tested by predicting the next upcoming failure. Several evaluation criteria were developed as follows.
Top-N Recall
The Top-N prediction is originally used in recommender systems to check whether the N recommended items satisfy the customers. Precision and recall are the most popular metrics for evaluating the Top-N performance [44]. With only one target behavior, the recall becomes proportional to the precision, which can be simplified as:

Recall = (1/|H|) ∑_{h∈H} |R_N(h) ∩ {T(h)}| (17)

where R_N(h) is the failure set with the Top-N highest prediction probabilities for HVCB h, T(h) is the failure that subsequently occurred, and |H| is the number of HVCBs to be predicted. The recall indicates whether the failure that subsequently occurred is included in the Top-N predictions. Considering the diversity of the different failure categories, Top-1, Top-5, and Top-10 recalls were used.
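Equation (17) in code: the Top-N recall is the fraction of HVCBs whose subsequently occurring failure appears among their N highest-probability predictions (the breaker names, failure codes, and probabilities below are invented):

```python
def top_n_recall(predictions, actual, n):
    """predictions: dict HVCB -> {failure: probability};
    actual: dict HVCB -> the failure that subsequently occurred."""
    hit = 0
    for h, probs in predictions.items():
        top_n = sorted(probs, key=probs.get, reverse=True)[:n]
        if actual[h] in top_n:
            hit += 1
    return hit / len(predictions)

preds = {
    "cb1": {"leak": 0.5, "relay": 0.3, "jam": 0.2},
    "cb2": {"leak": 0.1, "relay": 0.6, "jam": 0.3},
}
actual = {"cb1": "jam", "cb2": "relay"}
```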
Overlapping Probability
The overlapping probability P_o is proposed as an aided index to the Top-1 recall, defined as the probability the model assigns to T(h). For instance, assume a model concludes that the next failure probabilities for a, b, and c are 50%, 40%, and 10%, respectively, and after a while, failure b actually occurs. Then, the overlapping probability is 40%. This index provides an outline of how much the predicted distribution overlaps with the real one-hot distribution, which can also be understood as the confidence. With similar Top-1 recall, a higher mean overlapping probability represents a more reliable result.
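A direct transcription of this index, reproducing the 50/40/10 example from the text:

```python
def overlapping_probability(predictions, actual):
    """Mean probability the model assigned to the failure that occurred."""
    return sum(probs[actual[h]] for h, probs in predictions.items()) / len(predictions)

preds = {"cb1": {"a": 0.5, "b": 0.4, "c": 0.1}}
actual = {"cb1": "b"}  # failure b actually occurs -> P_o = 0.4
```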
These two kinds of quantitative criteria are suitable for different maintenance strategies, considering the limitation of the maintainers' rigor. The Top-N recall corresponds to the strategy of focusing on the Top-N ranked failure types, whereas the overlapping probability corresponds to the strategy of monitoring the failure types whose probabilities exceed a threshold.
Qualitative Criteria
The TLDA can provide explicit semantic characteristics. The results of our algorithm offer a new perspective for deeply understanding the failure modes and their variation trends. For example, different failure patterns can be extracted by examining the failures with high proportions. By considering →θ as a function of time, it is easy to investigate the rise and fall of different failures and how they interact, either from a global perspective or when focusing on one sample. In addition, by introducing the angle cosine distance as a measurement, the similarity between failure p and failure q can be calculated as:

cos(p, q) = ∑_{k=1}^{K} ϕ^p_k ϕ^q_k / (√(∑_{k=1}^{K} (ϕ^p_k)²) √(∑_{k=1}^{K} (ϕ^q_k)²)) (18)

where ϕ^p_k is the probability of failure p under pattern k. Figure 9 depicts the cosine distance computing method. Only the angle between the two vectors affects this indicator. A higher cosine distance often indicates more similar failure reasons.
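Equation (18) as code: each failure is represented by its probability under each of the K patterns (one column of ϕ), and the similarity is the cosine of the angle between two such vectors; the toy ϕ below is invented.

```python
import math

def failure_similarity(phi, p, q):
    """Cosine similarity between failures p and q, where phi[k][w] is
    the probability of failure w under pattern k (Eq. 18)."""
    vp = [row[p] for row in phi]
    vq = [row[q] for row in phi]
    dot = sum(a * b for a, b in zip(vp, vq))
    norm = math.sqrt(sum(a * a for a in vp)) * math.sqrt(sum(b * b for b in vq))
    return dot / norm

phi = [[0.6, 0.3, 0.1],   # pattern 1 over three failure types
       [0.1, 0.2, 0.7]]   # pattern 2
```

Only the angle matters: scaling either column of ϕ leaves the indicator unchanged, matching the description of Figure 9.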
Case Study
The experimental dataset was based on the real-world failure records described in Section 2. After data processing, the failure history of each HVCB was listed as a failure sequence in chronological order. A cross-validation test was used to assess the performance with the following process. Firstly, the last failure of each sequence was separated as the test set. Then, the remaining instances were used to train the TLDA model based on Algorithm 1. For each validation round, the tail part of each failure sequence was randomly abandoned to obtain new test sets.
Parameter Analysis
Hyper-parameters of the proposed method include the number of the failure patterns K, the width of the time window W, and the attenuation coefficient ∆. For all runs of the algorithm, the Dirichlet parameters →α and →β were assigned symmetric priors of 1/K and 0.01, respectively, which are slightly different from the common setting [45]. Gibbs sampling of 300 iterations was sufficient for the algorithm to converge. For each Gibbs sampling chain, the first 200 iterations were discarded, and the average results of the last 100 iterations were taken as the final output. The first set of experiments was conducted to analyze the model performance with respect to K among {25, 30, 35, 40, 45, 50}. Figure 10 shows the results of the Top-1, Top-5, and Top-10 recalls and the overlapping probability under fixed W and ∆ of six years and 10,000 days, respectively. These evaluation indexes do not appear to be much affected by the number of failure patterns. The failure pattern number of 40 surpasses the others slightly for the Top-N recalls. The overlapping probability increased to relatively stable values after 40. The overfitting phenomenon, which perplexes many machine learning methods, was not serious with high numbers of failure patterns.
Hyper-parameters of the proposed method include the number of the failure patterns , the width of the time window , and the attenuation coefficient ∆.For all runs of the algorithm, the Dirichlet parameters and were assigned with symmetric priors of 1/ and 0.01, respectively, which were slightly different from the common setting [45].Gibbs sampling of 300 iterations was sufficient for the algorithm to converge.For each Gibbs sampling chain, the first 200 iterations were discarded, and the average results of the last 100 iterations were taken as the final output.The first set of experiments was conducted to analyze the model performance with respect to among {25, 30, 35, 40, 45, 50}. Figure 10 shows the results of Top-1, Top-5, Top-10 recalls, and the overlapping probability under fixed and ∆ of six years and 10,000 days, respectively.These evaluation indexes do not appear to be much affected by the number of failure patterns.The failure pattern of 40 surpasses the others slightly for the Top-N recalls.The overlapping probability increased to relatively stable numerical values after 40.The overfitting phenomenon, which perplexes many machine learning methods, was not serious with high numbers of failure patterns.In the next experiment, the qualitative criteria were examined as a function of the time window W and the attenuation coefficient ∆, with the number of the failure patterns K fixed at 40.The results are shown in Figure 11.The peak values of different criteria were achieved with different parameters.The optimal parameters with respect to the performance metrics are summarized in Table 3.
In the next experiment, the qualitative criteria were examined as a function of the time window W and the attenuation coefficient ∆, with the number of failure patterns K fixed at 40. The results are shown in Figure 11. The peak values of different criteria were achieved with different parameters. The optimal parameters with respect to the performance metrics are summarized in Table 3.
Table 3. Optimal parameters for different prediction tasks.

Performance Criteria      W (Years)   ∆ (Days)
Top-1                     7           30,000
Top-5                     7           20,000
Top-10                    3           10,000
Overlapping Probability   7           30,000

From Table 3, a high Top-1 recall calls for a relatively large window size of seven years and a large decay parameter of 30,000 days, while the best Top-10 recall was obtained with smaller parameters of three years and 10,000 days. The Top-5 recall also requires a large W of seven years but a smaller ∆ of 20,000 days compared to the Top-1 recall. The overlapping probability shares the same optimal parameters as Top-1. The differences in parameter selection among the evaluation criteria may be explained as follows. With a wider W and a larger ∆, each sub-sequence tends to include more failure data. A duality exists: more data may help the model discover the failure patterns more easily, or it may limit the model's generalization ability. With more data, the model tends to converge on a few particular failure patterns and assigns more confidence to those failures. This explains why the Top-1 recall and the overlapping probability share the same optimal parameters. However, this kind of convergence may neglect other related failures. For the Top-10 recall, the most important criterion is the fraction of coverage rather than one accurate hit. Training and predicting with relatively less data focuses more on the mutual associations, which provides more insight into hidden risks. Generally, the difference between the optimal parameters of the Top-1 and Top-10 recalls reflects a dilemma between higher confidence and wider coverage in machine learning methods.
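The attenuation coefficient ∆ controls how quickly old failures lose influence. The paper's Equation (16) is not reproduced in this section, so the snippet below assumes a simple exponential decay purely to illustrate why the large ∆ values in Table 3 (10,000-30,000 days) keep multi-year-old events relevant:

```python
import math

def decay_weight(delta_days, age_days):
    """An *assumed* exponential attenuation with coefficient delta;
    this is an illustrative form, not the paper's Equation (16)."""
    return math.exp(-age_days / delta_days)

# With the large delta values of Table 3, even a five-year-old
# failure keeps most of its weight:
w = decay_weight(30000, 365 * 5)   # roughly 0.94
```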
Comparison with Baselines
The best results were also compared with several baseline algorithms, including the statistical approach, the Bayesian method, and the newly developed Long Short-Term Memory (LSTM) neural network. The statistical approach is the most common method for log analysis in power grids and accounts for a large proportion of the annual reports of power enterprises. A global average result that mainly focuses on the proportions of different failures is used to guide production for the next year. The Bayesian method is one of the main approaches for distribution estimation. A sequential Dirichlet update initialized with the statistical average was conducted to provide a personalized distribution estimate for each HVCB. In recent years, deep learning has surpassed traditional methods in many areas. As one branch of deep learning for handling sequential data, LSTM has been applied to HVCB log processing. The key parameters of the LSTM include an embedding dimension of eight and a fully connected layer with 100 units. Additionally, sequences shorter than 10 are padded to ensure a constant input dimension.
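The sequential Dirichlet update used as the Bayesian baseline can be sketched as follows. This is an illustrative form: the fleet-wide failure frequencies, scaled by an assumed prior strength, serve as Dirichlet pseudo-counts that one device's own history then updates.

```python
def dirichlet_sequential_update(global_freq, device_events, strength=3.0):
    """Fleet-wide failure frequencies act as Dirichlet pseudo-counts
    (scaled by an assumed prior strength); the device's own failure
    history updates them into a personalized predictive distribution."""
    alpha = [strength * f for f in global_freq]
    counts = [0] * len(global_freq)
    for e in device_events:
        counts[e] += 1
    post = [a + c for a, c in zip(alpha, counts)]
    total = sum(post)
    return [p / total for p in post]

# Fleet-wide proportions for three failure types, and one device that
# logged type 0 twice and type 2 once (all numbers invented):
p = dirichlet_sequential_update([0.5, 0.3, 0.2], [0, 0, 2])
```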
Table 4 reports the experimental results, where the model with the best performance is marked in bold font. The TLDA had the best performance for the Top-1, Top-5, and Top-10 tasks with 51.13%, 73.86%, and 92.93%, respectively, whereas the best overlapping probability was obtained by the Bayesian method. Although the Bayesian method obtained a good overlapping probability and Top-1 recall, its Top-5 and Top-10 performances were the worst among the tested methods because the Bayesian method places too much weight on individual information and ignores the global correlations. On the contrary, the statistical approach obtained a slightly better result in Top-5 and Top-10 recall owing to the long-tail distribution. However, its Top-1 recall was the lowest. The unbalanced dataset makes it difficult for the LSTM to obtain a high Top-1 recall. However, the LSTM still demonstrated its learning ability, as reflected in its Top-5 and Top-10 recalls.

As mentioned before, the LDA method treats each failure sequence as a mixture of several failure patterns. Some interesting failure modes and failure associations can be mined by visualizing the failures. For simplicity, a new TLDA model with 10 failure patterns was trained. Table 5 lists the failures that account for more than 1% in each failure pattern. All the failure patterns were extracted automatically, and the titles were summarized afterward. Notably, erroneous records may exist, the most common of which is confusion between causes and phenomena. For example, various failure categories can be mistaken for operating mechanism failure, since the mechanism is the last step of a complete HVCB operation. A summary of the extracted failure patterns is as follows. Failure pattern 1 mainly contains the operating mechanism's own failures, while pattern 2 reveals the co-occurrence of the operating mechanism with the driving system. Analogously, pattern 3 and pattern 6 mainly focus on how the operation may be broken by the tripping coils and secondary parts such as the remote control signal. Pattern 7 and pattern 10 cluster the failures of the pneumatic and hydraulic mechanisms together. The other patterns also show different features. Different failure patterns have special emphases and overlaps. For example, though both contain secondary components, pattern 9 only considers their manufacturing quality, while pattern 6 emphasizes the interaction between the secondary components and the final operation.
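Given a trained pattern-failure matrix φ, a Table 5-style summary (failures above 1% per pattern) can be read off directly. The φ values and failure names below are made up for illustration:

```python
def top_failures(phi, names, threshold=0.01):
    """For each failure pattern (row of phi), list the failures whose
    probability exceeds the threshold, sorted by probability."""
    out = []
    for row in phi:
        ranked = sorted(range(len(row)), key=lambda v: -row[v])
        out.append([(names[v], row[v]) for v in ranked if row[v] > threshold])
    return out

# A toy 2-pattern, 3-failure phi matrix (values invented):
phi = [[0.60, 0.35, 0.05],
       [0.005, 0.495, 0.50]]
names = ["operating mechanism", "tripping coil", "SF6 leakage"]
patterns = top_failures(phi, names)
```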
Temporal Features of the Failure Patterns
The average value of θ in different time slices can be calculated as a function of time to show the average variation tendency of the different failure patterns. As shown in Figure 12, the failure modes of the hydraulic mechanism, pneumatic mechanism, and cubicles increase with operation years, while the percentages of the measuring system and the tripping and closing coils decrease. The SF6 leakage and machinery failures always occupy a large portion. The rise and fall of the different failure patterns reflect the dynamic change of the device state, which is useful for targeted action scheduling.

Attention can also be focused on one sequence to determine how each event changes the mixture of failure modes. Figure 13 shows the failure mode variation of a sample. At first, the SF6 leakage and cubicle failures allocate a large portion to the corresponding modes. Then, a contactor failure increases the failure pattern of the secondary system. Afterward, an operating mechanism failure creates a peak in the pattern of machinery parts. However, its share is quickly replaced by the failure mode of the tripping coils. This can be considered the model's self-correction in distinguishing failures caused by the operating mechanism itself from those caused by its preorder system. Finally, a remote control failure causes a portion shift from the failure mode of the secondary system to operation errors caused by the secondary system.
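The Figure 12-style trend can be computed by grouping the per-device mixtures θ by operation year and averaging within each slice. The (year, θ) records below are hypothetical:

```python
def mean_theta_by_year(records):
    """Average the per-device pattern mixtures theta within each
    operation-year slice."""
    by_year = {}
    for year, theta in records:
        by_year.setdefault(year, []).append(theta)
    trend = {}
    for year, thetas in by_year.items():
        dim = len(thetas[0])
        trend[year] = [sum(t[d] for t in thetas) / len(thetas)
                       for d in range(dim)]
    return trend

recs = [(1, [0.8, 0.2]), (1, [0.6, 0.4]), (2, [0.3, 0.7])]
trend = mean_theta_by_year(recs)   # year 1 averages to [0.7, 0.3]
```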
Similarities between Failures
The similarities between different failures, computed based on Equation (18), are shown in Figure 14. A wealth of associations can be extracted when combined with knowledge of the equipment structure. In general, the failures with high similarities can be classified into four types. The first type is the causal relationship, where the occurrence of one failure is caused by another. For example, according to the similarity map, the failure of a rejecting action may be caused by the remote control signal, the safe-blocked circuit, the auxiliary switch, or SF6 constituents and humidity exceeding limits, which may cause blocking. The second type is wrong logging. Failures with wrong-logging relationships often occur in a functional chain, which facilitates mislocation of the error. The similarity between electromotor stalling and relay or travel switch failures, and the similarity between the secondary cubicle and the tripping coil, may belong to this type. The third type is common-cause failure. These failures are caused by similar reasons, such as the similarities among the measurement instruments, including the closing instructions, the high-voltage indicating device, the operation counters, and the gas pressure meter. The strong association between the secondary cubicle and the mechanism cubicle may be caused by deficient sealing, and a poor choice of motors yields high similarity between the electromotor and the oil pump. The fourth type is relation transmission, where similarities are built on indirect associations. For example, the transmission bar has a direct connection to the operation counter, and the counter shares a similar aging cause with the other measurement instruments, making the transmission bar similar to the high-voltage indicating device and the gas pressure meter. Likewise, the safe-blocked circuit may act as the medium between air compressor stalling and SF6 constituents.
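A similarity map of this kind can be derived from the trained model by comparing failures through their pattern profiles. The sketch below uses plain cosine similarity over the columns of the pattern-failure matrix φ, which is consistent with the cosine-distance description around Equation (18) but is not guaranteed to match the paper's exact formula:

```python
import math

def cosine_similarity_matrix(phi):
    """Cosine similarity between failures, each failure represented by
    its column of the pattern-failure matrix phi."""
    n = len(phi[0])
    cols = [[row[v] for row in phi] for v in range(n)]
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) *
                      math.sqrt(sum(y * y for y in b)))
    return [[cos(cols[i], cols[j]) for j in range(n)] for i in range(n)]

# Toy phi: failures 0 and 1 load on pattern 0, failure 2 on pattern 1.
phi = [[0.9, 0.8, 0.0],
       [0.1, 0.2, 1.0]]
S = cosine_similarity_matrix(phi)   # S[0][1] is high, S[0][2] low
```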
This similarity map may help establish a failure look-up table for fast failure reason analysis and location.
Conclusions and Future Work
In this paper, the event logs in a power grid were considered a promising data source for predicting future critical events and extracting latent failure patterns. A TLDA framework was presented as an extension of the topic model, introducing a failure pattern layer as the medium between the failure sequences and the failures. The conjugacy between the multinomial and Dirichlet distributions is embedded in the framework for better generalization. Using a mixture of hidden variables for failure representation not only enables pattern mining from sparse data but also enables the establishment of quantitative relationships among failures. Furthermore, a simple but effective new temporal co-occurrence pattern was established to introduce the strict chronological order of events into the originally exchangeable Bayesian framework. The effectiveness of the proposed method was verified on thousands of real-world failure records of HVCBs from both quantitative and qualitative perspectives. The Top-1, Top-5, and Top-10 results revealed that the proposed method outperformed the existing methods in predicting potential failures before they occurred. The parameter analysis showed different parameter preferences for higher confidence versus wider coverage. By visualizing the temporal structures of the failure patterns, the TLDA showed its ability to extract meaningful semantic characteristics, providing insight into the time variation and interaction of failures.

As future work, experiments can be conducted in other application areas. Furthermore, as a branch of the state space model, using the trained TLDA embedding in a Recurrent Neural Network may provide better results.
Figure 1. Long tail distribution of the failure numbers.

Figure 2. A graphical illustration of a failure sequence.
Figure 4. Graphical representations comparison of the Dirichlet distribution and LDA: (a) graphical representation of the Dirichlet distribution; (b) graphical representation of LDA.
Figure 5. Schematic diagram of the matrix completion: (a) the graphical representation of the failure probability estimation task, where the solid lines represent the existing observations and the dotted line represents the probability to be estimated; (b) the model makes an estimation via the solid lines after matrix decomposition.
→α and →w are the prior parameters of the joint distribution. S_m is the number of sub-sequences in sequence m, N_ms is the number of failures in sub-sequence s, and →u_ms is the topic distribution of a specified sub-sequence.
Figure 6. Graphical representation for a general sequential extension of LDA.
Figure 7. The sampling probability within and prior to the time window.

Algorithm 1. Gibbs sampling with the new co-occurrence patterns
Input: Sequences, MaxIteration, α, β, ∆, W
Output: posterior inference of θ and φ
1: Initialization: randomly assign failure patterns and form sub-sequences by W;
2: Compute the statistics in Equation (11) for each sub-sequence;
3: for iter in 1 to MaxIteration do
4:   foreach sequence in Sequences do
5:     foreach sub-sequence in the sequence do
6:       Add new failures to the current sub-sequence based on Equation (16);
7:       foreach failure in the new sub-sequence do
8:         Draw a new pattern assignment from Equation (11);
9:         Update the statistics in Equation (11);
10:      end for
11:    end for
12:  end for
13:  Compute the posterior mean of θ and φ based on Equations (12) and (13);
14: end for
15: Compute the mean of θ and φ over the last several iterations
Energies 2017, 10, 1913
Figure 9. Schematic diagram of the cosine distance with two dimensions.

Figure 10. Performance comparison versus the number of failure patterns: (a) the Top-1, Top-5, and Top-10 recalls with respect to the number of failure patterns; (b) the overlapping probability with respect to the number of failure patterns.
Figure 11. Performance comparison versus time window length and the attenuation coefficient: (a) the Top-1 recall versus the model parameters; (b) the Top-5 recall versus the model parameters; (c) the Top-10 recall versus the model parameters; and (d) the overlapping probability versus the model parameters.
Figure 12. Average time-varying dynamics of the extracted 10 failure patterns.

Figure 13. Time-varying dynamics of the failure patterns for an individual HVCB.
Figure 14. Similarity map for all the failures in the real-world dataset.
Table 1. Attributes of the failure logs.

Time when a HVCB was first put into production
Others: including the person responsible, mechanism type, a rough classification, manufacturers, etc.
Table 2. A typical manual log entry sample.

Table 3. Optimal parameters for different prediction tasks.

Table 4. Performance comparison with different methods.

Table 5. Top failures in each failure pattern.
Do Perceptions about Palliative Care Affect Emergency Decisions of Health Personnel for Patients with Advanced Dementia?
Decision analysis regarding emergency medical treatment in patients with advanced dementia has seldom been investigated. We aimed to examine the preferred medical treatment in emergency situations for patients with advanced dementia and its association with perceptions of palliative care. We conducted a survey of 159 physicians and 156 nurses from medical and surgical wards in two tertiary hospitals. The questionnaire included two case scenarios of patients with advanced dementia presenting with gastrointestinal bleeding (scenario I) or pneumonia (scenario II), with a list of possible interventions and 11 items probing perceptions towards palliative care. Low-burden interventions, such as laboratory tests and intravenous administration of antibiotics/blood, were preferred. Palliative measures such as analgesia/sedation were chosen by about half of the participants, and invasive intervention by 41.6% (gastroscopy in scenario I) and 37.1% (intubation/mechanical ventilation in scenario II). Medical ward staff had a more palliative approach than surgical ward staff in scenario I, and senior staff had a more palliative approach than junior staff in scenario II. Most participants (90.4%) agreed that palliative care was appropriate for patients with advanced dementia. Stress in caring for patients with advanced dementia was reported by 24.5% of participants; 33.1% admitted fear of a lawsuit, 33.8% were concerned about senior-level responses, and 69.7% were apprehensive of family members' reaction to palliative care. Perceptions of health care workers towards palliative care were associated with the preferred treatment choice for patients with advanced dementia, mainly in scenario II. Attitudes and apprehensions regarding palliative care in these situations may explain the gap between positive attitudes towards palliative care and the chosen treatment approach.
Acquainting emergency care practitioners with the benefits of palliative care may impact their decisions when treating this population.
Introduction
Patients with advanced dementia (AD) are often referred from the community or long-term care facilities to general hospitals for the management of urgent medical conditions. The decisions of health personnel in these situations are often made under constraints of time and are often subject to uncertainty when there is no information regarding patients' prior wishes or advance directives. When making treatment decisions for patients with AD, medical personnel are often required to consider the value of life versus the quality of life of their patients and adjust the care approach accordingly [1][2][3]. The terminal nature and trajectory of dementia, together with general perceptions and attitudes towards palliative care (PC), have been shown to influence decision-making, for example regarding a hypothetical case scenario of a patient with AD who presents with bowel obstruction related to a space-occupying lesion [18].
Participants
The study used a convenience sample of physicians and nurses who work in medical and surgical departments at two tertiary university-affiliated hospitals in Israel. These hospitals, although in different geographical regions, are comparable in size, and both serve patients from diverse ethnic and cultural backgrounds. Criteria for inclusion were medical personnel (physicians and nurses) without formal postgraduate training in geriatric medicine and/or PC. Participants were recruited during departmental staff meetings. We excluded health personnel with formal postgraduate training in geriatric medicine and/or PC since most of the personnel did not have such training, and this training may bias the findings of the study. Data collection took place from February 2018-February 2019 in Sheba Medical Center, and from July 2019-February 2020 in Hadassah Medical Center. The first author (M.E) participated in ward staff meetings, presented the study aims, and then administered the questionnaires.
Questionnaire
The questionnaire was comprised of two parts. The first part presented two hypothetical case scenarios describing patients with AD presenting with acute, urgent, potentially life-threatening medical situations, along with a selection of immediate medical treatment options for each scenario ranging from palliative to aggressive care. The scenarios presented were two common emergency health situations that included relevant medical intervention in older adults with AD. These options were chosen with the consent of all of the authors, and confirmed by six experts, who were board certified physicians in internal medicine and geriatric medicine working in acute care hospitals.
The first case scenario was a patient with gastrointestinal bleeding, and the second was an individual with aspiration pneumonia and acute respiratory failure (see Appendix A). The researchers asked participants to indicate interventions they would recommend for the patient from six to eight relevant choices. The interventions for scenario I were: urgent gastroscopy, insertion of a nasogastric tube, insertion of a central vein line, blood transfusion, intravenous infusion of fluids, laboratory tests, subcutaneous infusion of fluids, and analgesia. Interventions for scenario II included: intubation and mechanical ventilation, intravenous infusion of fluids, antimicrobial therapy, laboratory tests, analgesia, and sedation.
Each intervention was given a score within a range of −1 to +3, where more aggressive treatments received higher scores. For example, urgent gastroscopy received a score of +3, while PC treatments (such as analgesia or sedation) were awarded a score of −1 and a score of +1 if not chosen. An option other than PC intervention that was not chosen was given a score of 0. The sum of the scores of the chosen interventions, termed the "Palliative Score", ranged from −1 to +17 and −2 to +11 for the first and second scenarios, respectively. Lower values reflected a more PC-oriented approach. Therefore, palliative interventions were given a negative value score of −1. The scores for the interventions were determined by consensus among the researchers and approved by the geriatric expert physicians. A pilot test on a subset of 19 participants was conducted and there was no need for further modifications in the case scenarios presented.
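The scoring rule can be made concrete as follows. The +3 weight for urgent gastroscopy and the −1/+1 handling of palliative measures are taken from the text; the intermediate weights for blood transfusion and laboratory tests are assumed for illustration:

```python
def palliative_score(chosen, weights, palliative_set):
    """Compute the 'Palliative Score' as described: a chosen intervention
    contributes its weight (+3 for the most aggressive, -1 for a
    palliative measure); an unchosen palliative measure contributes +1,
    and an unchosen non-palliative option contributes 0."""
    score = 0
    for item, w in weights.items():
        if item in chosen:
            score += w
        elif item in palliative_set:
            score += 1
    return score

weights = {"urgent gastroscopy": 3, "blood transfusion": 2,
           "laboratory tests": 1, "analgesia": -1}
# A palliative-oriented choice versus an aggressive work-up:
palliative = palliative_score({"analgesia"}, weights, {"analgesia"})         # -1
aggressive = palliative_score({"urgent gastroscopy", "blood transfusion",
                               "laboratory tests"}, weights, {"analgesia"})  # 7
```

Lower values reflect a more PC-oriented approach, as in the instrument itself.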
The second part of the questionnaire included 11 items addressing the perceptions of the participants related to PC for patients with AD. No valid and reliable questionnaire was found that met the study objectives when we conducted the study. Therefore, the questionnaire was based on a literature review that described the existing factors involved in providing PC for this population. It specifically addressed the assessment and thought processes involved in PC decision-making of healthcare staff in acute care settings regarding patients with AD and the potential barriers to providing PC [7,12,16,20].
The validation of the questionnaire items consisted of several steps. We first established the content validity, asking six physicians with expertise in this field of study to evaluate the questionnaire. The physicians were board certified in internal medicine and geriatric medicine; all worked in acute care hospitals. They evaluated whether the items effectively captured the topic under investigation. Following the pilot test mentioned above, no further modifications were made in this part of the questionnaire.
The 11 items included were: appropriateness of PC for patients with AD, colleagues' perceptions of the appropriateness of PC, perceived ease of having an end-of-life conversation, death of a patient with AD perceived as failure and accompanied by guilt feelings, the ability to make care decisions for patients with AD, perceived stress and the desire to avoid decision-making for a patient with AD, four items assessing perceived legal and organizational concerns, and perceived concerns of responses of family members regarding PC approach. We also performed a principal component analysis that identified four underlying components representing the following themes: (1) apprehension, (2) ability to make decisions and being comfortable with end-of-life care, (3) appropriateness, and (4) fear of family member's reactions. Using reliability statistics, a Cronbach Alpha on the standardized items reached 0.70.
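Cronbach's alpha for such an item battery follows the standard formula. The 4×3 response matrix below is a toy example, not the study's data (which yielded 0.70 on the 11 standardized items):

```python
def cronbach_alpha(X):
    """Standard Cronbach's alpha for an (n_respondents x n_items)
    score matrix."""
    k = len(X[0])
    def var(vals):
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals) / (len(vals) - 1)
    item_var = sum(var([row[j] for row in X]) for j in range(k))
    total_var = var([sum(row) for row in X])
    return k / (k - 1) * (1 - item_var / total_var)

# Toy responses: 4 respondents, 3 Likert items (invented):
X = [[1, 2, 2], [2, 3, 3], [4, 4, 5], [5, 5, 5]]
alpha = cronbach_alpha(X)
```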
Each item was graded using a five-point Likert scale. A lower score reflected a more positive perception of PC. All of the respondents completed the questionnaires during regular nursing or medical staff meetings.
Ethical Considerations
The study was approved by the Institutional Review Board of each hospital (Sheba Medical Center, 4839-18-SMC; Hadassah Medical Center, 0027-19-HMO). Participants received oral and written information about the study, and they provided their written consent.
Data Analysis
SPSS version 25 (IBM SPSS Statistics for Windows, Armonk, NY, USA: IBM Corp.) was used for the data analysis. Descriptive statistics were applied to analyze study variables (participant characteristics, PC perceptions, and preferred medical treatments). Percentages were calculated for dichotomous and categorical characteristics. Mean, standard deviation, median, and range were presented for continuous variables. The first two perception items (PC appropriate for patients with AD; Perceived PC appropriate for patients with AD by colleagues) were scored as dichotomous perceptions (positive/negative), and the mean scores of the medical treatment choices (the Palliative Score) were compared using t-test and Wilcoxon non-parametric test for median comparison. The remaining nine perception items were scored as agree/neutral/disagree or low/moderate/high, according to the item. The mean "Palliative Scores" for the two scenarios were compared according to participants' characteristics (medical vs. surgical staff, physicians vs. nurses, and senior vs. junior staff members) using t-test, and differences in median values were tested using the Wilcoxon non-parametric test. For responses to the PC perception items scored in three categories, one-way analysis of variance (ANOVA), and the nonparametric Kruskal-Wallis test were applied for the comparison of means and median "Palliative Scores" values of the two scenarios. In situations where the ANOVA or the Kruskal-Wallis tests came out statistically significant, multiple comparisons procedures using a post-hoc test were used to determine where the differences between the categories occur, and Bonferroni correction was applied. A value of p < 0.05 was considered statistically significant.
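The comparisons described above can be sketched with SciPy. The data below are simulated with group sizes, means, and SDs resembling those reported in the Results (they are not the study data, and the simulated p-values will not match the reported ones); note that the two-sample "Wilcoxon" comparison between independent groups corresponds to the Mann-Whitney U / rank-sum test in scipy.stats.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical Palliative Scores for two independent groups.
medical = rng.normal(9.1, 4.1, size=190)
surgical = rng.normal(9.8, 4.4, size=125)

# Parametric comparison of means, and a rank-based comparison of distributions.
t_stat, t_p = stats.ttest_ind(medical, surgical, equal_var=False)
u_stat, u_p = stats.mannwhitneyu(medical, surgical, alternative="two-sided")

# Three-category perception item: one-way ANOVA plus Kruskal-Wallis.
low, neutral, high = (rng.normal(m, 3.0, size=80) for m in (5.0, 6.5, 7.3))
f_stat, f_p = stats.f_oneway(low, neutral, high)
h_stat, h_p = stats.kruskal(low, neutral, high)

# Post-hoc pairwise comparison with a Bonferroni correction (3 pairwise tests).
p_low_high = stats.ttest_ind(low, high, equal_var=False).pvalue
p_adj = min(1.0, 3 * p_low_high)
print(f_p < 0.05, p_adj < 0.05)
```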
Participants
As shown in Table 1, the sample included 315 health personnel: 159 physicians and 156 nurses. Median age was 33 years (mean 35.5 years, SD 9.4, range 24-71); 167 of them (53%) were females, half had four years or less of professional experience (mean 7.7, SD 8.6, median 4, range 1-50 years); less than a third (n = 97, 30.8%) were senior staff (defined as medical staff who completed all stages of their medical specialization or nurses in management positions). The majority were from medical wards (n = 190, 60.3%), and 39.7% (n = 125) were from surgical wards.
Preferred Medical Treatment as Shown via Two Case Scenarios
The mean Palliative Score of the preferred medical treatments was 9.92 (SD 4.29; range 1-17; median 10) for the first scenario (gastrointestinal bleeding) and 6.22 (SD 2.92; range 1-11; median 6) for the second scenario (pneumonia with respiratory failure). The percentage of respondents who chose the various medical treatments in the two case scenarios is presented in Table 2. The treatments most frequently chosen for scenario I and II, respectively, were laboratory tests (84.1%; 69.8%) and an intravenous fluid infusion (79.4%; 72.7%). Blood transfusion was chosen by 68.6% for scenario I, antimicrobial therapy by 72.4% for scenario II, and nasogastric tube insertion by 67.9% for scenario I. Analgesia was chosen by 50.5% and 62.2% for scenarios I and II, respectively, and sedation by 58.4% for scenario II (sedation was not a listed treatment option in scenario I). Subcutaneous infusion of fluids, urgent gastroscopy, intubation and mechanical ventilation, and central vein line insertion were less commonly selected. We found that 96% of the participants chose at least one alternative with the highest potential for causing suffering in scenario I, and 90% chose at least one alternative with either moderate or high potential for causing suffering in scenario II.
Preferred Medical Treatment as Shown via Two Case Scenarios
Medical staff took a more palliative approach than surgical staff in scenario I (gastrointestinal bleeding) (Palliative Score 9.1 ± 4.1 vs. 9.8 ± 4.4 for medical and surgical wards, respectively, p < 0.001) (Table 3). In scenario II (pneumonia with respiratory failure), there was no difference in the Palliative Score between medical and surgical staff. Physicians and nurses did not differ in their Palliative Scores in either scenario. Senior staff took a more palliative approach than junior staff in scenario II (pneumonia with respiratory failure) (Palliative Score 5.5 ± 3.1 vs. 6.6 ± 2.8 for senior and junior staff, respectively, p = 0.002), but not in scenario I (gastrointestinal bleeding).
Palliative Care Perceptions
Most participants (n = 284, 90.4%) agreed that PC is appropriate for patients with AD. On the other hand, their assessment regarding their colleagues' acceptance of a PC approach was lower (73.6%) ( Table 4). Approximately half of the participants reported feeling comfortable conducting end-of-life discussions (58.0%), having the ability to make care decisions (n = 178, 56.7%), and not being stressed or avoiding caring for a patient with AD (n = 165, 52.5%). About half (n = 146, 46.5%) disagreed that PC could expose them to a lawsuit. Most (n = 219, 69.7%) were concerned regarding the reaction of family members to the administration of PC.
Association between Palliative Care Perceptions and the Preferred Treatments
A positive perception of PC as appropriate for patients with AD as perceived by colleagues was associated with fewer chosen aggressive medical treatment options. This association was statistically significant for scenario II (respiratory failure, p = 0.001 for mean and p = 0.002 for median), but not for scenario I (gastrointestinal bleeding, p = 0.1 and p = 0.7 for mean and median, respectively) ( Table 4).
Participants who agreed with feeling comfortable having end-of-life discussions reported less aggressive care in scenario II (agree vs. neutral p < 0.001; agree vs. disagree p = 0.01 for comparison of means, and agree vs. neutral for median, p = 0.001). Feeling guilty about the death of a patient with AD was associated with the choice of more aggressive care treatments in scenario II; participants who disagreed had significantly lower mean scores than those who were neutral (p = 0.002) or agreed with this statement (p = 0.046). This pattern was also seen when median scores were compared (6 and 7 for disagree and neutral, respectively, p = 0.009). The ability to make treatment decisions for patients with AD was related to the choice of less aggressive care treatment options in scenario I (agree vs. disagree, p = 0.03 for comparison of mean and median values). Participants who reported lower feelings of stress and avoidance while caring for patients with AD chose less aggressive care treatments than those who were neutral or agreed that care of patients with AD causes stress and avoidance (p = 0.006 for overall comparison of mean values in scenario II). Agreement with the statement that PC exposes the healthcare provider to a lawsuit was associated with more aggressive treatment choices in scenario II (neutral vs. disagree, p = 0.006 for median). Higher levels of concern about PC lawsuits were associated with the choice of more aggressive treatments in scenario II (high vs. low, p = 0.001 for comparison of mean and median).
We also observed an association between the concern regarding criticism by senior staff members and more aggressive treatment choices in both scenarios (Palliative Score 9.1 ± 4.6, 9.8 ± 4.0, and 10.8 ± 4.1 for low, neutral, and high levels of concern, p = 0.02 for comparison of means in scenario I and 5.0 ± 2.8, 6.5 ± 2.9, and 7.3 ± 2.7 for low, neutral, and high levels of concern, p < 0.001 for comparison of mean scores in scenario II). Statistically significant differences were also observed for low vs. high and low vs. neutral; (p < 0.001 and p = 0.009, respectively) for comparison of median scores in scenario II. Lack of organizational support was associated with choice of more aggressive care in scenario II (disagree vs. neutral, p = 0.009 for comparison of means, disagree vs. agree, p = 0.043, neutral vs. agree, p = 0.009 for comparison of median). Apprehension of the reaction of patients' family members to PC was associated with choice of more aggressive care treatments in scenario II; high vs. low, p = 0.035 for comparison of mean.
Discussion
Our study was aimed at the assessment of the perception and implementation of concepts of PC and end-of-life decisions addressing patients with AD with life-threatening conditions. Our main findings were that while 90% of the study cohort of physicians and nurses accepted PC as appropriate for patients with AD, only half of them chose a PC approach (analgesia and sedation) as the treatment option in acute life-threatening conditions, while slightly more than a third considered invasive interventions, such as endoscopy in the case of gastrointestinal bleeding and intubation and mechanical ventilation in the case of respiratory failure. Nearly all participants chose at least one intervention with the highest potential for causing suffering in scenario I and at least one intervention with moderate or high potential for causing suffering in scenario II.
The most frequently chosen medical interventions were laboratory tests, intravenous fluids, blood transfusions, and antimicrobial therapy. These treatments may fall within a wide range of goals of care, including "basic care" provided to acute care patients as well as PC. Performing laboratory tests when there is a clinical necessity is consistent with both PC and curative care, since the test results might have implications for care and constitute minimal accompanying risk and discomfort [21]. While intravenous fluids may be a reasonable treatment for all care approaches (excluding the near-death period), there is no consensus on whether blood/blood products or antimicrobial therapy should be considered as a part of PC. A review of the literature revealed that four out of seven studies reported longer survival among patients in need of PC receiving antibacterial therapy, while the remaining studies found no difference in survival [22]. Although antimicrobial therapy has been used in PC to achieve symptom relief [23], some claim that when symptoms are absent, and the patient cannot swallow, antimicrobial therapy should not be considered palliative [24,25]. Nevertheless, studies have shown that most patients, their family members, and healthcare workers prefer antibiotics, even when the patient is terminally ill or suffering from AD [22,26,27]. Antimicrobial therapy is viewed as a low-burden intervention with the potential to treat reversible conditions that may be associated with suffering [22]. Similar perceptions have been found with respect to blood/blood products in PC. A systematic review found some short-term benefit with respect to symptom alleviation, and one study found prolonged survival with the supplementation of blood [28].
Only about half of the respondents selected interventions aimed at ameliorating suffering such as sedation and analgesia. This finding is surprising and likely reflects a lack of awareness of suffering in uncomplaining patients with AD. Some participants favored invasive interventions known to cause patients discomfort or suffering, such as insertion of a nasogastric tube, urgent gastroscopy, or placement of a central venous line. About one third selected intubation and mechanical ventilation, despite the fact that the Law of the Dying Patient in Israel permits withholding, but not withdrawing, continuous interventions such as mechanical ventilation, in patients with an estimated life expectancy of less than six months [29,30]. Studies have reported an increase in the use of mechanical ventilation among patients with dementia [31,32]. The usual life-saving orientation in acute care settings and the lack of palliative/geriatric medical training may explain increasing utilization of aggressive care and the relatively low rate of adopting a PC approach. This explanation is supported by a study among nurses and nurse-assistants indicating that knowledge and training in palliative and dementia care were associated with higher levels of positive attitudes toward PC in patients with dementia [33].
One possible explanation for the preference of aggressive, life-prolonging medical treatment in acute care settings among many of our study participants is the socialization process that shapes the perceptions of health personnel [34,35]. However, studies show that preference for more aggressive medical treatment does not lead to an improved prognosis or quality of life in patients with AD [36][37][38][39][40]. Providers appear to be skeptical about PC for patients with chronic diseases other than cancer [41], and many lack the self-confidence to provide such care [42]. A Finnish study concluded that the low rate of implementation of PC could be associated with unrecognized palliative needs of patients with dementia [43].
Medical staff chose less aggressive care than surgical staff in the case of gastrointestinal bleeding (scenario I). The surgical staff utilized more invasive interventions in this case (such as gastroscopy), likely reflecting their automatic response to gastrointestinal bleeding as a part of their professional training, regardless of the underlying state of AD, while the medical personnel preferred more conservative management.
We interviewed a subset of the current sample (15 physicians and 11 nurses) in a qualitative study to investigate the healthcare personnel's thinking processes associated with a case scenario of bowel obstruction related to a space-occupying lesion in a patient with AD. We found that surgical health personnel tended to focus on the immediate interventional response, while medical staff focused mainly on palliative measures [18]. Interestingly, there was no difference in the treatment preferences between medical and surgical disciplines in the case of pneumonia with respiratory failure, indicating that professional orientation was not a major determinant in this condition. In addition, both in the current study and in the qualitative study described here [18], there was no difference in the level of aggressive care between physicians and nurses. By contrast, other studies have reported a preference of nurses for PC, while physicians favored an aggressive approach, likely reflecting their own perception and training as life-savers [44][45][46][47]. We propose that our contradicting finding may reflect different perspectives in Israel regarding the role of nurses vs. physicians in critical medical decisions, especially under acute care settings.
The senior staff had a more palliative approach than the junior staff in the case of pneumonia with respiratory failure. This is likely due to their experience with the long-term course, prognosis, and suffering of these patients, leading to their reluctance to initiate mechanical ventilation in these settings. Furthermore, senior staff are more experienced in end-of-life decisions and are presumably less apprehensive about taking responsibility. On the other hand, there was no difference between the senior and junior staff in the approach to the case of acute gastrointestinal bleeding. This is most likely based on their assumption that this is a potentially reversible critical condition, unrelated to the end-stage cognitive impairment.
In our study, most of the respondents reported that PC was appropriate for AD patients, with a high percentage perceiving agreement among their colleagues. This finding suggests improved PC perceptions, corroborating findings of recent studies [48].
Of note, however, a Finnish study assessing the tendency of physicians to choose PC for patients with AD, using a hypothetical case scenario, reported that PC was chosen less frequently in 2015 than in 1999. The authors suggested that increased legal concerns among physicians in 2015 may partially explain this shift in preferences [43].
We found an association between treatment choices and perceived level of apprehension regarding criticism by senior personnel in both hypothetical scenarios. An ability to make care decisions for patients with AD was associated with medical treatment choices in scenario I (gastrointestinal bleeding) but not in scenario II. By contrast, provider perceptions related to the other aspects of PC in patients with AD were found to be associated with medical treatment choices in scenario II (pneumonia with respiratory failure), but not in scenario I (gastrointestinal bleeding). This striking difference between the two scenarios may be due to the inherent nature of these medical conditions. Acute gastrointestinal bleeding is a potentially reversible problem not directly related to AD that can be successfully managed irrespective of the general PC approach. In contrast, pneumonia with respiratory failure is likely directly related to the underlying severe cognitive impairment and the associated neurologic functional deficits, with a poor anticipated immediate and long-term outcome [1]. Indeed, aspiration pneumonia is a leading terminal event in most AD patients. Furthermore, mechanical ventilation in these settings may turn out to be permanent in the case of weaning failure. This is a major ethical issue in Israel and in other societies where disconnecting a patient from mechanical ventilation is illegal or unacceptable [49].
Only about half of the respondents reported positive feelings about caring for patients with AD. Feeling comfortable with end-of-life discussions and the absence of negative feelings, such as guilt, stress, and the desire to avoid involvement in the care of patients with AD, were all associated with favoring PC treatment choices in the case of pneumonia with respiratory failure. Negative self-perceptions about PC may be related to insufficient knowledge and experience in providing PC [50,51]. Indeed, a lack of familiarity with the options for end-of-life care was found to be a barrier to effective end-of-life discussions among internal medicine residents [52]. We have recently reported that health personnel in general hospitals report low rates of end-of-life discussions with family members of hospitalized patients with dementia [53]. Some investigators found that knowledge deficits were negatively correlated with perceived self-efficacy [54,55], and positively correlated with a lack of confidence in making care decisions for patients with AD and with concerns about the ability to provide quality end-of-life care [12,56]. Other researchers argue that negative emotions and stress arising from caring for the seriously ill patient might negatively impact quality of care and even lead to poor judgment and performance, and to incoherent care goals [13]. Stress is another factor that decision makers must face in most life-or-death situations [5].
Our findings demonstrate statistically significant associations between perceptions related to legal concerns and preferred medical treatments in scenario II (pneumonia with respiratory failure). Perceptions of risk and of the ability to cope may play a role in the care of patients with AD [16]. Substantial legal concerns have been found to be associated with more aggressive treatment choices [43]. Others have found critical gaps in the knowledge of the relevant legal provisions among health personnel [57], contributing to discordant perceptions and practice. Indeed, Jox et al. [58] found that health care providers who supported PC acknowledged that choosing this approach could result in legal actions and disciplinary sanctions, leading them to provide care they considered futile [58,59].
We found that about a third of our respondents were very concerned about senior-level criticism regarding PC decisions, and about a third agreed that there was a lack of organizational support for PC. Both items were associated with choosing more aggressive medical treatment. Evidence from previous studies supports these findings. Providers who consider care options that do not follow the perceived traditional curative approach may worry about a negative response from other staff members, leading them to suppress their initial treatment intentions [16,60,61]. This could be due to the organizational culture of acute care hospitals, which encourages a curative treatment approach and uses mortality rates as an indicator of quality of care. This might be especially relevant to surgical disciplines. Therefore, when a junior health care worker considers PC, it might be perceived as a departure from the accepted care plan. In such cases, the junior practitioner will tend to consult with peers and senior staff. Ranse et al. [62] reported that informal collegial support may assist with the management of end-of-life care, whereas a lack of perceived organizational support in the context of care decisions for patients with life-limiting illness may be a barrier to PC [20].
More than two thirds of the participants in our study reported having a high level of perceived concern about the reaction of family members to PC. The higher the level of concern, the more aggressive the chosen medical treatments in scenario II (pneumonia with respiratory failure). Our findings are consistent with previous reports, showing that physicians are reluctant to offer PC when they anticipate that their recommendation would be misunderstood by the patient's representative as giving up on patient's care [41]. In a systematic review of barriers to prescribing PC in cases of life-limiting disease, an important factor was the perceptions of the patient's family [63]. Often, family members coax health personnel to provide futile care to patients with AD. Medical staff may be intimidated and avoid PC to prevent conflict, even though it contradicts their own best judgment [58,63], leading to less appropriate care for the patient [64].
Although the benefits and acceptance of PC have evolved over recent decades, its implementation remains limited in certain societies and in specific medical settings, such as departments of emergency medicine [65]. Various tools that screen for PC needs have been developed and adapted for emergency care [66]. However, perceptions of PC as appropriate for patients with chronic medical conditions are seldom carried over to acute life-threatening illness in the emergency room [67].
This study has a few limitations. The treatment options chosen by the participants in the described scenarios may not accurately reflect real-life responses. In addition, the study was carried out in two acute care tertiary hospitals in Israel, using local experts for questionnaire validation; therefore, the findings may not generalize to primary care hospitals or to other countries and societies. Indeed, we did not investigate the possible impact of culture and religion on our findings. Staff perceptions about PC were focused on immediate decision-making encountered in acute care settings; therefore, many aspects of PC, such as a multidisciplinary approach or the response to emotional, spiritual, or social needs, were not addressed. The list of medical treatment options provided for both scenarios relates to the management of acute conditions and not to end-of-life care or to other interventions such as palliative sedation. We excluded physicians and nurses with prior postgraduate training in geriatric and/or palliative care; therefore, the impact of such training on perceptions about PC and treatment decisions in emergency situations for patients with AD was not investigated. Our questionnaire was content validated by Israeli physicians, not involved in this study, who were all experts in geriatric and internal medicine. Further studies are needed to validate our questionnaire in other countries and societies, addressing cultural and legal differences in attitudes toward dementia, end-of-life care, and the management of patients with AD.
Conclusions
This study presents the perceptions of health personnel in the context of treating patients with AD under acute life-threatening medical situations. Although most participants expressed favorable views of PC for this population, it is evident that when considering actual clinical decisions in these emergency situations, many barriers remain that impede the implementation of PC. This was demonstrated by the large percentage of respondents who did not choose PC treatments, as well as by the discomfort and concern expressed regarding aspects of PC implementation that may affect them personally. Our findings could form a basis for the development of effective PC training programs for acute care personnel and for the revision of organizational norms of care for patients with AD.
Acquainting emergency care practitioners with the benefits of PC for patients with AD may influence their decisions when treating this population. Most importantly, the adoption of advance directives and advance discussions with family members of patients with AD may prevent futile or unnecessary referrals to hospitals and may facilitate the implementation of PC in this population during acute life-threatening illnesses.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ethical reasons.
A method for determination of muscle fiber diameter using single fiber potential (SFP) analysis
We have used computer simulation to study the relationship between the muscle fiber diameter and parameters: peak-to-peak amplitude and duration of the negative peak of the muscle fiber action potential. We found that the negative peak duration is useful in the determination of fiber diameter via the diameter dependence of conduction velocity. We have shown a direct link between the underlying physiology and the measurements characterizing the single fiber potential. Using data from simulations, a graphical tool and an analytical method to estimate the muscle fiber diameter from the recorded action potential have been developed. The ability to quantify the fiber diameter can add significantly to the single fiber electromyography examination. It may help in the study of muscle fiber diameter variability and thus complement muscle biopsy studies.
Introduction
Single fiber EMG (SFEMG) is a powerful technique to study the pathophysiology of the motor unit. Two types of measurements are made using this technique: fiber density (FD) and jitter. FD is useful for studying the grouping of muscle fibers of a motor unit within its territory. In neuropathy, FD increases due to reinnervation. The "jitter" measurements are used to assess the efficacy of neuromuscular transmission [9].
In myopathy, the FD may be increased slightly due to fiber splitting, regeneration of muscle fibers, etc. The jitter is usually normal. In this manner, SFEMG is not particularly useful in the diagnosis of myopathy. It is often used to rule out other diseases such as a neuropathy or a neuromuscular junction disease [9]. A primary change in myopathy is the increased variability of the muscle fiber diameter. This is assessed quantitatively using muscle biopsy studies [2]. There is no electrophysiologic technique to assess this characteristic of the motor unit.
The abnormalities of muscle fiber diameter are observed indirectly in electrodiagnostic studies. On routine needle EMG examination, increased variability produces motor unit potentials with polyphasic waveforms. Atrophy can produce low-amplitude potentials (in what follows, "amplitude" means the peak-to-peak value of the potential change, measured in mV) [1,8,9].
The variability of muscle fiber diameter may be studied indirectly by investigating the muscle fiber conduction velocity. The muscle fibers are stimulated directly using an intramuscular needle, and their action potentials are recorded at a few millimeter distance. The latency of the potential is used to compute the velocity [10].
In principle, the shape of the single muscle fiber action potential also contains information about the muscle fiber diameter. In the so-called line source model, the extracellular muscle fiber action potential V(t) is computed as the convolution [3]

V(t) = ∫ i(s) u(t − s) ds,   (1)

where i(t) is the transmembrane current and u is a weight function. The transmembrane current is proportional to the square of the fiber diameter [3]. Thus, larger fibers will produce a larger amplitude potential. The amplitude is also affected by the distance of the fiber from the electrode [4].
The greater the distance, the lower the amplitude. Hence, amplitude alone cannot be used as a marker of fiber diameter. The weight function is the potential recorded by the electrode as a unit current source propagates from the endplate to the tendon. The waveform of this function thus depends on the propagation velocity of the muscle fiber, and hence on the fiber diameter. The change in the weight function waveform with radial distance and with fiber diameter has not been investigated systematically.
In this study, we have used computer simulation to study the relationship between the muscle fiber diameter and two parameters of the potential: the peak-to-peak amplitude and the duration of the negative peak of the SFP. These relationships are used to develop a graphical and an analytical tool to estimate the muscle fiber diameter from the recorded action potential. Although our goal is similar, our method of analysis is quite different from that used by Rodriguez et al. [6,7]. They used a computer model to recursively obtain the best match for the measured waveform. This is not always possible in a clinical environment. We believe that the graphical method described in this study offers more simplicity and gives a better understanding of the action potential waveform.
Methods
The line source model described by Nandedkar and Stålberg [3] was used for simulating single muscle fiber action potentials. This model has been tested in simulations of normal and abnormal EMG signals, and shows a good concordance with experimental and clinical recordings [4,5].
The single fiber potential V recorded at the electrode at time t, coming from a single fiber, is given by the convolution (1) of the weight function u(t) and the current i(t). The effect on the potential V of the distance of the electrode from the fiber is described by the weight function, which is defined as follows [3]:

u(t) = 1 / (4π σ_r √(K z² + r²)),   (2)

where σ_r is the radial conductance, K depends on the ratio of axial to radial conductance, and r and z are the radial and axial distances of the electrode from the current source. The axial distance z is related to time t by the speed of propagation of the potential along the fiber: z = vt, where the velocity of propagation v is linearly dependent on fiber diameter. The action potentials were simulated for various combinations of muscle fiber diameter (25-90 μm) and radial distance (50-500 μm). The range of fiber diameters corresponds to data from muscle biopsy [2]. In normal muscle the mean value is 50 ± 5 μm, and in various neuromuscular disorders it ranges from 25 to 90 μm. The range of radial distances was chosen so that the largest fibers would not be closer to the electrode than their radius (50 μm). The electrode records mainly from muscle fibers that are within 400 μm of the recording surface [4]; hence the maximum radial distance was limited to 500 μm. The velocity of propagation is determined by the diameter of the fiber according to the following formula [3]: v = 2.2 + 0.05(d − 25), where the fiber diameter d is in μm and the propagation velocity v is in m·s⁻¹.
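A minimal numerical sketch of this simulation pipeline is given below. Only the velocity relation v = 2.2 + 0.05(d − 25) m/s is taken from the text; the 1/√(K z² + r²) weight form, the stand-in triphasic source current, and the constants sigma_r, K, and s are illustrative assumptions, not the values of the Nandedkar-Stålberg model. The sketch reproduces the qualitative behavior discussed here: the peak-to-peak amplitude grows with fiber diameter (via the d² current scaling) and falls with radial distance.

```python
import numpy as np

def fiber_velocity(d_um):
    """Diameter-to-velocity relation from the text: v = 2.2 + 0.05*(d - 25) m/s."""
    return 2.2 + 0.05 * (d_um - 25.0)

def single_fiber_potential(d_um, r_um, dt=0.005, T=6.0):
    """Convolve a stand-in transmembrane current with a line-source-style
    weight function; dt and T are in ms, distances in um."""
    t = np.arange(-T, T, dt)                      # time axis in ms
    v = fiber_velocity(d_um)                      # m/s, i.e. mm/ms
    z_um = v * 1000.0 * t                         # axial distance in um
    sigma_r, K = 1.0, 5.0                         # placeholder conductance constants
    u = 1.0 / (4.0 * np.pi * sigma_r * np.sqrt(K * z_um**2 + r_um**2))
    s = 0.3                                       # assumed source width in ms
    # Stand-in triphasic (zero-mean) current, scaled by d^2 as in the model.
    i_t = d_um**2 * (t**2 / s**2 - 1.0) * np.exp(-t**2 / (2.0 * s**2))
    return t, dt * np.convolve(i_t, u, mode="same")

def peak_to_peak(d_um, r_um):
    _, V = single_fiber_potential(d_um, r_um)
    return V.max() - V.min()

# Larger fibers give larger potentials; more distant fibers give smaller ones.
print(peak_to_peak(70, 100) > peak_to_peak(50, 100),
      peak_to_peak(50, 300) < peak_to_peak(50, 100))
```

The negative-peak duration t_z could be extracted from the returned waveform by locating the zero-crossings around its negative peak, as described in the next section.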
For the simulation we have used our own software which is based on the formulation of the simulation method presented by Nandedkar and Stålberg [3]. In this method the potential is a result of the convolution of the weight function and source current given by (1) with the source current described by the formula derived by Nandedkar and Stålberg [3]. Using our software we are able to model the potential from one or more fibers and determine its parameters. In the modeling we have assumed the same parameters for conductance as well as for the source current model as given in [3]. The model has been used by us to examine the properties of SFP in various neuromuscular disorders (Nandedkar et al. [5]).
The amplitude was measured from the maximum positive to the maximum negative peak (Fig. 1). The zero-crossings following the initial downward peak and the subsequent upward peak were identified. The time difference between them is the duration of the negative peak (Fig. 1); henceforth it will be denoted by t_z.
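A minimal sketch of extracting these two parameters from a sampled waveform is given below. It assumes the typical positive-then-negative biphasic SFP shape and is not the authors' measurement code; t_z is taken as the time between the zero-crossings that bracket the waveform minimum.

```python
import numpy as np

def sfp_parameters(v, fs):
    """Return (peak-to-peak amplitude, negative peak duration t_z in s)
    from a sampled SFP waveform v, sampled at fs Hz. Simplified sketch:
    t_z spans the samples between the zero-crossings bracketing the
    waveform minimum."""
    v = np.asarray(v, dtype=float)
    amplitude = v.max() - v.min()
    i_min = int(v.argmin())
    # last non-negative sample before the negative peak
    nonneg_before = np.flatnonzero(v[:i_min] >= 0)
    start = nonneg_before[-1] + 1 if nonneg_before.size else 0
    # first non-negative sample after the negative peak
    nonneg_after = np.flatnonzero(v[i_min:] >= 0)
    end = i_min + (int(nonneg_after[0]) if nonneg_after.size else v.size - i_min)
    t_z = (end - start) / fs
    return amplitude, t_z
```

For example, at the 25 kHz sampling rate discussed later in the paper, a negative phase spanning three samples corresponds to t_z = 0.12 ms.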
The change in the amplitude and duration with radial distance and muscle fiber diameter was investigated. In experimental recordings, the negative peak duration and the amplitude are measured, while the muscle fiber diameter and the radial distance are the "unknown" variables. Hence we may write

d = d(a, t_z)   (3)
r = r(a, t_z)   (4)

By fixing t_z, we obtain a parametric curve with coordinates (d, r)|_{t_z}, where the symbol |_{t_z} denotes the parametric dependence of d and r on a for a fixed t_z. Similarly, by fixing a, we obtain the parametric dependence (d, r)|_a of r and d on t_z. We have approximated the dependencies between d, r, a, and t_z as follows. The negative peak duration was approximated by the following two bi-quadratic polynomials:

t_z = F_1(x, d)   (7)
t_z = F_2(x, r)   (8)

where both F_{i=1,2} are expressed as

F_i(x, y) = e_{i,1} + y(e_{i,2} + y e_{i,3})   (9)

with

e_{i,j} = c_{i,j,1} + x(c_{i,j,2} + x c_{i,j,3})   (10)

In our approach, the F_i are quadratic functions of y, where y is the fiber diameter d for i = 1 and the fiber-to-electrode distance r for i = 2. The coefficients of these quadratic polynomials are themselves given by the quadratic expression (10), with x equal to log_10(a). The coefficients c_{i,j,k} are determined by fitting Eqs. (7) and (8) to the results of the simulation calculations.
We have found experimentally that the form of dependence given by Eqs. (7) and (8) is better suited to least squares bi-quadratic approximation than Eqs. (3) and (4).
In order to calculate the diameter, x = log_10(a) is first computed from the amplitude of the potential and then used in Eq. (10) to calculate the coefficients e_{1,1}, ..., e_{1,3}. Using these coefficients, the quadratic Eq. (9) may be solved for y, i.e., for the diameter; since Eq. (9) is quadratic in y, it may be solved directly. Similarly, by calculating the coefficients e_{2,1}, ..., e_{2,3}, Eq. (9) may be solved to obtain the radial distance. The same Eq. (9) is thus used to derive both fiber diameter and radial distance. The quantity that is determined depends on whether one uses the set of coefficients c_{1,j,k} (for the determination of d) or the coefficients c_{2,j,k} given in the second column of Table 1 (for the determination of r), for a given a and t_z.
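The direct solution of the quadratic can be sketched as follows. The actual coefficient values come from Table 1, which is not reproduced here, so the coefficient table passed in is a placeholder; the nested evaluation follows the structure of Eqs. (9) and (10) as described in the text.

```python
import math

def solve_for_y(c, a_mV, t_z_ms, lo=25.0, hi=500.0):
    """Solve Eq. (9), e1 + e2*y + e3*y**2 = t_z, for y, where the
    e-coefficients follow Eq. (10) with x = log10(amplitude).
    `c` is a 3x3 coefficient table (one of the two sets from Table 1;
    placeholder values must be supplied by the user). Returns the root
    inside the physically meaningful range [lo, hi] in um."""
    x = math.log10(a_mV)
    # Eq. (10): e_j = c_{j,1} + x*(c_{j,2} + x*c_{j,3})
    e1, e2, e3 = (cj[0] + x * (cj[1] + x * cj[2]) for cj in c)
    # Quadratic in y: e3*y**2 + e2*y + (e1 - t_z) = 0
    disc = e2 * e2 - 4.0 * e3 * (e1 - t_z_ms)
    if disc < 0:
        raise ValueError("no real solution for these inputs")
    roots = [(-e2 + s * math.sqrt(disc)) / (2.0 * e3) for s in (1.0, -1.0)]
    inside = [y for y in roots if lo <= y <= hi]
    return inside[0] if inside else min(roots, key=lambda y: abs(y - lo))
```

The same routine yields the diameter or the radial distance depending on which coefficient set is supplied, mirroring the paper's use of a single Eq. (9) for both quantities.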
Results
The results of the simulation studies are shown in Figs. 2, 3, 4, 5, and 6. The dependence of the weight function on its parameters is shown in Fig. 2. The weight function is broader when the radial distance between the muscle fiber and the recording electrode is increased (Fig. 2a). At the same distance, smaller fibers have a broader weight function due to their slower conduction velocity (Fig. 2b). The simulated waveforms also show differences in amplitude and duration. When their negative peaks are aligned and their amplitudes are normalized, larger fibers have a shorter negative peak duration (Fig. 3a). At identical radial distance, larger fibers have a higher amplitude and a shorter duration (Figs. 3b, 4). The amplitude changes much more with radial distance than does the negative peak duration (Fig. 5). The amplitude and negative peak duration dependencies are shown on a single graph (Fig. 6). The solid curves represent the dependency (d, r)|_{t_z} for a range of t_z values (0.4-1.4 ms), and the dashed curves represent the dependency (d, r)|_a for a range of amplitude values (50 μV-5 mV). A plot of these dependencies (Fig. 6) allows one to estimate the fiber diameter and the radial distance knowing the amplitude and the negative peak duration.

Fig. 1 Definitions of the SFP parameters. The amplitude was measured from the maximum positive (a) to the maximum negative peak (b). The time difference between the zero-crossings (c) and (d) is the duration of the negative peak t_z.

Table 1 The values of the coefficients c_{1,j,k} and c_{2,j,k} used in Eqs. (9) and (10).
For example, if a potential with an amplitude of 0.7 mV and a negative peak duration of 0.6 ms is recorded, we can determine the distance and diameter from the intersection of the curve for constant duration = 0.6 ms (solid line) and the curve for constant amplitude = 0.7 mV (dashed line). The point of intersection gives the fiber diameter (roughly 65 μm) and the radial distance (150 μm).
While in principle the graphical tool, provided that the curves for constant amplitude or negative peak duration are dense enough, may be used to determine the fiber diameter and the distance from the needle, we have approximated the data with analytical formulae, which makes them more suitable for use. The graph, however, may be used to verify that the curves are smooth, nearly linear, and that the two variables a and t_z give rise to two families of curves that are well separated (Fig. 6).

Fig. 3 (a) SFPs for fibers of diameter 30 and 90 μm located at a distance of 100 μm from the electrode. Both potentials have been scaled so that the maximum = 1. The potential due to the larger fiber has been shifted in time (by +5.9 ms) so that the maxima coincide. The increase of the width of the potential with decreasing diameter is clearly seen. (b) SFPs for fibers with diameters of 30, 45, 60, 75, and 90 μm located at a distance of 200 μm from the electrode. The amplitude of the SFP increases with fiber diameter, and at the same time the potentials from larger fibers arrive earlier at the electrode.

Fig. 4 The negative peak duration t_z (dotted line) decreases with increasing fiber diameter (at a fixed fiber-to-electrode distance). The solid line shows the dependence of the amplitude on fiber diameter: with increasing diameter the amplitude increases. The amplitude is measured in mV, the negative peak duration in ms.
The graph may readily be used to compare the diameters of two or more SFPs. If two SFPs have the same amplitude (so that on the graph we move along one of the constant-amplitude curves), then the one with the shorter negative peak duration is due to the larger fiber. This is because the negative peak duration depends on the conduction velocity, which in turn is proportional to the fiber diameter. Hence the larger the fiber, the shorter the negative peak duration.
To explain the dependence of fiber diameter on amplitude for a constant negative peak duration, one has to note that the amplitude depends primarily on two factors: the distance from the electrode and the fiber diameter (see Fig. 3b). For a constant t_z, the amplitude changes mainly due to the change in fiber-to-electrode distance. An increase in the radial distance, reflected in a decreasing amplitude, broadens the weight function; to keep t_z constant, the fiber diameter then has to increase with the radial distance. Hence, of two SFPs with the same negative peak duration, the one with the smaller amplitude is due to the larger fiber.
In order to calculate the values of the coefficients c_{1,j,k} and c_{2,j,k}, we simulated several hundred SFPs with fiber diameters ranging from 25 to 90 μm and radial distances ranging from 50 to 500 μm, calculating for each SFP its peak-to-peak amplitude and negative peak duration. These data were then approximated using formulae (9) and (10), and the coefficients were determined by a standard linear least-squares method. These coefficients are given in Table 1. The mean error of the determination of t_z from Eq. (7) is 5 × 10⁻³, and for Eq. (8) it is 3 × 10⁻³; we have therefore rounded the coefficients to four significant figures.
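The fitting step can be sketched as an ordinary least-squares solve over the nine monomials of the bi-quadratic model of Eqs. (9)-(10). This is a sketch, not the authors' code; it fits t_z as a function of x = log10(amplitude) and y (diameter or distance).

```python
import numpy as np

def fit_biquadratic(x, y, t_z):
    """Least-squares fit of the bi-quadratic model of Eqs. (9)-(10):
    t_z = sum_{j,k in {0,1,2}} c[j,k] * y**j * x**k,
    where x = log10(amplitude) and y is fiber diameter (or distance).
    Returns the 3x3 coefficient array c."""
    x, y, t_z = map(np.asarray, (x, y, t_z))
    # Design matrix: one column per monomial y**j * x**k
    A = np.column_stack([y**j * x**k for j in range(3) for k in range(3)])
    coef, *_ = np.linalg.lstsq(A, t_z, rcond=None)
    return coef.reshape(3, 3)
```

Applied once to the simulated (diameter, t_z) data and once to the (distance, t_z) data, this yields the two coefficient sets of Table 1.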
Using this method to approximate the data, it is found that the root mean square error of the determination of the diameter is 2 μm, and for the radial distance it is 6 μm. The maximum error for d is 8 μm, and for r it is 50 μm. The maximum errors occur at the ends of the interpolation regions, for r > 310 μm, d < 35 μm, or d > 125 μm. Hence the approximations given by Eqs. (7) and (8) enable one to determine d and r from a and t_z with satisfactory accuracy.
For example, using these equations for a simulated SFP with fiber diameter d = 55 μm and fiber-to-electrode distance r = 80 μm, for which a = 1.357 mV (hence log_10(a) = 0.13258) and t_z = 0.593 ms, one obtains d = 55.7 μm and r = 80.9 μm. In this case the differences between the parameters used to model the SFP and their values obtained from the calculation are less than 1 μm.
In order to estimate the effect of measurement errors in a and t_z on the result, we may assume a measurement error of ±20 μV for the amplitude. For t_z we assume an accuracy of ±0.04 ms, which corresponds to a sampling frequency of 25 kHz. It is found that, with the assumed error in a, the change in the derived diameter is less than 0.3 μm; changing t_z by the assumed value changes d by 8 μm. It can easily be verified, using the coefficients of the fit, that the diameter is more sensitive to measurement errors in t_z than to errors in a. Therefore, to be able to determine the fiber diameter and the fiber-to-electrode distance, a good-quality SFP with sufficient time resolution is required. It is also for this reason that the graphical tool can be used only for approximate or qualitative estimates, as it would have to contain tens of constant-t_z curves to provide accuracy comparable to the presented formulae.

Fig. 5 The dependence of t_z (dotted line) and amplitude (solid line) on the electrode distance from the fiber, for a fiber of diameter 55 μm and the electrode located at distances from 50 to 300 μm. The amplitude is measured in mV, the negative peak duration in ms. The amplitude decreases with distance and the negative peak duration increases.

Fig. 6 The dependence of fiber diameter (d) on fiber-to-electrode distance (r) for a fixed amplitude (a) of the potential or a fixed duration (t_z). Solid curves represent constant duration and dashed curves represent constant amplitude. The values of amplitude are given along the top and right margins of the graph; the values of constant duration are given in the lower part of the graph.
Discussion
From the simulation studies it has been found that the negative peak duration of the SFP is a quantity useful in the determination of fiber properties, and in particular of fiber diameter. Rodriguez and co-workers [6,7] have shown that this quantity is related to the width of the source current generating the potential. We have found that t_z is related to the fiber diameter. This relation is easily understandable in view of the above findings, because the duration is proportional to the ratio of the width (~1.5 mm) of the negative peak of the second derivative of the intracellular potential [3] to the source propagation velocity. The velocity of propagation depends on the fiber diameter; therefore the larger the diameter, the less time it takes for this feature to pass the electrode, and hence the shorter the negative peak duration. Thus, it turns out that t_z is one of the basic quantities characterizing the SFP, and it gives a direct link to the underlying physiology.
We have derived a set of analytical formulae with which it is possible to determine the fiber diameter (and the fiber-to-electrode distance) from measurements of the SFP's amplitude and negative peak duration. The analysis of the sensitivity of the determined fiber diameter to errors in the measured amplitude and negative peak duration shows that a sampling rate of 25 kHz is required for the error in the determined fiber diameter to be less than 8 μm. Errors in the determination of the amplitude have a much smaller impact on the error in d. In order to be able to determine the fiber diameter and the fiber-to-electrode distance, the noise level in the recorded SFP has to be low (< 20 μV).
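This kind of sensitivity analysis can be sketched as a central-difference propagation of the assumed measurement errors. The estimator below is a stand-in for any diameter-estimation routine (for example one built from Eqs. (9)-(10)); it is not the fitted model from the paper.

```python
def error_propagation(estimator, a_mV, t_z_ms, da=0.02, dt=0.04):
    """Approximate the change in the derived diameter caused by
    measurement errors da in amplitude (mV; 0.02 mV = 20 uV) and
    dt in negative peak duration (ms), via central differences.
    `estimator(a, t_z)` is any diameter-estimation routine."""
    dd_da = (estimator(a_mV + da, t_z_ms) - estimator(a_mV - da, t_z_ms)) / (2 * da)
    dd_dt = (estimator(a_mV, t_z_ms + dt) - estimator(a_mV, t_z_ms - dt)) / (2 * dt)
    return abs(dd_da) * da, abs(dd_dt) * dt
```

Comparing the two returned error contributions at a given operating point reproduces the paper's qualitative conclusion that the derived diameter is far more sensitive to t_z than to the amplitude.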
The ability to quantify the fiber diameter can add significantly to the SFEMG examination. It may help one to study muscle fiber diameter variability in many different muscles, and thus complement muscle biopsy studies.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
The Contribution of Attentional Bias to Negative Information to Social Anxiety-Linked Heightened State Anxiety During a Social Event
It has been proposed that people with high compared to low trait social anxiety pay greater attention to negative information concerning upcoming social events, and that such attentional bias drives the disproportionately elevated levels of state anxiety they exhibit in response to these events. These two hypotheses have not yet been adequately tested. We recruited participants who were high or low in trait social anxiety. Participants completed a mock job interview, and reported their state anxiety during this experience. Prior attentional bias to negative, relative to benign, information concerning this event was assessed using a variant of the dual probe approach, in which participants were exposed to dual videos, each comprising two video clips of people who had completed the mock job interview, discussing either negative or benign aspects of this experience. High compared to low trait social anxiety participants displayed higher attentional bias to negative social information, and this bias mediated the association between elevated trait social anxiety and heightened state anxiety experienced during the mock job interview. These findings demonstrate that elevated trait social anxiety is characterized by an attentional bias to negative, relative to benign, information concerning an upcoming social event, and that this attentional bias statistically predicts the disproportionately elevated state anxiety that people with high trait social anxiety experience during such an event.
Elevations in state anxiety can be triggered by social events, such as giving a speech or during a job interview (Hofmann, 2007). Importantly, people differ in their tendency to experience elevations in state anxiety when engaging in social events. This dimension of individual difference is known as trait social anxiety. People with high trait social anxiety, relative to those with low trait social anxiety, tend to exhibit elevated levels of state anxiety when engaging in a social event. Other people who have previously experienced such events may share information about them concerning their negativity. It has been proposed that people with high trait social anxiety pay greater attention to negative, compared to benign, information concerning upcoming social events (regardless of the absolute level of bias exhibited), than people with low trait social anxiety, and that such attentional bias drives the disproportionately elevated levels of state anxiety they exhibit in response to these events (Heimberg et al., 2010). High trait social anxiety is particularly prevalent among adolescents and young adults (Spence & Rapee, 2016), and can exert a significant adverse impact on social, academic and occupational functioning (Davila & Beck, 2002; Stein & Kean, 2000). While there is considerable evidence demonstrating that people with high trait social anxiety display greater attentional bias to negative social information, such as negative socially relevant words (e.g. timid or embarrassed) or negative facial expressions (e.g. angry or disgusted; cf. Mathews & MacLeod, 2005), such research has not yet investigated whether people with high trait social anxiety display greater attentional bias towards negative, compared to benign, information concerning upcoming social events.
By extension, it also has not been investigated whether such attentional bias can statistically predict the disproportionately elevated levels of state anxiety that people with high trait social anxiety exhibit when engaging in these events. Thus, the aim of the present study was to test the validity of these two hypotheses.
The most common method of assessing attentional bias is the attentional probe task (MacLeod et al., 1986). In this task, participants are briefly presented with stimulus pairs, usually comprising one negative and one benign member. The stimuli presented are often simply pairs of emotionally toned words (e.g. timid / proud), or pairs of face images displaying different emotionally-toned facial expressions (e.g. angry / happy). A single visual probe stimulus is subsequently presented in the locus where either member of the stimulus pair was just displayed, and participants are required to quickly identify this probe, which remains onscreen until the identification response is detected. The degree to which this identification response is speeded for probes appearing in the locus of the negative compared to benign member of the stimulus pair provides an index of attentional bias to negative information. In studies that have used this conventional probe approach to compare patterns of attentional bias in people high and low in trait social anxiety, it has repeatedly been shown that the former individuals are disproportionately speeded to identify probes appearing in the locus of negative member of stimulus pairs, suggesting that high trait social anxiety is characterized by relatively greater attentional bias to such negative information (Asmundson & Stein, 1994;Gilboa-Schechtman et al., 1999;Vassilopoulos, 2005).
However, these previous studies have three limitations in terms of their capacity to shed light on the two hypotheses under present consideration. First, they have relied upon the conventional attention probe task to assess attentional bias, which has been shown to have low psychometric reliability, with the internal consistency of the attentional bias index often being < 0.30 (McNally, 2018). Second, they have not involved assessing participants' state anxiety response to a potentially stressful social event. Third, the stimulus information employed in these previous studies has typically been single words or faces that do not convey negative and benign information concerning a specific social event that participants know they are about to experience.
To overcome the low psychometric reliability of the attentional probe task, Grafton, Teng and MacLeod (2021) recently developed a dual probe attentional bias assessment approach. Specifically, rather than presenting a single probe on each trial in the locus of either member of stimulus pairs, which remains on-screen until the participant executes an identification response, the dual probe approach instead involves the simultaneous presentation of two probes, very briefly (200 ms), one in the locus of negative information and the other in the locus of benign information. The participant simply identifies whichever probes they see. The proportion of correctly identified probes appearing in the locus of negative information provides an index of attentional bias to such information. In addition, a major advantage of the dual probe approach is that it can readily be delivered using continuous video stimuli, thereby enabling presentation of richer information than the type of stimuli typically delivered within the conventional single probe task (i.e. simple word or pictorial stimuli). Grafton et al. showed that, when the dual probe task is configured to present such video stimuli, it is capable of sensitively detecting anxiety-linked attentional bias, and importantly, the resulting attentional bias index demonstrates high psychometric reliability (internal consistency = 0.97).
In the present study, this dual probe approach was employed in a manner that overcame the other two limitations of previous research, to test the validity of the two hypotheses under consideration. Recruitment was focused on young adults, given the prevalence of high trait social anxiety amongst this cohort. To assess variation in the degree to which participants who were high or low in trait social anxiety experienced elevations in state anxiety in response to a potentially stressful social event, we delivered a mock job interview at the end of the experimental session. To assess prior attentional bias to negative relative to benign information concerning this event, we employed a variant of the dual probe approach in which participants were exposed to dual videos, each comprising two video clips, each a head-and-shoulders shot of a first-year university student who had previously completed the mock job interview, discussing either negative or benign aspects of this experience. This enabled a test of the first hypothesis, that people with high trait social anxiety display relatively greater attentional bias towards negative, compared to benign, information concerning upcoming social events. If this hypothesis is correct, then participants high in trait social anxiety will display higher negative attentional bias index scores than participants low in trait social anxiety. To test the second hypothesis, that such bias can statistically predict the disproportionately elevated levels of state anxiety that people with high trait social anxiety exhibit in response to upcoming social events, participants' state anxiety during the mock job interview was assessed. It was assumed that participants high in trait social anxiety, relative to participants low in trait social anxiety, would exhibit disproportionately elevated levels of state anxiety during this job interview.
If the second hypothesis is correct, then this association between trait social anxiety and state anxiety during the mock job interview will be mediated by attentional bias to negative information.
Participants
Six-hundred and eleven first year psychology students at the University of Western Australia were screened for trait social anxiety using the Social Interaction Anxiety Scale (SIAS; Mattick & Clarke 1998a, b) at the beginning of the university semester. Twenty-five participants were recruited from the upper third of the SIAS score distribution (scoring 35 or above) and were designated the High Trait Social Anxiety Group. Twenty-five participants were recruited from the lower third of the SIAS score distribution (scoring 20 or below) and were designated the Low Trait Social Anxiety Group 1 . This gave rise to a between-group factor of Trait Social Anxiety Group (High Trait Social Anxiety vs. Low Trait Social Anxiety).
Questionnaires
Social Interaction Anxiety Scale. The Social Interaction Anxiety Scale assesses a person's tendency to experience elevations in state anxiety when engaging in social events, and so it can be considered a measure of trait social anxiety (SIAS; Mattick & Clarke 1998a, b). The SIAS comprises 20 items that each describe an anxiety symptom that could be elicited by exposure to social situations. Respondents are required to rate each item on a five-point scale ranging from 0 ("Not at all characteristic or true of me") to 4 ("Extremely characteristic or true of me"). This yields a score between 0 and 80, with higher scores indicating higher levels of trait social anxiety. The SIAS is one of the most widely used measures of trait social anxiety (Rodebaugh et al., 2006), and has been shown to have high reliability and validity (Hedman et al., 2010).
Spielberger State Anxiety Inventory - Short Form. The Spielberger State Anxiety Inventory - Short Form was programmed for computer delivery to assess state anxiety (STAI-S; Marteau & Bekker 1992). The short form of the STAI-S comprises six items. Three items describe feelings that indicate high state anxiety (e.g. I feel tense), whereas the other three items describe feelings that indicate low state anxiety (e.g. I feel calm). Participants responded to each item using a visual analogue scale. Each scale consisted of a 15 cm horizontal line, divided into 60 equal partitions, with the terminal labels "Not at all" and "Very much", and the intermediary labels "Somewhat" and "Moderately". Using the mouse, participants moved a cursor along the line to a point that corresponded to their state anxiety experience, pressing the left mouse button to register their response. This resulted in a score ranging from 1 to 60 for each item. The three items describing feelings that indicate low state anxiety were reverse-scored, before a mean score was computed across all six items, which ranged from 1 to 60, with higher scores indicating higher levels of state anxiety.
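The reverse scoring and averaging can be sketched as follows. Which three of the six item positions carry the calm-worded items is an assumption here (the text does not specify item order), and reverse scoring as 61 − response is inferred from the stated 1-60 range.

```python
def stai_s_score(responses, calm_items=(1, 3, 5)):
    """Score the six-item STAI-S short form. Each response is a
    visual-analogue value from 1 to 60. Items at the indices in
    `calm_items` (illustrative positions, not from the paper) describe
    low-anxiety feelings and are reverse-scored as 61 - response.
    Returns the mean across all six items (1-60; higher = more
    state anxiety)."""
    assert len(responses) == 6
    scored = [61 - r if i in calm_items else r
              for i, r in enumerate(responses)]
    return sum(scored) / len(scored)
```

A maximally anxious response pattern therefore scores 60, and a maximally calm one scores 1, matching the stated range of the composite.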
Stimulus Videos Describing Negative and Benign Aspects of Upcoming Social Event
The present study required the creation of 24 dual videos, each comprising one video clip in which an individual conveyed negative information concerning the mock job interview experience, and one video clip in which a different individual conveyed benign information concerning the mock job interview experience. To achieve this, we recruited a separate cohort of 24 first-year university students (12 male and 12 female), who performed as "actors" to create the video content. We wanted to ensure that the informational content of the videos was credible, and so each of these student actors was first required to complete the mock job interview, before then recording two video clips: one in which they described negative aspects of the mock job interview experience (negative video clips), and the other in which they described benign aspects of the mock job interview experience (benign video clips).
Each video clip began and ended with a scripted opening and closing statement, respectively. Between these statements, the video content was unscripted, but was structured such that the student actors described negative or benign aspects of the mock job interview with respect to each of four predetermined topics. For the negative video clips, these four topics were: (i) thoughts of self-doubt; (ii) physical symptoms of anxiety experienced during the mock job interview; (iii) difficulties in organizing coherent answers under pressure; and (iv) concern that others watching their recorded interview would judge them poorly. For the benign video clips, these four topics were: (i) the rewarding feeling that comes from having completed a challenging experience; (ii) increased confidence in public speaking having completed the mock job interview; (iii) experience gained for an interview in real life; and (iv) the absence of any negative experience. The order in which student actors were required to talk about each of these topics was randomized. Each video clip lasted for 60 s. Each video was edited to ensure that the face of the student actor was centered on the vertical and horizontal axes, and occupied two-thirds of the video.

From these video clips, we created 24 dual videos, using VideoPad Video Editor® (Version 5.02, NCH Software, 2017). To achieve this, we first generated 12 pairs of male and female student actors, at random. Then, for each student pair, we created two dual videos, each comprising one negative video clip and one benign video clip. In one of these dual videos, the student in the negative video clip was the male of the pair and the student in the benign video clip was the female; in the other dual video, the student in the negative video clip was the female of the pair and the student in the benign video clip was the male. The purpose of counterbalancing the valence of information and the biological sex of the actor was to avoid any confound between the differing valence of information and the visual/auditory differences that distinguish biological males and females. Across the dual videos, the negative video clip began in the left and right positions with equal frequency. In each dual video, the component video clips each measured 17.2 cm × 13 cm. The centre of one component video clip was positioned 11.5 cm to the left, and the centre of the other 11.5 cm to the right, of screen centre. The audio tracks for the left and right video clips were played through the left and right audio channels, respectively, and were equalised in VideoPad Video Editor prior to compilation of the dual video. The positions of the two component video clips, within each dual video, switched with each other at random intervals of five, six, or seven seconds, such that across the 60 s duration of each dual video, the positions of the two component video clips switched 9 times.

Attentional Bias Assessment Task

During presentation of each dual video, at pseudorandom points, a pair of small visual probe stimuli was presented for 200 ms. These probe pairs never appeared within the 2 s window preceding or following a switch in the position of the two component videos. These probe stimuli were grey, 3 × 3 grids on a black background, in which one of the outer eight grid positions was occupied by a small grey square. One probe appeared in the centre of the location in which the video clip in the left position had been playing, and the other probe appeared in the centre of the location in which the video clip in the right position had been playing. The identity of the probes in each pair was always different, but participants were informed that the identity of the probes in each pair was always the same. Participants were required to identify whatever probes they saw, and to indicate probe identity using the 3 × 3 number pad on the keyboard, pressing the key that corresponded to the position of the small grey square within the 3 × 3 probe grid (see Fig. 1). Across each dual video, 10 probe pairs were presented; thus, across the task, 240 probe pairs were presented. The 24 dual videos were presented in a random order, with the constraint that all student actors were presented once before appearing again. A brief rest period was provided after every six dual videos. The attentional bias assessment task lasted approximately 24 min.

As Grafton et al. (2021) point out, attentional distribution can be accurately inferred only if participants correctly identify probes. Thus, in keeping with Grafton et al., participants who failed to identify probes within at least 80% of dual probe presentations were excluded. The rationale underpinning the dual probe task assumes that participants will most often see, and therefore identify, those probes that appear in the locus of the component video clip to which they were attending. Therefore, attentional bias to negative, compared to benign, information concerning the upcoming social event can be indexed by calculating the proportion of the correctly identified probes, overall, that had appeared in the locus of the negative component video clips within the dual videos. Thus, we computed this index of attentional bias to negative information using the following equation: Index of Attentional Bias to Negative Information About Interview = Number of correctly identified probes in locus of negative video clips / Total number of correctly identified probes.

A higher score on this index reflects greater attentional bias to negative, compared to benign, information concerning the upcoming mock job interview event.

Social Event: Mock Job Interview

As previously noted, participants were aware that, at the end of the experimental session, they would be required to complete a social event, which in the present study was a mock job interview. The mock job interview can be considered a social event as it involves communicative interaction with other people. Participants were told that this would be a mock job interview for the role of a research assistant within the University of Western Australia's psychology department. This role was chosen given that it represents the type of employment opportunity university students may seek. During this mock job interview, participants were shown a video, which participants were informed had been pre-recorded, in which four 'interviewers' were sitting at a table facing the participant. Each interviewer asked one question (e.g. "What personal qualities in general make you a good employee, and could you provide some examples?"), and the participant had 60 seconds to respond aloud to this question.

Each participant began the session by reading an information sheet, providing informed consent, and completing the Social Interaction Anxiety Scale. They were informed that they would later be required to complete a mock job interview, and that their performance would be recorded and livestreamed to the experimenter outside the testing room. Next, the participant was told that, before they completed this mock job interview, they would be given the opportunity to view videos in which people who had previously completed the task described their personal appraisals of this experience. The participant was then seated approximately 60 cm in front of the computer screen, and provided with instructions for the attentional bias assessment task. They were instructed that, during presentation of the videos, they should allow their attention to move as it normally would, and should identify any probes they saw, using the number pad. Next, the participant completed a short practice involving two dual videos, each comprising two benign video clips, before completing the attentional bias assessment task. During this task, state anxiety was assessed after every six dual videos.
The participant notified the experimenter when they had completed the attentional bias assessment task, and was question. Participants were instructed that their response to each question should make full use of the allocated 60 seconds. A count-down clock positioned in the top-right hand corner of the video displayed the amount of time that the participant had left to respond. Participants were told that their responses would be video recorded, and also livestreamed to the experimenter, who would be seated outside the testing room.
Experimental Hardware
A Hewlett-Packard i7-6700 Desktop PC, 21.5-inch LG Flatron-E2211 monitor with inbuilt webcam, standard QWERTY keyboard and two-button mouse, and Sennheiser HD202 headphones, were used to deliver the experimental tasks.
Procedure
Each participant was tested individually. The test session commenced with the participant reading an information Fig. 1 Illustrative depiction of event during attentional bias assessment task the mock job interview to an independent samples t-test that considered the between-group factor Trait Social Anxiety Group (High Trait Social Anxiety vs. Low Trait Social Anxiety). Before doing so, the data was inspected for outliers using the Median Absolute Deviation approach (Leys et al., 2013). No outlying scores were identified. The analysis revealed a significant main effect of Trait Social Anxiety Group, t (45) = 4.54, p < .001, Cohen's d = 1.32, reflecting the fact that participants in the High Trait Social Anxiety Group (M = 39.49, SD = 10.80) reported higher state anxiety scores compared to participants in the Low Trait Social Anxiety Group (M = 26.89, SD = 7.96), thus verifying that the high trait social participants experienced greater state anxiety during the mock job interview than did their low trait social counterparts 2 .
Did Participants High in Trait Social Anxiety Exhibit Relatively Greater Attentional Bias to the Negative Information?
As mentioned, participants who failed to identify probes on at least 80% of dual probe presentations were eliminated. This resulted in the further exclusion of one participant from each group. The remaining participants, on average, correctly identified probes on 92.25% (SD = 4.62%) of dual probe presentations. For each participant, we computed the Index of Attentional Bias to Negative Information About Job Interview, as described in the Method section. Outlier analysis was conducted in the same manner as before. The attentional bias scores of six participants (two in the High Trait Social Anxiety group and four in the Low Trait Social Anxiety group) were identified as outliers. These scores were Winsorized into the nearest value within the variable sample, as recommended by Field (2013), with the resulting distribution of attentional bias scores remaining normally distributed (skew = -0.15; kurtosis = -0.69). Reassuringly, the internal consistency of the attentional bias index scores, which was computed by calculating split-half reliability across odd and even trials, was extremely high, at 0.93, again underscoring the excellent psychometric reliability of the dual probe task.
To determine the validity of the hypothesis that people with high trait social anxiety display relatively greater attentional bias towards negative, compared to benign, information concerning an upcoming social event, we subjected the attentional bias index scores to an independent samples t-test that considered the between-group factor Trait Social 2 Participants in the High Trait Social Anxiety Group (M = 27.59, SD = 10.02) also reported higher state anxiety scores compared to participants in the Low Trait Social Anxiety Group (M = 14.35, SD = 7.35), during the attentional bias assessment task, F (1, 45) = 28.00, p < .001, partial η 2 = 0.38 then taken to an adjacent testing room to complete the mock job interview. In the mock job interview, the participant was instructed to stand one meter in front of a computer screen, which was positioned at eye-level. They were reminded that their responses would be recorded via the monitor's inbuilt webcam, and livestreamed to the experimenter who would be seated outside the testing room. They were then left to complete the mock job interview. Immediately upon its completion, the participant rated the degree to which they had experienced state anxiety during the mock job interview. Finally, the participant was debriefed about the purpose of the study, and thanked for their participation.
Group Characteristics at Time of Testing
As will be recalled, the High and Low Trait Social Anxiety Groups were created based upon SIAS scores obtained as part of a mass screening procedure conducted at the beginning of the university semester. It was recognized that, at the point of the test session, participants' SIAS scores may have regressed towards the mean. Thus, to ensure that no participant was inappropriately classified as being high or low in trait social anxiety, a median split was carried out on the SIAS scores obtained at test time. Any member of the High Trait Social Anxiety Group who at test time scored below the median (SIAS = 25) of the test time SIAS score distribution was eliminated, as was any member of the Low Trait Social Anxiety Group who at test time scored above this median. This resulted in the exclusion of one participant from each group. The resulting High Trait Social Anxiety Group comprised 16 females and 8 males, with a mean age of 19.35 years (SD = 1.87), and a mean SIAS score of 42.33 (SD = 8.82). The resulting Low Trait Social Anxiety Group comprised 10 females and 13 males, with a mean age of 19.45 years (SD = 2.34), and a mean SIAS score of 14.83 (SD = 4.94). The two groups differed significantly in terms of SIAS scores, as intended, t (45) = 13.10, p < .001, and they did not differ significantly in terms gender ratio, χ 2 (1, n = 47) = 2.55, p = .11, or age, t (45) = 0.17, p = .87.
Did Participants High in Trait Social Anxiety Exhibit Disproportionately Elevated State Anxiety During the Mock Job Interview?
If the mock job interview served as a social stressor, then the high compared to low trait social anxiety participants would report greater levels of state anxiety during this experience. To confirm this assumption, we subjected the state anxiety scores indicating state anxiety experienced during
Discussion
The aim of the current study was to test the validity of two hypotheses: the first hypothesis was that people with high trait social anxiety display relatively greater attentional bias to negative, compared to benign, information concerning upcoming social events, and the second hypothesis was that such attentional bias statistically predicts the heightened state anxiety that these individuals experience during social events. To enable test of these hypotheses, we capitalised upon the dual probe variant of the attentional probe task recently developed by Grafton et al. (2021). Unlike the conventional single probe variant of this task, this new approach has been shown to have excellent psychometric reliability. In our current dual probe task, the internal consistency of the attentional bias index scores was extremely high, at 0.93, which is comparable to that reported by Grafton et al., and is considerably higher than what is typically obtained on the conventional single probe variant of the attentional probe task. The dual probe task also readily enables presentation of ecologically valid information conveying the negative and benign aspects of an upcoming social event, delivered using video clips of individuals who had previously experienced the event. Employing such video stimuli, we have demonstrated that people with high trait social anxiety display relatively greater attentional bias to negative information concerning an upcoming social event, consistent with the first hypothesis under test.
We suggest that future researchers should now build on this novel finding, to test more refined hypotheses concerning the patterns of attentional bias that characterise elevated trait social, perhaps by refining the precise nature of the information conveyed by video clip stimuli delivered within the dual probe task. For example, it has long been recognised that the selective processing of negative socially-relevant information may be adaptive, if that information concerns aspects of an upcoming event that can be controlled in ways that reduce the likelihood of experiencing a negative social outcome (e.g. Ledley & Heimberg 2006). Indeed, everyone may attend to negative information concerning an upcoming social event, when that information concerns controllable aspects of the event, with people high in trait social anxiety only showing greater attention to negative information concerning an upcoming social event, when this information concerns uncontrollable aspects of the event.
In the present study, the information presented within the video clip stimuli was not developed to permit dissociation of attentional responding to information that concerns controllable vs. uncontrollable aspects of an upcoming social event. However, such stimuli could be readily developed and delivered within the dual probe task to enable tests of Anxiety Group (High Trait Social Anxiety vs. Low Trait Social Anxiety). This analysis revealed a significant main effect of Trait Social Anxiety Group, t (45) = 2.25, p < .05, Cohen's d = 0.67. This significant main effect reflected the fact that participants in the High Trait Social Anxiety Group (M = 0.48, SD = 0.09) displayed higher attentional bias to negative social information scores than participants in the Low Trait Social Anxiety Group (M = 0.42, SD = 0.10), and is consistent with the hypothesis that, compared to their low trait counterparts, the high trait socially anxious participants would display relatively greater attentional bias to negative information concerning an upcoming social event.
Did Attentional Bias to Negative Information Mediate the Association Between Trait Social Anxiety and State Anxiety During the Mock Job Interview?
Having confirmed that participants with high trait social anxiety experienced heightened state anxiety during the mock job interview, and that elevated people with high trait social anxiety display relatively greater attentional bias to negative information concerning this upcoming social event, we went on to test the validity of the predictions generated by the second hypotheses under consideration, that this negative attentional bias would mediate the association between elevated trait social anxiety and heightened state anxiety during the mock job interview, by conducting a simple mediation analysis. In this analysis, conducted on the data from the 45 participants remaining following the above described exclusions, Trait Social Anxiety (TSA) Group was entered as the predictor variable, State Anxiety During Job Interview (SA) scores were entered as the outcome variable, and Attentional Bias to Negative Information About Interview Index (ABI) scores were entered as the mediator variable. In keeping with Hayes (2013), the analysis was conducted using bootstrapping with 5000 resamples to calculate 95% bias-corrected confidence intervals (CIs) for the indirect effect.
The result of this mediation analysis confirmed that TSA Group predicted the SA scores (c' path; β = 0.89, p < .001), and the ABI scores (a path; β = 0.64, p < .05). These ABI scores also predicted the SA scores (b path; β = 0.33, p < .05). Of most relevance to the hypothesis under consideration, TSA Group predicted the SA scores in a manner that was mediated by ABI scores (ab path), as the bootstrapped 95% CIs of this indirect effect did not include zero (0.10-5.28). Thus, these results indicate that attentional bias to negative information about the interview did indeed mediate the association between elevated trait social anxiety and heightened state anxiety experienced during the mock job interview. concurrently report state anxiety, it is possible that variation in attentional bias influenced how participants reconstructed their state anxiety experience when making this retrospective report. We suggest that future researchers address this possibility by replicating the present study, but assessing state anxiety during the interview using concurrent physiological markers, such as, heart rate variability (Dimitriev et al., 2016). Such future work should also consider exposing participants to social stressors other than a mock job interview. Indeed, the types of stressors that evoke social anxiety are likely to differ from one person to the next. By employing a range of social stressors, future researchers will be able to determine the generalisability of the present findings. It should also be noted that the sample size in the present study was relatively small. Post-hoc power analysis revealed that the study had 0.60 power to detect the social anxiety-linked group difference in attentional bias to negative information, and 0.50 power to detect the hypothesised indirect effect, reflecting the mediating impact of such attentional bias on the association between elevated trait social anxiety and heightened state anxiety experienced during the interview. 
Thus, we suggest that future researchers seek to replicate the present study with larger samples. When doing so, we recommend that these replications be pre-registered, as doing so would increase confidence in the results obtained. Finally, we suggest that future researchers seek to extend the current work by determining whether the presently observed pattern of findings are displayed by people high in trait social anxiety across the full developmental trajectory, across different socio-economic groups, across different races and ethnicities, and when such anxiety is assessed using complementary measures of trait social anxiety, for example, the Social Phobia Scale (Mattick & Clarke, 1998a, b).
For the moment, however, the present findings demonstrate that people with trait social anxiety display relatively greater attentional bias to negative information concerning an upcoming social event, and the pattern of observed mediation is consistent with the possibility that this attentional bias can statistically predict the disproportionately elevated state anxiety that people with high trait social anxiety experience when they engage in this social event. We hope that these findings, and the new approach we have employed to assess social anxiety-linked selective attention, will be of value to future investigators seeking to better understand contribution of attentional bias to elevated trait social anxiety.
Funding Open Access funding enabled and organized by CAUL and more fine-grained hypotheses concerning the attentional basis of high trait social anxiety.
Turning to the second hypothesis under test, the present results confirmed that the measure of negative attentional bias significantly mediated the association between trait social anxiety group and degree of state anxiety experienced during the mock job interview. These findings are consistent with the hypothesis that attentional bias to negative information concerning an upcoming social event statistically predicts the disproportionately elevated state anxiety that people with high trait social anxiety experience when they then engage in this social event.
Of course, the approach taken in the present study to investigate this second hypothesis involved testing only the naturally occurring associations between variables using mediation analysis, precluding strong claims about the potential causal role of the candidate mediator (Fiedler et al., 2011). To more powerfully test the functional contribution of such attentional bias, we suggest that future researchers employ an attentional bias modification (ABM) approach (cf. MacLeod & Grafton, 2016), to determine whether the transient modification of attentional bias to negative information about an upcoming social event significantly alters levels of state anxiety experienced when people engage in such an event, as would be predicted if this attentional bias serves causally to elevate state anxiety. Such extensions of the present research will further advance understanding of the attentional basis of elevated trait social anxiety, and inform the development of ABM procedures that can potentially exert a more powerful therapeutic impact on such disposition (Heeren et al., 2015).
The present study was not designed to assess the degree to which introduction of the mock job interview served to elevate state anxiety. Although we observed that attentional bias to negative information predicted state anxiety experienced during the mock job interview, the design did not involve expressing this anxiety in terms of the elevation in state anxiety from when the mock job interview was introduced. Assessing state anxiety during the session, before completion of the job interview, would not enable suffice to achieve this aim, given that the study required participants to be aware of the interview. To address this issue, future research could deliver the attentional bias assessment task to participants who did not know they themselves would be completing the mock job interview, and then assess the degree to which introduction of the mock interview served to elevate state anxiety.
It should be noted that, in the present study, participants retrospectively reported the level of state anxiety they experienced during the mock job interview. While this retrospective assessment approach was adopted to ensure the experience of the interview was not disrupted by the need to Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
its Member Institutions
Declarations
Conflict of interest The authors declare that they have no conflict of interest.
Ethics Approval This study was approved by the Human Research Ethics committee of the XXX (Ethics approval: RA/4/1/5243).
Consent to Participate
Informed consent was obtained from all individual participants included in the study.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons. org/licenses/by/4.0/.
Adhesion, proliferation, and apoptosis in different molecular portraits of breast cancer treated with silver nanoparticles and its pathway-network analysis
Background Silver nanoparticles (AgNPs) have attracted considerable attention due to the variety of their applications in medicine and other sciences. AgNPs have been used in vitro for treatment of various diseases, such as hepatitis B and herpes simplex infections, as well as colon, cervical, and lung cancers. In this study, we assessed the effect on proliferation, adhesion, and apoptosis in breast cancer cell lines of different molecular profiles (MCF7, HCC1954, and HCC70) exposed to AgNPs (2–9 nm). Methods Breast cancer cell lines were incubated in vitro; the MTT assay was used to assess proliferation. Adhesion was determined by real-time analysis with the xCELLigence system. Propidium iodide and fluorescein isothiocyanate-Annexin V assays were used to measure apoptosis. The transcriptome was assessed by gene expression microarray and Probabilistic Graphical Model (PGM) analyses. Results Decreased adhesion was noted within 24 hours in the breast cancer cell lines and the control exposed to AgNPs (p≤0.05). We observed a significant reduction in the proliferation of MCF7 and HCC70, but not in HCC1954. Apoptotic activity was seen in all cell lines exposed to AgNPs, with an apoptosis percentage of more than 60% in the cancer cell lines and less than 60% in the control. PGM analysis confirmed, to some extent, the effects of AgNPs primarily on adhesion, through changes in the extracellular matrix. Conclusion Exposure to AgNPs causes antiproliferative, apoptotic, and anti-adhesive effects in breast cancer cell lines cultured in vitro. More research is needed to evaluate the potential use of AgNPs to treat different molecular profiles of breast cancer in humans.
Characterization of cell lines
The selection of breast cancer cell lines (MCF7, HCC1954, and HCC70) was done according to their molecular portrait. 6 All cell lines were obtained from the American Type Culture Collection (ATCC; Manassas, VA, USA). MCF7 (ATCC® HTB-22) is an epithelial-adherent adenocarcinoma cell line obtained from a 69-year-old Caucasian female that expresses ERs and PRs and is HER2/neu-negative, classified as luminal A. HCC1954 (ATCC® CRL-2338) is an epithelial-adherent ductal carcinoma cell line obtained from a 61-year-old Indian female that is negative for estrogen and progesterone receptors and is classified as the HER2/neu subtype. HCC70 (ATCC® CRL-2315™) is an epithelial-adherent cell line derived from the primary ductal carcinoma of a 49-year-old Black female that is negative for estrogen and progesterone receptors and HER2/neu, and is classified as triple-negative. All cell lines were stored in the vapor phase of liquid nitrogen.
Synthesis and characterization of colloidal AgNPs
AgNP solutions were prepared using 100 mg silver nitrate (AgNO₃) in 100 mL ethanol and 1 g PVP as the stabilizing agent; the weight ratio of PVP to AgNO₃ was kept at 10:1. The ethanolic solution containing the metallic salt and PVP was refluxed at 363 K and stirred for 12 hours. The formation of AgNPs can be observed at a glance by a change in color of the solution, because small AgNPs are amber, and the addition of PVP prevents aggregation. 10 We found that the 1:10 (AgNO₃/PVP) ratio was best for stabilization of AgNP size; when we increased the PVP concentration, the particle nucleation rate was higher and particle size decreased. 15 The complete methodology for preparation of AgNPs can be obtained from a previous work. 16 The characterization of AgNPs was undertaken by ultraviolet-visible (UV-Vis) spectrophotometry and transmission electron microscopy (TEM). Figure 2A shows an absorption peak at 412 nm in the UV-Vis spectrum, which indicates the presence of Ag ions in the sample. Figure 2B and C show the presence and size of particles (2-9 nm) in the sample by TEM image and histogram.
Exposure to AgNPs
Breast cancer cell lines were adjusted to a concentration of 1×10⁵ cells/mL in RPMI-1640 medium (Thermo Fisher Scientific, Waltham, MA, USA) for HCC1954 and HCC70, and in DMEM medium (Thermo Fisher Scientific) for the MCF7 cell line. Plates were incubated at 37°C with 95% relative humidity and a 5% CO₂ atmosphere for 24 hours. Then, the AgNPs groups were exposed to a concentration of 12.5 µg/mL of nanoparticles for 24 hours. In control groups, the same conditions were used but without AgNPs. All in vitro tests were conducted under these conditions, with the exception of the real-time adhesion analysis, for which it was necessary to expose cells to the absence/presence of AgNPs throughout culture in order to observe adhesion.
Adhesion analysis in real time
Cell adhesion was analyzed in triplicate in independent samples of each cell line in the absence/presence of AgNPs. In total, 1×10⁵ cells were cultured in a 96-well plate containing microelectrodes for the measurement of cell number, which is translated into a value called the cellular index. This value, which reflects how much the cells disrupt flow impedance, was recorded in real time using the xCELLigence system (Roche Applied Sciences and ACEA Biosciences, Basel, Switzerland). The cellular index was analyzed every 3 hours for 24 hours post-treatment with AgNPs.
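The cellular index readings described above lend themselves to a simple normalization against the untreated control. The following is a minimal sketch, in which every reading is invented for illustration and is not data from the study:

```python
# Sketch (hypothetical values): comparing xCELLigence-style cellular index
# readings between AgNP-treated and control wells, sampled every 3 h for 24 h.
timepoints_h = [0, 3, 6, 9, 12, 15, 18, 21, 24]
control_ci = [1.00, 1.20, 1.45, 1.70, 1.90, 2.05, 2.20, 2.30, 2.40]
treated_ci = [1.00, 0.95, 0.85, 0.75, 0.70, 0.66, 0.62, 0.60, 0.58]

def percent_of_control(treated, control):
    """Express each treated cellular-index reading as a percentage of the
    time-matched control reading."""
    return [round(100.0 * t / c, 1) for t, c in zip(treated, control)]

relative_adhesion = percent_of_control(treated_ci, control_ci)
```

Expressing each treated reading as a percentage of the time-matched control makes the adhesion decrease directly comparable across cell lines.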
Determination of apoptosis
After AgNPs stimulation, cells were washed twice with phosphate-buffered saline (PBS) and 5 µL fluorescein isothiocyanate-Annexin V was added. Cells were incubated for 15 minutes; then, 5 µL propidium iodide was added to evaluate apoptosis with a detection kit (BD Pharmingen, San Diego, CA, USA). Finally, apoptosis was measured on an EPICS XL-MCL Flow Cytometer (Beckman Coulter, Krefeld, Germany), and results were expressed as the percentage of total apoptosis.
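For readers unfamiliar with Annexin V/PI gating, total apoptosis as reported here combines the early (Annexin V+/PI-) and late (Annexin V+/PI+) apoptotic quadrants. A sketch with hypothetical event counts (not data from the study):

```python
# Sketch of deriving total apoptosis percentage from Annexin V / propidium
# iodide quadrant counts. All counts below are invented for illustration.
def total_apoptosis_percent(viable, early_apoptotic, late_apoptotic, necrotic):
    """Total apoptosis = (Annexin V+/PI-) + (Annexin V+/PI+) events,
    expressed as a percentage of all acquired events."""
    total = viable + early_apoptotic + late_apoptotic + necrotic
    return 100.0 * (early_apoptotic + late_apoptotic) / total

# Example: 10,000 hypothetical events from an AgNP-exposed sample
pct = total_apoptosis_percent(viable=3000, early_apoptotic=2500,
                              late_apoptotic=4000, necrotic=500)
```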
Nucleic acid isolation
Isolation of DNA and RNA was done with the AllPrep DNA/RNA Mini Kit (Qiagen, Hilden, Germany) from approximately one million cells of each cell line in the absence/presence of AgNPs. After isolation, RNA quality was measured with an RNA 6000 Nano kit (Agilent, Santa Clara, CA, USA) on a 2100 Bioanalyzer (Agilent); an acceptable RNA Integrity Number (RIN) was higher than 6.0. We quantified RNA and DNA with a NanoDrop 2000 Spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA); acceptable-quality DNA had 260/280 nm ratios between 1.8 and 2.0.
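The acceptance thresholds above translate into a trivial quality filter; a sketch follows (the function names are ours, not from any instrument software):

```python
# Sketch of the stated acceptance rules: RIN above 6.0 for RNA, and a
# 260/280 nm absorbance ratio between 1.8 and 2.0 for DNA.
def rna_passes_qc(rin):
    """RNA passes if its RNA Integrity Number exceeds 6.0."""
    return rin > 6.0

def dna_passes_qc(ratio_260_280):
    """DNA passes if its 260/280 absorbance ratio lies in [1.8, 2.0]."""
    return 1.8 <= ratio_260_280 <= 2.0
```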
Labeling and hybridization
In total, 200 ng total RNA from cell lines was amplified and labeled using the Low Input Two-Color Quick Amp Labeling Kit (Agilent). As internal controls, a Two-Color RNA Spike-In Kit (Agilent) was used. Labeled samples were purified with the RNeasy mini kit (Qiagen, Hilden, Germany). Cell line samples were labeled with Cy5 dye, and a control was labeled with Cy3 dye.
Hybridization was done with a Hi-RPM Gene Expression Hybridization Kit (Agilent) in a Human GE 4×44k v2 microarray AMADID 026652 (Agilent).
Microarray data
The hybridized microarray was scanned with a DNA Microarray Scanner with SureScan High-Resolution Technology, C version (Agilent), and Feature Extraction software v.11 (Agilent) was used to extract microarray data. Cy3/Cy5 log ratios from a gene panel were analyzed in R. Afterwards, we selected genes highly associated with adhesion, proliferation, and apoptosis. We obtained 437 genes, which were processed as follows: data were filtered and nested in centroids using Gene Cluster 3.0 and then adjusted to logarithm base 2; thereafter, genes were centered on the basis of the mean. Heat maps of these genes were generated in the Java program Treeview 1.1.6r2.
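The pre-processing described above (log2 transformation of two-channel ratios followed by gene-wise mean centering) can be sketched as follows, with purely illustrative intensities:

```python
import math

# Sketch of two-color expression pre-processing: Cy5/Cy3 ratios are
# converted to log2 and each gene (row) is centered on its mean.
# Intensity values below are invented for illustration.
def log2_ratios(cy5, cy3):
    """Per-spot log2(sample / reference) ratios."""
    return [math.log2(a / b) for a, b in zip(cy5, cy3)]

def mean_center(row):
    """Subtract the row mean so the gene is centered at zero."""
    m = sum(row) / len(row)
    return [x - m for x in row]

cy5 = [400.0, 800.0, 1600.0]   # sample-channel intensities (hypothetical)
cy3 = [400.0, 400.0, 400.0]    # reference-channel intensities (hypothetical)
centered = mean_center(log2_ratios(cy5, cy3))
```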
Probabilistic Graphical Model impact analysis
The 437 selected genes were analyzed using a probabilistic graphical model (PGM) based on a Markov random field in ReactomeFIViz, a Cytoscape plugin (Cytoscape: http://www.cytoscape.org/download.php; ReactomeFIViz, also called the Reactome Cytoscape Plugin or ReactomeFIPlugIn: http://apps.cytoscape.org/apps/reactomefiplugin), 14,19,20 where an interactome was generated with the top 10% differences. 21 Then, we ran a network clustering algorithm on the interactome to identify working subnetworks. 20 The tool provides predicted functional impact scores by integrating all observed variations to assess whether the activities of each gene are increased, decreased, or unaffected. 14,19,20
Pathway enrichment analysis
The gene modules were subjected to pathway enrichment analyses (hypergeometric testing); 22 then, pathway functional
PGM pathway analysis
Pathways related to adhesion, proliferation, and apoptosis were analyzed with PARADIGM; this approach was adapted for Reactome pathways by converting the reactions drawn in pathway diagrams into factors of a PGM, as used in ReactomeFIViz, a Cytoscape plugin. 19,23 This was done according to the Reactome pathways database, using the pathway component names, in order to analyze the integrated pathway activity (IPA) of each related pathway.
Statistical analysis
Independent samples were analyzed with Student's t-test and the Mann-Whitney U test, according to the nature of the variable. Statistical significance was set at p-value ≤0.05. Assessment of pathway associations was undertaken by the hypergeometric test using the Benjamini-Hochberg false discovery rate (FDR) correction, with an FDR correction-modified p-value ≤0.05. The analyses were conducted with SPSS version 20 (SPSS, Chicago, IL, USA).
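The Benjamini-Hochberg correction applied to the pathway-association p-values follows the standard step-up procedure; a self-contained sketch (illustrative p-values only):

```python
# Sketch of the Benjamini-Hochberg false discovery rate correction:
# the standard step-up procedure, implemented in pure Python.
def benjamini_hochberg(pvals):
    """Return BH-adjusted p-values, in the original input order."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])  # indices, ascending p
    adjusted = [0.0] * n
    running_min = 1.0
    for rank_from_end, i in enumerate(reversed(order)):
        rank = n - rank_from_end          # 1-based rank of p-value i
        running_min = min(running_min, pvals[i] * n / rank)
        adjusted[i] = running_min         # enforce monotonicity from the top
    return adjusted

adj = benjamini_hochberg([0.01, 0.04, 0.03, 0.20])
```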
Adhesion in real time
To investigate whether AgNPs had any effect on the ability of cancer cells to adhere and grow, cells were seeded in plates under the conditions explained earlier. Significant decreases in adhesion were observed with AgNPs treatment; the lowest values recorded were for the MCF7 line, followed by the HCC70 cell line, as shown in Figure 3A and C. The first 9 hours of the real-time assay show direct changes in cell adhesion of the treated lines against the controls. The MCF7 and HCC70 lines show a pronounced decrease compared to the control, whereas the decrease in the HCC1954 line is smaller (Figure 3B). However, the cellular index from 12 hours onwards decreases steadily in all the lines and is maintained until 24 hours. Proliferation was evaluated by MTT assay: MCF7 and HCC70 showed a significant reduction between the AgNPs group and control group, with both lines exhibiting a reduction of more than half in the proliferation index, whereas HCC1954 showed only a moderate reduction, as in the adhesion assay; HCC1954 was the line least affected by AgNPs.
Determination of apoptosis
The total apoptosis percentage was determined in each of the cell lines exposed and not exposed to AgNPs (Figure 5A). Figure 5B-D shows a representative dot plot of each cell line exposed to AgNPs. All cell lines showed a significant increase in the percentage of apoptosis, with HCC70 showing the highest percentage. In each cell line, the apoptosis percentage was at least five times greater than in the respective control.
PGM impact analysis
We generated an interactome from the 437 genes along with their immediate inferred interaction partners; this provided a network of 12,177 genes. The top 10% of nodes, ranked by differences in protein impact score (PIS), were selected to generate a minimal essential network (MEN; Figure 7). 21 The size of each node in the interactome is proportional to its PIS; this value reflects not only the expression of the gene but also its known interactions. 14,19 It is important to mention that the app generates a comparison against random samples from its database in order to provide the results. The edges show the interactions between the nodes of the interactome. Table 1 shows the PIS of the top 10% of genes and their p-values against random samples, according to the PGM impact analysis. In the control lines, genes such as cyclin B2 (CCNB2), laminin subunit alpha 1 (LAMA1), matrix metallopeptidase 7 (MMP7), secreted protein acidic and cysteine-rich (SPARC), CDC28 protein kinase regulatory subunit 2 (CKS2), aurora kinase B (AURKB), interleukin 18 (IL18), versican (VCAN), and matrix metallopeptidase 3 (MMP3) showed statistically significant gene expression relative to the random samples generated dynamically by the app. Meanwhile, in the AgNP groups, the genes found to be differentially expressed compared to a random background generated dynamically by the app were MMP3, SPARC, thrombospondin 2 (THBS2), occludin (OCLN), lipoprotein lipase (LPL), interleukin 1 beta (IL1B), serpin peptidase inhibitor, clade F member 1 (SERPINF1), cadherin 1 (CDH1), mutL homolog 1 (MLH1), collagen type VI alpha 1 chain (COL6A1), connective tissue growth factor (CTGF), TNF receptor-associated factor 2 (TRAF2), and cadherin 6 (CDH6).
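The MEN construction step (rank nodes by PIS, keep the top 10%, take the induced subgraph) can be sketched with networkx; the graph and the PIS values below are synthetic placeholders, not the study's interactome:

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical interactome: 100 genes with random interactions and a
# synthetic protein impact score (PIS) attached to every node
G = nx.gnp_random_graph(100, 0.05, seed=1)
pis = {n: float(rng.exponential(1.0)) for n in G.nodes}
nx.set_node_attributes(G, pis, "PIS")

# Rank nodes by PIS and keep the top 10% to form the minimal essential
# network (MEN), mirroring the selection step described above
k = max(1, int(0.10 * G.number_of_nodes()))
top_nodes = sorted(G.nodes, key=lambda n: G.nodes[n]["PIS"], reverse=True)[:k]
men = G.subgraph(top_nodes).copy()
```

In ReactomeFIViz the PIS additionally integrates known functional interactions rather than expression alone, so this sketch only illustrates the thresholding and subgraph extraction, not the scoring itself.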
The MEN includes 122 genes and 246 edges, clustered by modularity; 20 eight modules were identified and mapped onto biological pathways using ReactomeFIViz, a Cytoscape plugin, to ascertain function, as shown in Table 2 (the complete data of Table 2 are shown in the Supplementary materials). Table 2 shows that the gene sets are significant for adhesion (extracellular matrix organization, focal adhesion, and the integrin signaling pathway), proliferation (cell cycle, anaphase-promoting complex [APC/C]-mediated degradation of cell-cycle proteins), and apoptosis (direct p53 effectors, the intrinsic pathway for apoptosis, and the p53 signaling pathway).
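The hypergeometric test behind such enrichment calls can be sketched directly (a minimal illustration with toy gene sets; the 12,177-gene background size is the interactome size reported above):

```python
from scipy.stats import hypergeom

def enrichment_p(module_genes, pathway_genes, background_size):
    """One-sided hypergeometric p-value for over-representation of a
    pathway's genes within a module (probability of an overlap of k
    or more genes by chance)."""
    module = set(module_genes)
    pathway = set(pathway_genes)
    k = len(module & pathway)   # observed overlap
    M = background_size         # genes in the background
    n = len(pathway)            # "successes" in the background
    N = len(module)             # draws (module size)
    return hypergeom.sf(k - 1, M, n, N)

# Toy example: a 10-gene module fully contained in a 50-gene pathway,
# against a 12,177-gene background
p = enrichment_p([f"g{i}" for i in range(10)],
                 [f"g{i}" for i in range(50)], 12_177)
```

Such raw enrichment p-values would then be FDR-corrected across all tested pathways, as described in the statistical analysis section.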
PGM pathway analysis
The results of the PGM pathway analysis are shown in Table 3 (the complete data of Table 3 are shown in the Supplementary materials), where the functional components of related pathways are displayed by biological activity (adhesion, proliferation, and apoptosis) together with the differences in IPA values. There were several significant changes in functional components related to adhesion, including secreted phosphoprotein 1-CD44 antigen (SPP1-CD44), platelet endothelial cell adhesion molecule 1 (PECAM1), SPP1, SPP1-Integrin alpha5beta1-alpha9beta1, SPP1-Integrin alphaVbeta1,3,5, Integrin alphaVbeta3-PECAM1, Integrin alphaVbeta3-Tenascin, Tenascin-C hexamer, and Integrin alpha9beta1-Tenascin-C hexamer. There were no significant changes in the functional components of apoptosis and proliferation. Some components of the apoptosis pathways, such as caspase-8 (CASP8) and catenin beta-1 (CTNNB1), reached
Discussion
The use of nanomaterials in medicine has been an important contribution to science; however, the biological risk of using AgNPs has not been clearly established. 24,25 As the effect of these AgNPs has been previously tested on murine lymphoma, 16 one of the goals of this study was to assess whether the effect was similar in human cancer cell lines.
To evaluate this effect on breast cancer cell lines, different molecular portraits were used. The behavior and prognosis of the disease depend, to a large extent, on the presence of the estrogen receptor and progesterone receptor, the HER2/neu status, and the genetic profile of the cell lines. 6,7 Cell lines of different molecular subtypes were included, as specified in the Materials and methods section, in order to observe the effect of AgNPs on each of them. We used xCELLigence, an impedance-based live-cell monitoring platform, to determine how AgNPs affect adhesion in breast cancer cell lines. 26 Results are expressed as a cell index, a composite measurement that can provide data on cellular adhesion, viability, and proliferation; 26 however, data recorded during the initial hours of the real-time assay essentially reflect cellular adhesion. 27,28 The decrease in cell index caused by AgNPs represents a decrease in the migratory and metastatic capacity of the cancer cells. 29 The fact that the greatest decrease was observed in HCC70 shows the potential of this nanomaterial as a treatment for triple-negative breast cancer, which is commonly associated with a worse prognosis. Although the mechanism of action by which AgNPs produce this effect has not yet been described, it is hypothesized that they affect focal adhesions. 27,28 These are large, dynamic macromolecular protein complexes commonly found in migratory cells that provide the mechanical link between the cell and the extracellular matrix. 28,29 Data reported from 9 to 24 hours of exposure to AgNPs can be used as an indicator of proliferation and cytotoxicity. 28 Our data showed a clear decrease in the cell index in all lines, demonstrating their antiproliferative effect.
Previous studies by our group showed that these AgNPs have antiproliferative and apoptotic effects and, moreover, increase the production of reactive oxygen species (ROS) at concentrations of 9.0 µg/mL or higher in the L5178Y lymphoma cell line. 16 The proliferation/viability assay based on the reduction of MTT to formazan crystals showed that AgNPs decrease the proliferation of breast cancer cell lines compared to the control group, except in the HCC1954 cell line, which did not show significant differences in this test. The mechanism by which AgNPs reduce proliferation has not been well described. Asharani et al in 2009 proposed three possible mechanisms: 1) chromosomal aberrations, 2) DNA oxidation, and 3) cytoskeletal damage. Increases in ROS and DNA fragmentation have since been confirmed in several studies. 11,14,30-33 The MTT assay did not reveal the cause by which AgNPs decrease proliferation, although it did demonstrate that they have an antiproliferative effect.
The apoptosis assay showed a clear proapoptotic effect in the AgNP-treated cell lines, regardless of breast cancer molecular portrait. However, this proapoptotic effect increased as the prognosis of the cell line worsened: HCC70, classified as triple negative and the cell line with the worst prognosis in this study, showed the greatest apoptotic effect. 6,7 The effect of AgNPs on cells is evidenced by their antiproliferative and apoptotic effects. Studies conducted on the MDA-MB-231 and MCF-7 breast cancer lines have shown that exposure to AgNPs of different sizes and origins can cause caspase-3 activation, reduced expression of Bcl-2, and fragmentation of DNA that eventually leads to apoptosis. 30-32 There is a difference in the effect of AgNPs shown by the apoptosis and proliferation assays for HCC1954; this can be mostly explained by the fact that apoptosis was reported as the sum of cells in early and late apoptosis. Cells in early stages of death could be scored as viable by the MTT assay because they still have some mitochondrial activity, and the number of proliferating cells could be compared to the number of cells in early and late apoptosis, as determined by the Annexin V assay. 34,35

To assess how AgNPs affect adhesion, proliferation, and apoptosis, the genetic expression of the cancer cell lines was analyzed. The expression analysis was conducted only on genes related to the biological tests undertaken in the study (Figure 6). There was a significant change in gene expression between the control and AgNP groups, with some genes overexpressed and others underexpressed in different patterns according to their molecular subtype.
Although knowing the expression values of different genes is useful, it does not, by itself, provide functional information. It has been postulated that a better way to systematically uncover gene function and the higher-level organization of proteins into biological pathways is through the analysis of molecular interaction networks. 36 To carry out this analysis, three different approaches have been proposed: 1) fixed-gene set enrichment analysis, 2) new network construction and clustering, and 3) network-based modeling. 37 In this study, the third approach was mainly used in order to obtain a better understanding of the impact of gene expression at the protein level, and it was undertaken with the PGM analysis and its different algorithms. 14,19 The 437 genes were processed using the PGM impact analysis, and an interactome was generated with the top 10% (122 nodes, 248 edges). 21 They were then grouped by modularity, 20 and the resulting modules are shown in Figure 7. The size of the nodes is proportional to the PIS (the complete data of Table 1 are provided in the Supplementary materials). The interactome is not only a graphic representation of the PIS but also shows the functional interactions of the genes involved. Some genes with significantly different protein impact that have been described as cancer biomarkers are CCNB2, LAMA1, MMP3, SPARC, CKS2, BNIP2, IGFBP7, APC, LAMB3, LPL, COL8A1, and MTA1. CCNB2 has been primarily described as a marker in colorectal carcinoma; its upregulation and the coordinated expression of other cell-cycle-related genes by NF-Y might contribute to tumor cell proliferation. 38 LAMA1 promotes cell adhesion, invasion, and migration of tumor and endothelial cells, resulting in tumor growth, angiogenesis, and metastasis. 39 MMP3 can cause epithelial-mesenchymal transition and malignant transformation in cultured cells. 40

SPARC is overexpressed in many cancers, including breast cancer, and its effects seem to be cell-type specific, 26 including effects on cell death and necrosis; these genes are underexpressed in the breast cancer lines, although after exposure to AgNPs the values of the exposed lines seem to approach those found in the random samples. 41 The available information on IGFBP7 is controversial; its alteration has been linked to cancer, but depending on the cell line, this gene is either overexpressed or underexpressed. 42,43 APC is an important tumor suppressor gene in breast cancer, and its PIS values increase considerably after exposure to AgNPs. 44 LAMB3, a gene encoding a laminin protein known to influence cell differentiation, migration, adhesion, proliferation, and survival, has been shown in silencing studies to function as an oncogene. 45,46 LPL is associated with tumor nutrition and proliferation; its expression and activity vary in different types of cancer. LPL has high PIS values in both the untreated and AgNP-treated cell lines, although not in the random samples of the in silico analysis, indicating that treatment with AgNPs does not normalize the PIS but instead causes it to increase. 47 The expression of COL8A1 is closely related to tumor cell proliferation, invasion, and tumorigenicity in vivo; however, treatment with AgNPs does not affect its PIS values. 48 MTA1 overexpression correlates significantly with tumor grade and angiogenesis in human breast cancers; nevertheless, the PIS values of this gene are significantly lower than those in the random samples generated by ReactomeFIViz, and exposure to AgNPs does not seem to have any effect. 49

Using the enrichment analysis of the gene set, we can identify the main routes affected by these genes as those related to the extracellular matrix, such as extracellular matrix organization, focal adhesion, and the integrin signaling pathway, and, with respect to cell death, the intrinsic pathway for apoptosis and direct p53 effectors (Table 2).
The PGM pathway analysis uses the PARADIGM algorithm to obtain an IPA score; all significant data were related to components of the extracellular matrix, such as SPP1-CD44, PECAM1, SPP1, SPP1-Integrin alpha5beta1-alpha9beta1, SPP1-Integrin alphaVbeta1,3,5, and Integrin alphaVbeta3-PECAM1, which regulate cell communication, adhesion, and migration. 50 Interaction of β1 integrins with hERG1 channels in cancer cells stimulates distinct signaling pathways that affect different aspects of tumor progression; however, the role of β1 integrins in tumorigenesis has not been fully resolved. 51,52 Our data suggest that a lower IPA value increases the cell rate in breast cancer cell lines.
The results obtained in the two PGM analyses do not show the same scope as the biological assays of adhesion, proliferation, and apoptosis. This could be due to the small sample size or to the absence of additional data, such as methylation, polymorphism, and copy number variation data, which reduces the representativeness of the RNA expression data.
Conclusion
From these results, we can conclude that AgNPs possess great therapeutic potential against cancer, decreasing adhesion and proliferation and increasing the percentage of apoptosis. In addition, some of the genes through which AgNPs exert their action were identified. However, more studies focusing on the mechanisms of action are needed before AgNPs can be safely used in the clinical setting.
Data sharing statement
All data generated or analyzed during this study are included in this published article (and its Supplementary materials).
Environmental crime liability of the Nigerian government in its oil pollution menace
Introduction
The oil and gas sector is a significant portion of the Nigerian economy [1]. Scholars believe crude oil production has become even more relevant in contemporary times, as there is not yet a cheaper alternative form of energy. Interestingly, the sector has also been asserted to cause the most significant share of Nigeria's environmental pollution, as shall be discussed below [2].
Extensive oil spill pollution as it is in Nigeria
It is strongly viewed that the discovery of oil in Nigeria in 1956 brought with it grave environmental challenges. The United Nations Development Programme (UNDP) estimates that between 1976 and 2001 alone, there were approximately 6800 spills totalling 3,000,000 barrels of oil [3]. Similarly, reports showed 253 oil spills in 2006, 588 in 2007, and 419 in the first six months of 2008. Cumulatively, an estimated 9 to 13 million barrels (1.5 million tons) of oil has spilled into the Niger Delta over the past 53 years [4]. Section 10 of the National Inland Waterways Authority Act describes marine waters in Nigeria and their basins to include all navigable rivers, such as the rivers Niger and Benue, the rivers Sokoto, Ogun, Hadejia, Kaduna, Gongola, Katsina-Ala, and Cross River, and their tributaries [5]. There are also smaller bodies of water enclosed by the lagoons, like the Lagos Lagoon and the creeks, which are also regarded as internal waters under the Act. The pollution of these water bodies by crude oil is a major source of oil pollution of the environment [6].
Amnesty International Report has observed that the water system of the Niger Delta (the rivers, streams, ponds), have been contaminated with oil spills and waste discharges from oil companies. These water pollutions from oil spill, kill fishes as well as the fish larvae thus not only reducing the population of fishes in the river, but also damaging the ability of the fishes to reproduce, causing both immediate damage and long-term harm to fish stocks [7].
It is also believed that the spills have negatively affected the existence of shellfish in the Niger Delta waters [7]. Substantiating the report, it has been observed that shellfish have totally disappeared in the K-Dere area of Bodo West in Ogoniland, and this has been attributed to the oil spills on the waters in that region [6]. Cockles have likewise disappeared for the same reason [7]. Cumulatively, these oil spills on the Niger Delta waters inhibit the ability of Niger Delta indigenes to resort to their water system for their livelihood activity of fishing [6]. It is notable that several Niger Delta indigenes rely on fishing for their sustenance and survival [6]. A recent study by the United Nations Environment Programme (UNEP) found that drinking water in Ogoniland (an area of the Niger Delta) contained a known carcinogen at levels 900 times above World Health Organization (WHO) guidelines [8].
It is also a known fact that the people of the Niger Delta region, and indeed most other parts of Nigeria, rely on agriculture for food and their livelihood [6]. Interestingly, it has been reported that oil pipelines run across farmlands, and other oil infrastructure, such as well heads and flow stations, is often close to agricultural land [7]. It is therefore easy for a spill to destroy the viable crops of Niger Delta farmers. A study found that oil spills in the Niger Delta region reduce the ascorbic acid content of vegetables by an estimated 36% and the crude protein content of cassava by an estimated 40%, resulting in a 24% increase in the prevalence of childhood malnutrition in the region. Other scholars have posited that emissions from the combustion of associated gas contain toxins such as benzene, nitrogen oxides, and dioxins, which increase the risk of airborne disease, food insecurity, and damage to the climate [9].
It is further asserted that oil spills on land also cause the ground

The grave challenges of oil pollution have been extensively documented in several environmental journals. The liability accruing to such pollution has also been extensively discussed by several legal scholars. Interestingly, these discussions of responsibility and liability seem to begin and end with the involvement of the huge multinational oil players, thus ignoring the role and liability of other, silent parties. Firstly, Nigeria is replete with environmental legislation following the directions of its National Policy on Environment. However, this paper shall not delve into a discussion of these legislative provisions, nor their treatment of oil pollution and its attendant liabilities in Nigeria. Rather, the paper shall examine the extent of liability of the Nigerian government with respect to the enforcement of regulation against oil and gas pollution in Nigeria.
Gas flaring pollution as it is in Nigeria
Gas flaring, defined as the burning of natural gas that could otherwise have been refined into usable products, is another strong source of pollution in Nigeria's delta basin [10]. Although the average gas flare rate in the world is about 4% [11], records put an estimated 123 gas flaring sites in Nigeria's delta basin, with an estimated 45.8 billion kilowatts of heat discharged into the atmosphere daily [12]. Hence, Nigeria is reported to have over a 25% share of global gas flaring [11].
Indeed, a report by the Organization of the Petroleum Exporting Countries (OPEC) put the oil production of Nigeria at a total of 22.8 billion barrels from 1958 to 2003, while maintaining that from Shell-BP alone, an average of a thousand cubic feet of gas was flared per barrel; which, when computed, sums to 22.8 trillion cubic feet between 1958 and 2003 [13,14]. It is posited that 84.60% of the total gas produced is still flared, with only 14.86% being used locally [15]. It is therefore not surprising that more gas is asserted to be flared in the Niger Delta than in any other place in the world [16]. Recent data show that, from just two flow stations, an average of 800,000 m³/day of gas is flared [17].
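The 22.8 trillion figure follows directly from the quoted production and flaring rates; a quick sketch of the multiplication:

```python
# Consistency check on the flaring arithmetic quoted above (OPEC total
# production and the Shell-BP per-barrel flaring average, as given in the text)
barrels_1958_2003 = 22_800_000_000   # 22.8 billion barrels of oil
flared_cf_per_barrel = 1_000         # a thousand cubic feet of gas flared per barrel
total_flared_cf = barrels_1958_2003 * flared_cf_per_barrel
print(total_flared_cf)               # 22800000000000, i.e., 22.8 trillion cubic feet
```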
This strong pollution source in the Niger Delta comes with attendant health risks such as asthma, bronchitis, skin problems, and breathing problems. Peters [11] posited that the process of flaring creates a raging physical fire at gas flaring sites, with thick smoke billowing into the atmosphere and falling back as acid rain, thus polluting the rivers and creeks within the region. This position is supported by Uyigue and Agho, who posited that the concentration of acid rain seems higher in the Niger Delta than in surrounding regions [18].
Scholars have further maintained that the heat from gas flaring in the region has killed much of the region's vegetation, destroyed the mangrove swamps and salt marshes, and inhibited the growth of plants.
Interestingly, over 80% of Nigeria's revenue comes from the sale of oil produced from the Niger Delta region, where the environmental vices listed above seem to be prevalent [14,19,20]. Scholars have clearly stated that the Niger Delta region is not only home to the oil wealth of Nigeria but has also made Nigeria one of the largest producers of petroleum in Africa and a known figure among oil-producing nations globally [21]. The region is made up of nine states, with over 37 million inhabitants who make up 22% of Nigeria's population [22].
It is therefore surprising that there is so much environmental damage associated with the region, and the attendant under-development it has brought about. This must have occasioned Sagay's assertion that environmental abuse and degradation now weigh on the same scale as poverty and deprivation in the oil-bearing region.
A notable point in the debacle is that the continued oil and gas pollution has been occasioned by oil multinationals operating in Nigeria despite existing regulations and purported government enforcement in the oil and gas sector [23]. These include the Oil Pipelines Act, the Petroleum Act, the Environmental Impact Assessment Act (1990, LFN 2004), the Oil in Navigable Waters Act (1990), the Associated Gas Re-Injection Act [24], etc.
A reason for this is not far-fetched. For example, the existing legislation sanctioning gas flaring in Nigeria hands the power to decide when gas flaring is permissible to the petroleum minister, if the minister decides that utilisation or re-injection is not feasible in a particular field [25,26]. This is dangerous in a unique environment like Nigeria, where a politician with no previous background, skill, or knowledge in oil and gas could be appointed petroleum minister. It could result in the arbitrary permission of gas flaring by a sitting petroleum minister for monetary gain.
It is therefore not surprising that scholars have asserted that the Nigerian government grants applications for permits to flare where the applicant pays a fee prescribed by the minister [27]. This might have hugely contributed to the inability of the Nigerian government to successfully stop gas flaring beyond the deadlines of 1 January 1984, 1 January 1990, 1 January 1998, 1 January 2000, 1 January 2004, 2006, 1 January 2008, and 31 December 2008, and till date.
Moreover, the statutory penalty for gas flaring remains a paltry sum of N10.00 (£0.0215, at the official exchange rate as of 01/09/2017) for every cubic foot of gas flared [28]. These facts make one wonder whether there is any real intention under the Act to stop gas flaring. They also raise the question of whether the legislative arm of the Nigerian government, in enacting the Act, had any real intention of enforcing a policy against gas flaring in Nigeria.
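A back-of-the-envelope sketch puts this penalty in perspective against the flare volumes quoted earlier. Two assumptions are made explicit in the code: the N10.00 rate is applied per thousand standard cubic feet (the unit used in the underlying flaring regulations; if it truly applied per single cubic foot, the figures would be a thousand times larger), and the quoted £0.0215 value of N10.00 is used as the exchange rate:

```python
# All inputs are figures quoted in the text; the per-thousand-cubic-feet
# reading of the N10.00 penalty is an assumption, flagged in the lead-in.
CUBIC_FEET_PER_CUBIC_METRE = 35.3147   # standard volume conversion
PENALTY_NAIRA_PER_MSCF = 10.00         # assumed: N10 per 1,000 standard cubic feet
GBP_PER_NAIRA = 0.0215 / 10.00         # implied by the quoted N10.00 = GBP 0.0215

daily_flare_m3 = 800_000               # average flare from two flow stations (cited earlier)
daily_flare_mscf = daily_flare_m3 * CUBIC_FEET_PER_CUBIC_METRE / 1_000
daily_penalty_naira = daily_flare_mscf * PENALTY_NAIRA_PER_MSCF
daily_penalty_gbp = daily_penalty_naira * GBP_PER_NAIRA

print(round(daily_penalty_gbp))        # roughly GBP 607 per day for ~28,252 Mscf flared
```

On these assumptions, flaring some 28,000 Mscf a day attracts a penalty of only a few hundred pounds, which illustrates why the statutory sum is described here as paltry.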
Notably, having observed the inability of the law to check gas flaring, the current legislative regime in Nigeria has failed to make any real moves to resolve the deficiency in the Associated Gas Re-Injection Act, which remains the statutory instrument on gas flaring in Nigeria until the passage of the Petroleum Industry Bill [PIB] (2012). The PIB (currently under legislative review) seems to proffer no real solution to the gas flaring challenge either. Section 277(1) of the Bill confers on the petroleum minister the right to permit flaring. This is no different from what obtains under the Associated Gas Re-Injection Act. Worsening an already bad situation, Section 275 of the Bill not only fails to state a date for the cessation of gas flaring but also puts the power to set such a date into the hands of the petroleum minister. Interestingly, instead of providing penalties reflecting a thorough prohibition of gas flaring, Section 201 of the Bill creates a strong basis for permitting gas flaring by requiring only that persons who flare gas pay fines as determined by the minister.
It is therefore safe to assume that gas flaring will continue, given the deficient legislative enactments meant to check it [29].
Indeed, the example given above only reflects an inadequacy that can be seen in most other environmental laws in Nigeria. A scholar has posited that not only do these laws lack the enforcement and sanctioning strength to ensure compliance, but they also lack clarity in communicating the exact intentions of the enactment [28]. This paper does not seek to explore the possible deficiencies in legislation sanctioning oil and gas pollution in Nigeria, but shall rather analyse the liability of the Nigerian government arising from its failure to properly regulate the oil and gas sector with regard to environmental protection and management.
Liability of the Nigerian Government in its oil and gas pollution
One might say that the acts of pollution above have been orchestrated by oil multinationals. There are, however, reports that Shell Petroleum Development Company (SPDC) has worked hand in hand with the Nigerian government [30]. This becomes interesting when set against the extensive oil spills and gas flaring occurring in Nigeria, together with their significant effects. Does it then mean that the Nigerian government condones these acts of pollution? A question that remains to be answered is whether the Nigerian state can be said to be criminally liable for failing to regulate the continued pollution within the Nigerian territorial environment (irrespective of the actual polluter). This question shall be considered in the light of some international law principles and statutory provisions in Nigeria.
The principle of Permanent Sovereignty over Natural Resources, while guaranteeing the sovereignty of states over their natural resources, imposes a duty on states to protect the environment. Similarly, Section 1 of the Nigerian Petroleum Act [31] vests the entire ownership and control of all petroleum in, under, or upon all land or Nigerian territorial waters in the Nigerian government. Furthermore, Principle 21 of the Stockholm Declaration mandates states to ensure that activities within their territory do not damage the environment. This was reiterated in Principle 15 of the Rio Declaration, which mandates states to guarantee due diligence and precaution against environmental damage within their territory. This premise is supported by the view of Davidson (2011) that the government must have a strong interest in conducting business with trustworthy, responsible, and ethical corporate partners.
Most importantly, section 20 of the Nigerian constitution (1999, as amended) mandates the Nigerian government to protect and improve the environment while guaranteeing public safety. In addition, section 2 of the Nigerian Environmental Impact Assessment Act [24] provides that the public or private sector of the economy shall not undertake, embark on, or authorise projects or activities without first considering and investigating their impact on the environment. Furthermore, Section 5(e) of the Federal Environmental Protection Agency Act [FEPA] [32] projects co-operation between the Nigerian government and the Federal Environmental Protection Agency in ensuring environmental protection and the conservation of natural resources. Section 23 of the FEPA Act further provides that "The President for purposes of this Part of this Act may, by regulations, prescribe any specific removal methods, national contingency plans, financial responsibility levels for owners or operators of vessels, or onshore or offshore facilities, notice and reporting requirements, penalties and compensation as he may determine necessary to minimise pollution by any hazardous substance." Similarly, Section 24 of the Act provides that "The Agency shall co-operate with the Ministry of Petroleum Resources (Department of Petroleum Resources) for the removal of oil-related pollutants discharged into the Nigerian environment and play such supportive role as the Ministry of Petroleum Resources (Department of Petroleum Resources) may, from time to time, request from the Agency."

An older view held that openly imposing any penal consequence or reproach on a state for disobeying international law could amount to war by the injured party [33]. Nevertheless, in recent times, it has been argued that states can be responsible for wrongful conduct that contravenes the position of international law [34].
This is a position projected in Article 19 of the International Law Commission's Draft Articles on State Responsibility [35].
A series of proposals and arguments put forward since 1920 have contemplated the concept of state criminal responsibility and international crime [36]. Indeed, traditional assumptions of the 'sovereignty of a state' reflect that [37]:

a) the state system is committed exclusively to state values, principally to state autonomy and the impermeability of state territory, and to the welfare of the state as a monolithic entity;
b) international law is based on the consent of states, and is made only by states and only for states;
c) the international system and international law do not (may not) address what goes on within a state; in particular, how a state treats its own inhabitants is no one else's business, not the business of the system, not the business of any other state;
d) international law cannot be 'enforced': a state can only be persuaded, induced, to honour its international obligations and will do so only when it is in its national interest to do so;
e) a state's sovereignty shields its constitutional system from international influences.

However, scholars have posited that the international system, while still very much a system of independent states, has moved beyond state values towards human values [37], and that international law has influenced, and is influenced by, individual state constitutions and constitutional systems [37]. It is therefore not surprising that Section 12(1) of the Nigerian constitution establishes, impliedly, that international treaties (including environmental treaties) ratified by the National Assembly should be implemented as law in Nigeria. Interestingly, Nigeria is a signatory to most international conventions on the environment. This therefore implies that Nigeria is, in actual sense, obligated to guarantee the protection of the Nigerian environment.
From the analysis above, it is evident that the protection of the Nigerian environment goes beyond an international law principle to being a statutory requirement within the Nigerian state [which, upon a respect of the rule of law, it is bound to comply with and implement]. Projecting this view, a scholar has argued that an effective protection of the environment from damage caused by the oil companies depends on what the government does with its ownership rights [38]. An argument in favour of this position was ratified in Article 2 of the Responsibility of States for Internationally Wrongful Acts, adopted by the International Law Commission at its fifty-third session, which provides that "there is an internationally wrongful act of a State when conduct consisting of an action or omission: (a) is attributable to the State under international law; and (b) constitutes a breach of an international obligation of the State" [39]. This therefore implies that a breach of an environmental principle (codified within international law) makes the state responsible for such breach. It has been noted that a strict liability crime is one in which the mental state of the accused is irrelevant as to part or all of the crime [40]; hence the prosecution need only prove that the accused "engaged in a voluntary act, or an omission to perform an act or duty which the accused was capable of performing" [40].

Citation: Chuks-Ezike C. Environmental crime liability of the Nigerian government in its oil pollution menace. Environ Risk Assess Remediat. 2018;2(2):1-7.
This piece's position on the liability of the Nigerian government in the Nigerian oil pollution is captured in the case of SERAC v Nigeria [41] before the African Commission on Human and Peoples' Rights [42]. In this case, the plaintiff, a non-governmental organization representing the Ogoni interest, alleged that the Nigerian government had colluded with Shell in its joint venture to cause environmental degradation and health problems for the Ogoni people [41]. The plaintiff based its rationale for this allegation on the failure of the Nigerian government to regulate the operations of oil companies against environmental damage. To this effect, the oil consortium disposed of toxic wastes, contaminating Ogoni waterways in violation of applicable international environmental standards [41].
The plaintiff further alleged that, contrary to the SPDC's allegations of sabotage, the consortium had neglected to properly maintain oil facilities, which in turn resulted in several oil spills close to villages. These spills had "serious short and long-term health impacts, including skin infections, gastrointestinal and respiratory ailments, and increased risk of cancers, and neurological and reproductive problems" [41]. The plaintiff further accused the Nigerian government of failing to produce basic environmental impact studies relating to the hazardous effects of oil production in Ogoniland, and even of refusing to allow scientists from environmental organizations to conduct assessments.
In its ruling, the Commission held that the proven conduct of the Nigerian government was in clear breach of the obligations to respect, protect, and fulfil the right to health and the right to a healthy environment under the African Charter. In summary, the Commission held that governments are obliged to: guard against threatening the health and environment of their citizens, and avoid policies and practices that might violate the integrity of individuals (and undermine their right to health and a healthy environment), an obligation on which the Commission found the Nigerian government guilty of conspiring with oil multinationals to destroy the environment and livelihood of the Ogoni people; take reasonable measures to prevent pollution and ecological degradation and to "promote . . . sustainable development and use of natural resources . . . ." [41], compliance with which, the Commission opined, requires states to conduct EIAs in order to provide communities with information regarding their exposure to hazardous substances, and on which the Commission found the Nigerian government guilty of failing to provide EIAs of Ogoniland and of preventing independent experts from conducting such assessments [41]; and "take reasonable . . . measures to prevent pollution and ecological degradation, to promote conservation, and to secure an ecologically sustainable development and use of natural resources" [41], on which the Commission found the Nigerian government guilty of failing to regulate the conduct of third parties, including corporations, that interfere with the right to health and a healthy environment of Nigerians [hence a violation of the Nigerian government's obligation to protect].
According to the Commission, the Nigerian government failed to regulate by: 1) failing to monitor the oil production activities of Shell and other multinational corporations operating in Ogoniland; 2) failing to enforce domestic and international environmental standards, which require safety measures and prompt oil spill response to prevent further environmental pollution and ecological devastation; and 3) failing to consult with indigenous communities before commencing oil operations.
The Commission therefore enjoined the Nigerian government to comply with these obligations. It is, however, on record that the Nigerian government may not have fully complied with these obligations, which necessitated the case of SERAP v. Federal Republic of Nigeria [43]. In this case, the plaintiff accused the Nigerian government of violating the right to health and standard of living, as well as the socio-economic rights of Niger Delta indigenes as stipulated under the African Charter, by failing to enforce existing environmental laws and regulations to protect the environment.
Whilst the court dismissed several claims brought by the NGO under the African Charter, it limited its judgment to Articles 1 and 24 of the Charter. Upon dismissing Nigeria's claim that human rights violations were non-justiciable, the court reaffirmed the African Commission's essential holding in SERAC. The court held that Nigeria's failure to monitor and enforce environmental laws violated the rights to health and a healthy environment under Articles 1 and 24 of the African Charter. The court further expressed a strong belief that the breach of the right to health and the right to a healthy environment in Nigeria has invariably resulted in the subsequent breach of other rights, including the rights to an adequate standard of living and to economic and social development. Accordingly, the court ordered the Nigerian government to: (1) take all effective measures towards restoring the environment of the Niger Delta; (2) take all necessary measures to prevent the commission of further environmental pollution; and (3) take all measures to hold the perpetrators of the environmental damage, including Shell, accountable.
It has, however, been reported that despite this notable decision in 2012, Nigeria is yet to take any appropriate measures to enforce the court's decision [44]. Scholars have maintained that, although there has not been any oil production in Ogoniland since 1993, many facilities remain in the area, and "pipelines carrying oil produced in other parts of Nigeria still pass through Ogoniland", which has resulted in continued oil spills [44].
Conclusion
As shown by its inability even to enforce the decision in the SERAC case, the Nigerian government still fails to give effect to the provisions of the Nigerian constitution [and, notably, to other environmental laws and regulations on oil spills which this work does not seek to discuss, but which do exist] [45][46][47][48][49][50]. Indeed, beyond oil spill laws, the Nigerian government seems to have failed to effectively monitor compliance with, and enforce, relevant environmental laws against gas flaring in the Niger Delta region, thus exacerbating the environmental devastation in the region [8]. This failure of monitoring and enforcement might have contributed to the difficulty in pointing out the precise level of responsibility of any of the participating corporate offenders [51][52][53][54][55]. It might therefore not be far-fetched to adduce some form of criminal liability for the Nigerian government, if not almost the same liability that would accrue to an oil polluter. This is because a failure to regulate the extensive pollution discussed could be interpreted as wilful permission of the pollution crime, despite the government having been statutorily mandated to regulate [56][57][58][59][60][61][62][63][64]. Under the strict liability purview, this could suffice for the actual pollution conduct and would expectedly create liability as a co-committer of the actual pollution offence or an accomplice-in-commission.
|
2019-05-30T23:47:38.527Z
|
2018-01-01T00:00:00.000
|
{
"year": 2018,
"sha1": "973672c074a8c4df6ba6510c80a8df0377b809a0",
"oa_license": null,
"oa_url": "https://www.alliedacademies.org/articles/environmental-crime-liability-of-the-nigerian-government-in-its-oil-pollution-menace.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "10c4508181632218492f353059ba73b68fce6e6c",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Business"
]
}
|
225727142
|
pes2o/s2orc
|
v3-fos-license
|
Body image perception and satisfaction in university students
– Body image represents the mental perception of body shapes and is a multifactorial structure that includes psychological, physical and emotional elements. The discrepancy between the subjective perception of body image and the desire for the ideal body type can interfere with the feeling of satisfaction and trigger the desire for changes in appearance, directly interfering with mental health and general well-being. Men and women may differ in body image satisfaction due to the different social influences and beauty standards imposed. The aim of this study was to evaluate the subjective perception of body image and satisfaction with body shapes among men and women. The sample consisted of 100 college students of both genders. Subjective perception of body image and satisfaction were measured through self-assessment, using a scale of figure silhouettes. There was a significant difference in the subjective perception of body image between genders, with women presenting a greater discrepancy between the real and the perceived image. In the analysis of satisfaction with body image there were no differences between genders, and both presented a high percentage of dissatisfaction. Among dissatisfied men, 46.2% would like to decrease their body shapes and 53.8% to increase them; among dissatisfied women, 76.1% would like to decrease their body dimensions and 23.9% to increase them. The strong pressures imposed by society and the standards set by the media prevail in determining body image dissatisfaction or negative self-assessment, regardless of gender.
INTRODUCTION
Body image (BI) represents the mental perception of body shapes, being a multifactorial structure that includes psychological, physical and emotional elements 1,2 , as well as perceptive (perception of the body as a whole), cognitive (assessment of the body and its parts), affective (feelings about the body) and behavioral (actions and behaviors that arise from perception) components 3 .
The perception of body image is permanently in a state of change and may not correspond to actual appearance, being influenced by subjective aspects such as attitudes, experiences and evaluations that the individual has of their own body 4 , and also by social aspects such as the influence of the media and the setting of ideal beauty standards 5 .
The discrepancy between subjective perception of body image and desire for the ideal body type can interfere with feelings of satisfaction and trigger emotional responses and yearning for changes in appearance, directly affecting psychological health and overall well-being 4,6,7 . Thus, dissatisfaction with body image occurs when the perceived image and the desired image are not congruent, causing a negative evaluation of the body itself 8 .
On the sociocultural construction of the body, Goldenberg 9 states that the valorization of certain attributes and behaviors over others makes a particular body type characteristic of each society. Because of this, individuals who do not fit beauty models may self-evaluate negatively or unrealistically and even engage in inappropriate behaviors to modify weight and body shape, such as excessive exercise, medication use, plastic surgery and eating disorders 6 .
Thus, culture determines the ideal standards of beauty and promotes increasing pressure that interferes with body image satisfaction 3 . Moreover, in Western countries thinness symbolizes competence, success, control and attractiveness, while being overweight represents laziness, personal indulgence, lack of self-control and poor willpower 10,11 . In line with this beauty archetype, the study of Freitas et al. 12 identified that men and women rated underweight people as more beautiful and desirable than obese people.
According to data released by the International Society of Aesthetic Plastic Surgery 13 , Brazil ranked second in the world in plastic surgery procedures. Approximately 1.44 million surgeries were performed in the country, representing 13.9% of the total number worldwide. The most popular plastic surgeries were liposuction and abdominoplasty, both directly related to the reduction of body size, reinforcing the premise of thinness as a beauty standard.
In addition to plastic surgery, body image dissatisfaction may also influence increased involvement with physical activity, as individuals dissatisfied with their bodies are more likely to engage in exercise programs 14 . Another important point refers to the inaccurate perception of body shapes, which may also favor weight gain. Overweight and obese people who do not self-evaluate realistically are more likely to exhibit attitudes detrimental to maintaining proper weight, such as overeating, eating unhealthy foods, and low levels of physical activity 15 .
In relation to gender, the culture of corpolatry sets different standards of beauty for women and men. For women, the model of thinness and fitness prevails: the female body should be well cared for, free from unwanted marks (wrinkles, stretch marks, blemishes and cellulite) and from excess fat and sagging 9,16,17 . For men, the pattern is based on low body fat and higher muscle mass 18 . These patterns were verified in the study of Kuan et al. 19 , in which women wanted a body figure below their real weight, while men chose overweight images.
In addition, women are more concerned and dissatisfied with their bodies, are more likely to follow strict weight loss diets and are at higher risk of developing eating disorders 12,20 . Because of this, women are more likely to have distortions in the subjective perception of body image when compared to men, overestimating body size and indicating a desire to be thinner 21 .
Men, on the other hand, tend to accept their bodies better, since they are less predisposed to social influences and pressures and to stigmas related to body weight 22 . However, in the study of Radwan et al. 23 , who assessed body image dissatisfaction in 308 college students and the association between real and perceived BMI, it was identified that 80.9% of participants were dissatisfied with their body shape, with no differences between women and men. Similarly, Coelho et al. 24 evaluated the body image satisfaction of 1591 adults, evidencing that 85.9% of the subjects indicated dissatisfaction with their body image.
Considering the problems involved with body image dissatisfaction and distortion, it is important to understand the influence of gender on the formation of body self-concept and central structures linked to the identity of individuals. Thus, this comparative study aimed to evaluate the subjective perception of body image and satisfaction with body shapes between men and women.
Sample
The sample consisted of 100 university students of both sexes, with a mean age of 26.26 (±6.01) years. The subjects were divided into two groups, according to gender, for the comparative analysis of the proposed objective. The study was approved by the local Ethics Committee of the Universidade do Estado de Minas Gerais (UEMG), CAAE n° 97237218.4.0000.5525.
Instruments
Subjective perception of body image and body satisfaction were measured by self-assessment using the Silhouette Figure Scale (SFS), adapted and validated for the Brazilian population in the studies of Kakeshita 25 and Kakeshita et al. 26 . The SFS is composed of 15 female body-shape silhouettes and 15 male silhouettes arranged in ascending order of BMI, from the smallest BMI value (12.5 kg/m 2 ) to the largest (47.5 kg/m 2 ).
To determine the real body image (BI), the Body Mass Index (BMI) was calculated, a measure expressed by the relationship between body mass (kg) divided by height in meters squared (BMI = kg/m 2 ). From the BMI, the relationship between the subjective perception of body image (SPI) and the real body image (BI) was established. For this, Table 1 was used as reference, showing the correspondence between BMI values and the silhouettes of the SFS. When the subject selected the same figure for perceived and desired image, they were classified as "satisfied" with their body image. When the figure chosen as "desired" was larger than the SPI, the intention to increase body size was considered; when the chosen figure was smaller, the desire to reduce it was considered. In both cases the subject was classified as dissatisfied with their body image.
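The scoring procedure above can be sketched in Python. The exact BMI-to-figure correspondence is given by the paper's Table 1, which is not reproduced here; the mapping below assumes the 15 silhouettes are evenly spaced in BMI from 12.5 to 47.5 kg/m² (step 2.5), and all function names are illustrative.

```python
# Sketch of the classification procedure described above.
# ASSUMPTION: the 15 SFS figures are evenly spaced in BMI,
# from 12.5 to 47.5 kg/m^2 (step 2.5); the paper's Table 1
# is not reproduced here, so this mapping is illustrative.

def bmi(mass_kg: float, height_m: float) -> float:
    """Body Mass Index: mass (kg) divided by height squared (m^2)."""
    return mass_kg / height_m ** 2

def bmi_to_figure(bmi_value: float) -> int:
    """Map a BMI to the nearest of the 15 SFS silhouettes (1..15)."""
    figure = round((bmi_value - 12.5) / 2.5) + 1
    return max(1, min(15, figure))

def classify_satisfaction(spi: int, desired: int) -> str:
    """Satisfied if the perceived and desired figures coincide;
    otherwise dissatisfied, wanting to increase or decrease."""
    if desired == spi:
        return "satisfied"
    return "wants to increase" if desired > spi else "wants to decrease"

real_figure = bmi_to_figure(bmi(70.0, 1.70))    # BMI ~24.2 -> figure 6
print(real_figure)
print(classify_satisfaction(spi=9, desired=7))  # "wants to decrease"
```

Under the evenly spaced assumption, a eutrophic BMI around 24 kg/m² lands on figure 6, consistent with the sample's central tendency reported in the Results.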
Treatment and Data analyses
The measurements verified in this study were: (1) Subjective Body Image Perception (SPI); (2) Real Body Image (BI); (3) Desired Body Image (ID); (4) the difference between BI and SPI (Distortion); and (5) the difference between BI and ID (Range).
The SPI indicates which figure of the SFS (from 1 to 15) the subject perceives as representing their own body (reference). From this proposition, this study also adopted other measures associated with the SPI. Using the same scale, the individual indicates the figure representing the desired image (ID), a measurement that can give direction to the goal each one may have. To enable comparisons on the scale, the real body image (BI) was adopted, which associates each volunteer with the scale figure corresponding to their BMI.
This study also proposed two new measures, Distortion and Range. Distortion is the distance between the SPI and the BI, and Range is the distance between the BI and the ID.
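The two derived measures are simple differences on the 1-15 figure scale. Whether the study uses signed or absolute differences is not stated, so the signed convention below is an assumption (it is consistent with the reported values: women's SPI of 9 against a BI of 6 gives a Distortion of 3).

```python
# Illustrative definitions of the study's two derived measures.
# ASSUMPTION: signed differences on the 1..15 figure scale.

def distortion(spi: int, bi: int) -> int:
    """Distortion: perceived figure (SPI) minus real figure (BI)."""
    return spi - bi

def range_measure(id_fig: int, bi: int) -> int:
    """Range: desired figure (ID) minus real figure (BI)."""
    return id_fig - bi

print(distortion(9, 6))      # women's central tendency: 3
print(range_measure(7, 6))   # desired figure 7 vs. real figure 6: 1
```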
Descriptive statistics were performed using the median and interquartile range. Normality was verified by the Kolmogorov-Smirnov test; none of the data presented a normal distribution. The significance level was set at ≤ 0.05. Data were divided into two groups: male subjects (M) and female subjects (F).
For inferential analysis of the data, Mann-Whitney U tests were used to compare the above measures between the sexes. In addition, to verify the association between satisfaction with body image and gender, the Chi-square test of independence was performed.
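As a sketch, the two inferential tests named above could be run with SciPy as follows. The arrays and contingency counts below are synthetic stand-ins (the contingency table only loosely echoes the dissatisfaction counts reported later), not the study's raw data.

```python
# Illustrative run of the study's two inferential tests with SciPy.
# All data below are synthetic placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
spi_women = rng.integers(5, 13, size=50)  # synthetic SPI figure choices
spi_men = rng.integers(4, 10, size=50)

# Mann-Whitney U test: compare SPI distributions between sexes.
u_stat, p_value = stats.mannwhitneyu(spi_women, spi_men)
print(u_stat, p_value)

# Chi-square test of independence: satisfaction vs. sex, as a 2x2
# contingency table (satisfied, dissatisfied). Counts are invented,
# loosely based on the dissatisfaction rates reported in the Results.
table = np.array([[11, 39],    # men: satisfied, dissatisfied
                  [2, 46]])    # women: satisfied, dissatisfied
chi2, p, dof, expected = stats.chi2_contingency(table)
print(chi2, p)
```

`mannwhitneyu` defaults to a two-sided alternative, matching the comparison reported in the Results; `chi2_contingency` applies Yates' continuity correction by default for 2x2 tables.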
RESULTS
The descriptive characteristics of the sample are presented in Table 2. Both groups presented BMI values corresponding to SFS figure 6, being classified as eutrophic in relation to the appropriate ratio of body mass to height. In the analysis of Subjective Body Image Perception (SPI), the central tendency was the choice of figure 9 by women (interquartile range ± 4) and figure 6 by men (interquartile range ± 4). In the comparison of SPI between sexes, there was a significant difference, with women presenting higher SPI values (U = 801.5, p ≤ 0.05) (Figure 1A). The Distortion analysis refers to the difference between BI and SPI. There was a significant difference for this measure in the comparison between men and women (U = 689, p ≤ 0.05) (Figure 1B). Women presented a central tendency Distortion of 3.0 (standard deviation ± 1.73), while for men the Distortion was 1.0 (standard deviation ± 1.08).
The distance between the BI and the subject's ID was assessed by means of the variable called Range. For this analysis there was no significant difference between genders (U = 1128.5, p > 0.05) (Figure 1D).
In the analysis of satisfaction, the association was tested with the Chi-square test of independence, which indicated no significant difference between Satisfaction and Sex, even after Bonferroni correction (χ² = 5.26, adjusted significance level ≤ 0.0025). Frequency analysis of the observed data also indicated that men and women are dissatisfied with body image (Table 3). Of the 39 men dissatisfied with body image, 46.2% indicated the desire to decrease body dimensions (n = 18) and 53.8% the desire to increase body dimensions (n = 21). Regarding dissatisfied women, 23.9% indicated the desire to increase their body dimensions (n = 11) and 76.1% the desire to decrease their body dimensions (n = 35).

Figure 1. Comparison of Subjective Body Image Perception (SPI) (Figure 1A), Distortion (Figure 1B), Desired Body Image (ID) (Figure 1C), and Range (Figure 1D) between women and men.
DISCUSSION
The present study evaluated the SPI and body satisfaction, as well as the actual (BI) and desired body images (ID). In addition, the responses between men and women were compared to identify the influence of gender on these variables. Finally, the study proposed two new variables, called Distortion (difference between actual image and subjective perception of image) and Range (difference between actual and desired image).
The results indicated that there was a significant difference in SPI between men and women, in which women overestimated their body shapes and chose larger figures than their BI. There was also a significant difference in the Distortion analysis, as women presented a greater discrepancy between BI and SPI. Alipour et al. 27 evaluated 184 college women and found that only 35.86% of them chose a correct body image. In another study it was observed that women tend to rate themselves as heavier compared to men, even when they have an adequate BMI 28 .
In the present study, women tended to choose images representing bodies larger than their actual body shapes. Similarly, in the studies of Ansari et al. 20 and Kiviruusu et al. 21 , women rated themselves as heavier and indicated a desire to be thinner. In this sense, it can be said that women are more influenced by the social impositions of the ideal model of body and beauty 9,16,17 and are more likely to present SPI changes 6 .
In the choice of the desired image (ID) there were no significant differences between women and men. However, the women self-rated as represented by figure 9 and chose figure 7 as desired, indicating the desire to decrease body size. These results are in accordance with Kuan et al. 19 and Jiménez et al. 29 , who highlighted that women would like to lose weight, choosing leaner body images as desired. Similarly, Heiman and Olenik-Shemesh 30 identified that women are more concerned with body weight than men and are influenced by the media in determining their ideal appearance, projecting the desire for a tall, lean body.
For men, there was no discrepancy between SPI and ID. These results differ from the study of Kuan et al. 19 , who found that men demonstrated a desire for an overweight body shape. However, some studies indicate that men have better acceptance of their bodies and are less prone to social influences regarding an ideal body pattern 22 .
In the satisfaction with body image variable, no differences were found between genders, and both men and women presented a high percentage of dissatisfaction. Still in this scope, it is noteworthy that 39 men (78%) were dissatisfied with body image, selecting an ID different from the SPI, and 46 women (96%) also showed incongruence between the ID and the SPI. Similarly, in the study of Jiménez et al. 29 , men and women presented 75% dissatisfaction with their body shape. Coelho et al. 24 also identified a dissatisfaction percentage of 85.9% among the participating subjects, without gender distinction. Conti et al. 10 verified dissatisfaction for both sexes, especially regarding excess weight and abdominal fat. In the study of Freitas et al. 12 , in turn, it was observed that the percentage of dissatisfied women was double that of men, reinforcing women's more negative self-perception of body image.
The results indicated that women and men are dissatisfied with body image. In this regard, it is important to note that dissatisfaction with body shapes can trigger harmful health behaviors, such as strict diets and an increased risk of developing eating disorders 12,20 . Thus, it is essential to understand in a broader sense the possible influences of dissatisfaction with body image on the emergence of health risk behaviors, and further investigation of this issue is needed.
Regarding the limitations of the study, it is noteworthy that the sample investigated showed similarity of demographic and sociocultural characteristics, such as age and educational level. These similarities may have influenced the perception of the ideal body type and satisfaction with body image, since the representation of body image suffers interference from the social context. Thus, it is suggested to investigate groups with greater distinctions in relation to socio-cultural aspects in the search for broader results.
CONCLUSION
Body image is an important construct of personal identity and is related to the subjective perception that the subject has of their body dimensions. In addition, the subjective perception of body image and the level of satisfaction with the body can directly interfere with psychological health and general well-being, leading to the adoption of inappropriate behaviors with severe consequences for the subjects' health.
The results of this study indicate that both men and women had a high percentage of dissatisfaction with body image, with no differences between genders. In addition, women had a subjective perception of body image that diverged from their actual body image. Because of this, it can be said that the strong pressures imposed by society and the standards set by the media prevail in determining body image dissatisfaction or negative self-assessment, regardless of gender.
Further studies are suggested to identify the attitudes assumed by the subjects regarding dissatisfaction with body image and its possible health risks.
Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. This study was funded by the authors.
Ethical approval
Ethical approval was obtained from the local Human Research Ethics Committee (Ethics Committee of the Universidade do Estado de Minas Gerais), CAAE n° 97237218.4.0000.5525, and the protocol was written in accordance with the standards set by the Declaration of Helsinki.
|
2020-07-16T09:06:05.803Z
|
2020-06-26T00:00:00.000
|
{
"year": 2020,
"sha1": "70ae6e53ad9a5e6b70723473b0c93ebef01c9834",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/rbcdh/v22/1415-8426-rbcdh-22-e70423.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "f4f0d611fcbb51d85ee139a7c6e8594204af9a4a",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
}
|
22998577
|
pes2o/s2orc
|
v3-fos-license
|
Observations of total RONO2 over the boreal forest: NOx sinks and HNO3 sources
In contrast with the textbook view of remote chemistry where HNO3 formation is the primary sink of nitrogen oxides, recent theoretical analyses show that formation of RONO2 (ΣANs) from isoprene and other terpene precursors is the primary net chemical loss of nitrogen oxides over the remote continents where the concentration of nitrogen oxides is low. This then increases the prominence of questions concerning the chemical lifetime and ultimate fate of ΣANs. We present observations of nitrogen oxides and organic molecules collected over the Canadian boreal forest during the summer which show that ΣANs account for ∼20% of total oxidized nitrogen and that their instantaneous production rate is larger than that of HNO3. This confirms the primary role of reactions producing ΣANs as a control over the lifetime of NOx (NOx = NO + NO2) in remote, continental environments. However, HNO3 is generally present in larger concentrations than ΣANs, indicating that the atmospheric lifetime of ΣANs is shorter than the HNO3 lifetime. We investigate a range of proposed loss mechanisms that would explain the inferred lifetime of ΣANs, finding that in combination with deposition, two processes are consistent with the observations: (1) rapid ozonolysis of isoprene nitrates where at least ∼40% of the ozonolysis products release NOx from the carbon backbone and/or (2) hydrolysis of particulate organic nitrates with HNO3 as a product. Implications of these ideas for our understanding of the NOx and NOy budget in remote and rural locations are discussed.
E. C. Browne et al.: Observations of RONO2: NOx sinks and HNO3 sources

HCHO) (e.g. Fuentes et al., 2000). In turn, these oxidants control the burden of tropospheric ozone and of both short-lived (e.g. isoprene) and long-lived (e.g. CH4, CH3Br) organic compounds, thus impacting climate. Consequently, the oxidative chemistry of BVOC has been the subject of extensive research. Recent advances in laboratory and field measurements have focused on the products of BVOC oxidation and have inspired renewed examination of how the mechanisms of BVOC oxidation affect atmospheric composition. In particular, the impact of BVOC on the HOx budget has been highlighted (e.g. Thornton et al., 2002; Lelieveld et al., 2008; Hofzumahaus et al., 2009; Stavrakou et al., 2010; Stone et al., 2011; Whalley et al., 2011; Mao et al., 2012; Paulot et al., 2012; Taraborrelli et al., 2012).
Oxidation of BVOC by OH results in peroxy radicals, which may react with NOx (NOx = NO + NO2), with other peroxy radicals (RO2 or HO2), or, in some cases, may isomerize (potentially regenerating OH). The reaction of peroxy radicals with NO2 results in the formation of peroxy nitrates (RO2NO2), a class of molecules which generally act as temporary reservoirs of NOx and serve to transport NOx on regional and global scales. Reaction of peroxy radicals with NO generally acts to propagate the ozone production cycle (R1a); however, a minor channel of the reaction (R1b), which proceeds with efficiency α (also known as the branching ratio), results in the formation of organic nitrates (RONO2):

RO2 + NO → RO + NO2 (R1a)

RO2 + NO (+M) → RONO2 (+M) (R1b)
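The competition between channels (R1a) and (R1b) can be illustrated numerically: the instantaneous RONO2 production rate is the branching ratio α times the total RO2 + NO rate. Every value below (rate constant, radical and NO concentrations, and α) is an assumed, order-of-magnitude placeholder, not a number from this paper.

```python
# Sketch of the instantaneous organic-nitrate production implied by
# the branching between (R1a) and (R1b). All numbers are illustrative
# assumptions, not values reported in the paper.

k_ro2_no = 9.0e-12  # cm^3 molec^-1 s^-1, assumed RO2 + NO rate constant
ro2 = 2.0e8         # molec cm^-3, assumed peroxy radical concentration
no = 2.5e9          # molec cm^-3, roughly 0.1 ppbv NO near the surface
alpha = 0.10        # assumed RONO2 branching ratio

total_rate = k_ro2_no * ro2 * no     # molec cm^-3 s^-1, R1a + R1b combined
p_an = alpha * total_rate            # R1b: RONO2 (organic nitrate) formation
p_no2 = (1.0 - alpha) * total_rate   # R1a: NO2 production (ozone-propagating)

print(f"P(RONO2) = {p_an:.2e} molec cm^-3 s^-1")
print(f"P(NO2)   = {p_no2:.2e} molec cm^-3 s^-1")
```

The point of the sketch is simply that even a modest α diverts a fixed fraction of every RO2 + NO encounter away from ozone production and into the organic nitrate reservoir.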
Calculations with box and chemical transport models (CTMs) have shown that organic nitrates play a significant role in the NOx and O3 budgets (e.g. Trainer et al., 1991; Chen et al., 1998; Horowitz et al., 1998, 2007; Liang et al., 1998; von Kuhlmann et al., 2004; Fiore et al., 2005; Wu et al., 2007; Paulot et al., 2012). In Browne and Cohen (2012) we have shown that, at NOx concentrations typical of remote and rural environments, the formation of ΣANs is the dominant instantaneous NOx sink even at modest concentrations of BVOC. However, the net impact on O3 and NOx depends on the extent to which ΣANs act as a permanent versus temporary NOx sink, as has been shown in numerous models (e.g. von Kuhlmann et al., 2004; Fiore et al., 2005, 2011; Horowitz et al., 2007; Ito et al., 2009; Paulot et al., 2012). The lifetime and fate of ΣANs remain among the outstanding questions about their chemistry; compared to other aspects of NOy, HOx and VOC chemistry, there has been limited research on the products of ΣANs oxidation. Even for those nitrates whose oxidation products and yields have been measured, these measurements have occurred under conditions where the resulting peroxy radicals react primarily with NO and not with HO2 or RO2 (which are the expected reactions in the low-NOx conditions of the boreal forest). As recently pointed out by Elrod and co-workers (Hu et al., 2011), ΣANs may also be removed via hydrolysis in aerosol with an assumed product of NO3−. This uncertainty in the fate of ΣANs results in large uncertainties in global ozone budgets. For instance, recent modeling studies have found that the ozone response to increasing isoprene emissions (as predicted in a warmer climate) is highly sensitive to the fate of isoprene nitrates (Ito et al., 2009; Weaver et al., 2009).
Here, we use observations, collected aboard the NASA DC-8 aircraft, of a suite of nitrogen oxides, organic molecules, and oxidants (OH and O 3 ) from the July 2008 NASA ARCTAS (Arctic Research of the Composition of the Troposphere from Aircraft and Satellites) campaign over the Canadian boreal forest, to examine the extent to which the organic nitrate products of BVOC oxidation control the lifetime of NO x in the remote continental boundary layer. We find that the production of ANs is dominated by biogenic molecules and is generally larger than the production of HNO 3 . The concentration measurements, used in conjunction with the production rates, also provide a constraint on the ratio of the ANs lifetime to the HNO 3 lifetime over the boreal forest. We examine the loss processes of ANs and find that both deposition and chemical loss processes (including oxidation of isoprene nitrates and hydrolysis of ANs in aerosol) are important. We find that the ozonolysis of isoprene nitrates is the largest gas-phase sink and that the particle-phase hydrolysis of ANs, which produces HNO 3 , may be both an important loss process for ANs and a significant source of HNO 3 . The branching of ANs loss between the processes that return NO x to the pool of available free radicals (e.g. oxidation) and those that remove NO x from the atmosphere (e.g. deposition, hydrolysis) has important consequences for regional and global NO x , O 3 , and OH.
ARCTAS measurements
The NASA ARCTAS experiment was designed to study processes influencing Arctic chemistry and climate and has been described in detail previously by Jacob et al. (2010). In this analysis we use measurements from the summer portion of the campaign over the Canadian boreal forest (June-July 2008). These measurements were made aboard the NASA DC-8 aircraft which contained instrumentation for an extensive suite of gas and aerosol measurements.
NO 2 , total peroxy nitrates ( PNs), and total organic nitrates ( ANs) were measured aboard the DC-8 using thermal dissociation-laser induced fluorescence (TD-LIF). The instrument has been described in detail elsewhere (Day et al., 2002; Wooldridge et al., 2010) and the specific configuration used during ARCTAS has been described in Browne et al. (2011). Briefly, a two-cell TD-LIF with supersonic expansion (Thornton et al., 2000; Cleary et al., 2002; Day et al., 2002; Wooldridge et al., 2010) was deployed for ARCTAS.
E. C. Browne et al.: Observations of RONO 2 : NO x sinks and HNO 3 sources
We use a 7 kHz, Q-switched, frequency-doubled Nd:YAG laser to pump a tunable dye laser (pyrromethene 597 in isopropanol) tuned to a 585 nm absorption in the NO 2 spectrum. We reject prompt scatter using time-gated detection and eliminate scattered light at < 700 nm using bandpass filters. Fluorescence is imaged onto a red-sensitive photomultiplier tube and counts are recorded at 4 Hz. The dye laser is tuned on and off an isolated rovibronic feature in the NO 2 spectrum, spending 9 s on the peak of the NO 2 absorbance and 3 s in an off-line position in the continuum of the NO 2 absorption. The difference between the two signals is directly proportional to the NO 2 concentration. We calibrate at least every two hours during a level flight leg using a 4.5 ppm NO 2 reference standard diluted to ∼ 2-8 ppbv in zero air.
The sample flow was split into thirds, with one third directed to detection cell 1, where ambient NO 2 was continuously measured. The remaining flow was equally split between the measurement of total peroxy nitrates ( PNs) and total organic nitrates ( ANs), which are detected by thermal conversion to NO 2 in heated quartz tubes. PNs were converted to NO 2 at ∼ 200 • C and ANs at ∼ 375 • C, which is sufficient to dissociate ANs as well as any semivolatile aerosol-phase organic nitrates (Rollins et al., 2010b). We do not detect non-volatile nitrates (i.e. NaNO 3 ). The resulting NO 2 of both heated channels (NO 2 + PNs or NO 2 + PNs + ANs) was measured in cell 2. The duty cycle of cell 2 was evenly split between the measurement of PNs and of ANs and alternated between the two either every 12 s or every 24 s. The 9 s average from each on-line block was reported to the data archive, which is publicly available at http://www-air.larc.nasa.gov/missions/arctas/arctas.html.
PNs are calculated from the difference in signal between the ambient temperature and 200 • C channel and likewise, ANs are calculated from the difference in signal between the 375 • C (NO 2 + PNs + ANs) and the 200 • C (NO 2 + PNs). The detection limit (defined as signal to noise of 2 for the 9 s average) of the ANs signal is directly related to the magnitude of the NO 2 + PNs (NP) signal and during ARCTAS was on average < 20 pptv for a 200 pptv NP signal. The ANs signal also requires interpolation of the NP signal which we calculate using a weighted sum of a linear interpolation of the NP signal (weight ∼ 1/3) and an interpolation of the ratio of NP to NO 2 signal scaled to the measured NO 2 . The uncertainty in the ANs measurement depends both on the magnitude and the variability of the NP signal. On average, the NP signal varied by less than 20 % on the timescale of the ANs measurements. An example time series of the ANs and PNs data is shown in Fig. 1.
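The weighted interpolation of the NP signal described above can be sketched as follows; this is a minimal illustration of the approach, not the flight code, and the function name, array layout, and the ~2/3 complementary weight are assumptions beyond the ~1/3 weight stated in the text.

```python
import numpy as np

# Sketch of the NP (NO2 + PNs) interpolation onto the ANs sample times:
# a weighted sum of (a) a direct linear interpolation of the NP signal and
# (b) an interpolation of the NP/NO2 ratio rescaled by the continuously
# measured NO2. The ~1/3 weight on the direct interpolation follows the
# text; the remaining details are illustrative assumptions.
def interp_np(t_out, t_np, np_sig, t_no2, no2_sig):
    lin = np.interp(t_out, t_np, np_sig)               # direct interpolation
    ratio = np_sig / np.interp(t_np, t_no2, no2_sig)   # NP/NO2 at NP times
    scaled = np.interp(t_out, t_np, ratio) * np.interp(t_out, t_no2, no2_sig)
    return (1.0 / 3.0) * lin + (2.0 / 3.0) * scaled
```

Using the measured NO 2 to rescale the NP/NO 2 ratio lets fast NO 2 variability propagate into the interpolated NP signal rather than being smoothed over by a purely linear interpolation.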
In the analysis below we use measurements only between 10 and 18 local solar time which enables us to neglect the possible interference from ClNO 2 (Thaler et al., 2011) since ClNO 2 is rapidly photolyzed during daylight hours.
In addition to the core measurement of ANs, described above, we use the measurements listed in Table 1. These measurements are from the flights over the boreal forest of Canada that took place 29 June-13 July 2008, averaged to the 60 s time base (version 13).
ANs concentration and production
In the continental boundary layer over the boreal forest (between 50 • and 67.5 • N), we observed that ANs were 22 % (median) of NO y (Fig. 2) in background conditions which were sampled on flights 17, 19, 20, and 23. Periods of boundary layer sampling were determined by visually inspecting the potential temperature and the ratio of potential temperature to IR surface temperature. The boundary layer heights determined by this method (∼ 1.5-2.4 km a.g.l.) are consistent with boundary layer heights measured over Northern Saskatchewan in July 2002 (Shashkov et al., 2007). The minimum altitude of sampling was just under 500 m. We see no evidence of a significant vertical gradient in the contribution of ANs to NO y , and thus believe the use of the median values to be appropriate. The background conditions were defined to exclude recent anthropogenic and biomass burning influences by only using conditions where CO was less than 180 ppbv and NO x was less than 200 pptv. Remaining biomass burning influences were removed by visually inspecting the HCN and CH 3 CN concentration time series and excluding plumes. The mean concentrations of CO, CH 3 CN, and HCN used in our analysis are lower than the means of the background ARCTAS measurements described in Simpson et al. (2011). In this analysis we define NO y as the sum of the measured individual components of NO y (NO, NO 2 , PNs, ANs, gas phase nitric acid, and submicron aerosol nitrate). The observation that ANs are on the order of 20 % of NO y is consistent with almost all past measurements of ANs from TD-LIF in continental locations (Day et al., 2003; Rosen et al., 2004; Cleary et al., 2005; Perring et al., 2009, 2010; Farmer et al., 2011); however, in this data set we find that the instantaneous production rate of ANs is larger than the HNO 3 production rate, a situation that has not been reported previously.

Fig. 2. NO y composition in the boundary layer over the remote boreal forest for background conditions (see text): ANs 22 %, HNO 3 (gas) 25 %, HNO 3 (particle) 4 %. HNO 3 (particle) refers to submicron particulate NO − 3 as measured by the AMS and may include a contribution from particulate ANs (see text).

Table 1. Species and measurement techniques used in this paper in addition to the core measurement of ANs and NO 2 .
Using the measured VOCs, OH, HO 2 , and NO concentrations (Table 1), we calculate the instantaneous production rate of ANs (P( ANs), Eq. 1) via OH oxidation of VOCs:

P( ANs) = Σ i α i k RO 2i +NO [RO 2i ][NO] (1)

by assuming that peroxy radicals are in steady-state, i.e. that their production (Eq. 2) balances their loss (Eq. 3):

P(RO 2i ) = k OH+VOC i [OH][VOC i ] (2)

L(RO 2i ) = (k RO 2i +NO [NO] + k RO 2i +HO 2 [HO 2 ] + k RO 2i +RO 2 [RO 2 ] + k isom )[RO 2i ] (3)

which results in Eq. (4):

P( ANs) = Σ i α i γ i k OH+VOC i [OH][VOC i ] (4)

where

γ i = k RO 2i +NO [NO] / (k RO 2i +NO [NO] + k RO 2i +HO 2 [HO 2 ] + k RO 2i +RO 2 [RO 2 ] + k isom ) (5)

Here, k isom refers to the rate of a unimolecular isomerization reaction of RO 2 . This class of reactions has recently been shown to be important when the lifetime of RO 2 is long, such as in low NO x conditions (e.g. Peeters et al., 2009; Peeters and Müller, 2010; Crounse et al., 2011). γ (Eq. 5) represents the fraction of RO 2 that reacts with NO and depends on the identity of the RO 2 . We calculate specific γ values for peroxy
Fig. 3. Calculated instantaneous production rates of HNO 3 (red) and ANs (black) as a function of NO x . The points are calculated using in situ observations as described in Appendix A. The lines are calculations from the steady-state model described in Browne and Cohen (2012). The two black lines shown assume branching ratios of 5 % (solid black line) and 10 % (dashed black line) for ANs production from the reaction of RO 2 with NO. These two branching ratios are assumed to bracket the values expected in forested environments.
radicals derived from monoterpenes (α-and β-pinene), isoprene, methacrolein, and methyl vinyl ketone. All other peroxy radicals (which, as shown below, account for only 3 % of the ANs production) are assumed to behave like methyl vinyl ketone peroxy radicals. Each of these γ values is calculated using the RO 2 + HO 2 rate calculated from the parameterization used in the Master Chemical Mechanism (MCM) v3.2 (Jenkin et al., 1997; Saunders et al., 2003) available at http://mcm.leeds.ac.uk/MCM. We use measured isomerization rates for isoprene peroxy radicals (Crounse et al., 2011) and methacrolein peroxy radicals. Although there are theoretical predictions that peroxy radicals derived from monoterpenes undergo a fast ring closure reaction followed by addition of O 2 , regenerating a peroxy radical (Vereecken and Peeters, 2004), there are no experimental constraints on the organic nitrate yield for this peroxy radical. We assume that the organic nitrate yield is the same as the parent and thus implicitly assume the isomerization reaction of monoterpene-derived RO 2 is unimportant for our calculation. We also assume that the isomerization reaction is negligible for the remaining RO 2 species. All γ values use the same rate coefficients for RO 2 + NO (from MCM v3.2) and for RO 2 + RO 2 (the IUPAC CH 3 O 2 + C 2 H 5 O 2 reaction rate available at http://www.iupac-kinetic.ch.cam.ac.uk/, Atkinson et al., 2006). In these calculations, when the measured NO is less than 0 pptv, we assign it a value of 1 pptv. Due to the more complete data coverage, we use the LIF measurements of OH and HO 2 ; however, the LIF and CIMS data agree well and we see no significant difference when using the CIMS data (Appendix A2). Details regarding the VOCs, OH oxidation rates, branching ratios, and uncertainties regarding rate coefficients are described in Appendix A.
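As a concrete illustration of the steady-state calculation of γ (Eq. 5) and a single VOC's contribution to P( ANs) (Eq. 4), the sketch below evaluates both quantities; all rate coefficients, the isomerization rate, and the concentrations are illustrative placeholders chosen for low NO x conditions, not the ARCTAS values.

```python
# gamma: fraction of peroxy radical loss proceeding via reaction with NO
# (Eq. 5), and one VOC's term of P(ANs) (Eq. 4). Units: rate coefficients
# in cm3 molecules-1 s-1, k_isom in s-1, concentrations in molecules cm-3.
# All numerical values are illustrative placeholders.

def gamma(k_no, no, k_ho2, ho2, k_ro2, ro2, k_isom):
    """Fraction of RO2 loss via reaction with NO."""
    return k_no * no / (k_no * no + k_ho2 * ho2 + k_ro2 * ro2 + k_isom)

def p_ans_term(alpha, k_oh_voc, oh, voc, g):
    """One term of P(ANs): alpha * gamma * k_OH+VOC * [OH] * [VOC]."""
    return alpha * g * k_oh_voc * oh * voc

# Assumed low-NOx conditions: ~10 pptv NO, so RO2 + HO2 competes strongly
# with RO2 + NO and gamma is well below 1.
g = gamma(k_no=9.0e-12, no=2.5e8, k_ho2=1.5e-11, ho2=1.0e9,
          k_ro2=2.0e-13, ro2=5.0e8, k_isom=2.0e-3)
rate = p_ans_term(alpha=0.10, k_oh_voc=1.0e-10, oh=2.0e6, voc=2.5e10, g=g)
```

With these placeholder inputs γ is roughly 0.1, illustrating why, at low NO x , only a small fraction of peroxy radicals proceed through the nitrate-forming channel even before the branching ratio α is applied.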
The instantaneous production of HNO 3 is calculated using the measured OH and NO 2 . We use the rate constant from Mollner et al. (2010) with the temperature dependence from Henderson et al. (2012).
The total calculated P( ANs), shown in Fig. 3, is similar to or greater than the calculated nitric acid production. Biogenic species account for the majority (97 %) of P( ANs) (Fig. 4), with isoprene (64 %), methyl vinyl ketone (9 %), and α-and β-pinene (25 %) contributing the most production. Due to the rapid isomerization of the methacrolein peroxy radical, very few methacrolein nitrates are formed (< 1 % of P( ANs)). The conclusion that the P( ANs) rate is faster than the P(HNO 3 ) rate holds for the isoprene nitrate branching ratio of 11.7 % from Paulot et al. (2009), as shown in Fig. 3. The time series of ANs and isoprene shown in Fig. 1 illustrates how increases in ANs roughly correspond to increases in the precursors (e.g. isoprene).
Since only α and β-pinene were measured aboard the DC-8 aircraft, it is likely that the concentration of monoterpenes is underestimated. Enclosure measurements of black spruce trees (an important constituent of the Canadian boreal forest) indicate that emissions of camphene and 3-carene are larger than those of α-pinene (Fulton et al., 1998), and extensive measurements of VOCs in the boreal forest of Finland have shown that α-and β-pinene represent only a fraction of the monoterpenes (e.g. Räisänen et al., 2009; Hakola et al., 2012). Vertical profile measurements from the surface to ∼ 800 m in the boreal forest of Finland also indicate steep vertical gradients in monoterpenes and isoprene (Spirig et al., 2004), indicating that production of ANs is likely much faster at altitudes lower than those sampled by the DC-8 aircraft (minimum of ∼ 500 m). Since the composition of monoterpenes is dependent on the ecosystem, we do not attempt to scale the monoterpene measurement. Rather, we note that if the monoterpene concentration is doubled, the monoterpene contribution to ANs production increases to 39 %, reducing the isoprene contribution to 51 %. The median of the ratio of P( ANs) to P(HNO 3 ) also increases from 1.96 to 2.6.
Despite this larger production rate of ANs than of HNO 3 , the median concentration of ANs (108 pptv) is less than the median concentration of the sum of gas phase HNO 3 and particulate NO − 3 (180 pptv). One possible explanation of this apparent discrepancy is that entrainment may have a significant effect on the concentrations. The observed concentration differences during flight segments where the DC-8 crossed the boundary layer indicate that entrainment will dilute both HNO 3 and ANs. ANs have a slightly faster dilution: the median concentration difference above and within the boundary layer is 1.0 × 10 9 molecules cm −3 for ANs and 7.0 × 10 8 molecules cm −3 for HNO 3 (gas + particle). As an upper limit estimate we assume that the average boundary layer height is 1.5 km and is growing at 10 cm s −1 . Even with this dilution correction the production rate of ANs is greater than that of HNO 3 in 50 % of the boundary layer data. In contrast, in 90 % of the data the concentration of ANs is less than that of HNO 3 (gas+particle). Since we use an upper limit estimate of the effect of entrainment and considering the production rate of ANs is likely larger than calculated here due to the presence of unmeasured BVOCs (particularly within the forest canopy), we conclude that factors other than entrainment are responsible for the production rate-concentration discrepancy between HNO 3 and ANs. It is also possible that the particulate phase NO − 3 as measured by the aerosol mass spectrometer (AMS) includes a contribution from particle phase ANs (e.g. Farmer et al., 2010;Rollins et al., 2010a). For 77 % of the one minute data for which there are both gas phase HNO 3 and ANs measurements, the concentration of ANs is less than the concentration of gas phase HNO 3 . Therefore, the possible contribution from ANs to the AMS NO − 3 signal does not affect our conclusions that HNO 3 is generally present in higher concentrations than ANs. 
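The upper-limit entrainment estimate above amounts to a simple scaling of the concentration jump across the boundary-layer top by the relative growth rate of the layer; a sketch using the numbers quoted in the text:

```python
# Upper-limit dilution of boundary-layer concentrations by entrainment:
# for a boundary layer of height h growing at dh/dt, the dilution rate of
# a species is approximately (dh/dt / h) * (concentration jump across the
# boundary-layer top). Values follow the text's upper-limit assumptions.
h = 1.5e5            # boundary layer height, cm (1.5 km)
dh_dt = 10.0         # assumed growth rate, cm s-1
jump_ans = 1.0e9     # ANs difference across the BL top, molecules cm-3
jump_hno3 = 7.0e8    # HNO3 (gas + particle) difference, molecules cm-3

dilution_ans = (dh_dt / h) * jump_ans     # molecules cm-3 s-1
dilution_hno3 = (dh_dt / h) * jump_hno3   # molecules cm-3 s-1
```

With these inputs the dilution of ANs (about 6.7 × 10 4 molecules cm −3 s −1 ) is modestly faster than that of HNO 3 , consistent with the slightly larger concentration jump observed for ANs.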
In the remainder of the manuscript HNO 3 will refer to the sum of gas phase HNO 3 and particulate NO − 3 unless stated otherwise. We conclude that the larger production rate yet smaller concentration of ANs than of HNO 3 implies a shorter lifetime of ANs than of HNO 3 . We note that the lifetime of ANs represents the loss of the nitrate functionality and thus will be longer than the average lifetime of individual nitrates because oxidation of some nitrates results in products that are more highly functionalized ANs.
Lifetime of HNO 3
The lifetime of HNO 3 in the boundary layer is primarily determined by deposition that, for gas phase HNO 3 , is generally assumed to occur with unit efficiency at a mass transfer rate set by turbulence. Assuming an approximate boundary layer height of ∼ 2 km (we observed boundary layer heights that ranged from 1.5 km to 2.6 km) and a deposition velocity of 4 cm s −1 , we calculate a lifetime of ∼ 14 h (loss rate of 2 × 10 −5 s −1 ) for HNO 3 for midday conditions. The deposition velocity of HNO 3 over forests has been reported to range from 2 cm s −1 to 10 cm s −1 (Horii et al., 2005 and references therein), with a strong variation associated with time of day and season. Given the uncertainty and time of day dependence also associated with the boundary layer height, we use this lifetime as a guide for thinking about the daytime lifetime of ANs, which our measurements indicate is shorter than that of HNO 3 , and do not focus on the exact number. The depositional loss of aerosol phase NO − 3 is generally on the order of days, however, due to its low contribution to total HNO 3 (Fig. 2), we consider only the gas-phase loss. Other losses, photolysis and oxidation by OH, are quite slow with median lifetimes of several weeks.
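The ∼ 14 h lifetime quoted above follows directly from τ = h / v d ; a minimal check using the stated values:

```python
# Depositional lifetime of HNO3: tau = boundary layer height / deposition
# velocity, with the values from the text (h ~ 2 km, v_d = 4 cm s-1).
h = 2000.0                            # boundary layer height, m
v_d = 0.04                            # HNO3 deposition velocity, m s-1
loss_rate = v_d / h                   # s-1 (the 2e-5 s-1 quoted in the text)
tau_hours = 1.0 / loss_rate / 3600.0  # ~14 h
```

Because the reported deposition velocities span 2-10 cm s −1 and the boundary-layer height varies with time of day, this number should be read as an order-of-magnitude guide, as the text notes.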
Lifetime of ANs
Using the ARCTAS data we are unable to constrain the exact ANs lifetime since to do so would require knowledge of the photochemical age of the air mass, the history of ANs production (which is likely to have significant vertical gradients), and the exact chemical speciation of the ANs. However, with the constraint imposed by the HNO 3 data and with some reasonable assumptions we can identify the most likely ANs loss processes.
Deposition
Deposition is likely a significant term in the ANs budget; however, the deposition velocity of ANs will be less than that of HNO 3 . The measured Henry's law coefficients of some of the more soluble individual hydroxy nitrates (∼ 10 3 -10 5 M atm −1 , Shepson et al., 1996; Treves et al., 2000) are orders of magnitude lower than that of HNO 3 (1 × 10 14 M atm −1 at pH ∼ 6.5, Seinfeld and Pandis, 2006). Still, these measured Henry's law coefficients of hydroxy nitrates indicate that wet deposition is a significant loss process, and a recent study indicates that foliar uptake of organic nitrates is possible (Lockwood et al., 2008). The only direct simultaneous measurements of ANs and HNO 3 deposition are those of Farmer and Cohen (2008), who report a slower deposition velocity for ANs than the 3.4 cm s −1 they observe for HNO 3 above a ponderosa pine forest. Similar results have been obtained more recently at the same forest (K.-E. Min, personal communication, 2012). Although the exact magnitude of the depositional loss likely depends on the specific composition of ANs, as well as the partitioning between gas and aerosol, we assume that a similar result exists for the boreal forest since recent measurements of speciated organic nitrates using chemical ionization mass spectrometry at the ponderosa pine forest (Beaver et al., 2012) indicate a similar composition of ANs as assumed here from the instantaneous production rate. Therefore, although the deposition of ANs is important, it is slower than the deposition of HNO 3 , thus implying the existence of other sinks of ANs. In other words, chemistry must be an important sink of ANs.
Photolysis
The OH oxidation of both isoprene and monoterpenes produces hydroxy nitrates as first generation products. These molecules account for at least 89 % of the instantaneous production rate of ANs (Fig. 4) for the conditions considered here. Although there are no direct measurements of the photolysis rates of these specific molecules, by analogy to other compounds we estimate that photolysis is a negligible sink for them. Roberts and Fajer (1989) report that the cross section of nitrooxy ethanol is approximately a factor of three smaller than that of methyl nitrate. Similarly, photolysis rates of alkyl nitrates are on the order of several days (e.g. Roberts and Fajer, 1989; Talukdar et al., 1997) and are thus too slow to be important. In contrast, α-nitrooxy ketones have been shown to have a cross section approximately five times larger than alkyl nitrates (Roberts and Fajer, 1989; Barnes et al., 1993). Our calculations suggest these are too small a fraction of the total to affect the overall lifetime. To estimate an upper limit, we use the fastest reported photolysis rate from Suarez-Bertoa et al. (2012), which is for 3-methyl-3-nitrooxy-2-butanone. This rate was calculated assuming solar conditions appropriate for 1 July at noon at 40 • N. To achieve a rate appropriate for the ARCTAS conditions we use the median rates of methyl and ethyl nitrate photolysis measured during ARCTAS and scale these to the rate of 3-methyl-3-nitrooxy-2-butanone using the measurements of Roberts and Fajer (1989) and Suarez-Bertoa et al. (2012). We take the average of the rate calculated from methyl nitrate and from ethyl nitrate and assume that 9 % of the nitrates (the methyl vinyl ketone contribution in Fig. 4) are α-nitrooxy ketones. This results in an overall photolysis rate for ANs of 2.5 × 10 −6 s −1 (lifetime of ∼ 110 h), a rate that even when combined with deposition is too slow to account for the inferred ANs loss.
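The structure of this upper-limit estimate can be summarized numerically; the j value and scaling factor below are illustrative placeholders (the ARCTAS median photolysis rates are not reproduced here), so the sketch shows only the form of the weighting, not the paper's exact inputs.

```python
# Upper-limit ANs photolysis frequency: weight an alkyl-nitrate-like
# photolysis rate (91 % of ANs) with a faster alpha-nitrooxy ketone rate
# (9 % of ANs), the latter obtained by scaling the alkyl rate with a
# laboratory-derived ratio. j_alkyl and ketone_scale are illustrative
# placeholders, not the ARCTAS values; f_ketone = 0.09 follows the text.
j_alkyl = 1.0e-6       # s-1, assumed small alkyl nitrate photolysis rate
ketone_scale = 18.0    # assumed ketone-to-alkyl photolysis-rate ratio
f_ketone = 0.09        # alpha-nitrooxy ketone fraction of ANs

j_ans = f_ketone * ketone_scale * j_alkyl + (1.0 - f_ketone) * j_alkyl
tau_days = 1.0 / j_ans / 86400.0
```

Even when the faster ketone channel is weighted in, the composite photolysis frequency remains of order 10 −6 s −1 , i.e. a lifetime of days, which is why photolysis is dismissed as a major ANs sink.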
Oxidation
The overall gas-phase chemical removal rate of ANs can be represented as

L( ANs) = Σ i χ i k AN i +OX [OX][AN i ] (6)

where k AN i +OX is the rate constant of that oxidant with the specific nitrate, [OX] represents the concentration of oxidant (OH, O 3 , or NO 3 ), [AN i ] represents the concentration of a specific nitrate, and χ i the fraction of the reaction that results in loss of the nitrate functionality (referred to as NO x recycling). To simplify our calculation, we neglect the possibility that the oxidation of nitrates results in the formation of dinitrates, which would result in a small positive term in Eq. (6). We also ignore oxidation by NO 3 since we only use daytime measurements above the forest canopy. We estimate the composition of ANs as a mixture of the small, long-lived alkyl nitrates measured in the whole air samples (which account for a median of 30 % of the ANs measured by TD-LIF) and molecules that can be estimated from the instantaneous production rate of ANs (Fig. 4). The small nitrates have very long lifetimes and are a negligible term in the overall loss rate. We use the OH oxidation rates of isoprene-derived nitrates (assuming 60 % δ-hydroxy isoprene nitrates and 40 % β-hydroxy isoprene nitrates) and methyl vinyl ketone-derived nitrates from Paulot et al. (2009). Recently, Lockwood et al. (2010) have measured the ozone oxidation rate of three of the eight possible isoprene nitrate isomers. The three isomers include one δ-hydroxy isomer and two β-hydroxy isomers. We assume that the δ-hydroxy isoprene nitrate rate constant from Lockwood et al. (2010) is representative of all δ-hydroxy isomers. The rate constants for the two β-hydroxy isomers differ by approximately a factor of three and we bound the possible range of reaction rates using these two rates. This results in an ozonolysis rate ranging from 7.4 × 10 −17 cm 3 molecules −1 s −1 to 1.7 × 10 −16 cm 3 molecules −1 s −1 . Results using the branching ratio between the δ-and β-hydroxy nitrate channels as determined by Lockwood et al. (2010) (and updated by Pratt et al., 2012) are included in Appendix B.
We are unaware of any experimental constraints on the oxidation rate of monoterpene nitrates by OH and we estimate an OH oxidation rate constant of 4.8 × 10 −12 cm 3 molecules −1 s −1 based on a weighting of the MCM v3.2 rates for α-pinene and β-pinene nitrates as described in Browne et al. (2013). The monoterpene nitrates in our calculations are based on the production from the observed concentrations of α-and β-pinene, the only two monoterpenes measured aboard the aircraft. These nitrates will predominantly be saturated molecules and thus ozonolysis of these nitrates should be too slow to be important. As discussed in Sect. 3, it is likely that the contribution of monoterpene nitrates is underestimated. It is therefore possible that some of the monoterpene-derived nitrates may be unsaturated molecules. We discuss the impact of this possibility in Appendix B and conclude that since the release of NO 2 from these molecules following oxidation is likely low, the effect on the oxidation rate is minimal.

Table 2. Median oxidation rates calculated using the assumptions from the text. Here k AN+OX refers to the rate of reaction of the class of organic nitrates with either OH or O 3 , β and δ refer to the NO x recycling following reaction with OH or O 3 , respectively, and (1 − F RO 2 +HO 2 ) refers to the fraction of RO 2 reactions that lead to NO x recycling (i.e. the fraction of the time RO 2 reacts with either NO or other RO 2 ). The two numbers listed for the isoprene + O 3 rate reflect the range in possible β-hydroxy isoprene nitrate ozonolysis rates.
The NO x recycling (χ) following OH oxidation depends on the fate of the resulting nitrooxy peroxy radical (R(NO 3 )O 2 ), which may react with NO, HO 2 , or other RO 2 . We assume that reactions with HO 2 generate a more highly functionalized nitrate and that the NO x recycling (the loss of the nitrate functionality) occurs with the same efficiency through both the R(NO 3 )O 2 + NO and R(NO 3 )O 2 + RO 2 channels. We use the same assumptions for the R(NO 3 )O 2 + HO 2 rate as in the calculation of γ in Sect. 3; however, we assume that no isomerization reactions occur. We find that RO 2 + RO 2 reactions account for at most 1 % of the RO 2 reactions. Uncertainties regarding these estimations are discussed in Appendix B. NO x recycling from the RO 2 + NO reaction has been constrained by laboratory experiments to be ∼ 55 % for isoprene nitrates and 100 % for MVK nitrates (Paulot et al., 2009). We are unaware of any measurements of NO x recycling from monoterpene nitrates and assume a value of 100 % as an upper limit. Although the molecular structure of monoterpene nitrates implies that the NO x recycling is likely much lower than 100 %, the contribution (as calculated below) from monoterpene nitrates to NO x recycling is negligible, making a more accurate estimate unnecessary. NO x recycling following ozonolysis of unsaturated nitrates (isoprene nitrates) depends on the initial branching of the ozonide to the two possible pairs of a carbonyl molecule and an energy-rich Criegee biradical and the subsequent fate of the Criegee biradical (stabilization or decomposition). To our knowledge, no experimental constraints on this process exist for any unsaturated organic nitrate.
The MCM v3.2 assumes equal branching between the two possible carbonyl/Criegee biradical pairs; we calculate NO x recycling (40 %) using the MCM v3.2 products of the ozonolysis of isoprene nitrates, the assumption that a stabilized Criegee biradical reacts only with water, and the relative abundances of the different isoprene nitrate isomers from Paulot et al. (2009). Our calculation of the ANs loss rate can be summarized by expanding Eq. (6) to

L( ANs) = Σ i k AN i +OH [OH][AN i ] β i (1 − F RO 2 +HO 2 ) + Σ i k AN i +O 3 [O 3 ][AN i ] δ i (7)

Here, β represents the fraction of NO x recycled following the reaction of the peroxy radical with RO 2 or NO, F RO 2 +HO 2 (Eq. 8) represents the fraction of the time that the peroxy radical reacts with HO 2 (and thus does not recycle NO x ):

F RO 2 +HO 2 = k RO 2 +HO 2 [HO 2 ] / (k RO 2 +NO [NO] + k RO 2 +HO 2 [HO 2 ] + k RO 2 +RO 2 [RO 2 ]) (8)

and δ represents the NO x recycling from ozonolysis. Uncertainties regarding this calculation are described in Appendix B. Using the assumptions above, we calculate a chemical ANs lifetime of ∼ 9-18 h (Table 2), which ranges from slightly shorter to slightly longer than our estimated HNO 3 lifetime (∼ 14 h). In combination with deposition (∼ 17.5 h for a 2 km boundary layer), a detailed representation of oxidative ANs loss results in a calculated ANs lifetime in the range of the assumed lifetime of HNO 3 . In these calculations, the majority of ANs loss occurs via isoprene nitrate ozonolysis, which has recently been reported to be much faster than previously assumed (Lockwood et al., 2010). Additional measurements of this rate and the products are important to constraining our understanding of ANs and their role in the NO x budget.
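Since the individual sinks are first order, the combined lifetime follows from adding loss frequencies; a sketch using the round numbers quoted above (the 9-18 h chemical lifetime and the ∼ 17.5 h depositional lifetime):

```python
# Parallel first-order losses add as frequencies:
# 1/tau_total = 1/tau_chem + 1/tau_dep.
# The chemical (9-18 h) and depositional (~17.5 h) lifetimes are the
# values quoted in the text for ANs.
HOURS = 3600.0
for tau_chem_h in (9.0, 18.0):
    k_total = 1.0 / (tau_chem_h * HOURS) + 1.0 / (17.5 * HOURS)
    tau_total_h = 1.0 / k_total / HOURS
    print(f"tau_chem = {tau_chem_h} h -> total ANs lifetime = {tau_total_h:.1f} h")
```

Combining the quoted sinks gives a total ANs lifetime of roughly 6-9 h for these inputs, i.e. somewhat shorter than either sink alone, as expected for parallel first-order losses.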
Hydrolysis of ANs
Although we calculate an ANs loss rate due to oxidation and deposition that is similar to the assumed loss rate of HNO 3 , the recent suggestion that organic nitrates may undergo hydrolysis in aerosols to produce HNO 3 as a product (Sato, 2008; Darer et al., 2011; Hu et al., 2011) is also a viable hypothesis to explain the measurements. Evidence for organic nitrate losses in ambient (Day et al., 2010) and chamber-generated particles (Liu et al., 2012) analyzed with IR spectroscopy is consistent with this mechanism. This chemistry results in the depletion of ANs and an enhancement in HNO 3 ; both effects would contribute to the ratios of ANs to HNO 3 production and concentration that we report here. Bulk solution studies of hydrolysis of organic nitrates indicate that primary and secondary nitrates are stable at atmospherically relevant pH, but that the lifetime of tertiary hydroxy organic nitrates is surprisingly short (0.019-0.67 h), even in neutral solutions (Darer et al., 2011; Hu et al., 2011). Since these are bulk solution studies, there are some difficulties associated with extending the rates to aerosol processes. Namely, the question arises as to whether the nitrates are present in the organic or aqueous phase of the aerosol and whether the availability of liquid water is sufficient for the reaction. Some of these issues have been recently discussed by Liu et al. (2012) who, using a smog chamber without seed aerosol, constrained the hydrolysis of particulate organic nitrates derived from the photooxidation of 1,2,4-trimethylbenzene. Using their measurements of the organic aerosol composition, they calculated a lifetime of ∼ 6 h for particulate organic nitrates when the relative humidity was greater than 20 %.
Since the vapor pressures of first generation isoprene nitrates are generally too high to partition into aerosol, we begin the estimation of the hydrolysis rate by assuming that only monoterpene nitrates are present in organic aerosol. Although Henry's law coefficients of small (≤ 5 carbons) hydroxy nitrates have been measured to be quite large, approximately ∼ 10 3 -10 5 M atm −1 (Shepson et al., 1996; Treves et al., 2000), it is reasonable to assume that, as a ten-carbon compound, a monoterpene nitrate may have a lower Henry's law coefficient. We therefore assume that these nitrates partition only into organic aerosol and that the organic aerosol contains sufficient liquid water for this reaction to occur (median RH of 63 % and minimum of 34 %).
We use absorptive partitioning theory to determine the fraction of the monoterpene nitrate in the particle phase (Pankow, 1994; Donahue et al., 2006):

C * i = C g i C OA / C a i = MW i ζ i p i 10 6 / (760 R T ) (9)

Here C * i represents the effective saturation concentration (µg m −3 ) of the organic nitrate, C a i is the concentration of the organic nitrate in the condensed phase (µg m −3 ), C g i the concentration of the organic nitrate in the gas phase (µg m −3 ), and C OA is the concentration of organic aerosol (µg m −3 ). In the second equality R is the universal gas constant (8.206 × 10 −5 atm m 3 K −1 mol −1 ), T is the temperature (K), MW i is the molecular weight of the organic nitrate (assumed here to be a hydroxy monoterpene nitrate, 215 g mol −1 ), ζ i is the molality based activity coefficient (assumed to be 1), p i is the vapor pressure of the organic nitrate (Torr), and 760 and 10 6 are unit conversion factors. We calculate an estimated bound on the partitioning of monoterpene nitrates to the aerosol using vapor pressures of 4 × 10 −6 Torr (C * i = 48 µg m −3 at 286 K, the median temperature during ARCTAS) derived from chamber measurements of nitrate products of the NO 3 + β-pinene reaction and of 5.8 × 10 −7 Torr (C * i = 7 µg m −3 ) from chamber measurements of the NO 3 + limonene reaction (Fry et al., 2011). The organic aerosol loading is from the AMS measurement and can be subdivided into two distinct regimes: one with a median loading of ∼ 1 µg m −3 (at ambient temperature and pressure) and one with a median loading of ∼ 6.6 µg m −3 . The enhanced loading regime (60 % of the data) was associated with higher concentrations of acetone, a known oxidation product of monoterpenes, suggesting that monoterpenes are an important source of SOA. This is consistent with measurements in southern Ontario reporting high concentrations of biogenic SOA (Slowik et al., 2010).
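The two C * values quoted above follow directly from the absorptive partitioning expression described in the text; the sketch below reproduces them and evaluates the resulting particle-phase fraction (the C OA value used is the enhanced-loading median from the text).

```python
# Effective saturation concentration C* (ug m-3) from absorptive
# partitioning theory, and the particle-phase fraction
# F_aero = C_OA / (C_OA + C*). Constants follow the text:
# MW = 215 g mol-1, zeta = 1, T = 286 K (ARCTAS median temperature).
R = 8.206e-5             # atm m3 K-1 mol-1
MW, ZETA, T = 215.0, 1.0, 286.0

def c_star(p_torr):
    """C* in ug m-3 for a vapor pressure given in Torr."""
    return MW * ZETA * (p_torr / 760.0) * 1.0e6 / (R * T)

def f_aero(cstar, c_oa):
    """Particle-phase fraction for organic aerosol loading c_oa (ug m-3)."""
    return c_oa / (c_oa + cstar)

cs_hi = c_star(4.0e-6)     # beta-pinene-derived estimate -> ~48 ug m-3
cs_lo = c_star(5.8e-7)     # limonene-derived estimate    -> ~7 ug m-3
frac = f_aero(cs_lo, 6.6)  # enhanced-loading regime, C_OA = 6.6 ug m-3
```

Even with the lower (more condensable) vapor-pressure estimate, C * is comparable to the enhanced organic aerosol loading, so only about half of the monoterpene nitrate resides in the particle phase; at the ∼ 1 µg m −3 background loading the particle fraction is far smaller.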
The concentrations of the biogenic species (α-pinene, β-pinene, isoprene, MVK, and MACR) were all higher in the regime of enhanced organic aerosol loading than in the lower loading regime. The isoprene oxidation products showed larger enhancements (e.g. 181 %, equivalent to 278 pptv, for MVK) than did isoprene (18 %, 53 pptv). The concentration enhancement of acetone (117 %, 1.23 ppbv) was also larger than that of the monoterpenes (105 %, 122 pptv); however, the long lifetime and multiple sources of acetone make a direct attribution to monoterpene oxidation impossible. Nevertheless, it is clear that the enhanced loading regime represents a larger biogenic influence and is more aged than the lower loading regime.
The fraction of the monoterpene nitrate in the aerosol (F_aero) is calculated using Eq. (10):

F_aero = (1 + C*_i / C_OA)^−1
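The numbers quoted in the surrounding text follow directly from these two relations. The sketch below is our own illustration (variable names are not from the paper); it reproduces C* for the two chamber-derived vapor pressures and the resulting particle-phase fractions in both aerosol loading regimes.

```python
# Effective saturation concentration C* (ug m^-3) from absorptive
# partitioning, and the particle-phase fraction F_aero (Eq. 10).
R = 8.206e-5   # universal gas constant, atm m^3 K^-1 mol^-1
MW = 215.0     # hydroxy monoterpene nitrate molecular weight, g mol^-1
ZETA = 1.0     # molality-based activity coefficient (assumed unity)
T = 286.0      # median ARCTAS temperature, K

def c_star(p_torr, temp=T):
    """C* = 1e6 * MW * zeta * p / (760 * R * T); 760 and 1e6 convert units."""
    return 1e6 * MW * ZETA * p_torr / (760.0 * R * temp)

def f_aero(c_star_val, c_oa):
    """Fraction of the nitrate in the particle phase: (1 + C*/C_OA)^-1."""
    return 1.0 / (1.0 + c_star_val / c_oa)

cs_bpinene = c_star(4e-6)     # beta-pinene-derived nitrate -> ~48 ug m^-3
cs_limonene = c_star(5.8e-7)  # limonene-derived nitrate    -> ~7 ug m^-3

# Low (~1 ug m^-3) and enhanced (~6.6 ug m^-3) organic aerosol loadings:
f_low = f_aero(cs_bpinene, 1.0)        # ~0.02
f_high = f_aero(cs_bpinene, 6.6)       # ~0.12
f_high_lim = f_aero(cs_limonene, 6.6)  # ~0.49
```

For the enhanced loading regime this gives roughly 12 % (C* = 48 µg m −3 ) to 49 % (C* = 7 µg m −3 ) of a monoterpene nitrate in the particle phase, consistent with the bounds used in Table 3.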
We calculate the loss rate of ΣANs through hydrolysis (k_hyd-loss) using Eq. (11):

k_hyd-loss = F_MT · F_aero · F_tertiary · k_hyd

where F_MT is the fraction of ΣANs derived from monoterpene nitrates, F_tertiary represents the fraction that is tertiary nitrate, and k_hyd represents the hydrolysis rate constant. We set F_tertiary at 75 %, midway between the 63 % for α-pinene nitrates and 92 % for β-pinene nitrates from MCM v3.2. We note that the fraction of ΣANs predicted to be derived from monoterpenes based on the instantaneous production rate changes insignificantly between the low and enhanced loadings and we use the value from Fig. 4. However, in the low loading regime the small alkyl nitrates represent a larger fraction of ΣANs (61 %) than in the enhanced loading regime (23 %). Thus, the absolute fraction of ΣANs from monoterpene nitrates is higher in the enhanced loading regime.

Table 3. Median calculated loss rate of ΣANs due to hydrolysis in the particle phase assuming that only monoterpene nitrates may partition into the aerosol and hydrolyze. We consider cases that span different vapor pressures, hydrolysis rates, and organic aerosol loadings. Here, C* represents the effective saturation concentration, τ_hyd is the lifetime to hydrolysis for a tertiary nitrate in the particle phase, k_hyd-loss is the calculated loss rate of ΣANs via hydrolysis (see text for details), and the last column is the median of the ratio of this HNO 3 source to the source from the reaction of OH with NO 2 . After correction for the small alkyl nitrates, monoterpenes accounted for ∼ 10 % (∼ 19 %) (median value) of the ΣANs in the low (high) aerosol loading periods. We assume that only a fraction (75 %) of the monoterpene nitrates undergo hydrolysis and thus the fraction of the ΣANs that are in the particle phase and undergoing hydrolysis is 2-7 % for the high loadings and < 1 % for the low loadings.
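The 2-7 % (enhanced loading) and < 1 % (low loading) hydrolyzing fractions quoted above can be checked as the product of three fractions. This is our own sketch of the arithmetic (notation ours); the inputs are the values stated in the text.

```python
# Fraction of total ANs that is particle-phase tertiary monoterpene
# nitrate, i.e. the pool actually subject to hydrolysis.
F_TERTIARY = 0.75  # assumed tertiary fraction (between 63% and 92%, MCM v3.2)

def hydrolyzing_fraction(f_mt, f_aero):
    """f_mt: monoterpene share of ANs; f_aero: particle-phase fraction."""
    return f_mt * f_aero * F_TERTIARY

# Enhanced-loading regime: monoterpenes ~19% of ANs; F_aero ~0.12
# (C* = 48 ug m^-3) or ~0.49 (C* = 7 ug m^-3) at C_OA = 6.6 ug m^-3.
frac_low_cstar = hydrolyzing_fraction(0.19, 0.49)   # ~7 %
frac_high_cstar = hydrolyzing_fraction(0.19, 0.12)  # ~2 %

# Low-loading regime: ~10% of ANs from monoterpenes, small F_aero.
frac_low_loading = hydrolyzing_fraction(0.10, 0.13)  # < 1 %
```

Multiplying this fraction by k_hyd = 1/τ_hyd then gives the k_hyd-loss values tabulated in Table 3.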
Based on the work by Elrod and co-workers (Hu et al., 2011) showing an order of magnitude variation in the tertiary nitrate hydrolysis lifetime, it appears that the identity of the organic nitrate influences the hydrolysis rate. Although these bulk solution rates may not be strictly applicable to aerosol processes, it is also likely that the lifetime reported by Liu et al. (2012) for 1,2,4-trimethylbenzene-derived organic nitrates may not apply to biogenic systems. Therefore, we calculate the overall ΣANs hydrolysis rate (k_hyd-loss) for three different combinations of hydrolysis rates (k_hyd) and C* values as shown in Table 3.
In the enhanced loading regime these rates range from 2 % to 20 % of the oxidative loss rate (assuming the faster ozonolysis rate). It should be emphasized that the hydrolysis loss rate calculated here reflects the hydrolysis loss rate averaged over all the individual organic nitrates; in other words, the loss rate of an individual nitrate might be faster or slower than this rate. In fact, the rate calculated here is the result of only ∼ 2 % (C* = 48 µg m −3 ) or ∼ 7 % (C* = 7 µg m −3 ) of the ΣANs undergoing hydrolysis in the enhanced loading regime and of < 1 % of the ΣANs (regardless of C* value) undergoing hydrolysis in the low loading regime. Any changes to this fraction will result in proportional changes to the overall hydrolysis rate. Consequently, due to the chemical complexity of this process, our range of rates should not be taken as upper and lower estimates of the impact of this channel. Rather, this range should be interpreted as evidence that the hydrolysis reaction may represent an important, previously unaccounted for ΣANs loss process as well as a potentially important source of HNO 3 . This loss process is important in that, unlike the oxidative pathway, hydrolysis represents a sink of ΣANs that removes NO x from the atmosphere.
Production of HNO 3
In addition to being a sink of ΣANs, the hydrolysis reaction may also be an important source of HNO 3 . As shown in Table 3, the ratio of this HNO 3 source to the known source from the reaction of OH with NO 2 ranges from a median of 0.13 to greater than 1 in the enhanced loading regime. We believe that this upper limit is likely incompatible with the HNO 3 budget and is likely the result of extrapolating bulk solution rates to aerosol environments; however, we do find evidence of this HNO 3 source in the variation of the ratio of HNO 3 to NO 2 with NO x . In the boundary layer, when the lifetime of HNO 3 is short, HNO 3 is in photochemical steady-state and the ratio of HNO 3 to NO 2 should be proportional to the OH concentration (Day et al., 2008). We estimate the lifetime of HNO 3 to be ∼ 14 h, a value short enough that HNO 3 should be in diurnal steady-state. When there is a substantial concentration of ΣANs, the ratio of HNO 3 to NO 2 increases as NO x decreases. OH, however, exhibits the opposite trend and decreases (Fig. 5a). For conditions of low ΣANs, the ratio of HNO 3 to NO 2 behaves more like OH. It is unlikely that variations in photochemical age are the dominant factor explaining the observed behavior of the HNO 3 to NO 2 ratio (Fig. 5a). The largest deviation from the expected behavior of the HNO 3 to NO 2 ratio as a function of NO x occurs at the lowest NO x concentrations, air masses which are likely to be more aged than those with higher NO x concentrations. However, the deviation only occurs in those air masses with a substantial concentration of both ΣANs (Fig. 5a) and monoterpenes (not shown), and thus likely higher aerosol phase organic nitrates. Other than NO x concentration, other available chemical tracers for defining age with time zero at biogenic emissions were found to be unsuitable because of their direct correlations with ΣANs or because their sources were not unique.
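The steady-state argument here can be made concrete: if production by OH + NO 2 balances loss with lifetime τ, then [HNO 3 ]/[NO 2 ] = k_OH+NO2 · [OH] · τ, so the ratio tracks OH. The sketch below is ours; the rate constant and OH level are illustrative placeholders, not the ARCTAS medians.

```python
# Photochemical steady state for HNO3:
#   P = k_OH+NO2 * [OH] * [NO2],  L = [HNO3] / tau
# Setting P = L gives [HNO3]/[NO2] = k * [OH] * tau.
K_OH_NO2 = 1.0e-11      # assumed effective rate constant, cm^3 molecule^-1 s^-1
TAU_HNO3 = 14 * 3600.0  # HNO3 lifetime estimated in the text (~14 h), in s

def hno3_to_no2(oh):
    """Steady-state HNO3/NO2 ratio for a given OH concentration."""
    return K_OH_NO2 * oh * TAU_HNO3

ratio = hno3_to_no2(2.0e6)  # illustrative OH of 2e6 molecules cm^-3
# Any observed HNO3/NO2 in excess of this steady-state value points to
# an additional HNO3 source, e.g. particle-phase hydrolysis of nitrates.
```

With these inputs the steady-state ratio is of order one; a measured ratio that rises while OH falls (as in Fig. 5a) is therefore the signature of an extra HNO 3 source.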
This trend of increasing values as NO x decreases is also the same trend as the ratio of P(ΣANs) to P(HNO 3 ) (where P(HNO 3 ) = k OH+NO 2 [OH][NO 2 ]), as shown in Fig. 5b using results from the steady-state model in Browne and Cohen (2012). The similarity in magnitude between the HNO 3 to NO 2 ratio and the P(ΣANs) to P(HNO 3 ) ratio is expected if the hydrolysis of ΣANs constitutes the major loss process of ΣANs (i.e. that the ratio of hydrolysis to oxidation may be higher than our calculations here suggest). These results suggest that the ARCTAS HNO 3 concentration is consistent with a source of HNO 3 other than the reaction of OH with NO 2 and that this source is likely the hydrolysis of ΣANs.

Atmos. Chem. Phys., 13, 4543-4562

Fig. 5. (A) On the left y-axis is the ratio of HNO 3 (gas+particle) to NO 2 versus NO x colored by ΣANs concentration. The solid lines are binned median values of points corresponding to ΣANs concentrations ≥ 100 pptv (55 % of the data) for the ratio of HNO 3 to NO 2 (black line, left y-axis) and for OH concentration (grey line, right y-axis). If HNO 3 is in steady-state, the ratio of HNO 3 to NO 2 should be equivalent to OH. The difference in these two lines as a function of NO x indicates the possibility of additional HNO 3 sources. (B) Comparison between the ARCTAS measurements and predictions from the steady-state model described in Browne and Cohen (2012). The solid lines are the same as in Fig. 5a with the shaded grey area representing the interquartile range of the OH concentration. The dashed and dotted black lines represent the steady-state model predictions of the ratio of ΣANs production to HNO 3 production for branching ratios of 10 % and 5 % respectively (left y-axis). The dashed grey line represents the steady-state model prediction of the OH concentration (right y-axis).
There are similar hints of this additional HNO 3 source in a reinterpretation of data from previous experiments. Previous measurements of HNO 3 have found evidence for a temperature dependent OH source (Day et al., 2008) and of an elevated within canopy OH concentration (Farmer and Cohen, 2008) in a ponderosa pine forest. However, these results are also consistent with a source of HNO 3 from rapid ANs hydrolysis. For instance, the temperature dependent OH source may result from an increase in biogenic VOC emissions with temperature resulting in a larger ANs production and consequently a larger HNO 3 source. Likewise, the rapid hydrolysis of ANs with low vapor pressures formed from sesquiterpenes and monoterpenes in the forest canopy would result in a within canopy source of HNO 3 . This reinterpretation of the HNO 3 data as resulting from an additional production pathway (via hydrolysis of ANs) rather than through an elevated concentration of OH is also more consistent with OH measurements made in the same forest a few years later that report a within-canopy OH gradient and temperature dependence smaller than that inferred from the previous studies. However, we note that these studies were conducted in different years and it is possible that the ecosystem and its within-canopy chemistry have changed in between those years.
It is interesting to consider the ultimate fate of the NO − 3 possibly produced by the organic nitrate hydrolysis. In 57 % of the background measurements the molar ratio of sulfate to ammonium (as measured by the AMS) is greater than one-half, indicating that it is unfavorable for NO − 3 to be present in the aerosol and that ΣANs hydrolysis is possibly a source of gas phase HNO 3 . However, this is a simplistic approximation to an extremely complex problem. The thermodynamics of an aerosol that is an organic-inorganic mixture are much more complex (Zuend et al., 2011) than those of purely inorganic aerosols and are subject to uncertainties regarding the composition of the aerosol and the interaction of ions with the various functional groups present on organic species. Further studies on organic nitrate hydrolysis in aerosols are needed to better constrain the atmospheric impacts; however, it appears that the hydrolysis of organic nitrates may contribute (quite significantly) to HNO 3 production.
These results suggest the need for research constraining the possible hydrolysis loss of ANs and the associated HNO 3 production. In particular, we need measurements of how the hydrolysis of organic nitrates from biogenic species differs in aerosol versus bulk solution, the aerosol liquid water content necessary for this reaction, and specific rates for monoterpene nitrates.
Implications
E. C. Browne et al.: Observations of RONO 2 : NO x sinks and HNO 3 sources

As shown in Fig. 3, the calculated ΣANs production for most of the data is similar to the steady-state model results from Browne and Cohen (2012) if we assume a branching ratio somewhere between 5 % and 10 % for ΣANs formation from the entire VOC mixture. For the ARCTAS data we calculate that the biogenic VOCs account for ∼ 53 % of the VOC reactivity with respect to OH (median value, not including CO and CH 4 ). Assuming that the biogenic VOCs are the only sources of ΣANs, with an average branching ratio of 11 % (similar to isoprene), results in an overall branching ratio of ∼ 6 %. This suggests that the NO x lifetime and ozone production efficiency in the boreal forest are similar to those calculated in Browne and Cohen (2012) and that the steady-state model provides a useful framework for understanding the NO x budget under low NO x conditions on the continents.
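The ∼ 6 % figure is a reactivity-weighted average of the class-specific nitrate yields. A minimal sketch of that arithmetic (our own illustration, assuming as in the text that only biogenic VOCs form nitrates):

```python
# Overall ANs branching ratio as an OH-reactivity-weighted average over
# VOC classes (CO and CH4 excluded, as in the text).
classes = [
    # (label, fraction of OH reactivity, nitrate branching ratio)
    ("biogenic", 0.53, 0.11),  # ~53% of reactivity, ~11% yield (isoprene-like)
    ("other",    0.47, 0.00),  # assumed here to form no nitrates
]

alpha_eff = sum(frac * alpha for _, frac, alpha in classes)  # ~0.058, i.e. ~6 %
```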
However, as discussed in Browne and Cohen (2012), the net regional and global impact of ANs on NO x lifetime and ozone production depends on the degree to which ANs serve as a permanent versus temporary NO x sink. Modeling studies have found that different assumptions regarding NO x recycling from isoprene nitrates result in large sensitivities in NO x and O 3 (e.g. von Kuhlmann et al., 2004;Fiore et al., 2005;Horowitz et al., 2007;Wu et al., 2007) and that these uncertainties affect predictions of ozone in a future climate (e.g. Ito et al., 2009;Weaver et al., 2009). The analysis presented here suggests that ANs have a short atmospheric lifetime due to a combination of deposition and chemical loss, but we find the data is ambiguous about the relative fraction of the ANs chemical loss that acts to release NO x or to produce HNO 3 . Furthermore, the exact fate of ANs loss is likely ecosystem dependent; for instance, ANs may have a significantly different impact on the NO x budget in forests dominated by isoprene emissions versus in forests dominated by monoterpene emissions since first generation monoterpene nitrates have lower vapor pressures than first generation isoprene nitrates.
Due to the lumped treatment of ANs in most condensed chemical mechanisms, it is likely that these mechanisms will be unable to reproduce the ARCTAS results, and consequently are misrepresenting the NO x lifetime and ozone production. For instance, some condensed mechanisms instantaneously convert isoprene nitrates to HNO 3 , resulting in zero NO x recycling. The ozonolysis of isoprene nitrates is also ignored in many mechanisms; this is incompatible with our results that the majority of NO x recycling during ARCTAS results from ozonolysis. Lastly, many condensed mechanisms ignore monoterpene nitrates or lump them into a long-lived nitrate. Our results suggest that, at least in the boreal forest, monoterpene nitrates are an important NO x sink and that their particle phase hydrolysis may represent a source of HNO 3 .
Finally, it is interesting to note that since the loss of ANs through hydrolysis depends on the specific isomer of the nitrate, there are interesting implications for the loss of monoterpene nitrates formed from OH versus from NO 3 chemistry. Based on the assumption that tertiary radicals are more stable than primary radicals and thus have a higher nitrate yield, the oxidation of α-or β-pinene and limonene by NO 3 is more likely to result in a primary nitrate and oxidation by OH is more likely to result in a tertiary nitrate. Thus, nitrates formed by OH oxidation may have a shorter atmospheric lifetime than those formed from NO 3 chemistry.
Conclusions
We present the first measurements of ΣANs over the remote boreal forest of Canada and show that ΣANs are present in significant concentrations. Using measurements of VOCs we calculate the instantaneous production rate of ΣANs and find that, as expected for a remote forested environment, biogenic species, specifically monoterpenes and isoprene, dominate the ΣANs production. If the observations of α- and β-pinene underestimate the total source of monoterpenes, then monoterpenes play an even larger role than the 25 % we calculate. We also find that the instantaneous production rate of ΣANs is, in general, faster than that of gas phase HNO 3 production, despite a lower overall concentration, implying that ΣANs have a shorter lifetime than HNO 3 . We estimate that depositional loss of ΣANs is important and that the combined loss to reaction with O 3 and OH occurs at a rate similar to the assumed deposition rate of HNO 3 . Oxidation of isoprene nitrates, in particular by O 3 , is primarily responsible for the rapid loss rate. We emphasize that this oxidative loss rate represents the loss of the nitrate functionality and that oxidative reactions of individual nitrates are faster, since some of their products are more highly functionalized nitrates.
We also provide evidence which suggests that particulate organic nitrates undergo rapid hydrolysis, contributing to HNO 3 production. Although we are unable to constrain the magnitude of this source precisely, all reasonable assumptions imply that it is significant both as a loss of ΣANs and as a source of HNO 3 . Furthermore, there is evidence of its existence in the variation of the HNO 3 to NO 2 ratio as a function of NO x . We conclude that the rapid loss of ΣANs required to explain these observations is a balance between processes which recycle NO x (oxidation) and those which remove it (hydrolysis and deposition).
Appendix A

A1 Calculation of ΣANs production
In the calculation of the ANs production rate we use the VOCs, branching ratios, and OH reaction rates listed in Table A1. We do not attempt to estimate the concentration of any unmeasured VOCs or to fill in any missing data.
A2 Uncertainties in the calculation of ANs production
The calculated production of ΣANs is sensitive to the assumptions about reaction rates, organic nitrate branching ratios, the assumption that the VOC measurements are representative of the entire VOC mix, and possible errors in the measurements. We have investigated several possibilities (outlined in Table A2) and find that our conclusion is robust. In Table A2 we list the ratio of the instantaneous production of ΣANs to HNO 3 for ten different possibilities (including our base case that was presented in the text). In the unique RO 2 + RO 2 rate case we take the rates of RO 2 + RO 2 reactions from the MCM v3.2 RO 2 + CH 3 O 2 rates for methyl vinyl ketone, methacrolein, isoprene, and monoterpenes. We weight the methyl vinyl ketone and isoprene rates by the initial branching of the different peroxy radicals. The monoterpene rate is calculated assuming an even split between α- and β-pinene and weighting the different peroxy radicals. No significant difference is observed using these rates. If we increase the isomerization rate of the isoprene peroxy radical by an order of magnitude (Isomerization × 10 case), we also observe no significant difference. Recent measurements of the isoprene nitrate branching ratio range from 7 % to 12 % (Paulot et al., 2009; Lockwood et al., 2010). In our base calculation we use the branching ratio of 11.7 % reported by Paulot et al. (2009). In the 7 % IN case below, we use the yield of 7 % measured by Lockwood et al. (2010) and find that although the contribution from isoprene decreases, P(ΣANs) is still larger than P(HNO 3 ).
It is also likely that there are VOCs contributing to organic nitrate production that were not measured during ARCTAS, and thus the base calculation is biased low. For instance, only the monoterpenes α-pinene and β-pinene were measured. Measurements from the boreal forest in Finland indicate substantial contributions from other monoterpenes as well as contributions from sesquiterpenes (Spirig et al., 2004; Räisänen et al., 2009; Hakola et al., 2012). As expected, if we double the production from monoterpenes (2 × Monoterpenes case) to account for unmeasured species, we see an increase in the ratio of P(ΣANs) to P(HNO 3 ).

Table A2. The median value of the P(ΣANs) to P(HNO 3 ) ratio and the speciation of P(ΣANs) for different assumptions regarding RO 2 reaction rates, OH and HO 2 concentrations, and VOC concentrations, as described in Appendix A.

In our base calculation we use the LIF OH measurement. It has recently been shown that this measurement may have an interference in environments with high biogenic emissions. This should have a minor effect on our calculation since any change in OH will affect both P(ΣANs) and P(HNO 3 ). Nevertheless, we test this possibility using the OH measurement from the chemical ionization mass spectrometry instrument (the HO x CIMS OH case). These two different measurements agreed well during the campaign. We see a slight decrease in the ratio of P(ΣANs) to P(HNO 3 ); however, this can be attributed to the discrepancy in data coverage between the two instruments: if we restrict the LIF OH to the same points with CIMS OH coverage, we calculate the same median ratio.
Recently it has been reported that some LIF HO 2 measurements may suffer from a positive interference from the conversion of RO 2 to HO 2 in the instrument. This should increase our production of ΣANs relative to HNO 3 due to an increase in the fraction of RO 2 that reacts with NO. If we decrease the HO 2 by 40 % ([HO 2 ] × 0.6 case), we find this to be true. Using the HO x CIMS HO 2 measurement also results in an insignificant change to the median P(ΣANs) to P(HNO 3 ).
In low NO x environments it is possible that RO 2 is present in higher concentrations than HO 2 which would decrease our ANs production. However, we find that increasing the RO 2 concentration by an order of magnitude (RO 2 × 10 case) has a negligible effect on our calculation. Even if this increase is coupled with a doubling of the RO 2 + RO 2 rate (not shown), there is no significant effect. Furthermore, the HO x CIMS measurements of RO 2 do not show any evidence that the RO 2 to HO 2 ratio has any significant increase at low NO x .
Lastly, we investigate the sensitivity of the calculation to the NO concentration. Using the NO concentration calculated assuming a steady-state between NO and NO 2 and the measured concentrations of NO 2 , HO 2 , and O 3 , the NO 2 photolysis rate, and assuming that RO 2 is equal to HO 2 , we find an increase in the median of the P(ΣANs) to P(HNO 3 ) ratio (Steady-state NO case). Overall, we conclude that although there is uncertainty in the absolute numbers, the production of ΣANs is, on average, faster than the production of HNO 3 .
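The Steady-state NO case rests on the photostationary-state balance j_NO2 [NO 2 ] = [NO](k_NO+O3 [O 3 ] + k_NO+HO2 [HO 2 ] + k_NO+RO2 [RO 2 ]). A sketch of that balance follows; the rate expressions and all mixing ratios here are illustrative placeholders chosen by us, not the ARCTAS measurements.

```python
import math

# Photostationary-state NO, with [RO2] assumed equal to [HO2] and
# k_NO+RO2 taken equal to k_NO+HO2 (as in the text's assumption).
T = 286.0                                    # median ARCTAS temperature, K
NAIR = 101325.0 / (1.380649e-23 * T) / 1e6   # air density, molecules cm^-3

K_NO_O3 = 3.0e-12 * math.exp(-1500.0 / T)    # NO + O3 (approximate Arrhenius form)
K_NO_HO2 = 8.0e-12                           # NO + HO2 (approximate)

def steady_state_no(no2, o3, ho2, j_no2):
    """Concentrations in molecules cm^-3; j_NO2 in s^-1. RO2 = HO2 assumed."""
    return j_no2 * no2 / (K_NO_O3 * o3 + 2.0 * K_NO_HO2 * ho2)

# Illustrative inputs: 100 pptv NO2, 30 ppbv O3, 10 pptv HO2, j_NO2 = 8e-3 s^-1.
no2 = 100e-12 * NAIR
no = steady_state_no(no2, 30e-9 * NAIR, 10e-12 * NAIR, 8e-3)
ratio_no_no2 = no / no2   # ~0.5 for these inputs
```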
A3 Uncertainty in the calculated ΣANs oxidation rate
The calculated oxidation rate of ANs is sensitive to uncertainties and assumptions including: the assumption that the instantaneous production represents the composition, possible interferences in HO x measurements, reaction rate uncertainties, and assumptions regarding NO x recycling.
Two of the most likely deviations from our assumption that the production in Fig. 4 represents the composition are nitrates produced from unmeasured BVOCs (likely monoterpenes and sesquiterpenes) and the presence of higher generation isoprene and monoterpene nitrates. In order for these nitrates to increase the ΣANs loss rate, their loss rate must, on a per molecule basis, be faster than the isoprene nitrate loss, which implies that these nitrates are unsaturated. In Browne et al. (2013) we estimate the oxidation rates of unsaturated monoterpene nitrates to be 7.29 × 10 −11 cm 3 molecules −1 s −1 for OH oxidation and 1.67 × 10 −16 cm 3 molecules −1 s −1 for ozonolysis, similar to the isoprene nitrate oxidation rates. Thus, if the monoterpene nitrates had a larger NO x recycling than the isoprene nitrates, then they would increase the ΣANs loss. NO x recycling from monoterpene nitrates is difficult to estimate given the number of different monoterpene structures and the variability of emission factors between species. Furthermore, since the ozonolysis of the nitrates will dominate the loss process, NO x recycling through this channel will be most important. To our knowledge, there are no measurement constraints on the NO x recycling from the ozonolysis of any organic nitrate.
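For orientation, these rate constants imply an oxidative lifetime of roughly an hour for such a nitrate. The short sketch below is ours, assuming [OH] = 2 × 10 6 molecules cm −3 and 30 ppbv O 3 , the reference values used elsewhere in this appendix.

```python
# Oxidative lifetime of an unsaturated monoterpene nitrate:
#   tau = 1 / (k_OH * [OH] + k_O3 * [O3])
K_OH = 7.29e-11   # cm^3 molecules^-1 s^-1 (OH oxidation, from the text)
K_O3 = 1.67e-16   # cm^3 molecules^-1 s^-1 (ozonolysis, from the text)

oh = 2.0e6                                         # molecules cm^-3 (assumed)
nair = 101325.0 / (1.380649e-23 * 286.0) / 1e6     # air density at 286 K, 1 atm
o3 = 30e-9 * nair                                  # 30 ppbv O3

tau_s = 1.0 / (K_OH * oh + K_O3 * o3)              # ~3.6e3 s, roughly 1 h
```

Under these conditions OH oxidation and ozonolysis contribute comparably to the loss, consistent with the statement that these rates are similar to those of the isoprene nitrates.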
To estimate the effect of this complex problem we use results from the WRF-Chem model run over the boreal forest of Canada for the ARCTAS time period. This model uses a chemical mechanism with a comprehensive treatment of ΣANs, including 11 isoprene-derived nitrates and two monoterpene-derived nitrates (one unsaturated and one saturated), as described in Browne et al. (2013). Sampled along the flight track, the WRF-Chem model predicts a ΣANs oxidative loss rate of 2.3 × 10 −5 s −1 (median), a number similar to our estimate here, which suggests that these effects have only a small influence on our calculation.
Since the ozonolysis of isoprene nitrates accounts for the majority of the ΣANs loss rate, the possible interferences in the HO x measurements (OH and HO 2 ) and uncertainties in the RO 2 reaction rates have a negligible effect on our calculated loss. Consequently, the uncertainties regarding isoprene nitrate ozonolysis, particularly the yields of the various isoprene nitrate isomers, the ozonolysis rates, and the magnitude of the NO x recycling, are non-negligible. If we use the split between the δ-hydroxy and β-hydroxy nitrates from Lockwood et al. (2010) (with updates from Pratt et al., 2012) (∼ 10 % and ∼ 90 %, respectively) and the distribution of ΣANs production calculated using the isoprene nitrate formation yield from Lockwood et al. (2010) (51 % isoprene, 12 % MVK, 33 % α- and β-pinene), we calculate an overall ΣANs loss rate of 1.4 × 10 −5 s −1 assuming the slower β-hydroxy rate and 3.8 × 10 −5 s −1 if we assume the faster rate. These rates are similar to those in Table 2. We note that we have weighted the ozonolysis rates using the initial production yields of the β-hydroxy and δ-hydroxy nitrates. Given that these nitrates have (potentially) different atmospheric lifetimes (at 2 × 10 6 molecules cm −3 OH and 30 ppbv O 3 , δ-hydroxy nitrates have a lifetime of ∼ 1.2 h and the β-hydroxy nitrates of ∼ 0.97-2.6 h, using the OH rate constants from Paulot et al., 2009), it is likely that the reaction rate of the ΣANs we measure will favor the less reactive nitrates and our calculation may be high. Lastly, in our derivation of the NO x recycling we follow the assumptions of MCM v3.2, which include the assumption of equal branching between the two possible carbonyl/Criegee biradical pairs. However, the exact branching depends on the nature and number of the substituents on the alkene (Calvert et al., 2000).
There is also uncertainty introduced via our assumption that when the nitrooxy peroxy radical formed via OH oxidation reacts with HO 2 the nitrate functionality is preserved. Recent experimental work on the nitrooxy peroxy radicals derived from the reaction of isoprene with NO 3 indicates that the reaction of this peroxy radical with HO 2 likely has a large flux through the channel forming radical products (i.e. the alkoxy radical and OH) (Kwan et al., 2012). If we assume that this channel occurs half of the time, which is within the range estimated by Kwan et al. (2012), we calculate that the oxidation rate increases by 66 % when we assume the slower isoprene nitrate ozonolysis rate and 26 % when the faster rate is assumed.
It is also possible that reaction of the nitrooxy peroxy radical with other RO 2 (in particular, acyl peroxy radicals) may proceed at a faster rate. For instance, the reaction rate of CH 3 C(O)O 2 with CH 3 O 2 at 285 K is approximately two orders of magnitude faster than the self reaction rate of CH 3 O 2 (Atkinson et al., 2006). If we increase the RO 2 rate constant by a factor of 50, an increase which is consistent with assuming that about half the peroxy radicals react with a rate of 2.3 × 10 −11 cm 3 molecules −1 s −1 rather than 2.3 × 10 −13 cm 3 molecules −1 s −1 (i.e. are more like acyl peroxy radicals), we calculate that the RO 2 + RO 2 reaction occurs ∼ 30 % of the time. This increases the oxidation rate to 2.2-3.6×10 −5 s −1 . We note that this likely overestimates the number of peroxy radicals. Furthermore, in our analysis we have assumed that the products of the nitrooxy peroxy radical reaction with other RO 2 are the same as those when it reacts with NO (i.e. we assume that the channel forming RO is dominant). While this channel is likely favored when the reaction is with an acyl peroxy radical, molecular channels which will retain the nitrate will likely be more important for non-acyl peroxy radicals. For instance, Kwan et al. (2012) estimate that only 19-38 % of the RO 2 + RO 2 reactions in their study result in the formation of alkoxy radicals. This decrease in alkoxy radical formation will also decrease the calculated oxidation rate.
Overall, these calculations suggest that ozonolysis of isoprene nitrates is the largest oxidation sink of organic nitrates. Further experimental constraints on the ozonolysis rates and products of the isoprene nitrates are needed to reduce the uncertainty concerning the fraction of NO x that is recycled back to the atmosphere. Additional experiments constraining the products of isoprene-derived nitrooxy peroxy radicals with HO 2 and other peroxy radicals are also needed in order to understand the oxidation of these nitrates under low NO x conditions.