Beyond Ethylene: New Insights Regarding the Role of Alternative Oxidase in the Respiratory Climacteric Climacteric fruits are characterized by a dramatic increase in autocatalytic ethylene production that is accompanied by a spike in respiration at the onset of ripening. The change in the mode of ethylene production from autoinhibitory to autostimulatory is known as the System 1 (S1) to System 2 (S2) transition. Existing physiological models explain the basic and overarching genetic, hormonal, and transcriptional regulatory mechanisms governing the S1 to S2 transition of climacteric fruit. However, the links between ethylene and respiration, the two main factors that characterize the respiratory climacteric, have not been examined in detail at the molecular level. Results of recent studies indicate that the alternative oxidase (AOX) respiratory pathway may play an essential role in mediating cross-talk between ethylene response, carbon metabolism, ATP production, and ROS signaling during climacteric ripening. New genomic, metabolic, and epigenetic information sheds light on the interconnectedness of ripening metabolic pathways, necessitating an expansion of the current, ethylene-centric physiological models. Understanding points at which ripening responses can be manipulated may reveal key, species- and cultivar-specific targets for regulation of ripening, enabling superior strategies for reducing postharvest wastage. INTRODUCTION Ripening of fruit involves a symphony of transcriptionally and hormonally controlled processes that result in the accumulation of sugars, reduction in acidity, development of aroma, and nutritional profiles (McAtee et al., 2013;Cherian et al., 2014). The ripening process has been under continual manipulation both as a result of natural selection for improved seed dispersal as well as human domestication via the selection of desirable organoleptic properties (Giovannoni, 2004;Liu M. et al., 2015). Fleshy fruits fall into one of two broadly defined ripening categories, climacteric and nonclimacteric, based on their respiratory profile as well as the manner in which they produce the phytohormone ethylene (Seymour et al., 2012). Nonclimacteric fruits respire and produce ethylene at basal levels throughout fruit maturation and senescence. This mode of ethylene production is termed System 1 (S1) ethylene production. Nonclimacteric fruits, including cherries, berries, and citrus, are harvested ripe and do not exhibit increasing levels of ethylene production during ripening, although in a few cultivars, ripening may be accelerated through exogenous application of ethylene or ethylene-producing compounds, such as ethrel (Barry et al., 2000;Barry and Giovannoni, 2007;Chen et al., 2018). In contrast, ripening in climacteric fruits, such as apple, pear, banana, papaya, avocado, mango, and tomato, is characterized by a burst of respiration accompanied by a substantial increase in ethylene biosynthesis as fruit transitions from S1 to System 2 (S2) ethylene production (Figure 1; Osorio et al., 2013b;Chen et al., 2018). The synchronization of the hallmark respiratory rise with autocatalytic ethylene production forms the basis for the modern understanding of climacteric ripening (Lelièvre et al., 1997;Cara and Giovannoni, 2008). Because of this distinct ripening physiology, climacteric fruits can be harvested unripe and ripened off the tree or vine (Seymour et al., 2013a;Hiwasa-Tanase and Ezura, 2014). 
Following the respiratory climacteric, ripening proceeds rapidly and irreversibly, which presents additional challenges to the storage and preservation of climacteric fruit after harvest (Jogdand et al., 2017). While the concept of two distinct ripening categories is simple in theory, the reality is far more complex, with certain fruits displaying variable phenotypes (Paul et al., 2012). Interestingly, some cultivars within the same species display differences in ripening profiles; such is the case for peach, plum, melon, and Chinese pear (Yamane et al., 2007;Minas et al., 2015;Saladié et al., 2015;Farcuh et al., 2018). This is clearly exemplified in Japanese plum (Prunus salicina Lindl.), where a nonclimacteric cultivar and an ethylene-responsive, suppressed-climacteric cultivar were both found to be derived from independent bud sport mutations in a single climacteric plum variety (Minas et al., 2015). This discovery suggests that the basis for the distinction between climacteric versus nonclimacteric ripening is highly specific at the genetic level. Furthermore, such systems provide natural models that could facilitate study of the biological basis for climacteric and nonclimacteric ripening (Kim H. -Y. et al., 2015;Minas et al., 2015;Fernandez i Marti et al., 2018). Transcriptional and phytohormonal regulation of ethylene-dependent ripening has been reviewed extensively (Cherian et al., 2014;Karlova et al., 2014;Kumar et al., 2014;Chen et al., 2018). In contrast to climacteric fruit, the regulatory network involved in nonclimacteric ripening has been much less studied. Nevertheless, it is known that abscisic acid (ABA) and polyamines, rather than ethylene, play essential roles in ripening in nonclimacteric fruits (Li et al., 2011;Jia et al., 2016a).
FIGURE 1 | Ripening profiles of five climacteric fruits and one nonclimacteric fruit. Rates of ethylene production and respiration are indicated by the solid and dotted lines, respectively. At the 0-day time point, fruits were unripe but assumed to have reached physiological maturity and 100% ripening competency. Some fruits require cold conditioning in order to achieve ripening competency (e.g., European pear, 15-90 days, 0-10°C) (solid blue arrow). Other fruits can be induced to ripen more quickly through appropriate cold storage (e.g., avocado, 14 days, 5-10°C; dashed blue arrow; Eaks, 1983). Expression and activity of alternative oxidase (AOX) have been implicated in preclimacteric and/or climacteric stages of some fruits (e.g., European pear, papaya, banana, and tomato; Woodward et al., 2009;Oliveira et al., 2015;Hendrickson et al., 2019;Hewitt et al., 2020b). Peak AOX expression, as reported in the cited studies, is indicated by vertical dashed red lines. The transition from white to shaded background represents the transition from S1 to S2 ethylene (preclimacteric to climacteric shift). General respiration and ethylene profiles for each fruit were adapted from the following studies: avocado (Eaks, 1983), banana (McMurchie et al., 1972), European pear (Wang and Mellenthin, 1972), papaya (Fabi et al., 2007), tomato (Herner and Sink, 1973), orange (McMurchie et al., 1972).
Studies in strawberry and tomato indicate that the split between climacteric and nonclimacteric ripening responses lies in the way that S-adenosyl-L-methionine (SAM) is preferentially utilized as a precursor to ethylene or as a substrate for polyamine biosynthesis (Van de Poel et al., 2013;Lasanajak et al., 2014;Guo et al., 2018). Decarboxylation of SAM by decarboxylase (SAMDC) represents the rate-limiting step in polyamine biosynthesis (Handa and Mattoo, 2010;Seymour et al., 2013b;Cherian et al., 2014). Moreover, transgenic expression of a yeast SAMDC in tomato results in preferential shunting of substrate into the polyamine biosynthesis pathway, rather than the ethylene biosynthesis pathway (Lasanajak et al., 2014). In addition to increased flux through the polyamine biosynthesis pathway, overexpression of SAMDC in strawberry leads to spermine and spermidine-mediated increase in expression of positive regulators of ABA biosynthesis and signaling (Guo et al., 2018). Signaling components downstream of ABA receptors are believed to induce changes in the expression of genes associated with pigment development and sugar metabolism (Li et al., 2011). Through the exploration of the underlying genetic factors of ripening of both climacteric and nonclimacteric fruit in model systems, foundations have been laid for evaluation of ripening processes in nonmodel fruits exhibiting deviations from the standard profiles. Not surprisingly, manipulation of environmental factors, genetic factors, and use of chemical inhibitors like 1-methylcyclopropene (1-MCP) to inhibit ripening result in developmental patterns that do not follow the classical model of ethylene response and signaling (Watkins, 2006(Watkins, , 2015Tatsuki et al., 2007;Chiriboga et al., 2013). Mechanisms for blockage and/or bypass of the concerted steps in classical ethylene biosynthesis are beginning to be elucidated as more studies examine how genetic manipulation or stimulation via temperature or chemical application affect ripening (Klee and Giovannoni, 2011;Hewitt et al., 2020a,b). Furthermore, as molecular biology, transcriptomics, and epigenetic analysis tools have rapidly advanced, new insights have been gained into some of the master regulators of ripening acting upstream and/or independently of ethylene (Liu R. et al., 2015;Giovannoni et al., 2017). The alternative oxidase (AOX) respiratory pathway has recently garnered interest as a potential target for ripening manipulation (Hendrickson et al., 2019;Hewitt et al., 2020a,b). This review will explore AOX as a branch point for variation from the classical model of climacteric ripening, which may be affected by physiological or chemical perturbations in metabolism, transcriptional regulatory elements, and epigenetic signatures regulating fruit ripening. Understanding these variations is expected to inform novel strategies to reduce postharvest waste while improving marketability of fruit efficiently. Reexamining the Classical Model for Ethylene-Dependent Ripening Early knowledge of the role of ethylene in the ripening process made components of ethylene biosynthesis and transduction some of the first targets for ripening control in model systems such as tomato (Solanum lycopersicum) (Barry and Giovannoni, 2007;Klee and Giovannoni, 2011;Liu M. et al., 2015;Mattoo and White, 2018). Resulting as a side product of methionine cycling (Yang and Hoffman, 1984), ethylene biosynthesis begins with the conversion of L-methionine into SAM. 
SAM is then converted into 5'-methylthioadenosine (MTA) and 1-aminocyclopropane carboxylate (ACC) via ACC synthase (ACS). ACC is subsequently converted into ethylene by 1-aminocyclopropane carboxylate oxidase (ACO). ACC, long known for its role as the immediate precursor to ethylene, has recently been investigated for its potential role in ethylene-independent regulation of growth (Polko and Kieber, 2019;Vanderstraeten et al., 2019). Following successful perception of ethylene, the hormone signal is transduced via a series of messengers to the nucleus where ethylene-responsive transcription factors (ERFs) activate downstream ripening-associated genes involved in cell wall softening, starch to sugar conversion, aroma production, and changes in pigmentation, among numerous other changes (Seymour et al., 2012;Osorio et al., 2013a;Cherian et al., 2014;Gao et al., 2020). While ethylene is important in ripening, the associated hormone perception and signaling pathways do not operate in isolation (McAtee et al., 2013); they may be dependent on other processes or manipulated by altering preripening conditions (Figure 2). Some fruits require a period of cold temperature exposure, known as "conditioning," in order for ripening to commence (Hartmann et al., 1987). European pear (Pyrus communis), for example, must undergo between 15 and 90 days (depending on the cultivar) of conditioning at 0-10°C to activate S2 autocatalytic ethylene biosynthesis (Villalobos-Acuna and Mitcham, 2008;Hendrickson et al., 2019; Figure 1). Other fruits do not require conditioning to ripen but can be induced to ripen more quickly through appropriate cold storage regimes. Kiwifruit subjected to cold conditioning, upon transfer to room temperature, respires faster and softens appreciably in comparison to fruits subjected to ethylene preconditioning alone (Ritenour et al., 1999). In 'Hass' avocado, for example, exposure to moderate chilling temperatures of 5-10°C for 2 weeks results in the early onset of the respiratory climacteric by 2-7 days in comparison with control fruits stored at 20°C, without any negative impacts. However, longer storage and colder storage temperatures result in a reduced climacteric response and development of symptoms of chilling injury in avocado (Eaks, 1983). Recent independent studies, as well as reviews, have indicated that such modifications to classical climacteric ripening implicate several pathways that operate concomitantly with ethylene biosynthesis and perception, including AOX respiration, signaling by reactive oxygen species (ROS), and pathways that may be triggered by changes in epigenetic signatures (Perotti et al., 2014;Kumar et al., 2016;Farinati et al., 2017;Bucher et al., 2018;Hendrickson et al., 2019;Hewitt et al., 2020b). New advances in genome editing have provided further insight regarding potential sites for variation in ripening, as manipulation of upstream transcriptional regulators (Ito et al., 2017) leads to alterations in fruit texture, photoperiodic response, and posttranscriptional regulation of ripening-related genetic elements (Martín-Pizarro and Posé, 2018). Understanding the way in which other key pathways may interact with ethylene biosynthesis and response during ripening will lend important insight into how control of ripening in various fruits can be fine-tuned to increase predictability and marketability.
FIGURE 2 | The network of pathways involved in respiration, ethylene biosynthesis and signaling, and ROS production and signaling during the ripening process. Pathways implicated in the stimulation of AOX expression or activity are indicated by dashed, colored lines. (1) Glyoxylic acid (GLA; green): GLA has been shown to directly activate AOX in vivo via interaction with the two cysteine residues that gate the protein (Umbach et al., 2006). In vivo, exogenous GLA application is hypothesized to lead to increased flux through the glyoxylate cycle and, consequently, the TCA cycle (Hewitt et al., 2020a). The latter leads to increased accumulation of NADH, resulting in increased flux of electrons through the cytochrome c (CYTc) pathway and consequent activation of AOX to prevent CYTc overreduction (Calderwood and Kopriva, 2014). (2) Hydrogen sulfide (H 2 S; purple): sulfides accumulate in the cytoplasm as a result of exogenous treatment with H 2 S; these can be converted to the amino acid cysteine (Calderwood and Kopriva, 2014). Cysteine, in turn, may directly modulate the activity of the AOX protein in response to stress or developmental changes in cellular redox state (Jia et al., 2016b;Dhingra and Hendrickson, 2017). (3) Methyl jasmonate (MeJA; pink): exogenous treatment with MeJA leads to increased flux through the jasmonic acid biosynthesis pathway and accumulation of JA. JA has been shown to activate MYC, a transcription factor (TF) that enhances the function of ethylene signaling factors, particularly EIN/EIL family TFs, leading to an enhanced ethylene response (Shinshi, 2008;Kim J. et al., 2015). It has also been hypothesized that JA can directly influence AOX gene expression, possibly via a similar mechanism, with MYC acting alone or as a complex with other transcription factors to target promoter regions of AOX family genes (Fung et al., 2006). In addition to MYC, JA may regulate additional TFs like RIN, NOR, and cold-induced TFs that target ripening-associated genes (Czapski and Saniewski, 1992;Nham et al., 2017). (4) Cold (red): low temperature causes activation of respiratory burst oxidases (NADPH oxidases) embedded in the cell membrane, which produce superoxide that is then converted to hydrogen peroxide (Marino et al., 2012). Continued stimulation of NADPH oxidases leads to accumulation of ROS in the apoplast; these ROS can then be transferred into the cytosol via aquaporin channels (Qi et al., 2017). When redox state is disrupted, both cytosolic ROS and ROS produced in the mitochondria by the CYTc pathway transmit signals to the nucleus to mediate gene expression changes (including activation of AOX; Li et al., 2013).
A Novel Role for AOX in Fruit Ripening The hallmark aspect of climacteric ripening is the respiratory rise that occurs prior to the S1-S2 ethylene transition, in which an initially gradual increase in carbon dioxide evolution is followed by a heightened burst in respiratory activity during the ripening climacteric (Hiwasa-Tanase and Ezura, 2014; Colombié et al., 2017). The respiratory climacteric has been extensively documented in terms of physiology and biochemistry in a number of fruits (Hulme et al., 1963;Krishnamurthy and Subramanyam, 1970;Salminen and Young, 1975;Hartmann et al., 1987;Andrews, 1995;Hadfield et al., 1995). While early studies were foundational to the understanding and characterization of climacteric ripening with respect to total respiration, greater examination of the genetic and biochemical underpinnings of the respiratory climacteric in a variety of systems is needed.
Climacteric respiration represents the combined activity of several mitochondrial pathways that differentially direct electron transport, leading to several possible energetic fates. The first is the cytochrome c (CYTc) pathway. CYTc operates as a result of a proton gradient generated in the mitochondrial intermembrane space and concludes in the production of cellular energy currency via ATP synthase. In plants, CYTc activity also facilitates cellular detoxification via the synthesis of antioxidant compounds, namely, ascorbic acid (Welchen and Gonzalez, 2016;Fenech et al., 2019). The final step in ascorbic acid production is catalyzed by l-galactono-1,4-lactone dehydrogenase (GLDH), an enzyme that also plays a role in the assembly of respiratory complex I and serves as an electron donor to CYTc in the synthesis of ascorbic acid (Millar et al., 2003;Schimmeyer et al., 2016;Welchen and Gonzalez, 2016). Thus, when antioxidant production is high, electron flux through the CYTc pathway is correspondingly increased. Furthermore, when flux through the CYTc pathway is at maximum capacity due to high cellular respiratory demands, the alternative oxidase (AOX) pathway provides a secondary avenue for electron flux, thereby preventing overreduction of the mitochondrial electron transport chain. Unlike CYTc, AOX is insensitive to cyanide-containing compounds, allowing for viability when normal respiratory activity is inhibited (Wagner and Moore, 1997;Rogov and Zvyagilskaya, 2015). Additionally, expression of AOX can also be modulated via retrograde mitochondrial signaling in response to reactive oxygen species (ROS) or metabolic disruption (Dojcinovic et al., 2005;Li et al., 2013). Because of this, AOX expression has been used both as an indicator of stress and as a metric to infer the energetic and metabolic status of plant biological systems during development (Saha et al., 2016). In almost all plants, AOX proteins are encoded by a small nuclear multigene family, which consists of two gene subfamilies: AOX1 and AOX2 (Polidoros et al., 2009). The transition from S1 to S2 ethylene biosynthesis involves numerous metabolic changes, which may occur simultaneously or in series, that require a great deal of regulation and feedback mechanisms to ensure that ripening occurs properly. There is increasing evidence, including respiratory partitioning studies, supporting a role of AOX in the modulation of respiration at various stages around the time of climacteric via induction of S2 ethylene, which thereby influences the development of ripening-associated phenotypes downstream (Figure 1; Considine et al., 2001;Xu et al., 2012;Ng et al., 2014;Perotti et al., 2014;Hendrickson et al., 2019). Activation of AOX During Ripening In several fruits, AOX expression and/or activity has been characterized at the preclimacteric and climacteric stages of fruit development and ripening (Figure 1). Banana, a comparatively fast-ripening fruit, displays elevated preclimacteric expression of AOX at the mature green stage (Woodward et al., 2009), while in papaya, AOX expression peaks between the onset of S2 ethylene and the climacteric peak. In tomato and apple, expression reaches a maximum around the same time as the climacteric peak, indicating that the climacteric rise in these fruits can be partially attributed to the increased capacity for mitochondrial oxidation (Duque and Arrabaça, 1999;Xu et al., 2012;Oliveira et al., 2015). 
In mango, interestingly, AOX peaks after the climacteric, contributing to oxidation during fruit senescence rather than ripening (Considine et al., 2001). Furthermore, in some fruits, both climacteric and nonclimacteric, AOX activation can be achieved via cold temperature or chemical stimulation (Figure 2). In zucchini and sweet pepper, cold-induced activation of AOX results in the mitigation of chilling injury (Aghdam, 2013;Hu et al., 2014;Carvajal et al., 2015). Recently, it was demonstrated that completion of cold conditioning, facilitating the S1-S2 transition, in European pear coincides with preclimacteric maxima in AOX transcript accumulation (Hendrickson et al., 2019;Dhingra et al., 2020;Hewitt et al., 2020b). In a subsequent study, chemical genomics approaches to further target AOX in pear fruit led to the discovery of glyoxylic acid (GLA) as a chemical activator of both AOX and ripening Hewitt et al., 2020a). GLA has been shown to directly activate AOX in vivo via interaction with the two cysteine residues that gate the protein (Umbach et al., 2006). Furthermore, transcriptomic characterization of expressed genes in response to GLA implicates the AOX pathway and glyoxylate cycle in a more extensive ripening network wherein exogenous GLA application results in increased flux through the glyoxylate cycle and TCA cycles, the latter of which leads to accumulation of NADH (Hewitt et al., 2020a). This, in turn, results in increased flux of electrons through the CYTc pathway and consequent activation of AOX to prevent CYTc overreduction (Figure 2; Hewitt et al., 2020a,b). In addition to GLA, there is evidence for activation and differential regulation of AOX protein isoforms by TCA cycle intermediates (Selinski et al., 2018). The results of these studies provide information necessary to develop and test "cocktails" of TCA/GLA cycle metabolites that could result in optimal activation of alternative respiration in the context of fruit ripening regulation when applied exogenously to preclimacteric fruit postharvest. Hydrogen sulfide (H 2 S), though a known phytotoxin, in minuscule doses can enhance alternative pathway respiration and inhibit ROS production (Hu et al., 2012;Luo et al., 2015;Li et al., 2016;Ziogas et al., 2018). Exogenous treatment with H 2 S leads to the accumulation of sulfides in the cytoplasm, some of which are directly converted to the amino acid cysteine (Calderwood and Kopriva, 2014). Cysteine, in turn, may directly modulate the activity of the AOX protein in response to stress or developmental changes in cellular redox state (Jia et al., 2016b). Additional studies have demonstrated the activation of AOX and ripening in response to H 2 S treatment. In 'Bartlett' and 'D' Anjou' pears, H 2 S facilitated a bypass of normal cold conditioning requirements for ripening, and treated fruit demonstrated increased ethylene evolution, heightened respiratory rate, and activated AOX expression (Dhingra and Hendrickson, 2017). In hawthorn fruit, H 2 S application mitigated chilling injury, which is linked to increased AOX activity and antioxidant capacity (Aghdam et al., 2018). Application of metabolism-regulatory hormones methyl salicylate (MeSA) and methyl jasmonate (MeJA) results in increased expression of AOX, correlating with a reduction in chilling injury in sweet pepper. 
JA is a known activator of MYC, a transcription factor that enhances the function of ethylene signaling factors, particularly within the EIN/EIL family, thus leading to enhanced ethylene responses (Shinshi, 2008;Kim J. et al., 2015). It has also been hypothesized that JA can directly influence AOX gene expression (Fung et al., 2006) via a similar mechanism, with MYC acting alone or in complex with other transcription factors to target promoter regions of AOX family genes (Figure 2). In addition to MYC, JA may regulate additional TFs like RIN, NOR, and coldinduced TFs that target ripening-associated genes (Czapski and Saniewski, 1992;Nham et al., 2017;Li et al., 2018). Together, these findings reveal interesting insights into chemical and hormonal events that operate in an ethyleneindependent space during climacteric ripening, as well as how ripening can be better regulated as a result of this information. Novel discoveries presented in recent studies, which complement and expand upon the foundational knowledge of the role of AOX in ripening, have laid the framework for several exciting hypotheses. First, knowledge of how AOX expression can be induced may provide an avenue for development and testing of ripening strategies in fruits whose respiratory profiles are affected by temperature. Furthermore, knowledge of how temperature preconditioning may mitigate chilling injury via AOX stimulation could allow for improved management practices of papaya, avocado, banana, mango, zucchini, and other temperature-sensitive fruits during storage (Lederman et al., 1997;Aghdam, 2013;Carvajal et al., 2015;Luo et al., 2015;Valenzuela et al., 2017). Molecular and Metabolic Links Between Respiration and Ethylene It is clear, based on the simultaneity and interdependency of responses, that respiration and ethylene are physiologically correlated during climacteric ripening-climacteric rise in respiration is accompanied by a spike in autocatalytic ethylene production, and blocking ethylene perception prevents respiration from increasing further (Hiwasa-Tanase and Ezura, 2014; Watkins, 2015). While elucidating the precise connections between the two pathways will require more genetic and metabolic work, the results of several studies point toward crosstalk between ethylene and respiration and implicate AOX expression and signaling by ROS in this connection (Sewelam et al., 2016). In tomato, 1-MCP treatment reduces transcript levels of AOX1a (Xu et al., 2012); the simultaneous inhibition of ethylene response and maintenance of respiration at low levels by 1-MCP indicates a relationship between ethylene, the respiratory climacteric, and AOX at the molecular level. The activity of AOX and biosynthesis of ethylene are directly dependent upon flux through CYTc and the availability of ATP. RNA interference studies in tomato reveal a modulatory role of AOX in ethylene production, as ACS4 activity in AOX-RNAi plants is significantly lower than in wildtype (WT) plants (Xu et al., 2012). Reduced activity of ethylene biosynthetic enzymes when AOX is silenced could be due to a decrease in precursors for ethylene production. For example, the methionine cycle, and therefore ethylene biosynthesis, is dependent upon ATP generation via respiration (Mattoo and White, 2018). Specifically, methionine is converted to the immediate precursor to ACC, SAM, in an ATP-dependent reaction catalyzed by SAM synthetase (Figure 2; Yang and Hoffman, 1984). 
During ripening, AOX could allow for heightened carbon flux through glycolysis and the TCA cycle; this would prevent overreduction of the ubiquinone pool, increase oxidation of NADH, and accelerate carbon turnover, resulting in the production of large amounts of ATP that could be used for S2 ethylene and other ripening-associated metabolic processes. Moreover, a recent report of a link between the TCA and methionine metabolism (and consequently, ethylene metabolism) via NADH oxidation lends further support to this concept (Lozoya et al., 2018). In cucumber, brassinosteroids were reported to induce ethylene responses and ROS, the collective activities of which resulted in stimulation of AOX and activation of downstream abiotic stress responses (Wei et al., 2015). ROS, which are produced in the mitochondria during respiration and accumulate in the apoplast as a result of abiotic stimulation of respiratory burst oxidase (NADPH oxidase) homologs, were historically thought of in terms of their toxicity to plants in high concentrations; however, their critical roles in response to perturbations in cellular redox state have become clearer in recent years (Jimenez et al., 2002;Marino et al., 2012;Vaahtera et al., 2013;El-Maarouf-Bouteau et al., 2015;Kumar et al., 2016;Noctor et al., 2018;Decros et al., 2019). During fruit maturation, ROS accumulation peaks once at the start of ripening (presumably the start of the respiratory climacteric) and again at overripening, around the time of harvest maturity (Muñoz and Munné-Bosch, 2018). This accumulation may coincide with the activation of AOX in certain species (Figure 1). In Arabidopsis, ethylene-induced signaling by hydrogen peroxide (H 2 O 2 ), a form of ROS, was shown to activate AOX in response to cold temperatures (Wang et al., 2010(Wang et al., , 2012. It has been suggested that such temperature-induced transcriptional changes in AOX occur via ROS derived from NADPH oxidase activity (Vanlerberghe, 2013;McDonald and Vanlerberghe, 2018). These NADPH oxidases produce O 2 − , which is converted to H 2 O 2 , in the apoplast. These ROS are translocated to the cytoplasm via aquaporin channels (Qi et al., 2017). The influx of apoplastic ROS, along with additional species produced in the mitochondria, serves to activate nuclear-targeted redox signals, which elicit antioxidative responses and alteration in metabolic processes (Suzuki et al., 2011;Marino et al., 2012;McDonald and Vanlerberghe, 2018;Chu-Puga et al., 2019; Figure 2). Such ROS-induced retrograde signaling leads to modulation of AOX expression, as has been shown in potato and pea. It has been hypothesized that in this way, ROS may facilitate crosstalk between respiration and ethylene via AOX activation in fruit ripening (Marti et al., 2009;Hewitt et al., 2020b;Hua et al., 2020). Taken together, these results indicate that an interplay between different components of several metabolic signaling pathways is responsible for initial AOX activation. Furthermore, the activity of both ethylene and alternative respiratory pathways may be self-perpetuating by means of an autostimulatory feedback loop involving ROS (Wei et al., 2015). 
While AOX serves as a mechanism to prevent overreduction of the CYTc pathway, it is possible that overstimulation of AOX via external perturbations (e.g., chemical or temperature) in fruit prior to the S2 transition induces a vacuum effect, drawing the glycolytic pathway and TCA cycle into action to deliver more reducing power, thereby initiating CYTc pathway respiratory activity. For example, in Arabidopsis and tomato, aox mutants displayed disrupted accumulation of primary respiratory metabolites, which affect development (Xu et al., 2012;Jiang et al., 2019). The same may be true of AOX impairment in fruit. Understanding the regulation of the respiratory climacteric and how crosstalk between ethylene and AOX is facilitated may require a look at the transcriptional regulators of these responses. Transcriptional Regulation Modulates Both Ethylene and Respiratory Responses Within the last decade, the importance of transcriptional regulation of ripening response has become more evident. During ripening, signals from upstream transcription factors (which may be activated by environmental or intrinsic triggers) facilitate a cascade of downstream signaling activity (Cherian et al., 2014;Gao et al., 2020). In fruits, this signaling activity leads to increased respiration, cell wall softening, and changes in the production of pigments, volatiles, starch, and sugar content, and phytonutrient metabolite content-these processes are all characteristic of ripening, with respiration and biosynthesis of ethylene particularly relevant to climacteric ripening (Seymour et al., 2013a;Karlova et al., 2014). Among some of the most important transcriptional regulators during ripening are ripening inhibitor (RIN), colorless non-ripening (CNR), tomato Agamous 1 (TAG1), tomato Agamous-like 1 (TAGL1), fruitful 1 and 2 (FUL1 and 2), and non-ripening (NOR); all of these are involved in a complex and interconnected regulatory network that ultimately leads to fruit ripening and the aforementioned ripening-associated qualities (Figure 2; Seymour et al., 2013a;Fujisawa et al., 2014;Datta and Bora, 2019). The availability of mutants targeting the aforementioned ripening regulators in tomato allows for study of consequences of ripening perturbation at the regulatory level. Many studies have demonstrated the detrimental effects of mutations in these key transcriptional regulators on ethylene production and signaling, which are expected to have further downstream respiratory consequences via the avenues for the interpathway crosstalk discussed in the previous section. Because of its resultant complete inhibition of ripening in tomato, the rin mutation has become one of the most iconic ripening-associated mutations in studies of climacteric fruit. Rin mutant tomatoes fail to mature beyond the green-ripe stage and do not exhibit the characteristic ripening climacteric of wild-type fruit (Vrebalov et al., 2002). Commercial varieties of tomato, heterozygous for the rin mutation, have been introduced to the market and have displayed increased shelf life with little noticeable alteration to the desired flavor profile (Garg et al., 2008). Chromatin immunoprecipitation studies revealed several direct targets of RIN, including the ethylenebiosynthesizing enzyme 1-aminocyclopropene carboxylate oxidase 4 (ACO4) and α-galacturonase (α-gal), an enzymeassociated with cell wall breakdown and fruit softening (Martel et al., 2011;Fujisawa et al., 2012). 
Furthermore, RIN protein binding sites (CArG box) were identified in the promoter regions of α-gal and ACO4, and ACS2 genes (Fujisawa et al., 2012. Binding to these motifs is facilitated by localized demethylation of associated promoter regions (Li et al., 2017). Expression of ethylene biosynthetic enzymes ACS1 and ACO1 in apple was greatly decreased when expression of various RIN-like MADS-box genes was downregulated or silenced (Ireland et al., 2013). Bisulfite sequencing studies revealed that binding sites in the promoter regions of known transcriptional targets of RIN were found to be demethylated, suggesting that demethylation is necessary for RIN binding and development (Zhong et al., 2013). Treatment with the methyltransferase inhibitor 5-azacytidine resulted in fruit that ripened prematurely, further lending support to demethylation of binding sites as a trigger for RIN-activated ripening (Zhong et al., 2013;Liu R. et al., 2015). While the rin mutant was classically understood as a loss of function mutant, more recent work suggests that it is actually a gainof-function mutant that produces a protein that actively represses ripening (Ito et al., 2017). Regardless, it is clear that when RIN is perturbed, ripening does not proceed to completion. Colorless non-ripening (CNR) transcription factor is a squamosal-promoter binding-like protein, which appears to be necessary for RIN to bind to promoters (Martel et al., 2011;Zhong et al., 2013). With a hypermethylated, heritable promoter that results in reduced transcriptional activity, CNR is a unique example of an epiallele, requiring the activity of a specific chromatin-methylating enzyme, chromomethylase3, for somatic inheritance (Ecker, 2013;Seymour et al., 2013b;Chen et al., 2015). Thus, the CNR transcription factor lends evidence for the role of epigenetics in critical developmental transitions such as those that occur during S1-S2 ethylene production and ripening. Fujisawa et al., 2014). Mutation of these regulatory factors results in fruit with decreased ripening capacity or non-ripening phenotypes (Itkin et al., 2009;Garceau et al., 2017). Another TF among the core set of regulatory elements is the NAC-domaincontaining protein at the tomato non-ripening (NOR) locus. NOR mutants fail to ripen in a physiologically similar manner to RIN and TAGL1 mutants. NOR acts upstream of ethylene biosynthesis and, like RIN, appears to bind to promoter regions of genes involved in ethylene biosynthesis, thereby positively regulating ripening (Gao et al., 2018 ; Figure 2). It is unclear whether RIN and NOR interact with one another to stimulate ripening in conjunction. Considering increasing understanding of their regulatory role in ripening, NOR and FUL genes have been recent targets for improving shelf life in tomato fruit (Nguyen and Sim, 2017;Wang et al., 2019). Recently, the role of AOX has been investigated in NOR, CNR, and RIN mutant fruit (Manning et al., 2006;Xu et al., 2012;Perotti et al., 2014). AOX activity elicits differential effects in each of these mutants, and expression of RIN, CNR, and the ethylene receptor never ripe (NR) has been observed in fruit in which AOX is silenced. When AOX was inhibited via RNA interference, the expression of these transcriptional regulators decreased (Xu et al., 2012). 
Because CNR acts upstream of ethylene biosynthesis and the NR receptor acts downstream, this observation suggests that AOX plays a yet uncharacterized role in ripening mediated by transcriptional regulators that affect ethylene biosynthesis, signal transduction, and response (Giovannoni, 2004;Seymour et al., 2013b;Hewitt et al., 2020b). Interestingly, it has been hypothesized that the activation of both NOR and RIN may be linked, either directly or indirectly, to jasmonic acid signaling (Czapski and Saniewski, 1992). As indicated previously, JA is known to enhance transcriptional activation of other ripening-associated processes, including AOX. In addition to JA, the ethylene signaling molecule EIN3/EIL1 is hypothesized to directly activate RIN in tomato, thus serving as an instrumental part of a positive feedback loop resulting in autocatalytic ethylene production (Figure 2; Lü et al., 2018), and corresponding to increased consumption of ATP produced from CYTc respiration. Epigenetic Regulation of Ripening and a Potential Link to the Respiratory Climacteric Epigenetics refers to the heritable modifications of the genome beyond the physical nucleotide sequence, including DNA methylation and modifications to histone proteins. In contrast, epigenomics refers to all modifications, regardless of heritability (Giovannoni et al., 2017). One of the most commonly studied forms of epigenetic modification is DNA methylation. Methylation status is in constant flux due to the changing environment; therefore, condition-specific methylation status may be used to infer stress conditions, ripening competency, and developmental progress, among other things. Recent evidence suggests that perturbation of mitochondrial function has an effect on epigenetics. At the same time, continued oxidation of NADH serves to counteract the increase in nuclear DNA methylation and maintain cellular homeostasis (Lozoya et al., 2018). The causes of such perturbations may vary. Temperature is known to be a major factor in alteration of methylation status, and conditioning of fruit requiring chilling to ripen, or to avoid chilling injury, could affect the methylation of promoter regions of key ripening related genes and regulatory transcription factors. With more tools for epigenetic and epigenomic analyses available, including bisulfite sequencing and PacBio long-read sequencing, new insights are being gained into the impact of epigenetic signatures on development and senescence of fruit (Daccord et al., 2017;Xu and Roose, 2020). Understanding how the epigenome governs downstream transcriptional regulation and response is critical to better understanding ripening and senescence. Chemically induced demethylation of tomato fruit using 5-azacytidine results in early ripening of fruit (Zhong et al., 2013). This finding indicates that the alteration of methylation status is one of the first steps in the regulation of downstream processes associated with ripening. Recent transcriptomic and gene ontology enrichment analysis of cold conditioned 'D' Anjou' and 'Bartlett' pear fruit suggests that both methylation and chromatin modifications may be important for activation of vernalization-associated genes and ripening-associated transcriptional elements, which were activated in conjunction with AOX during prolonged cold temperature exposure (Hewitt et al., 2020b). 
Interestingly, these two cultivars appear to differentially engage expression of the vernalization genes VRN1 and VIN3, which could explain the need for different cold exposure times in different cultivars (Hewitt et al., 2020b). Studying the epigenomic status in light of mutations to key transcription factors, mentioned previously, illuminates the way that DNA methylation and histone modifications in genetic regulatory elements serve to modulate certain aspects of ripening early on in development. Beyond abiotic influences, it is possible that some signal from mature seeds is originally what signals the onset of ripening progression (McAtee et al., 2013). This hypothesis is supported by the recent characterization of enzymes responsible for removing epigenetic signatures from DNA or histones, such as the DEMETER-like DNA demethylase gene SlDML2 in tomato (Liu R. et al., 2015;Zhou et al., 2019). These proteins are particularly highly expressed in the locular tissue surrounding mature seeds in the fruit. In contrast to tomato, a climacteric fruit in which demethylation is an important factor in ripening, nonclimacteric orange fruit was recently reported to exhibit global increases in DNA methylation as ripening progresses (Huang and Liu, 2019). In addition to factors known to directly affect methylation, recent evidence suggests that additional factors, including noncoding RNAs, circular RNAs, and microRNAs, may target genes with specific methylation patterns or abundance to elicit changes in expression (Zuo et al., 2020). The breadth of factors that may contribute to alterations in the epigenome is still being elucidated; however, it is possible that methylation status during ripening is tied to fruits' classifications as climacteric or nonclimacteric. This, in turn, is expected to differentially impact a wide range of ripening-associated parameters, in addition to ethylene and respiratory profiles.
CONCLUSIONS AND FUTURE PERSPECTIVES The interplay of ethylene (biosynthesis, signaling, and response) and respiration (CYTc and AOX) has been extensively characterized at the physiological level. Studies conducted within the last several years provide new insights with respect to the connection between these two critical pathways at the molecular and metabolic levels. AOX activity begins to increase during the preclimacteric phase, prior to the S1-S2 ethylene transition, particularly in the context of cold temperature or other external stimuli. This activity may be accompanied by the accumulation of ROS as the CYTc pathway capacity reaches a maximum. At the onset of S2 ethylene stimulation, both ethylene response and alternative pathway respiration appear to be interdependent. Ethylene biosynthesis requires ATP generated via respiration, and the activity of respiratory pathways is modulated by ERFs in the nucleus; this is evidenced by the inhibition of both ethylene production and respiration by the ethylene receptor antagonist 1-MCP. The commencement of both ethylene- and respiration-associated processes is likely due to upstream transcriptional and epigenetic regulators, as well as the signaling activity of ROS generated during increased respiration. Mutants of master transcriptional regulators in tomato have provided a means for the study of their effects on both ethylene production and respiration during ripening.
Furthermore, studies have investigated the effects of temperature and chemical manipulation of AOX with the aim to understand ways in which the timing of ripening can be controlled. AOX and related processes represent a new frontier in regulating postharvest ripening and, therefore, better management to reduce fruit wastage and development of postharvest management strategies in different types of fruits. The ever-growing fields of genomics, transcriptomics, metabolomics, and epigenomics offer high-throughput strategies for extrapolation of important biological function and expression information from large datasets generated in recent ripening experiments (Morozova and Marra, 2008;Banerjee et al., 2019). The relatively new ability to examine plant epigenomes, and most recently the "ripenome" at the level of single nucleotide bases reveals further avenues for understanding epigenetic regulators of the transition from preclimacteric to climacteric, and the subsequent development of species-specific ripening phenotypes (Giovannoni et al., 2017). Advances in gene editing have provided improved tools by which candidate ripeningassociated genes, identified using the aforementioned "omics" approaches, can be targeted in a concise way to achieve specific results with limited or no off-target effects. Use of these new approaches will facilitate improved understanding of the array of diverse transcriptional responses, the interaction of associated pathways (including AOX, vernalization-associated, and organic acid metabolic pathways), functional implications of the S1-S2 transition, and interdependent roles of ethylene and respiration in ripening (Nham et al., 2017;Hewitt et al., 2020a,b). Such approaches used independently or in conjunction will facilitate the identification and targeted manipulation of candidate regulators of important ripening and fruit quality-associated characteristics in fruits. This, in turn, could translate to greater storability of fruits after harvest, improved marketability within respective fruit industries, and enhanced consumer satisfaction overall. AUTHOR CONTRIBUTIONS SH and AD conceptualized the review. Both the authors contributed to the article and approved the submitted version. FUNDING Work in the Dhingra lab in the area of fruit ripening is supported in part by Washington State University Agriculture Center Research Hatch grants WNP00797 and WNP00011 and grant funding from the Fresh and Processed Pear Research Subcommittee to AD. SH acknowledges the support received from ARCS Seattle Chapter and National Institutes of Health/ National Institute of General Medical Sciences through an institutional training grant award T32-GM008336. The contents of this work are solely the responsibility of the authors and do not necessarily represent the official views of the NIGMS or NIH.
Revisiting Fisher-KPP model to interpret the spatial spreading of invasive cell population in biology In this paper, the homotopy analysis method, a powerful analytical technique, is applied to obtain analytical solutions to the Fisher-KPP equation in studying the spatial spreading of invasive species in ecology and to extract the nature of the spatial spreading of invasive cell populations in biology. The effect of the proliferation rate of the model of interest on the entire population is studied. It is observed that the invasive cell or the invasive population is decreased within a short time with the minimum proliferation rate. The homotopy analysis method is found to be superior to other analytical methods, namely the Adomian decomposition method, the homotopy perturbation method, etc. because of containing an auxiliary parameter, which provides us with a convenient way to adjust and control the region of convergence of the series solution. Graphical representation of the approximate series solutions obtained by the homotopy analysis method, the Adomian decomposition method, and the Homotopy perturbation method is illustrated, which shows the superiority of the homotopy analysis method. The method is examined on several examples, which reveal the ingenuousness and the effectiveness of the method of interest. time travelling waves that move with speed = 2 √ [5]. The model also gives rise to traveling wave solutions (TWSs) with > for ICs that decrease sufficiently slowly as → ∞, though for most practical applications, we are interested in TWSs that travel with the minimum wave speed since ICs with compact support are often more relevant [7,8,9]. A TWS ( , ) = ( = − ), propagating with a speed , is restricted to be positive and bounded [10]. Therefore, the boundary conditions (BCs) for the TWS of Eq. (1) are usually ( → −∞) → 1, ( → ∞) → 0. Additionally, 0 ≤ ( , ) ≤ 1 and > 0 is the wave speed. Fisher [2] first introduced the RDE (linear diffusion and nonlinear growth of a population) given by Eq. (1) as an ideal for the extension of a mutant gene with an advantageous selection intensity and an advantageous density . This equation is encountered in chemical kinetics [11] and population dynamics which includes problems, such as the nonlinear evolution of a population in a one-dimensional habitat, and neutron population in a nuclear reaction. Moreover, the same equation occurs in logistic population growth models [12], flame propagation, neurophysiology, autocatalytic chemical reactions, and branching Brownian motion processes. The F-K model and its various extensions have been used to study a broad range of biological phenomena. In this regard, in this paper, the generalized Fisher's equation in the following form is considered [13]: where is a positive constant. The F-K model and its various extensions are used to simulate the spatial expansion of invasion cell populations in biology [7,8,11,12,14,15,16]. In ecology, the F-K model has been used to study the spatial spreading of invasive species [17,18,19]. Further, as in refs. [3,5,20], the F-K model supports TWSs that have been broadly cultivated using a range of mathematical techniques. Therefore, the equation of choice is of great interest from the mathematical point of view as well. These TWSs of the F-K model are frequently used to mimic biological invasion [9]. 
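Because many of the mathematical symbols in the passage above were lost in extraction, the following block restates the standard Fisher-KPP setup being described. The notation (u for scaled density, D for diffusivity, r for proliferation rate, α and β for the constants of the generalized equation) is assumed for illustration and may differ from the symbols used in the original Eqs. (1) and (2):

```latex
% Classical Fisher--KPP equation (cf. the paper's Eq. (1)), assumed notation:
\frac{\partial u}{\partial t} \;=\; D\,\frac{\partial^{2} u}{\partial x^{2}} \;+\; r\,u\,(1-u),
\qquad 0 \le u(x,t) \le 1 .

% Travelling-wave ansatz and boundary conditions described in the text,
% with minimum admissible wave speed:
u(x,t) \;=\; U(z), \qquad z = x - ct, \qquad
U(z \to -\infty) \to 1, \quad U(z \to \infty) \to 0,
\qquad c \;\ge\; c_{\min} = 2\sqrt{rD}.

% A common form of the generalized Fisher equation (cf. the paper's Eq. (2));
% Case 1 below appears to correspond to \alpha = \beta = 1 and Case 3 to \alpha = 6, \beta = 1:
\frac{\partial u}{\partial t} \;=\; \frac{\partial^{2} u}{\partial x^{2}} \;+\; \alpha\,u\,\bigl(1-u^{\beta}\bigr).
```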
Interestingly, the constant speed travelling waves (TWs) play a significant role in a wide number of medical applications [7,8], and their behavior can be observed and measured experimentally. However, in spite of the measureless interest in TWSs to the F-K model, there are various features of these solutions that are biologically dissatisfactory [21]. For example, the TWSs do not have compact support for −∞ < < ∞. On the other hand, the classical TWSs of the F-K equation do not involve a well-defined front and the cell density remains positive for ( , ) → 0 as → ∞. Further, any restricted IC with compact support will always contribute to effective colonization and population growth. Unfortunately, the F-K equation is studied insufficiently to extract TWSs, as it is a challenging and difficult task of searching for such solutions to that equation [22]. To extract the TWSs of the F-K equation, researchers have, so far, applied some approximate and analytical methods. These methods include the Adomian decomposition method (ADM) [10], the homotopy perturbation method (HPM) [13,23], the variational iteration method (VIM), etc. [24]. It is to be mentioned at this juncture that the series solutions attained by the HPM and the ADM are often convergent in restricted regions [25]. But sometimes it is required to enlarge the regions of convergence (ROC). To solve this problem, Liao proposed a powerful method, the homotopy analysis method (HAM) [26] on the basis of a fundamental concept in differential geometry and topology. The HAM is a semi-analytic approach for obtaining series solutions to a wide range of nonlinear equations, such as algebraic equations, ordinary differential equations (ODEs), partial differential equations (PDEs), integro-differential equations (IDEs), differential-difference equations (DDEs), and their coupled equations. Homotopy is used to define the conjunction between any two several objects in mathematics that have identical characteristics in various appearances [27]. The HAM is different from other analytical techniques for many purposes and recently it has been applied in many research fields. Unlike perturbation approaches, the HAM is independent of large/small physical parameters and hence it is valid whether or not a nonlinear problem has small/large physical parameters. More crucially, unlike other perturbation and classic non-perturbation approaches, the HAM gives a simple mechanism to assure the convergence of series solutions, and hence the HAM is valid even for severely nonlinear situations. The HAM is based on the construction of a homotopy wherein an auxiliary linear operator is chosen to construct the homotopy. It is to be noted here that an auxiliary parameter is used in this method to control the ROC of the approximate series solution. A series solution of differential or integral equations achieved by the HAM converges very quickly over other analytical methods, namely the HPM, the ADM, the artificial small parameter method, the -expansion method, and the decomposition method [28], and hence it may reduce a significant amount of computational cost. Also, the HAM provides greater flexibility in choosing auxiliary linear operators and initial approximations and as a result, a complicated nonlinear problem is transferred into a vast number of simpler linear sub-problems. The HAM has recently been indicated to be useful for obtaining analytical solutions for nonlinear frequency response equations. 
Further, the HAM is employed to solve linear and nonlinear stiff ODEs, the matrix Riccati differential equation, and the Genesio system [29]. Furthermore, the HAM has been effectively used in many nonlinear problems, namely viscous flows [30], heat transfer, nonlinear water waves and oscillations [31], entropy analysis [32], and so on. Recently, the HAM has been used to analyze the reverse flow reactor (RFR) model [33]. Also, heat transfer and the MHD flow of viscoelastic fluids over an exponentially stretching surface are analyzed by the HAM [34]. Thus, from the literature point of view, it is clear that no evidence has been found to solve the F-K equation with the application of the HAM. Therefore, this research aims to use the effective and powerful method, the HAM, to solve nonlinear Fisher's equation and to explain the solutions from ecological and biological points of view. For this purpose, we solve Eq. (2) for several cases using the HAM in studying the spatial spreading of invasive species in ecology, as well as in the study of the spatial spreading of invasive cell populations. Since the HAM provides us with a greater flexibility to choose the initial approximation and an auxiliary linear operator to solve any nonlinear problems even if the problem has a closed-form solution, the solutions of the F-K equation are possible taking into account more than one initial approximation, which can be found later. Furthermore, because this approach divides a complicated nonlinear problem into an unlimited number of simple linear sub-problems, thereby a complicated nonlinear problem can easily be solved that cannot be unraveled via the other analytical problems available in the literature. The facts mentioned above signify the reason behind choosing the HAM to solve Fisher's equation. To the best of the authors' knowledge, this paper investigates for the first time, the effectiveness and the applicability of the HAM on the Fisher's equation by describing the solutions in the case of spatial spreading of invasive species in ecology and in biology to extract the characteristic of an invading cell population. Also, it describes the contribution of the proliferation rate in the Fisher-KPP model. The rest of the paper is structured in the following way: The basic concept of the HAM for solving nonlinear problems is presented in Section 2. Applications of the HAM for solving the Fisher-KPP equation are introduced in Section 3, and the discussion of the attained results is presented in Section 4. Finally, the conclusion based on the outcomes emanated from the study is placed in Section 5. Basic concept of the HAM for solving nonlinear problems In this section, the basic idea for solving a nonlinear differential equation (NDE) by the HAM is presented briefly. For this purpose, we consider an NDE in the following form: where  is a nonlinear operator, is an unknown function of the independent variables and . For simplicity, all the ICs or BCs are ignored. Then the zero-order deformation equation following Liao [26,31] can be set out as where is an auxiliary linear operator, ( , ) denotes a non-zero auxiliary function, ∈ [0, 1] is an embedding parameter, ℎ ≠ 0 is a convergence-control parameter (CPP), and 0 ( , ) is an initial guess of ( , ). Clearly when the embedding parameter equals 0 and 1, then Eq. (4) holds ( , ; 0) = 0 ( , ), ( , ; 1) = ( , ), respectively. Thus, as increases from 0 to 1, then the solution ( , ; ) varies from the initial guess 0 ( , ) to the exact solution ( , ). 
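The zeroth-order deformation equation itself did not survive extraction; in Liao's standard formulation, which matches the surrounding definitions (q the embedding parameter, L the auxiliary linear operator, H the auxiliary function, h the convergence-control parameter, u0 the initial guess), Eqs. (3) and (4) read as follows (a reconstruction, not a verbatim quotation):

```latex
% Governing nonlinear equation (Eq. (3)):
\mathcal{N}\bigl[u(x,t)\bigr] = 0 .

% Zeroth-order deformation equation (Eq. (4)), following Liao:
(1-q)\,\mathcal{L}\bigl[\phi(x,t;q)-u_{0}(x,t)\bigr]
  \;=\; q\,h\,H(x,t)\,\mathcal{N}\bigl[\phi(x,t;q)\bigr],
\qquad q \in [0,1],\; h \neq 0,

% so that the solution deforms from the initial guess to the exact solution:
\phi(x,t;0) = u_{0}(x,t), \qquad \phi(x,t;1) = u(x,t).
```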
Liao [35] expanded φ(x, t; q) in a Taylor series as follows: φ(x, t; q) = u₀(x, t) + Σ_{m≥1} u_m(x, t) q^m, (5) where u_m(x, t) = (1/m!) ∂^m φ(x, t; q)/∂q^m evaluated at q = 0. The convergence of the series given by Eq. (5) depends upon the choice of the auxiliary linear operator, the initial guess, the CCP ℎ, and the auxiliary function. If they are chosen properly, then the series (5) converges at q = 1, and we have the homotopy series solution u(x, t) = u₀(x, t) + Σ_{m≥1} u_m(x, t). Differentiating the zero-order deformation equation (4) m times with respect to q, dividing by m!, and finally setting q = 0, we can easily derive the mth-order deformation equation L[u_m(x, t) − χ_m u_{m−1}(x, t)] = ℎ H(x, t) R_m(u_{m−1}), (6) where χ_m = 0 for m ≤ 1 and χ_m = 1 for m > 1. In this fashion, the original nonlinear equation is converted into an infinite number of linear ones. It should be accentuated that these linear equations can easily be solved by any symbolic computation software, such as Maple, Mathematica, Matlab, and so on. Applying L⁻¹ on both sides of Eq. (6), we get u_m(x, t) = χ_m u_{m−1}(x, t) + ℎ L⁻¹[H(x, t) R_m(u_{m−1})]. In this way, it is easy to obtain u_m(x, t) (m ≥ 1) at the mth order, and thereby we have the truncated series u(x, t) ≈ Σ_{m=0}^{M} u_m(x, t). (7) When M → ∞, an accurate approximation of Eq. (3) is obtained. If Eq. (3) has a unique solution, then this method will yield that unique solution; if Eq. (3) does not possess a unique solution, the HAM will yield a solution among the many possible solutions. Applications of the HAM to Fisher's equation In this section, we envisage the generalized Fisher's equation in the form given in [13], Eq. (8). To support our discussion, four important cases of nonlinearity, which correspond to real physical processes, have been investigated to show the reliability of the proposed scheme, and several ICs have been selected for this purpose. Case 1: In this case, we solve Eq. (8) with both of its parameters set to 1 (Fisher's equation), i.e., we solve the following equation with the help of the HAM: ∂u/∂t = ∂²u/∂x² + u(1 − u). (9) Eq. (9) can be used to study flame propagation and nuclear reactors. To solve Eq. (9) by the HAM, we first need to choose an appropriate initial approximation. To illustrate our purpose, we consider the initial approximation of Eq. (9) following Singh et al. [13] as u(x, 0) = μ, where μ is an arbitrary constant. For the analytical solution of Eq. (9), we choose the linear operator L[φ(x, t; q)] = ∂φ(x, t; q)/∂t with the property L[c] = 0, where c is a constant, and we assume that L⁻¹ exists and is defined as L⁻¹(·) = ∫₀ᵗ (·) dt. Now, we define the nonlinear operator N as N[φ] = ∂φ/∂t − ∂²φ/∂x² − φ(1 − φ). (10) Using the definition given by Eq. (4), we construct the zero-order deformation equation (11); for q = 0 and q = 1, Eq. (11) gives the initial guess and the exact solution, respectively. Thus, the deformation equation of the mth order is attained as Eq. (12), and the solution of the mth-order deformation equation, obtained by applying L⁻¹ to both sides of Eq. (12), can be presented as Eq. (13). For m ≥ 1 and H(x, t) = 1, using Eq. (10) we obtain R_m(u_{m−1}) as given by Eq. (14). Hence, substituting Eq. (14) into Eq. (13) and solving for u_m(x, t), the successive terms u₁(x, t), u₂(x, t), … are found. Hence, the approximate solution of Eq. (9) by the HAM is given by Eq. (15) (see Eq. (7)). When ℎ → −1, we attain the exact solution u(x, t) from Eq. (15), which can be presented in the series form of Eq. (16). By using any symbolic computation system and performing some algebraic operations, the solution given by Eq. (16) can be brought to the closed-form solution u(x, t) = μ e^t / (1 − μ + μ e^t), (17) which is the exact solution of the problem specified by Eq. (9). From the initial solution and the obtained solution, it is perceived that the initial solution can be generated from the closed-form solution by setting t = 0. The two-dimensional (2D) graphical illustration of the solution of Eq. (9) given by Eq. (17) for some values of the IC is presented in Fig. 1 for a better perspective.
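The recursion just described is straightforward to automate in a computer algebra system. The following is a minimal SymPy sketch of Case 1 (the paper itself relies on software such as Maple, Mathematica, or Matlab); the function name ham_terms, the truncation order, and the symbols mu and h standing for the constant IC μ and the CCP ℎ are illustrative choices rather than notation taken from the original.

```python
import sympy as sp

x, t, mu, h = sp.symbols('x t mu h')

def ham_terms(order, u0):
    """HAM terms u_0 .. u_order for u_t = u_xx + u(1 - u),
    using L[phi] = d(phi)/dt, H(x, t) = 1 and the initial guess u0."""
    u = [u0]
    for m in range(1, order + 1):
        # R_m(u_{m-1}) coming from N[phi] = phi_t - phi_xx - phi + phi^2
        rm = (sp.diff(u[m - 1], t) - sp.diff(u[m - 1], x, 2) - u[m - 1]
              + sum(u[k] * u[m - 1 - k] for k in range(m)))
        chi = 0 if m == 1 else 1
        u.append(sp.expand(chi * u[m - 1] + h * sp.integrate(rm, (t, 0, t))))
    return u

terms = ham_terms(5, mu)              # constant initial guess u(x, 0) = mu
u_approx = sum(terms)                 # 5th-order HAM approximation
# h -> -1 reproduces the ADM/HPM-type series for this problem,
# i.e. the small-t expansion of the closed form in Eq. (17).
print(sp.simplify(u_approx.subs(h, -1)))
```

Increasing the truncation order and simplifying the partial sums for ℎ = −1 shows them approaching the closed form of Eq. (17).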
It is perceived from Fig. 1 that as t increases from 0 to 5, u(x, t) decreases for all values of x for the ICs μ = 2 and μ = 3. The solution given by Eq. (17) achieved via the HAM is exactly the same as those acquired by the ADM [10], the HPM [13,23], and the VIM [24], and it is the exact solution of Eq. (9) for the specified initial approximation. Case 2: In this case, we solve Eq. (9) with another initial approximation, to examine the flexibility of the HAM in choosing among several initial approximations. It is pertinent to note here that the HAM has great freedom in choosing an initial approximation, i.e., we can choose several approximations. For suitability and clarity, we choose another initial solution of Eq. (9), given by Eq. (18), following Hasnain and Saqib [36]. By the same process as detailed in Case 1, we obtain the successive terms, and hence the approximate solution of Eq. (9) by the HAM can be written as Eq. (19). Thus, the exact solution u(x, t) obtained from Eq. (19) as ℎ → −1 is given by Eq. (20). By performing some algebraic operations with the aid of symbolic computation software, the solution presented in Eq. (20) can be brought to the closed-form solution (21). The solution given by Eq. (21) is another exact solution of the problem given by Eq. (9). It is obvious that the initial approximation given by Eq. (18) can be recovered from the obtained solution by setting t = 0. For a physical interpretation of the result, its 2D graphical illustration is presented in Fig. 2. Case 3: In this case, we solve Eq. (8) with the coefficient of the nonlinear term equal to 6, i.e., ∂u/∂t = ∂²u/∂x² + 6u(1 − u). (22) In order to solve Eq. (22) by the HAM and illustrate our purpose, first, as in ref. [13], we consider the initial approximation of Eq. (22) as u(x, 0) = 1/(1 + e^x)². (23) According to the solution procedure detailed in Case 1, the nonlinear operator is defined as N[φ] = ∂φ/∂t − ∂²φ/∂x² − 6φ(1 − φ), (24) and the solution of the deformation equation of the mth order is then given by Eq. (25), where H(x, t) = 1 and R_m(u_{m−1}) is given by Eq. (26). Hence, from Eqs. (24) and (26), we obtain R_m(u_{m−1}); substituting it into Eq. (25) and solving for u_m(x, t), the successive terms are easily found. Hence, the series solution of Eq. (22) attained by the HAM is given by Eq. (27). The exact solution u(x, t) obtained from Eq. (27) as ℎ → −1 is given by Eq. (28). With some algebraic operations performed by a symbolic computational tool, it is easy to see that the solution presented by Eq. (28) takes the closed form u(x, t) = 1/(1 + e^(x−5t))², (29) which is the exact solution of Eq. (22) for Case 3. It is noticeable that the initial solution given by Eq. (23) can be reproduced from Eq. (29) by setting t = 0. The 2D plot of the solution of Eq. (22) given by Eq. (29) is displayed in Fig. 3 for a clear understanding. Fig. 3 is presented for Eq. (22), i.e., for the parameter values 6 and 1 in Eq. (8), with u(x, 0) = 1/(1 + e^x)², for varying t ∈ [−1, 1], where x varies from −3 to 3. It is perceived from Fig. 3 that u(x, t) increases over time and, after some time, u(x, t) becomes constant. The closed-form solution of Eq. (22) given by Eq. (29) is found to be exactly the same as those extracted via the ADM [10], the HPM [13,23], and the VIM [24], and it is the exact solution of Eq. (22) with the given initial approximation. Case 4: In this case, we solve Eq. (8) for another choice of its parameters, namely Eq. (30). The equation has applications in biology, especially in tumor growth and invasion. To solve Eq. (30) by the HAM and point out our intention, we consider the initial approximation following Singh et al. [13] as given by Eq. (31). On the basis of the process detailed in obtaining the solution in Case 1, the nonlinear operator can be defined as in Eq. (32), and the solution of the deformation equation of the mth order then leads to Eq. (33). Then, from Eqs. (32) and (33), maintaining the same procedure detailed above, we attain the successive terms u₁(x, t), u₂(x, t), and so on. By the same process, we can attain as many terms of the series as desired.
Hence, the approximate series solution of Eq. (30) by the HAM is given by Eq. (34). When ℎ → −1, we can put forward u(x, t) given by Eq. (34) in the form of Eq. (35). By performing some algebraic operations through any symbolic computation software, one can write u(x, t) given by Eq. (35) in the closed form of Eq. (36), which is the exact solution of Eq. (30). It is obvious that the initial solution given by Eq. (31) can be produced from Eq. (36) by setting t = 0; the corresponding 2D plot is displayed in Fig. 4. It is perceived from Fig. 4 that u(x, t) increases initially and, after some time, turns out to be constant for all values of x. The closed-form solution of Eq. (30) achieved through the HAM, given by Eq. (36), provides the same results as those picked up via the ADM [10], the HPM [13,23], and the VIM [24], and it is the exact solution of Eq. (30) with the given initial approximation. Furthermore, the estimated error can be obtained by comparing the truncated series solution with the exact solution, as is done in the next section. Discussion of the attained results In this section, we describe the effectiveness and applicability of the HAM in analyzing the solution of Fisher's equation from the viewpoint of the spatial spreading of invasive species in ecology and, in biology, of the spatial spreading of invasive cell populations, together with the influence of the proliferation rate. If we consider the Fisher-KPP model in the case of the spatial spreading of invasive species in ecology, or to extract the nature of the spatial spreading of invasive cell populations in biology, Figs. 1-4 may be described as follows. It is clear from Fig. 1, obtained for the proliferation rate λ = 1, that at the initial time the invasive species are present at a critical level, but as time increases the invasive species or the invasive cell populations decrease continuously with constant shape and constant speed, and after a brief period of time the invasion is found to be constant. Thus, in this case, Fisher's equation does not allow the invasive species or the invasive cell populations to go extinct. With the same proliferation rate, we solve Eq. (9) in Case 2 with the initial approximation given by Eq. (18); this is possible because the HAM provides us with a proper base function so as to yield a better approximation of the nonlinear problem. It is seen from Fig. 2 that if the invasive species are already present before the starting time of our investigation, then the invasion increases over time, but after a while the invasion becomes constant. This case also does not allow the solution to go extinct. For the same proliferation rate, it is seen from Fig. 4, for the initial approximation of Eq. (31), and likewise from Fig. 3, that the invasion increases over time, but after some time the invasion becomes constant. These cases also do not allow the solution to go extinct. Thus, from the above four cases, we notice that Fisher's equation gives solutions with compact support, but it does not allow the solution to go extinct. Furthermore, in the long-time limit, the solutions of Fisher's equation lead to a constant-speed and constant-shape TW. It is of interest to note here that for every positive IC with compact support, Fisher's equation will always evolve to a TW that moves with a minimum speed. It can easily be perceived from the above results that the general solution of Fisher's equation ∂u/∂t = ∂²u/∂x² + λu(1 − u) is given by Eq. (37). From the solution given by Eq. (37), we notice that the proliferation rate λ is the only parameter used to describe the entire population. To examine the effect of λ, the graphical illustration of the solution given by Eq. (37) for some values of λ is presented in Fig. 5 for a better perspective.
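Both statements above, namely that a compactly supported IC evolves into a TW and that the proliferation rate governs the spreading, can also be checked with a direct simulation before turning to Fig. 5. The sketch below is an explicit finite-difference run that is not part of the paper itself; the grid, the domain, the step-like IC, and the reference value 2√λ for the minimum Fisher-KPP wave speed are standard choices and results assumed here, not quantities quoted from the text.

```python
import numpy as np

def front_speed(lam, length=200.0, nx=2001, t_end=40.0):
    """Explicit finite-difference run of u_t = u_xx + lam*u*(1 - u) on [0, length]
    with a compact-support IC; returns the late-time speed of the u = 0.5 level."""
    x = np.linspace(0.0, length, nx)
    dx = x[1] - x[0]
    dt = 0.2 * dx * dx                      # well inside the explicit stability limit
    u = np.where(x < 5.0, 1.0, 0.0)         # compactly supported initial condition
    t, times, fronts = 0.0, [], []
    while t < t_end:
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        u = u + dt * (lap + lam * u * (1.0 - u))
        u[0], u[-1] = 1.0, 0.0              # simple boundary values
        t += dt
        if t > t_end / 2:                   # track the front only at late times
            times.append(t)
            fronts.append(x[np.argmin(np.abs(u - 0.5))])
    return np.polyfit(times, fronts, 1)[0]  # slope of the front position = speed

for lam in (1.0, 2.0, 4.0):
    print(f"lam = {lam}: measured speed = {front_speed(lam):.2f}, "
          f"2*sqrt(lam) = {2*np.sqrt(lam):.2f}")
```

The measured front speeds should sit close to the minimum speed for each proliferation rate, and the density behind the front stays strictly positive, in line with the no-extinction behavior discussed above.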
It is perceived from Fig. 5 that a TW is approached that travels in the positive x-direction with constant shape and constant speed, both for the spatial spreading of invasive species in ecology and for the spatial spreading of invasive cell populations in biology. The invasive cell population propagates differently over a semi-infinite region as the proliferation rate is varied (see Fig. 5). For the proliferation rate λ = 1, Fig. 5(a) indicates that the invasion starts to decay in space quickly with a constant speed, and after a short spatial distance the invasion is found to be constant. For the proliferation rate λ = 2, Fig. 5(b) shows that at t = 0 the invasion decreases quickly in space, which means that the propagation of the invasion is very limited. On the other hand, at later times the invasion first spreads out over space, then begins to decay as the spatial distance increases, and finally becomes constant. Finally, Fig. 5(c)-(d) shows that the invasion declines only after a long spatial distance, which signifies that the rate of spatial spreading is largest in those cases. The figure makes clear that the rate of invasion is almost stationary, but it does not go extinct. Therefore, it is perceived from the figure that the invasion spreads out over space quickly when the proliferation rate is increased, which means that the invasion starts to decrease rapidly only when the proliferation rate is lowered. Further, it can be perceived from Fig. 5 that the solution approaches a travelling wave that moves in the positive x-direction with constant speed and constant shape. Our solutions confirm that the speed of propagation is the minimum one, as expected. To examine the convergence of the series given by Eq. (15), we present graphically in Fig. 6 the HAM approximate solutions of Eq. (9) at several orders, taking ℎ = −0.28 and μ = 2, together with the exact solution. It can be observed from Fig. 6 that the HAM solution converges to the exact solution after only 5 terms. We have also estimated the root mean square error (RMSE) between our approximate results taking the first 8 terms and the exact result. The estimated RMSE value is found to be 4.7648 × 10⁻⁴, which is reasonable. To support the above statements, the absolute error (AErr) between the approximate series solution given by Eq. (15) at several orders and the exact solution of Eq. (9) is presented in Table 1, taking μ = 2 and ℎ = −0.28. From Table 1, it is revealed that the AErr is smallest for the highest-order approximate solution. This means that the HAM series solution converges to the exact solution as the order tends to infinity. To test the superiority of the HAM over the ADM and the HPM, we present graphically in Fig. 7 the exact solution of Eq. (9) and its approximate series solutions taking the first 8 terms attained by the HAM, the ADM, and the HPM. It is to be noted here that the approximate series solutions of Eq. (9) by the ADM and the HPM are recovered by taking ℎ = −1 in Eq. (15), which reproduces the ones produced by the ADM [10] and the HPM [13,23]. Therefore, the series solution attained by the HAM is superior to the series solutions obtained through the ADM and the HPM. It is also seen from Fig. 7 that the HAM result achieved by taking ℎ = −0.55 converges to the exact solution more rapidly than the ADM and the HPM. To clarify the above judgment, the AErrs between the approximate series solution (15) of Eq. (9) obtained by the HAM, the ADM, and the HPM and the exact solution are presented in Table 2.
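The convergence-control role of ℎ described above can be reproduced numerically as well. The sketch below rebuilds the x-independent Case 1 terms (as in the earlier SymPy sketch) and evaluates the error of the 8-term series against the closed form of Eq. (17) for several values of ℎ; the time grid is an arbitrary choice here, so the printed values illustrate the trend rather than reproducing the exact RMSE quoted in the text.

```python
import numpy as np
import sympy as sp

t, h = sp.symbols('t h')
mu0 = 2                                     # constant IC u(x, 0) = 2, as in Fig. 6

u = [sp.Integer(mu0)]                       # x-independent HAM terms of Case 1
for m in range(1, 9):                       # first 8 corrections
    rm = sp.diff(u[m - 1], t) - u[m - 1] + sum(u[k] * u[m - 1 - k] for k in range(m))
    u.append(sp.expand((0 if m == 1 else 1) * u[m - 1] + h * sp.integrate(rm, (t, 0, t))))

exact = mu0 * sp.exp(t) / (1 - mu0 + mu0 * sp.exp(t))
grid = np.linspace(0.0, 1.0, 21)            # illustrative time window
f_exact = sp.lambdify(t, exact, 'numpy')
for hv in (-1.0, -0.55, -0.35, -0.28):      # CCP values discussed in the text
    f_approx = sp.lambdify(t, sum(u).subs(h, hv), 'numpy')
    rmse = np.sqrt(np.mean((f_approx(grid) - f_exact(grid)) ** 2))
    print(f"h = {hv:+.2f}   RMSE = {rmse:.3e}")
```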
From Table 2, it is perceived that at each time listed in the table, the HAM result is better than those of the ADM and the HPM. Further, the HAM has the special advantage of controlling the ROC by selecting suitable values of the CCP ℎ. To demonstrate this advantage of the HAM, we exhibit in Fig. 8 the approximate series solutions of Eq. (9) given by Eq. (15), taking the first 8 terms, for several values of ℎ. It is seen from Fig. 8 that the parameter ℎ controls the ROC. It is interesting to note here that the approximate series solution of Eq. (9) given by Eq. (15) with the first 8 terms converges to the exact solution best for ℎ = −0.35. The AErrs between the approximate series solution of Eq. (9) and its exact solution for varying t are presented in Table 3 with ℎ = −0.35. Thus, the results suggest that the Fisher-KPP model can be used to emulate biological invasion in various situations, but it cannot be used to emulate biological recession, since the Fisher-KPP model does not allow the solution to go extinct in any of the cases. To remove this limitation, the Fisher-KPP model needs to be modified for simulating biological recession, so that the population can eventually become extinct in all cases. Nevertheless, the outcomes indicate that the HAM is a powerful mathematical technique for extracting exact and analytical solutions of nonlinear equations. Conclusion In this paper, we have presented the solution of nonlinear Fisher-type equations using the HAM and studied the nature of the solutions from the viewpoint of ecology and biology. It is observed that every positive IC of Fisher's equation with compact support evolves to a TW with the minimal speed. The effect of the proliferation rate in the Fisher-KPP model on the entire population is discussed, and it is observed that the invasive cells or the invasive population decrease within a short period of time when the proliferation rate is minimal. The HAM includes a CCP that provides a convenient way to control the ROC and the convergence rate of the series solution. Thus, we can extend or shorten the region of validity by controlling the CCP ℎ when a closed form is unavailable. Also, the solutions extracted by the HAM contain the solutions obtained by the HPM and the ADM. Therefore, it is confirmed that the HAM is superior to the other semi-analytical methods. In the present study, Matlab software was used to produce the various graphs of the solution of Fisher's equation obtained via the HAM for several values of the parameter ℎ, and Fig. 8 indicates that ℎ controls the ROC. The approximate results for Eq. (9) (Fisher's equation) by the HAM containing only the first 8 terms match the exact solution of the equation reasonably well with respect to the RMSE value. Moreover, our approximate outcomes are nearly identical for approximation orders in the range 6 to 9; an order of 9 is justified for a reasonable case, and for larger orders the approximate solutions may approach the exact result. The outcomes show that the HAM is a powerful and efficient method for obtaining an analytic approximate solution to a nonlinear problem that converges to the actual solution relatively rapidly (after just five iterations in the case of the current study, see Fig. 6). Since, in ecology or biology, the invasive species or invading cells can never be eliminated completely at any instant of time under the Fisher-KPP model, we need to reformulate Fisher's equation.
For this purpose, we anticipate that if the diffusivity in Fisher's equation (2) is altered, the graph of the approximate solution may terminate after a finite number of steps, even in closed form, and so the invasive species may die out within a short time. The future scope and perspective of our research work concentrate on investigating precisely under what conditions solutions evolve to travelling waves or become extinct, by modifying the assumption that the contact point corresponds to zero cell density, and on examining how quickly travelling waves develop as t → ∞. Author contribution statement Gour Chandra Paul: Conceived and designed the experiments; Contributed reagents, materials, analysis tools or data; Wrote the paper. Tauhida: Performed the experiments; Analyzed and interpreted the data; Wrote the paper. Dipankar Kumar: Conceived and designed the experiments; Wrote the paper. Data availability statement Data will be made available on request. Declaration of interests statement The authors declare no conflict of interest. Additional information No additional information is available for this paper.
Construction method of strengthening shear walls using prestressed steel bars for a high-rise building . 2. In the shear wall reinforcement of a high-rise residence in Changzhou, China, the prestressed steel bar reinforcement method is innovatively used. This paper focuses on the reinforcement principle and construction method of the prestressed steel bar method for strengthening the shear wall. During the construction, combined with the engineering quality problems, the prestressed steel bar method is used to strengthen the shear wall. This method avoids the reduction of the use area caused by the increasing section reinforcement method and the stress lag caused by the replacement method, does not change the structural stiffness and the shape of structural members, and shortens the construction period. After monitoring by the monitoring unit, the reinforcement method has good effect. This study also passed the acceptance of the science and technology plan project of the Ministry of construction, and formed a complete set of construction method of prestressed steel bar strengthening shear wall. The effective implementation of this method can provide technical reference for the reinforcement construction of similar projects. Introduction Prestressed steel bar, also known as PC steel bar, is an intermediate type prestressed product between finish rolled rebar and prestressed steel strand. It has the advantages of high strength, high toughness, low relaxation, strong binding force with concrete, good weld ability, upsetting and material saving [1]. It has been widely used in concrete centrifugal pipe pile, electric pole, elevated pier, railway sleeper and other high-strength prestressed components, and its application field has expanded from railway and highway to construction, water conservancy, energy and geotechnical anchoring engineering fields. In recent years, the application research of prestressed steel bar has developed to a certain extent. For example, Shangguan Junwei [2] studied to determine the steel bar parameters of roadway support through the relationship between surrounding rock layers of roadway, and achieved the engineering purpose of basically no separation of roadway roof through the joint roof support of steel bar and anchor cable. Xu Jinhong [3] studied the reinforcement of weakly cemented rock stratum with high prestressed steel rod and anchor cable support through experiments, which meets the normal use of roadway well. Yao Yi [4] also found through research that prestressed steel bars play a significant role in active support in roadway support, the supported roadway deformation is small, and the roof is basically free of separation. Wu Yongzheng [5,6] and others have carried out the mechanical property test of mining prestressed steel bar by means and methods of laboratory test, numerical simulation and field test, and carried out the development and Application Research of complete set of steel bar support technology. It is found that under high prestress, the active support effect of steel bar is obvious, and has the advantages of high prestress application level, low cost and good support effect. Wang Bangping [7] carried out the research on the application of prestressed steel bar in the vertical prestress of the web of prestressed concrete continuous beam bridge. 
Through the research, it is found that the use of unbonded prestressed steel bar in continuous beam bridge has the advantages of excellent material performance, simple construction process, reliable acceptance method and easy implementation compared with prestressed finish rolled rebar. Deng Ming [8] and others used prestressed steel bars to strengthen the T-beam diaphragm to solve the problem of cracking and damage of the diaphragm of the T-beam bridge. It was found that the crack width of the diaphragm and the deflection of the main beam were greatly reduced. Fang Minhua [9] conducted an experimental study on the mechanical performance of PC steel rod prestressed concrete roof slab, and found that PC steel rod used for prestressed concrete roof slab has good stress and obvious omen before failure, which improves the mechanical performance of general prestressed concrete flexural members. Through collection, it is found that the existing research on prestressed steel bars mainly focuses on bridge construction and roadway support. It is found that the application advantages of prestressed steel bars are obvious in bridge construction and roadway support. However, there are few studies on prestressed steel bars in civil buildings. In this case, we combined with the quality defects of the project, compared and selected the reinforcement scheme, innovatively adopted the prestressed steel bar to strengthen the shear wall, and achieved good results, hoping to provide effective reference for the reinforcement of other engineering structures. Engineering survey A high-rise residential building in Changzhou has 24 floors above ground and 2 floors underground, with a building height of 98.900 m. The main structure have been capped. The infilled wall and secondary structure of the upper structure have been completed. The quality supervision station found that the rebound strength of the local shear wall concrete on the bottom floor is lower than the designed strength. The testing unit found that the concrete strength of the shear wall of 12 floors and below is mostly low. The structure with more than 12 stories is normal. Based on the test results and the structural recheck conclusion of the original design unit, the professional design unit has designed the reinforcement of the components whose original concrete strength does not meet the requirements of the project. For some shear wall structures, the shear wall reinforcement measures of local shear wall embedded steel bar applying stress method and covering the wall surface with steel mesh and spraying composite mortar ductility treatment have been taken. Principle of shear wall strengthened by prestressed steel bar In the shear wall to be reinforced, a 250 mm wide vertical groove shall be set by mechanical and manual chiseling. During the chiseling process, the wall reinforcement shall be retained as much as possible, and the wall naturally forms a roughened surface, which shall be set in the groove Φ100 steel rod column (see Figs. 3 and Fig. 4 for details, and Fig. 5 for site photo). End plates of 200×150×20 are set at both ends of the steel rod column to facilitate jacking with the upper and lower shear wall concrete, and are vertically welded on the steel rod column Φ16. The number of studs connected with the upper and lower shear walls shall be determined according to the calculation, and the number of studs connected with the shear wall of the layer to be reinforced shall be set according to the structural requirements. 
The steel bar column passes through the steel beam with force placed in the wall before the beam is set up (perpendicular to the wall surface at the trough body) and is firmly welded with the simultaneous interpreting steel beam. Meanwhile, a counterforce steel beam is erect in the wall above the tank body, and a pair is arranged between the two H type steel beams. Φ180×8 force transmission steel pipe and Jack (see Fig. 6 for details) to ensure the setting Φ100 steel rod column can work together with shear wall. According to the design requirements, the steel rod is prestressed through force transfer steel beam. After the prestress is applied to the design requirements by Jack, the steel wedge is used to knock the upper part of the steel rod and the wall tightly and weld firmly. The damaged part of the wall is welded with reinforcement of the same specification and restored to the original state, and then closed with formwork, The vertical slot in the shear wall shall be poured with non shrinkage cement-based grout with coarse aggregate. After the measured strength of the newly poured concrete in the vertical slot meets the requirements, the pressurizing equipment shall be removed, the force transfer steel pipe and reaction frame steel beam shall be removed, and the exposed force transfer steel beam shall be cut off. During construction, temporary measures shall be taken to ensure stability of newly added prestress Φ100 steel rod column. After construction, the stability of prestress Φ100 steel rod column is guaranteed by shear wall. In the reinforcement of building structure, the newly added part of the structure belongs to secondary stress. Before reinforcement, the original structure has been under the action of external force load, and the cross-section stress and strain level are generally very high. The newly strengthened part of the structure will be stressed only when the strengthened structure further bears the load, that is, the second load, so that the stress and strain in the newly added part always lag behind the stress and strain of the original structure, that is, there is a phenomenon of strain lag. This will result in high stress and large deformation of the original structure, while the stress of the newly added part is still at a low level, which cannot give full play to its role and cannot play its due reinforcement effect. Secondly, the reinforced structure belongs to the secondary stress combination structure, and the old and new parts still have the problem of overall work and common stress. The technology of strengthening shear wall with prestressed steel bar solves the problem of strain lag by applying prestress on the prestressed steel bar, anchoring the upper and lower ends of the prestressed steel bar with the shear wall with epoxy resin concrete, and then strengthening the connection with the shear wall through cylindrical head studs welded on the steel bar, which effectively ensures that the stress of the prestressed steel bar reaches or even exceeds the original shear wall. The overall performance of the whole reinforcement system is improved. 
Construction technology Removal of obstacles at the construction site → setting out and positioning of embedded steel bars at the shear wall site → slotting of shear wall → erection of steel bars and support system → stressing of steel bars → grouting of the notch tightly → removal of supporting auxiliary components after the grouting strength reaches C30 → binding and welding of steel mesh → painting or spraying of 30 mm thick high-performance composite mortar → maintenance → putting into use. Construction method of steel bar embedded in shear wall Construction technology: positioning and setting out → roughening the surface of shear wall → opening installation slot of shear wall → opening and installation of steel beam of reaction frame → installation of steel bar and support system → acceptance of installation quality of reaction frame and steel bar → embedded installation monitoring system → prestressing steel bar by stages → fixing the upper pressure end of steel bar → unloading by stages → adjustment of original beam and wall reinforcement → formwork erection → pouring Slurry → formwork removal → curing → cutting off the lateral support of the force transmitting steel beam and the fixed steel bar. (1) According to the requirements of the drawings, the location of the embedded steel bars shall be set out and rechecked in combination with the size of the completed members. In order to avoid the edge members with more reinforcement, the prestressed steel bars shall be symmetrically set inside the edge members of the shear wall. (2) In the setting out position of the wall, the 250 mm wide wall through slot and the top shear wall through the hole where the horizontal temporary support steel beam is placed are to be chiseled mechanically and manually. During the construction, the steel bars in the original wall body shall not be damaged as much as possible, and other main structures shall not be damaged. (3) According to the requirements of the design drawings, the shaped steel bar shall be made in the processing plant. When the steel bar is cut, the cutting section shall be flat and perpendicular to the centre line of the steel bar. The welding between the bolt and the steel bar shall be carried out according to the angle and position of the drawing. The temporary horizontal force transmitting steel beam (H350 × 220 × 20 × 20) part of the bolt can be welded on site. The steel bar and the force transmitting steel beam shall be connected by full penetration groove weld. (4) The steel bar and auxiliary jacking support system are set up in the groove. The vertical connection of the steel bar is welded with Φ114 × 7 sleeve (Fig. 6), and the verticality of the steel bar is strictly checked. The steel bar in the middle layer is equipped with [16 channel steel and M12 U-bolt, so as to ensure the lateral stability of the steel bar in the centre of the wall and the side of the steel bar. Pour the epoxy resin concrete at the lower cushion beam, and complete the anchoring of the lower cushion beam. 
(5) According to the requirements of the drawings, the upper flange of the reverse support steel beam shall be placed close to the upper part of the hole, the lower part shall be supported by Jack, and the steel plate and concrete contact surface shall be filled with modified epoxy slurry to ensure the compactness of the structural adhesive and the flatness of the upper and lower surfaces of the steel beam; after the grouting material near the steel beam reaches the specified strength, the jack and Φ180 × 8 supporting steel pipe can be set on the force transmitting steel beam, and the jack and steel The contact surface of the beam shall be smooth, the contact surface of the steel pipe and the jack shall be set with 30 thick steel base plate, and the centre line of the jack and the stiffener of the force transmitting steel beam shall coincide. (6) According to the stress value applied in the design, two jacks are used to apply prestress to the steel bar step by step. The first load is 30 % of the required stress value, the second load is 60 %, the third load is 90 %, the fourth load is 100 % of the required stress value, and the interval time is 10 min. The data of relevant stress changes are collected, and the stress value is adjusted according to the data, so as to ensure the reliability of the steel bar Effective stress. (7) After the stress value meets the design requirements, it shall meet the design requirements. Knock the steel wedge between the base plate and plug weld the gap between the steel bar and the upper base plate firmly. Anchor the upper end of the steel bar with epoxy resin and remove the jack and unloading steel beam. (8) After finishing the concrete, the reinforcement at the original structure of the notch shall be repaired and recovered. The cut-off reinforcement in the construction shall be connected and welded as required. The length of the welding overlap shall meet the design requirements. The crossing of the reinforcement shall be bound with iron wire. After the formwork is erected, the original section of the wall shall be restored by pouring the grouting material, and the test block shall be reserved and maintained as required by the specification. According to the test report of test block, determine the time of formwork removal, and check the appearance quality after formwork removal. If there are honeycomb and pockmarks, handle them according to the severity. (9) The temporary support of steel bar shall be removed after the strength of grouting material reaches the requirements. Construction method for ductility treatment of shear wall Construction process: removal of obstacles at reinforcement part → wall chiselling treatment → surface cleaning → binding and welding of reinforcement mesh on wall surface → planting and installation of tie bars → cleaning, wetting and brushing interface agent on new and old joint surface → construction of composite mortar surface (spray construction) → maintenance. (1) The wall surface shall be roughened and provided with 5mm deep groove. (2) Before the spraying of cement composite mortar, the steel mesh shall be concealed for acceptance, and the next process can be carried out only after the acceptance is qualified. (3) Before spraying the composite mortar, the spraying thickness mark shall be set with the spacing of about 1000 mm. (4) Before spraying, the air compressor and jet machine shall be put into trial operation. 
After inspection and operation, air supply test and water supply test shall be carried out for composite mortar pipeline, and no air leakage and water leakage shall occur. (5) Check the ventilation and lighting of the work area. (6) Before spraying construction, spray water on the surface to be sprayed and wet, and brush the interface agent. (7) When spraying, the distance between the nozzle and the spraying surface should be 1.0 m, and the nozzle should be vertical to the spraying surface. (8) The spraying operation shall be carried out in the sequence of section and block, part first, then whole and bottom-up. During spraying, the nozzle moves slowly and repeatedly in a spiral shape, with a spiral diameter of about 20-30 cm, so as to ensure dense spraying. (9) During the spraying construction, the water cement ratio shall be well controlled to keep the surface of the spraying mortar flat and free of dry block sliding and flowing. In addition, the rebound rate of spraying composite mortar shall be controlled, which shall not be greater than 20 % on the side and 30 % on the top. Rebound materials falling on the ground should be collected and broken in time to prevent caking. The solid elastic material shall be screened and classified, and the material diameter can be reused if it meets the quality requirements of the above raw materials. The contaminated rebound material shall not be used for structural reinforcement. (10) After the thickness of spraying composite mortar meets the design requirements, it shall be scraped and trowelled. The levelling shall be carried out in time after the initial setting of mortar. It is not allowed to disturb the internal structure of the mortar and its bonding with the base course. After the final setting of the last layer of spraying mortar for 2 h, spray water for curing. The curing time shall not be less than 14 d. Construction monitoring (1) Monitoring purpose. In the process of structural reinforcement, the internal force and displacement of each stressed component are monitored to ensure the reliability of reinforcement. (3) Burying of monitoring points. 1) Embedding of internal force monitoring points of steel bars: according to the test requirements of the design unit and combined with the actual situation of construction, before the steel bars are stressed, two fiber grating reinforcement meters (Fig. 7) are arranged symmetrically on the steel bars to test the axial force of the steel bars in the loading, concrete pouring, unloading and other stages (see Table 1 for specific time nodes). Install a pair of FBG surface strain gauges on the steel bar to be tested, connect the demodulator and debug the peak wavelength of the FBG. Read the grating peak before loading as the initial peak. During the whole process, the computer will automatically read the peak value of the grating and convert it into the axial force of the steel bar: (2) 2) Embedding of wall stress monitoring points: before the change of wall stress, a pair of surface strain gauges shall be embedded at both sides of the measured wall, at the midpoint of the support structure connection (Fig. 8), and the readings shall be measured for three times, and the average value shall be taken as the initial reading of the measurement point. After the wall stress changes, the readings of the measuring points shall be measured again to obtain the wall strain, and the wall stress and the change of the reinforcement stress in the wall shall be calculated. 
3) Embedding of displacement monitoring points: before jacking operation, the upper layer of the replacement layer shall be supported on the wall surface, and static level measuring points shall be set at the midpoint of the two support lines as the monitoring points for displacement measurement; static level observation points shall be set on the wall surface not affected by deformation as the reference points to observe the relative deformation between the reference points and the monitoring points. (4) Acceptance criteria. The qualified standard of the inspection lot shall be no more than 20 % of the loss axial force value specified by the designer, and the qualified judgment standard of axial force and vertical displacement shall be determined according to the technical standard for inspection of building structures (GB / T 50344-2004) (see Table 2). In the process of reinforcement, the monitoring unit monitored the stress of steel bar and the deformation of shear wall in real time. Monitor the axial force of the steel bar during the loading process, 3 days after the concrete pouring in the reinforcement area, 1 day and 2 days after the removal of the loading device. The monitoring data show that the residual coefficient of the axial force of the prestressed steel bar is more than 0.85, and the residual coefficient of the axial force of most steel bars is more than 0.9, which meets the standard that the loss axial force value specified by the design unit is not more than 20 %. Before, after and 1 day after the replacement wall is removed the vertical displacement of the top plate of the replacement shear wall is monitored at all stages after the support is removed. The measured maximum cumulative vertical displacement is 0.2 mm, which meets the limit requirement of no more than 0.3 mm as required by the specification. Conclusions Among the existing structural reinforcement methods, the replacement concrete method does great damage to the original structure. Due to the secondary stress, the stress lag of the replaced shear wall is obvious, and the construction period is long; If the method of adding steel plate or section steel around the shear wall and column is adopted, it will affect the later decoration construction of residents, significantly reduce the use area of the house and increase the structural stiffness and self-weight. The prestressed steel bar reinforcement method overcomes the above shortcomings, not only ensures the reinforcement quality and user use area, but also shortens the construction period. After the completion of the reinforcement project, through the construction monitoring, it is found that the maximum prestress loss of the prestressed steel bar is 15 %, which meets the standard that the loss axial force value required by the design is no more than 20 %. The maximum cumulative deformation of the vertical displacement of the top plate of the replacement shear wall is 0.2 mm, which meets the limit requirement of less than 0.3 mm required by the specification. The structural reinforcement has also passed the on-site special acceptance of the science and technology plan project of the Ministry of construction. After being put into use for a period of time, the building operates well, achieves the expected reinforcement goal, and fully reflects that the reinforcement method is economical, safe and effective. On this basis, in order to facilitate the promotion of technology, we continue to summarize and form a complete set of construction methods.
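The two acceptance checks reported above reduce to simple arithmetic, sketched below in Python. The conversion from the FBG strain readings to axial force is assumed here to be the elastic relation N = E·A·ε, since the paper's own conversion formula (its Eq. (2)) is not reproduced in this text, and the material constants and gauge readings are illustrative values, not project data.

```python
import math

E_STEEL = 2.0e5                  # elastic modulus of the PC steel bar, N/mm^2 (assumed)
BAR_DIAMETER = 100.0             # the phi-100 steel rod column, mm
AREA = math.pi * BAR_DIAMETER ** 2 / 4.0

def axial_force_kN(strain):
    """Convert a strain reading from the FBG gauge pair to axial force (kN),
    assuming the elastic relation N = E * A * strain."""
    return E_STEEL * AREA * strain / 1000.0

# Assumed strain readings at the monitored stages (not measured project values).
stages = [("after tensioning", 9.0e-4), ("3 d after grouting", 8.5e-4),
          ("1 d after unloading", 8.2e-4), ("2 d after unloading", 8.1e-4)]

n0 = axial_force_kN(stages[0][1])
for name, eps in stages:
    coeff = axial_force_kN(eps) / n0         # residual axial-force coefficient
    print(f"{name:22s} residual coefficient = {coeff:.2f} "
          f"(loss <= 20 %: {coeff >= 0.80})")

max_roof_displacement_mm = 0.2               # maximum cumulative value reported above
print("vertical displacement within 0.3 mm limit:", max_roof_displacement_mm <= 0.3)
```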
Quark-Antiquark and Diquark Condensates in Vacuum in a 3D Two-Flavor Gross-Neveu Model The effective potential analysis indicates that, in a 3D two-flavor Gross-Neveu model in vacuum, depending on whether the ratio G_S/H_P is less than or bigger than the critical value 2/3, where G_S and H_P are respectively the coupling constants of the scalar quark-antiquark channel and the pseudoscalar diquark channel, the system will have a ground state with pure diquark condensates or with pure quark-antiquark condensates, but not one with coexistence of the two forms of condensates. The similarities and differences in the interplay between the quark-antiquark and the diquark condensates in vacuum in the 2D, 3D and 4D two-flavor four-fermion interaction models are summarized. I. INTRODUCTION It has been shown by the effective potential approach that in a two-flavor 4D Nambu-Jona-Lasinio (NJL) model [1], even when the temperature T = 0 and the quark chemical potential µ = 0, i.e. in vacuum, there could exist mutual competition between the quark-antiquark condensates and the diquark condensates [2]. A similar situation has also emerged from a 2D two-flavor Gross-Neveu (GN) model [3], except for some differences in the details of the results [4]. An interesting question is whether such mutual competition between the two forms of condensates is a general characteristic of this kind of two-flavor four-fermion interaction model. To answer this question, on the basis of the research on the 4D NJL model and the 2D GN model, we will continue by examining a 3D two-flavor GN model in a similar way. The results will certainly deepen our understanding of the features of the four-fermion interaction models. We will use the effective potential in the mean field approximation, which is equivalent to the leading order of the 1/N expansion. It is indicated that a 3D GN model is renormalizable in the 1/N expansion [5]. II. MODEL AND ITS SYMMETRIES The Lagrangian of the model will be expressed by Eq. (1). All the denotations used in Eq. (1) are the same as the ones in the 2D GN model given in Ref. [4], except that the dimension of space-time is changed from 2 to 3 and the coupling constant H_S of the scalar diquark interaction channel is replaced by the coupling constant H_P of the pseudoscalar diquark interaction channel. (The project is supported by the National Natural Science Foundation of China under Grant No. 10475113.) Now the matrices γ^µ (µ = 0, 1, 2) and the charge conjugation matrix C are taken to be 2 × 2 ones, with the explicit forms given in Eq. (2). It is emphasized that in the 3D case no "γ_5" matrix can be defined; hence the third term on the right-hand side of Eq. (1) will be the only possible color-anti-triplet diquark interaction channel which could lead to Lorentz-invariant diquark condensates, where we note that the matrix Cτ_2 λ_A is antisymmetric. Without "γ_5", the Lagrangian (1) has no chiral symmetry. Apart from this, it is not difficult to verify that the symmetries of L include: 1. the continuous flavor and color symmetries; 2. the discrete symmetry R: q → −q; 3. the parity P. If the quark-antiquark condensates q̄q could be formed, then the time reversal T and the special parities P_1 and P_2 will be spontaneously broken [6]. If the diquark condensates q_C τ_2 λ_2 q could be formed, then the color symmetry SU_c(3) will be spontaneously broken down to SU_c(2) and the flavor number U_f(1) will be spontaneously broken, but a "rotated" electric charge U_Q(1) and a "rotated" quark number U'_q(1) remain unbroken [7].
In addition, the parity P will be spontaneously broken, though all the other discrete symmetries survive. This implies that the diquark condensates q C τ 2 λ 2 q will be a pseudoscalar. In this paper we will neglect discussions of the Goldstone bosons induced by breakdown of the continuous symmetries and pay our main attention to the problem of interplay between the above two forms of condensates. III. EFFECTIVE POTENTIAL IN MEAN FIELD APPROXIMATION Define the order parameters in the 3D GN model by then in the mean field approximation, the Lagrangian (1) can be rewritten by where are the expressions of the quark fields in the Nambu-Gorkov basis [8]. In the momentum space, the inverse propagator S −1 (x) for the quark fields may be expressed by The effective potential corresponding to L given by Eq. (4) becomes (6) Similar to the case of the 2D NG model [4], the calculations of Tr for (red, green) and blue color degrees of freedom can be made separately thus Eq.(6) will be reduced to After the Wick rotation, we may define and calculate in 3D Euclidean momentum space where Λ is the 3D Euclidean momentum cut-off. Assume that Λ ≫ |σ − |∆||, Λ ≫ σ + |∆| and Λ ≫ σ, then by means of Eq.(8) we will obtain the final expression of the effective potential in the 3D GN model IV. GROUND STATES Equation (9) provide the possibility to discuss the ground states of the model analytically. The extreme value conditions ∂V (σ, |∆|)/∂σ = 0 and ∂V (σ, |∆|)/∂|∆| = 0 will lead to the equations Define the expressions where A, B and C represent the second order derivatives of V (σ, |∆|) with the explicit expressions Equations (10) and (11) have the four different solutions which will be discussed in proper order as follows. In summary, if the necessary conditions G S Λ > π 2 /12 and H P Λ > π 2 /8 for non-zero σ and ∆ are satisfied, then the least value points of the effective potential V (σ, |∆|) will be at As a result, in the ground state of the 3D two-flavor GN model, depending on that the ratio G S /H P is either bigger or less than 2/3, one will have either pure quarkantiquark condensates or pure diquark condensates, but no coexistence of the two forms of condensates could happen. V. CONCLUDING REMARKS The result (22) in the 3D GN model can be compared with the ones in the 4D NJL model and in the 2D GN model. The minimal points of the effective potential V (σ, |∆|) for the latter models have been obtained and are located respectively at with C = (2H S Λ 2 4 /π 2 − 1)/3 and Λ 4 denoting the 4D Euclidean momentum cutoff in the 4D two-flavor NJL model, if the necessary conditions G S Λ 2 4 > π 2 /3 and H S Λ 2 4 > π 2 /2 for non-zero σ and ∆ are satisfied [2], and in the 2D two-flavor GN model [4]. In Eqs.(23) and (24), G S and H S always represent the coupling constants in scalar quark-antiquark channel and scalar diquark channel separately. By a comparison among Eqs.(22)-(24) it may be found that the three models lead to very similar results. In all the three models, the interplay between the quarkantiquark and the diquark condensates in vacuum depends on the ratio G S /H D (D = S for the 4D and 2D model and D = P for the 3D model). In particular, the diquark condensates could emerge (in separate or coexistent pattern) only if G S /H D < 2/3. This is probably a general characteristic of the considered two-flavor fourfermion models, since in these models the color number of the quarks participating in the diquark condensates and in the quark-antiquark condensates is just 2 and 3 respectively. 
However, there are also some differences in the pattern of realizing the diquark condensates among the three models, though the pure quark-antiquark condensates arise only if G_S/H_D > 2/3 in all of them. In the 2D GN model, the pure diquark condensates emerge only if G_S/H_S = 0, and this is different from the 4D NJL model, where the pure diquark condensates may arise if G_S/H_S is in a finite region below 2/3. Another difference is that in the 3D GN model there is no coexistence of the quark-antiquark condensates and the diquark condensates, whereas such coexistence is clearly displayed in the 4D and 2D models. This implies that in the 3D GN model, G_S/H_P = 2/3 becomes the critical value which distinguishes between the ground states with the pure diquark condensates and with the pure quark-antiquark condensates. It is also indicated that if the two-flavor four-fermion interaction models are assumed to be simulations of QCD (of course, only the 4D NJL model is the true one) and the four-fermion interactions are supposed to come from the heavy color gluon exchange interactions −g(q̄γ^µ λ^a q)² (a = 1, · · · , 8; µ = 0, · · · , D − 1) via the Fierz transformation [7], then one will find that in all three models, for the case of two flavors and three colors, the ratio G_S/H_D is always equal to 4/3, which is larger than the above critical value 2/3. From this we can conclude that, in such simulations, only the pure quark-antiquark condensates could be formed in the vacuum of all three models.
Social Learning for Energy Transition—A Literature Review With increasing concerns regarding environmental sustainability, energy transition has emerged as a vital subtopic in transition studies. Such socio-technical transition requires social learning, which, however, is poorly conceptualized and explained in transition research. This paper overviews transition research on social learning. It attempts to portray how social learning has been studied in the context of energy transition and how research could be advanced. Due to the underdevelopment of the field, this paper employs a narrative review method. The review indicates two clusters of studies, which portray both direct and indirect links concerning the phenomena. The overview reveals that social learning is a force in energy transition and may occur at different levels of analysis, i.e., micro, meso, and macro, as well as different orders of learning. The author proposes to develop the academic research on the topic through quantitative and mixed-methods research as well as contributions and insights from disciplines other than sociology and political science. Some relevant topics for further inquiry can be clustered around: orders of social learning and their antecedents in energy transition; boundary-spanning roles in social learning in the context of energy transition; social learning triggered by stories about energy transition; and other theoretical underpinnings of energy transition research on social learning. Introduction The energy sector plays a vital role in contemporary societies, since it contributes to their socioeconomic development and well-being. With increasing concerns regarding environmental sustainability, policies directed at the energy sector have moved toward more efficient technologies and clean energy resources that call for energy transition [1][2][3]. For European Union countries, the European Commission indicated the goals for the energy transition up to 2030 and 2050 in the regulations of 2019 [4]. They laid the regulatory foundations for the so-called socio-technical transition of the energy sector in the member states, as energy transition should be understood [5][6][7]. It is a transition "that requires the co-evolution of social, economic, political, and technical factors" [8] as well as social learning [9]. Social learning, that is, an interactive and dynamic process of knowledge creation and acquisition among various societal actors through social interactions [10], is especially emphasized in research on the socio-technical transition process that leads to enhanced system sustainability [10][11][12][13][14][15]. Yet, in the view of Scholz and Mehner, "it remains weakly conceptualized and elaborated in transition research" [3] (p. 323). Moreover, Mierlo and Beers claim that "well-established research fields related to learning are broadly ignored or loosely applied" in transition studies [15] (p. 255). Nevertheless, social learning is an essential factor in the socio-technical transition process since it induces indispensable changes in actors' norms, values, goals, and operational procedures. These changes govern the decision-making processes and actions necessary to turn ideas into daily practice [11,13]. In the context of the energy sector, the topic of social learning appears to be even less developed. This paper intends to demonstrate this research gap. 
It critically reviews the available research on social learning for energy transition, and attempts to portray how social learning has been studied in the context of energy transition and how research could be advanced in order to expand the domain of transition studies. The specific questions that the review answers are as follows: (1) Is social learning defined in a reviewed study, and, if so, how? What is the role of social learning? (2) In what context are social learning and energy transition studied in the reviewed research? (3) Does the reviewed study show links between social learning and energy transition, and if so, what are they? and (4) Which questions in the study remain unanswered with regard to social learning in the context of energy transition? Due to the scant research on the issue that is currently available, this paper applies a narrative review method, which allows the advancement of conclusions from a limited number of various types of studies, including empirical and conceptual work. Such a review should not only summarize works but also develop "original thinking that builds on an integration of the literature reviewed" [16] (p. 186) and is feasible for a single researcher. It ought to propose conclusions on how to further expand the field. Thus, the current review also attempts to address the question of how the links between social learning and energy transition can be approached in future research. It may advance transition studies with regard to energy transition and social learning. In the next section, the paper first briefly portrays the frames for the studied phenomenon delineated by the domain of transition research. Furthermore, it explains the methodological approach for the selection of studies for the review. It briefly sums up each work and groups them into two clusters. It also attempts to address the aforementioned specific questions and advances conclusions concerning what we know about social learning for energy transition, what we still do not know in that respect, and how the field can be developed in the future. The review expands the existent knowledge of transition studies. Transition Studies We have been able to observe the rapid growth of transition studies over recent years [17]. The domain is concerned with transitions, i.e., wide-ranging, fundamental, structural changes of various socio-technical systems, such as the energy, transportation, food, and health systems. Hence, the topic of energy transition is discussed in transition studies. The field is interdisciplinary and has been developed by researchers from the fields of sociology, political science, economics, psychology, management, engineering, geography, and philosophy. It tackles the issues of "governance, power and politics, civil society, culture and social movements" [17] (p. 5) in regard to system transitions. The domain has adopted many theoretical backgrounds designed for transition studies, i.e., the multi-level-perspective framework, strategic niche management, transition management and technological innovation systems [18], yet many authors have developed unique theoretical approaches by combining perspectives from different disciplines [17]. Qualitative methods, especially the case study method, have been mainly used to address research questions in transition studies. Zolfagharian et al. 
[17] indicate that transition studies have recently been enriched by focusing on other factors influencing transition processes which, although critical, have been overlooked in prior research. Social learning appears to be such a factor [15]. Mierlo and Beers [15] notice that transition studies quite often mention the term social learning, which is not surprising, since social learning shares many similarities with transition studies, namely: the multi-stakeholder approach, the focus on structural change, and the diverse time horizons. Despite this, transition scholars have seldom adopted theories of social learning, differentiated between outcomes and processes of social learning, or investigated certain negative effects, such as learning to resist change. The following review of energy transition studies on social learning leads to similar conclusions. Method This review provides a narrative, critical synthesis of various types of research around the topic of social learning in the context of energy transition. It attempts to portray how social learning has been studied in the energy transition setting and how research could be advanced. There are four specific questions that the review intends to answer: (1) Is social learning defined in a reviewed study, and if so, how? What is the role of social learning? (2) In what context are social learning and energy transition studied in the reviewed research? (3) Does the reviewed study show links between social learning and energy transition, and if so, what are they? (4) Which questions in the study remain unanswered with regard to social learning in the context of energy transition? Despite its narrative character, the review protocol was guided by the PRISMA 2020 checklist [19] (see Figure 1). Two databases were used as search engines, namely Scopus and the Web of Science, since both allow a search strategy based on simultaneous searching for published, peer-reviewed studies according to a set of keywords in the paper's title, keywords, and abstract. The following keywords were applied: "social learning" AND "energy transition". The aim behind this search strategy was to find studies that place sufficient emphasis on the links between social learning and energy transition. Thus, any research that did not mention both pairs of words in either the title, abstract or paper keywords was excluded from the analysis as irrelevant in terms of the aim of the paper. Moreover, both databases were searched as they include top-tier journals with a strong impact on various fields of science.
Additionally, these databases index and provide references to papers across the sciences collected in other academic search engines, e.g., Wiley, ScienceDirect, or SpringerLink. In the search strategy, studies were excluded if they were not published as peer-reviewed papers or if they were published in a language other than English (as this limits their international readership). The search strategy was not limited to any specific field, due to the interdisciplinary nature of transition studies, nor to a time period. The search provided only 17 unique records after duplicates from both databases had been removed. In the initial screening of the results, four records were eliminated (one paper was written in German, one record was a working paper, one paper provides references to social learning in the main text only, and another was a book chapter). As a result, 14 papers were retrieved and analyzed for eligibility, and subsequently included in the review (see Table 1). Results This section discusses the overall characteristics of the reviewed papers and how they approached social learning in the context of energy transition. It also indicates directions for the development of scholarship. Basic Characteristics of Studies The reviewed papers were published between 2017 and 2021: five studies were released in 2021, four in 2017, two each in 2020 and 2018, and one in 2019. The authors mainly used the journal Energy Research & Social Science for the dissemination of their research (see Table 1). After reading the entire text of each paper, nine out of fourteen were classified as studies that directly tackle social learning in the context of energy transition, and the remaining five as those where the topic is not treated as a primary thread (see Table 2).
Mah et al. [20]. Design: qualitative research based on a case study of a national deliberative poll on energy conducted in Japan in 2012. Field and theoretical background: public management, nuclear governance; deliberative participation theory, social learning theory. Aim and main arguments: to develop a learning-oriented framework to evaluate deliberative participation under the influence of pre-existing government-industry-citizen relations as a key contextual factor; citizens' involvement in deliberative processes helps them learn and progress to higher-order learning; political contextual factors may constrain social learning in deliberative processes, while social learning can create forces for energy transition. Setting: the Japanese nuclear sector. Treatment of social learning: social learning is presented as an intentional collective process of self-reflection that occurs through interaction and dialogue among diverse stakeholders; the paper emphasizes its role as one of the outcomes of deliberative participation and "an analytical lens to understanding the new roles of citizens and the changing relationships between citizens, governments, and the nuclear industry in energy governance" (p. 126); the paper analyzes three orders of social learning, i.e., instrumental learning, communicative learning, and transformative learning. Findings: all three orders of learning were achieved via deliberative participation processes; access to multiple information sources, diverse perspectives, and dialogic processes were instrumental in the progress to higher orders of learning; deliberative participation can improve nuclear governance by fostering higher orders of social learning among citizens. Open questions: the extent to which social learning outcomes in deliberative participation affect actual energy transition in other countries and other energy subsectors; the contextual factors that influence social learning in deliberative participation in a given country and other energy subsectors.

Sillak et al. [28]. Design: conceptual study that shows the application of a framework to an energy transition project in an urban context in Denmark. Field and theoretical background: public management, strategic management planning; collaborative approaches. Aim: to critically review co-creation and other selected collaborative approaches in order to develop an assessment framework for co-creation in strategic planning for energy transitions. Setting: a municipality in Denmark. Treatment of social learning: social learning is not directly defined, yet it is included as one of the activities that foster transformative power in co-creation; the study refers to and defines two orders of social learning. Findings: the outcome of the study is an assessment framework for co-creation in strategic planning for energy transitions in which social learning is among a set of activities in the co-creation process. Open questions: the role of social learning among other activities in the co-creation process for energy transition; the interplay among different activities and social learning in co-creation; the role of social learning in different phases of co-creation.

Darby (2017) [29]. Design: qualitative study of households based on interviews and observations; 12 interviewees. Field and theoretical background: energy policy studies, transition research; the theoretical background is not specified. Aim: "to let some voices speak to the reader and encourage reflection on the places, resources and experiences that inevitably influence the understanding, form and timing of energy transitions" (p. 122). Setting: households in Scotland. Treatment of social learning: although included in the paper's keywords, social learning is not defined in the study and is only intermittently referred to; energy transition can emerge from social learning; energy transition operates at many levels; transition is a non-uniform process influenced by geography and history. Findings: "housing, climate, demographics and social networks, all place-specific, are important in influencing how energy is captured and used" (p. 126); the study reveals 'learning stories' in everyday transition. Open questions: the role of personal and organisational energy stories in shaping energy transition; energy advice as a boundary spanner in energy transition.
The authors approached the topic through mainly qualitative studies; one research paper was a mixed-methods study, three papers were conceptual and one was a theoretical review. In terms of the methods used for data collection, three papers reported data from interviews, three from case studies (two from a single case study and one from multiple case studies), two papers presented the results of action research, and two presented network analyses; moreover, other studies used observation and experiments. Two papers analyzed and compared data from more than one country [21,30]. All the studies conducted refer to various disciplines within social science; however, two disciplines are the most influential, i.e., sociology and political science. The authors locate their studies in different areas, e.g., public management studies [20,28], landscape studies [10], innovation studies [21,22,25], or adult education research [23]. Transition studies are represented in most of the papers [22,24,27,29,30]. The theoretical background is not always revealed in the research, yet transition theory, especially socio-technical transition and the multi-level perspective [22,25,30], has substantiated most studies [22,24,27,29,30] and appears as a cutting-edge theory in energy transition studies. This is not surprising because transition theory focuses "on determining social movements and social innovations [and its] effectiveness is measured by legitimacy and social learning" [9] (p. 776). Social Learning: Concept and Role in the Study First and foremost, it needs to be stressed that only a limited number of the studies define social learning [14,20,21,24], and one paper refers to a single order of it, transformative learning [25]. The authors of three papers [14,20,21] refer to a definition of social learning proposed by Reed et al. [32], who perceive it as learning through social interaction that requires reflection and leads to changes in understanding, attitudes, beliefs, and values. Yet the studies that do not define social learning portray it as learning in social interactions as well [10,14]. Such learning has an impact on the wider social context and involves sharing ideas, knowledge, and experiences among individuals. Proka et al. [24] also highlight knowledge exchange in their definition of social learning. In addition, Pellicer et al. describe social learning as learning in a social action process and indicate its two orders, instrumental (learning how to do something) and transformative ("changes in mental frames and assumptions") [25] (p. 102). Likewise, other studies distinguish distinct kinds (or levels, loops, or orders) of social learning.
Mah et al. [20] analyze three orders of learning, i.e., instrumental learning (acquiring new knowledge or skills), communicative learning (understanding and reinterpreting knowledge through deliberative processes), and transformative learning (reflecting on the underlying assumptions that lead to a change in perceived values). In their study, they showed that deliberative participation processes supported all three orders of learning and identified factors that enabled progress to higher orders of learning (see Table 2). Transformative learning (here the second order of learning) is separately investigated in research by Pellicer et al. [25], who emphasize its reflective character. Their study shows that both micropolitical and macropolitical factors influenced the emergence of first- and second-order learning, which in turn shaped three different sustainability strategies proposed by the grassroots initiatives studied. References to first- and second-order learning can also be found in a study by Sillak et al. [28]. According to this research, first-order learning allows an understanding of how a goal can be achieved, while second-order learning is more reflective and leads to a determination of which goals are worth achieving. The authors formulate an assessment framework for co-creation in strategic planning for energy transitions that involves social learning as an activity in the co-creation process. Transformative learning, according to Reed et al., is associated with double-loop learning that assumes "reflecting on the assumptions which underlie our actions" [32] (p. 4). In their conceptual study, Milchram et al. [14] proposed a dynamic framework for institutional change that is instrumental in energy transition, which includes double-loop and triple-loop learning (the latter involving changes in the existing exogenous variables). In their framework, controversies concerning values can be expressed in social interaction and stimulate double- and triple-loop social learning, both of which may be decisive in transition processes driven by institutional change. With regard to the role of social learning, the reviewed research demonstrates that social learning is a key element in change or transition processes [9,14,21,[24][25][26][28][29][30] as well as being vital from a participatory perspective [10,14,20]. It can also be identified as one of the orientations in adult learning, which is particularly needed in energy transition [23]. Social Learning and Energy Transition: Direct and Indirect Links As a starting point, one remark ought to be made. The research regarding social learning for energy transition is mainly based on rather scant qualitative studies and a few conceptual studies; as such, inferring antecedent variables and outcomes of social learning would be inappropriate. An absence of quantitative studies is noticeable. Nevertheless, the reviewed research allows for an understanding of how the authors relate both phenomena. Direct Links The cluster of studies on the direct links between social learning and energy transition includes nine works, namely six empirical papers, two conceptual papers, and one theoretical paper. Mah et al. [20] link social learning and energy transition through citizens' participation in deliberative processes. The study suggests that all the orders of learning observed in deliberative processes can create forces for energy transition. Boyle et al. [10] and Proka et al. [24] connect social learning with energy transition through action research.
Social learning can be stimulated by action research, leading to a more sustainable and just transition [10] or to strategizing by incumbent regimes [24]. Regarding the latter, the study by Proka et al. [24] did not confirm the links. Matschoss et al. [21] hold that energy transition can be achieved through change initiatives concerning practices and related values, and studied whether more extensive social learning in community living labs is conducive to engagement in the change process. In another study, Matschoss and Repo [22] identified patchworks of niches directed toward energy transition in Finland and claim that social learning can be developed through interconnections and experimentation therein. Pellicer et al. [25] studied grassroots initiatives and assumed that energy transition requires various sustainability strategies, which, in turn, are shaped by first- and second-order learning driven by micropolitical factors (concerning a niche) and macropolitical factors (concerning landscape and niche-regime interactions). Skjolsvold et al. [26] analyzed how "smart" energy feedback technologies can be helpful in the quest for energy transition in households. They identified four processes, which they labeled re-arrangements (concerning knowledge, materials, social relations and routines), triggered by feedback technology. These processes involve learning which can support transformative actions. Picci et al. [23], in their conceptual work, analyze the training of civil servants from the adult learning perspective and examine different orientations in adult education. They posit that a social learning environment presents synergistic links with constructivist research through a design approach that can enhance energy transition through capacity building. Another conceptual study, by Milchram et al. [14], links social learning and energy transition through the Institutional Analysis and Development framework extended with feedback loops of social learning. They contend that value controversies can initiate social learning processes that may eventually bring about structural change. Finally, Edomah et al. [9] focus on the theoretical underpinnings of changes in energy supply infrastructure, which can be perceived as a component of energy transition. The relationship between social learning and energy transition can be determined from the socio-technical transition perspective, which sees social learning as an effectiveness factor in social movements and social innovations. Indirect Links This cluster encompasses four empirical works and one conceptual paper. Groves et al. [27] analyze how the credibility of energy system visions was achieved and posit that social learning is part of the process of the development of visions for demonstrators. Social learning concerns local values, a factor frequently missing in systemic visions, and critical reflections about key assumptions underlying dominant system-level visions. The idea of energy transition as a non-uniform process influenced by geography and history, as well as operating at many levels, is underlined in a study by Darby [29], who portrayed 'learning stories' in the everyday energy transition of households in Scotland. Social learning as a way of using knowledge from existing cases of energy transition to inform future cases is suggested by Akizu et al. [30]. After comparing different paths of energy transition in the South and the North, they recommend a democratized, universal energy model.
The last empirical study in this cluster, conducted by Asayama and Ishii [31], is based on discourse analysis and relates to news media storylines concerning carbon capture and storage. They hold that energy transition, at least as regards the choice of a given technology, can be influenced by news media stories. Therefore, they posit that media should enhance social learning about a given project. The only conceptual study in this cluster, by Sillak et al. [28], proposes a framework to assess co-creation in strategic planning for energy transition. Social learning is included in this framework as one of the activities that foster transformative power in co-creation. Possible Directions for the Development of Inquiry This review allows the researcher to form an initial conclusion, namely, that energy transition research on social learning shares a similar methodological concern with the sustainability transition subfield and transition studies in general, which indicates directions for the development of this scholarship. In all these fields of study, qualitative research designs dominate over quantitative and mixed-method approaches [7,17]. The studies "struggle to connect with disciplines leaning more on quantitative methods (...) and to generate generic insights beyond single cases" [7] (p. 172). Hence, quantitative and mixed-methods research may offer promising prospects for energy transition research on social learning, e.g., a quantitative analysis of social networks and social learning directed toward energy transition, or of how the diversity of various stakeholders of energy systems facilitates different orders of learning. Nevertheless, by the very nature of energy transition and social learning, quantitative approaches may not fit all types of posited research questions. Second, although the scholarship is interdisciplinary, to date it has been dominated by two disciplines, sociology and political science. Contributions and insights from other disciplines, such as psychology or management, could also be beneficial. Examples may include the psychological study of the social learning of pro-environmental behaviors and their role in energy transition, green servant leaders as role models in energy transition, the organizational approach to studying superficial social learning in niches, the role of unlearning in social learning, or the occurrence of social learning to resist changes, etc. Third, the overview of the research enabled the identification of potential avenues for future research inspired by each included study. These can be clustered around: orders of social learning and their antecedents in energy transition; boundary-spanning roles in social learning in the context of energy transition; social learning triggered by stories about energy transition; and other theoretical underpinnings of energy transition research on social learning, e.g., the study of how social interactions affect the behavior of energy users inspired by Bandura's social learning theory [33], or research on the conditions facilitating social unlearning in energy transition projects underpinned by organizational learning theory. Summary of Narrative Review To recap, based on the studies reviewed, social learning is a crucial force in energy transition; such a transition requires the structural change of energy systems, and social learning can promote it. It may occur at different levels of analysis, i.e., micro, meso and macro, and is embedded in social interactions, which may expose controversies concerning values. Social learning facilitates critical reflections about key underlying visions of energy systems. It can support sustainable and just energy transition as it shapes sustainability strategies.
Such learning may enable transformative actions triggered by feedback technology. In addition, social learning environments enhance energy transition through capacity building. Energy users may learn from others, while policymakers learn from existing cases about energy transition. Moreover, media can be involved in social learning to promote energy transition. Finally, social learning fosters the effectiveness of social movements and social initiatives engaged in the transition process. To date, social learning has been observed in participation processes, action research, change initiatives, and patchworks of niches. It is part of everyday transition. In addition, it is recommended as a transmission mechanism among various social actors concerning knowledge about energy transition strategies, visions, and values. However, future studies should further develop energy transition research on social learning since, in the view of the author, without a deep understanding of the role and antecedents of social learning in different contexts, turning transition ideas into daily practice may be complicated. The current research has insufficiently differentiated between the processes and outcomes of social learning, its conducive and hampering factors, the durability of changes, or learning to resist changes. Moreover, an imbalance between qualitative and quantitative research is visible. As suggested in another review study on transition research: "Extending the methodological toolbox beyond primarily qualitative process theories might lead to a better understanding of possible intervention strategies-and as such, greater policy impact" [17] (p. 12). Conclusions The field of transition studies is growing and energy transitions appear to be its main focal point [7]. Nonetheless, energy transition research on social learning is still sparse, as this overview has demonstrated, and further elaboration of previous research is necessary. Concerning contributions, the review elaborates upon energy transition studies by showing how links between social learning and energy transition have been analyzed by scholars, which relationships have been identified in prior studies, which theoretical backgrounds have substantiated the studies, and what is still lacking in energy transition research on social learning. This approach allowed suggestions to be made pertaining to how the field can be advanced, i.e., through methodological and theoretical diversity, contributions and insights from more disciplines and research on some possible areas requiring more thorough investigation, as indicated above. More efforts have to be made to better understand social learning in energy transition. The review aims to inspire future empirical research, in which crossovers between transition studies and learning traditions are necessary. The review informs practice and policymakers with respect to the critical role of social learning for energy transition, yet it also claims that the research field is underdeveloped. Concerning practical and political implications, the review indicates that diversity of stakeholders, through divergent perspectives and value controversies, can be conducive to social learning for energy transition and lead to higher orders of learning. However, political and contextual factors may constrain social learning in deliberative processes and should be addressed by appropriate policies. 
Furthermore, policymakers ought to enhance interactions and dialogue among diverse stakeholders, support their access to multiple information sources about energy transition, and involve various actors in discussions about visions of energy systems. Policymakers may enable the functioning of social movements and initiatives engaged in the transition because these grassroots forces create the conditions for social learning. Both practitioners and politicians should look for actors who can potentially play the role of boundary spanners in energy transition. Moreover, media should also be involved in social learning to promote the necessary changes. Nevertheless, this review is also limited by the predefined search strategy, which excluded certain search engines, and consequently publications, from the analysis. A narrative review method chosen to analyze selected papers is also, by its very nature, subjective. The values and attitudes of the author could have affected the evaluation. Furthermore, narrative reviews are criticized for lacking synthesis and rigor [34], yet the author addressed this issue by applying the PRISMA checklist to the literature analysis. Nonetheless, the review should be subject to further advancements by other scholars.
Presenting information on dental risk: PREFER study protocol for a randomised controlled trial involving patients receiving a dental check-up Introduction A new dental contract being tested in England places patients into traffic light categories according to risk (Red = High risk). This reflects health policy which emphasises patients' shared responsibility for their health, and a growing expectation that clinicians discuss health risk in consultations. Alongside this, there are technological developments such as scans and photographs which have generated new, vivid imagery which may be used to communicate risk information to patients. However, there is little evidence as to whether the form in which risk information is given is important. Methods The PREFER study is a pragmatic, multi-centre, three-arm, patient-level randomised controlled trial, based in four NHS dental practices, from which 400 high/medium risk patients will be recruited. The study compares three ways of communicating risk information at dental check-ups: 1) verbal only (usual care); 2) a Traffic Light graphic with verbal explanation; 3) a Quantitative Light-Induced Fluorescence (QLF) photograph showing, for example, patches of red fluorescence where dental plaque has been present for two days or more (with a verbal explanation). The study assesses patient preferences using the economic preference-based valuation methodology Willingness-to-Pay (WTP). Any changes in oral self-care (for example in tooth-brushing) will be measured by self-report, and clinical outcome data collected by clinicians and extracted from QLF photographs. Predictors and moderators of any behaviour change will be explored using demographic characteristics and psychological variables from the Extended Parallel Process Model. A cost-benefit framework will explore the financial implications for NHS dentistry of the three risk presentation methods. Background The communication of risk information is a fundamental part of nearly all health promotion interventions [1], and the emphasis on this is growing, given the government values of freedom, fairness and responsibility articulated in recent health policy [2]. This is reflected in the NHS general dental practice context, where a new model of remuneration is being piloted, based on a care pathway approach which separates patients into 'Red' (high), 'Amber' (medium), and 'Green' (low) risk categories (RAG) [3]. The categorisation is intended to inform conversations about patient self-care behaviours such as eating less sugar and improving tooth-brushing, which are key lifestyle changes known to improve oral health [4]. However, although a link between clinician-patient communication and post-consultation outcomes has been established, the relationship is not straightforward, since relationships between communication behaviour, meaning and evaluation are complex [5]. Communicating disease risk is especially complex, given that risk judgements are 'imbued by emotion' and 'always interpreted via a social and cultural lens' [6]. Specifically, it is clear that patients do not think about risk as it objectively exists, as a continuum represented by numeric estimates [7,8]. Instead, patients use heuristics, simplified 'rules of thumb', that allow them to understand and make decisions [9][10][11][12]. Thus, the form in which risk information is presented to patients is especially important.
Providing personalised information in a simplified and accessible way, such as the proposed RAG categories, therefore potentially influences patients' responses to information on risk. However, little previous research has been undertaken on whether the form in which risk information is presented matters [13]. In particular, no previous studies have compared patients' preferences for different forms of risk information given in a clinical setting [13]. Developments in medical technology mean that the range of possible forms in which risk information can be presented to patients has grown, with routine scans and radiographs now able to demonstrate body fat, heart function, osteo-arthritis of joints etc. Previous studies have shown that medical imagery giving a vivid representation of the consequences of unhealthy behaviour can enhance risk communication, although these have used generalised, not personal images, which provide less tailored information about risk status [14][15][16]. Quantitative Light-Induced Fluorescence (QLF) is a recent technological development in the dental field. A QLF camera produces images of teeth, which allows visualisation of tooth mineral loss at a stage before it is visible with the naked eye. It also highlights plaque which has been present in the mouth for more than 48 h [17]. By imaging previously unseen consequences of poor dental self-care, QLF has considerable potential as a risk communication tool, but is, as yet, untested. This study aims to investigate the benefits of two alternative means of communicating risk information to patients: a colour-coded RAG graphic, and a QLF image of their teeth and gums, in support of the usual verbal communication between dentist and patient, comparing these to usual care. Of particular interest is the value which patients attach to the different information forms tested, as measured by Willingness-to-Pay (WTP), a measure which is widely used in health economics for measuring patients' preferences and determining the economic value of various services [18]. Study design Using a randomised controlled trial design, we will compare patients' valuation of and responses to information given 1) verbally (usual care), [V]; 2) verbally accompanied by a traffic light graphic, [TL]; and 3) verbally accompanied by a QLF image, [QLF]. We expect patients to prefer risk information presented in the traffic light and/or QLF groups more than usual verbal communication. We also expect to see a greater improvement in oral health behaviours in the traffic light and/or QLF groups compared to the usual care group. Theoretical model Imagery and numeric risk estimates are thought to influence people's reaction to risk messages by increasing patients' perception of the threat to their health and well-being, thus heightening fear regarding any negative consequences of inaction [19]. The Extended Parallel Process Model (EPPM) describes how two appraisals determine whether a risk communication will prompt patients to adopt healthier behaviour (Fig. 1) [20]. Firstly, threat appraisals (encompassing perceptions that negative health outcomes are likely and severe) are postulated to lead to protective behaviour provided that the coping appraisal is also high. Coping appraisal refers to patients' perceptions that they can change unhealthy behaviour (self-efficacy) [21], and that these changes will reduce risk (outcome efficacy). If coping appraisals are high, generating perceptions of threat and fear is thought to promote behavioural change.
On the other hand, if coping appraisals are low, this is thought to lead to defensive behaviours (such as denial of the message), even where individuals perceive themselves to be at risk of a threat [20,22]. Imagery, in particular has been associated with defensiveness [23]. The EPPM points to the possibility that certain risk communications can have negative as well as positive effects on individuals [22]. We will therefore use the EPPM as a framework to help understand why traffic light or QLF supplements to usual verbal risk communication at dental check-ups are or are not effective, and how effectiveness of risk communication may be improved. Study objectives • To measure individuals' preferences for three different risk communication forms using Willingness-to-Pay methods. • To identify any differences in preference for information form between differing demographic, behavioural and psychographic groups. • To use variables derived from the EPPM model to predict the likelihood that different information leads to behaviour change; and to measure any actual behaviour change, exploring links between behaviour and patients' valuations. • To conduct a cost-benefit framework analysis of the three different methods and to explore the financial implications for NHS dentistry. Setting Patients will be recruited from four NHS dental practices in two areas of the North of England, which are not involved in piloting of the new NHS dental contract using a RAG categorisation for all patients [24]. Practices will be invited to participate by working down a list of randomly numbered NHS dental practices, until two practices in each geographical area are recruited (excluding single-handed practices in view of these being unlikely to generate sufficient patient throughput to meet recruitment targets). Practices expressing an interest in participation will be provided with an information sheet and will consent to take part in the study by the practice owner/s signing a dental practice consent form. Participants Participants will be recruited by trained staff at each dental practice. Patients will be approached to take part when making a dental check-up appointment. Inclusion criteria NHS adult patients (aged 18 years or older) deemed to be high/ medium risk for poor oral health identified using a nationally developed algorithm, applied by the dental practice [25]. These may be either new patients or regular attenders at that practice. Patients will be screened for eligibility when making the appointment (for example: patient reported symptoms, medical history such as poorly controlled diabetes, and/or health behaviours such as smoking), although eligibility will be fully determined after a clinical examination by a dentist during the dental check-up. This follows the model currently being tested in NHS dental practices where patients are stratified into high/ medium risk groups based on a combination of social history/medical history (patient factors) and clinical assessment criteria [26]. For simplicity, clinical criteria for risk assessment are limited to the most common/serious clinical criteria of dental caries and periodontal (gum) disease factors, with soft tissue lesions and non-carious tooth surface loss (erosion, attrition & abrasion) assessment criteria not included (Fig. 2). Risk for caries and periodontal disease are assessed separately, and then the highest risk rating applied to the patient. 
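Because the patient's overall RAG category is simply the higher of the two separately assessed ratings, the assignment and eligibility logic can be illustrated with a minimal sketch. The function names and category labels below are illustrative assumptions for this protocol description, not part of the nationally developed algorithm cited above.

```python
# Minimal sketch of the overall risk assignment described above: caries and
# periodontal risks are rated separately and the patient receives the higher
# (worse) of the two. Names and ordering are illustrative assumptions.
RAG_ORDER = {"Green": 0, "Amber": 1, "Red": 2}  # low, medium, high risk

def overall_risk(caries_risk: str, perio_risk: str) -> str:
    """Return the patient-level RAG category as the higher of the two ratings."""
    return max(caries_risk, perio_risk, key=RAG_ORDER.__getitem__)

def eligible_for_prefer(caries_risk: str, perio_risk: str) -> bool:
    """Only Amber (medium) and Red (high) risk patients are eligible; Green is excluded."""
    return overall_risk(caries_risk, perio_risk) in ("Amber", "Red")

if __name__ == "__main__":
    print(overall_risk("Green", "Red"))           # Red
    print(eligible_for_prefer("Green", "Green"))  # False
```

Taking the worse of the two ratings is a conservative rule: a patient who is low risk for caries but high risk for periodontal disease is still treated as high risk overall.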
Exclusion criteria Patients identified as 'Green' (low risk), based on the absence of either clinical or patient-related factors that increase risk [26], are excluded. Also excluded are patients attending as a new patient for an emergency appointment, since they do not routinely receive a full dental check-up, and edentulous patients. Although patients with low literacy will be included in the study, patients who require an interpreter to participate in treatment will be excluded. Group 1: Verbal information (usual care) Patients will receive their usual verbal information from the dentist following their check-up. The dentist will mark any of the six main areas of recommended actions covered in the conversation on a printed credit-card sized card, which will then be given to the patient to take away (Fig. 3). The advice 'Following your dental treatment plan' relates to returning for future dental visits which have been scheduled for that course of treatment. Group 2: Verbal information plus traffic light graphic Patients will receive the same card as Group 1 with any messages covered, marked by the dentist (Fig. 3), but in addition on the reverse side will be a Red/Amber traffic light graphic, corresponding to the risk category to which the patient has been assigned after the clinical assessment (Fig. 4). Group 3: Verbal information plus QLF photograph The dentist will explain the QLF images (anterior teeth and gums only) to the patient and use this to deliver any preventive advice (Fig. 5). A credit-card sized colour copy of the QLF photograph which is most relevant to the advice given (plaque coverage or demineralised areas) will be printed. On the reverse of this card, a sticker will be applied to replicate the Group 1 card (Fig. 3), with any messages covered, marked up by the dentist in the same way. Randomisation and allocation concealment The trial will involve simple randomisation of patients into the three arms in a 1:1:1 ratio. Randomisation will occur just after enrolment by dental staff taking a sequentially numbered envelope. The allocation will be revealed by opening the envelope just prior to the information being given (i.e. by the dentist, witnessed by the dental nurse in the dental surgery, after the check-up has been carried out, including the baseline clinical outcome assessment, BPE). The allocation sequence will be drawn up by the trial statistician using computer-generated random numbers with block stratification by each of the four dental practices and random variable block sizes. The trial statistician will be blinded to allocation until the final statistical plan is agreed. The researcher extracting clinical outcome data from QLF photographs (plaque coverage and tooth demineralisation) will also be blind to group allocation, as will the researcher gathering 6- and 12-month follow-up data. Primary outcome (Willingness-to-Pay) Willingness-to-Pay (WTP) will be used to quantify people's preferences for the three forms of information (V, TL, QLF). WTP is recognised as representative of how consumers respond to health care decision making [27]. Other economic preference-based measures, such as health state utility measures, are deemed unlikely to be sensitive enough to detect small changes in utility, whereas discrete choice experiments to determine WTP indirectly are deemed over-complicated for implementation in research based in dental practices [28], which leaves WTP as the most appropriate valuation tool for this context [29].
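As an illustration of the allocation procedure described under 'Randomisation and allocation concealment' above (1:1:1 allocation, computer-generated random numbers, stratified by dental practice, with random variable block sizes), a minimal sketch of how such a sequence could be generated is given below. It is not the trial statistician's actual program; the practice labels, block sizes and seed are assumptions.

```python
import random

ARMS = ["V", "TL", "QLF"]        # verbal only, traffic light, QLF photograph
BLOCK_MULTIPLES = [1, 2]         # balanced blocks of 3 or 6 allocations (assumed sizes)

def allocation_sequences(practices, n_per_practice, seed=42):
    """Generate a 1:1:1 blocked allocation list for each practice (stratum).

    Each practice gets its own sequence so that envelopes can be numbered
    sequentially within that practice; block sizes vary at random so the next
    allocation is harder to predict.
    """
    rng = random.Random(seed)
    sequences = {}
    for practice in practices:
        allocations = []
        while len(allocations) < n_per_practice:
            block = ARMS * rng.choice(BLOCK_MULTIPLES)
            rng.shuffle(block)
            allocations.extend(block)
        sequences[practice] = allocations
    return sequences

if __name__ == "__main__":
    seqs = allocation_sequences(["Practice 1", "Practice 2", "Practice 3", "Practice 4"], 100)
    print(seqs["Practice 1"][:12])   # first 12 envelopes for one practice
```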
Secondary outcomes Patient-Clinician communication measured immediately after the intervention • Communication Assessment Tool (CAT) [30] 15 item stem from 1=very poor to 5=excellent Self-reported behaviour change between baseline and 6 months post-intervention; and between baseline and 12-months post-intervention • Change in self-rated oral health between baseline and 6 months post-intervention 'Would you say your dental health (mouth, teeth and/or dentures) is? 1=very good to 5 = very poor [31] • Basic Periodontal Examination (BPE) -clinical data collected by the dentist during the check-up: conversions between codes 1 (bleeding) and 0 (health) between baseline and next dental visit • Change in plaque coverage: percentage of the area of buccal/labial surfaces of anterior teeth calculated from QLF images (Plaque Percentage Index (PPI), ΔR30) [35] taken at baseline and next dental visit. The QLF anterior teeth image will involve a photograph of maxillary and mandibular teeth from canine to canine taken without overlap of incisal edges. • Number of tooth surfaces affected by early caries: Change between baseline and the next dental visit in the number of surfaces which are demineralised on the buccal/labial surfaces of anterior teeth as a proportion of number of surfaces available [35]. Teeth will be cleaned (brushed by the participant after taking the QLF image to measure plaque), before a second QLF image measuring caries. • ΔQ: Change between baseline and the next dental visit in percentage fluorescence loss based on the fluorescence of sound tissue multiplied by the area. This is an estimate of lesion volume, and can be combined over all lesions to give a total estimate of overall severity per patient [35]. Calculated from QLF image taken after cleaning. Predictor and moderator variables Patient socio-demographic characteristics • Area level deprivation based on home postcode (IMD) [36]. • Literacy using the Rapid Estimate of Adult Literacy in Medicine (REALM) [38]. The 8 medical test words, as well as the 3 practice words will be printed and shown to participants on an A4 laminated sheet, but with the American spelling of anemia changed to the English version (anaemia). Patient dental visiting behaviour, dental anxiety and previous dental experiences • Dental visiting: 'In general, do you go to the dentist for: 1=a regular check-up, 2=an occasional check-up, 3=/only when having trouble with my teeth/dentures?' [31]. 'How many times have you been to the dentist in the last five years purely for a check-up?' [31] • Previous pain experience: 'Have you ever experienced dental pain bad enough to make you go to the dentist (tick all that apply): 1=currently in pain, 2=In the last 6 months, 3=6 months to 2 years ago, 4=more than 2 years ago, 5=never)' [31]. • Previous experience of dental treatment: 'Have you ever had fillings/teeth extracted (taken out)/a dental bridge or a tooth crowned/a root canal treatment/a scale and polish?' [31] • Dental anxiety: The Modified Dental Anxiety Scale (MDAS) [39]. 
Dental provider characteristics • Dental practice • Clinician ID delivering information Oral health • Number of natural teeth as recorded by the dentist at the check-up Risk perception and behaviour change predictors/moderators (based on the EPPM model) [20] • Perceived threat (severity) 'How serious would it be to you if [negative outcome] were to occur, response from 1=not at all serious to 5=absolutely serious Five possible negative outcomes of poor oral health were identified in earlier qualitative work: 'your teeth were to make you feel uncomfortable when smiling, talking and laughing in front of people; people thought you had failed to look after your own teeth; your teeth were to become more painful and sensitive; you were to need treatment which meant spending more time at the dentist; you were to need treatment which you could not afford' • Perceived threat (susceptibility) 'If you do not follow the dentist's advice, how likely is it that [same 5 negative outcomes] will occur' response from 1=absolutely unlikely to 5 absolutely likely • Self-efficacy 'Please consider how confident you are that you can perform the behaviour properly, regularly and on a long-term basis for [target behaviour]' response from 1=not at all confident to 5=absolutely confident. • Danger control response measured using 3 items: 'How likely it will be before your next appointment that you will. 1) follow the advice given by the dentist completely; 2) follow some of the advice given by the dentist 3) talk to someone about the advice the dentist gave you' response from 1=absolutely unlikely to 5=absolutely likely • Fear control response measured using 4 items with stems: 1) Defensive Avoidance: 'I prefer not to think about the advice given to me by the dentist' 2) Perceived Manipulation: 'The advice given to me by the dentist is untrue or manipulated' and 3/4) Message Derogation: 'The advice given to me by the dentist is exaggerated' and 'I do not personally believe the advice given to me by the dentist' responses from 1=strongly disagree to 5=strongly agree • Intention to change behaviour 'Before my next appointment I intend to.[target behaviour]' responses from 1 = absolutely disagree to 5 = absolutely agree. Sample size calculation The required sample size calculation is based on a need to detect significant differences in the primary outcome (WTP) between the two interventions (Traffic Light and QLF) and usual care arms at 80% power with α = 0.05. Sample sizes for WTP studies are recognised as difficult, given the problems in deciding on a minimally important difference amongst others [42]. Moreover, for this study, no previous WTP valuations have been undertaken for similar "goods" i.e. information based "goods" in health. In the absence of reliable standard deviation and effect size estimates, sample size was calculated with effect sizes based on numbers of standard deviations rather than absolute numbers. Thus, to detect a difference of 0.5 standard deviations 63 people is calculated to be required per arm. To detect 0.33 standard deviations, 145 per arm would be needed. Accepting a detectable difference between half and a third of a standard deviation and allowing for around 20% refusal to answer WTP questions (protest responses) gives a figure of 133 in each arm or a total sample size of 400. We then calculated the implication of this sample size for the detection of clinical outcome effects, and based this on the secondary outcome of PPI, measured using QLF [35]. 
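The per-arm figures quoted above can be checked with the standard normal-approximation formula for comparing two groups, n per arm = 2(z_{1-α/2} + z_{1-β})² / d², where d is the effect size expressed in standard deviation units. The short sketch below reproduces the stated numbers; it is a worked check, not the trial statistician's calculation.

```python
import math
from scipy.stats import norm

def n_per_arm(effect_size_sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for a two-sided, two-group comparison."""
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # 0.84 for 80% power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size_sd ** 2)

if __name__ == "__main__":
    print(n_per_arm(0.50))   # 63 per arm, for a difference of half a standard deviation
    print(n_per_arm(0.33))   # 145 per arm, for a third of a standard deviation
    # Accepting a detectable difference between these two effect sizes and allowing
    # for around 20% protest responses gives the protocol's 133 per arm (~400 in total).
```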
Published data on a group of 38 college students showed a mean PPI of 14.8, with a standard deviation of 7.7 [43]. As this is likely to be a more homogeneous population than our study, a more conservative estimate of the standard deviation of 10 has been used; with this, a sample size of 133 per group would allow us to detect a mean difference of 3.5 in PPI between groups, with 80% power, at the 0.05 significance level. Trial process and trial-specific training All practices will receive two separate sets of training. First, in relation to the conduct of the trial, the whole dental team, including the receptionist, will receive training in Good Clinical Practice and trial procedures (e.g. patient consent, randomisation procedures and completing study records). A crib sheet detailing study procedures in all three arms will be provided for reference. Secondly, researchers will train dental teams in trial-specific processes, including how to take (whole dental team) and clinically interpret (dentists only) QLF photographs, again supported by written information, and a video about the use of the camera. Both types of training will be undertaken in the dental practice itself. Given that research identifies that verbal metaphors can influence patients' mental images of their condition [44], trial-specific training will include standard messages such as 'This red patch on the QLF photograph shows bacteria which have not been cleaned by you for 2-3 days. If you do not improve tooth-brushing here you are highly likely to develop problems'. Data collection procedures Data will be collected at baseline, then 2-3 weeks later when the patient returns for their first treatment visit (second dental visit, V2), and then at 5-6 weeks post-intervention (third dental visit, V3) (Fig. 6, Table 1). Since eligible patients are those at medium/high risk of poor oral health, we expect the patient to need to return for further care, and so we will collect data opportunistically at this time. At these V2 and V3 follow-up visits, the dentist will measure BPE (before the patient receives usual care treatment), and a further QLF photograph will be taken (not shown to the patient) to enable a change in plaque coverage and demineralisation to be measured from QLF images. Participants will be asked about any change in behaviour in relation to the messages (Fig. 3) listed on the printed cards (given to all patients at the first visit), using the item stem 'Since my last appointment I have …'. There will be two further follow-up points at 6 and 12 months post-intervention, involving questions completed either by telephone or email, depending on what contact information the patient gave for this purpose. For those who cannot be contacted using this method, postal questionnaires will be sent. This will be conducted by a trained member of the research team. Patients who complete follow-up will be given the opportunity of being entered into a prize draw. Measures will be collected according to the schedule outlined in Table 1. The patients will enter some data themselves using a Tablet computer with Qualtrics survey software (Version 092,017, © 2017 Qualtrics®). Trained dental nurses will administer an assessment of literacy (REALM), enter some demographic details and take QLF photographs. Measuring WTP This will be self-completed by patients on the Tablet PC platform, supported by a trained dental nurse, where required.
The WTP elicitation will occur before the patient has been given risk information in any form, so all participants will give values for all three interventions, regardless of subsequent randomisation to one intervention. Participants will be given descriptions and sample images of the three interventions (Figs. 3, 4 and 5) and asked to rank them in order of preference, and then asked firstly for their WTP for the lowest-ranked intervention. This will be supported by a script which encourages realistic, budget-constrained responses and which emphasises that the exercise is about value rather than price. On the computer, participants will be presented with a series of virtual cards with different values from 50 pence to £150 on them and asked to drag each card to one of three boxes: "Would pay; Wouldn't pay; Not sure". The differently valued cards will be presented in a random order, with a random starting card. Participants will not see the value of subsequent cards until the current card has been placed. Once all cards have been placed, the lowest "Wouldn't pay" and highest "Would pay" value will be shown, and the participant asked for a maximum WTP value in an open-ended format. This shuffled payment card approach to WTP elicitation is thought to reduce starting point and range bias found in WTP methods [45]. Once a WTP value for the lowest-ranked intervention has been determined, participants will then be asked what extra they would be willing to pay for their next most preferred intervention in an open-ended question. Finally, participants will be asked what extra they would be willing to pay on top of their value for their middle-preferred intervention. This incremental approach to eliciting WTP has been shown to give more robust valuations [46]. After receiving information at their check-up, patients will return to the Tablet PC-based task, where they will be reminded of the WTP value they had given prior to the intervention for the form of information they had been allocated to receive. They will then be asked if they would revise their WTP, given their actual experience with that form of information. If they indicate they would revise this, the new value will be collected using an open-ended question. Analysis plan A detailed statistical analysis plan will be prepared prior to data analysis. Analysis of WTP We will compare hypothetical WTP across the whole sample, across the three arms. We will also compare hypothetical WTP with WTP for a good that has been "consumed" by comparing WTP before and after receiving the information form in that arm. WTP data will be analysed descriptively, comparatively and econometrically. Descriptive data analysis will consist of WTP value means with standard deviations, along with medians and quartiles for each information form. The WTP values for each system will be compared using appropriate comparative tests (e.g. most likely to be Mann-Whitney U tests given the likely distribution). In order to understand fully how various dental and demographic factors influence values (WTP), econometric regression analyses will be carried out. Analysis of behavioural outcomes Group effects on oral health behaviour between the conditions will be assessed by predicting six- and twelve-month follow-up scores from group membership. To minimise the probability of Type 1 error, multivariate analyses will be used. Subsequent univariate analyses will identify specific variables affected by group.
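The shuffled payment-card stage of the WTP elicitation described under 'Measuring WTP' above lends itself to a short illustration. In the sketch below, the card denominations between 50 pence and £150, the omission of the 'Not sure' pile, and the function name are assumptions made for brevity rather than details specified in the protocol.

```python
import random

# Illustrative card denominations; the protocol specifies a range of 50p to £150,
# but the exact values in between are an assumption of this sketch.
CARD_VALUES = [0.5, 1, 2, 5, 10, 15, 20, 30, 50, 75, 100, 150]

def payment_card_sort(latent_max_wtp, seed=None):
    """Simulate the shuffled payment-card sort for one respondent.

    Cards are shown one at a time in random order and placed into 'would pay'
    or 'would not pay' piles (the protocol's 'not sure' pile is omitted here).
    The bounds returned frame the subsequent open-ended maximum WTP question.
    """
    rng = random.Random(seed)
    cards = CARD_VALUES[:]
    rng.shuffle(cards)                       # random order, random starting card
    would_pay, would_not_pay = [], []
    for value in cards:                      # respondent sees only the current card
        (would_pay if value <= latent_max_wtp else would_not_pay).append(value)
    highest_would = max(would_pay) if would_pay else None
    lowest_would_not = min(would_not_pay) if would_not_pay else None
    return highest_would, lowest_would_not

if __name__ == "__main__":
    print(payment_card_sort(latent_max_wtp=22, seed=1))   # (20, 30)
```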
Planned comparisons will test the hypotheses that TL leads to greater positive oral health behaviour change than V, and that QLF leads to greater positive behavioural change than V. Separate analyses will examine changes from baseline to 6 months and from baseline to 12 months. Moderation analyses involve the modelling of two-way interaction effects from the product of the predictor and moderator variables and testing the hypothesis that the interaction term uniquely predicts behavioural change controlling for main effects terms [47]. Regression analyses will be used to test this hypothesis. Demographic, dental health and dental visiting variables will be tested as potential moderators. Mediation analyses assess the extent to which intermediate variables (mediators) explain variance shared between predictor and outcome variables. Again, this analysis will be performed only where between-group differences in behavioural change have been identified. Our mediation analysis strategy is based on recommendations by Zhao et al. [48], who specify pre-conditions that the predictor (group membership) be linked to the mediator (EPPM variables), and that the mediator be linked to the criterion (behavioural change) controlling for the predictor. When these are established, path analysis can be used to estimate the mediation effect. To statistically test mediation, we will use a bootstrapping method [49]. All potential mediators will be assessed in the same analysis, reducing the likelihood of Type 2 error that would be associated with testing mediators separately. Analysis of clinical outcomes Clinical outcomes will be analysed at the first short-term follow-up visit, no more than three months post-randomisation. All QLF images will be rated as good/poor quality by the assessor (blinded to group allocation). Those not judged to be of sufficient quality to accurately generate the outcome variables will be excluded from the analysis. Analysis of the three QLF-generated variables will use general linear models, with the value of the variable of interest at the first short-term follow-up appointment as the outcome variable, and intervention group allocation as a fixed factor. The baseline value of the variable of interest will be included as a covariate. All analyses will follow the intention-to-treat principle as far as is practically possible. Basic Periodontal Examination (BPE) scores will be analysed by categorising each patient into one of four outcome categories: 1, stable healthy (code 0 at baseline and follow-up); 2, stable bleeding (code 1 or greater at baseline and follow-up); 3, change to bleeding (code 0 at baseline, code 1 or greater at follow-up); and 4, change to healthy (code 1 or greater at baseline, code 0 at follow-up). The analysis will use multinomial logistic regression to test the effect of intervention group on outcome, with the primary hypothesis of interest between groups 2 and 4. Multiple imputation analyses will be carried out to investigate the robustness of the results to missing data. Using all the data which are available, including baseline variables, five complete data sets will be imputed, and each will be analysed in the same way. The results can then be combined, and compared with the analysis excluding missing data, to assess whether this would be likely to change the overall conclusions. Ethical considerations Liverpool Health Partners is the research sponsor (Approval reference no: UoL001042).
Favourable ethical opinion for the study was confirmed by the North West, Liverpool-East National Research Ethics Committee on 1.8.14 (REC reference number 14/NW/1016). A subsequent substantial amendment to the ethical approval was obtained on 18.3.16 to allow a prize draw of ten lots of £25 for patients completing follow-up data collection at 6 and again at 12 months (£500 total). NHS Research Governance approval was obtained from The Royal Liverpool and Broadgreen University Hospitals NHS Trust on 20.8.14 (reference number 4819). Participants meeting eligibility criteria will be given a Patient Information Sheet by dental practice staff, which will provide all details of the study procedures, rights to withdrawal, anonymity and confidentiality, along with details of the research team; they will have the opportunity to ask questions before informed consent is taken by a member of dental staff trained in GCP. Participant details will be recorded on a recruitment log to facilitate follow-up data collection. Participants will be assured that all data will be anonymised and stored on a database under the guidelines of the 1998 Data Protection Act. No patient-identifiable information will be sent via electronic means (use of coded study number, patient's sex and patient's age only). Any patient-identifiable information (e.g. recruitment logs) will be collected in person from dental practices by a member of the research team rather than electronically. Data and study documentation will be archived at the University of Liverpool for at least five years after the completion of the study, in line with associated regulations.

Discussion

Reform of the NHS dental contract is based on a reorientation from a treatment-focused service to one which prioritises prevention of disease, with patients participating by improving their oral health behaviours to minimise the (re-)occurrence of disease. The idea follows a general drive towards emphasising that patients have a shared responsibility for maintaining their health. Over 75 NHS dental practices are currently testing two alternative prototypes in England, both using a categorisation of patients according to a Traffic Light scheme, with this information communicated to patients at their dental check-up. This study is therefore set to directly inform national policy relating to whether patients value risk information in this form. A recent non-randomised study in six NHS dental practices, which compared a new model involving a traffic light risk assessment of patients with other practices using the previous model of dental practice care, found that only 291 of the 550 participants recruited (53%) attended both baseline and follow-up appointments. This suggests that long-term follow-up of patients in a study such as this may present challenges [50], and recruitment of dental patients to a trial set in dental practice, especially those with a high/medium risk of oral disease, may present further challenges [51]. Nevertheless, when considered against the continuum of pragmatic as opposed to explanatory trials, this study is designed to be at the pragmatic end of the spectrum, since its purpose is to examine the intervention (communication of risk information) under the usual conditions in which it will be applied [52]. The trial will be supported by embedded qualitative work to aid understanding of the contextual issues influencing the implementation of the intervention, and the trial itself.

Study status

The first patient was recruited in August 2015.
Twelve-month follow-up data collection will be completed by October 2017.

Contributors

RH, JS, CV, LT, SB, and GB designed the study. RH, SB, LL, and VL designed the self-report measures. LL, RH, CV, and SH contributed to the design of data collection procedures. RH, SB, CV and GB designed the analysis plan. RH drafted the manuscript, with all authors approving the final manuscript. All authors will meet regularly to ensure the smooth running of the study.

Funding

This study is funded by the National Institute for Health Research (NIHR) Health Services and Delivery Research (HS&DR) programme (project number 13/33/45, PREFER: The cost and value of different forms of information on oral health status and risk given to patients following a check-up in dental practice: a randomised controlled trial). This report is independent research commissioned by NIHR.
Hyperbolicity of Semigroup Algebras

Let $A$ be a finite dimensional $Q$-algebra and $\Gamma \subset A$ a $Z$-order. We classify those $A$ with the property that $Z^2$ does not embed in $\mathcal{U}(\Gamma)$. We call this last property the hyperbolic property. We apply this in the case that $A = KS$ is a semigroup algebra with $K = Q$ or $K = Q(\sqrt{-d})$. In particular, when $KS$ is semi-simple and has no nilpotent elements, we prove that $S$ is an inverse semigroup which is the disjoint union of Higman groups and at most one cyclic group $C_n$ with $n \in \{5,8,12\}$.

Introduction

In this paper we focus on what we call the hyperbolic property. We say that a finite dimensional Q-algebra A has the hyperbolic property if for every Z-order Γ ⊂ A the unit group U(Γ) does not contain a finitely generated abelian group of rank greater than one. This terminology is suggested by the fact that hyperbolic groups have this property [7]. Research in this direction goes back to Jespers, who classified those finite groups G for which U(ZG) has a non-Abelian free normal complement [9]. More recently Juriaans-Passi-Prasad have given contributions on this topic in the integral group ring case [11], and Juriaans-Passi-Souza Filho in the group ring RG when R is the ring of algebraic integers of a quadratic rational extension [12]. Here we give a complete classification of the finite semigroups whose semigroup algebra KS has the hyperbolic property with K = Q or K = Q(√−d). Part of this was done by Jespers and Wang [10], who classified the finite semigroups S for which the unit group U(ZS) of the integral semigroup ring ZS (we of course assume that this ring contains an identity) is a finite group. Firstly, we prove a structure theorem for the finite dimensional Q-algebras with the hyperbolic property. We prove that the radical of such an algebra has nilpotency index at most 2 and that its Wedderburn-Malĉev components consist of copies of Q or quadratic fields, totally definite quaternion algebras, two-by-two matrices over Q and upper-triangular matrices over Q. Details on the structure of these algebras are given in section 3. In section 4, we classify the finite semigroups S whose semigroup algebra KS has the hyperbolic property, with K = Q or K a quadratic extension of Q. In section 5 we study the idempotents of the maximal subgroups of finite semigroups S which are not semi-simple, in order to obtain a better understanding of the structure of S when QS has the hyperbolic property. Notation is mostly standard and we refer the reader to [3] and [13] for the theory of semigroups and semigroup algebras. However, for the reader's convenience, section 2 contains some basic facts on the theory of semigroups.

Preliminaries

A non-empty set S with an associative binary operation · : S^2 → S is a semigroup. Let S be a semigroup. The set S^1 := S ∪ {1}, with s · 1 = 1 · s = s for all s ∈ S, is a monoid, that is, a semigroup with an identity element, and the set S^θ := S ∪ {θ}, with s · θ = θ · s = θ for all s ∈ S (θ is called a zero element), is a semigroup with a zero element. A semigroup S with zero θ is a null semigroup if x · y = θ for all x, y ∈ S. An element e ∈ S such that e^2 = e is an idempotent. Denote by E(S) := {e ∈ S : e^2 = e} the set of idempotents of S and let e, f ∈ E(S); then e ≤ f if e · f = f · e = e. An idempotent f ∈ E(S) is primitive if f ≠ θ and if e ≤ f yields e = θ or e = f. A semigroup S is simple if it does not properly contain any two-sided ideal.
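As a simple illustration of the hyperbolic property defined in the Introduction (an example of ours, not one discussed in the paper), Dirichlet's unit theorem already decides two small commutative cases. For $A = Q(\sqrt{2})$ with the order $\Gamma = Z[\sqrt{2}]$ one has
\[
\mathcal{U}(\Gamma) \;=\; \{\pm 1\} \times \langle 1+\sqrt{2}\rangle \;\cong\; C_2 \times Z,
\]
so every finitely generated abelian subgroup of $\mathcal{U}(\Gamma)$ has rank at most one and $A$ has the hyperbolic property. By contrast, for $A = Q(\sqrt{2}) \oplus Q(\sqrt{3})$ with $\Gamma = Z[\sqrt{2}] \oplus Z[\sqrt{3}]$, the units $(1+\sqrt{2},\,1)$ and $(1,\,2+\sqrt{3})$ generate a subgroup isomorphic to $Z^2$, so the hyperbolic property fails.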
A semigroup with zero θ is 0-simple if S 2 = {θ} and {θ} is the only proper two-sided ideal of S. A 0-simple semigroup S is completely 0-simple if it contains primitive idempotents. Let I be an ideal of S. The semigroup of the Rees factors, denoted by S/I, is the set (S \ I) ∪ {θ} subject to the operation · defined by A principal series of a semigroup S is a chain The semigroups of the Rees factors S i /S i+1 are called factors of this principal series. It is well known that if S is a finite semigroup, then the factors of S are either a null semigroup with two elements which we will call null factor or a completely 0-simple semigroup. The semigroups S which are union of groups appear naturally in the context we work. Since a semigroup which is a union of groups is the disjoint union of its maximal subgroups we have the following: Lemma 2.1 Let S be a finite semigroup whose factors are isomorphic to groups with a zero element adjoined θ i , that is, S i /S i+1 ∼ = G θi i . Then S is a disjoint union of groups. Let G be a group and I and Λ arbitrary non-empty sets. By an I × Λ Rees matrix, we mean an I × Λ matrix over G θ with at most a unique entry different from θ. For a ∈ G, i ∈ I and λ ∈ Λ, (a) iλ denotes an I × Λ Rees matrix over G θ , where a is the entry corresponding to row i and column λ and all other entries are zero. For any i ∈ I and λ ∈ Λ, the expression (θ) iλ denotes the I × Λ null matrix, which is also denoted by θ. Since we are dealing with finite semigroups, it is sufficient to consider a finite number of rows and columns, m, n, respectively. For 1 ≤ i ≤ m and 1 ≤ j ≤ n, fix P = (p) ij a m × n matrix over G θ , called a sandwich matrix, and let M 0 be the set of the m × n Rees matrices over G θ . In M 0 we define the operation AB = A • P • B, where • denotes the usual matrix product, which is binary and associative and therefore the set {M 0 , •} is a semigroup. This semigroup is denoted by M 0 (G; m, n; P ) and G is called its structural group. In a similar way we define the Munn matrices. Let R be a ring and m, n positive integers. Consider M(R; m, n; P ) the set of m × n matrices over R. For each A = (a ij ), B = (b ij ) ∈ M(R, m, n, P ), 1 ≤ i ≤ m, 1 ≤ j ≤ n, addition is defined by A + B = (a ij + b ij ), and multiplication by AB = A • P • B, where P is a fixed n × m matrix with entries in R and • is the usual matrix operation. The ring M(R; m, n; P ) is called an algebra of matrix type over R or a matrix algebra over R. Let A be a finite dimensional Q-algebra. A unitary subring Γ of A is called a Z-order, or simply an order, if it is a finitely generated Z-submodule such that QΓ = A, (see [16, 1.4]). Remember that, by the Borel-Chandra Theorem [1], the unit group of a Z-order of A is finitely generated and hence the hyperbolicity of U(Γ) makes sense. Furthermore, if U(Γ) is a hyperbolic group then U(Γ 0 ) is hyperbolic for all Z-order Γ 0 ⊂ A, since the unit groups of orders are commensurable. It is known, [7], that the hyperbolicity of U(Γ) implies that Z 2 ֒→U(Γ). This suggests the following definition. Definition 2.2 Let A be a finite dimensional Q-algebra and Γ a Z-order of A. We say that A has the hyperbolic property if Z 2 ֒→U(Γ). Note that, as seen above, this definition does not depend on the particular order Γ of A, (see [1]). Throughout the text we use the standard notation diag(a 1 , · · · , a n ) for a n × n matrix with elements on the main diagonal set to a 1 , · · · , a n and all the other elements set to zero. 
Also e ij denotes the elementary matrix whose entry is 1 in the position i, j and zero otherwise. We denote by T 2 (Q) :=   Q Q 0 Q   with the usual matrix multiplication. Finite Dimensional Algebras with the Hyperbolic Property The main result of this section is Theorem 3.1 in which we classify the finite dimensional Q-algebras which have the hyperbolic property. (iv) A has the hyperbolic property and is non-semi-simple with non central radical if, and only if, For each item (i) − (iv), the A i 's are either at most a quadratic imaginary extension of Q or a totally definite quaternion algebra. Furthermore, in the decompositions in (i) − (iv) above the direct summands are ideals. We will consider A a finite dimensional Q-algebra with radical J(A). According to a theorem of Wedderburn-Malĉev, there exists a semi-simple subalgebra S(A) of A such PROOF. Obviously 1 + J is a multiplicative torsion free nilpotent group. Let G be any finitely generated subgroup of 1 + J. Hence Z(G) = 1. Since Z 2 ֒→(1 + J) the same holds for G, so G = Z(G) ∼ = Z. Since J is a nilpotent ideal, there exists a least positive integer n, J n = 0 = JJ n−1 thus 1 + J = Z(1 + J) = Z(1 + J n−1 ) ∼ = Z. Hence n = 2 and dim Q J ≤ 1. If J = 0 and Γ ⊂ A is a Z-order, let x, y ∈ J ∩ Γ. Then the group 1 + x, 1 + y < U(Γ), and 1 + x, 1 + y are units of infinite order. Since U(Γ) is hyperbolic we have 1+x, 1+y ∼ = Z. Hence 1+x ∩ 1+y is non-trivial and there exist integers m, n, such that, (1 + x) m = (1 + y) n . Since x, y are 2-nilpotent, we have 1 + mx = 1 + ny, and thus x = n m y. So {x, y} is a Q-linearly dependent set and we conclude that dim Q (J) = 1. Write J = Qj 0 , so 1 + J ∼ = Q. Indeed, φ : 1 + J → Q, φ(1 + qj 0 ) =: q is an isomorphism. If 1}. This yields the existence of a unique index, m say, 1 ≤ m ≤ N , such that, On the other hand, for i = k, we have that j 0 · E i = E i · j 0 = 0. Therefore J commutes with S(A) and thus it is central. The other statements are now also clear. Let A be a rational finite dimensional algebra with the hyperbolic property, We have: Clearly ϕ is an algebra isomorphism. Hence Thus we proved the next theorem: Moreover, for each 1 ≤ i ≤ N , A i is at most a quadratic imaginary extension of Q, or a totally definite quaternion algebra. PROOF. By the previous theorem, B and T 2 (Q) are ideals whose direct sum equals A. Consider the algebra isomorphism is an element of infinite order and set Therefore, by Lemma 2.3 of [16], each A i is at most an imaginary extension of Q, or a totally definite quaternion algebra. To prove the converse it is enough to consider the right and left action of J on the semi-simple part of A. If A has the hyperbolic property and the radical J = {0} is central, then S(A) is a direct sum of division rings: in fact, if any component of S(A) were of matrix type it would have an element of infinite order. Hence once again we could embed Z 2 ֒→ U(Γ), for some Z-order Γ. Therefore the simple components A i , 1 ≤ i ≤ N , of S(A) are as in the corollary above. PROOF. (of Theorem 3.1) Items (iii) and (iv) follow from Theorem 3.6 and its corollary. Corollary 3.8 Let A be a finite dimensional Q-algebra with the hyperbolic property, and We now prove (2): as A is semi-simple with nilpotent elements we have that A ∼ = ⊕M ni (D i ), where the D i 's are division rings. Remark 3.9 implies that n i ≤ 2, ∀i. The hyperbolicity hypothesis implies that there is at most one component with n i0 = 2 and it is isomorphic to M 2 (Q) (this follows by Remark 3.9). 
Let Γ i ⊂ A i be a Z-order of Γ i and consider the Z- . It follows that all U(Γ i ) are torsion groups and hence they are finite. The converse is straightforward, since GL 2 (Z) is hyperbolic. We now prove (i): if A is semi-simple with no nilpotent elements then M 2 (Q) is not a Wedderburn component of A and hence A ∼ = ⊕A i , a direct sum of division rings. If for any Z-order Γ ⊂ A it holds that U(Γ) is finite we are done. Suppose |U(Γ)| = ∞, Let Γ = ⊕Γ i ; then U(Γ) ∼ = U(Γ i ). The hyperbolicity of U(Γ) implies that there can be at most one index i 0 for which U(Γ i0 ) is infinite and hence we are done. The converse is obvious. Remark 3.10 Let A be a finite dimensional non-semi-simple Q-algebra with the hyperbolic property and J its radical. If a ∈ A is a non-trivial nilpotent element then a ∈ J. In fact, by Theorem 3.1, (respectively a ∈ J). It is sufficient to consider the case for J non-central. Let a = diag(x, z) + ye 12 ; a 2 = 0 yields x = z = 0, and y ∈ Q. Therefore, a = ye 12 ∈ J. Semigroup Algebras with the Hyperbolic Property In this section we classify the finite semigroups S for which QS has the hyperbolic property, we also classify the extensions K = Q( √ −d) with this property. First some terminology: a finite group G is called a Higman group if G is either abelian of exponent dividing 4 or 6 or a Hamiltonian 2-group. Recall that nilpotent free means the absence of nilpotent elements and Q 12 ∼ = C 3 ⋊ C 4 where C 4 acts by inversion on C 3 . Let K be a field and S a semigroup. By the semigroup algebra KS of S over K we mean an algebra A over K which contains a subset S that is a K-basis and a multiplicative semigroup of A isomorphic to S. Let S be a semigroup with a zero element θ. By the contracted semigroup algebra K 0 S of S we mean an algebra over K with a basis B, such that, B ∪ {θ} is a subsemigroup of K 0 S isomorphic to S. If S is a Rees matrix semigroup, S = M 0 (G; m, n; P ), then the contracted algebra K 0 S ∼ = M(KG; m, n; P ), [3,Lemma 5.17]. We suppose the algebra KS has a unity. By [13,Corollary 5.26], if S = M 0 (G; m, n; P ) is a Rees matrix semigroup then the following conditions are equivalent: (i) Q 0 S is unitary; (ii) m = n and P is an invertible matrix in M n (QG). If a structural group G = {1} is trivial then, up to isomorphism, there exist exactly two Rees matrix semigroup S = M 0 ({1}; 2, 2; P ) with Q 0 S a unitary ring. In the following remark, we exhibit these semigroups since they appear as factors of a principal series of S when QS has the hyperbolic property and contains nilpotent elements. In the sequel we shall make free use of the following results: (i) Every periodic 0-simple semigroup (in particular any finite semigroup) is completely 0-simple, [3, Corollary 2.56]. Hence, by Rees Theorem, a 0-simple semigroups is isomorphic to some Rees matrix semigroup. (ii) Let S be a finite simple semigroup, if KS is semi-simple then S is a group, [3,Corollary 5.24]. (iii) QS is semi-simple if, and only if, Q(S i /S i+1 ) is semi-simple for each factor of S, [3, Theorem 5.14]. (iv) Let QS be semi-simple. If S i /S i+1 is a factor of S then S i /S i+1 is isomorphic to a Rees matrix semigroup. Let S be a finite semigroup, a, b ∈ S are inverses if aba = a and bab = b. An inverse semigroup is a semigroup whose non-zero elements have a unique inverse. 
Suppose ZS has an identity, U(ZS) is finite if, and only if, S is an inverse semigroup which is the disjoint union of groups that are finite Abelian groups of exponent dividing 4 or 6 or 2-Hamiltonian groups [10, Theorem 6.1]. Clearly, for such semigroups the hyperbolic property holds. We shall now start a classification of all finite semigroup whose semigroup algebra over Q has the hyperbolic property. In what follows we suppose that ZS has a unit. Recall that S θ is nilpotent if there exists n ∈ Z + , such that, S n = {θ}. If s ∈ S and s m = θ, then s is called m-nilpotent element, or nilpotent. We use the expression "nilpotent free" to indicate the absence of non-trivial nilpotent elements. Lemma 4.2 Let S be a finite semigroup. Then QS is nilpotent free if, and only if, S admits a principal series whose factors are isomorphic to maximal subgroups G, say, of S and QG is nilpotent free. In particular, S is the disjoint union of its maximal subgroups. PROOF. It is a consequence of [3, Lemma 5.17] and Lemma 2.1. Theorem 4.3 The algebra QS is nilpotent free and has the hyperbolic property if, and only if, S admits a principal series for which every factor is isomorphic to one of the groups below: (i) A Higman group; (ii) One of the following cyclic groups: C 5 , C 8 or C 12 . Furthermore, at most one of the groups of type (ii) occurs. Moreover, S is an inverse semigroup and it is the disjoint union of groups of type (i) with at most one group of type (ii). PROOF. Since QS is nilpotent free, by the previous lemma, S has a principal series S = S 1 ⊃ S 2 ⊃ · · · ⊃ S n+1 = ∅ whose factors S i /S i+1 ∼ = G i , a group, and S ∼ = ∪G i . Thus QS θ ∼ = (⊕ i Q 0 G i )⊕Qθ and Γ = ( Z 0 G i )×Zθ is an order of QS θ . If QS has the hyperbolic property, by Theorem 3.1 item (i), QS ∼ = ⊕A i where at most one component A i0 admits a Z-order Γ i0 such that the group U(Γ i0 ) is hyperbolic infinite. Hence, by [10, Theorem 6.1], the groups G i , i = i 0 are Higman groups and, by [11,Theorem 3], G i0 ∈ {C 5 , C 8 , C 12 }. Conversely, if S is a semigroup with a principal series whose factors S i /S i+1 ∼ = G i then Q 0 S ∼ = ⊕Q 0 (S i /S i+1 ) ∼ = ⊕Q 0 G i . Consider the order Γ previously defined. By hypothesis, we have at most a unique cyclic group G i0 , say, of order 5, 8 or 12 and all other U(ZG i ), i = i 0 are trivial. Therefore, by Theorem 3.1 item (i), QS has the hyperbolic property. An algebra A with the hyperbolic property and which has nilpotent elements may be semi-simple or not. If it is semi-simple then, by Theorem 3.1, its Wedderburn-Malĉev decomposition has a unique component isomorphic to M 2 (Q). For any other component the unit group of every Z-order of this component is a finite group. In the next theorem we classify the finite semigroups whose rational semigroup algebra has these properties. Theorem 4.4 Let QS be a unitary algebra with nilpotent elements. Then QS is semisimple and has the hyperbolic property if, and only if, S has a principal series with all factors, except for one, isomorphic to Higman groups. The exceptional one is isomorphic to a semigroup K of the following type: (i) K ∈ {S 3 , D 4 , Q 12 , C 4 ⋊ C 4 : C 4 acts non trivially on C 4 }; In particular, if K is a group then S is the disjoint union of Higman groups and K. Conversely, if S has a principal series as described then is a Higman group for every i = i 0 and K ∼ = S i0 /S i0+1 the exceptional factor. Since either Clearly, if Γ i is a Z-order of A i then U(Γ i ) is finite. 
Thus by Theorem 3.1 item (ii) the algebra QS has the hyperbolic property. If S is a finite semigroup which is non-semi-simple then, according to [3,Corollary 5.15], every principal series of S admits a null factor (a null semigroup with two elements). PROOF. Let S θ = S 1 ⊃ S 2 ⊃ · · · ⊃ S n = {θ} ⊃ ∅ be a principal series of S. We have for each factor S i /S i+1 and J(Q 0 (S i /S i+1 )), the radical of Q 0 (S i /S i+1 ), that Thus, if S i0 /S i0+1 is a null factor then Q 0 (S i0 /S i0+1 ) ⊆ J(QS). Suppose that S i1 /S i1+1 is another null factor of S, clearly if x l ∈ S i l /S i l +1 , l = 0, 1, then 1 + x 0 , 1 + x 1 ∼ = Z 2 , which is contrary to hyperbolic property of QS. If f is not nilpotent in S then f 2 ∈ S i0+1 \ {θ} and f is nilpotent inŜ := S/S i0+1 . If f is nilpotent in S, by Remark 3.10, f ∈ J and consequently J = Q f . We claim that I := {θ, f } is an ideal of S. In fact, if s ∈ S then sf ∈ J and hence, by the previous proposition, sf = θ or sf = f and so sf ∈ {θ, f }. Similarly we have that f s ∈ {θ, f }. Since in each case, f ∈ S i0 /S i0+1 the unique null factor of the principal series of S, clearly f is the unique nilpotent element. Proposition 4.7 Let S be a finite semigroup that admits a nilpotent element j 0 ∈ S. QS is non-semi-simple and has the hyperbolic property if, and only if, I = {θ, j 0 } is an ideal of S and S/I has a principal series whose factors are isomorphic to Higman groups. In particular, S/I is the disjoint union of its maximal subgroups. PROOF. We have that QS ∼ = S(QS) ⊕ J with non-trivial J. Since QS has the hyperbolic property we have, by Theorem 3.1, that QS ∼ = (⊕A i ) ⊕ X, where X ∈ {J, T 2 (Q)} depending on the centrality of J. In both cases, if Γ is a Z-order in QS/J, then U(Γ) is finite. Therefore QS/J has the hyperbolic property and is nilpotent free. By hypothesis j 0 ∈ S is nilpotent hence, by the previous lemma, I := {j 0 , θ} is an ideal of S and J = Q j 0 . We have that QI ∼ = Q j 0 and hence QS/J ∼ = QS/QI ∼ = Q 0 (S/I) has the hyperbolic property and is nilpotent free. It follows, by Theorem 4.2, that S/I admits a series whose factors are Higman groups or the cyclic groups C 5 , C 8 and C 12 . Thus the cyclic groups C 5 , C 8 and C 12 do not occur since, by the last paragraph, U(Γ) is finite. Conversely, I = {j 0 , θ} is an ideal of S and S/I admits a series whose factors are Higman groups then, by Lemma 4.2, Q 0 (S/I) ∼ = ⊕ N i=1 QG i and hence QS/QI ∼ = ⊕ N i=1 QG i . Since QI ∼ = j 0 Q = J, we have that the Wedderburn-Malčev decomposition is QS ∼ = (⊕QG i ) ⊕ j 0 Q , where S(QS) ∼ = ⊕QG i is the semi-simple subalgebra of QS. If J is non-central then, by Proposition 3.3, there exist unique E 1 , E N ∈ E(QS) such that In both cases, the A i 's are division rings and U(Γ i ) is finite for any Z-order Γ i ⊂ A i . Thus, by Theorem 3.1 item (iii) and (iv), QS has the hyperbolic property. PROOF. By Lemma 4.6, if QS is non-semi-simple and has the hyperbolic property then S has a principal series with a unique null factor S i0 /S i0+1 = {f, 0} := I, say. If f is nilpotent in S then the result follows by the last proposition. Otherwise, since S i0+1 is an ideal of S letŜ := S/S i0+1 . Then QS ∼ = QS i0+1 ⊕ Q 0Ŝ is a direct sum as ideals and QS i0+1 and Q 0Ŝ has the hyperbolic property. Clearly,Ŝ has the nilpotent f and by the last proposition the factors ofŜ, and thus the factors S i /S i+1 , 1 ≤ i < i 0 , are isomorphic to Higman groups. 
If Γ is an order of Q 0Ŝ then U(Γ) is virtually cyclic hence QS i0+1 is nilpotent free and has the hyperbolic property. Thus by Theorem 4.3 the factors of S i0+1 , and therefore the factors S i /S i+1 , i 0 < i, are Higman groups. Conversely, on the conditions over the factors of a series of S we have that QS ∼ = QS i0+1 ⊕Q 0Ŝ is a direct sum as ideals. By Theorem 4.3, QS i0+1 has the hyperbolic property and is nilpotent free. By the last proposition Q 0Ŝ is hyperbolic. Clearly, If Γ is an order of QS then U(Γ) is hyperbolic and the result now follows. In [12], [14] and [17] are classified the quadratic extensions where −d is a square free integer, and the finite groups G for which the group ring o K [G] of G over the ring of integers of K has the property that the group U 1 (o K [G]) of units of augmentation 1 is hyperbolic. Therefore it is natural to classify the extensions K and the semigroups S, such that the algebra KS has the hyperbolic property. By remark 3.9, if KS has nilpotent elements then, since the integral basis of o K has two elements, KS does not have the hyperbolic property. Therefore a necessary condition for KS to have the hyperbolic property is that KS must be nilpotent free. [12] that U(Γ) is hyperbolic. Clearly, S is an inverse semigroup. Idempotents of Maximal Subgroups Let S be a finite semigroup with a nilpotent element j 0 . In this section, we investigate the idempotents of maximal subgroups of S. In case QS is non-semi-simple and has the hyperbolic property, the study of idempotents enable us to obtain more information on the structure of S. In fact we prove in the last theorem that S has some explicit semigroups as basic blocks of its structure which we define below as T 2 ,T 2 and T ′ 2 . Definition 5.1 As a set T 2 =T 2 = {e 1 , e 2 , j 0 , θ} and T ′ 2 = T 2 ∪ {e 3 } are semigroups with the operation · given by the Cayley table: In what follows S = ∪G i ∪ {θ, j 0 }, j 2 0 = θ, see Proposition 4.7, and N = |E(QS)| > 2. If E l ∈ E(QS) then 1 = 1<l<N E l + E 1 + E N . Let E := E 1 + E N and e ∈ QS be any idempotent, hence e = 1<l<N eE l + eE, where (eE l ) 2 = eE l ∈ A l which is, by Theorem 3.1, a division ring, ∀ 1 < l < N . Therefore, eE l ∈ {E l , 0}. Let E e l := eE l = 0 thus e = E e l + eE. Proposition 5.2 If e i is the group identity element of the group G i then e i has one of the following expressions: and for some λ, µ ∈ Q, Moreover the last two expressions are central idempotents. PROOF. Write e i = E i l + uE 1 + vE N + wj 0 (recall that the E i l are orthogonal, central, they PROOF. We have that E 1 ∈ A 1 ⊂ QG, and so E 1 = g∈G α g g. By the property of E 1 it holds 0 = j 0 = E 1 j 0 = ( α g λ g )j 0 and, by Proposition 4.5, the λ ′ g s ∈ {0, ±1}, ∀g ∈ G. Therefore, there exists g 0 ∈ G such that λ g0 = 1. If e 1 is the identity of G then it follows that e 1 j 0 = j 0 and thus gj 0 = j 0 , ∀g ∈ G, because G is a finite group and {θ, j 0 } is an ideal. Similarly, if e N is the identity of H then j 0 e N = j 0 and j 0 h = h for all h ∈ H. Since Qj 0 is an ideal we have that j 0 e 1 = ρj 0 and ρ ∈ {0, ±1}. Suppose 0 = ρ = 1, say, thus e 1 j 0 = j 0 = j 0 e 1 ; then e 1 centralizes j 0 and hence e 1 / ∈ A 1 , a contradiction. In the same way we prove that e N j 0 = 0. Proposition 5.4 Let G be a maximal subgroup of S. Denote by e ∈ G its identity element and suppose that ej 0 = j 0 . If e = E e l + E 1 + λj 0 then, ∀g ∈ G, g = gE e l +E 1 +λj 0 . Also if e = E e l +E N +µj 0 then, ∀g ∈ G, g = gE e l +E N +µj 0 . 
To determine gE 1 , recall that Q E 1 , E N , j 0 is an ideal of QS. So we may write gE 1 = tE 1 + sE N + rj 0 . There exists k ∈ Z such that g k = e; since the orthogonality of E 1 respect E l , l = 1 and E 1 j 0 = j 0 we conclude that E 1 g k = E 1 + λj 0 . By comparing with the equation (gE 1 ) k = (tE 1 + sE N + rj 0 ) k = t k E 1 + s k E n + r ′ j 0 we reach: t k = 0, s k = 1 and r ′ = λ. Hence, g = gE e l ± E 1 + λj 0 and, multiplying on the right by j 0 and using Lemma 5.3, we obtain that g = gE e l + E 1 + λj 0 . For the other case: e = E e l +E N +µj 0 , it holds that j 0 e = j 0 . If g ∈ G then similarly g = gE e l + E N + µj 0 . Theorem 5.5 Let e 1 ∈ G 1 and e N ∈ G N be the group identities and suppose that e 1 j 0 = j 0 e N = j 0 . Write Then only one of following options holds: PROOF. Since the idempotents E i ∈ E(QS) are orthogonal, j 0 e 1 = j 0 E 1 = e n j 0 = E N j 0 = 0, and e 1 j 0 = E 1 j 0 = j 0 e n = j 0 E N = j 0 e 1 e N = E 1 l E N l + (λ + µ)j 0 e N e 1 = E N l E 1 l e 1 e N = e N e 1 + (λ + µ)j 0 Without loss of generality suppose l = 1, k = N . If e 1 e N = 0 then −(λ + µ)j 0 = e N e 1 . In addition, if λ + µ = 0 then e N e 1 = 0, and the converse is clear. If λ + µ = 0 then e N e 1 is a non-trivial nilpotent element of S. Thus e N e 1 = j 0 , since S has a unique nilpotent element, clearly λ + µ = −1 = α (for l = N, k = 1 we have α = 1). The converse is straightforward. If e 1 e N and e N e 1 are non-zero elements then, by equation (1), the set {e 1 e N , e N e 1 , j 0 } ⊆ S is Q−L.D. Since any element of this set is not zero thus e 1 e N = e N e 1 := e 3 is a nontrivial idempotent and λ + µ = 0. The converse is clear. The semigroups T 2 , T ′ 2 eT 2 are, in some sense, the basic building blocks of the semigroups S whose rational semigroup algebra is non-semi-simple and has the hyperbolic property.
Operator description for thermal quantum field theories on an arbitrary path in the real time formalism We develop an operator description, much like thermofield dynamics, for quantum field theories on a real time path with an arbitrary parameter $\sigma\,(0\leq\sigma\leq\beta)$. We point out new features which arise when $\sigma\neq \frac{\beta}{2}$ in that the Hilbert space develops a natural, modified inner product different from the standard Dirac inner product. We construct the Bogoliubov transformation which connects the doubled vacuum state at zero temperature to the thermal vacuum in this case. We obtain the thermal Green's function (propagator) for the real massive Klein-Gordon theory as an expectation value in this thermal vacuum (with a modified inner product). The factorization of the thermal Green's function follows from this analysis. We also discuss, in the main text as well as in two appendices, various other interesting features which arise in such a description. I. INTRODUCTION There are two commonly used real time formalisms to describe a quantum field theory at finite temperature. The closed time path formalism [1][2][3] uses the path integral method while thermofield dynamics [4][5][6] has its origin in an operatorial description of thermal quantum field theory. Unlike the imaginary time (Matsubara) formalism [7], there is a doubling of field degrees of freedom in the real time formalisms [8][9][10][11][12] which leads to a 2 × 2 matrix structure for the propagator. Thus, for example, the causal Green's function (the propagator without the factor of i) for a real, massive Klein-Gordon field has the momentum space representation in closed time path of the form where G ++ (p) = 1 Here n B (|p 0 |) denotes the Bose-Einstein distribution function. (The subscripts ± refer to the two real branches of the closed time path in the complex plane.) In thermofield dynamics, on the other hand, the 2 × 2 matrix causal Green's function has the momentum space form where the subscripts 1, 2 refer to the two real branches of the time contour. The components have the explicit forms Here β denotes the inverse temperature in units of the Boltzmann constant. Even though the off-diagonal parts of the Green's functions in (2) and (4) have different forms in the two formalisms, they are known to lead to equivalent results for physical ensemble averages in thermal equilibrium. In general, thermal field theories can be defined on a general time path in the complex t-plane as shown in Fig. 1 where 0 ≤ σ ≤ β [13]. In fact, the path can be generalized even further, in principle, by adding pairs of alternating forward and backward moving real time branches, but it has been shown [14] that such paths are equivalent to the general path shown in Fig. 1. When T = T = 0, the path is associated with the imaginary time (Matsubara) formalism [7]. On the other hand, for real time formalisms where time takes continuous values between −∞ ≤ t ≤ ∞, one takes the limiting values T → −∞, T → ∞ and, for any allowed value of the parameter σ, the path leads to a real time description of the thermal field theory. When σ = 0, we have the closed time path description discussed above while thermofield dynamics is associated with σ = β 2 . 
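For reference, the Bose-Einstein distribution function appearing in these propagators is
\[
n_B(|p_0|) = \frac{1}{e^{\beta |p_0|} - 1},
\]
and, as introduced in Eq. (7) below, the path parameter enters the off-diagonal propagator components only through the combination λ = σ − β/2, so that σ = 0 (closed time path) corresponds to λ = −β/2 while σ = β/2 (thermofield dynamics) corresponds to λ = 0.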
For any value of σ in the allowed range 0 ≤ σ ≤ β, there is a thermal field theory description and the 2 × 2 matrix Green's function has the form in the momentum space given by [13,14] with G (σ) where we have introduced We want to emphasize here that conventionally the two real branches of the path are labelled as 1, 2 for any nontrivial value of σ. Only for σ = 0, namely, for the closed time path formalism, they are labelled as ±. Even though for different values of σ the Green's functions (propagators) are different, in thermal equilibrium they lead to equivalent physical results for any λ (or σ) [13,14] as we will also show in a simple manner in appendix A. Thermal field theories defined on any one parameter (σ) family of paths can be given a diagrammatic (path integral) description. However, thermofield dynamics, corresponding to σ = β 2 , also has an operator description. (In fact, thermofield dynamics has its origin in an operator description as we have already pointed out.) Therefore, a natural question arises as to whether for other values of σ, we can also have an operator description of the theory in parallel to thermofield dynamics. (For example, an operator description of theories on the closed time path does not exist yet.) This has been a question of general interest since the work of Umezawa et al [13]. In spite of several attempts to find an operator description, this remains an open question. In this paper, we study this question systematically and show that an operator description for any other value (other than σ = β 2 ) does exist indeed, with a modified inner product (different from the standard Dirac inner product) for the thermal Hilbert space. We restrict ourselves to a free massive Klein-Gordon theory in this study, but generalization to other theories is straightforward. Our paper is organized as follows. In section II, we recapitulate briefly the essential ideas of thermofield dynamics and describe the well studied Bogoliubov transformation operator which takes the zero temperature (doubled) vacuum state to the thermal vacuum (vacuum state in the thermal Hilbert space). We also point out how this Bogoliubov transformation leads to the Green's function (propagator) (4) in a factorizable matrix form. In section III, we point out various symmetry properties of the Green's function (6) for an arbitrary (allowed) value of σ as well as its factorization which is quite useful in our attempt to construct the Bogoliubov transformation relating the thermal vacuum to the zero temperature vacuum. In fact, in section IV, we use these features to construct the Bogoliubov transformation (operator) for the arbitrary parameter 0 ≤ σ ≤ β which leads to the thermal Hilbert space description of the theory. We point out how the inner product of the thermal space changes for σ = β 2 and we show, in particular, in section V, how this description leads to the 2×2 matrix Green's function (propagator) (6) for the Klein-Gordon theory in a factorized manner for an arbitrary parameter σ. We conclude with a brief summary in section VI. In appendix A, we give a simple derivation of the λ (or σ) independence of physical ensemble averages (in thermal equilibrium) in the operator formalism and point out various other features. In appendix B, we give a brief alternative (but equivalent) operator description leading to the Green's function (5)-(6). II. 
THERMOFIELD DYNAMICS The main idea behind thermofield dynamics [4][5][6]11] is the desire to define a thermal vacuum (and a thermal Hilbert space) so that the ensemble average of any product of operators can be written as a thermal vacuum expectation value of the operators. Namely, if there exists a state |0(β) denoting the thermal vacuum state, then we should be able to write where H represents the dynamical Hamiltonian for the system under study and Z(β) stands for the partition function of the system. In this case, one can naturally develop a perturbative expansion much like at zero temperature. With a little bit of analysis [4,11,12], it is realized that such a state cannot be constructed if one restricts to the original Hilbert space of the theory. Rather, one needs to double the Hilbert space of the theory by adding fictitious particles known as "tilde" particles. Let us illustrate this with the simple example of the one dimensional bosonic harmonic oscillator whose annihilation and creation operators are denoted by a, a † and satisfy the nontrivial commutation relation [a, a † ] = 1. (9) For simplicity we assume that the Hamiltonian of the oscillator corresponds to the one with vanishing zero point energy so that the discussion will naturally generalize to second quantized field theories. (Having a nonvanishing zero point energy does not change the discussion.) Namely, the Hamiltonian for the system has the form where ω denotes the natural frequency of the oscillator. Next, we double the theory by adding "tilde" degrees of freedom through the annihilation and creation operators a, a † which satisfy the same commutation relations as the original oscillator degrees of freedom, namely, [ a, a † ] = 1. (11) Furthermore, the two degrees of freedom are assumed to be independent so that the "tilde" operators commute with the original operators. The Hamiltonian for the combined theory is denoted by In this doubled theory, the Hilbert space is a product space of the form where |n , | n denote the energy states of the two harmonic oscillator systems (namely, eigenstates of H, H respectively). One can construct the thermal vacuum state in this doubled space in the form which is normalized by construction and leads to ensemble averages as thermal vacuum expectation values as desired (see (8)). The thermal vacuum |0(β) can be shown to be related to the vacuum |0, 0 of the doubled space through a Bogoliubov transformation where and the real parameter of the Bogoliubov transformation θ is given by Since the generator of the Bogoliubov transformation is anti-Hermitian it follows that the Bogoliubov transformation is unitary, namely, It also follows from (12) and (16) that implying that the Bogoliubov transformation defines a symmetry of the doubled theory. One can naturally define a thermal Hilbert space built on the thermal vacuum state (15) (or (14)). This is done by defining thermal annihilation and creation operators through the Bogoliubov transformation (16) in the following way. Let us define a doublet of operators Then, with the standard commutation relations it can be derived that the Bogoliubov transformation (16) leads to the thermal doublet of operators where the 2 × 2 matrix mixes up the original and the "tilde" operators under the Bogoliubov transformation and defines the thermal operators which act on the thermal Hilbert space. We note that where σ 3 denotes the third Pauli matrix. 
We note here that the thermal vacuum is annihilated by the thermal annihilation operators which leads to Explicitly (22), (23) and (25) imply that the thermal vacuum satisfies which is known as the thermal state condition. All of these ideas from the simple example of the harmonic oscillator can be generalized to quantum field theories. The annihilation and creation operators as well as the transformation parameters simply become functions of momentum and one needs to integrate over the momentum where necessary. Thus, for the free massive Klein-Gordon theory described by the field variable φ(x), we introduce the "tilde" field degrees of freedom described by φ(x). (The "tilde" conjugation rules [11,15] in constructing the action for the doubled field, which we do not go into, not only replace the fields by the "tilde" fields but also complex conjugate any coefficient.) In this case, (12) generalizes to where we have identified With the generator of Bogoliubov transformation (see (15)-(20)) we can now define the thermal vacuum in the doubled space as where the parameters of transformation are given by (32) We note here that the generator of the Bogoliubov transformation is anti-Hermitian so that U (θ) is formally unitary as in the harmonic oscillator case so that the Bogoliubov transformation is a symmetry of the Hamiltonian of the doubled system. The unitary operator U (θ) allows us to construct the thermal operators in the following way. If we define a doublet of the Klein-Gordon fields then with the tilde conjugation rules we can separate them into positive and negative frequency parts as where p 0 = ω p (in the exponent) and Under a Bogoliubov transformation, it can be checked that (see, for example, (22) and (23)) where the 2× 2 matrix U (θ(p)) is given by As in (24), we note that This shows that the matrices U (θ) in (24) and U (θ(p)) in (40) belong to the group SO(2, 1). Furthermore, the thermal state condition (27) generalizes in this case to Using all these relations, the thermal 2 × 2 matrix Green's function (propagator) of thermofield dynamics (4) can now be obtained as the expectation value in the thermal vacuum In Fourier transformed space this leads to where the zero temperature Green's function in thermofield dynamics has the simple form and U (θ(p)) is the 2 × 2 matrix defined in (39). Equation (43) is an important result. It shows that the existence of a Bogoliubov transformation leading to a thermal vacuum results in the factorization of the 2×2 matrix Green's function (43) at finite temperature. III. FACTORIZATION OF PROPAGATOR FOR AN ARBITRARY PATH In trying to construct a thermal vacuum for an arbitrary path (σ arbitrary), various properties of the propagator can offer helpful clues. We have already noted the form of the Green's function in (5) and (6). The components of the 2 × 2 matrix in (5) have the explicit forms and as we have noted in (7), λ = σ − β 2 . We note from the forms of the components in (46) that while for any arbitrary (allowed) σ, the off-diagonal elements, in general, satisfy Only for σ = β 2 (or λ = 0), the off-diagonal matrix elements are also symmetric under p ↔ −p. thermofield dynamics, therefore, enjoys a very special status in that all the components of the propagator are symmetric under the reflection of the energy-momentum four vector. We will see later that this symmetry (or lack of it) is reflected in the structure of the thermal Hilbert space of the theory. 
As we have noted in the last section, the existence of a Bogoliubov transformation leading to a thermal vacuum results in the factorization of the propagator. Therefore, from (49) we feel that there should exist a thermal vacuum for an arbitrary σ which can be obtained from the (doubled) zero temperature vacuum through a Bogoliubov transformation. However, from the form of the factorizing matrix (50), we note that, while the corresponding matrix (39) in thermofield dynamics is symmetric (under transposition), here we have As a result, it follows that, for an arbitrary σ, the factorizing matrix satisfies where σ 3 is the third Pauli matrix. This can be compared with (40) and suggests that the Bogoliubov transformation taking us to the thermal vacuum, if we can determine, may have some unusual features. The factorizing matrix U (−θ(p), λ) in (50) can be further factorized as where V (λ) can be a 2 × 2 matrix either in the diagonal form 1 0 0 e −λp0 or in the off-diagonal form . We note that, in either case, we can write . This factorization in (53) is very interesting because it points to the fact that the Bogoliubov transformation leading to the thermal vacuum may involve a product of operators unlike the case in thermofield dynamics. With all this information, we are ready to construct the Bogoliubov transformation which naturally leads to the thermal vacuum of the theory and to discuss the resulting properties of the theory in the next section. IV. BOGOLIUBOV TRANSFORMATION FOR AN ARBITRARY PARAMETER The ensemble average of a product of operators A 1 · · · A n in thermal equilibrium is defined as (see, for example, (8)) where, as we have noted earlier, β represents the inverse temperature in units of the Boltzmann constant and Z(β) is the partition function for the system. "Tr" stands for trace over a complete set of states and H denotes the dynamical Hamiltonian of the system. To introduce an operator description for a theory defined on the one parameter family of paths, we use the cyclicity of the trace to write Tr e −βH A 1 · · · A n = Tr e −βH e −λH A 1 · · · A n e λH , (55) where λ is defined in (7). (We note here that the cyclicity of the trace has been used earlier [18][19][20] to introduce an arbitrary parameter into the ensemble average in a different context. The description in such a case has been called a non-Hermitian representation of thermofield dynamics.) We note from (8) and (55) that the ensemble average of a product of operators can be written as the (thermal) vacuum expectation value in thermofield dynamics (TFD) A 1 · · · A n β = 0(β)|e −λH A 1 · · · A n e λH |0(β) . (56) At this point there are two equivalent ways of proceeding. We can either keep the doubled operators of thermofield dynamics and look for a modified thermal vacuum state that depends on the parameter λ in addition to β (temperature) such that the ensemble average in (56) can be written as an expectation value of the product of operators A 1 · · · A n in this thermal vacuum. This would be the closest in spirit to thermofield dynamics. Or, alternatively, we can keep the thermal vacuum of TFD unmodified and look for new operators to describe the doubled theory to represent the ensemble average as an expectation value of the product of new operators in the TFD vacuum. This will be closer in spirit to having two real branches of the time path with an arbitrary separation of the imaginary argument. 
In the main text of the paper, we will follow the first approach in detail while in appendix B we will discuss briefly the alternative approach. Using the definition of the thermal vacuum in TFD, |0(β) , in (15) (see also (31)), we can write the ensemble average also as where we have used the property H|0, 0 = 0. Equation (57) suggests that we can define a new Bogoliubov transformation operator, U (θ, λ), which depends on two parameters and is related to the unitary operator U (θ) of TFD by a similarity transformation as This is quite reminiscent of the factorization in (53) and so, in principle, one can define a thermal vacuum depending on two parameters (for the theory on the arbitrary path) through this operator U (θ, λ), namely, However, there seems to be a problem with this in that the operator U (θ, λ) is not naively unitary (as we would expect for a Bogoliubov transformation to be), namely, since from (58) it follows that As a result, the ensemble average in (57) cannot be written as a thermal vacuum expectation value as in thermofield dynamics (see (8)). However, we also note from the definition in (58) that which is reminiscent of (52). The resolution of this problem can be understood as follows and occurs in several areas of physics, most recently in the study of P T symmetric theories [23,24] and in pseudo-Hermitian systems [25]. Basically, properties such as Hermiticity and unitarity are defined with respect to the inner product of a Hilbert space. The conventional adjoint A † of an operator A is defined with respect to the standard Dirac inner product ·|· . On the other hand, with a modified inner product defined as [26,27] the modified adjoint A ‡ is defined through the similarity transformation [26,27] Therefore, if we choose Λ to correspond to the reflection operator we have Similarly from (64) we note that the adjoint with respect to the modified inner product leads to so that (see (62)) Namely, the Bogoliubov transformation is formally unitary, as it should be, but with respect to the modified inner product in (63). It follows now that 0(θ, λ)|0(θ, λ) Λ = 0(θ, −λ)|0(θ, λ) so that the thermal vacuum is indeed normalized with respect to the modified product. Furthermore, the ensemble average (57) can indeed be written as a thermal vacuum expectation with this modified product, This brings out a very important feature of the thermal Hilbert space. Namely, when λ = σ − β 2 = 0, the thermal Hilbert space develops a natural modified inner product (63) different from the standard Dirac inner product. Only for λ = 0 (or σ = β 2 ), namely, only for thermofield dynamics does the Hilbert space coincide with the one with a standard Dirac inner product. In this sense thermofield dynamics enjoys a very special status in an operatorial description. Since the Bogoliubov transformation U (θ, λ) is formally unitary with respect to the modified inner product (see (68)), operators transform under such a transformation preserving their commutation relations. Thus, for example, if we consider the free massive Klein-Gordon theory described by where ω p > 0 is defined in (29). 
The annihilation operator a(p) and the creation operator a ‡ (p) = a † (p) satisfy the commutation relation Under a Bogoliubov transformation, these operators transform as so that the commutation relation is preserved (see (68)) In the next section, we will use this operatorial description to derive the propagator for the Klein-Gordon theory for an arbitrary σ as well as its factorization discussed in detail in section III. V. PROPAGATOR FOR THE KLEIN-GORDON THEORY FOR AN ARBITRARY PARAMETER σ The Bogoliubov transformation (58) leading to the thermal vacuum and the thermal Hilbert space can be written in a closed form for the free massive Klein-Gordon theory in the following way. Let us denote U (θ, λ) = e λH U (θ)e −λH = e λH e Q(θ) e −λH = e Q(θ,λ) , (75) where H is given in (71), Q(θ) is the generator of the Bogoliubov transformation for thermofield dynamics noted in (30) and Q(θ, λ) denotes the generator of the Bogoliubov transformation for an arbitrary σ. The three exponents in (75) can be combined using the Baker-Cambell-Hausdorff formula leading to It is clear that the generator Q(θ, λ) is manifestly anti-Hermitian with respect to the modified product, namely, so that (62) (or (68)) follows. Furthermore, it also follows from (76) that (see also (34)) where H is the Hamiltonian for the doubled theory given in (28). As in (35)-(37) we can decompose the doublet of fields into positive and negative frequency parts as with p 0 = ω p in the exponent and It is now straightforward to check that they transform under a Bogoliubov transformation as where U (θ(p), λ) = cosh θ(p) −e λωp sinh θ(p) −e −λωp sinh θ(p) cosh θ(p) . (84) The 2×2 matrix (83) has the same form as (50), which appears in the factorization of the propagator, except that the off-diagonal exponent has ω p > 0 instead of p 0 which can be positive as well as negative (there is no p 0 dependence, only dependence on p on the left hand side in (82)). This is different from the case of thermofield dynamics (see, for example, (38) and (43)). VI. CONCLUSION In this paper we have shown systematically that an operator description for a theory defined on a real time path with an arbitrary σ does indeed exist. We have constructed the Bogoliubov transformation which connects the doubled vacuum state at zero temperature to the generalized thermal vacuum. We have pointed out that, for any value of σ = β 2 (0 ≤ σ ≤ β), the Hilbert space develops a natural modified inner product. Only for σ = β 2 corresponding to thermofield dynamics does the Hilbert space have the standard Dirac inner product. We have derived the 2 × 2 matrix propagator (Green's function) for the Klein-Gordon theory directly from the expectation value of field operators in this thermal vacuum and have shown that the factorization of the propagator (for arbitrary σ) also follows from the Bogoliubov transformation of the field operators. The factorizing matrix , U (−θ(p), λ), belongs to the group SO(2, 1), but only with the modified inner product. We have shown that the further factorization of U (−θ(p), λ) is intimately related to the product nature (58) of the Bogoliubov transformation operator. In appendix A we give a simple derivation of the λ (or σ) independence of physical ensemble averages in thermal equilibrium and also point out various other features. In appendix B we give an alternative (but equivalent) operator description where operators are redefined but the thermal vacuum is kept as that of TFD. 
Appendix A: λ independence of physical ensemble averages In this appendix we will show that even though the ensemble average (56) in the operator formalism appears to have a λ dependence, physical ensemble averages (thermal correlations of the original fields of the theory) are independent of λ. To see this, we note that using (26) we can write (56) in two equivalent ways It follows now that if the operators A 1 , A 2 , · · · , A n belong to the original theory, then each of them would commute with H and using the second form of (A1) we can write where the λ dependence completely cancels out. This shows that the physical thermal correlation functions are independent of the parameter λ. The same conclusion also follows if all the operators A 1 , A 2 , · · · , A n belong to the doubled (auxiliary) space (namely, if they are all "tilde" operators). In this case, H will commute with each of them and using the first form of (A1) we obtain (A2). This shows that thermal correlations involving only the "tilde" operators are also independent of the parameter λ. The difficulty comes if some of the operators A 1 , · · · , A n belong to the original space and some to the doubled auxiliary space. In this mixed case, neither H nor H will commute with all the operators in the product. As a result, the two λ dependent factors in (A1) cannot be commuted past all the operators to cancel out. Therefore, the mixed thermal correlation functions will depend on the parameter λ. We have already seen this explicitly in (5) and (6) where we have noted that the diagonal elements of the Green's function are independent of λ while the off-diagonal elements are λ dependent. Appendix B: Alternative description In this appendix we will briefly describe an alternative but equivalent operatorial formalism where the operators of the doubled theory are redefined while the thermal vacuum state is taken to be the TFD vacuum. Let us start by noting that e −λ H a(p)e λ H = a(p), e −λ H a(p)e λ H = e λωp a(p). (B1) It follows that if we identify a 1 (p) = a(p), a 2 (p) = e λωp a(p), their adjoints (with respect to the modified inner product) have the forms a ‡ 1 (p) = a † (p), a ‡ 2 (p) = e −λωp a † (p). With the help of these, we can now define two new scalar field operators φ 1 (x), φ 2 (x) in terms of the old scalar field operators φ(x), φ(x) as which have the plane wave expansions Here f (±) p (x) denote the positive and negative frequency plane wave solutions defined in (80). The factors e ∓λ H in the ensemble average (A1) (in the second line) can now be absorbed into a redefinition of the operators into the new operators so that the ensemble average becomes an expectation value of the redefined operators in the standard TFD thermal vacuum. The well known generator of Bogoliubov transformation of thermofield dynamics (30) can now be written in terms of these new operators as Q(θ) = − d 3 p θ(p)(e −λωp a 2 (p)a 1 (p)−e λωp a ‡ 1 (p)a ‡ 2 (p)), (B7) so that the TFD thermal vacuum |0(β) = U (θ)|0, 0 = e Q(θ) |0, 0 , can now be thought of as consisting of a collection of "1, 2" particles. The calculation of the propagator can be carried out now as was done in section V. For example, let us define the doublet of fields Φ (12)
The Adjacent Yeast Genes ARO4 and HIS7 Carry No Intergenic Region* The region between the open reading frames of the adjacent yeast genes ARO4 and HIS7 consists of 417 base pairs (bp). Termination of ARO4 transcription and initiation of HIS7 transcription have to take place within this interval, because both genes are transcribed in the same direction. We show that the ARO4 terminator and the HIS7 promoter are spatially separated, nonoverlapping units. The ARO4 terminator includes 84 bp of the ARO4 3′-untranslated region with several redundant ARO4 3′ end processing signals. Deletion of the ARO4 terminator reduces but does not completely shut down its expression. The adjacent region of 40 bp is neither required for correct ARO4 3′ end formation nor for HIS7 initiation but contains the nucleotides corresponding to the wild type mRNA 3′ ends. The following 280 bp are required for the HIS7 promoter. Replacement of the housekeeping ARO4 promoter by the stronger ACT1 promoter leads to reduced HIS7 expression due to transcriptional interference. This underlines the compactness of the yeast genome, which carries virtually no intergenic regions between adjacent genes. The sequencing of the genome of the budding yeast Saccharomyces cerevisiae has revealed the remarkable compactness of its genome. This results from the short size of the regions between the open reading frames. Open reading frames of divergent promoters are on average only 618 bp apart. Open reading frames of convergent terminators are separated by 326 bp on average. Arrangements with a terminator-promoter combination are spaced by 517 bp. Assuming nonoverlapping units, this leads to a deduced average size of 309 bp for a promoter and 163 bp for a terminator (1). For the regulated expression of the yeast genome, it is important that transcription of an upstream gene does not interfere with the initiation of transcription of an immediately downstream gene. The goal of this study was to test, for a concrete terminator-promoter combination in yeast, the size of the terminator and the promoter. In addition, we wanted to know whether both units are overlapping or whether there is an intergenic spacer region between the terminator and the promoter. In eukaryotes, the process of transcriptional termination is poorly understood. A number of different assays have been developed to measure termination in RNAP II genes, including poly(A) site competition, transcriptional interference (2), and reverse transcription-polymerase chain reaction. Using these methods, termination sequences in mammals have been identified between two closely spaced genes, the human complement genes C2 and factor B. A binding site has been identified in the termination signal that binds the protein MAZ. It seems plausible that the proven ability of MAZ to bend DNA may relate to the RNAP II termination process (3). In S. cerevisiae, in vitro studies with the ADH2 and GAL7 genes led to the hypothesis that the coupling of an RNAP II pause site to a functional polyadenylation signal results in transcription termination (4). In yeast, as in all eukaryotes, the 3′ ends of mRNAs are generated by a processing reaction that takes place in the cell nucleus (for review see Refs. 5–8). The mRNA precursors first lose a 3′-terminal noncoding fragment by endonucleolytic cleavage and then receive a poly(A) tail by polymerization of AMP. In higher eukaryotes two sequence elements define a poly(A) site.
One is the almost invariant AAUAAA hexanucleotide, about 15 nucleotides upstream of the poly(A) addition site. The second signal, located downstream of the poly(A) site, is either a run of Us or a poorly defined GU-rich sequence (6). In yeast, however, the situation seems to be more complex. A highly conserved consensus sequence as found in higher eukaryotes is lacking. Sequences that have been identified to play an important role in mRNA 3Ј end formation of one gene are often absent or nonfunctional in other genes. In general, the yeast mRNA 3Ј end formation signals seem to be more degenerate, redundant, and disperse (8). In yeast, the 3Ј processing signal has been proposed to consist of three elements (9). The far upstream element directs the efficiency of the processing site, whereas the near upstream element is required for the positioning of the poly(A) site. The third element is the poly(A) site itself. Two classes of far upstream elements have been discussed (10). An efficient, unidirectional class contains the T 5 TA sequence motif proposed by Henikoff and Cohen (11) or derivatives thereof. A less efficient class functions in both orientations and is defined by the tripartite TAG . . . TA(T)GTA . . . TTT motif and its derivatives originally proposed by Zaret and Sherman (12). For positioning elements a TTAAGAAC motif, an A 8 stretch or the canonical AATAAA element have been discussed (9). Little is known about the exact sequence requirement for the poly(A) site, but CA n or TA n sequences within the permissive distance appear to be preferred (13). Numerous studies have been performed in yeast where either individual promoters or individual mRNA 3Ј end forma-tion signals have been analyzed in various test systems. It is hardly known how different mRNA 3Ј end formation signals affect different promoters in a single test system. Therefore, the aim of this study was to investigate effects on a mRNA 3Ј end formation signal and a promoter simultaneously. The ARO4 gene encodes the tyrosine-regulated 3-deoxy-Darabino-heptulosonate-7-phosphate synthase catalyzing the first step in the Shikimate pathway (14). Its poly(A) site contains the tripartite TAG . . . TATGTA . . . TTT motif proposed by Zaret and Sherman (12) and belongs to the class of bidirectionally functional poly(A) sites (10). The HIS7 gene is located just downstream of the ARO4 gene on yeast chromosome II. It encodes the bifunctional glutamine amidotransferase:cyclase catalyzing the fifth and sixth step in the de novo histidine biosynthesis (15). Basal transcription of HIS7 requires the global factor Abf1p, and it is activated under conditions of amino acid starvation and adenine starvation conditions by Gcn4p and Bas1/2p, respectively (16). The two genes are transcribed in the same direction with a normal spacing of 417 bp between the open reading frames. We show that the ARO4 terminator and the HIS7 promoter are nonoverlapping, spatially separated units. The signals directing proper ARO4 3Ј end formation are spread over 84 bp of the ARO4 3Ј-untranslated region. Various point mutations have no effect on the ability of ARO4 3Ј end formation, suggesting the presence of multiple redundant signals. Deletion of the complete ARO4 3Ј end processing signal reduces but does not completely shut down ARO4 expression. Replacement of the housekeeping ARO4 promoter by the efficiently transcribing ACT1 promoter leads to reduced HIS7 expression due to transcriptional interference between these two genes. 
Because 280 bp are required for the HIS7 promoter, there are only about 40 bp between these two genes where the actual poly(A) addition sites are located. Construction of the Internal Deletions of the ARO4/HIS7 Intergenic Region-The various internal deletion mutations of the ARO4/HIS7 intergenic region were constructed by Bal31 exonuclease treatment of the linearized plasmid pME947. Plasmid pME947 was constructed based on the pGEM-7Zf (ϩ) plasmid (Promega, Madison, WI) by insertion of the 1.9-kilobase SphI/BamHI fragment of the ARO4/HIS7 locus with a created ClaI site at position Ϫ405 relative to the translational start codon of the HIS7 gene. The plasmid was linearized either with ClaI or EcoRV and subsequently treated with Bal31 exonuclease to obtain 5Ј and 3Ј deletions of the region, respectively. After cloning of a ClaI/HindIII/EcoRV adapter, appropriate 5Ј and 3Ј deletion fragments were combined to obtain the internal deletions of the HIS7 promoter. This resulted in the plasmids pME951 to pME956 (3Ј deletions), pME966 to pME971 (5Ј deletions), and pME991 to pME995, pME997, pME999, and pME1001 (internal deletions). Construction of the Test Gene-Plasmid pME800 was constructed on the basis of pSP64 (Promega, Madison, WI) to obtain an integrative vector. Vector pSP64 was modified by cloning the 1.1-kilobase HindIII fragment of URA3 into the XhoI site, by inserting the 1.1-kilobase BamHI fragment of pME729 (24) into the BamHI site of the polylinker and by introducing a multiple cloning site (double-stranded OLCE1-OLCE2) into the ClaI site of the 1.1-kilobase BamHI fragment. The different mutated alleles of the ARO4/HIS7 intergenic region were amplified by using OLCS26 and OLCS27 as primers and the plasmids pME951 to pME956 (3Ј deletions), pME966 to pME971 (5Ј deletions) and pME991 to pME995, pME997, pME999 and pME1001 (internal deletions) as templates in a PCR reaction and cloned into the multiple cloning site of plasmid pME800 after restriction with KpnI and BglII. Site-directed Mutagenesis of the ARO4/HIS7 Intergenic Region-Site-directed mutations in the ARO4/HIS7 intergenic region were introduced using the PCR technique (25). Oligonucleotides carrying specific mutations were OLCS36 to OLCS40. These oligonucleotides were used in a PCR reaction together with OLCS27 as second primers and pME947-DNA as template. The final PCR products were cut with KpnI and BglII and cloned into plasmid pME800. ␤-Galactosidase Activity Assay-␤-Galactosidase activities were determined by using permeabilized yeast cells and the fluorogenic substrate 4-methylumbelliferyl-␤-D-galactoside as described earlier (15). Routinely, yeast cells were cultivated in MV minimal medium overnight, diluted to an optical density of approximately 0.5 at 546 nm and cultivated for another 6 h before assay. One unit of ␤-galactosidase activity is defined as 1 nmol 4-methyl-umbelliferone h Ϫ1 ml Ϫ1 A 546 Ϫ1 . The given values are the means of at least four independent cultures. The standard errors of the means were less than 20%. DAHP Synthase Activity Assay-3-Deoxy-D-arabino-heptulosonate-7-phospate synthase activities were determined as described in Takahashi and Chan (26). Routinely, yeast was cultivated in MV minimal medium to an optical density of approximately 2 at A 546 , harvested by centrifugation and washed three times with potassium phosphate buffer (50 mM potassium phosphate, pH 7.6, 0.1 mM phenylmethylsulfonyl fluoride, 0.1 mM EDTA, 1 mM dithiothreitol). 
The cells were resuspended in 5 ml of potassium phosphate buffer, disrupted in a French press (Aminco, Silver Spring, MD), and the cell debris was removed by centrifugation. Finally, the supernatant was applied to a PD 10 column (Pharmacia Biotech Inc. Uppsala, Sweden). 50 l of crude cell extract was incubated for 10 min in 50 l of erythrose-4-phosphate (8 mM), 40 l of phosphoenolepyroat (10 mM), 50 l of 0.4 M potassium phosphate buffer, and 60 l of H 2 O. The enzymatic reaction was stopped by adding 50 l of trichloroacetic acid (20%). 100 l of the reaction solution was added to 100 l of 20 mM NaIO 4 in 0.25 M H 2 SO 4 and incubated for 30 min at 37°C. This reaction was stopped by adding 200 l of NaAsO 2 (2% in 0.5 M HCl). After the solution turned colorless, 800 l thiobarbituric acid (0.3%) was added, and the mixture was boiled for 10 min. The absorption of the product was measured at 550 nm. Isolation of Total RNA from S. cerevisiae-Yeast cells were grown overnight in a 100-ml culture to an optical density at 546 nm of about 2. The cells were spun at 6000 ϫ g for 5 min on one-fifth volume of ice and resuspended in 6 ml of PLE buffer (100 mM PIPES, 100 mM LiCl, 1 mM EDTA, pH 7.4). After centrifugation at 6000 ϫ g for 5 min at 4°C, the cells were resuspended in 300 l of ice-cold PLE buffer and 100 l of ice-cold dichloromethane-saturated phenol equilibrated with PLE buffer. Diethylpyrocarbonate (1%, v/v) was added to inactivate RNases. Sterilized glass beads 0.45 mm in diameter were added, and the cells were disrupted by vigorous shaking for six 15 s periods with cooling on ice in between. Nucleic acids were extracted once with 1 volume of dichloromethane-saturated phenol equilibrated with PLE buffer, 0.05 g of bentonite, and 1% (w/v) sodium dodecyl sulfate and twice with 1 volume of dichloromethane-saturated phenol equilibrated with PLE buffer. Total RNA was precipitated by 1.5 volumes of ice-cold isopropanol, and the concentration was determined spectrophotometrically. The precipitated RNA was stored at Ϫ20°C. RNA Analysis-For Northern (RNA) hybridization experiments, approximately 10 g of total RNA was precipitated, resuspended, and denatured in 30 l of sample buffer (50%, v/v, deionized formamide, 6% v/v formaldehyde, 1 ϫ loading buffer, 10% [v/v] 10 mM Tris-1 mM EDTA [TE] buffer) for 15 min at 65°C and put on ice. The RNA was separated on a denaturing formaldehyde agarose gel. The 1.4% (w/v) agarose gel (3%, v/v, formaldehyde, 20 mM MOPS, 5 mM sodium acetate, 1 mM EDTA) was run for 3 h at 60 V in a buffer containing 20 mM MOPS, 5 mM sodium acetate, and 1 mM EDTA. The gel was soaked twice in 25 mM Na phosphate buffer for 20 min each time, and the RNA was transferred onto a nylon membrane (Amersham, Buckinghamshire, UK) by electroblotting (2 A, 50 V) for 3 h in 25 mM Na phosphate buffer. After washing in 2 ϫ SSC (1 ϫ SSC is 0.15 M NaCl plus 0.15 M sodium citrate), drying on 3MM paper, and cross-linking under UV light (254 nm) for 5 min, the membrane with the bound RNA was hybridized at 42°C with a labeled fragment for 24 h in 50 ml of a hybridization mixture (50%, v/v, formamide, 50 mM sodium phosphate, pH 6.5), 800 mM NaCl, 1 mM EDTA, 0.5% sodium dodecyl sulfate, 10 ϫ Denhardt's solution, 150 g of calf thymus DNA per ml, 500 g of torula yeast RNA/ml). The fragment, representing the 440-bp MluI/XhoI DNA element of the ACT1 5Ј region, was randomly radiolabeled as described previously (27). The RNA was visualized by autoradiography. 
Band intensities from autoradiographs were quantified with a PhosphorImager (Molecular Dynamics, Sunnyvale, CA). RESULTS The ARO4 Terminator and the HIS7 Promoter Are Nonoverlapping and Spatially Separated Units-The spacing between the open reading frames of the ARO4 gene and the HIS7 gene consists of 417 bp. We wanted to know whether deletions within this region result in interference between ARO4 transcription and the initiation of transcription of the HIS7 promoter. Therefore a deletion analysis of the ARO4/HIS7 intergenic region was performed. ARO4 expression was determined by measuring DAHP synthase activity, which is the gene product. HIS7 transcription was monitored by determining ␤-galactosidase activities of strains carrying respective translational HIS7-LacZ fusions integrated in single copies at the ARO4/ HIS7 locus (Fig. 1). All strains had a gcn4-101 genetic background to avoid interference with the general control of amino acid biosynthesis in yeast. Deletion of large parts of the ARO4 3Ј-untranslated region in the yeast strains RH1768 (⌬ Ϫ405/Ϫ245 relative to the HIS7 AUG start codon) and RH1769 (⌬ Ϫ405/Ϫ280) (Fig. 1) including the mapped poly(A) sites (14) and the tripartite Zaret/ Sherman sequence element (12, 10) reduced ARO4 activity to 37 and 41%, respectively, compared with wild type activity. Smaller deletions of 52 bp in RH1833, 28 bp in RH1834, 12 bp in RH1835, or 20 bp in RH1836 moderately reduced ARO4 expression leading to between 55 and 75% of wild type activity. All these deletions were within the first 140 bp of the ARO4 3Ј-untranslated region and had no effect on HIS7 expression. The four strains RH1837, RH1839, RH1840, and RH1842 carry various deletions between 13 and 42 bp in length, all located more than 140 bp downstream of the end of the ARO4 open reading frame within the HIS7 promoter. None of these four deletions affected ARO4 expression, but all of them reduced HIS7 expression. In summary, any deletion within the first 140 bp of the ARO4 3Ј-untranslated region had a significant effect on ARO4 expression but did not affect HIS7 transcription. By contrast, all deletions within the next 280 bp affected HIS7 transcription, but none of them had any effect on ARO4 expression. These results strongly suggest that the ARO4 termination sequences are located within the first 140 bp of the untranslated region between the ARO4 and the HIS7 genes and do not overlap with the HIS7 promoter. Therefore, the ARO4 termination sequences and the HIS7 promoter sequences are located within spatially clearly separated units. A Region of Maximal 40 bp between ARO4 and HIS7 Is Not Necessary for Efficient ARO4 mRNA 3Ј End Formation nor for HIS7 Promoter Activity but Contains the ARO4 Wild type mRNA 3Ј End Positions-To define whether there is any intergenic spacer region between ARO4 and HIS7, the sequences required for ARO4 mRNA 3Ј end formation were analyzed more precisely. We tested ARO4 3Ј end modifications in an artificial test system that we had established earlier (28). The ARO4 polyadenylation element represents the class of yeast 3Ј processing sites which function in both orientations in an in vivo test system (10). The 3Ј-untranslated region of the ARO4 gene contains the tripartite sequence motif TAG . . . TATGTA . . . TTT, which was proposed to represent a processing consensus element in yeast (Fig. 2) (12). 
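The tripartite element described above lends itself to a simple computational screen. The sketch below (Python, added here purely for illustration and not part of the original study) searches a 3′-untranslated sequence for a TAG ... TATGTA ... TTT arrangement; the spacer-length limits and the example sequence are assumptions made only for this sketch.

```python
# Illustrative only: a simple scan for a tripartite Zaret/Sherman-type element
# (TAG ... TATGTA ... TTT) in a 3'-UTR sequence. The maximum spacer lengths and
# the example sequence are invented for this sketch, not taken from the paper.
import re

# allow 0-30 nt between the three parts (assumed window)
ZS_PATTERN = re.compile(r"TAG.{0,30}?TATGTA.{0,30}?TTT")

def find_zs_elements(utr_sequence: str):
    """Return (start, end, match) tuples for candidate tripartite elements."""
    seq = utr_sequence.upper().replace("U", "T")
    return [(m.start(), m.end(), m.group()) for m in ZS_PATTERN.finditer(seq)]

if __name__ == "__main__":
    example_utr = "AAATAGCTTCATATGTACATATTTGGC"   # hypothetical sequence
    for start, end, hit in find_zs_elements(example_utr):
        print(f"candidate element at {start}-{end}: {hit}")
```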
Modifications of the ARO4 3Ј- The ARO4-derived enzyme activity was measured as DAHP synthase activity and is shown in shaded boxes, whereas the HIS7-encoded enzyme activity was measured as ␤-galactosidase activity from corresponding HIS7-lacZ fusions and is indicated by black boxes. Numbers are relative values, with the specific wild type enzyme activity for the ARO4-encoded enzyme DAHP synthase and the wild type activity for the HIS7-lacZ fusion-encoded ␤-galactosidase as 100%. Each number represents an average value of at least six measurements with a standard deviation of not more than 15%. untranslated region included 3Ј and 5Ј end, internal deletions, and specific point mutations inserted into the complete element (Fig. 3). The modified ARO4 3Ј end elements were cloned into the multiple cloning site of the test gene consisting of the ACT1 promoter and the ADH1 terminator ( Fig. 2) (28). The test gene was integrated into the chromosome at the URA3 locus, thereby avoiding multicopy effects. The effects of all modifications were analyzed at the transcript level by performing Northern blot analysis. Functional 3Ј processing elements resulted in short truncated transcripts, whereas nonfunctional elements resulted in long readthrough transcripts as schematically drawn in Fig. 2. 3Ј deletion up to position Ϫ321 relative to the A residue of the translational start codon ATG of the HIS7 gene (deletion ⌬ Ϫ321/Ϫ104 in Fig. 4) resulted in a 3Ј processing efficiency (86% truncated transcript) similar to that of the complete wild type ARO4/HIS7 intergenic region (83-86% truncated transcripts). Further deletion to position Ϫ337 completely abolished 3Ј end formation (deletion ⌬ Ϫ337/Ϫ104 in Fig. 4). Therefore the downstream boundary for a completely functional ARO4 3Ј processing element in the test system was located in the Ϫ337 to Ϫ321 region. The mapped 3Ј ends (positions Ϫ311, Ϫ306, and Ϫ283) (14) are located downstream of this boundary suggesting that they are not important for the efficiency of mRNA 3Ј end formation in the test gene. 5Ј deletion of the part containing the ARO4 open reading frame including 12 bp of the 3Ј-untranslated region had no effect on 3Ј end processing (deletion ⌬ Ϫ440/Ϫ405 in Fig. 4). In this construct the TAG part of the tripartite TAG . . . TAT-GTA . . . TTT Zaret/Sherman sequence element was deleted. Any further 5Ј deletion (deletions ⌬ Ϫ405/Ϫ340 to ⌬ Ϫ405/ Ϫ211 in Fig. 4) resulted in the complete loss of ARO4 3Ј end formation. We therefore conclude that no parts of the ARO4 open reading frame are involved in 3Ј end formation and the 5Ј boundary of the 3Ј processing element must be located somewhere downstream of position Ϫ405. This finding was confirmed by analyzing internal deletion constructs of this region. In the deletions ⌬ Ϫ392/Ϫ340 and ⌬ Ϫ337/Ϫ309 3Ј processing activity was reduced to below 10% (Fig. 4), whereas in the deletion ⌬ Ϫ321/Ϫ309 the ability to process 3Ј ends was restored to almost wild type level (77% truncated transcript), substantiating the 3Ј boundary between positions Ϫ337 and Ϫ321. None of the internal deletions downstream of position Ϫ300 affected 3Ј end formation. Therefore the ARO4 3Ј end processing element could be delimited to the 84 bp between positions Ϫ405 and Ϫ321. Any internal deletion within this part leads to a complete loss of proper 3Ј end generation. Interestingly, neither the TAG part of the tripartite Zaret/Sherman sequence element nor the mapped poly(A) sites are within the boundaries of this element. 
In a set of point mutations, the involvement of the tripartite TAG . . . TATGTA . . . TTT Zaret/Sherman sequence in ARO4 3Ј end formation was further analyzed. The first TAG part of the element is identical with the ARO4 stop codon. In mutations mut(TAa) and mut(Tga) (Fig. 4) this element was replaced by one of the alternative stop codons TAA or TGA, respectively. In the mutations mut(agcGT) and mut(⌬TATGT) the middle part was either changed to the sequence AGCGT or deleted, whereas in mutation mut(gTa) the third part was exchanged for the sequence GTA. In mutation mut(agcGT-gTa) both the middle and the third element were mutated. None of these point mutations or small deletions had any effect on ARO4 3Ј end formation in the in vivo test system. We therefore conclude that several redundant 3Ј processing signals must be spread over a maximum of 84 bp between position Ϫ405 (which is 12 bp downstream of the ARO4 stop codon) and position Ϫ321 relative to the HIS7 AUG start codon. Taking into account that the HIS7 promoter reaches approximately to position Ϫ280 relative to the HIS7 start codon (Fig. 1), the intergenic region between the ARO4 and the HIS7 genes consists of 40 bp at most. This region carries all mRNA 3Ј ends that were mapped in vitro (positions Ϫ311, Ϫ306, and Ϫ283). Thus, virtually no intergenic region exists between ARO4 and HIS7 underlining the compactness of the yeast genome. Deletion of the ARO4 Poly(A) Signal Reduces Its Expression-In the deletion (⌬ Ϫ405/Ϫ280) all the sequences required for ARO4 3Ј end formation in the artificial test system were removed. Strain RH1769 carrying this deletion in the untranslated region between the ARO4 and the HIS7 genes showed a decreased ARO4 expression level. ARO4 expression in this strain was about 40% when compared with wild type expression levels (Fig. 1). In contrast, deletion of the ARO4 3Ј end processing signals did not affect the expression of the HIS7 gene located downstream (Fig. 1). We therefore concluded that deletion of the 3Ј processing signals reduces ARO4 expression to about 40% compared with its wild type expression level, indicating the existence of cryptic 3Ј end forming signals. Overexpression of the ARO4 Gene Lacking its 3Ј Processing Signals Shuts Down Expression of the Downstream Located HIS7 Gene-The ARO4 terminator and the HIS7 promoter are separate elements, and deletion of the whole ARO4 terminator does not influence HIS7 expression (Fig. 1). This seemed surprising to us, because theoretically we expected that the role of a terminator is not only to correctly process mRNA 3Ј ends but also to avoid transcriptional interference between two adjacent FIG. 2. In vivo test cassette for either wild type or mutant mRNA 3 processing signals in S. cerevisiae. A, the test cassette consists of the ACT1 promoter fused to the ADH1 terminator. Functional 3Ј processing sites were cloned between the ACT1 promoter and the ADH1 terminator and result in short truncated transcripts, whereas nonfunctional sites result in long readthrough transcripts. Because the complete HIS7 promoter is cloned into the test cassette, a short transcript initiated at this promoter and ending in the ADH1 terminator is expected. B, the primary sequence of the ARO4 3Ј-untranslated region and a part of the open reading frame (in boldface italic type) are shown. The tripartite Zaret/Sherman (ZS) motif TAG . . . TATGTA . . . TTT is a putative consensus element and is underlined and in boldface type. 
The three mapped ARO4 3Ј ends are indicated by black arrows. The numbers correspond to the assignment of position ϩ1 to the A nucleotide of the ATG start codon of the HIS7 gene. genes. Thus, we further investigated the role of the ARO4 terminator for its ability to prevent interference between the transcription of the ARO4 and the HIS7 genes. Replacement of the ARO4 promoter by the ACT1 promoter increased its expression 4-fold and caused a reduction of HIS7 expression to 50% of the wild type expression (Fig. 5). This effect was even more pronounced using the yeast strain RH1815 carrying a 52-bp deletion within the ARO4 3Ј end processing signal reducing ARO4 expression to 70%. In this strain HIS7 activity was slightly reduced to 95% compared with wild type activity. Here, replacement of the ARO4 promoter by the strong ACT1 promoter leading to the yeast strain RH2172 reduced HIS7 activity to 30% of wild type activity. These results indicated that expression of the ARO4 gene under the control of the strong ACT1 promoter at its original chromosomal locus interfered with the initiation of transcription at the downstream located HIS7 promoter and therefore caused a reduction of HIS7 expression. This effect is even more pronounced when simultaneously the ARO4 terminator is lacking. In the ACT1-ARO4 3Ј end formation test gene where the ACT1 promoter is fused to the ARO4/HIS7 intergenic region with only 90 bp of the open reading frame in between, no transcript initiated at the HIS7 promoter could be detected (Fig. 6). Therefore, we tested whether this is due to the strong initiation at the ACT1 promoter and the incomplete 3Ј end formation at the ARO4 polyadenylation site in the ACT1-ARO4 hybrid gene. Two constructs served as controls. In the first construct the ACT1 promoter was destroyed by Bal31 digestion. With no transcript initiated at the strong ACT1 promoter, no interference was expected between the ACT1-ARO4 hybrid transcript and the initiation at the HIS7 promoter. Therefore a short transcript initiated at the HIS7 promoter was expected. In the second construct the strong polyadenylation signal of the GCN4 gene (28) was cloned between the ACT1 promoter and the ARO4/HIS7 intergenic region. In this construct the discrepancy between the strong ACT1 promoter and the weak ARO4 terminator should be abolished, and therefore a transcript initiated at the HIS7 promoter was expected. In a Northern blot experiment with RNA isolated from the yeast strains RH2169 (with inserted GCN4 terminator) and RH2171 (with destroyed ACT1 promoter), a short transcript initiated at the HIS7 promoter could be detected by hybridization with a radiolabeled, 215-bp ADH1 probe. No such transcript was detected using RNA isolated from the yeast strain RH2160 with an intact ACT1 promoter and no inserted GCN4 terminator (Fig. 6). Hybridization of RNA isolated from the yeast strain RH2169 (with inserted GCN4 terminator) with the radiolabeled 524 bp ACT1 probe led to a great amount of ACT1-GCN4 hybrid transcript. The strong ACT1 promoter directed high levels of initiation of transcription and the downstream inserted strong GCN4 terminator resulted in complete termination of transcription. In the strain RH2171 the ACT1 promoter was completely destroyed, because no transcript could be visualized by hybridization of RNA from this strain with the ACT1 probe. 
In the strain RH2160 (wild type ARO4/HIS7 intergenic region) both truncated and readthrough transcripts were present, indicating incomplete processing of the ACT1-ARO4 hybrid mRNA. These results demonstrated that expression of the ACT1-ARO4 hybrid mRNA abolished initiation of transcription at the HIS7 promoter located downstream due to transcriptional interference between these two genes. In summary, deletion of the ARO4 terminator has no effect on HIS7 transcription. Overexpression of the ARO4 gene by the ACT1 promoter reduces HIS7 expression by a factor of two. Simultaneous overexpression of ARO4 and deletion of its ter- minator reduces HIS7 expression to 30% of wild type level. Finally, as shown in Fig. 6B (first lane), a shortened distance between the ACT1 promoter and the ARO4 terminator with just a little part of the ARO4 open reading frame in the ACT1-ARO4 hybrid gene completely shuts down HIS7 expression. Therefore strain RH2160, where the ACT1 promoter is fused to the ARO4-HIS7 intergenic region with only 90 bp of the ARO4 open reading frame in between and integrated into the yeast genome, raises no HIS7 transcript. DISCUSSION This study had three major results. (a) We wanted to know whether the authentic 3Ј end of a gene is indispensable for its expression at its natural chromosomal locus. We found that we can delete the ARO4 3Ј end signal. Therefore, the ARO4 3Ј end signal is not essential but required for efficient ARO4 gene expression. (b) We wanted to know whether the ARO4 3Ј end formation signals can generally block transcriptional interference and guarantee efficient HIS7 expression. We found that 4-fold increased ARO4 expression reduces HIS7 expression by a factor of two. (c) We wanted to know whether in yeast a terminator and an adjacent promoter are overlapping or whether there is intergenic space between two adjacent genes. Our results suggest two independent nonoverlapping units and no intergenic region between ARO4 and HIS7. Part of our analysis concerns the question of how essential the 3Ј end of a gene is for its expression in the natural chro-mosomal environment. The ARO4 3Ј processing signal includes several redundant elements that are located within 84 bp starting about 12 bp downstream of the ARO4 stop codon. Any deletion within this region reduced ARO4 expression to between 35 and 75% when compared with the wild type activity. Interestingly, deletion of the complete ARO4 3Ј end signal reduces ARO4 expression to 41% when compared with wild type but does not completely shut down its expression. Thus, the complete 3Ј end of ARO4 is only important for the efficiency of gene expression but is not essential for gene expression per se. The cell seems to be able to cope with the lack of the ARO4 3Ј end by using cryptic signals within the HIS7 promoter for ARO4 mRNA 3Ј end formation. Furthermore, the effect of enhanced ARO4 transcription on the initiation of the downstream located HIS7 gene was investigated. Small deletions within the 3Ј processing and termination region of the ARO4 gene reduced its expression but had no effect on HIS7 transcription. Even a 52-bp deletion only hardly reduced HIS7 expression compared with the wild type expression level. By contrast, a shortened ACT1-ARO4 hybrid gene, where the ACT1 promoter was directly linked to the 3Ј-untranslated region of the ARO4 gene, with only 90 bp of the open reading frame in between caused total HIS7 promoter occlusion with no detectable transcript initiated at the downstream FIG. 5. 
Effects of the ACT1/ARO4 fusion on HIS7 expression. ␤-Galactosidase activities (black boxes) and DAHP synthase activities (shaded boxes) of the four strains RH1616, RH2174, RH1815, and RH2172 carrying respecive HIS7/lacZ fusion constructs are shown. The strain RH1616 represents the wild type ARO4/HIS7 intergenic region, in the strain RH2174 the ARO4 promoter was replaced by the ACT1 promoter, in the strain RH1815 the sequences of the ARO4/HIS7 intergenic region between positions Ϫ392 and Ϫ340 relative to the HIS7 start codon were deleted, and finally the strain RH2172 was constructed by replacement of the ARO4 promoter by the ACT1 promoter in the strain RH1815. The wild type activity was set to 100%. The numbers indicated represent the average value obtained by at least six measurements. The standard deviation did not exceed 15%. FIG. 6. Northern experiments with different ACT1-ARO4 hybrid genes. The strain RH2160 carries the wild type ARO4/HIS7 intergenic region inserted in the in vivo test cassette. In strain RH2169 the strong 3Ј processing signals of the GCN4 gene were cloned between the ACT1 promoter and the ARO4/HIS7 intergenic region. In strain RH2171 the ACT1 promoter was destroyed by Bal31 digestion. In panel A, the blot was hybridized with a radiolabeled 542-bp fragment of the ACT1 promoter to monitor ACT1-ARO4 hybrid transcripts, whereas in panel B a 215-bp fragment of the ADH1 terminator was used to monitor HIS7-ADH1 transcripts. FIG. 4. Effects of the modifications in the ARO4/HIS7 intergenic region on mRNA 3 end processing. Northern hybridization analysis was performed with total RNA isolated from the wild type and mutated strains. The truncated RNA (T-RNA) and the readthrough RNA (RT-RNA) were visualized with a radiolabeled probe derived from the ACT1 promoter. The wild type ACT1 transcript was visualized with the same probe and was used as control. The 3Ј processing efficiencies were determined by using a PhosphorImager. All values represent the ratio between truncated transcripts and the total amount of transcripts, i.e. T-RNA/(T-RNA ϩ RT-RNA), and each is the average of evaluations of at least three Northern blots. The standard deviation did not exceed 10%. The 3Ј processing efficiency with the wild type ARO4/HIS7 intergenic region was approximately 84% (82-86%). located HIS7 promoter. Expression of the complete ARO4 gene under the control of the ACT1 promoter resulted in 4-fold increased ARO4 expression, and simultaneous HIS7 expression was reduced by a factor of two. This effect was even more pronounced when parts of the ARO4 poly(A) signal were deleted. In conclusion the 3Ј end of a gene is adjusted to its own promoter. Deletion of a poly(A) signal affects the expression of a downstream located gene only if the activity of the upstream promoter is simultaneously increased. The adjustment of the 3Ј end formation signal for a mRNA is necessary to prevent transcriptional interference with the adjacent gene. In some further studies, the mechanism should be investigated in more detail, by which transcriptional interference between neighboring genes is prevented. One remarkable feature of the yeast genome is its compact architecture, resulting from short intergenic regions. Some statistical calculations with the yeast genome revealed an average of 309 bp for a promoter (1). This theoretical value fits well with the observed 280 bp for the HIS7 promoter. The calculated size of an average yeast terminator consists of 163 bp. 
In this study we mapped the ARO4 poly(A) signals within a region of 84 bp starting 12 bp downstream of the ARO4 stop codon. Adding these 12 bp to the poly(A) signal and taking into account that the actual poly(A) addition sites were mapped within 40 bp downstream of the poly(A) signal, the ARO4 terminator consists of 136 bp. Therefore, the theoretically calculated sizes for yeast promoters and terminators fit well with the concrete situation between the open reading frames of the ARO4 and HIS7 genes. Within the 40 bp between the ARO4 3′ end processing signals and the HIS7 promoter, the actual ARO4 3′ ends are located. In conclusion, there is virtually no intergenic region between the ARO4 and HIS7 genes, underlining the compact architecture of the yeast genome.
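The size bookkeeping in the preceding paragraph can be restated as a single sum; the numbers below are taken directly from the text, and the totals are approximate because the promoter boundary is defined only to within a few base pairs:
$$12\ \text{bp} + 84\ \text{bp} + 40\ \text{bp} = 136\ \text{bp (ARO4 terminator)},$$
$$136\ \text{bp} + 280\ \text{bp (HIS7 promoter)} = 416\ \text{bp} \approx 417\ \text{bp separating the two open reading frames}.$$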
Impact of inflammation, emphysema, and smoking cessation on V/Q in mouse models of lung obstruction Background Chronic obstructive pulmonary disease (COPD) is known to greatly affect ventilation (V) and perfusion (Q) of the lung through pathologies such as inflammation and emphysema. However, there is little direct evidence regarding how these pathologies contribute to the V/Q mismatch observed in COPD and models thereof. Also, little is known regarding how smoking cessation affects V/Q relationships after inflammation and airspace enlargement have become established. To this end, we have quantified V/Q on a per-voxel basis using single photon emission computed tomography (SPECT) in mouse models of COPD and lung obstruction. Methods Three distinct murine models were used to investigate the impact of different pathologies on V/Q, as measured by SPECT. Lipopolysaccharide (LPS) was used to produce neutrophilic inflammation, porcine pancreatic elastase (PPE) was used to produce emphysema, and long-term cigarette smoke (CS) exposure and cessation were used to investigate the combination of these pathologies. Results CS exposure resulted in an increase in mononuclear cells and neutrophils, an increase in airspace enlargement, and an increase in V/Q mismatching. The inflammation produced by LPS was more robust and predominantly neutrophilic, compared to that of cigarette smoke; nevertheless, inflammation alone caused V/Q mismatching similar to that seen with long-term CS exposure. The emphysematous lesions caused by PPE administration were also capable of causing V/Q mismatch in the absence of inflammation. Following CS cessation, inflammatory cell levels returned to those of controls and, similarly, V/Q measures returned to normal despite evidence of persistent mild airspace enlargement. Conclusions Both robust inflammation and extensive airspace enlargement, on their own, were capable of producing V/Q mismatch. As CS cessation resulted in a return of V/Q mismatching and inflammatory cell counts to control levels, lung inflammation is likely a major contributor to V/Q mismatch observed in the cigarette smoke exposure model as well as in COPD patients. This return of V/Q mismatching to control values also took place in the presence of mild airspace enlargement, indicating that emphysematous lesions must be of a larger volume before affecting the lung significantly. Early smoking cessation is therefore critical before emphysema has an irreversible impact on gas exchange. Introduction Ventilation (V) and perfusion (Q) are fundamental physiological processes within the lung contributing to gas exchange, and the relationship between these processes is dysfunctional in patients with chronic obstructive pulmonary disease (COPD) [1]. Cigarette smoke (CS) is a primary risk factor for the disease, prolonged exposure to which can lead to airway inflammation, airspace enlargement, and several other pathologies [2], ultimately resulting in irreversible airflow limitation [3,4]. Cessation of cigarette smoking is capable of slowing the progression of COPD but years of cessation are often necessary before improvements to airflow limitation, inflammatory state, infection risk, and cardiovascular comorbidities are seen [5,6]. To better understand the benefits and limitations of smoking cessation, the impact on V/Q requires investigation in the context of the pathologies associated with cigarette smoke exposure. 
V/Q relationships in the lung can be measured by several non-invasive methods but these techniques are not commonly used in clinical practice, aside from the diagnosis of pulmonary embolism; quantification of results, even in clinical research, is rare. Rodríguez-Roisin et al. [7] made use of the multiple inert gas elimination technique (MIGET) to quantitatively demonstrate that V/Q is a sensitive measure of the earliest stages of COPD. A more widely available methodology for V/Q can be performed in nuclear medicine departments utilising single photon emission computed tomography (SPECT) to provide three-dimensional maps of ventilation and perfusion [8,9]. Jogi et al. [10] have shown the ability of this technique to identify early disease and stage disease severity in COPD patients. Further, Suga et al. [11] have quantified the impact of emphysema on V/Q in the lungs of COPD patients. Modelling aspects of COPD in mice provides the means by which to investigate the individual pathologies that make up this heterogeneous disease [12]. Using a methodology adapted from clinical V/Q SPECT to a murine model, our laboratory has confirmed the utility of this technique in measuring changes in V/Q with age [13] and in the context of prolonged cigarette smoke exposure [14]. Understanding the mechanisms behind V/Q mismatch in COPD-associated pathologies is an important step in furthering the ability to diagnose and treat this widespread and burdensome disease. In the current study, the impact of smoking cessation on V/Q mismatching has been investigated. In addition, the contributions of inflammation and airspace enlargement have been examined, by employing simple models for these pathologies, to provide insight into the dysfunction associated with cigarette smoke exposure. We have sought to explore the V/Q relationships associated with cigarette smoke exposure using these models alongside cellular and structural assessments, using both traditional and non-invasive methods. Animals Specific pathogen-free 10-12 week old female BALB/c mice were purchased from Charles River Laboratories (Senneville, QC, Canada). The studies were approved by McMaster University's Animal Research Ethics Board in accordance with the Canadian Council on Animal Care guidelines. Cigarette smoke exposure protocol Mice were exposed to cigarette smoke 5 days/week using a SIU48 whole body exposure system (Promech Lab, Vintrie, Sweden). Details of the exposure protocol have been reported previously [15]. Control animals were exposed to room air only. Following 24 weeks of smoke exposure, mice were divided into two groups, continued smoke exposure and smoke cessation, and studied for 16 weeks. Controls continued to receive room air. Lipopolysaccharide exposure protocol To model neutrophilic lung inflammation, mice intranasally received either 10.5 μg of LPS (Sigma-Aldrich, Oakville, ON, Canada) in 35 μL of sterile phosphatebuffered saline (PBS) or PBS only. Animals were imaged 24 hours post LPS exposure and sacrificed immediately after imaging. Porcine pancreatic elastase exposure protocol To model emphysema, mice intranasally received 4 units of PPE (EPC Inc., Owensville, MO, USA) in 30 μL of sterile PBS or PBS only. After exposure mice were left for a period of 45 days prior to acquisition of data. Imaging protocol and per-voxel image analysis Imaging was performed as previously described [13] with minor modifications. SPECT scans were acquired on an X-SPECT system (Gamma Medica, Northridge, CA, USA). 
Technegas™ and 99m Tc-macroaggregated albumin were used to provide the distributions of V and Q, respectively. CT images were acquired for both SPECT scans, also on the X-SPECT. SPECT and CT images were reconstructed, fused, and co-registered as previously described [13]. A 'Lung' region of interest (ROI) was produced for the ventilation CT images using Amira 5.1 software (Visage Imaging, Andover, MA, USA) and used during co-registration and analysis. V/Q ratios were calculated using normalised V and Q frequencies. To assess emphysema in CT images, volumes of low attenuation (VLA) were calculated by summing the percentage of lung volume less than -400HU. Additional details regarding this threshold and other specifics of the methods used are provided in the Additional file 1. Collection and measurement of specimens Mice were sacrificed at the experimental endpoints indicated in the results. Bronchoalveolar lavage (BAL) was collected and measures of airspace enlargement were made (Pneumometrics software V.1, Hamilton, Ontario, Canada) from haemotoxylin and eosin (H&E) stained lung histology sections as described previously [14]. BAL cell counts and the histological data for animals at 24 weeks CS have been reported previously [14]. Additional detail regarding lung fixation, histological assessment, and BAL quantification is provided in the Additional file 1. Data analysis Data were expressed as the mean ± SEM. Statistical significance was determined by an unpaired, two-tailed t-test in Prism (Graphpad Software Inc, La Jolla, CA, USA) when comparing age-matched experimental groups. For cessation-related data, a one-way ANOVA with Tukey post-hoc test was performed. p < 0.05 was considered statistically significant for all statistical tests. The number of mice studied is described in Table one, found in the Additional file 1. Results Cigarette smoke exposure caused V/Q mismatch, inflammation, and airspace enlargement Mice were exposed to CS for 24 weeks to establish the degree of V/Q mismatching and the lung conditions in which mismatching takes place. Exposure to cigarette smoke for a period of 24 weeks elicited significant V/Q mismatching but no density changes were observed in CT images compared to controls ( Figure 1A). Quantification of log(V/Q) curves ( Figure 1B) yielded a significant decrease in the mean and a significant increase in the standard deviation of the data ( Figure 1C&D). Analysis of BAL fluid demonstrated that 24 weeks of smoke exposure caused robust inflammation, with a total cell count of 3.3 ± 0.5 ×10 6 compared to 0.8 ± 0.1 ×10 6 for controls. Similarly, increases were observed for both mononuclear cells and neutrophils ( Figure 2A). A shift towards greater airspace size was observed histologically in the distribution of airspace area ( Figure 2B) and statistically confirmed by quantification of the airspace area beyond the control 75 th percentile (data not shown). In addition, a decrease of the number of airspaces per unit area ( Figure 2C) was seen. Thus, CS exposure caused V/Q mismatch in the context of both inflammation and a small but significant degree of airspace enlargement. Inflammation alone caused V/Q mismatch The effect of inflammation on V/Q mismatch was investigated by exposing mice to LPS. LPS exposure produced a total cell count of 5.3 ± 0.4 ×10 6 compared to 0.7 ± 0.1×10 6 for control animals; this increase was almost entirely neutrophilic ( Figure 3). 
The increased inflammation associated with LPS was also apparent in CT images, as depicted by peribronchial increases in density ( Figure 4A). The log(V/Q) distribution demonstrated that this inflammation was capable of altering lung function ( Figure 4B) and significant changes were observed in both the mean and standard deviation of these data ( Figure 4C and D). Thus, neutrophilic inflammation alone was capable of causing V/Q mismatch. Airspace enlargement alone caused V/Q mismatch The role of airspace enlargement in V/Q mismatching was next investigated as a major pathology associated with COPD. To produce airspace enlargement greater than the levels observed after exposure to cigarette smoke, mice were exposed to PPE. Emphysema-like lesions were readily apparent in the CT images ( Figure 5A). While the control group was described by a bimodal distribution representing air-and tissue-filled regions within the lung segmentation, the PPE group showed a large leftward shift towards lower density values, indicating a greater extent of air within the lung ( Figure 5B). The lung ROI volume analysed was significantly greater in PPE-exposed animals than in controls ( Figure 5C). When the percentage of volume below -400HU was calculated, a threshold describing emphysematous lesions, a significant increase was found in PPE animals ( Figure 5D) with 21.9 ± 2.4% of the lung volume below -400HU compared to 0.6 ± 0.1% in controls. Log(V/Q) measurements were affected by the altered lung structure produced by PPE exposure, though no consistent pattern was observed relating emphysematous lesions to mismatched V/Q ( Figure 6A). A broadening of the log(V/Q) distribution was seen ( Figure 6B) and was further described by a significant decrease in the mean and an increase in the standard deviation ( Figure 6C and D). No difference was observed in BAL inflammatory levels between control and PPE exposed mice at the experimental endpoint (data not shown). Therefore, emphysematous lesions were capable of producing V/Q mismatch, which appeared to be distributed throughout the lung. Cigarette smoke cessation resolved V/Q mismatch Cessation of cigarette smoke exposure was next examined to determine the relative roles of inflammation and airspace enlargement to the V/Q mismatching observed after 24 weeks of exposure. Following 16 weeks of cessation, significant decreases were observed in the BAL total cell count, as well as within the mononuclear and neutrophil compartments, bringing these levels back to those observed in controls ( Figure 7A). However, the lungs of smoking cessation animals still showed evidence of bronchial associated lymphoid tissue ( Figure 7B); these immune structures were present after 24 weeks of smoke exposure and did not resolve over this period of cessation. Histological analysis of airspace enlargement demonstrated that continuing smoke and cessation groups had similar airspace area distribution profiles ( Figure 7C). The percentage of area above the control 75 th percentile was significantly increased in both of these groups compared to controls (data not shown). Similarly, the decrease in the number of airspaces per unit area ( Figure 7D), and an increase in the area to perimeter ratio ( Figure 7E), confirmed that the airspace enlargement seen at 24 weeks was still apparent after 16 weeks of cessation. 
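The two per-voxel summary measures reported throughout these Results — the percentage volume of low attenuation (%VLA, voxels below −400 HU within the lung ROI) and the mean and standard deviation of the log(V/Q) distribution obtained from normalised V and Q frequencies — can be sketched as follows. This is an illustrative outline rather than the authors' analysis code: the array names, the base-10 logarithm, and the small cutoff used to exclude voxels with no signal are assumptions made for the sketch.

```python
# A minimal sketch (not the authors' code) of the two per-voxel summary measures
# used in the Results: %VLA (fraction of lung-ROI voxels below -400 HU) and the
# mean and standard deviation of the log(V/Q) distribution computed from
# normalised V and Q SPECT volumes. Array names and cutoffs are assumptions.
import numpy as np

def percent_vla(ct_hu: np.ndarray, lung_mask: np.ndarray, threshold_hu: float = -400.0) -> float:
    """Percentage of lung-ROI voxels with CT density below the threshold."""
    lung_voxels = ct_hu[lung_mask]
    return 100.0 * np.count_nonzero(lung_voxels < threshold_hu) / lung_voxels.size

def log_vq_stats(v: np.ndarray, q: np.ndarray, lung_mask: np.ndarray, min_frequency: float = 1e-6):
    """Mean and SD of log10(V/Q) over lung voxels with signal in both scans."""
    v_roi = v[lung_mask].astype(float)
    q_roi = q[lung_mask].astype(float)
    # normalise each distribution so V and Q are on a comparable per-voxel scale
    v_norm = v_roi / v_roi.sum()
    q_norm = q_roi / q_roi.sum()
    keep = (v_norm > min_frequency) & (q_norm > min_frequency)  # assumed exclusion of empty voxels
    log_vq = np.log10(v_norm[keep] / q_norm[keep])
    return log_vq.mean(), log_vq.std()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct = rng.normal(-550.0, 150.0, size=(32, 32, 32))   # synthetic CT volume (HU)
    mask = np.ones_like(ct, dtype=bool)                 # placeholder lung ROI
    v_img = rng.poisson(50, size=ct.shape)
    q_img = rng.poisson(50, size=ct.shape)
    print(f"%VLA = {percent_vla(ct, mask):.1f}")
    mean, sd = log_vq_stats(v_img, q_img, mask)
    print(f"log(V/Q): mean = {mean:.3f}, SD = {sd:.3f}")
```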
There was no discernible difference in log(V/Q) distributions between control animals and those that stopped smoking after 24 weeks of cigarette smoke exposure while Figure 2 Inflammation and airspace enlargement after 24 weeks smoke exposure. A BAL total cells, mononuclear cells, and neutrophils at 24 weeks. B Average airspace area distributions by logarithmic bins of airspace size for 24 weeks, describing whole slice histology. C Average airspaces per unit area. D Average area to perimeter ratio produced from analysis of whole slice histology at 24. **p < 0.01 by two-tailed t-test compared to age-matched controls. continuing smokers maintained the altered log(V/Q) distribution ( Figure 8A&B). With smoking cessation a significant increase of the mean log(V/Q) value, from −0.08 ± 0.01 after 24 weeks smoke exposure to −0.03 ± 0.01 after 16 weeks of smoking cessation, and a significant decrease of the standard deviation of this data, from 0.38 ± 0.02 to 0.33 ± 0.02 after cessation, was observed. Thus, cessation for 16 weeks returned V/Q mismatching back to control values. Discussion Ventilation and perfusion of the lung can be compromised in COPD, and the capability of matching these processes dysregulated. The objective of the current study was to investigate the V/Q perturbations associated with two of the major pathologies associated with COPD using mouse models of neutrophilic inflammation and emphysema. Further, the impact of smoking cessation was described and evidence gathered regarding the relative roles of inflammation and airspace enlargement in V/Q mismatching. The mouse models employed in the studies presented were relatively simple in nature to allow for effective interpretation of results. These models, utilising LPS, PPE, and cigarette smoke, are all well-established and the impact of these exposures on resistance to airflow, the immune system, and other aspects of the lung have been reviewed previously [12]. Investigation of V/Q relationships in these Density-based analysis of PPE imaging data. A Representative axial, sagittal, and coronal CT slices from PPE-exposed and age-matched controls. B Average Hounsfield unit (HU) density distributions for control and PPE-exposed groups. C Volume of the lung region of interest (ROI). D Quantification of percentage of volume with density values less than -400HU, A.K.A. percentage volume of low attenuation (%VLA), signifying severe airspace enlargement. **p < 0.01, ***p < 0.001 by two-tailed t-test compared to age-matched controls. models adds to the understanding of the consequences of pathological disruption on the potential for gas exchange. The LPS and PPE models, causing inflammation and airspace enlargement, respectively, were used as examples of severe pathology, so the V/Q mismatching observed was not surprising. It is important to note that the extent and distribution of pathology associated with these administered reagents does not necessarily reflect the pathologies found in COPD, but demonstrate that each causes V/Q mismatching. Cigarette smoke, on the other hand, caused less pronounced inflammation and only subtle airspace enlargement but nevertheless caused V/Q mismatching similar to that observed in the other models employed. A seminal article by Wright and Sun [16] investigated long-term smoking cessation in guinea pigs and found that airspace enlargement persists in ex-smokers while pulmonary function increases over their smoking counterparts. 
Similarly, we found that smoking cessation resulted in a return to normal lung function, as measured by V/Q relationships, and decreased inflammation. However, it has been established that other pathological markers, such as bronchial-associated lymphoid tissue, remain after smoking cessation [17]. Likewise, the airspace enlargement present after 24 weeks of cigarette smoke exposure persisted after smoking cessation, but these emphysematous lesions caused by cigarette smoke exposure were mild compared to elastase induced airspace enlargement and likely were not sufficient enough to contribute to V/Q mismatch. It is possible that the resolution of the imaging methodology was unable to detect the mismatch caused by these small, persistent structural changes; however, if airspace enlargement were to continue, V/Q mismatch and impaired gas exchange would eventually ensue, indicating that smoking cessation in human patients is critical before emphysematous lesions are present; at these early stages of disease the V/Q mismatch from inflammation could resolve leaving the gas exchange capabilities of the lung largely intact. Work by Suga et al. [11] has begun to explore the V/Q relationships in COPD patients with advanced emphysema but our results suggest that changes in V/Q may not be apparent, due to airspace enlargement alone, until this pathology has progressed substantially as evidenced by the lack of V/Q mismatching in the presence of mild emphysema in cessation mice. Also of interest, the relationship between volumes of low x-ray attenuation and V/Q was not apparent in the comparison of log(V/Q) images to CT images in PPE-exposed animals. It is possible that regions neighbouring emphysematous volumes are unable to function properly, leading to the V/Q mismatch observed. This warrants further investigation and V/Q SPECT/CT provides the necessary tools to address this concern. It is now understood that emphysema progression continues after smoking cessation [18,19], so development of methods that can be used clinically to track and understand this pathology are paramount. The impact of inflammation on V/Q status is also an important topic that is not yet well understood. While LPS caused a greater inflammatory reaction than cigarette smoke, as observed in both BAL measurements and CT images, it did not elicit a V/Q disturbance greater than that of cigarette smoke. It is likely that the distribution of this inflammation is an important factor, especially as it pertains to the small airways; constriction of the small airways is undoubtedly heterogeneous and would lead to heterogeneous ventilation patterns. As airflow resistance is inversely proportionate to the radius of the airway to the fourth power, as described by Poiseuille's equation, even slight changes in the lumen of small airways could alter the distribution of ventilation and impact V/Q relationships. Investigations by Gaschler et al. [20] demonstrated that mucus secretion is not present within the small airways after 8 weeks of cigarette smoke exposure, but there is thickening of the epithelial layer [21]. V/Q mismatching is present after 8 weeks smoke exposure in this model [14], but further investigation into the mechanisms by which inflammation could affect airflow in this manner is required. 
While LPS-derived inflammation caused V/Q mismatch, likely through airflow obstruction, it is important to note that cigarette smoke contains additional components, such as nitric oxide, that could interfere with vascular mechanisms, such as hypoxic vasoconstriction, leading to inadequate matching of perfusion to ventilation [22,23]. Thus, the V/Q mismatch observed in this model of cigarette smoke exposure is likely dependent on both inflammation and an alteration in perfusion, though greater investigation is still necessary. In comparison to clinical findings, our data are consistent with those previously reported by Rodríguez-Roisin et al. [7] using MIGET, in which V/Q abnormalities were detectable at GOLD stage I and mismatching increased with GOLD staging severity. The authors also provided evidence that the V/Q abnormalities seen in GOLD stage I were associated with smaller airways, alveolar airspaces, and blood vessels. Our data suggest that inflammation could play a large role in the V/Q mismatching observed in early COPD, while other pathologies, such as emphysema and small airway fibrosis, become a principal cause of V/Q mismatching in the later stages of COPD. SPECT V/Q has previously been shown, through work by Petersson et al. [24], to closely parallel MIGET results such as those described above, but it is important to note that our protocol did not contain a measurement of total cardiac output. As such, this preclinical technique approximates the overall V/Q distribution. Nevertheless, due to the consistent attributes inherent in an experimental model, as compared to human subjects, we believe that our V/Q results are representative of the state of the lungs in the contexts described. The pulmonary processes of ventilation and perfusion are both affected by long-term exposure to cigarette smoke. Cessation of cigarette smoking results in a return of V/Q-assessed lung function to normal, but the pathological consequences of continued exposure eventually lead to structural damage and functional impairment. It is possible that these pathologies could be detected early with the aid of V/Q methods and cessation initiated before major damage permanently alters pulmonary function.

Figure 7. Inflammation and airspace enlargement after 16 weeks of cessation: (A) BAL total cells, mononuclear cells, and neutrophils at 16 weeks after the 24-week smoke exposure (***p < 0.001 by one-way ANOVA with Tukey post hoc); (B) representative H&E-stained whole-slice histology from a control (left), cessation (middle), and continuing smokers at the +16 week time-point; (C) average airspace area distributions by logarithmic bins of airspace size from whole-slice histology; (D) average airspaces per unit area and (E) average area-to-perimeter ratio from whole-slice histology (*p < 0.05, **p < 0.01, ***p < 0.001 by one-way ANOVA with Tukey post hoc).

Conclusions

While models of COPD cannot reproduce the disease itself in its entirety, they allow research to address both the pathogenesis of the disease and the constituent pathologies therein. We have demonstrated that both inflammation and emphysematous lesions can contribute to V/Q mismatching and that cigarette-smoking cessation prior to large-scale structural changes can reverse the V/Q imbalance. Spatial methodologies utilising V/Q, especially those coupled to anatomical imaging methods, can provide information not accessible by other means.
The ability to translate knowledge of lung structure and function garnered in preclinical models to that seen in clinical disease could provide better diagnosis, treatment, and understanding of chronic respiratory diseases such as COPD. Additional file Additional file 1: An expanded Materials and Methods section is available for additional information regarding many of the models and techniques employed in this study. Authors' contributions BNJ was involved in concept and design, experimentation and collection of biological and imaging data, analysis, drafting and review of the manuscript. CAJRM was involved in experimentation and collection of biological and imaging data. MCM was involved in experimentation and collection of biological data as well as review of the manuscript. RGR was involved in collection and analysis of imaging data. MRS contributed to concept and design and review of the manuscript. NRL contributed to concept and design and review/editing of the manuscript. All authors read and approved the final manuscript.
2017-06-19T23:10:33.601Z
2014-04-14T00:00:00.000
{ "year": 2014, "sha1": "0dc51a0da41a6825f93161c76cb1a4a8494670a7", "oa_license": "CCBY", "oa_url": "https://respiratory-research.biomedcentral.com/track/pdf/10.1186/1465-9921-15-42", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "38df078640cef7a129a056b187bb14bd63967e0a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
32186171
pes2o/s2orc
v3-fos-license
Extraction of uranium from seawater: a few facts. Although the uranium concentration in seawater is only about 3 micrograms per liter, the quantity of uranium dissolved in the world's oceans is estimated to amount to 4.5 billion tonnes of uranium metal (tU). In contrast, the current conventional terrestrial resource is estimated to amount to about 17 million tU. However, for a number of reasons, the extraction of significant amounts of uranium from seawater remains today more a dream than a reality. Firstly, pumping the seawater to extract this uranium would require more energy than could be produced with the recovered uranium. Then, if trying to use existing industrial flow rates, as for example at a nuclear power plant, it appears that the annual recoverable quantity remains very low. In fact, huge quantities of water must be treated. To produce the annual world uranium consumption (around 65,000 tU), it would be necessary to extract all of the uranium from 2 × 10¹³ tonnes of seawater, the volume equivalent of the entire North Sea. Only the great ocean currents provide, without pumping, such huge quantities, and the idea is to try to extract even a very small fraction of this uranium. For example, Japan, which used about 8,000 tU per year before the Fukushima accident, sees about 5.2 million tU passing every year in the ocean current Kuro Shio in which it lies. Many research works have been published on studies of adsorbents immersed in these currents. After submersion, these adsorbents are chemically treated to recover the uranium. The final quantities remain very low in comparison with the complex and costly operations to be carried out at sea. One kilogram of adsorbent, after one month of submersion, yields about 2 g of uranium, and the adsorbent can only be used six times due to decreasing efficiency. The industrial extrapolation exercise made for the extraction of 1,200 tU/year gives, with these values, a very costly installation spread over more than 1000 km² of sea with many boats for transportation and maintenance. The ecological management of this huge installation would present significant challenges. This research will continue in an attempt to increase the efficiency of these adsorbents, but it is clear that it would be very risky today to base a long-term industrial strategy on significant production of uranium from seawater at an affordable cost.

Very large amounts of uranium

The average value of the uranium content dissolved in the oceans is estimated at 3.3 micrograms per liter (with values ranging from 1 to 5 micrograms depending on the location). With an ocean volume of about 1.37 × 10¹⁸ m³, the uranium content is estimated to amount to 4.5 billion tonnes of uranium metal (tU), compared to conventional terrestrial resource estimates of about 17 million tU [1-4]. In this connection, Japan, which consumed about 8,000 tU per year before the Fukushima accident, sees about 5.2 million tU pass by every year in the great ocean current Kuro Shio in which it lies (Fig. 1) [3]. Japan depends entirely on uranium imports, which explains its interest and past research effort on extraction of uranium from seawater. This uranium mainly comes from soil leaching and the related supply from rivers. For example, it is estimated that the Rhone brings 29 tU/year into the sea, and all rivers combined contribute 8,500 tU/year.
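The inventory figure quoted above follows directly from the average concentration and the ocean volume; a minimal sketch of the arithmetic, using only the values given in the text (3.3 µg/L and 1.37 × 10¹⁸ m³):

    # Order-of-magnitude check of the dissolved-uranium inventory from the stated values.
    concentration_g_per_m3 = 3.3e-3         # 3.3 micrograms per liter = 3.3 mg per cubic meter
    ocean_volume_m3 = 1.37e18               # approximate volume of the world's oceans

    inventory_tU = concentration_g_per_m3 * ocean_volume_m3 / 1e6   # grams -> tonnes
    print(f"Dissolved uranium inventory: {inventory_tU:.1e} tU")    # ~4.5e9 tU

    terrestrial_resource_tU = 17e6
    print(f"Ratio to conventional terrestrial resources: {inventory_tU / terrestrial_resource_tU:.0f}x")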
These virtually inexhaustible quantities have, sporadically since the 1950s, led to much research on the possibility of extraction. The recently launched American Department of Energy program aims to develop a realistic cost of production to inform future fuel cycle decisions, i.e. whether to reprocess or not. Note: all metal ions are also found dissolved in seawater in significant overall amounts, often greater than known mineral resources. Only three products, NaCl, MgCl₂ and MgSO₄, can be easily extracted, for example by evaporation. The values for the others are much too low and require more complex selective strategies. It should also be noted that some interesting products such as lithium or indium may also be involved in this research on extraction techniques.

Energy balance of extraction

2.1 Extraction by pumping

A tonne of seawater therefore contains about 3.3 milligrams of uranium. Every year France uses 8,000 t of natural uranium to produce about 420 TWh, i.e. 52.5 kWh per gram of uranium. The complete extraction of the uranium contained in a cubic meter of seawater (which is not achievable in practice) would allow the production of about 0.17 kWh of electrical energy in current nuclear water reactors. The electrical energy required to raise 1 m³ by 10 m is about 0.03 kWh (with a yield of 80%). In addition, there is a pressure drop in the pipes and the filtration membrane. In seawater desalination plants, for example, energy consumption is estimated at around 2.5 kWh per tonne [2], well above the 0.17 kWh that could be recovered. Thus, by applying a simple rule of three to the energy balance, the infeasibility of a land-bound plant dedicated to extracting uranium from seawater can be seen.

Existing pumping facilities unusable

There are significant seawater pumping facilities in nuclear power plants, seawater desalination facilities and tidal power plants. But the amount of uranium that could be hoped for from them remains low and therefore unrealistic in relation to the difficulties: increased head losses, actual efficiency, problems of waste and local depletion in terms of concentration, etc. A 1,000 MWe nuclear reactor, for example, will use an annual seawater flow of about 40 million cubic meters. This represents the flow of only 130 kg of uranium. Even if all of it could be recovered (which is impossible), this would, at the current market price, represent a budget of 12,000 euros, which of course would not even cover installation and operating costs. Incidentally, this amount is less than one thousandth of the annual consumption of the same reactor (150 tU). The same reasoning applies to seawater desalination units, where the maximum extractable quantities, and therefore the available budget, remain very low in relation to the operations to be performed.

Use of ocean currents

The amounts of water to be treated are huge compared to the objectives. This is the basic problem. To produce the total amount of uranium currently consumed worldwide every year (about 65,000 tU), and assuming an unachievable 100% yield, 2 × 10¹³ tonnes of water would have to be processed annually, in other words the entire North Sea [2]. Only the great ocean currents are able to supply these volumes: the Gulf Stream, Kuro Shio in Japan, the Strait of Gibraltar, etc. The idea is thus to treat these major currents, which would also solve the problem of depletion and renewal of seawater for a land-bound plant. The concepts of pumping, filtering and efficiency no longer apply. It would be an extraction in the water.
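The two balances above — the electricity recoverable from a cubic meter versus the pumping energy, and the annual water throughput implied by world consumption — follow directly from the quoted figures; a minimal sketch of the arithmetic:

    # The paper's two headline balances, reproduced from its own figures.
    uranium_g_per_tonne_seawater = 3.3e-3   # ~3.3 mg of uranium per tonne (about 1 m^3) of seawater
    electricity_kwh_per_gU = 52.5           # France: 8,000 tU/year -> ~420 TWh

    recoverable_kwh_per_m3 = uranium_g_per_tonne_seawater * electricity_kwh_per_gU
    pumping_kwh_per_m3 = 2.5                # typical desalination-plant consumption cited in [2]
    print(f"Electricity recoverable per m^3 (100% yield): {recoverable_kwh_per_m3:.2f} kWh")
    print(f"Pumping/filtering energy per m^3:             {pumping_kwh_per_m3:.2f} kWh -> net loss")

    world_consumption_tU = 65_000
    water_to_process_tonnes = world_consumption_tU * 1e6 / uranium_g_per_tonne_seawater
    print(f"Seawater to process per year at 100% yield: {water_to_process_tonnes:.1e} tonnes")
    # ~2e13 tonnes, i.e. roughly the volume of the entire North Sea.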
Update on extraction techniques

Attention has therefore turned to using adsorbents that can collect the uranium (along with other components) in a selective way. These adsorbents are then removed and the deposits recovered, generally by a chemical process. In-laboratory point values of 2 g/kg of adsorbent per month, or more, have thus been announced (the most recent laboratory batch experiments of the Oak Ridge National Laboratory [ORNL], in 2013, showed the highest performance: 3.3 g/kg of adsorbent after 8 weeks of contact of the adsorbent with seawater [8]). The performances are much lower under more realistic conditions. In 2009, JAEA presented results from marine experiments [3,6]. The device consisted of three superimposed platforms containing 115 kg of adsorbent on supports (Figs. 2 and 3). Table 1 shows the extraction cycles for this system from 1999 to 2001 and the amount of uranium recovered, i.e. 1,083 g over the 12 cycles of 20 to 96 days of immersion. The values clearly fluctuate, but the average value is less than 1 g of uranium per kilogram of adsorbent per month: lower than the "ideal" laboratory values. Even if the more recent batch laboratory experiments with the new ORNL adsorbent are better (2.6 times higher than that of the JAEA adsorbent under similar conditions [8]), the yield is still very low. These methods are confronted with many problems in the field: a drop in performance after each chemical wash and a limited number of cycles; the influence of various parameters on performance, such as water temperature, wave height, etc.; deposits of algae and shells; problems related to offshore operations (access, weather conditions, resistance of structures to corrosion, etc.); and the significant amounts of adsorbent to be used, processed and renewed. The important role played by temperature, which should be as high as possible (25 °C or more), is also obvious. The cycles were carried out from June to October.

Cost analysis

Until the 1980s, researchers working in the field announced a targeted price range between 1,000 and 2,000 USD/kgU. Later, using point results of better efficiency in terms of grams per kilogram obtained in the laboratory, prices were reduced accordingly and announced at between 300 and 600 USD/kgU. More recent cost analyses have been made by the Japanese and also by the American Department of Energy [7,10]. The prices quoted there are between 1,000 and 1,400 USD/kgU. The lowest values can be perplexing when one considers the example from the previous section and all the qualified personnel and work required to recover a kilo of uranium in one year: construction of the platform, offshore operations for installation and periodic extraction, onshore processing of the adsorbent, periodic replacement of 115 kilos of adsorbent, etc. What is the final cost of this kilo of uranium? In fact, these announced costs were derived from extrapolations for gigantic installations. The systems are immersed over several kilometers (see Fig. 4 for a project with an annual output of 1,200 tU), together with shuttle boats and an on-shore treatment plant.
This should lead to industrial rationalization and a related reduction in costs. It is clear that all costs associated with developing and operating these huge facilities have not yet been determined, particularly for the installation, anchoring, and location of these thousands of offshore platforms, and the costs announced are little more than rough, first-order estimates. For the more recent cost analyses [7,10] (prices between 1,000 and 1,400 USD/kgU), the initial parameters used are as follows (for a plant that would produce 1,200 tU/year): an adsorbent capacity of 2 g/kg; 60 days of immersion; a water temperature of 25 °C; a 5% drop in adsorbent efficiency after each chemical rinse; and use of the adsorbent six times (after which it has to be replaced). It appears that the primary key parameter of the cost model is the adsorbent's capacity in g/kg. The mathematical model is thus used to significantly reduce costs when going from 2 g/kg to 4 and then 6 (the last test presented in 2013 [8,10] had reached 3.3 g/kg in 8 weeks of immersion, 2.6 times higher than the previous). All this remains theoretical. The anchoring of these systems over several miles at sea has yet to be defined. What is the drop in efficiency in winter? Is there a depletion over the kilometers of adsorbent which would also adversely affect efficiency? What about corrosion and structural maintenance? None of these issues are addressed in the presentation of the model.

Environmental problems

It should also be noted that the environmental impact of a facility covering over 1,000 km² would certainly be prohibitive. Similarly, the amount of chemical by-products produced and handled would be extremely large and would also lead to environmental problems.

Energy balance

Many massive facilities have to be constructed and submerged, tonnes of adsorbent have to be made and renewed, and boats have to go back and forth. The document [2] addresses this point in an original way. Using statistics for fishing costs and related fuel consumption, an estimated 5 kWh/kg is required to extract something freely available from the sea and bring it back to shore. However, to produce 1 kg of uranium, approximately 500 kg of adsorbent have to be handled, i.e. 2.5 MWh per kilogram of uranium produced, for one-way transportation only (a free return trip is assumed, as the boat would not travel unloaded). Similarly, the production of these adsorbents with a limited lifespan also requires energy, estimated in reference [2] at 10 MWh to produce 500 kg of adsorbent (assuming a one-year life cycle, which is optimistic). These calculations, which are already very rough, mean that 12.5 MWh would be used to produce a kilogram of uranium, which in turn can theoretically generate about 52.5 MWh in a reactor. And all other energies required in the process should be added to achieve an accurate balance. This energy balance work was carried out in much greater detail by the project proponent [11], who uses more optimistic and lower values than those above. It reaches an EROI (Energy Return On Investment) of 12, a value which is clearly subject to a number of parameters. It should be noted that the EROI is more than 300 times higher for mined uranium.
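Put together, the rough bookkeeping quoted from reference [2] reduces to a simple ratio; a minimal sketch of the arithmetic with the rounded values given above:

    # Rough energy return on investment (EROI) from the figures quoted from reference [2].
    handling_kwh_per_kg = 5           # energy to recover material from the sea, per kg handled
    adsorbent_kg_per_kgU = 500        # adsorbent handled per kilogram of uranium produced
    adsorbent_production_mwh = 10     # energy to make 500 kg of adsorbent (one-year life assumed)

    transport_mwh_per_kgU = handling_kwh_per_kg * adsorbent_kg_per_kgU / 1000   # 2.5 MWh
    energy_in_mwh_per_kgU = transport_mwh_per_kgU + adsorbent_production_mwh    # 12.5 MWh
    energy_out_mwh_per_kgU = 52.5     # electricity from 1 kg of uranium in current reactors

    print(f"Energy invested:  {energy_in_mwh_per_kgU:.1f} MWh/kgU")
    print(f"Energy generated: {energy_out_mwh_per_kgU:.1f} MWh/kgU")
    print(f"Rough EROI:       {energy_out_mwh_per_kgU / energy_in_mwh_per_kgU:.1f}")
    # About 4 before counting all other process energies; reference [11], with more
    # optimistic inputs, reaches 12, while the EROI of mined uranium is quoted as
    # more than 300 times higher.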
Strategy for the nuclear industry

Without a demonstration of industrial feasibility and validation of a credible cost of extraction, it would be extremely risky to build a long-term industrial strategy on significant production of uranium from seawater at an affordable cost. It is worth remembering that fast reactors could be operated without the need for new resources of natural uranium for millennia. The economic profitability would be ensured well before the market price of uranium reaches the estimated cost of uranium production from seawater.

Conclusion

There is an extremely large quantity of uranium dissolved in the oceans, but its low concentration means that a volume of water greater than that of the North Sea would have to be processed every year in order to extract the uranium currently consumed worldwide each year. Basic energy balances show that pumping/filtering systems are of no interest and have no future. The only other solution would be extraction by adsorbents placed in ocean currents, which naturally and freely provide drive and renewal of very large flow rates. These techniques currently enable the production of small quantities at prohibitive prices. Extrapolation to an industrial scale has yet to be developed, even in terms of feasibility, and the final cost of production has not yet been firmly established. However, the continuation of this research is worthwhile if the efficiency of the process can be further improved and applied to other materials of interest, so as to pool the prohibitively high costs of production. It would, however, given current knowledge, be extremely risky for the nuclear industry to launch an industrial strategy based on the possible extraction of uranium from seawater at an affordable cost.

Fig. 1. Amounts of uranium present in the oceans and the ocean current near Japan [3].
Fig. 2. Construction of a platform in which each stack has 115 kg of adsorbent [3].
2017-10-17T12:41:00.264Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "75c636aa788b86bbe40f22f8fc7773adc44701b4", "oa_license": "CCBY", "oa_url": "https://www.epj-n.org/articles/epjn/pdf/2016/01/epjn150059.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "75c636aa788b86bbe40f22f8fc7773adc44701b4", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
234637596
pes2o/s2orc
v3-fos-license
Metabolic maturation of differentiating cardiosphere-derived cells Highlights • Collagen IV promotes proliferation of cardiosphere-derived cells.• Fibronectin supports differentiation of cardiosphere-derived cells.• Oxidative metabolism increases as cardiac progenitors mature.• Stimulating fatty acid oxidation promotes cardiac progenitor cell maturation. Introduction Cardiac mesenchymal cell therapy for myocardial infarction, such as that with cardiosphere-derived cells (CDCs), has yet to fulfil the hopes of the early protagonists, despite promising studies in animal models Ellison et al., 2013;Noseda et al., 2015). In part, this is because expansion of progenitor cells from human tissue biopsies generates cells that may have been affected by disease and/or the insult to the heart and generation of sufficient numbers for therapy takes time, during which the damaged heart undergoes detrimental remodelling (Sutton and Sharpe, 2000). However, in most preclinical studies, cells are isolated and expanded from young healthy animals and administered at the time of surgical infarction. Understanding of the effect that expansion in vitro has on the cells can generate valuable information about the biology of the resident mesenchymal cell population and the conditions for optimum therapeutic culture. Following isolation and expansion in high glucose media, cardiac progenitor cells are assumed to be glycolytic, yet the heart derives most of its energy needs from the oxidation of fatty acids (Taegtmeyer et al., 2005). Therefore, transition of non-contractile progenitor cells into beating cardiomyocytes requires transformation of the metabolic infrastructure, with mitochondrial network expansion and a switch from glycolysis to oxidative phosphorylation (Malandraki-Miller et al., 2018). Such a switch occurs in differentiating pluripotent stem cells (Chung et al., 2010;Lopez et al., 2021) and in isolated cardiac mesenchymal cells from the mouse heart (Malandraki-Miller et al., 2019). In the developing heart, the extracellular matrix (ECM) provides cues to guide cell proliferation, migration and differentiation, with changes in ECM composition affecting tissue development and maintenance (Hanson et al., 2013). Among the most prevalent, functionally relevant, ECM proteins in the developing heart, collagen IV (ColIV) is found in atrial stem cell niches, whereas collagen I, which provides structural support, and fibronectin (FN), which mediates changes in the cellular phenotype, are found outside the niche in the myocardium (Schenke-Layland et al., 2011). ColIV induces differentiating embryonic stem cells (ESCs) to form early-stage cardiac progenitors and enhances their expansion whereas FN promotes the differentiation of ESC-derived progenitors into cardiomyocytes (Schenke-Layland et al., 2011). We postulated that expansion of cardiosphere-derived cells (CDCs) on ColIV would generate sufficient cardiac progenitors for therapy more quickly than expansion on FN. Furthermore, since cardiospheres form more rapidly in hanging drops, as the cells aggregate under gravity, we inferred this would maintain stemness as the cells would spend longer in the hypoxic core of the sphere. In parallel, we hypothesised that differentiation of cells on fibronectin might be more efficient than that on collagen IV and that by selection of culture conditions we could generate CDCs at different stages of differentiation to the cardiac phenotype. 
We compared the metabolic characteristics of CDCs cultured via the two protocols to determine whether oxidative metabolism is activated in progenitor cells during differentiation and whether that is a gradual process or induced only as differentiated progenitors mature. Finally we aimed to further upregulate fatty acid oxidation and mature the metabolic phenotype of the cells by stimulating the peroxisome proliferatoractivated receptor alpha (PPAR α) pathway using the agonist WY-14643 (Gentillon et al., 2019). Animals Male Sprague Dawley (SD) male rats were obtained from a commercial breeder (Harlan, Oxon, UK). Animals were kept under controlled conditions for temperature, humidity and light, with water and rat chow available ad libitum. Rats were anaesthetised with sodium pentobarbital (270 mg/kg body weight, IP; Euthatal, Merial, UK) to allow tissue removal. All procedures were approved by the University of Oxford Ethical Review Committee in accordance with Home Office (UK) guidelines under The Animals (Scientific Procedures) Act, 1986 and with University of Oxford, UK institutional guidelines. Isolation, and expansion of cardiosphere-derived cells Rat CDCs were cultured as previously described, with culture on fibronectin-coated plates and cardiosphere formation in non-adherent poly-d-lysine coated plates or with culture on collagen IV coated plates and cardiosphere formation in hanging drops. Briefly, SD rat hearts (5 weeks old) were excised and hearts weighed (n = 6). Atrial and apex tissues were minced into 1 mm 3 explant fragments in 0.05% trypsin-EDTA (Invitrogen). Explants were plated in petri dishes precoated with either 4 µg/ml fibronectin from bovine plasma (Sigma) (Fig. S1a) or 10 µg/ml collagen IV from Engelbreth-Holm-Swarm murine sarcoma (Sigma) (Fig. S1f). Complete explant medium (CEM ; Table S1) was added and cells were incubated at 37 • C in 5% CO 2 . Supporting cells and phase bright cells (collectively known as explant-derived cells, EDCs), which had grown out from the explants, were harvested and resuspended in cardiosphere growth medium [CGM, Table S1] at a density of 3 × 10 4 cells per well of 24 well plates precoated with poly-dlysine (Fig. S1b) or 1000 cells per 25 µl drop in the hanging drop technique (Fig. S1g) and cultured for 4 days (Figs. S1b-c and S1g-h). Cardiospheres were subsequently expanded in CEM on FN or ColIV-coated tissue culture flasks (Figs. S1d-e or S1i-j, respectively) to generate cardiosphere-derived cells (CDCs), which were maintained in culture with CEM changed every 3 days and passaged every 7 days until passage 2 (P2). All experiments in this study used P2-CDCs at 70 to 80% confluency, unless otherwise stated. At the end of each stage (EDCs, P1-CDCs, and P2-CDCs), confluent cells were enzymatically digested, and counted using a haemocytometer under an inverted light microscope (Nikon, UK). The cell number was calculated per milligram tissue explanted. Cell proliferation assay P2-CDCs (250-20,000 cells per well) were seeded in 96-well plates pre-coated with either FN or ColIV. On days 1, 2, 3, 5, and 7, culture media was aspirated and cells were washed with PBS and kept at 80 • C. Proliferation was assessed using CyQUANT® Cell Proliferation Assay Kit (Molecular Probes, Invitrogen) plus RNase treatment, according to the manufacturer's instructions. In brief, frozen cells were thawed at room temperature, treated with RNase, and lysed by addition of a buffer containing the CyQUANT GR dye. 
Fluorescence was then measured directly using a microplate reader (FluoStar, BMG) with excitation at 485 ± 10 nm and emission detection at 530 ± 12.5 nm. The assay was calibrated using a standard curve of bacteriophage λ DNA standard according to the manufacturer's instruction. Flow cytometry Control and differentiated CDCs were digested to a single cell suspension using 0.05% trypsin (5 min at 37 • C), washed with PBS, and fixed with 4% paraformaldehyde (Sigma) for 10 min. For intracellular markers, the cells were permeabilised with 0.2% Triton X-100 in PBS for 10 min. The samples were incubated with blocking solution (2% bovine serum albumin, BSA) + 2% fetal bovine serum (FBS) in PBS for 30 min and then treated with primary antibodies (Table S2) for 1 h. After rinsing with PBS, cells were treated with the secondary antibodies (Table S2) for 30 min, then washed and resuspended with fluorescence-activated cell sorting (FACS) buffer (DPBS with 0.5% BSA). and analysed using a FACS Calibur Flow Cytometer (BD Biosciences, San Jose, CA). The data were analysed using FlowJo version 10.7 (Fig. S2). Confocal microscopy CDCs grown on cover slips were fixed with PFA, immuno-labelled with primary then secondary antibodies for stem cell markers (c-Kit, CD90) or cardiac specific markers (cTnnT2, MHC, MLC, CXC43, and Titin) and scanned using an Inverted Olympus FV1000 Confocal system. Rat neonatal cells were stained for cardiac proteins as a positive control (Fig. S4) and differentiated CDCs were stained with secondary antibodies only as a negative control (Fig. S5). Organoids were imaged as a Z-stack and optical sections merged to give a 3D reconstruction of the full length of the sample. RNA extraction and Real-Time quantitative reverse transcriptase PCR Total RNA was extracted and purified from control or differentiated CDCs using a QIAGEN RNeasy Mini Kit (QIAGEN, United Kingdom). RNA concentration and purity were determined by measuring the absorbance at 230, 260, and 280 nm, using a NanoDrop spectrophotometer (NanoDrop technologies, Wilmington, USA). Complementary DNA (cDNA) was synthesized from RNA template using the AB high-capacity reverse transcriptase kit (Applied Biosystems, Paisley, UK) according to the manufacturer's instructions. Primer pairs were designed based on interpretation of GeneBank or Ensemble Genome Browser and Integrated DNA Technology (IDT) (Table S3). Primer specificity was enhanced by designing a primer pair that flanked the exon-exon border of the gene of interest. Primer specificity was confirmed by BLASTing the primer sequence against genomic databases available at NCBI and primer amplification efficiency tests. Real-time PCR amplification was performed using the Applied Biosystem StepOnePlus Real-time PCR system (AB 44 International, Canada) with postamplification melting curves acquired to verify the specificity of PCR products. Relative quantification of target gene expression, normalized to the geometric mean of the housekeeping genes β-Actin (Actb) and β-2 microglobulin (β2M) and one calibrator (the control sample), was performed using comparative Ct method (the 2-ΔΔCt method). Determination of glycolytic flux Glycolysis was measured as the production of 3 H 2 O from 5-3 Hglucose. P2-CDCs (20,000/cm 2 ) were seeded on pre-coated 6-well plates coated with either FN or ColIV and containing CEM. 
After overnight incubation to allow the cells to attach or at the end of the differentiation procedure, cells were given 10 ml of control or cardiomyogenic differentiation medium spiked with a trace of D[5-3 H]-glucose (1.48 MBq/ mmol; Amersham, Bucks, UK). The glycolytic rates were determined by collecting 0.6 ml of aliquots of culture medium after 8 h. 3 H 2 O was separated from 3 H-glucose using a Dowex anion exchange column (Sigma, UK) and read using a Tri-Carb 2800TR Liquid Scintillation Analyser 28. Determination of substrate oxidation rates Based on the method of Collins et al (Collins et al., 1998), with some modification, control or differentiated CDCs, in a 6-well plate, were incubated for 4 h in the presence of a single substrate spiked with radiolabelled tracer in DMEM (containing no pyruvate, glucose or glutamine) in a total volume of 1 ml/well (Board et al., 2017). Substrates tested were: 10 mM glucose containing 21 MBq/mmol D-U-14 C-glucose; 5 mM pyruvate containing 0.35 MBq/mmol 1-14 C-pyruvate; 10 mM acetoacetate containing 0.185 MBq/mmol 3-14 C-acetoacetate; 2 mM glutamine containing 1.2 MBq/mmol U-14 C-glutamine and 2 mM palmitate containing 10.27 Mβq/mmol 1-14 C-palmitate. In addition, wells without cells containing DMEM with 14 C-substrate were used as a background radioactivity control. The 14 CO 2 produced was trapped on KOH-soaked filter papers covering the inside of the 24-well plate lid of the apparatus (Fig. S6). A perforated rubber gasket, with holes corresponding to each well of the 24-well plate, separated the two plates. Perchloric acid was added to the well, at each desired time-point, to kill the cells and the plates were gently agitated for 60 min to release dissolved 14 CO 2 . Filter papers containing trapped 14 CO 2 were counted in Ecoscint (National Diagnostics) using a Tri-Carb 2800TR Liquid Scintillation Analyser. Statistical analysis Data are presented as mean ± SEM. Data were analysed using an unpaired TTest or ANOVA (GraphPad Prism v8.0.1) with a TUKEY multiple comparison as appropriate. The effect of extracellular matrix on isolation and expansion of CDCs Adult cardiac stem/progenitor cells were isolated and expanded using the traditional method on fibronectin (FN) with cardiosphere formation on poly-D-lysine (PDL) (Fig. S1a-e) or using a modified technique of expansion on collagen IV (ColIV) with cardiosphere formation in hanging drops (HDs) (Fig. S1f-j). Explant-derived cells formed cardiospheres more quickly in hanging drops than on PDL (Fig. S1b, S1g) and formed larger spheres by day 4 (Fig. S1c, S1h). Culture on ColIV/HD resulted in significantly more cells at each stage of expansion and in a 10-fold increase in the number of CDCs at passage 2 (P2-CDCs, p = 0.0004 vs cells expanded on FN) after 46 ± 5 days (Fig. 1A). In addition, P2-CDCs created through ColIV/HD culture had significantly higher proliferation rates in comparison to those on FN/PDL (p < 0.0001; Fig. 1B), indicative of a more stem-cell like phenotype. Flow cytometry revealed that P2-CDCs created through both techniques expressed CD90 (Fig. 1C) with few cells expressing c-Kit, Oct4 or Nanog (data not shown) However, immunocytochemistry for c-kit on CDCs expanded on FN (Fig. S7) showed that the protein was expressed but had been internalized and therefore the cells would not have shown positive with flow. 
Both populations contained <5% of cells expressing the cardiac transcription factor, Nkx-2.5, the cardiac specific proteins, cardiac troponin T (cTnnT2), Titin and cardiac myosin heavy chain (cMHC), and the endothelial cell marker, von Willebrand factor (vWf) (data not shown). CDCs cultured on ColIV had significantly lower expression of CD90 than those cultured on FN.. The effect of extracellular matrix on differentiation of CDCs Induction of cardiomyogenic differentiation using 5-azacytidine (5 Aza) and ascorbic acid (AA) ( Fig. 2A) resulted in a small number of cells expressing the cardiac specific proteins, cTnnT2, cMHC and Titin. Flow cytometry suggested that the larger cells in the population were those that began to express cardiac proteins, with significantly greater expression of MHC and titin in cells cultured on FN (Fig. 2B). Representative images of 5-Aza-treated cells grown on FN are shown in Fig. S8. Although some treated CDCs expressed cTnnT, αSA, cMyHC and cTnnI, they did not form beating cardiomyocytes and did not show the structural features such as sarcomeres seen in neonatal and adult cardiomyocytes (see eg Fig. S4). Treatment with 5-Aza induced a decrease in mRNA expression of glucose transporter 1 (GLUT1) to below the level of detection in the whole cell population (Fig. 3) and increased expression of pyruvate dehydrogenase kinase isozyme 1 and 4 (PDK1 and PDK4), which control the flux through pyruvate dehydrogenase to mitochondrial oxidation. In addition, there was an increase in mRNA expression of the fatty acid transporters, fatty acid translocase (CD36) and carnitine palmitoyltransferase IB (CPT1B), and of peroxisome proliferator-activated receptor α (PPARα), malonyl CoA-acyl carrier protein transacylase (MCAT) and acyl-coenzyme A dehydrogenase (ACADM) (Fig. 3). GLUT4 expression level was very low in untreated cells, but expression increased 25-fold after 5-Aza treatment on ColIV. For all other genes tested, expression was significantly greater in cells grown on FN than in those on ColIV (Fig. 3). Western blot analysis of levels of glucose transporters and mitochondrial proteins was less conclusive with few significant differences (Fig. S9). Glucose oxidation after 5-Aza treatment was significantly higher in cells grown on FN than in those on ColIV (p < 0.0001; Fig. 4A). Glutamine oxidation was <0.2 nmol/min/mg in control cells from both conditions and did not change after 5-Aza treatment (data not shown). Palmitate oxidation increased significantly after treatment but occurred at comparable rates in cells cultured on ColIV and FN. The increased mitochondrial respiration in differentiated cells was supported by staining for the mitochondrial membrane potential using Mitotracker Red CMXRos (Fig. S10). The rate of ATP production was calculated based on the glycolytic flux and oxidation rates. P2-CDCs generated more ATP by oxidation of acetoacetate than from other substrates (p < 0.0001; Fig. 4B). Acetoacetate oxidation decreased after 5-Aza treatment such that ATP production in cells cultured on FN by oxidation of acetoacetate was comparable to that from other substrates whereas that from cells cultured on ColIV remained significantly higher than from glycolysis or glucose oxidation in those cells. 
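The ATP production rates above were calculated from the measured glycolytic flux and substrate oxidation rates; the study does not list the conversion factors used, so the sketch below illustrates only the general form of such a calculation, using commonly quoted approximate ATP yields and invented example fluxes (neither are values taken from this work).

    # Illustrative conversion of measured fluxes into ATP production rates. The ATP
    # yields are common textbook approximations and the fluxes are placeholders;
    # neither are values from this study.
    ATP_YIELD_PER_MOLECULE = {
        "glycolysis (glucose -> lactate)": 2,
        "glucose oxidation": 31,        # approximate, full oxidation
        "palmitate oxidation": 106,     # approximate, full oxidation
        "acetoacetate oxidation": 19,   # approximate, full oxidation
    }
    example_flux_nmol_min_mg = {        # hypothetical substrate fluxes (nmol/min/mg protein)
        "glycolysis (glucose -> lactate)": 5.0,
        "glucose oxidation": 0.5,
        "palmitate oxidation": 0.2,
        "acetoacetate oxidation": 1.0,
    }
    for pathway, yield_atp in ATP_YIELD_PER_MOLECULE.items():
        rate = example_flux_nmol_min_mg[pathway] * yield_atp
        print(f"{pathway:32s}: {rate:6.1f} nmol ATP/min/mg")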
Fig. 3. Expression of metabolic-related genes in control and 5-Aza-treated CDCs: relative mRNA expression of genes involved in glucose (GLUT1, GLUT4, PDK1, PDK4) and fatty acid (CD36, CPT1B, PPARα, MCAT, ACADM) metabolism, assessed by qPCR in control and differentiated P2-CDCs created through FN/PDL or ColIV/HD and normalized to the geometric mean of Actb and β2M with the ColIV control as calibrator. Data are mean ± SEM (n = 3), assessed by ANOVA with Tukey post hoc test; multicolour stars indicate comparisons between groups and the FN and ColIV controls, black stars indicate comparisons between control and differentiated cells; *p < 0.05; **p < 0.01; ***p < 0.001; ****p < 0.0001.

Fig. 4. Substrate utilisation by control and 5-Aza-treated CDCs: (A) rates of glycolysis and of oxidation of glucose, palmitate and acetoacetate in control and differentiated P2-CDCs created through FN/PDL or ColIV/HD; (B) calculated rates of adenosine triphosphate (ATP) production. Data are mean ± SEM (n = 3-5), assessed by ANOVA with Tukey post hoc test; multicolour stars indicate comparisons between groups and the FN and ColIV controls, black stars indicate comparisons between control and differentiated cells (A) or as indicated (B); *p < 0.05; **p < 0.01; ***p < 0.001; ****p < 0.0001.

Stimulating fatty acid oxidation by pharmacological upregulation of the PPARα pathway

Having concluded that expansion on ColIV and in hanging drops generated P2-CDCs in increased numbers, but that expansion on FN primed cells better for differentiation, we used FN as a substrate for differentiation of P2-CDCs generated via culture on ColIV/HD. To increase differentiation, we adopted a protocol (Fig. 5A) adapted from that of Smits et al (Smits et al., 2009), which we had found to be successful in mouse CDCs (Perbellini et al., 2015), and in addition aimed to stimulate fatty acid oxidation by triggering the PPARα pathway with the agonist WY-14643. Differentiation with the modified TGFβ protocol, both with and without addition of WY-14643, caused the cells to aggregate around day 10 and form organoids after day 20 (Fig. 5B). Immunostaining of organoids, imaged as a Z-stack, showed expression of MHC, cTnnT, MLC, connexin 43 and titin throughout the 3D structure (Fig. 5C). mRNA expression of MHC6, MHC7 and phospholamban (PLN) increased after differentiation with TGFβ. Differentiation with TGFβ plus WY-14643 caused a greater upregulation of MHC7 and cTnnT with increased expression of Serca2a but lower expression of PLN (Fig. 5D). On occasion, organoids generated by treatment with TGFβ and WY-14643 exhibited spontaneous twitching. Differentiation induced a significant decrease in expression of GLUT1 with an associated upregulation of genes associated with glucose and fatty acid oxidation (Fig. 6A). Treatment with WY-14643 induced a small decrease in expression of PPARα and the fatty acid transporter CD36 and a 40-fold decrease in expression of PDK1. Expression of MCAT, CPT1B and ACADM increased 2.5-, 10- and 15-fold, respectively (Fig. 6A). Interestingly, there was no significant change in metabolism after differentiation alone, but there was a significant switch from glycolysis to oxidative metabolism, with a 10-fold decrease in the rate of glycolysis and 5- and 14-fold increases in the rates of glucose and palmitate oxidation, respectively, after treatment with WY-14643 (Fig. 6B).
Mitochondrial staining of cells differentiated using TGFβ and WY-14643 (Fig. 6C) showed formation of mitochondrial networks not seen in undifferentiated CDCs (Fig. S10).

(Figure legend, panel D — see also Fig. S11: relative mRNA expression of cardiac genes assessed by qPCR, normalized to the geometric mean of Actb and β2M as reference genes with baseline as calibrator; green stars indicate comparisons between baseline and differentiated cells, black stars indicate comparisons between differentiated cell groups; *p < 0.05; **p < 0.01; ***p < 0.001; ****p < 0.0001.)

Discussion

In vitro isolation and expansion of CDCs on ColIV, with the generation of cardiospheres as hanging drops, resulted in an increased yield of CDCs compared with culture on FN/PDL used in the published protocol (Smith et al., 2007) (Fig. 1A). Cardiospheres formed more quickly in hanging drops than on PDL (Fig. S1b, S1g) and the final spheres were larger (Fig. S1c, S1h), such that cells spent longer within the hypoxic core of the sphere, thought to promote progenitor cell proliferation (Tan et al., 2016). We saw an increased proliferation rate of CDCs on ColIV compared to that on FN (Fig. 1B). ColIV-mediated signalling has been shown to induce proliferation of Leydig cells through intracellular signalling molecules in active forms of focal adhesion kinase and the mitogen-activated protein kinase (MAPK) 1/2 (Anbalagan and Rao, 2004) and of pancreatic cancer cells through interaction between the ColIV molecule and integrin receptors on the surface of the cancer cells (Öhlund et al., 2013). ColIV is found within the cardiac stem cell niche (Schenke-Layland et al., 2011), where cells are held quiescent and do not proliferate until required for repair processes, suggesting that other factors within the niche inhibit proliferation. ColIV, but not FN, maintained ESCs in an undifferentiated phenotype, with comparable proliferation rates when cultured on ColIV as on other ECM proteins (Hayashi et al., 2007). We have shown that culture in hypoxia increased CDC proliferation and resulted in cells with higher expression of Oct4, Sox2 and Nanog but that were slower to differentiate than those cultured under normoxia (Tan et al., 2016), as was seen here with cells cultured on ColIV. FN has been shown to promote mesoderm formation in ESCs (Cheng et al., 2013) and to induce differentiation through integrin-mediated activation of the MAPK cascade and by a significant increase in the FN receptor, integrin-β5 (Bentzinger et al., 2013; Sa et al., 2014). We investigated in vitro substrate and energy metabolism of isolated cardiac stem/progenitor cells before and after cardiomyogenic differentiation. As expected, CDCs had a highly glycolytic metabolism, and the rate of glycolysis was higher in CDCs cultured on ColIV than on FN (Fig. 4A). In comparison to most adult cells, proliferating tumour cells generate energy by glycolysis regardless of the availability of oxygen, known as 'aerobic glycolysis' or the Warburg effect (Warburg, 1956), and exhibit greatly enhanced glycolytic flux compared with the noncancerous cells in the tissue of origin (Moreno-Sánchez et al., 2007). The high level of glycolytic flux provides sufficient ATP for the proliferating cells, and glucose degradation generates the necessary biosynthetic intermediates such as ribose sugars for nucleotides and glycerol and citrate for lipids (Vander Heiden et al., 2009). In cancer cells, glutamine utilisation has been shown to supplement that of glucose through glycolysis by providing a carbon source for TCA cycle intermediates and, perhaps more importantly, by functioning as a nitrogen donor for nucleotide synthesis (DeBerardinis et al., 2007).
Glutamine increased the doubling time of c-kit+ cardiac progenitor cells and promoted survival under conditions of oxidative stress (Salabei et al., 2015). In contrast, we saw very low rates of glutamine utilisation from the media in CDCs before or after differentiation (data not shown). Tardito et al. have shown that the glutamine required for glioblastoma growth is not supplied by the circulation but comes from cataplerosis of glucose (Tardito et al., 2015). Interestingly, undifferentiated CDCs cultured on either FN or ColIV had high rates of acetoacetate oxidation (Fig. 4A), which decreased after treatment with 5-Aza. We have found previously that human bone marrow-derived mesenchymal stem cells oxidised acetoacetate 35 times faster than glucose and that this reduced ROS production 45-fold compared with that seen with glucose oxidation (Board et al., 2017). We hypothesised that this may be a mechanism to reduce the high level of ROS production seen during proliferation and to aid cell survival and maintenance in an undifferentiated state (Suda et al., 2011). After 5-Aza treatment, CDCs showed increased oxidation of glucose and palmitate (Fig. 4A), associated with a switch in mRNA expression from that of the ubiquitous GLUT1 to the more muscle-specific, high-affinity and insulin-sensitive GLUT4, and increased expression of genes involved in fatty acid metabolism (Fig. 3). 5-Aza-treated CDCs on FN demonstrated higher rates of glucose oxidation in comparison to those on ColIV, whereas rates of palmitate oxidation were comparable despite a significantly greater upregulation in mRNA expression of genes associated with fatty acid oxidation in cells on FN compared with those on ColIV. ATP production was higher from oxidation of palmitate than from glucose (Fig. 4B), so it may be that, whilst treated CDCs on FN had increased their capability to use fatty acid oxidation compared with those on ColIV, as they had not differentiated to a contractile phenotype they did not need to generate higher levels of ATP. To further probe the upregulation of fatty acid oxidation, we optimized our differentiation protocol by addition of TGFβ (Smits et al., 2009) and stimulated the PPARα pathway by treatment with the agonist WY-14643, which has been shown to upregulate mitochondrial oxidation in induced pluripotent stem cell-derived cardiomyocytes (Gentillon et al., 2019). Addition of TGFβ induced the CDCs to aggregate into organoids after 20 days (Fig. 5), as seen by Goumans et al (Goumans et al., 2008), and increased expression of cardiac genes, GLUT4 and genes associated with fatty acid uptake and metabolism, but did not cause a significant increase in oxidative metabolism (Fig. 6). Addition of WY-14643 significantly increased expression of MHC7, cTnnT and Serca2a as well as of genes associated with fatty acid oxidation, compared with that seen in TGFβ-differentiated CDCs, and this resulted in a 5-fold increase in glucose oxidation and a 15-fold increase in palmitate oxidation, compared with undifferentiated CDCs. This was associated with increased expression of cardiac genes, suggesting that this further push towards fatty acid oxidation had increased the cardiomyogenic differentiation of the cells.
Although some organoids were seen to twitch spontaneously, the differentiated cells did not mature sufficiently to form functionally competent cardiomyocytes.

Fig. 6. Metabolic changes after differentiation with TGFβ and WY-14643: (A) relative mRNA expression of genes involved in glucose (GLUT1, GLUT4, PDK1, PDK4) and fatty acid (CD36, CPT1B, PPARα, MCAT, ACADM) metabolism, assessed by qPCR in control and TGFβ-differentiated CDCs with and without WY-14643 treatment, normalized to the geometric mean of Actb and β2M with baseline as calibrator; (B) changes in rates of glycolysis and of glucose and palmitate oxidation, expressed as fold change over baseline; (C) CDCs differentiated using both TGFβ and WY-14643, stained with MitoTracker® Red CMXRos and showing formation of mitochondrial networks. Data are mean ± SEM (n = 3), assessed by ANOVA with Tukey post hoc test; black stars indicate comparisons between baseline and differentiated cells, blue stars indicate comparisons with TGFβ-differentiated cells; *p < 0.05; **p < 0.01; ***p < 0.001; ****p < 0.0001.

Conclusions

Here we found that the extracellular matrix on which CDCs were expanded determined the balance between proliferation and differentiation, with ColIV promoting proliferation whilst fibronectin primed cells for differentiation to a cardiac phenotype. Undifferentiated CDCs, when highly proliferative, generated high levels of ATP from glycolysis and from oxidation of acetoacetate. As cells differentiated, we observed a decrease in glycolysis, upregulation of oxidative metabolism of glucose and fatty acids, and decreased oxidation of acetoacetate. Stimulation of PPARα during differentiation resulted in the cells adopting a metabolic phenotype more akin to that of the adult heart.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2020-10-28T17:59:00.495Z
2020-09-10T00:00:00.000
{ "year": 2021, "sha1": "8bd9efd4a7491fa26fefa61c5db64e4e3065dde5", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.scr.2021.102422", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "25a10dc6956414557a23588ef57bceeb1eaf34fc", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
51892128
pes2o/s2orc
v3-fos-license
Exome-first approach identified novel INDELs and gene deletions in Mowat-Wilson Syndrome patients. Mowat-Wilson syndrome (MWS) is characterized by severe intellectual disability, absent or impaired speech and microcephaly, with a gradual post-natal onset. The syndrome is often confused with other Angelman-like syndromes (ALS) during infancy, but in older children and adults, the characteristic facial gestalt of Mowat-Wilson syndrome allows it to be distinguished easily from ALS. We report two cases in which an exome-first approach in patients with MWS identified two novel deletions in the ZEB2 gene, ranging from a 4-base deletion (case 1) to at least a 573 kb deletion (case 2). Mowat-Wilson Syndrome (MWS) is caused by haploinsufficiency of the ZEB2 (ZFHX1B) gene on chromosome 2q22.3. MWS resembles Angelman Syndrome in that all individuals have moderate-to-severe intellectual disabilities and absent or impaired speech. Microcephaly, seizures and/or abnormal EEGs have been observed in up to 90% of affected individuals 1 . Affected people tend to have a smiling, open-mouthed expression and typically have friendly and happy personalities. During infancy, they are often misdiagnosed with other Angelman-like syndromes (ALS) 2, 3 ; however, as they age, MWS patients begin to develop distinctive facial features, and adults with Mowat-Wilson syndrome have an elongated face with heavy eyebrows and a pronounced chin and jaw. The presence of congenital anomalies, including structural heart defects involving the pulmonary valve or arteries, hypospadias, and structural renal anomalies, also distinguishes MWS from ALS 4 . According to the Mowat-Wilson Foundation, there are currently 186 patients worldwide who have received genetic confirmation of the disease. Since MWS is often misdiagnosed as another ALS during early infancy, it is very important to develop a first-tier single genetic test that covers all types of genetic mutations, including SNVs, INDELs and CNVs, to distinguish MWS from other ALS. Whole Exome Sequencing (WES) or Clinical Exome Sequencing (CES) were both recently proposed for use in a first-tier diagnostic test for children with intellectual disabilities, as WES/CES has decreased costs compared to those of traditional diagnostic genetic tests 5 . The effectiveness of WES was also demonstrated for a wide variety of genetic disorders besides neurodevelopmental disorders for the detection of SNVs, INDELs, and CNVs using a single assay 6 . Here, we describe two MWS cases that were sent for genetic testing after being misdiagnosed with ALS for several years. The WES single assay allowed us to describe two novel mutations and to differentiate MWS unequivocally from ALS in both patients. Patient 1: A 17-year-old male from Misiones, Argentina, who was born to healthy, non-consanguineous parents.
After an uneventful pregnancy, he was referred for genetic testing after being diagnosed with ALS as an infant; he presented with a normal karyotype and CGH array test results. After years of being misdiagnosed, a genetic counselor suspected he might have been affected with MWS due to his facial features, congenital cardiomyopathy and the presence of generalized refractory epilepsy. He also presented with bilateral hearing loss, hypoplasia of the corpus callosum, and severe neurodevelopmental delay with the absence of speech. Patient 2: A 7-year-old male from Lobos in Buenos Aires, Argentina, who was born to healthy, non-consanguineous parents. The relevant clinical features included a severe intellectual disability (ID), severe speech delay, and convulsive seizures. The patient presented with earlobe features that are characteristic of MWS. There was no reported family history of ID in the patient's mother or in other known relatives. Previous testing included a 15p11.2-q13 methylation test, which was normal. This patient was initially diagnosed with ALS during infancy, when the typical phenotypic features were not clearly present 2 . Blood samples were extracted after informed consent was obtained from the parents of each patient. DNA was then extracted from the blood samples using the High Pure PCR template purification kit (Roche S.A.Q.EI, Buenos Aires, Argentina) according to the manufacturer's instructions. The DNA quality and concentration were assessed using an Implen NanoPhotometer (Biosystems SA, CABA, Argentina). Next-generation sequencing (NGS) of the whole exomes of each of the subjects was conducted as follows. Prior to the preparation of the libraries, the DNA quality was assessed using a 2100 Bioanalyzer DNA chip (Analytical Technologies SA, Buenos Aires, Argentina). The samples were prepared using the Nextera Rapid Capture Exome Sequencing panel (Illumina, San Diego, USA). The libraries were sequenced with a NextSeq 500 System (Illumina, San Diego, USA) using a high-throughput kit and a configuration with a read length of 2 × 150 base pairs (bp) and dual indexing. All of the exomes were sequenced with 160× coverage, with at least 93% of the sequences having a minimum of 20× coverage. To identify the germ-line variants present within the NGS data, which consisted of the sequences of the exons within the ZEB2 gene (GRCh37/hg19 chr2: g.145141048:145282747) and the adjacent intronic regions (± 10 bp), a proprietary bioinformatics analysis was performed that utilized a protocol based on that of the GATK (Genome Analysis Toolkit) from the Broad Institute. We included all variants with a minor allele frequency of at least 20% and with at least 4 reads that represented the alternative allele in our analysis. These variants were subject to comparison with entries in several databases and to analysis using in silico prediction programs. The classification of the variants was made according to guidelines published by the American College of Medical Genetics and Genomics (ACMG) 7 .
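As a rough illustration of the read-support filter described above (at least 20% of reads carrying the alternative allele and a minimum of four such reads, as we read the criteria), a minimal sketch follows; the example positions and read counts are hypothetical, and this is not the authors' proprietary GATK-based pipeline.

    # Minimal illustration of the reported read-support filter; not the authors' pipeline.
    def passes_read_filter(alt_reads, total_reads, min_fraction=0.20, min_alt_reads=4):
        """Keep a candidate variant only if enough reads support the alternative allele."""
        if total_reads == 0:
            return False
        return alt_reads >= min_alt_reads and (alt_reads / total_reads) >= min_fraction

    # Hypothetical candidate calls (positions and read counts are invented)
    candidates = [
        {"pos": "chr2:g.145156000", "alt_reads": 3,  "total_reads": 40},  # fails: <4 supporting reads
        {"pos": "chr2:g.145214000", "alt_reads": 12, "total_reads": 30},  # passes: 40% of reads
        {"pos": "chr2:g.145260000", "alt_reads": 6,  "total_reads": 80},  # fails: 7.5% of reads
    ]
    for c in candidates:
        verdict = "kept" if passes_read_filter(c["alt_reads"], c["total_reads"]) else "filtered out"
        print(c["pos"], verdict)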
The copy number variants (CNVs) identified from the WES data were verified by a comparative genomic hybridization (CGH) array using an Innoscan710 instrument (Innopsis, Santa Clara, CA, USA) according to the manufacturer's instructions. After obtaining the WES data for both patients as well as the parents of patient 1, the subsequent analysis identified a novel truncation variant (NM_014795.3: c.2177_2180delCTTT, NP_055610.1:p.Ser726TyrfsTer7) in patient 1 that was classified as a deleterious mutation according to the variant interpretation guidelines of the ACMG (Fig. 1). This variant was not present in unrelated healthy controls from the exome sequence databases ExAC Browser (http://exac.broadinstitute.org/) and gnomAD Browser (http://gnomad.broadinstitute.org/) 8. The novel 4-bp INDEL that leads to the frameshift p.Ser726TyrfsTer7 (Fig. 1) was confirmed as de novo using Sanger sequencing 9,10. No other mutations were found in patient 1 in other genes known to be associated with ALS 2, confirming the diagnosis of MWS and putting an end to years of misdiagnosis. However, no pathogenic mutations, including SNVs or INDELs, were identified in patient 2 in either ZEB2 or other genes associated with ALS. The WES data for both patients were also screened for possible CNVs using a proprietary bioinformatics analysis protocol based on the eXome-Hidden Markov Model v1.0 (XHMM; https://atgu.mgh.harvard.edu/xhmm/) 11. The CNV screening analysis in patient 2 identified a novel deletion of at least 0.573 Mb (GRCh37/hg19 chr2: g.144704611-145277958) that was predicted to lead to the complete loss of the ZEB2 gene, resulting in haploinsufficiency (Fig. 2a, b). This mutation was confirmed by a CGH array to be a novel 1.08 Mb heterozygous deletion (arr[GRCh37] 2q22.3(144569168_145648045)x1) that encompasses the 0.573 Mb deletion detected during the WES data analysis (Fig. 2c). The deleted region also encompasses the neighboring genes GTDC1, which is unrelated to MWS 12, and TEX41, which is a non-protein-coding gene of unknown function (http://www.genecards.org/cgi-bin/carddisp.pl?gene=TEX41). This deletion is not present in the Human Gene Mutation Database, version 2016.2 (HGMD; http://www.hgmd.org/), or in ClinVar (http://www.ncbi.nlm.nih.gov/clinvar/), which suggests that it is a novel pathogenic variant. Two other larger deletions encompassing the same region as this variant but with different breakpoints, ID 2566 (4.30 Mb) and ID 251811 (2.65 Mb), were found in the DECIPHER database (http://decipher.sanger.ac.uk). Inheritance of the deletion by patient 2 cannot be excluded, as a DNA sample from the father was not available, but it is reasonable to assume that it is a de novo deletion due to its pathogenic classification and the absence of any phenotypic features of MWS in either of the parents. Our genomic analysis confirmed the suspicion of MWS in patient 2, despite an initial misdiagnosis of ALS. In contrast, no CNVs were identified from the WES data obtained from patient 1. In summary, WES was performed on genomic DNA extracted from blood samples obtained from two patients and their parents, when available, after informed consent was obtained. Subsequent analyses revealed the presence of a novel pathogenic truncation variant (NP_055610.1: p.Ser726TyrfsTer7) of the ZEB2 gene in patient 1, which led to the loss of the CID and zinc-finger 2 protein domains and resulted in haploinsufficiency (Fig. 1).
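The CNV screening step described above ultimately comes down to comparing called intervals against the ZEB2 coordinates. A minimal sketch of that comparison follows; only the genomic coordinates are taken from the text, and the helper name and output format are illustrative. Note that a WES-based call is resolved only to the captured targets, so the raw interval can end slightly short of the annotated gene boundary even when the gene is effectively lost.

```python
# Minimal sketch: how much of the ZEB2 locus falls inside a deletion call.
# Coordinates are GRCh37/hg19 and taken from the text; everything else is illustrative.

ZEB2 = ("2", 145141048, 145282747)

def fraction_of_gene_deleted(call_chrom, call_start, call_end, gene=ZEB2):
    """Fraction of the gene interval that lies inside the deleted interval."""
    g_chrom, g_start, g_end = gene
    if call_chrom != g_chrom:
        return 0.0
    overlap = min(call_end, g_end) - max(call_start, g_start)
    return max(0.0, overlap) / (g_end - g_start)

# The WES-derived call and the CGH-confirmed call reported for patient 2
wes_call = ("2", 144704611, 145277958)   # ~0.573 Mb
cgh_call = ("2", 144569168, 145648045)   # ~1.08 Mb

for label, (chrom, start, end) in (("WES", wes_call), ("CGH", cgh_call)):
    frac = fraction_of_gene_deleted(chrom, start, end)
    size_mb = (end - start) / 1e6
    print(f"{label}: {size_mb:.3f} Mb call, {frac:.1%} of ZEB2 inside the deleted interval")
```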
Interestingly, in patient 2, no disease-causing SNVs or INDELs were identified; however, we were able to detect a heterozygous deletion of the entire ZEB2 gene due to a large-scale deletion of the region encompassing GRCh37/hg19 chr2: g.144704611-145277958. We were able to verify this deletion using a CGH array, which detected the presence of a larger chromosomal deletion (arr[GRCh37] 2q22.3(144569168_145648045)x1) that encompassed the deleted region identified via WES. The confirmed heterozygous deletion of a portion of chromosome 2 encompasses a gene that is upstream of ZEB2, GTDC1 (MIM 610165), which encodes a glycosyltransferase-like domain-containing protein 1, as well as a downstream gene, TEX41, which does not encode a known protein and is of unknown function. As the Illumina exome capture kit does not target the TEX41 gene, it was not detected during the WES CNV analysis. No association has been reported in the literature between a large-scale GTDC1-TEX41 deletion and any pathogenic phenotype; thus, we cannot determine whether these genes contribute to additional phenotypes in this patient beyond those associated with MWS (http://www.genecards.org/cgi-bin/carddisp.pl?gene=TEX41) 13. It is of note that we were able to correctly re-diagnose these patients as having MWS after both were misdiagnosed with ALS during infancy, when the phenotypic features of MWS were not yet present. The successful use of WES/CES as a first-tier single-assay test for MWS has recently been highlighted as evidence that it can also be used for the differential diagnosis of a wide variety of neurodevelopmental disorders. Using WES as a first-tier test in two patients with an early clinical diagnosis of ALS and normal 15q11.2-q13 methylation test results, we were able to identify two novel mutations that have not been previously described (a 4 bp deletion and a > 0.573 Mb deletion) that unequivocally differentiate MWS from other ALS. As a result, we propose that WES/CES can be used as a cost-effective first-tier assay to diagnose MWS, whether it is caused by SNVs, INDELs, CNVs or other factors, and to differentiate it from ALS in newborns, infants, and young children with suspected ALS who also have a normal 15q11.2-q13 methylation test result.

HGV Database
The relevant data from this Data Report are hosted at the Human Genome Variation Database at https://doi.org/10.6084/m9.figshare.hgv.2357 and https://doi.org/10.6084/m9.figshare.hgv.2360.
James Parkinson and His Disease
John Gillingham, M.B., B.S., F.R.C.S., F.R.C.S.E., F.R.C.P.E.
Professor of Surgical Neurology, Royal Infirmary of Edinburgh, and The Western General Hospital
Res Medica, April 1967, 5(4): 8-12. doi: 10.2218/resmedica.v5i4.494

James Parkinson was born in 1755 in Shoreditch, close to the City of London and like his father practised medicine there as an Apothecary and Surgeon. His earlier years in practice were disturbed by a rebellious spirit, roused by the poverty and injustices he saw around him. Inevitably he was drawn into politics and joined the provocative London Corresponding Society. He wrote a number of highly critical pamphlets under the pseudonym of "Old Hubert". His criticisms of government and administration were at times so bitter and fearless that eventually they led to his being subpoenaed and examined by the Privy Council. During the course of these examinations he had to answer to the Lord Chancellor, the Prime Minister, Mr. William Pitt and others in high office. Fortunately his explanations impressed his interrogators by their honesty and sincerity and he escaped imprisonment. By the time he was 40, with the increasing demands of a busy practice and a young family, he seemed to turn all his efforts to his own work and writings. His interests were broad. His first book was on "The Organic Remains of a Former World". Later he wrote on medical education, the preservation of health, and a brilliant criticism, "Observations on Doctor Hugh Smith's Philosophy of Physics".
Nevertheless it was not for another 22 years that he wrote his classic essay on "The Shaking Palsy", which was published in 1817 (Critchley 1955). It is not surprising that this energetic, compassionate man with a keen sense of observation and flair for detailed recording should turn his attention to that hitherto neglected group of patients suffering from the disease later to be called by his followers "Parkinsonism". On pages 15 and 16 of his monograph (Parkinson 1817) he describes how the tremor of an aged patient disappeared following the onset of a "stroke" — a capsular hemiplegia. In about a fortnight the limbs had regained most of their movement. He says — "During the time of their having remained in this state, neither the arm nor the leg of the paralytic side was in the least affected with the tremulous agitation; but as their paralysed state was removed, the shaking returned." The first surgical attempts to treat Parkinsonian tremor were in the early 1930s, by destructive lesions at various levels of the cortical spinal pathways — the motor cortex, the internal capsule, the cerebral peduncle and later the posterior lateral quadrant of the upper cervical spinal cord. However, had this original observation of James Parkinson been carefully considered, a more successful surgical approach to this problem might have been achieved earlier. He clearly stated "As their paralysed state was removed, the shaking returned". These first operations often led to considerable disability from paralysis and neither the results nor their physiological basis encouraged pursuit of the problem in this way. Nevertheless it was his experience with these procedures that led Russell Meyers (1942) to put forward his hypothesis that tremor and rigidity might be relieved by interruption of
the pallido-fugal fibres and without any involvement of the cortico-spinal tract. This marked the great step forward, but unfortunately his operation, designed to interrupt these fibres through the lateral wall of the third ventricle, was ill-conceived. The results, although encouraging in some respects, were disappointing because of injury to diencephalic structures adjacent to the third ventricle, with stupor and a high mortality, and the operation fell into disrepute. Because of this, and preoccupation with the medical problems of the Second World War, nothing more was done. Later Fenelon (1949) in Paris took up Russell Meyers' original observation and devised a new operative approach using a sub-frontal route. He followed the optic tract backwards beneath the frontal lobe to the point where the tract begins to merge with brain. Using this landmark and by directing his electrode upwards and slightly laterally for 1 cm, he was able to create an electro-coagulation lesion of the pallido-fugal fibres — the ansa lenticularis and adjacent globus pallidus. The results of this procedure were encouraging. Tremor and rigidity were often reduced and occasionally abolished, yet without any untoward side effects. In particular there was no evidence of stupor, paralysis, sensory deficit or akinesis. This work was soon taken up by Guiot, who published a series of successful results in 1953, and it was Guiot who demonstrated this exciting new operation to me in Paris in 1954 (Guiot 1953).
Early in 1955 our first patient was treated. A miner of 50 with post-encephalitic Parkinsonism, he had been unable to work for 10 years because of severe tremor and rigidity of the left limbs. At least half of each week was spent in bed because of an exacerbation of akinesis, sweating and tremor. Over the years he had lost 2 stones in weight and had become gravely disabled. Following operation, which of necessity had to be performed under local anaesthesia to observe the effect and effectiveness of the lesion, he rapidly improved. He lost his tremor and rigidity completely and there was no paresis. His sense of well-being and weight were quickly restored, his kyphosis lessened and he returned to surface work at the pit in two months. He has remained well since, although some mild rigidity in the left limbs has returned in the last few years. However, for twelve years he has been without tremor. A further patient was treated in the same way shortly afterwards and a similar result obtained, which has been maintained after twelve years. In the meantime we were becoming increasingly interested in the development of the first stereotaxic human instrument, devised and used by Spiegel and Wycis of Philadelphia in 1947 for the treatment of psychiatric disorder (Spiegel et al. 1947). This was designed very much on the pattern of that devised and used by Horsley and Clarke with such precision for animal work in 1908 (Horsley and Clarke 1908). The open operation of Fenelon, although very successful, was difficult and hazardous. If a discrete lesion could be sited accurately at a predetermined target through a burr hole by means of a suitable guiding apparatus fixed to the head, then much would have been achieved. This was the great contribution of Spiegel and Wycis, for the field of stereotaxic surgery is now one of the major branches of surgical neurology, not only in Parkinsonism and the dyskinesias, but also in epilepsy, intractable pain and some of the psychiatric disorders. Guiot was soon to follow with a much simpler yet very precise apparatus, a modified version of which has been used in my own department for many years (Figs. I and II). I remember the early discussions we had in Paris with great pleasure, and how ultimately we decided that the posterior stereotaxic approach using an occipital burr hole, even though it meant a longer track, would probably be the best. As subsequent events have shown, it was a fortuitous decision, not only because the best results were obtained in this way, but also because it led to a greater understanding of the basic problems of Parkinsonism and the effectiveness of the different lesions used.
These early procedures were sometimes inaccurate and we came to realise the stereotaxic method was fallible. The problem was not that of imprecise instrumentation or lesion making, but of the anatomical variation of one brain to another and even of one hemisphere to another. We had to rely upon radiologically determined landmarks such as the septum pellucidum and the third ventricle (the mid-sagittal plane) and the anterior and posterior commissures, shown by means of a radio-opaque dye or air. These landmarks gave only a reasonable bearing for our target and we began increasingly to look for physiological methods such as electric stimulation and reversible lesions to help us. By fractionating the electro-coagulation lesion using low heat initially, it was possible to show the damping down of tremor or the relief of rigidity and, of equal importance, side-effects such as speech, motor or sensory disturbances, before the permanent lesion was created; this method has proved to be more reliable in our hands than stimulation. Gradually, as a result of the marking of all lesions at operation by detaching the tiny stainless steel tip of the electrode, taking a skull X-ray afterwards, then charting them on stereotaxic atlases, the sites of the most effective lesions for the relief of tremor and rigidity were soon determined. This was further helped by making as small a lesion as would be compatible with maintained improvement. We also began to map out for the first time the various tract systems such as the corticospinal fibres, the parieto-sensory projection within the posterior limb of the internal capsule and the tract systems concerned with speech. The results of this work using scattergram techniques have been published elsewhere (Gillingham 1962). In 1955, when most of us were working on the globus pallidus and the pallido-fugal fibres, Hassler suggested that the most effective lesion for the relief of tremor should lie in the ventro-lateral nucleus of the thalamus (Hassler 1955). This was subsequently proved by Riechert, Cooper, ourselves and many others as successful operations on the thalamus were reported (Riechert 1955, Cooper 1959, Gillingham 1960). Nevertheless, with follow-up, the lesion of the globus pallidus remained the most effective for the relief of rigidity. By elevating our posterior track to the globus pallidus, we found that a double ipsilateral lesion 15 mm from the mid-sagittal plane could be made quite successfully with only one insertion of the electrode. The posterior lesion for tremor was made first, in the ventro-lateral nucleus posterior to the capsule, and the second, for rigidity, in the globus pallidus anterior to the capsule. This technique, even after some 700 operations have been performed for Parkinsonism, has remained the most effective. There still seemed to be room for error and our restless search for further accuracy was eventually rewarded. For some years the neurophysiologists had relied on electrical recording rather than stimulation for locating the electrode tip, and this was beautifully demonstrated to me by Professor David Whitteridge during explorations of the external geniculate ganglion of the cat. The borders of grey and white matter were clearly defined with a degree of accuracy which so far we had not known. The subsequent development of the technique and its value in stereotaxic surgery has been published elsewhere (Gaze et al.
1964) and followed very closely the work of Albe-Fessard (1962). Since then, depth microelectrode recording has become almost standard practice as further information has accumulated from greater experience and the use of smaller electrodes (1 to 10 μ tips). The borders of the thalamus and its sensory relay nucleus, of the internal capsule and the globus pallidus, are identified with confidence and target siting is no longer a problem. Detailed study of electrical activity from the basal ganglia is continuing, for there is much to learn about sensory mechanisms and the basic pathology of Parkinsonism. Of particular interest has been the recognition of spontaneous rhythmical activity arising in the thalamus, synchronous with tremor yet not evoked by any sensory stimulus. It is not always found and as yet its relation to tremor has not been fully determined. The more sophisticated techniques of frequency analysis of the various patterns of cell activity may give some of the answers. Perhaps of equal importance to the understanding of the basic pathology of Parkinsonism are the biochemical changes which occur. Recently Barbeau (1962) and others have shown a disturbance of dopamine metabolism, and we have followed this work by a study of this substance in the cerebro-spinal fluid of the lateral and third ventricles in patients with Parkinsonism and in controls. The team responsible for this study, working in my department and that of Professor Perry, will be reporting about it shortly. Future research in the field of the dyskinesias would seem to depend very much upon the pursuit of the abnormal electrochemical changes which are present. The surgeon, stumbling as he does, often in an empirical way, responds to each challenge thoughtfully and with the improvement of the patient as his primary concern. The rewards of such an exercise in understanding the problems of Parkinsonism have been considerable, and in particular perhaps in the outlining of a "pathway" which lies within the diencephalon, interruption of which at any point relieves tremor and rigidity and, to a greater or lesser extent, some of the other associated symptoms as well, such as oculogyric crises and the compulsive thinking that sometimes go with them. This "pathway", which has been defined by a whole series of differently sited yet successful lesions by surgeons across the world, extends from the inferior aspect of the globus pallidus anteriorly, upwards and posteriorly through the globus pallidus, across the posterior limb of the internal capsule at and above the intercommissural plane into the ventro-lateral nucleus of the thalamus. There it turns downwards through the zona incerta, just lateral to the red nucleus, to the substantia nigra. Its distance from the mid-line and its width vary from point to point and there would seem to be "bottlenecks" within it. A lesion placed strategically within it brings immediate relief of symptoms and signs, but if it is poorly placed the results are inadequate and side effects follow. Its connections with the cortex above and spinal cord below have not yet been defined, but in the diencephalon we would seem to have shown "the pathway" to be the ansa and fasciculus lenticularis (Gillingham 1966).
Much of what I have said has been about research and the solution of our problems of accurate placement of effective lesions for the relief of this relentlessly progressive disease, but we must now look at the results and the indications and contraindications of operation. Not all patients are benefitted by stereotaxic surgery. Those who are deteriorating rapidly with bilateral tremor and rigidity and who show widespread effects of their disease, notably intellectual impairment, gross reduction of voice volume and disturbances of micturition, cannot be improved and are often made worse. Fortunately the majority are not so severely affected, and in these patients operation will always effect some increase in independence, and in a reasonably high proportion great improvement, particularly in those with strictly unilateral rigidity and tremor. Bilateral operations, if staged with at least a month between them, are being carried out increasingly as precision has increased (Gillingham 1964). Post-operative drugs are usually necessary, although the dosage is often progressively reduced as the months and years pass. It is the accumulating evidence, with prolonged follow-up, now of twelve years in some patients, of the greatly slowed or halted disease process which is perhaps the most exciting observation of all, and which now poses the important question "How has it happened?"
Pap smear as early diagnostic tool for cervical cancer - a life saviour

Background: Cervical cancer is a leading cause of mortality and morbidity among women globally and the most common gynaecological cancer in developing countries. The Papanicolaou (Pap) smear is a simple and cost-effective screening test for cervical cancer. The aim of this study is to evaluate and interpret the pattern of cervical Pap smear cytology in a tertiary hospital. The interpretation and reporting of the Pap smear is based on the 2001 Bethesda system.

Materials and Methods: This is a retrospective study conducted at the Department of Pathology, Basaveshwara Medical College, Hospital and Research Centre, Chitradurga, Karnataka, India. The study was conducted over a period of two years, from June 2015 to May 2017. All Pap smears received in the Department of Pathology during the study period were included.

Results: A total of 2210 Pap smears were reported in the study period. The majority of the cases were inflammatory smears (35.88%) and negative for intraepithelial lesion or malignancy (49.86%). Candidiasis, bacterial vaginosis, Trichomonas vaginalis, atrophy and reactive cellular changes associated with inflammation were seen in 0.49%, 0.72%, 0.36%, 8.91% and 0.40% of cases respectively. Vault smears made up 0.31% of the material studied. Epithelial cell abnormalities (1.4%) included atypical squamous cells of undetermined significance (0.4%), low grade squamous intraepithelial lesion (0.63%) and high grade squamous intraepithelial lesion (0.31%). 88% of low grade squamous intraepithelial lesions were seen in the reproductive age group (18-50 years).

Conclusion: Cervical cancer is the most common gynaecological cancer in developing countries. The Pap smear is a simple, easy and cost-effective screening tool to detect premalignant and malignant cervical lesions, and it reduces the mortality due to cervical cancer through early diagnosis and treatment.

Introduction
Cervical cancer is the most common gynaecological cancer leading to death in developing countries [1][2][3][4]. In 2017, it ranked as the 2nd most frequent cancer among women in India, next only to breast cancer, and the 2nd most frequent cancer among women between 15 and 44 years of age. Every year 122844 women are diagnosed with cervical cancer and 67477 die from the disease [5]. With early diagnosis and treatment, morbidity and mortality may be reduced by 70% and 80% respectively [6]. In developing countries, the higher prevalence of cervical cancer is due to ineffective screening programmes. The Pap smear is a simple, convenient, cost-effective and reliable test for early screening of cervical lesions. Since its introduction there has been a dramatic reduction in the incidence and mortality of invasive cervical cancer worldwide [1,[7][8][9]. The Papanicolaou test, also known as the Pap smear, is a screening method used to detect potentially precancerous and cancerous processes in the cervix. The Greek doctor Georgios Papanikolaou invented this test and it was named after him [6,10]. The 2001 Bethesda system terminology reflects important advances in the biological understanding of cervical neoplasia and cervical screening technology and is the most widely used system for describing Pap smear results [7][8][9]. This study was done to evaluate the pattern of cervical Pap smear cytology and to find out the incidence of epithelial cell abnormalities.

Sampling methods and sample collection: Prior to the study, permission was obtained from the Institutional Ethics Committee.
All cervical Pap smears received in the Department of Pathology during the study period were included. A proforma was filled in and data were collected. A total of 2210 Pap smears were reported in the study period. On receipt, the slides were fixed in 95% ethyl alcohol and stained with Papanicolaou stain by cytotechnicians. Slides were then mounted with DPX (Distrene, Dibutyl phthalate, Xylene) and examined by pathologists. The results of these Pap smears were reported according to the 2001 Bethesda System for Reporting Cervical Cytologic Diagnoses (Box 1). All the data were manually collected and subsequently analysed.
Inclusion criteria: Women aged between 18 and 80 years.
Exclusion criteria: Women aged less than 18 years and more than 80 years.

Results
A total of 2210 Pap smears were analysed during the study period. The age of the patients in the present study ranged from 18 to 80 years (Table 1). The majority (89%) of the Pap smears were from the reproductive age group (18-50 years). 0.63% of Pap smears were from patients aged below 18 years and 0.99% from patients aged above 70 years. The cervical Pap smear findings are tabulated in Table 2. Epithelial cell abnormalities were seen in 1.40% of cases, which included LSIL (0.63%), HSIL (0.31%), ASCUS (0.40%) and atypical glandular cells (0.04%). LSIL was most commonly seen in the reproductive age group (18-50 years) and HSIL was most commonly seen in the perimenopausal age group (51-65 years). Atrophic and reactive cellular changes were seen in 8.91% and 0.40% respectively. Organism-associated lesions such as candidiasis (0.45%), bacterial vaginosis (0.72%) and Trichomonas vaginalis (0.36%) were also seen in our study.

Discussion
Cervical carcinoma alone is responsible for about 5% of all cancer deaths in women worldwide [1,14]. It has been reported that in developing countries 200,000-300,000 women die from cervical cancer each year. Initiation of national screening programmes in the developed countries has resulted in a marked decrease in cervical cancer related deaths [1,15]. A tremendous amount of effort is also devoted to cervical cancer screening in the United States [1,16]. The incidence of cervical cancer has decreased by more than 50% in the past thirty years because of widespread screening with cervical cytology [1,8]. Considering the efficacy of Pap smear cytology in preventing cervical cancer, it is advocated that screening should be initiated in all women at the age of 21 years [1,12]. In our study of cervical Pap smear cytology, patients aged from 18 to 80 years were included, and the predominant population was between 18 and 50 years (89%). The present study shows that 0.63% of Pap smears were from patients aged below 18 years. A similar figure (0.6%) was seen in the study done by Pudasaini et al in Nepal on cervical Pap smears in a tertiary hospital. However, in the studies done in the mid-western part of Nepal and in Bangladesh, there were no cases below 20 years [8,12]. In contrast to these studies and to our study, the proportion of Pap smears from women aged below 20 years was quite high (11.7%) in Pakistan, as shown in the study done by Haider et al [18]. There were 2.01% unsatisfactory smears in the present study, and the most common causes were obscuring dense inflammation and blood, absence of an endocervical or transformation zone component, and low squamous cellularity.
This is similar to the study done in Pakistan (1.8%) by Bukhari et al [9]. Considerable variation in the unsatisfactory smear rate was seen across studies conducted in different places. A study done in Kathmandu revealed 0.3% unsatisfactory smears, which is lower than in our study [7]. In contrast to our study, the incidence of unsatisfactory smears was quite high in the studies conducted by Patel et al [7,8,12]. However, in one study done in Gujarat, the incidence of ASCUS (40.74%) was quite high [23]. LSIL and HSIL were most commonly seen in the reproductive age group (18-50 years) and the perimenopausal (46-55 years) age group respectively, which correlates well with the studies done by Pudasaini et al and Hirachand et al [1,7]. Other studies conducted by Yeasmin et al and Tailor et al revealed that epithelial cell abnormalities were seen in the age group 40 years and above [12,23]. [19,20,21,26]. Trichomonas vaginalis was seen in 0.36% of cases, which is in contrast to other studies, where there were 3.2%, 0.7% and 0.6% of cases [20,23,24]. Pap smear examination should begin as soon as women are sexually active, irrespective of their age, and should be practised as part of a routine gynaecological screening programme. Implementation of a Pap smear screening programme in all parts of developing countries is necessary for early detection of cervical premalignant lesions, which helps in early diagnosis, prompt treatment and reduction in mortality related to cervical cancer.

Conclusion
The Pap smear is a widely accepted and highly effective screening tool for early detection of premalignant and malignant lesions of the cervix, thus helping in prompt treatment at an early stage. To date, the Pap smear is the most useful screening procedure known to reduce the mortality and morbidity associated with cervical malignancy, and is thus a life saviour. The 2001 Bethesda system used for cervical cytology is a standard method and gives a descriptive diagnosis that helps the gynaecologist in individual patient management. Awareness and education regarding the availability, utility and importance of the Pap smear should be created among all women worldwide.

Funding: Nil, Conflict of interest: None initiated, Permission from IRB: Yes
A Parametric Study Delineating Irreversible Electroporation from Thermal Damage Based on a Minimally Invasive Intracranial Procedure

Background: Irreversible electroporation (IRE) is a new minimally invasive technique to kill undesirable tissue in a non-thermal manner. In order to maximize the benefits from an IRE procedure, the pulse parameters and electrode configuration must be optimized to achieve complete coverage of the targeted tissue while preventing thermal damage due to excessive Joule heating.

Methods: We developed numerical simulations of typical protocols based on a previously published computed tomographic (CT) guided in vivo procedure. These models were adapted to assess the effects of temperature, electroporation, pulse duration, and repetition rate on the volumes of tissue undergoing IRE alone or in superposition with thermal damage.

Results: Nine different combinations of voltage and pulse frequency were investigated, five of which resulted in IRE alone while four produced IRE in superposition with thermal damage.

Conclusions: The parametric study evaluated the influence of pulse frequency and applied voltage on treatment volumes, and refined a proposed method to delineate IRE from thermal damage. We confirm that determining an IRE treatment protocol requires incorporating all the physical effects of electroporation, and that these effects may have significant implications in treatment planning and outcome assessment. The goal of the manuscript is to provide the reader with the numerical methods to assess multiple-pulse electroporation treatment protocols in order to isolate IRE from thermal damage and capitalize on the benefits of a non-thermal mode of tissue ablation.

Background
Irreversible electroporation is a new technique for the focal ablation of undesirable tissue using high voltage, low energy electric pulses [1,2]. An IRE treatment involves placing electrodes within the region of interest and delivering a series of electric pulses that are microseconds in duration [3]. The pulses create an electric field that induces an increase in the resting transmembrane potential (TMP) of the cells in the tissue [4]. The induced increase in the TMP is dependent on the electric pulse (e.g. strength, duration, repetition rate, shape, and number) and the physical configuration of the electrodes used to deliver the pulses. Depending on the magnitude of the induced TMP, as well as its duration and repetition rate for induction, the electric pulses can have no effect, transiently increase membrane permeability, or cause cell death [5]. Spatially, for a given set of conditions, the TMP and therefore the degree of electroporation is dependent on the local electric field to which the cells are exposed. Because the transitions in cellular response to the electric pulses are sudden, the treated regions are sharply delineated.
Consequently, numerical models that simulate the electric field distributions in tissue are needed to predict the treated region [6][7][8]. There have been several studies evaluating the efficacy and safety of IRE in treating both experimental and spontaneous tumors. Al-Sakere et al. subcutaneously implanted sarcoma tumors in mice and achieved a complete response in 12 of 13 tumors with IRE treatment [1]. Guo et al. achieved regression of hepatocellular carcinoma tumors implanted in liver in 9 out of 10 rats treated with IRE [9]. Neal et al. implanted human mammary tumors orthotopically in mice and produced a complete response in 5 of 7 tumors with IRE which verified that IRE can be used in a heterogeneous environment [10]. In a clinical series of IRE based therapies, our group has long-term follow-up on canine patients with spontaneous tumors. One canine patient was treated with IRE and radiation therapy for a non-resectable, high-grade glioma, resulting in complete remission of the tumor at four months [11]. Another canine patient with a focal histiocytic sarcoma has been in complete remission for 8 months since completion of the last IRE treatment [12]. One of the main advantages of IRE over other focal ablation techniques is that the therapy does not use thermal damage from Joule heating to kill the cells. As a result, major blood vessels, extracellular matrix and other critical structures are spared [1,2]. Because electroporation based therapies require high-voltage pulses to be administered to the tissue, thermistors and thermocouples may become damaged during treatment. Therefore, previous investigations into the thermal aspects of electroporation based therapies have relied on numerical modeling, typically using a modified Pennes' Bioheat equation with an added Joule heating term to predict the thermal effects. There have been several theoretical attempts in the literature to investigate the thermal response of tissues to electroporation-based treatments and assess the degree, if any, of thermal damage. In some studies, the authors calculate the pulse time required to reach a maximum temperature of 50°C, which they assume is when instantaneous thermal damage will occur [13,14]. Others calculate the equivalent thermal dose or thermal damage associated with one or multiple pulses to determine the amount, if any, of tissue damage due to exposure of the tissue to elevated temperatures [4,[15][16][17][18][19]. Finally, other papers show the equivalent thermal dose for an 80-pulse IRE treatment [1]. Pliquett et al. performed a qualitative assessment of thermal effects induced by electroporation by using temperature-sensitive liquid crystals that change colors at 40°C , 45°C, and 50°C [20]. Although these theoretical and qualitative analyses are very powerful and well-grounded, to the best of our knowledge there is no experimental data for actual temperature changes during IRE pulse administration to prove that cell death occurs independent of classical thermal-induced mechanisms. This data is vital in order to validate the numerical models and better predict the temperature changes during a procedure for thermal damage assessment. In addition, the numerical models assessing thermal damage in the literature do not simultaneously incorporate the significant changes in the electrical conductivity of the tissue due to temperature changes as well as electroporation. 
Therefore, models of electroporation-based protocols that include the electrical conductivity changes and do not assume that the heat will dissipate by the beginning of the following pulse are needed in order to capture the entire thermal effect of a procedure. It is then possible to maximize the sparing of critical structures in the brain and other organs and to determine the upper limit of the IRE treatment, above which thermal damage ensues. Accurate prediction of all treatment-associated effects is vital to the development and implementation of optimized treatment protocols. Our group has confirmed the safety of intracranial IRE procedures in three experimental canines [21]. These procedures were performed through craniectomy defects to expose the cortex (grey matter) and allow for the insertion of the electrodes in the brain. We have also correlated numerical models with 3D lesion reconstructions in order to establish electric field intensities needed to kill grey matter [22]. These studies have shown that IRE has the potential to treat intracranial disorders in canine and human patients. In the present study, we use a previously reported treatment performed through 1.2 mm diameter burr holes with CT-guided placement within a subcortical neuroanatomic target as the basis for a parametric study [23]. The parametric study in brain tissue evaluates the effects that the change in tissue electrical conductivity due to electroporation and the thermal effects have on the electric field distribution. It further simulates treatment volumes for similar procedures performed at three frequencies that have been used clinically in other tissues including prostate, kidney and lungs [24][25][26][27]. This study demonstrates how one can use an Arrhenius analysis to relate temperature and length of exposure during electroporation-based procedures.

Clinical Procedure
The experimental aspect of the study is described in detail within our previously published conference proceeding [23], and was approved by the Institutional Animal Care and Use Committee and performed in a Good Laboratory Practices (GLP) compliant facility. After induction of general anesthesia in the canine, two 1.2 mm diameter burr holes were created in the skull in preparation for electrode insertion [23]. The CT guidance system (TeraRecon, Foster City, CA) was used to place the electrodes into the targeted deep white matter of the corpus callosum, as seen in Figure 1 [23]. A neuromuscular blocker was administered to suppress patient motion prior to the IRE treatment [23]. A focal ablative IRE lesion was created in the white matter of the brain using the NanoKnife® generator (Angiodynamics, Queensbury, NY USA) [23]. Two 1-mm diameter blunt tip electrodes with 5 mm exposure were inserted into the brain through the burr holes with a center-to-center separation distance of 5 mm [23]. After insertion of the electrodes, four sets of twenty, 50 μs long, electric pulses were delivered with an applied voltage of 500 V [23]. The polarity of the electrodes was alternated between the sets to minimize charge build-up on the electrode surface [23]. These parameters were determined from our previous in vivo intracranial IRE procedures, which showed that they were sufficient to ablate grey matter [21,22,28]. The NanoKnife® pulses were synchronized with the canine's heart rate in order to prevent ventricular fibrillation or other cardiac arrhythmias and were delivered in trains of ten [23]. Due to the recharging demands of the capacitors, each train of ten pulses was delivered 3.5 seconds after completion of the previous train.
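For orientation, the delivery schedule just described can be laid out numerically, and the resulting duty cycle is the quantity used later to scale the Joule heating term in the continuous-delivery models. The sketch below is illustrative only; the pulses were ECG-gated, so the fixed 1 Hz repetition rate assumed here is a simplification, and the variable names are not from the study.

```python
# Sketch of the pulse-delivery schedule described in the text.
PULSE_WIDTH = 50e-6      # s, 50 microsecond pulses
PULSES_PER_TRAIN = 10
TRAINS_PER_SET = 2       # twenty pulses per set, delivered as two trains of ten
N_SETS = 4
RECHARGE_DELAY = 3.5     # s, capacitor recharge pause between trains
NOMINAL_PERIOD = 1.0     # s, assumed heart-gated pulse period (illustrative)

n_pulses = N_SETS * TRAINS_PER_SET * PULSES_PER_TRAIN            # 80 pulses in total
on_time = n_pulses * PULSE_WIDTH                                  # total energized time
# Rough wall-clock duration: pulses at ~1 Hz plus a recharge pause after each train
wall_time = n_pulses * NOMINAL_PERIOD + (N_SETS * TRAINS_PER_SET - 1) * RECHARGE_DELAY
duty_cycle = PULSE_WIDTH / NOMINAL_PERIOD   # fraction of each period the field is on

print(f"total pulses     : {n_pulses}")
print(f"total on-time    : {on_time * 1e3:.1f} ms")
print(f"approx. duration : {wall_time:.1f} s")
print(f"duty cycle       : {duty_cycle:.2e}")   # used to scale the Joule heating term
```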
Temperature Measurements
Temperatures were measured in the brain during the procedure using the Luxtron® m3300 Biomedical Lab Kit Fluoroptic® Thermometer and STB medical fiber optic probes (LumaSense™ Technologies, Santa Clara, CA USA). The probes, which are immune to electromagnetic interference, consist of a fiber optic cable terminated with a temperature-sensitive phosphorescent sensor. Pulsed light strikes the phosphorescent element, causing it to fluoresce. The decay time of this fluorescent signal is temperature dependent and is measured with an accuracy of ± 0.2°C. In order to minimize the invasiveness of the procedure, the thermal probes were placed within 0.78 mm outer diameter polyimide tubing that was attached near the tip of the electrode-tissue interface and 10 mm along the insulation, as seen in Figure 2 [23]. The data acquisition was performed with TrueTemp™ software (Version 2.0, Luxtron® Corporation, Santa Clara, CA USA), in which each probe was set to a recording frequency of 2 Hz. The measured temperature was imported into Wolfram Mathematica 6.0 for students (Champaign, IL USA) for analysis. The oscillatory data were smoothed with the moving average command, in which each reported data point is the average of the neighboring ± 10 data points. We present the raw and the smoothed versions of the temperature data in the results section.

Image Acquisition
Immediately after completion of the pulses, the subject was imaged with CT and a 0.2 T magnetic resonance imaging (MRI) system. The animal was then humanely euthanized 2 hours post-IRE by intravenous barbiturates, and the brain was harvested and fixed in 10% buffered formalin. The fixed ex vivo brain was later imaged on a 7.0 T MRI for a more detailed analysis of the lesion produced.

Numerical Models
Numerical models can be used for treatment planning to ensure that only the targeted regions are ablated [8]. In order for this to be accurate, one has to know the physical properties of the tissue, the electric field distribution, and the electric field threshold needed for IRE. This study examined two sets of models. The first was developed to replicate the experimental procedure and used the temperature and current data to calibrate the properties and behavior of the tissue in response to the electric pulses. After calibrating the model with properties based on the experimental procedure, the model was adjusted to simulate treatments at three pulse repetition rates (0.5, 1, and 4 Hz) and three voltages (500, 1000, and 1500 V) for up to 80 pulses. The computations were performed with a commercial finite element package (Comsol Multiphysics, v.3.5a, Stockholm, Sweden).

Electric Field Distribution
The methods used to generate the electric field and temperature distributions in tissue are similar to the ones described by several investigators [4,[6][7][8]22]. The electric field distribution associated with the electric pulse is given by solving the governing Laplace equation, ∇·(σ∇φ) = 0, where σ is the electrical conductivity of the tissue and φ is the electrical potential [8]. The baseline electrical conductivity of the non-permeabilized white matter, σ0 = 0.256 S/m, was based on measurements by Latikka et al. in living humans at 37°C [29]. However, a tissue's conductivity is also a function of its temperature and of any electropermeabilization induced by the electric pulses [30][31][32][33].
Therefore, the electrical conductivity was modeled dynamically to incorporate changes due to electroporation and thermal effects, described by σ(|E|, T) = σ0 [1 + 2·flc2hs(|E| − Edelta, Erange)] [1 + α(T − T0)], where σ0 is the baseline conductivity, α the temperature coefficient, T the temperature, and T0 the physiological temperature [22]. Figure 3 displays the smoothed Heaviside function, flc2hs, with a continuous second derivative that ensures convergence of the numerical solution. This function is defined in Comsol, and it changes from zero to one as normE_dc − Edelta crosses zero, over the range ± Erange [22]. Initially, the 3D simulation was solved for a negligible fraction of the total treatment duration under homogeneous tissue conditions in order to establish a baseline electric field distribution. The homogeneous electric field map provides the starting values for the dynamic conductivity function [22]. In our function, we assumed that the conductivity would increase by a factor of 3.0 due to electroporation, since this is similar to the reported factor in other organs during electroporation [30,[33][34][35][36][37]. Additionally, this factor matches the experimental current (data not shown) that was measured by the NanoKnife® after the transient membrane charging effects had settled during the delivery of the pulses [23]. The brain was modeled as a 7.0 cm × 5.0 cm × 5.0 cm ellipsoid with the electrodes inserted to a maximum depth of 2.5 cm (Figure 2). Because electrode placement resulted in the electrodes being surrounded mainly by white matter, homogeneous physical properties were set to those of white matter. The electrodes were modeled as an insulating body with an extension of stainless steel. Boundary conditions often include surfaces where the electric potential is specified, as in the case of a source or sink electrode, or surfaces that are electrically insulating, as on the free surfaces of the tissue, for example. The electrical boundary condition along the tissue in contact with the energized electrode was φ = V0. The electrical boundary condition at the interface of the other electrode was φ = 0. The remaining boundaries were treated as electrically insulating, ∂φ/∂n = 0. The models are fully defined and readily solvable using a numerical method once an appropriate set of boundary conditions and the properties of the tissue are defined (Table 1). Instead of modeling eighty individual pulses, we modified the approach to have a continuous delivery of the electric field, since we assume that once the conductivity increased due to electroporation it would not revert back. Using this approach eliminates the need to manipulate the time steps in order to ensure that the microsecond pulses are captured by the solver. This helps the simulation run faster and smoother since there are no abrupt changes due to the pulses. In order to deliver the same amount of energy as in the pulsed approach, we multiplied the Joule heating by the duty cycle (duration/period) of the pulse in the tissue and insulation domains. This ensures that at the onset of each pulse, equal amounts of thermal energy have been deposited in the tissue using either approach.
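A minimal sketch of a dynamic conductivity function of this kind is given below. The baseline conductivity, the factor-of-3 electroporation increase and the reference temperature come from the text; the transition centre, transition width, temperature coefficient and the smoothed step standing in for Comsol's flc2hs are assumptions that would have to be calibrated against the measured current and temperature.

```python
import math

# Sketch of a dynamic tissue conductivity sigma(E, T): a smoothed step raises
# the baseline conductivity by a factor of 3 once the local field exceeds an
# electroporation threshold, and a linear temperature coefficient accounts for
# Joule heating. SIGMA0, the 3x factor and T0 come from the text; E_DELTA,
# E_RANGE and ALPHA are placeholders.

SIGMA0    = 0.256   # S/m, baseline white-matter conductivity (Latikka et al.)
EP_FACTOR = 3.0     # conductivity increase once tissue is electroporated
E_DELTA   = 500.0   # V/cm, assumed centre of the electroporation transition
E_RANGE   = 100.0   # V/cm, assumed half-width of the smoothed transition
ALPHA     = 0.02    # 1/degC, assumed temperature coefficient (~2 %/degC)
T0        = 37.0    # degC, physiological reference temperature

def smoothed_step(x, half_width):
    """Smooth 0-to-1 transition over [-half_width, +half_width],
    playing the role of Comsol's flc2hs function."""
    if x <= -half_width:
        return 0.0
    if x >= half_width:
        return 1.0
    return 0.5 * (1.0 + math.sin(0.5 * math.pi * x / half_width))

def sigma(E, T):
    """Electrical conductivity (S/m) for a local field E (V/cm) and temperature T (degC)."""
    ep_term = 1.0 + (EP_FACTOR - 1.0) * smoothed_step(E - E_DELTA, E_RANGE)
    return SIGMA0 * ep_term * (1.0 + ALPHA * (T - T0))

# Example: conductivity well below, at, and above the assumed transition
for field in (100.0, 500.0, 1000.0):
    print(f"E = {field:6.0f} V/cm -> sigma = {sigma(field, 37.0):.3f} S/m")
```

The smoothing of the step serves the same purpose as in the finite element model: it avoids a discontinuous jump in conductivity that would otherwise hinder convergence.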
Temperature Distribution
The Pennes' Bioheat equation is often used to assess tissue heating associated with thermally relevant procedures, because it accounts for the dynamic processes that occur in tissues, such as blood perfusion and metabolism. Blood perfusion is an effective way to dissipate heat, in contrast to metabolic processes, which generate heat in the tissue. Modifying this equation to include the Joule heating term gives the following form: ρ Cp ∂T/∂t = ∇·(k∇T) − wb Cb ρb T + q''' + σ|∇φ|², where k is the thermal conductivity of the tissue, T is the temperature rise above the arterial temperature (Ta = 37°C), wb is the blood perfusion rate, Cb is the heat capacity of the blood, ρb the blood density, q''' is the metabolic heat generation, σ|∇φ|² is the Joule heating term, ρ is the tissue density, and Cp is the heat capacity of the tissue. Several thermal boundary conditions can be employed to study the heat exchange between the electrodes and the tissue [13,17,38]. In our models, the electrodes were considered as heat sinks, h = 10 W/(m²·K), which dissipate heat from the tissue through the electrodes to the environment [19,22].

Thermal Damage Distribution
Thermal damage occurs when tissues are exposed to temperatures higher than their physiological temperature for extended periods of time. If the period of exposure is long, thermal damage can occur at temperatures as low as 43°C, while 50°C is generally chosen as the target temperature for instantaneous thermal damage [43]. This damage can represent a variety of processes including cell death, microvascular blood flow stasis and/or protein coagulation [44]. The thermal effects can be calculated to assess whether a particular set of pulse parameters and electrode configuration will induce thermal damage in superposition with IRE. The damage can be quantified using an Arrhenius-type analysis, which assumes that the damage follows first order reaction kinetics, Ω(τ) = ∫₀^τ ζ exp(−Ea/(R T(t))) dt, where ζ is the frequency factor, Ea the activation energy, R the universal gas constant, T(t) is the temperature distribution and τ is the heating time [4,15,44,45]. It has been shown that Ω = 0.53 is the threshold for burn injuries in blood-perfused skin [45][46][47]. We have adapted the Arrhenius equation, which traditionally has been used to study burn injuries in skin and transdermal drug delivery using electroporation, to investigate therapeutic IRE. In order to compute whether any thermal damage resulted from the procedure, a time-dependent partial differential equation (PDE) was added under the PDE Mode in Comsol Multiphysics to simultaneously solve the distributions of the electrical potential, temperature, and thermal damage within the domain. The temperatures were calculated with the modified Pennes' Bioheat equation described above. Thermal damage was computed in the entire tissue domain in order to perform a comprehensive analysis of the thermal effects. The expression used to calculate the damage takes the general PDE form ∂Ω/∂t + ∇·Γ = F, where Ω is the damage, Γ is the flux vector and F is the forcing function. The forcing function is written in logarithmic form, F = exp(ln ζ − Ea/(R T)), in order to prevent abrupt changes in the solver, since small changes in temperature can have a significant impact on the damage. The flux vector was assumed to be zero since heat conduction is already incorporated in the bioheat equation above. Similarly, all the boundaries in the domain were assumed to be of the Neumann form, ∂Ω/∂n = 0. The analysis was performed with a starting temperature equal to physiological conditions and the cell death parameters from Table 2.

Clinical Procedure
The 0.2 T MRI showed a focal, well circumscribed IRE lesion with calculated volumes of 0.131 cm3 and 0.120 cm3 for the T1-weighted post-contrast and T2-weighted MRIs, respectively, which we reported in Garcia et al. [23].
The lesion appeared hyperintense within the white matter on the T1-weighted post-contrast MRI, where contrast was able to leak into the brain due to breakdown of the blood-brain-barrier. The lesion was also hyperintense on the T2-weighted MRI sequence. Figure 4 demonstrates the focal and cavitary nature of the ablative white matter lesion within 2 hours after pulsing on both the ex vivo 7.0 T MRI ( Figure 4A) and with light microscopy ( Figure 4B). The most affected region appears to be directly between the electrodes, which is where the highest electric fields were generated. The reconstructed lesion volume from the high-resolution 7.0 T MRI was 0.058 cm 3 [23]. Experimental Temperature Distribution Figure 5 shows the raw and smoothed experimental temperature (solid) distributions measured with the thermal probes near the tip of the electrode-tissue interface and 10 mm above the insulation [23]. For the probe at the electrode-tissue interface (P1), four sets of mild increases in temperature are seen, which corresponds to each of the pulse sets delivered. The probe at the insulation (P2) shows minimal increase in temperature, mostly appearing due to heat conduction from the treatment region. The experimental changes in temperature resulting from the pulses were less than 1.15°C and were insufficient to generate thermal damage. This confirms that any cell death achieved by the procedure was a direct result of IRE since numerical simulations near the electrode-tissue interface routinely experience the greatest thermal effects [8,14,51]. It is important to note that the starting temperature was approximately 33°C due to the anesthesia effects. However, a starting temperature of 37°C was used in the numerical models investigated in the parametric study. Figure 5 also includes the calculated temperature (dashed) distribution from the calibration model at the two locations where the thermal probes were positioned experimentally. This numerical simulation replicated all aspects of the experimental procedure including the four sets of twenty pulses and the 3.5 seconds delay after the first ten pulses in each set due to the recharging demands of the capacitors. Even though the starting temperature was set to 33°C, we scaled the resulting initial temperatures to match the experimental values in order to provide a more objective comparison. From this figure, it is clear that the temperatures calculated with the numerical model were marginally higher than the measured ones. This calibration model was used as the basis for the parametric study since we were able to closely match the experimental and calculated temperature and electrical current. Figure 6 is a representation of the results from the simulated IRE treatments in brain. This figure displays the electric field, conductivity, temperature, and thermal damage distributions at the end of an entire IRE protocol. The electric field and temperature distributions are critical since they allow for the numerical integration of the electric field ( Figure 6A) to determine volumes of IRE and temperature ( Figure 6C) to assess thermal effects including thermal damage, respectively ( Figure 6D). We provide these distributions for one time point (80 s) and treatment parameter set (e.g. eighty 50-μs pulses at 1000 V delivered at 1 Hz), but could readily report any of the other simulated protocols. Figure 6A displays the electric field distribution on the tissue treated with IRE. 
Figure 6B shows the distribution of the electrical conductivity of the tissue as given by Equation 2. Figure 6C presents the temperatures at the completion of the pulse delivery. Figure 6D uses the temperature data throughout the treatment delivery to assess the presence of thermal damage. The maximum temperature reached was 47.8°C, with a thermal damage value, Ω, of 0.38. The increase in temperature during this simulation did not generate any tissue death by thermal modes since Ω was below the 0.53 threshold needed for thermal damage. Parametric Study Model After creating and calibrating the numerical model to the experimental data, a parametric study was performed to analyze the effects of varying the IRE treatment by using three pulse repetition rates (0.5, 1, and 4 Hz) and three voltages (500, 1000, and 1500 V) for up to 80 pulses. From these models, the volume of tissue treated by IRE as well as temperature changes and thermal damage was analyzed. Table 3 tabulates the calculated volumes of tissue that were treated with IRE at the onset and completion of the eighty pulses for all treatment scenarios considered, and also compares predictions drawn from the models using static conductivity and the dynamic conductivity equation. Furthermore, the time history of each volumetric quantity for IRE and the thermal assessments are presented to provide a clear delineation of treatment protocols that achieve IRE alone or in superposition with thermal damage. We report the volumes of tissue treated with IRE as those exposed to a minimum electric field of 500 V/cm, which was found to be the IRE threshold in grey matter for similar pulse parameters to those used in this parametric study [22]. Although we used a 500 V/cm threshold for our calculations, other researchers could adapt the numerical model to investigate the results of IRE in other tissues where the threshold could be different. In order to provide insight to the reader, we modeled the electrical conductivity of the tissue with static and dynamic functions. In the static function, s(T), the electrical conductivity of the tissue was assumed to be homogeneous and dependent only on the temperature. The dynamic function, s(E, T), incorporated the dependency of the electrical conductivity on temperature and electroporation. Applying 500 V at 0.5, 1, and 4 Hz resulted in IRE treated volumes between 0.179 -0.182 cm 3 for the static function and 0.293 cm 3 for the dynamic function. The IRE treated volumes ranged between 0.424 -0.460 cm 3 for the static function and between 0.706 -0.732 cm 3 for the dynamic function when the applied voltage was 1000 V. Finally, applying 1500 V generated IRE volumes between 0.683 -0.835 cm 3 for the static models and between 1.134 -1.296 cm 3 for the dynamic ones. The IRE treatment volume increased 55% -69% when the dynamic conductivity function was incorporated as compared to the static conductivity function. The results show the importance of using a conductivity function that takes into account all the relevant physical phenomena that occurs during electroporation in order to provide accurate treatment planning [22,[30][31][32]. Researchers and physicians should be aware of the increase in treatment volumes due to electroporation and temperature based conductivity changes when performing treatment planning as has been described by several groups in the field [21,[30][31][32][33]. 
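To make the distinction between the static and dynamic functions concrete, the sketch below evaluates a σ(|E|, T) of the form described earlier, using a smoothstep stand-in for Comsol's flc2hs. It is a minimal illustration, not the Comsol implementation, and all numerical values (baseline conductivity, threshold field, transition width, temperatures) are placeholders rather than the study's parameters.

```python
import numpy as np

def flc2hs(x, scale):
    """Smoothstep stand-in for Comsol's flc2hs: 0 for x <= -scale, 1 for
    x >= scale, with continuous first and second derivatives in between."""
    t = np.clip(x / scale, -1.0, 1.0)
    return 0.5 + 0.9375 * t - 0.625 * t**3 + 0.1875 * t**5

def sigma_static(T, sigma0=0.25, alpha=0.032, T0=37.0):
    """Static function s(T): temperature dependence only (placeholder values)."""
    return sigma0 * (1.0 + alpha * (T - T0))

def sigma_dynamic(E, T, sigma0=0.25, factor=3.0, E_delta=500.0, E_range=100.0,
                  alpha=0.032, T0=37.0):
    """Dynamic function s(E, T): a `factor`-fold increase once the local field
    E [V/cm] crosses the electroporation threshold, on top of the same
    temperature dependence. All parameter values here are illustrative."""
    return (sigma0
            * (1.0 + (factor - 1.0) * flc2hs(E - E_delta, E_range))
            * (1.0 + alpha * (T - T0)))

# Compare the two functions below, at, and above the assumed threshold at 40 degC
for E in (100.0, 500.0, 1000.0):
    print(f"E = {E:6.0f} V/cm: static = {sigma_static(40.0):.3f} S/m, "
          f"dynamic = {sigma_dynamic(E, 40.0):.3f} S/m")
```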
Other groups have developed algorithms that are capable of determining the optimum electrode configuration and optimum amplitude of the electric pulses for treatment planning of electroporation-based therapies [52,53]. Electrical Current Distribution In addition to monitoring the temperature during the experimental procedure, the current of each individual pulse was measured by the NanoKnife®. It was found that the current throughout the procedure was 1.11 ± 0.2 A [23]. The resulting currents from the parametric IRE simulations for the 500, 1000, and 1500 V treatments delivered at 0.5, 1, and 4 Hz were calculated and are displayed in Figure 7. Applying 500 V resulted in electrical currents of 1.08-1.12 A, independent of the pulse repetition rate.

Table 3. Volumes (cm³) of tissue treated with IRE for the static σ(T) and dynamic σ(E, T) conductivity functions.

At the higher applied voltages, the temperatures were significantly higher, which also resulted in higher electrical conductivity and thus higher electrical current. The increase in electrical current was not observed during the 500 V treatments since at this lower voltage the thermal effects are negligible compared to the changes in conductivity due to electroporation. The measured current agreed with the calculated currents, validating our assumption of an increase in the brain electric conductivity by a factor of 3.0 due to electroporation. Traditional Thermal Assessment The volumes of tissue presented in this section were used to calculate the percentage of the tissue that was treated with IRE in superposition with the thermal assessment, and these percentages are given in parentheses. The curves in Figure 8 are calculated volumes of tissue exposed to temperatures greater than 43°C and 50°C. These values have been used for the assessment of potentially thermally damaging temperatures, with 43°C being used for extended exposures and 50°C for instantaneous thermal damage [4]. Figure 8A shows that at the completion of the treatments using a 0.5 Hz pulse repetition rate, volumes of tissue exposed to temperatures greater than 43°C and 50°C were only achieved when delivering 1500 V, up to maximum volumes of 0.235 cm³ (20.7%, >43°C) and 0.002 cm³ (0.2%, >50°C). However, the effects of temperature become more significant when the pulses are delivered at a higher repetition rate, shown in Figure 8B for a frequency of 1 Hz (80 s total treatment time). Here, applying 1000 V resulted in 0.112 cm³ (15.8%) of the tissue exposed to temperatures greater than 43°C and 0.00 cm³ (0.0%) at 50°C, significantly lower than the 1500 V treatment, which had tissue volumes of 0.557 cm³ (48.1%) and 0.158 cm³ (13.7%) exposed to temperatures greater than 43°C and 50°C, respectively. In Figure 8C one can appreciate the drastic effects of further increasing the repetition rate to 4 Hz (20 s total treatment time). In this scenario, even 1000 V results in tissue heating above 50°C in 0.124 cm³ (16.9%) of tissue, and above 43°C in 0.335 cm³ (45.7%). Finally, for an applied voltage of 1500 V, the majority of the tissue will be heated to elevated temperatures, with 0.741 cm³ (57.2%) and 0.410 cm³ (31.7%) of tissue experiencing temperatures greater than 43°C and 50°C, respectively. Thermal Damage Assessment Although the volumes of tissue exposed to a minimum temperature can provide insight into the thermal effects resulting from a particular IRE protocol, they do not provide a quantitative measure of thermal damage based on established metrics in the literature [4,15,44,45].
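Before turning to the Arrhenius-based metric, it is worth noting that the volumes and percentages quoted above reduce to simple bookkeeping over the solution grid: count the cells whose field exceeds the IRE threshold, count the cells whose temperature exceeds 43°C or 50°C, and express the latter as a fraction of the former. A voxel-based sketch is given below; the field and temperature arrays are synthetic stand-ins for the coupled finite-element solution, so only the procedure, not the printed values, carries over.

```python
import numpy as np

def treated_and_thermal_volumes(field_v_per_cm, temperature_c, voxel_volume_cm3,
                                ire_threshold=500.0, temp_thresholds=(43.0, 50.0)):
    """Volume above the IRE field threshold, plus the volume above each
    temperature threshold expressed as a percentage of the IRE volume."""
    ire_vol = np.count_nonzero(field_v_per_cm >= ire_threshold) * voxel_volume_cm3
    thermal = {}
    for tc in temp_thresholds:
        vol = np.count_nonzero(temperature_c >= tc) * voxel_volume_cm3
        pct = 100.0 * vol / ire_vol if ire_vol > 0 else float("nan")
        thermal[tc] = (vol, pct)
    return ire_vol, thermal

# Synthetic 0.5 mm voxel grid standing in for the end-of-treatment solution
rng = np.random.default_rng(0)
shape = (80, 80, 80)
voxel_volume = 0.05 ** 3                              # cm^3 per voxel
field = rng.uniform(0.0, 1200.0, size=shape)          # V/cm, synthetic
temperature = rng.normal(39.0, 3.0, size=shape)       # degC, synthetic

ire_vol, thermal = treated_and_thermal_volumes(field, temperature, voxel_volume)
print(f"IRE volume: {ire_vol:.3f} cm^3")
for tc, (vol, pct) in thermal.items():
    print(f">= {tc:.0f} degC: {vol:.3f} cm^3 ({pct:.1f}% of the IRE volume)")
```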
In Figure 9 we provide plots that show the time dependence of the volume of tissue exposed to a minimum electric field of 500 V/cm, which was found to be the IRE threshold in grey matter for similar pulse parameters to those used in this study [22]. Additionally, we present the volume of tissue that undergoes thermal damage using the Arrhenius analysis presented in the methods section. Similar to the previous analysis, in Figure 9 we investigate the influence of increasing the frequency of pulse delivery in both predicted IRE treatment and thermal damage volumes. Specifically, the curves displayed in Figure 9 correspond to the IRE treated volumes with 500 V (0.293 cm 3 ), 1000 V (0.706 -0.732 cm 3 ), and 1500 V (1.134 -1.296 cm 3 ). An IRE treatment using applied voltages of 500 V and 1000 V did not result in any thermal damage when delivered at 0.5 Hz. For these cases there was sufficient time for the heat to dissipate through conduction and blood perfusion prior to the onset of the following pulse. However, 1500 V pulses delivered at 0.5 Hz resulted in thermal damage in 0.052 cm 3 (4.6%) of the IRE treated tissue. In Figure 9B, there are virtually identical IRE treatment volumes for the 1 Hz repetition rate as the 0.5 Hz of Figure 9A, but when applying 1500 V, there is some thermal damage generated within 20 seconds that affects about 0.183 cm 3 of tissue, approximately 16% of the IRE volume. Finally, Figure 9C displays the IRE and thermal damage volumes for the 4 Hz treatment. In this case, thermal damage occurs in 0.094 cm 3 (12.8%) of the tissue when applying 1000 V, and approximately 29% of the IRE volume is thermally damaged (0.376 cm 3 ) by increasing the voltage to 1500 V. Additionally, if one focuses on the first seconds of the 1500 V, there is also an increase in the IRE lesion volume due to the increase in the temperature, and thus the electric conductivity. Discussion We previously reported on the first experience applying IRE to the deep subcortical white matter of canine brain [23]. In that procedure electrodes were placed under CTguidance through minimally invasive 1.2 mm diameter burr holes in order to produce a lesion. Temperatures were measured during this procedure, including the location in close proximity to where the lesion was produced. The low temperatures measured by our system confirmed the unique, non-thermal mode of IRE cell death. The ex vivo lesion volume was smaller than that observed from the in vivo MRIs due to elimination of edema as well as brain shrinkage during the fixation process. There was also limited time for the lesion to evolve relative to our previous work since the experimental aim of this study was to perform the procedure deep in the brain and evaluate the thermal effects, and therefore did not include the 3-day survival [21]. The ability of IRE to focally ablate small volumes of brain tissue in a minimally invasive fashion has significant potential clinical implications for the treatment of brain diseases in which destruction of focal neuroanatomic target is desired, such as some forms of epilepsy or central neuropathic pain syndromes [54]. We have shown in previous studies the ability to safely produce lesions in the grey matter of the brain cortex [21,22]. However, many of the potential central nervous system targets may reside deep within the brain, including the white matter [54]. Therefore, it is important to show the ability of IRE to produce a lesion deep within the white matter of the brain. 
To the best of our knowledge, we presented the first report of a CT-guided intracranial IRE treatment, as well as the first demonstration that IRE pulses may be delivered within the deep white matter of the brain without causing significant edema [23]. We believe that the rapid implementation, minimally invasive nature, and precision offered by image-guided IRE will make it the preferred treatment delivery platform for future applications of this technology in the brain. It should be noted that the thermal effects are most prevalent closest to the electrodes, where the electric field magnitude is also highest. Therefore, any thermal damage induced by an IRE procedure should occur within the targeted ablation volume and will not eliminate the effectiveness of the treatment. However, IRE's unique non-thermal mechanisms are the key to its ability to be implemented in the vicinity of sensitive structures such as blood vessels and major nerves, a major limitation of resection and thermal therapies. Therefore, a comprehensive and accurate understanding of the potential thermal effects and/or damage is essential to maintain these advantages and to mitigate the challenges associated with thermal therapies. Based on the electrode configuration and the electrical current and temperature measured in one canine, we developed a parametric study to investigate the effect of pulse frequency at three different applied voltages of 500, 1000, and 1500 V. The parametric study provides a reliable method to develop treatment protocols that ensure the IRE procedure achieves localized cell death independently of thermal damage. The study was based on pulse frequency and confirms that if pulses are delivered too rapidly, thermal damage ensues and many of the benefits of this technology will not be realized for the patient. The described method takes the pulse parameters (frequency, magnitude, and number of pulses) into account in addition to the dynamic changes in tissue electrical conductivity due to temperature increase as well as electroporation. Furthermore, the model accounts for the biological processes of the Pennes bioheat equation, including metabolic heat generation and blood perfusion. Several researchers have demonstrated that blood perfusion is compromised after electroporation in organs outside the central nervous system; thus the heat dissipation from blood convection will be reduced, and it becomes even more important to decrease the frequency of pulse delivery in order to allow for thorough heat dissipation through conduction [55-57]. Even though the effects of pulse duration, electrode exposure, and separation distance were not explicitly investigated in this manuscript, the method can be readily adapted in order to select protocols that do not generate thermal damage in superposition with IRE. Thus, it is necessary that models are explored for each particular application in order to optimize the treatment protocol and better predict the treatment outcome. Several values have been reported in the literature describing how much the electric conductivity of brain tissue increases per degree Celsius [39,58,59]. To be conservative, we selected 3.2%/°C as the temperature coefficient in this study, as reported by Duck et al. [39]. This value is higher than other reported values in the literature, which range between 1.4-2.0%/°C [58,59], resulting in higher calculated temperatures for our models.
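A quick back-of-the-envelope check of how much this choice matters for the conductivity (and hence the Joule heating) is to evaluate only the thermal factor 1 + α(T - T_0) for a representative temperature rise; the baseline conductivity and temperature rises in the sketch below are illustrative, not measured values.

```python
# Thermal part of the conductivity model, sigma = sigma0 * (1 + alpha*(T - T0)),
# evaluated for the two temperature coefficients discussed in the text.
sigma0 = 0.25                  # baseline conductivity [S/m], placeholder value
for alpha in (0.032, 0.016):   # 3.2 %/degC versus 1.6 %/degC
    for dT in (5.0, 10.0):     # assumed rise above physiological temperature [degC]
        sigma = sigma0 * (1.0 + alpha * dT)
        print(f"alpha = {alpha:.3f} /degC, dT = {dT:4.1f} degC -> "
              f"sigma = {sigma:.3f} S/m (+{100.0 * alpha * dT:.0f}%)")
```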
In order to assess the effect of a lower temperature coefficient, we simulated treatments with a 1.6%/°C value, since it is half the magnitude used in the parametric study and still within the range reported in the literature. The volumes of tissue treated with IRE at the onset of the pulses were identical to the ones reported in Table 3 with the dynamic function. At the completion of the pulses, the 1000 and 1500 V applied voltages resulted in smaller volumes of tissue treated with IRE compared to the values reported in Table 3. As with the 3.2%/°C temperature coefficient, there were no significant increases in the predicted IRE treatment volume for the 500 V trials. Applying 1000 and 1500 V resulted in IRE treatment volume increases of 1.26-3.27% and 1.91-8.85% versus those calculated at the onset of the pulses, respectively (before thermal effects begin). The increase in electric conductivity, and thus in IRE treatment volume, due to thermal effects is moderate compared to the effect of electroporation. Nevertheless, the dependence of the electric conductivity on temperature must be incorporated from a thermal perspective in order to optimize IRE protocols while minimizing any potential thermal damage. IRE is an emerging focal ablation technique, and it is vital for researchers and physicians to work together in developing the numerical models for predictable treatment planning. The models presented here provide insight into the role of electroporation and temperature in the resulting volumes of tissue ablated with IRE alone or in superposition with thermal damage. The aim of this work was to provide the reader with numerical methods capable of evaluating pulse parameters used clinically, in order to maximize the benefits of a non-thermal mode of tissue ablation. The numerical methods presented are capable of delineating volumes of tissue undergoing IRE from volumes undergoing thermal damage as a function of time. In this manner, the time point at which different treatment protocols achieve IRE while preventing thermal damage can be determined. It is important for researchers and physicians to be aware of the upper limit of IRE in order to maximize the benefits of a non-thermal mode of tissue ablation. Future work should correlate the electric field distribution from these numerical models with reconstructed IRE lesions as seen in MRI and histopathology in order to generate an electric field threshold for brain tissue for clinical use. Future investigations should also determine the electrical magnitudes at which the increase in electrical conductivity occurs for grey matter, white matter, and pathologic brain tissue. Conclusion We present the results of a parametric study in brain tissue that investigates three voltages delivered at three different frequencies which have been used clinically in other tissues such as prostate, kidney, and lung [24-27]. These numerical simulations were based on an in vivo experimental procedure in which a lesion was produced in the white matter of the brain [23]. The procedure was performed in a minimally invasive fashion through 1.2 mm diameter burr holes, and electrode placement was confirmed with CT imaging [23]. For the first time, the current and temperature were measured together in real time during the delivery of the pulses and were used together as the basis for the numerical models.
The models included all relevant pulse parameters and dynamic changes during treatment, and were capable of determining whether the lesions occurred due to IRE alone or in superposition with thermal damage. IRE alone allows preservation of the major vasculature, extracellular matrix and other critical structures, while achieving cell death in a target location. We hope our results provide physicians and researchers a way to assess individual protocols in order to capitalize on the benefits of this non-thermal mode of tissue ablation.
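As a practical footnote to the thermal analysis above, the Arrhenius damage integral Ω(τ) = ∫_0^τ ζ exp(-E_a/(R·T(t))) dt can be evaluated directly from any sampled temperature history, whether simulated or measured. The sketch below is a minimal numerical version assuming illustrative kinetic parameters (the study's own values are those listed in its Table 2); the example evaluates a constant temperature equal to the simulated peak, so the printed number is only an upper-bound style illustration, not a reproduction of the reported Ω = 0.38.

```python
import numpy as np

# Illustrative kinetic parameters (placeholders, not the study's Table 2 values):
# frequency factor zeta [1/s] and activation energy Ea [J/mol].
ZETA = 7.39e39
EA = 2.577e5
R = 8.314            # universal gas constant [J/(mol K)]

def arrhenius_damage(t, T_celsius, zeta=ZETA, Ea=EA):
    """Thermal damage Omega(tau) = int_0^tau zeta*exp(-Ea/(R*T(t))) dt,
    evaluated with the trapezoidal rule from a sampled temperature history.
    t: time points [s]; T_celsius: temperature at those points [degC]."""
    T_kelvin = np.asarray(T_celsius) + 273.15
    rate = zeta * np.exp(-Ea / (R * T_kelvin))
    return np.trapz(rate, t)

# Example: 80 s held at 47.8 degC, the peak temperature reported in the simulations
t = np.linspace(0.0, 80.0, 801)
omega = arrhenius_damage(t, np.full_like(t, 47.8))
print(f"Omega = {omega:.2f}  (thermal damage assumed when Omega >= 0.53)")
```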
2014-10-01T00:00:00.000Z
2011-04-30T00:00:00.000
{ "year": 2011, "sha1": "af451b1d356f356db2e19aed9a670cbe70eb50d8", "oa_license": "CCBY", "oa_url": "https://biomedical-engineering-online.biomedcentral.com/track/pdf/10.1186/1475-925X-10-34", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "af451b1d356f356db2e19aed9a670cbe70eb50d8", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
233424800
pes2o/s2orc
v3-fos-license
Alitretinoin Compliance in Patients with Chronic Hand Eczema Background Oral alitretinoin is effective in the treatment of chronic hand eczema (CHE), and ≥12 weeks of alitretinoin treatment has been shown to be effective in Korean patients. However, in the real world, a considerable number of patients discontinue alitretinoin, which leads to treatment failure. Objective To evaluate the compliance rate of alitretinoin treatment and explore common reasons for poor compliance in patients with CHE in the real world. Methods We retrospectively reviewed the electronic medical records of CHE patients treated with alitretinoin. We defined ‘poor-compliance’ as subjects who were treated with alitretinoin for <12 weeks and ‘good-compliance’ as subjects who were treated with alitretinoin for ≥12 weeks. We reviewed the demographics, dose, and duration of alitretinoin usage, efficacy, and reasons for poor compliance. Results A total of 137 subjects were enrolled, and 77 (56.2%) did not complete the 12-week treatment with alitretinoin. Among them, the non-improvement rate was significantly higher in the poor-compliance group than in the good-compliance group (p<0.01). The main reasons for the alitretinoin cessation in the poor-compliance group were insufficient response (40.8%), followed by high cost (34.7%), and adverse events (24.5%). Conclusion Alitretinoin appears the preferred long-term treatment option for CHE. Although there are complaints about late efficacy, cost, and side effects, following proper explanation, these should not justify discontinuation. Physicians need to recognize the reasons for poor compliance with alitretinoin for each patient and suggest continuing alitretinoin for the successful treatment of CHE. INTRODUCTION Hand eczema (HE) is the most widely recognized form of dermatitis to influence the hands 1 . Chronic hand eczema (CHE) is a chronic form of HE that persists for more than 3 months or recurs more than twice within a year 1 . Approximately 10% of CHE patients are diagnosed with severe CHE by the Physician Global Assessment (PGA), and their longterm prognosis is poor 1 . The reliable use of emollients, topical or systemic steroids, and avoiding aggravating factors can be options for treating mild cases of CHE. However, these conventional treatments yield unsatisfactory results in severe CHE [2][3][4] . Alitretinoin (9-cis isomer of retinoic acid) is a vitamin A metabolite that has been used as a topical or systemic medicine for severe acne vulgaris, psoriasis, and certain malignancies 5 . Alitretinoin is the first drug to be approved as a treatment option for CHE that is unresponsive to classical topical steroids 6 . In clinical trials, studies have shown that alitretinoin for up to 24 weeks is highly effective with a good safety profile 5 . Furthermore, adherence to daily treat- ment with alitretinoin for 12 weeks or more was found to be effective in treating CHE in Korean patients 1 . However, poor compliance with alitretinoin remains a primary cause of treatment failure and is a major challenge that dermatologists encounter in managing CHE patients 3 . There are limited studies on the alitretinoin compliance in CHE, and our retrospective study aimed to evaluate the compliance of alitretinoin and reasons for poor compliance in the real world. Data collection and analysis Fig . 1 shows the classification flow. 
We defined the 'poorcompliance group' as subjects who were administered with alitretinoin medication for less than 12 weeks with (improved PGA score) or without clinical improvement (similar or worsened PGA score), whereas the 'good-compliance group' comprised subjects who were administered with alitretinoin medication for more than 12 weeks. Based on the classification, we reviewed the electronic medical records of patients, including demographics (sex, age), dose and duration of alitretinoin usage, efficacy of alitretinoin, reasons for poor compliance, and laboratory test results. Efficacy and compliance assessments Efficacy was evaluated by the PGA score at the time patients stopped the alitretinoin treatment. We compared the difference between patients who were treated for less than 12 weeks and those for 12 weeks or more. Regarding the patients in the 'poor-compliance' group, we collected their reasons for alitretinoin withdrawal and analyzed specific adverse drug reactions (ADRs). Furthermore, we subdivided the patients by sex, age, and treatment duration to compare compliance and efficacy. Statistical analysis All data were calculated using IBM SPSS (IBM SPSS Statistics 23.0; IBM Corp., Armonk, NY, USA), and the mean values were identified. The Pearson's chi-squared test and independent two-sample t-test were used for statistical analysis. Results were considered statistically significant at a p-value of less than 0.05. Demographic information A total of 137 moderate (n=31; 22.6%) to severe (n=106; 77.4%) CHE patients were included in this study, and the overall demographic information and alitretinoin usage is shown in Table 1. There was a female predominance, and more than half of the patients were in their 30s∼50s. A total of 60 patients (43.8%) were treated for more than 12 weeks (good-compliance group) and 77 (56.2%) were treated for less than 12 weeks (poor-compliance group); among them, 49 were treated for less than 12 weeks without clinical improvement. Clinical improvement according to treatment duration and alitretinoin dosage Clinical improvement was significantly higher when CHE was treated with alitretinoin for more than 12 weeks (p< 0.01). A dose of 30 mg of alitretinoin had higher rate of clinical improvement than 10 mg of alitretinoin, although they were not significant (p=0.509; Fig. 2). A total of 28 patients who improved within 12 weeks had a 1.04-point improvement in their PGA scores, and 54 patients who improved after 12 weeks had a one-point improvement in their PGA scores. Relationship between compliance and sex, age, alitretinoin, and duration When we compared the good-compliance and poor-compliance groups, people aged 50∼59 years were more likely to be included in the good-compliance group, but there was no statistical significance (p>0.05). Furthermore, 30 mg of alitretinoin had higher compliance rate (n=46, 36.2%) than 10 mg of alitretinoin (n=3, 33.3%), but there was no statistical significance (p>0.05). In terms of alitretinoin dosage, the good-compliance group was occupied more than the poor-compliance group regardless of dose. In the poor-compliance group, 51 of the 77 patients stopped treatment with alitretinoin within 4 weeks. Reasons for poor compliance Among the 77 patients in the poor-compliance group, 28 (36.4%) showed clinical improvement and 49 (63.6%) did not. 1) Patients with clinical improvement (n=28) A total of 28 patients with clinical improvement of CHE stopped alitretinoin within 12 weeks. 
Their PGA score decreased from 4.85 to 3.82, showing a decrease of approximately one point. Five of the 28 patients showed relapse after a 4-month average. 2) Patients without clinical improvement (n=49) The primary reason for alitretinoin stoppage in the poorcompliance group without clinical improvement was ineffectiveness (n=20, 40.8%), followed by high cost (n= 17, 34.7%), and ADRs (n=12, 24.5%) (Fig. 3). Ineffectiveness was the main reason for stoppage in male patients, while high cost was the main reason for stoppage in female patients. Patients aged over 60 years who had used alitretinoin for 4 to 8 weeks most often discontinued the use due to ineffectiveness. At the age of 40, the high cost was the most likely reason for the alitretinoin stoppage. The reasons for patients discontinuing alitretinoin before 4 weeks were ineffectiveness and high cost in equal numbers. DISCUSSION CHE is a chronic form of HE that persists for more than 3 months or recurs for more than twice within a year 1,7,8 . Genetic predisposition, altered immune response, and environmental factors, such as handling chemicals or other skin irritants, have all been suggested as contributing factors for CHE. Pain, itching, and bleeding from fissures can make manual tasks challenging to perform, and embarrassment caused by persistent disfigurement of the hand may cause substantial physical, social, and psychological stress 2,3 . Moreover, CHE may cause an economic burden and decrease the quality of life of patients. Although mild cases of CHE can be managed by avoiding irritants and/or using emollients with topical corticosteroids, severe CHE is extremely challenging to manage and represents a considerable unmet medical need 4 . Alitretinoin (9-cis isomer of retinoic acid), a vitamin A metabolite, has pharmacological effects on cell proliferation, differentiation, apoptosis, angiogenesis, keratinization, sebum secretion, and immunomodulation mediated by nuclear retinoic acid receptors and retinoid X receptors 5 . Alitretinoin suppresses chemokines that are involved in the recruitment of leukocytes to the sites of skin inflammation, expansion of T lymphocytes, and antigen-presenting cells mediating inflammatory responses and suppressing allogenic leukocyte activation 9,10 . Alitretinoin is the first drug to be approved as a treatment option for CHE that is unresponsive to classical topical steroids 5,6,9,11 . In clinical trials, including the BACH trial (randomized double-blind placebo-controlled study), TOCCATA open study (non-interventional study), and meta-analysis, studies have shown that alitretinoin treatment for up to 24 weeks is a highly effective medicine with a good safety profile 3,5,8,12 . Furthermore, Kwon et al. 1 reported that daily use of alitretinoin for 12 weeks was effective in treating Korean patients with CHE. In our study, 137 patients with CHE using alitretinoin were analyzed, and there were a greater number of females and males in their 40s and 50s. The difference in age distribution between the current study and the previous studies may be because our study design only included patients with moderate to severe CHE. In addition, younger patients with CHE would not have been included because they tend to avoid treatment with alitretinoin because of its relatively high cost and possibility of fetal malformations. When treated for more than 12 weeks, the efficacy of alitretinoin was significant in our study, which was higher than the previous Korean study (90% vs. 44.4%) 1 . 
It may be because, in our study, all patients who showed improvement of the PGA score were included; however, in the previous study, the improvement was achieved only after 'clear'or 'almost clear'. Although alitretinoin showed good efficacy, more than half of the patients did not complete a sufficient treatment period for alitretinoin, indicating that poor compliance is a major reason for treatment failure rather than the efficacy of the drug itself. In a previous Korean study, the compliance rate was 70.3%, which was higher than ours 1 . It is presumed that previous prospective studies that looked at efficacy and safety might have attempted to improve patient compliance. Similarly, in terms of dosage, 30 mg of alitretinoin is much more effective than 10 mg. In addition, 10 mg of alitretinoin is known to be used if there are side effects or when underlying diseases, such as diabetes and cardiovascular disease, are present. However, the difference between 30 mg and 10 mg in our study was not significant, although the former was more effective. It may be because the 10 mg was used by a female patient with low weight or patients with kidney dysfunction. Furthermore, there were only 10 (7.3%) patients who used 10 mg of alitretinoin. Moreover, more patients used sufficient period in their 30 mg of alitretinoin, which is presumed to be used for a long time because it seemed that 30 mg was more effective for the patients. When the reasons for alitretinoin discontinuation in the poor-compliance group were analyzed, all 28 patients with clinical improvement discontinued alitretinoin as soon as they showed improvement. Of them, five relapsed after an average of 4 months, which was shorter than the 6 months reported in previous studies 11 . This may be because our patients underwent follow-up assessment at shorter intervals. Patients without clinical improvement stopped alitretinoin for various reasons, but majority discontinued treatment because they felt that it was ineffective. Most of the patients who were dissatisfied stopped alitretinoin within 4 weeks, a much shorter period than the recommended dose duration. Alitretinoin is known to require a sufficient dose duration to show its maximal effect, which corresponds to the results of the current study as well as to those of previous studies. The reason a large proportion of patients ceased taking alitretinoin early seems to be due to insufficient explanation from dermatologists. In addition, patients had a low level of understanding that they should take alitretinoin for a sufficient amount of time. Korean patients with urgent personality characteristics who expect fast improvement may also be likely to discontinue medication earlier 13 . Furthermore, in the analysis of patients in the poor-compliance group who stopped alitretinoin within 4 weeks, the number of patients who stopped because of high cost was the same as the number of patients who stopped because they believed the treatment was ineffec-tive. In Korea, alitretinoin (Alitoc Ⓡ ; GlaxoSmithKline (GSK), Brentford, England) tends to be more expensive with regards to insurance coverage than classical CHE treatments. A study of cost-effectiveness concerning CHE patients in Switzerland, which estimated the incremental cost-effectiveness ratio and clinical effectiveness (quality-adjusted life years [QALYs] derived from a randomized controlled clinical trial) in patients with severe CHE showed the cost-effectiveness of oral alitretinoin 2 . 
This study concluded that although alitretinoin costs more in total treatment costs (€42,208) than in supportive therapy (€38,795), the QALYs were higher in the alitretinoin group (11.21 vs. 10.98), and it was cost-effective in comparison with existing cost-effectiveness thresholds 2 . Many clinical trials, including the BACH trial and TOCCATA open study, have shown the effectiveness of alitretinoin over a course of 24 weeks, while Gulliver and Baker 14 reported effective treatment of CHE with 36 continuous months of alitretinoin treatment. Considering that moderate to severe CHE tends to be refractory to conventional therapy, there are few alternatives to alitretinoin. Thus, it is necessary to persuade patients that alitretinoin should not be discontinued simply due to ineffectiveness or high cost. Some patients stop alitretinoin due to ADRs, which are irritating symptoms such as headache, GI problems, flushing, and dryness. Laboratory abnormalities may be asymptomatic and only detected by examination by a physician; therefore, laboratory abnormalities may not be a factor in lowering compliance. In our study, headache was the most common side effect, as in other studies, and tends to diminish with the continuous use of alitretinoin 1,5 . In addition, like other side effects, such as GI problems, flushing, and dryness, a dose reduction or symptomatic care can manage symptoms. Serious ADRs, such as myocardial infarction, lymphatic edema, paranoia, rectosigmoiditis, and soft tissue swelling, have rarely been reported and are related to underlying diseases and not associated with alitretinoin use 11 . Chronic alitretinoin administration for up to 24 weeks did not lead to drug accumulation in the body, which verifies its long-term safety 15 . That is, most ADRs of alitretinoin are predictable and manageable. Therefore, improved patient screening, education for patients, and close attention to follow-up, for example through telephone and message reminders, may be critical in increasing compliance. The subjects included in this study were moderate to severe CHE patients who had failed conventional treatments. In this study, several factors, such as ineffectiveness, high cost, and ADRs were revealed to result in lower compliance; however, most of these factors are not reasons to discontinue alitretinoin from a doctor's perspective. Considering pharmacological characteristics and other treatment modalities, dermatologists should explain the factors that reduce compliance before initiating treatment. In addition, detailed counselling and management of the factors affecting compliance should be provided at every visit to raise the alitretinoin compliance. The retrospective nature of our study limited our ability to collect case-controlled data. Furthermore, this review was conducted with a relatively small number of patients (n=137) in only two centers. In the future, a large customized study is needed for better analysis of compliance with alitretinoin. CONFLICTS OF INTEREST The authors have nothing to disclose. FUNDING SOURCE This work was supported by the 2019 Inje University research grant. DATA SHARING STATEMENT The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
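As a side note on the statistics, the group comparison between compliance and clinical improvement described in the methods can be reproduced with a standard Pearson chi-squared test. The 2×2 counts below are reconstructed from the figures reported in the results (28 of 77 poor-compliance patients improved and, by the reported numbers, roughly 54 of the 60 good-compliance patients improved); they should be treated as an approximate reconstruction rather than the authors' exact contingency table.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: compliance group; columns: (improved, not improved).
# Counts are approximate reconstructions from the reported results.
table = np.array([[54,  6],    # good compliance (>= 12 weeks)
                  [28, 49]])   # poor compliance (< 12 weeks)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```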
2021-04-29T05:21:00.635Z
2020-12-30T00:00:00.000
{ "year": 2020, "sha1": "d3b7a769315db8269e21027ee273307043ad51a7", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.5021/ad.2021.33.1.46", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d3b7a769315db8269e21027ee273307043ad51a7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
81531476
pes2o/s2orc
v3-fos-license
Risk of Acute Renal Failure Requiring Renal Replacement Therapy after Cardiac Surgery Background: Acute renal failure is a rare but serious complication following cardiac surgery and associated with increased mortality and morbidity. Objective:To identify factors associated with mortality and mortality of patients with acute renal failure after cardiac surgery treated with continuous renal replacement therapy. Method: This was a cohort retrospective study on cardiac surgery patients who developed acute renal failure requiring renal replacement therapy after surgery in Harapan Kita National Cardiac Center between January 2011 and April 2012. Data was retrieved from medical record and consisted of pre-operative, intra-operative, and post-operative variables. Risk factor identification was done using multivariate logistic regression analysis, whereas relative risk analysis was applied to know the association between risk factor and morbidity. Direct or indirect effect of variables on renal failure was analyzed using Barttlet’s and anti-image correlation test. Results: A total of 110 cases were obtained during the study period; 70 (63.3%) among them were men. Patients mean age was 57.6 years. Preoperative renal failure, New York Heart Association Functional Classification Class (NYHA) class IV, critical condition, coronary revascularization surgery and bleeding, post-operative anemia, bleeding and venous saturation <65% showed a trend of mortality and morbidity rate between 0.1 and 9.1. The Keiser-Meyer-Olkin (KMO) value and Barttlet’s test showed that re-surgery, bleeding and low inotropic score resulted in 31.63% probability of having post-operative renal failure. Conclusion: Re-surgery, bleeding and inotropic use may result in postoperative renal failure. morbidity and mortality. 1The risk of renal failure after cardiac surgery is 4% and most of the patients would need dialysis therapy afterwards. 2,3Risk factors identification in patients with high risk of renal failure after cardiac surgery has been studied using the European System for Cardiac Operative Risk Evaluation (Euro SCORE).However, it may not be strong enough to predict mortality rate in high risk patients and combined surgery.Risk clearance was more than 50 mL/ minute.Severe pulmonary hypertension was defined as increased pulmonary arterial pressure more than 55 mmHg.Left ventricle function was categorized based on the left ventricular ejection fraction [LVEF] as follows: good (LVEF >50%), moderate (LVEF 31-50%), poor (LVEF 21-30%), and very poor (LVEF <20%).NYHA class IV heart failure was established if the patient was not able to perform light activity. Intra-operative risk factor was the cardiac bypass time of more than 90 minutes.Post-operative risk factors consisted of re-surgery, bleeding, anemia, inotropic score, and venous oxygen saturation.Re-surgery was a repeated surgery within 24 hours.Bleeding was defined as a blood loss of more than 2.5 liter within the first 24 hours.Anemia was established if there was a reduction of hemoglobin (Hb) level to more than 50% of the pre-operative value. 4Venous oxygen saturation reflects the venous blood saturation on average, which returns to the right side of the heart from all tissues.Body tissues usually utilize only 25% of the available oxygen, while the rest 75% is reserved for increasing activity or physiologic stress. 
5Inotropic score was calculated as dopamine in µg/kg/min x 1+ milrinoneµg/kg/min x 15 + epinephrine in µg/kg/min x100.Inotropic score were grouped as low (<20), moderate (21-30) and severe (>30). 6 Data analyses Sampling adequacy was tested using the Kaiser-Meyer-Olkin (KMO) -Barttlet's test and anti-image correlation.Sample is adequate if the KMO and anti-image correlation values were greater than 0.5.Major component analysis was applied to measure the effects of variables on data variance.Effect of variables on mortality was tested using the multivariate logistic regression analysis with Hosmer and Lemeshow test (HLT) (α= 15% and 95% confidence interval).The HLT was chosen because it is more sensitive than the omnibus value.The HLT value is consistent with Chi-square distribution: the low HLT value, the nearer Chi-square distribution.Assessment of the effect of variables on morbidity was done using the relative risk analysis.Due to sample limitation, relative risk was manually calculated from morbidity on independent variables.Data analysis was done using the statistical software SPSS version 15.0 (SPSS Inc., Chicago, Illinois, USA).factor identification is important to manage patients optimally before surgery and to reduce morbidity and mortality.Therefore, this study was aimed to evaluate the relationship among risk factors (pre-, intra-and post-operatively) and mortality of acute renal failure requiring continuous renal replacement therapy after cardiac surgery. Method Study design and subjects This was a retrospective cohort study in the Intensive Care Unit, Harapan Kita National Cardiac Center.Subjects were patients underwent cardiac surgery between January 2011 and April 2012 who developed renal failure requiring continuous renal replacement therapy.Inclusion criteria were patients underwent coronary revascularization procedure, valve repair or valve replacement and combined coronary revascularization or valve repair.Patients with congenital anomaly were excluded. Risk factors categories and assessment Risk factors for mortality due to renal failure requiring continuous renal replacement therapy were categorized as pre-operative, intra-operative, and post-operative variables.Pre-operative risk factors were body mass index (BMI), prior history of cardiac surgery, type 2 diabetes on insulin treatment, emergency surgery, type of surgery, left main disease, severe renal dysfunction, severe pulmonary hypertension, ejection fraction, and NYHA Class IV heart failure.Body mass index was defined as body weight in kilogram divided by body height in meter square (kg/m 2 ).The results were grouped as underweight (BMI <18.5 kg/m 2 ), normal (BMI 18.5-22,9 kg/m 2 ), overweight (BMI 23-25 kg/m 2 ), or obese (BMI >25 kg/m 2 ).Pre-operative critical conditions were ventricle fibrillation, ventricular tachycardia, cardiac and pulmonary resuscitation, mechanical ventilation, inotropic treatment or intraaortic balloon pump (IABP), anuric acute renal failure or urine production less than 10 mL/ hour.Emergency surgery was defined as an immediate surgery after a diagnostic procedure.Pre-operative renal failure was established if the creatinine Sampling adequacy and principal component analysis The value of KMO test was 0.621 which indicate that the sample was adequate for further analysis.Removal of variables, which had anti-image values less than 0.5, resulted in increased KMO value to 0.640 (p<0.001). 
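The inotropic score defined above reduces to a one-line calculation; a small helper with the study's grouping applied is sketched below. The handling of a score of exactly 20 is an assumption here, since the published bands (low <20, moderate 21-30, severe >30) leave scores between 20 and 21 unassigned.

```python
def inotropic_score(dopamine, milrinone, epinephrine):
    """Inotropic score as defined above:
    dopamine [ug/kg/min] x 1 + milrinone [ug/kg/min] x 15
    + epinephrine [ug/kg/min] x 100."""
    return dopamine * 1.0 + milrinone * 15.0 + epinephrine * 100.0

def inotropic_category(score):
    """Grouping used in this study: low (<20), moderate (21-30), severe (>30).
    Scores of exactly 20 are treated as low here (an assumption)."""
    if score <= 20:
        return "low"
    elif score <= 30:
        return "moderate"
    return "severe"

# Example: hypothetical patient on dopamine 5, milrinone 0.5, epinephrine 0.05
s = inotropic_score(5.0, 0.5, 0.05)
print(s, inotropic_category(s))   # 17.5 low
```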
Characteristics of the study subjects There were 1520 patients underwent cardiac surgery between January 2011 and March 2012.Renal failure was present in 110 patients and were treated with continous renal replacement therapy.Patients' mean age was 57.6 years.Seventy (63.6%) of the patients were men.Most patients underwent coronary revascularization procedure (Table 1).There were 46.4% of patients who overweight.Mean pre-operative LVEF was 49.1%.There were 90 (81.8%) patients with pre-operative renal dysfunction with a mean creatinine clearance of 54.1 mL/minutes. Mortality rate was 40.9% and most cases occurred in patients underwent coronary revascularization.Mortality was 11.8% in patients with more than one cardiac surgery procedures.Lowest mortality rate was found in patients underwent off-pump coronary revascularization.The mean ICU stay was 12.3 days whereas the mean hospital stay was 30.8 days (Table 2).Each of the principal components could explain about 31.6%, 21.9%, 17.6%, 15.3%, and 13.6% of the variance, respectively.These results showed that the first component (re-surgery, bleeding, and inotropic score) can explain 31.6% of the variance within the data. Multivariate logistic regression analysis There was no difference among gender, BMI and age less than 50 years.This condition was contradictory in patients aged more than 50 years with pre-operative critical condition which increased the probability of mortality by 7.1 times higher than patients without pre-operative critical condition. In general, pre-operative critical condition, renal failure, NYHA class IV heart failure, coronary revascularization procedure, anemia and post-operative bleeding, and venous saturation >65% were identified as factors which increased mortality rate. Patients with bleeding after cardiac surgery had 92 times increased risk of death compared to patients with no bleeding.Factors which did not show effect on mortality rate were diabetes on insulin treatment, LVEF <30%, severe pulmonary hypertension, emergency surgery, cardiac bypass time of more than 90 minutes, and re-surgery.By using a level of significance at 95% on relative risk predictor of post-operative morbidity from pre-operative, intra-operative and post-operative variables, only pre-operative condition had significant effect on mortality rate (Table 5).renal dysfunction, diabetes, low LVEF, emergency surgical procedure, long cross clamp period, and blood transfusion. 7In this current study, we did not use risk factors in the Euro Score because this assessment is too weak to estimate mortality rate in high risk patients and combined coronary revascularization and valve surgery. 8uro Score accuracy differs among types of surgery.From six scoring systems, Euro Score had the highest predictive values although the morbidity predictive value differs from the mortality predictive values. 9n this study, most patients had pre-operative renal dysfunction with a mean creatinine clearance of 54.1 mL/minutes.Severe renal failure was found in 7.34%, which is higher than a previous report on 3154 patients which found that 2.1% of patients had severe renal failure requiring continuous renal replacement therapy. 10Cardiac surgery and the use of heart-lung bypass machine may cause an inflammation response and induce acute renal failure. 11Patients with mild pre-operative renal failure have a higher risk for developing post-operative renal failure and bleedingassociated re-surgery. 
12n a study on 105 patients underwent cardiac surgery, mortality rate was 34.3% in those with preoperative ejection fraction of 34.4%, using IABP support and mechanical ventilator of more than 24 hours. 1 Malnutrition increased the risk of morbidity and mortality after cardiac surgery.Observation on 15 patients with low BMI found that 5 out of 12 patients who underwent valve surgery and combined surgery died.This result was similar with another study who found that the risk of renal failure, pneumonia, resurgery, bleeding, and brain ischemia was higher in patients with BMI less than 20 kg/m 2 . 13n imbalance between renal oxygen supply and oxygen demand will induce acute renal failure.Oxygen supply in the kidney depends on the oxygen content in the renal blood flow.Renal blood flow will decrease if the mean arterial pressure is less than the renal optimal autoregulation value.Reduced renal blood flow will decrease glomerular filtration rate (GFR); which in turn, affect tubular oxygen reabsorption and then reduces renal oxygen consumption. 4Renal blood flow reduction is a major cause of renal failure after cardiac surgery. 11Our study showed that with a mean pre-operative hemoglobin levels of 11.4 g/ dL and post operative hemoglobin levels of 8.97 g/ dL, the risk of renal failure was high even when the arterial blood pressure was 76.2 mmHg.Patient with severe hypotension and anemia was more frequently experienced renal failure than hypotension without anemia. 14ean cardiac bypass time in this study was 133.4 minutes.Several studies showed that long cardiac bypass time might cause red blood cell injury which release free hemoglobin along with transferrin, haptoglobin, and scavengers, causing renal tubular damage and death. 15Normal venous oxygen saturation indicates that there is sufficient oxygen supply for the tissue.Low venous oxygen saturation, either due to insufficient oxygen supply or increased oxygen demand, may indicate that the body is under critical condition to keep the balance of oxygen. 5atients with inotropic exposure showed less mortality rate than those without inotropic exposure.This result differed from other observation.In this study, the number of variables and sample size highly affects the real observed values. Conclusion In conclusion, there are several risk factors associated with renal failure requiring continuous renal replacement therapy after cardiac surgery.Pre-operative severe renal dysfunction, NYHA class IV heart failure, critical condition, coronary revascularization surgery, post-operative bleeding and anemia, and venous saturation less than 65% are independent variables that associated with mortality.Based on the principal component analysis, it can be estimated that re-surgery, bleeding and inotropic use may result in post-operative renal failure.These high-risk patients need to be optimally prepared before surgery in hoping that intraand post-operative outcomes could be better and may reduce morbidity and mortality. Table 1 . Characteristics of the study subjects (N=110) Table 2 . Post-operative variables and the outcomes (N=110) Table 3 . Multivariate analyses of all risk factors to predict post-operative mortality.
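The relative risk analysis described in the methods (association between each risk factor and morbidity) can be sketched as follows. The 2×2 counts in the example are purely hypothetical, since the study's cross-tabulations are not reproduced here, and the log-normal confidence interval shown is one common choice rather than necessarily the one the authors used for their manual calculation.

```python
import numpy as np

def relative_risk(exposed_events, exposed_total, unexposed_events, unexposed_total):
    """Relative risk with a 95% CI (log-normal approximation).
    Counts would come from the study's cross-tabulations; the example call
    below uses purely hypothetical numbers."""
    risk_exp = exposed_events / exposed_total
    risk_unexp = unexposed_events / unexposed_total
    rr = risk_exp / risk_unexp
    se_log = np.sqrt(1/exposed_events - 1/exposed_total
                     + 1/unexposed_events - 1/unexposed_total)
    lo, hi = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se_log)
    return rr, (lo, hi)

# Hypothetical 2x2: 20/30 events with the risk factor vs 25/80 without it
rr, ci = relative_risk(20, 30, 25, 80)
print(f"RR = {rr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```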
2018-12-20T22:33:17.716Z
2014-03-25T00:00:00.000
{ "year": 2014, "sha1": "bad59a126aca772127f42b43ad9d89eb721a37bb", "oa_license": "CCBY", "oa_url": "http://ijconline.id/index.php/ijc/article/download/335/321", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "bad59a126aca772127f42b43ad9d89eb721a37bb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
2009263
pes2o/s2orc
v3-fos-license
Solution of Single and Multiobjective Stochastic Inventory Models with Fuzzy Cost Components by Intuitionistic Fuzzy Optimization Technique Paknejad et al.’s model is considered in this paper. Itemwise multiobjective models for both exponential and uniform lead-time demand are taken and the results are compared numerically both in fuzzy optimization and intuitionistic fuzzy optimization techniques. Objective of this paper is to establish that intuitionistic fuzzy optimizaion method is better than usual fuzzy optimization technique as expected annual cost of this inventory model is more minimized in case of intuitionistic fuzzy optimization method. As a single objective stochastic inventory model where the lead-time demand follows normal distribution and with varying defective rate, expected annual cost is also measured. Finally the model considers for fuzzy cost components, which make the model more realistic, and numerical values for uniform, exponential, and normal leadtime demand are compared. Necessary graphical presentations are also given besides numerical illustrations. Introduction In conventional inventory models, uncertainties are treated as randomness and are handled by appealing to probability theory.However, in certain situations uncertainties are due to fuzziness and in these cases the fuzzy set theory, originally introduced by Zadeh 1 , is applicable.Today most of the real-world decision-making problems in economic, technical and environmental ones are multidimensional and multiobjective.It is significant to realize that multiple-objectives are often noncommensurable and conflict with each other in optimization problem.An objective within exact target value is termed as fuzzy goal.So a multiobjective model with fuzzy objectives is more realistic than deterministic of it. In decision making process, first, Bellman and Zadeh 2 introduced fuzzy set theory; Tanaka et al. 3 applied concept of fuzzy sets to decision-making problems to consider the objectives as fuzzy goals over the α-cuts of a fuzzy constraints.Zimmermann 4, 5 showed Paknejad et al.'s 35 model is considered in this paper, as a single objective stochastic inventory model where the lead-time demand follows normal distribution and with varying defective rate, expected annual cost is measured.Itemwise multiobjective models for both exponential and uniform lead time demand are taken and the results are compared numerically both in fuzzy optimization and intuitionistic fuzzy optimization techniques.From our numerical as well as graphical presentations, it is clear that intuitionistic fuzzy optimization obtains better results than fuzzy optimization.Finally the model considers for several fuzzy costs and numerical values for uniform, exponential, and normal lead-time demand are compared.Necessary graphical presentations are also given besides numerical illustrations. Mathematical Model Paknejad et al. 
35 presented a quality adjusted lot-sizing model with stochastic demand and constant lead time and studied the benefits of lower setup cost in the model.We note that the previous literature focuses on the issue of setup cost reduction in which information about lead-time demand, whether constant or stochastic, is assumed completely known.This paper considers Paknejad et al.'s model along with the notations and some assumptions that will be taken into account throughout the paper.Each lot contains a random number of defectives following binomial distribution.After the arrival purchaser examines the entire lot, an order of size Q is placed as soon as the inventory position reaches the reorder point s.The shortages are allowed and completely backordered.Lead-time is constant and probability distribution of lead-time demand is known.Now, we use the following notations: where f x is the density function of lead-time demand, EC Q, s : expected annual cost given that a lot size Q is ordered. Single Objective Stochastic Inventory Model (SOSIM) Thus a quality-adjusted lot-sizing model is formed as MinEC Q, s Setup cost non-defective item holding cost stockout cost defective item holding cost inspecting cost 3.1 It is the stochastic model, which minimizes the expected annual cost. Multiobjective Stochastic Inventory Model (MOSIM) In reality, a managerial problem of a responsible organization involves several conflicting objectives to be achieved simultaneously that refers to a situation on which the DM has no control.For this purpose a latest tool is linear or nonlinear programming problem with multiple conflicting objectives.So the following model may be considered. To solve the problem in 3.1 as an MOSIM, it can be reformulated as 4.1 Multiitem Stochastic Model with Fuzzy Cost Components Stochastic nonlinear programming problem with fuzzy cost components considers as Here K i , π i , h i , h i represent vector of fuzzy parameters involved in the objective function EC.We assume h i , and h i h − i , h 0 i , h i , all of which are triangular fuzzy numbers. Fuzzy Nonlinear Programming (FNLP) Technique to Solve Multiobjective Nonlinear Programming Problem (MONLP) A Multi-Objective Non-Linear Programming MONLP or Vector Minimization problem VMP may be taken in the following form: 6.1 Zimmermann 5 showed that fuzzy programming technique can be used to solve the multiobjective programming problem. To solve MONLP problem, the following steps are used. Step 1. Solve the MONLP of 6.1 as a single objective non-linear programming problem using only one objective at a time and ignoring the others; these solutions are known as ideal solution. Step 2. From the result of Step 1, determine the corresponding values for every objective at each solution derived.With the values of all objectives at each ideal solution, pay-off matrix can be formulated as follows: Here x 1 , x 2 , . . ., x k are the ideal solutions of the objective functions L r and U r are lower and upper bounds of the rth objective functions f r x r 1, 2, . . ., k . Step 3. Using aspiration level of each objective of the MONLP of 6.1 may be written as follows. Find x so as to satisfy Here objective functions of 6.1 are considered as fuzzy constraints.These type of fuzzy constraints can be quantified by eliciting a corresponding membership function: 6.5 Having elicited the membership functions as in 6.5 μ r f r x for r 1, 2, . . 
Having elicited the membership functions μ_r(f_r(x)), r = 1, 2, ..., k, as in (6.5), introduce a general aggregation function μ_D(x), so that the fuzzy multi-objective decision-making problem can be defined as

Max μ_D(x) subject to x ∈ X. (6.7)

Here we adopt the fuzzy decision based on the minimum operator, as in Zimmermann's approach [4]; in this case (6.7) is known as FNLP_M. The problem (6.7), using the membership functions of (6.5) and the min-operator, then reduces to the equivalent crisp problem of maximizing an auxiliary variable α subject to μ_r(f_r(x)) ≥ α, r = 1, 2, ..., k, and x ∈ X.

Stochastic Models with Fuzzy Cost Components

The stochastic nonlinear programming problem with fuzzy objective coefficients is considered as

Min Z = CX, X ≥ 0, (7.1)

where C represents a vector of fuzzy parameters involved in the objective function Z. We assume C_i = (c_i^-, c_i^0, c_i^+), a triangular fuzzy number with its usual membership function. According to Kaufmann and Gupta [36], by combining the three resulting objectives into a single objective function, (7.3) can be reduced to a linear programming problem by the most-likely criterion.

Formulation of Intuitionistic Fuzzy Optimization

When the degree of rejection (non-membership) is defined simultaneously with the degree of acceptance (membership) of the objectives, and when these two degrees are not complementary to each other, intuitionistic fuzzy (IF) sets can be used as a more general tool for describing uncertainty (Figure 1: membership and non-membership functions of the objective goal). To maximize the degree of acceptance of the IF objectives and constraints and to minimize their degree of rejection, we write max μ_i(X) and min ν_i(X), X ∈ R, i = 1, 2, ..., K + n, where μ_i(X) denotes the degree of membership of X in the ith IF set and ν_i(X) denotes the degree of non-membership (rejection) of X from the ith IF set.

An Intuitionistic Fuzzy Approach for Solving MOIP with Linear Membership and Non-membership Functions

To define the membership function of the MOIM problem, let L_k^acc and U_k^acc be the lower and upper bounds of the kth objective function. These values are determined as follows. Calculate the individual minimum value of each objective function as a single-objective problem subject to the given set of constraints. Let X*_1, X*_2, ..., X*_k be the respective optimal solutions for the k different objectives, and evaluate each objective function at all these k optimal solutions. It is assumed here that at least two of these solutions differ, so that the kth objective function has different bounded values. For each objective, find the lower bound (minimum value) L_k^acc and the upper bound (maximum value) U_k^acc. In intuitionistic fuzzy optimization (IFO), however, the degree of rejection (non-membership) and the degree of acceptance (membership) are considered together, so that the sum of both values is less than one. To define the non-membership function of the MOIM problem, let L_k^rej and U_k^rej be the lower and upper bounds of the objective function Z_k(X), where L_k^acc ≤ L_k^rej ≤ U_k^rej ≤ U_k^acc. The linear membership function for the objective Z_k(X) is defined as in (9.1).

Lemma 9.1. In the case of a minimization problem, the lower bound for the non-membership function (rejection) is always greater than that of the membership function (acceptance).

We therefore take new lower and upper bounds for the non-membership function as in (9.2). Following the fuzzy decision of Bellman-Zadeh [2], together with the linear membership and non-membership functions of (9.1), an intuitionistic fuzzy optimization model of the MOIM problem can be written as (9.3). The problem (9.3) can be reduced, following Angelov [29], to the form (9.4).
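Since (9.3) and (9.4) are not reproduced above, the toy sketch below assumes the crisp equivalent commonly attributed to Angelov, namely maximizing α − β subject to α ≤ μ_k(x), β ≥ ν_k(x), α + β ≤ 1 and α ≥ β. The two objectives, the bounds, and all numbers are illustrative stand-ins rather than the inventory cost functions of this paper.

```python
# Toy sketch of an intuitionistic fuzzy optimization crisp equivalent
# (assumed max(alpha - beta) reduction); objectives and bounds are illustrative.
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: (x[0] - 2.0) ** 2          # first conflicting objective (toy)
f2 = lambda x: (x[0] - 4.0) ** 2          # second conflicting objective (toy)

# Acceptance bounds as a pay-off matrix would give for these toy objectives;
# rejection bounds are tightened slightly, as in (9.2)-style constructions.
L_acc, U_acc = np.array([0.0, 0.0]), np.array([4.0, 4.0])
L_rej, U_rej = L_acc + 0.25 * (U_acc - L_acc), U_acc

def mu(k, x):   # linear membership (degree of acceptance) of objective k
    f = (f1, f2)[k](x)
    return np.clip((U_acc[k] - f) / (U_acc[k] - L_acc[k]), 0.0, 1.0)

def nu(k, x):   # linear non-membership (degree of rejection) of objective k
    f = (f1, f2)[k](x)
    return np.clip((f - L_rej[k]) / (U_rej[k] - L_rej[k]), 0.0, 1.0)

# Decision vector z = (x, alpha, beta); maximize alpha - beta.
cons = [{"type": "ineq", "fun": lambda z, k=k: mu(k, z[:1]) - z[1]} for k in range(2)]
cons += [{"type": "ineq", "fun": lambda z, k=k: z[2] - nu(k, z[:1])} for k in range(2)]
cons += [{"type": "ineq", "fun": lambda z: 1.0 - z[1] - z[2]},   # alpha + beta <= 1
         {"type": "ineq", "fun": lambda z: z[1] - z[2]}]         # alpha >= beta
res = minimize(lambda z: -(z[1] - z[2]), x0=[3.0, 0.5, 0.1],
               bounds=[(0.0, 6.0), (0.0, 1.0), (0.0, 1.0)],
               constraints=cons, method="SLSQP")
print(res.x)  # compromise x, attained alpha and beta
```

The solution of the MOIM problem is then summarized in the following steps.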
Step 1. Pick the first objective function and solve it as a single-objective problem subject to the constraints; continue the process K times for the K different objective functions. If all the solutions X*_1, X*_2, ..., X*_K are the same, then one of them is the optimal compromise solution; otherwise go to Step 2. In practice this rarely happens, because the objective functions conflict. The intuitionistic fuzzy goals then take the form (9.5).

Step 2. To build the membership functions, goals and tolerances should be determined first. Using the ideal solutions obtained in Step 1, we find the values of all the objective functions at each ideal solution and construct the pay-off matrix.

Step 3. From Step 2, we find the upper and lower bounds of each objective for the degrees of acceptance and rejection corresponding to the set of solutions; for linear membership functions these take the form (9.8).

Step 4. Construct the fuzzy programming problem of (9.3) and find its equivalent LP problem of (9.4).

Step 5. Solve (9.4) using an appropriate mathematical programming algorithm to obtain an optimal solution, and evaluate the K objective functions at this optimal compromise solution.

Few Stochastic Models

Case 1 (demand follows a uniform distribution). We assume that the lead-time demand for the period for the ith item is a random variable that follows a uniform distribution: if the decision maker feels that demand values for item i below a_i or above b_i are highly unlikely and values between a_i and b_i are equally likely, then the probability density function is f_i(x) = 1/(b_i − a_i) for a_i ≤ x ≤ b_i and 0 otherwise. Here b_i(s_i) are the expected numbers of shortages per cycle, and these values of b_i(s_i) affect all the desired models.

Case 2 (demand follows an exponential distribution). We assume that the lead-time demand for the period for the ith item is a random variable that follows an exponential distribution, with probability density function f_i(x) = λ_i e^{−λ_i x} for x ≥ 0.

Case 3 (demand follows a normal distribution). We assume that the lead-time demand for the period for the ith item is a random variable that follows a normal distribution, with probability density function f_i(x) = (1/(σ_i √(2π))) exp(−(x − μ_i)²/(2σ_i²)). Again, b_i(s_i) are the expected numbers of shortages per cycle, these values affect all the desired models, and Φ(x_i) represents the area under the standard normal curve from −∞ to x_i.

Solution of the Model of (3.1)

The lead-time demand follows a normal distribution, so b_i(s_i), the expected demand short at the end of the cycle, takes the value given by (10.2). For the single-item model, we consider the following data (all cost-related parameters are measured in $): D = 2750; K = 10; h = 0.25; ν = 0.02; π = 1; h′ = 0.15 (11.1). Here the lead-time demand follows a normal distribution with mean μ = 20 and standard deviation σ = 2. In Table 1, a study of the expected annual cost EC(Q, s), the lot size Q, and the reorder point s is given for different defective rates θ. We conclude from Table 1 as well as Figure 2 that the order quantity and the expected annual cost both increase as θ increases.
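To make the expected-shortage terms b_i(s_i) used above concrete, the following sketch evaluates E[max(X − s, 0)] for the three lead-time-demand cases. The closed forms are the standard ones for these distributions and are assumed here, since the paper's equations (10.2), (10.4), and (10.5) are not reproduced above; the reorder point in the example is hypothetical.

```python
# Sketch (assumed standard closed forms, not copied from the paper) of the
# expected shortage per cycle b(s) = E[max(X - s, 0)] for the three
# lead-time-demand cases discussed above.
from math import exp
from scipy.stats import norm

def shortage_uniform(s, a, b):
    """Uniform demand on [a, b]; s assumed to lie inside [a, b]."""
    return (b - s) ** 2 / (2.0 * (b - a))

def shortage_exponential(s, lam):
    """Exponential demand with rate lam (mean 1/lam)."""
    return exp(-lam * s) / lam

def shortage_normal(s, mu, sigma):
    """Normal demand; standard normal loss function evaluated at z = (s - mu)/sigma."""
    z = (s - mu) / sigma
    return sigma * (norm.pdf(z) - z * (1.0 - norm.cdf(z)))

# Example with the normal-demand data quoted above (mu = 20, sigma = 2)
# and a hypothetical reorder point s = 22.
print(shortage_normal(22.0, 20.0, 2.0))
```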
Solution of the Model of (4.1)

For the MOSIM of (4.1), we use the method described in Section 6 to solve it by the fuzzy optimization technique and the methods of Sections 8 and 9 to solve it by the intuitionistic fuzzy optimization technique, with the following data. In this case our objective is to analyze the expected annual cost of different stochastic models when their cost components are not deterministic but triangular fuzzy numbers, which makes the model more practical and realistic. To solve the model of (5.1) we use the method described in Section 7. The lead-time demand follows the uniform, exponential, and normal distributions, respectively, and thus the expected demand short at the end of the cycle takes the values given by (10.2), (10.4), and (10.5), respectively. In the case of uniform demand, all cost-related parameters are measured in $.

Conclusion

Paknejad et al.'s model is considered in this paper as a single-objective stochastic inventory model in which the lead-time demand follows a normal distribution and the defective rate varies, and its expected annual cost is measured. Our objective is to minimize the expected annual cost. Itemwise multiobjective models for both exponential and uniform lead-time demand are taken, and the results are compared numerically under both fuzzy optimization and intuitionistic fuzzy optimization. From our numerical as well as graphical presentations, it is clear that intuitionistic fuzzy optimization obtains better results than fuzzy optimization; the expected annual cost is minimized further under intuitionistic fuzzy optimization than under the usual fuzzy optimization technique. Finally, the model is considered with several fuzzy costs, and numerical values for uniform, exponential, and normal lead-time demand are compared. Necessary graphical presentations are given alongside the numerical illustrations. This model can also be extended by taking the lead-time demand as a fuzzy random variable.

Table 1: Variation of EC and s with θ. The lead-time demand follows a uniform distribution, and thus b_i(s_i), the expected demand short at the end of the cycle, takes the value given by (10.2).

Table 2: Comparison of solutions of FO and IFO (uniform). From Table 2 and Figure 3 we conclude that intuitionistic fuzzy optimization (IFO) obtains more optimized values of EC_1 and EC_2 than fuzzy optimization (FO). Also, the solution obtained by IFO, (516.8335, 563.7091), is closer to the ideal solution (506.0453, 562.4362) than the solution obtained by FO. The lead-time demand follows an exponential distribution, and thus b_i(s_i), the expected demand short at the end of the cycle, takes the value given by (10.4).

Table 3: Comparison of solutions of FO and IFO (exponential).
From Table 3 and Figure 4 we conclude that intuitionistic fuzzy optimization (IFO) obtains more optimized values of EC_1 and EC_2 than fuzzy optimization (FO). Also, the solution obtained by IFO, (377.5540, 412.6827), is closer to the ideal solution (378.0060, 382.4234) than the solution obtained by FO. The expected annual costs EC_1 and EC_2 are minimized further under IFO than under FO because, according to Section 6, only the membership functions μ(EC_1) and μ(EC_2) are maximized when the FO technique is applied, and the degree of acceptance of the objectives is measured by α* alone. When the IFO technique is applied, according to Sections 8 and 9, not only are the membership functions μ(EC_1) and μ(EC_2) maximized but the non-membership functions ν(EC_1) and ν(EC_2) are also minimized, so that the degree of acceptance of the IF objectives is measured by α* and the degree of rejection by β*. As a result, IFO obtains more optimized values than FO.

Table 4: Expected annual costs of different stochastic models.
Human-like intelligent automatic treatment planning of head and neck cancer radiation therapy

Abstract

Objective. Automatic treatment planning of radiation therapy (RT) is desired to ensure plan quality, improve planning efficiency, and reduce human errors. We have proposed an Intelligent Automatic Treatment Planning framework with a virtual treatment planner (VTP), an artificial intelligence robot built using deep reinforcement learning, autonomously operating a treatment planning system (TPS). This study extends our previous successes in relatively simple prostate cancer RT planning to head-and-neck (H&N) cancer, a more challenging context even for human planners due to multiple prescription levels, proximity of targets to critical organs, and tight dosimetric constraints.

Approach. We integrated VTP with a real clinical TPS to establish a fully automated planning workflow guided by VTP. This integration allowed direct model training and evaluation using the clinical TPS. We designed the VTP network structure to approach the decision-making process in RT planning in a hierarchical manner that mirrors human planners. The VTP network was trained via the Q-learning framework. To assess the effectiveness of VTP, we conducted a prospective evaluation in the 2023 Planning Challenge organized by the American Association of Medical Dosimetrists (AAMD). We extended our evaluation to include 20 clinical H&N cancer patients, comparing the plans generated by VTP against their clinical plans.

Main results. In the prospective evaluation for the AAMD Planning Challenge, VTP achieved a plan score of 139.08 in the initial phase evaluating plan quality, and 15 min of planning time with a first-place ranking in the adaptive phase competing for planning efficiency while meeting all plan quality requirements. For clinical cases, VTP-generated plans achieved an average score of 125.33 ± 11.12, which outperformed the corresponding clinical plans with an average score of 117.76 ± 13.56.

Significance. We successfully integrated VTP with the clinical TPS to achieve a fully automated treatment planning workflow. The compelling performance of VTP demonstrated its potential in automating H&N cancer RT planning.
Introduction Treatment planning plays an increasingly important role in modern cancer radiation therapy (RT).In advanced treatment techniques, such as intensity-modulated RT (IMRT) and volumetric modulated arc therapy (VMAT), thousands of parameters are determined via a treatment planning process to control the operation of a medical linear accelerator (LINAC), such as its gantry angles, positions of multi-leaf collimators, and delivered monitor units (MUs), to precisely guide the LINAC to deliver a highly sculptured 3D dose distribution conformal to the tumor target, while maximally sparing radiation dose to normal organs.This step is often accomplished by solving an inverse optimization problem using a treatment planning system (TPS).Specifically, a treatment planner operates the TPS to define a set of dosimetric constraints for the optimization problem.The TPS then launches its optimization engine to determine a solution for the constraints.Based on the solution, the planner manually refines the constraint parameters to steer the solution towards a plan meeting the desired plan quality.It is well known that this trial-and-error treatment planning process causes plan quality variations that highly depend on factors such as the available time for planning and the planner's experience (Nelms et al 2012, Hernandez et al 2020).Current human-centered treatment planning also poses a laborious and time-consuming workflow, presenting challenges in time-sensitive scenarios such as adaptive radiotherapy that requires frequent and rapid planning (Yan et al 1997, Li et al 2013, Brock 2019, Glide-Hurst et al 2021).To mitigate these challenges, automatic planning strategies have been proposed as a solution (Tol et al 2015, Hussein et al 2018). Over the years, extensive research has been conducted to develop techniques that can automate the treatment planning process.Novel approaches have been employed, such as greedy algorithms (Xing et al 1999, Wu andZhu 2001), inverse optimization (Babier et al 2018), knowledge-based planning (Lee et al 2013, Ge andWu 2019), and multi-criteria optimization (Craft et al 2012, Zarepisheh et al 2014, Breedveld et al 2019).Recently, deep learning (Shan et al 2020, Shen et al 2020b) has also demonstrated its power in this domain, for instance by predicting patient-specific best achievable dose distribution to guide treatment planning (Chen et al 2019, Nguyen et al 2019, 2020). 
One particular approach for automated treatment planning is based on reinforcement learning (RL), or its modern version, deep RL (DRL) that couples RL with the recent advances in deep learning.In RL, a computer agent is trained to be able to autonomously make decisions in response to the observed state of the environment.Specific to the treatment planning context, a virtual treatment planner (VTP) can be trained to operate the TPS, in lieu of a human planner, to define and refine dosimetric constraints and generate high-quality plans (Shen et al 2019, Hrinivich and Lee 2020, Zhang et al 2021, Sprouts et al 2022).Within this framework, we have successfully developed the Intelligent Automatic Treatment Planning (IATP) framework.Our systematic work in this domain has not only demonstrated its potential in external beam RT and brachytherapy (Shen et al 2019(Shen et al , 2020a) ) but also overcome several practical challenges to support clinical applications, such as building large models for sequential decision-making (Shen et al 2021b) and enhancing training efficiency by incorporating prior information (Shen et al 2021a).In recent work, we successfully translated the development and evaluated its performance in real clinical cases of prostate cancer RT, demonstrating the achievement of treatment planning capability of VTP comparable to human planners with a slightly higher mean plan quality score (Gao et al 2023).In a retrospective, but objective evaluation context of the prostate stereotactic body RT (SBRT) Planning Challenge organized by the American Association of Medical Dosimetrists (AAMD) in 2016 (American Association of Medical Dosimetrists 2023), our VTP achieved performance at third place compared to human planners who participated in this challenge with IMRT technique. Along with these successes, we continued our development of IATP by extending it to head-and-neck (H&N) cancer RT treatment planning, which is the focus of this study.Specifically, we successfully developed a VTP to accomplish a fully automated planning workflow for H&N cancer RT by having it operate a commercial TPS in lieu of a human planner.The VTP was tailored for the 2023 AAMD Planning Challenge that focused on not only generating a high-quality treatment plan for an H&N cancer case but also on the most efficient planning process to address the need for online adaptive RT.Our VTP participated in the Planning Challenge and achieved the first position in planning efficiency while meeting all dosimetric requirements.This paper will also report a comprehensive evaluation of clinical cases, where we compared the quality of plans generated by the VTP against those generated by experienced human planners. 
VTP VTP is a neural network trained to operate the TPS, in lieu of a human planner, to define and refine dosimetric constraints and generate high-quality plans.Our group previously introduced a hierarchical VTP (HieVTP) to address the treatment planning task, emulating the hierarchical decision-making used by human planners (Shen et al 2021b).HieVTP was structured into three sub-networks: Structure-Net, Parameter-Net, and Action-Net, serving the roles of making decisions at various levels-structure, parameter, and adjustment action.They were sequentially applied each time HieVTP interacted with the TPS.Specifically, Structure-Net utilized the dose-volume histograms (DVHs) of the plan to identify structures in need of improvement.Subsequently, Parameter-Net selected a treatment planning parameter (TPP, e.g.priority, dose limit, or volume constraint) that had the most impact on the dose of the structure, considering all associated TPPs.Finally, Action-Net determined the specific adjustment for this TPP, which entailed increasing or decreasing its value.In this study, for the purpose of improving training efficiency, we reduced the complexity of this HieVTP in two aspects.The first one was combining Structure-Net with Parameter-Net.The Parameter-Net decided on one of the eight TPPs to adjust, and subsequently, the Action-Net determined its adjustment direction. Another aspect was to input the VTP with a vector containing plan quality scores, as opposed to the DVHs in our previous studies, because these scores directly indicate plan quality, making it straightforward for the VTP to extract relevant information to make decisions for plan quality improvement.The list of scores followed the ProKnow scoring system (ProKnow Systems, Elekta, Sanford, FL, USA), as we applied our VTP to the 2023 AAMD Planning Challenge that employed this metric for plan quality (American Association of Medical Dosimetrists 2023).The ProKnow scoring system is a commercial software product designed to offer a fair platform for evaluating plan quality.It is a sum of a number of scores, each representing plan quality from a certain aspect (Nelms et al 2012).For the context of H&N planning, the evaluation criteria on plan quality are listed in the supplement material (table S1). 
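For orientation, a minimal PyTorch-style sketch of this two-stage decision process is given below. The input size, layer widths, the mapping from the selected TPP to its score entry, and the interpretation of the two actions are illustrative assumptions rather than the exact configuration used in this study.

```python
# Illustrative sketch of the two-stage VTP decision network described above:
# Parameter-Net picks one of the eight adjustable TPPs from the plan quality
# score vector, and Action-Net picks an adjustment direction for that TPP.
import torch
import torch.nn as nn

N_SCORES = 21   # plan-quality scores fed to the networks (assumed size)
N_TPPS = 8      # adjustable treatment planning parameters
N_ACTIONS = 2   # increase or decrease the selected TPP
TPP_TO_SCORE = list(range(N_TPPS))   # hypothetical mapping from TPP to its score entry

def mlp(in_dim, out_dim, hidden=64):
    """Three fully connected layers with LeakyReLU activations (widths assumed)."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.LeakyReLU(0.1),
        nn.Linear(hidden, hidden), nn.LeakyReLU(0.1),
        nn.Linear(hidden, out_dim),
    )

parameter_net = mlp(N_SCORES, N_TPPS)        # predicted improvement per TPP choice
action_net = mlp(2 * N_SCORES, N_ACTIONS)    # predicted improvement per adjustment

def decide(scores: torch.Tensor):
    """Greedy two-stage decision for a single plan state of shape (1, N_SCORES)."""
    tpp_idx = int(parameter_net(scores).argmax(dim=1))   # which TPP to adjust
    keep = TPP_TO_SCORE[tpp_idx]                         # its (assumed) score entry
    coded = torch.zeros_like(scores)                     # structure-coded vector:
    coded[0, keep] = scores[0, keep]                     # keep only the chosen score
    action = int(action_net(torch.cat([scores, coded], dim=1)).argmax(dim=1))
    return tpp_idx, action

print(decide(torch.rand(1, N_SCORES)))
```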
With these considerations in mind, the input to the VTP consisted of 21 values, corresponding to various dosimetric criteria.Both sub-networks process data through a sequence of three fully connected layers, with each layer followed by a Leaky ReLU action function with a slope of 0.1.Following the final activation function, Parameter-Net predicted the improvement in plan quality associated with adjusting the TPPs of each structure.This prediction informed the creation of a structure-coded vector, which served as an indicator for the selected structure.The structure-coded vector maintained the specific score of the chosen structure while setting all other scores to zero, preserving the same dimension as the original input.Subsequently, this vector was concatenated with the original 21 ProKnow scores.The combined vector was then passed into the Action-Net to predict the improvement in plan quality corresponding to each parameter adjustment action.The detailed architecture of the VTP is presented in figure 1.Each parameter was allowed for two adjustment directions (increasing or decreasing).The adjustment magnitude for TPPs was empirically selected.We opted to increase the priority value by 6 if there was an improvement over an empirically chosen criterion of 20% in the plan score during the last TPP adjustment or reduce the adjustment size to 3 if the plan score change was less than 20%, deemed minor. Auto-planning workflow Figure 2 outlines the automated workflow of VTP for H&N cancer RT treatment planning within the Eclipse TPS (Varian Medical System, Palo Alto, CA, USA).The treatment planning process by the VTP was initiated through the Eclipse Scripting Application Programming Interface (ESAPI).It first autonomously configured the treatment planning initialization process, including tasks such as specifying the isocenter, defining beam fields, prescribing the dose, and creating auxiliary structures that aid in shaping the dose distribution.Following this, VTP introduced TPPs, such as dosimetric objectives for each structure to formulate the optimization problem within the TPS.Plan optimization and dose calculations were then carried out in the background.Once the optimization phase produced a plan, VTP collected the plan data and evaluated the plan quality.In the case that the stopping criteria were not met, VTP proceeded to determine adjustments to the TPPs for the subsequent planning iteration and start a new optimization cycle.This iterative planning process continues until a high-quality plan is achieved. RL The training process of the VTP followed the Q-learning framework (Watkins and Dayan 1992).We present it briefly here for completeness.Readers interested in details can refer to our previous publications (Shen et al 2021b). In Q-learning, the training process determines the optimal action-value function where Q * (s, a) indicates the expected cumulative rewards associated with taking action a (the decision related to TPP adjustment policy) in state s (represented by the vector of ProKnow scores).It quantifies the desirability of action a in the context of state s.s l and a l represent the state and action at the lth TPP adjustment step, respectively.r l denotes the reward obtained at the lth step, which is determined by a predefined reward function related to clinical objectives, such as the ProKnow scoring system.A positive reward is achieved when the action applied to the state leads to improved plan quality with the updated TPP. 
The term π = P(a|s) refers to the policy of TPP adjustments, signifying the selection of action a based on the observed state s. In DRL, the action-value function Q*(s, a) is represented by deep neural networks as described in the previous subsection. It is obtained through an iterative training process based on the VTP's experiences with TPP adjustments and interactions with the TPS. The primary goal of training is to determine the optimal policy for VTP to adjust the TPP of the selected structure so as to maximize the expected cumulative reward. The update of the Q-learning action-value function can be expressed in the standard form

Q(s_l, a_l) ← Q(s_l, a_l) + α [ r_l + max_a Q(s_{l+1}, a) − Q(s_l, a_l) ], (2)

where α is the learning rate that controls the weight given to the new information obtained from the current experience.

Reward function. A prerequisite for VTP is to quantitatively define H&N cancer plan quality. In this study, we incorporated the ProKnow scoring system, which served as the plan evaluation criteria during the 2023 AAMD Plan Study. The ProKnow scoring system includes a series of dosimetric metrics of the structures of interest (American Association of Medical Dosimetrists 2023). The detailed computation methods of all metrics follow the planning guidance provided by AAMD, which can be found in supplementary material table S1. The final score ψ(d), as a function of the dose distribution d of a plan, was determined by summing the scores across all these metrics, resulting in a plan score that ranges from 0 to 150, where a higher score corresponds to superior plan quality. Based on the plan quality score, the reward function to guide the learning process in DRL was naturally defined as the change in plan quality for a TPP adjustment, namely r = ψ(d′) − ψ(d), where d and d′ are the dose distributions before and after the TPP adjustment. A positive reward signifies an enhancement in plan quality, whereas a negative reward indicates the opposite.

DRL training strategy. Our previous work established the basis for training with hierarchical DRL by incorporating a multi-level decision-making hierarchy that enabled VTP to learn and strategize across multiple levels. We employed the same training strategy, with modifications to adapt to the two-level decision-making process of VTP in this study. Let us denote Parameter-Net as S(s, p; θ*_S) and Action-Net as A(s, p, a; θ*_A). Here, θ*_S and θ*_A represent the optimal network parameters to be determined from the training process, s is the state of the treatment plan (21 ProKnow scores) input to the networks, p is the TPP to adjust, and a is the action adjusting the TPP. Once S(s, p; θ*_S) is known, the policy for deciding the TPP p* to adjust for the plan state s is the one that maximizes it, i.e. p* = max_p S(s, p; θ*_S). After that, based on the selected TPP, the Action-Net decides on one of two actions a* by selecting the one that maximizes the function, i.e.
a * = max a A(s, p * , a; θ * A ).In terms of training, we alternatively trained the two networks S(s, p; θ S ) and A(s, p, a; θ A ), each time fixing one while training the other.When S(s, p; θ * S ) is fixed, A(s, p, a; θ A ) can be updated with the standard Q-learning algorithm to solve the problem With the updated A(s, p, a; θ A ) fixed, S(s, p; θ S ) can subsequently be trained as a conventional supervised learning problem by solving We trained the VTP based on the single patient case in the 2023 AAMD Planning Challenge.Throughout the training process, VTP engaged with the Eclipse TPS for treatment planning and collecting data for training.At each iteration of TPP adjustment, we incorporated an ϵ-greedy algorithm to introduce stochasticity.This involved VTP making a random selection among all possible actions with a probability of ϵ, and a probability of (1 − ϵ) to choose actions based on VTP's learned strategies.The value of ϵ gradually diminished at a rate defined as ϵ = max(0.1,0.99/EPI), where EPI is the index of episodes.The ϵ-greedy strategy allowed VTP to explore diverse state-action pairs without biasing towards prior experiences.In each training iteration, a tuple (s, a, r, s ′ ), including the state s, action a, reward r, and the next state s ′ , was collected and stored in a pool.Experience replay strategy was employed to sample data from this pool to update network parameters.Random sampling ensured uncorrelated experiences for updating Q-values, reduced the risk of overfitting, and encouraged exploration, allowing VTP to learn from both successes and failures and refine its strategies over time. It is worth noting that, in our previous developments on prostate SBRT (Shen et al 2021b), we built the VTP trained with an in-house TPS with an inverse planning optimization engine similar to Eclipse TPS.This approach was chosen due to the inefficiencies associated with direct VTP interaction with Eclipse TPS during the training process.We successfully trained the VTP under this setup, which was able to autonomously create high-quality treatment plans using the Eclipse TPS.However, when it comes to the H&N cancer case, we observed the limitation of this approach due to slight dose disparities between the two TPSs.To address these issues, in the current study, we seamlessly integrate VTP with Eclipse TPS, facilitating direct training using it.As such, the state s and s ′ , as well as the reward r, were all computed based on dose distribution computed by the Eclipse dose engine, mitigating the issue of mismatch in dose calculations between the TPS used in training and application of VTP. The VTP was constructed using Python with Pytorch on a desktop workstation with two Nvidia Quadro RTX 5000 GPUs on a computer equipped with a CPU of 26 cores and 64 GB of host memory.We interfaced the VTP with Eclipse TPS v16.0 using ESAPI to enable the fully automated planning workflow and collection of training data. 
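A compact sketch of the training ingredients described above (the ϵ-greedy schedule ϵ = max(0.1, 0.99/EPI), the experience-replay pool of (s, a, r, s′) tuples, and the reward defined as the change in plan score) might look as follows. The interaction with the TPS is not shown, the reward uses the sum of the score vector as a stand-in for the ProKnow total, and the buffer and batch sizes are illustrative assumptions.

```python
# Sketch of the DRL training ingredients described above; hyperparameters and
# the reward stand-in are assumptions for illustration only.
import random
from collections import deque
import torch

replay_pool = deque(maxlen=10_000)          # stores (s, a, r, s') tuples

def epsilon(episode_index):
    """Exploration rate schedule quoted above: eps = max(0.1, 0.99 / EPI)."""
    return max(0.1, 0.99 / max(episode_index, 1))

def select_action(q_values: torch.Tensor, episode_index: int) -> int:
    """Epsilon-greedy choice among the available TPP-adjustment actions."""
    if random.random() < epsilon(episode_index):
        return random.randrange(q_values.numel())       # explore
    return int(q_values.argmax())                        # exploit

def store_transition(state, action, new_state):
    """Reward is the change in plan quality, r = psi(d') - psi(d)."""
    reward = float(new_state.sum() - state.sum())        # stand-in for the ProKnow total
    replay_pool.append((state, action, reward, new_state))

def sample_batch(batch_size=32):
    """Uncorrelated experiences for the Q-value update (experience replay)."""
    return random.sample(list(replay_pool), min(batch_size, len(replay_pool)))

# Example: one (fake) transition with random score vectors
s, s2 = torch.rand(21), torch.rand(21)
store_transition(s, select_action(torch.rand(8), episode_index=1), s2)
print(len(replay_pool), sample_batch(4))
```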
Configuration of plan optimization problem Because the auto-planning workflow directly interacts with the Eclipse TPS, the plan optimization problem was indeed the one in the Eclipse TPS.To define the objective function, a series of dosimetric constraints were specified on DVHs, each including a weighting factor, the dose of the constraint, type (dose volume constraint or mean dose), as well as the direction (upper or lower to penalize overdose or underdose).Treatment targets, such as the planning target volumes (PTVs), have both upper and lower objectives in planning to ensure both dose coverage and dose homogeneity.In contrast, organs at risk (OARs) typically have upper dose objectives, with the primary aim of minimizing radiation exposure to these critical structures. In H&N cancer treatment planning, challenges arose due to significant overlap between targets and OARs.To achieve a sophisticated dose distribution, human planners often create auxiliary structures by cropping the regions of OARs that overlap with targets and introducing additional planning objectives for these auxiliary structures.Our automated treatment planning workflow employed this approach. As such, we included 26 planning structures, including targets, OARs, and their auxiliary structures.The detailed list of planning structures is presented in table 1.The contours of PTVs, clinical target volumes (CTVs), and OARs were segmented by experienced radiation oncologists and dosimetrists.The target volumes were assigned prescription doses at four levels: 63 Gy, 60 Gy, 57 Gy, and 54 Gy, as indicated in their names.To facilitate effective planning, auxiliary structures were created.These included 'L Parotid opti ' , 'R Parotid opti ' , 'Oral Cavity opti ' and 'Constrictor opti ' , all created by excluding the overlap between each organ and the PTVs.This strategy was implemented to resolve conflicting planning objectives between targets and OARs.The 'Cord + 5mm' structure was a 5 mm expansion around the spinal cord, designed to limit the maximum dose received by the cord.Additionally, 'PTV54 Push ' and 'PTV57 Push ' were segmented as 3 mm inner rings of PTV54 and PTV57, respectively, aiming to ensure sufficient dose in the peripheral regions of these target volumes.Control regions named 'Avoidance1' , 'Avoidance2' , and 'Avoidance3' were used to avoid dose distribution in the neck, oral cavity, and trachea.The design of these structures was determined by experienced dosimetrists.All these auxiliary structures were efficiently and automatically generated in Eclipse TPS by the VTP via ESAPI in approximately 5 seconds. 
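As a rough illustration of how planning objectives of the kind described above can be represented programmatically, the sketch below encodes a few entries in the style of table 1. The structure names echo those mentioned in the text, but the dose, volume, and priority values are placeholders and do not reproduce the study's actual objectives.

```python
# Illustrative data model for inverse-planning objectives (structure, type,
# direction, dose, volume, priority); the numeric values are placeholders.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PlanningObjective:
    structure: str
    kind: str                 # "dose-volume" or "mean"
    direction: str            # "upper" (penalize overdose) or "lower" (underdose)
    dose_gy: float
    volume_pct: Optional[float] = None   # only for dose-volume objectives
    priority: float = 100.0              # weighting factor; adjustable by VTP

objectives = [
    PlanningObjective("PTV63", "dose-volume", "lower", dose_gy=63.0, volume_pct=95.0),
    PlanningObjective("PTV63", "dose-volume", "upper", dose_gy=66.0, volume_pct=0.0),
    PlanningObjective("Cord + 5mm", "dose-volume", "upper", dose_gy=45.0, volume_pct=0.0),
    PlanningObjective("L Parotid opti", "mean", "upper", dose_gy=26.0),
]

for obj in objectives:
    print(obj)
```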
As a result, we established 47 planning objectives, including 20 lower dose-volume constraints, 18 upper dose-volume constraints, and 9 mean dose constraints.Each objective i was specified by parameters of dose limit τ i , volume V i if needed for dose-volume constraints, and the weighting factor λ i reflecting the priorities of the respective structure in the plan.This amounted to a total of 141 TPPs.It was computationally challenging for VTP to learn the policy for adjusting all TPPs, and neither was this necessary, as a number of these TPPs can be determined a priori based on human experience.As such, we held constant TPP values of 36 planning objectives designed by the planning expertise of a dosimetrist.For the remaining 11 planning objectives, the dose, volume, and type were predefined, and VTP was trained to adjust their priorities.Among the 11 adjustable TPPs, 3 TPPs of the auxiliary structures shared the same priorities as their primary planning structures, specifically parotid glands and pharyngeal constrictor.In total, the 8 TPPs define the optimization problem, which will be adjusted by the VTP.A comprehensive list of all the planning structures and objectives is summarized in table 1. 2023 AAMD planning challenge case Our VTP participated in the 2023 AAMD Planning Challenge.Organized by the AAMD, a two-phase plan study was undertaken in 2023 for medical dosimetrists to develop an adaptive RT treatment plan for a patient with H&N cancer (American Association of Medical Dosimetrists 2023).This patient, a 34-year-old male, was diagnosed with poorly differentiated squamous cell carcinoma of the left retromolar trigone and positive lymph nodes.The patient's treatment journey began with surgery, followed by post-operative concurrent chemo/RT. During the RT course, the patient received external beam treatment with a 6 MV photon beam, and the prescription dose was 210 cGy per fraction, totaling 30 fractions.The contours were created by the physicians and dosimetrists at Mary Bird Perkins Cancer Center.The treatment plan included four levels for different PTVs at 63 Gy, 60 Gy, 57 Gy, and 54 Gy with the aim of achieving at least 95% coverage for PTV 63 Gy and PTV 60 Gy, and at least 90% coverage for PTV 57 Gy and PTV 54 Gy.This patient presents a substantial overlap between targets and OARs, which introduces complexities in the planning process.Specifically, PTV 54 Gy intersects with the oral cavity, pharyngeal constrictor, both parotid glands and esophagus.PTV 57 Gy exhibits overlap with the pharyngeal constrictor, left parotid gland, and brachial plexus.PTV 60 Gy overlaps with the oral cavity, left parotid gland, and pharyngeal constrictor.Finally, PTV 63 Gy shows an overlap with the pharyngeal constrictor and brachial plexus. 
Notably, as shown in figure 3, by Week 4 of the RT course, the patient experienced a significant weight loss of 16.5%, which led to an increase of 2 cm in the source-to-surface distance on the patient's left side and 0.8-1 cm on the patient's right side.In response to this change, the RT team decided to implement an adaptive treatment plan, which necessitated acquiring a new CT scan and adjusting the PTVs.Hence, the purpose of the Planning Challenge was twofold.The first was to develop a high-quality plan for the initial anatomy, and the second was to, based on this knowledge, to develop an adaptive plan for the changed anatomy to meet dosimetric requirements in an efficient manner.The plan quality in this Challenge was evaluated using the ProKnow scoring system presented in section 2.2.2.We assessed the planning performance of the developed VTP on the both initial and adaptive phases as part of the AAMD Planning Challenge.As we were participating in the Planning Challenge, we predefined a set of reasonable TPPs for VTP as the initialization of the TPP adjustment process.For the adaptive plan, we used the TPPs of the final plan in the initial phase as the starting point.Planning time was measured as the active time spent on planning operations from beam placement to plan completion.To visualize these changes, we superimposed the initial and adaptive planning CT scans and showed four axial levels here, with the red region highlighting significant anatomical changes. Comprehensive evaluations In order to further gauge the VTP's applicability and generalizability to managing clinically realistic cases, we extended our testing to include 20 clinical cases for patients who had undergone H&N cancer RT treatment at our institution between 2018 and 2023.Patient characteristics are listed in supplementary material table S2.The patient cohort was chosen primarily based on the prescription dose of 63 Gy in 30 fractions, following the setting in the AAMD planning challenge case.We did not consider primary tumor location or extension in patient selection.For each case, we used VTP to generate a plan.Unlike the AAMD case that included four dose levels (63, 60, 57, and 54Gy), patients at our institution are typically treated with 2-3 target dose levels.Other than the highest dose (63Gy) to the primary tumor, target dose levels may be similar to, but not exactly match, the AAMD case.For a target with different prescription dose level from the AAMD case, the fixed optimization parameters in table 1 with the closest prescription dose were chosen.For a fair comparison, each plan generated by VTP was normalized to achieve the same PTV coverage as their respective clinical plans.In our analysis, we compared the ProKnow scores between the plans generated by VTP and the scores from the clinical plans.Our assessment of plan scores and dosimetric metrics utilized a non-inferiority test with the null hypothesis that VTP is inferior to human planners at a significance level of p * = 0.05.We also assessed the planning time of VTP, measuring its active operation duration from beam placement to plan completion.Additionally, we offered insights into the decision-making capabilities of VTP by contrasting the plan quality in its initial, intermediate, and final steps along the treatment planning process, thus highlighting the enhancements made during the process. Initial phase VTP successfully generated a high-quality H&N cancer VMAT plan for the initial phase of the 2023 AAMD Plan Study, utilizing Eclipse TPS. 
Figure 4(a) presents the dose distributions across six axial slices from the top to the bottom of the targets and figure 4(b) displays the DVHs of the resulting plan.Remarkably, the VTP effectively balanced PTV and OARs during the plan optimization process as the plan maintained PTV coverage and spared the dose to OARs.For a comprehensive overview of the final VTP-generated plan, we presented the ProKnow scores in figure 4(c).The VTP-generated plan achieved full compliance with all clinical metrics outlined with a cumulative score of 139.08 out of 150.Our VTP plan was ranked the 21 st among 149 human-submitted plans with scores 127.32 ± 13.73 (American Association of Medical Dosimetrists 2023).This achievement demonstrated VTP's competence in treatment planning, particularly considering the complexity of the H&N cancer case, where human planners can employ more auxiliary structures to refine dose distribution. Adaptive phase During the RT treatment course, the patient's substantial weight loss necessitated a reevaluation and adaptation of the treatment plan to accommodate the anatomical changes.Figure 5 presents dose distributions in six axial and DVHs of the adaptive plan for the changed patient anatomy.Similar to the process generating the initial plan, the VTP demonstrated remarkable proficiency in preserving adequate coverage of PTVs while effectively minimizing radiation dose to OARs.The primary goal of the adaptive phase of this Challenge is to expedite the patient's treatment with the new plan while striving to achieve the dosimetric goals with the utmost precision.The specific minimally required dosimetric metrics are detailed in figure 5(c).Notably, VTP generated a plan that fulfilled all the specified requirements within 15 minutes Clinical cases 3.2.1. Decision-making behaviors of VTP We first illustrate VTP's decision-making behaviors of operating the Eclipse TPS in a representative example case.In figure 6, we present the details of the planning process driven by the VTP to operate the TPS to improve the plan scores from 132 to 139.41 at various planning steps.Through strategic adjustments of TPPs, VTP effectively improved the scores of various metrics.For instance, by emphasizing the importance of the left parotid gland (TPP 2 ), right parotid gland (TPP 4 ) and pharyngeal constrictor (TPP 7 ) in the plan optimization, VTP enhanced the scores of V ParotidL [30Gy](%), D ParotidL [mean](Gy), V ParotidR [30Gy](%), and D Constrictors [mean](Gy) by 0.75, 0.66, 0.38, and 1.19, respectively.Additionally, VTP demonstrated intelligence to make improvements by decreasing the priority of the planning structure.For example, VTP reduced the priority of the oral cavity (TPP 5 ).As the oral cavity intersects with PTV volumes, this adjustment increased scores of V PTV [54Gy](%), V PTV [57Gy](%), and V PTV [60Gy](%) by 2.48, 0.95, and 0.09, without negatively impacting the score of D Oral [mean](Gy).Note that this entire process was autonomously completed by the VTP without human intervention.It hence indicated the decision-making capability in handling the challenging problem of H&N cancer treatment planning. Performance on clinical cases To demonstrate the model's effectiveness in treatment planning, we conducted an assessment of VTP involving 20 clinical cases of patients who received treatments at our institution. 
Figure 7 compares dose distributions at four axial levels, plan scores, and DVHs between VTP-generated and human-generated plans for a representative patient case.The VTP-generated plan presented superior dose sparing of OARs, including the brainstem, spinal cord, parotid glands, oral cavity, esophagus, and pharyngeal constrictor, without sacrificing the PTV coverage.The VTP-generated plan achieved a total plan score of 130.86, significantly outperforming the clinical plan, which scored 108.29.It is worthwhile to point out that the VTP-generated plan had lower homogeneity with larger hot spots in PTV60 and PTV57, which may be ascribed to the fact that the ProKnow criteria did not include this requirement, and hence VTP was not trained to achieve this.In contrast, the human planner prioritized homogeneous doses across all PTVs.Despite achieving a higher ProKnow score and better OAR sparing, the hotspots in the lower dose PTVs may not be clinically acceptable. Figure 8 summarizes the ProKnow scores for 18 clinically relevant dosimetric metrics measuring various aspects of plan quality, as well as total plan score, and MUs.We also performed a non-inferior statistical test for each metric to compare them.Statistical significance was considered with a threshold of p * = 0.05.On the whole, VTP achieved an average plan score of 125.33 ± 11.12, in contrast to clinical plans scored an average of 117.76 ± 13.56 (p = 0.069).This indicated that the quality of VTP-generated plans was slightly better than those generated by human planners, although the improvement was not found to be statistically significant. VTP-generated plans exhibited higher MUs, with an average of 795.35 ± 125.18 MU per fraction, compared to human-generated plans with 607.43 ± 108.16 MU (p = 0.002).This implies that VTP-generated plans tend to have more intensity modulations and greater complexity.Considering that the prescription of the highest dose level was 210 cGy per fraction (6300 cGy in 30 fractions), the modulation factors defined as the ratio between MUs and prescription dose were ∼3.8 for VTP plans, and ∼2.9 for clinical plans, both lower than the empirically acceptable threshold of 4.0 to warrant deliverability of the plans. The automated planning process proved efficient, producing plans in an average of 91.46 ± 15.14 minutes.VTP's decision-making was nearly instantaneous, with the majority of planning time spent on Eclipse's optimization process. About the results In figure 7, we observed different plan characteristics between the clinical and VTP-generated plans, e.g. 
in target homogeneity, dose falloff, etc.The clinical plan exhibited a low plan score with moderate OAR sparing but a higher homogeneity for PTV60 and PTV57.In contrast, the VTP plan scored higher, demonstrating excellent OAR sparing but a poorer homogeneity.This fact can be ascribed to different planning objectives between the human planner and VTP.Notably, the PTV heterogeneity index was not factored into VTP's plan optimization process, as it was not a criterion of the AAMD plan challenge upon which our optimization system was built.However, the ProKnow scores included the requirement for the overall hotspot (<69Gy).Hence, VTP only considered the PTV dose homogeneity for the PTV63 to a certain extent, but not for the other two targets.In contrast, human planners aimed for plans with homogeneous doses across all PTVs, according to our institutional planning guidelines.It is also for this reason that the dose falloffs outside PTV boundaries were made shaper by VTP due to an attempt to spare normal organs allowed by sacrificing PTV homogeneity, compared to those in the human-generated plans.While this behavior of VTP was understandable, it in fact highlighted the need to clearly define treatment planning objectives for VTP to effectively execute planning and generate proper plans. In figure 8, although the plan scores achieved by VTP were higher than those achieved by human planners, it is worth remarking that we present the results only for the purpose of demonstrating the capability of VTP for automated treatment planning by autonomously interacting with the TPS.Multiple reasons may cause the relatively lower scores of human planners, but the plans themselves are still acceptable for the clinical management of these patients.In particular, as the VTP was developed for the AAMD Planning Challenge and the Challenge evaluated the plan quality using the ProKnow score, the VTP incorporated this score in the reward function and hence was trained to learn to optimize this score.In contrast, the clinical plans we extracted for comparison were previously developed for clinical usage.Human dosimetrists developed them following our institutional planning guidelines to meet planning objectives specified by the attending physicians, which were not quantified by the ProKnow score.Hence, the VTP-generated plans and human-generated plans were developed under the guidance that was not necessarily aligned, which contributed to the different plan scores when evaluated using the ProKnow criteria.A higher ProKnow score does not necessarily guarantee clinical optimality.A related question here is whether a numerical metric may be defined to capture all aspects of plan acceptability, including both dosimetric quality and physician's preference.Answering this question remains an area of ongoing research. 
Contributions of this study Compared with previous work, the contributions of this study were threefold.First, we achieved a fully automated IATP workflow for the clinically challenging scenario of H&N cancer RT treatment planning.The previous focus on prostate cancer represented a relatively simple treatment planning scenario, with only a few critical structures in immediate proximity to the prostate target, allowing for a relatively straightforward strategy to achieve effective dose coverage to targets and OARs sparing.In contrast, H&N cancer treatment Y Gao et al planning is a much more challenging task, even for human planners, due to the unique dosimetric requirement and anatomical complexity.For instance, H&N cancer tumors can be in irregular shapes with large volumes and the plan has to simultaneously address multiple targets with different prescription dose levels.The proximity of the target to normal organs substantially increased planning complexity to achieve a sharp dose fall-off for normal organ sparing.Our study for the first time demonstrated the feasibility of addressing this intricate treatment planning problem using the DRL approach. Second, we overcame the challenge of low training efficiency when using a real clinical TPS.Our earlier work in prostate SBRT trained the DRL agents using an in-house TPS similar to the commercial TPS.Integrating VTP to Eclipse during model training enhances generalizability and eliminates the issues caused by different dose engines between the in-house TPS and Eclipse.Previously for prostate SBRT cases, we were not able to use Eclipse due to the large amount of optimization tasks to be accomplished to train the VTP with a relatively large network size and a number of possible actions.It was acceptable to use the in-house TPS to train VTP and apply it to Eclipse, likely because of the relatively easy planning task for prostate SBRT and loose planning requirements.For H&N cases, the planning task is more difficult, and it is necessary to maintain consistency between TPSs used in model training and application.Hence, this study used Eclipse TPS for model training and to achieve so, we innovatively developed strategies such as reducing network complexity and the number of possible actions. Third, the VTP was trained to address the 2023 AAMD Planning Challenge.As only one patient case was available for this competition, we trained VTP on this single case via the end-to-end DRL approach.Different from the majority of deep learning studies that are inherently limited by available data, DRL generates data by itself during the training process.Our study demonstrated that the VTP trained on only one patient achieved great generalizability in terms of performing high-quality treatment planning for real patient cases not seen in the training process. 
Limitations and future work The current study has the following limitations.First, model generalizability is a central topic of deep learning.When characteristics of the data at the model inference stage deviate from that of the training data, model performance may degrade.DRL training is less restricted by the quantity of training cases, as the training data is in fact state-action pairs generated during the training process itself.In this study, we successfully demonstrated this aspect by showing that VTP trained on a single patient case can still generally perform relatively well in other real clinical cases.Nonetheless, we remark that the fact that the data were generated for only one patient still bears the generalizability concern.We observed that VTP encountered difficulties in cases that have quite distinct anatomy from the case used in training.For example, during comprehensive evaluation, a difficult case was observed where VTP performed notably worse.This patient underwent unilateral neck RT, with PTV volumes substantially overlapping with the brachial plexus.As VTP had not encountered such a scenario during training, it struggled to balance target coverage and OAR sparing, as shown in figure 9. Therefore, it is still desired to further train the VTP in other cases to enhance its generalizability, which is our ongoing work. Second, the VTP was trained to only adjust TPPs in inverse planning optimization.However, other parameters may significantly influence the resulting plan quality and should be adjusted as well.For example, the collimator angle may affect dose fall-offs around the targets.Human planners know to set up parameters like this based on experiences.Our current study represents just the initial phase in developing the IATP framework for challenging H&N cases.We expect extensive subsequent studies to enhance the intelligence level of the virtual planner. Third, the simplicity of the ProKnow scores may not fully capture the complexities of planning objectives in real clinical practice.As discussed previously, to align with our institutional treatment planning guideline, target homogeneity should be included.Moreover, recognizing the limitations of quantitative metrics, integrating criteria such as physician judgment is necessary to ensure clinical acceptance of VTP-generated plans.One potential solution involves leveraging our recent development of a deep learning-based virtual physician (Gao et al 2022(Gao et al , 2024)), which predicts plan approval probability using adversarial learning based on clinically approved plans.We plan to build the virtual physician model to the H&N context and integrate it into the DRL training in our future study.Doing so is expected to generate plans that not only meet dosimetric standards but also closely align with the clinical preferences of human physicians.On the evaluation side, in addition to quantitative measure plans with certain metrics, it is also important to involve domain experts to fully assess the clinical impact of our research development.We are in the process of performing human evaluation studies comparing VTP plans and human-generated plans and commenting on their strengths and weaknesses.The results will be reported in our future publication. 
Conclusion

In this paper, we reported our recent progress on the continuing development of the IATP framework by extending the VTP to H&N cancer treatment planning, a much more challenging task than the prostate cancer treatment planning addressed by our previous studies. We implemented a hierarchical DRL framework for the VTP to mimic the treatment planning processes performed by human planners. We seamlessly integrated VTP with the Eclipse TPS via ESAPI to train VTP using this TPS and achieve a fully automated treatment planning workflow. In a prospective evaluation context of the 2023 AAMD Planning Challenge, the VTP achieved first place in treatment planning efficiency. An evaluation study using 20 real clinical cases showed that the VTP achieved an average score of 125.33 ± 11.12, higher than that of the plans generated by experienced human planners at 117.76 ± 13.56. These results demonstrated the potential of VTP for automated H&N cancer treatment planning.

Figure 1. The detailed architecture of the VTP. VTP consists of two subnetworks: (a) Parameter-Net and (b) Action-Net. Output sizes for the fully connected layers are specified.

Figure 2. The fully automated workflow of VTP to automatically generate a VMAT plan for H&N cancer cases in the Eclipse TPS.

Figure 3. The patient experienced a 16.5% weight loss during the 4th week of RT treatment, leading to a significant change in body shape. To visualize these changes, we superimposed the initial and adaptive planning CT scans and show four axial levels here, with the red region highlighting significant anatomical changes.

Figure 4. The plan for the initial phase of the AAMD Planning Challenge case. (a) Dose distributions across six axial slices from the top to the bottom of the targets. (b) DVHs. (c) Dosimetric results and scores.

Figure 6. (a) DVH comparisons of plans at steps 0 (triangle), 4 (dot), 7 (heart), and 9 (square). (b) The detailed scores of clinical metrics achieved by the plans at steps 0, 4, 7, and 9. (c) Decisions that VTP made at each planning step to improve the plan score from 132 to 139.41.

Figure 7. Comparisons between dose distributions, ProKnow scores, and DVHs of manual and VTP plans for an example case.

Figure 8. Boxplots displaying the ProKnow scores for clinically relevant plan quality metrics, the overall plan score, and monitor units, comparing 20 clinically approved plans and their corresponding plans generated by VTP. Median values are represented by horizontal lines inside the boxes, and the calculated p-value for non-inferiority testing is displayed in the upper right corner of each boxplot. p-values below 0.05 are highlighted to indicate statistical significance.

Figure 9. Comparisons between dose distributions and ProKnow scores of manual and VTP plans for a case that was challenging for VTP.

Table 1. Planning objectives involved in the H&N cancer planning by VTP. The first section lists the objectives that were fixed in our study; the second section includes the objectives whose priorities VTP adjusts.
BicOverlapper 2.0: visual analysis for gene expression

Motivation: Systems biology demands the use of several points of view to get a more comprehensive understanding of biological problems. This usually means taking into account different data regarding the problem at hand, but it also involves using different perspectives on the same data. This multifaceted aspect of systems biology often requires the use of several tools, and it is often hard to achieve a seamless integration of all of them, which would help the analyst to have an interactive discourse with the data. Results: Focusing on expression profiling, BicOverlapper 2.0 visualizes the most relevant aspects of the analysis, including expression data, profiling analysis results and functional annotation. It also integrates several state-of-the-art numerical methods, such as differential expression analysis, gene set enrichment or biclustering. Availability and implementation: BicOverlapper 2.0 is available at: http://vis.usal.es/bicoverlapper2 Contact: rodri@usal.es

INTRODUCTION
BicOverlapper 1.0 (Santamaria et al., 2008) focused on the visualization of complex gene expression analysis results coming from biclustering algorithms. Based on Venn-like diagrams and overlapping visualization layers, it successfully conveyed biclusters. With the use of BicOverlapper by the authors and third-party users, several new requirements arose, and it has evolved to support other analysis techniques and additional steps of the analysis process. Similar evolutions have occurred in other tools in the field. For example, Expander has extended microarray data analysis with relational and functional information (Ulitsky et al., 2010). Hierarchical Clustering Explorer, although originally designed for general use, added new methods for bioinformatics analysis (Seo et al., 2006). Treeview (Saldanha, 2004) is developing toward a new version that will address high-throughput biology needs (see https://www.princeton.edu/~abarysh/treeview/).

APPROACH
During the design of BicOverlapper 2.0, we focused on a high level of interaction and a visual analytics (Thomas and Cook, 2005) approach. Another important design principle was the simplification of installation and interfaces. Finally, following the original 'overlapping' philosophy, we designed linked visualizations and an agglomerative use of standard numerical analyses. For example, differential expression analysis compares two experimental conditions, but BicOverlapper 2.0 allows several combinations of experimental conditions to be compared at once and the relationships between the differentially expressed groups to be visualized.

METHODS
The tool is implemented as two interconnected layers: visualization and analysis. The analysis layer is R/Bioconductor-dependent, using several packages and ad hoc scripts. Data retrieval from Gene Expression Omnibus (GEO) and ArrayExpress is supported by their corresponding packages (Davis and Meltzer, 2007; Kauffmann et al., 2009), although it requires high bandwidth and not all of the experiments are supported. Data analysis includes the following: Differential expression with limma (Smyth, 2005). In addition to one-to-one comparisons, BicOverlapper allows multiple comparisons to be performed at once, visualized as intersecting differentially expressed groups. This way, analysis time is reduced, and the differences between the comparisons can be inspected. Gene set enrichment analysis is also implemented via GSEAlm (Oron and Gentleman, 2008).
Enriched gene sets are visualized as overlapping groups. Biclustering, as in the previous version, is computed with the biclust package (Kaiser et al., 2013). The Iterative Search Algorithm (ISA) is now also available via the isa2 package. Correlation networks. This is a simple yet powerful method to find groups. Genes with low overall expression variation are filtered out, and the rest are linked if they have a profile distance below some standard deviations (an illustrative sketch of this procedure is given at the end of this section). The resulting network is visualized as a force-directed layout, where nodes can be colored by the expression under selected conditions. The visualization layer is developed in Java and it communicates with the analysis layer via rJava (Urbanek, 2007). This layer contains several visualization techniques, with implementations based on Prefuse (Heer et al., 2005) (networks, scatterplots), Processing (Reas and Fry, 2007) (overlapper, heatmap) and plain Java (parallel coordinates, word clouds).

RESULTS
To involve biology specialists in bioinformatics analyses, we need simpler and highly interactive tools. For example, Figure 1 was generated only by clicking two menu options and selecting one visual item and gene/condition labels, in a process that takes no more than 5 min (see Supplementary Video at http://vis.usal.es/bicoverlapper2/docs/tour.mp4). Underneath, this requires the seamless connection of different steps: expression data loading, computation of distribution statistics, three differential expression analyses (for up- and down-regulation), gene annotation retrieval and the visualization of four interactive representations. Figure 1 provides a considerable amount of information about the experiment. First, parallel coordinates (Inselberg, 2009) indicate with boxplots that the data are normalized, although probably skewed towards upregulation. Second, differential expression groups, displayed as Venn diagrams, present a large overlap for genes upregulated at mid and early timepoints with respect to late timepoints. These intersecting genes have a clear pattern under heatmap and parallel coordinates and include nine genes related to the Gene Ontology (GO) terms 'oxidation-reduction process' and five related to 'fatty acid beta-oxidation'.

CONCLUSION
BicOverlapper is a simple-to-use, highly visual and interactive tool for gene expression analysis. Easily and without programming knowledge, the user can have an overall view of several expression aspects, from raw data to analysis results and functional annotations. This may significantly reduce the analysis time and improve the analytical discourse with the data. For the future, we are working on the support of high-throughput data, especially RNA-Seq, and a comprehensive report and image generation.

Figure 1 caption (beginning truncated): (Tu et al., 2005). Each cell cycle is divided into three time intervals (early, mid and late). Differential expression for every combination of such intervals is computed and visualized as overlapping groups. Thirty-six genes upregulated at early and mid intervals have been selected (intersection between 'early versus late' and 'mid versus late' groups at the bottom left); their expression profiles are shown in parallel coordinates and heatmap visualizations. Finally, the functional annotations, stacked by term, are shown as a word cloud, indicating, for example, that 9 of the 36 genes are related to metabolic and oxidation-reduction processes.
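To make the correlation-network step described in the Methods concrete, the following is a minimal Python sketch (the tool itself implements the analysis in R and the visualization in Java); the variance cut-off, the Euclidean distance measure and the threshold expressed in standard deviations are assumptions for illustration, not BicOverlapper's exact choices.

```python
# Illustrative sketch of the correlation-network construction: filter out genes with low
# overall expression variation, then link genes whose expression-profile distance is below
# (mean - k * std) of all pairwise distances. Cut-offs and distance are assumed values.
import numpy as np
from itertools import combinations

def correlation_network(expr: np.ndarray, gene_ids, var_quantile=0.5, k_sd=1.0):
    # expr: genes x conditions matrix
    variances = expr.var(axis=1)
    keep = variances >= np.quantile(variances, var_quantile)  # keep the more variable genes
    kept_ids = [g for g, k in zip(gene_ids, keep) if k]
    profiles = expr[keep]

    # pairwise profile distances between the retained genes
    pairs = list(combinations(range(len(kept_ids)), 2))
    dists = np.array([np.linalg.norm(profiles[i] - profiles[j]) for i, j in pairs])

    threshold = dists.mean() - k_sd * dists.std()  # "a profile distance below some standard deviations"
    return [(kept_ids[i], kept_ids[j])
            for (i, j), d in zip(pairs, dists) if d < threshold]

# Example with a small random expression matrix (5 genes x 4 conditions).
rng = np.random.default_rng(0)
print(correlation_network(rng.normal(size=(5, 4)), ["g1", "g2", "g3", "g4", "g5"]))
```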
SensoTube: A Scalable Hardware Design Architecture for Wireless Sensors and Actuators Networks Nodes in the Agricultural Domain

Wireless Sensor and Actuators Networks (WSANs) constitute one of the most challenging technologies with tremendous socio-economic impact for the next decade. Functionally and energy-optimized hardware systems and development tools are perhaps the most critical facet of this technology for the achievement of such prospects. Especially in the area of agriculture, where the hostile operating environment adds to the general technological and technical issues, reliable and robust WSAN systems are mandatory. This paper focuses on the hardware design architectures of the WSANs for real-world agricultural applications. It presents the available alternatives in hardware design and identifies their difficulties and problems for real-life implementations. The paper introduces SensoTube, a new WSAN hardware architecture, which is proposed as a solution to the various existing design constraints of WSANs. The establishment of the proposed architecture is based, firstly, on an abstraction approach in the functional requirements context and, secondly, on the standardization of the subsystems' connectivity, in order to allow for an open, expandable, flexible, reconfigurable, energy-optimized, reliable and robust hardware system. The SensoTube implementation reference model together with its encapsulation design and installation are analyzed and presented in detail. Furthermore, as a proof of concept, certain use cases have been studied in order to demonstrate the benefits of migrating existing designs based on the available open-source hardware platforms to the SensoTube architecture.

Introduction
Wireless Sensors and Actuators Networks (WSANs) are an established and challenging technology, with a great potential impact on the measurement, communication and control applications of a variety of activities of the modern postindustrial society. According to market analyses, WSAN node sales will constitute a multi-trillion market in the next few years [1,2]. Given that, during the last 25 years, the agricultural production sector has been transformed from a traditional labor-intensive sector into a technology-intensive one, agriculture has been strongly considered as a very promising potential area for WSAN technology use. Indeed, a vast range of existing and future WSAN applications in agriculture have been identified and reported by many researchers [3][4][5]. Moreover, relatively new terms have been introduced into current terminology in order to express the trends in modern agriculture, such as: precision agriculture; precision farming; variable-rate management; etc. On the other hand, building such systems requires expertise in many fields, such as embedded systems, MCUs' firmware development, power electronics, sensors, radio frequency (RF) communications, wireless networking, printed circuit board (PCB) design, prototyping, testing and evaluation. Furthermore, it is necessary to have a keen awareness of updated solutions launched by the microelectronics industry, which can benefit new systems' designs (i.e., new integrated semiconductors (ICs), systems-on-chip [54] and other electronic components, in general). Diving deep into such fields is very often out of scope, for example, in cases where the aim is to monitor a particular physical phenomenon using WSAN technology in-situ. Although the aforementioned requirements are critical preconditions to WSAN systems, they are not included in the training courses for WSAN systems and applications [55].
The view perspectives of what a WSAN system is may vary among different areas of interest [6]. In particular, for the agricultural domain, the various differences in perspectives of a WSAN node are summarized in Table 1. Design inefficiencies like the fragmentation and limitations caused by the lack of skills can jeopardize the anticipated full-scale commercialization and popularization of WSAN systems. In order to build successful systems that can face the difficulties and the requirements of real-life applications, each stakeholder has to take into consideration other stakeholders' needs and idiosyncrasies. As depicted in Figure 2, there are five major stakeholder groups, namely the application experts, the systems designers, the end-users, the industry/market, and the authorities (the external circle in Figure 2). The arrows illustrate the influence between different parts. Practically, this influence is based on the flow of tangibles (e.g., technologies, systems, tools, documents, etc.) and intangibles (e.g., skills, ideas, needs, expectations, etc.).

Obviously, the typical architecture of Figure 1 or the COTS approach cannot support the design of systems that will meet all the expectations of everyone that has a vested interest in a WSAN application. On the other hand, the reaction of the expandable multi-board systems' designers and developers signals the direction for future architectures, in order to confront the changing, demanding, and complex applications' ecosystems.
Embedded Systems Development Technologies and Market Trends
Embedded systems technology has strongly been influenced by the dramatic changes that have occurred in the mobile phone market. In the last decade, consumer demand for ever more powerful smart phones has driven the electronics industry to design and manufacture high-processing and low-power MCUs and microprocessors (MPUs). This evolution has helped the introduction of mobile computing devices, e.g., tablets, etc., which in turn has acted as an additional reason for the development of new semiconductors, processors, sensors, batteries, and communication modules. Due to economies of scale of such markets, the cost of embedded systems has significantly diminished, whilst the cost-to-performance ratio has increased notably. Consequently, the design of hardware WSAN solutions has vastly been affected by the aforementioned changes. In Figure 3, the most important changes in technologies and approaches associated with the sub-parts of the typical WSAN node system, in the last decade, are illustrated. Obviously, nowadays, experienced designers and developers have plenty of choices at their disposal, in order to build either end-to-end generic commercial solutions or optimized application-specific solutions. In particular, in the field of agriculture, there are mature technologies for energy, communications, processing, etc. that can positively help towards the development of reliable and vigorous outdoor WSAN systems. Of course, the lack of skills and knowledge makes these efforts difficult. On the other hand, this technological breakthrough creates a hurdles race, because the revolutionary technologies have to be quickly assimilated and used, new development tools have to be launched to support designers in this effort as well as to produce new technology in their turn, whilst at the same time, under the pressure of stakeholders for robust and standardized solutions, the new solutions have to be commercialized as soon as possible. As a response to this perpetual need, many significant developments have been made by the electronics industry and market, in order to provide new tools, methods and solutions that encapsulate new technology and allow fast prototyping (such as the Mbed [56], Codebender [57], and i-Sense [58]). The real revolution came from a new area of systems development, the so-called open-source software and hardware. According to this concept, the various design artifacts, i.e., documentation, circuits, software code, hardware project implementations, application case studies, etc., are freely shared among big users' communities under the license scheme of Creative Commons Attribution Share-Alike, which allows for both personal and commercial derivative works as long as credit to the original creator is granted [59]. The most successful case of the open-source design approach is the Arduino platform [60].
The Arduino platform (Arduino SRL, Scarmagno, TO, Italy) is an MCU-based board using an Atmel AVR 8-bit MCU, which provides all of the microcontroller's pins to pass-through pin-headers. Through these headers, all the major functional peripherals of the MCU are available to users. Users, in their turn, can connect other personal or commercial hardware boards, called shields, to the main Arduino board, in order to build their own specific applications. In order to make the firmware development process easy for users without much experience in embedded systems, Arduino provides a ready-made library of APIs in its integrated development environment (IDE). Thus, developers have at their disposal an open-source hardware and software platform that offers the expandability and reusability they are looking for. Despite the fact that Arduino was originally established for education and hobbyists [61], it soon became very popular in research and development of real-life applications, even in demanding areas such as agriculture. In recent years, the electronics industry has realized the advantages of the open-source design approach and the Arduino concept, and foresees it as a prosperous new market. The idea of expandable modular hardware development tools and the application-centric programming concepts, of course, cannot be attributed to Arduino or to its successors. Many implementations, such as the Basic Stamp for MCU programming in the Basic language in the early 1990s [62,63], and the e-Blocks modular hardware tools [64], aimed at providing easy-to-build hardware embedded systems. The reason that these efforts did not attract the popularity of the Arduino concept is probably associated with the fact that they were single-source commercial solutions with negative cost and openness implications. The free support from a vast community of designers and developers, in the case of open-source platforms, has made the big difference, and it seems to offer a way out of the demanding conditions for the integration and use of new technology, especially in cases such as WSANs. Today, there are two popular competitive open-source platforms, namely the Arduino [60] (Figure 4a) and the Launchpad (Figure 4b) from Texas Instruments (Dallas, TX, USA) [65]. For each platform, there are several add-on boards aiming to provide application-specific functionality, produced by the original creators or by third parties such as companies or individuals from various users' communities. Both Arduino and Launchpad provide several alternatives, in terms of processing power, number of input/output pins and peripherals.
In the meantime, all the key-player semiconductor manufacturers decided to launch Arduino-like, or Arduino-compatible, platforms, in order to promote their own new MCUs and microelectronics portfolios.
Among these platforms are the Nucleo from ST-Microelectronics (Geneva, Switzerland) [66], the FRDM from Freescale/NXP (Eindhoven, The Netherlands) [67], the XPresso from NXP [68], and the Blackfin DSP platform from Analog Devices (Analog Devices Inc., Norwood, MA, USA) [69]. Others, such as Infineon (Infineon Technologies AG, Neubiberg, Germany), have launched application-specific Arduino shields [70]. As the acceptance of the open-source expandable platforms (OSEP) increased, the introduction of single-board computers (SBCs) extended the capabilities of such platforms, regarding processing and computational power, the use of open-source operating systems such as Linux, the interfacing of low-cost USB communication modules (WiFi, ZigBee, Bluetooth, GSM modules, etc.), connectivity with cameras and screen displays, interfacing with audio sources and outputs, etc. Most of these SBCs are miniature in their physical dimensions (i.e., credit-card sized), low-power and low-cost compared to mini computers. The expandable SBCs easily allow the development of web-based applications, which is very useful for WSAN applications in remote areas (common in agriculture). Some of the SBCs provide hosting of Arduino shields, in order to ensure compatibility with all the already existing application shields. Thus, the result of this compatibility is the reusability of hardware implementations. Among the most popular SBCs are the BeagleBone (BeagleBoard.org Foundation, Oakland Twp, MI, USA) [71], the Raspberry Pi (Raspberry Pi Foundation, Caldecote, Cambridgeshire, UK) [72], and the Galileo from Intel (Santa Clara, CA, USA) [73]. The most recent of these derivatives, namely the Raspberry Pi 2, the BeagleBone Black, and the Galileo Gen2, are illustrated in Figure 5. Following the introduction of these SBCs, other movements in the same direction took place, either from Arduino (e.g., Arduino Tre, Leonardo, and Due) [74] or from well-known semiconductor industries such as Freescale/NXP (FRDM Kinetis KL64) [75]. A well-documented presentation and comparison of all the existing SBCs is given in [76]. In general, the SBCs cannot be considered as a design basis for the build of a WSAN node because of their extended requirement for energy.
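The energy argument above can be made concrete with a back-of-the-envelope calculation. The sketch below compares idealized battery life for a continuously running SBC against a duty-cycled MCU node; all current-draw and capacity figures are rough, assumed values chosen for illustration only, not measurements of any particular board.

```python
# Back-of-the-envelope illustration of why SBCs are a poor basis for battery-operated
# WSAN nodes. All figures are assumed values for illustration, not measurements.

def battery_life_days(battery_mah: float, avg_current_ma: float) -> float:
    """Ideal battery life in days (ignores self-discharge and converter losses)."""
    return battery_mah / avg_current_ma / 24.0

BATTERY_MAH = 2500.0                             # a single 18650-class cell, assumed

sbc_avg_ma = 300.0                               # SBC running continuously, assumed
mcu_node_avg_ma = 0.05 * 20.0 + 0.95 * 0.01      # 5% active at 20 mA, 95% asleep at 10 uA

print(f"SBC-based node:       {battery_life_days(BATTERY_MAH, sbc_avg_ma):6.1f} days")
print(f"Duty-cycled MCU node: {battery_life_days(BATTERY_MAH, mcu_node_avg_ma):6.1f} days")
```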
The multi-board expandable platform architectures have significantly influenced the typical architecture of WSAN nodes' hardware design. The functional blocks depicted in Figure 1 can now be physically separated from each other thanks to the boards' mechanical and physical layer "standardization".

Multi-Board Architectures Expandability Mechanisms
In order to connect two printed-circuit boards (PCBs), it is necessary to use board-to-board connectors. On the other hand, in order to connect more than two boards, a solution that ensures boards are stackable has to be used. Multi-board platforms, such as Arduino, adopted the pass-through pin-headers (see Figure 6a). There is no established name for these headers. Sometimes, one refers to them as pin-headers with long pins, or just as Arduino headers. In this work, the term "boards-expansion-connectors" (BECs) is proposed. BECs are placed and soldered on boards and they usually have a male and a female side, so as to allow for board stackability. Initially, the Arduino architecture used four BECs in total (Arduino Uno Rev. 3) [77,78], in order to connect, in a somewhat random way, all the pins of its MCU (Figure 6b). Practically, the Arduino BECs provide mechanical and physical access to the on-board MCU.
Whilst this technique seems quite simple, in terms of state-of-the-art microelectronics, it freed engineers by giving them a convenient way to design expandable and reusable hardware. Further, this technique is particularly used in radio communication modules that come with BECs, in order to be placed on different MCU-based boards. This capability allows systems designers to plug in and test several alternatives for wireless communication, using the same MCU-based main-board. Besides its easy way of firmware application development through high-level APIs, Arduino owes its popularity to the BECs approach for expandability. Any third-party board which hosts the four Arduino BECs in the exact physical places can be considered an Arduino expansion shield. This way, designers are free to develop the application shields for their particular applications.

With regard to the applications in agriculture, the typical architecture of WSAN nodes can take a stackable form, allowing, as much as possible, for facilitating the dramatic technological changes (see Figure 3). In Figure 7a, an illustration of the physical transformation of a WSAN node, keeping all the functional parts of the typical architecture together but mechanically separated from each other, is given. This approach has started to become popular in WSAN applications development in the agricultural domain and appears to be the solution for the sought reconfigurable WSAN nodes [79][80][81][82]. Figure 7b shows a WSAN node built on one Arduino main-board and two expansion shields, one with Ethernet networking circuitry [83], and a second one with an IEEE 802.15.4/ZigBee radio module (in particular, the XBee module [84]). In parallel with the increase in acceptance of the expandable platforms, there has been an expansion of the requirements and expectations to be fulfilled by this approach. Consequently, all the key-player electronics industries have launched boards that keep the mechanical compatibility with the Arduino platform, but they have also put more powerful processing units and extra BECs on them (even Arduino does so). Of course, the mandated increasing need for systems' expansion negatively impacts any attempt for mechanical and functional standardization.
Others provide development platforms that are mechanically compatible with more than one platform, e.g., the Arduino and Mbed [66,68], or Arduino and Launchpad [85]. In general, there is a race in the industry to provide expandable solutions. In practice, their efforts are focusing on the physical-layer design through the introduction of different mechanical expansion mechanisms.

Open-Source-Hardware Architectures versus Open-Architecture Systems
Undoubtedly, OSH architectures have been seen as a significant way to avoid having to design hardware systems from scratch. For WSAN systems, the adoption of the OSH expandable multi-board architectures is very convenient, due to the fact that the developers (engineers and researchers) can test, evaluate and integrate several wireless connectivity solutions coming in the form of plug-in modules, together with several MCU main-board alternatives. This approach increases the degree of freedom and decreases the development cycle time for the sake of application deployment. In the following subsections, we identify and present several constraints associated with the existing OSH expandable architectures, in order to emphasize the need for new solutions that could help the open-source approach make the next step, namely the step from prototyping to optimization and reliability.

Signals Management Constraints
(a) Signal conflicts and short-circuits: According to the expandable multi-board architectures, all the input and output pins (i.e., digital, analog, buses and ports pins) come directly from the main-board's MCU and, through the BECs, are available for use by the rest of the expansion shields.
The MCU solely manages every pin regarding its signal direction (i.e., input or output), its function, and its frequency of operation. Expansion shields cannot change the characteristics of the BECs' signal-pins. On the other hand, if a pin is declared, for example, as an output by the MCU, then this signal must be an input for the rest of the shields; otherwise, serious problems will appear due to electrical short-circuits.
(b) Limited multi-MCU development: The egocentric style of pin management by the main-boards does not allow for real multi-processor designs. Thus, it is very difficult to have two or more shields, each with its own MCU, which, at the same time, are managing some of the signals of the common BECs.
(c) Waste of existing system's resources: In the OSEP-based WSAN hardware implementations, it is very common to plug a wireless communication module onto an MCU-based main-board shield. In this case, the MCU just reads and writes data from/to the wireless module through an embedded serial port or bus (e.g., UART port, I2C-bus, or SPI-bus). In practice, the majority of the wireless modules have their own MCU in which the communication stack is running, while several input and output pins and ports are available to the developer for application use. This means that, in this design, there are two MCUs, but, in practice, just one MCU can be used for the application's scenario (that of the main-board), so the distribution of processing power among the shields is rather limited and several development resources are left unused.
(d) Signal voltage level incompatibilities: Quite often, there is incompatibility in terms of the logic levels of the signal pins among the various expansion shields. In embedded systems circuits, there are two typical logic families, that of +5 Vdc and that of +3.3 Vdc. In cases where two or more expansion shields with different voltage logic levels have to be interconnected through common BECs, specific extra logic-level translator circuits must be in place. Depending on the direction of the signal pins, i.e., inputs or outputs or both, the voltage level translators should be single-directional or bi-directional. This issue causes extra cost, requires more physical space on the shields' boards, degrades energy efficiency, and reduces the reusability of shields.
(e) Unused signal pin conditioning: When the application does not need all of the available pins from the BECs, these pins are left floating, in terms of circuit termination. Each particular MCU explicitly defines the signal conditioning for its unused signal pins. The lack of unused-pin management can cause significant problems related to the loss of energy, poor electromagnetic noise immunity, low ESD and EMI performance [86], and intermittent execution of the application scenario caused by erroneous interrupt activation in the MCU's firmware. The definition of the unused pins' level can be done either by enabling the MCU's internal pull-up resistors or by connecting external pull-up or pull-down resistors. In both cases, the energy balance is disturbed. On the other hand, there is no provision for the external resistors on the main-boards or on the rest of the expansion shield boards [87].
(f) Signal routing inflexibility: There is no mechanism to terminate the route of the BEC signals at the shield level. For instance, when the MCU outputs a signal to a particular shield, this signal is needlessly forced to be an input to the rest of the shields due to the common BEC signal pins. On the other hand, the existing BEC style of mechanical standardization limits the full exploitation of the overall system, since any single signal of the BECs, except for the supply voltage and ground pins as well as the data bus pins, can be used only by one shield and the main-board; thus, functionality is sacrificed on the altar of the invaluable expandability and reusability.
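Constraint (a) above can be illustrated with a simple check over the pin assignments of a shield stack. The sketch below is only illustrative: the shield names and pin maps are hypothetical, and a real stack would derive them from each shield's schematic or datasheet rather than from hard-coded dictionaries.

```python
# Illustrative sketch of detecting BEC pin conflicts in a shield stack: a conflict is
# any BEC pin driven as an output by more than one board. Shield names and pin maps
# are hypothetical examples.

def find_conflicts(shields: dict[str, dict[str, str]]) -> list[str]:
    """shields maps shield name -> {BEC pin: 'output' | 'input'}."""
    drivers: dict[str, list[str]] = {}
    for shield, pins in shields.items():
        for pin, direction in pins.items():
            if direction == "output":
                drivers.setdefault(pin, []).append(shield)
    return [f"pin {pin} driven by {', '.join(boards)}"
            for pin, boards in drivers.items() if len(boards) > 1]

stack = {
    "main-board":      {"D2": "output", "D3": "input",  "A0": "input"},
    "ethernet-shield": {"D2": "input",  "D3": "output"},
    "relay-shield":    {"D3": "output", "A0": "output"},  # D3 clashes with ethernet-shield
}
print(find_conflicts(stack))  # -> ['pin D3 driven by ethernet-shield, relay-shield']
```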
Power Management Constraints
(a) Poor energy conversion efficiency: Due to the fact that all the OSH expandable architecture solutions have originally been designed for pilots and proofs of concept in indoor test environments, they disregard the need for efficient power management. For WSAN applications in the field of agriculture, where the hardware nodes have to be battery-operated, the existing OSH architectures entail problems, because these solutions are not energy optimized. Arduino-like as well as SBC hardware solutions require external +5 Vdc power sources, which in most cases come from the USB port of a personal computer. Some of these solutions also accept external supply voltages above +5 Vdc, usually ranging from +7 Vdc up to a maximum of +18 Vdc. To keep the manufacturing cost down, the shield designers' choice for external voltage management is to use linear voltage regulators. These electronic components do not require much physical space on the boards, but, on the other hand, they suffer from very low energy efficiency. Also, the higher the external supply voltage is above the base +5 Vdc value, the more energy is lost in the form of heat at the regulator's package. Therefore, the use of this type of voltage regulator in the multi-board architectures downgrades the overall energy efficiency of the final system.
(b) Inability to ensure the power of the expansion shields: After the regulation of the external voltage, the voltage supply signal of the MCU, e.g., +5 Vdc, is routed to the related BEC pins in order to power the expansion shields. Unfortunately, the amount of power that can be drawn by the expansion shields is limited by the particular voltage regulator of the main-board shield [88]. When the expansion shields require levels of power higher than that sourced from the main-board, some or all of the shields must have their own power source circuits, in order to be able to accept external power.
(c) Power cabling ataxia: The high-power-consuming shields have their own connection terminal blocks or headers separated from the common BECs. In this way, the overall shield constructions suffer not only from poor energy efficiency, but also from power cabling ataxia. The power cabling burdens the ESD and EMI performance and makes the system vulnerable to noise interference. This problem is significantly escalated when, in the multi-board system, there is a need for secondary voltages, e.g., +3.3 Vdc for some shields with low-power MCUs.
(d) No provision for voltage signals other than logic levels: Various difficulties appear when voltages higher than the basic +5 Vdc and +3.3 Vdc, e.g., +12 Vdc, are required to drive actuators' loads. In this circumstance, the use of particular external power supply units for the system is mandatory. From the physical layer perspective, none of the existing OSEP mechanisms of expansion supports the physical connection of multi-value voltage signals.
(e) Energizing unnecessary circuits: In the expandable architecture boards, it is very popular for the main-boards and their shields to have some extra circuitry for general-purpose use, e.g., LEDs, MEMS-based sensors, etc. Actually, this is a very common practice also in WSAN end-solutions and COTS solutions [43]. Such extra circuitry may be useful for testing during the development phase, but it is totally useless in the final in-situ application, because it wastes significant amounts of energy.
For battery-operated WSAN systems in the agricultural environment, this testing circuitry degrades the valuable available energy. Unfortunately, the existing OSEP architectures do not have any provision for this constraint, so there is no mechanism for developers to disengage that extra circuitry in order to build energy-optimized systems. To emphasize this problem: a single LED indicator blinking inside a closed plastic box installed in the agricultural field is useless to the users, yet it consumes more energy than, e.g., a ZigBee RF transceiver.

Firmware Development Constraints
(a) Lack of code optimization: The application scenario that is hosted and running in the program memory of the main-board's MCU, also called firmware, normally ought to be optimized and tidily developed, so as to ensure the reliability of the ultimate hardware system. In the case of the Arduino-like expandable architectures, the firmware development is mainly carried out in the particular IDE of the hardware vendor. Whilst such IDEs provide many development facilities to the engineers through the use of extensive ready-made APIs or project templates, they produce firmware that is far from being optimized. For instance, the firmware for just toggling a single LED indicator may involve several kilobytes of program code. Every single line of code in the firmware, when it is executed by the processor, consumes a portion of the available system's energy. In battery-operated WSAN systems, such as in the case of remote agricultural applications, wordy firmware is a well-hidden source of energy wastage. Thus, the ease of firmware development is counteracted by the excess in energy consumption. Today, there are programming tool solutions for open-source developers that can help in the production of efficient and optimized code (e.g., the Mbed IDE). Hence, the key point is for firmware developers to start thinking about energy effectiveness. The trend in open-source development to use ready-made pieces of firmware or even complete projects from the developers' community is rather risky, because the hardware details are totally ignored or, in the best cases, only partially acknowledged.

Programming and Debugging Constraints
(a) Peripherals and energy charge: One of the most convenient and low-cost methods to download the firmware to the program memory of an MCU is in-circuit programming, also called in-system programming. The hardware implementation of this programming method requires the usage of a certain number of the MCU's pins (i.e., Vdd, Vss, Reset, SPI pins, UART pins, etc.), which have to be connected accordingly, in order for the MCU to be programmed directly by a personal computer via a USB port or through specific external serial programming devices. Whilst the programming operation takes place once, in-house, and lasts for a few minutes, the programming circuitry remains permanently on-board. Furthermore, this circuitry may cause electrical conflicts with other shields, because the programming pins are physically routed to the rest of the shields through the BECs, so in most cases it is mandatory to remove any connected shields before programming an MCU-based shield takes place.
(b) Limited debugging capabilities: Since the development of the firmware is mostly based on a combination of ready-made open-source parts of code, written by someone else, it is very critical for the system to be able to support real-time debugging with all the shields engaged.
(c) Lack of support for multi-MCU development: OSEP expandable architectures cannot support multi-core developments. Practically, the main-board and each one of the expansion shields which incorporates an MCU must host its own programming and debugging circuits on-board.

Robustness and Reliability Constraints
Of course, all the aforementioned constraints can harm, to a lower or higher degree, the robustness and reliability of the total system, but there are also some additional issues that relate to real-life applications:
(a) Lack of compliance to norms and regulations: Because the majority of the existing OSEP solutions are considered as prototyping development tools, they are not tested and certified in terms of specific norms and regulations for particular application domains.
(b) Poor system integrity and reliability: The absence of a central power management mechanism, the erroneous electromagnetic sources from sketchy handling of the unused BEC pins (these pins may act as antennae), and the uncontrolled performance of the various shields from different vendors constitute only some of the issues that are responsible for poor reliability. In addition, certain security issues may arise for some MCUs which are very popular in the OSH platforms [89].
(c) No form factor and encapsulation provision: Any WSAN application domain has its own particular requirements for the form factor and the encapsulation of the systems, in order to facilitate the deployment and to ensure the longevity of the systems. The existing OSEP solutions do not care about the physical dimensions of the final implementation.

The SensoTube Architecture
Taking into consideration the constraints mentioned in the previous section, the prospect of a new scalable architecture which, on one hand, can maintain the obvious advantages of the OSH expandable architectures and, on the other hand, can help the OSH concept take the next step towards optimization and reliability by providing the mechanisms for avoiding the existing limitations, can reasonably be raised. Hence, the OSEP concept can be fully exploited in WSAN applications even in the demanding domain of agriculture. Therefore, a new architecture, namely the SensoTube, is proposed and described in this section. The grand aim of the SensoTube architecture is to enable a WSAN hardware system to:
(a) Escape from the structural restrictions of the existing architectures: Adhesion to the traditional architectures together with the persistence on miniaturization seems to be rather inappropriate for real-life applications in agriculture. Actually, the size of a WSAN node does not matter [90], and for the case of agriculture, this is evident from the trend to use large-sized OSH platforms.
(b) Keep the advantages of the OSH expandable concept: The new architecture should maintain the reasons for which the Arduino-like OSH architectures became popular, i.e., the simplicity, the expandability, the reconfigurability and the reusability of hardware.
(c) Avoid existing OSH architectures' implementation constraints: The new architecture should provide new mechanisms, in order to avoid the constraints of existing expandable architectures (see Section 2) and, on the other hand, ensure the highest versatility and flexibility.
(d) Satisfy all the applications' stakeholders: The new architecture should allow designers from different design fields (e.g., power electronics, communications, data acquisition, etc.) to easily adapt their contributions.
Also, the end-users should have clear and reusable building blocks for their present and future integrations.
(e) Support the "separation of concern": Regarding the research efforts to study new challenging technologies with potential benefits for WSAN systems, there is a trend for decoupling the WSAN from the application [91]. Also, several other studies, e.g., for WSAN nodes' scenario reconfiguration [92,93], for strategies on WSAN power management [94], for data acquisition development, or for implementing technologies like Wakeup-Radio (WuR) [95], and many others, are indicative cases where decoupling from the wireless networking is required. This decoupling is practically achieved either by the addition of more than one MCU on-board, or by the usage of FPGAs, or by the addition of extra RF communication circuits or modules. Such modifications are necessary to overcome the limited boundaries of the traditional architectures, in order to implement the pilots. On the other hand, they may be considered as custom closed-architecture designs. Therefore, the new architecture should ensure the accommodation of research in new and challenging technologies.
(f) Support modeling: The new architecture should allow for modeling of the WSAN hardware system. To meet this target, the architecture should provide the highest scalability and standardization. In this way, the WSAN systems could be seen from the middleware infrastructure as well-defined functional multi-class objects.
(g) Ensure optimization: The systems based on the new architecture should combine the performance level required in real-world WSAN applications [10] with the vagaries of the agricultural domain. Ideally, the systems should have the optimization level of commercial end-systems, but with the flexibility of a testbed. Furthermore, a provision should be made in terms of the form factor of the WSAN systems and their encapsulation, in order to cover the specific requirements in the open agricultural field. Actually, the name SensoTube reflects the idea of using plastic tubes for the encapsulation of the WSAN systems in the agricultural fields.

The first step towards the foundation of the SensoTube architecture was to identify every single possible function that should exist in an ideal hardware WSAN node and to classify the functions into certain groups according to their similarities and their scope. Next, these groups are considered as seven discrete functional layers (see Table 2) by which any WSAN hardware system can be studied, designed and built. As reported in [96], any effort towards substantial hardware abstraction can increase the fidelity of the characterization and classification of WSAN systems. (Table 2, recovered fragment: Application-Specific Layer, ASL; 5 Programming and Debugging Layer, PDL; 6 Power Management Layer, PML; 7 Evaluation and Testing Layer, ETL.) According to SensoTube, each one of the suggested functional layers has to be able to be implemented as a separate expansion shield. In particular, such functional shields have to be:
- Autonomous: Each functional layer shield should be fabricated on its own PCB.
- Dedicated: Each shield should be designed in order to implement the tasks of the functional layer to which it belongs.
- Intelligent: Local intelligence in every shield is necessary, in order to take care of its functions and to allow for reasonable reconfiguration.
• Uniquely identified: When a system needs to incorporate several shields of different functional layers, as well as more than one shield of the same functional layer, each shield should be able to be uniquely identified by the system.
• Addressable: The system, according to the execution of its application scenario, should be able to treat the functional layer shields as addressable units.
• Self-expandable: In cases where the PCB surface area is not enough to host the necessary circuitry of a particular function, one or more complementary extra PCBs should be able to be added without disturbing the rest of the functional layer shields.
• Context aware: Each functional layer shield should be aware of its environment, that is, able to interact with other shields.
• Testable: Each functional layer shield should provide plain testing facilities, e.g., connection points for signal testing.
• Compatible: Each functional shield should be designed with respect to homogeneity in form factor and expansion mechanisms.

From the above characteristics, which form the profile of the ideal functional layer implementation, it is evident that the WSAN system should be a multi-processor system. In part, this is already mandatory in several COTS approaches which use MCUs together with MCU-controlled radio modules [97]. Moreover, the provision of a multi-processor capability will help designers escape from the egocentrism of the existing multi-board expandable architectures (e.g., Arduino and the like), which theoretically support the concept but in practice give the main-board MCU total control of the common BECs. The commercial MCU solutions present in the market today [40], together with the ongoing research on ultra-low power MCUs, can guarantee multi-processor operation even for battery-operated WSAN nodes [98,99]. In particular, among the most significant commercial achievements are the new low-power, high-performance ARM-based MCUs, which have already found their way into WSAN node implementations [100,101], and the ultra-low power 16-bit MSP430FR MCUs based on non-volatile ferroelectric RAM [102]. At the same time, researchers are striving towards the elimination of MCU leakage [103], the lowering of the MCUs' operating voltage level [104,105] and the improvement of the internal power management circuits of the MCUs [106,107]. In addition, particular techniques for energy saving in WSAN nodes have already been studied and have shown remarkable results. Some of them have focused on the wake-up and idle states of MCUs [99,108,109], or on the behavior of MCUs as normally-off devices [110,111]. The realization of the functional layers as autonomous shields helps designers to decide which and how many layers are needed to build a particular WSAN system, to work with discrete functional building blocks, to focus on specific system features, to isolate other functional layers from changes at a particular layer, and to work on an add-and-remove basis, in order to adapt to the specific requirements of each implementation.
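To make the "uniquely identified" and "addressable" properties listed above more concrete, the following minimal C sketch shows one way a shield's MCU could expose a descriptor that the rest of the system can query (for example, over the shared I2C-bus discussed later). The field names, addresses and values are illustrative assumptions only; the SensoTube architecture does not prescribe a particular descriptor format.

```c
/* Hypothetical shield descriptor, sketched to illustrate how a functional
 * shield could make itself uniquely identifiable and addressable to the
 * rest of the system. The layout and values are assumptions, not part of
 * the SensoTube reference model. */
#include <stdint.h>

typedef enum {            /* the seven SensoTube functional layers */
    LAYER_DCL = 1, LAYER_WNL, LAYER_DGL, LAYER_ASL,
    LAYER_PDL, LAYER_PML, LAYER_ETL
} layer_t;

typedef struct {
    uint8_t  layer;        /* which functional layer the shield implements */
    uint8_t  bus_address;  /* address used on the shared inter-layer bus   */
    uint32_t unique_id;    /* factory-assigned identifier of this shield   */
    uint16_t capabilities; /* bit-mask: self-expandable, testable, etc.    */
} shield_descriptor_t;

/* Each shield's MCU could answer a discovery request with its descriptor,
 * letting a directing MCU treat shields as addressable units. */
static const shield_descriptor_t my_descriptor = {
    .layer = LAYER_DCL, .bus_address = 0x21,
    .unique_id = 0x00A10003u, .capabilities = 0x0005u,
};
```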
In order to handle the functional layer shields as autonomous functional blocks, which can occasionally be added to and removed from the main system, particular provisions have to be in place, so as to avoid anarchy in expandability and scalability. All the functional shields share some common operational characteristics. In particular, they have to share their electrical signals with other shields, they demand either a single or a multi-value voltage source, they have to be in-system programmed and updated, and their MCUs must be able to communicate easily with the MCUs of other shields. The proposed SensoTube architecture establishes the necessary mechanisms to support these uniformity and openness needs through the introduction of four inter-layer services:
• Signals management
• Communication
• Programming and debugging
• Energy management

Since the four service layers cannot be implemented as distinct plug-in shields, specific provisions have been made in the form of electrical channels in the BECs of the expansion mechanism of the SensoTube architecture, as explained in the implementation reference model in Section 4. Actually, the establishment of the inter-layer services is the key enabler for the realization of the proposed functional abstraction; without them, the constraints of the Arduino-like OSH expandable platforms could not be eliminated. Furthermore, the inter-layer services ensure the building of a sound, expandable and scalable system. For instance, it is possible to have a system comprised of several OSH main-board shields sharing the very same expansion mechanisms, while at the same time each one of them remains self-expandable and autonomous. A complete representation of the SensoTube architecture is given in Figure 8. At a conceptual level, the presented architecture can satisfy the sub-aims (a) to (g) posed at the beginning of this section.

In the following sub-sections, the usage and benefits of the proposed seven functional layers are described, with particular emphasis on the advantages of the novel mechanisms of the inter-layer services. Additional emphasis is given to the facilitation of challenging WSAN research aspects and to the solutions to the existing design constraints. At the same time, the target is to explain how WSAN designers can use the SensoTube architecture to meet their particular requirements.

Data Acquisition and Control Layer (DCL)

In real-world WSAN implementations, it is very common for the specifications of the data acquisition (DAQ) and control to change [6].
For example, a new type of sensor may require a higher conversion resolution and a higher sampling rate, the need for extra sensors may require an extension of the existing analog inputs, or a new actuator may need more energy and special driving circuits, etc. Regarding the field of agriculture, the measuring and monitoring of various physical parameters very often require the use of complex sensory devices [5]. For such reasons, it is pointed out in [9] that, in the agricultural domain, a data acquisition daughter card is required. Today, the support of the data acquisition and control aspect of WSAN systems seems to be rather underrated in the existing architectures. For instance, in COTS (e.g., motes), the emphasis has been put entirely on the RF communications. In particular, some of them have a couple of sensors soldered on-board, just to be able to demonstrate the networking capabilities using measured data from the wireless nodes [26,43]. In most cases, these on-board sensors are not of the proper type and form to be useful in agricultural applications. In addition, as reported in [54], there are serious limitations in data sampling periods when a single MCU is responsible for both the networking protocol and the DAQ functions under an operating system. In general, the COTS-based systems leave the development of the DAQ and actuator circuitry in the users' hands. On the other hand, the existing expandable OSH architectures provide limited support for a sound DAQ and control function due to their inherent structural constraints (see Section 2). Furthermore, these solutions are not energy optimized so as to support battery-operated WSAN applications. In particular, the various analog or digital sensors connected to an OSH main-board are always activated, regardless of the fact that the sampling rate may be very low. This is particularly evident, for example, in the management of the soil sensors which are based on the SDI-12 bus [112].
Also, the existing expandable OSH-based systems suffer from poor scalability in terms of processing power and communication peripherals; thus, these architectures appear to be convenient only for the limited scope of short-term experimentation. On the contrary, the SensoTube architecture, with its inter-layer services, facilitates the design and development of flexible and scalable DAQ and control shields. According to SensoTube, a WSAN system is capable of using more than one DCL shield. Each DCL shield can employ its own MCU. This ensures the ability for reconfiguration at the shield level, as well as the capability to execute measurement scenarios locally without disturbing other functional layer shields. Regarding the MCU, designers can make their choice either by selecting a commercial ultra-low power one, or by using an FPGA [113], or an analog mixed-signal processor [114]. Furthermore, with the introduction of the energy management service, the DCL shield(s) can be entirely powered ON or OFF, according to the application scenario. In this way, the maximum level of energy consumption control is achieved. Also, a dedicated MCU-enabled shield helps designers take all the necessary PCB design precautions for the highest performance in signal integrity (SI) and EMC. Moreover, SensoTube aims to provide the necessary polymorphism, in terms of signal connections, to allow for increased flexibility and versatility. In particular, each of the DCL shields can have its own analog channels and communication interfaces, through the mechanisms of the introduced inter-layer signals management service. For instance, a shield can be self-expanding without disturbing the neighboring functional shields, and can use terminal blocks for easy access to signals when connecting external sensors. With SensoTube, there is no limitation on processing power, analog channels, or communication interfaces.
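As an illustration of how a DCL shield with its own MCU could execute a measurement scenario locally while exploiting the energy management service, the sketch below duty-cycles a soil probe behind a load switch. All helper functions, pin numbers and timing values are assumptions standing in for a real HAL; only the power-gating idea itself is taken from the text above.

```c
/* Hedged sketch of a duty-cycled measurement on a DCL shield: the probe is
 * powered only around the sampling window, then the MCU returns to a
 * low-power state. The helpers are stubs for a real HAL; pin numbers,
 * warm-up time and the 15-minute period are illustrative assumptions. */
#include <stdint.h>
#include <stdio.h>

#define SOIL_PROBE_SWITCH  0        /* load switch feeding the probe            */
#define SOIL_PROBE_ADC     3        /* analog channel routed through the S-BEC  */

static void load_switch(int ch, int on)   { (void)ch; (void)on; }
static void delay_ms(uint32_t ms)         { (void)ms; }
static uint16_t adc_read(int ch)          { (void)ch; return 512; }
static void deep_sleep_ms(uint32_t ms)    { (void)ms; }   /* MCU low-power wait */

int main(void)
{
    for (int cycle = 0; cycle < 3; ++cycle) {       /* a real node would loop forever */
        load_switch(SOIL_PROBE_SWITCH, 1);          /* power the probe only when needed */
        delay_ms(500);                              /* assumed sensor warm-up time */
        uint16_t raw = adc_read(SOIL_PROBE_ADC);
        load_switch(SOIL_PROBE_SWITCH, 0);          /* cut the supply again */
        printf("soil raw = %u\n", raw);             /* would be handed to the WNL shield */
        deep_sleep_ms(15u * 60u * 1000u);           /* assumed 15-minute sampling period */
    }
    return 0;
}
```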
Wireless Networking Layer (WNL)

The establishment of a discrete functional shield for wireless data communication, firstly, allows designers to focus on the wireless networking aspects (e.g., routing protocols [115,116], operating systems [22], new trends [117], new technologies [118-120], etc.) and, secondly, supports the requirement for decoupling applications from the wireless networking field [91]. According to SensoTube, a WNL shield can be implemented on its own PCB with a dedicated MCU on-board. In this way, the WNL shield can collaborate with other functional shields for the sake of any particular application scenario. On this shield, any of the known design practices, i.e., chip-sets, SoCs, and modules, can be accommodated in order to implement the wireless data networking. Contrary to the existing architectures, SensoTube allows for the engagement of numerous WNL shields through its inherent expansion mechanism and its inter-layer services provisions. Thus, challenging implementations, such as heterogeneous communications, as in the case of [31], can be seamlessly accommodated in the WSAN system. Additionally, the specific inter-layer service provision for energy management allows the WNL shield to be energy aware of every single operation of its data radio communication sub-systems. The ability to have complete control and monitoring of energy is of crucial importance for real-world WSAN applications in the field. Also, a WNL shield, thanks to the holistic strategy for energy management achieved by the inter-layer energy management service, can be entirely powered ON or OFF by other functional layer shields, in order to minimize the energy consumption. On the other hand, the inter-layer service for signals management eliminates the resource constraints of the existing OSH architectures while, at the same time, providing polymorphism in the expansion mechanisms, in order to support maximum scalability, openness and reusability. This is very important when the various integrated communication modules (e.g., Bluetooth, WiFi, and ZigBee modules) are used in the design of WSAN nodes. In the existing expandable OSH architectures, such modules are serially interfaced with the main-board's MCU just to transmit and receive data; in this case, the modules are not fully exploited for the sake of the system. Actually, these modules are built around a reprogrammable MCU, the signals of which are made available on the module's miniaturized PCB. A WNL shield can fully exploit the capabilities of these modules. In particular, the analog and digital I/O signals of the modules can be routed to the BECs of the system and, through the use of the inter-layer service for programming and debugging, in-system firmware development becomes possible. Thus, the SensoTube WNL shields can achieve the integration of such modules in a homogeneous and uniform way.

Data Gateway Layer (DGL)

A WSAN gateway should be able to bridge the local wireless network (e.g., based on ZigBee, etc.) with other communication networks, using proper RF communication modules (e.g., WiFi, GSM/GPRS, GSM 3G/4G, etc.). In contrast to the existing architectures, SensoTube-based WSAN systems can have more than one data gateway channel in the same system through the use of multiple DGL shields. Thus, SensoTube can effectively support the general domain of interconnections to external networks [121]. For example, one DGL shield can be used for Internet access, while another DGL shield provides Bluetooth connectivity for a local user interface (Human-Machine Interface, HMI), and yet another DGL shield provides a wired interface via USB or RS-485 data buses for, e.g., local configuration and reporting. The master MCU of the SensoTube-based WSAN system (e.g., the MCU of the DCL, or the MCU of some other layer's shield) can direct the operations of the WNL and DGL shields through their MCUs, in order to achieve any data interconnection scenario, as sketched below. Furthermore, as in the case of the WNL shields, the DGL shields can fully integrate and exploit the inherent capabilities of modern integrated communication modules (see Section 3.2). Moreover, the DGL entity can support research and experimentation in challenging application areas such as the Internet-of-Things (IoT) [33], which, at least for now, appears in practice to be an Internet-of-Gateways (IoG). Similarly, the cyber-physical systems (CPS) [122] research field could also be facilitated in SensoTube-based systems. Towards this direction, the processing and memory resources required for local embedded web servers and other web technologies can be exclusively designed into the DGL shield without disturbing the operations of the other functional layers of the system.
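The following sketch illustrates, under stated assumptions, how a directing MCU might route different payloads to different DGL shields (an Internet uplink and a local Bluetooth HMI). The channel identifiers, the payload format and the dgl_send() helper are hypothetical; in a SensoTube system such commands would travel over the inter-layer communication buses described in Section 4.

```c
/* Illustrative sketch of routing reports through several DGL shields
 * (Internet uplink, Bluetooth HMI, wired RS-485). Names are assumptions;
 * SensoTube itself only defines the buses such commands would travel on. */
#include <stdio.h>

typedef enum { DGL_INTERNET, DGL_BLUETOOTH_HMI, DGL_RS485 } dgl_channel_t;

/* Stub standing in for an inter-shield command (e.g., over I2C or a UART). */
static int dgl_send(dgl_channel_t ch, const char *payload)
{
    printf("-> DGL %d: %s\n", (int)ch, payload);
    return 0; /* 0 = accepted by the gateway shield */
}

int main(void)
{
    /* A periodic report goes to the remote server via the Internet gateway,
     * while a status summary is pushed to the local HMI. */
    dgl_send(DGL_INTERNET, "node=12;soil_raw=512;batt_mv=3760");
    dgl_send(DGL_BLUETOOTH_HMI, "status: OK, next report in 15 min");
    return 0;
}
```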
Application-Specific Layer (ASL)

The ASL functional layer can be an application-specific shield. This discrete shield can accommodate any functional requirement of the overall WSAN system that does not conceptually fit into the other functional layers. The ASL entity can also be considered a reservation for future needs. In practice, designers could use an MCU-based ASL shield as the director of the rest of the functional shields, in order to execute the application scenarios. Such a design option can facilitate the reconfiguration of the system's intelligence and increase flexibility in development. Another possible use of this layer's shield could be the accommodation of various types of memory storage media, in order to store system measurements, data, operating parameters and execution logs. Such capabilities are very useful in real-world WSAN applications in agriculture, where the nodes' data has to be stored locally when the RF network is momentarily down.

Power Management Layer (PML)

Energy is a very critical factor for real-world WSAN applications, especially in the agricultural environment, and can influence the lifetime and the reliability of the overall application [18]. As the WSAN technology evolves, the need for power management is increasing [23]. Unfortunately, there are several trade-offs in the commercial WSAN solutions regarding the use of energy [11]. Traditionally, WSAN hardware solutions, whether based on the traditional architecture and COTS or on OSH architectures, have been designed without paying particular attention to the energy implications. Furthermore, regarding the provisions for the energy sources, these systems just provide some kind of connection through pins or screw-type terminal blocks and leave the users to take care of supplying power under their own responsibility. In practice, this can jeopardize the overall system's performance and reliability. Additional pressure for energy management comes from the need to exploit challenging energy-related technologies [34]. The WSAN nodes in agriculture, apart from using photovoltaic panels [123], can also make use of other, more sophisticated, energy harvesting techniques [124-126]. Apart from the mature battery types, the harvested energy can also be stored in relatively new media such as Li-Ion batteries [127], supercapacitors [94,128,129], hybrid ultracapacitors [42], or combinations of thin-film batteries and supercapacitors [130]. The spread of such technologies in real-world WSAN applications entails sound evaluation and modeling; otherwise they will be limited to pilots and demonstration implementations. In this context, the existence of a separate shield that accommodates all the energy requirements of the overall system is very critical. A SensoTube-based system could have more than one PML shield, which can be replaced on an occasional basis, in order to fulfill the scalable energy requirements of the system. With PML shields, power electronics researchers and designers have a discrete functional shield through which they can contribute towards the design of energy-optimized WSAN systems. At the same time, the PML shields ensure a unified and well-organized energy management that can guarantee the reliability and the lifetime of the WSAN system. Additionally, with the provision of the SensoTube inter-layer service for energy management, an MCU-based PML shield can, through the use of on-board electronic switches, power ON and OFF, in real time, all of the other functional shields, as illustrated in the sketch below.
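The sketch below illustrates the energy management service from the perspective of an MCU-based PML shield: each functional shield sits behind an electronic load switch that can be opened or closed in real time. The switch identifiers, the power_switch() helper and the example schedule are assumptions made purely for illustration.

```c
/* Hedged sketch of per-shield power gating as seen from a PML shield's MCU.
 * Pin assignments and the example scenario are assumptions. */
#include <stdio.h>

enum { SW_DCL, SW_WNL, SW_DGL, SW_ASL, SW_ETL, SW_COUNT };

static void power_switch(int id, int on)      /* stub for a GPIO-driven load switch */
{
    printf("shield switch %d -> %s\n", id, on ? "ON" : "OFF");
}

/* Example scenario: wake the DCL shield for a measurement, then the WNL
 * shield only for the transmission window, keeping everything else off. */
static void measurement_cycle(void)
{
    power_switch(SW_DCL, 1);   /* data acquisition */
    /* ... wait for the DCL shield to signal completion (interrupt line) ... */
    power_switch(SW_WNL, 1);   /* radio on only while transmitting */
    /* ... wait for transmission acknowledgement ... */
    power_switch(SW_WNL, 0);
    power_switch(SW_DCL, 0);
}

int main(void) { measurement_cycle(); return 0; }
```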
The incorporation of an MCU in the PML shield to manage the overall system's energy can further uplift the prospects for optimization.

Programming and Debugging Layer (PDL)

Programming and debugging are very important functions of a WSAN system [91]. For this reason, SensoTube makes specific provisions for them. In particular, one or more PDL shields can be installed in a SensoTube-based system, so as to accommodate the electronic circuits associated with the programming and debugging functions of the MCUs of the various functional shields. These can be removed from the WSAN system when development has been successfully completed and the system is ready for installation in the field. This facility results in energy saving in the final system. Additionally, since the deployed, unattended system then exposes no programming support, undesirable access to the firmware of the nodes is avoided. The PDL approach also reduces the cabling complexity in the final system, while it permits MCUs of the same technology (e.g., ARM-based, or MCUs from the same manufacturer) to share the very same PDL shield for their programming and debugging, so designers can deliver more compact, energy-optimized, and low-cost implementations. Another feature, which is very crucial in remote WSAN applications, is the ability to perform remote upgrades, or upgrades over-the-air (OTA) [131]. With this function, the network stack inside an MCU can be reprogrammed remotely [132]. With PDL shields, this function can be extended to the remote firmware upgrade of each of the on-board MCUs of the functional shields; of course, in this case, the PDL shield should not be removed from the final system. Furthermore, the ability of PDL shields to decouple the programming and debugging circuitry from the MCU-enabled shields allows for the design of particular circuitry that facilitates the connection of novel programming devices which also perform various statistics and energy profiling of the target MCU [133,134].

Evaluation and Testing Layer (ETL)

The behavior of WSAN systems differs substantially when they are deployed in the real-world application environment [21] and, in practice, this behavior cannot be simulated [135]. Moreover, the detection of possible faults is of crucial importance for a remote system [136]. Traditionally, WSAN designers and developers use various tools for evaluation and diagnostics, referred to as testbeds [29,137]. Ideally, as reported in [135], an evaluation tool should be scalable, flexible, accurate, repeatable, visible, cross-environment valid, and re-usable. Unfortunately, there are very few testbeds available today [138] and, on the other hand, they appear to be inappropriate for in-situ post-deployment testing [117,135]. Thorough studies of WSAN testbeds are reported in [139,140]. Through the use of the ETL shields, the SensoTube architecture allows for real-time, in-situ monitoring and testing of every single operation of particular circuits and procedures of the WSAN system. In other words, an ETL shield can be considered as a testbed inside the final system. A SensoTube-based system may incorporate more than one ETL shield. An ETL shield is not intrusive on other functional shields and can be easily removed at any time.
Among the most interesting testing operations that can be implemented on an ETL shield are in-system energy monitoring, control over the networking protocol execution, diagnosis of malfunctions in the firmware of the MCUs, energy storage monitoring, and reliability and lifetime anomaly detection [18,141]. Obviously, the ETL entity can open up new horizons for the characterization and modeling of WSANs based on the systems' behavior under real-world deployment conditions.

The SensoTube Architecture's Implementation Reference Model

The idea behind the principles of the proposed architecture was to have the WSAN system encapsulated inside a plain plastic tube, like those used for irrigation in agriculture. In fact, the very name SensoTube has its roots in this concept. The advantages of this approach are described in depth in Section 6. The definition of a fixed PCB design model is a prerequisite to enable the use of the proposed architecture. The design of the physical expansion mechanism has been accomplished by taking into consideration: the PCB form factor, the fulfillment of the operational requirements of the various functional layer shields, the reusability of the hardware shields, the simplification of modification and cabling, the maximum expandability and openness to support research and development, easy and low-cost board fabrication, and the provision of a standardized and uniform way to design SensoTube-based hardware shields (i.e., a design template).

Printed-Circuit Board (PCB) Model

The form factor of the SensoTube PCBs is determined by the requirement that the boards be placed inside a tube of 90 mm diameter, as illustrated in Figure 9. The 90 mm diameter allows for enough PCB space.
Certainly, the spacious PCB approach is not aligned with the notion of miniaturization in WSAN design [6] but, in practice, there are no restrictions on the physical dimensions of the PCBs of WSAN systems in real-life agricultural applications, where the usage of big waterproof plastic enclosures is common practice. Because the wall thickness of the commercially available 90 mm diameter tubes varies from 1.8 mm up to 3.2 mm, the diameter of the board is suggested to be 83.60 mm (i.e., 90 mm − 2 × 3.2 mm). This permits the shield's PCB to be seamlessly inserted even into the thickest of tubes. The cuts at the right and left sides of the PCB have been intentionally made in order to reserve enough space for any potential cabling among shields, the photovoltaic panel, externally located sensors, and batteries (battery cells should be located at the bottom of the tube, under the shield assembly).

Expandability and Inter-Layer Services Mechanisms

In order to support the inter-layer functionality of the SensoTube architecture for signals management, communication, energy management, and firmware programming and debugging, three types of expansion means have been designed and proposed, namely:
• The S-BEC for signals distribution management
• The P-BEC for energy monitoring, control and management
• The J-BEC for programming and debugging of JTAG-enabled MCUs

These expansion mechanisms are based on the popular BECs (i.e., pass-through pin-headers), enriched with critical technical enhancements. Figure 10 shows the SensoTube PCB model with its three different types of BECs positioned at their exact places. All the blue-colored area is at the disposal of the designer to implement any of the seven functional layers of the system.
Inter-Layer Signals Management Service Mechanism

According to the SensoTube architecture, signals management includes not just the physical connection among the various shields of the system, but also a mechanism for signal isolation and other auxiliary signal-connection alternatives. In the proposed PCB reference model, two 1 × 20-pin signal BECs, namely S-BEC 1 and S-BEC 2, have been used, as illustrated in Figure 11. The same figure also depicts the types of signals that have been assigned to these BECs. As the colors denote, the signals have been conceptually grouped into four functional categories, namely the communication signals (green), the digital and analog input and output signals (orange), the signals for programming and debugging through the JTAG standard interfaces (blue) [106], and the power supply signals (red). The predefined positioning of the signals on the BECs ensures standardization for the design of new shields by different developers. Forty pins of different types can completely cover the requirements of any WSAN functional shield. Furthermore, the introduction of four exclusive pins serving interrupt signals can significantly support the design of multi-processor applications; e.g., an MCU-enabled shield can wake up other shields from deep sleep mode, which is a common technique for reducing energy consumption. This provision can also support the adoption of challenging embedded systems design techniques, such as event-driven programming and Synchronous Finite State Machines (SFSM) [142], which can contribute to energy optimization at the system level.
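As an illustration of the dedicated interrupt pins, the hedged sketch below shows a shield MCU that parks itself in deep sleep and is woken when a neighboring shield asserts one of the shared S-BEC interrupt lines. The HAL calls are stubs and the pin naming is an assumption; only the wake-from-deep-sleep idea comes from the reference model.

```c
/* Sketch of wake-on-interrupt between shields via an S-BEC interrupt pin.
 * The helpers are stubs; a real port would register the ISR with the MCU's
 * interrupt controller and use its low-power modes. */
#include <stdint.h>
#include <stdio.h>

#define SBEC_INT0  0                       /* one of the four S-BEC interrupt pins */

static volatile uint8_t woken_by_peer = 0;

void sbec_int0_isr(void)                   /* would run when the line is asserted */
{
    woken_by_peer = 1;
}

static void attach_wake_interrupt(int pin, void (*isr)(void)) { (void)pin; (void)isr; }
static void enter_deep_sleep(void) { /* stop clocks until an enabled interrupt fires */ }

int main(void)
{
    attach_wake_interrupt(SBEC_INT0, sbec_int0_isr);
    enter_deep_sleep();                    /* consume almost no power while idle */
    if (woken_by_peer)
        printf("woken by another shield, run the requested task\n");
    return 0;
}
```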
Another novelty is the introduction of ten pins devoted to the power management of the expansion functional shields. In particular, there are five different voltage signal alternatives, two with predefined values, i.e., +3.3 Vdc and +5 Vdc, and three that can be defined by the developer, i.e., V_IN, V_BAT, and V_AUX, respectively. Each of these voltage input signals has its own ground pin, which is isolated from the rest of the ground pins. This is very useful for mixed-signal circuit design, because it allows the reduction of electrical noise interference. Any connections between different ground signals can be implemented on the PCB of any functional shield. The voltage signals should derive from one, or multiple, shields of the PML type. The two S-BECs' signal pins pass through all the connected functional shields. To overcome the aforementioned signals management constraints that exist in the expandable architectures (see Section 2), the SensoTube S-BECs provide polymorphism in terms of signal connections and routing. In particular, as Figure 12 depicts, two rows of through-hole pads have been added in parallel with the pads of S-BEC 1 and S-BEC 2. The pads of the internal rows are directly connected to the pads of the S-BECs, while the pads of the external rows can be connected to the signals of the shield's circuits. In this way, the signals coming from the S-BECs are mechanically disconnected from the signals of the shield.
As illustrated in Figure 13, by placing dual male pin-headers on the two available rows of pads, and through the use of shorting jumpers, the signals of the shield can be selectively connected to the signals of the S-BECs. This option is very important for the system's reconfigurability. Furthermore, in cases where the signals of the S-BECs must be remapped with regard to the signals of the shield, the developers can make their own wiring at the two rows of pads instead of using the male pin-headers. Additionally, extra BECs can be soldered in the external row of through-hole pads, in order to enable selected shield signals to be connected to the top and bottom neighboring shields. This is a secondary provision for local signal connections among shields. It can be considered a nested connection method, and allows a functional shield to have its own sub-functional shields without intervening in the rest of the system's functional shields. Figure 14 illustrates an example of the combination of a secondary BEC together with pin-headers and jumpers. In this example, just five of the S-BEC's signals have been connected to the shield's circuits, while six signals of the shield are ready for connection to its two neighboring shields (or to just one of them). All of these modifications, which are based on this particular usage of the two rows of pads, are not permanent in nature, do not impose limitations on board stacking, and can be performed very easily by the end-users of the shields, e.g., researchers and developers of any kind.
Also, there is a third alternative for signal connections, which is very convenient for system-level physical signal connections. One or more screw terminal blocks can be placed at the pads of the external rows, which are directly connected with the S-BEC signals (Figure 15). This facilitates the users' physical access to the various shields' signals by simple wiring instead of risky soldering. Such a function is particularly useful for easily adding and removing several types of sensors in WSAN agricultural applications.

The proposed signal management mechanisms are low-cost and easily implemented on the PCB. The addition of the extra rows of pads (two rows per S-BEC) can be easily hosted in WSAN hardware systems for agricultural applications, given that there are no space limitations in this application domain. Despite the space reserved by the added rows of pads, there is more than enough PCB space left for the development of the shield circuitry. With the aforementioned signals management mechanisms, the poor performance and low reliability of WSAN systems due to clumsy wiring of signal connections are minimized. On the other hand, the various shields can be designed as totally independent functional entities without signal connection and board expansion barriers. The SensoTube polymorphism in signals management is depicted in Figure 16.

From the designers' perspective, the methodology for incorporating the proposed mechanisms is very convenient and straightforward. In Figure 17, on the left and right sides, there are the sheet symbols of S-BEC 1 and S-BEC 2. In each of these sheet symbols there are the signal port entities, which represent both the common signals of the stacking BECs and the signals of the parallel pads row. The pads row signals are numbered PIN_1 up to PIN_40. In the middle of Figure 17 there is a third sheet symbol, which represents the new functional shield under design. Designers can choose to connect all, or just some, of the signals of their shield to the pads row's signals. Signals of similar function, e.g., UART transmit and receive signals, should be connected to the sheet port entities that are opposite the TXD and RXD signals of the BECs. There is no physical connection between the signals of the shield and the predefined signals of the BECs.
It is at the designers' discretion to use shorting jumpers (see Figure 13), secondary BECs (see Figure 14), or screw terminal blocks (see Figure 15) to route the signals of the shield. A detailed walk-through of the development steps using the SensoTube reference model is presented in Section 7.
Each of the above sheet symbols represents a unique schematic drawing file. The use of sheet symbols is a practical and convenient design method for the hierarchical structuring of a schematic and PCB project, as it allows designers to re-use ready-made drawings. This facility is common in the majority of electronic design software suites (e.g., Altium Designer, which has been employed in this study). Designers can repeatedly use the schematic sheet symbols and their PCB objects in a copy-and-paste fashion and put the emphasis entirely on the design of the circuits of the shields. Figures 18 and 19 present the schematic drawings of S-BEC 1 and S-BEC 2.
Inter-Layer Communication Service Mechanism

Inter-layer communication includes data and command transfers among the MCUs of the shields, and between MCUs and various integrated circuits, such as digital sensors, memory chips, analog-to-digital conversion chips, integrated RF modules, etc. In order to support such needs, three types of serial data communication means have been incorporated, namely the I2C-bus, the SPI-bus, and the UART port [143]. Physical access to them is provided through specific connection pins on the S-BECs of the system. Of the three, only the I2C-bus can be used for multi-processor communications, because it is a data bus which can support up to thirty-two devices in either a single- or multi-master topology [144]. With the use of I2C-bus extenders, the number of supported devices can be significantly increased [145]. In addition, the I2C-bus has proven very successful in various applications, such as machine-to-machine interconnection [146]. An additional option for multi-processor communication is the adoption of the CAN-bus, a well-known automotive communication standard [147], which is present in many of the ARM-based MCUs as an integrated peripheral. In this case, some of the general-purpose input/output pins should be reserved for the CAN-bus signals. In the proposed communication signal positions of S-BEC 1 there are three pairs of UARTs and two chip-select signals for SPI, in order to avoid the resource limitations inherent in most of the existing expandable platforms. In addition, SensoTube, with its polymorphism in signal connections, can ensure unlimited communication resources and, at the same time, it ensures maximum flexibility and openness.
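To make the inter-layer communication service more tangible, the minimal sketch below shows a directing MCU reading the latest sample from a DCL shield that acts as an I2C slave. The 7-bit address, the register map and the bus helpers are assumptions standing in for whatever I2C driver the chosen MCU family provides.

```c
/* Hedged sketch of an inter-layer exchange over the shared I2C-bus: a
 * directing MCU requests the latest sample from a DCL shield. The address
 * and register map are assumptions made only for the example. */
#include <stdint.h>
#include <stdio.h>

#define DCL_SHIELD_ADDR   0x21   /* 7-bit address assumed for the DCL shield  */
#define REG_LAST_SAMPLE   0x02   /* hypothetical register exposing the sample */

static int i2c_write(uint8_t addr, const uint8_t *buf, uint8_t len) { (void)addr; (void)buf; (void)len; return 0; }
static int i2c_read(uint8_t addr, uint8_t *buf, uint8_t len)        { (void)addr; (void)len; buf[0] = 0x01; buf[1] = 0xF4; return 0; }

int main(void)
{
    uint8_t reg = REG_LAST_SAMPLE, rx[2];
    i2c_write(DCL_SHIELD_ADDR, &reg, 1);        /* select the register ...  */
    i2c_read(DCL_SHIELD_ADDR, rx, 2);           /* ... then read two bytes  */
    uint16_t sample = ((uint16_t)rx[0] << 8) | rx[1];
    printf("DCL sample = %u\n", sample);
    return 0;
}
```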
In the proposed communication signal positions of the S-BEC 1 there are three pairs of UART signals and two chip-select signals for the SPI-bus, in order to avoid the resource limitations inherent to most of the existing expandable platforms. In addition, the polymorphism in signal connections of SensoTube ensures practically unlimited communication resources and, at the same time, maximum flexibility and openness.
Inter-Layer Programming and Debugging Service Mechanism
The embedded MCUs at the end-systems can be programmed through in-system programming (ISP). The SPI-bus is mainly used for this task, together with certain signals such as the reset, the voltage supply, and the ground signals of the devices that are to be programmed. SensoTube supports the ISP method by providing all of the SPI-bus signals at its S-BEC 1 expansion pins, i.e., the MOSI, MISO, CLK and SEL pins. Several MCUs of the interconnected functional shields can be programmed via the very same ISP circuit by addressing them through some of the general-purpose input/output pins of the S-BECs.
Regarding the debugging function, most in-system emulation and code tracing is traditionally accomplished using specific external devices which support the IEEE 1149.1 standard for boundary scan and are widely known as JTAG debuggers [106,148]. All the necessary signals for JTAG have been provided at the S-BEC 2 (see Figure 11). The JTAG debuggers are also used for programming. On the side of the end-system, a sizable JTAG connector of ten up to twenty pins must be permanently soldered on the PCB. Even though not all of the signal pins of a JTAG connector are used, soldering it to the end-system is mandatory in order to ensure connection compatibility with the JTAG devices. To overcome this design limitation, in SensoTube the JTAG signal pins can be routed to a particular PDL shield, on which the necessary JTAG connector resides. The PDL shield can be removed from the final WSAN system when the latter is ready for installation in the field.
In the case of ARM-based MCUs, which are becoming more and more popular in embedded systems [149], the IEEE 1149.1 boundary scan standard can support the debugging of two or more cores simultaneously. Therefore, through a single interface, designers are able to perform synchronized debugging of multi-core systems. The multiple cores can be either identical (symmetric multi-core processing, SMP) or different (asymmetric multi-core processing, AMP). The only drawback of multi-core debugging is the need to use two, instead of one, JTAG connectors at the end-systems in order to implement the necessary scan chain. In particular, the JTAG data output signal (TDO) of a target system must become the JTAG data input signal (TDI) of the next target system in the chain. To enable multi-core debugging, the SensoTube reference model has been enriched with the J-BEC expansion mechanism, with which the MCUs of the functional shields can have their JTAG_TDI and JTAG_TDO signals daisy-chained (see Figure 20). More specifically, two dual-pin connectors have been incorporated for this purpose. Since the through-hole BECs cannot be daisy-chained, two dual-pin surface-mount connectors have been employed, one soldered on the top side (the white connector shown in Figure 10) and the other on the bottom side of the PCB.
The signal connections between the two connectors are achieved by PCB metal-plated through-holes (known as signal vias). Every SensoTube functional shield should have its own J-BEC connectors soldered onto its PCB. In cases where this feature is not used on a shield, the JTAG_TDI and JTAG_TDO signals should be shorted, in order to allow the signals to pass through to the neighboring shields.
The J-BEC mechanism can be incorporated by designers by simply using the sheet symbol depicted in Figure 21, which represents the J-BEC circuit.
Inter-Layer Energy Management Service Mechanism
The powering of the shields has been designed so as to ensure the maximum versatility in development and experimentation. In particular, there are ten independent connection pins at the S-BEC 2 (see Figure 11), which provide all of the functional shields with the necessary voltage sources. The maximum current rating per BEC pin is around 5 A, which is more than enough for the vast majority of WSAN systems in agriculture or in other similar application domains (e.g., forestry, environmental monitoring, etc.). In addition to this typical power supply mechanism, a further mechanism has been introduced, referred to as the P-BEC.
The P-BEC has been designed to allow in-system monitoring and control of the energy in a WSAN node. The implementation of the P-BEC incorporates a 2 × 8 pins BEC. As shown in Figure 22a, the upper pins of the BEC are routed to a row of through-hole pads. The lower pins of the BEC have to be terminated at a PML shield. To ensure standardization in design, certain voltage names have been given to the lower eight pins of the BEC, i.e., +3.3 Vdc, +5 Vdc, V_BAT, and V_AUX0 up to V_AUX4. One or more of these voltages can be connected to the upper pins of the BEC by another shield, e.g., an ETL shield, which can monitor and control the energy flow to other shields. Next, with the use of a dual-pin header (P12 in Figure 22b), any shield can select one of the available voltage sources from the BEC. The pins of this pin-header can either be shorted with a jumper, in order to provide the voltage source directly to the circuits of the shield, or be used to engage circuits for current monitoring and/or circuits for switching the energy flow ON and OFF. Although the technique of breaking the voltage supply path in order to measure current flow is known in the embedded systems design area [150], the integration of this technique into WSAN systems is new.
Figure 22. The P-BEC mechanism for inter-layer power management services: (a) the schematic drawing; and (b) the three-dimensional view.
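To make the energy-management concept more concrete, the following minimal Arduino-style C sketch illustrates how an ETL shield could exercise a P-BEC rail. It is an illustrative sketch only: the high-side switch on digital pin 7, the shunt amplifier output on analog pin A0, the 0.1 Ω shunt value, and the gain of 50 are all assumptions and are not part of the SensoTube specification.

/* Illustrative sketch: an ETL shield switching and monitoring a +5 Vdc
   P-BEC rail. Pin numbers, shunt value and amplifier gain are assumptions. */
const int RAIL_SWITCH_PIN = 7;    /* drives a hypothetical high-side MOSFET switch */
const int CURRENT_SENSE_PIN = A0; /* output of a hypothetical current-sense amplifier */
const float SHUNT_OHMS = 0.1;     /* assumed shunt resistor value */
const float AMP_GAIN = 50.0;      /* assumed current-sense amplifier gain */

void setup() {
  pinMode(RAIL_SWITCH_PIN, OUTPUT);
  digitalWrite(RAIL_SWITCH_PIN, HIGH);  /* enable the +5 Vdc rail to the other shields */
  Serial.begin(9600);
}

void loop() {
  /* convert the ADC reading back to the load current in milliamperes */
  float vSense = analogRead(CURRENT_SENSE_PIN) * (5.0 / 1023.0);
  float currentmA = (vSense / (SHUNT_OHMS * AMP_GAIN)) * 1000.0;
  Serial.println(currentmA);

  /* example policy: cut the rail if the downstream shields draw too much */
  if (currentmA > 500.0) {
    digitalWrite(RAIL_SWITCH_PIN, LOW);
  }
  delay(1000);
}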
Figure 23 illustrates the sheet symbol of the schematic drawing of the P-BEC mechanism. The complete schematic drawing file is given in Figure 24. Signals with the "_In" suffix are considered potential power signals to the shield, whereas signals with the "_Out" suffix are those coming from other shields, e.g., from a PML shield. Figure 25 illustrates an example of P-BEC usage. In particular, a PML shield accepts +12 Vdc and converts it to +5 Vdc to power any shield that needs +5 Vdc for its operation. The +5 Vdc is then routed to the corresponding P-BEC pin. Through the P-BEC, an ETL shield can measure the current flowing to the other shields that use the +5 Vdc for their operation. Additionally, the ETL shield is able to switch the +5 Vdc voltage source ON or OFF. Such features enable the energy management of a group of functional shields. At the shield level, e.g., in the case of a DCL shield, the +5 Vdc voltage can also be controlled and monitored locally. The logic signals mentioned in Figure 25 can be digital output signals of the shields' on-board MCUs.
Support for Firmware, Software and Middleware
As explained above, the SensoTube architecture supports the development of WSAN firmware applications either in a distributed single-master MCU mode or in a multi-master collaborative mode. The proposed expansion mechanisms facilitate the use of all the popular development tools, such as programmers and debuggers, for any possible MCU. Moreover, the firmware can be remotely maintained and managed with the use of programming and upgrade over-the-air (OTA) techniques, which can be applied without physical intervention on the WSAN system. Regarding the development of the MCUs' firmware, designers and developers are free to use the software toolchains of their choice. Apart from the firmware, a WSAN system may also require the development of PC software applications, either for the in-house testing of the system, or for rapid control prototyping (RCP) (e.g., by the use of the Matlab or LabVIEW software development suites [63]), or for the implementation of the final application for the operation and administration of the system. A SensoTube-based WSAN system fully supports these three tasks through its wireless and wired data interconnections. For RCP in particular, any ARM-based SensoTube shield, through the use of a PDL shield, can enable the hardware-in-the-loop technique of Matlab/Simulink. In other words, the developer can build, execute, and test ARM-based MCU firmware using the Matlab platform. In cases of medium- up to very large-scale WSAN deployments, the software application development involves the use of middleware [91]. In this direction, several promising methodologies have been reported by the research community. Domain-specific modeling languages based on the Model-Driven Engineering (MDE) approach [151] for describing the application, and middleware for systems' virtualization [152], are only an indication of the current research trends in this WSAN software aspect.
In particular, as pointed out in [153], the application and services are required to be decoupled from the WSAN, i.e., from the wireless networking technical operations. In addition, as stated in [154], the implementation of a substantial middleware would require a layered architecture, through which the overall system could be decomposed into specific modules (layers). Therefore, the abstraction of the SensoTube architecture, with its foundation of seven functional layers, appears particularly convenient for the development of middleware. Specifically, the existence of local intelligence in the MCU-based functional shields, together with the inherent support for multi-processor distributed logic, can support the WSAN sub-system's modeling [155] and allows for the development of comprehensive and substantial libraries of APIs.
Systems Encapsulation and Installation
In the harsh environment of the agricultural domain, the WSAN nodes' housings, as well as their overall mechanical structure, are a key factor for the total robustness and reliability of the remote WSAN system [14]. The external influences that a WSAN node may suffer in an agricultural field include chemical influences (such as acids, etc.), dust, ice, corrosion, air moisture, aggressive constituents of rainwater (such as heavy metals, etc.), solar radiation (UV radiation and high temperature), soil salinity, contamination from birds and insects, contamination from micro-organisms (such as fungi, moss, etc.), and other factors related to air pollution. In order to protect the electronic circuits of the WSAN system from these external influences, enclosures designed for electrical and electronic systems are extensively used by both commercial system vendors and researchers. These enclosures are graded according to their resilience to dust and water (IEC IP codes) [156]. In practice, such enclosures are the only solution for water ingress protection in outdoor deployments, but they are quite expensive and they are not very convenient in terms of the interior configuration of the electronic circuits and other electrical parts, such as batteries. Figure 26 shows a typical experimental configuration of a WSAN node for an agricultural application.
In cases where a WSAN node requires a significant amount of energy autonomy, more than one battery cell has to be used on the spot, installed in multiple electrical enclosures at the expense of cost, cabling distribution order, and appearance. Additionally, these enclosures suffer from drilling and cutting, which are frequent operations during experiments, and extra care has to be taken in order to maintain their durability against dust and water. Another issue is the mounting of the enclosures on the metallic support poles. Figure 27 shows a typical WSAN node implementation with a solar panel, a water-proof electrical enclosure, and a metallic pole. In addition, significant complexity is usually added by the installation of the RF antennae, because the antennae must be installed in places where the electromagnetic signals are not influenced by the metallic materials of the WSAN node. This is the reason why the antennae are typically installed outside the enclosures or, in many cases, on additional support arms. Very often, all these issues become sources of reduced reliability and durability for the deployed system.
Keeping all the aforementioned issues in mind, the use of ordinary drain and water supply tubes is proposed here as an extremely convenient solution for environmental and agricultural WSAN system enclosures. Tubes of polyvinyl chloride (PVC), or unplasticized polyvinyl chloride (PVC-U), inherently provide the required soil and dust ingress protection. According to the proposed SensoTube architecture, the various expansion shields are placed within a plastic tube (Figure 28). The PCB of the SensoTube board model has been designed so as to fit within tubes of 90 mm in diameter. The technical specifications of the PVC tubes are shown in Table 3. The selection of the pressure tolerance, e.g., 4 atm or 6 atm, influences the thickness of the tube.
The very same tube also acts as the installation support pole. The height of the tube may vary according to the needs of the particular precision farming application. The WSAN system's boards can be placed at various heights within the tube. Developers can use tubes of smaller diameters under the board assembly to act as support spacers. The battery cells are suggested to be placed at the bottom of the tube; this helps keep the centroid of the tube underground. Moreover, the rich set of pipe management accessories, e.g., expansion adaptors, fittings, tees, sleeves, connectors, bends, flanges, etc., can be creatively used for sensor installation above the surface or underground. Figure 29a displays a SensoTube-based WSAN node in an orchard. As shown, the plastic tube has been used not only for the encapsulation of the electronic systems but also as a support pole.
The solar panel of the node has been easily adapted to the top cap of the plastic tube, as shown in Figure 29b. In contrast with traditional installation methods, such as that of Figure 27, in the proposed installation method the RF antenna is encapsulated within the tube and is thus fully protected from the external environment.
The advantages of the SensoTube approach for the WSAN hardware enclosures are numerous. Some of them are listed below:
• Ruggedness: The drain and water PVC tubes by default assure the desired resistance to water, chemicals, salinity, acids, etc.
• Non-metallic support poles: The use of the plastic tube as the support pole benefits the total system because the weight is greatly reduced. Furthermore, the material is inexpensive compared to traditional metallic constructions. In addition, it provides better lightning protection. Finally, it is not attractive to thieves looking for scrap metal.
• Underground installation: It is a robust enclosure solution for WSAN nodes in underground installations, where the whole node, or most of it, has to be buried underground [158].
• Internal temperature stability: The temperature of the air mass inside a PVC tube differs slightly from that of the open air, because this plastic material has a very low thermal conductivity. Also, the deeper the tube is installed underground, the greater the temperature difference becomes, since part of the enclosure is not directly exposed to the external environment and the temperature below the surface is almost constant during the day and night periods. Thus, for deployments in environments with high temperature and intense solar radiation, it is possible to keep the temperature of the enclosed electronics at a lower level than the open-air temperature. Such a feature is of great importance for the energy balance of the WSAN system.
• Easy deployment: PVC tubes are easily transported and, due to their convenient centroid, they do not require deep paddles and cableways for their support on the ground.
• Larger inner space: For example, in a 2.5 m PVC tube of 90 mm diameter and 2.7 mm thickness, the total useful inner volume is about 13,723 cm³, whereas the inner volume of a 170 mm × 170 mm × 75 mm electrical enclosure, such as the one displayed in Figure 26a, is just about 2100 cm³.
• RF antennae friendliness: The RF antennae are installed within the tube and are fully protected from the threats of the external environment. Additionally, the ability to use all the internal space of the tube facilitates the encapsulation of very long RF antennae, so it is easy to incorporate antennae from λ/4 up to λ (where λ is the wavelength of the radio signal, expressed in meters). For example, the wavelength of a 433 MHz RF signal is around 69 cm; in this particular case, the ideal antenna would be a 69 cm long wire. In general, antennae whose length is close to the wavelength of the incorporated RF signal benefit the RF signal propagation performance, and they allow for low-cost and low-energy antenna driving circuits.
• Zero RF signal attenuation: The PVC material does not block the propagation of radio signals. A typical example is the use of PVC-based constructions to hide the antennae of cellular networks on the terraces of blocks of flats in cities.
• Neat cabling: All the cables of the WSAN system can be tidily routed along the inner side of the tube. Hence, the cabling is protected from environmental influences.
• Greater energy storage unit performance: The temperature and solar radiation conditions at the bottom part of the tube, which is buried underground, permit batteries and several other alternative energy storage units to maintain their nominal efficiency, capacity, and lifetime.
• Farm machine friendliness: Tubes do not require shoring and cableways for their support. Thus, they allow the unrestricted movement of the various machines and equipment used in farm management.
Migrating Existing OSH Designs to SensoTube Architecture
The SensoTube architecture allows designers and developers to continue their implementations using the toolchains they already know, and to freely design their own circuits according to their experience and their applications' specific requirements.
SensoTube offers these groups of users the necessary expansion mechanisms with which they can successfully take the next step in their open-source designs towards the needed optimization and reliability.
Towards Energy Optimized MCU-Based Functional Expansion Shields
As to the question of whether any of the existing OSEP main-boards could be used as an MCU-based functional layer shield, or whether a WSAN system could be based exclusively on an existing OSEP main-board, the answer is yes, but the system would suffer from the constraints identified in Section 2 above and, most importantly, it would have very poor energy efficiency. To prove this statement, we chose three of the most popular OSH platforms today, namely the Arduino Uno Rev. 3, the Nucleo STM32L152, and the FRDM-KL25Z. The first one is an 8-bit MCU platform, whereas the last two are 32-bit ARM-based MCU platforms (see Figure 30). Our aim was to assess their energy efficiency for a battery-operated WSAN system deployed in an agricultural field.
The methodology of the test was to measure the current drawn by each one of the aforementioned platforms with their MCUs in full active and in deep sleep operation modes. The difference between these two current consumptions equals the current required by a functional shield that uses just the MCU circuits and no auxiliary circuits whatsoever. The current consumption in MCU deep sleep mode indicates the current consumption of the auxiliary circuits.
The power supply was chosen to be +5 Vdc provided through the USB ports of the three individual platforms. The alternative of providing external voltages greater than +5 Vdc through the external voltage inputs of the boards was rejected, because the voltage regulation circuitry of each board is implemented differently and has different energy efficiency. Therefore, the power supply through the USB ports ensures an equal treatment of the three boards. The full active state of the MCUs was realized by putting them in an endless loop using a while-loop programming structure. For the deep sleep mode, we used the minimum possible programming functions for each one of the MCUs. The firmware was easily implemented in the C programming language through two popular integrated development environments (IDEs), namely the Arduino IDE for the Arduino, and Mbed for the Nucleo and the FRDM, respectively.
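As an indication of what such test firmware can look like, the following Arduino-style C sketches are minimal reconstructions of the two test states for the AVR-based board; they are illustrative only, not the exact code of Figures 31 and 32, and the deep-sleep variant assumes the standard avr-libc sleep interface shipped with the Arduino toolchain.

/* Full-active test: keep the MCU spinning in an endless loop. */
void setup() {
}

void loop() {
  while (1) {
    /* busy-wait forever so the core never idles */
  }
}

The deep-sleep variant puts the core into its power-down mode; any residual current measured in this state is drawn by the auxiliary circuitry of the board rather than by the MCU itself.

/* Deep-sleep test: put the AVR core into power-down mode. */
#include <avr/sleep.h>

void setup() {
  set_sleep_mode(SLEEP_MODE_PWR_DOWN);  /* deepest AVR sleep mode */
  sleep_enable();
  sleep_cpu();    /* execution stops here until a wake-up source fires */
}

void loop() {
}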
Figures 31 and 32 present the particular code for the active and deep sleep modes of the MCUs for both IDEs. The results of the current measurements are presented in Table 4, where Ifa stands for the full-active current, Ids for the current in deep sleep mode, and Imcu for the maximum current consumption of the MCU circuitry. As revealed, the current consumption due to the operation of the various auxiliary circuitry on the boards (e.g., programming circuitry, sensors, LED indicators, etc.) is orders of magnitude greater than that actually required for operating the MCU circuitry. Hence, the existing open-source hardware main-boards are not suitable for battery-operated WSAN systems. On the other hand, following the abstraction concept of the SensoTube architecture, more energy-efficient WSAN systems can be built. Figure 33 indicates how much energy would be saved if an MCU-based expansion shield accommodated just the MCU circuitry.
Development Steps of a Functional Expansion Shield
As an example of designing a SensoTube-based shield, we describe the considerations and the development steps of a DCL functional shield. This shield will be used for agricultural applications. For this reason, it will have the necessary circuitry for interfacing with an external air temperature/humidity sensor, and the circuitry for interfacing with soil moisture sensors using the SDI-12 commercial standard (a single-wire data bus).
The specific development steps are:
(1) Creation of a new design project: Open a new design project in the electronic design automation (EDA) software tool, adding to it the two sheet symbols and their associated schematic files (see Figure 34). These drawing files contain the connections and parts implementing the S-BECs, the P-BEC, and the J-BEC, and they can be used as they are, without any change or extra work.
(2) Initiation of the new shield circuitry design: Create a new sheet symbol to accommodate the particular circuits of the DCL shield.
(3) Consideration and establishment of required signals: Create the sheet port entities reflecting the particular signal pin requirements of the specific DCL shield. For the temperature/humidity sensor, we have used the popular DHT22 device. This sensor will be installed outside of the system's enclosure and will interface with the DCL shield through two digital signal pins, namely the clock and the data. This sensor requires a +5 Vdc power supply. For the soil moisture sensor interface, just one digital signal pin is required according to the SDI-12 bus; similarly, this interface requires the +5 Vdc and ground signals from the DCL shield. For the MCU of the shield, we decided to use an AVR ATmega328, because it is the basic MCU used by the Arduino main-boards. After flashing the MCU with the Arduino bootloader firmware, the MCU will act as an Arduino main-board and developers can use the Arduino IDE for the application firmware of the shield. As Figure 34 illustrates, on the bottom left side of the DCL sheet symbol, five sheet ports have been added, namely the SDI-12, the TH_CLK (DHT22 clock), the TH_DATA (DHT22 data), and the +5 Vdc and ground shared by both interfaces. The rest of the sheet port entities are the remaining available signals of the MCU, which can be routed to the S-BEC 1 in order to be exploited in various application scenarios.
(4) Making the power management decisions: The next step is to make the necessary connections for the power management of the DCL shield. For this particular shield, we connect +5 Vdc to the P_Out port of the sheet symbol. With this option, the shield can be monitored and controlled by other dedicated shields (e.g., ETL shields) in terms of its energy consumption. Alternatively, we create
the +5 Vdc and GND_5V sheet ports so that the shield can be powered from the generic power signal pins of the S-BEC 2. The aforementioned ports and connections are illustrated in Figure 34.
Figure 34. The design of a new shield, e.g., the DCL shield, using the SensoTube design template.
(5) Considerations regarding the programming and debugging of the shield's MCU: In our example, for the programming and debugging of the AVR MCU, just the UART TX and RX signal pins are required, together with the Reset pin of the MCU. All of these signal pins have been added to the sheet symbol of the DCL shield, named RXD, TXD, and MCLR, respectively. Additionally, the SPI signal pins, which are also present in the contemplated sheet, can be used for the in-system programming of the AVR MCU. The programming and debugging circuitry, as proposed by the SensoTube architecture, ought to be hosted on a PDL shield. The sheet symbol of the J-BEC has intentionally been left unconnected to the DCL shield's sheet symbol, because there is no use of JTAG-based programming and debugging in this shield.
(6) Design of the schematic drawing of the shield's circuitry: The design is carried out following the datasheets of the incorporated components and the signal pin strategy decided in the previous development steps. For our design example, the DCL schematic drawing is given in Figure 35. In this drawing, one can notice the sheet port entities' names of the sheet symbol.
(7) Design of the printed-circuit board (PCB) of the shield: The PCB design must be carried out with respect to the proposed SensoTube PCB model (see Figures 9 and 10) in order to maintain the standardization of the form factor and the encapsulation aspects. The resulting DCL shield is given in Figure 36. The board is a regular double-layer PCB. The connections among the components were made manually, without auto-routing, and took a few hours; with auto-routing, this design task could take a few minutes.
(8) Fabrication and testing of the shield: Figure 37 shows how the new DCL shield looks after its fabrication. The fabrication of a regular double-sided PCB is very easy, very low-cost, and does not require any special processes. The green-colored screw terminal block placed on the bottom left of the shield (see Figure 37) provides the physical connections to the external temperature/humidity sensor and the SDI-12 bus soil moisture sensors.
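For bench testing of the fabricated shield, a minimal application firmware along the following lines can be flashed through the Arduino IDE. This is a sketch only: it assumes the widely used Adafruit DHT sensor library and a single-wire data connection of the DHT22 on digital pin 2, so the library choice and pin assignment are illustrative and must be adapted to the actual wiring of the shield.

/* Illustrative bench-test firmware for the DCL shield's ATmega328.
   Assumes the Adafruit DHT sensor library and the DHT22 data line on pin 2. */
#include "DHT.h"

#define DHTPIN  2      /* assumed digital pin wired to the DHT22 data line */
#define DHTTYPE DHT22

DHT dht(DHTPIN, DHTTYPE);

void setup() {
  Serial.begin(9600);  /* report readings over the UART routed to a PDL shield */
  dht.begin();
}

void loop() {
  float humidity = dht.readHumidity();
  float temperature = dht.readTemperature();   /* degrees Celsius */

  if (isnan(humidity) || isnan(temperature)) {
    Serial.println("DHT22 read failed");
  } else {
    Serial.print("T=");
    Serial.print(temperature);
    Serial.print(" C, RH=");
    Serial.print(humidity);
    Serial.println(" %");
  }
  delay(2000);  /* the DHT22 supports roughly one reading every two seconds */
}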
Additionally, plain dual-pin headers have been soldered to the rest of the pads rows nearby the stacking pass-through headers of the S-BEC 1 and S-BEC 2 in order to selectively connect the general signals of the MCU to the common BECs signals of the system. Decoupling from the Programming and Debugging Function The biggest portion of the energy wastage demonstrated in the beginning of this section, is due to the programming and debugging circuitry. According to the SensoTube architecture, this circuitry should be decoupled from the WSAN system through its implementation in the form of a PDL functional shield. As a proof of the proposed concept, we take the case of Arduino Uno Rev. 3 as a use case. We took the schematic file of this specific board from the original webpage of Arduino and we broken ιt down into three parts: the MCU circuitry ( Figure 38); the programming and debugging circuitry ( Figure 39); and, the power management circuitry ( Figure 40). The programming and debugging of the MCU of the Arduino Uno Rev. 3 is performed through the UART's TXD and RXD signal pins (see Figures 38 and 39). The SensoTube facilitates the routing of these two signals through the pins of the S-BEC1. Thus, it is very easy to decouple the circuitries of the MCU and the programming and debugging which can be realized in form of separated functional shields. On the other hand, as it can be seen in Figure 40, the circuitry of power management is devoted just to regulate the external supply voltage and serve power to the MCU circuitry. According to the SensoTube concept, the power management circuitry can be also decoupled from the main-board and to be accommodated onto a PML functional shield. In this way, there is also the opportunity to make such design options so as to satisfy the forecasted energy supply of all of the expansion shields of the system. Among the most significant advantages of the proposed functions decoupling there are the dramatic reduction of energy consumption at the functional MCU-based shield and the reusability of expansion shields due to their austere implementation. At the same time there are also advantages in terms of the cost of the decoupled implementations. Below we show Tables 5-7 with the partnumbers, descriptions and prices of the components used in the aforementioned three Arduino Uno Rev. 3 circuits. The prices are based on unit costs, and they taken from the Mouser Electronics' webpage (www.mouser.com). The two blue-colored shorting jumpers at the top left side of the board connect the I2C-bus signals of the MCU to the I2C-bus signals of the system's common BECs in order to allow the intra-shields communication. Additionally, plain dual-pin headers have been soldered to the rest of the pads rows nearby the stacking pass-through headers of the S-BEC 1 and S-BEC 2 in order to selectively connect the general signals of the MCU to the common BECs signals of the system. Decoupling from the Programming and Debugging Function The biggest portion of the energy wastage demonstrated in the beginning of this section, is due to the programming and debugging circuitry. According to the SensoTube architecture, this circuitry should be decoupled from the WSAN system through its implementation in the form of a PDL functional shield. As a proof of the proposed concept, we take the case of Arduino Uno Rev. 3 as a use case. 
The sub-total costs from the tables above are analyzed in Figure 41, where it is explicitly revealed that the cost of an Arduino MCU-based shield could be just 41.16% of the total cost of the Arduino Uno Rev. 3. Thus, the proposed decoupling of the programming and debugging function from the main-board can result in lower-cost implementations. This is very critical in the case of middle- and large-scale WSAN deployments in agriculture, where significant cost savings can occur. Design and Development Support of Multi-MCU Systems The SensoTube architecture provides the maximum possible support for multi-MCU functional shields, as described in Section 4.
In particular:
- It provides a plethora of communication peripheral signals through the S-BECs, e.g., three UARTs, an SPI-bus with two distinct enable pins, an I2C-bus, etc., which can meet the requirements of many MCU-based expansion shields;
- It provides four interrupt signals to support multi-MCU collaborative application scenarios and functions such as wake-up from deep sleep modes;
- It provides BEC signal rerouting or isolation mechanisms at each functional shield in order to avoid signal conflicts;
- It provides a variety of scalable power management configurations;
- It particularly supports 16-bit and 32-bit MCUs, such as MSP430 and ARM cores, by reserving specific BEC signal pins for Spy-Bi-Wire (SBW), Serial Wire Debugging (SWD), and JTAG-compatible cores; and
- It supports the programming and debugging of multiple MCU-based shields through flexible, energy-efficient, and cost-effective combinations.
Regarding the programming and debugging of multiple MCUs in the same system, we show two use cases. In the first use case, we have three functional shields, each with an Arduino MCU on-board (namely an AVR MCU from Atmel). To ensure concurrent firmware development, programming, and debugging of the three MCUs, we have plugged in three Programming and Debugging Layer (PDL) shields. Each of these PDL shields has a circuit identical or similar to that illustrated in Figure 39 (the programming and debugging circuit of the Arduino Uno Rev. 3) and can be connected via a USB port to a personal computer. On the personal computer there are three instances of the Arduino IDE, one per MCU of the contemplated system. In Figure 42 we show the exact signal connections among the aforementioned expansion shields. Dotted lines indicate the internal signal connections through the shared BECs of the system. In cases where there is no need for concurrent firmware development and testing, just one PDL shield is enough. Specifically, through the signal rerouting mechanism provided by SensoTube, the RXD and TXD signal pins of the PDL should be rerouted to the associated signal pins of the functional shield. In either case, when the programming and debugging is completed, the PDL(s) can be totally removed from the system.
A second use case includes several functional shields, each with an ARM-core MCU on-board. In this case, as shown in Figure 43, only one PDL shield is required for the programming and debugging of all the MCUs. This is achieved by exploiting the JTAG scan chain. The PDL shield, through its JTAG signal pins located in the S-BEC2 and in the J-BEC, is connected to a JTAG probe (e.g., the I-JET from IAR), and via this probe it interfaces with the personal computer. On the personal computer, any of the popular IDEs can be used for the firmware development; in this use case we chose the Embedded Workbench from IAR. This use case particularly demonstrates the novelty of the J-BEC, which acts as a pass-through BEC but in practice is comprised of two distinct pin headers (one female on the top side and one male on the bottom side of the board; see Section 4). Of course, the above use cases are just an indication of the real capabilities of SensoTube for multi-MCU support. For instance, a system could comprise any number of different types of MCUs, e.g., Arduino MCUs, ARM-core MCUs, MSP430 MCUs, etc., which through the use of one or several PDL shields could be in-system programmed and debugged.
Such a PDL shield, which can support AVR, MSP430, and ARM-core MCUs, is presented in the following sub-section. Design and Implementation of a PDL Shield The aim of this design was to create a single shield which could accommodate, firstly, the various types of JTAG connectors for the most popular MCUs and, secondly, the MCUs used in various Arduino main-boards, such as the AVR ATmega328 mentioned in the previous sub-sections. Table 8 presents the four different types of JTAG connectors that were incorporated in this PDL shield. Therefore, this PDL can be used to program and debug any ARM, ARM-Cortex, AVR (using Atmel's JTAG tools), and MSP430 MCUs (from Texas Instruments) that may exist on any of the SensoTube functional layer shields. The first column of Table 8 gives the specific names used for the aforementioned JTAG headers during the design of the PCB of this PDL shield. The implementation of the PDL shield was based on the design template explained in Sections 4 and 7.2. The sheet symbol of the circuitry of the new PDL shield is shown in the middle of Figure 44. As depicted, the sheet port entities created in this sheet symbol are relevant to the UART, the SPI, and the JTAG signals. Additionally, for extra reconfigurability, the UART TX and RX signals of the shield have also been routed to the other two pairs of UARTs which are present on the stacking BECs of the S-BEC 1. In the case of this PDL there are no power monitoring requirements, which is why the P-BEC mechanism is not connected to the sheet symbol of the shield's circuitry. The schematic drawing of the shield's circuitry is given in Figure 45. Obviously, an MCU does not need to be present in this PDL shield. Some extra pin-headers were added in order to ensure that the target processors could be powered either from the external JTAG devices or from the system's own voltage source. Furthermore, pin 19 of the JTAG ARM connector was handled so that it can be driven either at logic high (+5 Vdc) or low (0 Vdc, i.e., GND). This facilitates the challenging technique of power profiling, or power debugging, which is supported by the IAR JTAG debuggers [133].
In parallel with the JTAG facilities, this PDL shield can also support the use of other development hardware tools that may use the I2C-bus, the SPI-bus, or even the UART port. The signals of these interfaces already exist in the S-BEC connectors. The circuitry incorporates a UART-to-USB converter chip, i.e., the CP2102 from Silicon Labs, which can be used to program AVR MCUs that have previously been flashed with the Arduino bootloader. Alternatively, this converter can occasionally be connected to one of the three UARTs of the system in order to allow a shield to interface with a personal computer. This facility is very convenient during the system's firmware development processes. The final PCB design of this shield is presented in Figure 46. The semicircular cut at the top side of the board was intentionally made in order to accommodate, under the PDL shield, any external antenna that may exist on a WNL shield. The name Pelti was given to this board, inspired by the shape of ancient Greek archers' shields. The PCB was designed as a double-layer board. All the S-BEC signals have been routed to screw terminal blocks for easy access during the development phase. Also, because a PDL shield is always placed at the top of the shield stack in order to allow the easy insertion of the programming and debugging connectors, there is no need to place the P-BEC and the top part of the J-BEC. The Pelti PDL shield, after the manufacture of its PCB and the insertion of its components, is illustrated in Figure 47. Figure 48 shows the PDL shield connected to a WNL shield in order to program and debug the CC430F5137 MCU of the WNL shield through the MSP-FET430UIF tool of Texas Instruments.
The PDL shield is connected onto the WNL shield through the S-BECs and the J-BEC for easy access to the required JTAG signal pins. Cost Implications The SensoTube architecture can significantly influence the costs associated with the most critical aspects of a WSAN application. Both direct and indirect costs can be decreased. The benefits from the reduction of costs can be found in the following three areas of interest: (1) Cost reduction at the development phase: The use of comprehensive functional building blocks to build a WSAN system can reduce the traditionally long development periods, especially in cases where diverse expertise and skills are required. Thus, the labor cost is decreased and the time-to-market can be shortened too. The ability of SensoTube to allow designers to use established as well as new innovative tool chains also helps in this direction. Moreover, the decomposition of the system into its functions, which are implemented in the form of independent shields, can reduce the NRE costs associated with the development of prototype systems because design changes can be focused on particular parts of the system. (2) Cost reduction at the system level: The increased flexibility, reconfigurability, and reusability ensured by the proposed architecture can positively influence the system's total cost. In particular, the introduction of centralized provisions for the power supply and for programming and debugging releases hardware subsystems from integrating superfluous circuits. On the other hand, the ability to add and remove functional building blocks can increase the value of the investment during the lifetime of the application, because any future change can be limited to the purchase of a single expansion shield, avoiding the need to change the whole system. Additionally, the adoption of plastic PVC tubes for the final encapsulation of the system, instead of special expensive electrical boxes, can help to minimize the cost per WSAN node. (3) Cost reduction at the application domain: A serious problem in the agricultural domain is the theft of WSAN nodes due to their metallic support poles, which can be sold for scrap. This can be prevented by the proposed plastic-tube-based encapsulation, which is simultaneously used as the support pole of the system. Moreover, the proposed approach can significantly ease the handling and transportation of the WSAN nodes due to the reduction of the system's weight.
Discussion and Conclusions A new architecture for the hardware design of WSAN systems was presented in this work. The aims of the SensoTube architecture, as defined in Section 3, have been successfully met. The vast majority of existing end-solutions and COTS-based WSAN systems have been designed based on the traditional architecture (see Section 1, Figure 1), which places the emphasis almost entirely on the wireless networking aspects. This is reasonably explained by the fact that, a decade ago, WSAN technology was very new and efforts had to be directed towards the design of hardware that would primarily support the development, evaluation, and demonstration of that revolutionary technology. Nowadays, when WSANs can be considered a mature technology and interest has shifted from the core technology to its embodiment in real-world applications, the rigidity of the traditional architecture appears to constitute a significant barrier to the expansion of WSAN solutions. The abstraction of the SensoTube architecture provides the necessary design framework with which the real needs of a WSAN system can be accommodated, so that each functional aspect of the system has a clear, predefined space and can be tidily implemented without conflicting with other functions of the system. In particular, the seven functional layers proposed are: the data acquisition and control layer (DCL), the wireless networking layer (WNL), the data gateway layer (DGL), the application-specific layer (ASL), the programming and debugging layer (PDL), the power management layer (PML), and the evaluation and testing layer (ETL). Thus, hardware designers, as well as developers of different specializations, can know in advance how to enter the WSAN field and to which particular function their design contribution belongs. Hence, the proposed functional abstraction can support the integration of several challenging technologies that could potentially have a positive impact on the design of real-world, application-oriented WSAN systems. Regarding the practical side of the hardware design, the ability of the seven SensoTube functional layers to be realized in the form of discrete expansion shields ensures that the benefits of the popular open-source hardware platforms will be experienced by all those involved. However, it is of little use to design any of the new functional shields merely as another expansion shield for one of the existing expandable platforms because, although these platforms are very convenient for small-scale prototyping and proof-of-concept implementations, in practice they cannot be considered a reliable solution for real-world WSAN applications, such as in the case of agriculture (see Section 2). Also, an insurmountable obstacle to the adoption of the existing expandable OSH platforms is that they do not allow for real scalability and expandability, which is a key factor for the accommodation of a hardware abstraction such as the one proposed by SensoTube.
According to the architecture of these platforms, the type and number of the extension shields are determined by the features of the MCU of the main-board, so it is very common for designers to face resource limitations, such as in processing power, program memory size, input/output signals, or communication interfaces, and the only way to respond is to replace the main-board with one that has enhanced features, thereby jeopardizing the compatibility with the already existing expansion shields. Moreover, practically none of the existing main-boards support the engagement of MCU-based shields, they do not address how the expansion shields will fulfill their particular energy needs, and they do not pay any attention to the form factor of the expansion shields. Therefore, there are serious risks and difficulties in WSAN systems based on these architectures. In order to overcome this main-board-centric nature of the existing expandable OSH architectures, the SensoTube architecture introduces the concept of inter-layer services. In particular, there are four inter-layer services in SensoTube: the inter-layer signal management service, the inter-layer communication service, the inter-layer programming and debugging service, and the energy management service. These services are easily implemented by the use of BECs and with some enhancements at the PCB design level (see Section 4). With these four straightforward and comprehensive supportive mechanisms, it can be guaranteed that a WSAN system can be built not with partly suitable expansion shields containing superfluous circuitry, but with dedicated functional shields. SensoTube, with its seven functional layer shields together with the four inter-layer services, can be used to decompose a WSAN system into its vital functions and ensure the desirable separation of concerns. Furthermore, the introduction of polymorphism in the signal connections, together with the anticipation of shields' local intelligence by supporting multi-processor collaborative development, facilitates the embedding of any challenging technology into the WSAN system. For instance, it was explained how to exploit the SensoTube architecture in order to host, on an occasional basis, challenging features such as energy monitoring and control, over-the-air programming, and so on. Fortunately, the recent progress in MCU technology allows for several ultra-low-energy solutions, so there is practically no change either in the system's energy balance or in the manufacturing budget. On the contrary, through the use of software and hardware techniques such as exploiting idleness, the multi-processor approach can contribute towards a reduction in energy consumption. Moreover, the SensoTube architecture can essentially support the modeling of the component functions of a WSAN system, relying not on simulation and arbitrary approaches but on the real behavior of the operational entities under post-deployment, real-world conditions. The ability for realistic modeling, together with the energy management insights of SensoTube-based systems, can lead to the design of truly energy- and function-optimized WSAN solutions that will meet the demands of applications not only in the agricultural domain but also in domains such as forestry, environmental monitoring, wildlife monitoring, and others.
To further support real-world deployments in agriculture, particular emphasis was given to the encapsulation of the systems: plastic irrigation tubes were chosen as the system encapsulation because they constitute a novel, low-cost, and extra-durable medium that not only withstands the harsh agricultural environment but also allows for better accommodation of antennae, energy storage media, etc. (see Section 6). In conclusion, SensoTube is an open architecture that allows the design of scalable, flexible, reliable, and optimized WSAN systems while, at the same time, ensuring enhanced versatility by facilitating the adoption of novel and challenging technologies by multi-skilled designers. As presented through certain use case examples (see Section 7), the migration of existing open-source designs to the SensoTube system entails several advantages in terms of energy efficiency, explicit circuit design, multi-MCU concurrent development support, and low cost. The cost of implementation has been kept to a minimum because the proposed novelties have been limited to the system's physical layer, i.e., to the PCB design. With the SensoTube architecture, WSAN stakeholders (see Section 2) can have the maximum expandability, scalability, reconfigurability, reusability, and standardization needed in order to fulfill their particular requirements. Additionally, as revealed by the literature review (see Section 5), the SensoTube architecture can be a significant contribution towards the hardware abstraction needed for the development of sound middleware. Future work on the subject includes the design and implementation of an extensive series of functional layer shields and the development of the hardware abstraction layer (HAL) set of APIs, with emphasis on the support of middleware requirements.
2017-05-08T07:04:21.904Z
2016-08-01T00:00:00.000
{ "year": 2016, "sha1": "1b559668ca277ee9dab043c4f6981eb7c02caee5", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/16/8/1227/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1b559668ca277ee9dab043c4f6981eb7c02caee5", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Engineering", "Medicine" ] }
264454118
pes2o/s2orc
v3-fos-license
Acute-on-chronic liver failure induced by antiviral therapy for chronic hepatitis C: A case report BACKGROUND There have been no reports of acute-on-chronic liver failure (ACLF) during treatment of chronic hepatitis C (CHC) with direct-acting antivirals (DAAs). CASE SUMMARY We report a 50-year-old male patient with CHC. The patient sought medical attention from the Department of Infectious Diseases at our hospital due to severe yellowing of the skin and sclera, which had developed 3 mo previously; he had attended two hospitals consecutively without the cause of the liver damage being found. It was not until 1 mo ago that he was diagnosed with CHC at our hospital. After discharge, he was treated with DAAs. During treatment, ACLF occurred, and timely measures such as liver protection, enzyme lowering, anti-infective treatment, and suppression of inflammatory storms were implemented to control the condition. CONCLUSION DAA drugs significantly improve the cure rate of CHC. However, when patients have factors such as autoimmune attack, coinfection, or unclear hepatitis C virus genotype, close monitoring is required during DAA treatment. INTRODUCTION Hepatitis C is an infectious disease caused by hepatitis C virus (HCV). HCV exposure can cause acute hepatitis C, which is defined as the 6-mo period after HCV exposure. Patients who fail to spontaneously clear the virus during acute infection develop persistent infection, which can cause liver inflammation and other serious liver damage. Chronic hepatitis C (CHC) occurs in 50%-80% of patients [1], and 5%-30% of CHC patients develop liver cirrhosis, liver failure and even hepatocellular carcinoma within 20-30 years [2]. Therefore, early diagnosis and treatment of CHC are very important. Over the past 10 years, direct-acting antivirals (DAAs) have revolutionized HCV treatment, increasing cure rates from < 50% to > 95%. However, we report a CHC patient in our hospital who developed acute-on-chronic liver failure (ACLF) during DAA treatment. Chief complaints A 50-year-old male patient presented with yellow staining of the skin and sclera, poor appetite and fatigue for 1 wk. History of present illness Symptoms started 1 wk before presentation with yellow staining of the skin and sclera, poor appetite and fatigue.
History of past illness Three months ago, due to yellow staining of the skin and sclera, the patient went to two tertiary hospitals for consecutive visits. Examination showed severe liver damage, but the following causes were excluded: viral hepatitis (negative for hepatitis A, B, C and E); autoimmune liver diseases (negative for autoimmune hepatitis antibody 1, autoimmune hepatitis antibody 2, immunoglobulin quantification and IgG4); other infections [negative for rubella virus, Epstein-Barr virus (EBV), cytomegalovirus (CMV), and herpesvirus]; Toxoplasma gondii; and hepatolenticular degeneration (negative for ceruloplasmin). Endoscopy showed chronic nonatrophic gastritis. Abdominal ultrasound showed rough echo in the liver parenchyma. Upper abdominal magnetic resonance imaging (MRI) showed hepatitis or liver injury, reactive cholecystitis, and splenomegaly. Liver biopsy showed edematous liver cells, focal necrosis, scattered lymphocyte and neutrophil infiltration, and chronic inflammatory cell infiltration in the portal area with fibrosis, in line with chronic hepatitis grade 2 and stage 1. Bilirubin level gradually increased after conservative treatment with drugs, and the patient was admitted to our hospital. The admission examination showed that total bilirubin was increased to 389.5 μmol/L, hepatitis C antibody was weakly positive, and hepatitis C RNA load was 2.281 × 10³ IU/mL. The HCV genotype could not be typed due to the low viral load. The patient's other examinations revealed no abnormalities. He was diagnosed with severe CHC and received medication (glycyrrhetinic acid monoamine S 160 mg ivgtt qd, Shuganning injection 10 mL ivgtt qd, ursodeoxycholic acid capsule 250 mg po tid) and artificial liver treatment (plasma exchange + double plasma molecular adsorption system) on March 31 and April 2, 2022, respectively. The patient was discharged on April 18, 2022 with improved liver function. After discharge, he was treated with sofosbuvir-velpatasvir 400:100 mg one tablet/d. Personal and family history The patient had smoked 10 cigarettes/d for > 20 years, had no history of drinking, drug use, or blood transfusion, and had no family history of CHC. Physical examination On physical examination, the vital signs were as follows: body temperature, 36.2 °C; blood pressure, 127/78 mmHg; heart rate, 88 bpm; respiratory rate, 20 breaths/min. He also had severe yellowing of the whole-body skin and sclera. Moist rales were heard in both lungs on auscultation. Imaging examinations Upper abdominal MRI (plain scan + enhancement + hepatobiliary pancreatic MRI water imaging) showed liver cirrhosis, splenomegaly, portal hypertension (maximum diameter of the main portal vein approximately 15 mm) and suspected cholecystitis. Multiple lymph nodes in the abdominal cavity and retroperitoneum were observed. Chest computed tomography showed bilateral lower lobe pneumonia. Further diagnostic work-up Hepatitis B surface antigen, hepatitis A and E virus antibodies, ceruloplasmin, transferrin saturation, EBV DNA, CMV DNA, α-fetoprotein and thyroid function were all negative. Autoimmune hepatitis antibodies showed positivity for antinuclear antibody and anti-mitochondrial antibody M2; immunoglobulin quantification was negative. Samples were sent to Jinyu Medical Test Center to examine the eight items of autoimmune hepatitis antibody, among which anti-mitochondrial subtype-2 antibody was positive. Dynamic monitoring of liver function, coagulation function, and routine blood changes during hospitalization is shown in Table 1.
FINAL DIAGNOSIS Combined with the patient's medical history, the final diagnosis was: ACLF, CHC and pulmonary infection. TREATMENT The patient was admitted to the hospital on April 27, 2022. In order to rule out drug factors, sofosbuvir and velpatasvir were discontinued. The patient received hepatoprotective treatment (magnesium isoglycyrrhizinate injection 150 mg ivgtt qd, Shuganning injection 10 mL ivgtt qd, ursodeoxycholic acid capsule 250 mg po tid) and anti-infective treatment (ceftazidime 2 g ivgtt q8h). However, on April 30, the patient's bilirubin continued to rise to 415.7 μmol/L, PTA continued to decrease to 33.1%, and blood cell counts were WBCs 13.91 × 10⁹/L, atypical lymphocytes 6%, neutrophils 9.88 × 10⁹/L, and lymphocytes 2.64 × 10⁹/L. The patient had depression, the gastrointestinal symptoms worsened, and hiccups occurred. ACLF was considered. Therefore, the antibiotics were adjusted to piperacillin-tazobactam sodium 3.75 g ivgtt q8h to continue antibacterial treatment, aciclovir 0.25 g ivgtt q8h was given as antiviral treatment, and hormones (methylprednisolone 40 mg qd) were given to suppress immunity over a course of 5 d. Simultaneously, artificial liver (plasma exchange + double plasma molecular adsorption system) adjuvant therapy was administered. On May 5, routine blood examination showed normal results, and EBV and CMV DNA were negative. As the patient's atypical lymphocytes only appeared once, they were considered secondary to immune disorders. Therefore, ganciclovir and piperacillin-tazobactam sodium were discontinued, but hepatoprotective treatment was continued. Re-examination on May 17 showed that liver function parameters were ALT 27 U/L, AST 36 U/L, and total bilirubin 77.4 μmol/L; therefore, the patient was discharged from hospital on May 19, 2022. He continued to take sofosbuvir-velpatasvir 400:100 mg 1 tablet/d for antiviral treatment. OUTCOME AND FOLLOW-UP The outpatient department confirmed that the patient's liver function was normal on June 23, 2022, and he received antiviral treatment until August 20. Follow-up to March 1, 2023 showed that HCV RNA was consistently below the detection limit, and liver function, routine blood examination and α-fetoprotein were normal. Abdominal ultrasound showed that the light spots on the liver had thickened; instantaneous elastic imaging of the liver showed a hardness of 15.7 kPa and fat attenuation of 247 dB/m. Unexpectedly, autoimmune hepatitis and mitochondrial antibodies were negative. DISCUSSION Sofosbuvir-velpatasvir is a combined oral DAA. Sofosbuvir is a nucleotide analog NS5B polymerase inhibitor that inhibits viral replication by targeting key steps of RNA replication, while velpatasvir is a second-generation NS5A replication complex inhibitor with high antiviral activity against all HCV genotypes [3]. In noncirrhotic patients, the sustained virological response (SVR) rate can reach 95% [4]. Even in patients with HCV-related decompensated cirrhosis, the SVR rate is > 80% [5]. The patient had no previous underlying diseases and only took sofosbuvir-velpatasvir following the diagnosis of hepatitis C, without drug interactions. There is no pharmacokinetic basis for liver damage due to sofosbuvir-velpatasvir. The occurrence of ACLF during DAA treatment may be related to the following factors.
Autoimmunity The emergence of autoimmune diseases may be related to viral infection, especially chronic viral infection. HCV infection has long been suspected to be associated with the development of autoimmune diseases, as demonstrated by cryoglobulinemia [6], and antineutrophil and anti-smooth muscle actin antibodies are the most frequently detected autoantibodies [7]. The mechanisms by which these antibodies are produced are not fully understood, but HCV can trigger a B-lymphocyte-mediated immune response shortly after immune system activation. B-lymphocyte-driven humoral immunity produces specific antibodies that are unable to inactivate virus production and replication. Therefore, the continuous replication of HCV results in constant stimulation of B cells, which may lead to B-cell dysfunction and abnormal antibody production [8]. Alternatively, the presence of autoantibodies in HCV patients may be caused by chronically apoptotic hepatocytes. Viruses, unlike bacteria and fungi, cannot reproduce on their own and must use host-cell processes to replicate, as they cannot synthesize their own proteins [9,10]. However, the pathogenic mechanism of the virus and whether antibody production truly represents an independent autoimmune disease have not been fully elucidated. During the progression of CHC in the present case, autoimmune hepatitis antibodies showed positivity for anti-nuclear antibody and anti-mitochondrial antibody M2, and atypical lymphocytes briefly appeared, which is rare in viral hepatitis [11]. Timely use of artificial liver replacement therapy and hormonal suppression of immunity can control disease development, indicating that autoimmunity plays an important role in the progression of hepatitis C to liver failure. When hepatitis C was cured, the above antibody test results were negative. It can be seen that these antibodies became negative after HCV clearance and were not an independent factor in liver function damage. Bacterial infection Bacterial infection may be another important reason for the rapid progression to ACLF in this case. The Asian Pacific Association for the Study of the Liver defines ACLF as an acute liver injury characterized by jaundice [serum bilirubin ≥ 5 mg/dL (85 μmol/L)] and coagulation disorders (INR ≥ 1.5 or PTA < 40%), accompanied by clinical ascites and/or hepatic encephalopathy within 4 wk, with or without prior diagnosis of chronic liver disease/cirrhosis, and associated with a high 28-d mortality rate [12]. It is well known that bacterial infection is the most common precipitating factor of ACLF. One study demonstrated that the overall rate of ACLF related to bacterial infection was 48%, but the rate varied between geographical regions (38% in southern Europe, and 75% in the Indian subcontinent) [13]. In particular, spontaneous bacterial peritonitis, pneumonia, or infections caused by extensively drug-resistant bacteria are more frequently associated with ACLF. Timely empirical antibiotic treatment can change the balance between bacteria and the host, which is beneficial for bacterial clearance. Our patient had a pulmonary infection during his second admission, and the condition was still progressing after ceftazidime treatment for the infection. The infection was gradually controlled by changing to piperacillin-tazobactam sodium, and the bacteria that may have caused the patient's pulmonary infection were sensitive to piperacillin-tazobactam sodium.
Refractory genotypes and resistance-associated variants The primary goal of DAA therapy is SVR, which is defined as undetectable HCV RNA 12 wk after the end of antiviral therapy [14,15]. Viral resistance is a major cause of virological failure in patients receiving DAAs for CHC. Selection of the DAA regimen needs to take account of the drug resistance of the virus and the HCV genotype. The proportion of patients with HCV genotype 3 was higher in the population who experienced DAA failure [16]. The current first-line drugs for hepatitis C, sofosbuvir and velpatasvir, have high antiviral activity and a high resistance barrier, but resistance-associated variants may exist in patients with HCV genotype 3 and other rare HCV genotypes. This patient was diagnosed with a low viral load, the genotype was not detected during two consecutive hospitalizations, and ACLF occurred during treatment. After active treatment, the patient improved and then continued to take antivirals. Follow-up showed that the treatment was effective and no drug resistance had developed, but the genotype was still unknown. The global distribution of HCV genotypes is regional, with 1b being the main genotype in China. At the same time, there are significant population differences in the distribution of HCV genotypes, and transmission methods may also vary. Our patient had no history of blood transfusion or drug use. During antiviral treatment, liver failure occurred, and the genotype was not detected during two consecutive hospitalizations. We therefore speculate that this patient carries a rare HCV genotype, or perhaps a seventh genotype that we do not yet know. CONCLUSION DAAs have significantly improved the cure rate of CHC. However, this case also suggests that there is still a risk of liver failure during CHC treatment with DAAs if there are factors such as autoimmunity, combined bacterial infection, or unclear HCV genotype, and timely therapy requires close monitoring. FOOTNOTES Author contributions: Zhong JL contributed to manuscript writing and editing, and data collection; Zhao LW contributed to data analysis; Chen YH and Luo YW contributed to supervision; all authors have read and approved the final manuscript. Supported by the National Natural Science Foundation of China, No. 82160558; and the Zunyi Science and Technology Fund. Informed consent statement: Informed written consent was obtained from the patient for publication of this report and any accompanying images. Conflict-of-interest statement: The authors declare that they have no conflict of interest to disclose. CARE Checklist (2016) statement: The authors have read the CARE Checklist (2016), and the manuscript was prepared and revised according to the CARE Checklist (2016). Open-Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: https://creativecommons.org/Licenses/by-nc/4.0/ Country/Territory of origin: China ORCID number: Ya-Wen Luo 0000-0002-8845-1265. S-Editor: Yan JP L-Editor: Webster JR P-Editor: Xu ZH
2023-10-26T15:04:02.891Z
0001-01-01T00:00:00.000
{ "year": 2023, "sha1": "a96026453c6b1834a09ec4aac2e14caa32ff9ec1", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.12998/wjcc.v11.i30.7463", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "5d7ed7a9c73808d6611656c7da2bc2b41e822c40", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235228748
pes2o/s2orc
v3-fos-license
Advances in Atypical FT-IR Milk Screening: Combining Untargeted Spectra Screening and Cluster Algorithms Fourier-transform mid-infrared spectrometry is an attractive technology for screening adulterated liquid milk products. So far, studies on how infrared spectroscopy can be used to screen spectra for atypical milk composition have either used targeted methods to test for specific adulterants, or have used untargeted screening methods that do not reveal in what way the spectra are atypical. In this study, we evaluate the potential of combining untargeted screening methods with cluster algorithms to indicate in what way a spectrum is atypical and, if possible, why. We found that a combination of untargeted screening methods and cluster algorithms can reveal meaningful and generalizable categories of atypical milk spectra. We demonstrate that spectral information (e.g., the compositional milk profile) and meta-data associated with their acquisition (e.g., at what date and which instrument) can be used to understand in what way the milk is atypical and how it can be used to form hypotheses about the underlying causes. Thereby, it was indicated that atypical milk screening can serve as a valuable complementary quality assurance tool in routine FTIR milk analysis. Introduction Fourier-transform mid-infrared spectrometry (FT-IR) is a recognized and widely used method to rapidly determine the compositional quality of raw milk and other liquid milk products. With the use of sophisticated multivariate statistical models, it is possible to calculate from the FT-IR spectrum the concentration of fat, protein, lactose, urea, fatty acid groups, individual fatty acids [1][2][3], and other milk characteristics such as pH and freezing point. FT-IR spectrometry is also an attractive technology to screen for possible adulteration of liquid milk products [4][5][6]. In fact, various papers have been published on how infrared spectroscopy can be used to screen spectra for atypical milk composition. Approaches for screening atypical FT-IR milk spectra can be classified into targeted and untargeted methods [4]. Targeted methods rely on mathematical models trained to detect the presence-or estimate the quantity-of specific adulterants in the milk. Development of these models requires the adulteration of a sufficiently large and representative collection of milk samples with an adulterant, possibly at different concentrations. Due to the characteristic effect of the adulterant on the milk's FT-IR spectrum, mathematical models can be trained to distinguish spectra belonging to adulterated milk from those belonging to normal milk. This way, mathematical models have been developed to identify milk adulterated with melamine [7], urea [3], water, starch, sodium citrate, formaldehyde, sucrose, and other adulterants [8,9]. The advantage of targeted methods is that they can indicate the presence of specific adulterants in the milk. Moreover, mathematical models tuned to specific adulterants typically exhibit lower detection limits (i.e., are more sensitive) compared to untargeted methods. The main disadvantage of targeted methods is that they are only capable of detecting known adulterants. In reality, however, it is often not known which adulterants are currently used or will be used in the future. Adulteration with substances (or complex blends thereof) that the mathematical models were not explicitly trained to detect can therefore go undetected. This is where untargeted screening methods excel. 
These methods do not rely on mathematical models trained to detect a specific deviation in the FT-IR spectrum. Instead, they are sensitive to any deviation present in the spectrum by creating a mathematical model based on FT-IR spectra from authentic milk samples that still contain all normal variation (e.g., seasonal variation, different cow breeds and farm management practices). This mathematical model acts as a normal FT-IR milk fingerprint that can be compared with spectra from milk samples that are to be examined. If a spectrum deviates above a stated threshold from the normal milk fingerprint, it is denoted atypical. Since untargeted methods only rely on regular milk spectra, their development is often less expensive and requires only a little statistical fine-tuning, while offering broad protection against possible adulteration of milk samples. This has made untargeted screening methods a popular subject of scientific research [6,10] and commercial applications [11]. Although untargeted screening methods are capable of revealing any type of adulteration with a strong effect on the spectrum, their disadvantage is that they do not comprehensibly characterize in what way a spectrum is atypical and what the root cause is. However, information about how and why a milk sample is atypical is crucial for effective and appropriate follow-up action (e.g., contacting the farm or manufacturer, identifying the root cause, and evaluating the possible risks for safety and quality of dairy products). In this paper, we therefore evaluate the potential of combining untargeted methods for atypical spectra screening with cluster algorithms to reveal in what way an individual milk spectrum is atypical and what the underlying cause might be. To do so, untargeted spectra screening was applied to a large dataset of bovine herd bulk milk spectra. This way, a dataset of atypical milk spectra was created. Cluster algorithms were then used to identify possible categories of atypical milk spectra. We show how information in the spectra (e.g., the compositional milk profile) and meta-data associated with their acquisition (e.g., time of measurement) can be used to understand in what way the spectrum is atypical and how it can be used to form hypotheses about the underlying causes. Data Acquisition and Preprocessing We consider a main dataset of 5,847,603 spectra that correspond to bovine herd bulk milk samples of 16,898 farms across the Netherlands. All milk samples were collected between January 2018 and November 2020 and routinely analyzed for milk payment purposes. We made sure that farmers delivering Jersey milk, which has elevated fat and protein content compared to milk from other common breeds in the Netherlands, were not present in our database. For acquisition of the spectra, milk samples were randomly assigned to one of four FT-IR instruments (MilkoScan FT+, FOSS Analytical A/S, Hillerød, Denmark), where they all underwent the same pre-treatment before the scan took place. FT-IR spectra were obtained in the mid-infrared region, with wavelengths between 1.995 µm (5012 cm−1) and 10.8 µm (926 cm−1). All spectra contained 1060 data points and were converted from transmission to absorbance. FT-IR instruments were standardized monthly using the FOSS equalizer application in accordance with the manufacturer's instructions. Details about the standardization procedure can be found in a white paper provided by the manufacturer [12].
For each milk sample, FT-IR-based predictions of fat, protein, lactose, urea, freezing point, and milk fat acidity were available. Calculations of fat, protein, and lactose were also validated; this was done by including in each batch of 47 milk samples an additional pilot milk sample with known values of fat, protein, and lactose. In addition to the spectra and the compositional milk profile, the dataset also contained meta-data concerning the time at which the FT-IR measurement took place, an identifier corresponding to the particular infrared instrument from which the spectrum was obtained, and an indicator that allowed us to determine which spectra belonged to the same farm. Basic preprocessing of the spectra consisted of the selection of relevant wavenumbers (between 925 and 1600 cm−1, 1690 and 1900 cm−1, and 2700 and 2971 cm−1) and a calculation of the spectra's first derivative. Data preprocessing, analysis, and visualizations were performed using Python (version 3.7) [13] with the packages SciPy (version 1.6.0) [14] and scikit-learn (version 0.23). For the development of an untargeted mathematical model to identify atypical milk spectra, we followed an approach conceptually similar to that described in [6] and employed by commercial manufacturers of FT-IR instruments [11]. For the current dataset, we randomly sampled from the main dataset up to 15,000 milk spectra per month in the period between January 2018 and December 2019. This resulted in a dataset of 354,537 spectra containing twice the seasonal variation in milk. Before working with the spectral data, we first removed all spectra where the freezing point of the corresponding milk samples was above or below the highest and lowest 99th percentile, respectively. This was done in an attempt to make the screening model more sensitive to spectral deviations indirectly associated with variations in freezing point. The remaining spectra were, per wavelength, centered to have zero mean and scaled to unit variance. We then performed a principal component analysis (PCA) on the spectra and extracted the first 16 components that together explained around 95% of the variation in the spectra. Importantly, due to the scaling transformation, the absorption at each wavelength contributed equally to the construction of the eigenvectors. At this time, we also ensured that the four different IR instruments did not emerge as distinct clusters in the latent space. This is important because the mathematical model should not be more sensitive to one instrument than to another. Compared to earlier studies, we decided to extract a few more components from the PCA in order to slightly over-fit the data. This was done with the intention of increasing the relative contribution of spectral patterns that normally explain only small fractions of the expected variation encountered in milk. Upon encountering atypical spectra, however, variations described by such components could be important. After transforming the spectra to the latent space, we computed the covariance matrix and calculated the Mahalanobis distance per spectrum. The Mahalanobis distance reflects the distance of an individual spectrum to the distribution of all other spectra in the latent space. In the next step, we used PCA to perform an inverse transformation on the spectra in the latent space in order to obtain the reconstructed spectra in the original space.
By calculating, across all wavelengths, the root-mean-squared error between the original and reconstructed spectra, we obtained the spectral residuals. In the last step, the Mahalanobis distances and the spectral residuals were each standardized to have zero mean with unit variance before they were summed to a single score per spectrum: the spectrum anomaly score. The higher the spectrum anomaly score, the more a spectrum deviates from all other spectra. In order to reduce the impact of spectral outliers, we performed two iterations using the procedure described above. After each iteration, spectra with the highest 0.1% anomaly scores were removed. The final mathematical model used to identify atypical spectra was based on 344,781 spectra. Figure 1 shows the distribution of anomaly scores of those spectra that were used to construct the mathematical model. Classifying Atypical Spectra Using the final model for untargeted spectra screening, spectrum anomaly scores were computed for the entire main dataset of 5,847,603 spectra. In order to classify spectra as atypical, the anomaly scores have to be thresholded. Because the true prevalence of atypical milk that can be detected with FT-IR is usually unknown, a somewhat arbitrary threshold has to be defined. Given the strict regulations and tight on-farm inspection regimes in the Dutch dairy sector, we assumed one in every thousand (0.1%) milk samples to be atypical, on a monthly basis. In earlier research, a prevalence of up to 1% was sometimes used [6]. Defining the threshold on a monthly basis ensured that we had an equal number of atypical spectra per month over a period of almost three years. This way, a dataset with 5671 atypical spectra was constructed. This dataset was further randomly split into a training (n = 4253 spectra) and a test dataset (n = 1418 spectra). The 4253 atypical spectra in the training set were centered and scaled using robust statistics (i.e., the median and interquartile range). Data transformations with robust statistics were performed to reduce the impact of spectral outliers. We then used a PCA and extracted the first 24 components that together explained 99.5% of the variance in the spectra. We found that the clustering yielded more stable and interpretable results after the spectra had been transformed to the latent space. This is most likely because the PCA reduces the amount of redundancy in the spectra and thereby, relatively speaking, increases the signal-to-noise ratio. We then fitted various Gaussian Mixture Models to the spectral data in the latent space. Gaussian Mixture Models (GMM) are probabilistic generative models and a generalization of the K-means cluster algorithm. In a GMM, multiple multivariate Gaussian distributions are fitted and mixed in such a way that they can generate synthetic data that resemble the actual data as much as possible. The most important hyperparameter of a GMM is the number of components, which has to be selected a priori; in other words, how many different categories of atypical milk spectra the dataset contains. The true number of distinct categories cannot be known and depends on many factors. Therefore, many possible cluster solutions exist across different datasets and even within a single dataset. The selection of the optimal solution should be guided by the data (i.e., which solution generalizes to new data) and domain-specific knowledge (which clusters are expected and what they reflect).
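Before turning to the choice of the number of clusters, the screening step described above can be summarized in a short Python sketch. The array names (normal_spectra, all_spectra, sample_dates) and function names below are our own placeholders, and the standardization of the two score components is simplified compared to the study (where those statistics come from the spectra used to build the model); the code is an illustrative sketch rather than the exact implementation.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

def fit_screening_model(normal_spectra, n_components=16):
    """Fit the untargeted model on preprocessed spectra of normal milk (rows = samples)."""
    scaler = StandardScaler().fit(normal_spectra)      # per-wavelength centring and scaling
    z = scaler.transform(normal_spectra)
    pca = PCA(n_components=n_components).fit(z)
    cov_inv = np.linalg.inv(np.cov(pca.transform(z), rowvar=False))
    return scaler, pca, cov_inv

def anomaly_score(spectra, scaler, pca, cov_inv):
    """Sum of standardized Mahalanobis distance and reconstruction residual per spectrum."""
    z = scaler.transform(spectra)
    t = pca.transform(z)                                # latent-space coordinates
    maha = np.sqrt(np.einsum('ij,jk,ik->i', t, cov_inv, t))
    resid = np.sqrt(np.mean((z - pca.inverse_transform(t)) ** 2, axis=1))
    std = lambda x: (x - x.mean()) / x.std()            # simplified standardization
    return std(maha) + std(resid)

# 'normal_spectra' and 'all_spectra' are assumed preprocessed 2-D arrays;
# 'sample_dates' is an assumed pandas DatetimeIndex of measurement dates.
scaler, pca, cov_inv = fit_screening_model(normal_spectra)
scores = pd.Series(anomaly_score(all_spectra, scaler, pca, cov_inv), index=sample_dates)
# Flag the highest 0.1% of scores within each calendar month as atypical.
monthly_cutoff = scores.groupby(scores.index.to_period('M')).transform(lambda s: s.quantile(0.999))
atypical = scores > monthly_cutoff
```

In practice, the outlier-removal iterations described above would simply repeat the fitting step after dropping the top-scoring spectra.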
The choice of the number of components leads to a trade-off between generalizability and interpretability, similar to the over- and underfitting trade-off encountered in machine learning problems. Solutions with few clusters may generalize well but may suffer from being too general to be useful (i.e., they underfit the data). Solutions with many clusters, on the other hand, can lead to highly segmented and specific clusters that often fail to generalize to new data (i.e., they overfit the data). We evaluated multiple cluster solutions, each time with a different number of clusters. We considered GMMs with four to twenty components. GMMs were fitted such that each Gaussian component had its own fully parameterized covariance matrix. This made it possible to fit clusters with different spheroidal shapes.
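To make the comparison of candidate solutions concrete, mixtures with different numbers of components might be screened along the following lines. The latent-space training matrix is a random placeholder, and the BIC printed here is shown only as one possible quantitative guide; the study relied on the qualitative criteria described below.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
Z_train = rng.normal(size=(4253, 24))     # placeholder for the 24-dimensional latent-space spectra

candidates = {}
for k in range(4, 21):                    # four to twenty components
    gmm = GaussianMixture(n_components=k, covariance_type="full", random_state=0)
    gmm.fit(Z_train)
    candidates[k] = gmm
    print(f"k={k:2d}  mean log-likelihood={gmm.score(Z_train):8.3f}  BIC={gmm.bic(Z_train):12.1f}")
```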
The location parameters of the individual components were initialized using the K-means algorithm. Moreover, each GMM was fitted to the data 50 times, and the model that best fitted the data (in terms of the log-likelihood) was kept as the final model to be evaluated. The quality of the results was evaluated by investigating (i) the similarity of spectra within each cluster in relation to the difference between the clusters, (ii) the size of the clusters (i.e., too many small clusters could indicate overfitting, too few large clusters underfitting), (iii) the degree to which clusters mainly contain atypical spectra from only a very small number of farms, (iv) the temporal profile, (v) whether a cluster contains spectra from only one or two FT-IR instruments, and (vi) the compositional milk profile in terms of fat, protein, lactose, urea, freezing point, and milk fat acidity. On the basis of this information, we found that a GMM with 11 components yielded a meaningful cluster solution. However, two clusters were still too similar in terms of the spectra and the compositional milk profile. Moreover, one cluster only contained spectra from two of the four different FT-IR instruments, while the other cluster exclusively contained spectra from the other two FT-IR instruments. In other words, the difference between the two clusters could mainly be attributed to differences between the FT-IR instruments. We therefore decided to group these two clusters together. This resulted in 10 final clusters. Although some cluster algorithms can also be used to categorize new data, they are inflexible when it comes to combining multiple clusters into one, as we did here. We therefore followed a more general route and used a dedicated classifier to assign new atypical spectra to one of the identified clusters. We fitted a support vector classifier on the spectra in the latent space using the 10 clusters as class labels. Optimal parameters of the classifier were determined by combining a grid search with 5-fold stratified cross-validation. The cross-validated weighted F1-score of 0.95 indicated excellent classification performance. To obtain an indication of the generalizability of the identified cluster solution, we used the 1418 atypical spectra from the test set. Before predicting their cluster membership, we centered and scaled the spectra and transformed them to the latent space. All data transformations were performed with the parameters calculated from the training set (i.e., median, interquartile range, and eigenvectors). Afterwards, the classifier was used to assign the spectra to their most likely cluster. Generalizability was qualitatively assessed by comparing the distribution of the compositional milk profile between the training and the test dataset.

Results and Discussion

The goal of the study was an evaluation of the potential of combining untargeted methods for atypical milk spectra screening with cluster algorithms to reveal in what way a milk sample is atypical and what the underlying causes could be. How useful a combined approach is depends on whether atypical milk spectra can be clustered into robust and meaningful categories that can, in turn, be linked to possible root causes. However, identifying meaningful and generalizable clusters in complex data is inherently difficult. Because the true number of distinct clusters and their actual meaning cannot be known a priori, the challenge lies in selecting one out of many possible cluster solutions.
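The classifier fitting described in the preceding section might look as follows in scikit-learn. The hyperparameter grid is hypothetical, since the actual search ranges were not reported, and the training data and labels are random placeholders.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

rng = np.random.default_rng(2)
Z_train = rng.normal(size=(4253, 24))       # placeholder latent-space training spectra
labels = rng.integers(0, 10, size=4253)     # placeholder cluster labels (10 final clusters)

# Hypothetical search grid; only the use of a grid search with 5-fold stratified
# cross-validation is reported in the text.
param_grid = {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.1], "kernel": ["rbf"]}

search = GridSearchCV(
    SVC(),
    param_grid,
    scoring="f1_weighted",
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
)
search.fit(Z_train, labels)
print("cross-validated weighted F1:", search.best_score_)

# New atypical spectra (robust-scaled and projected with the training-set
# parameters) would then be assigned with search.best_estimator_.predict(Z_new).
```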
In the current study, we tried to balance generalizability and interpretability and found that categorizing atypical milk spectra into ten distinct clusters provided a robust and meaningful description of our dataset. As can be seen in Figure 2, the size of the clusters varied from 11 milk spectra (0.26% of all spectra) in cluster 9 to 1554 milk spectra (36.54%) in cluster 4. Because some factors that cause milk spectra to be atypical are more likely than others, variability in the size of the clusters is expected. Interestingly, out of 4253 spectra, only 11 milk samples with similar spectral characteristics were necessary to form a distinct cluster. This indicates how sensitive cluster algorithms can be in detecting very small groups of atypical spectra, even within large datasets.

Figure 3 shows for both the training dataset and the test dataset the compositional milk profile on a cluster-by-cluster basis. Notice the similarity of the distributions when comparing the training and test dataset. This shows that our cluster solution generalizes to new data and does not merely describe peculiarities in our specific training set. The compositional milk profile also allows for an initial characterization of the clusters so that hypotheses can be made about the underlying root cause. For example, cluster 3 is characterized by an increased fat content of up to 22%. This can be the consequence of insufficient mixing of milk before sampling. Therefore, if untargeted screening methods identify an atypical milk spectrum which is subsequently classified as belonging to cluster 3, it is most likely because the milk has a drastically increased fat content as it was not homogeneously sampled. Cluster 6, on the other hand, is marked by a decreased protein and lactose concentration but an increased freezing point. This is typical for extraneous water in the milk [18]. This can be the result of an influx of water due to improper functioning of valves or an insufficient draining of water residues after cleansing the milking system or the tank. Cluster 7 is marked by a drastic increase in free fatty acids in the milk. This can be the consequence of lipase activity [19] in sensitive milk (e.g., due to an increased somatic cell count in late lactation cows [20]) or milk that is subject to mechanical strain (e.g., air inclusion in milking systems) [21]. Because an increased free fatty acid concentration in the milk can result in rancid flavors of milk products [21,22] and loss of fat in whey during cheese production [23], identifying such atypical milk samples is therefore of direct economic relevance. Finally, cluster 9 is characterized by an increased protein and lactose concentration combined with a decreased freezing point.
This could be indicative of adulteration with protein-rich (e.g., milk powder, whey protein isolate) and carbohydrate-based adulterants (e.g., glucose, starch) to increase the apparent concentration of protein and lactose [24]. Individual milk spectra categorized as belonging to cluster 9 could therefore indicate economically motivated adulteration. The ability to identify such cases makes it possible to initiate further target-oriented chemical analyses and contact with the farm for clarification. Although informative, milk spectra can also be atypical for reasons other than those found in the compositional milk profile.
For clusters 5 and 10, for instance, the compositional milk profile does not seem to provide relevant diagnostic information. It is therefore also important to focus on the characteristic spectra of the clusters.

Figure 3. Kernel density estimates for fat, protein, lactose, urea, freezing point, and milk fat acidity for each cluster in the training set (green) and test set (orange). The boxplot contained in each violin describes the minimum, 25%, 50%, 75%, and maximum value of the training dataset. The horizontal dashed lines and the shaded region correspond to reference values (mean ± 95% confidence interval) calculated from the milk spectra used to develop the untargeted spectra screening model.

Figure 4 gives an illustration of how the average milk spectrum per cluster deviates from a typical milk spectrum. Inspecting the average spectra reveals that the change in absorbance of some clusters mainly occurs in spectral regions that are directly related to the concentration of fat (around 1750 and 2860 cm−1), protein (around 1540 cm−1), and lactose (around 1040 cm−1). Cluster 3, for example, is marked by increased absorption in the regions around 1750 and 2860 cm−1, which correspond to the C=O bonds (carboxyl group) and C-H bonds (methylene group) characteristic of fat. For cluster 6, on the other hand, decreased absorption around 1540 cm−1 (H-N-C=O, amides) and 1040 cm−1 (C-O, carbohydrates) corresponds to a decreased protein and lactose concentration, respectively. However, some clusters also show a structurally abnormal absorbance profile. A closer look at the spectral region between 2740 and 2970 cm−1 reveals an interesting pattern for cluster 9. The spectral absorbance pattern in this region differs from the remaining clusters and from the natural variation encountered in typical FT-IR spectra. The same holds for cluster 2 in the region between 926 and 1100 cm−1. Such structural abnormalities can go unnoticed when only the compositional milk profile is analyzed but may provide important information about the nature of the atypical appearance. This information can then be used to generate hypotheses about a possible explanation that can be tested by target-oriented chemical analyses or further exploration on the spot.
Describing clusters on the basis of their compositional milk profile and the underlying milk spectra provides important information and allows for an assessment of whether classifying atypical milk spectra is feasible in the first place. However, we believe that an evaluation of the applicability of a cluster solution also depends on important meta-information regarding the atypical spectra. One such source of information relates to the proportion of unique farms in relation to the size of each cluster. This gives an indication of the extent to which the root cause underlying atypical milk spectra within a cluster can be found at many different farms or at only a few farms. As can be seen in Figure 5, most of the distinct clusters have a proportion of unique farms between 40 and 80%. For example, 43% of all the spectra in cluster 7 belong to different farms. Considering the actual size of the cluster, it can be calculated that the 529 spectra in cluster 7 belong to 227 different farms. This also means that, considering a period of almost 3 years, a farm in cluster 7 will have on average 2.3 atypical spectra in this cluster. This could be indicative of certain factors being more prominent at these farms. Knowledge about these factors (e.g., factors leading to increased mechanical strain on the milk) could then be used to take preventive measures. We also found two clusters, cluster 1 and cluster 4, where the proportion of unique farms was particularly low (9.33 and 16.34%, respectively). This indicates that the majority of atypical spectra correspond to milk supplies from relatively few farms. In cluster 5 and cluster 9, on the contrary, all spectra belong to different farms. This could either mean that the underlying cause affects farms only incidentally or that the reason is not located at the farms but instead at the laboratory where the FT-IR measurements take place.
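The cluster-level summaries used here follow directly from the meta-data; a small pandas sketch, with hypothetical column names `cluster` and `farm_id` standing in for the real identifiers:

```python
import pandas as pd

# Hypothetical meta-data table for the atypical spectra.
atypical = pd.DataFrame({
    "cluster": [7, 7, 7, 1, 1, 4],
    "farm_id": ["A", "A", "B", "C", "C", "D"],
})

summary = atypical.groupby("cluster").agg(
    n_spectra=("farm_id", "size"),
    n_farms=("farm_id", "nunique"),
)
summary["unique_farm_proportion"] = summary["n_farms"] / summary["n_spectra"]
summary["spectra_per_farm"] = summary["n_spectra"] / summary["n_farms"]
print(summary)
```

Applied to the real dataset, the same aggregation reproduces the figures quoted above, for example 529 spectra from 227 farms in cluster 7, i.e. roughly 2.3 atypical spectra per farm.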
Figure 5. Proportion of unique farms per cluster. The percentages reflect the ratio between how many spectra a cluster contains (i.e., the cluster size) and how many different farms these spectra come from. The lower the proportion of unique farms, the more it indicates that the spectra belong to milk from only a few farms.

We also analyzed the temporal profile of the clusters. In Figure 6, we visualized per cluster the distribution of milk samples over time. This makes it possible to identify the extent to which a cluster primarily describes milk that is atypical because the underlying root cause has a seasonal profile (such as feeding and variation in temperature). Moreover, it can be useful to know whether the underlying root cause is confined to a temporally isolated event or whether it is more permanent. In our dataset, most of the clusters do not appear to have a seasonal profile but are evenly distributed across the three-year period. This suggests that the clusters are stable in the sense that the underlying root causes exist throughout the year and are likely to be permanent. For example, the few spectra belonging to cluster 9 were discovered evenly throughout a period of three years, and each spectrum corresponds to milk from a different farm. This is not indicative of systematic intentional adulteration, where multiple milk samples from the same farms would be expected to be affected over a certain period of time. Nevertheless, the evidence suggests that atypical spectra belonging to cluster 9 will also be found in the future, and this may justify further target-oriented analyses concerning these milk samples or the farms they are coming from. We also identified clusters with a seasonal profile. As can be seen in Figure 6, clusters 1, 4, and 10 have a yearly seasonal profile. In cluster 10, the majority of spectra originate from milk samples collected between April and May. This period of the year typically marks the start of the grazing season, which goes along with a transition from silage-based feeding to autonomous grazing. It is well known that this leads to characteristic changes in the milk's fatty acid profile [25,26]. Because over 80% of the farms in the Netherlands have their cows out in the meadow during the grazing season, natural variation due to grazing is already incorporated in our mathematical model.
Cluster 10 could therefore contain milk from farms where the transition to the grazing season leads to particularly drastic changes in the fatty acid profile of the milk, perhaps due to abrupt changes in feed intake. Note that a change in the fatty acid profile does not have to be reflected in the milk's overall fat percentage. Another two clusters that follow a yearly seasonal profile are cluster 1 and, albeit less pronounced, cluster 4. Both clusters mainly contain milk samples collected around November and December. This period is likely associated with the transition from autonomous grazing to silage-based feeding at the end of the grazing season.
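Tabulating atypical spectra per cluster and calendar month is enough to reveal such seasonal patterns; a sketch, again with hypothetical `cluster` and `measured_at` columns:

```python
import pandas as pd

# Hypothetical meta-data for the atypical spectra.
atypical = pd.DataFrame({
    "cluster": [10, 10, 1, 4, 9],
    "measured_at": pd.to_datetime(
        ["2018-04-12", "2019-05-03", "2018-11-20", "2019-12-01", "2018-07-15"]
    ),
})

# Counts per cluster and calendar month, e.g. to spot clusters concentrated
# around the start (April/May) or end (November/December) of the grazing season.
monthly = (
    atypical.assign(month=atypical["measured_at"].dt.month)
    .groupby(["cluster", "month"])
    .size()
    .unstack(fill_value=0)
)
print(monthly)
```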
Another discovery that can be made from the temporal profiles relates to cluster 5. More than 99% of the spectra in cluster 5 were measured on two consecutive days. We found that all but two spectra were measured on a single FT-IR instrument. This strongly suggests that the spectra in cluster 5 are atypical due to temporary instabilities related to that particular FT-IR instrument. For example, multiple reflections inside the layers within the measurement cell of the infrared instrument can produce characteristic artifacts in the spectrum known as fringes [27]. This is an interesting and important discovery. On the one hand, it indicates that such instrument instabilities produce characteristic spectral patterns that can be detected by a cluster algorithm. On the other hand, it shows that such instabilities can lead to drastic changes in the spectrum that go unnoticed when only the milk's compositional profile or even the raw spectra are analyzed. Moreover, mathematical models used for the calculation of chemical compounds that are present only in small concentrations (e.g., individual fatty acids) can produce invalid results if the instabilities affect spectral features used by these models. In addition, such instabilities can, and in our case did, lead to anomaly scores higher than 99.9% of all other scores in this period. In other words, these spectra were considered atypical for reasons that are unrelated to the chemical composition of the milk. By analyzing the time course of anomaly scores on an instrument-by-instrument basis, it is possible to identify periods of instrument instability that are reflected as increased anomaly scores. In this way, atypical milk spectrum screening can also be used to simultaneously screen FT-IR instruments for instabilities that can lead to systematic changes in milk spectra. Moreover, such instrument instabilities can indicate that the instrument requires maintenance.

Implications, Limitations, and Future Outlook

Despite the promising results obtained from our cluster-level analyses, we also acknowledge some limitations. First, it may not always be obvious or even possible to describe what it means for the actual milk if it has a specific atypical absorbance pattern. This is particularly the case for categories characterized by non-trivial absorbance patterns rather than isolated peaks. In our case, we found that clusters 5 and 10 were interesting in this regard. Both clusters could hardly be characterized by inspection of the spectra or their compositional milk profile.
Nevertheless, information about why they were atypical could be inferred from the corresponding meta-data. It is therefore possible to identify categories for which an explanation can be given as to why milk spectra are atypical without being able to explain in what way they are atypical. Another limitation concerns the classification of new atypical milk spectra. We demonstrated that it is possible to identify meaningful categories of atypical milk spectra that also generalize to new datasets. Although generalizability of a cluster solution is crucial, it should be clear that a spectrum is always more similar to one cluster than to another. This means that a milk spectrum will always be assigned to a category, no matter how well it actually fits that category. This can lead to erroneous conclusions about how and why the milk is atypical. Ideally, only those milk spectra that are sufficiently similar to the spectra that were used by the cluster algorithm to identify the categories in the first place should be assigned to a category. Probabilistic generative models such as Gaussian Mixture Models can, in principle, do exactly that. Since they can be used to generate synthetic data, they can also be used to calculate the probability that a particular milk spectrum could have been generated from the model. When this probability is particularly low, the model may not be appropriate for categorizing that spectrum. Along the same lines, it is important to keep in mind that a cluster solution will necessarily become outdated over time. This is because the chemical composition of normal milk changes over the years, and the reasons that cause milk spectra to be atypical do so too.
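The gating idea just described could look as follows with a fitted scikit-learn mixture; the data and the likelihood cut-off (here the 1st percentile of the training log-densities) are illustrative assumptions only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
Z_train = rng.normal(size=(4253, 24))     # placeholder latent-space training spectra
Z_new = rng.normal(size=(100, 24))        # placeholder new atypical spectra

gmm = GaussianMixture(n_components=10, covariance_type="full", random_state=0).fit(Z_train)

# Log-likelihood of each new spectrum under the fitted mixture.
log_density = gmm.score_samples(Z_new)

# Only categorize spectra that are plausible under the mixture; leave the rest
# uncategorized (-1) instead of forcing them into the nearest cluster.
cutoff = np.percentile(gmm.score_samples(Z_train), 1)
clusters = np.full(len(Z_new), -1)
ok = log_density >= cutoff
clusters[ok] = gmm.predict(Z_new[ok])
```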
This gradual drift requires that the cluster algorithm also be updated from time to time. Monitoring, over time, the proportion of atypical spectra that cannot be categorized by the model can give important information about whether the models are still up to date. However, more research needs to be done in this area. A final limitation concerns untargeted milk spectra screening methods in general. Untargeted spectra screening relies on a comparison of an individual milk spectrum with a reference FT-IR milk fingerprint. The reference milk fingerprint represents the gold standard of what is assumed to reflect normal milk. However, milk with a spectrum that deviates from this fingerprint does not automatically reflect poor quality. In fact, even the opposite can be the case. With the use of cluster algorithms and a thorough analysis of the resulting categories, it may be possible to distinguish milk that is atypical for undesired reasons (e.g., adulteration with water) from milk that is atypical for desirable reasons (e.g., particularly good farm management practices or feeding regimes). On the basis of our findings, we believe that a combined approach for atypical spectra screening can serve as a valuable complementary quality assurance tool in routine FT-IR milk analysis. After meaningful and generalizable categories of atypical milk spectra have been identified, new atypical milk spectra can be classified into these categories. Knowledge about the different categories, in terms of what they reflect and what the possible root causes are, can then be used to describe why and in what way a new milk spectrum is atypical. Moreover, our approach is computationally efficient, scalable, and can be implemented in large-scale routine screening for atypical milk. Being able to give an indication of how and why a milk spectrum is atypical is necessary for taking appropriate actions (e.g., rejecting the milk) and is an indispensable prerequisite when contacting the farm for clarification. Our discovery that instrument instabilities can also be responsible for atypical spectra is of particular relevance here. It is important to differentiate between spectra that are atypical because the milk is chemically abnormal and spectra that are atypical because the FT-IR instrument produces measurement artifacts. In this way, atypical spectra screening can be used as a tool for monitoring the quality and authenticity of milk, and for monitoring the FT-IR instruments from which the spectra are obtained and the compositional milk profile is computed.

Conclusions

When applied to the FT-IR spectra of liquid milk samples, we have shown that a combination of untargeted screening methods and cluster algorithms reveals meaningful and generalizable categories of atypical milk spectra. We demonstrated that information in the spectra (e.g., the compositional milk profile) and meta-data associated with their acquisition (e.g., when and at which instrument a spectrum was measured) can be used to understand in what way the milk is atypical and to form hypotheses about the underlying causes, whether on-farm or in the laboratory. Our combined approach resulted in the identification of ten categories of atypical milk spectra. Some of them could be linked to increased mechanical strain on the milk, adulteration with extraneous water, or insufficient homogenization during sampling at the farm. Another category could reflect economically motivated adulteration of milk with protein-rich and carbohydrate-based adulterants.
We also identified a category that contains spectra that are atypical due to measurement artifacts associated with the infrared instrument. However, more research is needed to confirm these hypotheses. Future studies could also investigate the potential of the described methodology to identify and categorize such measurement artifacts in order to assess the maintenance status of infrared instruments. A combined approach in which atypical spectra are detected by untargeted methods and then assigned to categories revealed by cluster algorithms provides important information about how and why a milk spectrum is atypical. This is important for selecting effective and appropriate follow-up actions and greatly extends the practical utility and scope of FT-IR milk screening when used as a complementary tool for quality assurance in the dairy sector.
Shock wave propagation of circular dam break problems

We examine the behavior of shock wave propagation in circular (radial) dam break problems. A dam break problem represents a reservoir having two sides of water initially at rest with different depths separated by a wall; water flows after the wall is removed. The behavior of shock wave propagation is investigated with respect to water levels and with respect to the speeds of the shock waves. To the author's knowledge, such an investigation for circular dam break problems has never been done before. Therefore, this new work should be of interest to the applied computational mathematics and physics communities as well as to fluid dynamics researchers. Based on our research results, the propagation speed of the shock wave in a circular dam break is lower than that of the shock wave in a planar dam break having the same initial water levels as in the circular dam break.

Introduction

Water can flow in either a closed or an open space. An example of water motion in a closed channel is pipe flow. An example of water motion in open space is flood flow. Studies of water flows are important, as they occur in many situations and conditions (see References [1]-[6]). This paper considers water flows in an open channel. We solve a circular dam break problem, also known as a radial dam break problem. A circular dam represents a water reservoir having a wall that is circular in shape, where the depth of water inside the circular wall is greater than that of the water outside. The circular dam break problem then means that we need to find the properties (water surface, momentum, velocity, energy, etc.) of the water after the circular dam wall is removed completely at an instant of time. We assume that initially the area outside of the circular wall has a positive constant depth. Therefore, when the dam break happens, a shock wave appears and propagates radially [3]. The circular dam break problem can be modelled by the one-dimensional shallow water equations with varying width as well as by the standard two-dimensional shallow water equations. A simulation of the problem through the one-dimensional shallow water equations with varying width was conducted by Roberts and Wilson [5]. Some simulation results of the problem through the standard two-dimensional shallow water equations were presented by Mungkasi [4]. Shallow water flows were first modelled mathematically by Saint-Venant [2]. Our goal is to research the shock wave propagation of the circular dam break problem. We implement the one-dimensional shallow water equations with varying width following Roberts and Wilson [5]. An advantage of using these equations is that the numerical method used to solve them is simpler than that for the two-dimensional version. This is because the one-dimensional shallow water equations with varying width have the same form as the standard one-dimensional shallow water equations. The rest of this paper is organized as follows. Governing equations are recalled in Section 2. The numerical method of Roberts and Wilson [5] is briefly presented in Section 3. Then Section 4 presents and discusses numerical results on the shock wave propagation. Finally, we conclude our presentation with some remarks in Section 5.

Governing equations

The one-dimensional shallow water equations with varying width are [5]

    ∂(bh)/∂t + ∂(bhu)/∂x = 0,    (1)

    ∂(bhu)/∂t + ∂(bhu² + ½ g b h²)/∂x = ½ g h² (db/dx) − g h (dz/dx) b.    (2)

Here h(x, t) is the water depth, u(x, t) is the horizontal velocity, z(x) is the topography, b(x) is the varying channel width, and g is the acceleration due to gravity.
The free variables are time t and space x. The absolute water level is called the stage and is defined as w = h + z. Some notes are as follows. When the width b(x) is constant, the shallow water equations (1) and (2) simplify to the standard one-dimensional shallow water equations. When the topography is horizontal, the source term −gh(dz/dx)b disappears, as dz/dx = 0.

Numerical method

The finite volume method of Roberts and Wilson [5] is used to solve the shallow water equations (1) and (2). The method is briefly described as follows. Consider equations (1) and (2). These equations are conservation laws of the form

    ∂q/∂t + ∂f(q)/∂x = s.    (3)

Here q is the vector of conserved quantities, f(q) is the vector of fluxes, and s is the vector of sources. Assume that we are given a space domain. A discretization of the space domain leads equation (3) to the semi-discrete finite volume scheme

    dq_i/dt + (1/Δx_i) (f_{i+1/2} − f_{i−1/2}) = s_i.    (4)

The accuracy of the finite volume method is then dependent on the accuracy of the numerical fluxes f_{i+1/2} and f_{i−1/2} as well as on the accuracy of the solver of the ordinary differential equation (4). For our simulations we use a second order finite volume method. It is second order accurate in space and second order accurate in time. We implement the minmod limiter to overcome artificial oscillations of the numerical solutions. For more details on this finite volume method for solving the shallow water equations (1) and (2), we refer to Roberts and Wilson [5].

Numerical results

To achieve the goal of this paper we consider a circular dam break problem. All quantities are given in SI units, so we omit the writing of units as they are already clear. We solve equations (1) and (2) with varying width b(x) = 2πx. The length of the channel is 100. The stage is 10 for x ∈ [0, 50], while the stage is 1 for x ∈ [50, 100]. (Roberts and Wilson [5] set the stage to 2 for x ∈ [50, 100].) This problem mimics the two-dimensional circular dam break problem. We can solve this problem using equations (1) and (2), as we exploit the symmetry of the water motion after the dam break. The acceleration due to gravity is set to 9.81. The space domain [0, 100] is discretized into 1000 cells uniformly. Figure 1 shows the simulation results at time t = 2. The first subfigure is the stage. The second and third subfigures are the momentum and velocity, respectively. We can see that there is no constant velocity across the moving water. To see the shock front more clearly, we magnify Figure 1 around the shock position. The magnification is shown in Figure 2. Moreover, the track of the shock front is summarized in Table 1 for time t ∈ [0, 1] and in Table 2 for time t ∈ [1, 2]. To see the relationship between the shock track and time, we plot the data of Tables 1 and 2 in Figure 3. A magnification of Figure 3 for time t ∈ [1.75, 2] is given in Figure 4. From Figures 3 and 4 we conclude that the relationship between the shock wave propagation and time is nonlinear. This means that the shock speed of the circular dam break problem is not constant. This is a different phenomenon from the corresponding planar dam break problem. Recall that for the planar dam break problem, the relationship between the shock wave propagation and time is linear. That is due to the constant shock wave propagation speed. We note that after 2 seconds of the circular dam break, the shock front travels about 18.915 m.
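To make the semi-discrete scheme (4) concrete for this problem, the following is a rough first-order sketch, not the second-order, limited method of [5]: a Rusanov flux for the width-varying equations (1) and (2) on a flat bed, with forward Euler time stepping and no particular boundary treatment. The grid size, CFL number, and the crude source-term discretization are illustrative choices only.

```python
import numpy as np

g = 9.81
N = 1000
x_edges = np.linspace(0.0, 100.0, N + 1)
x = 0.5 * (x_edges[:-1] + x_edges[1:])           # cell centres
dx = x_edges[1] - x_edges[0]

b = 2.0 * np.pi * x                              # varying width b(x) = 2*pi*x
h = np.where(x < 50.0, 10.0, 1.0)                # initial depth (flat bed: stage = depth)
A = b * h                                        # conserved quantity A = b h
Q = np.zeros_like(A)                             # conserved quantity Q = b h u (water at rest)

def flux(A, Q, b):
    h = A / b
    u = Q / A
    return np.array([Q, Q * u + 0.5 * g * b * h ** 2])

t, t_end = 0.0, 2.0
while t < t_end:
    h = A / b
    u = Q / A
    c = np.sqrt(g * h)
    dt = min(0.4 * dx / np.max(np.abs(u) + c), t_end - t)   # CFL-limited time step

    # Rusanov (local Lax-Friedrichs) flux at the interior interfaces.
    qL, qR = np.array([A[:-1], Q[:-1]]), np.array([A[1:], Q[1:]])
    fL, fR = flux(A[:-1], Q[:-1], b[:-1]), flux(A[1:], Q[1:], b[1:])
    a = np.maximum(np.abs(u[:-1]) + c[:-1], np.abs(u[1:]) + c[1:])
    F = 0.5 * (fL + fR) - 0.5 * a * (qR - qL)

    # Width source term 0.5*g*h^2*db/dx (flat bed, so no topography term);
    # this simple discretization is not well balanced.
    src = 0.5 * g * h ** 2 * np.gradient(b, x)

    A[1:-1] -= dt / dx * (F[0, 1:] - F[0, :-1])
    Q[1:-1] -= dt / dx * (F[1, 1:] - F[1, :-1])
    Q[1:-1] += dt * src[1:-1]
    t += dt

h = A / b
shock_cell = N // 2 + np.argmax(np.abs(np.diff(h[N // 2:])))  # steepest gradient right of the dam
print("approximate shock front position at t = 2 s: x =", round(x[shock_cell], 2))
```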
However, if we set up a corresponding planar dam break problem (having the same initial water levels as in the circular dam break), then, based on the analytical solution of Stoker [6], the shock speed of the corresponding planar dam break is 9.8193. Therefore, after 2 seconds of the planar dam break, the shock travels a distance of 19.6386. That is, the propagation speed of the shock wave in a circular dam break is lower than that of the shock wave in a planar dam break having the same initial water levels as in the circular dam break. Note that a corresponding planar dam break for our simulation means that we have a space x ∈ [−100, 100], where the initial still water depth is 10 for x ∈ [−100, 0] and 1 for x ∈ [0, 100].

Conclusions

We have presented research results on the shock wave propagation of a circular dam break problem. We find that the relationship between the shock front position of the circular dam break problem and time is nonlinear. This nonlinear phenomenon is due to the non-constant shock speed of the circular dam break problem. In addition, we find that a corresponding planar dam break problem having the same initial water levels as in the circular dam break produces a faster shock propagation.
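The planar shock speed quoted from Stoker's solution can be checked by solving the standard wet-bed dam-break relations for the intermediate state (a rarefaction connected to a shock moving into still water); a short sketch using SciPy, with the depths of the problem above:

```python
import numpy as np
from scipy.optimize import brentq

g, h1, h0 = 9.81, 10.0, 1.0        # reservoir (upstream) and downstream depths

def residual(h2):
    # Velocity behind the shock from the rarefaction fan (left side) ...
    u_rarefaction = 2.0 * (np.sqrt(g * h1) - np.sqrt(g * h2))
    # ... and from the Rankine-Hugoniot shock conditions (right side).
    u_shock = (h2 - h0) * np.sqrt(g * (h2 + h0) / (2.0 * h0 * h2))
    return u_rarefaction - u_shock

h2 = brentq(residual, h0 + 1e-6, h1 - 1e-6)          # intermediate depth
shock_speed = np.sqrt(g * h2 * (h2 + h0) / (2.0 * h0))
print(f"intermediate depth h2 = {h2:.4f} m, shock speed = {shock_speed:.4f} m/s")
# The shock speed comes out close to the 9.8193 m/s quoted in the text,
# i.e. a travel distance of roughly 19.64 m after 2 s.
```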
Multiple regulatory roles of a novel Saccharomyces cerevisiae protein, encoded by YOL002c, in lipid and phosphate metabolism

The yeast open reading frame YOL002c encodes a putative membrane protein. This protein is evolutionarily conserved across species, including humans, although the function of each of these proteins remains unknown. YOL002c is highly expressed in yeast cells that are grown in the presence of saturated fatty acids such as myristate. Furthermore, cells in which the YOL002c gene is disrupted grow poorly on this carbon source. These mutant cells are also resistant to the polyene antibiotic, nystatin. Gene chip analysis on yol002c⌬ cells revealed that a variety of genes encoding proteins involved in fatty acid metabolism and in the phosphate signaling pathway are induced in this mutant strain. In addition, our studies demonstrated that in the disruption strain acid phosphatase activity is expressed constitutively, and the cells accumulate polyphosphate to much higher levels than wild-type cells. A homologous human protein is able to partially rescue these defects in phosphate metabolism. We propose that YOL002c encodes a Saccharomyces cerevisiae protein that plays a key role in metabolic pathways that regulate lipid and phosphate metabolism.

The yeast Saccharomyces cerevisiae is able to survive and grow on a wide range of media due to its ability to activate pathways that enable utilization of alternate carbon sources. This organism responds to a diverse array of signals, and many of the responses involve the regulation of gene expression. This transcriptional regulation results in a corresponding adjustment in signaling systems that are activated by the switch in carbon source supplied for growth. Thus, the ability of yeast to utilize various carbon sources is highly regulated, and the expression of genes required for the utilization of specific carbon sources has been a major topic of study for many years (1). Much is known regarding the regulatory mechanisms that occur when yeast is grown on glucose. Many proteins are dispensable under these conditions; therefore, the expression of genes encoding such proteins is transcriptionally repressed.
These include genes required for the metabolism of carbon sources that are utilized less efficiently than glucose, such as glycerol (GUT (2)), galactose (GAL (3)), ethanol (ADH2 (4)), and fatty acids (POX1 (5)). The expression of genes encoding enzymes involved in the utilization of carbon sources other than glucose is often up-regulated in the presence of the appropriate nutrient. The shift from a glucose- to an oleate-containing medium leads to induction of a number of enzyme activities, including the peroxisomal enzymes involved in the β-oxidation pathway, and it also leads to peroxisome proliferation (6). A sequence motif within the promoter region of genes encoding these proteins was demonstrated to be the binding site for protein(s), then unidentified, responsible for oleate induction, and was termed the oleate-response element (ORE) (7). In addition to fatty acid-dependent transcriptional activation of specific genes, other genes are transcriptionally repressed in response to a change in carbon source. The OLE1 gene encodes ⌬-9 fatty acid desaturase, an enzyme involved in the formation of unsaturated fatty acids. OLE1 expression is repressed when certain unsaturated fatty acids such as oleate are added to the growth medium, but it is induced when cells are grown in the presence of a saturated fatty acid (8,9). Recently, two genes, MGA2 and SPT23, were implicated in the transcription of several genes in S. cerevisiae, including OLE1. The loss of function of both MGA2 and SPT23 results in a 15-fold decrease in the level of the OLE1 transcript (10). The active Spt23p transcription factor is synthesized in the form of an inactive membrane-bound precursor, and the active form is generated by a processing step that is regulated by unsaturated fatty acids (11). POX1 encodes fatty acyl-CoA oxidase, the rate-limiting enzyme of the peroxisomal β-oxidation cycle. Previously, we extensively mapped the promoter region of POX1 and demonstrated that there are at least three regulatory elements in this promoter (12,13), one of which is an ORE. We and others (14-16) subsequently identified two transcription factors, Oaf1p and Pip2p, that bind to the ORE and mediate oleate-dependent transcriptional activation. The completion of the yeast genome sequencing project in 1996 provided us with the possibility of finding additional genes that contain an ORE in their promoters. We demonstrated that more than 20 genes, encoding proteins with various subcellular locations, are regulated by the Oaf1p/Pip2p transcription factors (17). In addition to genes encoding known peroxisomal, mitochondrial, and nuclear proteins, we found that several open reading frames (ORFs), encoding proteins of unknown location and function, are also regulated by Oaf1p and Pip2p. The gene YOL002c was among the ORFs that we found to contain an ORE (17). This gene encodes a putative protein that is predicted to contain seven transmembrane domains. In this current study we demonstrate that the expression of YOL002c is highly induced in cells grown in the presence of a medium chain length saturated fatty acid, such as myristate. We further show that a strain in which the YOL002c gene is disrupted grows poorly in medium containing myristate as the main carbon source, and that yol002c⌬ cells are resistant to the polyene antibiotic, nystatin. In addition, we found that high chain length inorganic polyphosphate accumulates to higher levels in the disruption strain when compared with the wild-type strain.
Finally, we show that 32PO4 is taken up by yol002c⌬ cells at a higher rate than by wild-type cells and that acid phosphatase activity is constitutively expressed in yol002c⌬ cells, even in phosphate-rich medium. Phosphate is an essential nutrient for all organisms including yeast; thus, a tight regulatory mechanism for acquisition, storage, and release of phosphate has evolved. When phosphate becomes limiting in the growth media, there is an increased production of a high affinity phosphate transporter and of secreted phosphatases that scavenge phosphate from the environment. The signal transduction pathway involved in the regulation of phosphate-responsive genes is complex and involves over 20 genes (for review see Ref. 18). We suggest that the YOL002c protein plays a role in this elaborate pathway. Taken together, our results suggest that cells lacking YOL002cp display multiple defects in lipid homeostasis as well as in the phosphate levels of the cell.

Yeast Strains and Media

The yeast strains used in this study are described in Table I. Yeast strains were grown in either YPD (1% yeast extract, 2% peptone, 2% glucose); SD (0.67% yeast nitrogen base without amino acids, 2% glucose); or YPG (1% yeast extract, 2% peptone, 3% glycerol). Liquid rich media containing fatty acids (YPGFA) were composed of YPG supplemented with 0.1% (w/v) of the respective fatty acid and with 0.25% (v/v) Tween 40. In the case of very long chain fatty acids (C20 and longer), the Tween 40 was substituted with 0.25% Tergitol. YNCFA plates contained 0.67% yeast nitrogen base without amino acids, 0.3% casamino acids, 0.25% (v/v) Tween 40 or 0.25% (v/v) Tergitol, and 0.1% (w/v) of the respective fatty acid. In order to prepare low phosphate media (YPD−Pi), the method of Kaneko et al. (19) was used with minor modifications. Briefly, 20 g of peptone and 10 g of yeast extract were first dissolved in 800 ml of H2O, and 10 ml of 1 M MgSO4 and 10 ml of 30% ammonium hydroxide were then added. The resultant solution was incubated for 30 min at room temperature to form a precipitate and was then filtered through Whatman No. 1 paper. The pH of the filtered solution was adjusted to pH 5.8 using 1 N HCl, and the volume was brought to 900 ml with H2O. The medium was sterilized by autoclaving, and 20% dextrose was added to a final concentration of 2%. To make YPD+Pi, 1 M KH2PO4 was added to a final concentration of 10 mM. To obtain fatty acid-containing low phosphate media, low phosphate YPG was first prepared as described above, and fatty acids plus Tween 40 were then added. Auxotrophic supplements were included in the media at 20 µg/ml (40 µg/ml in the case of leucine) as needed. Nystatin, when used, was added at a concentration of 100 units/ml.

Disruption of YOL002c, YDR492w, and YOL101c

To disrupt each of the endogenous copies of YOL002c, YDR492w, and YOL101c, we first amplified DNA fragments encoding these genes using total yeast DNA and pairs of primers G1 with G2, G3 with G4, and G5 with G6, respectively (see Table II). The purified fragments were then subcloned into the PCR 2.1-TOPO TA vector, resulting in p002, p492, and p101. A YOL002c disruption strain was created as follows. First, p002 was digested with NdeI, blunt-ended, and dephosphorylated. A 1.7-kb fragment containing the S. cerevisiae HIS3 gene was inserted into the digested plasmid, resulting in p002c::HIS3. The disruption construct was then digested with EcoRI, and the linear YOL002::HIS3 fragment was used for transformation into S.
Selected clones were screened for correct integration of the disruption cassette by PCR analysis of total DNA extracted from the transformants. The disrupted strain, named yol002cΔ, was selected for further studies. To prepare yol002cΔ strains that also carry a disruption in either YDR492w or YOL101c, we made use of a system that allows repeated use of URA3 selection in the construction of multiply disrupted genes (20). For this purpose, we first digested p492 with ClaI and HincII, and p101 with BbsI and SspI. Both plasmids were then blunt-ended and dephosphorylated. The 3.8-kb BamHI/BglII fragment of pNKY274 (20) was blunt-ended and inserted into the digested p492 and p101, resulting in p492::hisG-URA3 and p101::hisG-URA3. Both constructs were digested with EcoRI, and the reaction mixtures were used for transformation of the released disruption cassettes into the yol002cΔ cells. URA+ transformants were screened for correct integration by Southern blot analysis of total DNA purified from individual clones. Further selection of URA− auxotrophs using 5-fluoroorotic acid was performed as described (20), and the double disruption strains obtained were named yol002cΔydr492wΔ and yol002cΔyol101cΔ, respectively. YDR492w and YOL101c single disruption strains (ydr492wΔ and yol101cΔ, respectively) were created in the same manner as described above for double disruption strains, using W3031A as a host strain for yeast transformation. A triple disruption strain (yol002cΔydr492wΔyol101cΔ), in which YOL002c, YDR492w, and YOL101c were all disrupted, was constructed by transforming the yol002cΔydr492wΔ cells with the YOL101c disruption cassette followed by the selection procedure described above.

Plasmids

All recombinant plasmids were created using a combination of PCR and subcloning techniques. The oligonucleotide primers are shown in Table II.

YIp357-002c-ATG1. In order to confirm the data obtained by Northern blot analysis for YOL002c expression, two constructs containing the lacZ sequence under the control of the YOL002c promoter were created. A pair of primers, G9 with G10, was used in a Pfu Turbo-driven PCR to amplify a fragment that contains the 1040-bp promoter region and the predicted initiation codon of YOL002c, immediately followed by a HindIII site. Following amplification, the DNA fragment was cleaved with BamHI and HindIII and then subcloned into the corresponding sites of the vector YIp357 (21), producing YIp357-002c-ATG1. This construct failed to produce β-galactosidase activity when introduced into our wild-type strain W3031A (W002c-BG1).

YIp357-002c-ATG2. To create a fragment that contains the YOL002c promoter and the sequence 30 bp downstream from the predicted initiating ATG, which included a second ATG codon, also immediately followed by a HindIII site, a pair of primers, G9 with G11, was used. After amplification, the DNA fragment was subcloned into the YIp357 vector as described above, resulting in YIp357-002c-ATG2. The resultant plasmid was then introduced into our wild-type yeast strain, creating strain W002c-BG2, and β-galactosidase activity was measured from these cells. The same construct was also introduced into our OAF1/PIP2 double-disruption strain (17), creating ΔO1ΔP2-BG2 (Table I).

[Table I entry displaced into the text above: strain yol002cΔ, genotype MATa leu2 ura3 trp1 ade2 his3 yol002c::HIS3, this study.]
pRS-002c-CGI-45. To create a construct that expresses the human gene CGI-45 under the control of the YOL002c promoter, YIp357-002c-ATG2 DNA was first digested with BamHI and HindIII, and the 1.1-kb fragment containing the promoter region and the second ATG codon of YOL002c was gel-purified. The fragment was then subcloned into the BamHI and HindIII sites of pRS306 (22), resulting in pRS306-002cp. A fragment encoding the CGI-45 sequence was amplified from a human fibroblast cDNA library (kindly provided by Dr. Andrew Chen, Mount Sinai Medical Center) using a pair of primers, G7 with G8. The amplified product was then digested with HindIII and was subcloned into the HindIII and blunt-ended XhoI sites of pRS306-002cp. The resultant construct, designated pRS-002c-CGI-45, was used for transformation into yol002cΔ cells.

RNA Purification and Northern Blot Analysis

Yeast strains were grown overnight on YPD, and the pre-cultures were then diluted 10-fold with fresh YPD and grown to mid-logarithmic phase. Cells collected from mid-logarithmic cultures were used to inoculate YPD, YPG, or YPGFA media to an absorbance of ≈0.1 at 600 nm and were grown to mid-logarithmic phase. For Northern blot analysis, total yeast RNA was purified according to a hot phenol extraction procedure as described (17). A phenol/chloroform extraction procedure was used to isolate yeast total RNA for DNA microarray analysis.² Poly(A)+ fractions were prepared from total yeast RNA using Oligotex milk as specified by the manufacturer (Qiagen, Valencia, CA). Yeast mRNA was resolved, transferred to nylon membrane, and hybridized as described previously (14). Yeast gene-specific probes were generated by PCR amplification with primers based on sequence from the yeast genome database (Table II) and total yeast DNA. The PCR products were resolved on a standard 1% agarose gel, purified using a Geneclean kit (Bio 101, Vista, CA), and labeled with a Prime-It RmT kit (Stratagene, La Jolla, CA) and [α-32P]dCTP. Hybridization and subsequent analyses were performed exactly as described previously (17).

DNA Microarray Analysis

To compare the gene expression patterns between SCY325 cells (Table I) and isogenic yol002cΔ (sΔ2) cells, the Affymetrix GeneChip Yeast Genome S98 Arrays (YG-98) were used. Both yeast strains were grown on YPGM medium, and poly(A)+ mRNA fractions were then prepared from total yeast RNA as described above. Double-stranded cDNA preparation, synthesis of biotin-labeled cRNA target, hybridization, washing and staining, subsequent scanning of the hybridized array, and data processing were performed as specified in the Affymetrix GeneChip Expression Analysis Technical Manual.

Polyphosphate Detection by Gel Electrophoresis

Pre-cultures were grown at 30°C in YPD − Pi medium overnight as described (19). The cells were collected, washed with water, and diluted 1:100 in YPD − Pi. After incubation for 6 h at 30°C, KH2PO4 was added to a final concentration of 10 mM. Following a 2-h incubation, the cells were collected, washed with water, and resuspended in 1 ml of cold LETS buffer (0.01 M Tris-HCl, pH 7.4, 0.01 M EDTA, 0.1 M LiCl, 2% SDS). Glass beads (at approximately equal volume) and 700 μl of phenol/chloroform (saturated to pH 5.0) were added to the cells. The cells were vortexed 6 times for 10 s and were then centrifuged at 10,000 rpm for 20 min. The aqueous phase was collected and extracted twice with phenol/chloroform.
The RNA, together with the polyphosphates, was precipitated in the presence of 0.2 M LiCl and 2.5 volumes of ethanol. Following centrifugation, the pellet was resuspended in 50 μl of H2O. The concentration of total RNA was calculated by measuring the absorbance at 260 nm.

Pi Uptake Experiments

In order to measure the uptake of Pi, cells were grown overnight in YPD − Pi or YPD + Pi media. The cells were collected, washed with water, and resuspended in low phosphate synthetic complete medium (SC − Pi) or high phosphate synthetic complete medium (SC + Pi) to a final concentration of A600 = 0.1. Cells were shaken for 2 h at 30°C, and then 1 μCi/ml ³²PO₄ in KH2PO4 was added to give a final Pi concentration of 0.1 mM in the assay reaction. Samples were withdrawn at different time points and were filtered through a nitrocellulose filter (Millipore HA, 0.45 μm). The filters were washed with 10 ml of SC − Pi medium and were then dried, and the radioactivity collected on each filter was quantitated using a liquid scintillation counter (Beckman LS1801).

Acid Phosphatase Activity Assay

In order to measure the acid phosphatase activity, cells were grown overnight at 30°C in YPD − Pi or YPD + Pi media. The cells were collected by centrifugation, washed with water, and resuspended in lysis buffer (10 mM Tris-HCl, pH 7.4, 10 mM NaCl, 5 mM MgCl2, 0.5 mM CaCl2, 0.2% Nonidet P-40, 2 mM phenylmethylsulfonyl fluoride). Glass beads (at approximately equal volume) were added, and the tubes were vortexed for 30 min at 4°C. The tubes were then centrifuged at 10,000 rpm for 20 min, and the supernatant was recovered for phosphatase activity and protein assays. Acid phosphatase was measured using p-nitrophenyl phosphate (Sigma) as a substrate, and the assay was performed using a citrate buffer (90 mM sodium citrate, 10 mM NaCl, pH 4.5). The absorbance at 405 nm was then measured, and the activity was calculated as described (19). The activity was expressed as micromoles of p-nitrophenol liberated by 1 mg of protein in 1 ml of reaction mix.

² S. Sturley and L. Wilcox, personal communication.

Table II. Oligonucleotide primers used in this study:
G1, 5′ TCACCTACGGGTTAAAATCTGAC 3′
G2, 5′ TCCTCTGAGACGCTAAAGTCGTG 3′
G3, 5′ TTATCATGCAACCTTTAGGGTC 3′
G4, 5′ GCGTACCGAAAAATATAAGGAAGTC 3′
G5, 5′ GGCACATAAAAGCAAAGGC 3′
G6, 5′ GTTCTTTCAGCGGTGTATAACC 3′
G7, 5′ ATGAAGCTTTCTTCCCACAAAGGATCTGTGGTG 3′
G8, 5′ TCATCAGTACAGCCCGCCTTCTAG 3′
G9, 5′ GGGGATCCAAGAAGAGATCCTAGGATGAGT 3′
G10, 5′ GATTCTTCTCCAACATGCAGCAAGCTTCATAATTAAACGACGGG 3′
G11, 5′ GGGGATCCATTGCTATCAGCGATACTAACATTCGTG 3′

Expression of YOL002c. We demonstrated previously that YOL002c is expressed at high levels in glucose-grown cells, whereas the expression is low in cells grown in the presence of oleate (17). Expression in each of these media is reduced in a strain in which the Oaf1p/Pip2p transcription factors are deleted (17). Because the majority of genes that are regulated under the control of Oaf1p and Pip2p are induced in cells grown in the presence of oleate, this result was unexpected. Therefore, we proceeded to examine the expression of YOL002c in cells grown in additional unsaturated or saturated fatty acids. The levels of YOL002c were determined by Northern analysis of poly(A)+ RNA isolated from wild-type and oaf1Δpip2Δ cells grown in glucose, glycerol, or glycerol plus the following fatty acids: C18:1 (oleate), C18:2, C6:0, C14:0, C16:0, C18:0, and C20:0.
Our results demonstrated that YOL002c is highly induced when cells are grown in the presence of C14:0 (myristate) and that this level of expression is dependent on the Oaf1p/Pip2p transcription factors (Fig. 1). In order to confirm the expression of YOL002c in the presence of different fatty acids, we studied the transcription using a YOL002c-lacZ reporter construct. The first construct that we prepared, based on the ORF provided by the yeast genome database, failed to produce any β-galactosidase activity. However, when we prepared an alternative construct in which we fused the second ATG, which is 30 nucleotides downstream from the predicted initiating ATG, with the lacZ gene (Fig. 2a), we measured a high level of β-galactosidase activity in extracts from cells grown in the presence of myristate (Fig. 2b). Thus, the correct initiating ATG is downstream from that given in the yeast genome database, and the YOL002c protein is 10 amino acids shorter than predicted. The protein contains 317 amino acids and has a predicted molecular mass of 36.3 kDa. The β-galactosidase activity in extracts from wild-type cells was ~5-fold higher in cells grown in the presence of myristate than in glycerol-grown cells (Fig. 2b, left panel). Furthermore, we found that this activity was ~50-fold lower in extracts from cells in which the Oaf1p/Pip2p transcription factors were deleted (Fig. 2b, right panel). These results support the data obtained by Northern analysis and provide additional evidence that transcription of YOL002c is increased in cells grown in the presence of myristate and that this regulation is dependent on Oaf1p and Pip2p.

YOL002cp Is Evolutionarily Conserved. The YOL002c gene is predicted to contain seven transmembrane domains using the Dense Alignment Surface program (39). In a search for proteins that share homology to YOL002c, we found that there are two ORFs in the S. cerevisiae database, YDR492w and YOL101c, that encode proteins homologous to this protein, especially in the membrane-spanning regions (Fig. 3A). This finding caused us to examine closely the promoter regions of these two genes, and we found that they contain ORE-like sequences at positions −302 to −328 (YDR492w) and −240 to −263 (YOL101c) upstream from their respective initiation codons. Furthermore, we demonstrated that both of these genes are transcriptionally induced by saturated fatty acids. YDR492w shows highest expression in cells grown in the presence of C18:0, whereas YOL101c is highly expressed in both C18:0 and C20:0. Expression of both of these genes is reduced in the oaf1Δpip2Δ strain (Fig. 3B). Through an extended search for proteins homologous to YOL002c, we found genes in Caenorhabditis elegans, Drosophila, and humans (CGI-45), each of which encodes a protein that shows striking homology to this ORF (for example, YOL002cp and the human protein CGI-45 share 29% identity) (Fig. 4). Thus far, the function of each of these proteins remains unknown. However, the fact that the proteins are conserved from yeast to man suggests that the encoded proteins have an important role.

Phenotype of a YOL002c Disruption Strain. In an attempt to gain insight into the function of the YOL002c protein, we prepared a strain in which the gene encoding this protein was disrupted (see "Materials and Methods"). yol002cΔ cells appeared to grow at normal rates in YPD medium; however, they grow poorly on plates provided with myristate as the sole carbon source (data not shown).
The disruption strain also grows poorly on non-fermentable carbon sources, such as glycerol or lactate (data not shown). We further found that the YOL002c deletion strain exhibits resistance to the polyene antibiotic, nystatin (data not shown). Nystatin forms a complex with sterols in the cell membrane of sensitive organisms, resulting in leakage of essential cellular metabolites (23). Because wild-type yeast strains are sensitive to this antibiotic (24), we hypothesize that deletion of the YOL002c gene results in a qualitative change in the sterol composition of the cell membrane. Introduction of the human CGI-45 gene into our yol002cΔ strain failed to rescue either of these mutant phenotypes (data not shown). ydr492wΔ and yol101cΔ cells grew at a similar rate as wild-type cells in the presence of myristate, whereas the growth of these mutant strains in the presence of nystatin was variable.

Since the sequence of the entire S. cerevisiae genome became available, several genome-wide expression analyses have been carried out to explore global changes in gene expression (25-28). Such analyses have been made possible by taking advantage of DNA microarray assays. In order to determine whether disruption of the YOL002c gene caused global changes in gene expression, we compared the transcriptional response of genes from a yol002cΔ strain with an isogenic wild-type strain grown in the presence of myristate. Labeled RNA from each strain was hybridized to Affymetrix S98 GeneChips, as described under "Materials and Methods." We screened the resulting data for genes that demonstrated increased expression in yol002cΔ cells compared with wild-type cells. This analysis revealed that a significant number of the genes whose expression was specifically induced in yol002cΔ cells compared with wild-type play a role in phosphate metabolism (Table III). Among the most highly induced genes are PHO5 (5.3-fold) and PHO11 (5.7-fold), both of which are known to be regulated by the PHO pathway that regulates genes involved in phosphate metabolism (29). Thus, YOL002cp appears to act as a negative regulator of the PHO system. In addition, our gene chip analyses revealed that a number of genes involved in fatty acid metabolism were induced in the yol002cΔ strain (for examples see Table III). Taken together, these data suggest that YOL002cp plays an important role in cellular metabolism and that cells lacking this protein have multiple defects.

The PHO Signal Transduction Pathway Appears to Be Involved in YOL002c Regulation. The regulation of PHO gene expression by Pi is accomplished via a cascade of events, of which the ultimate regulator is Pho4p, a protein that binds to each regulated PHO gene and activates its transcription. The Pho4p-binding site consists of the motif CACGTG and/or CACGTT (30). We identified a putative Pho4p-binding site (CACGTT) in the YOL002c gene promoter 365-370 nucleotides upstream from the initiating ATG codon. When yeast cells are grown under phosphate-rich conditions, Pho4p is phosphorylated by the Pho80p-Pho85p cyclin-cyclin-dependent kinase complex and is exported to the cytoplasm (31, 32). This results in a lack of expression of phosphate-responsive genes. In phosphate-depleted medium, however, Pho4p is localized to the nucleus and actively regulates the expression of PHO genes. In order to determine whether expression of YOL002c is affected by phosphate concentration, we measured the YOL002c mRNA levels in our W3031A wild-type cells grown in high or low phosphate media.
Depletion of phosphate from the culture media led to a significant reduction of the YOL002c mRNA levels when cells were grown on glucose- or myristate-containing media, when compared with cells grown in these media supplemented with phosphate (Fig. 5). This result suggests that the PHO signal transduction pathway may be involved in maintaining low YOL002c mRNA levels under phosphate starvation conditions, most likely via the Pho4p transcription factor.

The YOL002c Protein Is Involved in Polyphosphate Accumulation and in Regulation of Acid Phosphatase Activity. Polyphosphate is a linear polymer of up to hundreds of Pi residues linked by phosphoanhydride bonds. In S. cerevisiae polyphosphate accumulates in vacuoles, and when needed it is hydrolyzed to Pi by an exopolyphosphatase (33). Because disrupting the YOL002c gene appears to affect phosphate metabolism, we asked whether there is any difference in polyphosphate accumulation in the yol002cΔ strain compared with our wild-type strain. Polyphosphate chains in extracts from yol002cΔ and wild-type yeast were analyzed by PAGE followed by staining with toluidine blue. Total polyphosphate was greatly increased in the yol002cΔ strain, and the average size of the polyphosphate molecules was significantly larger than that in the wild-type strain (Fig. 6, lanes 2 and 5). This accumulation of polyphosphate is not found in strains in which the genes homologous to YOL002c (YDR492w and YOL101c) are disrupted (Fig. 6, lanes 3 and 4). A similar result was found in a strain in which both of these genes are disrupted (data not shown). Expression of the human CGI-45 gene in the yol002cΔ strain partially rescued the mutant phenotype (Fig. 6, lanes 6 and 7).

We proceeded to measure Pi uptake in both wild-type and yol002cΔ cells, and we found that when the cells are grown in medium lacking Pi, ³²PO₄ is taken up by yol002cΔ cells at a higher rate than in wild-type cells (data not shown). In both wild-type and yol002cΔ cells there is a linear uptake of Pi for at least 20 min when they are incubated in the presence of 0.1 mM phosphate. yol002cΔ cells, however, accumulate approximately twice as much phosphate during this time when compared with the wild-type cells (21 × 10³ and 13.4 × 10³ cpm, respectively). Furthermore, acid phosphatase activity is constitutively expressed in yol002cΔ cells even when the cells are grown in phosphate-rich medium, whereas in wild-type cells the activity is repressed in such medium (Fig. 7). The acid phosphatase activity of ydr492wΔ cells was similar to that measured in wild-type cells (Fig. 7), as was that measured in yol101cΔ cells (not shown). The human gene CGI-45 partially rescued the mutant phenotype (Fig. 7). These data suggest that cells lacking YOL002cp are defective in their ability to regulate levels of polyphosphate, and that this defect is partially rescued by the human gene CGI-45.

[Table III (fragment): examples of genes induced in yol002cΔ cells: phosphatidylethanolamine-binding protein, ×2.1 and ×2.6; YMR296c, serine palmitoyltransferase, ×1.9 and ×1.6; YER064c, involved with ergosterol-9 expression, ×2.0 and ×1.9; YIL160c, peroxisomal thiolase, ×2.0 and ×1.9.]

FIG. 5. Expression of YOL002c is reduced when the Pi concentration in the medium is low. Relative expression of YOL002c in wild-type cells grown in the absence of phosphate (− phosphate) or in medium containing 10 mM phosphate (+ phosphate). Levels of mRNA for YOL002c were quantitated as described in Fig. 1, taking the expression of cells grown in YPD in the presence of phosphate as 100%.
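For readers who want to follow the unit conversion behind the acid phosphatase measurements, the following is a minimal Python sketch of how an endpoint A405 reading from the p-nitrophenyl phosphate assay described under "Acid Phosphatase Activity Assay" could be converted into the units used here (micromoles of p-nitrophenol liberated per mg of protein in 1 ml of reaction mix). The standard-curve slope and the sample readings are hypothetical placeholders, not values from this study.

```python
# Minimal sketch: converting a background-corrected A405 reading from the
# p-nitrophenyl phosphate assay into umol p-nitrophenol per mg protein
# (per ml of reaction mix). All numerical values below are hypothetical.

def acid_phosphatase_activity(a405, std_curve_slope, protein_mg_per_ml):
    """Return activity in umol p-nitrophenol per mg protein.

    a405              -- background-corrected absorbance at 405 nm
    std_curve_slope   -- A405 per umol p-nitrophenol per ml, taken from a
                         p-nitrophenol standard curve run alongside the assay
    protein_mg_per_ml -- protein concentration of the extract in the reaction
    """
    umol_pnp_per_ml = a405 / std_curve_slope       # product formed per ml
    return umol_pnp_per_ml / protein_mg_per_ml     # normalize to protein

if __name__ == "__main__":
    # Hypothetical readings for extracts from cells grown in phosphate-rich medium
    for label, a405, protein in [("wild type", 0.08, 0.35), ("yol002c-delta", 0.95, 0.33)]:
        act = acid_phosphatase_activity(a405, std_curve_slope=1.8, protein_mg_per_ml=protein)
        print(f"{label}: {act:.2f} umol p-nitrophenol / mg protein")
```

A calculation of this kind, applied to replicate assays, is what underlies the per-strain activities compared in Fig. 7.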
DISCUSSION

We have shown that the YOL002c gene has an ORE in its promoter region and that it is regulated by the Oaf1p/Pip2p transcription factors. However, unlike the majority of genes that are regulated by these proteins, it is not induced by the unsaturated fatty acid oleate but rather is up-regulated in cells grown in the presence of a saturated fatty acid such as myristate. In addition, disruption of the YOL002c gene causes the cells to grow poorly in the presence of myristate, whereas wild-type cells grow at a normal rate on this medium. Furthermore, yol002cΔ cells are resistant to the polyene antibiotic, nystatin. Nystatin resistance has been associated with mutations that lead to changes in the sterols in the cell membrane (24), suggesting that the YOL002c protein may play a role (either direct or indirect) in maintaining the sterol composition of the yeast cell membrane.

By performing gene chip analysis on wild-type cells and yol002cΔ cells grown in the presence of myristate, we found that a number of genes involved in fatty acid metabolism are up-regulated in the mutant strain. In addition, the deletion strain demonstrated increased expression of genes involved in the PHO signaling pathway. There is a putative Pho4p-binding site (CACGTT) in the YOL002c promoter 365-370 nucleotides upstream from the initiating methionine. Genetic studies, together with a recent DNA microarray study, identified more than 20 PHO-regulated genes, most of which contained at least one copy of the Pho4p recognition site (28). Furthermore, this study examined the expression of genes in a strain in which the PHO85 gene, which encodes a cyclin-dependent kinase that interacts with Pho80p to regulate the activity of Pho4p, was disrupted. The data revealed that the mRNA level of YOL002c is decreased under these circumstances (web data from Ref. 28). We found that the expression of YOL002c is decreased when cells are grown in low phosphate media. This observation is consistent with the DNA microarray analysis data obtained for the PHO85 disruption strain and suggests that Pho4p may play a role as a negative regulator of YOL002cp.

Phosphate is an essential nutrient that is used in the biosynthesis of many cellular components, including nucleic acids, proteins, lipids, and sugars. The possibility that YOL002cp may play a role in the phosphate metabolic pathway led us to compare the amino acid sequence of this protein with that of other proteins involved in this complex pathway. We determined that the carboxyl-terminal portion of YOL002cp is homologous to similar regions of Pho80p-like cyclins that are known to interact with the Pho85p cyclin-dependent kinase (34) (Fig. 8). This finding raises the possibility that the multiple phenotypes associated with deleting the YOL002c gene may be mediated through the Pho85p cyclin-dependent kinase, since strains lacking PHO85 have a similar phenotype to yol002cΔ cells. Based on the published observations described above, and on our recent findings that many genes involved in phosphate metabolism are induced in a strain in which the YOL002c gene is disrupted (Table III), we postulate that YOL002cp may act as a negative regulator in the phosphate-dependent signal transduction pathway. This possibility is consistent with our finding that expression of YOL002c is repressed under phosphate-starvation conditions. We cannot rule out, however, the possibility that the YOL002c protein plays an altogether different role in the cell.
For example, a novel positive regulator of PHO5 expression, the PHO23 gene, was recently identified in a genetic screen for PHO81-dependent mutants with a constitutive PHO5 expression phenotype (35). Furthermore, these studies revealed that Pho23p is associated with a histone acetyltransferase activity as well as with the Rpd3p histone deacetylase complex. Rpd3p is the catalytic component of the Rpd3p histone deacetylase complex that also contains Pho23p, Sin3p, Sap30p, and many other proteins (36). Mutations in genes encoding these proteins result in multiple phenotypes including constitutive PHO5 expression, enhanced or derepressed silencing of rDNA, and enhanced silencing of telomeric and mating-type loci (37). Because deletion of YOL002c results in a strain that is incapable of repressing acid phosphatase activity (i.e., constitutive PHO5 expression), it is possible that this protein plays a similar role to that of Pho23p. Experiments designed to distinguish between these possible roles of the YOL002c protein are currently in progress.

FIG. 7. Acid phosphatase activity is derepressed in yol002cΔ cells grown in medium containing phosphate. Acid phosphatase was measured in wild-type, yol002cΔ (Δ002c), ydr492wΔ (Δ492w), and yol002cΔydr492wΔ (Δ002cΔ492w) strains, as well as in strains in which YOL002c was disrupted and human CGI-45 (2 separate clones; h1 and h2) was expressed. Units are expressed as micromoles of p-nitrophenol liberated by 1 mg of protein in 1 ml of reaction mix. Results are the mean of two experiments each measured in triplicate, and standard deviations are indicated.

Our attempts to determine the subcellular location of the YOL002c protein have been unsuccessful thus far. We believe that this is due to instability of the protein. We have attempted to introduce epitope tags within various regions of the protein, as well as at either end, and in each case, we have been unable to detect a product. We are currently in the process of purifying this protein in order to raise specific antibodies, in addition to raising peptide antibodies to hydrophilic portions of the protein. These tools will enable us to determine the subcellular location of YOL002cp, and thus will assist us in defining the function of this protein.

We have found that we can partially rescue the defects in phosphate metabolism in the yol002cΔ strain by introducing the human CGI-45 gene into this strain. On the other hand, this human gene failed to rescue the defects associated with fatty acid metabolism in the mutant strain. This raises the possibility that in yeast there is a single gene, YOL002c, that has multifunctional metabolic roles, whereas in humans these various roles may be carried out by different genes. Further analysis of the human genome as the sequence becomes available may reveal additional homologs of YOL002c, whose function is involved in these additional cellular roles.
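As an illustration of the kind of consensus-site scan that flags a putative Pho4p-binding site (CACGTG or CACGTT) such as the one discussed above, the sketch below searches a promoter sequence for the motif and reports its distance upstream of the initiating ATG. The promoter string is a made-up placeholder rather than the real YOL002c promoter, and only the plus strand is scanned.

```python
# Minimal sketch of a Pho4p consensus-site scan (CACGTG or CACGTT) over a
# promoter sequence. The sequence below is a hypothetical placeholder.
import re

PHO4_CONSENSUS = re.compile(r"CACGT[GT]")

def find_pho4_sites(promoter_seq, atg_index):
    """Return (motif, distance upstream of the ATG) for each plus-strand match.

    promoter_seq -- 5'->3' sequence that includes the initiating ATG
    atg_index    -- index of the A of the initiating ATG within promoter_seq
    """
    hits = []
    for m in PHO4_CONSENSUS.finditer(promoter_seq.upper()):
        if m.start() < atg_index:                      # keep only upstream matches
            hits.append((m.group(), atg_index - m.start()))
    return hits

if __name__ == "__main__":
    # Hypothetical promoter fragment with one CACGTT site upstream of the ATG
    promoter = "TTATCG" + "CACGTT" + "A" * 20 + "ATG"
    print(find_pho4_sites(promoter, promoter.rfind("ATG")))
```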
2018-04-03T05:16:46.314Z
2002-05-31T00:00:00.000
{ "year": 2002, "sha1": "679f9cc4a5fc865c6cfe623f97f3851f036c23a4", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/277/22/19609.full.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "9fdf67dc9e4195dc33ad34fef9b498f3702c28ef", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
226761706
pes2o/s2orc
v3-fos-license
Statistical Modeling of Solar Energy

Renewable energy comprises solar, wind, tidal, biomass and geothermal energies. Use of renewable energy resources as a substitute for fossil fuels inevitably reduces the environmental footprint. Therefore, integration of renewable energy to the power grid, smart grid planning and grid-storage preparations are some of the major concerns in all developing countries. However, unpredictability in renewable energy resources makes the situation challenging. In light of this, the present study aims to develop a solar energy forecasting model to estimate future energy supply for a smooth integration of solar energy to the current electric grids. A suite of eight probability models, namely exponential, gamma, normal, lognormal, logistic, loglogistic, Rayleigh and Weibull distributions, is used. While the model parameters are estimated from the maximum likelihood estimation method, the performance of the candidate distributions is tested using three goodness of fit tests: Akaike information criterion, Chi-square criterion, and K-S minimum distance criterion. Based on the sample data obtained from the Charanka Solar Park, Gujarat, it is observed that the Weibull model provides the best representation of the observed solar radiations. The study concludes with the analysis of forecasted solar energy and its possible role in replacing thermal energy resources.

heat. It contributed almost 20% to humans' global energy consumption and 25% to global electricity generation in 2015 and 2016, respectively (REN 21 homepage 2019; Global energy homepage 2019). India is one of the largest renewable energy producing countries, accounting for about 35% of the total installed power capacity in the electricity sector. The target by 2030, as stated in the Paris Agreement, is to achieve 40% of total India's electricity generation from non-fossil fuel sources (Global energy homepage 2019). Despite the installation of many renewable energy plants, their integration to the main power grids is crucial in harnessing renewable energy applications (REN 21 homepage 2019; Global energy homepage 2019; Paris Agreement homepage 2019; Rather 2018; Zhang et al. 2015). The unpredictability of renewable energy resources, such as wind speed and solar radiation, makes integration difficult, as the current electric grids cannot operate unless there is a mutual balance between supply and demand (Zhang et al. 2015; Su et al. 2012; Jacobson and Delucchi 2011; Delucchi and Jacobson 2011; NREL homepage 2019). An imbalance may result in voltage fluctuations or worse (NREL homepage, https://www.nrel.gov/, 2019). Other problems related to renewable energy sources include the unavailability of solar power at night, during which the power consumption is at its peak, and the lack of efficient energy storage systems to save the excess electricity production (NREL homepage 2019). In addition, as renewable energy plants are usually located far away from the consumption location, transportation of power may cause unwanted transmission losses (Zhang et al. 2015; Su et al. 2012; Jacobson and Delucchi 2011; Delucchi and Jacobson 2011; NREL homepage 2019). Several methods are employed for the forecasting of solar irradiation, considering numerical weather prediction, artificial neural networks (ANN), linear and non-linear stochastic models, remote sensing based models and hybrid models (Ferrari et al. 2013; Zhang et al. 2015; Inman et al. 2013).
Comparison of several autoregressive models (AR, ARMA, ARIMA) (Ferrari et al. 2013) and neural network based models such as Radial Basis Function Neural Networks (RBFNN), Least Square Support Vector Machine (LS-SVM), k-Nearest Neighbour (kNN), and Weighted kNN (WkNN) methods (Zhang et al. 2015) have been implemented as forecasting engines. Use of empirical probability models (Pasari 2015, 2018) could also be tried for energy forecasting. In summary, two main categories of studies have evolved, one focusing on the smart grid or grid energy storage technology and another aiming at forecasting of renewable energy (Rather 2018; Zhang et al. 2015; Su et al. 2012; Jacobson and Delucchi 2011; Delucchi and Jacobson 2011; NREL homepage 2019). The present study considers the latter issue and concentrates on the statistical modeling of solar power output at Charanka Solar Park, Gujarat. The aim is to select the best-fit probability distribution(s) among exponential, gamma, normal, lognormal, logistic, log-logistic, Rayleigh and Weibull models to forecast solar radiations.

Data Description

Solar radiation, the radiant energy emitted by the sun, is the primary data for the present analysis. When solar radiation enters the Earth's atmosphere, a fraction of the radiation reaches the surface directly. Such radiation is called beam or direct radiation. The remaining fraction may be scattered or absorbed by air molecules, clouds or aerosols. A part of such scattered radiation reaches the ground and is known as diffuse radiation. Another part of the direct radiation hitting the surface gets reflected and may reach another surface, such as a solar collector or photovoltaic panel. Such radiation is called albedo. The sum of these three components is termed global radiation (Rather 2018). The quantum of global irradiation collected per unit area is an important parameter for solar power forecasts. Direct Normal Irradiance (DNI) is the amount of solar radiation received per unit area by a surface that is always held perpendicular to the rays coming in a straight line from the direction of the sun at its current position in the sky. Diffuse Horizontal Irradiance (DHI), on the contrary, is the amount of radiation received per unit area by a surface that does not arrive on a direct path from the sun, but has been scattered by molecules and particles in the atmosphere and comes equally from all directions. Global Horizontal Irradiance (GHI) is the total amount of shortwave radiation received from above by a surface horizontal to the ground (Rather 2018). The GHI may be calculated from DNI and DHI as

GHI = DNI × cos(θ) + DHI,

where θ is the solar zenith angle (Rather 2018; Zhang et al. 2015). The data of the Charanka Solar Power Park (23.95°N, 71.15°E) in Gujarat was procured from the National Solar Radiation Database of the National Renewable Energy Laboratory (NREL) (NREL homepage 2019). It comprises hourly data of all the variables (e.g., DNI, DHI, GHI, and many others) affecting the solar irradiation from 2000 to 2014. It is observed that, depending on the season, about 12 h of daily solar irradiation data (06:30-18:30 h) contain non-zero positive entries of DNI, DHI and GHI values. There are many zero values in the sample data, indicating that the day had not started or had ended. To maintain consistency in the analysis, 8 h of daily data (09:30-16:30 h) are considered for modeling. With this filtering, 2920 data points are obtained yearly. All DNI, DHI and GHI data are fitted separately to identify the best-fit probability model(s) for solar power forecast.
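A minimal computational sketch of the GHI relation given above (GHI = DNI × cos θ + DHI, with θ the solar zenith angle) is shown below; the input values are illustrative and are not taken from the Charanka dataset.

```python
# Minimal sketch of the GHI relation: GHI = DNI * cos(theta) + DHI,
# where theta is the solar zenith angle. Values are illustrative only.
import math

def global_horizontal_irradiance(dni, dhi, zenith_deg):
    """Combine direct-normal and diffuse-horizontal irradiance (W/m^2)."""
    return dni * math.cos(math.radians(zenith_deg)) + dhi

if __name__ == "__main__":
    # e.g. DNI = 700 W/m^2, DHI = 120 W/m^2, zenith = 35 degrees -> ~693 W/m^2
    print(global_horizontal_irradiance(dni=700.0, dhi=120.0, zenith_deg=35.0))
```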
It may be noted that the original datasets also contain information on temperature, pressure, relative humidity and precipitation, among others, although those are not used in the present analysis.

Methodology and Results

On a temporal scale, solar power forecasting may be classified into nowcasting (forecasting up to a few hours in advance), short-term forecasting (forecasting up to a few days in advance) and long-term forecasting (forecasting months or years ahead) (Rather 2018; Zhang et al. 2015; Su et al. 2012; Jacobson and Delucchi 2011; Delucchi and Jacobson 2011). Depending upon the range of forecasts required, forecasting models have been developed accordingly, incorporating the parameters that affect solar radiation in that range (Zhang et al. 2015). Both short- and long-term power forecasts have their specific applications. While system operators use short-term forecasts in unit commitment analysis and determining reserve unit requirements, solar farm owners use such forecasts for bidding strategy planning (in electricity markets) and for dealing with voltage imbalance issues while integrating solar power supply into major thermal power distribution networks (Rather 2018; Zhang et al. 2015; Su et al. 2012; Jacobson and Delucchi 2011; Delucchi and Jacobson 2011; NREL homepage 2019). The long-term solar power forecasts are particularly important for smart city planning and negotiating contracts with financial entities or utilities (Zhang et al. 2015). Statistical approaches, as in this study, are preferred for long-term forecasts.

The methodology here comprises three major steps: probability model assumption, parameter estimation, and model validation. Based on some graphical representation of the data, eight probability models are considered to fit DNI, DHI and GHI data separately. Model parameters of the studied distributions are estimated from the classical maximum likelihood estimation (MLE) method, whereas the model selection is performed based on three goodness of fit tests, namely the Akaike information criterion (AIC), the Chi-square criterion and the K-S minimum distance criterion. The AIC test is a simple modification of the log-likelihood scores, and it accounts for the additional number of parameters in the competing models. The Kolmogorov-Smirnov (K-S) test, in contrast, is a non-parametric approach. The Chi-square test determines significant differences between the expected and observed frequencies in one or more categories (Ferrari et al. 2013; Zhang et al. 2015). The results of estimated parameters and selection scores corresponding to average GHI data are presented in Table 16.1. It may be noted that each parameter in the studied distributions has its respective role (e.g., shape, scale, and location) (Pasari 2015, 2018). Moreover, as with the results in Table 16.1 for GHI data, one may obtain results corresponding to DHI and DNI data using simple Excel tools along with Matlab plots. It is observed that the Weibull model consistently provides the best representation for DHI, DNI and GHI data. The pictorial representation of the model fit for DHI, DNI and GHI data of year 2000 is illustrated in Figs. 16.1, 16.2 and 16.3. With the above process of finding the best-fit probability distribution, one can now analyze solar irradiation data for future estimation. As a secondary illustration, forecasting of solar irradiance may be carried out using a simple linear regression model (Rather 2018; Zhang et al. 2015; Su et al. 2012; Jacobson and Delucchi 2011).
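The following is a minimal sketch of one iteration of the model-selection loop described above: a candidate distribution is fitted to irradiance values by maximum likelihood (here with scipy, rather than the Excel/Matlab tools mentioned in the text) and scored with AIC and the K-S distance. The data are simulated stand-ins rather than the NREL records, and only four of the eight candidate families are included.

```python
# Minimal sketch: fit candidate distributions to (simulated) GHI values by MLE
# and score them with AIC and the K-S statistic. Repeating this loop over all
# eight candidate families and comparing scores mirrors the procedure above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ghi = rng.weibull(2.2, size=2920) * 600.0        # stand-in for one year of hourly GHI

candidates = {
    "weibull": stats.weibull_min,
    "gamma": stats.gamma,
    "lognormal": stats.lognorm,
    "rayleigh": stats.rayleigh,
}

for name, dist in candidates.items():
    params = dist.fit(ghi, floc=0)               # MLE with location fixed at 0
    loglik = np.sum(dist.logpdf(ghi, *params))
    k = len(params) - 1                          # loc fixed, so not counted as estimated
    aic = 2 * k - 2 * loglik
    ks = stats.kstest(ghi, dist.cdf, args=params).statistic
    print(f"{name:10s}  AIC={aic:10.1f}  K-S={ks:.4f}")
```

The distribution with the smallest AIC and K-S distance would be retained as the best-fit model, analogous to the selection of the Weibull model reported in Table 16.1.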
Summary and Conclusions

Statistical modeling of renewable energy plays a pivotal role in the future energy sector and therefore its importance can never be disregarded. In this work, first the best-fit probability model of solar irradiation data is identified using eight popular probability distributions. Then a linear regression analysis is carried out to forecast solar energy for the Charanka Solar Park, Gujarat. During the course of the study, the following important observations are noted:

• As the day progresses, the amount of DHI, DNI and GHI values increases till afternoon and then decreases. This is because the zenith angle gradually decreases to zero as the day advances to afternoon, and then the zenith angle gradually increases as the day advances into evening followed by night.
• The best-fit distribution for a particular hour over the months remains consistent, although with varying means. This may be attributed to the variation in the amount of solar irradiation received on account of the seasons in respective years.
• The standard deviations of the fitted distributions are very high. Even the MSE (Mean Squared Error) for the regression is also high, probably due to the small number of data points.

As a conclusion, the present work has provided a layout to develop a solar energy forecasting model towards the endeavor of estimating future energy supply for a smooth integration of solar energy to the current electric grids. Results, based on the limited data, are preliminary and require further analysis for a stringent conclusion.
2020-07-30T02:07:27.232Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "14816ce38b319e5629e26a8aafca428522534f7c", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/978-3-030-44248-4_16.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "65579aa29836953c441bf07eede6b4d467ce45a3", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Environmental Science" ] }
2961241
pes2o/s2orc
v3-fos-license
Pharmacokinetics of surotomycin from phase 1 single and multiple ascending dose studies in healthy volunteers

Background: Surotomycin, a novel, orally administered, cyclic, lipopeptide antibacterial in development for the treatment of Clostridium difficile-associated diarrhea, has demonstrated minimal intestinal absorption in animal models.
Methods: Safety, tolerability, and plasma pharmacokinetics of single and multiple ascending oral doses (SAD/MAD) of surotomycin in healthy volunteers were characterized in two randomized, double-blind, placebo-controlled, phase 1 studies.
Results: Participants were sequentially enrolled into one of four SAD (500, 1000, 2000, 4000 mg surotomycin) or three MAD (250, 500, 1000 mg surotomycin twice/day for 14 days) cohorts. Ten subjects were randomized 4:1 into each cohort to receive surotomycin or placebo. Surotomycin plasma concentrations rose as dose increased (maximum plasma concentration [Cmax]: 10.5, 21.5, 66.6, and 86.7 ng/mL). Systemic levels were generally low, with peak median surotomycin plasma concentrations observed 6–12 h after the first dose. In the MAD study, surotomycin plasma concentrations were higher on day 14 (Cmax: 25.5, 37.6, and 93.5 ng/mL) than on day 1 (Cmax: 6.8, 11.0, and 21.1 ng/mL for increasing doses), indicating accumulation. In the SAD study, <0.01% of the administered dose was recovered in urine. Mean surotomycin stool concentration from the 1000 mg MAD cohort was 6394 μg/g on day 5. Both cohorts were well tolerated, with all adverse events reported as mild to moderate.
Conclusion: Both SAD and MAD studies of surotomycin demonstrated minimal systemic exposure, with feces the primary route of elimination following oral administration, consistent with observations with similar compounds, such as fidaxomicin. Results of these phase 1 studies support the continued clinical development of surotomycin for the treatment of Clostridium difficile-associated diarrhea.
Trial registration: NCT02835118 and NCT02835105. Retrospectively registered, July 13 2016.

Background

Clostridium difficile-associated diarrhea (CDAD) is a key cause of hospital- and community-acquired diarrhea and is associated with longer length of hospital stay, increased medical costs, and high rates of morbidity and mortality [1-3]. Attributable healthcare costs of CDAD in the United States are estimated to be between $433 million and $797 million per year [4]. Over the past decade, the incidence and severity of CDAD has increased across the United States, Canada, and Europe [5-7]. Despite having a clinical response rate of ~73 to 85%, vancomycin and metronidazole treatments are associated with recurrent CDAD in up to 45% of patients [8-11]. Aggressive vancomycin and metronidazole treatment is also associated with disruption of the intestinal microbiota and can promote colonization by vancomycin-resistant enterococci, highlighting the need for novel treatment options [12, 13]. An ideal agent for the treatment of CDAD should be associated with low levels of systemic absorption, resulting in high concentrations of the drug in the colon, combined with a narrow spectrum of activity against C. difficile to limit its impact on the established intestinal microbiota. Surotomycin (CB-183,315; MK-4261) is a novel, orally administered, cyclic, lipopeptide antibacterial currently in phase 3 development for the treatment of patients with CDAD [14].
Surotomycin has a fourfold greater in vitro potency than vancomycin against C. difficile (minimum inhibitory concentration at which 90% of the isolates were inhibited [MIC90] = 0.5 μg/mL vs 2.0 μg/mL) and other Gram-positive bacteria, with minimal impact on the Gram-negative organisms of the intestinal microbiota [15, 16]. Surotomycin, given orally, has been shown to be highly effective against both initial and relapsing hamster CDAD, with potency similar to vancomycin [14]. Surotomycin has previously demonstrated minimal intestinal absorption (<1%) in rats and dogs (Yin N et al., ICAAC 2010, unpublished data). The objectives of these phase 1 studies were to characterize the safety, tolerability, and plasma pharmacokinetic (PK) profile of single and multiple ascending oral doses of surotomycin in healthy volunteers.

Study design and participants

Written informed consent was obtained from all participants, and the protocols were approved by the institutional review board of the study site (West Coast Clinical Trials, LLC, Cypress, CA, USA). These randomized, double-blind, placebo-controlled, phase 1 studies consisted of a single ascending dose (SAD) study (protocol number LCD-SAD-08-04, NCT02835105) and a multiple ascending dose (MAD) study (protocol number LCD-MAD-08-08, NCT02835118). Both studies were conducted in accordance with the ethical principles originating from the Declaration of Helsinki and its amendments, consistent with Good Clinical Practices and local regulatory requirements. Male and female subjects aged 18-75 years were eligible for these studies if considered by the investigator to be in good health with unremarkable current and past medical history before the first day of study. Subjects were required to have no clinically significant abnormalities in prestudy physical examination, electrocardiogram (ECG), and laboratory evaluations. Subjects with findings outside of the normal range were included in the study only if these findings were deemed not clinically significant by the investigator or medical monitor. Subjects had no evidence of prior chronic gastrointestinal inflammatory disease. Exclusion criteria included incidence of C. difficile disease within 1 year before study entry (SAD); prior exposure to surotomycin (MAD); known hypersensitivity to lipopeptide antibacterials; any comorbid disease judged by the investigator to be clinically significant; any concomitant medication, except low-dose aspirin, paracetamol, and multivitamins, in the 2 weeks before dosing (investigator- and medical monitor-approved concomitant medications were permitted in patients aged 49 years and above); or any antibiotic within 30 days before the first dose of the study drug. Women who were unwilling or unable to use an acceptable method to avoid pregnancy or who were pregnant or lactating during the conduct of the study and until 1 month after the last surotomycin dose were excluded.

For the SAD study, eligible subjects were sequentially enrolled into 1 of 4 dose cohorts: 500 mg, 1000 mg, 2000 mg, or 4000 mg surotomycin (Fig. 1a). In total, 10 subjects were intended to be randomized into each dose cohort in a 4:1 ratio to receive surotomycin (n = 8) or placebo (n = 2). At least 24 h before dosing the first full cohort, 2 subjects received surotomycin or placebo (randomized 1:1). The remaining 8 subjects were randomly assigned to surotomycin (n = 7) or placebo (n = 1) only if no significant safety findings or clinically significant abnormal laboratory values were reported for the first 2 subjects.
Randomization was assigned by blinded study personnel and stratified by gender to achieve an equal number of male and female subjects in each cohort. Subjects aged 18-49 years and 50-75 years were equally distributed in each dosing cohort. Subjects received a single dose of surotomycin or placebo during the morning of day 1 (1 h after breakfast) and were followed as inpatients through day 4, when they were discharged to return for a follow-up visit on day 8.

Eligible subjects were recruited and sequentially enrolled into 1 of 3 MAD dose cohorts: 250 mg, 500 mg, or 1000 mg surotomycin twice daily (BID) (Fig. 1b). A total of 10 subjects were randomized (4:1) to receive surotomycin (n = 8) or placebo (n = 2) in each cohort. Randomization was also stratified by gender and distributed by age to ensure that dosing cohorts were balanced. Subjects were dosed BID, once in the morning and once in the evening, for 14 consecutive days, with at least 8 ounces of water and approximately 1 h after breakfast and 1 h after dinner. Subjects were observed as inpatients through day 15 and were then discharged to return for follow-up on day 21. In both studies, dose escalation to the next cohort occurred sequentially, and only after review of key safety data obtained from the previous cohort indicated that it was safe to proceed. The investigator and all personnel involved in the clinical or analytical evaluations of the study remained blinded to treatment until all cohorts had completed and the database was locked. Treatment doses were administered according to a randomization code by a pharmacist who was not an investigator or involved in study evaluations.

Pharmacokinetic analysis

Any subject receiving at least one full dose of the study drug was included in the PK analysis population. PK analysis during the SAD study was conducted on serial plasma samples collected predose and at 30 min, 1, 2, 4, 6, 8, 12, 24, 48, and 72 h after dosing, and analyzed using a validated liquid chromatography-tandem mass spectrometry (LC/MS/MS) method, with a lower limit of quantification (LLOQ) of 1 ng/mL. Urine and stool samples were also collected during this period and analyzed using LC/MS/MS. Urine and feces were collected for 7 days after dose administration. The samples were analyzed using an API 5000 triple quadrupole mass spectrometer (Applied Biosystems/Sciex, Concord, ON, Canada) using electrospray ionization in positive ion mode. Analyst™ software (version 1.4.2, Applied Biosystems, Foster City, CA, USA) was used for data acquisition. In total, 8 calibration solutions with a range of 1.00 ng/mL to 1000 ng/mL were used as internal standards in addition to a blank. Inter-assay bias was determined to be −2.6 to 2.5%, with inter-assay precision of 3.9 to 9.4%. During the MAD study, serial plasma samples for PK analysis were collected predose and at 30 min, 3, 6, and 9 h after the morning dose on days 1 and 14, and before the morning dose on days 4, 7, 10, and 12 (trough levels). Stool was also collected in its entirety from all bowel movements for PK analysis following the morning dose on day 5 through predose on day 6 in the 1000-mg BID dose cohort. Samples were analyzed using the same bioanalytical LC/MS/MS method as in the SAD study (LLOQ of 1 ng/mL).

Plasma PK parameters were calculated using standard noncompartmental methods in a validated version of WinNonlin (version 5.2, Pharsight, Mountain View, CA, USA). All concentrations that were below the LLOQ prior to the first detectable concentration were assigned a value of 0. All concentrations that were below the LLOQ after the first quantifiable concentration were designated as missing and replaced with a period. Actual sample collection times were used in the analysis of the concentration versus time profiles for individual subjects. Integration of plasma concentrations versus time was conducted using the linear-up, log-down function in WinNonlin. The following parameters were determined: maximum plasma concentration (Cmax), time of Cmax (Tmax), area under the concentration-time curve (AUC) from 0 to the last measurable plasma concentration (AUC0-t), AUC from 0 to infinity (AUC0-∞), percent of dose excreted in the urine, and terminal exponential half-life (t½).
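As a rough illustration of the "linear-up, log-down" trapezoidal rule named above (the study itself used WinNonlin), the sketch below computes Cmax, Tmax, and AUC0-t from a hypothetical concentration-time profile laid out on the SAD sampling schedule; the concentration values are invented for illustration only.

```python
# Minimal sketch of a linear-up, log-down trapezoidal AUC(0-t) calculation,
# as used in noncompartmental PK analysis. Data below are hypothetical.
import numpy as np

def auc_linear_up_log_down(t, c):
    """AUC from time 0 to the last measurable concentration."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    auc = 0.0
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        if c[i] >= c[i - 1] or c[i] <= 0 or c[i - 1] <= 0:
            auc += dt * (c[i] + c[i - 1]) / 2.0                      # linear (rising or zero)
        else:
            auc += dt * (c[i - 1] - c[i]) / np.log(c[i - 1] / c[i])  # log (declining)
    return auc

if __name__ == "__main__":
    times = [0, 0.5, 1, 2, 4, 6, 8, 12, 24, 48, 72]                  # h, SAD sampling scheme
    conc = [0, 0, 1.2, 3.0, 6.5, 10.5, 9.8, 7.4, 3.1, 0.9, 0.2]      # ng/mL, made-up profile
    cmax, tmax = max(conc), times[conc.index(max(conc))]
    auc = auc_linear_up_log_down(times, conc)
    print(f"Cmax={cmax} ng/mL at Tmax={tmax} h, AUC0-t={auc:.1f} ng*h/mL")
```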
Sample sizes were chosen based primarily on clinical considerations and were considered sufficient for the exploratory evaluation of single- and multiple-dose safety and PK.

Safety analysis

Safety was monitored throughout the studies and on return for follow-up assessment on day 8 (SAD group) or day 21 (MAD group), by observation or reports of adverse events (AEs), and by changes in physical examination findings, vital signs, ECG, and laboratory tests. Concomitant medications and procedures were recorded. Any subject who received any dose of the study drug was included in the safety analysis population.

Statistics

Statistical methods were primarily descriptive and no formal hypothesis tests were planned or completed. Data were summarized and analyzed using Statistical Analysis System SAS® (version 9.1.3; SAS Institute, Cary, NC, USA).

Subject demographics and characteristics

Both study groups were enrolled to completion, with four 10-subject cohorts randomized in the SAD study and three cohorts of the same size randomized in the MAD study. All 40 subjects in the SAD group received all scheduled study medication. One subject (2000 mg) withdrew from the study early and missed the follow-up visit due to a family emergency. In all, 28 of 30 subjects completed the MAD study as planned. One subject (500 mg BID) discontinued treatment due to AEs of anxiety and dyspnea after 5 doses, and one subject (250 mg BID) did not complete the day-21 follow-up visit (considered lost to follow-up). Subject demographics and baseline characteristics are summarized in Table 1. Half of all subjects enrolled in both studies were male, and treatment cohorts were generally well balanced with respect to age, race, and body mass index (BMI). In the SAD study, across all subjects who received surotomycin, the mean age was 42.9 years (range: 19 to 69 years) and mean BMI was 25.5 kg/m². By comparison, subjects receiving placebo in the SAD study had a mean age of 28.3 years (range: 18 to 58 years) and a mean BMI of 23.9 kg/m². For all subjects who received surotomycin in the MAD study, the mean age was 47.6 years (range: 20 to 70 years) and mean BMI was 27.4 kg/m². Participants in the MAD study receiving placebo had a mean age of 48.8 years (range: 25 to 70 years) and a BMI of 25.8 kg/m².

Pharmacokinetic results

The median plasma concentration-time profiles of surotomycin following administration of single oral doses (SAD) and following administration of a single dose and repeated doses (MAD) are presented in Fig. 2.
However, although quantifiable levels of surotomycin were observed in 4 subjects in the SAD 2 g cohort and 6 subjects in the 4 g cohort, a full plasma concentration-time curve was obtained in only 2 subjects, 1 in each dose cohort. Following a brief lag after the administration of a single dose, median plasma concentrations of surotomycin were quantifiable and seemed to increase with an increase in dose (Fig. 2a and b). Peak median plasma surotomycin concentrations were observed from 6 to 12 h after the first single dose in both the SAD and MAD groups. For patients receiving repeated surotomycin dosing, the median plasma concentration versus time profile on day 14 was flat (Fig. 2c), indicating that steady state had been reached and that the concentrations were essentially steady or constant during the dosing interval. The PK parameters of surotomycin following administration of single oral doses (SAD) and a single oral dose and repeated oral doses (MAD) are summarized in Table 2. In the SAD group, median Cmax ranged from 10.5 ng/mL in the 500-mg dose cohort to 86.7 ng/mL in the 4000-mg dose cohort, and the AUC0-∞ ranged from 317 ng*h/mL in the 500-mg dose cohort to 2572 ng*h/mL in the 4000-mg dose cohort. While the overall exposure of surotomycin as measured by Cmax and AUC0-∞ increased with increases in dose, the median elimination half-life was independent of the dose administered and ranged between 14.8 and 21.1 h. In the MAD study, Cmax on day 1 (following a single dose) was consistent with the findings of the SAD group and ranged from 6.8 ng/mL to 21.0 ng/mL. On day 14, the median Cmax ranged from 25.5 ng/mL to 93.5 ng/mL, possibly suggesting an accumulation of surotomycin in the body following repeated dosing. No additional PK parameters for surotomycin could be computed due to the nature of the plasma concentration-time profile on days 1 and 14 and the limited sampling. Figure 3 shows the dose-normalized Cmax and AUC0-∞ for surotomycin after a single dose (SAD), indicating that surotomycin exposure increased in a dose-dependent manner. In the SAD study, concentrations of surotomycin in the urine were detected in 12 of 32 subjects treated, and the median cumulative amount of the administered dose recovered in the urine was <0.01% over a 7-day period. The median cumulative fraction of the administered surotomycin dose excreted in the feces over a 4-day period ranged from 20.8 to 60.2% after a single dose (SAD). Stool levels of surotomycin increased proportionally with the dose administered. In the MAD study, the mean concentration of surotomycin from the 1000-mg BID dose cohort on day 5 was 6394 μg/g.

Safety analysis

During the SAD study, a total of 13 (32.5%) subjects experienced at least one AE. Ten subjects had AEs that were assessed as treatment-related, including 9 (28.1%) of the 32 subjects who received surotomycin and 1 (12.5%) of the 8 subjects who received placebo. The most commonly reported treatment-related AE was diarrhea, reported in 5 (15.6%) surotomycin subjects and none of the placebo subjects (Table 3). Increased transaminases were reported in 2 (6.3%) of the 32 subjects who received surotomycin. Although a total of 18 (60.0%) subjects experienced at least one AE during the MAD study, all reported AEs were considered unrelated to the study drug.
The most common AE in the MAD group was headache, reported in 2 (8.3%) of the 24 subjects who received surotomycin (1 each in the 250-mg BID and 500-mg BID dose cohorts) and 2 (33.3%) of the 6 placebo subjects (Table 3). Constipation, back pain, oropharyngeal pain, and pruritus were each reported in 2 (8.3%) of the subjects who received surotomycin. All reported AEs in both studies were mild to moderate in severity. No serious AEs were reported, and none of the subjects in the SAD group discontinued the study due to AEs. One subject in the MAD group (500 mg BID) discontinued due to transient moderate anxiety and associated mild dyspnea. The events were reported 2 h after the 5th dose of study treatment. The anxiety resolved within 4 h and the dyspnea within 3 min, both without the need for additional treatment. Discussion In both the SAD and MAD studies, C max and AUC values were low, demonstrating limited systemic exposure, and less than 0.01% of the administered compound was excreted in the urine. In addition to the clinical findings, complementary information obtained from preclinical animal studies with radiolabeled surotomycin suggests poor systemic absorption and exposure, without elimination through the hepatobiliary route (Merck & Co., Inc., Kenilworth, NJ, USA, unpublished data). These results support the negligible absorption of surotomycin observed in these clinical trials. Peak median surotomycin plasma concentrations were achieved between 6 and 12 h post-dose, also suggesting that a small amount is absorbed in the lower gastrointestinal tract in healthy volunteers. In the SAD study, there appeared to be a degree of dose non-linearity in surotomycin plasma concentrations, which appeared to peak at the 2000-mg dose and decrease at the 4000-mg dose (Fig. 3). However, these data should be interpreted with care due to the very low concentrations detected in this study. In the SAD study at the 2000-mg and 4000-mg doses, a full plasma concentration-time curve could be determined from only two subjects. In addition, the concentrations detected were less than 3-fold the LLOQ, and thus no inferences could be made from these data. As expected for a molecule with minimal absorption, stool levels of surotomycin increased proportionally with the dose administered. During the SAD and MAD studies, substantial concentrations of surotomycin were observed in the stool, indicating that this is the major route of elimination for orally dosed surotomycin. Assuming dose proportionality, stool concentrations of surotomycin should greatly exceed the MIC 90 for C. difficile, suggesting that 250-mg BID oral dosing of surotomycin will be effective for the treatment of CDAD. The low systemic exposure observed here was expected, as oral bioavailability of surotomycin in rats has previously been shown to be very low (Yin N et al., ICAAC 2010, unpublished data). Single oral doses of surotomycin (500 mg, 1000 mg, 2000 mg, or 4000 mg) were well tolerated, with all AEs reported as mild to moderate in severity. The most commonly reported AE in the SAD group was diarrhea, but this was not dose-dependent.
Differences in AEs reported between the surotomycin cohorts and the placebo cohort may have been due to differences in the ages of subjects in the two groups (mean age ± standard deviation: surotomycin, 42.9 ± 16.89 and placebo, 28.3 ± 13.25), consistent with the observation that older populations have increased susceptibility to gastrointestinal complications of comorbid disease [17]. Multiple oral doses of surotomycin (250 mg, 500 mg, 1000 mg BID) were also well tolerated in healthy adult volunteers, with all AEs reported as mild to moderate in severity and no AEs reported as being related to the study drug. Conclusion In summary, single (500 mg, 1000 mg, 2000 mg, 4000 mg) and multiple doses (250 mg, 500 mg, 1000 mg BID) of the novel lipopeptide antibiotic surotomycin demonstrated minimal systemic exposure, with the feces being the primary route of elimination following oral administration. These results are consistent with those observed with similar compounds, such as fidaxomicin. Results of these phase 1 studies support the continued clinical development of surotomycin for the treatment of CDAD. In addition to defining the efficacy profile of surotomycin against CDAD, currently being addressed in phase 3 trials, additional studies will determine the in vivo effects of surotomycin on intestinal microbiota.
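As a closing numerical check on the dose-proportionality statement made for the SAD study (Fig. 3), the dose-normalised exposure and a simple power-model slope can be recomputed from the two cohorts whose medians are quoted in the results (500 mg and 4000 mg). This is only a rough sketch: the intermediate cohorts are omitted because their medians are not quoted here, so the apparent non-linearity around 2000 mg discussed above is not captured.

```python
import math

# Reported SAD medians for the 500-mg and 4000-mg cohorts
doses = [500, 4000]          # mg
cmax  = [10.5, 86.7]         # ng/mL
auc   = [317.0, 2572.0]      # ng*h/mL (AUC0-inf)

for d, c, a in zip(doses, cmax, auc):
    print(f"{d} mg: Cmax/dose = {c / d:.4f} ng/mL/mg, AUC/dose = {a / d:.3f} ng*h/mL/mg")

# Power-model slope between the two cohorts; a slope near 1 suggests
# roughly dose-proportional exposure over this dose range.
beta = (math.log(auc[1]) - math.log(auc[0])) / (math.log(doses[1]) - math.log(doses[0]))
print(f"power-model slope (AUC vs dose): {beta:.2f}")
```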
2017-08-03T02:56:14.737Z
2017-03-28T00:00:00.000
{ "year": 2017, "sha1": "b031a36499aaff94ca3eff5cffed6d150ea93981", "oa_license": "CCBY", "oa_url": "https://bmcpharmacoltoxicol.biomedcentral.com/track/pdf/10.1186/s40360-017-0123-z", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b031a36499aaff94ca3eff5cffed6d150ea93981", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
43992034
pes2o/s2orc
v3-fos-license
Active AC/DC control for wideband piezoelectric energy harvesting
This paper proposes a simple interface circuit enabling resonant frequency tuning of highly coupled piezoelectric harvesters. This work relies on an active AC/DC architecture that introduces a tunable short-circuit sequence in order to control the phase between the piezoelectric current and voltage, allowing the emulation of a capacitive load. It is notably shown that this short-circuit time increases the harvested power when the piezoelectric element operates outside of resonance. Measurements on a piezoelectric harvester exhibiting a large global coupling coefficient (k² = 15.3%) have been realized and have proven the efficiency and potential of this technique. Introduction The last decade has seen a growing interest in new sustainable energy sources that could replace batteries. Mechanical energy harvesting is a good alternative to solar or thermal generators, since vibrations can be found in closed, confined environments. Piezoelectric elements are of particular interest because of their high energy densities and integration potential [1,2]. Piezoelectric Energy Harvesters (PEHs) are relatively efficient when the vibration frequency matches the harvester's resonance frequency. However, environmental excitations are subject to variations that can lead the vibration frequency to shift away from the generator's resonance frequency, considerably reducing the efficiency of the PEH and thus the extracted energy [3]. In order to extend the frequency range over which a large amount of energy can be harvested, it is possible to dynamically adjust the interface circuit, which has an influence on the harvester due to the electromechanical coupling [4]. In addition to the well-known resistive tuning, it has recently been shown that adding capacitances in parallel with a highly coupled piezoelectric material allows tuning of the stiffness of the harvester, leading to an adaptation of its resonance frequency. However, this strategy requires the use of a large off-chip capacitive bank that needs to be tuned step by step [5,6]. In this paper, we propose a new solution to emulate a capacitive behavior by short-circuiting the PEH during a tunable time, as shown in figure 1. This enables the control of the phase between the piezoelectric voltage and current, allowing the emulation of a complex load that has a direct influence on the resonance frequency of the harvester. Using this strategy, a continuous tuning of the resistive and capacitive load is achievable with components that can easily be integrated in a small chip. Theoretical analysis The study of this strategy is derived through a theoretical demonstration introduced in the following. The standard mass-spring-damper type PEH model with a single degree of freedom is shown in figure 1. Assuming that the strain of the piezoelectric material is purely sinusoidal with a constant amplitude, \(x = x_m \sin(\omega t)\), its governing equations are given by (1):

\[ F = M\ddot{x} + D\dot{x} + Kx + \alpha v_p, \qquad i = \alpha\dot{x} - C_p\dot{v}_p \qquad (1) \]

where \(F\), \(M\), \(D\), \(K\) and \(\alpha\) refer to the driving force, the dynamic mass of the system, the mechanical damping, the global equivalent stiffness and the piezoelectric coupling coefficient, respectively; \(C_p\) is the dielectric capacitance of the harvester, and \(v_p\) and \(i\) are the piezoelectric voltage and the load current, respectively. The expression of the piezoelectric voltage during a half-period is given by (2).
Where is the angle when the piezoelectric voltage reaches 0 and when the generator should be short-circuited, the angle when the short-circuit is opened, the angle when the piezoelectric voltage reaches , and the output voltage of the AC/DC rectifier, directly controlled by the input impedance of the DC/DC converter. We can express the parameters , and as in (3). (1) From (1), (2) and (3), we can express the Fourier series representation of as a function of and . The fundamental of this series is given by (4). and are the Fourier coefficients of the piezoelectric voltage's first harmonic. Assuming that only the fundamental of the voltage affects the dynamic mass motion, the differential equation of the piezoelectric system given by (1) can be rewritten in the Fourier domain as shown by (5). Applying the Laplace transform on (5), isolating , and getting its amplitude , we obtain the expression of the strain amplitude given by (6). The expression of the harvested power transmitted in the DC/DC's input resistance is given by equation (7). Combining (3), (6) and (7), we can determine for any parameter couple the harvested power with this strategy. Experimental results A PEH having a high electromechanical coupling has been used for the experimental validation of the proposed strategy ( figure 3). The characteristics of the PEH are given in Table 1. The experimental setup consists in an electromagnetic shaker that can simulate an input vibration. The cantilever displacement and acceleration are sensed by a laser placed a few decimeters upon the shaker. The piezoelectric device is placed on the shaker and is directly connected to the electrical interface. The experimental results have been compared with the theoretical results, obtained thanks to the computation of the equations (6) and (7), as shown on figure 4. The piezoelectric voltage, the short-circuit control and the piezoelectric system acceleration, respectively in yellow, pink, and blue, can be observed on figure 5. Discussions We can observe that the short-circuit technique improves the performances of the PEH between its resonance (253Hz) and anti-resonance (275Hz) frequencies, due to its capacitive effect. Experimentally, under a sinusoidal ambient acceleration of 0.58G, we have been able to harvest more than 600 over a 20Hz bandwidth. In order to take into account the losses in the piezoelectric material ( ) and the voltage drops across the diodes ( , we added these parameters in our theoretical models, as shown on figure 3 and figure 4. In order to reduce these losses, an interface circuit implementing this strategy based on an active AC/DC that includes a shortcircuit sequence control should be designed. Conclusion This paper presents a new interface circuit that can dynamically tune the resonance frequency of a highly coupled PEH. Using an active AC/DC that introduces a short-circuit sequence, this interface is able to synthesize a capacitive load. Through a theoretical analysis and an experimental validation, we proved that this strategy enhances up to 35% the off-resonance performances of the PEH. Maximum harvested power using the short-circuit strategy without any losses consideration Maximum harvested power on a classic AC/DC without any losses consideration Maximum harvested power using the short-circuit strategy taking the losses into account Maximum harvested power on a classic AC/DC taking the losses into account Experimental results using the short-circuit strategy Experimental results on a classic AC/DC
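The reported global coupling can be cross-checked against the measured resonance (253 Hz) and anti-resonance (275 Hz) frequencies quoted above. The sketch below uses one common definition of the effective coupling factor, k_eff² = (f_a² − f_r²)/f_a²; the paper does not state which definition it adopts, so this is only an approximate consistency check.

```python
# Effective electromechanical coupling estimated from the measured
# resonance and anti-resonance frequencies reported in the discussion.
f_r, f_a = 253.0, 275.0                       # Hz
k2 = (f_a**2 - f_r**2) / f_a**2
print(f"k_eff^2 = {k2:.3f}  (~{100 * k2:.1f} %)")   # ~15.4 %, close to the reported 15.3 %
```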
2018-01-19T15:07:55.462Z
2016-12-06T00:00:00.000
{ "year": 2016, "sha1": "5bc4da2c4ab6e243aba069a6a62b786a1cce6b1e", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/773/1/012059", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "094162575f80e87896b3475de320cee1bebaa8e6", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Engineering", "Physics" ] }
221097621
pes2o/s2orc
v3-fos-license
Quantum chemical insight into molecular structure, NBO analysis of the hydrogen-bonded interactions, spectroscopic (FT–IR, FT–Raman), drug likeness and molecular docking of the novel anti COVID-19 molecule 2-[(4,6-diaminopyrimidin-2-yl)sulfanyl]-N-(4-fluorophenyl)acetamide - dimer Novel antiviral active molecule 2- [(4,6-diaminopyrimidin-2-yl)sulfanyl]-N-(4-fluoro- phenyl)acetamide has been synthesised and characterized by FT-IR and FT-Raman spectra. The equilibrium geometry, natural bond orbital calculations and vibrational assignments have been carried out using density functional B3LYP method with the 6-311G++(d,p) basis set. The complete vibrational assignments for all the vibrational modes have been supported by normal coordinate analysis, force constants and potential energy distributions. A detailed analysis of the intermolecular interactions has been performed based on the Hirshfeld surfaces. Drug likeness has been carried out based on Lipinski's rule and the absorption, distribution, metabolism, excretion and toxicity of the title molecule has been calculated. Antiviral potency of 2- [(4,6-diaminopyrimidin-2-yl)sulfanyl]-N-(4-fluoro-phenyl) acetamide has been investigated by docking against SARS-CoV-2 protein. The optimized geometry shows near-planarity between the phenyl ring and the pyrimidine ring. Differences in the geometries due to the substitution of the most electronegative fluorine atom and intermolecular contacts due to amino pyrimidine were analyzed. NBO analysis reveals the formation of two strong stable hydrogen bonded N–H···N intermolecular interactions and weak intramolecular interactions C–H···O and N–H···O. The Hirshfeld surfaces and consequently the 2D-fingerprint confirm the nature of intermolecular interactions and their quantitative contributions towards the crystal packing. The red shift in N–H stretching frequency exposed from IR substantiate the formation of N–H···N intermolecular hydrogen bond. Drug likeness and absorption, distribution, metabolism, excretion and toxicity properties analysis gives an idea about the pharmacokinetic properties of the title molecule. The binding energy −8.7 kcal/mol of the nonbonding interaction present a clear view that 2- [(4,6-diaminopyrimidin-2-yl)sulfanyl]-N-(4-fluoro- phenyl) acetamide can irreversibly interact with SARS-CoV-2 protease. Introduction Pyrimidine and its derivatives take up a key position in the field of medicinal chemistry due to its multifarious pharmacological activities. In an urge for searching new promising small therapeutic agents, we introduce 2-[(4,6-diaminopyrimidin-2-yl)sulfanyl]-N-(4-fluoro-phenyl) acetamide (DAPF). In the present study, we focus on the investigation on the molecular structure, electronic properties, vibrational spectra and molecular docking of the title compound, with the hope that the results of the present investigation may be decisive in the prognosis of its mechanism of biological activity. Pyrimidines, the fundamental building blocks for nucleic acids, are invoking much scientific interest owing to their potential biological activities and pharmacological applications [1]. Pyrimidines are also reported to show anti-HIV, [2] antidengue [3] and anticancer [4] activities. The title compound DAPF, which has the amino substituent at the 4,6-position are found to be Troponin I-Interacting Kinase (TNNI3K) Inhibitors [5]. Also, the presence of the amino group in the 4position are found to be HIV inhibitors [6]. 
Aminopyrimidines and polyaminopyrimidines are important therapeutic agents used as tyrosine kinase inhibitor such as Gleevec and the hypocholesterolemic agent rosuvastatin [7][8][9]. Molecular and spectral investigation of 2-mercapto pyrimidine and 2,4-diamino-6-hydroxy-5-nitroso pyrimidine [10], pyrazinamide [11], DFT assisted Quantum computations of aminopyrimidine [12], 2amino-5-nitropyrimidines [13], FT-IR spectral study on 4aminopyrimidine and deuterium substituted 4-aminopyrimidine [13], fluocytosine [14] have been carried out and the vibrational bands have been reported. The C\ \S stretching frequencies in sulfanylaminobenzene have been reported [15]. The nature of substituent at 2-and 6-positions in the pyrimidine ring was found to greatly influence the anti-tubercular activity. Structural study by spectroscopic and quantum chemical methods, have been reported on chloropyrimidine based anti-microbial agents such as 4-cholro2,6-dimethylsulfanyl pyrimidine-5-carbonitrile and 4-cholro-2-methylsulfanyl-6-(2-thienyl) pyrimidine-5-carbonitrile which demonstrates activity against M. Tuberculosis [16]. The title molecule has gained attention owing to its structure, functional group and their diverse biological activity. Crystal structure of the title compound [17] has been reported and no other studies have been reported so far. The initiation of cART (combinational antiretroviral therapy) drugs ritonavir, darunavir, and lamivudine/zidovudine did not reduce the cerebellar dysfunctions such as ataxia, dementia and neurocognitive disorders associated with HIV infections. The presence of the amino group in the title molecule has the ability to reduce the cerebellar dysfunction [18]. The physical, biochemical, pharmacological and pharmacokinetic properties of fluorine atom in the title molecules may play an important role in drug design owing to their C\ \F bond strength, dipole moment, strong electronegetivity and modest lipophilicity [19]. Elucidating the structure activity of DAPF using density functional theory, spectroscopic techniques and molecular docking may pave a way in the development of new HIV protease inhibitor which may reduce the cerebellar dysfunctions. Even though DFT studies have been reported on pyrimidine derivatives, spectral investigation and density functional theory studies of DAPF has not been carried out. Vibrational spectral analysis of DAPF using quantum chemical computations aided by density functional theory is an efficient method in understanding the various types of bonding and normal modes of vibrations. The complete vibrational assignments for all the vibrational modes have been supported by normal coordinate analysis, force constants and potential energy distributions. A detailed analysis of the intermolecular interactions has been performed using NBO analysis and the intermolecular contacts have been exposed based on the Hirshfeld surface analysis. Antiviral potency of DAPF has been investigated by docking against viral proteins. The theoretical and experimental calculations have been carried out to probe the structure of DAPF. Also, FT-IR and FT-Raman spectra of DAPF have been described by both experimental and theoretical methods. The antiviral activity has been performed using molecular docking studies, which shows it can irreversibly interact with SARS-CoV-2 main protease. Experimental To an ethanolic solution of 4,6-diamino-pyrimidine-2-thiol (0.5 g, 3.52 mmol) potassium hydroxide (0.2 g, 3.52 mmol) was added and the mixture was refluxed for 30 min. 
To this 3.52 mmol of 2-chloro-N-(4-fluoro-phenyl) acetamide was added and the mixture was refluxed for 4 h. The completion of the reaction has been monitored by thin layer chromatography (TLC). Ethanol was evaporated in vacuo and cold water was added and the precipitate formed was filtered and dried to give a crystalline powder. Colourless block-like crystals were obtained by slow evaporation [17]. The room temperature FTIR spectra of the compound was measured in the 4000-400 cm1 region at a resolution of š1 cm1 using a BRUKER IFS-66V vacuum Fourier transform spectrometer equipped with a mercury cadmium telluride (MCT) detector, a KBr beam splitter and globar source. The far IR spectrum was recorded on the same instrument using the polyethylene pellet technique. The mid-infrared spectrum of the sample has been recorded in the region 4000-400 cm −1 at a resolution of 1 cm −1 using a PerkinElmer Spectrum1 FT-IR spectrophotometer, with the samples in the form of KBr pellets. The FT-Raman spectrum has been recorded using Bruker RFS 27 spectrometer in the region 4000-50 cm −1 with the use of Nd: YAG 1064 nm laser source. Computational details The equilibrium geometry and the vibrational wavenumbers of the title molecule has been done using Gaussian 09W [20] program package. The geometric optimization has been carried out using DFT calculations at the B3LYP/6-311G++(d,p) level of theory. The natural bonding orbitals (NBO) calculations [21] have been carried out using NBO3.1 program as implemented in the Gaussian 09W package at the DFT/ B3LYP level in order to understand the interactions that takes place between the filled and vacant orbitals, which is a measure of delocalization or hyperconjugation. The normal coordinate analysis (NCA) has been performed using MOLVIB 7.0 program [22,23]. The input for the MOLVIB program has been given as suggested by Pulay [24]. Crystal Explorer program 3.1.0 has been employed to carry out the Hirshfeld surface [25] and the associated 2D-fingerprint plots [26]. Molecular docking simulation has been carried out using the Auto Dock 4.2.6 software package and the ligand-protein interactions have been studied [27]. The ligand-protein binding sites have been visualized using PYMOL graphic software [28]. Optimized geometry The optimized structural parameters of the monomer and dimer form of the title compound have been performed using GAUSSIAN 09W program package and the optimized structure has been visualized using Gauss View 5.0.9. Geometric optimization has been carried out using B3LYP function and 6-311G++(d,p) basis set. The optimized dimer structure of the DAPF is depicted in Fig. 1. The optimized bond length of DAPF is given in Table 1. The optimized bond angle and dihedral angle of DAPF have been shown in Tables S2 and S3. The molecular structure and the crystallographic information of DAPF [C 12 H 12 FN 5 OS] have been taken from Cambridge Crystallographic Data Center (CCDC 1529607). The molecular structure of DAPF constitutes a di substituted phenyl ring and a tri substituted pyrimidine ring bridged by thioacetamide moiety. The observed theoretical parameters are in good agreement with the experimental data with certain discrepancies. The C2 ̶ C3, C3 ̶ C4 bond lengths in the phenyl ring are shortened when compared to other C\ \C bond length and the endoangle C2-C3 ̶ C4 (121.56°) has been increased, leading to the distortion from the regular hexagon structure, due to the substitution of the most electronegative fluorine atom. 
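As a practical illustration of the level of theory listed in the computational details (B3LYP/6-311++G(d,p) optimisation and frequency calculation in Gaussian 09W), the snippet below writes a minimal Gaussian-style input file. The helper name and the coordinate block are hypothetical placeholders rather than the authors' actual input; the real DAPF geometry would come from the CCDC 1529607 structure, and the route section should be checked against the original setup before use.

```python
def write_gjf(name, atoms, charge=0, mult=1):
    """Write a minimal Gaussian-style input (opt + freq at B3LYP/6-311++G(d,p))."""
    lines = [
        "%nprocshared=4",
        "%mem=4GB",
        "# opt freq b3lyp/6-311++g(d,p)",
        "",
        f"{name} B3LYP/6-311++G(d,p) optimisation and frequencies",
        "",
        f"{charge} {mult}",
    ]
    lines += [f"{sym:<2} {x:>12.6f} {y:>12.6f} {z:>12.6f}" for sym, x, y, z in atoms]
    lines.append("")          # Gaussian inputs end with a blank line
    with open(f"{name}.gjf", "w") as fh:
        fh.write("\n".join(lines))

# Placeholder coordinates only; not the optimised DAPF geometry.
write_gjf("dapf_monomer", [("C", 0.0, 0.0, 0.0), ("S", 1.78, 0.0, 0.0)])
```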
There exist an intramolecular interaction between N12 ̶ H13 of the sulfanylacetamide moiety and the nitrogen atom N25 of the pyrimidine ring resulting in an increase of N12 ̶ H13 bond length (0.01Ǻ). On dimerization, DAPF leads to the formation of two N ̶ H···N hydrogen bonded intermolecular interactions, which adds stability to the system. This hydrogen bonded interaction causes substantial changes with an increase in the N30 ̶ H31 bond length by 0.018 Ǻ and the C22 ̶ N30 ̶ H31 bond angle by 5°when compared with the single molecular structure. This elongation of N ̶ H bond may be due to charge redistributions and orbital interactions [29]. Natural bond orbital study DAPF has been subjected to NBO analysis to elucidate the possible intramolecular and intermolecular interactions between the filled and vacant orbital, which is a measure of hyperconjugation or intramolecular delocalization. The stabilization energy E(2) associated with the interaction between the filled orbital i and the vacant orbital j, calculated by the second order perturbation theory [30] have been tabulated (Table 2). Intramolecular interactions arises due to the hyperconjugation and electron density transfer (EDT) from filed lone pair electrons of the n (Y) of the "Lewis base" Y into the unfilled anti-bond σ*(X ̶ Y) of the "Lewis acid" in X-H···Y have been recorded [31]. The calculated second order perturbation energies (E(2)) in NBO basis confirms the presence of a hydrogen bonded intramolecular interactions n1(O25) → σ*(N12 ̶ ̶ H13) with the stabilization energy of 7.72 kcal/mol. As a result, the length of the N ̶ ̶ H bond involved in intramolecular interaction is lengthened by 0.01 Ǻ respectively. This is well reflected in the optimized molecular geometry. The NBO studies on DAPF monomer and dimer manifests the formation of two strong H-bonded intermolecular interactions between the nitrogen lone pairs n1(N30) and σ* (N62-H63) antibonding orbital. The occupancies and their energies for the interacting NBOs are represented in Table 3. Bond length Monomer Table 4 shows that the s-character of N30 ̶ H31 hybrid orbital has been increased by 4.59% from sp 2.60 to sp 2.16 that leads to the strengthening of N30 ̶ H31bond and its contraction. The composition of hydrogen bonded natural bonding orbitals in terms of natural atomic hybrids shows that the redistribution of natural charges in the N ̶ H bond becomes negative (−0.0177) at H31 resulting in the destabilization of the H-bond. The effect of rehybridization and hyperconjugation result in the contraction and the elongation of the N ̶ H bond. However the effect of rehybridization has been overcome by the hyperconjugative effect resulting in the elongation of N ̶ H bond and a concomitant red shift in N ̶ H stretching frequency. Hirshfeld surface analysis Hirshfeld surface analysis provides an insight into the intermolecular interactions in the crystal structure because it is not only connected with molecule itself but it has also contributions from nearest neighbour molecules. Hirshfeld surface analysis of DAPF was carried out using Crystal Explorer 3.1 to investigate the short contacts between atoms with potential to form hydrogen bonds and the quantitative ratios of these interactions besides of the π stacking interactions [32][33]. The Hirshfeld surface of DAPF mapped over d norm has been depicted in Fig. 2. For the 3D Hirshfeld surfaces, 2D view on intermolecular contacts in crystals can be generated by building 2D finger plots [34]. From the fingerprint plot Fig. 
3(a), the N···H interactions are represented by a spike in the bottom left of the fingerprint plot, whose contribution is 8% and the counterpart H···N interaction is represented on the bottom right of the fingerprint plot with the contribution of 6.6%, of the total N···H/H···N 14 of the neighbouring pyrimidine ring, which is responsible for the distinctive red spot on the d norm surface as shown in Fig. 3(b). As seen in Fig. 3(c) O···H interaction makes up 9.5% of the Hirshfed surface of the molecule in the structure. The red spot on the d norm surface is due to the interaction of the carbonyl oxygen of the acetamide group with the proton of the diaminopyrimidine Fig. 3(d). It is noteworthy that H···F contributes 11% on the Hirshfeld surface as seen by two sharp peaks Fig. 3(e), is due to the fluorine atom from the phenyl group interacts with hydrogen atom of the neighbouring phenyl group, which is responsible for the red spot on the d norm surface as seen in Fig. 3(f). Hirshfeld analysis results shows that prominent interaction has been observed in N···H, than H···F and Symbols used: ν -stretching; ν as -asymmetric stretching; ν as -symmetric stretching; βbending; ωwagging; γinplane bending; γ'outplane bending; ρrocking; τtorsion; Radasymmetric deformation; Rad'-asymmetric deformation out of plane; Rpuk-puckering; Amd -amide; Am -amine; ipinplane stretching; Met-methyl; M1-moleculeI; M2-moleculeII; r1-ring1; r2-ring2; r3-ring3; r4-ring4. H···O. Multiple hydrogen bonding interaction, impart enhanced stability to supramolecular structures. Vibrational spectral analysis The dimer molecule of DAPF consists of 64 atoms, which undergo 186 vibrational modes. The assignments of the fundamental modes of vibrations have been made based on the normal coordinate analysis following the force field calculation with the ab initio method used for the geometry optimization of the dimer molecule. Multiple scaling factors have been employed for scaling, and those are available in supplementary Table S3. The vibrational assignments have been carried out on the basis of the characteristic group vibrations of phenyl ring, acetamide group, methylene group, pyrimidine ring and amine group. The experimental and calculated wave numbers of DAPF along with their normal modes and their corresponding potential energy distribution (PED) are presented in Table 5. Experimental and simulated IR and Raman spectra of DAPF have been shown in Figs. 4 and 5. Phenyl ring vibrations The vibrations of the disubstituted phenyl ring have been adopted using Wilson's scheme [35]. The C ̶ H stretching frequencies of the disubstituted benzene are expected to be in the region 3000 ̶ 3100 cm ̶ 1 [36]. The selection rules allowed for disubstituted benzene for C ̶ H stretching vibrations are 2, 7b, 20a and 20b. The bands observed at 3102 cm ̶ 1 with medium intensity and the band observed at 3078 cm ̶ 1 with strong intensity in Raman spectra have been assigned to mode 20b and 2 respectively. The C ̶ H inplane bending vibrations falls in the region 1300 to 1100 cm ̶ 1 and is characterized by the normal modes 3, 9a, 15, 18a, 18b. The band with weak intensity observed at 1148 cm ̶ 1 and a strong band at 1130 cm ̶ 1 in IR have been assigned to mode 18b. The band at 1240cm ̶ 1 in Raman spectra with weak intensity is assigned to mode 3. The out of plane bending vibrations of the para disubstituted phenyl ring exhibits in the region 1000 ̶ 675 and the allowed selection rules are 10a, 10b, 17a, 17b. 
The bands observed at 836, 805 cm ̶ 1 in IR spectra and Raman bands at 894 cm ̶ 1 , 933 cm ̶ 1 have been assigned to modes 10b and 10a respectively. In the case of disubstituted benzene derivatives the selection rule allows five normal modes for C ̶ C stretching vibrations. The modes are 8a, 8b, 19a, 19b and 14. In Raman spectrum the vibration mode 8a has been observed at 1563 cm ̶ 1 and 1544 cm ̶ 1 and in simulated spectrum it has been observed at 1572 cm ̶ 1 and 1545 cm ̶ 1 . The ring mode appears at 1470 cm ̶ 1 in the IR spectrum and the corresponding calculated wavenumber is 1492 cm ̶ 1 . The counterpart Raman band has been observed at 1463 cm ̶ 1 and the theoretical band at 1479 cm ̶ 1 . The radial skeletal mode 6a of the phenyl ring has been observed at 808 cm ̶ 1 in IR and Raman. The simulated IR band observed at 805 cm ̶ 1 and the Raman band at 795 cm ̶ 1 has been assigned to mode 6a. The outofplane skeletal mode 4 has been observed at 674 cm ̶ 1 in IR spectrum and at 670 cm ̶ 1 in simulated IR spectrum. Amide group vibrations Amides are of fundamental chemical interest owing to their conjugation between lone pair electrons in the Nitrogen and the carbonyl bond results in distinct physical and chemical properties [37]. Secondary amides are probably the most important as they are the backbone of every protein molecule. Secondary amide contains only one N\ \H stretching band in the infrared spectrum. This band appears between 3370 and 3500 cm −1 [38]. Therefore, band observed at 3282 cm −1 in IR is assigned to the N\ \H stretching. The down shift in N ̶ H stretching frequency validates the spectral evidence of the N ̶ H···N intramolecular hydrogen bond formation. The strength of N ̶ H···N bond has been well reflected by an increase in the bond length (0.001 Å). Also, the shift has been evinced by the intramolecular charge transfer interaction between N25 → N12 ̶ H13 in NBO basis with the stabilization energy 7.31 kcal/mol. An interesting feature of the amide group is the amide I band and is known as the C_O stretching mode. In fluorophenylacetamide the amide I band arises due to the delocalization of the nitrogen lone pair electrons and is observed as a strong intense band at 1658 cm −1 in IR. Normal Coordinate analysis of DAPF shows amide I band has been coupled with amide II and amide III bands. The amide II band known as N ̶ H inplane bending has been observed as an intense peak in the IR spectra at 1515 cm −1 . The Raman counterpart is observed at 1519 cm −1 . The C_O out of plane bending known as amide VI band has been observed at 518 cm −1 in IR and at 525 cm −1 in Raman. Methylene vibrations The asymmetric and the symmetric vibrations of the methylene group normally occur in the region 3100 ̶ 2900 cm −1 [39]. The presence of the neighbouring acetamide moiety and the sulphur lone pair affects the spectral behaviour of the sp 3 methylene group. The occurrence of the adjacent sulphur atom can shift the position and the intensity of the CH stretching and bending vibrations. The hyperconjugative interactions between the s lone pair and σ*(C ̶ H), lowers the wavenumber and the weakening of C ̶ H bonds. NBO result substantiates the above fact as seen from the hyperconjugative interaction S19 → C16 ̶ H18 with the stabilization energy of 1.72 kJ/mol. The IR band observed at 2942 cm −1 with medium intensity is assigned to CH 2 symmetric and asymmetric stretching. 
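For reference, the stabilization energies E(2) quoted in the NBO analysis above are conventionally evaluated from the second-order perturbative estimate of donor-acceptor interactions in the NBO Fock matrix; this is the standard expression, not one reproduced from the paper itself:

\[ E^{(2)} = \Delta E_{ij} = q_i\,\frac{F(i,j)^{2}}{\varepsilon_{j}-\varepsilon_{i}} \]

where \(q_i\) is the donor orbital occupancy, \(\varepsilon_i\) and \(\varepsilon_j\) are the diagonal NBO orbital energies, and \(F(i,j)\) is the off-diagonal Fock matrix element between the donor and acceptor orbitals.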
Pyrimidine ring vibrations Pyrimidine ring vibrations have been characterized by the C ̶ N stretching vibration, C ̶ C stretching and bending vibrations. In pyrimidine, quadrant stretch bands occur at 1590 ̶ 1520 cm −1 [40,41] and the semicircle bands occur in the region 1480-1375 cm-1. These bands can be viewed both in IR spectra as well as Raman spectra [42,43]. The spectrum of DAPF shows the band with strong intensity at 1558 cm-1in IR and the band at 1544 cm −1 in Raman with medium intensity have been assigned to quadrant stretching mode. Semicircle stretching vibrations has been observed at 1507 cm −1 in IR and Raman spectra. Quadrant inplane bending mode has been observed at 632 cm −1 in Raman spectra and the theoretical band at 636 cm −1 . Amine vibrations In primary amines, vibrational frequencies have been characterized by NH2 antisymmetric stretching, NH2 symmetric stretching, NH2 scissoring, NH2 wagging, C ̶ N stretching and CCN bending. Strong absorption in the range 3548-3459 cm-1 has been marked as NH2 antisymmetric stretching in the IR spectra [44]. In DAPF, the antisymmetric N ̶ H stretching mode has been observed at 3459 cm −1 as an intense band in the IR spectrum. The dimer molecule of DAPF has been bonded by two strong N ̶ H···N bonds. In case of primary aromatic amines the symmetric stretching is expected in the region 3422-3360 cm −1 [45]. The band observed at 3183 cm −1 in IR spectra has been assigned to N ̶ H symmetric stretching. The red shift in wavenumber (~180 cm-1) shows the spectral evidence for the formation of N ̶ H···N intermolecular interactions. This has been validated from the results of optimized geometry as seen with an increase in the N30 ̶ H31 bond length by 0.018 Ǻ and the C22 ̶ N30 ̶ H31 bond angle by 5°o ver the isolated molecule. Also, the down shift in stretching frequency has been substantiated by the second order perturbation energy that takes place between n1(N53) → σ* (N30-H31) with the stabilization energy of 10.54 kcal/mol and the occupancy of the interacting NBOs. These hydrogen bonded interactions restrain protein molecules to their native configurations and an important role in inhibiting the gelling of sickle-cell deoxyhemoglobin. Also, intermolecular N···H ̶ N hydrogen bonding plays an important role in the stability of protein structure [46]. The NH 2 scissoring mode in aryl amine is in the region of 1638 ̶ 1602 cm −1 [47,48]. The NH 2 scissoring mode known as the symmetric deformation mode is assigned to the band at 1622 cm −1 in the IR while the respective Raman band is observed at 1608 cm −1 . Raman bands with weak intensity at 698 cm −1 and the widened IR band at 669 cm-1contributes to the NH 2 wagging mode. The band observed with medium intensity at 1446 cm −1 and 1406 cm −1 have been assigned to the C-N stretching vibration in IR and Raman respectively. Drug likeness of DAPF According to the rule of thumb, orally absorbed drugs tend to obey Lipinski's rule of five. The rule of five was derived from an analysis of compounds from the World Drugs Index database aimed at identifying features that have been important in making a drug orally active. It has been found that the factors concerned involved numbers that are multiples of five: a molecular weight less than 500; no more than 5 hydrogen bond donor (HBD) groups; no more than 10 hydrogen bond acceptor groups; a calculated log P value less than +5 [49][50][51][52][53][54]. DAPF has been passed through Lipinski's rule of five (Table 6) to overcome drug-likeness filter. 
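The rule-of-five screen summarised above can be reproduced with standard cheminformatics tooling. In the sketch below, RDKit is assumed to be available, and the SMILES string is reconstructed from the IUPAC name of DAPF rather than taken from the paper, so both should be verified before reuse.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

# SMILES reconstructed from the name 2-[(4,6-diaminopyrimidin-2-yl)sulfanyl]-
# N-(4-fluorophenyl)acetamide (C12H12FN5OS); verify against the CCDC entry.
dapf = Chem.MolFromSmiles("Nc1cc(N)nc(SCC(=O)Nc2ccc(F)cc2)n1")

rules = {
    "MW < 500":   Descriptors.MolWt(dapf) < 500,
    "HBD <= 5":   Lipinski.NumHDonors(dapf) <= 5,
    "HBA <= 10":  Lipinski.NumHAcceptors(dapf) <= 10,
    "cLogP < 5":  Descriptors.MolLogP(dapf) < 5,
}
print(f"MW = {Descriptors.MolWt(dapf):.1f}")
for rule, passed in rules.items():
    print(f"{rule}: {'pass' if passed else 'fail'}")
```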
ADMET property analysis There is an overall of 26 constraints in ADMET statistics, which has been taken from the full text of peer-reviewed scientific journals through weekly PubMed and Google Scholar searches from 2002 to 2011 [55]. ADMET (Absorption, Distribution, Metabolism, Excretion and Toxicity) results show that DAPF (+) in human intestinal absorption and blood-brain barrier permeability, which suggests that the molecule is well absorbed in the human body (Table 7). Inhibition and initiation of P-glycoprotein have been described as the causes of drugdrug interactions. [56]. It has been observed that DAPF has Pglycoprotein non-inhibitor, which shows the noninteracting activity of DAPF with other drugs. ADMET data show DAPF is in permissible limit [55,57]. Organic cation transporters are accountable for drug absorption and disposition in the kidney, liver, and intestine [58]. ADMET result of DAPF shows that it has been a non-inhibitor of renal organic cation transporter. The human cytochromes P450 (CYPs) are responsible for about 90% oxidative metabolic reactions. Inhibition of CYP enzymes will lead to inductive or inhibitory failure of drug metabolism [59]. A non-inhibitor and non-Substrate property of DAPF supports the fact it is safe to the human liver. The Ames test is employed to test the mutagenic activity of chemical compounds. It is usually carried out to test bacteria and viruses to whether a given chemical can cause cancer [60,61]. ADMET result of DAPF is shown in Table 7. ADMET result shows DAPF has been non-ames toxic and noncarcinogenic. Human Ether-à-go-go-Related Gene (hERG) is a gene delicate to drug binding [62]. ADMET results shows DAPF have been weak inhibitor and non-inhibitor of hERG inhibition (predictor I and II). That means the DAPF molecules will well bind with SARS-CoV-2 main protease [63]. Analyzing the ADMET properties, together with their attributes and prediction, has given an idea about the pharmacokinetic properties of DAPF. 4.7. Docking study of DAPF with SARS-CoV-2 main protease 4.7.1. In silico calculation Molecular docking has been used to acquire binding modes and binding affinities of DAPF. Binding mode and affinity to SARS-CoV-2 main protease are essential for insilico drug design [64][65][66][67][68]. The protein structure of SARS-COV-2 has been retrieved from protein data bank (PDB id: 5r80). The protein (SARS-COV-2) and ligands (DAPF) structures have been modified by Autodock Tools [69]. The chains of main protease have been modified by removing water and bound ligand. Missing amino acids have been checked and polar hydrogens have been added to the protein structure. Center Grid box x:5.108, y:18.9177, z:-18.1863 and number of points in x,y,z dimensions are considered as 30x30x30 Å 3 respectively and grid spacing has been taken as 0.3750 Å. Ligand has been prepared by adding Gasteiger charges, detecting root and choosing torsions from the torsion tree of Autodock Tools panel [70]. Docking procedure has been performed by using the Lamarckian genetic algorithm [71] and the results have been tabulated in Table 8. DAPF bound to the active site of main protease with good complementarity (Fig. 6) and formed three hydrogen bonds and four hydrophobic bonds with the mainprotease. The binding energy of the Other Pi-sulphur nonbonding interaction is −8.7 kcal/mol. All these results present a clear view that DAPF can irreversibly interact with main protease. 
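The grid-box definition quoted in the in silico section (centre x: 5.108, y: 18.9177, z: -18.1863; 30 x 30 x 30 points; 0.375 Å spacing; receptor PDB 5r80) can be recorded in a small parameter stub as below. The keyword names follow the usual AutoGrid parameter-file convention and are assumptions here; the authors generated their files with AutoDockTools, so this stub should be checked against that output before any use.

```python
# Published grid-box parameters (AutoDock 4.2.6, receptor PDB 5r80)
center  = (5.108, 18.9177, -18.1863)   # Å, grid-box centre
npts    = (30, 30, 30)                 # grid points per dimension
spacing = 0.3750                       # Å between grid points

gpf_stub = (
    f"npts {npts[0]} {npts[1]} {npts[2]}\n"
    f"spacing {spacing}\n"
    f"gridcenter {center[0]} {center[1]} {center[2]}\n"
)
with open("5r80_grid.gpf", "w") as fh:
    fh.write(gpf_stub)
print(gpf_stub)
```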
The catalytic dyad composed of Histidine 41 and Cysteine 145 is a set of amino acids that can be found in the active site of most SARS-CoV-2 main proteases, plays an essential role in drug binding [72]. DAPF bound both to Histidine 41 and Cysteine 145 ( Fig. 6 and Table 8) claims to be a good antiviral drug. Conclusion The compound DAPF has been characterized by FT-IR, FT-Raman at B3LYP/6-311++G(d,p) level using DFT calculations and the complete vibrational analysis has been carried out in order to elucidate the structure activity relationship. The presence of the intermolecular and intramolecular hydrogen bonds has been analyzed using NBO analysis. The transfer of electrons from the lone pair nitrogen to the anti-bonding orbital of N\ \H bond evinces the formation of two hydrogen bonds that brings about most interesting biological properties. The occurrence of N-H···N intermolecular interactions and the conspicuous shifting in the wavenumber have been authenticated by the increase in N\ \H bond length and an increase in the electron density in the antibonding orbitals. Also, intermolecular N ̶ H···N hydrogen bonding plays an important role in the stability of protein structure. Hirshfeld surfaces and the 2D fingerprint plot confirms the presence of the intermolecular contacts N ̶ H···N, C ̶ H···O, C ̶ H···F and their quantitative contributions, impart stability to the system. Drug likeness and ADMET property analysis gives an idea about the pharmacokinetic properties of the title molecule. The binding energy −8.7 kcal/mol of the nonbonding interaction presents a clear view that DAPF can irreversibly interact with SARS-CoV-2 protease. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2020-08-12T13:06:45.312Z
2020-08-12T00:00:00.000
{ "year": 2020, "sha1": "d8dbb066de6c8cb95c0957aa99451143705641ab", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.saa.2020.118825", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "d9822b1e6706a649e0fd279c0a038b9fe13b93fa", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
244078513
pes2o/s2orc
v3-fos-license
A Community Pharmacy-Based Intervention in the Matrix of Type 2 Diabetes Mellitus Outcomes (CPBI-T2DM): A Cluster Randomized Controlled Trial Introduction: Egypt has the ninth highest diabetes mellitus (DM) prevalence in the world. There is a growing interest in community involvement in DM management. Aim of the study: The aim of the study was to evaluate the tailored diabetes care model (DCM) implementation in Alexandria governorate by community pharmacy-based intervention (CPBI) from a clinical, humanistic, and economic aspect. Methods: This is a 6-month period cross-over cluster randomized control trial conducted in Alexandria. Ten clusters owing 10 community pharmacies (CPs) recruited 100 health insurance-deprived T2DM patients with >7% HbA1c in 6-months. The study was divided into 2 phases (3 months for each period) with a 1-month washout period in between. After CPs training on DCM, the interventional group received pictorial training for 45 minutes in first visit, and 15 minutes in weekly visits, whereas the control group patients received the usual care (UC). At baseline and end of each phase (3 months), patients had clinical and physical activity assessments, filled all forms of study questionnaire (knowledge, self-management, satisfaction, and adherence) and did all laboratory investigations (Fasting Blood Glucose [FBG]), HbA1c, protein-creatinine clearance (PCR), creatine clearance (GFR), and lipid profile. Results: There was no significant difference in the basal systolic and diastolic blood pressure between patients in the CBPI and UC groups, but the CBPI had significantly decreased the mean SBP and DBP by (P = .008, .040, respectively). Also, significant waist circumference and BMI reductions (−5.82 cm and −1.86 kg/m2, P = .001) were observed in the CBPI. The CBPI patients achieved a greater reduction in FBG and HbA1C than the UC patients (102 mg/dL and 1.9%, respectively P < .001). Also, significant reductions in total cholesterol, LDL, and triglyceride (−6.4, −15.4, and −6.3 mg/dL respectively, P = .001) were achieved in the CBPI group. No significant differences were found in HDL, GFR, and PCR. Moreover, significant improvements of behavior, score of knowledge, self-management, satisfaction, and adherence were observed in CBPI patients. After multivariate analysis, HbA1C readings were significantly influenced by baseline HbA1C and eating habits. The cost saving for CPBI was −1581 LE per 1% HbA1c reduction. Conclusion: This is the first study in Egypt that illustrated the positive impact of pictorial DCM delivered by CPBI collaborative care on clinical, humanistic, laboratory, and economic outcomes to local T2DM patients. Introduction The Middle East and North Africa (MENA) has an estimate of 35.3 million diabetic patients; 9 million of whom live in Egypt. 1 Care of Diabetes Mellitus (DM) patients aspires to prevent or delay the development of its complications and improve patients' quality of life. Chronic care model (CCM) is a method of restructuring health care services through interactions between health systems and communities aiming to enable patients to control diabetes. 2 This model emphasizes person-centered close follow up and adherence to treatment plan including medication, lifestyle measures, and blood glucose monitoring. 3 However, such strategy has a challenge due to shortage of resources and economic exhaustion. Discrete communities in Egypt have high prevalence of illiteracy and low socioeconomic states. 
Low educational attainment in these communities may also affect diet quality, physical inactivity, and unhealthy behaviors resulting to increased diabetes cases. 4 Due to their accessibility and flexibility, community pharmacies are well suited to support and reach out to this vulnerable group about optimal diabetic care, in developing countries as Egypt. 2 Clinical Medicine Insights: Endocrinology and Diabetes The effectiveness of pharmaceutical care interventions should be appraised in research by measuring diabetic patients' knowledge, self-efficacy, quality of life, and cost. Aim of the Study The aim of the study was to evaluate the community pharmacy-based chronic care of diabetic patients on diabetes control in Alexandria governorate. Also, to assess a simple diabetes care model (DCM) related to Egyptian culture from clinical, humanistic, and economic perception. Study design A cluster randomized control trial (RCT) with simple 2 × 2 crossover was adopted for 6-months. Eligible patients and pharmacists were identified at the 10 community pharmacies located in Amryia governorate, Alexandria. The study was divided into 2 phases (3 months for each period) with a 1-month washout period in between followed by cross over of groups (see Figure 1). Participants To achieve 90% power with a target significance level at 5%, we calculated that 68 patients should be enrolled. Sample size was increased to 100 patients to compensate for attrition. Using simple random sampling 2 groups were allocated, 1 group was randomly allocated to the community pharmacy-based intervention (CPBI) implementing the DCM (the CPBI group also received UC in addition to the intervention); while the other was allocated to UC only. Ten clusters (Shiakhas) which included 10 pharmacies were selected; with each aiming to enroll 10 patients. The targeted patients were >18 years old, with uncontrolled Type 2 Diabetes Mellitus (T2DM) (HbA1c: ⩾7%), resident within the pharmacy catchment area, regularly visiting the pharmacy performing the study and without health insurance (thus permitting the study to offer them laboratory analysis). Patients with mental and physical disabilities (including dementia, stroke, advanced retinopathy/blindness, deafness, and/or muteness) were excluded. Also, pregnant/lactating females, decompensated liver, renal, and heart failure were excluded to exclude gestational diabetes, or uncontrolled diabetes due to co-morbidities, respectively. Usual care is dispensing drugs with accurate reading of the signature as stated by the physician. Intervention After face-to-face training of the pharmacists, they provided face-to-face patient education in the interventional group, whereas the pharmacists providing usual care were not invited figure 1. Flow chart of the study periods and participants. 3 to the patient education session and the control group patients received UC. The patient-oriented diabetic care model (DCM) was constructed according to American diabetes Association guidelines. 
4 The intervention included: (a) an educational component comprising of pictorial posters of DM and its complications, (b) a non-pharmacological component in the form of lifestyle changes comprising; nutritional therapy, smoking cessation strategies, physical exercise: walking at least 3 times weekly for 30 minutes, (c) a pharmacological component in the form of a review of medication indications, doses, and frequency, and (d) regular weekly visits of CPBI patients to their assigned pharmacies (45 minutes in the initial visit and 15 minutes in subsequent visits) for consolidation of the education, goal review insisting on drug adherence. At baseline and end of each phase (3 months Measurement tools • • The structured interview questionnaires assessed the level of patient knowledge of diabetes management, 5 patient adherence to treatment using Morisky Medication Adherence Form, 6 and patient self-efficacy using 6-items self-efficacy scale for managing chronic diseases. 7 Also, patient satisfaction was assessed using 18-item short term patient satisfaction questionnaire (PSQ-18). 8 • • The laboratory investigations were carried out at 2 private health clinics located within the catchment area of the pharmacies, where the assessors were blinded, and included glycated hemoglobin (HbA1c%), lipid profile (cholesterol, LDL, triglycerides (TG), and HDLcholesterol), and Protein creatinine ratio (PCR). Ethical consideration Ethics approval was obtained from the Ethics Committee of the High Institute of Public Health. Written informed consent was obtained from all study participants. The study is registered in pactr.samrc.ac.za (Registration no.: PACTR201909534333056). Statistical analysis Demographic and baseline clinical data was summarized as frequency or mean ± SD. Repeated-measures ANOVA, and t-test were applied to compare the laboratory profiles of both groups, for normally distributed data; otherwise, Kruskal Wallis, and Mann-Whitney tests were used. The questionnaires were compared using Chi squared test and paired analysis (McNamara test, Marginal homogeneity test). There was no significant carry over effect revealed by summation of HbA1C at the end of the period 1 and 2 (P = .390). A P value of <.050 was considered to be statistically significant. All data analysis was performed accordingly using IBM SPSS Statistics 25 (SPSS, statistical package for social science Chicago, IL). Economic analysis An incremental cost-effectiveness analysis (ICER) was performed, with costs and effectiveness of CPBI compared with that of usual pharmacy care. The health sector's perspective was used. All direct costs of providing CPBI, including training costs, and medications' cost were included. Also, the absenteeism directed to DM or it is complications (absenteeism [from work] attributed to DM or it is complications [absenteeism days] within the last 3 months multiplied by 100 Egyptian Pound [EGP]) was calculated. Also, cost of diabetes-related healthcare resources: included the doctors and emergency department and hospital admissions based on the receipt of the hospital was estimated. We excluded costs of the time pharmacists spent in the training and implementation of the study. Because pharmacists participated voluntary on their free time. In addition, the implementation was during their working hours, so it did not take additional time. Socio-demographic features and medical history of the studied patients More than half (57%) of the enrolled patients were female. 
The patients were aged between 40 and 60 and had poor sociodemographic features with almost 40% being illiterate. The most frequent comorbidity was hypertension (see Table 1). Blood pressure and anthropometric measures At baseline, there was no significant difference in the basal SBP and DBP between the CBPI group versus the UC group. After the intervention, the CBPI had significantly decreased the mean of SBP by (−7 mmHg vs +0.96 mmHg, P = .008) and mean of DBP by (−3.3 mmHg vs +2.1 mmHg, P = .040). However, there was no significant effect of UC on SBP nor DBP. Also, significant reductions in the waist circumference (−5.82 cm vs +2.13 cm, P < .001) and BMI (−1.86 kg/m 2 vs +0.69 kg/m 2 , P = .001) were observed in the CBPI group versus the UC group (see Figures 2 and 3). 4 Clinical Medicine Insights: Endocrinology and Diabetes Laboratory tests result of the studied patients FBG showed marked decrease among the patient in CPBI-UC group (121.5 mg/dL) during the intervention (period 1). Also, the patients within UC-CBPI group had marked decrease in FBG (82 mg/dL) during the intervention (period 2). The CBPI patients achieved a greater reduction in FBG values than the UC patients (−0.44% vs +0.12%, P < .001) at the end of the study (see Figure 4). Despite no significant difference in the basal HbA1c, the CBPI groups achieved a double reduction in HbA1c than the UC patients (−1.88% vs +1.89%; P < .001) at the end of the intervention period (see Figure 5). A significant reduction in total cholesterol (−6.4 mg/dL vs +9.8 mg/dL, P = .001) and LDL (−15.4 mg/dL vs +10 mg/dL, P = .001) was achieved in the CBPI group versus the UC group. Moreover, TG was significantly decreased in the CBPI group versus the UC group (−6.3 mg/dL vs +3.8 mg/dL, P = .001). No significant differences were found in HDL levels between the groups (P = .530). No significant differences were found in GFR or PCR levels between the groups (P = .300, and P = .400, respectively) (see Table 2). Knowledge acquisition, behavioral, selfmanagement, patient satisfaction, and adherence to medications Despite an insignificant difference in the baseline score, there was a significant increase in the knowledge of the study participants after the intervention compared to baseline knowledge by 30% (P < .020). False knowledge significantly decreased after the intervention from 59.6% to 12.4% (P < .001). The highest correct answer before and after intervention for question (High blood sugar may happen because you eat too much); 63.9% and 94.4%, respectively (see Table 3). Comparatively to the UC, exposure to CPBI significantly increased physical activity from mild or no physical activity to moderate activity (23.6% for UC vs 45% for CPBI); whole grain weekly intake (n = 41.6% UC vs 60% CPBI) as well as vegetables/fruits weekly (n: 43% UC vs 55% CPBI). There was no significant reduction in smoking habits with CPBI (P > .050). Exposure to CPBI significantly raised the mean score self-management in comparison to exposure to the UC (4.7 ± 1.1 vs 6.2 ± 1, respectively, P < .001) (see Table 3). For the CPBI group, regardless of accessibility and convenience, there was a statistically significant increase in patient satisfaction score in all domains and in the composite score following the intervention (P < .001) (see Table 3). 
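The carry-over assessment mentioned in the Statistical analysis section (comparing the within-subject sum of HbA1c over the two periods between the two sequence groups) can be sketched as below. The data here are synthetic placeholders purely to show the mechanics of the test; SciPy is assumed to be available.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# End-of-period HbA1c (%) for each subject in each period, one array per sequence
seq_cpbi_first = rng.normal(8.0, 1.0, size=(50, 2))   # CPBI -> UC sequence
seq_uc_first   = rng.normal(8.1, 1.0, size=(50, 2))   # UC -> CPBI sequence

# In a 2x2 crossover, carry-over is assessed by comparing per-subject period sums
sum_a = seq_cpbi_first.sum(axis=1)
sum_b = seq_uc_first.sum(axis=1)
t, p = stats.ttest_ind(sum_a, sum_b)
print(f"carry-over test: t = {t:.2f}, p = {p:.3f}")   # p > .05 => no evidence of carry-over
```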
Clinical Medicine Insights: Endocrinology and Diabetes Economic outcome post exposure to UC and CPBI Unlike UC, exposure to CPBI significantly dropped average cost of medicines/patient, frequency of dosage, and insulin doses (P < .010, P < .010, and P < .030, respectively) monthly. Lower frequency of GP and specialist consultations, hospitalization cost, and absenteeism days with CPBI than UC were documented but they failed to reach a significant level, P > .050. A significantly reduction in HbA1C of CPBI led to a lower total cost with the CPBI (20 023.35 EGP) than the UC (25 983.65 EGP). The incremental costs and HbA1c reduction for CPBI compared to UC were 5961 EGP and 3.77%, respectively. Thus, the ICER was −1581 EGP per 1% HbA1c reduction for patient maintained on CPBI for 3 months (see Figure 7). Discussion To the best of our knowledge, this study is the first to address the role of community pharmacy in the management of diabetic patients in Egypt using an CCM model on patient selfmanagement, satisfaction, adherence to treatment, and disease knowledge. A significant reduction in mean FBG and HbA1c levels (−102 mmHg, −1.88%, P < .001) in the CPBI group was observed. According to the UKPDS, each 1% reduction in A1c levels reduces the risk of death related to diabetes by 21%, the risk of myocardial infarction by 14%, and the risk of microvascular complications by 37%. 10 A recent literature review found that interventions performed by pharmacists showed a significant reduction of 0.18%-2.1% in HbA1c levels, after an average interval of 3-12 months. 11 Also, a recent meta-analysis showed that integrated DM education-pharmaceutical care intervention had a significant role in lowering the HbA1C and FBG (−0.86%, −34.95 mg/dL, respectively). 12 The improvement of HbA1C was influenced by baseline HbA1C and eating habits. Likewise, the educational pharmacist-led care program improved glycemic control (mean HbA1C-0.5%) through lifestyle changes and controlling patients' eating habits. 13,14 In contrast to this result, the Fremantle Diabetes Study reported a smaller reduction in HbA1C and FBG (<0.5%, <15 mg/dL, respectively). 15 A study was conducted in Iran, 85 patients were recruited, the level of HbA1C was insignificantly decreased after the intervention. 16 Also, the effect of adding pharmacists to primary care teams in T2DM patients, didn't achieve any statistical significance. 17 The lack of HbA1C effect was attributed to either contamination between cases and control or mildly uncontrolled basal level of HbA1C. Abbreviations: B1, baseline period 1; B2, baseline period 2; cPBI, community pharmacy-based intervention; M1, first month; M2, second months; M3, third months; Uc, usual care. 7 Non-pharmacological measures have an essential role in diabetes control and HbA1C reduction. In this research the level of LDL showed a significant decline after CPBI under the effect of changing eating habits. Just like our results, Lee et al 18 and Huete et al 19 demonstrated that LDL cholesterol level was statistically decreased by the effect of CPBI educational program targeted food intake. Nonetheless, a systematic review studied the effect of pharmacy-based intervention on the control of dyslipidemia, revealed that, the intervention decreased the level of LDL among the intervention group but this reduction was not statistically significant. 20 In line with this, a recent study revealed that LDL-c levels did not change under the effect of CPBI. 
Discussion

To the best of our knowledge, this study is the first to address the role of community pharmacy in the management of diabetic patients in Egypt using a CCM model to improve patient self-management, satisfaction, adherence to treatment, and disease knowledge. A significant reduction in mean FBG and HbA1c levels (−102 mg/dL and −1.88%, P < .001) was observed in the CPBI group. According to the UKPDS, each 1% reduction in A1c levels reduces the risk of death related to diabetes by 21%, the risk of myocardial infarction by 14%, and the risk of microvascular complications by 37%. 10 A recent literature review found that interventions performed by pharmacists achieved significant reductions of 0.18%-2.1% in HbA1c levels after an average interval of 3-12 months. 11 Also, a recent meta-analysis showed that integrated DM education-pharmaceutical care interventions had a significant role in lowering HbA1C and FBG (−0.86% and −34.95 mg/dL, respectively). 12 The improvement in HbA1C was influenced by baseline HbA1C and eating habits. Likewise, an educational pharmacist-led care program improved glycemic control (mean HbA1C −0.5%) through lifestyle changes and control of patients' eating habits. 13,14 In contrast to this result, the Fremantle Diabetes Study reported a smaller reduction in HbA1C and FBG (<0.5% and <15 mg/dL, respectively). 15 In a study conducted in Iran, in which 85 patients were recruited, the level of HbA1C was not significantly decreased after the intervention. 16 Also, adding pharmacists to primary care teams for T2DM patients did not achieve statistical significance. 17 The lack of an HbA1C effect was attributed either to contamination between cases and controls or to a mildly uncontrolled basal level of HbA1C. Non-pharmacological measures have an essential role in diabetes control and HbA1C reduction. In this research, the level of LDL showed a significant decline after CPBI under the effect of changing eating habits. In line with our results, Lee et al 18 and Huete et al 19 demonstrated that the LDL cholesterol level was statistically decreased by a CPBI educational program that targeted food intake. Nonetheless, a systematic review of the effect of pharmacy-based interventions on the control of dyslipidemia revealed that the intervention decreased the level of LDL in the intervention group, but this reduction was not statistically significant. 20 In line with this, a recent study revealed that LDL-c levels did not change under the effect of CPBI. 13 This controversy could be explained by low basal values already close to international recommended values, leaving no room for further improvement. TG was significantly decreased in the CPBI group versus the UC group, and baseline HbA1C, WC, and patient knowledge were the main modifiable factors. Similarly, Paulós et al 21 discussed a 16-week CPBI to manage hyperlipidemia, in which the level of triglycerides decreased significantly in the intervention group compared to the control group. On the other hand, in an RCT of 100 diabetic patients that aimed to assess the effectiveness of CPBI in the management of the lipid profile, the intervention showed no significant change in the TG level. 22 The difference between their results and ours could be due to their telephone call-based, rather than face-to-face, intervention, which could be less effective in some models. In this study, HDL did not show any significant difference after the intervention. Similarly, a meta-analysis that evaluated the role of pharmacists in modifying cardiovascular risk factors found that CPBI interventions have no significant effect on the HDL level. 23 The failure to increase the HDL level may be due to the short duration of the study. In our study, the intervention significantly decreased SBP (P = .010) and DBP (P = .040) compared with the UC group. Other studies have shown heterogeneous results for the effect of pharmaceutical care programs on blood pressure control. Consistent with our results, a systematic review showed statistically and clinically significant improvements in BP in the intervention group at follow-up. 24 A one-year CPBI revealed a significant reduction in SBP and DBP (−5 and −3 mmHg; P < .040, respectively). 25 On the contrary, other interventions were not associated with changes in blood pressure, which was attributed to differing patient characteristics and the fact that most patients already had well-controlled hypertension. 19,26 The CPBI group had a significant reduction in WC and BMI (P = .001). Published results regarding the effectiveness of pharmaceutical care for reducing BMI and WC demonstrate considerable discrepancies. In a 12-month study, CPBI achieved significant reductions in BMI (−1.24 kg/m2 vs +0.4 kg/m2, P < .001) and WC (−1.94 cm vs +0.64 cm, P < .001) in the intervention group. 27 In a 3-month study, a significant reduction (−0.4 kg/m2) was achieved in BMI values. 28 However, Correr et al 29 demonstrated a smaller reduction in BMI (−0.2 kg/m2) over 12 months from a higher baseline value in the Brazilian health system. Unlike UC, CPBI markedly improved patients' self-management scores and behavior (physical activity and the intake of whole grains and vegetables increased, while processed meat, trans fats, and sugary drinks decreased). However, it failed to exert any significant change on smoking, most probably owing to the preponderance of non-smokers (66.3%) and females (57%) among the participants, as in our country females are less likely to smoke. Similar to our results, a Northern Cyprus study showed significant improvements in self-care activities such as diet, without significant improvement in the smoking behavior domain. 27
A non-blinded RCT was conducted on 34 patients who received daily educational messages (via SMS) about DM management through mobile phones. The intervention group experienced a significant increase in the self-management score; however, the drop in the HbA1C level unexpectedly did not reach significance. 30 The significant improvements in self-care activities in our study might be attributable to the intense pictorial education and close follow-up. Unlike UC, exposure to CPBI improved patient satisfaction compared to the baseline level. This is reasonable given the availability, at no cost and at any time, of comprehensive, individualized, patient-centered healthcare and medical consultation. The interventions of pharmacists have been proven to improve glycemic control and increase patients' satisfaction. 31 In contrast, Schroeder could not confirm the role of community pharmacy in improving patient satisfaction; the increase in satisfaction was insignificant because of a high basal score and because both groups of patients received care from the same pharmacy team. 32 Compared to the baseline values, there was a significant increase in moderate and high drug adherence, reaching 90% of patients. Many studies have confirmed the role of community pharmacy in improving patients' adherence, which was explained by the knowledge gained from the trained pharmacist. 33 On the other hand, patient adherence was not statistically increased after CPBI in a study conducted at Washington University. 34 This was explained by the facts that 40% of the enrolled participants did not attend the diabetes care plan and that 50% of people taking a drug were considered unsuitable, mainly due to the expenses of treatment. The economic evaluation of CPBI revealed that the gross cost incurred per patient for 3 months was 23% less with CPBI than with UC. Similarly, one study demonstrated a 15% decrease in total direct costs for patients with diabetes who received pharmacist multidisciplinary care in an outpatient setting over a 6-month period. 35 Our cost savings were comparable to those of a 6-month study in the United States, which found savings equivalent to 450 EGP per patient. 36 The cost savings observed in our study were attributed to lower medication costs, closer therapeutic monitoring, and decreased work absenteeism. On the contrary, others showed no significant difference in total healthcare costs after CPBI among patients with type 2 diabetes. 37 A Canadian study showed no significant difference in healthcare costs at 3 or 12 months between CPBI and UC. 38 The variation in results could be explained by contamination bias between groups, the different GDP of high-income countries, the lack of an adequate costing perspective, and uncertain treatment adherence. This study has some limitations. The number of participants was relatively small; this study will serve as a basis for future studies to confirm the capacity of such interventions in T2DM practice.
Also, the long-term economic impact cannot be ascertained, due to the short study duration. However, the anticipated economic impact of this care approach in the long term may be greater than our analysis suggests, as sustained improvements in HbA1C would lower the risk of future diabetes-related complications and, as a result, further reduce the costs associated with diabetes. This study's success can be attributed to 3 critical elements. First, the adopted study design (a crossover RCT) makes the results of this study reliable for providing invaluable, diabetic patient-centered, pictorial CCM to healthcare-deprived T2DM patients. Second, the risk of contamination between the UC and CPBI arms was minimized, as the 2 study sites were sufficiently far from each other. Third, in contrast to most other studies, which enrolled well-trained clinical pharmacists, we involved community pharmacists who had not received any previous training; this may increase the generalizability of the results of the study.

Conclusion

This is the first RCT in Egypt revealing that CPBI can deliver simplified, convenient, effective, and cost-effective guided care to local T2DM patients using a pictorial, patient-centered DCM. This improves patients' behavior, knowledge, self-efficacy, and adherence, which in turn improve control of their glycemic and cardiometabolic parameters. As a future plan, this study could be tailored to other chronic diseases, such as hypertension and hyperlipidemia. Some parts of the CPBI could also be performed by phone calls or online for the time being, owing to the COVID-19 situation, especially during quarantine periods.

Author Contributions

HM participated in the development and implementation of the study plan, contributed to the interpretation of cases, and critically reviewed the manuscript. MA participated in the development and implementation of the study plan, development of the study questionnaire, and training of pharmacists, contributed to the interpretation of cases, and critically reviewed the manuscript. NH participated in the development and implementation of the study plan, development of the study questionnaire, and training of pharmacists, and critically reviewed the manuscript. RG participated in the development of the study questionnaire, training of pharmacists, and implementation of the study plan, and contributed to the interpretation of cases. RE participated in the development of the study questionnaire and training of pharmacists.
pH-Triggered Removal of Nitrogenous Organic Micropollutants from Water by Using Metal-Organic Polyhedra

Abstract

Water pollution threatens human and environmental health worldwide. Thus, there is a pressing need for new approaches to water purification. Herein, we report a novel supramolecular strategy based on the use of a metal-organic polyhedron (MOP) as a capture agent to remove nitrogenous organic micropollutants from water, even at very low concentrations (ppm), based exclusively on coordination chemistry at the external surface of the MOP. Specifically, we exploit the exohedral coordination positions of RhII-MOP to coordinatively sequester pollutants bearing N-donor atoms in aqueous solution, and then harness their exposed surface carboxyl groups to control their aqueous solubility through acid/base reactions. We validated this approach for removal of benzotriazole, benzothiazole, isoquinoline, and 1-naphthylamine from water.

Introduction

Hazardous organic micropollutants are found in natural water resources worldwide, posing a threat to human health and to ecosystems. [1,2] Thus, there is a pressing need to develop strategies and materials for water purification. Among the most efficient strategies for removal of organic micropollutants from water is adsorptive removal, which in some cases is followed by degradation of the pollutant. Effective adsorbents must combine high surface areas with strong chemical affinity for the target pollutants. Promising candidates include porous materials such as zeolites, activated carbon, covalent organic frameworks, and metal-organic frameworks, all of which offer large surface areas and whose pores can be chemically modified. [3][4][5][6] Other candidates are nanomaterials (e.g., nanoparticles, nanotubes, graphene, etc.), which boast high surface area-to-volume ratios, given their small size. Additionally, some nanomaterials exhibit highly reactive surfaces that can be functionalised for catalysing the degradation of the adsorbed pollutants. [7,8] Researchers have recently begun to develop supramolecular strategies based on host-guest chemistry to capture and separate substances of interest. [9][10][11][12] In these strategies, discrete molecular compounds can be used in solution to selectively recognise, adsorb, and entrap the substance of interest inside their cavities. [13][14][15][16] The resultant host-guest complex is then isolated in solution by liquid/liquid extraction or phase transfer. Finally, the guest molecule is liberated from the host upon breakage of the host-guest interaction. For example, metal-organic coordination cages have been used to selectively separate specific polycyclic aromatic hydrocarbons from a mixture of similar molecules by phase-transfer phenomena. [17] Alternatively, multitopic ion-pair receptors based on calix[4]pyrrole derivatives have been employed to remove inorganics (K+, Li+, and Cs+) from aqueous solution by liquid/liquid extraction. [18,19] Our group recently reported an interesting alternative to the aforementioned host-guest approach to capture species of interest: rather than do coordination chemistry in the pores or cavities of molecular systems such as cages, we instead focus on coordination chemistry at the external surface of metal-organic cages or polyhedra (MOP).
As proof-of-concept, we used the prototypical RhII-based MOP that comprises 12 divalent Rh-Rh paddlewheel clusters and 24 angular benzene-1,3-dicarboxylate (bdc) linkers, exhibits a cuboctahedral shape, and has an external diameter of 2.5 nm. Given the nanoscopic size and functional outer surface of this MOP, using it to capture species resembles the use of a nanoparticle to do the same, albeit with the benefit of stoichiometric precision. This precision stems from the 84 available positions on the outer surface of the MOP, of two types. The first type, of which there are 12, are located in the 12 Rh-Rh paddlewheels. Each of these clusters exposes a single exohedral axial coordination site that can be harnessed to bind coordinating molecules. The second type, of which there are 72, derive from the 24 bdc linkers, each of which can be functionalised at three positions (4, 5 and 6) of its phenyl ring by conventional organic chemistry. [20] Our group previously demonstrated that this surface chemistry could be used to separate physicochemically similar molecules that differ in their affinity to the exohedral RhII axial site (separation by phase transfer) [21] and in the steric hindrance around their coordinating atom (separation by liquid/liquid extraction). [22] In the work that we report here, we adapted our previous surface chemistry approach to develop a new supramolecular strategy that uses the cuboctahedral RhII-MOP as a capture agent to remove organic micropollutants from water by pH-controlled precipitation (Figure 1). This strategy is based on combining the coordination ability of the RhII sites, which capture organic pollutants that bear coordinating functional groups, with a simple acid-base reaction performed on the bdc linkers to control the solubility of the MOP in water. Thus, each one of the characteristic 12 exohedral axial coordination sites of these cuboctahedral RhII-MOPs is used to capture and bind coordinating pollutants from water by coordination chemistry. Moreover, among the different members of these cuboctahedral MOPs, we selected [Rh2(COOH-bdc)2]12 (hereafter named COOHRhMOP; where COOH-bdc = 5-carboxy-1,3-benzenedicarboxylate). This MOP is functionalised with a carboxylic acid group at the 5-position of the phenyl ring of each bdc linker, such that its external surface is functionalised with a total of 24 carboxylic acid groups. [23] These groups are essential for our supramolecular strategy, as they confer the MOP with pH-dependent aqueous solubility. Indeed, when COOHRhMOP is exposed to a base (e.g., NaOH), it becomes an anionic, water-soluble MOP of formula Na24[Rh2(COO-bdc)2]12 (hereafter named COONaRhMOP; where COO-bdc = 1,3,5-benzenetricarboxylate). COONaRhMOP can then be reprotonated upon exposure to an acid (e.g., HCl), which precipitates it out from water such that it can be recovered by filtration or centrifugation. We envisioned that this pH-based solubility could serve as a trigger in a pollutant-removal system, reasoning that, once a coordinating molecule had become anchored to the surface of a MOP, the latter would govern the solubility of the former. Thus, our supramolecular process for pollutant removal is based on four steps. Firstly, coordinative interactions between the organic pollutant and the water-soluble COONaRhMOP are established in solution (Figure 1). Secondly, the COONaRhMOP-pollutant complex is precipitated out, by lowering the pH using HCl, and subsequently isolated from water by filtration or centrifugation.
Thirdly, the precipitate is washed with aq. CaCl2, leading to liberation of the pollutant. Through this washing step, CaII ions coordinate strongly to the organic micropollutant, breaking the COOHRhMOP-pollutant coordination interaction and dissolving the pollutant back into an aqueous solution. In the fourth and final step, the COOHRhMOP is treated with NaOH, causing it to redissolve in the remaining solution, which might contain residual pollutants, thereby enabling its reuse for subsequent cycles of water purification.

Removal of benzotriazole as a test-case

Step 1: Coordination between benzotriazole and COONaRhMOP in water

As proof-of-concept, we chose to test our MOP-based strategy by attempting to remove benzotriazole (BT) from water. Benzotriazole is broadly used both in industrial and household products: for instance, as a corrosion inhibitor; as an ultraviolet light-stabilizer in plastics; as an ultraviolet filter; as an antifogging or defogging agent; and in de-icing/anti-icing fluids. The high amount of BT disposed of by such activities, the high water-solubility of BT (ca. 20 g·L−1), and the slow biodegradation of BT lead to its high persistence in aquatic environments, where, at concentrations above about 5 ppm, [24,25] it causes environmentally harmful long-term effects. [26][27][28] In fact, BT has been proposed as a micropollutant indicator of water contamination through anthropogenic activities, due to its ubiquity in surface water and its environmental toxicity. [29] Given that BT contains a triazole functional group fused to a benzene ring, we envisaged that we could anchor the molecule to the surface of COONaRhMOP by coordinating the free N-donor atoms in the triazole of the former to the exposed axial sites of the Rh-Rh paddlewheels in the latter. To confirm this, we added BT to an aqueous solution of COONaRhMOP, and then monitored their interaction by the naked eye. We found that the addition of BT (24 mol equiv, 380 ppm) to an aqueous solution of COONaRhMOP (0.133 μmol, 1 mL) induced an immediate change in colour of the solution, from blue to purple, suggesting a coordinative interaction between the BT and the Rh-Rh paddlewheel. Next, we monitored this interaction by UV-Vis spectroscopy, focusing on the bands centred at 500 to 600 nm, which correspond to the π*→σ* transitions (λmax) of the Rh-Rh bonds. A shift in the Rh-Rh bond absorption band (λmax), from 585 to 551 nm, corroborated coordination of the BT to the Rh-Rh paddlewheel (Figure S3 in the Supporting Information). [30,31] To further study the coordination of BT to the Rh-Rh paddlewheel units, we followed the titration of COONaRhMOP with BT by UV-Vis spectroscopy. We found that, below 10 mol equiv (159 ppm) of BT, the isosbestic point is preserved, indicating that each exohedral dirhodium axial site behaves independently. [22] Finally, we gained additional evidence that coordination of BT to COONaRhMOP proceeds through the exposed surface dirhodium axial sites, upon observing a marginal shift in λmax above 12 mol equiv of BT (190 ppm).

Step 2: pH-triggered precipitation of COONaRhMOP(BT)

Having demonstrated the coordination of BT to COONaRhMOP, we reasoned that the captured BT could then be removed from water through in situ precipitation of the formed complex (hereafter named COONaRhMOP(BT)). Our hypothesis was based on the premise that, once the coordination between COONaRhMOP and BT had occurred, the solubility of the latter would be dictated by the solubility of the COONaRhMOP.
Therefore, we expected that protonation of the surface carboxylate groups of COONaRhMOP(BT) would induce its precipitation, as we had already observed for COONaRhMOP alone (Figures S1 and S2). We expected that the precipitate (hereafter named COOHRhMOP(BT)) could then be easily removed by using simple techniques such as centrifugation or filtration. To this end, we optimised the pH at which quantitative precipitation of the MOP occurs, while minimising protonation-induced cleavage of the BT-RhII-MOP coordination bond. These experiments were performed by preparing two different mixtures containing COONaRhMOP (0.133 μmol, 1 mL) and 6 mol equiv (95 ppm) or 24 mol equiv (380 ppm) of BT, these two concentrations being representative of sub-stoichiometric and excess amounts with respect to the 12 exohedral axial sites present in the MOP structure. Both resulting solutions (final pH = 8.6 and 7.8, respectively) were incubated for only 10 s and subsequently precipitated by lowering the pH with different amounts of HCl. The precipitated solids were isolated by centrifugation. Once the solids had been isolated, the optimum amount of acid for the precipitation process was determined by analysing the remaining BT in the aqueous solution after the precipitation step by means of UV-Vis measurements and establishing the removal efficiency (Figure S4). For both mixtures, the best performance was observed under milder acidic conditions, when 10 μL of 1 M HCl (final pH = 2.3) were used to precipitate out the COOHRhMOP(BT). Based on this amount of acid, the following removal efficiency values were determined: 77% for the solution initially containing 6 mol equiv of BT, and 53% for the one containing 24 mol equiv of BT (see Section S3.5 in the Supporting Information). In both cases, the amount of MOP remaining in solution was lower than 0.1% (Table S1). Importantly, blank experiments (i.e., lacking COONaRhMOP) were performed in solution. These experiments revealed that the concentration of BT remained constant throughout the pH range studied (pH 5.8 to 1.9), demonstrating the high aqueous solubility of BT and indicating that the solubility of the COOHRhMOP(BT) complex is indeed governed by that of COOHRhMOP itself (Figure S5).

Influence of the pH, the incubation time, and the concentration of BT on the removal efficiency

Once we had optimised the pH at which COOHRhMOP(BT) precipitates, we sought to elucidate the influence of the pH of the polluted water on the coordination of BT to COONaRhMOP and, therefore, on the pollutant removal efficiency. This parameter might influence the removal efficiency of our proposed pH-triggered pollutant removal methodology due to the amphoteric properties of BT (pKa1: 0.42, pKa2: 8.27). To this end, we ran new experiments. Three aqueous solutions containing COONaRhMOP (0.133 μmol, 1.03 mL) and BT (24 mol equiv; 380 ppm) were prepared. Each of these solutions was brought to a different pH: acidic (pH 4.3), neutral (pH 7.6), or basic (pH 9.9). The UV-Vis spectrum of each aqueous phase revealed that the largest λmax shifts were observed under the acidic conditions (Figure S10). This result is consistent with the fact that, at basic pH, BT can be deprotonated, which weakens its coordination to COONaRhMOP due to electrostatic repulsion. Contrariwise, acidic conditions lead to the formation of neutral BT, thereby favouring the formation of COONaRhMOP(BT) (Figure S9).
However, these differences in the coordination of BT to COONaRhMOP did not translate into significant differences in the removal efficiencies after the precipitation step. Thus, after addition of the proper amount of diluted HCl (1 M) to each solution (1 μL to the acidic solution; 10 μL to the neutral one; and 10 μL to the basic one) to reach the optimised precipitation pH of 2.3, the removal efficiency for BT was found to be around 54% in all cases (see Section S3.6). Altogether, these results highlight that our supramolecular strategy works in polluted water samples that vary in their initial pH level. The experiments that we have described so far indicate that the interaction between COONaRhMOP and BT is fast, due to the absence of diffusion barriers, making a rapid method for pollutant removal feasible. To further confirm the lack of significant diffusion barriers in our proposed method, we performed new experiments to assess the impact of the incubation time prior to the precipitation step on the BT removal efficiency. Accordingly, four independent aqueous solutions containing COONaRhMOP (0.133 μmol, 1 mL) and 6 mol equiv of BT (95 ppm) were prepared. Each solution was incubated for a different time (either 10 s, 10 min, 30 min, or 60 min), and then subjected to the aforementioned precipitation procedure. The UV-Vis spectra did not indicate any significant differences in the removal efficiencies, all of which were > 70%, suggesting that the incubation time does not influence the removal efficiency (Figure 2a, green dots; see Section S3.7.1). These results further confirmed the rapid capture and binding of BT to COONaRhMOP. Once we had demonstrated that our pH-triggered, COONaRhMOP-based strategy could indeed remove BT from water, we next evaluated its performance at different concentrations of BT (Figure 2b; see Section S3.8). The initial and final concentrations were determined by either UV-Vis spectroscopy (conc. BT > 16 ppm) or 1H NMR spectroscopy (conc. BT < 16 ppm; Figures S21 and S23). The consistency of the results obtained using both techniques was corroborated by analysing samples at an intermediate concentration (6 mol equiv, 95 ppm) using both techniques (Figures S21c and S22, Tables S6 and S7). The removal efficiency of BT for the solutions that had initially contained between 1 mol equiv (16 ppm) and 10 mol equiv (158 ppm) was found to be > 70% in all cases, with a linear increase in BT removed per MOP with increasing initial concentration of BT (Figure 2b). Remarkably, the removal efficiency was up to 90% in solutions that had contained BT at initial concentrations of 16 ppm. In this case, the remaining BT was below 1.6 ppm, which is below the level considered to be environmentally toxic (ca. 5 ppm). [24,25] Contrariwise, when 24 mol equiv (380 ppm) were initially present in the solution, the final amount of pollutant dropped to 10 mol equiv (158 ppm; removal efficiency: 55%). These experiments confirmed that the ratio between pollutant and exohedral axial coordination metal sites in the COONaRhMOP determines the performance of our pH-triggered coordination-removal strategy.
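To make the concentration conventions used in these experiments easy to retrace, the minimal sketch below (our illustration, not part of the authors' Supporting Information) converts the mol-equivalent loadings quoted above into ppm and computes removal efficiencies from initial and final concentrations. The molar mass of BT and all variable names are our own assumptions.

# Minimal sketch (illustrative): how the mol-equivalent BT loadings map onto ppm (mg/L)
# and how removal efficiency is obtained from initial/final concentrations.
# Assumptions: benzotriazole molar mass ~119.12 g/mol; 0.133 umol of MOP in 1 mL, as in the text.

MW_BT = 119.12            # g/mol, benzotriazole
N_MOP = 0.133e-6          # mol of COONaRhMOP
VOLUME = 1.0e-3           # L (1 mL)

def ppm_from_equiv(equiv, mw=MW_BT, n_mop=N_MOP, volume=VOLUME):
    """Convert a pollutant loading in mol equivalents (per MOP) to ppm (mg/L)."""
    return equiv * n_mop * mw * 1e3 / volume   # mol * (g/mol) * (mg/g) / L = mg/L

def removal_efficiency(c_initial, c_final):
    """Removal efficiency in percent from initial and final concentrations."""
    return 100.0 * (c_initial - c_final) / c_initial

for eq in (1, 6, 10, 24):
    print(f"{eq:>2} mol equiv BT ~ {ppm_from_equiv(eq):.0f} ppm")
# prints ~16, 95, 158 and 380 ppm, matching the concentrations quoted in the text

print(f"16 ppm -> 1.6 ppm corresponds to {removal_efficiency(16, 1.6):.0f} % removal")

The same conversion applies to the other pollutants discussed later, using their respective molar masses.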
Step 3: Regeneration and reusability of COOHRhMOP

Rapid and easy regeneration of the COOHRhMOP is essential for the feasibility of the proposed supramolecular removal strategy. Initially, we reasoned that exposing the COOHRhMOP(BT) solid to harsh acidic conditions would promote the formation of the protonated form of BT and, consequently, its detachment from COOHRhMOP. However, we found that the detachment of BT required incubation under extremely acidic (3 M HCl) conditions, which we ruled out, as they endangered the structural stability of the RhII-MOP (Figure S25). Alternatively, we sought a milder methodology that would entail washing the precipitate with an aqueous solution containing a metallic centre that competes for the coordination of BT. The formation of a new metal-BT complex would thus favour the regeneration of the occupied axial coordination sites of COOHRhMOP, whereby new uptake cycles could be performed. To this end, CaII was selected as the competing metallic centre, because of its reduced toxicity as well as its low cost. To explore its efficacy, we performed a preliminary washing experiment, in which the solid COOHRhMOP(BT) was washed twice by incubating it in a saturated solution of aq. CaCl2 for 30 s. Note that this COOHRhMOP(BT) had initially been precipitated from a 5 mL solution of BT (6 mol equiv; 95 ppm), to which the removal strategy had earlier been applied using COONaRhMOP (0.67 μmol). After both washing steps, the remaining COOHRhMOP was quantitatively transformed into COONaRhMOP through basification with NaOH (16.1 μmol). This basification step caused the RhII-MOP to redissolve in the (now basic) aqueous solution. The success of the regeneration procedure was confirmed by UV-Vis analysis, as the initial λmax of 583 nm characteristic of COONaRhMOP was obtained, thus further corroborating the detachment of BT from the outer surface of the RhII-MOP (Figures S26 and S27). Finally, the recovery and reusability of COONaRhMOP were confirmed by comparing the BT removal efficiency over three additional cycles of uptake (using the same conditions as in Cycle 1), precipitation, and regeneration. Remarkably, the recovered COONaRhMOP maintained its removal efficiency for BT for at least three consecutive cycles (Figures S28 and S29, Table S9). Moreover, the integrity of COONaRhMOP was maintained through the whole cycle, as evidenced by UV-Vis, 1H NMR, and mass spectrometry (MS; see Section S3.10).

Use of filtration in the pH-triggered supramolecular strategy

Next, we simplified our supramolecular strategy by replacing the centrifugation steps with a unified filtration process in which recovery of COOHRhMOP(BT), detachment of BT, and regeneration of COONaRhMOP occur sequentially. To this end, a test was run on 5 mL of a solution of BT (6 mol equiv; 95 ppm), to which the removal strategy was applied using COONaRhMOP (0.67 μmol). After precipitation with HCl, the aqueous solution was passed through a nylon syringe filter (0.45 μm). The filter captured the COOHRhMOP(BT), observed as a purple solid, and the aqueous supernatant was recovered and analysed by means of UV-Vis spectroscopy (Figure 3a). This analysis revealed a removal efficiency comparable to that previously obtained (73%; Figures 3b and S33, Table S10). The filter was then treated with saturated aq. CaCl2, which induced an immediate change in colour of the solid, from purple to blue, representative of cleavage of the Rh-BT coordination bond and subsequent release of the BT into the CaCl2 solution. Finally, aq. NaOH was passed through the filter to redissolve the resultant COOHRhMOP, through formation of COONaRhMOP (Figures 3a and S34). Importantly, a blank experiment (i.e., without COOHRhMOP) revealed that the nylon syringe filter alone does not contribute to any removal of BT (Figure S32).
Interestingly, the strategy also proved successful when, instead of Milli-Q water, regular tap water was employed throughout the entire process (Section S3.12).

Figure 3. a) Schematic of the partial removal of BT by the pH-triggered, supramolecular strategy using filtration (rather than centrifugation) to isolate the capture agent. b) Comparison of removal efficiency for filtration vs. centrifugation. The data are reported as the average uptake value from duplicate experiments. Error bars indicate the standard deviation.

Comparison of the removal of benzotriazole using COOHRhMOP as a porous solid

To evaluate the possible influence of classical diffusion barriers on the efficiency of COOHRhMOP at removing BT from water, we next ran tests under heterogeneous conditions. To this end, COOHRhMOP powder (5 mg, 0.67 μmol) was soaked in a solution containing BT (6 mol equiv; 95 ppm; 5 mL) at a pH of 6.0. This pH was selected to ensure that the MOP remains in its neutral form and that the BT is predominantly in its more coordinative state. The heterogeneous removal tests were performed both with stirring and without (hereafter called static), and the removal efficiency was quantified after different incubation times. Interestingly, the removal efficiency was slightly higher in the stirred reactions (46%) than in the static one (33%; Figure 2b, purple and sky-blue dots, and Section S3.7.2). For both conditions, the best performance was observed after 1 h of incubation. Altogether, these results evidence the detrimental effect of diffusion barriers on the coordination of BT to COOHRhMOP when it is used as a solid powder (i.e., heterogeneous conditions), for which the kinetics are less favourable and, consequently, the removal efficiency lower, compared to using COONaRhMOP in solution (i.e., homogeneous conditions).

Expanding the scope of the pH-triggered coordinative-removal supramolecular strategy

We reasoned that our strategy might become more challenging to apply to the removal of pollutants that contain coordinating groups that are more pH-sensitive than BT, as protonation of these groups precludes the interaction between the pollutant and the COONaRhMOP. To explore this hypothesis, we tested the performance of our strategy at removing other nitrogenous organic micropollutants from water, whose coordinating groups are easier to protonate than are the coordinating N-atoms in the triazole ring of BT (pKa1: 0.42). Thus, we chose three polar pollutants commonly found in water: i) benzothiazole (BTZ; pKa: 2.28), which is used as a vulcanisation accelerator in rubber manufacture and as an herbicide; [32,33] ii) 1-naphthylamine (NA; pKa: 4.25), used both as a fungicide and as a precursor of azo dyes, which is classified as a carcinogen; [34,35] and iii) isoquinoline (IQ; pKa: 5.26), [36,37] which is a potentially genotoxic compound found in coking wastewaters (Figure 4, right). Firstly, the coordination of BTZ, NA, or IQ to COONaRhMOP was confirmed by UV-Vis spectroscopy (Figures S37, S50 and S66). Then, the precipitation conditions for each pollutant were optimised, using a similar approach to that previously followed for BT, whereby a balance between the removal efficiency and the complete precipitation of COOHRhMOP(pollutant) was found. For all these experiments, UV-Vis and 1H NMR were employed to quantitatively evaluate the removal efficiency.
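To put these pKa values in perspective before the results that follow, the sketch below (our illustration, assuming simple monoprotic Henderson-Hasselbalch speciation and ignoring activity effects; it is not taken from the paper or its Supporting Information) estimates the fraction of each pollutant present as the neutral, coordinating free base at a given pH.

# Minimal sketch (illustrative assumption): fraction of each N-donor pollutant present as
# the neutral free base at a given pH, assuming simple monoprotic Henderson-Hasselbalch
# speciation of the conjugate acid. The pKa values are those quoted in the text.

pka = {"BT": 0.42, "BTZ": 2.28, "NA": 4.25, "IQ": 5.26}   # pKa of the conjugate acids

def neutral_fraction(pka_value, ph):
    """Fraction of the base that is unprotonated (i.e., able to coordinate Rh) at a given pH."""
    return 1.0 / (1.0 + 10 ** (pka_value - ph))

for name, value in pka.items():
    print(f"{name}: {100 * neutral_fraction(value, 3.5):.0f} % neutral at pH 3.5")
# BT and BTZ remain largely neutral, whereas NA (~15 %) and IQ (~2 %) are mostly protonated
# at pH ~3.5, the precipitation pH discussed below, yet both are still removed effectively.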
As depicted in Figure 4, very high removal efficiencies were found for BTZ (ca. 80% to 90%) throughout the tested concentration range, with values even higher than those for BT. We attributed this performance to the greater hydrophobicity of BTZ relative to BT, which facilitates its removal from water (Section S4.2.4). The removal of pollutants containing weak bases/nucleophiles could be limited by the weakness of the coordination bonds. However, despite the weak coordination observed between NA and the dirhodium paddlewheel (Figure S50), the removal efficiency for NA was 80% at low concentrations (1 mol equiv; 19 ppm; Section S4.3.4). Moreover, although the required pH for the precipitation of either COOHRhMOP(NA) or COOHRhMOP(IQ) was lower (ca. 3.5) than the reported pKa for either NA or IQ, the removal efficiency for both pollutants was 70% (Sections S4.3.4 and S4.4.4). These results confirm that the respective coordinative bonds remain intact at these (more acidic) pH values. Lastly, in all cases, the MOP could be recovered from the precipitated complexes by using CaII as a competing metallic centre for the coordination of the organic pollutants. Note that both NA and IQ could also be recovered by using an acidic (0.3 M HCl) wash, as their easier protonation allowed complete recovery without endangering the MOP structure. Additionally, the recovered materials were reusable, as they demonstrated comparable removal efficiency values in subsequent cycles (Sections S4.2.5, S4.3.5 and S4.4.5). The integrity of COONaRhMOP was further evidenced by UV-Vis, 1H NMR, and MS (Sections S4.2.6, S4.3.6 and S4.4.6).

Simultaneous removal of multiple nitrogenous organic micropollutants from an aqueous solution

Encouraged by our results, we envisaged that we could use our pH-triggered supramolecular strategy to simultaneously remove multiple organic pollutants from water. To this end, we performed a test in an aqueous solution containing a mixture of BT, BTZ, NA, and IQ. Thus, 3 mL of an aqueous solution containing COONaRhMOP (0.4 μmol) and an equimolar mixture of BT (6 mol equiv; 95 ppm), BTZ (6 mol equiv; 108 ppm), NA (6 mol equiv; 114 ppm), and IQ (6 mol equiv; 102 ppm) was prepared. Then, this mixture was subjected to multiple cycles of pollutant removal/capture-agent regeneration. As indicated by 1H NMR spectra taken before and after the first cycle, the removal efficiency values were, from highest to lowest: 87% (BTZ), 85% (NA), 74% (IQ) and 66% (BT; Figure 5). Although a similar value was observed for removal of BTZ from this multi-pollutant solution compared to the mono-pollutant solution, the corresponding values for the other pollutants did differ. Interestingly, the values for removal of NA and IQ were higher than those from the respective mono-pollutant solutions, whereas that for BT was slightly lower. Overall, these values confirm that each COONaRhMOP can capture more than 12 pollutant molecules (ca. 20). We attributed the results to the contribution of cooperative hydrophobic and van der Waals interactions, which can enhance the performance of the capturing agent once all the specific binding sites have been occupied.
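The "ca. 20" figure follows directly from the equimolar loadings and the first-cycle efficiencies quoted above; the back-of-envelope check below is our own illustration (with arbitrary variable names), not a calculation reported by the authors.

# Minimal sketch (illustrative): back-of-envelope check of the "ca. 20 pollutant molecules
# captured per MOP" figure for the multi-pollutant test. Each pollutant was loaded at
# 6 mol equiv per COONaRhMOP; the efficiencies are the first-cycle values given above.

loading_equiv = 6
first_cycle_efficiency = {"BTZ": 0.87, "NA": 0.85, "IQ": 0.74, "BT": 0.66}

captured_per_mop = sum(loading_equiv * eff for eff in first_cycle_efficiency.values())
print(f"~{captured_per_mop:.0f} pollutant molecules captured per cage")   # ~19, vs. only 12 axial Rh sites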
Next, the capture agent was regenerated, and the remaining aqueous solution with the four residual pollutants was subjected to additional removal/regeneration cycles. By the second cycle, all of the BTZ and NA had been fully removed, and by the third cycle all of the BT and IQ had been removed (Section S5.3). Again, the stability of the COONaRhMOP was maintained throughout each cycle, as confirmed by UV-Vis, 1H NMR, and MS (Section S5.4).

Conclusion

Through this proof-of-concept study, we have demonstrated that robust, water-soluble MOPs can be harnessed to remove organic pollutants from water in a pH-triggered fashion. We engineered a novel supramolecular strategy for organic-pollutant removal based on the pH-dependent solubility of COOHRhMOP. At high pH, the anionic, water-soluble COONaRhMOP is formed, whose outer surface interacts, through coordination chemistry, with organic pollutants bearing basic N-donor atoms. A rapid decrease in pH forces the precipitation of the resultant MOP-pollutant complex out of the now-purified water. We demonstrated the efficacy of our approach at removing four common micropollutants - benzotriazole, benzothiazole, isoquinoline, and 1-naphthylamine - from water at various concentrations, using both single- and multiple-pollutant solutions. In all cases, the COOHRhMOP can be easily regenerated by using readily available and mild reagents (CaCl2 and NaOH), and its performance is maintained through multiple cycles of removal/regeneration. Our work lays the foundation for the development of pH-induced precipitation of organic pollutants, akin to currently used methods for the removal of inorganic salts. Moreover, we envisage that the wide structural versatility of MOPs will enable our approach to be extended to many other organic pollutants, especially by exploiting MOP-pollutant interactions other than (or in addition to) coordination chemistry, including host-guest, π-π and electrostatic interactions, and combinations thereof. Thus, the results presented here widen the scope of applications for the emerging water-soluble metal-organic cages toward pollutant removal and environmental applications. [38]
Book Review: Missed Connections: Integrating Proximate and Ultimate Explanations in Cognitive Neuroscience

As readers of this journal will happily attest, evolutionary thinking now penetrates and energizes virtually all areas of psychology (Gaulin and McBurney, 2003; Buss, 2005, 2007; Gray, 2006). As readers may also recognize, an exciting recent trend is that scholars are bringing evolutionary thinking to fields that might appear even further from its purview, including law (Jones, 2004), religion (e.g., Boyer, 2001; Atran, 2002; Wilson, 2003), history (Smail, 2007), aesthetics (e.g., Dissanyake, 1994; Miller, 2000), literature (e.g., Carroll, 1994; Gottschall and Wilson, 2005; Barash and Barash, 2005), and morality (e.g., Hauser, 2006; Haidt, 2007). In this new volume, Evolutionary Cognitive Neuroscience (hereafter ECN), editors Platek, Keenan and Shackelford go the other way, bringing evolutionary thinking to an area where most people would assume it's already integrated. In particular, the editors claim that the field of cognitive neuroscience—despite its deep relations with hard-core biological fields such as anatomy, physiology, biochemistry, and genetics—has missed the Darwinian boat by neglecting to harness the power of adaptive thinking. Instead, it has focused on proximate mechanisms without giving proper consideration to the possible function of those mechanisms and their evolutionary history. The editors, who are practicing neuroscientists and evolutionary psychologists, argue that this state of affairs is finally changing (cf. Webster, 2007) and that it's high time to take stock of what the evolutionary perspective has provided so far. Their stated goal (p. xvi) is, "to present, in an organized overview, the way in which researchers are beginning to wed the disciplines of evolutionary psychology and cognitive neuroscience in order to provide new data on and insights into the evolution and functional modularity of the brain." In this review, we first discuss the overall goals and organization of ECN before offering remarks on specific sections and chapters.

Overall Organization

Because ECN was not the byproduct of a conference, the editors enjoyed free rein in soliciting chapters. They clearly opted for diversity, as the authors of the 21 chapters, despite their shared incorporation of evolutionary principles, often employ dramatically different methods, species of study, and theoretical frameworks. These diverse approaches present a formidable organizational challenge, which the editors have attempted to meet by organizing sections of the book around different adaptive problems.
This seems inspired by Buss' successful textbook (2007) that includes sections on such topics as food acquisition, long-term mating, short-term mating, and parenting. Although this organization works beautifully for Buss, it is less successful here, owing to the fact that many of the contributors apparently do not structure their research programs by initially identifying specific adaptive challenges. Instead, they appear to work from the other direction, identifying a psychological phenomenon of interest and then testing predictions derived from both proximal and functional hypotheses. Such research programs have been patently successful (see below), but the chapters describing them do not form a coherent whole. For example, Part III, called "Reproduction and Kin Selection," begins with a superb chapter by Fernald reviewing his research on the reciprocal influences of social interactions, development, and various physiological systems in African cichlid fish. The next chapter, by Platek and Thompson, introduces fascinating recent work on visually mediated kin discrimination in humans, its potential utility for male parental investment decisions, and its neural correlates. Both chapters deal with visual discrimination, but the adaptive problem they address is quite different: competitor assessment in fish and paternal investment in humans. Other chapters in Part III, by Fisher and Thompson and by Newlin, do not address either of these problems, instead focusing on the psychopharmacology of mating and dopaminergic motivation systems, respectively. Thus, in this section, and others, the reader is often left wondering what neighboring chapters share besides physical proximity. In addition, throughout ECN, authors working on related problems and brain systems almost never make reference to findings and approaches discussed in other chapters. A major theme that might have effectively tied ECN together is the influence of Tooby and Cosmides's (1992, 2005) claims on cognitive neuroscience. These claims-the reality of massive modularity, the importance of the environment of evolutionary adaptiveness (EEA), and the universality of human cognitive architecture-are highly influential in evolutionary psychology, especially among the leading thinkers (see Gaulin and McBurney, 2003; Buss, 2005, 2007). In fact, the ECN editors profess commitment to these claims and view them as central to evolutionary psychology's contribution to cognitive neuroscience (preface, chapter 1; and see Krill, Platek, Goetz, and Shackelford, 2007), stating (p. xiv), for instance, that "all learning is a consequence of carefully crafted modules dedicated to solving specific evolutionary problems." Unfortunately, very few of the chapters address such claims head on, and some are completely at odds with them. Take, as one instance, the chapter by Rushton and Ankney, which authoritatively summarizes the mountain of evidence that brain size and IQ are correlated in humans. Naturally, this work supposes the reality of IQ as a measure of domain-general intelligence, and, as Rushton and Ankney note, the evidence for this reality is overwhelming. This finding conflicts with the claim of massive modularity, and ECN should somewhere have addressed such apparent incongruities. A related shortcoming is that there are no studies presented where Tooby and Cosmides's massive modularity claim-which ought to be a central issue in cognitive neuroscience-is truly put to the test.
Over the past decade, for example, the proposition that there are dedicated, encapsulated modules for face processing has been vigorously probed by neuroscientists of all stripes (e.g., Kanwisher, McDermott, and Chun, 1997; Gauthier, Skudlarski, Gore, and Anderson, 2000). Closer to home, Cosmides, Tooby and their colleagues have recently extended their seminal investigations of cheater detection by incorporating lesion and neuroimaging data (Stone, Cosmides, Tooby, Kroll, and Knight, 2002; Ermer, Guerin, Cosmides, Tooby, and Miller, 2006). Crucially, both of these research programs have relentlessly sought to characterize precisely how modular the putative mechanisms actually are. ECN would have benefited if a chapter had been dedicated to presenting one of these research programs or a similar one. Another thing that might have brought more coherence to the volume would be a chapter providing a historical overview of the relationship between cognitive neuroscience and evolutionary thinking. Such a chapter might have traced the origins of the cognitive revolution and explored the reluctance of many major players (e.g., Chomsky, Fodor) to fully embrace an evolutionary approach. At the same time, such a chapter could have considered those neuroscience research programs that have been ethologically based (e.g., birdsong) and probed their intellectual origins and influence. Finally, this type of chapter might have presented readers with a few clear examples where an evolutionary approach has been truly necessary to understand cognitive neuroscience phenomena, or conversely, where cognitive neuroscience has strongly informed evolutionary psychology and ethology. Evolutionary psychologists have frequently done this for psychological phenomena (e.g., prepared learning), and it would have been useful to have analogous neuroscience cases brought into relief.

Specific Sections and Chapters

Section 1, "Introduction and Overview," begins with a chapter by Goetz and Shackelford that reviews basic evolutionary principles and how most evolutionary psychologists think they apply to behavior and cognition (i.e., Tooby and Cosmides's claims). The chapter provides few novel ideas but is written crisply and will serve as a useful refresher for many readers. Unfortunately, it provides few thoughts on what challenges must be confronted when evolutionary psychologists bring their ideas to neuroscience. In Chapter 2, Dunbar presents an updated and detailed review of his and others' work addressing the evolution of brain size across species and the theory that primate brain evolution is best accounted for by social demands. The second half of the chapter focuses on theory of mind studies in humans and chimpanzees. It's a nice chapter and stands out as one of the only ones in ECN to explicitly grapple with the possibility of domain-generality. In Chapter 3, Patel and colleagues provide an introduction to cognitive neuroscience methods and how they may best be employed by evolutionary psychologists. Although the chapter contains a wealth of information, the overall organization is difficult to discern, and there are some inaccuracies (e.g., transcranial stimulation is described as an imaging technique). We recommend that readers looking for a methodological overview begin with a more traditional cognitive neuroscience text (e.g., Gazzaniga et al., 2002). Section 2 is titled, "Neuroanatomy: Ontogeny and Phylogeny", and it is a smorgasbord.
It begins with a wide-ranging chapter by Stone on the evolution of human life history and its relation to brain development. The chapter culminates with comparative analyses of primate life history and brain size, making it the only one with original analyses in the book. Most importantly, Stone attempts to support the long-standing hypothesis of coevolution between age at maturity and brain size, especially executive brain size (Reader and Laland, 2002). Although the correlation is shown, there are some apparent problems with the methods. For instance, Stone ignores the issue of phylogenetic non-independence, an omission which is no longer excusable given its demonstrated importance and the availability of practical methods to address it (reviewed in Nunn and Barton, 2001). Furthermore, the chapter overlooks several papers that have developed similar hypotheses and shown similar results (Joffe, 1997; Kaplan, Hill, Lancaster, and Hurtado, 2000; Deaner, Barton, and van Schaik, 2002). Chapter 5 is written by Hopkins and titled, "Hemispheric specialization in chimpanzees: evolution of hand and brain." Hopkins details the accumulating evidence that captive chimpanzees exhibit population-wide asymmetries, both behaviorally and neuroanatomically. The chapter concludes with a review of recent research exploring whether behavioral and neuroanatomical asymmetries might correlate within individuals and whether such individual differences are heritable and/or related to birth order. While the chapter is lucid and will be valued by scientists who work on these topics, nonspecialists may be dissatisfied because it barely touches the question of how asymmetries might be related to adaptations or other broader issues. In Chapter 6, Rushton and Ankney offer an elegant and comprehensive review of the relation between brain size and IQ, showing that the evidence for the linkage is now overwhelming even when body size is controlled. They also do an admirable job of reviewing the politically unpalatable empirical evidence indicating appreciable variation according to age, sex, socioeconomic status, and race. As we stated above, we would have appreciated a discussion of how the phenomena discussed in the chapter relate to Tooby and Cosmides's seminal claims. Chapter 7 is written by Marino and titled, "The evolution of the brain and cognition in cetaceans." It concisely reviews the evolution of this mammalian order, their peculiar neuroanatomy, how this neuroanatomy relates to their cognition and behavior, and the (often unusual) methods developed to study cetacean brains and cognition. If you've ever wondered how and why dolphins and toothed whales became so big-brained, this outstanding chapter is one of the first things you should read. Section III, titled, "Reproduction and Kin Selection," is the first section of ECN that is supposedly organized around an adaptive problem. As we noted above, the chapters in this section actually bear little relation to one another, although they are each valuable in their own right. Fernald's chapter, titled, "The social control of reproduction: physiological, cellular, and molecular consequences of social status," provides a wonderful tour of a fully developed research program, describing the mechanisms by which behavioral states and brain physiology interact. Even readers with no interest in fish will be inspired.
Chapter 9, authored by Platek and Thomson, starts by describing some exciting recent experiments showing that humans discriminate visual images morphed to subtly resemble their own face or their kin's. Moreover, context-dependent preferences and aversions to such morphed faces are explained tidily by various adaptive hypotheses. Platek and Thomson especially focus on results indicating that men may use self-referent visual cues when making parental investment decisions. They conclude by describing their recent neuroimaging studies, which seek to identify a neural signature of males' heightened sensitivity to a child's resemblance. Their data indicate that males show increased activation in several brain areas when viewing a self-child morph. Future studies will hopefully replicate these results and extend them by (1) demonstrating that differential activation is unique to kinship recognition and not merely a product of salient or familiar social images (e.g., friend-child morph), (2) verifying that the hypothesized kinship-recognition circuit is anatomically plausible, i.e., that neuroanatomical connections exist between areas in the hypothesized module, and (3) showing that the strength of these anatomical connections or degree of network activation correlates with performance on kin-recognition tasks. In sum, although much work awaits, this emerging research program provides a terrific example of how a rigorous evolutionary perspective can highlight a topic overlooked by traditional cognitive neuroscience. Chapter 10 is written by Fisher and Thompson, and the title aptly summarizes their message: "Lust, romance, attachment: do the side effects of serotonin-enhancing antidepressants jeopardize romantic love, marriage, and fertility?" This fascinating and detailed review drives home why an evolutionary perspective is necessary for both cognitive neuroscientists and practicing clinicians. Newlin concludes this section by presenting what he calls SPFit (self-perceived survival and reproductive fitness) theory, which seeks to bring an evolutionary framework to substance abuse disorders. SPFit theory essentially holds that the corticomesolimbic dopamine system is not a reward pathway, as some addictions researchers claim, but instead embodies the neural substrate for SPFit, which is, "a basic survival and reproductive motivational system that is activated by drugs of abuse and by perceived threats to survival and reproductive fitness..." (p. 286). Although there are many interesting ideas presented in this chapter, and readers with little knowledge of addictions will learn much in reading it, we remain unconvinced of SPFit's utility. For one thing, our reading of this literature suggests that, despite the new jargon, Newlin's interpretation of the corticomesolimbic dopamine system is now nearly mainstream. In addition, Newlin's invocation of a unitary fitness representation appears at odds with Tooby and Cosmides's (1992, 2005) claim that natural selection has shaped human psychology to execute specific adaptations, not to maximize fitness. Most passages in this chapter appeared in a previously published review article (Newlin, 2002). Section IV of ECN is titled, "Spatial Cognition and Language," and it contains two chapters on sex differences in spatial ability and one on the evolution of language.
In the first chapter addressing spatial ability, Puts and colleagues begin by systematically characterizing sex differences in humans and rodents and then turn to various adaptive hypotheses that might account for them. The bulk of the chapter reviews the scores of studies detailing the relevant hormonal, developmental, and brain mechanisms. The chapter is scholarly and thorough, so thorough in fact that non-specialists might easily be overwhelmed by the detail and length (38 pages, almost entirely text). The clear writing and effective use of section headers, however, should allow readers to keep their bearings. Gur and colleagues wrote Chapter 13, and it is similar to the previous one in that the goal is to characterize sex differences in spatial abilities and review what is known about their mechanistic basis and evolution. It is well-written and, while largely overlapping with the previous chapter, does introduce some different material, such as studies of brown-headed cowbirds, a species in which the demands of brood parasitism have apparently selected for greater spatial abilities in females. To conclude Section IV, Corballis offers a highly synthetic chapter, titled, "The evolution of language: from hand to mouth." It first provides an overview of the central challenges in language evolution and puts them in a paleoanthropological and life history context. Corballis then presents an updated distillation of his own theory (Corballis, 2002), which marshals a vast array of data (e.g., anthropological, genetic, neurological) to argue that full-blown human languages consisted mainly of manual gestures until some time between 100,000 and 50,000 years ago, when vocal language was invented. Although there are reasons to be skeptical of this theory, it seems plausible and certainly succeeds in sensibly organizing the relevant observations and providing an entertaining read. Section V is titled, "Self-awareness and Social Cognition," and contains five chapters. Chapter 15 is written by Santos and colleagues and titled, "The evolution of human mindreading: how nonhuman primates can inform social cognitive neuroscience." The authors first remind us of the methodological advantages of studying nonhumans and then, focusing mainly on their recent work on macaques, argue that there is now compelling evidence that some nonhuman primates possess aspects of a Theory of Mind, although much work still remains in characterizing these abilities. Finally, they review what is known or has been speculated about the neurobiological bases of such abilities and provide examples of how neuroscientific and behavioral methods can complement one another. The chapter is superbly written and will be valuable to many, including monkey neurophysiologists and developmental psychologists. Chapter 16 is written by Focquaert and Platek, who develop a theory of self-processing, introducing evidence from the nonhuman primate literature as well as their own functional neuroimaging studies. In particular, Focquaert and Platek argue that social cognition relies heavily on simulation and that simulation provides a means not only of understanding the other, but also the self. For these reasons, they hypothesize that in humans, and perhaps other primates, self-awareness is strongly linked to an individual's ability to attribute mental states to others.
The authors survey a broad range of findings while outlining this particular approach to self-awareness and Theory of Mind, struggling admirably to bring evolutionary perspectives to a philosophically and semantically challenging area. Nonetheless, non-specialists may find the terminology difficult, and, because alternative paradigms mentioned in the chapter (e.g., the "theory theory") are never described in detail, readers may finish the chapter feeling they've not been fully exposed to contemporary neuroscientific views of self-awareness. Chapter 17 is authored by Baron-Cohen and is titled, "The assortive mating theory of autism." It presents an updated version of the author's theory that systemizing and empathizing are fundamentally opposite ways of understanding the world and that a core feature of autistic spectrum disorders is hypersystemizing. The chapter then reviews the evidence implicating genetics in autism, including that individuals with two parents that are high systemizers are at especially heightened risk. The chapter is cogent and should be read by all who have interest in autism or social cognition, even those highly skeptical of Baron-Cohen's theory. We note, however, that Baron-Cohen's use of the word "assortive mating" in his title and throughout the paper is confusing since biologists use this word in a different way, meaning that individuals in a population with a particular phenotype have a greater (or lesser) than chance likelihood of mating with one another. Baron-Cohen, despite using the word "assortive," does not claim that high systemizers are especially likely to mate, only that when they do, the offspring are at heightened risk of autism. This chapter appeared, in virtually identical form, in a previously published article (Baron-Cohen, 2006). Stevens and colleagues wrote Chapter 18, which is entitled, "Deception, evolution, and the brain." It speculates on the possible advantages of deception, reviews Trivers's and Ramachandran's theories of self-deception, and discusses various neurological disorders and other phenomena related to deception, including misidentification syndromes and memory problems. The chapter's overall goal is difficult to determine, but it contains several interesting ideas. Kosslyn provides this section's finale, presenting his theory of "social prosthetic systems," which are defined as, "human relationships that extend one's emotional or cognitive capacities." The basic claim is that humans are motivated to use the abilities of others to achieve their own interests; conversely, each of us is being employed as a prosthesis by others, so that ultimately we all comprise a social network with multiple goals and abilities, many of which are partly in conflict. The chapter effectively illustrates these points with examples, including an amusing (and enlightening) discussion about the crosswalk featured on the album cover of the Beatles's Abbey Road. Despite the intuitive appeal of the ideas, we suspect that most readers will find themselves ultimately unsatisfied with Kosslyn's chapter. The problem is that it puts insufficient effort in distinguishing itself from related ideas (a theory of motivation that requires only 16 references?) or developing any specific predictions. Readers are advised to consult Tooby and Cosmides's (1996) paper on friendship, which develops similar ideas but with clearer directions for empirical research. 
Section VI is last and titled, "Theoretical, Ethical, and Future Implications for Evolutionary Cognitive Neuroscience." In Chapter 20, Kimberly and Wolpe explore the philosophical, ethical, and social questions posed by the kinds of research presented in ECN. They first discuss philosophical implications for issues such as morality and free will and then turn to the implications of controversial knowledge that such research might generate. The last, and perhaps freshest, part of the chapter is the discussion of how evolutionary psychology and cognitive neuroscience research intersects with society, media, and the criminal justice system. It's a fine chapter, and researchers from all backgrounds will profit by reading it. The final chapter, Chapter 21, is written by Keenan and colleagues, and it offers several helpful remarks on the challenges and rewards of engaging in a truly interdisciplinary field such as evolutionary cognitive neuroscience. Conclusion Evolutionary Cognitive Neuroscience's goal is to demonstrate the developing links between cognitive neuroscience and evolutionary psychology, especially Cosmides's (1992, 2005) influential version of EP. Although the volume's overall structure does little to illuminate these connections, there are some chapters that certainly succeed. Interestingly, it is often the chapters that appear least beholden to Tooby and Cosmides's stricter claims that do the best job of illustrating how evolutionary perspectives have motivated active neuroscientific research programs. Although readers may initially buy this book because of a specific chapter relevant to their research goals, we suspect they will find it contains many accessible and provocative contributions. In conclusion, we recommend ECN to all psychologists and cognitive neuroscientists with interest in evolution.
2019-05-07T14:21:34.835Z
2007-07-01T00:00:00.000
{ "year": 2007, "sha1": "b0e2e1cf5f66eed6e007b17f758c40ce8bdf3279", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1177/147470490700500311", "oa_status": "GOLD", "pdf_src": "Sage", "pdf_hash": "b0e2e1cf5f66eed6e007b17f758c40ce8bdf3279", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
119266173
pes2o/s2orc
v3-fos-license
Stop and Sbottom LSP with R-parity Violation Considering a third-generation squark as the lightest supersymmetric particle (LSP), we investigate R-parity violating collider signatures with bilinear LH or trilinear LQD operators that may contribute to observed neutrino masses and mixings. Reinterpreting the LHC 7+8 TeV results of SUSY and leptoquark searches, we find that third-generation squark LSPs decaying to first- or second-generation leptons are generally excluded up to at least about 660 GeV at 95% C.L. One notable feature of many models is that sbottoms can decay to top quarks and charged leptons, leading to a broader invariant mass spectrum and weaker collider constraints. More dedicated searches with $b$-taggings or top reconstructions are thus encouraged. Finally, we discuss how the recently observed excesses in the CMS leptoquark search can be accommodated by the decay of sbottom LSPs in the LQD$_{113+131}$ model. I. INTRODUCTION Supersymmetry (SUSY) has been considered as a leading candidate for physics beyond the Standard Model. Its superpotential can contain, in addition to the R-parity conserving terms (where µ denotes the supersymmetric mass parameter of the Higgs bilinear operator H u H d ), lepton-number (L) and baryon-number (B) violating operators. Simultaneous presence of the L-violating and B-violating couplings makes the proton unstable and thus has to be avoided. The proton stability may be ensured by imposing various discrete symmetries [2]. One of them is the standard R-parity forbidding all of the above operators. 1 Other popular options are to consider the B-parity and L-parity, forbidding only the B- and L-violating operators, respectively. The B-parity has the attractive feature that the allowed L-violating operators could be the origin of tiny neutrino masses [3]. Motivated by this, we investigate signatures of a stop/sbottom LSP directly decaying into a quark and a lepton through either the bilinear LH or trilinear LQD couplings, which can contribute to the observed neutrino masses and mixing. One of the search channels for such an RPV stop/sbottom is the conventional leptoquark search [4]; leptoquarks have been searched for at HERA [5] and, more recently, at the LHC [6][7][8][9]. In this paper, we study various prompt multilepton and/or multijet signatures of the stop/sbottom LSP with the LH or LQD RPV to constrain the stop/sbottom mass, combining all the relevant current LHC results not only from the leptoquark search but also from the RPC stop/sbottom as well as RPV multilepton searches. Our RPV models can have various types of couplings such as LH i and LQD ij3,i3k , and the interpretation of data in terms of these variant models can be different. The L-violating RPV signatures of the stop have been studied earlier in Refs. [10], and more recently in Ref. [11]. Leptoquark signatures of a stop/sbottom LSP have also been explored recently in Ref. [12] in the context of a bilinear spontaneous RPV model. In Sec. IV, we discuss how the recently observed excesses in the CMS leptoquark searches can be interpreted in our context, and their possible implication for EWPD constraints. Finally, we conclude in Sec. V. A. General Consideration As mentioned, we consider the LH and LQD operators relevant for the stop and sbottom LSP decays. LH: Let us first derive the stop and sbottom couplings arising from the bilinear LH RPV.
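The explicit superpotential referred to in this introduction does not survive in the text above. For orientation, the renormalizable R-parity violating operators are conventionally written as follows; this is the standard textbook form rather than a quotation from this paper, and the index conventions and coupling names (for example, the bilinear coupling written here as ε_i) may differ from those used later in the text:

W_{\rm RPV} = \tfrac{1}{2}\,\lambda_{ijk}\, L_i L_j \bar{E}_k + \lambda'_{ijk}\, L_i Q_j \bar{D}_k + \tfrac{1}{2}\,\lambda''_{ijk}\, \bar{U}_i \bar{D}_j \bar{D}_k + \epsilon_i\, L_i H_u , \qquad W_{\rm RPC} \supset \mu\, H_u H_d .

In this notation the LLE and LQD operators violate lepton number, the UDD operator violates baryon number, and the bilinear ε_i L_i H_u term is the LH operator referred to in the abstract.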
For this, we need to include also soft SUSY breaking bilinear terms, which generate the vacuum expectation value (vev) of a sneutrino fieldν i parameterized as Here t β is the ratio between two Higgs vevs: t β = H 0 u / H 0 d The bilinear couplings i and a i induce mixing masses between neutrinos (charged leptons) and neutralinos (charginos) and thereby non-vanishing neutrino masses as well as effective RPV couplings of the stop and sbottom LSP of our interest. To see this, it is convenient to diagonalize away first these mixing masses as discussed in Ref. [14]. The relevant approximate diagonalizations valid in the limit of i , a i 1 are collected in Appendix A. After these diagonalizations, we get the following RPV vertices of stops: where Similarly, the sbottom RPV vertices are given by where LQD: It is straightforward to get the stop and sbottom RPV vertices coming from the trilinear RPV couplings, λ ijk with j or k = 3: When the LH and LQD PRV are allowed, their couplings can contribute to generate neutrino mass matrix components respectively at tree and one-loop (see Fig. 1) as follows: where m b X b is the sbottom mixing mass-squared and only sbottom contributions are included assuming mb md k for k = 1, 2. A complete 1-loop calculation can be found in Refs. [14,15]. In the case of the neutralino LSP, the RPV signatures correlated with the neutrino mixing angles have been extensively studied [16][17][18] as well as in the split SUSY [19]. Similar studies are worthwhile in the case of the stop/sbottom LSP as well. We leave this issue as a future work. From the expressions in Eqs. (19,20), the LH and LQD couplings are constrained by the measured values of tiny neutrino masses. As a rough estimate, the following bilinear and trilinear couplings are required to generate the neutrino mass components of m ν,ii = 0.01 eV: TeV. These coupling sizes are small enough that they do not affect production rates and do not make resonances broader than experimental resolutions so that collider physics is mostly independent on them. Nevertheless, they are large enough to allow prompt decays of squark LSPs. B. Benchmark Models We now introduce three benchmark models. Sbottom and stop LSPs decay to either first-or second-generation leptons. Model names imply the involved RPV interactions and subscripts imply lepton and/or quark generations. In the presence of the mixing between left-handed and right-handed stops/sbottoms, we can write the stop/sbottom mass eigenstates,q 1 andq 2 with q = t, b: where θq is the squark mixing angle. We are interested in the RPV vertices of the lightest stop (t 1 ) or sbottom (b 1 ). LH i : Stop and sbottom decay modes areb 1 → e i t, ν i b andt 1 → e i b, ν i t, and the branching fraction for the charged lepton modes are given by (ignoring top and bottom masses) where we neglect the terms suppressed by m e i /F C . As the stop or the sbottom is the LSP, it is expected to have M Z µ and thus |c N Note that the LH model becomes effectively equivalent to the LQD i33 model with λ i33 ≡ i y b (see below) in the limit of vanishing ξ i . Thus, the sbottom and stop decay branching ratios for the charged lepton modes are LQD ij3+i3j : Only λ ij3, i3j = 0 is assumed to allowb 1 → e i u j , ν i d j ort 1 → e i d j . The sbottom and stop branching ratios for the charged lepton modes are The first two models, LH i and LQD i33 , involve heavy quarks (tops and bottoms) in the final states while only light quarks are produced in the LQD ij3+i3j model. III. 
LHC SEARCHES AND BOUNDS Let us first consider how the sbottom LSP can be constrained at the LHC. Sbottom pair productions in the LH 1 and LQD 133 models, leave the final states: The bbνν is constrained by RPC sbottom searches through b 1 → bχ 0 1 with the massless LSP, hence bb+missing transverse energy(MET). The existing strongest bound on the sbottom mass is 725GeV from CMS 19.4/fb [20]. The tbeν can be constrained from the eνjj searches of first-generation leptoquarks [7] -the CMS analysis uses two hardest jets of any flavor. Note that the sbottom and the leptoquark have the same quantum numbers as color triplet, and their production rates are almost identical, as dictated by QCD interactions. So it is appropriate to use this result to extract bounds on sbottoms. The ttee can be constrained from the eejj searches of leptoquarks and additionally from multi-lepton(≥ 3 ) RPV LLE searches [21]. We comment on other searches in Appendix B. We recast these search results to exclusion bounds on the sbottom in the left panel of Fig. 2 -we refer to Appendix B for how we obtain these bounds. The same bounds apply to both LQD 133 and LH 1 as they predict the same final states. Large βb is constrained from the eejj and the multi-lepton RPV searches whereas small βb is constrained from the RPC sbottom search. In general, sbottoms lighter than about 660 GeV is excluded by at least one of those searches. We now turn to the stop LSP. Stop pairs in the LH 1 and LQD 133 models decay as where the first two modes are not allowed in the LQD 133 model. The ttνν channel is constrained by RPC stop searches through t 1 → tχ 0 1 with the massless LSP. The existing strongest bound is 750 GeV from CMS 19.5/fb [22]. The remaining decay modes, tbeν and bbee, can be constrained from the eνjj and eejj searches of first-generation leptoquarks [7]. Note that the stop also has the same quantum numbers as leptoquarks. Unlike sbottoms, stop pairs do not lead to final states with more than 3 leptons. Recasting these search results to exclusion bounds on the stop, we obtain the right panel of Fig. 2. Similarly to the sbottom case, stops lighter than about 660 GeV is excluded. There is one notable difference between the sbottom LSP and the stop LSP. Sbottom pairs decay to ttee while stop pairs decay to bbee. Tops produce more jets, and each jet becomes softer as decay products share the energy-momentum of sbottoms. Thus the acceptance under leptoquark search cuts gets lower. The eejj exclusion bound (blue-dashed) on sbottoms (the left panel of Fig. 2) is indeed weaker than that on stops (the right panel of are included. It will be useful to measure this characteristic difference in the future searches. Therefore, potentially significant improvements in the third-generation squark LSP searches can be achieved with b-taggings and/or top reconstructions. With 20/fb of data, 8.1fb × 20/fb 160 pairs of 700 GeV sbottoms are produced, and much better bounds are beginning to be statistically limited. In any case, 160 is still a reasonably large number, and more dedicated searches implementing b-tagging and/or top reconstruction are certainly worthwhile. We can repeat the same analysis in the LH 2 and LQD 233 models allowing sbottom and stop LSP decays to µ, and apply the CMS second-generation leptoquark searches [8]. The resulting bounds are shown in Fig. 4. 
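As a back-of-the-envelope check of the event counting quoted above, the number of produced squark pairs and their split over the leptoquark-search final states follow from elementary arithmetic. The sketch below uses the 8.1 fb pair-production cross section and the 20/fb luminosity quoted in the text; the branching-fraction and acceptance values are illustrative placeholders and not numbers taken from this analysis.

# Rough yield estimate for third-generation squark pair production (sketch).
sigma_pair_fb = 8.1    # pp -> sbottom pair cross section at 700 GeV, quoted above [fb]
luminosity_fb = 20.0   # integrated luminosity [fb^-1]
beta = 0.5             # illustrative leptonic branching fraction of the squark LSP
acceptance = 0.3       # illustrative selection acceptance (placeholder)

n_pairs = sigma_pair_fb * luminosity_fb   # about 162 produced pairs

# Each squark of the pair decays to a charged lepton with probability beta,
# so the pair populates the leptoquark-search channels as follows:
channels = {
    "lljj": beta ** 2,
    "lnujj": 2.0 * beta * (1.0 - beta),
    "nunujj": (1.0 - beta) ** 2,
}
for name, frac in channels.items():
    print(f"{name}: {n_pairs * frac * acceptance:.1f} expected events after acceptance")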
Compared to the eejj search, the µµjj search is somewhat more stringent partly because µ is more accurately measured and cleaner -compare blue-dashed lines in the left panels of Fig. 2 and Fig. 4. On the other hand, µνjj results are similar to eνjj results (red-solid lines). To summarize, again, third-generation squark LSPs lighter than about 660 GeV are generally excluded. Finally, the LQD ij3+i3j models with i, j = 1, 2 are equivalent to the leptoquark models and the current search results can be directly applied to constrain the sbottom/stop LSP mass. IV. THE OBSERVED LEPTOQUARK EXCESS FROM SBOTTOM DECAYS The CMS leptoquark analysis has recently reported excesses in 650GeV leptoquark searches in both eejj and eνjj channels [7]. The excesses are claimed to be 2.4 and 2.6σ significant, respectively. The excesses disappear when a b-jet is required, and no similar excess is observed in searches with µ [8] and τ [9]. In this section, we discuss how our third model, LQD 113+131 , can fit the excesses. A. Sbottoms as Leptoquarks Sbottom pairs in the LQD 113+131 model decay as with BR=(1 − β) 2 , 2β(1 − β) and β 2 , respectively. This model is identical to the firstgeneration leptoquark model considered in the CMS analysis except that β is given differently by Eq. (31) in our model. The best fit is allegedly reported to be with 650 GeV and β = 0.075. Our model can accommodate this by the decay of sbottom LSPs. By simply assuming λ 113 = λ 131 as an example, we can extract more specific information on the underlying parameters. Then, β = sin 2 θb/(1 + sin 2 θb) ≤ 0.5 is now bounded from above. The bestfit value, β = 0.075, requires sin 2 θb = 0.081, meaning that the sbottom LSP is mostly left-handed. The constraint from electroweak precision test is briefly discussed in the next subsection. The m ej,min invariant mass spectrum is also scrutinized in the CMS analysis. So far, no sharp peak is observed unlike the expectation from leptoquark decays. As compared to our previous two models, the LQD 113+131 does not involve top quarks and would also predict the same sharp peak in the invariant mass as leptoquark model does. See Fig. 5 for the comparison of the model prediction and data -no clear resonance-like structure is seen in data, but the model prediction is not significantly different from data yet. Our interpretation of the sbottom LSP in the LQD 113+131 model as a leptoquark of 650 GeV responsible for the mild CMS excesses requires the corresponding couplings, λ 113 and λ 131 , to dominate over other sbottom LSP RPV couplings if any. As discussed in Eq. (21,22), these couplings can take the values of λ 113 ∼ λ 131 ∼ 10 −3 to produce (mainly) the (11) component of the observed neutrino mass matrix. Then, the other components can come from smaller bilinear RPV couplings ξ i c β ∼ 10 −6 and/or trilinear couplings, e.g., λ i33 ∼ 10 −4 to produce m tree ν,ij ∝ ξ i ξ j c 2 β and/or m loop ν,ij ∝ λ i33 λ j33 . In this scenario, the sbottom LSP can have additional but suppressed decay modes in the µ and τ channels which may provide a test of the model. Of course, the neutrino mass components can come mainly from the LLE couplings, e.g., m loop ν,ij ∝ λ i33 λ j33 , which has no impact on the sbottom LSP phenomenology. B. Electroweak Precision Data and Stop Masses The mostly left-handed sbottom solution obtained in the previous subsection may imply that other stops (and/or sbottoms) are also light; otherwise, the model is inconsistent with the electroweak precision data(EWPD). 
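The relation between the fitted leptonic branching fraction and the sbottom mixing angle used in the previous subsection is simple enough to verify numerically. The snippet below assumes, as in the text, λ 113 = λ 131 , so that β = sin²θ_b /(1 + sin²θ_b):

def beta_from_s2(s2_thetab):
    # Leptonic branching fraction for lambda_113 = lambda_131 (text assumption).
    return s2_thetab / (1.0 + s2_thetab)

def s2_from_beta(beta):
    # Inverse relation: sin^2(theta_b) required to realize a given beta.
    return beta / (1.0 - beta)

beta_best = 0.075                    # CMS best-fit value quoted above
s2_best = s2_from_beta(beta_best)    # ~0.081, i.e. a mostly left-handed sbottom LSP
print(round(s2_best, 3))             # 0.081
assert abs(beta_from_s2(s2_best) - beta_best) < 1e-12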
The possible other light particles can provide additional collider constraints on the model. Indeed, it has been shown that the EWPD can give important constraints on the stop masses and mixing angles in combination with the RPC searches of sbottoms [23]. The deviation from the custodial symmetry in the SM is bounded to [24] ( The sbottom and stop contribution to the ρ parameter [25] is where F 0 is defined by From the mass terms for stops and sbottoms, we can infer the following relation between physical squark masses and mixing angles, In Fig. 6, we show bounds on the masses of other sbottoms and stops by assuming the best-fit parameters, mb 1 = 650 GeV and sin 2 θb = 0.081, chosen in the previous subsection. Although the EWPD bound depends on various other parameters including stop mixing angle, the lighter stop mass is bounded up to about 740 GeV and the stop mass splitting is bounded up to about 190 GeV for a maximal stop mixing. In particular, when the collider limit on the heavier sbottom mass increases, the lighter stop mass and the stop mass splitting tend to get larger so the allowed parameter space in the stop sector is reduced. The 125 GeV Higgs mass would require stop masses of 500 − 800 GeV for a maximal stop mixing or stop masses above 3 TeV for a zero stop mixing [26]. Thus, in the case of a small stop mixing, the Higgs mass condition would be incompatible with EWPD. On the other hand, for a maximal stop mixing, the stop masses required for the Higgs mass can constrain the parameter space further. When there is a new dynamics for enhancing the Higgs mass such as a singlet chiral superfield, we may take the EWPD in combination with sbottom mass limit to be a robust bound on stop masses. V. SUMMARY AND CONCLUSION Through LH and LQD RPV couplings, the third-generation squark LSP can decay to leptons and jets. Jet+MET final states are constrained by conventional RPC SUSY searches, and multilepton(+jets)+MET final states are constrained from leptoquark searches as well as multilepton RPV searches. We found that the sbottom and the stop LSP decaying to e or µ are similarly well constrained up to about 0.66 ∼ 1 TeV depending on leptonic branching fractions. When the sbottom decays to a top quark and an electron as in the LH i and LQD i33 models, the bounds are slightly weaker as each top decay product is softer and not all is used in the analyses. The resulting characteristically different m ej invariant mass spectra can distinguish the models. More dedicated search for this case can be pursued by implementing b-taggings and/or top reconstructions. The bounds on µ final states are somewhat stronger than those on e final states so that a wider region of parameter space above 660 GeV is excluded for the LH 2 and LQD 233 models. Lastly, we proposed the LQD 113+131 model with sbottom LSPs as a good fit to the recently observed mild leptoquark excesses and discussed its possible implications on the masses of other stops and sbottoms in view of the EWPD and the 125 GeV Higgs mass. where (ν i ) and (χ 0 j ) represent three neutrinos (ν e , ν µ , ν τ ) and four neutralinos (B,W 3 ,H 0 d ,H 0 u ) in the flavor basis, respectively. The rotation elements θ N ij are given by Here s W = sin θ W and c W = cos θ W with the weak mixing angle θ W . (ii) Charged-lepton-chargino diagonalization: where e i and e c i denote the left-handed charged leptons and anti-leptons, (χ − j ) = (W − ,H − ) and (χ + j ) = (W + ,H + ). 
The rotation elements θ L,R ij are given by The ttee final states can involve more than three leptons or same-sign dileptons and b-jets, which are often clean. We find that the multilepton (N ≥ 3) RPV LLE search [21] with various binned discovery cuts is most relevant to us. We simulate all the discovery cuts with 300 < S T < 1500 GeV and use the most stringent result to obtain bounds. The strongest bound is usually from discovery cuts with ≥ 1b and S T ≳ 1000 GeV requirements. Similar searches of same-sign dileptons plus b-jets plus multijets [33], four-lepton [34] and other ≥ 3ℓ + b-jet searches in, e.g., Refs. [35] are less optimized for our benchmark models of about 700 GeV squarks.
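The definition of the loop function F 0 entering the stop/sbottom contribution to the ρ parameter in Sec. IV B did not survive in the text above. In the standard one-loop treatment (e.g., Drees and Hagiwara) it is F_0(x, y) = x + y − 2xy/(x − y) · ln(x/y), which vanishes for degenerate masses. The sketch below follows that common convention; the exact expression and sign conventions used in this paper may differ, so it is illustrative only.

import math

G_F = 1.1663787e-5  # Fermi constant [GeV^-2]

def F0(x, y):
    # F0(x, y) = x + y - 2*x*y/(x - y)*ln(x/y); F0(x, x) = 0 and F0(x, 0) = x.
    if abs(x - y) < 1e-9 * max(x, y):
        return 0.0
    return x + y - 2.0 * x * y / (x - y) * math.log(x / y)

def delta_rho(mst1, mst2, msb1, msb2, s2t, s2b):
    """Standard one-loop stop/sbottom contribution to the rho parameter.
    Masses in GeV; s2t, s2b are sin^2 of the stop and sbottom mixing angles."""
    ct2, st2 = 1.0 - s2t, s2t
    cb2, sb2 = 1.0 - s2b, s2b
    t1, t2, b1, b2 = mst1 ** 2, mst2 ** 2, msb1 ** 2, msb2 ** 2
    val = (-st2 * ct2 * F0(t1, t2) - sb2 * cb2 * F0(b1, b2)
           + ct2 * cb2 * F0(t1, b1) + ct2 * sb2 * F0(t1, b2)
           + st2 * cb2 * F0(t2, b1) + st2 * sb2 * F0(t2, b2))
    return 3.0 * G_F / (8.0 * math.sqrt(2.0) * math.pi ** 2) * val

# Illustrative spectrum: a 650 GeV, mostly left-handed sbottom (sin^2 theta_b = 0.081)
# with example stop masses and maximal stop mixing; compare against the EWPD window.
print(delta_rho(700.0, 900.0, 650.0, 1000.0, 0.5, 0.081))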
2014-09-02T11:28:08.000Z
2014-08-19T00:00:00.000
{ "year": 2014, "sha1": "f65f9902aff9430a29e8dc1f2bac3e543679b50b", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1408.4508", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f65f9902aff9430a29e8dc1f2bac3e543679b50b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
232337316
pes2o/s2orc
v3-fos-license
Fracture Risks in Patients Treated With Different Oral Anticoagulants: A Systematic Review and Meta‐Analysis Background Evidence on the differences in fracture risk associated with non‐vitamin K antagonist oral anticoagulants (NOACs) and warfarin is inconsistent and inconclusive. We conducted a systematic review and meta‐analysis to assess the fracture risk associated with NOACs and warfarin. Methods and Results We searched PubMed, Embase, Cochrane Library, Scopus, Web of Science, and ClinicalTrials.gov from inception until May 19, 2020. We included studies presenting measurements (regardless of primary/secondary/tertiary/safety outcomes) for any fracture in both NOAC and warfarin users. Two or more reviewers independently screened relevant articles, extracted data, and performed quality assessments. Data were retrieved to synthesize the pooled relative risk (RR) of fractures associated with NOACs versus warfarin. Random‐effects models were used for data synthesis. We included 29 studies (5 cohort studies and 24 randomized controlled trials) with 388 209 patients. Patients treated with NOACs had lower risks of fracture than those treated with warfarin (pooled RR, 0.84; 95% CI, 0.77–0.91; P<0.001) with low heterogeneity (I²=38.9%). NOACs were also associated with significantly lower risks of hip fracture than warfarin (pooled RR, 0.89; 95% CI, 0.81–0.98; P=0.023). A nonsignificant trend of lower vertebral fracture risk in NOAC users was also observed (pooled RR, 0.74; 95% CI, 0.54–1.01; P=0.061). Subgroup analyses for individual NOACs demonstrated that dabigatran, rivaroxaban, and apixaban were significantly associated with lower fracture risks. Furthermore, the data synthesis results from randomized controlled trials and real‐world cohort studies were quite consistent, indicating the robustness of our findings. Conclusions Compared with warfarin, NOACs are associated with lower risks of bone fracture. Oral anticoagulants (OACs) are commonly prescribed to prevent or treat thromboembolic events in patients with atrial fibrillation or venous thromboembolism. 1,2 Warfarin, a vitamin K antagonist, is a traditional OAC and has been a primary long-term treatment option for patients with atrial fibrillation or venous thromboembolism for decades. Recently, non-vitamin K antagonist oral anticoagulants (NOACs) have been approved as alternatives to vitamin K antagonists and have demonstrated similar or superior efficacy and safety compared with warfarin. 3,4 Because aging is one of the strongest risk factors for both atrial fibrillation and venous thromboembolism, 5,6 the prescription of OACs has gradually increased in the aging population worldwide. Some previous studies have suggested that warfarin may increase fracture risks via its vitamin K antagonizing effect, which impairs bone mineralization; in contrast, NOACs are independent of mechanisms associated with vitamin K antagonists. However, previous studies have provided conflicting evidence on the association between warfarin and fracture risks. [7][8][9][10] A previous cohort study published in 2017 was the first to compare the fracture risk associated with an NOAC (dabigatran) and warfarin, reporting a significantly lower fracture risk in dabigatran users. 11 Another cohort study in 2017 observed no significant difference in fracture risks among patients taking NOACs (both dabigatran and factor Xa inhibitors) and warfarin.
12 In 2018, a meta-analysis based on 12 randomized controlled trials (RCTs) showed that patients treated with NOACs had significantly lower risks of overall fracture, but not hip or vertebral fracture, than patients treated with warfarin. 13 However, all trials included in that meta-analysis considered fracture data as adverse events, and none of them were specifically designed to assess fracture risks. Moreover, the follow-up duration was ≤12 months in over half of these trials, which might yield insufficient statistical power for evaluating fracture events and the relatively low risks of fracture reported by these trials. Recently, several populationbased cohort studies with larger sample sizes, longer follow-up durations, and greater statistical powers were conducted to evaluate the association between different OACs and fracture risks. [14][15][16][17] In addition, there is a growing number of RCTs comparing NOACs with warfarin. The relevant evidence regarding the fracture risks associated with OACs continues to accumulate. Because osteoporosis and bone fractures pose major threats to the elderly and OACs are mainly prescribed to older adults who have multiple risk factors for fractures, 17,18 it is critically essential to determine whether a difference in fracture risks exists between NOACs and warfarin. Therefore, we conducted a systematic review and meta-analysis to compare the risk of bone fractures between patients treated with NOACs and those treated with warfarin. We searched for both clinical trials and observational studies to comprehensively evaluate the current evidence on this issue and compared the results from RCTs to realworld evidence. METHODS The authors declare that all supporting data are available within the article and its online supplementary files. Data Sources and Literature Search This systematic review and meta-analysis was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. 19 We searched PubMed, Embase, Cochrane Library, Scopus, Web of Science, and ClinicalTrials.gov for studies from their inception until May 19, 2020. We applied 2 search strategies. In brief, we initially searched articles using combinations of the following terms: "NOAC," "dabigatran," "rivaroxaban," "apixaban," "edoxaban," "warfarin," "vitamin K antagonist," and "fracture" (search strategy 1). This search was conducted to identify articles that evaluated the fracture risk in patients treated with NOACs versus those treated with warfarin, regardless of the study design. However, this search could not identify RCTs, which may have reported fracture data as a part of adverse events only, available on websites for clinical trial registration (eg, ClinicalTrials.gov). Therefore, we conducted another search that used the following terms: "NOAC," "dabigatran," "rivaroxaban," "apixaban," "edoxaban," and "clinical trial," to identify all potential eligible trials related to NOACs (search strategy 2). These terms represent a simplified search concept; the full details and the combinations of search terms used in literature search strategies 1 and 2 are described in Data S1. Additionally, we examined the reference lists from relevant review articles and meta-analyses for additional articles to be included. The protocol registration application for this study was performed (PROSPERO CLINICAL PERSPECTIVE What Is New? 
• This systematic review and meta-analysis gathered data from 388 209 participants in 29 studies and showed that patients taking non-vitamin K antagonist oral anticoagulants had an overall 16% lower risk of developing fractures compared with those taking warfarin. • Subgroup analyses for individual non-vitamin K antagonist oral anticoagulants demonstrated that dabigatran, rivaroxaban, and apixaban were significantly associated with lower fracture risks. • The evidence from real-world cohort studies and randomized controlled trials is quite consistent, indicating the robustness of our findings. What Are the Clinical Implications? • This meta-analysis provided up-to-date evidence showing that non-vitamin K antagonist oral anticoagulants may be the preferred alternatives to warfarin for lowering fracture risks in patients requiring oral anticoagulant therapy. Nonstandard Abbreviations and Acronyms NOAC non-vitamin K antagonist oral anticoagulant OAC oral anticoagulant [International Prospective Register of Systematic Reviews] registration number: CRD42020182607). Institutional ethical approval was not required because this was a meta-analysis of data from published studies only. Study Selection and Inclusion Criteria We included studies if they (1) presented outcome measurements for any fracture in both NOAC and warfarin users, regardless of whether it was reported as primary/secondary/tertiary outcomes or referred to as an adverse/safety event in the articles or supplementary data or on online sites (such as the ClinicalTrials.gov website); and (2) compared and reported the relative risk (RR) of fracture between NOACs and warfarin, or it could be derived from the data in the study. No specific restrictions were set for study population, treatment indication, or treatment/follow-up duration. Clinical trials, cohort studies, and case-control studies that provided sufficient data were eligible. As observational studies (cohort and case-control) should report properly adjusted estimates by considering potential confounders (age and sex, at minimum), studies in which only unadjusted estimates were available were excluded. We excluded review articles, case reports, editorials, and letters to the editor that did not report original findings, and studies conducted in a laboratory or on an- independently screened all titles and abstracts and evaluated the relevant articles. If a study was deemed eligible by any reviewer, that study was included for further full-text review. Two reviewers (H.-K.H. and C.-C.P.) then independently assessed the full texts of the studies, and any disagreement was resolved by consensus among members of the study team. We used Covidence systematic review software (Veritas Health Innovation, Melbourne, Australia) to manage our systematic review process, coordinating each author's work and enabling collaboration among the entire review team. Study Outcomes The primary outcome was any fracture event. The secondary outcomes were events of hip and vertebral fractures (the most common and quintessential osteoporotic fractures). Comparisons were made between all NOACs and warfarin, as well as between individual NOACs and warfarin. Data Extraction and Quality Assessment Two authors (H.-K.H. and C.-C.P.) 
independently extracted data using a prespecified standardized form, which included author names, trial name, publication year, study design, country, treatment indication, NOAC type, follow-up duration, mean age, sex, sample size, reported fracture sites, fracture risk estimates, and covariates adjusted in observational studies. A third author (S.-M.L.) then independently examined the extracted data and resolved any discrepancies. For observational studies, adjusted hazard ratios (HRs) or risk ratios along with standard errors were extracted. When a study reported estimates from covariate-adjusted models and propensity score matching/weighting models, we considered the latter less biased and preferred it for inclusion in the meta-analysis. 15 Statistical Analysis Risk ratios for fracture reported by RCTs and the adjusted HRs reported by observational studies were pooled together to calculate RRs in our meta-analysis. Between-study heterogeneity was evaluated using the I² and τ² statistics. 22,23 The heterogeneity was considered low, moderate, and high for I² <50%, 50% to 75%, and >75%, respectively. The τ² was interpreted using the same units as the pooled effect (logarithm of RR). Considering the between-study heterogeneity, we calculated the pooled RR and respective 95% CIs using the DerSimonian and Laird random effects model. 24 We conducted several predefined subgroup analyses to determine if the pooled estimates were influenced by different study-level factors. The subgroup analyses were conducted according to NOAC type (dabigatran, rivaroxaban, apixaban, or edoxaban), study design (RCT or cohort study), indication (atrial fibrillation or venous thromboembolism), mean follow-up period, mean age, sample size, and the geographic location of participants (North America, Europe, Asia, or multiple continents). We made efforts to contact the corresponding authors when additional information was required for subgroup analyses, but we did not receive any replies from the authors of those studies. 11,12,17 (Note to the study-selection flow diagram: in search strategy 2, we searched only for RCTs of NOACs; if the titles/abstracts were enough to judge that the publications were not RCTs of NOACs or were on an irrelevant topic, they were excluded at the title/abstract screening stage. NOAC indicates non-vitamin K antagonist oral anticoagulant; RCT, randomized controlled trial.) Meta-regressions were further performed if sufficient studies (n≥10) were available. Egger's regression test and Begg's adjusted rank correlation test were conducted to determine publication bias. 25,26 If there were ≥10 studies included in the meta-analysis, we conducted a funnel plot analysis to assess publication bias or small study bias. A sensitivity analysis was conducted to evaluate the influence of each study on the overall pooled estimate (by omitting each study individually). All statistical tests were 2-sided, and results with P<0.05 were considered statistically significant. All statistical analyses were conducted using Stata version 15.1 (Stata Corporation, College Station, TX, USA). Search Results With search strategy 1, a total of 1742 studies were identified. After excluding duplicates and screening the titles and abstracts, 64 potentially relevant studies were retrieved for full-text review. Three studies, which were conducted by Lutsey et al, 15 Norby et al, 27 and Bengtson et al, 28 primary outcome and thus reported more comprehensive results.
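For reference, the random-effects synthesis described in the Statistical Analysis subsection above can be sketched in a few lines. The snippet below is a generic DerSimonian–Laird pooling of log relative risks with toy inputs; it is illustrative only, as the actual analyses were performed in Stata.

import math

def dersimonian_laird(log_rr, se):
    """Pool log relative risks with the DerSimonian-Laird random-effects model.
    Returns (pooled RR, 95% CI low, 95% CI high, I^2 fraction, tau^2)."""
    k = len(log_rr)
    w = [1.0 / s ** 2 for s in se]                        # fixed-effect weights
    y_fe = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, log_rr))
    df = k - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0         # heterogeneity fraction
    w_re = [1.0 / (s ** 2 + tau2) for s in se]            # random-effects weights
    y_re = sum(wi * yi for wi, yi in zip(w_re, log_rr)) / sum(w_re)
    se_re = math.sqrt(1.0 / sum(w_re))
    return (math.exp(y_re), math.exp(y_re - 1.96 * se_re),
            math.exp(y_re + 1.96 * se_re), i2, tau2)

# Toy example with made-up study estimates (RRs and standard errors of log RR):
rrs = [0.80, 0.90, 0.75, 1.05]
ses = [0.10, 0.08, 0.15, 0.20]
print(dersimonian_laird([math.log(r) for r in rrs], ses))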
15 In addition, Lau et al published 2 studies 11,17 using the same database (Clinical Data Analysis and Reporting System of the Hong Kong Hospital Authority); we included the data of only the latter 17 in the meta-analysis owing to its more comprehensive data and longer follow-up period. We excluded a cross-sectional study reporting the clinical signs of cranial fracture after a traumatic brain injury in patients using different anticoagulants. 29 With search strategy 2, a total of 17 400 relevant studies were identified. After excluding duplicates and screening the titles and abstracts, 2428 studies were retrieved for review of the full-text or trial registration. The search strategies for article selection are shown in Figure 1A and 1B. After careful review, 29 studies met the eligibility criteria, including 5 cohort studies 12,14-17 and 24 RCTs 30-53 from search strategies 1 and 2, respectively. For cohort studies, we obtained the fracture data from the full-text articles. For RCTs, fracture data were obtained from the ClinicalTrial.gov website, because fracture was reported as an adverse event only in the included trials; no trial reported fracture events as a primary or secondary outcome. In total, 388 209 participants were included in this meta-analysis. Except for RCTs, all included cohort studies had a large-scale population-based design. Additional information, such as age, sex, sample size, treatment indication, follow-up duration, NOAC type, and reported fracture sites, are summarized in Tables S1 and S2 (for cohort studies and RCTs, respectively). The results of the quality assessments are summarized in Table S3 (for cohort studies) and Table S4 (for clinical trials). The adjusted covariates for each cohort study are shown in Table S5. Risk of Any Fracture Associated with All NOACs Versus Warfarin All 29 studies reported at least 1 fracture site and were included in the analyses for any fracture. Patients treated with NOACs experienced a lower risk of any fracture than those treated with warfarin (pooled RR, 0.84; 95% CI, 0.77-0.91; P<0.001) with low heterogeneity (I 2 =38.9%) ( Figure 2; Table 1). No evidence of publication bias was detected according to Egger's test (P=0.149) and Begg's test (P=0.955). The associated funnel plot is shown in Figure S1. Subgroup Analyses and Sensitivity Analyses for Any Fracture Event All subgroup analyses consistently revealed lower risks of any fracture in patients taking NOACs than in those taking warfarin, irrespective of study design, treatment indications, mean follow-up period, mean age, sample size, and geographic location. Meta-regressions suggested no significant differences in the protective effects of NOACs between the subgroups ( Table 2). The sensitivity analysis after omitting each study, in turn, demonstrated a robust pooled RR with only negligible differences ( Figure S2). We summarized the detailed results for each design subgroup, including the effect of each NOAC compared with warfarin. Overall, the evidence from realworld cohort studies and those from adverse events reported by RCTs are comparable ( Table 3). The forest Risk of Hip Fractures Seventeen studies (3 cohort studies and 14 trials) reported hip fracture events for data synthesis. Overall, patients on NOACs had a lower risk of hip fracture than those on warfarin (pooled RR, 0.89; 95% CI, 0.81-0.98; P=0.023) with minimal heterogeneity (I 2 =0%) ( Figure 3; Table 1). 
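The Egger regression test used for the publication-bias checks reported above can likewise be sketched generically: the standardized effect is regressed on precision, and an intercept that differs from zero signals funnel-plot asymmetry. Again, this is a simplified illustration rather than the Stata routine used in this study.

def eggers_test(log_rr, se):
    """Egger's regression asymmetry test: regress z_i = y_i/se_i on 1/se_i.
    Returns (intercept, its standard error, t statistic); the two-sided
    p-value follows a t distribution with n - 2 degrees of freedom."""
    z = [y / s for y, s in zip(log_rr, se)]
    x = [1.0 / s for s in se]
    n = len(z)
    mx, mz = sum(x) / n, sum(z) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z)) / sxx
    intercept = mz - slope * mx
    resid = [zi - (intercept + slope * xi) for xi, zi in zip(x, z)]
    s2 = sum(r ** 2 for r in resid) / (n - 2)
    se_intercept = (s2 * (1.0 / n + mx ** 2 / sxx)) ** 0.5
    return intercept, se_intercept, intercept / se_intercept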
Egger's test (P=0.997) and Begg's test (P=0.592) demonstrated no evidence of publication bias, which is supported by the funnel plot shown in Figure S5. The subgroup analyses of individual NOACs demonstrated that the risk of hip fracture was significantly lower in patients taking apixaban (pooled RR, 0.68; 95% CI, 0.52-0.89; P=0.006) but not in patients taking other NOACs ( Table 1). Risk of Vertebral Fractures Eleven studies (1 cohort study and 10 trials) reported vertebral fracture events for data synthesis. A lower, but not significant, risk of vertebral fracture was observed in NOAC users (pooled RR, 0.74; 95% CI, 0.54-1.01; P=0.061) with low heterogeneity (I 2 =38.8%) ( Figure 4; Table 1). No evidence of publication bias was found according to Egger's test (P=0.923) and Begg's test (P=0.640), and this was supported by the funnel plot shown in Figure S6. DISCUSSION In this large-scale meta-analysis involving 388 209 participants from 29 studies (with follow-up time ranging from 3 to 36 months), we found that patients treated with NOACs had an overall 16% and 11% lower relative risk of any fracture and hip fracture, respectively, than those treated with warfarin. The subgroup analyses showed that dabigatran, rivaroxaban, and apixaban were significantly associated with a lower fracture risk. The results from RCTs and real-world population-based cohort studies were quite consistent, indicating the robustness of our findings. We undertook separate meta-analyses for each type of NOAC as well as for different fractures sites, providing a comprehensive evaluation to address the knowledge gap. Comparisons with Previous Studies One previous meta-analysis, synthesizing data from 12 RCTs published before 2017, evaluated the differences in the risk of fracture associated with NOAC and warfarin. 13 All studies included in that meta-analysis reported fracture data as adverse events only online; none of them were specifically designed to assess fracture risks. That previous meta-analysis observed that patients taking NOACs showed a lower overall fracture risk than those taking warfarin (RR, 0.82; 95% CI, 0. Our results also showed that the risk estimates from real-world population-based cohort studies and RCTs (Table 3; Figures S3 and S4) were very similar, with a low between-study design heterogeneity. This indicates that the results of our meta-analyses were quite robust, and real-world evidence from cohort studies strengthens the evidence from RCTs on the protective effects of NOACs. Possible Underlying Mechanisms Although precise underlying mechanisms are still unclear, some factors might explain why NOACs are associated with a lower fracture risk than warfarin. Warfarin, a vitamin K antagonist, may interfere with bone formation. 55 Warfarin antagonizes vitamin Kdependent processes and impairs the γ-carboxylation of osteocalcin and other bone matrix proteins, which are important in bone mineralization and formation. 9,55 NOACs are independent of the mechanisms related to antagonizing vitamin K and do not interfere with bone metabolism. 9 Previous animal studies revealed that NOACs have positive effects on bone biology, such as increased bone volume, decreased trabecular separation, and reduced bone turnover rate, increased bone mineral density of the fracture zone, and improved fracture healing, compared with those in the warfarin-treated or control groups. [56][57][58] Furthermore, possible positive effects of NOACs on bone health and the prevention of falls have been proposed recently. 
59 Additional studies are needed to evaluate the underlying mechanisms of NOACs on bone health and fracture risks. Clinical Implications The lower risk of fractures in patients taking NOACs is an important finding. Osteoporotic fractures, especially hip and vertebral fractures, that occur more frequently in elderly people, cause significant morbidity, mortality, and socioeconomic burden. Previous studies have reported that atrial fibrillation, the most common indication for OAC treatment, is an important risk factor for osteoporotic fractures. 60,61 Many risk factors for osteoporotic fractures, such as old age and a history of diabetes mellitus and cardiovascular diseases, coexist in patients with atrial fibrillation. 11,62 Venous thromboembolism and fractures also share similar important risk factors, such as old age, immobilization, smoking, previous fracture, and malignancy. [63][64][65] As patients taking OACs are often at a higher risk of fractures, evidence regarding the differences in fracture risks associated with the use of different OACs is clinically useful. The use of anticoagulants also poses a challenge to anticoagulation Figure 3. Forest plot of the relative risk of hip fracture associated with NOACs compared with warfarin. [14][15][16][31][32][33]36,38,39,[41][42][43][44][45]47,48,50 NOAC indicates non-vitamin K antagonist oral anticoagulant; and RR, relative risk. during the surgical treatment of fractures. 66,67 This meta-analysis provides updated clinical evidence for the association between NOACs and lower fracture risk. Therefore, we suggest that when prescribing OAC treatment, the risk of fractures to patients should be carefully evaluated, and NOACs may be preferred over warfarin to lower fracture risks if both OAC types could be indicated. However, treatment decisions should consider all risks and benefits of NOACs versus warfarin for an individual patient. Study Limitations In this systematic review and meta-analysis, we provided comprehensive and updated evidence on the protective effects of NOACs on fracture compared with warfarin. However, there are several limitations worth addressing. First, similar to the meta-analysis mentioned previously, 13 the data of fracture events from included RCTs were reported as only one of the adverse events in ClinicalTrials.gov. None of the included trials were explicitly designed to assess fracture risks; therefore, detailed assessment methods for identifying fracture events remain unclear. In addition, the follow-up duration was relatively short (≤12 months) in over half of the included trials; the proportion of fracture events to the number of patients was also low. Furthermore, we calculated the number of any fracture events by summing up the events of each fracture site, which may not be equal to the exact number of patients with fractures because a patient might have experienced more than 1 fracture. However, this method of calculating outcome events was not different between the NOAC and warfarin groups; thus, the bias in RRs we obtained from these RCTs was likely minimal. Second, despite the considerably larger sample size of realworld observational studies than that of RCTs, the real-world observational data may be biased owing to unknown or unmeasured confounders. The claimbased retrospective cohort studies may have a problem in accurately capturing diseases with codes, as medical information/histories are not adjudicated or captured systematically. 
The mean follow-up time of the included real-world cohort studies was also short (range from<12 to 29.2 months; Table S1). However, in our meta-analysis, the results from studies with 2 different study designs were comparable, indicating that such a bias was likely not of great concern. Third, the results from some of our subgroup analyses were not statistically significant (eg, the analysis of edoxaban), although their point estimates were similar to those of statistical significance (Table 1). This may be because of insufficient statistical power in the subgroup of different NOACs, especially in the analyses of specific fracture sites (hip and vertebral). It should also be noted that none of the subgroups of individual NOACs or fracture sites reached statistical significance in the analyses focusing on only RCTs ( Table 3). The statistical power of these subgroup analyses was low, because these RCTs were not designed for evaluating fracture events and tended to have a relatively short follow-up duration. More studies, especially RCTs and high-quality prospective studies with longer follow-up, are needed to evaluate the effect of individual NOACs on each specific fracture site. CONCLUSIONS This systematic review and meta-analysis gathered data of 388 209 participants involved in 29 studies and revealed that patients taking NOACs had a 16% lower risk of developing fractures than those taking warfarin. The subgroup analyses demonstrated a similar effect on lower fracture risk for individual NOACs (dabigatran, rivaroxaban, and apixaban) than for warfarin. Based on current evidence, NOACs may be preferred over warfarin to lower fracture risks in patients with indications for OAC therapy. Future studies are necessary to investigate the mechanisms underlying the associations between different OACs and bone health. AND ("warfarin" OR "vitamin k" OR "Coumadin" OR "acenocoumarol" OR "acenocoumarol"[mh] OR "phenprocoumon"[mh] OR "phenprocoumon" OR "Vitamin K/antagonists and inhibitors"[Mesh] OR "jantoven") AND ("fracture" OR "fractures" OR "fractures, bone"[mh])
2021-03-25T06:16:39.318Z
2021-03-24T00:00:00.000
{ "year": 2021, "sha1": "5d71bf08d6e31eb03f6310acb55683fb945e9fc8", "oa_license": "CCBYNC", "oa_url": "https://www.ahajournals.org/doi/pdf/10.1161/JAHA.120.019618", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8263b32c0aecb0d4f3a7be55238c6d8ba02f346a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
252088998
pes2o/s2orc
v3-fos-license
Fast geometric trim fitting using partial incremental sorting and accumulation We present an algorithmic contribution to improve the efficiency of robust trim-fitting in outlier-affected geometric regression problems. The method heavily relies on the quick sort algorithm, and we present two important insights. First, partial sorting is sufficient for the incremental calculation of the x-th percentile value. Second, the normal equations in linear fitting problems may be updated incrementally by logging swap operations across the x-th percentile boundary during sorting. Besides linear fitting problems, we demonstrate how the technique can be additionally applied to closed-form, non-linear energy minimization problems, thus enabling efficient trim fitting under geometrically optimal objectives. We apply our method to two distinct camera resectioning algorithms, and demonstrate highly efficient and reliable geometric trim fitting. Introduction Over the past decade, deep convolutional neural networks (CNNs) have led to significant advances in automated visual perception capabilities. However, it remains true that the performance of CNNs in classification problems still dominates regression performance despite the availability of large amounts of training data. Accurate model-based geometric fitting therefore remains important, especially if the dimensionality of the problem is low and thus tractable via commonly available solvers. The present paper addresses a very classical problem in geometric fitting, which is outlier contamination in the data. Solutions to robust geometric fitting may be grouped into two sub-categories. The first one is given by bottom-up approaches such as the popular RANSAC [9] algorithm. The second class of algorithms consists of top-down methods which try to directly fit the model to all the data by using a robust cost function [1] or global optimization [7]. However, robust kernel based methods depend on a sufficiently good initial guess, and global optimization methods are often computationally demanding and thus only work for low dimensions. (Figure caption: Only the part that contains the trimming boundary needs to be further processed, and only swaps across the trimming boundary are important for further processing. Bottom left: Histogram over times for sorting an array of doubles. Incremental sorting is applied to a pre-sorted array for which the values are disturbed. Bottom right: Incremental summation of elements smaller than the median after value perturbation and incremental sorting. The plot shows the fraction of operations required with respect to naive summation.) An interesting top-down alternative is given by robust nullspace estimation methods such as dual principal component pursuit proposed by Tsakiris et al. [29]. While the method shows advantages over RANSAC in situations in which the subspace dimension is large, it remains computationally demanding and often cannot be considered as a viable alternative in real-time applications. The present work picks up the work by Ferraz et al. [8] which introduces a very fast, trimming-based technique for robust nullspace fitting. Though the method is similar to many others in that it does not guarantee optimality of the identified outlier subset, we demonstrate that it works surprisingly well in the real-world application of camera resectioning. Our work makes two important contributions: • We introduce two important algorithmic modifications to the commonly applied sorting mechanisms in trimming approaches.
We demonstrate that partial, incremental sorting is enough. We furthermore demonstrate how the swapping nature of common sorting algorithms can be immediately reused to improve the efficiency of the iterative solution of the null-space. • Robust linear null-space fitting typically employs an algebraic error. We extend the idea of robust trim fitting from linear null-space fitting to geometrically optimal, non-linear closed-form solvers. In the practical part of our work, we show an application of the idea to robust camera resectioning, demonstrating outstanding computational efficiency and success rate even in challenging, high-outlier scenarios.
Related work
Geometric fitting problems often appear in computer vision and aim at solving the absolute camera pose resectioning problem [14,16,19,20,33], the relative camera pose problem [11], homography estimation for pure rotation and planar structure [12,26], or 3D point set registration [2,15,30]. For absolute and relative camera pose problems, there also exist minimal [17,23] and directional-correspondence-based [10,18] solvers. The body of literature on geometric problems is large and the algorithms listed here are only some of the more established solvers for a few of the more fundamental problems. The present work addresses the solution of geometric fitting problems in the presence of outlier samples. A popular way to deal with outliers consists of moving from a least-squares estimator to the more general class of M-estimators [13]. As originally demonstrated by Weiszfeld [31], increased robustness against outliers can be obtained by exchanging the common least-squares L2-norm objective against the Lq-norm objective (with q smaller than 2)^1. Aftab and Hartley [1] furthermore prove that Iteratively Reweighted Least Squares (IRLS) for properly chosen weights converges to the Lq Weiszfeld algorithm. Furthermore, they prove that IRLS can be extended to an entire family of robust M-estimators that employ robust norms and kernels (e.g. Huber norm, Pseudo-Huber norm) [12]. While the application of robust kernels and IRLS is an established technique, the method employs local gradients and depends on the availability of a sufficiently good initial guess.
^1 The Lq Weiszfeld algorithm seeks the Lq mean, which minimizes the sum of the q-th powers of the residual distances for the given samples.
In order to achieve optimal identification of outliers, the community has proposed a number of globally optimal solutions to inlier cardinality maximization [3,6,7,21,25,27,32]. These methods often employ the L∞-norm and utilize the branch-and-bound algorithm. While certainly interesting from a theoretical standpoint, the branch-and-bound algorithm suffers from the curse of dimensionality and quickly becomes computationally demanding as the dimensionality of the problem increases. Solutions based on branch-and-bound are often computationally intractable in real-time applications. A more efficient and established technique for dealing with outliers in geometric fitting problems is given by the RANSAC [9] algorithm. It typically employs a solver that finds an initial hypothesis for the model parameters from a minimal, randomly sampled set of input samples. In an alternating second step, the scheme then aims at determining the amount of consensus between this hypothesis and all other sample points. The procedure is repeated until convergence.
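As a reference for this hypothesize-and-verify pattern, the following generic sketch (ours, written for illustration only; the model type, minimal solver, and residual function are placeholders and not taken from any of the cited works) shows the basic RANSAC loop:

```cpp
#include <cstddef>
#include <functional>
#include <random>
#include <vector>

// Generic RANSAC skeleton (illustrative only). The caller supplies a minimal
// solver that maps a minimal sample set to a model hypothesis, and a residual
// function used to count consensus. Model must be default-constructible.
template <typename Model, typename Sample>
Model ransac(const std::vector<Sample>& data,
             std::function<Model(const std::vector<Sample>&)> minimalSolver,
             std::function<double(const Model&, const Sample&)> residual,
             std::size_t minimalSetSize, double inlierThreshold, int maxIterations) {
  std::mt19937 rng(42);
  std::uniform_int_distribution<std::size_t> pick(0, data.size() - 1);
  Model best{};
  std::size_t bestInliers = 0;
  for (int it = 0; it < maxIterations; ++it) {
    // Draw a minimal, randomly sampled subset (with replacement, for brevity)
    // and generate a hypothesis from it.
    std::vector<Sample> subset;
    while (subset.size() < minimalSetSize) subset.push_back(data[pick(rng)]);
    Model hypothesis = minimalSolver(subset);
    // Determine the amount of consensus between this hypothesis and all samples.
    std::size_t inliers = 0;
    for (const auto& s : data)
      if (residual(hypothesis, s) < inlierThreshold) ++inliers;
    if (inliers > bestInliers) { bestInliers = inliers; best = hypothesis; }
  }
  return best;
}
```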
Several extensions to the algorithm have been proposed over the years (MLESAC [28], Preemptive RANSAC [24], PROSAC [4], GroupSAC [22]). While the RANSAC algorithm still counts as the standard solution to robust geometric fitting, the success of the algorithm is compromised for larger cardinalities of the minimal sample set required to establish a hypothesis. Recent times have seen the surge of competing, powerful top-down algorithms that may even come with convergence guarantees. Tsakiris and Vidal [29] present the DPCP algorithm for robust nullspace fitting, which works with a dual representation of the nullspace and aims at finding its orthogonal complement. The approach works particularly well in scenarios where the sub-space dimension is large compared to the ambient dimension, a situation in which RANSAC often fails. The success of the method was recently demonstrated by Ding et al. [5], who successfully apply it to homography estimation problems. In 2014, Ferraz et al. [8] proposed a highly efficient alternative to the IRLS algorithm. Rather than performing iterative reweighting of the samples, the algorithm performs trimming and iteratively solves for the nullspace using the n-th percentile of samples sorted by their residual distances. The algorithm works for linear problems, which is both an advantage and a disadvantage. The advantage is that no gradients are required and the method operates in closed form. The disadvantage is that it often implies the use of simplified algebraic cost functions. Our work makes two important contributions with respect to this technique. First, we introduce important insights that lead to a significant algorithmic speedup of this and in fact all trim-fitting approaches. Second, we demonstrate that the idea remains amenable to non-linear, geometrically optimal closed-form solvers.
Theory
We formulate our theory from an abstract perspective. Let y = f_θ(x) be a vectorial function f : R^n → R^m that depends on the parameter vector θ. Our approach aims at fitting problems which, in their most basic form, have an input given by a set of N noisy input correspondences S = {(x̃_i, ỹ_i)}, i = 1, ..., N. (1) The problem consists of identifying the optimal parameters θ* that bring the predictions f_θ(x̃_i) into best possible agreement with the measurements ỹ_i. The function f_θ is often a non-linear function and numerous algebraic linearizations of such non-linear functions have been introduced. A more general form of our objective is therefore given by minimizing the sum of residuals Σ_i r(x̃_i, ỹ_i, θ), where r(·) represents a residual function that vanishes for any parameter θ that brings x̃_i into ideal agreement with ỹ_i. One of the two following statements often holds: • r(·) is linear in θ and the entire objective therefore can be solved using linear least squares. However, the residual error does not correspond to a clearly defined, geometric distance, and it is non-trivial to make a statement about the optimality of the identified solution. • r(·) is scalar-valued and corresponds to a clearly defined geometric distance, but is non-linear in nature. The objective may therefore only be solved using non-linear least-squares solvers or, in the situation of a polynomial form, Gröbner basis solvers derived for the first-order optimality conditions. The above only outlines the most basic form of the problem, in which the correspondence set S is not affected by outliers (e.g. correspondences which are not following assumptions made by a Gaussian noise model).
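As a toy illustration of these two cases (added here for clarity, not part of the original formulation), consider fitting a 2D line n_x·x + n_y·y + c = 0: the algebraic residual is linear in the parameters, whereas the geometric point-to-line distance is scale invariant but non-linear in them.

```cpp
#include <cmath>

struct Line { double nx, ny, c; };  // line: nx*x + ny*y + c = 0

// Algebraic residual: linear in (nx, ny, c); can be minimized by linear least
// squares, but its value depends on the scaling of the parameter vector.
double algebraicResidual(const Line& l, double x, double y) {
  return l.nx * x + l.ny * y + l.c;
}

// Geometric residual: true point-to-line distance; scale invariant, but the
// normalization makes the objective non-linear in the parameters.
double geometricResidual(const Line& l, double x, double y) {
  return std::fabs(l.nx * x + l.ny * y + l.c) /
         std::sqrt(l.nx * l.nx + l.ny * l.ny);
}
```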
Let S_in ⊂ S be the subset of maximum cardinality for which parameters θ* exist such that r(x̃_i, ỹ_i, θ*) < τ for every correspondence in S_in. Outlier-robust fitting is therefore often formulated as a cardinality maximization problem over θ, maximizing Σ_i δ(r(x̃_i, ỹ_i, θ) < τ), where δ(·) is the indicator function and returns one if the internal condition is true, and zero otherwise. S_out = S \ S_in is defined as the set of outliers, and τ is a pre-defined inlier threshold. As mentioned in Section 2, many approaches to this problem have already been presented. In the following, we will introduce a very fast, robust trim fitting approach.
Trimming using partial incremental sorting
Similar to the REPPnP method [8], the core of our robust geometric fitting algorithm is given by a trimming strategy in which samples are ranked by how well they fit a hypothesis θ_k in terms of the residual error r(x̃_i, ỹ_i, θ_k). The x-th percentile of the data is then alternatingly used to calculate new model parameters θ_{k+1}. Formally, the algorithm consists of the alternating execution of two steps. In step one, we use the current hypothesis θ_k to obtain the sorted set of correspondences s_k, in which the correspondences are ordered by increasing residual error. The generation of s_k obviously requires the execution of a sorting algorithm. Step two then consists of finding new model parameters with the x-th percentile of lowest-residual correspondences, i.e. θ_{k+1} is obtained by minimizing the summed residuals over only the first (x/100)·N elements of s_k. Our first main contribution relies on the insight that, in case of using a fixed x-th percentile, only partial sorting of s is required. We use the quick sort divide-and-conquer algorithm, for which the main steps are as follows: • Pick one element in the set as pivot element. • Partition the remaining elements into two sub-sets such that any element in sub-set one is smaller than the pivot element, and any element in sub-set two is larger than the pivot element. The partition algorithm works by using two indices α and β that scan the array from the smallest to the largest element and vice-versa. Incrementing α is paused as soon as an element bigger than the pivot element is encountered. Decrementing β is paused as soon as an element smaller than the pivot element is encountered. The two elements are swapped, and the scanning continues. As soon as the indices cross, the partitioning is finished. The pivot element is placed at the boundary by another swapping operation. This concludes the partitioning with the desired property. • Recursively apply to both sub-sets. This is text-book knowledge, so further details are omitted here. The important insight is that the recursive application can be limited to only one of the sub-sets. Let us denote by p the final position of the pivot element in s_k after the first partitioning step is completed. The position of the pivot element segments s_k into a left part s_kl and a right part s_kr such that any element in s_kl is less than or equal to the pivot element, and any element in s_kr is bigger than the pivot element (note furthermore that every element in s_kl is smaller than any of the elements from s_kr). Three scenarios may occur: • p > (x/100)·N: the trimming boundary lies in the left part, so only s_kl needs further processing. • p < (x/100)·N: the trimming boundary lies in the right part, so only s_kr needs further processing. • p = (x/100)·N: in this case, the algorithm may be readily terminated, and the x-th percentile score is given by the pivot element itself. It is furthermore clear that, as the overall fitting algorithm approaches convergence, an increasing number of correspondences that have been ranked within the x-th percentile eventually remain within that percentile even after an update has been generated. If starting from the previous order, only a limited number of swapping operations will need to be executed.
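The following C++ sketch, written for this text and not taken from the authors' implementation, illustrates the idea: a Hoare-style partition is applied recursively only to the side that still contains the trimming boundary, and swaps that cross the boundary are logged (the names partialSort, swapAndLog, and SwapLog are placeholders).

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Log of swaps that move an element across the trimming boundary.
struct SwapLog { std::vector<std::pair<std::size_t, std::size_t>> crossBoundarySwaps; };

static void swapAndLog(std::vector<double>& r, std::vector<std::size_t>& idx,
                       std::size_t a, std::size_t b, std::size_t boundary, SwapLog& log) {
  std::swap(r[a], r[b]);
  std::swap(idx[a], idx[b]);
  // Only swaps that cross the trimming boundary matter for the incremental
  // update of the accumulator; swaps entirely inside or outside are ignored.
  if ((a < boundary) != (b < boundary)) log.crossBoundarySwaps.push_back({a, b});
}

// Sort residuals r (with a parallel index array idx) only far enough to place
// the `boundary` smallest elements in front of position `boundary`.
static void partialSort(std::vector<double>& r, std::vector<std::size_t>& idx,
                        long lo, long hi, std::size_t boundary, SwapLog& log) {
  if (lo >= hi) return;
  const double pivot = r[(lo + hi) / 2];
  long a = lo, b = hi;
  while (a <= b) {                      // Hoare-style partition
    while (r[a] < pivot) ++a;
    while (r[b] > pivot) --b;
    if (a <= b) { swapAndLog(r, idx, a, b, boundary, log); ++a; --b; }
  }
  // Recurse only into the part that still contains the trimming boundary.
  if (static_cast<long>(boundary) <= b)      partialSort(r, idx, lo, b, boundary, log);
  else if (static_cast<long>(boundary) > a)  partialSort(r, idx, a, hi, boundary, log);
}
```

In the overall fitting loop, boundary would be set to roughly (x/100)·N, and the logged cross-boundary swaps identify the correspondences that entered or left the trimmed subset since the previous iteration, which is exactly the information needed for the incremental accumulator update described next.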
A small experiment in which we simply calculate the median (i.e. the 50-th percentile) of an array of numbers is presented in Figure 1. Numbers are uniformly sampled from the interval [−10, 10], and results are averaged over 1000 experiments. As expected, partial sorting significantly increases the computational efficiency of retrieving the median. If we sort the vector, add a perturbation to the elements by uniformly sampling in the interval [−1, 1], and then repeat the sorting, another substantial gain in computational efficiency can be observed. The complexity of the median retrieval behaves approximately linearly in the number of points.
Incremental accumulation
Next, let us suppose that the residual may be written in polynomial form. We have r(x̃_i, ỹ_i, θ) = D(x̃_i, ỹ_i) m(θ), where D(x̃_i, ỹ_i) is a matrix that depends only on the data, and m(θ) is a column vector that consists of different monomial forms of the unknowns. Note that linear forms are included in polynomial forms and are simply obtained by setting m(θ) = θ. Given this form, the objective that needs to be updated in each iteration can be written as a quadratic form m(θ)^T A_k m(θ) over the x-th percentile of the data, where A_k is what we denote here as an accumulator. It is the accumulator in the k-th iteration, which is composed by using the x-th percentile of the data given the sorting in that iteration. Note that many if not the majority of non-minimal fitting algorithms include a similar formation of an accumulator as one of their sub-steps. In both linear as well as iteratively linearized non-linear regression problems, the formation of the present accumulator is what we do when forming the normal equations of the system, and a solution or update is found by singular value decomposition of A_k. In closed-form non-linear solvers, the elements of A_k are used to fill the elimination template of a Gröbner basis solver. The second main contribution of our fast trimming strategy then relies on the insight that the accumulator A_k does not have to be recalculated in each iteration. More specifically, since the quick sort algorithm performs sorting by a sequence of swapping operations, the accumulator can be incrementally updated whenever we swap a pair of correspondences for which one is on the left side of the x-th percentile boundary, and the other one on the right. Given that the number of such swap operations in partial incremental sorting is substantially lower than the number of actual correspondences, we again obtain a significant gain in computational efficiency. We denote our algorithm quicksort4trim, and it is defined to return two logs denoted plusLog and minusLog. The two logs contain the indices of the elements involved in cross-percentile-boundary swaps during sorting, and thus have to be added to or removed from the accumulator (note that redundant swaps are ignored). The effectiveness of this approach is again verified in a small experiment in which we incrementally calculate the sum of all elements smaller than the median. As indicated in Figure 1, the number of required operations to update the sum for moderate perturbations can be as low as 10% of the number of operations required for a naive summation. Accumulators in geometric fitting often involve matrix operations, which is why the impact on overall computational efficiency can be substantial.
Application to camera resectioning
We apply our robust trim fitting strategy to a classical problem from geometric computer vision: camera pose resectioning.
After a definition of the problem, we will first see an efficient, incremental variant of the original REPPnP algorithm proposed by Ferraz et al. [8]. We will furthermore see an application of the incremental trimming strategy to a geometrically optimal, closed-form non-linear solver, which is the UPnP algorithm by Kneip et al. [16].
Problem statement
The goal of the Perspective-n-Point (PnP) problem is to find extrinsic camera pose parameters R and t that transform points p_i from the world frame to the camera frame such that they come into alignment with direction vectors f_i measured in the camera frame, i.e. λ_i f_i = R p_i + t, where λ_i denotes the unknown depth of the point seen from the camera frame. The PnP problem is solved for an arbitrarily large number of points, and state-of-the-art solutions typically have linear complexity in this number.
Incremental REPPnP
REPPnP by Ferraz et al. [8] is strongly inspired by the EPnP algorithm [19] and relies on the prior extraction of control points in the world frame. Using the latter, every world point can be expressed as a linear combination p_i^w = Σ_{j=1}^{4} α_ij c_j^w. Knowing that the linear combination weights do not depend on the reference frame, it is easy to see that the same combination holds in the camera frame, i.e. λ_i f_i = Σ_{j=1}^{4} α_ij c_j^c. Assuming that f_i = [u_i^c v_i^c 1]^T, the third row can be used to eliminate the unknown depth, and it immediately follows that each correspondence contributes two linear constraints of the form D_i θ = 0, where θ is the solution space given by the control points expressed in the camera frame, and ⊗ denotes the Kronecker product used to compose the constraint rows. The camera pose is subsequently found by control point alignment. REPPnP solves this problem robustly via trim fitting. For N points, it iteratively updates θ by nullspace extraction over the weighted set of constraints, i.e. by minimizing Σ_i w_i ‖D_i θ‖² subject to ‖θ‖ = 1. Originally, w_i = 1 for all i. Let θ_k be the solution found in iteration k. The w_i are then updated such that w_i = 1 if ‖D_i θ_k‖ < τ, and w_i = 0 otherwise. τ is defined as the median of the sequence of residuals ‖D_i θ_k‖. The original REPPnP algorithm applies full sorting and accumulation in each iteration. The incremental version of REPPnP, denoted REPPnPIncr, is obtained by applying our quicksort4trim partial sorting algorithm and performing incremental accumulation. We use the 50-th percentile throughout this paper, and the resulting algorithm is summarized in Algorithm 1.
Incremental UPnP
REPPnP solves for a linear nullspace and therefore relies on an algebraic cost function. Geometrically optimal solvers can return superior results, but require the closed-form solution of a non-linear objective. Such an alternative for the camera resectioning problem is given by the UPnP algorithm [16]. In simple terms, UPnP expresses the sum of geometric object space errors (i.e. point-to-ray distances) as a polynomial function of the quaternion parameters of the exterior orientation of the camera. This energy is then minimized in closed form by finding the roots of the first-order optimality conditions using a Gröbner basis solver. Interestingly, the algorithm also employs accumulations over the input correspondences to generate the values of the elimination template, which is why our fast trimming strategy may also be applied to this non-linear objective. However, the equations of the original paper need to be slightly reformulated in order to single out a clear function of accumulators that can be updated incrementally. We do this here for the central case, but the rule can easily be extended to the non-central case as well. For details, the reader is kindly referred to [16]. Let θ be a four-vector of the quaternion variables.
The rotation of the 3D point into the camera frame is given by R(θ)p_i, which, for the sake of a simplified derivation, is rewritten as the product Φ(p_i) m(θ), where Φ(p_i) is a 3 × 10 matrix that is filled with elements of p_i, and m(θ) is a 10 × 1 vector filled with all order-2 forms of the quaternion variables. The object-space error is given by the distance between the transformed point and its orthogonal projection onto the measured ray, where I is the 3 × 3 identity matrix appearing in the corresponding projection operator. w_i = 1 if correspondence i is considered. The overall objective energy is finally given as a function of individual accumulators. Again, we solve this problem using our incremental sorting and accumulation algorithm quicksort4trim. Rather than summing up all terms for which w_i = 0, we register swaps across the x-th percentile boundary. The return variables plusLog and minusLog register terms for which w_i toggles from 0 to 1 or from 1 to 0, respectively, and only those terms need to be taken into account in order to update the accumulators H, A_0, A_1, A_2, and A_3. Sorting is based on the reprojection error. The algorithm is summarized in Algorithm 2.
Experimental Results
We compare our incremental implementations of REPPnP [8] and UPnP [16] against their corresponding algorithms employing full sorting and naive accumulation, as well as two state-of-the-art, non-robust PnP methods, i.e., EPnP [19] and Ransac based on the P3P algorithm [17] (denoted P3PRansac). All results are obtained by C++ implementations running on an Intel® Core™ i5-8250U 8-core CPU clocked at 1.60 GHz. We conduct rigorous synthetic experiments assuming a virtual calibrated camera with a focal length of 800. We generate random 2D-3D correspondences by distributing random 3D points in front of the camera. We finally add different levels of uniform noise to the image measurements and produce random outliers in the data by randomizing the direction vectors of a fraction of the correspondences. Ground-truth rotations and translations are generated and used to transform the 3D points into the world frame. Absolute errors in rotation (in rad) and translation (in m) are calculated and compared for different numbers of correspondences, noise levels and outlier fractions. If R_gt and R denote the ground truth and estimated rotations, our error is given by ‖R^T R_gt − I‖_F, which is equivalent to the angle of the residual rotation expressed in radians. We furthermore evaluate computational efficiency by running each set of experiments more than 1000 times and considering the average running time.
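The rotation error metric can be evaluated as in the following small sketch (written for this text using Eigen, which is an assumption about tooling, not the authors' code); angleError extracts the residual angle directly, which the Frobenius value approximates for small errors.

```cpp
#include <Eigen/Dense>
#include <algorithm>
#include <cmath>

// Frobenius-based rotation error between an estimate R and ground truth R_gt,
// following the metric used in the text.
double frobeniusError(const Eigen::Matrix3d& R, const Eigen::Matrix3d& R_gt) {
  return (R.transpose() * R_gt - Eigen::Matrix3d::Identity()).norm();  // ||.||_F
}

// Residual rotation angle in radians, obtained from the trace relation
// trace(R_res) = 1 + 2 cos(phi) for a rotation by angle phi.
double angleError(const Eigen::Matrix3d& R, const Eigen::Matrix3d& R_gt) {
  const Eigen::Matrix3d R_res = R.transpose() * R_gt;
  double c = (R_res.trace() - 1.0) / 2.0;
  c = std::max(-1.0, std::min(1.0, c));  // clamp for numerical safety
  return std::acos(c);
}
```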
It can furthermore be seen that the scores before and after sorting present decreasing differences as iterations proceed, thus indicating that sorting and accumulation efficiency increase during convergence. The average number of non-redundant cross-percentile-boundary swaps in each iteration is illustrated in Figure 3, confirming a fast decline over the very first iterations. Given that the linear complexity step of the employed solvers outweighs all other steps (at least for sufficiently many points), this behavior leads to a substantial increase in computational efficiency. We evaluate the mean running time of P3PRansac [17], ePnP [19], REPPnP [8], REPPn-PIncr (REPPnP + fast geometric trim fitting, Algorithm 1), UPnP [16], RobustUPnP (UPnP + regular geometric trim fitting) and RobustUPnPIncr (UPnP + fast geometric trim fitting, Algorithm 2). The results are summarized in Figure 4, where the computational efficiency is evaluated for a varying number of correspondences. It is highly interesting to see that REPPnPincr becomes at least twice as fast and achieves a running time that is comparable to P3PRansac. All experiments are executed for constant Gaussian noise of 3 pixels and with a fixed outlier fractions of 10% (left) and 30% (right). Figure 5 shows the errors over the number of correspondences. Results are obtained for uniform noise of 3.0 pixels and an outlier fraction of 30%. Both RobustUPnP and Ro-bustUPnPIncr return much lower mean and median (position and rotation) errors than P3PRansac, UPnP and EPnP, demonstrating simultaneous strong rejection of outliers and high accuracy of the trim-fitting based, geometric solver. Furthermore, note that-while the median errors of the algebraic and the geometric solvers are practically identical-REPPnP and REPPnPIncr are significantly outperformed by their geometrically optimal counter-parts in terms of the mean error owing to the fact that the algebraic solver often fails to converge. We have tried both the original implementation of [8] as well as our own re-implementation of the algorithm. The indicated results are the best results we were able to obtain using the algebraic error criterion. Note that the errors of RobustUPnP and RobustUPnPIncr are practically the same, which indicates that the incremental sorting merely increases computational efficiency without affecting the results. The same is true for REPPnP and REPPnPIncr. Figure 6 finally shows errors for varying uniform noise levels reaching from 0 to 6.0 pixels. The experiments use an outlier fraction of 30% and 2000 correspondences. The result demonstrates that REPPnP and RobustUPnP (with and without fast incremental trim fitting) have lower median position and rotation errors than P3PRansac, EPnP and UPnP, and the geometric solver ultimately produces the most accurate results. Figure 7 finally shows errors obtained for varying outlier fractions between 2.5% and 50%. The noise level is kept at 3.0 pixels and the number of correspondences remains 2000. As can be observed in Figure 7, an increasing outlier fraction leads to increasing mean errors for EPnP and UPnP, which is natural owing to their non-robust nature. P3PRansac has a break down point of about 30%, while the algebraic solver shows high instability starting from outlier levels as low as 10%. RobustUPnP and RobustUPnPincr demonstrate the best performance and have a similar breakdown point than P3PRansac, but lower errors owing to the geometric nature of the algorithm. Figure 6. Errors for varying noise levels. 
Conclusion The presented algorithm makes an astute use of the internal swapping operations in trimming methods for a significant reduction of computation time of the most important, linear-complexity step in geometry solvers. We have furthermore demonstrated that this technique is amenable to non-linear geometrically optimal solvers. This leads to a significant improvement in success rate compared to linear algebraic null space solvers, and makes trim-fitting a viable alternative in practical applications. The present work limits the evaluation to camera resectioning, for which very good performance is achieved, but the application of Ransac followed by a refinement over the inlier subset remains the gold standard. Our current investigations focus on higher dimensional problems, for which the proposed technique could achieve an unprecedented mix of accuracy, success rate and computational efficiency.
Transitory Master Key Transport Layer Security for WSNs
Security approaches in Wireless Sensor Networks (WSNs) are normally based on symmetric cryptography. Instead of symmetric encryption, some alternative approaches have been developed by using public-key cryptography. However, the higher computational cost represents a hard limitation to their use. In this paper, a new key management protocol is proposed. A transitory symmetric key is used to authenticate nodes in the network during the key establishment. However, pairwise keys are established using asymmetric cryptography. A theoretical analysis shows that the computational effort required by the public key cryptosystem is greatly reduced, while the security of the network is increased with respect to state-of-the-art schemes based on a transitory master key. Moreover, an experimental analysis demonstrates that this proposed approach can reduce the time spent for key establishment by about 35%.
I. INTRODUCTION
Wireless sensor networks (WSNs) [1] are a well-established pervasive technology that represents an ideal sensing component in the Internet of Things (IoT) [2]. They are composed of low-cost and low-power devices, called sensor nodes, which sense the environment, process the collected data and exchange information through a wireless connection. They are applied in numerous fields, like Industry 4.0 [3], smart cities [4] and air quality monitoring [5]. WSNs share fundamental characteristics with embedded systems, like low-power devices with low-cost hardware and special-purpose applications. According to these characteristics and to their network system, ad-hoc solutions must be implemented to solve ordinary issues like power consumption [6], channel allocation [7] and reliability [8]. In particular, WSNs are affected by security threats (e.g., eavesdropping [9] and hardware tampering [10]). Symmetric cryptography in WSNs is normally preferred to public-key cryptography, since it requires a lower computational effort. However, with symmetric cryptography two nodes can communicate only if they share a common secret, i.e. a key. Various key management approaches have been developed in order to establish and distribute keys in a WSN [11]. The Plain global key (PGK) represents the basic security solution. A single global key is used to encrypt all the communications. This scheme has very low memory overheads, but it also provides a low security level because all nodes store the common secret. Therefore, if an adversary compromises a node, the entire network is compromised. Full pairwise key (FPWK) is another basic approach. This scheme is also based on key predistribution: before the deployment, the keys are stored in the memories of the nodes. Additional computation or data exchange is not required. In this case, each possible link in the network has its own secret key. Therefore, each node stores a key per node in the network. The required memory is proportional to the size of the network, so FPWK can only be applied to small networks. However, if an adversary compromises a node, he/she cannot use the obtained information to eavesdrop on the communications among the other nodes.
Within the techniques based on symmetric encryption, transitory master key represents a well-known solution for static networks.At deployment each node stores a global secret that is used to generate pairwise keys.However, after a small period of time each node deletes the global secret.In this way, if a node is compromised after the deletion, the rest of the network is safe.LEAP+ [12] is the main example of schemes based on this technique. The communication security within WSNs is based on symmetric cryptography.Moreover, public key cryptography is also considered too expensive to protect the key establishment, since the limited computational and power resources of the nodes clash with the involved computational overheads.However, public key could simplify the key management.An example of public key management scheme is the simplified versions of TLS (Transport layer security) [13], which is the protocol used on the Internet to establish secure connections.In this scheme, public key digital certificates are used for authentication and then the key establishment is done with key agreement functions (e.g.Diffie-Hellman). This paper presents Transitory master key TLS (TMKTLS), an hybrid protocol based on both symmetric and asymmetric cryptography.Its main goal is to reduce the computational requirements for the application of public cryptography in WSNs.In TMKTLS, a temporary master key, shared by all nodes, is employed to authenticate the public keys of the nodes.After an initial time slot, the transitory master key is deleted and pairwise keys among the nodes are generated using asymmetric cryptography.This scheme authenticates the public keys with a message authentication code instead of a digital certificate; the former operation takes negligible time compared to the latter, so the overall time required for key establishment is greatly reduced.If an adversary compromises the transitory master key, he/she can only add fake nodes to the network, while data secrecy is always preserved by public cryptography.Moreover, these malicious nodes can be detected by a malicious node detection routine.This kind of technique has been also used in [14].Differently from that approach, TMKTLS is compliant with node adding and mobile nodes, even if it has better performance with static networks. A theoretical analysis shows that the computational effort required by TMKTLS is greatly lower than standard public key cryptosystems.The schemes based on transitory master key have common security limitations related to the possibility that the transitory master key is compromised.However, TMKTLS provides a good protection even if the transitory master key is compromised, since public cryptography still protects the links from eavesdropping and the malicious node detection routine allows to identify fake nodes.Moreover, an experimental analysis on real nodes validates the theoretical analysis and demonstrates that this new approach can decrease the time spent for key establishment up to a third. The rest of the paper is organized as follows.In Sect.II related works are described.Sect.III presents the proposed key management scheme.In Sect.IV, the proposed approach is theoretically analyzed and compared with the state-of-theart approaches, while in Sect.V, an experimental analysis validates the scheme.Finally, conclusions are drawn in Sect.VI. II. 
RELATED WORKS Many key management schemes based on different approaches have been presented in literature [15], [16].In the following, the most important ones are described. A. GLOBAL MASTER KEY Schemes based on the Global Master Key approach use a unique master key that is shared by all the nodes and is used to protect all the communications or to provide security during the pairwise key establishment.The main approach in this category is Symmetric-key Key Establishment (SKKE) [17], which is the key management scheme used by ZigBee.In SKKE every node is preloaded with the master key.To generate a pairwise key between node A and node B, node A sends a challenge C A , i.e., a random number, to node B. Node B sends back a message composed by its identifier ID B , a challenge C B and the message authentication code computed over a constant k 1 , ID A , ID B , C A and C B .Then a keyed hash function with the master key is executed over the two IDs and the challenges.The results is used as a common secret by both the nodes.From this secret the nodes compute two pairwise keys, one used to sign messages and the other used for encryption. B. TRANSITORY MASTER KEY Also this family of protocols uses a global secret in order to protect the pairwise key establishment.However, the global key is deleted after a timeout.The assumption at the base of this approach is that an adversary cannot compromise a node in a short period of time.Therefore, by deleting the global key before a proper timeout the network can be considered safe. After the deployment, a node has the master key in its memory.This period is defined initialization phase.After the deletion of the master key the node starts the working phase.Without special techniques, it is possible to establish keys only within the initialization phase, since two nodes need to share the master key.Therefore, adding new nodes after the first deployment would be impossible.However, key management schemes can use specific techniques to provide the possibility to add new nodes able to establish pairwise keys with the previously deployed ones. If an adversary compromises a node and steals the master key from its memory, he/she can decrypt all the messages exchanged for the pairwise key establishment.Otherwise, if an adversary compromises a node after the deletion of the master key, he/she cannot get advantages for eavesdropping on the other links in the network. A critical point is represented by the timeout for the deletion of the master key.If the timeout is too long an adversary could be able to compromise the master key.Otherwise, if the timeout is too short the nodes could not be able to establish all the pairwise keys. The most important scheme based on a transitory master key is LEAP+ [12].This protocol allows the generation of different types of keys (e.g. a key to exchange messages with the base station, the pairwise keys, etc.).However, the establishment of pairwise keys represents the base of the scheme while all the other keys are derived from these ones. In [18], the use of a transitory master secret was mixed with the random distribution of keys.This approach provides a high level of security, since an adversary needs both the transitory secret and the correct key to eavesdrop on a link.However, the computational and communication overheads are high. C. 
PUBLIC-KEY CRYPTOGRAPHY Some key management schemes use public-key cryptography to protect the pairwise key establishment.Transport Layer Security (TLS) represents the basic approach.This protocol is commonly used on generic computer networks to establish secure communications.In a typical WSN implementation of TLS [19], each node has a couple of public and private keys, a certificate that ensures that they are authentic and the public key of the administrator.To generate a common pairwise key, two nodes exchange their certificates and check their authenticity by using the administrator's public key.After the authenticity check, they generate a pairwise key with a public-key agreement algorithm, like Diffie-Hellman.In the rest of the paper this scheme is referred to as Simplified TLS (STLS). STLS provides a high level of security, since the adversaries cannot forge a certificate and they cannot take any advantage by compromising a node.However, the public-key operations require a great computational effort.Moreover, each asymmetric operation executed by a normal node would be very slow.Therefore, these approaches are often considered too complex for WSNs. III. TRANSITORY MASTER KEY TRANSPORT LAYER SECURITY This section presents Transitory Master Key Transport Layer Security (TMKTLS), a key management scheme based on a transitory master key.The main goal of this scheme is to achieve most of the benefits of public key cryptography and transitory master key approaches with the limited resources available in WSNs.Therefore, TMKTLS is designed to require lower computational overheads than basic public key schemes and to provide a better security level than transitory master key schemes. In the proposed scheme, the transitory master key is used for the authentication of the public credentials of the nodes.Each node demonstrates the authenticity of its identity and its public key with a message authentication code, which is indexed with the transitory master key.After a timeout, each node deletes the transitory master key, so possible adversaries cannot forge the signatures. In STLS, the verification of digital signatures requires a high computational effort, greater than the one required by key agreement.In TMKTLS, the computation of a message authentication code requires negligible time, considerably reducing the computational time needed to authenticate a node.In transitory master key schemes, if a node is compromised before the deletion timeout, the adversary is potentially able to eavesdrop on all the links and to introduce into the network new nodes that pass any authenticity check.TMKTLS has the same protection against eavesdropping of Diffie-Hellman, since it is used to generate the pairwise keys.The introduction of malicious nodes is possible, since the authentication is based on a global secret, but it can be detected by using the asymmetric cryptography. The key establishment of TMKTLS is divided into four main phases: • Hello Phase: hello messages are broadcasted by the nodes to perform the neighboring discovery.The nodes check the authenticity of the received messages.At the end of this phase, the transitory master key is deleted; • Pairwise Phase: each node computes pairwise keys with each of the authenticated neighboring nodes; • Acknowledge Phase: acknowledge messages are exchanged in order to confirm each link; • Working Phase: the nodes can still establish new pairwise keys by using public-key digital signatures instead of the transitory master key.Fig. 
1 shows an example of the proposed key establishment. In this case, node 1 can communicate with nodes 0 and 2. However, node 0 and node 2 cannot communicate directly since they are too far apart. The data represented between curly brackets are stored in the memory of the nodes, while the ones in square brackets are sent over the wireless channel.
A. NOTATION AND ASSUMPTIONS
The proposed scheme is based on the following assumptions: there is no deployment knowledge; all nodes are homogeneous; nodes can roam within the network and can be added after the initial deployment; an adversary can eavesdrop on all the messages, inject packets and replay older messages; an adversary can compromise a node and obtain all the data stored in it. After a node is compromised, the adversary has total control of that node. The following symbols are used in the rest of the paper:
• n: number of nodes in the network;
• i_x: identifier of a generic node x;
• e_x, d_x: public and private keys of node x;
• s: transitory master key;
• k_x,y: pairwise key between node x and node y;
• c_x: digital signature in the public certificate of node x;
• e_CA: public key of the network administrator;
• r_x: signature of x's hello message;
• M(): message authentication code function;
• DS(): digital signature function;
• KA(): key agreement function;
• ||: concatenation operator.
B. PREDEPLOYMENT PHASE
Before deploying the network, all nodes are preloaded with initial data. The data assigned to node u are i_u, e_u, d_u and s. Each node also knows the functions M() and KA(). At boot, a node computes its hello message, which is composed of i_u, e_u and a signature corresponding to the message authentication code computed over the concatenation of i_u and e_u, indexed by s: r_u = M(i_u||e_u, s). Since the hello message of a node is based on static data, it can be computed just once.
C. FIRST PHASE: NEIGHBOR DISCOVERY
During the first phase, the nodes look for possible neighboring nodes, in order to establish secure links. Each node periodically broadcasts a hello message in order to communicate its credentials to the other nodes. The neighbor discovery phase is composed of µ rounds, with a duration of t_h per round. The total duration of the first phase is T_h = µ · t_h. Each node broadcasts its hello message once per round. The message is sent at a random instant of each round, in order to decrease possible collisions. Node v sends a hello message composed of [i_v, e_v, r_v]. After receiving this message, node u checks the received signature. If r_v == M(i_v||e_v, s), the message is authentic, since v knows s. Node u saves i_v and e_v into its memory. The duration of this phase should be chosen carefully, depending on the network characteristics. If it is too short, not all the nodes will be able to establish a pairwise key with all their neighboring nodes and the network connectivity decreases. If it is too long, the probability that an adversary captures a node and compromises the transitory key increases. In Fig. 1, node 0 broadcasts its hello message and node 1 receives and verifies its authenticity; then, nodes 1 and 2 repeat the same operation. As can be seen in the µ-th round, if a certain hello message was already received, it is not verified again.
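As an illustration of the hello message handling, the following sketch builds and checks r_u = M(i_u||e_u, s); the function toyMac, the container types, and the struct layout are placeholders introduced here for illustration (the prototype uses a real MAC such as MMH), and the byte lengths follow the case study values used later in the comparison (1-byte identifiers, 40-byte public keys, 4-byte tags).

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Toy keyed checksum standing in for M(); it only keeps the sketch
// self-contained and is NOT a secure MAC.
std::array<uint8_t, 4> toyMac(const std::vector<uint8_t>& msg,
                              const std::vector<uint8_t>& key) {
  uint32_t h = 2166136261u;
  auto mix = [&h](uint8_t b) { h ^= b; h *= 16777619u; };
  for (uint8_t b : key) mix(b);
  for (uint8_t b : msg) mix(b);
  for (uint8_t b : key) mix(b);
  return {uint8_t(h), uint8_t(h >> 8), uint8_t(h >> 16), uint8_t(h >> 24)};
}

struct HelloMessage {
  uint8_t id;                          // i_u (1 byte)
  std::array<uint8_t, 40> publicKey;   // e_u (40 bytes)
  std::array<uint8_t, 4> tag;          // r_u = M(i_u || e_u, s) (4 bytes)
};

// Predeployment / boot: the hello message is computed once from static data.
HelloMessage buildHello(uint8_t id, const std::array<uint8_t, 40>& e_u,
                        const std::vector<uint8_t>& s) {
  std::vector<uint8_t> buf{id};                    // i_u || e_u
  buf.insert(buf.end(), e_u.begin(), e_u.end());
  return HelloMessage{id, e_u, toyMac(buf, s)};
}

// First phase: a receiver that still holds s accepts the sender as authentic
// only if the recomputed tag matches r_v.
bool verifyHello(const HelloMessage& m, const std::vector<uint8_t>& s) {
  std::vector<uint8_t> buf{m.id};
  buf.insert(buf.end(), m.publicKey.begin(), m.publicKey.end());
  return toyMac(buf, s) == m.tag;
}
```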
D. SECOND PHASE: PAIRWISE KEY ESTABLISHMENT
The goal of the second phase is the establishment of the pairwise keys. Since the authentication has been completed in the previous phase, the transitory key is deleted at the beginning of this phase. Each node computes a pairwise key per authenticated neighboring node. At the end of the phase, each link will have a unique symmetric key, known only by a pair of nodes. The establishment of the pairwise keys is protected by public-key cryptography. TMKTLS is compliant with key agreement functions based on a Diffie-Hellman key exchange (e.g. Elliptic Curve Diffie-Hellman, ECDH). These public-key functions allow two devices to generate a common secret, which can be used as a symmetric key. For example, in order to establish a pairwise key between nodes u and v, node u computes KA(d_u, e_v) while node v computes KA(d_v, e_u). Both nodes generate the same common secret k_u,v. In many contexts, but in particular for WSNs, Diffie-Hellman-like functions represent a proper solution, since to establish a key it is sufficient to know the other party's public key and to exchange a few messages [20]. In Fig. 1, each node deletes s from its memory and then computes the pairwise keys for every discovered neighbor by using KA(). In this case, nodes 0 and 2 just compute the key with node 1, while node 1 computes both pairwise keys.
E. THIRD PHASE: KEY ACKNOWLEDGEMENT
During the third phase, a 2-way acknowledgement is executed in order to confirm the correct generation of the pairwise keys. The goal of this phase is to ensure that the nodes can correctly communicate with each other. The acknowledgement also makes it possible to detect malicious nodes that participated in the previous phases without the correct credentials. In every pair of nodes, the one with the lowest identification number is defined as the initiator while the other one is defined as the recipient. As an example, let us consider node u as the initiator and node v as the recipient. The initiator starts the acknowledgment: it signs its identification number i_u with M(i_u, k_u,v) and sends this message to the recipient. The recipient answers with i_v signed with M(i_v, k_u,v). If both u and v correctly verify the received signatures, it means that the pairwise key k_u,v was correctly created. In the example, node 0 initiates the acknowledgment with node 1, which responds accordingly; node 1 does the same with node 2. The message authentication code verification is not represented in Fig. 1.
F. FOURTH PHASE: KEY ESTABLISHMENT WITHOUT THE TRANSITORY KEY
When a node is added to the network after the initial deployment, or when it is moved, it is unable to establish pairwise keys with the nodes that have already deleted s. Therefore, a key establishment routine among nodes in the working phase is required. This routine is based on public certificates. Each node uses its own digital signature c_u and the network administrator's public key e_CA, which is used to verify other certificates.
First of all, each node in the working phase periodically broadcasts its discovery message, in the same way as in the first phase of the protocol.This node becomes the initiator of the key establishment.Assuming that node u is the initiator, the discovery message is composed of [i u , e u , c u ], where c u is equal to the digital signature DS(i u ||e u , d CA ), computed over the message that has to be sent with the private key of the certificate authority, that in this case is the network administrator.If a node that does not share a pairwise key with the initiator receives a discovery message, it immediately checks the received message.Then, if it is authentic, the node responds with its own certificate and starts to compute the pairwise key.The answer is sent in unicast, to reduce the computation done by the remaining nodes in the network. Then the initiator verifies all the received discovery messages.For example, it will verify v's message with V (i v ||e v , c v , e CA ), where V () is the complementary function of DS(); it takes the message and the signature as input and verifies them against the public key e CA .If the function succeeds, it means that v is a valid node, so the pairwise key is computed with the function KA(), as in the second phase. Finally, each new pairwise key is confirmed with a 2-way acknowledgment, as in the third phase. G. MALICIOUS NODE DETECTION If the transitory key is compromised, an adversary can introduce into the network malicious nodes able to establish keys with other nodes with s.This possibility is limited, since each node will recognize as authentic the malicious nodes only for the short time before deleting s.However, a malicious node detection routine can identify them. The general idea is to execute a key establishment without the transitory key, since only an authentic node can generate a pairwise key in that way.In order to verify the authenticity of a suspect node, an inquirer node sends a check message.This message has the same content of the discovery message used in the fourth phase.The suspect node checks the received message and answers with a discovery message.The inquirer node checks the received message.If the message is valid, both the nodes compute a shared key with the function KA().Finally, a 2-way acknowledgment is performed in order to verify the authenticity of the suspect node. IV. EVALUATION AND COMPARISON In this section, the proposed algorithm is analyzed from the theoretical point of view and compared with other state-of-the-art protocols.The analysis is focused on security, memory and computation. A. CONNECTIVITY The connectivity level is here defined as the probability of successfully establishing a link with a neighboring node.The maximum level of connectivity is reached if every node is able to communicate with all its neighboring nodes.The minimum, if no node is able to communicate. LEAP+ can provide the maximum connectivity only if the neighbor discovery phase is long enough, since a routine for the establishment of pairwise keys between two nodes without the transitory key is not present. 
FPWK, STLS and TMKTLS guarantee that all links are created, so the level of connectivity is the maximum. In particular, in FPWK each node knows the key for each link. In STLS, the asymmetric cryptography allows all the nodes to establish a pairwise key with any other node. In TMKTLS, although the nodes could be unable to complete the key establishment within a timeout that is too short, they can always establish the pairwise keys without the transitory key.
B. RESILIENCE
The resilience level is defined as the ability to resist compromised secret material. In particular, the resilience against eavesdropping is computed according to the probability that an adversary cannot eavesdrop on a link, while the resilience against node forgery is identified by the probability that an adversary cannot pass an authentication check. The possibility to eavesdrop on the links of the compromised nodes or to clone the compromised nodes is not considered. The maximum level of resilience against eavesdropping is reached if no link can be eavesdropped. The minimum is reached if all the links can be eavesdropped. The maximum level of resilience against forgery is reached if all nodes can recognize the fake nodes. The minimum is reached if no node can recognize them. Table 1 shows the level of resilience against eavesdropping (eavesdropping rows) and against node forgery (forgery rows) if an adversary has compromised nodes within the initialization or the working phases. FPWK and STLS always provide the maximum level of resilience. This level is reached since a node does not store any information useful to find the keys generated by the other nodes. In LEAP+, if at least one node is compromised during the initialization phase, all the links could be eavesdropped and the adversary can forge new nodes able to pass all the authenticity checks. If some nodes are compromised in the working phase, the level of resilience is maximum. In TMKTLS, the transitory key is the only secret that can be stolen from a node in order to attack other parts of the network. Therefore, TMKTLS provides the maximum resilience during the working phase. During the initialization, the level of resilience against eavesdropping is maximum, since all the pairwise keys are protected by asymmetric cryptography. However, an adversary with the transitory master key can introduce malicious nodes able to establish a pairwise key with nodes that still store the transitory key.
C. RECOVERABILITY
The recoverability level is defined as the ability to restore secure communication after some compromised secret material has been revoked [21]. It can be computed according to the probability that a link is still safe or that a new key can be established to protect it. The maximum level of recoverability is reached if every node is able to communicate with all its neighboring nodes after revoking the compromised secret material. The minimum is reached if no node can communicate after revoking the secret material. Table 2 shows the level of recoverability if an adversary has compromised nodes within the initialization or the working phases. FPWK, STLS and LEAP+ have a level of recoverability equal to the level of resilience, which is always the maximum or the minimum. Therefore, FPWK and STLS always provide the maximum level of recoverability, while LEAP+ provides the maximum if a node is compromised during the working phase, and the minimum if a node is compromised during the initialization phase.
If a node is compromised during the working phase, TMKTLS provides the maximum level of recoverability, as for the resilience. If a node is compromised during the initialization phase, the adversary is able to introduce new malicious nodes. Nevertheless, the malicious node detection routine makes it possible to check the authenticity of the nodes and to revoke the keys used to communicate with them. Therefore, the maximum level of recoverability is provided.
D. COMPUTATIONAL REQUIREMENTS
The operations involved by the analyzed schemes are shown in Table 3, where: v represents the verification of a public-key signature, dh represents a shared secret computation using the Diffie-Hellman scheme, r represents a keyed pseudo-random function, and m represents a message authentication code or a hash function. The most efficient scheme is FPWK, since all the pairwise keys are preloaded and no operation is required after the deployment. LEAP+ requires the execution of two pseudo-random functions and a message authentication code in order to compute the other node's individual master key, compute the pairwise key, and verify a message. The involved computational overheads are still low. In STLS, the operations involved are the verification of a digital signature, a Diffie-Hellman scheme execution, and two message authentication codes. TMKTLS requires a Diffie-Hellman scheme execution and three message authentication codes. In both STLS and TMKTLS, two of the message authentication codes are needed for the acknowledgement task. Since the computational overhead of a message authentication code execution is much lower than that of public-key operations, STLS requires more computational time than TMKTLS, which in turn requires more time than LEAP+.
E. MEMORY REQUIREMENTS
Table 4 shows the memory required by each scheme in order to store the secret material. The number of nodes in the network and the number of nodes in direct communication are n and v, respectively. The lengths of the secret material are: l_k for the symmetric keys, l_d for the private keys, l_e for the public keys, l_s for the public digital signatures, l_m for the message authentication codes, and l_a for the temporary key. The length of the identification number of a node is l_id. FPWK has the largest memory overhead, since each node stores a key per node of the network, independently of the density. During the working phase, in all the other schemes, each node stores the pairwise keys with its neighboring nodes, which corresponds to an area of v(l_k + l_id). Among these schemes, LEAP+ has the lowest memory overhead, since each node stores only two keys during the initial phase. STLS has larger requirements, since each node stores a pair of asymmetric keys for the communication, the digital signature and the public key of the certificate authority. TMKTLS has larger memory requirements than STLS, since each node stores the same material as in STLS plus a message authentication code signature and the transitory master key. However, during the working phase, the additional secret material is deleted.
F. OVERALL COMPARISON
The analyzed schemes are all based on similar assumptions. They all allow, to some extent, adding nodes after the initial deployment (only FPWK has some limitations in this area, because it requires unused pairwise keys); they also provide a high level of security when nodes are compromised during the working phase of the protocols.
In order to provide a quantitative comparison, the following case study is considered (all values are in bytes): l_k = 16, l_d = 20, l_e = 40, l_s = 40, l_m = 4, l_a = 48 and l_id = 1; the limit to the memory size dedicated to the secret material is 4 KB. According to the formulas in Table 4, FPWK is compliant with networks composed of at most 256 nodes, STLS and TMKTLS with networks with at most 232 nodes in direct communication, and LEAP+ with at most 240 nodes in direct communication. Therefore, LEAP+, STLS and TMKTLS are compliant with large networks with high density. From the computational point of view, FPWK is the scheme with the lowest requirements. LEAP+ has higher requirements, but it is still faster than the protocols based on public-key cryptography. TMKTLS has lower requirements than STLS, which is the scheme with the largest requirements. Although the schemes based on random predistribution cannot reach the maximum resilience, if some nodes are compromised during the working phase, all the considered schemes provide a high level of security. In particular, in FPWK, LEAP+, STLS and TMKTLS, an adversary that compromises a node does not obtain useful information about other nodes, so he/she cannot eavesdrop on any link or introduce new nodes different from the compromised ones. If some nodes are compromised during the initialization phase, in LEAP+ the adversary could eavesdrop on all the links and introduce new nodes. In TMKTLS, an adversary cannot eavesdrop on any link. However, he/she could introduce new nodes able to establish a link with nodes that still store the transitory key. Nevertheless, these fake nodes will be able to operate only until a malicious node detection is performed. For medium to large size WSNs with high density, TMKTLS represents a suitable key management scheme, since it provides a high level of security with a computational effort lower than STLS.
V. EXPERIMENTAL ANALYSIS
A prototypical implementation of TMKTLS has been developed on the TinyOS platform in order to evaluate its feasibility on the current generation of nodes and verify its performance. The main goal of this analysis is to compare TMKTLS and STLS, in order to correctly evaluate the execution times of the proposed protocol; LEAP+ was not considered because, from the performance point of view, it will always be faster than protocols based on public-key cryptography. Furthermore, resilience was not considered in this analysis because it is a static property that does not affect the experimental comparison. The prototypical implementation was executed on the TelosB Tmote Sky IV, a sensor node that mounts an MSP430 8 MHz microcontroller with 10 KB RAM and a CC2420 wireless chip with a transmission rate of 250 kbps. From the software point of view, TMKTLS was developed with the NesC language, a C dialect specifically designed for TinyOS; the TinyECC library was used to provide support for elliptic curve cryptography, in particular for the Elliptic Curve Diffie-Hellman (ECDH) algorithm. Moreover, the Crypto library was used for Multilinear Modular Hashing (MMH), a fast message authentication code algorithm.
An implementation of STLS was also developed in order to perform the comparison with TMKTLS; the two implementations are very similar: STLS uses a public-key digital signature, which is verified during the pairwise phase for each of the neighboring nodes, instead of a message authentication code. The focus of this analysis is on the optimization of the neighbor discovery, in order to reach a high connectivity, and on the execution time of the algorithm, to verify the benefits with respect to STLS.

A. HELLO PHASE OPTIMIZATION
Optimizing the hello phase means choosing the minimal duration that guarantees the desired average level of connectivity. Here, connectivity is the ratio between successfully created links and the total number of possible links among nodes. Shortening the initial phase also implies a higher security of the protocol.

The neighbor discovery phase, as explained in Section III-C, is divided into rounds, in which nodes broadcast their hello messages at random instants. Using probabilistic methods it is possible to analyze this procedure and compute the average connectivity:

C_avg = ( 1 − [ 1 − (1 − 2·t_tx/t_h)^(v−1) ]^µ )²   (1)

where the involved parameters are: t_tx, the duration of the transmission; t_h, the round duration; v, the number of neighboring nodes; and µ, the number of rounds. In order to correctly receive a whole message, even partial overlap with other messages must be avoided. Therefore, in order to receive the hello message from a node within a round, the fraction of time in which no other message can be sent is 2·t_tx/t_h. Considering that each node sends one hello message per round, the probability that in a round the hello message from a node suffers no collision is (1 − 2·t_tx/t_h)^(v−1). The probability that the hello message from a node suffers at least one collision in every one of the µ rounds is therefore [1 − (1 − 2·t_tx/t_h)^(v−1)]^µ. The final square operation is required since both the involved nodes have to successfully receive the reciprocal message. In order to verify the proposed formula, an in-house simulator was developed in the Python programming language. The tests were run 30 times with random parameters, each case was simulated 10^6 times and, on average, the difference with respect to the formula was below 1%.

Using (1), it is possible to obtain, through a simple lookup table, the best values of t_h and µ for a given t_tx, v, and the desired C_avg. Figure 2 shows the minimal neighbor discovery duration given the number of neighboring nodes, for different target connectivity levels; the transmission time was set to 3.2 ms, which corresponds to the time required by a TelosB sensor node to send 100 bytes. Even for more than 250 nodes, the minimum neighbor discovery duration is about 18 s; if the target connectivity is lowered to 0.95, the duration becomes about 12 s. This shows that, even for high-density networks, the initial phase is short, so the time window in which the protocol is vulnerable is small. The number of neighboring nodes v depends on the network density; for instance, if the density is 1 (i.e., each node is able to directly communicate with every other node), v is equal to the number of nodes in the network.
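The "simple lookup table" built from (1) can be reproduced with a few lines of code. The sketch below is a minimal illustration, not the in-house simulator mentioned above: it assumes the reconstructed form of formula (1), and the candidate grid of round durations and the cap on the number of rounds are arbitrary choices made for the example.

```python
# Minimal reproduction of the lookup described above: given the transmission
# time t_tx, the number of neighbors v, and a target average connectivity,
# search for the shortest hello phase (mu rounds of duration t_h) that
# satisfies formula (1). The candidate grids are illustrative choices.

def avg_connectivity(t_tx, t_h, v, mu):
    """Average connectivity according to formula (1) as reconstructed above."""
    p_no_collision = (1.0 - 2.0 * t_tx / t_h) ** (v - 1)
    p_never_received = (1.0 - p_no_collision) ** mu
    return (1.0 - p_never_received) ** 2

def shortest_hello_phase(t_tx, v, target, max_rounds=50):
    """Return (t_h, mu, total duration) minimizing mu * t_h over a coarse grid."""
    best = None
    candidate_t_h = [k * t_tx for k in range(3, 40 * v, 2)]  # must exceed 2*t_tx
    for mu in range(1, max_rounds + 1):
        for t_h in candidate_t_h:
            if avg_connectivity(t_tx, t_h, v, mu) >= target:
                # Connectivity grows with t_h, so the first hit is the shortest
                # round duration (hence the shortest phase) for this mu.
                if best is None or mu * t_h < best[2]:
                    best = (t_h, mu, mu * t_h)
                break
    return best

if __name__ == "__main__":
    t_tx = 3.2e-3  # 100-byte hello on a TelosB node, as assumed for Figure 2
    for v in (50, 150, 250):
        t_h, mu, total = shortest_hello_phase(t_tx, v, target=0.99)
        print(f"v = {v:3d}: t_h = {t_h * 1e3:.1f} ms, mu = {mu}, total = {total:.1f} s")
```

Because connectivity grows monotonically with t_h for a fixed number of rounds, the first round duration that reaches the target also minimizes µ·t_h for that µ, which is why the inner loop can stop early.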
These optimizations were used for the analysis of the execution time of TMKTLS, in order to minimize the discovery phase. A target connectivity of 0.99 was set, and t_tx was set to 2.33 ms, which is the transmission time of the hello message (73 B) on the TelosB sensor node. Table 5 shows the values used for the TMKTLS discovery phase for different numbers of nodes. They were obtained with the method explained in this section; the actual round duration was rounded up to an integer number for implementation reasons.

B. EXECUTION TIME ANALYSIS
Figure 3 shows the total execution time of TMKTLS and STLS (neighbor discovery, pairwise, and acknowledgment phases) for networks with a number of nodes ranging from 4 to 16, adding 2 nodes each time; the target connectivity was set to the maximum, so that each node is able to communicate with every other node in the network. The values in this section are the average execution times over 5 experiments for each number of nodes.

The neighbor discovery phase of TMKTLS was optimized with the values reported in Table 5, while STLS uses the same approach but with different values, due to a longer hello message.

The pairwise phase may have a variable duration on different nodes because of an intrinsic variability of the ECDH algorithm (on TelosB motes, 2971 ± 33.9 ms). This requires that the acknowledgment phase be long enough to allow all nodes to finish at different instants and still have enough time to perform all the acknowledgments. Because of this, the acknowledgment phase duration was set to a value proportional to the ECDH variability and the number of nodes, plus a fixed time; in general it ranges from 5 to 8 seconds.

As can be seen, the TMKTLS execution time is drastically lower than that of STLS, for any number of nodes. The ratio between the two slopes is 0.35; this means that TMKTLS has an execution time that is about 35% of STLS. This result is mainly due to the pairwise phase duration, which is longer in STLS; for each neighbor node, TMKTLS performs one ECDH computation, while STLS executes one digital signature verification and one ECDH. Considering that the average ECDH execution takes 2971 ms, while verifying a certificate takes 5889 ms, computing 2971 / (5889 + 2971) gives 0.34, which confirms the previous analysis. The duration grows almost perfectly linearly with the number of nodes for both TMKTLS and STLS; this is because all three phases of the algorithms depend on the network density.

VI. CONCLUSION
In this paper, a new key management scheme for WSNs, TMKTLS, has been presented. It is based on a hybrid approach, in which a transitory master key is used to authenticate nodes in the network, while pairwise keys are created using public-key cryptography. The main benefits of TMKTLS are the lower computational overheads with respect to TLS-based approaches, and a higher level of resilience with respect to transitory master key schemes.

A comparative theoretical and experimental analysis showed that TMKTLS provides good security properties and that its computational overheads are compatible with real WSNs. Therefore, TMKTLS can be considered a valuable solution, especially for WSNs with strict security constraints. As future work, the proposed approach will be tested on a larger network.

FIGURE 2. Neighbor discovery minimum duration of TMKTLS according to the target connectivity.

TABLE 1. Resilience comparison. All the schemes provide the maximum of the minimum level.
TABLE 2. Recoverability comparison. All the schemes provide the maximum of the minimum level.

TABLE 5. TMKTLS neighbor discovery final settings according to the number of neighboring nodes.
2020-01-30T09:04:44.975Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "4613467436f64ffa2cc16aa428bf491e04da17d6", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/8948470/08967016.pdf", "oa_status": "GOLD", "pdf_src": "IEEE", "pdf_hash": "59ed6dfb16e921fb3ee12326b3706c01cee2ea56", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science" ] }
231613367
pes2o/s2orc
v3-fos-license
Hyperthyroidism in pregnancy: evidence and hypothesis in fetal programming and development The management of hyperthyroidism in pregnant patients has been a topic of raised clinical awareness for decades. It is a strong recommendation that overt hyperthyroidism of Graves’ disease in pregnant women should be treated to prevent complications. The consequences of hyperthyroidism in pregnancy are less studied than hypothyroidism, and a literature review illustrates that the main burden of evidence to support current clinical guidance emerges from early observations of severe complications in Graves’ disease patients suffering from untreated hyperthyroidism in the pregnancy. On the other hand, the more long-term consequences in children born to mothers with hyperthyroidism are less clear. A hypothesis of fetal programming by maternal hyperthyroidism implies that excessive levels of maternal thyroid hormones impair fetal growth and development. Evidence from experimental studies provides clues on such mechanisms and report adverse developmental abnormalities in the fetal brain and other organs. Only few human studies addressed developmental outcomes in children born to mothers with hyperthyroidism and did not consistently support an association. In contrast, large observational human studies performed within the last decade substantiate a risk of teratogenic side effects to the use of antithyroid drugs in early pregnancy. Thus, scientific and clinical practice are challenged by the distinct role of the various exposures associated with Graves’ disease including the hyperthyroidism per se, the treatment, and thyroid autoimmunity. More basic and clinical studies are needed to extend knowledge on the effects of each exposure, on the potential interaction between exposures and with other determinants, and on the underlying mechanisms. Introduction Hyperthyroidism is the clinical state that results from an excessive production of thyroid hormones in the thyroid gland (1,2). It is a signature of the disease that the incidence of the different subtypes of hyperthyroidism varies with age (3). While toxic nodular goiter is the predominant cause of hyperthyroidism after the age of 50 years, the predominant cause of hyperthyroidism in patients younger than 50 years of age is Graves' disease (GD) (3). the 19th century, and considerations on the management in pregnant women, specifically, can be ascertained from the beginning of the 20th century with a main concern about adverse pregnancy outcomes in women suffering from severe, untreated hyperthyroidism (5). The use of antithyroid drugs (ATDs) for the treatment of hyperthyroidism was introduced in clinical practice in the 1940s and is currently the recommended treatment for the hyperthyroidism of GD in pregnant women (6). Clinical guidelines indisputably state that overt hyperthyroidism caused by GD in pregnant women should be treated to prevent maternal and fetal complications, however, the management is challenged by the potential risk of severe side effects associated with the treatment (1,2). Furthermore, a pertinent question is on the role of thyroid autoimmunity. Thus, the determination of causal factors for outcome of a pregnancy and offspring development in women suffering from the hyperthyroidism of GD is complex and hitherto not clarified in detail. 
In this review, we explore outcomes of hyperthyroidism in pregnancy with a focus on the underlying mechanisms and different exposures associated with the disease (hyperthyroidism per se, antithyroid drug treatment, and thyroid autoimmunity). We describe the hypothesis of fetal programming by maternal hyperthyroidism and supporting evidence from experimental and human studies, and we discuss methodological aspects and implications for scientific and clinical practice. The hypothesis of fetal programming Fetal programming is a concept within reproductive epidemiology that links exposures during fetal life with the later development of disease in the offspring. It has been described in relation to different maternal diseases and different mechanisms have been proposed, however, the overall hypothesis is analogous irrespective of the specific exposure and outcome (7). The concept is also known as 'fetal origin of adult diseases' (8), and the basic idea is that disturbances during fetal life can cause permanent alterations in the offspring that at a later point in time might predispose to the development of adverse outcomes. Many aspects are yet to be clarified considering the mechanisms, but growing evidence is linking the concept to epigenetic alterations (9). Different study designs are used to investigate the hypothesis. Experimental evidence is a classic determinant of causality as brought forward by Bradford Hill in the 1960s (10). In addition to such results, the main burden of evidence develops from observational human studies. The determination of causality in observational studies is a difficult task, and it is a challenge to distinguish the exposure of interest from other prenatal exposures and from the role of postnatal exposures during development (7). Considering fetal programming by maternal thyroid disease, the role of thyroid hormones during fetal development is a key mechanism (7). Thyroid hormones are important developmental factors (11). The fetal thyroid gland is increasingly able to synthesize thyroid hormones in the second half of a pregnancy, which emphasizes the importance of maternal thyroid hormones in the early pregnancy. Furthermore, the importance of maternal thyroid hormones in later pregnancy after the onset of fetal thyroid hormone production is evident from the measurement of thyroxine (T4) in cord blood from newborns with a defect in thyroid hormone synthesis (12). Thus, maternal thyroid function remains important to the fetus throughout the pregnancy. The transport of thyroid hormones from the mother to the fetus during a pregnancy and physiological alterations affecting maternal thyroid function should be considered. In the early pregnancy, the pregnancy hormone human chorionic gonadotropin (hCG) stimulates the maternal thyroid gland to an increased production of thyroid hormone, potentially balancing the extra need of thyroid hormones to supply both the mother and the fetus (13). Yet, another mechanism in the early pregnancy that tends to balance the effect of hCG is the type 3 deiodinase (DIO3) in placenta (13). This enzyme inactivates thyroid hormones by catalyzing the conversion of T4 to reverse T3 (rT3) and T3 to T2. Activity of DIO3 in placenta is apparent from the early pregnancy weeks in rats and in humans and is evident from the high rT3/T3 ratio seen in pregnant women (13). The activity of DIO3 is considered part of the reason why athyreotic women need a 50% increase in their Levothyroxine dose by the time they become pregnant (14). 
Thus, the activity of DIO3 is likely to explain the higher maternal TSH in the early pregnancy prior to the hCG-peak (13,15). In line with this thought, patients with DIO3 containing hemangiomas present with consumptive hypothyroidism and a high rT3/ T3-ratio (16). These findings suggest a delicate balance under strict hormonal control and propose clinically important impact of slight imbalance. Considering outcomes of maternal thyroid disease in pregnancy, the focus has especially been turned to hypothyroidism. The hypothesis of fetal programming by maternal hypothyroidism is biological plausible from experimental evidence and from the description of cretinism with profound mental and physical deficits in S L Andersen and S Andersen Hyperthyroidism in pregnancy R79 10:2 children born to mothers with severe hypothyroidism caused by iodine deficiency (7). Consequently, clinical guidelines unanimously state that overt hypothyroidism in pregnant women should be treated, whereas the management of smaller abnormalities in thyroid function such as subclinical hypothyroidism, and the entity of isolated low T4 (hypothyroxinemia) is unclarified (1,17). Turning from lack of maternal thyroid hormone to excess, it is similarly a strong and consistent recommendation that overt hyperthyroidism caused by GD should be treated in pregnant women (1,2). However, the hypothesis of fetal programming by maternal hyperthyroidism (Fig. 1) has gained less attention (1). It is likely that the association between thyroid activity and adverse outcomes of pregnancy and child development is u-shaped. Such dependency is seen for other prenatal exposures for example, maternal hemoglobin concentration in pregnancy and outcomes of pregnancy as well as environmental factors for example, iodine and iron intake (18). This offers a path to follow for describing the influence of maternal thyroid dysfunction on pregnancy outcomes. Hyperthyroidism and fetal brain development Thyroid hormones regulate numerous processes during early brain development including neuronal proliferation, migration, differentiation, synaptogenesis, and myelination (19). In addition to the development of brain structures, they also play a role in the regulation of the neurochemical environment. It sounds reasonable that the lack of thyroid hormones might disturb these processes, whereas it is less clear how an excessive production of thyroid hormones associated with hyperthyroidism could affect fetal development. We searched the PubMed database for original, experimental studies on fetal outcomes of maternal hyperthyroidism in pregnancy up until October 1, 2020, and this search identified 52 publications. By contrast, a search for hypothyroidism identified 247 publication, which illustrates the predominant focus on this entity. After review of the search results, we identified nine studies (20,21,22,23,24,25,26,27,28) that investigated the impact of maternal hyperthyroidism on fetal brain development in experimental animals (Table 1). Notably, all the studies reported one or more abnormal findings in the offspring after exposure to maternal hyperthyroidism. However, the findings were diverse. All studies used T4 for the induction of maternal hyperthyroidism, but the method of T4 administration differed between the studies and the timing of outcome assessment in the offspring ranged from gestational day 21 up until the third postnatal month ( Table 1). 
It is beyond the scope of this review to describe and discuss details regarding the design and methodology of studies in experimental animals. However, some considerations seem important to highlight when interpreting and including evidence from experimental studies in a clinical context. First, the age of an experimental animal and the duration of a pregnancy are not interchangeable with humans (29,30). Whereas the human pregnancy is on average 40 weeks, the length of a pregnancy is 22 days in rats and 19 days in mice (29). Furthermore, disparities exist regarding the postnatal age as compared to humans and among experimental animals, for example, rats and mice. Thus, the lifespan of laboratory rat is about 3 years, whereas it is about 2 years for laboratory mice (30). Considering these life spans in relation to human age, an age of 1/3/6 months in rats approximate 9/15/18 years of age in humans and an age of 1/3/6 months in a mice approximate 14/23/34 years of age in humans (30). Secondly, the timing and duration of the various neurodevelopmental stages are not completely synchronous in humans and in experimental animals (29). Furthermore, the structural and functional properties of different brain regions and organs are not identical. For example, the placentas of humans and rats show anatomical similarities with a discoid shape and hemochorial type of fetal-maternal interface, however, disparities exist regarding the histological structure and the function of the yolk sac (31). Finally, important considerations are on the assessment of brain development in humans and in experimental animals, respectively (32). Figure 1 The hypothesis of fetal programming by maternal hyperthyroidism. S L Andersen and S Andersen Hyperthyroidism in pregnancy R80 10:2 A commonly used marker in humans is the intelligence quotient (IQ). It is a standardized measure based on a subset of tests (33). Alternative markers of brain development in humans include structural abnormalities in the brain assessed using for example brain scans of the child at a certain age (34). Furthermore, information on diagnosis of neurodevelopmental diseases in the child can be used as a proxy for impaired brain development (7). As opposed to this, the assessment of brain development in experimental animals such as rats and mice commonly relies on histopathological examination and evaluation of gene expression and in addition to these markers, the performance of the animal in different test (e.g. maze) can be evaluated (Table 1). However, no measure of brain development in an experimental animal directly translates to IQ in humans (32). Furthermore, it is important to notice that many methodological considerations exist when the role of maternal thyroid disease in relation to fetal brain development is assessed in humans and in an experimental design. In humans, the risk of confounding is a particular concern in observational designs (7), and in experimental animals it has recently been discussed that the currently available models may not be sensitive enough to detect the neurodevelopmental abnormalities associated with different degrees of abnormal maternal thyroid function (32). Although the findings are diverse, evidence suggests that maternal hyperthyroidism in pregnancy may impair fetal brain development in experimental animals (Table 1) via alterations in the development and organization of neurons, in the neurochemical environment, and altered expression of different proteins in the brain. 
However, the human brain is more complex and slight developmental skewness may cause disturbances that are detectable in humans only. So, what do we know from human studies about brain development in children born to mothers with hyperthyroidism? Few studies addressed outcomes of brain development in the child in relation to maternal hyperthyroidism. In contrast, the number of studies that addressed the association between insufficient levels of maternal thyroid hormones and child brain development is considerable (1,17). A recent systematic review and meta-analysis identified nine observational studies on the association between maternal hyperthyroidism in pregnancy and neurodevelopmental diseases in Table 1 Experimental studies on maternal hyperthyroidism in pregnancy and brain development in the offspring. Author Year Animal S L Andersen and S Andersen Hyperthyroidism in pregnancy R81 10:2 the offspring including attention deficit hyperactivity disorder, autism spectrum disorder, epilepsy, and schizophrenia (35). Most of these studies were registerbased studies, which are typically large, but are hampered by the fact that the assessment of exposure in pregnancy is indirectly performed from hospital diagnoses and/or redeemed prescriptions of drugs. For each of the different outcomes, only two individual studies were included in a meta-analysis, and the combined measures showed a significant association between maternal hyperthyroidism and ADHD and epilepsy in the child (35). In another study (36), using a case-cohort design, the assessment of maternal hyperthyroidism was made from the measurement of thyroid function parameters in stored blood samples from the early pregnancy. In this study, a risk of epilepsy in the child was corroborated, but no association between maternal hyperthyroidism and ADHD in the child was seen (36). Notably, high circulating levels of thyroid hormones in patients with generalized resistance to thyroid hormone (mutation in the thyroid receptor β-gene) have been associated with a high occurrence of ADHD (37), providing a clue toward an association between hyperthyroidism and brain development. Furthermore, parallel observations in human and in rats have shown that fetal exposure to high maternal thyroid hormone levels is associated with persistent central resistance to thyroid hormones in adulthood, likely mediated via increased expression of the DIO3 in the brain (38). Hence, mechanisms of fetal programming may include offspring alterations in the hypothalamic-pituitary-thyroid hormone axis. Other outcomes of human fetal neurodevelopmental (child IQ and brain scans) are similarly rarely investigated in relation to maternal hyperthyroidism in pregnancy, but studies within different birth cohorts have evaluated the association between levels of TSH and free T4 in pregnancy and child IQ as well as child cortex and gray matter volume (33,34). The findings are not consistent, and many determinants are to be considered, but results provide clues of a possible u-shaped association. Hyperthyroidism and other outcomes of fetal development The critical role of thyroid hormones during brain development is an important concern, but the consequences of a disturbance in maternal thyroid function in pregnancy may extend beyond fetal brain development. Thyroid hormones are developmental factors and regulate numerous processes in many organs. Considering the hypothesis of fetal programming by maternal hyperthyroidism (Fig. 
1), one may speculate on other outcomes of fetal development that are not related to the brain. From the literature search, we identified seven experimental studies ( Table 2) that evaluated outcomes of maternal hyperthyroidism in pregnancy in relation to the development of other organ systems in the offspring not related to brain development (39,40,41,42,43,44,45). The studies were predominantly performed in rats, and the timing and type of outcome differed (Table 2). Thus, the studies assessed the development of genital organs, the cardiovascular system as well as bone and cartilage. Notably, all studies reported at least one abnormal finding, however, it appeared that some of the alterations were reversible for instance in the development of the bone ( Table 2). Considering human findings, only few studies investigated such other outcomes of fetal development. Studies from different birth cohorts have instigated blood pressure, BMI, total fat mass, and abdominal s.c. fat mass in children born to mothers with hyperthyroidism (46,47,48). Overall, results did not point toward associations except that lower maternal TSH levels associated with lower child BMI, fat mass, and diastolic blood pressure in one of the cohorts, in which no association with clinically diagnosed hyperthyroidism was seen (46). On the other hand, maternal hyperthyroidism as well as hypothyroidism has been associated with alterations in maternal body weight (48). Thus, it is a methodological challenge to distinguish the role of maternal thyroid disease from other BMI-related factors in the evaluation of fetal outcomes. Hyperthyroidism and pregnancy complications From the experimental and human studies reviewed previously that addressed the role of maternal hyperthyroidism in pregnancy in relation to fetal brain development and the development of other organ systems, it seems as if the strong and consistent clinical recommendation on treatment of overt hyperthyroidism caused by GD in pregnant women relies on other determinants. Thus, the main concern related to hyperthyroidism in pregnant women and the recommendation for treatment relate to the risk of complications during the pregnancy and/or at birth of the child and to a lesser extent on the evidence considering more long-term outcomes in the child. S L Andersen and S Andersen Hyperthyroidism in pregnancy R82 10:2 It has been clinically recognized for more than a century that maternal hyperthyroidism can seriously complicate a pregnancy (5). The evidence in humans arises from clinical case studies, and the description of pregnancy complications in women referred to a hospital for the management of Graves' hyperthyroidism in pregnancy. These reports from 1929 and onwards have substantiated a focus on the adverse effects of untreated or insufficiently treated hyperthyroidism in pregnant women with GD (5,49,50,51). Thus, it has been consistently shown that women who remained overtly hyperthyroid in pregnancy had a higher risk of pregnancy loss, preterm birth, low birth weight of the child, preeclampsia, and maternal heart failure. These early observations have later been corroborated in large observational studies including nonexposed controlled groups (52,53,54). On the other hand, subclinical hyperthyroidism has not been associated with a risk of pregnancy complications and no recommendation of treating this entity is proposed in clinical guidelines (55). 
It remains a pertinent question how the thyroid autoimmunity itself, via the presence of TRAb in GD patients, potentially affects the outcome of a pregnancy. A main clinical focus regarding TRAb exists in the second half of pregnancy after the onset of fetal thyroid hormone production, which introduces the risk of fetal and neonatal hyperthyroidism caused by TRAb from the mother. However, the distinct roles of high maternal thyroid hormone levels as compared to high levels of TRAb remain to be elucidated concerning pregnancy complications and the hypothesis of fetal programming. Antithyroid drug treatment Another determinant considering outcomes of maternal hyperthyroidism in pregnancy is potential side effects to the treatment. As recently reviewed in detail, a major focus and concern is on the risk of teratogenic side effects with the use of ATDs in early pregnancy (56). This focus has emerged from a series of large, observational studies published in the 2010s that reported a risk of birth defects after early pregnancy exposure to Methimazole (MMI), and lately also after Propylthiouracil (PTU). However, the pattern and severity of malformations strikingly differed between MMI and PTU exposure with the most severe malformations observed after early pregnancy treatment MMI. Thus, the recommendation is to use PTU in early pregnancy and to shift from MMI to PTU already when pregnancy is planned or as soon as it is detected (1,2). Even when several large observational studies point toward an association, one may yet speculate on the underlying mechanisms and determinants of causality. Only few studies so far included data to evaluate the existence of a biological gradient from the dose of the drug, but a large study from Korea showed that a higher cumulative dose of MMI was associated with a higher risk of birth defects (57). Further clues to causal determinants may arise from experimental evidence (10). Thus, we searched for experimental studies that investigated the risk of malformations after prenatal exposure to ATDs. We identified four studies (Table 3) Table 2 Experimental studies on hyperthyroidism in pregnancy and development of other organs in the offspring. Author Year Animal that investigated this exposure and outcomes in rats, mice, and frogs (58,59,60,61). Notably, the findings were diverse and in contrast to the findings in humans. Thus, in an experimental setting, MMI revealed adverse outcomes in the offspring in one of the four studies, whereas PTU associated with birth defects in the offspring in two of the three studies that examined this type of drug exposure (Table 3). We can only speculate on possible explanations for this disparity between experimental and human findings. Considering the types of malformations observed in humans after exposure to ATDs (56), it may be speculated that the less severe malformations seen after PTU exposure are not detectable in the rat (e.g. preauricular sinus) and similarly with some of the malformations observed after MMI exposure (e.g. aplasia cutis). Furthermore, the morphological differences between the human and the rat placenta mentioned previously may influence the evaluation of toxicological effects (31). Rate of organ development in different animals and in comparison with humans as well as dose dependency may differ and influence the risk of developmental defects. Finally, in human studies it is often difficult to distinguish between the role of hyperthyroidism per se, the treatment, and the thyroid autoimmunity. 
This may also be the case in experimental animals since the treatment with ATDs may induce hypothyroidism. The opposite may also be the case and could challenge experimental studies on the role of maternal hypothyroidism per se. Thus, maternal hypothyroidism in an experimental animal is typically induced from treatment with ATDs. Consequently, interpretation of the findings is a difficult task in an experimental design as much as in observational human studies. Concluding remarks It has long been recognized that overt hyperthyroidism of GD in pregnant women should be treated to prevent complications (Table 4). This review highlights that evidence on the adverse effects of untreated or insufficiently treated hyperthyroidism is predominantly obtained from early clinical observations. In these reports and subsequent larger observational studies, a higher risk of pregnancy complications has consistently been reported if the disease is left untreated. Thus, it is well-established and in line with current recommendations that the disease should be treated in pregnant women. On the other hand, it is noteworthy that the burden of evidence from experimental studies and from observational human studies on postpartum and long-term outcomes in the offspring is limited as compared to maternal hypothyroidism in pregnancy. The experimental evidence provides some clues on potential adverse effects in the fetus associated with maternal hyperthyroidism, indicating that perhaps the association between maternal thyroid hormone levels in pregnancy and fetal development is u-shaped. Still, many aspects remain unclarified regarding the underlying mechanisms. In experimental animals as well as in humans, difficulties exist on the distinction between the different exposures that constitute parts of the autoimmune entity of GD. Furthermore, methodological aspects on outcome assessment apply to both settings and adds to difficulties in the comparison of experimental and human findings. As discussed, this is apparent from the inconsistency between experimental and human findings considering teratogenic side effects to the use of ATDs. Nevertheless, to inform clinical practice it is crucial to encourage future studies, basic as well as clinical, to address the distinct role of hyperthyroidism per se, the treatment, and the Table 3 Experimental studies on antithyroid drug exposure in pregnancy and outcomes in the offspring. S L Andersen and S Andersen Hyperthyroidism in pregnancy R84 10:2 autoimmunity (Table 4). To move forward from here, it seems crucial to determine the underlying mechanisms by each exposure and potential interaction between and with other maternal characteristics. Such focus can at the same time provide important guidance on potential targets and possibilities for new treatments with less severe side effects. Table 4 Key points and implications for future studies. Hyperthyroidism in pregnancy • Graves' hyperthyroidism is an autoimmune disorder. • Graves' hyperthyroidism should be treated in pregnant women. • Untreated Graves' disease is associated with pregnancy complications. • Hyperthyroidism in pregnancy is less studied than hypothyroidism. • Experimental studies provide clues on a fetal programming effect. • Human studies are sparse and provide no definite conclusions. • The programming role of thyroid autoimmunity is unclarified. • Teratogenic side effects to antithyroid drugs pose a challenge. 
Implications for future studies
• enhance the understanding of underlying mechanisms
• address the role of hyperthyroidism, treatment, and autoimmunity
• assess different short- and long-term outcomes in the offspring
• consider mechanisms to support the development of new treatments
2021-01-16T06:16:26.643Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "b877d810878a7a0bfef277c7c62cefa897affbc6", "oa_license": "CCBYNCND", "oa_url": "https://ec.bioscientifica.com/downloadpdf/journals/ec/10/2/EC-20-0518.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b40f848bb9e87e7cff26fb7120fd5530808fa0d7", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
119608328
pes2o/s2orc
v3-fos-license
Differently knotted symplectic surfaces in D^4 bounded by the same transverse knot

In this paper we show that there are two symplectic surfaces in the 4-ball which bound the same transverse knot, have the same topology (as abstract surfaces), and are distinguished by the fundamental groups of their complements.

Introduction
This paper is concerned with symplectic surfaces in the four-dimensional ball. More precisely, we consider embedded connected surfaces S ⊂ D^4 whose boundary ∂S = S ∩ ∂D^4 ⊂ S^3 is a transverse knot. We prove:

Theorem. There are two symplectic surfaces S_1 and S_2 which bound the same transverse knot, have the same topology (as abstract surfaces), and such that π_1(D^4 \ S_1) is not isomorphic to π_1(D^4 \ S_2).

This builds on a previous example ([2], §5) of surfaces bounding the same transverse link, which had different topology (one was connected, and the other was not). We add the same piece to both these examples to construct ours.

1.1. Acknowledgments. This work was done under the mentorship of Paul Seidel and supported by a grant from the Massachusetts Institute of Technology.

Background
It is well-known [4] that the closure of any braid β ∈ Br_m is naturally a transverse link. This has been used to construct examples of transverse links which lie in the same topological isotopy class, but are not isotopic as transverse links [5]. A factorization of β is an expression β = σ_1 σ_2 ⋯ σ_k, where each σ_i is conjugate to one of the standard Artin generators of Br_m. Every factorization of β describes a symplectic surface S ⊂ D^4 whose boundary is the transverse link associated to β (see e.g. [3]). S is connected if and only if the images of σ_1, . . . , σ_k in the symmetric group Sym_m act transitively. The Euler characteristic of S is m − k; hence the topological type of the abstract surface S depends only on β, and not on the factorization.

There is a well-known method [8] for computing a presentation of π_1(D^4 \ S), as a quotient of the fundamental group of the m-punctured disc, which is the free group F_m = ⟨x_1, . . . , x_m⟩. Every word σ_i in the factorization yields a relation. This is best represented graphically as follows. Conjugates of the standard Artin generators correspond bijectively to embedded paths in the disc (up to isotopy) joining two of the punctures. Given any such path, one fattens it into a figure-eight loop enclosing the punctures at its endpoints. A path γ joining two punctures p_i and p_j thus yields a relation expressed in terms of c_i, c_j, and γ′, where c_i and c_j are counterclockwise circles around the punctures and γ′ is an appropriate segment of γ. This identifies a conjugacy class in F_m, any element of which can be taken as the relation implied by this path. For consistency or ease of computation we may choose any convenient basepoint to determine what this relation is.

For example, let a, b, and c denote the Artin generators in Br_4. The word (ac)b(ac)^{-1} yields the figure-eight loop shown in the accompanying figure, with the basepoint marked.

From factorization (2), we obtain relations among the generators x_1, . . . , x_4. Given these relations, the relations arising from the rest of the factors simplify to the identity. With x_1 = x_2 = x_3 = x_4, π_1(D^4 \ S_1) must be Z.

From factorization (3), we obtain three relations; the last of them can be rewritten, using x_2 = x_3, in a more convenient form. As with the previous example, the remaining factors yield relations that simplify to the identity given these three. Thus π_1(D^4 \ S_2) = Br_3.

A Note on Double Branched Covers of D^4
Given any connected surface S ⊂ D^4, one can form the double cover of D^4 branched along S, which we denote by M.
If S is symplectic and its boundary is a transverse link, M is an exact symplectic manifold with contact type boundary [3]. Take the examples S_1, S_2 from the previous section, and let M_1, M_2 be the associated double branched covers. It is easy to see that π_1(M_1) is trivial. In the second case, we have a homomorphism π_1(D^4 \ S_2) ≅ Br_3 → Sym_3, which restricts to π_1(D^4 \ S_2)′ → A_3 ≅ Z/3. Since x_2 goes to zero under this, we get an induced homomorphism π_1(M_2) → Z/3. One easily checks that this is surjective. Hence, M_1 and M_2 are two different exact symplectic fillings of the same contact three-manifold ∂M_1 = ∂M_2.

Closing Remarks
Numerical evidence suggests that, for odd s ≥ 3, using a^s b a^{-s} in place of a^3 b a^{-3} makes π_1(D^4 \ S_2) = ⟨x, y | x^2 = y^s⟩ while π_1(D^4 \ S_1) remains Z. These groups are distinguished from each other by the number of homomorphisms from them to the dihedral groups. Electronic resources (Python code) for reproducing the numerical results can be found online at http://www-math.mit.edu/~seidel/geng/.
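The homomorphism counts mentioned in the closing remarks are easy to reproduce by brute force. The sketch below is an illustrative re-implementation, not the code hosted at the address above: it represents the dihedral group D_n (of order 2n) explicitly and counts homomorphisms by enumerating images of the generators; the function names are ours.

```python
# Count homomorphisms into the dihedral group D_n (order 2n), to distinguish
# Z from <x, y | x^2 = y^s>. Illustrative sketch, not the authors' code.
from itertools import product

def dihedral_elements(n):
    # Element (r, f): rotation by r, followed by f reflections (f in {0, 1}).
    return [(r, f) for r in range(n) for f in (0, 1)]

def mul(a, b, n):
    # In D_n a reflection conjugates a rotation to its inverse.
    (r1, f1), (r2, f2) = a, b
    return ((r1 + (r2 if f1 == 0 else -r2)) % n, f1 ^ f2)

def power(a, k, n):
    result = (0, 0)  # identity
    for _ in range(k):
        result = mul(result, a, n)
    return result

def hom_count_one_relator(s, n):
    """Homomorphisms <x, y | x^2 = y^s> -> D_n: pairs (x, y) with x^2 = y^s."""
    elems = dihedral_elements(n)
    return sum(1 for x, y in product(elems, repeat=2)
               if power(x, 2, n) == power(y, s, n))

def hom_count_Z(n):
    """Homomorphisms Z -> D_n: one for each choice of image of the generator."""
    return 2 * n

if __name__ == "__main__":
    s = 3
    for n in range(3, 9):
        print(n, hom_count_Z(n), hom_count_one_relator(s, n))
```

For suitable n the two counts differ, which is the kind of numerical evidence referred to above.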
2010-07-28T17:04:06.000Z
2010-07-21T00:00:00.000
{ "year": 2010, "sha1": "83818c2420eccd75eb1a482ada220e8396fe2fc9", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "83818c2420eccd75eb1a482ada220e8396fe2fc9", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
265066274
pes2o/s2orc
v3-fos-license
Laparoscopic partial versus radical nephrectomy for localized renal cell carcinoma over 4 cm Purpose To compare the long-term clinical and oncologic outcomes of laparoscopic partial nephrectomy (LPN) and laparoscopic radical nephrectomy (LRN) in patients with renal cell carcinoma (RCC) > 4 cm. Methods We retrospectively reviewed the records of all patients who underwent LPN or LRN in our department from January 2012 to December 2017. Of the 151 patients who met the study selection criteria, 54 received LPN, and 97 received LRN. After propensity-score matching, 51 matched pairs were further analyzed. Data on patients’ surgical data, complications, histologic data, renal function, and survival outcomes were collected and analyzed. Results Compared with the LRN group, the LPN group had a longer operative time (135 min vs. 102.5 min, p = 0.001), larger intraoperative bleeding (150 ml vs. 50 ml, p < 0.001), and required longer stays in hospital (8 days vs. 6 days, p < 0.001); however, the level of ECT-GFR was superior at 3, 6, and 12 months (all p < 0.001). Similarly, a greater number of LRN patients developed CKD compared with LPN until postoperative 12 months (58.8% vs. 19.6%, p < 0.001). In patients with preoperative CKD, LPN may delay the progression of the CKD stage and even improve it when compared to LRN treatment. There were no significant differences between the two groups for OS, CSS, MFS, and PFS (p = 0.06, p = 0.30, p = 0.90, p = 0.31, respectively). The surgical method may not be a risk factor for long-term survival prognosis. Conclusion LPN preserves renal function better than LRN and has the potential value of significantly reducing the risk of postoperative CKD, but the long-term survival prognosis of patients is comparable. Supplementary Information The online version contains supplementary material available at 10.1007/s00432-023-05487-3. Introduction Renal cell carcinoma (RCC), the most common type of urogenital cancer, accounts for approximately 2-3% of all malignant tumors and is more commonly seen in men than women (Motzer et al. 2022;Siegel et al. 2023).Based on cancer statistics report by the American Cancer Society, the incidence of RCC in both men and women in the United States has increased by about 1% per year since the mid-twentieth century (Siegel et al. 2023).Despite a steady increase in incidence, the mortality rate for RCC was reduced by about 2% per year from 2016 to 2020, which may be related to the early diagnosis and the increased nephrectomy rate (Medina-Rico et al. 2018;Motzer et al. 2022;Siegel et al. 2023). For localized RCC, surgical treatment is preferred.With the application and development of laparoscopic techniques, laparoscopic partial nephrectomy (LPN) is recommended for patients with T1a (≤ 4 cm) tumors in the American Urological Association (AUA) guideline (Liss et al. 2014;Campbell et al. 2021).For larger renal tumors (> 4 cm), laparoscopic radical nephrectomy (LRN) is still the standard treatment (Rini et al. 2009;Lee et al. 2014), but in selected cases, the long-term efficacy of LPN is similar to that of LRN, and LPN has a better effect on the preservation of renal function (Ching et al. 2013;Tuderti et al. 2022Tuderti et al. , 2023)). However, the "optimal" surgical treatment for renal tumors > 4 cm is still disputed, and not all studies indicate that LPN is preferable to LRN.As exhibited in some studies, compared with the LRN group, LPN had a higher risk of complications such as bleeding (Mir et al. 
2017).In terms of prognosis, a prospective randomized-controlled study, EORTC 30904, found that LPN was not superior to LRN in patients with overall survival (OS) (Scosyrev et al. 2014).Based on the regression analysis of competing-risks data, after mortality rates from other causes are adjusted, there is no statistically significant correlation between nephrectomy type and cancer-specific mortality rate (Meskawi et al. 2014).Based on these debates, there have been no definitive conclusions made about the role of LPN for RCC > 4 cm. Thus, this study compared the long-term clinical and oncologic outcomes between LPN and LRN, so as to provide references for the clinical treatment of RCC. Study population This retrospective study collected information on patients with RCC who had their initial surgery, either LPN or LRN in Zhejiang Provincial People's Hospital from January 2012 to December 2017.The study protocol conforms to the ethical guidelines of the 2013 Declaration of Helsinki.The informed consent processes had been approved by the ethics committees in Zhejiang Provincial People's Hospital (Approval No. 2021QT082) before the study started. The inclusion criteria were: (1) meeting the 2023 NCCN guideline, radiological, or histological diagnostic criteria of RCC before surgery; (2) normal renal function on the healthy side, with a solitary tumor on the affected side; (3) undergoing LRN or LPN with localized renal masses measuring > 4 cm.(4) no obvious abdominal mass or another advanced renal carcinoma. The exclusion criteria were: (1) presenting contralateral solitary, atrophic, or congenital absent kidney; (2) history of any other kind of tumors or recrudescent tumors of the kidney; (3) distant metastasis; (4) severe disease of the heart, lungs, kidneys, brain, blood, or other vital organs before operation; (5) complicated with systemic disease; (6) performed open surgery, interventional surgery, or non-surgical treatment; (7) performed other surgeries at the same time; and (8) incomplete clinical data. Data collection and follow-up The baseline date was defined as the date RCC patients underwent their initial surgery.Patients' demographics, clinical characteristics, tumor localization, tumor size, tumor stage, preoperative American Society of Anesthesiologists (ASA) score, and test results of biochemistry, and ultrasonography were collected.Meanwhile, information regarding operation time, ischemia time, postoperative pathology (Sup.1), positive surgical margin, intraoperative complications, postoperative complication score, and hospital stay was also collected.The postoperative complication score was evaluated on the basis of the Clavien-Dindo Classification of Surgical Complications (Dindo et al. 2004). 
Kidney function evaluation included SCr and emission computed tomography for glomerular filtration rate measurement (ECT-GFR, ml/min) preoperatively and before discharge and at postoperative 3 months, 6 months, and 12 months (Sup.2).The follow-up date was defined as the specific time of follow-up every 3 months after the patient was discharged from the hospital.The patients were followed every 3 months within months 6, semi-annual for up to 3 years, annually thereafter for survival and disease status, including sites of first recurrence and post-recurrence survival.Endpoint events were defined as recurrence, metastasis, and death during follow-up.According to the collected data, the occurrence of end-point events in RCC patients treated with LPN and LRN during the follow-up period was statistically analyzed, as were the related influencing factors. Surgical technique Surgical procedures for all patients were performed the standard transperitoneal approach.Endotracheal intubation and general anesthesia were carried out on patients under a lateral position with the normal side down to fully expose the surgical field of the affected side.With a Trocar approach, a laparoscope was inserted to make pneumoperitoneum to expand the surgical space.The Gerota fascia was opened with an ultrasonic scalpel to completely dissociate the kidney on the affected side from bottom to top, so as to expose renal tumors.For LPN, the renal artery was blocked with a bulldog clamp and the tumor was resected with a margin of 0.5-1.0cm of healthy renal parenchyma.And for LRN, after the renal artery on the affected side was blocked, the kidney, the surrounding tissue, and the fascia were completely excised, and then, the ruptured blood vessels were sutured.The pneumoperitoneum was closed after there was no abnormality, the Trocar was withdrawn, and the drainage tube was placed beside the kidney.Finally, the puncture was closed to finish the surgery. The outcome assessment The primary outcomes were the effects of different surgical methods on renal function and the incidence of chronic kidney disease (CKD), which was defined as GFR < 60 ml/min over 3 months.Meanwhile, the long-term survival prognosis and influencing factors of the patients were also observed. The secondary outcomes included the following: (1) operative time, intraoperative bleeding, and postoperative hospital stay, (2) incidence rate of postoperative complications, and (3) postoperative pathological features. Statistical analysis Continuous variables are expressed as mean ± SD or median (IQR), while categorical variables are presented as n (%).Qualitative and quantitative differences between the two groups were analyzed by the Chi-square or Fisher's exact test for categorical variables, and the Student's t test or Mann-Whitney U test for continuous variables, as appropriate. 
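For readers who want to mirror this comparison strategy, a minimal sketch is given below; it is not the analysis code used in the study, and the file name, column names, and the normality check used to choose between the t test and the Mann-Whitney U test are assumptions made purely for illustration.

```python
# Illustrative sketch of the two-group baseline comparisons described above.
# Column names and the CSV layout are assumptions; this is not the study code.
import pandas as pd
from scipy import stats

df = pd.read_csv("rcc_cohort.csv")          # one row per patient (assumed file)
lpn = df[df["group"] == "LPN"]
lrn = df[df["group"] == "LRN"]

# Categorical variable: chi-square test, falling back to Fisher's exact test
# when expected counts are small (shown here for a 2x2 table).
table = pd.crosstab(df["group"], df["cTNM_stage"])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
if (expected < 5).any() and table.shape == (2, 2):
    _, p_cat = stats.fisher_exact(table)
else:
    p_cat = p_chi2

# Continuous variable: t test if roughly normal, otherwise Mann-Whitney U.
x, y = lpn["operative_time"].dropna(), lrn["operative_time"].dropna()
if stats.shapiro(x).pvalue > 0.05 and stats.shapiro(y).pvalue > 0.05:
    p_cont = stats.ttest_ind(x, y, equal_var=False).pvalue
else:
    p_cont = stats.mannwhitneyu(x, y, alternative="two-sided").pvalue

print(f"categorical p = {p_cat:.3f}, continuous p = {p_cont:.3f}")
```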
To minimize the biasing effects of confounders, a 1:1 caliper width of 0.2 for the propensity-score matching (PSM) analysis was performed on the following variables: tumor size, cTNM stage, preoperative Hb, and SCr.Pairs of patients were selected using the "nearest-neighbor" matching method.The log-rank test was used to compare the Kaplan-Meier estimates for overall survival (OS), cancerspecific survival (CSS), metastasis-free survival (MFS), and progression-free survival (PFS).OS was calculated from the date of surgery to the date of death for any reason; CSS was calculated from the date of surgery to the date of death caused by RCC; MFS was calculated from the date of surgery to the date of the first-time tumor metastasis, or the death of any cause; PFS was calculated from the date of surgery to the date of disease progression, or death.For the univariate and multivariate analyses, the Cox proportional hazard regression model was applied to estimate the prognostic risk factors of patients. Patients' enrollment and baseline characteristics During the study period, 207 patients with RCC > 4 cm were screened retrospectively for eligibility at Zhejiang Provincial People's Hospital.After excluding 56 patients for various reasons, 151 patients were enrolled.In this study, the PSM analysis was further used to match the baseline of the two groups.After matching, a total of 102 people were included in the final study, with 51 people in each group.Of the remaining 102 patients, 92 completed the follow-up with outcome data collected (Fig. 1). The patients' baseline characteristics before and after propensity-score matching are shown in Table 1.Before propensity-score matching, more patients in the LRN group had a significantly larger tumor size (6 cm vs. 5 cm) and higher cTNM stage than the LPN group.In addition, some test results of biochemistry like Hb (p = 0.036) and SCr (p = 0.008) before surgery also had differences between the two groups.After matching, all items were seen to have no significant statistical difference when comparing both groups (p > 0.05). The comparison of surgery-related indicators Patients treated with LPN had longer operation time (135 min vs. 102.5 min, p = 0.001), hospital stay (8 days vs. 6 days, p < 0.001), and larger intraoperative bleeding (150 ml vs. 50 ml, p < 0.001) in comparison to patients receiving LRN.Other items like a positive surgical margin, intraoperative or postoperative complications, and even postoperative histology were seen to have no significant statistical difference.More details are shown in Table 2 and Tables S1-S2. Efficacy of different surgery on postoperative renal function The renal function tests were remarkably improved in the LPN group, as shown in Fig. 2.There was no meaningful statistical difference in the average preoperative SCr levels between the LRN and LPN groups.However, before discharge and at the following 3rd, 6th, and 12th months, the mean postoperative SCr values in the LPN group were significantly lower than those in the LRN group (all p < 0.001) (Fig. 2a).In the LRN group, we saw a significant increase from the baseline in the mean SCr at each subsequent follow-up (all p < 0.001).Overall, the mean difference in SCr levels between the two groups with a 95% confidence interval is shown in Fig. 
2b, c.The net increases of SCr in the LRN group were significantly higher compared with the LPN group after the operation (all p < 0.001).In contrast to LRN, the level of ECT-GFR was superior in LPN at the postoperative time of 3 months, 6 months, and 12 months (all p < 0.001) (Fig. 2d-f). In addition, we also assessed the rate of chronic kidney disease (CKD: GFR < 60 ml/min) and confirmed the results of CKD rate in the LPN group to be significantly lower on the following 3rd, 6th, and 12th (all p < 0.001) months (Fig. 3a, Table S3).We plotted the Sankey diagram to represent the change of GFR from baseline to the following 12 months, which showed remarkably more favorable results in the LPN group (Fig. 3b, c).Additionally, as shown in Fig. 3, patients with preoperative GFR ≥ 60 ml/ min in the LPN group rarely experienced renal function deterioration or CKD, even remarkably improving in the following 12 months after the operation.Most patients with preoperative GFR ≥ 60 ml/min in the LRN group developed CKD and did not improve at the last follow-up in patients with preoperative CKD. Comparison of survival between the two groups The follow-up deadline was set for February 25, 2023.Ten patients (3 cases in the LPN group and 7 cases in the LRN group) were lost to follow-up, and the loss rate was 9.8%.Finally, 92 patients completed long-term follow-ups.The median duration for follow-up completion was 88 months for the LPN group and 98 months for the LRN cohort.While completing the follow-ups for the study, 5 (11.4%) patients in the LRN cohort and no patients in the LPN group died (p = 0.022).In the LRN group, only 1 patient experienced cancer-related deaths, and the other 4 patients died because of non-cancer-related causes (p = 0.360).Both groups had 3 patients who experienced local recurrence or developed distant metastatic diseases, respectively (p > 0.99).The patients with metastases underwent surgical resection or Discussion Our current study shows that in patients with RCC > 4 cm, LPN has better renal function and a comparable oncology outcome than the LRN group.Additionally, the surgical method in our study may not be a risk factor for long-term survival.8.00 (6.00, 10.00) 6.00 (5.00, 7.00) < 0.001 a In this study, after adjusting for differences in baseline characteristics through PSM analysis, 102 patients with RCC > 4 cm were included, of whom 51 received LRN treatment and the remaining 51 received LPN treatment.To our knowledge, although we are not the first study to report on the surgical treatment of patients with RCC > 4 cm, we have made some additional findings and supplements in terms of changes in postoperative long-term renal function, particularly CKD, and the influencing factors of patient survival prognosis.Furthermore, the study's follow-up time was adequate, with an average follow-up time of 7.5 years. The basic principles of nephron-sparing surgery (NSS) for RCC are complete excision of the lesion with negative margins while preserving as many viable renal parenchymas as possible (Deklaj et al. 2010;Alyami and Rendon 2013).It emerged that the LPN cohort was superior in both levels of SCr and its changes, as well as ECT-GFR, as shown by the previous studies (Kaushik et al. 2013;Larcher et al. 2016;Cai et al. 2018;Yang et al. 
2020).The operation requires temporary blockade of the renal artery during surgery, which can cause ischemia-reperfusion injury in the kidney, and with the extension of renal warm ischemia time (WIT), it will cause irreversible damage to the remaining renal function.To preserve remaining renal function and reduce blood loss, preoperative superselective transarterial embolization (STE) was first described by Gallucci et al. as an option to perform LPN without hilar clamping (Gallucci et al. 2007).Our findings indicated that patients who underwent LPN had negative surgical margins, with a median warm ischemia time (WIT) of 20 min and a maximum WIT of 30 min, and 80.4% (41/51) of them had a WIT time < 25 min.This suggested that as surgical technology advances and surgeons gain experience, WIT shortens, ischemia injury to residual kidneys is minimized, and LPN preserves more postoperative nephrons in patients and plays an even greater protective role in renal function (Gallucci et al. 2007;Simone et al. 2009;Rajan et al. 2016;Mehra et al. 2019;Takeda et al. 2020). The protection of renal function after LPN depends on the preoperative GFR, WIT, and the amount of renal parenchyma preserved after the operation (Thompson et al. 2012;Rogers et al. 2013).These determinants in turn influence the development of CKD, which is a recognized risk factor for anemia, hypertension, malnutrition, and neurological diseases (Huang et al. 2006).It is associated with poorer quality of life in patients, a higher risk of hospitalization, the occurrence of cardiovascular events, and death.Notably, data on patients further demonstrated fewer CKD after LPN procedures, which showed agreement with the available literature (Kaushik et al. 2013;Larcher et al. 2016).However, unlike previous studies, we also included patients with preoperative CKD and stratified them according to their stage.After visualizing the data with the Sankey diagram, it was found that the patients with GFR < 60 ml/min in both groups were similar before surgery (12 vs. 11).Until the 12th month followup, only 1 patient in the LPN group developed to CKD stage IV, while 6 patients showed improvement in ECT-GFR.In the LRN group, 2 patients developed stage IV, and 1 patient showed improvement in ECT-GFR.Though there was no statistical difference in postoperative progression (p = 0.590) and improvement (p = 0.069) between the two groups of preoperative CKD patients, LPN may have protected postoperative renal function.Combining the results of others' (Britton et al. 2022), we believe that LRN was more likely to result in a decline in GFR and CKD stage progression compared to LPN for patients with preoperative GFR < 60 ml/min.Thus, it is reasonable to persuade that LPN can prevent or delay renal, cardiovascular, and other debilitating systemic impairments by providing the benefit of preserving renal function. Another contentious matter for LPN and LRN is the oncological outcomes, which play a major role in the improvement of life expectancy.Kopp et al. (2014) revealed that there was no difference in survival for LRN versus LPN by Kaplan-Meier analysis for OS, CSS, or PFS.Similarly, our research found that there was no statistical difference, although the LPN group offered a superior OS (p = 0.060) Fig. 
4 Kaplan-Meier analysis of overall survival (A, p = 0.06), cancer-specific (B, p = 0.30), metastasis-free (C, p = 0.90), and progression-free survival (D, p = 0.31) for patients after LPN or LRN for RCC > 4 cm than patients undergoing LRN. However, compared to previous reports, we did not find that the improvement of renal function after LPN translated into better survival, such as CSS, MFS, or PFS (Scosyrev et al. 2014; Tobert et al. 2014; Jang et al. 2016). Interestingly, we found that low preoperative Hb and high preoperative LDH were risk factors for OS. Similarly, low preoperative Hb has been considered a risk factor for OS in patients with RCC, especially those with renal venous cancer embolus (Abel et al. 2017; Peng et al. 2018). Hb, along with serum calcium ion and alkaline phosphatase (ALP) levels, is also considered to be a risk factor for advanced bone metastasis in patients with RCC (Hu et al. 2020; Kaul et al. 2021). In addition, previous studies have shown that a higher serum LDH level is a poor prognostic factor for PFS (HR = 1.74, 95% CI 1.48-2.04, p < 0.001) (Shen et al. 2016). Besides, LDH has been shown to be an independent prognostic marker for patients with metastatic RCC (Shen et al. 2016). However, this study indicated that higher preoperative LDH levels were linked with OS in univariate analysis but not in multivariate analysis. LDH has previously been confirmed to participate in the glycolysis of tumor cell metabolism, provide energy for tumor cells, directly inhibit apoptosis, help tumor cells avoid necrosis in a hypoxic environment, and thus promote tumor growth (Mir et al. 2017). Hence, it can be postulated that the observed disparity may be attributed to the influence of patients' nutritional status and other diseases on LDH levels. Therefore, we boldly speculate that the use of LPN may not depend on the clinical stage or the size of the tumor, but on the individual patient characteristics and the technical ability of the physician to remove the tumor. In our study, postoperative histology was also an independent factor of OS. Its impact on survival is still controversial. A large study from China examined the effect of postoperative pathologic types on the prognosis of patients with RCC, using the SEER database as a reference (Shao et al. 2021). The results showed that, among the 1346 patients included, clear cell renal cell carcinoma in the Huaxi database offered a superior OS compared with papillary renal cell carcinoma (Shao et al. 2021). However, the opposite is true in the SEER database: papillary renal cell carcinoma had a better OS than clear cell renal cell carcinoma. Our results are similar to those found in the SEER database (Fig. S1). Due to the limited sample size, some biases may have been introduced in our study. The effects of postoperative histology on survival still need to be studied with a large sample size. In addition, the study also focused on intraoperative indicators and postoperative complications. We tend to use the transperitoneal approach to provide a larger working space during surgery (Takagi et al. 2021). The median operation time for the LRN cohort was significantly shorter than that of the LPN group (102.5 min vs. 135 min, p = 0.001). This is expected, as the technical procedures for LPN are more challenging than those for LRN for large renal masses, since LRN does not require suturing of the kidney (Yang et al. 2020). Similar to previous reports (Deklaj et al. 2010; Rinott Mizrahi et al. 2018), we also found greater intraoperative bleeding in the LPN group (150 ml vs. 50 ml, p < 0.001). Importantly, there was no increase in the incidence of either intraoperative or postoperative complications in patients who had LPN when compared with patients who received LRN. Obviously, for large tumors, LPN has limitations such as reduced operating space, increased operating difficulty, and a larger surgical wound surface, which increase the intraoperative suture time, intraoperative bleeding, and even the hospital stay.
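As a rough, illustrative sketch of the survival workflow described above (Kaplan-Meier curves with a log-rank test, plus univariate and multivariable Cox models for OS), the Python snippet below uses the lifelines package on a hypothetical per-patient table; the file name, column names, and the coding of histology and surgical group are assumptions for illustration, not the authors' actual data or code.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical cohort table: one row per patient with follow-up in months,
# a death indicator, and candidate predictors (Hb, LDH, histology, group).
df = pd.read_csv("cohort.csv")
lpn, lrn = df[df.group == "LPN"], df[df.group == "LRN"]

# Kaplan-Meier curves for OS by surgical group, compared with a log-rank test.
km = KaplanMeierFitter()
ax = km.fit(lpn.os_months, lpn.death, label="LPN").plot_survival_function()
km.fit(lrn.os_months, lrn.death, label="LRN").plot_survival_function(ax=ax)
print("log-rank p:", logrank_test(lpn.os_months, lrn.os_months,
                                  lpn.death, lrn.death).p_value)

# Multivariable Cox model for OS with a few candidate risk factors.
cox = CoxPHFitter()
cox.fit(df[["os_months", "death", "hb", "ldh", "papillary_histology", "lpn_group"]],
        duration_col="os_months", event_col="death")
cox.print_summary()  # hazard ratios with 95% confidence intervals
```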
Our study has some limitations.First, as a result of matching the majority of patients in the original sample using PSM analysis, the sample size was insufficient to fully utilize the collected data.Second, the baseline did not include indicators like the R.E.N.A.L. score to quantify tumor location and other anatomic features and complexity due to retrospective study data limitations.Furthermore, certain patients were lost to follow-up during the procedure of long-term follow-up.Due to the insufficient number of patients with RCC > 7 cm, further analysis is also not feasible.Thus, it is imperative to conduct prospective, controlled, and randomized trials that incorporate a more extensive database and have an extended follow-up period in order to investigate long-term clinical and survival outcomes in greater depth. In conclusion, we validated that both LPN and LRN were safe and effective for patients with localized RCC > 4 cm.The surgical methods did not affect the survival prognosis of patients.LPN effectively protects renal function and has the potential to drastically reduce the risk of postoperative CKD.However, the risk of intraoperative and postoperative bleeding, as well as a longer hospital stay for LPN, must still be considered.In this study, we recommend LPN as a treatment option for patients with localized RCC > 4 cm, but the choice of LPN should be based on individual patient characteristics, surgeon expertise, and technological feasibility. need to obtain permission directly from the copyright holder.To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Fig. 2 Fig. 2 Comparison of renal function before and after operation between the two groups.A Mean SCr value at baseline and follow-up in patients in the LPN and LRN group.The differences in SCr value in these 2 groups at each time point were compared using Student's t tests.B Comparison of postoperative SCr values changes (95% confidence interval) between the LPN and LRN groups.C Comparison of the percentage change (95% confidence interval) of SCr after surgery between LPN and LRN groups.D Mean ECT-GFR value at baseline Fig. 3 Fig. 
3 Comparison of the occurrence of CKD before and after operation between the two groups. A Comparison of the number of CKD cases (GFR < 60 ml/min) for patients after LPN or LRN for RCC > 4 cm. The * means p < 0.05. B, C Sankey diagrams for the change of GFR from baseline to postoperative 12 months in the 2 groups. Sankey diagrams were used to show the major transfers or flows of patients. The colors of the columns represent patients with different GFR levels, with blue representing GFR ≥ 90 ml/min and green representing GFR 60-89 ml/min Table 1 Clinical characteristics of patients with RCC over 4 cm Continuous variables were expressed as mean ± SD or median (IQR), while categorical variables were presented as n (%) BMI, body mass index; CHD, coronary heart disease; before surgery, all patients with a history of CHD were well controlled and without surgical contraindications; cTNM, clinical TNM classification; Hb, hemoglobin; Ca, serum calcium; SCr, serum creatinine; LDH, lactic dehydrogenase; ECT-GFR, emission computed tomography for glomerular filtration rate measurement; ASA, American Society of Anesthesiologists a p < 0.05 was considered statistically significant Table 2 The comparison of surgery-related indicators Continuous variables were expressed as median (IQR), while categorical variables were presented as n (%) a p < 0.05 was considered statistically significant Table 3 Univariate and multivariable analyses of prognostic risk factors for OS OS, overall survival; HR, hazard ratio; BMI, body mass index; CHD, coronary heart disease; before surgery, all patients with a history of CHD were well controlled and without surgical contraindications; Hb, hemoglobin; Ca, serum calcium; SCr, serum creatinine; LDH, lactic dehydrogenase; ECT-GFR, emission computed tomography for glomerular filtration rate measurement; CKD, chronic kidney disease; ASA, American Society of Anesthesiologists; 95% CI: 95% confidence interval a Represents postoperative histology (clear cell renal cell carcinoma vs. papillary renal cell carcinoma vs. other types)
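To make the renal-function grouping used in Fig. 3 concrete, the short sketch below bins ECT-GFR values into the bands referred to above (≥ 90, 60-89, and < 60 ml/min, the last defining CKD in this study); the function name and the example values are illustrative only and are not taken from the authors' analysis.

```python
def gfr_category(gfr_ml_min: float) -> str:
    """Bin an ECT-GFR value into the bands used for the Sankey diagram in Fig. 3."""
    if gfr_ml_min >= 90:
        return ">=90"        # blue column in Fig. 3
    if gfr_ml_min >= 60:
        return "60-89"       # green column in Fig. 3
    return "<60 (CKD)"       # CKD as defined in this study: GFR < 60 ml/min

# Illustrative baseline -> 12-month transitions for a few hypothetical patients.
for baseline, month12 in [(95, 88), (72, 64), (55, 61), (48, 40)]:
    print(f"{gfr_category(baseline):>10} -> {gfr_category(month12)}")
```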
2023-11-10T06:17:37.357Z
2023-11-09T00:00:00.000
{ "year": 2023, "sha1": "09540989997e9c04743f49fc2aeee35439e41e56", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00432-023-05487-3.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "796a83ac9fc792bc9713fe29b8aadb7e215b2b3e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
244003268
pes2o/s2orc
v3-fos-license
Enhanced Removal of Pb from Electrolytic Manganese Anode Slime by Vacuum Carbothermal Reduction Electrolytic manganese anode slime (EMAS) is produced during the production of electrolytic manganese metal. In this study, a method based on vacuum carbothermal reduction was used for Pb removal in EMAS. A Pb-removal efficiency of 99.85% and MnO purity in EMAS of 97.34 wt.% was obtained for a reduction temperature of 950°C and a carbon mass ratio of 10% for a holding time of 100 min. The dense structure of the EMAS was destroyed, a large number of multidimensional pores and cracks were formed, and the Pb-containing compounds were reduced to elemental Pb by the vacuum carbothermal reduction. A recovery efficiency for chemical MnO2 of 36.6% was obtained via preparation from Pb-removed EMAS through the "roasting-pickling disproportionation" process, with an acid washing time of 100 min, acid washing temperature of 70°C, H2SO4 concentration of 0.8 mol/L, liquid-solid mass ratio of 7 mL/g, calcination temperature of 600°C and calcination time of 2.5 h. Moreover, the crystal form of the prepared chemical MnO2 was found to be basically the same as that of electrolytic MnO2, and its specific surface area, micropore volume and discharge capacity were all higher than those of electrolytic MnO2. This study provides a new method for Pb removal and recycling of EMAS. Introduction Electrolytic manganese metal (EMM) is an important raw material for industrial production that is widely used in various industrial fields and occupies an important position in the national economy (Zhang et al., 2020). China is a major producer of electrolytic manganese metal. In 2020, 96.5% of the world's electrolytic manganese metal was produced in China. The production process for EMM mainly includes leaching, impurity removal, electrolysis, and product posttreatment (Tao et al., 2018). Electrolytic manganese anode slime (EMAS) is a solid waste found in the anode chamber during the production of electrolytic manganese metal (Tran et al., 2020). At present, the global annual output of EMM is approximately 1.5 million tons, and approximately 75,000 to 225,000 tons of EMAS is generated each year. In China, most EMAS is dumped as hazardous waste or sold for small additional value. Therefore, EMAS has become a bottleneck hindering the development of the electrolytic manganese industry. Many scholars have proposed methods for the resource-based treatment of EMAS, but the main methods are reduction leaching and acid leaching roasting activation (Zhang et al., 2018). For example, Guo et al. used the roasting leaching method to remove Pb from EMAS and then prepared lithium manganate material from the Pb-removed EMAS (Guo et al., 2018). However, the process for this method is relatively complicated, and the removal efficiency of Pb is low (Chen et al., 2019). Many researchers recycle manganese and Pb from EMAS by the use of different reducing agents (Chen et al., 2018; Cheng et al., 2009; Li et al., 2017; Niu et al., 2012; Gui et al., 2014; Wei et al., 2017). However, these methods have certain limitations, such as high energy consumption, complex operation, and low efficiency and added value (Cheng et al., 2009; Ye et al., 2015). The removal of Pb and the regulation of the crystal form of manganese oxides are the key factors limiting the application of the above techniques. Regulation of the manganese oxide crystal form and Pb removal have become urgent problems to be solved in the electrolytic manganese metal industry.
The vacuum carbothermal reduction method has always been a research hotspot in the field of vacuum metallurgy, and great research progress has been made. The vacuum carbothermal reduction method combines the characteristics of reduction roasting and vacuum smelting and has the advantages of low energy consumption, simple processing, and environmental friendliness; it is widely used in the fields of chemical engineering and metallurgy (Brkic et al.). In this study, the vacuum carbothermal reduction method was first used for Pb removal in EMAS. The effects of process parameters such as reduction temperature, holding time and mass ratio of carbon on the Pb-removal process were investigated, and the removal mechanism for Pb was analyzed. In addition, the preparation process for chemical MnO2 by the "roasting-acid washing disproportionation" process was studied. This study provided a new idea for the high-value resource utilization of EMAS. Raw material The EMAS samples used in this paper were all sampled in an electrolytic manganese plant in Chongzuo, Guangxi, according to the "technical specification for sampling and preparation of industrial solid waste". The EMAS samples were dried at 80°C, milled by ball milling and screened through 200 mesh for standby. H2SO4 and other chemical reagents were analytically pure and purchased from Chongqing Boyi Chemical Reagent Co., Ltd. Experiment for Pb removal First, the EMAS was pretreated by washing and drying. Second, a certain mass of pretreated EMAS and a certain mass ratio of activated carbon (4 wt%, 6 wt%, 8 wt%, 10 wt%, 12 wt%) were transferred to an agate mortar for full mixing. Finally, the sample boat was placed into the middle of a tubular furnace, and the vacuum pump and tubular furnace were started. The sample was then roasted at a set reaction temperature (850°C, 900°C, 950°C, 1000°C and 1050°C), held for a certain time (70 min, 80 min, 90 min, 100 min and 110 min), and then naturally cooled to room temperature. Preparation of chemical MnO2 Chemical MnO2 was prepared from Pb-removed EMAS by a "roasting-acid washing disproportionation" process. The roasting process is the key step affecting the preparation of chemical MnO2 from Pb-removed EMAS (MnO), while the acid washing process has little effect on the conversion efficiency of chemical MnO2. Therefore, the effects of calcination temperature (450°C, 500°C, 550°C, 600°C, 650°C) and calcination time (1.5 h, 2 h, 2.5 h, 3 h, 3.5 h) on the conversion of chemical MnO2 were investigated. The process parameters used for acid washing disproportionation were as follows: acid washing time of 100 min, acid washing temperature of 70°C, H2SO4 concentration of 0.8 mol/L, and liquid-solid mass ratio of 7 mL/g. Discharge performance test for chemical MnO2 The prepared sample electrode (0.032 g sample) was used as the working electrode, a zinc sheet (1 cm × 1 cm) was used as the reference electrode and counter electrode, and a 9 mol/L KOH saturated solution was used as the electrolyte. The constant current discharge method was used to measure the specific capacity of the sample, and the termination voltage was 1.0 V. The specific capacity was determined using the calculation method shown in equation 1: C = I × T / m, where C (mAh·g−1) is the specific capacity of the sample to be tested; I (mA) is the discharge current; T (h) is the discharge time; and m (g) is the mass of the sample. Analysis method X-ray fluorescence (XRF) (XRF-1800, Japan) was used to analyze the elemental composition of the EMAS sample.
X-ray diffraction (XRD) (D/Max-2500, Japan) and scanning electron microscopy (SEM) (JSM-7800F, Japan) were used to analyze the phase composition and microstructure of EMAS, EMAS after vacuum carbothermal reduction, and chemical MnO2. The specific surface area and pore diameter of chemical MnO2 were analyzed by the Brunauer-Emmett-Teller (BET) method (3H-2000PS1, Best Instrument Technology Co., Ltd., China). The discharge performance of electrolytic MnO2 and chemical MnO2 was analyzed by using an electrochemical workstation (CHI660E, Shanghai Chenhua Instrument Co., Ltd., China). The Pb content was determined by XRF, and the following formulas were used to calculate the Pb-removal efficiency and the Mn conversion efficiency: φPb = (C0 − Ce)/C0 × 100% and φMn = me/m0 × 100%, where φPb (%) is the removal efficiency for Pb; C0 (mg·g−1) is the original Pb content; Ce (mg·g−1) is the Pb content after vacuum carbothermal reduction treatment; φMn (%) is the conversion efficiency for chemical MnO2; m0 (mg·g−1) is the original weight of Mn in EMAS after vacuum carbothermal reduction treatment; and me (mg·g−1) is the weight of Mn in MnO2 after "roasting-acid washing disproportionation" treatment. Removal behavior for Pb The Pb-removal efficiency in EMAS increased with increasing reduction temperature (Fig. 1a). The Pb-removal efficiency was 85.12% and 99.85% when the reduction temperature was 850°C and 950°C, respectively. As the reduction temperature continued to increase, the Pb-removal efficiency increased slightly, but the increase was not obvious. Therefore, considering the energy consumption, the reduction temperature was suggested to be 950°C. As shown in Fig. 1b, within the time range selected in the experiment, the removal efficiency for Pb in EMAS increased obviously with time, but when the holding time exceeded 100 min, the change in the Pb-removal efficiency was small, so it was better to select a holding time of 100 min. When the carbon mass ratio reached 10%, the Pb-removal efficiency basically tended to be stable, changing only from 99.85% to 99.88% (Fig. 1c). This can be explained by the fact that the amount of reducing agent was not enough to fully reduce the Pb compounds in the EMAS when the proportion of carbon was relatively small, which led to a poor removal effect for Pb. When the proportion of activated carbon reached 10%, the Pb compounds in the EMAS were fully reduced, so it was better to choose a carbon proportion of 10%. The optimum Pb-removal efficiency was 99.85%. Removal mechanism for Pb As shown in Table 1, the Pb-containing compounds were reduced to PbO. MnO was the main phase in EMAS, which indicated that MnO2 was completely reduced to MnO when the temperature ranged between 700°C and 950°C. All Pb compounds in EMAS were reduced to Pb when the temperature was higher than 700°C. The main phase was MnO with good crystallinity and no impurity peak observed when the temperature was 950°C, which indicated that Pb was almost completely removed from EMAS. As shown in Fig. 2c, the black condensate collected in the vacuum tubular furnace was metallic Pb, which further proved that the Pb-containing compounds in the EMAS were removed as Pb vapor. As shown in Fig. 3a, SEM showed that the raw EMAS particles presented a relatively dense state. As shown in Fig. 3b to Fig. 3d, the dense structure of the EMAS main body was gradually destroyed, and a large number of multidimensional pores and cracks were formed as the temperature was increased. These multidimensional pores and cracks facilitated the escape of Pb vapor during reduction. The relevant reactions include: CO2 + C = 2CO (10) 2.
Pb-removal reactions: 2PbSO4 + C = 2PbO + 2SO2(g) + CO2(g) (11); PbSO4 + CO = PbO + SO2(g) + CO2(g) (12); 2PbO + CO = Pb2O + CO2(g) (13); Pb2O + CO = 2Pb(g) + CO2(g) (14); CO2 + C = 2CO (15). Preparation of chemical MnO2 As shown in Fig. 4a and Fig. 4b, the conversion efficiency of chemical MnO2 (CMD) increased with increasing temperature, increasing from 20.4% to 37.2%. When the temperature exceeded 600°C, the conversion efficiency for CMD increased only slightly. Therefore, it was better to choose a roasting temperature of 600°C, at which the conversion efficiency for CMD was 36.6%. When the roasting time was less than 2.5 h, the CMD conversion efficiency increased obviously with time (from 28.3% to 36.6%); when the roasting time was more than 2.5 h, the CMD conversion efficiency increased only slightly (from 36.6% to 38.5%), and with increasing calcination time, the economy worsened. Therefore, the optimum roasting time was 2.5 h, for a CMD conversion efficiency of 36.6%. As shown in Fig. 5, the crystal forms of chemical MnO2 (CMD) and electrolytic MnO2 (EMD) were basically the same. Compared with the standard card, the main crystal form was γ-MnO2. There were many lattice defects, a non-ideal stoichiometric ratio and vacancies in the crystal form of γ-MnO2, which has the characteristics of a large cross-sectional area of crystal tunnels and high electrochemical activity. This type of crystal form of γ-MnO2 is widely used in power batteries, alkaline manganese batteries and other industries, which affords the prepared CMD natural advantages due to its crystal form and provides a large number of application scenarios for the high-value resource utilization of CMD (Guo et al., 2007; Xiao Chai et al., 1978). As shown in Fig. 6a and Fig. 6b, the CMD particle size was larger than that of EMD, the particles were staggered and stacked together, there was a substantial agglomeration phenomenon, and the particle size of the aggregates was approximately 1 µm. 3.4 Discharge performance of chemical MnO2 Figure 7a and 7b shows discharge curves for EMD and CMD in 9 mol/L KOH (ZnO saturated) solution with different discharge currents down to 1.0 V, respectively. The potential plateau in the discharge curve is attributed to the transformation between manganese dioxide and metal. The discharge platforms for CMD and EMD were basically the same. The discharge plateau for CMD was more stable and lasted longer at discharge currents of 0.1 A/g, 0.3 A/g and 0.4 A/g. As shown in Fig. 7c, the discharge capacity of CMD was 240.84 mAh/g, and the discharge capacity of EMD was 223.96 mAh/g when the discharge current was 0.1 A/g. The results showed that the discharge performance of CMD was better than that of EMD. A comparative analysis of CMD and EMD by the BET test is shown in Fig. 8. The adsorption capacity of the prepared CMD was greater than that of EMD. The specific surface area of CMD was 45.2607 m2·g−1, while that of EMD was only 28.3444 m2·g−1, which is consistent with the SEM results. According to the IUPAC classification, the adsorption isotherms for CMD and EMD can both be classified as type II with H3-type hysteresis (Thevenot et al., 1996). In the range of P/P0 ≤ 0.40, the N2 adsorption capacity of CMD and EMD increased gradually with increasing relative pressure, and the adsorption and desorption curves overlapped in this region, which indicated a small amount of microporous adsorption and monolayer adsorption (Shu et al., 2017).
When P/P0 > 0.40, the adsorption capacity of N2 increased rapidly, and H3-type hysteresis was observed at relatively high pressure, which was due to the capillary condensation and multilayer adsorption of N2 in mesopores and macropores, indicating that the pores in CMD and EMD are narrow slit-like pores (Bakandritsos et al., 2004; Bu et al., 2010). According to the Dubinin-Radushkevich model, the micropore volume of CMD was 0.0896 cm3·g−1, while that of EMD was only 0.0489 cm3·g−1. The average pore size of CMD was 7.92 nm and that of EMD was 6.90 nm. Therefore, it was considered that the specific surface area and pore size of CMD are higher than those of EMD, which may be one of the reasons for the better discharge performance of CMD. Conclusion In this study, the vacuum carbothermal reduction method was used for Pb removal from EMAS. In this process, the main phase of EMAS was reduced to MnO, and the Pb-containing compounds were gradually reduced to metallic Pb and volatilized. A Pb-removal efficiency of 99.85% and MnO powder purity in EMAS of 97.34% was obtained at a reduction temperature of 950°C, carbon mass ratio of 10%, and holding time of 100 min. A Mn recovery efficiency of 36.6% was obtained by the "roasting-acid washing disproportionation" process under the following conditions: acid washing time of 100 min, acid washing temperature of 70°C, H2SO4 concentration of 0.8 mol/L, liquid-solid mass ratio of 7 mL/g, calcination temperature of 600°C and calcination time of 2.5 h. The crystal form of CMD was basically the same as that of EMD, and the specific surface area and micropore volume of CMD were higher than those of EMD. The discharge capacity of CMD was 240.84 mAh/g at a discharge current of 0.1 A/g, which was 16.88 mAh/g higher than that of EMD. This study provides a new method for the recycling of EMAS. Declarations Ethics approval and consent to participate Approval was obtained from the ethics committee of Southwest University of Science and Technology. The procedures used in this study adhere to the tenets of the Declaration of Helsinki. Consent to Publish Informed consent was obtained from all individual participants included in the study. Funding All sources of funding for the research reported should be declared. The role of the funding body in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript should be declared. Availability of data and materials All data generated or analysed during this study are included in this published article (and its supplementary information files). Supporting Information Supporting Information is not available with this version.
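As a quick, illustrative check of the headline figures reported above, the Python sketch below applies the specific-capacity relation from the discharge test (equation 1, C = I·T/m) and the Pb-removal efficiency defined from the Pb contents before and after reduction; the function names, the discharge time and the Pb contents used here are assumed example values, not the authors' raw data.

```python
def specific_capacity_mAh_per_g(current_mA: float, time_h: float, mass_g: float) -> float:
    """Equation 1: C = I*T/m for a constant-current discharge down to the cut-off voltage."""
    return current_mA * time_h / mass_g

def pb_removal_efficiency(c0_mg_g: float, ce_mg_g: float) -> float:
    """Removal efficiency (%) from the Pb content before (C0) and after (Ce) reduction."""
    return (c0_mg_g - ce_mg_g) / c0_mg_g * 100.0

mass = 0.032                      # g of sample in the working electrode
current = 0.1 * 1000 * mass       # 0.1 A/g expressed in mA for this electrode mass
print(specific_capacity_mAh_per_g(current, time_h=2.4084, mass_g=mass))  # ~240.84 mAh/g
# Assumed Pb contents (mg/g), chosen only to illustrate a ~99.85 % removal efficiency.
print(pb_removal_efficiency(c0_mg_g=20.0, ce_mg_g=0.03))
```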
2021-11-12T16:24:59.071Z
2021-11-10T00:00:00.000
{ "year": 2021, "sha1": "ed0daf47b95e6f86b80619ac2f3e6cfa96863464", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-990487/latest.pdf", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "cdbf1d2097c8343c99e223d4a2dbea597611ed62", "s2fieldsofstudy": [ "Materials Science", "Environmental Science", "Chemistry" ], "extfieldsofstudy": [] }
210587156
pes2o/s2orc
v3-fos-license
Demographic and Social Status of Sporting Bull Rearers and Rearing of Jallikattu Bulls In India, Tamil Nadu is one of the major agrarian states and bestowed with four important recognized indigenous cattle breeds viz. Bargur, Kangayam, Pulikulam and Umblachery; and all of them are an integral part of agriculture and played major role for sustainable livelihood to rural farmers in the backdrop of varied climatic conditions. Some of these breeds were predominantly used for sporting events especially in jallikattu to recreate the rural farmers at the time of festival season (pongal) to worship the god for better monsoon and harvest. Among the districts in Tamil Nadu, Madurai, Sivagangai, Trichy, Dindigul and Pudukottai districts are known for “Bull baiting or Jallikattu” sporting event. These sporting events are age-old traditional and they had been mentioned in ancient Tamil literatures and epics. Documentation of sporting bulls and their cultural association with the folk had been carried out in ancient times by the scholars of Tamil and animal lovers. However, the investigation on socio-economic attributes of International Journal of Current Microbiology and Applied Sciences ISSN: 2319-7706 Volume 8 Number 08 (2019) Journal homepage: http://www.ijcmas.com Introduction In India, Tamil Nadu is one of the major agrarian states and bestowed with four important recognized indigenous cattle breeds viz. Bargur, Kangayam, Pulikulam and Umblachery; and all of them are an integral part of agriculture and played major role for sustainable livelihood to rural farmers in the backdrop of varied climatic conditions. Some of these breeds were predominantly used for sporting events especially in jallikattu to recreate the rural farmers at the time of festival season (pongal) to worship the god for better monsoon and harvest. Among the districts in Tamil Nadu, Madurai, Sivagangai, Trichy, Dindigul and Pudukottai districts are known for "Bull baiting or Jallikattu" sporting event. These sporting events are age-old traditional and they had been mentioned in ancient Tamil literatures and epics. Documentation of sporting bulls and their cultural association with the folk had been carried out in ancient times by the scholars of Tamil and animal lovers. However, the investigation on socio-economic attributes of ISSN: 2319-7706 Volume 8 Number 08 (2019) Journal homepage: http://www.ijcmas.com The study was conducted among 176 sporting bull rearers in Madurai, Trichy, Dindigul and Pudukottai districts who were actively involved in sporting bull rearing. Majority of the sporting bull rearers were in the age group of less than 35 years (38.64 per cent) followed by 36 to 45 years (31.25 per cent) and more than 45 years (30.11 per cent) and they were illiterate (75.57 per cent) and mostly belonging to Hindu religion (77.27 per cent). Among bull rearers, 63.64 percent of the respondents belonged to backward community followed by most backward (26.14 percent) community and scheduled castes (9.60 percent). The data depicted that more than half of the respondents (55.11 per cent) were rearing the sporting bulls as an ancestral legacy for several decades. The bull rearers selected the bull calves based on alertness (24.21 per cent), body conformation (21.56 per cent) and whirls (19.96 per cent). Most of the bull rearers (85.22 per cent) believed that training was essential and their choices were swimming (30.50 per cent), vaadi (22.91 per cent) and hooking the soil by horns (17.98 per cent) for sporting bulls. 
The findings of this study indicated that it is an age-old traditional sporting event; even though there was no income from these sporting bulls they reared because of their ancestral practice. bull rearers and their managing practices for rearing of sporting bulls have not been attempted so far. Hence, this study was planned to assess the socio-economic status of sporting bull rearers and their rearing of sporting bull in Tamil Nadu. Materials and Methods Information pertaining to demographic and social status of sporting bull rearers and their source of purchase, selection, training of bull calves and participation in sporting events were collected. The data were collected from well organized interview schedule along with questionnaire from the sample size of 176 bull owners present in 33 villages of Madurai, Trichy, Pudukottai and Dindigul districts. All the collected information were computerized and analysed by using appropriate statistical techniques. Results and Discussion The results of primary data pertaining to status of bull rearers, breeding, feeding and participation in sporting events are given in Table 1. Age of the bull rearers and education status In this study, majority of the sporting bull rearers were in the age group of less than 35 years (38.64 per cent) followed by 36 to 45 years (31.25 per cent) and more than 45 years (30.11 per cent); and it gives a fair idea about age of the bull rearers who play a crucial role as they inherited the traditional knowledge of this sporting event from their ancestors and inculcate the same among the younger generation for succeeding years, since it was reflected almost all age groups showed interest to rear the sporting bulls. A greater part of respondents were illiterate (75.57 per cent). This might be due to the fact that the respondents were engaged in agriculture and livestock rearing early in life and they gave up the primary education. Religion, community and annual income Majority of sporting bull rearers belongs to Hindu (77.27 per cent) and Christian (22.73 per cent) religions. 22.73 per cent of christian farmers belong to Trichy district only and they reared and conducted this sporting event. Among bull rearers, 63.64 percent of the respondents belonged to backward community, most backward community (26.14 percent), scheduled castes (9.60 percent) and scheduled tribe (0.06 per cent). Most of the bull farmers belonging to backward community to rear Pulikulam cattle as their ancestral occupation, which is in agreement with Thesinguraja et al., (2017) but contrast to his finding none of the respondent belong to SC and ST category. Most of the bull rearers had annual income of eighty five thousand to one lakh sixty five thousand (59.10 per cent), which is in agreement with the findings of Thesinguraja et al., (2017) in socio-economic status of Pulikulam cattle rearers. Number of sporting animals reared and number of individuals needed to rear a sporting bull In this study, bull rearers were having one (58.52 per cent), two (29.54 per cent) and more than two bulls (11.94 per cent) in their possession. The less number of sporting bull reared by a farmer might be due to high feeding cost, time consuming training process and management problems due to ferociousness of sporting bull. Number of individuals needed to rear a sporting bull was one (28.98 per cent), two (37.50 per cent), three (28.41 per cent) and more than three (5.11 per cent). 
This might be due to make them savage towards strangers, kept apart separately and fed by one or two family members only and similar findings also reported by Pattabhiraman (1962). Reasons and experience for rearing sporting bulls The data depicted that more than half of the respondents (55.11 per cent) were rearing the sporting bulls as an ancestral legacy for several decades followed by more than onethird of respondents from their childhood (34.10 per cent) and remaining were beginners ( Figure 1). About 38.64 and 31.25 per cent of bull rearers had the experience of bull rearing from 10 to 20 and more than 20 years respectively. It depicted the sporting bull rearing had been in the cultural living system of agrarian community since ancient days and it also unfolding the reason behind the existence of this breed still today, i.e. participation of stakeholders for conservation of indigenous breeds as they are bestowed with endurance and aggressiveness, which is suitable for this sporting event. As there is dearth of literature about profile of sporting bull rearers in India and particularly in Tamil Nadu, the result obtained in the present study could not be compared. Source and age at purchase As mentioned in Table 1, the bulls were purchased from reputed livestock market (shandy) in nearby areas (51.70 per cent), since, they were so many livestock markets selling native breeds in Madurai, Sivagangai, Dindigul and Trichy districts and followed by procurement from other cattle herds (27.84 per cent). Some owners preferred to select the calf in cattle herds itself by observing the social activities of male calves, sire and dam performance in herds. Most preferred age to purchase the bull calves was at six months to one year (52.27 per cent). Optimum age for selection and training to sporting bulls Majority of the respondents followed the practice of selecting the bull calves at six months of age (68.75 per cent) and started giving training at the age of six months (60.23 per cent) itself. This might be due to ease to give training to a calf and cost of the animal was economical to purchase; which is in agreement with the report of Nisha (2016) who stated that bulls were sold and trained for bull baiting at six to seven months of age. Selection criterion for sporting bulls From this study, bull rearers selected the bull calves based on vigour and exuberance (24.21 per cent), body conformation (21.56 per cent) and whirls (19.96 per cent); which were represented in Figure 2 and these observations concurred with the findings of Ezhilvendan (2013), who documented selection of bull calves based on bounce and vivacity for sporting purpose in Tamil Nadu. Hence, presence of aggression in the sporting bulls results from early selection of bull calves based on alertness and conformation. Among the respondents, 19.96 per cent of bull rearers gave importance to whirls present on the forehead and back of the bull calves, as a selection criterion for sporting event. It might be due to their indigenous traditional knowledge inherited from their ancestors that whirls in those areas indicate aggressiveness. Ancestors gave names to each whirls present in the body of bull calves. Type of training to sporting bulls Majority of bull rearers preferred to give training like swimming (30.50 per cent), vaadi (22.91 per cent) and hooking the soil (17.98 per cent) to sporting bulls which coincided with the reports published in English daily (The Hindu, dated 14/01/2018) and report of Ezhilvendan (2013). 
From the data collected through the questionnaire, it was understood that swimming was practised in bulls to prevent respiratory distress in the sporting arena and strengthening the legs of sporting bulls. Vaadi was used to improve gesture and language training, and whistle sound by a gang of bull tamers towards sporting bullsto increase aggression behavior. Hooking the soil was practised to sharpen their horns and used to threaten the bull tamers in sporting arena and they were illustrated in Figure 3. Majority (85.22 per cent) of the bull owners opined that training was essential and these might help the bulls to show aggression in the sporting event. With respect to duration of training, one half (50.57 per cent) of the bull rearers were given training to sporting bulls atleast 30 minutes to and nearly one half (55.11 per cent) of the bull rearers were giving training for weekly once. Difficulties during training The respondents observed that no obscurity during training of bulls (90.90 per cent) and response to the training was good (81.25 per cent). Age at first participation in sporting event Majority of the sporting bulls (92.6 per cent) were allowed for the first time in sporting event, when they were below 3.5 years of age and their sporting life was less than eight years of age (46.02 per cent). Utility and disposal of bull after sporting life After their sporting life, the bulls were reared either till their death (76.14 per cent) or for natural service (22.73 per cent) or sold for slaughter (1.14 per cent).Even there was no utility of bull after sporting life they rearing sporting bull because of ancestral practice, love towards the bull and for prestige. Nearly one-half (57.39 per cent) of the bull rearers were not disposing the sporting bulls, they keeping the bulls in home till their death. This indicated that majority of the bull rearers worship the bull as equal to god and treat them as one of the family member till its death. About 22.73 per cent of bull rearers gave the winning bull as a gift to their ancestral god and it had been considered as a "Temple Bull"; though, it was not tied, but allowed to roam and wander (as a dominant bull) and consequently sire the indigenous cows naturally in the nearby villages, thereby promoting the genetic variability of the population and prevent the indigenous breeds from extinction in their breeding tract. Bull rearers sold their winning bulls (41.48 per cent) at exorbitant price to needy bull rearers and now-a-days, it has been one of the value added bull enterprises for them. Only in Madurai, Trichy and Dindigul districts, the male calves are not sold for slaughter, rather reared for sporting events.Even after the death of sporting bull, they buried in back of their home or their own land itself built the temple and worshiped as god. From this study, it was observed that the bull rearers are rearing their bull calves only based on interest and enthusiasm, without any financial assistance. This event is an ancient and traditional sporting event for livestock keepers and agrarian community to safeguard the livestock wealth in the rural areas for future generations.
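The frequency-and-percentage summaries reported throughout this study can be reproduced from the interview-schedule data with a very simple tabulation; the pandas sketch below is only an illustration of that kind of descriptive analysis, with a made-up file name and column, not the authors' actual dataset or code.

```python
import pandas as pd

# Hypothetical coded survey responses: one row per bull rearer (n = 176).
df = pd.read_csv("bull_rearer_survey.csv")

# Frequency and percentage for one categorical item, e.g. reason for rearing.
counts = df["reason_for_rearing"].value_counts()
summary = pd.DataFrame({"n": counts, "percent": (counts / len(df) * 100).round(2)})
print(summary)  # e.g. ancestral legacy, from childhood, beginner
```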
2019-10-03T09:05:48.535Z
2019-08-20T00:00:00.000
{ "year": 2019, "sha1": "318fdbf348f991c69b9836440c2959d6079c90a1", "oa_license": null, "oa_url": "https://www.ijcmas.com/8-8-2019/R.%20Priyadharsini,%20et%20al.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "5229925749180f4f1d9dc3463989408cd5aecd10", "s2fieldsofstudy": [ "Sociology", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Psychology" ] }
115905763
pes2o/s2orc
v3-fos-license
Twisted Tape Based Heat Transfer Enhancement In Parabolic Trough Concentrator – An Experimental study The heat transfer augmentation in parabolic trough concentrator is gaining importance now a days as it makes the system compact and efficient. Out of various techniques, use of twisted tape inserts is popular due to easy implementation as well as substantial enhancement in system performance. Most of the studies in this field pertaining to experimental/computational analysis with respect to both outdoor and indoor set ups. The rise in convective coefficient of heat transfer fluid due to insert has been studied by many researchers. But such studies are based on uniform heat flux or out-field non-uniform based which is under uncontrollable environment. Hence in the present study, Nusselt number correlations for plain absorber and absorber with twisted tape (y=3.48, 5.42 and 7.36) are developed under the realistic condition of solar concentration with controlled environment. The parity plot shows the maximum deviation of 20% which in turn indicates better quality of fit. Introduction Solar energy has been used by both nature and human kind throughout time in many ways. It is used to heat and cool buildings (both actively and passively), heat water for domestic and industrial uses, heat swimming pools, power refrigerators, operate engines and pumps, desalinate water for drinking purposes, generate electricity, for chemistry applications, and many more operations. Because of the desirable environmental and safety aspects it is widely believed that solar energy should be utilized instead of other conventional energy forms. Out of different applications of solar energy, the power generation by focusing type of collectors is gaining popularity now a days. Parabolic trough power plants consist of large field of parabolic trough collectors (PTC), a heat transfer fluid/steam generation system, a Rankine steam turbine/generator cycle, and optional thermal storage and/or fossil-fired backup systems. The performance of PTC is based on optical and thermo-hydraulic configuration. Due to advanced optics technology presently in use, researchers have focused on performance enhancement of receivers taking into account various geometrical treatments. Literature review In this section, various heat transfer augmentation techniques applicable to PTC absorber have been highlighted. An innovative flat aluminium absorber in small PTC for process heat and direct steam generation has been investigated by Bortolato et al. [1]. The absorber has got bar and plate technology 2 1234567890''"" with an internal turbulator. Ray tracing was done to get the amount of flux in each bin using Soltrace ® . Due to the high heat flux over the receiver a heat spreader is used to avoid hot spots on the surface and thus an offset turbulator has also been used inside the receiver to reduce the thermal gradient between the wall and the fluid. Due to low pressure drop, despite the presence of turbulator, makes it suitable for steam generation at even low mass flow rates. Jaramillo et al. [2] have worked on the thermal hydraulic performance of a PTC with twisted tape inserts for low enthalpy processes by considering the first and second law of thermodynamics for a temperature range of 70 to 110 o C. For the theoretical model, an empty tube and tube with twisted tape of twist ratio 2 was implemented and concluded that heat removal factor increases to 3 % and overall heat loss coefficient decreases by 1.5 % for twisted tape. 
For the numerical simulation, twist ratios of 1, 2, 3, 4 and 5 with Reynold number range of 1350 -8350 was considered. The thermal efficiency increases as the twist ratio tends to 1, and as the flow rate increases the efficiency increases and thus get independent of the twist ratio at higher flow rates. The enhancement factor based on second law shows that for higher Reynold number and high twist ratios, there is no advantage of using twisted tape. It can thus be concluded that the best results are obtained only when twisted tapes are used for very low flow rates. Mwesigye et al. [3] numerically analyzed the effect of wall detached Twisted Tape having a twist ratio 0.5 -2.0 for a turbulent range of Re (10260 -1353000). Non-uniform heat flux boundary condition with heat flux extracted from Soltrace® was implemented along the circumference. Syltherm was considered as the HTF (Heat Transfer Fluid) in this analysis. Due to twisted tape 68% reduction in surface temperature and 1.05 -2.69 times rise in Nu was noticed compared to plain tube. Further, Nu and f correlations were developed based on this results. Vashistha et al. [4] investigated the experimental use of single, double and 4 Twisted Tapes in co-swirl and counter-swirl (CT) orientations inside a tube having twist ratio 2.5, 3, 3.5 and Re in the range of 4000 -14000. Better heat transfer rates are observed for lower Re with twisted tapes as there is a decline in performance with increasing Re. There is broadening of centrifugal forces and thus turbulent intensity near the wall when twist ratio is reduced. Counter swirl flow generates increasing whirl velocity thus 4CT perform the best at y = 2.5 compared to other configurations. In the thermo-hydraulic performance analysis, the 4CT surpass all the other configurations in complete Re range which is 1.23 -1.26. The correlation for Nu and lie in the range of ± 4 % and ± 7 % respectively. Similar studies were performed by Song et al. [5], Waghole et al. [6], Ghadirijafarbeigloo et al. [7] and Zhu et al. [8]. The influence of various fin configurations on the system performance were studied by Bellos et al. [9], [10], [11] and Xiangtau et al. [12]. All these studies have proved the enhancement in heat transfer rate at the cost of additional pressure drop. An intensive review mentions the benefit of performance augmentation with reference to experimental and/or computational analysis. Till date studies have been focused on uniform heat flux based indoor set up. Also, out-field experimental study or non-uniform heat flux based computational analysis have been tried. Hence in the present study, the non-uniform heat flux is simulated (with the use of Soltrace®) in case of indoor based receiver. Further, Nu correlation for the case of plain receiver as well as twisted tape is also developed. Heat transfer augmentation in PTC A cross-section of a parabolic trough collector is shown in Figure 1, the procedure for determining total heat loss from the receiver is discussed below. 
The convective heat transfer coefficient between glass cover and surrounding is, Similarly, convective heat transfer coefficient between receiver and cover is determined by, The radiation heat loss coefficient from the glass cover outer surface to ambient is given by, The receiver -cover based radiation heat loss coefficient is: Considering the both convective and radiative heat loss from the absorber surface and glass cover outer surface, the overall heat loss coefficient (UL) is calculated as, Similarly, the overall heat transfer coefficient based on the outer surface area of the receiver, In equation (6), Uo can be raised by increasing hfi as rest all parameters (except UL) remains unchanged. However, UL is inversely proportional to hf i. Hence different methods are used to enhance hf i. The twisted tape ( Figure 2) insert partially blocks the flow passage and also induces secondary swirl motion which increases the degree of turbulence of HTF. Simultaneously, the velocity of the HTF also increases due to the reduction in free flow area due to insertion of twisted tape. The combined effect of all these results in enhancement of useful heat gain by HTF. The geometry of the twisted tape is specified by twist ratio which is defined as the ratio of the pitch length for 180° twist (H) to the width of the twisted tape (D). In general, the intensity of swirl is higher for lower twist ratios. In the present work, twisted tapes (y = 3.48, 5.42 and 7.36) are used. The steps involved in the analysis with twisted tapes is similar to that of plain receiver except the following changes. Experimental Analysis For the detailed analysis, Soltrace® based solar radiation is taken as input to the receiver and Nu correlations are developed for the wide range of Re (2300 -25000) with respect to different twisted tape inserts. The experiment is carried out for flow rates of 7, 13, 16, 19, 22, 25, 28 and 30 LPM. The day wise profile for the solar radiation, wind velocity, ambient temperature is taken as follows from the weather forecast report available from the MNRE website [20] for the area Manipal which is used in the heat loss calculation analytically and then that amount of solar radiation reduced is fed to the heating strips. Experimental Setup The experimental set up consists of test section, entry length, pumping unit, thermostatic bath, cooling unit, electrical components and measuring instrumentation units as shown in the schematic (Figure 3). The HTF (Therminol VP-1) in thermostatic bath can be circulated in the flow at pre-set temperature by means of gear pump. The flow fully develops in the entry length section before the test section. The required concentrated heat flux is provided to the HTF in the test section using differential heating of 6 Nichrome strips wound over the receiver. The HTF coming out of the test section is cooled in a crimpled finned tube DPHX before entering the thermostatic bath. The test section is properly insulated to avoid the heat loss to surrounding. The output from 20 thermocouples (16 on receiver, HTF entry and exit temperature, insulation surface and ambient temperature), differential pressure transducer and flow meter are connected to data logger. Digital voltmeter and ammeter are used for the electrical power measurement. Experimental procedure In this section, the detailed experimental procedure for determining the energy parameters of PTC with plain tube/twisted tape insert is discussed. 
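The display equations for the convective and radiative loss coefficients, U_L, and U_o did not survive extraction above, so the sketch below assembles the stated loss paths in the standard textbook way (cover-to-ambient and receiver-to-cover resistances in series for U_L, referenced to the receiver outer area, plus the inside-film and wall-conduction terms for U_o). The functional forms and all numerical values are assumptions for illustration, not the authors' exact expressions, but the sketch shows why raising h_fi (for example with a twisted tape insert) raises U_o.

```python
import math

def overall_loss_coefficient(h_c_ca, h_r_ca, h_c_rc, h_r_rc, A_r, A_c):
    """U_L referenced to the receiver outer area: receiver->cover and cover->ambient in series."""
    r_cover_to_amb = A_r / ((h_c_ca + h_r_ca) * A_c)
    r_recv_to_cover = 1.0 / (h_c_rc + h_r_rc)
    return 1.0 / (r_cover_to_amb + r_recv_to_cover)

def overall_heat_transfer_coefficient(U_L, h_fi, D_o, D_i, k_wall):
    """U_o based on the receiver outer area: adds the inside-film and wall-conduction resistances."""
    return 1.0 / (1.0 / U_L
                  + D_o / (h_fi * D_i)
                  + D_o * math.log(D_o / D_i) / (2.0 * k_wall))

# Illustrative values only (coefficients in W/m^2K, diameters and areas in m, m^2):
U_L = overall_loss_coefficient(h_c_ca=10, h_r_ca=6, h_c_rc=3, h_r_rc=7, A_r=0.07, A_c=0.12)
for h_fi in (300, 800, 2000):   # raising h_fi, e.g. via a twisted tape, raises U_o
    print(h_fi, overall_heat_transfer_coefficient(U_L, h_fi, D_o=0.025, D_i=0.022, k_wall=16))
```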
The experimental procedure for plain receiver as well as twisted tape inserts is nearly similar. However, additional steps involved in experimentation with twisted tape inserts are discussed. i. Required entry temperature of the HTF and flow rate are set in thermostatic bath and variable frequency drive. ii. During HTF flow, air bubbles are removed in the line with the help of valves. iii. Based on Soltrace® output, required electrical power is supplied to different Nichrome strips with the help of step down transformers and dimmer stats. iv. Once the system attains steady state, all readings are recorded. v. The above steps are repeated for the case of twisted tapes. The error involved in the analysis is based on the accuracy of different instruments (Table 1) The Soltrace ® output as given in Figure 4 shows the solar ray distribution about the receiver after passing through a vacuum glass tube. The entire receiver tube is divided into 8 bins and the solar radiation falling on it is averaged about that particular bin over the length. With this distribution, the Local Concentration Ratio (LCR) about the receiver is found and cross multiplied for corresponding solar radiations in W/m 2 . Later this radiation value is used as an input (resistance heating) to the test section. Results and discussion As there is no Nu correlations available for non-uniform heat flux applicable to plain tube as well as tube with twisted tape inserts the following result highlights its importance mainly for DNI based applications. The experiment was conducted with varying inlet conditions (Inlet temperature from 40 -100 o C, DNI of 167 -671 W/m 2 and flow rate of 7 -30 lpm) to determine experimental Nu. The Nusselt number for plain tube from literature review was found to be of the form = ' * . By using regression analysis, the coefficients of C, m and n are found out. These coefficients are then minimized by reducing the error generated by subtracting predicted Nu from experimental Nu. This way the Nusselt number correlation for plain tube was found. Figure 5 shows the parity plot of Nusselt number predicted vs Nusselt number experimental for plain tube and lies in the range of ± 19% of experimental Nusselt number. The range of Re (2300 -25000) covered is from the transition range to turbulent region. The laminar and sub laminar region haven't been covered in the present work due to difficulties in experimental operation. The predicted Nusselt number is, The Nusselt number was plotted also for plain tube with twisted tape inserts and a generalized correlation with twist ratio (y= 3.48 -7.36) being the independent term present in the equation. The correlations developed so far has the averaged surface flux. In the present work the heat flux was divided into 6 parts having symmetric heat flux about the centre. From the literature review the . The other forms of Nu weren't in well agreement with the experimental data and had a high range of error. The same procedure as mentioned earlier was followed to find the Nusselt number experimentally and then predicted for the experimental data. There error in Nu predicted lies in between ± 20%. The predicted Nusselt number for twisted tapes was thus found to be: Conclusion The heat transfer enhancement in PTC due to twisted tape inserts has been studied. The primary objective was to develop Nu correlations for plain absorber as well as absorber with twisted tape under realistic non-uniform solar radiation level. This was done by taking good number of data points. 
From the parity plots it has been observed that both correlations match the experimental data with an error of less than 20%. Hence, for PTC analysis, the present correlations are more viable than uniform-heat-flux-based Nu correlations.
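A minimal sketch of the kind of regression described above, fitting Nu = C·Re^m·Pr^n by least squares on the logarithms of the measured data, is given below; the data values, and any additional twist-ratio term that would be needed for the insert correlations, are assumptions for illustration, not the authors' dataset or fitted coefficients.

```python
import numpy as np

# Hypothetical measurements: Re, Pr and experimental Nu for the plain absorber.
Re = np.array([2500, 5000, 9000, 14000, 20000, 25000], dtype=float)
Pr = np.array([22.0, 20.5, 18.0, 16.5, 15.0, 14.2], dtype=float)
Nu = np.array([28.0, 52.0, 86.0, 120.0, 158.0, 185.0], dtype=float)

# log(Nu) = log(C) + m*log(Re) + n*log(Pr)  ->  ordinary least squares.
A = np.column_stack([np.ones_like(Re), np.log(Re), np.log(Pr)])
coef, *_ = np.linalg.lstsq(A, np.log(Nu), rcond=None)
C, m, n = np.exp(coef[0]), coef[1], coef[2]

Nu_pred = C * Re**m * Pr**n
deviation = (Nu_pred - Nu) / Nu * 100.0   # parity-plot style deviation in percent
print(f"C={C:.3f}, m={m:.3f}, n={n:.3f}, max |dev| = {np.abs(deviation).max():.1f}%")
```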
2019-04-16T13:28:29.065Z
2018-06-01T00:00:00.000
{ "year": 2018, "sha1": "dacdb29b0714f20ca8e0b2bbd373efa20d2a5cf2", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/376/1/012034", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "41d727a1c0448b1b3d193f89d125b01a62939500", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
116298763
pes2o/s2orc
v3-fos-license
A Structurally Enhanced, Ergonomically and Human–Computer Interaction Improved Intelligent Seat’s System † Featured Application: The potential application of this seating system lies in the field of manufacturing process for airplane seating system. This paper provides a prototype for the mold processing, manufacturing flow and logistics for industrial engineering. The potential application will improve the early stage prototype iteration, and speed up the product design and manufacturing process. In addition, the concept-driven design process closely integrates human needs, and it can optimize the cost and quality in the early stage design. Abstract: Modern technology advances airplane seat design with better ergonomics and new HCI (human-computer interaction). However, airline companies are not motivated to replace the seat system due to the cost consideration. Hence, a series of re-optimized design in ergonomics and HCI should be carried out by designers. This paper describes a novel intelligent seat’s system, which is designed to be used for the airplanes or similar conditions. This system consists of redesigned ergonomics and HCI compared with original seat’s systems. The mainly redesigned parts are the aesthetics and visual modeling for people to receive visual information, the ergonomics part for people to receive tactile information, new users’ action innovation for people to receive and output information, the redesign of the structure of the system with low weight and cost, and the functional system environment for people to receive information from humans through movement in multiple environments. Structural analysis supports the redesign. The purpose of the redesign is to improve the HCI system with new tech and interaction. Introduction The re-design of airline seats' systems for improved ergonomics and human-computer interaction will be described in this article. As the seat system evolves, quite a lot of new HCI design of the new tech seat system is required to overcome the challenges that emerge. Various companies all over the globe have begun designing new HCI for the new seat system [1]. Microsoft, Google and other US Problems scenarios: Passengers in the automobiles or airplanes with various interactive actions emerged as using high tech instruments. Design tasks: the aesthetics and visual modeling for users to receive visual input, the ergonomics part for users to get tactile information, a new user's action innovation for users to communicate information, the redesign of the structure of the seat system with lower weight and cost, the functional system environment for users to get information from users through movements in various environments. Design Concept The mind mapping below illustrates the design process for this design goal with complexity. Some design sketches showing the design concepts also help to make the design more useful and more interesting. After mind mapping, the following points are concluded and summarized here: redesigned ergonomics and HCI in original seat's systems, the aesthetics and visual modeling for people to receive visual information, the ergonomics part for person to receive tactile information [4], a new user's action innovation for people to receive and output information, the redesign of the structure of the system with low weight and cost, and the functional system environment for people to receive information from humans through movements in multiple environments. 
The basic design concept and ideas can be seen in Figure 1.

Aesthetics-Formal and Visual Transmission of Information from Humans

To improve the seat system, the first step after forming the concepts is to change the form and visual shape, so that people have a better impression of the seat system.

Design Composition and Curve Coordination

The famous artist Kandinsky's theory [5] laid the foundation for the aesthetic design of the seat system. When people see the shape of a product, it is better for them to feel comfortable and calm. Placing a composition of points and lines within a thorough reconstruction of the product achieves this design goal. The idea of this theory can be seen in Figure 2.

Repeated, equally spaced straight lines are the simplest design scene; they form the basic rhythm, repeated at minute intervals or at longer intervals, as shown in Figure 3. The first iteration aims at increasing the amount of repetition, as in the pictured scene; it is analogous to adding more violins to strengthen the sound of a violin line. The second iteration, beyond the increase in quantity, adds a sense of quality, much like repeating the same section of music after a long pause [5]. In this article, the design is based on Kandinsky's theory, and the goal is to give the seat system the shape shown in Figure 4, with curved coordination and a strong design composition.

Design Semantics

The way the seat system is reconstructed is illustrated here. The new structure is inspired by a common object from daily life: the eggshell. For the embryo, the eggshell is an important structure [6].
Firstly, the eggshell forms a safe chamber for the developing embryo. Secondly, structural protection and a controlled gas medium are also provided by the shell, as Figure 5 shows.
The main idea is to design a new structure that, like the eggshell around the embryo, provides a cozy and secure environment, offering the passenger maximum security and a pleasant feeling for the entire journey. The re-designed model includes more modern style elements and more interaction with the passenger. The model is shown in Figure 6.

Color: contrasting colors form the basis of modern design. Small details such as the handles combine black and white to add an element of mystery without being boring. In artistic terms, this is known as the penetration of point, line, and plane.

Shape: eggshells are among the most natural, beautiful, and secure structures. Ergonomics factors define the main space of the seat, and passengers' behavior is used as one of the decision factors. The seat handles differ from traditional ones: not only do they fit the shape and movements of the arms, they can also change shape. The cushion, redesigned to simulate a circular "Yin-Yang" form, provides an element of traditionalism. The back rest is also a combination of aesthetics and ergonomics, providing the best of both worlds in terms of passengers' comfort and space.

Hollow spaces: through research, the area of the backseat is optimized to cater to the main pressure points, so the back rest and seat bottom are made hollow [6]. This does not reduce the strength and stiffness of the material but reduces the material cost. Furthermore, the cut is designed so that it not only looks aesthetically striking but also outlines the shape of the body and allows the passengers to feel at one with the space.

Crafts: designers are born from craftsmen, so the act of crafting is incorporated into the design. Many design references on molding and materials were consulted in order to find the simplest and most cost-efficient way to produce the seat.
The result is that the most efficient way is to assemble the various parts like a Lego set. This repetitive process makes it fast and easy for factories to produce the seat and easier for cabin crew to clean it. Furthermore, the black surface of the cushion keeps it from looking dirty. All in all, the shape of an egg provides an integral feeling: beautiful inside and out, and full of the inspiration of modern internationalist design.

Ergonomics-Sensorial Transmission of Information from Humans

After the aesthetic part, careful design is needed for the sensorial transmission of information from humans. By analyzing body posture and the pressure points on human bodies subjected to prolonged seating, the design is considered in terms of ergonomics. The positions of the sensors are illustrated in Figure 7.
To improve comfort, the seating system automatically adjusts its position to better fit the passenger's situation. Sensors installed inside the back cushion monitor physical signals from the passengers. Certain body movements, or the absence of movement, may indicate that the passenger has fallen asleep, which is when the seat needs to be adjusted. Furthermore, cardiorespiratory signals such as heart rate and respiration are also monitored to offer more options. Specifically, in order to extract the proposed cardiorespiratory information from the raw signals, adaptive data processing algorithms need to be developed and used, both for the PVDF (polyvinylidene fluoride) film sensors and for the conductive fabric sensors. The data from this research can be seen in Tables 1 and 2. Simple and robust data processing algorithms are described and verified accordingly, based on the cardiorespiratory information extracted from the PVDF sensor output and the conductive fabric sensor output.
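The adaptive algorithms themselves are not spelled out in the text, so the following is only a minimal illustrative sketch of the kind of processing involved: separating the respiration and heartbeat components of a raw seat-sensor signal with fixed band-pass filters and counting peaks to estimate rates. The sampling rate, band edges, and synthetic test signal are assumptions, not values taken from Tables 1 and 2.

```python
# Minimal sketch (not the authors' adaptive algorithm): separating respiration and
# heartbeat components from a raw seat-sensor signal with fixed band-pass filters.
# The sampling rate, band edges, and the synthetic test signal are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 100.0                      # assumed sampling rate of the PVDF film sensor, Hz
t = np.arange(0, 60, 1 / FS)    # one minute of data

# Synthetic stand-in for a raw PVDF output: respiration (~0.25 Hz), heart (~1.2 Hz), noise.
raw = (1.0 * np.sin(2 * np.pi * 0.25 * t)
       + 0.2 * np.sin(2 * np.pi * 1.2 * t)
       + 0.05 * np.random.randn(t.size))

def bandpass(x, lo, hi, fs, order=3):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

respiration = bandpass(raw, 0.1, 0.5, FS)   # typical breathing band
cardiac = bandpass(raw, 0.8, 2.5, FS)       # typical resting heart-rate band

# Rate estimates from peak counting over the one-minute window (peaks per minute).
resp_peaks, _ = find_peaks(respiration, distance=FS / 0.5)
card_peaks, _ = find_peaks(cardiac, distance=FS / 2.5)
print(f"respiration ~ {len(resp_peaks)} breaths/min, heart ~ {len(card_peaks)} beats/min")
```

In a real seat, the band edges would have to adapt to the passenger and to motion artefacts, which is presumably where the adaptive processing mentioned above comes in.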
From the data above, the ideal seat height, construction area, and support areas can be optimized. All of these need attention, along with some other information that should not be neglected. Firstly, the passenger's back is in its most comfortable position when it is not restricted to a limited space; this is when the lumbar vertebrae are in a lordosis position (lordosis is the inward curvature of a portion of the lumbar and cervical vertebral column). At the same time, the back and bottom of the seat should be strong enough to support the weight of the passenger. Generally, the force point of an adult's lumbar vertebrae is about 23 to 26 cm above the seat. For this reason, and according to these measurements, the product is designed so that the support sits higher than the force point and is strong enough to support the back when the passenger leans back. Two other important points should be positioned at the shoulder rest and the lumbar rest; however, most of the time the lumbar region is the main part exerting the force. The back rest can range from 48 to 63 cm in width and 35 to 48 cm in height.

Secondly, only a small area of the bottom is in contact with the seat when the passenger is seated. Detailed calculations suggest that nearly 75% of the passenger's body weight is supported by around 25 cm² of contact area with the cushion. Not only does this produce a large stress on the back, causing soreness in the backbone area, it is also the reason for pain and aches after prolonged sitting. Adding a cushion at the back greatly lowers the pressure because the contact area is increased; in this case, the cushion acts as a support for the proper seating position [9].

Lastly, observation and surveys show that passengers do not remain rigid in their seats in the same position for a prolonged period. Thus, to better utilize the material, it is logical and cost-efficient to reduce the amount of material used in the seat-back components while still maintaining the passenger's comfort throughout the entire trip.

Size of Seats

Based on the research, the seated dimensions of both males and females were collected to maximize the comfort of the majority of passengers; the detailed data are listed in Table 3. The data shown in Table 4 provide the basis for the modeling work, as the requirement is to meet the population distribution, taking 50% of the passengers as men. The 50th percentile of each piece of data is used in what follows.

Conclusion (cushion area): the ischium endures the largest pressure, which decreases gradually as it spreads outwards; the smallest pressure comes from the thighs. The back rest can range from 48 to 63 cm in width and 35 to 48 cm in height; the shoulder blade and the lumbar vertebrae are the two main supporting points, and pressure gradually reduces outwards from these points. The right and left back seat experience the same pressure. The seat should suit the shapes of the back, the legs, and the underside of the thighs. There are two supporting points: one between the fifth and sixth thoracic vertebrae, the other on the waist.
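The "detailed calculations" behind these figures are not reproduced in the paper, but their order of magnitude can be checked with a short script. The 75% weight share, the 25 cm² contact patch, and the 23-26 cm lumbar height come from the text above; the 75 kg passenger mass and the cushioned contact area are illustrative assumptions.

```python
# Back-of-the-envelope check of the seating-pressure figures quoted above.
# The 75% body-weight share and the 25 cm^2 contact patch come from the text;
# the 75 kg passenger mass and the cushioned contact area are illustrative assumptions.
G = 9.81                       # gravitational acceleration, m/s^2

def seat_pressure(mass_kg, weight_share, contact_area_cm2):
    """Average pressure (kPa) on a contact patch carrying a share of body weight."""
    force_n = mass_kg * G * weight_share
    area_m2 = contact_area_cm2 * 1e-4
    return force_n / area_m2 / 1e3

hard_seat = seat_pressure(75, 0.75, 25)        # small ischial contact patch, no cushion
with_cushion = seat_pressure(75, 0.75, 400)    # assumed enlarged contact area with a cushion
print(f"hard seat: {hard_seat:.0f} kPa, cushioned: {with_cushion:.0f} kPa")

# Recommended back-rest envelope from the text: width 48-63 cm, height 35-48 cm,
# lumbar support centred roughly 23-26 cm above the seat pan.
lumbar_height_cm = (23 + 26) / 2
print(f"place lumbar support at about {lumbar_height_cm:.1f} cm above the seat pan")
```

The roughly order-of-magnitude drop in pressure when the contact area is enlarged is what motivates the shaped cushion and back rest described above.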
Interaction-Transmission and Reception of Information from Humans through Movements

To equip the system with a smarter HCI and make it more useful, the redesigned system needs to be good in terms of both ergonomics and HCI. New technology currently affects our lives in various ways. For example, inside a conference room at Google's San Francisco office, a screen at the front displays the raw output of a tiny sensor placed just below one's hand. If someone moves a thumb up and down against a finger, a blue dot on the screen moves along with the finger each time [12]. Flipping to a new demo, a circle can be drawn with the thumb: the faster the thumb moves, the faster the blue dot spins. It is a tiny chip, and one that will soon be remarkably easy to add to nearly any device: inside the frame of a VR (virtual reality) helmet, the bezel of a smart watch, or the chassis of a phone [6]. Some devices with such chips are shown in Figure 8.

With the right tracking system, a light switch can be flipped without touching the switch, or the volume of the speakers can be turned up without touching them, just by sliding a finger, as easily as swiping a touchscreen, twisting a knob on a stereo, or scrolling a finger around an iPod's touch wheel. The gestures do not have to be huge and exaggerated; one does not need to wave like a madman in front of a Kinect [6]. They can be as small as they are in real life. Right now, gesture technology has depended on exaggeration. Camera-based systems, such as Leap Motion (2010, Leap Motion Inc., San Francisco, CA, USA) or Intel's RealSense (2015, Intel, Santa Clara, CA, USA), are big and slow, and cannot see through walls or at night. Capacitive sensors are great for touch but cannot see in three dimensions; when users cross their fingers, the sensing falls apart.
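As a rough illustration of the interaction style described above (a small finger motion whose speed drives how fast an on-screen dial spins), the sketch below maps an already-tracked one-dimensional finger displacement to a dial angle. It does not model the sensing chip or any vendor API; the sampling rate, gain, and dead-band threshold are assumptions.

```python
# Toy sketch of the interaction described above: the speed of a small finger
# motion drives how fast an on-screen dial turns. It assumes the sensing hardware
# already delivers a 1D finger-displacement signal; the signal, gain, and
# dead-band threshold are illustrative assumptions, not a real chip's API.
import numpy as np

FS = 50.0                         # assumed sensor update rate, Hz
t = np.arange(0, 4, 1 / FS)
# Synthetic displacement (mm): slow rubbing for 2 s, then faster rubbing.
displacement = np.where(t < 2,
                        2 * np.sin(2 * np.pi * 1 * t),
                        2 * np.sin(2 * np.pi * 3 * t))

def dial_angle(displacement_mm, fs, gain=5.0, dead_band=5.0):
    """Integrate |finger speed| into a dial angle; ignore speeds below the dead band."""
    speed = np.abs(np.gradient(displacement_mm) * fs)   # mm/s
    speed[speed < dead_band] = 0.0                       # suppress sensor jitter
    return np.cumsum(speed) * gain / fs                  # accumulated angle, degrees

angle = dial_angle(displacement, FS)
print(f"dial turned {angle[int(2 * FS)]:.0f} deg in the slow phase, "
      f"{angle[-1] - angle[int(2 * FS)]:.0f} deg in the fast phase")
```

The same faster-motion-means-faster-response mapping could drive seat controls such as recline speed or reading-light brightness without any physical knob.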
Structure, Weight, and Cost-Reception of Information from Humans through Movements in Single Environments

Another part of the design concerns the structure, weight, and cost of the seat system.

Structural Mechanics

Based on a patent study, the product can be compared closely with existing designs, and the current design can be modified to fit the new needs. The following designs show the idea of improving the strength of existing seat skeleton structures. Generally, a hollow tubular structure should be made in one piece, without additional components such as screws. Similarly, when building a house, each pillar must be one piece rather than broken into many pieces, because a shift in one of the parts would result in the collapse of the whole house [11]. The comparison between the hollow design and an ordinary seat is shown in Figure 9.

A supporting planar plate with a semi-hollow cut allows the whole structure to endure larger stress through shear deformation, with about the same amount of material used. In this way, the lateral supporting structure makes better use of the material, saving material and making the design more cost-effective. A detailed structural analysis of the planar plate follows to illustrate the mechanism of this design; a general idea can be found in Figure 10.

Adding additional support components, such as screws, to make all the plates behave as a one-piece component helps to increase the overall endurance under 3D shear deformation at the areas where most of the planar plates overlap. In the case of potential shear displacement between the plates, if all the components are bonded together, they behave as a stronger whole and are less likely to undergo large displacement and disassembly than the simple structure [13]. Simulations supporting this method of adding screws are shown in the following sections. The simple structure is shown in Figure 11 and the redesigned structure in Figure 12.

A spider-shaped seat-bottom bifurcation spreads out the center of gravity and distributes it over a larger area, thereby reducing the force per unit area; however, it must be fixed to the floor to ensure stability [14]. The pillars of each arched (or triangular) bend maximize the force that can be supported. Patent US4375300 has also shown that the triangular hinge is the best structure to support the maximum truss load [15].
A traditional airline seat skeleton [16] has two support points, mainly at the ends of each side, with small hinges supporting a large skeletal structure [8]. The traditional and redesigned structures are shown in Figures 13 and 14. The redesign is the complete opposite of the traditional structure [17] shown in Figure 15: it minimizes the area of the skeletal structure while providing a larger support area. Not only does this reduce the weight of the support, it also provides a safer seat structure, because the support area is extended into a long beam connecting all the chairs so that the pressure from the passengers is distributed equally over a greater area. By changing the traditional support-leg base to a stent-like support, the load that can be carried is not affected, while the amount of material used is reduced and passengers get more space.

Material Mechanics

There are air holes, shown in Figure 16, to minimize the volume of material used, while the strength does not differ much [18]. This minimizes the usage of material and thus saves cost.
More holes are placed in the back support rather than the seat support, as the main pressure point is on the lower seat area. An oblique triangle and hollow tubular structure [19] carries about the same load as a solid structure without changing the comfort of the passenger in the movable mechanism. It reduces the amount of material used, and hence the size and weight of the entire structure, as shown in Figure 17.

Supporting Planar Plate

A Finite Element Analysis is run to compare supporting planar plates with and without a semi-hollow cut; the hollow cut helps to strengthen the structure. The analysis is run with two supporting plates containing about the same amount of material. A moment is applied to mimic the load of a human sitting on a normal chair, as shown in the stress distribution plots in Figures 18 and 19. When the hollow-cut method is applied, the maximum stress of the uniformly cut plate is around twice that of the hollow plate. The smaller stress indicates that the design with the hollow cut is structurally stronger than the uniform cut. For a similar maximum design stress, the hollow design saves a great deal of material; alternatively, with the same amount of material, the hollow design can withstand a much larger load, which increases the safety of the seat system. The reason for the stronger structure is that, although material is removed from the middle of the plate, the edges of the plate are reinforced relative to the middle area, and the edge area plays the more significant role in the structure. That is why, for the same amount of material, the bending stiffness of the hollow plate is much larger, which helps the whole structure endure a larger load.
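The same effect can be seen in a simple hand calculation, independent of the FEA above: for a fixed amount of material, moving it away from the neutral axis raises the second moment of area, and hence the bending stiffness. The dimensions below are illustrative assumptions, not the plate geometry used in the simulation.

```python
# Hand calculation (not the FEA above) of why moving material to the edges of the
# supporting plate raises bending stiffness: second moment of area for a uniform
# strip versus an edge-concentrated ("hollow cut") section of equal cross-section.
# All dimensions are illustrative assumptions.

def I_solid_rect(b, h):
    """Second moment of area of a solid b x h rectangle about its centroid (mm^4)."""
    return b * h**3 / 12

def I_two_flanges(b, t, H):
    """Two b x t flanges at the top and bottom of an envelope of depth H (mm^4)."""
    d = (H - t) / 2                      # flange centroid distance from the neutral axis
    return 2 * (b * t**3 / 12 + b * t * d**2)

b = 30.0                                  # plate width, mm
solid = I_solid_rect(b, 20.0)             # uniform plate, depth 20 mm
hollow = I_two_flanges(b, 10.0, 40.0)     # same area (2 x 10 mm) pushed out to a 40 mm envelope

print(f"uniform: {solid:.0f} mm^4, edge-concentrated: {hollow:.0f} mm^4, "
      f"ratio ~ {hollow / solid:.1f}x")   # bending stiffness scales with I for a given span
```

With equal cross-sectional area, the edge-concentrated section is several times stiffer in bending, which is consistent with the FEA finding that the hollow-cut plate shows roughly half the maximum stress.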
Screws Connecting the Supporting Plates

To implement the hollow-cut design of the supporting parts, multi-layer plates are combined to form the supporting part. Screws are added where most of the planar plates overlap, to bond all the layers together. The screws not only physically connect all the layers but are also structurally important. When shear force is applied to the seats, the screws play an essential role in withstanding the shear deformation, and it is most efficient to place the screws at the spots with the most layers of plates. When the screws are positioned at the places with the maximum number of plates, the shear stress is distributed more uniformly across all the plates, which in effect makes the entire supporting structure stronger. In addition, the more uniformly distributed shear stress also applies to the screws themselves, making the screws less likely to break.

A simulation is run for the screws; the results can be seen in Figures 20 and 21. For comparison, both the two-layer and the four-layer plate configurations are analyzed: on the left is the stress of the screw located in the two-layer plates, on the right the four-layer case, with the same shear displacements applied to the plates. The stress plots show that when the screw is located at the spots with the most layers of plates, the stress on the screw is much smaller. Specifically, the two-layer case results in stresses around one and a half to two times those of the four-layer case. In extreme cases, the screws break first and the plates then disassemble. In the new design, the safety of the screws is improved, and hence the overall stiffness of the entire supporting component is higher under shear load.
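A very rough way to see why more overlapping layers help is the idealized pin-in-shear model below: the transferred shear force is shared over the shear planes the screw passes through, so a screw through four layers (three shear planes) sees a lower average stress than one through two layers (one shear plane). This simple model overstates the benefit relative to the roughly 1.5-2x ratio reported from the simulation, because it ignores screw bending and uneven load sharing; the load and screw diameter are assumptions.

```python
# Idealised pin-in-shear estimate (not the simulation above) of why a screw placed
# where more plate layers overlap sees lower stress: the transferred shear force is
# shared across more shear planes. Screw diameter and load are assumptions.
import math

def screw_shear_stress(total_shear_n, n_layers, screw_diameter_mm):
    """Average shear stress (MPa) on the screw shank for a multi-layer lap joint."""
    shear_planes = n_layers - 1                      # plate interfaces that transfer load
    area_mm2 = math.pi * screw_diameter_mm**2 / 4
    return total_shear_n / (shear_planes * area_mm2)

V, d = 2000.0, 5.0                                   # assumed shear load (N) and screw diameter (mm)
two_layer = screw_shear_stress(V, 2, d)
four_layer = screw_shear_stress(V, 4, d)
print(f"2-layer joint: {two_layer:.0f} MPa, 4-layer joint: {four_layer:.0f} MPa "
      f"(ratio {two_layer / four_layer:.1f}x)")
```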
Triangular Hinge

An analysis comparing the triangular hinge with a simple straight hinge is also carried out; the results are shown in Figure 22. For the truss structure, the triangular hinge enhances the overall stiffness, so the stability is improved. The triangular hinge also provides more contact between the chair and the ground, distributing some of the load from the seat. A simple straight hinge, as can be seen in Figure 23, performs worse structurally than the triangular truss, since a single-column support is not as stable; otherwise, it would need to be much thicker in order to provide the same stiffness, which might not be economically efficient. It is also challenging in the junction area, where high stress is concentrated. Different types of stress are compared to verify that the triangular hinge is more structurally reliable for the truss. As Figure 24 shows, the highly concentrated stress is much smaller for the triangular hinge, owing to the more evenly distributed loading, so the triangular hinge can withstand higher loading in applications.
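The advantage of triangulation can also be estimated with textbook formulas rather than FEA: a single vertical member resists a lateral load at its top in bending (cantilever stiffness 3EI/L³), while a triangulated pair of struts resists it mainly in axial tension and compression, which is far stiffer for the same tube section. The tube size, material, height, and strut angle below are illustrative assumptions, and the triangulated case uses two struts, i.e. roughly twice the material of the single column.

```python
# Simple stiffness comparison (not the FEA above) of a single straight support column
# versus a triangulated pair of struts carrying the same lateral load at the top.
# Tube size, material, height, and strut angle are illustrative assumptions.
import math

E = 70_000.0                                   # aluminium Young's modulus, N/mm^2
D, d = 40.0, 36.0                              # tube outer / inner diameter, mm
H = 400.0                                      # support height, mm

A = math.pi / 4 * (D**2 - d**2)                # tube cross-section, mm^2
I = math.pi / 64 * (D**4 - d**4)               # second moment of area, mm^4

# Single vertical column, fixed at the floor, lateral load at the top (cantilever bending).
k_column = 3 * E * I / H**3                    # N/mm

# Two pin-ended struts, each inclined `theta` from vertical, loaded axially (truss action).
theta = math.radians(30)
L_strut = H / math.cos(theta)
k_triangle = 2 * (E * A / L_strut) * math.sin(theta) ** 2   # N/mm

print(f"single column: {k_column:.0f} N/mm, triangulated: {k_triangle:.0f} N/mm "
      f"(~{k_triangle / k_column:.0f}x stiffer laterally)")
```

Even allowing for the doubled material, the axial load path of the triangulated base is orders of magnitude stiffer laterally, which matches the lower stress concentrations seen in Figure 24.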
Chair Support

Simulation has also shown that, with the redesign, the stress in the chair support is distributed more uniformly, and the stress is also reduced because the stiffness of the entire structure is enhanced by the redesign; Figure 25 illustrates this. Thus, the area of the skeletal structure can be minimized while achieving a larger support area, and the smaller supporting base also helps to provide more space for passengers on the plane. Different types of stress are investigated, and all results show that the support part of the redesign performs better than the traditional design. When adjacent chairs are connected, the whole supporting structure is united as one entity, which provides a stronger base for the seating, as can be seen in Figure 26.

Seat

As for the seat itself, the Finite Element Analysis shows that the designed chair provides safe protection for passengers. Stress is distributed more uniformly over the chair than in traditional straight chairs. Firstly, this increases the structural safety of the whole chair, just as the eggshell protects the embryo. Secondly, the chair is more comfortable for passengers. In traditional chairs, most stress is concentrated in a small area, and a passenger easily feels uncomfortable if most of the pressure is applied to one area of the back; in that case, passengers need to adjust their posture very often. The design here makes the shape of the chair fit the human back, adapting in some measure to the human skeletal structure and providing support to the spine, especially the lower back. Hence, comfort is much improved. The plot can be seen in Figure 27.
Different loading scenarios are applied to the seats to compare the structural performance of the redesign. The stress contour plot in Figure 28 shows that the redesign distributes the load evenly over the entire seat. Structurally, in the case of extreme loading, the redesigned seat is less likely to break because of the even stress distribution. In addition, compared with the straight seat, the redesign better fits the curve of the human body, so passengers have a larger contact area between the body and the seat and the whole back is supported; from an ergonomics point of view, the redesign is therefore better for the human body. As a result, both the structural performance and the comfort are improved at the same time.

Structural Mechanics

Reducing material in unnecessary areas is useful for saving material cost while providing about the same level of structural support. More importantly, the weight of the heaviest part of the structure can be decreased. As Figure 29 shows, the hollow structure saves material without sacrificing strength.
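A quick hand calculation, with assumed dimensions rather than the paper's data, illustrates why hollow members save weight: a thin-walled tube matching the bending stiffness of a solid rod needs only a fraction of the cross-sectional area, and hence of the mass.

```python
# Quick check (assumed dimensions, not the paper's data) of the claim that a hollow
# tubular member saves material without sacrificing strength: a thin-walled tube
# matching the bending stiffness of a solid rod needs far less cross-sectional area.
import math

def tube_I(R, t):
    """Second moment of area of a circular tube, outer radius R, wall thickness t (mm^4)."""
    return math.pi / 4 * (R**4 - (R - t) ** 4)

def solid_radius_for_I(I):
    """Radius of a solid circular rod with the given second moment of area (mm)."""
    return (4 * I / math.pi) ** 0.25

R, t = 20.0, 2.0                                   # assumed tube: 40 mm OD, 2 mm wall
I = tube_I(R, t)
r_solid = solid_radius_for_I(I)                    # solid rod with the same bending stiffness

area_tube = math.pi * (R**2 - (R - t) ** 2)
area_solid = math.pi * r_solid**2
print(f"equal bending stiffness: tube area {area_tube:.0f} mm^2 vs solid {area_solid:.0f} mm^2 "
      f"({area_tube / area_solid:.0%} of the material)")
```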
Material Mechanics

Composite materials are used in aircraft to achieve significant weight reductions compared with conventional seat systems [18]. The seat assembly consists of a lightweight composite support structure, a lightweight composite seat base, and a seat-back assembly with a lightweight inner frame. The takeaway from this patent is that composite materials are one of the fields requiring further research. The materials known as advanced polymer matrix composites have the potential to be the best choice for constructing the skeletal structure with lighter weight; Figure 30 gives a general idea of the material.

The sponge foam cushion has air holes inside. The shape of the cushion changes according to the back shape of each user, maximizing comfort while minimizing the materials used [20], as shown in Figures 31 and 32. The range of variation of the inflatable mattress has been calculated to ensure that the basic shape provides ergonomic use [18]. The seats can be supported by carbon fibre tubes and do not require additional metal support structures [13], as shown in Figure 32 (US Patent 007,954,762 B2, a carbon fibre tube replacing a metal structure [13]). The upholstered carbon fibre tube support and the cushion foam structure are constructed with half of the sponge (foam) [14] in order to achieve a comfortable seating position while saving material, as shown in Figure 33. The forward thigh support structure is higher than the back seat support, as this provides more leg space for the users.
Using a Y-shaped (honeycomb) support for the back structure also reduces the weight of the whole seat structure while providing approximately the same weight support [11], as shown in Figure 34. From this, it was concluded that honeycomb material reduces the total weight of the support structure while supporting about the same pressure as the original design: it is both light and strong.

Material Mechanics

The material cost can be optimized by minimizing the usage of material and choosing the most economical yet strongest material, as Figure 35 shows. The cost might be reduced by removing unnecessary material. As shown in the material and structural mechanics sections, it is possible to reduce the material used without greatly reducing the load that the skeleton can hold. The choice of materials for both the exterior and interior components is shown further in the following sections.

Structural Mechanics

On the structural side, by reducing the area of the skeleton support, as Figures 36 and 37 show, both the material usage and the cost can be minimized. As a side note, the strength of the structure should not be sacrificed for the sake of minimizing material cost, as safety is the most crucial factor. Hence, by analyzing all the possible designs for strength support and modifying them to the present needs, a stable, low-cost structure can be created.

Functionality-Reception of Information from Humans through Movements in Multiple Environments

In addition, the new design gives the seat system more functionality through the movement of the human.
The dimensions of the seat design are shown in Figures 38-40.

Conclusions

In this article, an ergonomically and human-computer interaction re-optimized design for airplanes or similar conditions is described. The redesigned aesthetics and visual modeling are highlighted for people to receive visual input. In addition, the ergonomics part is addressed so that a person receives tactile signals, and new user-action innovations are shown for people to communicate. The redesigned structure of the system features greater strength with low weight and cost, and the functional system environment is illustrated for the system to receive information from human movement. Structural analysis is detailed to support the design. It can be seen that, with new technology, the ergonomics and the interactive HCI system of the seat are improved.

In this paper, human-machine engineering and interaction and their relationship in seating design are discussed from different angles, including the design methodology. The methodology starts from concept-driven design. In the early-stage design concepts, elements are extracted from certain objects and applied to the seating system, and multiple objects are then integrated to construct the new characteristics of the product. This methodology combines the features of multiple objects in the human-machine interaction. In order to meet certain user needs, the essential factors are taken from objects with the required functions and integrated into the product to implement the function, and the design goal is thereby met.
In the early stage design concepts, some elements from some certain objects are extracted and applied in the seating system, and then integrate multiple objects to construct the new characteristics of the product. This methodology combines the features of multiple objects in the human-machine interaction. In order to meet some certain user needs, the essential factors are taken from objects with the functions needed, and integrated into the product to implement the function, and then the design goal is met. In this paper, ergonomics in the seating design are discussed from multiple aspects, including methodology of design constituted. This methodology is generally identified with design deconstruction and design reconstruction. By deconstructing the key factors of points, lines, and surfaces, and reconstructing them again, some more sophisticated designs can be obtained. In addition, from the rapid iteration of tests or vote-type design, some effective schemes can be developed, with improved features from original scheme. Furthermore, these features can effectively Conclusions In this article, an ergonomically and human-computer interaction re-optimized design used for airplanes or similar conditions is described. The redesigned aesthetics and visual modeling part is highlighted for people to receive visual input. In addition, the ergonomics part is addressed, for a person to get a tactile signal, and a new user's action innovation is shown for people to communicate, the redesign of the structure of the system features with stronger structure and low weight and cost, and the functional system environment is illustrated for people to get information from human movement. Structural analysis is detailed to support the design. It can be seen that, with new tech, the ergonomics and interaction HCI system is improved for the seat's system. In this paper, human machine engineering and interaction and their relationship in seating design are discussed from different angles, including the design methodology. The methodology starts from concept-driven design. In the early stage design concepts, some elements from some certain objects are extracted and applied in the seating system, and then integrate multiple objects to construct the new characteristics of the product. This methodology combines the features of multiple objects in the human-machine interaction. In order to meet some certain user needs, the essential factors are taken from objects with the functions needed, and integrated into the product to implement the function, and then the design goal is met. In this paper, ergonomics in the seating design are discussed from multiple aspects, including methodology of design constituted. This methodology is generally identified with design deconstruction and design reconstruction. By deconstructing the key factors of points, lines, and surfaces, and reconstructing them again, some more sophisticated designs can be obtained. In addition, from the rapid iteration of tests or vote-type design, some effective schemes can be developed, with improved features from original scheme. Furthermore, these features can effectively affect the user needs when using the seating system. Thereby, the optimization from another methodology in the seating ergonomics can be achieved. Future work may put emphasis on more efficient human-computer interaction and more comfortable user experience for the seats. New material might be explored to provide lighter weight, without decreasing the structural reliability.
2019-04-16T13:27:11.338Z
2017-11-29T00:00:00.000
{ "year": 2017, "sha1": "6cacf4d8bb39f3eb77fdcd0861f06b3c87094310", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2411-9660/1/2/11/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "9b780538e2a59d82ba8dbde0ad2dd08ab2f12e6f", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
220404591
pes2o/s2orc
v3-fos-license
Expressive Interviewing: A Conversational System for Coping with COVID-19
The ongoing COVID-19 pandemic has raised concerns for many regarding personal and public health implications, financial security and economic stability. Alongside many other unprecedented challenges, there are increasing concerns over social isolation and mental health. We introduce Expressive Interviewing, an interview-style conversational system that draws on ideas from motivational interviewing and expressive writing. Expressive Interviewing seeks to encourage users to express their thoughts and feelings through writing by asking them questions about how COVID-19 has impacted their lives. We present relevant aspects of the system's design and implementation as well as quantitative and qualitative analyses of user interactions with the system. In addition, we conduct a comparative evaluation with a general purpose dialogue system for mental health that shows our system's potential in helping users to cope with COVID-19 issues.
Introduction
The COVID-19 pandemic has changed our world in unimaginable ways, dramatically challenging our health system and drastically changing our daily lives. As we learned from recent large-scale analyses that we performed on social media datasets and extensive surveys, many people are currently experiencing increased anxiety, loneliness, depression, concerns for the health of family and themselves, unexpected unemployment, increased child care or homeschooling, and general concern with what the future might look like (http://trackingsocial.life). Research in Expressive Writing (Pennebaker, 1997b) and Motivational Interviewing (Miller and Rollnick, 2012) has shown that even simple interactions where people talk about one particular experience can have significant psychological value. Numerous studies have demonstrated their effectiveness in improving people's mental and physical health (Vine et al., 2020; Pennebaker and Chung, 2011; Resnicow et al., 2017). Both Expressive Writing and Motivational Interviewing rely on the fundamental idea that by putting emotional upheavals into words, one can start to understand them better and therefore gain a sense of agency and coherence of the thoughts and emotions surrounding their experience. In this paper, we introduce a new interview-style dialogue paradigm called Expressive Interviewing that unites strategies from Expressive Writing and Motivational Interviewing through a system that guides an individual to reflect on, express, and better understand their own thoughts and feelings during the pandemic. By encouraging introspection and self-expression, the dialogue aims to reduce stress and anxiety. Our system is currently online at https://expressiveinterviewing.org and available for anyone to try anonymously.
Related Work
Expressive Writing. Expressive writing is a writing paradigm where people are asked to disclose their emotions and thoughts about significant life upheavals. Originally studied in the scope of traumatic experiences (Pennebaker and Beall, 1986), study participants are usually asked to write about an assigned topic for about 15 minutes for one to five consecutive days. Later studies expanded to specific experiences such as losing a job (Spera et al., 1994). Expressive writing has been shown to be effective on both physical and mental health measures by multiple meta-analyses (Frattaroli, 2006), finding its association with drops in physician visits, positive behavioral changes, and long-term mood improvements. No single theory at present explains the cause of its benefits, but it is believed that the process of expressing emotions and constructing a story may play a role for participants in forming a new perspective on their lives (Pennebaker and Chung, 2011).
Motivational Interviewing. Motivational Interviewing (MI) is a counseling technique designed to help people change a desired behavior by leveraging their own values and interests. The approach accepts that many people looking for a change are ambivalent about doing so as they have reasons to both change and sustain the behavior. Therefore, the goal of an MI counselor is to elicit their client's own motivation for changing by asking open questions and reflecting back on the client's statements. MI has been shown to correlate with positive behavior changes in a large variety of client goals, such as weight management (Small et al., 2009), chronic care intervention (Brodie et al., 2008), and substance abuse prevention (D'Amico et al., 2008).
Dialogue Systems. With the development of deep learning techniques, dialogue systems have been applied to a large variety of tasks to meet increasing demands. In recent work, Afzal et al. (2019) built a dialogue-based tutoring system to guide learners through varying levels of content granularity to facilitate a better understanding of content. Henderson et al. (2019) applied a response retrieval approach in restaurant search and booking to enable the users to ask various questions about a restaurant. Ortega et al. (2019) built an open-source dialogue system framework that navigates students through course selection. There are also dialogue system building tools such as Google's Dialogflow and IBM's Watson assistant, which enable numerous dialogue systems for customer service or conversational user interfaces.
Chatbots for Automated Counseling. Two dialogue systems for automated counseling services available on mobile platforms are Wysa and Woebot. These chatbots provide cognitive behavioral therapy with the goal of easing anxiety and depression by allowing users to express their thoughts. A study of Wysa users over three months showed that more active users had significantly improved symptoms of depression (Inkster et al., 2018). Another study shows that young students using Woebot significantly reduced anxiety levels after two weeks of using the conversational agent (Fitzpatrick et al., 2017). These findings suggest a promising benefit of automated counseling for the nonclinical population.
Our system is distinct from Wysa and Woebot in that it is designed specifically for coping with COVID-19 and allows users to write more topic related free-form responses.It asks open-ended questions and encourages users to introspect, and then provides visualized feedback afterward, whereas the others have a conversational logic mainly based on precoded multiple choice options. Expressive Interviewing Our system conducts an interview-style interaction with the users about how the COVID-19 pandemic has been affecting them.The interview consists of several writing prompts in the form of questions about specific issues related to the pandemic.During the interview, the system provides reflective feedback based on the user's answers.After the interaction is concluded, the system presents users with detailed graphical and textual feedback. The system's goal is to encourage users to write as much as possible about themselves, building upon previous findings regarding the psychological value of writing about personal upheavals and the use of reflective listening for behavioral change (Pennebaker, 1997b;Miller and Rollnick, 2012).To achieve this, the system guides the interaction by asking four main open-ended questions.Then, based on users responses, the system provides feedback and asks additional questions whenever appropriate.In order to provide reflective feedback, the system automatically detects the topics being discussed (e.g., work, family) or emotions being felt (e.g., anger, anxiety), and responds with a reflective prompt that asks the user to elaborate or to answer a related question to explore that concept more deeply.For instance, if the system detects work as a topic of interest, it responds with "How has work changed under COVID?What might you be able to do to keep your career moving during these difficult times?" Leading Questions During the formulation of the guiding questions used by our system, we worked closely with our psychology and public health collaborators to identify a set of questions on COVID-19 topics that would motivate individuals to talk about their personal experience with the pandemic.We formulated the following question as the system's conversation starting point: [Major issues] What are the major issues in your life right now, especially in the light of the COVID outbreak? We also formulated three follow-up questions, which were generated after several refining iterations. 6The order of these questions is randomized across users of the system. [Looking Forward] What do you most look forward to doing once the pandemic is over? [Advice to Others] What advice would you give other people about how to cope with any of the issues you are facing? [Grateful] The outbreak has been affecting everyone's life, but people have the amazing ability to find good things even in the most challenging situations.What is something that you have done or experienced recently that you are grateful for? 
Language Understanding and Reflection Strategies
Our system's capability for language understanding relies on identifying words belonging to various lexicons. This simple strategy allowed us to quickly develop a platform upon which we intend to implement a more sophisticated language understanding ability in future work. When a user responds to one of the main prompts, the system looks for words belonging to specific topics and word categories. The system examines the user responses to identify dominant word categories or topics and triggers a reflection from a set of appropriate reflections. If none of these types are matched, it responds with a generic reflection.
The word categories are derived from the LIWC, WordNet-Affect and MPQA lexicons (Pennebaker et al., 2001; Strapparava et al., 2004; Wiebe et al., 2005) and include pronouns (I, we, others), negative emotion (anger, anxiety, and sadness), positive emotion (joy) and positive and negative words. The COVID-19 related topics include finances, health, home, work, family, friends, and politics. Most of the topics are covered by the LIWC lexicon, with the exception of politics. For this category, we use the politics category from the Roget's Thesaurus (Roget, 1911) and add a small number of proper nouns covered in recent news (e.g., Trump, Biden, Fauci, Sanders).
We formulate a set of specific reflections for each word category and topic, which were refined by our psychology and public health collaborators. For instance, if the dominant emotion category is anxiety, the system responds "You mention feelings such as fear and anxiety. What do you think is the best way for people to cope with these feelings?" Initially, we also considered reflections for different types of pronouns, but found that they did not steer the dialogue in a meaningful direction. Instead, we flag responses with dominant use of impersonal pronouns and lack of references to the self and reflect that fact back to the user and further ask them how they are specifically being affected. We also crafted generic reflections to be applicable to a large number of situations though the system does not understand the content of what the user has said (e.g., "I see. Tell me about a time when things were different", and "I hear you. What have you tried in the past that has worked well?").
User Feedback
After the interview, the system provides visual and textual feedback based on the user's responses and provides links to resources (i.e., mental health resources) appropriate given their main concerns. The visual feedback consists of four pie charts showing the relative usage of different word categories, including: discussed topics (work, finance, home, health, family, friends and politics), affect (positive, negative), emotions (anger, sadness, fear, anxiety, joy), and pronouns (I, we, other). The textual feedback includes a comparison with others (to normalize the user's reactions) and interpretations of where the user falls within normalized scales. The system also presents a summary of the most and least discussed topics and how they compare to the average user, along with normalized values for meaningfulness, self-reflection, and emotional tone (using a 0-10 scale), together with textual descriptors for the shown scale values.
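To make the reflection-selection step described in the Language Understanding subsection concrete, the following is a minimal sketch of lexicon-based dominant-category detection. The lexicon entries, category names, and reflection texts here are illustrative assumptions only; the actual system draws its categories from LIWC, WordNet-Affect, MPQA and Roget's Thesaurus, and its reflections were authored with psychology and public health collaborators.

```python
import re
from collections import Counter

# Toy lexicons standing in for the LIWC/WordNet-Affect/MPQA categories.
LEXICONS = {
    "work":    {"job", "work", "career", "boss", "office"},
    "family":  {"family", "kids", "parents", "children"},
    "anxiety": {"worried", "anxious", "nervous", "afraid", "fear"},
}

# One illustrative reflection per category.
REFLECTIONS = {
    "work":    "How has work changed under COVID? What might you be able to do "
               "to keep your career moving during these difficult times?",
    "family":  "What can you do to keep your family resilient during these tough times?",
    "anxiety": "You mention feelings such as fear and anxiety. What do you think is "
               "the best way for people to cope with these feelings?",
}

GENERIC_REFLECTIONS = [
    "I see. Tell me about a time when things were different.",
    "I hear you. What have you tried in the past that has worked well?",
]

def select_reflection(user_text: str) -> str:
    """Count lexicon hits per category and return a reflection for the
    dominant category, falling back to a generic reflection."""
    tokens = re.findall(r"[a-z']+", user_text.lower())
    counts = Counter({cat: sum(t in words for t in tokens)
                      for cat, words in LEXICONS.items()})
    category, hits = counts.most_common(1)[0]
    if hits == 0:
        return GENERIC_REFLECTIONS[0]  # the real system picks one at random
    return REFLECTIONS[category]

print(select_reflection("I'm worried about my job and feel anxious most days."))
```

In this sketch the dominant category is simply the one with the most lexicon hits; tie-breaking and the random choice among generic reflections are simplified relative to the deployed system.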
These metrics are inspired by previous work on expressive writing and represent the self-reported meaningfulness, usage of self-referring pronouns, and the difference in positive and negative word usage (Pennebaker, 1997a). Finally, the system provides relevant resources for further exploration (e.g., for the work topic it lists external links to COVID related job resources and safety practices).
Online Interface
The system is implemented as a web interface so it is accessible and easy to use. The interface is built with the Django platform and jQuery and uses Python on the backend (Django Software Foundation, 2019). Before the interaction, users are asked to report on a 1-7 scale: (1) [Life satisfaction] how satisfied they are with their life in general, and (2) [Stress_before] what is their level of stress. The user then proceeds to the conversational interaction with our system. After the interaction, the user is asked again about (3) [Stress_after] what is their level of stress; (4) [Personal] how personal their interaction was; and (5) [Meaningful] how meaningful their interaction was. Once this is submitted, the user can proceed to the feedback page and view details about what they wrote and how their interaction compares to a sample of recent users. The user is finally presented with a list of resources triggered by the topics discussed.
We made an effort to make our system appear human-like to make users more comfortable while interacting with it, although this can vary for different individuals. In future work, we hope to explore individual personas and more sophisticated rapport building techniques. We named our dialogue agent 'C.P.', which stands for Computer Program. This name acknowledges that the user is interacting with a computer, while at the same time it makes the system more human by assigning it a name. When responding to the user, C.P. pauses for a few seconds as if it is thinking and then proceeds to type a response one letter at a time with a low probability of making typos - similarly to how human users would type.
Analysis of User Interactions
After the system was launched (and up to when we conducted this analysis), we had 174 users interact with the system. We analyze these interactions to evaluate system usefulness, user engagement, and reflection effectiveness.
System Usefulness. We examine the system's ability to help users cope with COVID-19 related issues by analyzing the different ratings provided by users before and after their interaction with C.P. Throughout this discussion, we use ∆Stress to indicate how the user's stress rating differs before and after the interaction: ∆Stress = Stress_after - Stress_before. Negative values for ∆Stress are therefore an indicator of stress reduction, whereas positive values for ∆Stress reflect an increase in stress. We start by measuring the Spearman correlation between the different ratings for the 174 interactions with C.P. Results are shown in Table 1. The strongest correlation we observe is between the personal and meaningful ratings, suggesting that interactions that are more meaningful appear to feel more personal, or vice versa. We also observe a strong negative correlation between ∆Stress and the meaningfulness of the interaction, suggesting that the interactions that the users found to be meaningful are associated with a reduction in stress.
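As a rough illustration of the ratings analysis above (not the authors' actual analysis code), the ∆Stress computation and the Spearman correlations could be reproduced along these lines, assuming a small, made-up ratings table with hypothetical column names:

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical pre/post ratings; values and column names are illustrative only.
ratings = pd.DataFrame({
    "stress_before": [5, 6, 4, 7, 3, 6],
    "stress_after":  [4, 5, 4, 5, 3, 4],
    "meaningful":    [6, 7, 3, 6, 2, 7],
    "personal":      [5, 6, 4, 6, 3, 6],
})

# Delta_Stress = Stress_after - Stress_before; negative values mean stress went down.
ratings["delta_stress"] = ratings["stress_after"] - ratings["stress_before"]

# Spearman rank correlations between pairs of ratings, in the spirit of Table 1.
for a, b in [("personal", "meaningful"), ("delta_stress", "meaningful")]:
    rho, p = spearmanr(ratings[a], ratings[b])
    print(f"{a} vs {b}: rho={rho:.2f}, p={p:.3f}")
```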
User engagement. We examine user engagement by analyzing the time users spend in the interaction and the number of words they write throughout the session. Figure 1 shows histograms of the session lengths in the number of words used by the user and of the session duration in seconds. The rightmost column of Table 2 shows Spearman correlation coefficients between user ratings and the length and duration of the sessions. We find a significant negative correlation between Stress_before and Stress_after with session duration and number of words, suggesting an association between user engagement and lower stress. There is also a weak negative correlation between duration of session and reduction in stress (∆Stress). We also investigate if there is a relationship between the pre- and post-session ratings and how engaged a user was with each prompt in terms of length of and duration in writing their response. Table 2 shows Spearman correlation coefficients for these relationships. It appears that Life Satisfaction has no correlation with the length of any prompt response except a potentially weak negative correlation with length on the Major Issues prompt (p = 0.052). A lower rating may relate to having more personal challenges to write about.
Stress_before has a weak negative correlation with the number of words used and the duration spent in the response to Looking Forward. Higher stress may relate to present concerns, which may make one less inclined to spend time thinking and writing about positive aspects of their future than someone with less stress. We presume this could be the case for the Grateful prompt, which likewise correlates weakly and negatively with Stress_before.
Stress_after has a negative correlation with the duration spent on every prompt response except for the time spent on Major Issues. This could be a reflection of the fact that those who have a lot to write about major issues in their life also incur high levels of stress.
The Personal rating shows no correlations with the duration spent on any of the responses, except potentially Advice to Others (p = 0.074). We do observe weak negative correlations between Personal ratings and response lengths on Major Issues and Looking Forward, and potentially on Grateful (p = 0.054) and Advice to Others (p = 0.08). Perhaps if a user writes more, there is a greater expectation for more personal reflections. We discuss engagement related to reflections more deeply in the next section.
The Meaningful rating shows weak negative correlations with length on Major Issues, Advice to Others, and possibly on Grateful (p = 0.052) and Looking Forward (p = 0.062). We do not observe a significant correlation with duration on Major Issues or Grateful, but we do observe positive correlations between duration and Looking Forward and Advice to Others. Users who spend more time thinking about advice they would give others facing their issues may find the interaction more meaningful, and may experience benefits having reflected on their agency in managing their challenges.
Reflection Effectiveness. To investigate the effectiveness of Expressive Interviewing reflections, we compare the reflections that were triggered for users whose stress decreased to the reflections that were triggered for the users whose stress increased. For each of these user groups, we compute the dominance of each reflection as the proportion of times it was triggered out of all reflections triggered. In Figure 2, we compare the dominance of each reflection across these user groups by dividing the reflection dominance in the decreased-stress group by that of the increased-stress group.
Figure 2: The dominance of each reflection triggered for users whose stress decreased divided by each reflection's dominance for users whose stress increased. Scores above 1 (red line) correspond to a decrease in stress; scores below 1 correspond to an increase in stress. See Table 3 for sample reflections, including the GENeric reflections.
Importantly, we observe that all emotion reflections and more topic reflections were triggered at a higher rate for users whose stress decreased, whereas more generic reflections were triggered at a higher rate for users whose stress increased. While we do not presume that increased stress was due to generic reflections, the correspondence between emotion and topic reflections with stress reduction aligns with expectations of effective reflections from Motivational Interviewing: generic reflections and specific reflections resemble simple reflections and complex reflections, respectively, as referred to in Motivational Interviewing. While both types of reflections serve a purpose, complex reflections both communicate an understanding of what the client has said and also contribute an additional layer of understanding or a new interpretation for the user, whereas simple reflections focus on the former (Rollnick and Allison, 2004).
In qualitatively analyzing the instances where generic reflections were triggered, we observe that contextual appropriateness seems to be the best indicator of their success (in terms of ability to elicit a deeper thought, feeling, or interpretation) given that the user was invested in the experience. As these generic reflections are selected at random, their contextual appropriateness was inconsistent, illuminating the scenarios in which they are more or less appropriate. For instance, out of the seven times the reflection "Interesting to hear that. How does what you say relate to your values?" was triggered for the increased-stress users, one user expanded on their previous message, one expressed confusion about the question, and another copied and pasted the definition of core values as their response. Two other instances of this reflection were triggered when a user had expressed negative feelings such as worry and feeling lazy, which appeared misplaced, and the last case was triggered by a message that was not readable. Out of the thirteen times the same reflection was triggered for the decreased-stress group, one user expressed not having much to say, another gave one-word responses before and after, and all others expanded on their previous message in relation to their values or gave a simple response to indicate a degree that it relates. This reflection appeared more "successful" (based on whether the user expanded on their previous message or values) when it was triggered by a message with more neutral to positive sentiment, such as when the user was expressing what they were looking forward to, or when they had several pieces of advice to offer for a friend in their situation, as opposed to one with more negative sentiment like the messages expressing worry or laziness.
In instances of other generic reflections, we observed that another issue for appropriateness was whether the reflection matched the user's frame of thought in terms of past, present, or future. For instance, the reflection "I see. Tell me about a time when things were different," best matched scenarios when users described thoughts about changes to their daily lives, but not when users described future topics such as what they were looking forward to, nor when they were already describing the past.
Based on our observations of the reflections in action, we have three main takeaways. First, topic and emotion specific reflections are more associated with the group of users whose stress decreased. These reflections are only triggered if the system determines a dominant topic or emotion, which depends on the effectiveness of its heuristics, as well as the amount of detail and context that a user expresses. This leads to the next takeaway, that the system appears to be more effective when users approach the experience with an intention for expression, or conversely it seems less effective when the intent to not engage and express is explicit. Third, the generic reflections were developed with the intent to function in generic contexts, but we learned in practice that some clashed with emotional and situational content or were confusing given the context. As we did observe many, if not more, successful instances of generic reflections, we are able to contrast these contexts to the unsuccessful contexts, and can develop a heuristic for selecting the generic reflections rather than selecting at random, as well as adapt the language of our current generic reflections to be more appropriate for the Expressive Interviewing setting.
Comparative Evaluations
To assess the extent to which our Expressive Interviewing system delivers an engaging user experience, we conduct a comparative study between our system and the conversational mental health app Woebot (Fitzpatrick et al., 2017). We recruited 12 participants and asked them to interact independently with each system to discuss their COVID-19 related concerns. More specifically, we asked them to use each system for 10-15 minutes and provide evaluative feedback pre- and post-interaction. To avoid cognitive bias, we randomized the order in which each participant evaluated the systems. In addition, we randomized the order in which the evaluation questions are shown.
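Returning to the reflection-dominance analysis from the previous subsection, the computation behind Figure 2 can be sketched roughly as follows; the trigger logs and reflection names below are made up for illustration and are not the study's data:

```python
from collections import Counter

def dominance(trigger_log):
    """Share of all triggered reflections accounted for by each reflection type."""
    counts = Counter(trigger_log)
    total = sum(counts.values())
    return {name: n / total for name, n in counts.items()}

# Hypothetical trigger logs (one reflection type per trigger) for the two groups.
decreased_stress = ["anxiety", "work", "work", "generic", "family", "anxiety"]
increased_stress = ["generic", "generic", "work", "generic"]

dom_dec = dominance(decreased_stress)
dom_inc = dominance(increased_stress)

# Ratio > 1 means the reflection was relatively more common among users
# whose stress decreased (cf. Figure 2).
for name in sorted(set(dom_dec) | set(dom_inc)):
    if name in dom_dec and name in dom_inc:
        print(f"{name}: {dom_dec[name] / dom_inc[name]:.2f}")
    else:
        print(f"{name}: only triggered in one group")
```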
Before interacting with either system, participants rated their life satisfaction and their stress level. After the interaction, participants reported again their stress level and rated several aspects of their interaction with the system, including ease of use, usefulness (in terms of discussing COVID-19 related issues and motivation to write about it), overall experience, and satisfaction, using mainly binary scales. For example, the questions "Did <system> motivate you to write at length about your thoughts and feelings? yes/no" and "How useful was C.P. to discuss your concerns about COVID? useful/not useful" assess whether the system encouraged the user to write about their thoughts and feelings about COVID and whether the system provided guidance for it. Tables 4 and 5 show the percentage of users that provided positive or high scores (> 3 on a 7-point scale) for each of these aspects after interacting with both systems.
                 Woebot    Expressive Interviewing
Stress_before     91%       91%
Stress_after      73%       64%
As observed, there are fewer participants reporting high levels of stress after using either system. However, we see a smaller fraction of participants reporting high levels of stress after interacting with Expressive Interviewing, thus suggesting that our system was more effective in helping participants to reduce their stress levels. Overall, participants reported that Expressive Interviewing was easier to use, more useful to discuss their COVID concerns and motivated them to write more than Woebot. Similarly, users reported a more meaningful interaction and a better overall experience. However, it is important to mention that Woebot was not specifically designed for discussing COVID-19 concerns and it is of more general purpose than our system. Nonetheless, we believe that this comparison provides evidence that a dialogue system such as Expressive Interviewing is more effective in helping users cope with COVID-19 issues as compared to a general purpose dialogue system for mental health.
Ethical and Privacy Considerations
We followed the suggestions of previous research on automated mental health counseling and adopted the goals of being respectful of user privacy, following evidence-based methods, ensuring user safety, and being transparent in system capabilities (Kretzschmar et al., 2019). The practices of motivational interviewing and expressive writing have numerous studies supporting their efficacy (Miller and Rollnick, 2012; Pennebaker and Chung, 2007). The combination of these methods in an interviewing format has not previously been studied and we intend to continue publishing our findings as the user population expands and becomes more diverse. We will also continue to improve our system and assessment. We have taken efforts to secure user data. We do not ask for identifiers and data is stored anonymously by session ID. The website is secured with SSL. Data is only accessible to researchers directly involved with our study. Our study has been approved by the University of Michigan IRB.
Conclusion
In this paper, we introduced an interview-style dialogue system called Expressive Interviewing to help people cope with the effects of the COVID-19 pandemic. We provided a detailed description of how the system is designed and implemented.
We analyzed a sample of 174 user interactions with our system and conducted qualitative and quantitative analyses on aspects such as system usefulness, user engagement and reflection effectiveness. We also conducted a comparative evaluation study between our system and Woebot, a general purpose dialogue system for mental health. Our main findings suggest that users benefited from the reflective strategies used by our system and experienced meaningful interactions leading to reduced stress levels. Furthermore, our system was judged to be easier to use and more useful than Woebot when discussing COVID-19 related concerns. In future work we intend to explore the applicability of the developed system to other health-related domains.
Figure 1: Histograms of overall user engagement measured by session length and duration.
Figure 3: Average number of words in each response grouped by prompt order, divided by the average number of words in each response overall. Equal number of words is at 1, marked with the line. Order of the prompts is indicated by first letter: A = Advice to Others, G = Grateful, L = Looking Forward.
Figure 4: Histogram of the prompt response durations in seconds.
Figure 5: Histogram of the prompt response lengths in tokens.
Figure 6: Histograms of the number of words of each user message preceding the generic reflections, grouping users whose stress increased and decreased.
Figure 7: Histograms of the number of words of each user message after the generic reflections, grouping users whose stress increased and decreased.
Figure 8: Top: before and after stress ratings by users whose stress increased after interaction with C.P. Middle: before and after stress ratings by users whose stress remained the same after interaction with C.P. Bottom: before and after stress ratings by users whose stress decreased after interaction with C.P. The bars are ordered by the magnitude of change (top and bottom), or by the static stress rating (middle).
Table 1: Spearman correlation coefficients between pairs of ratings for the 174 interactions. Bold indicates significance with p < 0.05.
Table 2: Spearman correlation coefficients between each rating provided by a user and (top) the length in number of words of the user's response to each particular prompt, and (bottom) duration in seconds of the user's response to each particular prompt, from 174 full interactions. Bold denotes significance with p < 0.05.
Table 3: Sample topic-specific and generic reflections, e.g. HEALTH: "I'd like to know more about your feelings surrounding your own health and the health of people close to you. What actions can you take to help keep you healthy during these challenging times?"; FAMILY: "What can you do to keep your family resilient during these tough times?"; POLITICS: "What is it about the political world that may be hooking you? What are your [...]".
Table 4: Percentage of users reporting high levels of stress (> 3 on a 7-point Likert scale) before and after using Woebot and Expressive Interviewing.
Table 5: Comparative evaluation of Woebot and Expressive Interviewing. Percentage of users reporting positive/high ratings (with scores > 3 on a 7-point Likert scale) on usability aspects after interacting with Woebot and Expressive Interviewing.
Table 6: Average ratings (Order, Sessions, Life Satisfaction, Stress_before, Stress_after, Personal, Meaningful, ∆Stress) grouped by the order in which the prompts appeared. All sessions begin with "Major Issues."
2020-07-09T01:01:26.015Z
2020-07-07T00:00:00.000
{ "year": 2020, "sha1": "c278eb333c78313a3762b5b2f07ace7443582b94", "oa_license": "CCBY", "oa_url": "https://aclanthology.org/2020.nlpcovid19-2.6.pdf", "oa_status": "HYBRID", "pdf_src": "ArXiv", "pdf_hash": "c278eb333c78313a3762b5b2f07ace7443582b94", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
138987661
pes2o/s2orc
v3-fos-license
THE DEFECTS OF OXIDE LAYERS FORMED ON 10CrMo9-10 STEEL OPERATED FOR 200,000 HOURS AT AN ELEVATED TEMPERATURE
The paper contains results of studies into the formation of oxide layers on 10CrMo9-10 (10H2M) steel long-term operated at an elevated temperature (T = 545°C, t = 200,000 h). The oxide layer was studied on a surface and a cross-section at the inner and outer surface of the tube wall on the outlet, both on the fire and counter-fire side of the tube wall surface. The obtained results of research have shown a higher degree of degradation, both of the steel itself and of the oxide layers, on the fire side. In addition, it has been shown that on the outside tube wall, apart from iron oxides, there are also deposits composed mainly of Al2SiO5.
Material and experimental methods
The material studied comprised specimens of 10CrMo9-10 (10H2M) steel taken from a pipeline operated at the temperature of 545°C during 200,000 hours. The oxide layer was studied on a surface and a cross-section at the inner and outer surface of the tube wall on the outlet, both on the fire and counter-fire side of the tube wall surface. Thorough examinations of the oxide layer carried out on the inner and outer surface and on the fire and counter-fire side of the tube wall comprised:
- microscopic examinations of the oxide layer, performed using an Olympus GX41 optical microscope,
- thickness measurements of the formed oxide layers,
- chemical composition analysis of deposits/oxides using a Jeol JSM-6610LV scanning electron microscope (SEM) working with an Oxford EDS (Energy Dispersive Spectroscopy),
- X-ray (XRD) measurements; the layer was measured using a Seifert 3003T/T X-ray diffractometer and radiation originating from a tube with a cobalt anode (λCo = 0.17902 nm). XRD measurements were performed in the 20-120° and 5-120° ranges. To interpret the results, the diffractograms were described by a pseudo-Voigt curve using the Analyze software. A computer software and the DHN PDS and PDF4+2009 crystallographic databases were used for the phase identification,
- studies of the oxide layer surfaces using a Veeco atomic force microscope.
Introduction
Both low- and high-alloy steels are used in the power industry; they should ensure a safe operation of the equipment at an elevated temperature during a long time [1,2]. More and more research is carried out now on the corrosion resistance of 10CrMo9-10 steel operating at an elevated temperature [3][4][5][6][7]. Lehmusto et al. [3] studied inter alia the influence of KCl and K2CO3 on 10CrMo9-10 steel oxidation at temperatures of 500, 550 and 600°C in short-term laboratory tests (168 h). They have shown that the oxides forming on 10CrMo9-10 steel are iron oxides, which in the vicinity of the steel are additionally enriched with chromium. It has been shown in that paper that the forming oxide layer depends not only on temperature and atmosphere but also on potassium compounds. The oxides formed in the presence of potassium carbonate are well-adhered, whereas the oxides formed in the presence of potassium chloride are multilayered, with poorly adhering layers. Klepacki and Wywrot [8] have presented corrosion examples both on the steam flow side (inside) and on the flue gas side (outside). Such oxides as hematite (Fe2O3) and magnetite (Fe3O4) are formed on the inside tube wall, while on the flue gas side alkaline-sulphate compounds are formed, such as Na3Fe(SO4)3 and K3Fe(SO4)3, as well as sulphate compounds like FeSO4.
Results of examinations
After a long-term operation (T = 545°C, t = 200,000 h), 10CrMo9-10 steel shows degradation of the bainitic-ferritic structure. The extent of degradation on the fire side is much higher than on the counter-fire side, which is presented in Fig. 1. A substantial depletion of boundary areas in carbide precipitates is visible on the fire side. In both cases carbide precipitates create 'chains' along boundaries; in certain places there are also cases of etching around sulphide precipitates and sporadic creep micropores on the fire side. Corrosion on the grain boundaries also exists, both on the fire and counter-fire side, which is presented in Fig. 2. Such corrosion, together with crevice corrosion, to the largest extent occurs at the outside wall on the fire side, reaching 84 μm in depth. On the counter-fire side at the outside wall, corrosion on the grain boundaries exists to the depth of 43 μm. At the inside wall, on the fire and counter-fire side, corrosion on the grain boundaries exists to the depth of 21 μm and 17 μm, respectively.
Fig. 3d and h show spalling of individual deposit/oxide layers on the outside. In the case of the fire side, both on the inside and outside, a larger degree of surface development was observed (Fig. 3a and c). The surface topography studies (Fig. 3b, d, f, h and Table 1) have shown that the largest degree of surface development occurred on the outside surface on the fire side, where Ra and Rmax were 406 nm and 2790 nm, respectively. The oxide layer thickness together with deposits on the fire side was 435 μm and 820 μm on the flowing medium and flue gas side, respectively (Fig. 4a, b). For the counter-fire side, on the inside the oxide layer thickness is smaller by 30 μm, while on the outside by as much as 460 μm (Fig. 4c, d). The layer formed on the fire-side outside shows high degradation, which is presented in Fig. 5b, where a surface after spalling of a thick deposit layer is visible. In addition, a very large fissure reaching the depth of 235 μm may be noticed. On the inside, only local spalling in the oxide layer may be observed directly on the flowing medium side (Fig. 5a). The performed EDS analysis of chemical composition (Fig. 6), combined with X-ray phase analysis (Fig. 7), has shown that oxides occur on the inside surface of the tube. Based on the DHN PDS and PDF4+2009 crystallographic databases it has been found that the forming oxides are Fe2O3 and Fe3O4, in accordance with the catalogue card numbers 01-079-0007 and 01-089-0951, respectively. In the case of the outside surface of the tube on the counter-fire side, apart from the aforementioned compounds, Al-, Si-, K-, Ca-, Ti-, Mg-, Na- and S-based compounds also exist, such as Al2SiO5. The other elements, such as K, Ca, Ti, Mg and Na, occur in small amounts. Directly on the fire-side outside, only Al2SiO5 exists together with small amounts of K, P, S, Ca, Ti, Mg and Fe. After removing the external deposit layer on the outside and repeating the XRD analysis, it has been shown that the forming oxide layer is built of Fe2O3 and Fe3O4 (Fig. 8).
Summary
The oxide layer formed on 10CrMo9-10 steel after a long-term operation at an elevated temperature was examined. The oxide layer was studied on the fire and counter-fire side, formed both on the tube wall outside and inside. The obtained results have shown that the oxide layer formed on the fire side is more degraded.
Apart from Fe2O3 and Fe3O4 oxides, on the outside tube wall there are also deposits directly on the flue gas inflow side, in the form of Al2SiO5, reaching the depth of 400 μm (fire side). The deposits/oxide layer on this side is more fissured, which was shown by microscopic studies and confirmed by the more developed surface on the fire side; moreover, the oxide layer formed on the X10CrMoVNb9-1 steel on the fire side is thicker than on the counter-fire side. The significant layer thickness on the fire side is caused by the formation of a larger amount of deposits.
2019-04-29T13:12:24.023Z
2016-06-01T00:00:00.000
{ "year": 2016, "sha1": "4f09abf2cf798fbd8420e6e25e2593ae522c815e", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1515/amm-2016-0168", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "756528642ccdba5b4fc05d27e00c132c05643eaa", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
53406645
pes2o/s2orc
v3-fos-license
Up-regulation of microrna-125b induced by 5-fluorouracil (5-FU) treatment inhibits cell growth and proliferation in pancreatic cancer cells Pancreatic cancer is one of the most frequently reported gastrointestinal tumors and has been reported to have a 5-year survival rate of <5%. It is most commonly diagnosed at an advanced stage and the most frequently administered chemotherapeutic compound for patients with advanced disease has been 5-Fluorouracil (5-FU).1 5-FU alone remains one of the standard treatments in advanced pancreatic cancer. The well-known mechanism of the anti-tumor effects of 5-FU is to inhibit DNA and RNA synthesis.1 In addition, cell resistance to cytotoxic agents and radiation are two other factors contributing to the poor prognosis of pancreatic cancer.2 Kit (Ambion, Austin, Texas) according to the instructions of the manufacturer. The quality of the RNA was assessed by 15% denaturing polyacrylamide gel electrophoresis and spectrophotometry (Eppendorf BioPhotometer, Eppendorf, Hamburg, Germany). MicroRNA microarray MiRNA microarray analysis was performed by LC Sciences (http:// www.lcsciences.com/; Houston, TX). In brief, poly-A tails were added to the RNA sequences at the 3'-ends using a poly (A) polymerase, and nucleotide tags were then ligated to the poly-A tails. For each dual-sample experiment, two-sets of RNA sequences were added with tags of two different sequences. The tagged RNA sequences were then hybridized to the miRNA microarray chip (Atactic µ ParaFlo™ micro fluidics chip, LC Sciences, Houston, Texas) containing 328 human mature miRNA transcripts listed in Sanger miRBase Release 8.0 (http://www.sanger.ac.uk/Software/Rfam/mirna). The probe sequences are available upon request. The labeling reaction was carried out during the second hybridization reaction using tag-specific dendrimer Cy3 and Cy5 dyes. Total RNAs from untreated cells and cells treated with 5-FU were labeled with Cy3 and Cy5, respectively. The human miRNA chip includes nine repeats for each miRNA. Multiple control probes were included in each chip, which were used for quality control of chip production, sample labeling and assay conditions. Hybridization signals were detected by Axon Genepix 4000B Microarray Scanner (Molecular Devices, Sunnyvale, CA) and saved as TIFF files. Numerical intensities were extracted for control, background, and miRNA probes and converted into Microsoft Excel spreadsheets. Data analysis The data were corrected by subtracting the background and normalizing to the statistical median of all detectable transcripts. Background was calculated from the median of 5% to 25% of the lowest-intensity cells. The data normalization balances the intensities of Cy3-and Cy5-labeled transcripts so that differential expression ratios can be correctly calculated. Statistical comparisons were performed by ANOVA (Analysis of Variance) using the Benjamini and Hochberg correction for false-positive reductions. Differentially detected signals were generally accepted as true when the ratio of the P value was less than 0.01. Clustering analysis was made with a hierarchical method and visualized using the TIGR MeV (Multiple Experimental Viewer) (the Institute for Genomic Research) microarray program. Real-time RT-PCR Expression of miRNAs was measured using mirVanaTM qRT-PCR miRNA Detection Kit and mirVana™ qRT-PCR Primer Sets (Ambion, Austin, Texas) according to the instructions of the manufacturer. Briefly, cDNA was generated after reverse transcription of 20ng of total RNA. 
The PCR reaction consisting of appropriate number of cycles (95°C for 15 s, 60°C for 30s) was performed using an iCycler (Bio-Rad, Hercules, CA) after an initial denaturation step (95°C for 3min). Moreover, the real-time PCR products were separated on a 3.5% agarose gel and visualized with ethidium bromide on the ChemiImager Imaging System 5500 (Alpha Innotech, San Leandro, CA). The 5 S RNA was used as an internal control. Transient transfection The Pre-miRTM miRNA Precursor Molecules for miR-125b were purchased from Ambion (Austin, TX). Transfection was performed using LipofectamineTM 2000 (Invitrogen, Carlsbad, CA) according to the instructions of the manual. Briefly, PANC-1 and Panc10.05 cells were seeded on 6-well plates one day before transfection. Next day Pre-miRTM miRNA Precursor Molecules were diluted in 250μl of Opti-MEM, and mixed gently. Meanwhile, 10μl LipofectamineTM 2000 was diluted in 250μl of Opti-MEM, and mixed gently. After incubation of 20minutes at room temperature, the diluted Pre-miRTM miRNA Precursor Molecules were combined with diluted LipofectamineTM 2000, and mixed gently for incubation of 5minutes at room temperature. The complexes were added to each well containing cells and medium. Cells were incubated at 37°C in CO 2 for 24hours prior to testing. Preparation of constructs and stable transfection A single-stranded DNA oligos was designed for hsa-miR-125b according to the instructions of the BLOCK-iT Pol II miR RNAi Expression Vector kit (Invitrogen, Carlsbad, CA). The top and bottom strand oligos were annealed to generate a double-stranded oligo suitable for cloning into the pcDNA6.2-GW/EmGFP-miR vector (Invitrogen, Carlsbad, CA). The pcDNA6.2-GW/EmGFP-miR-neg control plasmid contains an insert between bases 1519 and 1578 that can form a hairpin structure just as a regular pre-miRNA, but is designed not to target any known vertebrate gene. TOP10 competent E. coli was transformed with the expression plasmid. Plasmid DNA was isolated and sequenced by automated sequencing to confirm the sequence. Stable transfection was performed in both PANC-1 and Pac-10.05 cells. Cells were transfected with either the expression plasmid or the control vector using LipofectamineTM 2000 (Invitrogen, Carlsbad, CA). After blasticidin (3μg/ml, Invitrogen, Carlsbad, CA) screening, the stable clones were identified by semi-quantitative RT-PCR and Northern blot. MTT assay Cell proliferation rate was determined by MTT assay. Parent cells and transfectants with miR-125b expression plasmid as well as control vector clones were digested with trypsin and inoculated in 96-well plates at a concentration of 1×10 3 cells/well after counting. Cells were incubated at 37°C in a humidified incubator containing 5% CO 2 . Cell viability was examined every day for 7 days. According to manufacture instructions, 30µl of MTT (5mg/ml, dissolved in 1×PBS) was added to each well. The plates were incubated for an additional 4hour, then measured the absorbance at 570nm in a Bio-Kinetics Reader (Bio-Rad) after 150 ml of DMSO was added to each well to solubilize the formazan crystals. Experiments were carried out in quadruplicate, and the results were shown as mean±SD of three independent experiments. Clonogenecity assay Parent cells and stable clones with miR-125b expression plasmid as well as control vector were digested with trypsin and seeded in 10cm petri dishes at a concentration of 500 cells/well after counting. The plates were incubated at 37°C in a humidified incubator containing 5% CO 2 . 
When the colonies became visible (2-4weeks), cells were fixed with methanol, stained with Gimesa and counted. Experiments were carried out in triplicate, and the results were shown as mean±SD of three independent experiments. Flow cytometry assay Flow cytometry assay was performed by propidium iodide (PI) staining. Parent cells and clones with miR-125b, as well as control vector were grown to 80-90% confluence, then digested with trypsin, washed twice with PBS and fixed overnight at 4°C in 70% ethanol. The fixed cells were washed twice with PBS and then incubated with 5μg/ml PI and 50μg/ml RNase A in PBS for 1hour at room temperature. Flow activated cell sorter analysis was carried out using a FACS Calibur flow cytometer (Becton Dickson, Mountain View, CA) with CELLQUEST software. A total of 10,000 events were measured per sample. RT-PCR array The Human Apoptosis PCR Array was performed (Superarray Biosciences Corporation, Frederick, MD) according to the instructions of the manufacturer. Briefly, 500ng total RNA of each sample was transcribed to the first-stranded cDNA. Then aliquoted the mixture to the 96-well PCR Array plate after mixing the cDNA with the realtime PCR master mix. Real-time RCR was carried out on the iCycler (Bio-Rad, Hercules, CA). The data was analyzed using the PCR Array analysis template provided by the manufacturer. Statistical analysis Results were shown as mean±SD. Student's t test was used for comparison unless particular test was notified. P<0.05 was considered statistically significant. Determination of IC 50 for PANC-1 and Panc-10.05 IC 50 is necessary for this study. Therefore, both pancreatic cell lines PANC-1 and Panc 10.05 were treated for 72hours with various concentrations of 5-FU. MTT assay results indicated that IC 50 for PANC-1 and Panc-10.05 are 75μg/ml and 200μg/ml, respectively. These concentrations were the guideline for the 5-FU treatment in this study. MiRNA profiles correlated to 5-FU induced cell toxicity In order to examine the miRNA profiles correlated with 5-FU induced cell toxicity towards pancreatic cancer cells, total RNAs from 5-FU treated and non-treated pancreatic cancer cells were isolated and miRNA microarray analyses were performed after assessing the quality of total RNA. Correlation analysis was performed to ensure the quality of miRNA microarray analysis. By correlating the results from 2 chips, the fluorescence labeling, handling, and system related biases can be eliminated and therefore the expression pattern and level are reliable to the true biological differences. A list of differentially expressed miRNAs (at P<0.01) between 5-FU treated and untreated cells was identified and presented in Table 1. We further selected miR-125b, miR-181a, miR-181b, miR-27a and miR-222 based on the differential expression ratio (ratio>1.5) and availability of the primer set for validation. Validation of selected differentially expressed miRNAs Five differentially over-expressed miRNA candidates miR-125b, miR-181a, miR-181b, miR-27a and miR-222 were verified by realtime RT-PCR and Northern blot in 5-FU treated and non-treated pancreatic cancer cells. The results indicated that miR-125b, miR-181a, miR-181b, miR-27a and miR-222 were indeed overexpressed in 5-FU treated PANC-1 and Panc-10.05 cells by both real-time RT-PCR ( Figure 1A) and Northern blot analysis ( Figure 1B). These results were consistent with what we observed from the miRNA microarray analysis. 
Inhibition of cell proliferation by miR-125b
In order to evaluate the functions of miRNAs correlated with 5-FU treatment, miRNA precursors were introduced into the cells and the cell proliferation was examined. Transient transfection was performed to determine the minimum concentration of the precursor molecules of the selected miRNA, in order to reduce cell toxicity and nonspecific effects. Real-time RT-PCR results showed that the minimum concentration of miR-125b precursor was 2.5 nM for both PANC-1 and Panc-10.05 (Figure 2). We also compared the inhibition of the cell growth rate between the minimum (2.5 nM) and maximum (80 nM) concentrations of the miRNA precursor and found no significant difference (at P < 0.05; data not shown). Therefore, the minimum concentration of miR-125b precursor was used for the following experiments. MTT assay of transient transfection with the minimum concentration of miRNA precursors showed that miR-125b inhibited cell proliferation in both PANC-1 cells (50%) and Panc-10.05 cells (75%) (Figures 2 and 3). However, it did not achieve the same inhibition effects as 5-FU treatment.
Selection of positive miR-125b stable transfected clones
A single-stranded DNA oligo for miR-125b (Table 2) was designed and synthesized to construct the expression plasmids. After the expression plasmid containing the desired miRNA was constructed, the sequence was verified and confirmed. Sequencing results were analyzed using the program SeqMan™ of DNAStar software (DNAStar, Inc.). Pancreatic cancer cell lines PANC-1 and Panc-10.05 were stably transfected with either the expression plasmid or the control vector using Lipofectamine™ 2000. After blasticidin (3 μg/ml, Invitrogen, Carlsbad, CA) screening, the stable clones from PANC-1 were identified by real-time RT-PCR and Northern blot analysis (Figure 4). We screened 30 clones from each stable transfection experiment and 4 positive clones were selected for further characterization. We also tried different transfection reagents (Lipofectamine™ 2000 and Lipofectamine™ LTX) and different concentrations of blasticidin (0.5 μg/ml, 1.5 μg/ml and 3 μg/ml, respectively), but no stable clones were obtained from Panc-10.05 cell transfection. This may be due to its sensitivity to blasticidin.
Characteristics of miR-125b stable transfected clones
In order to determine which positive clone had the same characteristics as the 5-FU treated cells, the cell growth and proliferation rate of stable clones with miR-125b were tested using the MTT assay and clonogenicity assay. Results are shown in Figure 5. Inhibition of cell proliferation can be the consequence of cell cycle arrest. Therefore, flow cytometry was performed to determine the alteration of the cell cycle. As shown in Figure 7, miR-125b stable transfected clones 5 and 14 showed G2/M phase arrest compared with the control vector clones, while no apoptosis was detected in the stable clones. To identify the potential protein targets of the microRNAs of interest, 2D DIGE and RT-PCR array were performed.
Potential protein targets for miR-125b
To explore the potential protein targets of miR-125b, the human apoptosis RT-PCR array was used, because miR-125b inhibits cell growth and proliferation. Two genes appeared to be down-regulated by more than 2-fold in miR-125b clones 5 and 14 when compared with the parent cell line. Western blot analysis was then performed to confirm the protein expression of BIRC1 and IGF1R.
Both BIRC1 and IGF1R expression were downregulated in miR-125b clones 5 and 14, similar to 5-FU treated cells (Figure 8: (A) BIRC1 was downregulated in miR-125b transfectants compared with controls; (B) IGF1R was downregulated in miR-125b transfectants compared with controls; β-actin was used as an internal control). Discussion. Compared with the untreated cells, twenty-four upregulated miRNAs and seven downregulated miRNAs were identified in 5-FU treated cells. Among them, 8 miRNAs (miR-125b, miR-181a, miR-181b, miR-181d, miR-27a, miR-222, miR-30a-5pre, and miR-495) showed the most significant upregulation in the 5-FU treated group, while 6 other miRNAs, including miR-15b and miR-21, were downregulated compared with the untreated group. It has been suggested that miR-21 might play an important role in preventing apoptosis, and it has been shown to be upregulated in different cancers, including pancreatic neuroendocrine tumors. 9,10 Moreover, it has been reported that miR-21 is overexpressed in pancreatic cancer and that its strong expression predicts limited survival in patients with node-negative disease. 11 If miR-21 is truly involved in the response to 5-FU, it should be downregulated, consistent with the growth inhibition caused by 5-FU. We indeed found that miR-21 was downregulated in our studies after 5-FU treatment. Among the upregulated miRNAs, miR-125b is one of the most significant candidates possibly involved in the 5-FU response pathway, based on its negative effects on proliferation in different cancers. MiR-125b negatively regulated proliferation in oral squamous cell carcinoma cells. 12 Downregulation of miR-125b expression has been found in OSCC, breast cancer, and prostate cancer. 13,14 In pancreatic cancer, however, the differential expression of miR-125b is controversial. Volinia et al. 15 reported that the majority of miRNAs were increased in the tumor compared with normal pancreas, including miR-125b. 9,15 Bloomston et al. 9 also showed that miR-125b was overexpressed in pancreatic cancer compared with normal pancreatic tissue, and that it was also overexpressed in chronic pancreatitis. 9 The altered miR-125b expression in both diseased tissues is more likely to reflect the desmoplastic reaction of the tumor than changes specific to PDAC. However, the studies of Lu et al. showed that miR-125b was decreased in pancreatic cancer compared with normal pancreatic tissues, although little is known about its function in pancreatic cancer. 16 To define how miR-125b affects pancreatic cell proliferation, the human pancreatic cancer cell lines PANC-1 and Panc-10.05 were transfected with Pre-miR miRNA precursor molecules of miR-125b, and the results showed that miR-125b inhibited cell proliferation in both cell lines. However, the inhibition was not as dramatic as that of 5-FU treated cells, suggesting that other miRNAs induced by 5-FU are also involved in this process. To understand the role of miR-125b, stably transfected clones were established in the PANC-1 cell line. The stable clones (5 and 14) transfected with miR-125b plasmids both showed decreased cell growth and proliferation (about 68% and 87% inhibition of colony formation), which matched the 5-FU treated cells and the precursor-transfected cells. However, there was no change in cell proliferation in another stable clone, clone 24, which had the highest expression level of miR-125b, suggesting a nonspecific effect.
Our results are supported by previous studies showing that overexpression of miR-125b can inhibit cell proliferation in OSCC, bladder cancer, thyroid cancer, and hepatocellular carcinoma. 12,17 In pancreatic cancer, our results are similar to the findings reported by Lu et al., 16 in which miR-125b was included in a general downregulation of miRNAs associated with differentiation in tumors compared with normal tissues. 16 Recent studies showed that miR-125b can promote neuronal differentiation in human cells. 18 In the present study, the inhibition of tumor growth by exogenous miR-125b may act partly through restoring the differentiation capabilities of cancer cells. Further testing for dysregulated differentiation-associated genes in these tumors will help us better understand the role of miR-125b in the recovery of cancer cell differentiation. Because inhibition of cell proliferation can be the consequence of cell cycle arrest or apoptosis, a flow cytometry assay was performed. As shown in Figure 7, miR-125b stably transfected clones 5 and 14 showed G2/M phase arrest compared with the control vector, while no apoptosis was detected in the stable clones. Clone 14 showed slightly greater growth inhibition than clone 5 (clonogenicity assay, 87% versus 68%), accompanied by a higher percentage of G2/M phase arrest (50.35% versus 30.14%). To probe the molecular mechanisms involved in the inhibition of tumor growth by miR-125b, a human apoptosis RT-PCR array was performed. Two potential protein targets, BIRC1 and IGF1R, with more than 2-fold downregulation were identified and confirmed by Western blot. The blotting results (Figure 8) showed that the protein levels of BIRC1 and IGF1R were dramatically decreased in 5-FU treated cells and in miR-125b transfected clones 5 and 14 compared with the vector controls and the parental cells. However, there was no evident decrease of IGF1R expression in clone 24, in which no growth inhibition was observed (data not shown). The IGF-IR has been implicated in promoting oncogenic transformation, growth, and survival of cancer cells. [19][20][21] It is one of the growth factor receptors known to affect the G2/M transition and to be overexpressed in pancreatic cancer. 22 In the present study, reduction of IGF1-R was followed by G2/M phase arrest and inhibition of colony formation. Our finding is supported by the hypothesis that the autocrine interaction between the IGF1-R and its ligand regulates cell proliferation of pancreatic carcinoma cells. 21 In experimental studies, inhibition of IGF-IR substantially reduced pancreatic cancer growth and angiogenesis. 23 EM164, an anti-insulin-like growth factor I receptor antibody, caused regression of established BxPC-3 human pancreatic tumor xenografts in SCID mice. 24 Therefore, miR-125b-induced downregulation of IGF1-R may be partly responsible for the observed inhibition of cell proliferation in PANC-1 cells. IGF signaling through the IGF-IR may be an important factor in tumor cell drug resistance. 25 Furthermore, inhibition of IGF-IR with antibodies or small-molecule kinase inhibitors has been found to enhance the cytotoxic effects of a number of conventional chemotherapy agents, 5-FU included, both in vivo and in vitro. 26 Decreased expression of IGF1-R caused by overexpression of miR-125b may therefore sensitize pancreatic cancer cells to 5-FU, which needs further study. It is also important to determine whether miR-125b can directly repress the expression of IGF1-R.
BIRC1, also named neuronal apoptosis inhibitory protein (NAIP), is a member of the IAP family; it is expressed in mammalian cells and inhibits apoptosis induced by a variety of signals. 27 NAIP has been linked to the inherited disease spinal muscular atrophy (SMA). NAIP may be involved in the mechanisms of resistance of tumor cells to various chemotherapeutic agents. It has been reported that NAIP is overexpressed in breast cancer patients with unfavorable clinical features such as stage and tumor size, suggesting that NAIP plays a role in disease manifestation. 28 In colorectal cancer, a BIRC1 transcript that was more abundant in tumors than in non-neoplastic tissues appeared to reflect important events in colon carcinogenesis. 29 Our results showed that BIRC1 was downregulated in 5-FU treated cells and in miR-125b stably transfected clones 5 and 14, indicating that the reduction of BIRC1 might contribute to apoptosis of pancreatic cancer cells. However, we did not detect apoptosis in the miR-125b stably transfected cells. Methods that detect early apoptosis may be a better choice, or BIRC1 may have an as yet unidentified role in cell proliferation. It has been found that miRNA genes are frequently located in chromosomal regions characterized by non-random aberrations in human cancer, suggesting that resident miRNA expression might be affected by these genetic abnormalities. 30 MiR-125b, which inhibited pancreatic tumor growth in the present study, is located at chromosome 11q23-24, one of the regions most frequently deleted in breast, ovarian, and lung tumors. 31,32 Loss of 11q24.1 and 22q13.31 has also been associated with more advanced cases of endocrine pancreatic tumors. 33 Thus, miR-125b might be an important candidate for this role. Summary. In summary, our findings indicate that miR-125b, whose expression is induced by 5-FU treatment, may inhibit tumor growth in part through downregulation of IGF1R expression, suggesting a novel strategy for anticancer therapy.
2019-04-01T13:16:20.520Z
2016-01-27T00:00:00.000
{ "year": 2016, "sha1": "fe857f038a6b1c4221ec307a9bfd665cc5cc4db6", "oa_license": "CCBY", "oa_url": "http://medcraveonline.com/MOJCSR/MOJCSR-03-00048.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "011935c12b81262b6bece3828dde1496d78ce588", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
258921275
pes2o/s2orc
v3-fos-license
Training in Diagnostic Hysteroscopy: The “Arbor Vitae” Method Background and Objectives: Diagnostic hysteroscopy is the gold standard in the diagnosis of intrauterine pathology and is becoming an essential tool in the daily practice of gynecology. Training programs for physicians are necessary to ensure adequate preparation and learning curve before approaching patients. The aim of this study was to describe the “Arbor Vitae” method for training in diagnostic hysteroscopy and to test its impact on the knowledge and skills of trainees using a customized questionnaire. Materials and Methods: A three-day hysteroscopy workshop combining theory and practical “hands on “sessions with dry and wet labs has been described. The aim of the course is to teach indications, instruments, the basic principles of the technique by which the procedure should be performed, and how to recognize and manage the pathologies that can be identified by diagnostic hysteroscopy. To test this training method and its impact on the knowledge and skills of the trainees, a customized 10-question questionnaire was administered before and after the course. Results: The questionnaire was administered to 34 participants. All trainees completed the questionnaire, and no missing responses were recorded. Regarding the characteristics of the participants, 76.5% had less than 1 year of experience in performing diagnostic hysteroscopy and 55.9% reported performing fewer than 15 procedures in their career. For 9 of the 10 questions embedded in the questionnaire, there was a significant improvement in the scores between pre- and post-course, demonstrating a perceived significant improvement in theoretical/practical skills by the trainees. Conclusions: The Arbor Vitae training model is a realistic and effective way to improve the theoretical and practical skills required to perform correct diagnostic hysteroscopy. This training model has great potential for novice practitioners to achieve an adequate level of proficiency before performing diagnostic hysteroscopy on live patients. Introduction Diagnostic hysteroscopy is the gold standard in the evaluation and diagnosis of uterine cavity morphology and intrauterine pathologies [1]. With high sensitivity and specificity, high feasibility, and low complication rates, diagnostic hysteroscopy is a minimally invasive procedure that can be performed in an outpatient setting without anesthesia [2]. Nowadays, diagnostic hysteroscopy should completely replace blind endometrial biopsies such as dilation and curettage [3][4][5]. When performed with the correct technique, hysteroscopy is well tolerated by patients and the success rate reported in the literature ranges from around 90 to 95% [6]. Conversely, the incorrect application of this procedure, due to suboptimal technical skill, can cause discomfort/pain [2,7,8] and even serious complications [9]. Considering the wide range of indications, hysteroscopy has become a fundamental tool in the daily practice of gynecology. Therefore, the adequate performance of diagnostic hysteroscopy should be considered a basic skill for all gynecologists [10]. Training programs with the objective of giving theoretical notions and technique principles are necessary in order to adequately prepare physicians and, above all, to provide an effective learning curve before approaching patients [11,12]. In the last decade, several models have been proposed for training in hysteroscopy. 
In order to reproduce the anatomy of the uterus and the characteristics of its tissues as accurately as possible, simulators based on vegetables and animal models, as well as synthetic and virtual models, have been designed and described [12,13]. Since 1995, the Arbor Vitae group has been organizing a three-day hysteroscopy workshop combining theoretical lessons and hands-on practice sessions, applying preparatory exercises with dry and wet labs. The course objective is to teach indications, instruments, the basic principles of the technique by which the procedure should be performed, and how to recognize and manage the pathologies that can be identified by diagnostic hysteroscopy. The aim of this study was to describe the Arbor Vitae method for training in diagnostic hysteroscopy, articulated in theoretical, video, and "hands on" practical sessions, and to test its impact on the knowledge and abilities of trainees using a customized questionnaire. Theoretical and Video Session In the theoretical and video session, basic principles, instruments, and techniques to correctly perform diagnostic hysteroscopy are explained. The understanding of diagnostic hysteroscopy cannot be separated from a good knowledge of the instruments and their correct usage to perform the procedure with the correct technique. The first approach to the procedure takes place during the theoretical sessions. The applicability of each instrument component is illustrated with pictures and videos, with special attention and focus on the vision provided by the 30 • optics. In addition, the anatomy of the cervix and the uterine cavity, the pathologies that can be diagnosed, and the possible complications that can occur are also shown and described. A short constructive debate between students and trainers is held at the end of each lecture. During the video session, videos of diagnostic hysteroscopies are presented and discussed, showing the endoscopic view of the physiological condition of the cervix and uterine cavity, associated pathologies, and correct and incorrect procedures. Uterine Cervix: How to Overcome It The cervix is the "door" to the uterine cavity and, in a sense, the hysteroscopist's "tomb" at the same time, so knowing how to overcome it is essential to a successful procedure. Therefore, the assessment of the cervical canal, its description, and correct navigation are of a particular importance. Pictures and videos are used to teach the trainee how to assess the basic characteristics of the cervical canal such as caliber, direction, morphology, cervical mucosa, and vascularization. The correct technique for navigating the cervical canal is also shown, with particular emphasis on how to take advantage of the 30 • fore-oblique optical system. The trainee should learn that, as far as the execution technique is concerned, an incorrect angle of the instrument could cause pain, lesions of the cervical mucosa, perforation, bleeding, or a wrong path. Failure to navigate properly within the cervix may result in a failed examination. The video sessions show the trainee how to proceed along the cervical canal, keeping the image of the lumen itself at 6 o'clock, following the course of the folds of the arbor vitae, the course of the cervical vessels, and the flow of the distending medium. Respecting the structures of the endocervix is essential for the success of the procedure. In case of doubt or difficulty in advancing the instrument, the trainee is shown how to stop the hysteroscope and move it backwards. 
The distending medium will then show the correct way to continue the procedure. Uterine Cavity: How to Explore It Once the cervix has been overcome, the 30 • fore-oblique view continues to be the basis for the study of the uterine cavity, thus avoiding tilting movements which, as in the case of the cervix, could cause avoidable pain. The rotation movements around the longitudinal axis of the optical system allow the examination of the tubal ostia (90 • rotation) and the anterior and posterior walls of the uterus (180 • rotation), providing a complete evaluation of the uterine cavity, an effective identification of any pathology, and a fast procedure with minimal movement and high patient comfort. Hands-on Session The aim of the practical "hands on" session is to make the trainee confident with the instruments and how to assemble them, how to navigate the cervix correctly and how to examine the uterine cavity using the 30 • fore-oblique view. The trainee must learn to use the three basic movements of hysteroscopy: translation, rotation, and swing, overcoming the "fulcrum effect", which is a typical mistake made by beginners when approaching the endoscopic view. Trainees are divided into groups of three at each station, and each group has a tutor to guide them through the exercises. Each station is equipped with a table with a fluid collection sheath, an "all-in-one" Tele-Pack+ Storz system (Karl Storz, Tuttlingen, Germany), which includes a monitor, light source, and full HD camera control unit with integrated network function in a single compact mobile unit, an ENDOMAT distension liquid pump (Karl Storz, Tuttlingen, Germany), and a 5 mm hysteroscope (Bettocchi hysteroscope, Karl Storz, Tuttlingen, Germany). The first step of the hands-on session is to become familiar with the instruments. The hysteroscope provided to the trainees is composed of a Hopkins optical system of 2.9 mm with a 30 • fore-oblique view, an internal sheath with a diameter of 4.3 mm equipped with an operative channel for semi-rigid surgical instruments, and an external sheath with a diameter of 5 mm for the outflow of the liquid distension medium (Bettocchi hysteroscope, Karl Storz, Tuttlingen, Germany). The tutor demonstrates how to assemble and disassemble the hysteroscope and how to make the connections to the monitor, the light source, and the distension liquid pump. Then, in order to develop a tactile sensitivity to the instrument, the trainee is invited to assemble and disassemble the hysteroscope, first under direct vision and then blindly. Once the learners have become familiar with the fundamentals of the instruments, they begin exercises to develop or improve the 30 • fore-oblique view. This step begins with basic exercises: the students are provided with longitudinal rubber tubes of standard caliber and are asked to navigate them by putting into practice the 6 o'clock view illustrated during the theoretical and video sessions. Although much simpler than crossing the cervix, the rubber tube immediately provides a very clear and realistic way to learn the correct positioning of the instrument within the canal and provides adequate tactile perception when the hysteroscope is correctly navigated (Figure 1). The second step is to use a cardboard box with a road map in a convex structure in the fundus and a hole through which the instrument can be inserted. 
This exercise aims to help the trainee to cope with the 30° fore-oblique view along with the basic movements of hysteroscopy (translation, rotation, and swing), as in the uterine cavity. The map presents pins at various locations (Figure 2). The tutor proposes different itineraries to be followed, starting from two nearby destinations connected by a linear path and ending in more complex paths. The student's task is to follow the road that connects two different points on the map, with the aim of placing the road image in the centre of the monitor, taking advantage of the perspective offered by the 30° view and, thus, trying to reduce the movements of the hysteroscope (Figure 3). The third step is based on exercises using a biological model. The womb model used for this course is an animal model: a cow's rumen. To simulate endometrial polyps or leiomyomas of the uterus, the rumen is arranged with pieces of animal flesh of different consistencies sewn into it and then closed at one end with a suture (Figure 4). The open end is then turned over and tightened with numerous rubber bands to simulate the cervical canal and the isthmus, like the human one. The packed rumen is placed on a closed metal support, which allows it to expand; the whole item is then fixed to a rigid plastic box with an entrance hole, which is supplied to each station (Figure 5). The trainees are then invited to perform a real diagnostic hysteroscopy, putting into practice all of the skills acquired during the theoretical and video sessions and the previous practical exercises. After assembling the hysteroscope and making all connections, the trainee inserts the hysteroscope into the external orifice of the rumen under direct vision, realistically simulating the passage into the cervical canal. The trainee must keep the image of the cervical lumen fixed at 6 o'clock and try to move the instrument backwards if the path to follow is not clear. Once inside the cavity, the trainee should proceed with a systematic evaluation as follows: first, the two tubal ostia should be investigated by rotating the hysteroscope through 90° on its axis; then, the anterior and posterior walls should be evaluated by rotating it through 180° on its axis (see Video S1). Finally, the trainee should try to become familiar with the tactile sensation of touching the anatomical structure with the tip of the hysteroscope. Trainee Improvement Testing by Questionnaire. In order to evaluate the effectiveness of our diagnostic hysteroscopy training method, the trainees were asked to complete the same questionnaire at the start and at the end of the course in order to assess their feelings of improvement. The questionnaire consisted of ten questions, each scoring from 0 (minimum) to 10 (maximum), designed to assess the theoretical knowledge and technical-procedural level of the trainees in diagnostic hysteroscopy. The outcome assessors were blind to the identity of each respondent. The design, analysis, interpretation of data, drafting, and revisions conform to the Helsinki Declaration, the Committee on Publication Ethics guidelines and the Strengthening the Reporting of Observational Studies in Epidemiology Statement [14], available through the Enhancing the Quality and Transparency of Health Research Network. The data collected were anonymized, taking into account the observational nature of the study, without personal data that could lead to the formal identification of the participants. Each participant enrolled in this study signed consent to allow data collection and analysis for research purposes. The study was not advertised. No remuneration was offered to the participants to give consent. As this research involved normal educational practices in the context of a training course, this study was exempted from Institutional Review Board approval.
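To make the pre/post scoring concrete, the sketch below tabulates hypothetical 0-10 scores for a single questionnaire item and compares the two administrations with a rank-based test. The score vectors are invented purely for illustration, and the test mirrors the Wilcoxon rank-sum comparison named in the Statistical analysis paragraph that follows, not the study's actual data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical 0-10 scores for one questionnaire item, collected from the same
# group of trainees before and after the course (values are illustrative only).
pre_scores = np.array([4, 5, 6, 3, 5, 4, 6, 5, 4, 5])
post_scores = np.array([7, 8, 7, 6, 8, 7, 9, 7, 6, 8])

print(f"pre  mean +/- SD: {pre_scores.mean():.2f} +/- {pre_scores.std(ddof=1):.2f}")
print(f"post mean +/- SD: {post_scores.mean():.2f} +/- {post_scores.std(ddof=1):.2f}")

# Rank-sum (Mann-Whitney) comparison of the pre- and post-course scores.
stat, p = mannwhitneyu(pre_scores, post_scores, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")
```

Since each trainee answers both before and after the course, a paired test such as the Wilcoxon signed-rank test would also be a natural choice; the version above simply follows the tests listed in the Methods.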
Statistical analysis was performed with InStat 3.10, GraphPad Software, San Diego, CA. Continuous variables were expressed as mean and standard deviation (SD), or median and interquartile range (IQR), as appropriate. Categorical variables were expressed as frequency and percentage. The independent t-test and Wilcoxon rank-sum test were used to compare continuous variables as appropriate. The χ2 test and Fisher's exact test were used to compare categorical data. A p-value < 0.05 was considered statistically significant. Since the questionnaire was customized and used for the first time in this study, we had no previous data to use as a base for the sample size calculation. Nevertheless, in hypothesizing a mean score of 5 ± 2 before the start of the course (average for a medium knowledge of the topic), and an expected 20% increase in the overall score after the course, the enrolment of 31 participants would achieve a power of 80% with an alpha error of 0.05 (see the sketch below). Results. The questionnaire was administered to 34 participants (10 residents and 24 specialists in gynecology and obstetrics). All trainees completed the questionnaire, and no missing responses were recorded. The complete characteristics of the participants are described in Table 1. Twenty-six course participants (76.5%) had less than 1 year of experience in performing diagnostic hysteroscopy, and six participants (17.6%) had already participated in at least one previous diagnostic hysteroscopy course. Moreover, 19 participants (55.9%) reported performing fewer than 15 procedures in their career. From Table 1, the participants rated their prior experience as follows (mean ± SD): technical difficulty encountered (0 = never; 10 = always), 4.94 ± 2.1; failure of the procedure (0 = never; 10 = always), 3.74 ± 2.6; pain caused to the patient (0 = no pain; 10 = maximum conceivable pain), 4.75 ± 3. (In Table 1, data are expressed as mean ± standard deviation for continuous variables, or as percentages for dichotomous variables; HSC: hysteroscopy.) For 28 participants (82.4%), the diagnostic hysteroscopies were carried out in an outpatient setting, while 6 participants (17.6%) performed the procedures in the operating room with sedation. The 10 questions given before and after the course all use a scale from 0 to 10, and all of the results are shown in Table 2 (results of the customized questionnaires administered before and after the course, reported as pre-course and post-course means ± SD with the corresponding p-values). For 9 out of the 10 questions, there was a significant improvement (p < 0.0001 for questions 1, 2, 3, 4, 6, 9, and 10; p = 0.0025 for question 5; p = 0.0002 for question 7) of the scores between the pre- and post-course. The only question for which there was not a significant improvement was no. 8: "From 0 to 10, how do you judge the accuracy of diagnostic hysteroscopy with endometrial biopsy as compared to dilation and curettage?", where the mean pre-course score was 9.1 and the post-course score was 9.44 (p = 0.3628). Discussion. In comparison to other endoscopic procedures, diagnostic hysteroscopy, when performed correctly, is a well-tolerated procedure that allows the evaluation of the uterine cavity without anesthesia on an outpatient basis [8,10,15]. However, these characteristics of high safety and efficacy may lead some gynecologists to mistakenly believe that it is a procedure that can be performed without adequate training.
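The sample-size reasoning given in the Methods can be checked with a standard normal-approximation power formula. The sketch below assumes a two-sided test and takes the SD of the pre/post change equal to the stated pre-course SD of 2; neither assumption is spelled out in the paper, so the output (about 31-32 participants) should be read as a plausibility check rather than an exact replication.

```python
import math
from scipy.stats import norm

# Sample-size check for the pre/post questionnaire comparison.
# Assumptions (not stated explicitly in the paper): two-sided test at alpha = 0.05,
# SD of the within-participant change taken equal to the pre-course SD of 2.
pre_mean, sd = 5.0, 2.0
expected_increase = 0.20 * pre_mean          # 20% improvement -> +1 point
effect_size = expected_increase / sd         # Cohen's d = 0.5

alpha, power = 0.05, 0.80
z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)
n = ((z_alpha + z_beta) / effect_size) ** 2  # normal-approximation formula
print(f"required participants ~ {math.ceil(n)}")  # ~31-32 under these assumptions
```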
Such a misguided attitude could be responsible for many hysteroscopies being performed with the wrong technique, causing discomfort and avoidable pain to the patient, complications, and inconclusive procedures [2,7,15,16]. One of the main causes of failure in diagnostic hysteroscopy is probably the innate reflex of the novice endoscopist to aim at the centre of the screen when navigating the cervical canal [17,18]. In fact, the failure to master the 30 • fore-oblique view determines the patient's pain and thus the failure of the procedure. The percentage of gynecologists who still perform diagnostic hysteroscopy in a questionable way, who are not familiar with the endoscopic view, and who use incorrect methods to gain access to the uterine cavity may still be high. Therefore, there is a need for training programs that aim to provide theoretical concepts and practical principles to perform the correct technique and to provide adequate training to gynecologists [12,13]. The Arbor Vitae method, through the intensive combination of theoretical, video, and hands-on practical sessions, aims to improve the knowledge of diagnostic hysteroscopy and practical skills to correctly perform the procedure. These principles are fundamental to developing a good technique and performing the diagnostic hysteroscopy correctly. Several training programs using different simulators have been described and reported: plant simulators have been proposed using butternut pumpkin [19], animal organs such as bovine uterus [20] or pig bladder [21], or even virtual models [18,22,23]. All of these have different characteristics, but the aim is to provide the trainee with an easy way to simulate the uterus in order to gain confidence in the use of the hysteroscope before approaching a live patient. The Arbor Vitae method is characterized by the variety of models used in the practical sessions, starting with preparatory exercises on simple plastic tubes to familiarize the student with the instruments and their correct use in the cervix; then, using a card box, the trainee consolidates the movements to be used when examining the uterine cavity. Finally, all that has been learned is tested on an animal model. This step-by-step progressive pathway allows the trainee to gradually and effectively develop the 30 • fore-oblique view, while becoming familiar with the correct movements and how to minimize them and developing tactile perception by using the tip of the hysteroscope. Another important feature of our training method is the realism and reproducibility of our simulator. The rumen, as packaged in these courses, allows trainees to practice repeatedly and develop the right feel and perception with the instruments during the procedure in a highly realistic way. The low cost of the proposed models is another aspect to consider. In fact, the cost of rubber tubing, cardboard boxes, and rumen is negligible. The only additional cost is the cost of keeping the rumen at a low temperature in the fridge, a cost that can be accepted by almost all centers. The most important expense of our course concerns the instruments. Providing workstations with modern, complete, and fully functional instruments could be expensive. Training programs were also proposed with rudimentary or self-assembled equipment-for example, a smartphone instead of the camera we used-supporting the advantage of lower cost [24]. The use of the questionnaire allowed us to test the impact of the knowledge and skills of the trainees. 
The significant improvement in theoretical and practical knowledge of diagnostic hysteroscopy shown by the mean scores in 9 out of 10 questions, even for the most experienced participants, suggests the effectiveness of the method. Nevertheless, several limitations should be taken into account for proper data interpretation: first of all, the number of enrolled participants was low, although it fits the sample size analysis; secondly, we used a customized questionnaire, without any previous validation; thirdly, the participants were not blinded to the aim of the questionnaire administration, so they may have given higher scores on purpose after the course to manifest their satisfaction. Considering these elements, we solicit further studies in order to confirm our preliminary findings in a large cohort analysis. Conclusions. Diagnostic hysteroscopy has become a basic skill that every gynecologist should have in order to meet the demands of daily clinical practice. In the era of precision medicine, the need for training courses in diagnostic hysteroscopy that are well structured and effective in transmitting theoretical and practical notions is increasing, especially for those who make minimal invasiveness their working philosophy. The Arbor Vitae method aims to provide a solid foundation for gynecologists wishing to perform diagnostic hysteroscopy and to critically evaluate each procedure. Thanks to the realism and reproducibility of the models used, the trainee learns the basics of navigation and assessment of the cervical canal and uterine cavity, while becoming familiar with the diagnostic procedure and the instruments required. The data from the questionnaire are very encouraging and demonstrate the effectiveness of the Arbor Vitae training method. The participants reported a significant improvement in both theoretical and practical skills when using this training approach for diagnostic hysteroscopy. This training model has great potential for novice practitioners to achieve an adequate level of proficiency prior to performing diagnostic hysteroscopy on live patients. However, further research assessing the perception of live patients undergoing diagnostic hysteroscopy is needed to validate the effectiveness of the Arbor Vitae training method in improving the performance of trainees.
2023-05-27T15:13:39.497Z
2023-05-24T00:00:00.000
{ "year": 2023, "sha1": "a07f0761d91bb21b138c4a53a2fd994ede6fd87a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1648-9144/59/6/1019/pdf?version=1684988469", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d9ee028a7ea8699a2fb9295e10834835c6c498de", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
238857087
pes2o/s2orc
v3-fos-license
Additive Schwarz Methods for Convex Optimization with Backtracking This paper presents a novel backtracking strategy for additive Schwarz methods for general convex optimization problems as an acceleration scheme. The proposed backtracking strategy is independent of local solvers, so that it can be applied to any algorithms that can be represented in an abstract framework of additive Schwarz methods. Allowing for adaptive increasing and decreasing of the step size along the iterations, the convergence rate of an algorithm is greatly improved. Improved convergence rate of the algorithm is proven rigorously. In addition, combining the proposed backtracking strategy with a momentum acceleration technique, we propose a further accelerated additive Schwarz method. Numerical results for various convex optimization problems that support our theory are presented. 1. Introduction. In this paper, we are interested in additive Schwarz methods for a general convex optimization problem of the form min_{u ∈ V} { E(u) := F(u) + G(u) } (1.1), where V is a reflexive Banach space, F : V → R is a Fréchet differentiable convex function, and G : V → R is a proper, convex, and lower semicontinuous function that is possibly nonsmooth. We additionally assume that E is coercive, so that (1.1) admits a solution u* ∈ V. The importance of studying Schwarz methods arises from both theoretical and computational viewpoints. It is well-known that various iterative methods such as block relaxation methods, multigrid methods, and domain decomposition methods can be interpreted as Schwarz methods, also known as subspace correction methods. Studying Schwarz methods can yield a unified understanding of these methods; there have been several notable works on the analysis of domain decomposition and multigrid methods for linear problems in the framework of Schwarz methods [18,33,34,35]. The convergence theory of Schwarz methods has been developed for several classes of nonlinear problems as well [2,4,23,31]. In the computational viewpoint, Schwarz methods are prominent numerical solvers for large-scale problems because they can efficiently utilize massively parallel computer architectures. There has been plenty of research on Schwarz methods as parallel solvers for large-scale scientific problems of the form (1.1), e.g., nonlinear elliptic problems [12,31], variational inequalities [5,29,30], and mathematical imaging problems [11,14,25]. An important concern in the research of Schwarz methods is the acceleration of algorithms. One of the most elementary relevant results is optimizing the relaxation parameters of Richardson iterations related to the Schwarz alternating method [33, section C.3]; observing that the Schwarz alternating method for linear elliptic problems can be viewed as a preconditioned Richardson method, one can optimize the relaxation parameters of Richardson iterations to achieve a faster convergence as in [33,Lemma C.5]. Moreover, if one replaces Richardson iterations by conjugate gradient iterations with the same preconditioner, an improved algorithm with faster convergence rate can be obtained. Such an idea of acceleration can be applied to not only linear problems but also nonlinear problems. There have been some recent works on the acceleration of domain decomposition methods for several kinds of nonlinear problems: nonlinear elliptic problems [12], variational inequalities [17], and mathematical imaging problems [15,16,19].
In particular, in the author's previous work [22], an accelerated additive Schwarz method that can be applied to the general convex optimization (1.1) was considered. Noticing that additive Schwarz methods for (1.1) can be interpreted as gradient methods [23], acceleration schemes such as momentum [6,20] and adaptive restarting [21] that were originally derived for gradient methods in the field of mathematical optimization were adopted. In this paper, we consider another acceleration strategy called backtracking from the field of mathematical optimization for applications to additive Schwarz methods. Backtracking was originally considered as a method of line search for step sizes that ensures the global convergence of a gradient method [1,6]. In some recent works on accelerated gradient methods [10,20,28], it was shown both theoretically and numerically that certain backtracking strategies can accelerate the convergence of gradient methods. Allowing for adaptive increasing and decreasing of the step size along the iterations, backtracking can find a nearly-optimal value for the step size that results in large energy decay, so that fast convergence is achieved. Such an acceleration property of backtracking may be considered as a resemblance with the relaxation parameter optimization for Richardson iterations mentioned above. Hence, as in the case of Richardson iterations for linear problems, one may expect that the convergence rate of additive Schwarz methods for (1.1) can be improved if an appropriate backtracking strategy is adopted. Unfortunately, applying the existing backtracking strategies such as [10,20,28] to additive Schwarz methods is not so straightforward. The existing backtracking strategies require the computation of the underlying distance function of the gradient method. For usual gradient methods, the underlying distance function is simply the 2 -norm of the solution space so that such a requirement does not matter. However, the underlying nonlinear distance function of additive Schwarz methods has a rather complex structure in general (see (2.6)); this aspect makes direct applications of the existing strategies to additive Schwarz methods cumbersome. This paper proposes a novel backtracking strategy for additive Schwarz methods, which does not rely on the computation of the underlying distance function. As shown in Algorithm 3.1, the proposed backtracking strategy does not depend on the computation of the distance function but the computation of the energy functional only. Hence, the proposed backtracking strategy can be easily implemented for additive Schwarz methods for (1.1) with any choices of local solvers. Acceleration properties of the proposed backtracking strategy can be analyzed mathematically; we present explicit estimates for the convergence rate of the method in terms of some averaged quantity estimated along the iterations. The proposed backtracking strategy has another interesting feature; since it accelerates the additive Schwarz method in a completely different manner from the momentum acceleration introduced in [22], both of the momentum acceleration and the proposed backtracking strategy can be applied simultaneously to form a further accelerated method; see Algorithm 4.1. We present numerical results for various convex optimization problems of the form (1.1) to verify our theoretical results and highlight the computational efficiency of the proposed accelerated methods. This paper is organized as follows. 
A brief summary of the abstract convergence theory of additive Schwarz methods for convex optimization presented in [23] is given in section 2. In section 3, we present and analyze a novel backtracking strategy for additive Schwarz methods as an acceleration scheme. A fast additive Schwarz method that combines the ideas of the momentum acceleration [22] and the proposed backtracking strategy is proposed in section 4. Numerical results for various convex optimization problems are presented in section 5. We conclude the paper with remarks in section 6. 2. Additive Schwarz methods. In this section, we briefly review the abstract framework for additive Schwarz methods for the convex optimization problem (1.1) presented in [23]. In what follows, an index k runs from 1 to N . Let V k be a reflexive Banach space and let R * k : V k → V be a bounded linear operator such that For the sake of describing local problems, we define d k : V k ×V → R and G k : V k ×V → R as functionals defined on V k ×V , which are proper, convex, and lower semicontinuous with respect to their first arguments. Local problems have the following general form: where v ∈ V and ω > 0. If we set , ω = 1 in (2.1), then the minimization problem is reduced to which is the case of exact local problems. Here D F denotes the Bregman distance We note that other choices of d k and G k , i.e., cases of inexact local problems, include various existing numerical methods such as block coordinate descent methods [7] and constraint decomposition methods [11,29]; see [23, section 6.4] for details. The plain additive Schwarz method for (1.1) is presented in Algorithm 2.1. Constants τ 0 and ω 0 in Algorithm 2.1 will be given in Assumptions 2.3 and 2.4, respectively. Note that dom G denotes the effective domain of G, i.e., In what follows, we fix u (0) ∈ dom G and define a convex subset K 0 of dom G by Choose u (0) ∈ dom G, τ ∈ (0, τ 0 ], and ω ≥ ω 0 . for n = 0, 1, 2, . . . do Since K 0 is bounded, there exists a constant R 0 > 0 such that In addition, we define An important observation made in [23,Lemma 4.5] is that Algorithm 2.1 can be interpreted as a kind of a gradient method equipped with a nonlinear distance function [32]. A rigorous statement is presented in the following. Lemma 2.1 (generalized additive Schwarz lemma). For v ∈ V and τ, ω > 0, we defineṽ Then we haveṽ where the functional M τ,ω : V × V → R is given by (2.6) A fruitful consequence of Lemma 2.1 is an abstract convergence theory of additive Schwarz methods for convex optimization [23] that directly generalizes the classical theory for linear problems [33,Chapter 2]. The following three conditions are considered in the convergence theory: stable decomposition, strengthened convexity, and local stability (cf. [33, Assumptions 2.2 to 2.4]). Assumption 2.2 (stable decomposition). There exists a constant q > 1 such that for any bounded and convex subset K of V , the following holds: for any u where C 0,K is a positive constant depending on K. Assumption 2.3 (strengthened convexity). There exists a constant τ 0 ∈ (0, 1] which satisfies the following: for Assumption 2.4 (local stability). There exists a constant ω 0 > 0 which satisfies the following: for any v ∈ dom G, and w k ∈ V k , 1 ≤ k ≤ N , we have . Assumption 2.2 is compatible with various stable decomposition conditions presented in existing works, e.g., [3,31,33]. Assumption 2.3 trivially holds with τ 0 = 1/N due to the convexity of E. 
However, a better value for τ 0 independent of N can be found by the usual coloring technique; see [23, where κ τ,ω is the additive Schwarz condition number defined by and K τ was defined in (2.5). Meanwhile, the Lojasiewicz inequality holds in many applications [8,36]; it says that the energy functional E of (1.1) is sharp around the minimizer u * . We summarize this property in Assumption 2.6; it is well-known that improved convergence results for first-order optimization methods can be obtained under this assumption [9,27]. Assumption 2.6 (sharpness). There exists a constant p > 1 such that for any bounded and convex subset K of V satisfying u * ∈ K, we have for some µ K > 0. Propositions 2.5 and 2.7 are direct consequences of Lemma 2.1 in the sense that they can be easily deduced by invoking theories of gradient methods for convex optimization [23, section 2]. 3. Backtracking strategies. In gradient methods, backtracking strategies are usually adopted to find a suitable step size that ensures sufficient decrease of the energy. For problems of the form (1.1), backtracking strategies are necessary in particular to obtain the global convergence to a solution when the Lipschitz constant of F is not known [1,6]. Considering Algorithm 2.1, a sufficient decrease condition of the energy is satisfied whenever τ ∈ (0, τ 0 ] and ω ≥ ω 0 (see [23,Lemma 4.6]), and the values of τ 0 and ω 0 in Assumptions 2.3 and 2.4, respectively, can be obtained explicitly in many cases. Indeed, an estimate for τ 0 independent of N can be obtained by the coloring technique [23, section 5.1], and we have ω 0 = 1 when we use the exact local solvers. Therefore, backtracking strategies are not essential for the purpose of ensuring the global convergence of additive Schwarz methods. In this perspective, to the best of our knowledge, there have been no considerations on applying backtracking strategies in the existing works on additive Schwarz methods for convex optimization. Meanwhile, in several recent works on accelerated first-order methods for convex optimization [10,20,28], full backtracking strategies that allow for adaptive increasing and decreasing of the estimated step size along the iterations were considered. While classical one-sided backtracking strategies (see, e.g., [6]) are known to suffer from degradation of the convergence rate if an inaccurate estimate for the step size is computed, full backtracking strategies can be regarded as acceleration schemes in the sense that a gradient method equipped with full backtracking outperforms the method with the known Lipschitz constant [10,28]. In this section, we deal with a backtracking strategy for additive Schwarz methods as an acceleration scheme. Existing full backtracking strategies [10, 20, 28] mentioned above cannot be applied directly to additive Schwarz methods because the evaluation of the nonlinear distance function M τ,ω (·, ·) is not straightforward due to its complicated definition (see Lemma 2.1). Instead, we propose a novel backtracking strategy for additive Schwarz methods, in which the computational cost of the backtracking procedure is insignificant compared to that of solving local problems. The abstract additive Schwarz method equipped with the proposed backtracking strategy is summarized in Algorithm 3.1. The parameter ρ ∈ (0, 1) in Algorithm 3.1 plays a role of an adjustment parameter for the grid search. As ρ closer to 0, the grid for line search of τ becomes sparser. 
On the contrary, the greater ρ, the greater τ (n+1) is found with the more computational cost for the backtracking process. The condition τ (0) = τ 0 is not critical in the implementation of Algorithm 3.1 since τ 0 can be obtained by the coloring technique. Different from the existing approaches [10,20,28], the backtracking scheme in Algorithm 3.1 does not depend on the distance function M τ,ω (·, ·) but the energy functional E only. Hence, the stop criterion for the backtracking process can be evaluated without considering to solve the infimum in the definition (2.6) of M τ,ω (·, ·). Moreover, the backtracking process is independent of local problems (2.1). That is, the stop criterion (3.1) is universal for any choices of d k and G k . The additional computational cost of Algorithm 3.1 compared to Algorithm 2.1 comes from the backtracking process. When we evaluate the stop criterion (3.1), the values of E(u (n+1) ), E(u (n) ), and E(u (n) + R * k w (n+1) k ) are needed. Among them, E(u (n) ) and E(u (n) + R * k w (n+1) k ) can be computed prior to the backtracking process since they require u (n) and R * k w (n+1) k only in their computations. Hence, the computational cost of an additional inner iteration of the backtracking process consists of the computation of E(u (n+1) ) only, which is clearly marginal. In conclusion, the most time-consuming part of each iteration of Algorithm 3.1 is to solve local problems on V k , i.e., to obtain w (n+1) k , and the other part has relatively small computational cost. This highlights the computational efficiency of the backtracking process in Algorithm 3.1. Next, we analyze the convergence behavior of Algorithm 3.1. First, we prove that the backtracking process in Algorithm 3.1 ends in finite steps and that the step size τ (n) never becomes smaller than a particular value. Proof. Since Assumption 2.3 implies that the stop criterion (3.1) is satisfied whenever τ ∈ (0, τ 0 ], the backtracking process ends if τ becomes smaller than or equal to τ 0 . Now, take any n ≥ 1. If τ (n) were less than τ 0 , say τ (n) = ρ j τ 0 for some j ≥ 1, then τ in the previous inner iteration is ρ j−1 τ 0 ≤ τ 0 , so that the backtracking process should have stopped there, which is a contradiction. Therefore, we have τ (n) ≥ τ 0 . Lemma 3.1 says that Assumption 2.3 is a sufficient condition to ensure that τ (n+1) is successfully determined by the backtracking process in each iteration of Algorithm 3.1. It is important to notice that τ (n) is always greater than or equal to τ 0 ; the step sizes of Algorithm 3.1 are larger than or equal to that of Algorithm 2.1. Meanwhile, similar to the plain additive Schwarz method, Algorithm 3.1 generates the sequence {u (n) } whose energy is monotonically decreasing. Hence, which completes the proof. Note that [23, Lemma 4.6] played a key role in the convergence analysis of Algorithm 2.1 presented in [23]. Relevant results for Algorithm 3.1 can be obtained in a similar manner. Proof. Take any w k ∈ V k such that By Assumption 2.4 and (3.1), we get Taking the infimum over all w k satisfying (3.2) yields the desired result. Lemma 3.4. Suppose that Assumption 2.2 holds. Let τ, ω > 0. For any bounded and convex subset K of V , we have where the functional M τ,ω (·, ·) was given in (2.6) and In addition, the right-hand side of (3.3) is decreasing with respect to τ . More precisely, if τ 1 ≥ τ 2 > 0, then we have Proof. Equation (3.3) is identical to the second half of [23, Lemma 4.6] . 
Nevertheless, it is revisited to highlight that some assumptions given in [23, Lemma 4.6] are not necessary for Lemma 3.4; for example, τ need not be less than or equal to τ_0 as stated in [23, Lemma 4.6] but can be any positive real number. Now, we prove (3.4). Since τ_1 ≥ τ_2, one can deduce from (2.5) that K_{τ_1} ⊆ K_{τ_2}. Hence, by the definition of C_{0,K} given in Assumption 2.2, we get C_{0,K_{τ_1}} ≤ C_{0,K_{τ_2}}. Meanwhile, the remaining estimate follows from the convexity of G, which completes the proof. Recall that the sequence {τ^{(n)}} generated by Algorithm 3.1 has a uniform lower bound τ_0 by Lemma 3.1. Hence, for any n ≥ 0, we obtain the corresponding bound in terms of κ_{τ_0,ω} defined in (2.7). Although Propositions 3.5 and 3.6 guarantee convergence to the energy minimum and provide the order of convergence of Algorithm 3.1, they are not fully satisfactory in the sense that they cannot explain why Algorithm 3.1 achieves faster convergence than Algorithm 2.1. In order to explain the acceleration property of the backtracking process, one should obtain an estimate for the convergence rate of Algorithm 3.1 in terms of the step sizes {τ^{(n)}} along the iterations [10]. We first state an elementary lemma that will be used in the further analysis. Proof. It suffices to show that (C(γ − 1) + a^{1−γ})^{1/(1−γ)} ≥ a − Ca^γ. We may assume that a − Ca^γ > 0. By the mean value theorem, there exists a constant c ∈ (a − Ca^γ, a) such that a^{1−γ} − (a − Ca^γ)^{1−γ} = (1 − γ)C a^γ c^{−γ} ≤ −C(γ − 1). Hence, we have C(γ − 1) + a^{1−γ} ≤ (a − Ca^γ)^{1−γ}, which yields the desired result. We also need the following lemma that was presented in [23, Lemma A.2]. Remark 3.11. If one sets τ^{(n)} = τ for all n in the proof of Theorem 3.9, then a corresponding estimate for the convergence rate of Algorithm 2.1 is obtained; this estimate is asymptotically equivalent to [23, Theorem 4.7] but differs by a multiplicative constant. A similar remark can be made for Theorem 3.10 and [23, Theorem 4.8]. Similar to the discussions made in [10], Theorems 3.9 and 3.10 can be interpreted as follows: since the convergence rate of Algorithm 3.1 depends on the averaged quantity (3.6), adaptive adjustment of τ depending on the local flatness of the energy functional can be reflected in the convergence rate of the algorithm. As we observed in Lemma 3.1, τ^{(n)} in Algorithm 3.1 is always greater than or equal to τ_0. Therefore, Theorems 3.9 and 3.10 imply that Algorithm 3.1 enjoys better convergence rate estimates than Algorithm 2.1. 4. Further acceleration by momentum. In the author's recent work [22], it was shown that the convergence rate of the additive Schwarz method can be significantly improved if an appropriate momentum acceleration scheme (see, e.g., [6,20]) is applied. More precisely, Algorithm 2.1 was integrated with the FISTA (Fast Iterative Shrinkage-Thresholding Algorithm) momentum [6] and the gradient adaptive restarting scheme [21] to form an accelerated version of the method; see [22, Algorithm 5]. Meanwhile, two acceleration schemes for gradient methods, full backtracking and momentum, are compatible with each other; they can be applied to a gradient method simultaneously without disturbing each other or reducing their accelerating effects. Indeed, some notable works on full backtracking [10,20,28] considered momentum acceleration of gradient methods with full backtracking. From this viewpoint, we present a further accelerated variant of Algorithm 3.1 in Algorithm 4.1, which is a unification of the ideas from [22, Algorithm 5] and Algorithm 3.1.
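To make the interplay of the two acceleration mechanisms concrete, the following is a minimal sketch of the overall pattern in a plain single-space gradient setting: the step size is optimistically enlarged and then shrunk by the factor ρ until a sufficient energy decrease is observed (an Armijo-type test stands in here for criterion (3.1)), after which the iterate is extrapolated with the FISTA momentum parameter, subject to a gradient-based restart. The Schwarz-specific ingredients, in particular the local solves producing w_k and the operators R*_k, are deliberately omitted, so this is an illustration of the combined backtracking/momentum/restart pattern rather than the method itself.

```python
import numpy as np

def accelerated_descent(grad, energy, u0, tau0=1.0, rho=0.5, tau_min=1e-8,
                        max_iter=500, tol=1e-10):
    """Gradient descent with full backtracking (the step size may grow and shrink),
    FISTA-type momentum, and gradient-based adaptive restart.

    In the additive Schwarz setting, the step v - tau*grad(v) would be replaced by
    v + tau * sum_k R_k^* w_k with w_k from the local problems, and the Armijo-type
    test below by the sufficient-decrease criterion (3.1); this sketch keeps the
    single-space gradient case for readability.
    """
    u = u0.copy()
    v = u0.copy()            # extrapolated point
    t = 1.0                  # FISTA momentum parameter
    tau = tau0
    for _ in range(max_iter):
        g = grad(v)
        tau = tau / rho      # optimistically enlarge the step (full backtracking)
        # shrink tau until a sufficient energy decrease is observed
        while True:
            u_new = v - tau * g
            if energy(u_new) <= energy(v) - 0.5 * tau * np.dot(g, g) or tau <= tau_min:
                break
            tau *= rho
        # FISTA momentum with adaptive restart
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        beta = (t - 1.0) / t_new
        if np.dot(v - u_new, u_new - u) > 0:   # restart: overrelaxation not helpful
            t_new, beta = 1.0, 0.0
        v = u_new + beta * (u_new - u)
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u, t = u_new, t_new
    return u

# Toy test: strongly convex quadratic E(u) = 0.5 u^T A u - b^T u.
A = np.diag([1.0, 10.0, 100.0])
b = np.array([1.0, 1.0, 1.0])
E = lambda u: 0.5 * u @ A @ u - b @ u
dE = lambda u: A @ u - b
u_star = accelerated_descent(dE, E, np.zeros(3))
print(np.allclose(A @ u_star, b, atol=1e-4))
```

On the toy quadratic, the step size settles near the inverse of the largest eigenvalue while the momentum and restart steps govern the long-range behavior, which mirrors the roles the two mechanisms play in Algorithms 3.1 and 4.1.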
As mentioned in [22], a major advantage of the momentum acceleration scheme used in Algorithm 4.1 is that a priori information on the sharpness of the energy E, such as the values of p and µ_K in Assumption 2.6, is not required. Such adaptiveness to the properties of the energy has become an important issue in the development of first-order methods for convex optimization; see, e.g., [21, 26]. Compared to Algorithm 3.1, the additional computational cost of Algorithm 4.1 comes from the computation of the momentum parameters t_n and β_n, which is clearly marginal. Therefore, the main computational cost of each iteration of Algorithm 4.1 is essentially the same as that of Algorithm 3.1. Nevertheless, we will observe in section 5 that Algorithm 4.1 achieves much faster convergence to the energy minimum than Algorithms 2.1 and 3.1. For completeness, we present a brief explanation of why Algorithm 4.1 achieves faster convergence than Algorithm 3.1; one may refer to [21, 22] for more details. On the one hand, the recurrence formula t_{n+1} = (1 + √(1 + 4t_n^2))/2 for the momentum parameter t_n in Algorithm 4.1 is the same as that in FISTA [6]. Hence, the overrelaxation step v^(n+1) = u^(n+1) + β_n(u^(n+1) − u^(n)) in Algorithm 4.1 is expected to accelerate the convergence by the same principle as in FISTA; see [13, Figure 8.5] for a graphical description of momentum acceleration. On the other hand, the restart criterion ⟨v^(n) − u^(n+1), u^(n+1) − u^(n)⟩ > 0 in Algorithm 4.1 means that the update direction u^(n+1) − u^(n) lies on the same side as the M_{τ,ω}-gradient direction v^(n) − u^(n+1). In the sense that the energy decreases fastest along the minus gradient direction, satisfying the restart criterion implies that the overrelaxation step was not beneficial, so we reset the overrelaxation parameter β_n to 0. From the viewpoint of dynamical systems, it was observed in [21] that the restarting scheme used in Algorithm 4.1 prevents underdamping of a dynamical system representing the algorithm, so that oscillations of the energy do not occur. 5. Numerical results. In order to show the computational efficiency of Algorithms 3.1 and 4.1, we present numerical results for various convex optimization problems. As in [22], the following three model problems are considered: the s-Laplace equation [31] with two-level domain decomposition, the obstacle problem [5, 29, 30] with two-level domain decomposition, and dual total variation (TV) minimization [11, 25] with one-level domain decomposition. All details such as problem settings, finite element discretization, space decomposition, stop criteria for local and coarse problems, and initial parameter settings for the algorithms are set in the same manner as in [22, section 4] unless otherwise stated, so we omit them. We set the fine mesh size h, the coarse mesh size H, and the overlapping width δ among subdomains by h = 1/2^6. It would be interesting to find a theoretically optimal ρ that results in the fastest convergence rate of Algorithm 3.1, which is left as future work. Next, we compare the performance of the various additive Schwarz methods considered in this paper. Figure 3 plots E(u^(n)) − E(u^*) for Algorithm 2.1 (Plain), Algorithm 5 of [22] (Adapt), Algorithm 3.1 with ρ = 0.5 (Backt), and Algorithm 4.1 with ρ = 0.5 (Unifi). In each of the model problems, the performance of Backt seems similar to that of Adapt.
More precisely, Backt outperforms Adapt in the s-Laplace problem, shows almost the same convergence rate as Adapt in the obstacle problem, and, in the dual TV minimization, decays a bit more slowly than Adapt during the first several iterations but eventually reaches a comparable energy error. Hence, the acceleration performance of Backt is comparable to that of Adapt. Meanwhile, as discussed in section 4, the acceleration schemes used by Adapt and Backt are entirely different from each other, and they can be combined to form a further accelerated method, Unifi. One can observe in Figure 3 that Unifi shows the fastest convergence rate among all the methods for every model problem. Since the difference between the computational costs of a single iteration of Plain and Unifi is insignificant, we conclude that Unifi possesses the best computational efficiency among all the methods, absorbing the advantages of both Adapt and Backt. 6. Conclusion. In this paper, we proposed a novel backtracking strategy for the additive Schwarz method for general convex optimization. It was proven rigorously that the additive Schwarz method with backtracking achieves a faster convergence rate than the plain method. Moreover, we showed that the proposed backtracking strategy can be combined with the momentum acceleration technique proposed in [22], yielding a further accelerated additive Schwarz method, Algorithm 4.1. Numerical results verifying our theoretical results and the superiority of the proposed methods were presented. We observed in section 5 that the additive Schwarz method with backtracking converges faster than the plain method for any choice of the adjustment parameter ρ. However, which value of ρ yields the fastest convergence rate remains an open problem. Optimizing ρ in order to construct a faster additive Schwarz method is left as future work.
2021-10-15T01:15:38.971Z
2021-10-14T00:00:00.000
{ "year": 2021, "sha1": "1f53dba983758f56b129cfaa3dbef2ebf213a251", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "1f53dba983758f56b129cfaa3dbef2ebf213a251", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
237396072
pes2o/s2orc
v3-fos-license
IMPLEMENTATION OF ARTIFICIAL INTELLIGENCE IN RESTAURANTS The topicality. In recent years, there has been a need to study the artificial intelligence use for the operation of restaurants, as in Ukraine (and in most countries) there is no such experience. The use of artificial intelligence systems customer-to-customer and item-to-item will ensure the quality of food delivery sites, which will allow you to analyze the order of the guest and identify the patterns of his preferences thus, automatically ask him to choose a certain set, dish and successful additions to the order, which will increase the average check, or choose new establishments that will help them enter the market of restaurant services. Purpose and methods. The purpose of the study is to analyze the current state, determine the prospects for the application of existing robotic technologies in the technological process of restaurants and develop a robotization scheme of the technological process of restaurants such as salad bar. Methods are in the course of research the methods of logical generalization concerning development of the robotization scheme of technological process which were carried out by means of the computer ArchiCaD program were applied. Results. The problem of introduction and the artificial intelligence use are studied by scientists and researchers in various fields of science. Considering their scientific works, it can be noted that artificial intelligence is already actively used for the manufacture of culinary products in foreign restaurants. There are known examples of the use of barista robots, pizza robots, salad maker robots, burger maker robots, etc. The study developed the robotization scheme of the technological process of salad bar, consisting of three stages. The first stage is the service of visitors in the shopping area, where the selection of the order, payment through the terminal and the subsequent automatic receipt of culinary products and beverages. The second stage is the preparation of semi-finished products in the procurement area. This process is controlled by a chef-operator, who controls the required number of semi-finished products and cleans and cuts vegetables, fruits, meat and fish products using machines for cleaning and slicing culinary products. The program provides for the analysis of the balance and the required number of semi-finished products and the choice of components for the preparation of salads with artificial intelligence. The third stage is the automatic preparation of salad in the pre-cooking production area. The artificial intelligence placed in the system analyzes the guest’s order and activates the containers with the necessary ingredients, mixes them and unloads them into a container covered with a plastic lid, and the robot stamping element leaves the order number on the lid. The proposed scheme provides for compliance with sanitary and hygienic standards for institutions of this type. With the developed system of production activities, the required number of employees will be 5 people: cleaner in the trade area, dishwasher, tray packer, cook-operator of the pre-cooking area and system administrator of artificial intelligence. Conclusions and discussions. The authors analyze the current state, identify prospects for the application of existing robotic technologies in the technological process of restaurants and developed a robotization scheme of the technological process on the example of a salad bar. 
The developed scheme consists of three stages: service of visitors, preparation of semi-finished products and automatic preparation of finished goods. It is assumed that the implementation of the developed system will speed up the process of customer service, reduce the area of production facilities and, accordingly, increase the restaurant turnover. The topicality of the problem The problem formulation. Quarantine measures due to the spread of coronavirus infection have forced humanity to adapt to new rules of conduct that prevent active social contacts between people. In order to ensure social distancing, the process of their robotization is relevant in the restaurant industry, which will ensure the absence of staff contact with guests and at the same time speed up the process of customer service and production of culinary products. New information technologies are already known that function independently of human intervention through the use and artificial intelligence development. In one of his works, Yu. Sydorchuk (2017) emphasizes that the technologies development, total informatization and computerization transform the social, economic, political and spiritual spheres of modern society. According to her opinion, the neuro technology development, genetic engineering, nanotechnology, biotechnology, the widespread use of the Internet affects not only society but also change people, transforming their natural capabilities. Most scientists focus on the study of the nature of the human intelligence development, but there is no consensus on its definition and understanding. With the advent of computers in the 1950s, the ever-advancing artificial intelligence began its development. Therefore, there is a need to study the use of artificial intelligence for the operation of restaurants, as in Ukraine (and in most countries) there is no such experience. A striking example of the use of artificial intelligence in the restaurant business are robot waiters, robot cooks, the possibility of using such artificial intelligence systems as customer-to-customer and item-to-item. The use of these systems will ensure the quality of food delivery sites, which will analyze the guest's order and identify patterns of his preferences, and thus automatically offer the customer to choose a set, dish and successful additions to the order, increase the average check or choose new establishments, which will help them enter the market of restaurant services. The state of the problem study. Analyzing the artificial intelligence concept, we can conclude that there are many definitions of intelligence. Thus, A. Oliynyk (2019) argues that intelligence is the ability to solve problems in unprogrammed (creative) way. Koizumi (2019) suggests that intelligence is the ability to function properly, think rationally, and act effectively in relation to the environment. According to Samuel (2000), intelligence is an innate quality, in contrast to the abilities acquired during training. In one of their works, McAfee and Brynjolfsson (2017) emphasize that the emergence and artificial intelligence development is inevitable. Looking around, we see many interactive and intelligent systems, such as a system that is a personal assistant that uses natural speech processing to make recommendations or answer questions. Even today, driving a car is possible without a person; the car can move independently on the streets, stop at traffic lights or park. 
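To make the item-to-item idea mentioned above concrete, here is a toy Python sketch that scores menu items by how often they have co-occurred with the items already in a guest's order and suggests the most frequent companions. The data layout, function names, and scoring rule are illustrative assumptions only; they are not taken from any system described in this article.

```python
from collections import defaultdict
from itertools import combinations

def build_cooccurrence(past_orders):
    """Count how often each pair of menu items appears together in one order."""
    counts = defaultdict(int)
    for order in past_orders:
        for a, b in combinations(sorted(set(order)), 2):
            counts[(a, b)] += 1
    return counts

def suggest_additions(current_order, past_orders, top_n=2):
    """Rank items not yet in the order by co-occurrence with the chosen items."""
    counts = build_cooccurrence(past_orders)
    candidates = {item for order in past_orders for item in order} - set(current_order)
    scores = {
        item: sum(counts.get(tuple(sorted((item, chosen))), 0)
                  for chosen in current_order)
        for item in candidates
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

history = [["greek salad", "lemonade"],
           ["greek salad", "feta", "lemonade"],
           ["caesar salad", "iced tea"]]
print(suggest_additions(["greek salad"], history))  # ['lemonade', 'feta']
```

A real delivery site would of course use much richer signals (guest profiles, time of day, basket value), but the same co-occurrence idea underlies the "successful additions to the order" suggestions discussed above.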
The idea of artificial intelligence is mentioned in the article of the famous English scientist A. Turing (1950) "Computers and the mind", which was published in 1950. The main question that was mentioned in the article at the time was: Can computers think like humans? According to the famous American futurist and inventor Hamilton (2017), the merger between computers and humans is so fast and deep that it is a turning point in history. The popular book by E. Brinolfsson and E. McAfee (2016) "The Second Age of Machines" presents the following classification of artificial intelligence: 1) systems that think similarly to humans (e.g., cognitive architecture and neural networks); 2) a human-like system (e.g., Turing test through natural processing language, knowledge representation, automated reasoning, and learning); 3) a system that thinks rationally (for example, logical solution algorithms, conclusions and optimization); 4) a system that operates rationally (for example, an intelligent software agent, the creation of robots that achieve goals through perception, planning, reflection, study, communication, decision-making and action). Considering the scientific works of domestic and foreign scientists and researchers, we can conclude that there is no single definition of artificial intelligence, as it is a very young field of research. Scientists define this concept in broad and narrow meanings. After learning about the artificial intelligence concept, we can conclude that artificial intelligence is a characteristic that determines the intellectual capabilities of computers in their decision-making. A significant number of scientific papers in the United States are devoted to the study of artificial intelligence, which confirms a deep understanding of the need for its use. It is well known that the US government annually prepares various reports on the implementation and active use of new information technologies, including artificial intelligence, in order to improve and facilitate people work. From the content of these reports it can be concluded that the United States is one of the leading countries at the state level to think about the global development of artificial intelligence. In October 2016, the United States presented at the governmental level the document "Preparing for the Future with Artificial Intelligence", which states that artificial intelligence technology opens up new demand and new opportunities for progress in critical areas such as health, education, energy and the environment. This document consists of recommendations for future action for federal authorities and other participants. It has several definitions of "artificial intelligence". McAfee and Brynjolfsson (2017) define it as a computerized system that behaves and mostly thinks as instructed. Others define the "artificial intelligence" concept as a system that can rationally solve a set of problems or adapt actions to achieve goals regardless of the real circumstances. Unresolved issues. Currently, there are two approaches to artificial intelligence, which are conventionally called algorithmic and with the help of self-learning (JavaTpoint, n.d.). In the first, all the rules by which intelligence operates are prescribed manually, and in the second is the created algorithm learns independently on a certain amount of data and allocates its own rules independently. Algorithmic path, which has its positive aspects, such as predictability and the ability to act within the programmed limits, as noted by D. 
Lubko, S. Sharov (2019), failed. At the same time, artificial intelligence, built on a selflearning algorithm, allows you to act differently in similar situations, taking into account the results of previously performed actions. This confirms that the problem of artificial intelligence has not been fully studied (Tokareva, 2018). It should be borne in mind that the use and transition of restaurants to activities with full or partial use of robotic technologies is an unexplored problem and task of restaurant business professionals. Purpose and research methods The purpose of the article is to analyze the current state and determine the prospects for the application of existing robotic technologies in the technological process of restaurants and the development of a robotization scheme of the technological process of restaurants such as salad bar. The methodological basis of the study is the theoretical and methodological aspects of a comprehensive approach to problem setting, analysis of research results using new theoretical developments, modern computer modeling methods. Research methods are in the course of research the methods of logical generalization concerning development of the robotization scheme of technological process which were carried out by means of the computer ArchiCaD program were applied. The object of the study is the technological process of the restaurant. The subject of the study is restaurants such as salad bar. The information base of the research was the scientific works of domestic and foreign scientists and scientists on the researched problem: monographs, scientific articles, materials of international congresses and symposiums, scientific and practical conferences, regulatory and technical documentation, patents, copyright certificates, statistical data. Research results Scientists have been paying attention to the study of artificial intelligence since the second half of the twentieth century: in 1950, Alan Turing (Turing, 1950) explored the problem of mental nature, i.e. how to implement a meaningful problem of modeling the machine of natural human thinking. Today, theorists and practitioners of many fields of scientific activity, including the culture of hospitality and restaurant business, have begun to understand the use of artificial intelligence, robotics, information and cognitive technologies. The conceptual framework for the creation of artificial intelligence was based on the automation of production processes that replace man during the performance of monotonous, routine work, which reduces time, financial, human and other resources and thus increase productivity. In this regard, different opinions are expressed, for example, P. Morkhat (2018) proposes to consider artificial intelligence from the following reviews: as a "cybernetic (computer-software) tool for expanding and strengthening human intellectual potential"; as a tool for human replacement (under its control) in the performance of any function that has the ability to anthropomorphic mental and cognitive processes (learning and self-learning, reflection, reasoning and self-regulation), emphasizing the ability of artificial intelligence to operate more effectively than primitive automation. V. Razumov and V. Sizikov (2019) emphasize that artificial intelligence can be considered not as a reproduction of natural intelligence, but as a "tool for imitating various scenarios". 
In their scientific work, the authors express their own modern concept of artificial intelligence as control and communication in complex technical systems (in terms of information processes), which provide the possibility of their automation. Today, the possibilities of using artificial intelligence to solve cognitive problems are widely explored: for example, text interpretation, language recognition, identification of persons and objects, the use of robotic systems that have the ability to make decisions (Demkin & Lukov, 2018;Sokolov et al., 2018), and robotization in the restaurant business on the example of robot waiters and robot chefs, who are already demonstrating the first unique "digital" services. Artificial intelligence is firmly entrenched in reality, as well as in the interaction and interdependence with other phenomena generated by informatization, expanding the functionality of the Internet, information and telecommunications technologies, reviving its uniqueness and relevance. Already today, in many countries, people are using technological innovations that point to the approaching era of artificial intelligence: unmanned aerial vehicles; voice services from modern electronics manufacturers; technological content of the socalled "smart home", etc. One of the leaders in the study of the practical application of artificial intelligence was the American company Apple, which created a prototype of artificial intelligence -a smartphone. Siri's voice assistant appeared in the iPhone 4S in the fall of 2011, which revolutionized the IT industry. After a while, Google introduced its "smart" service Google Now. Unlike Siri, Google's product strives to be useful not only when the user needs it, but also when he doesn't even think about it. That is, Google Now works automatically, like the autonomic nervous system. This system tracks the movements and actions of the user and studies his habits. By calculating the time a user regularly returns home from work, Google Now checks the traffic service in advance and paves the best way to navigate before you leave. Microsoft has similar systems: a virtual assistant with a female voice and Cortana's name is designed for dialogues and can ask questions to the user. Artificial intelligence "smart home" is a concept that scientists have been studying for decades. Today, several large companies are making significant efforts to bring concrete solutions to market for artificial intelligence systems, including Apple, which introduced a unified wireless protocol for managing home appliances with the iPhone help. It is necessary to mention the innovations of the Chinese company Xiaomi, which offered to equip their air fresheners with a Bluetooth module, which allowed the user to be reminded of the need to change the filter. Xiaomi later introduced four smart home modules, which include a webcam that can control a TV, air conditioning, music center, smart outlet, which allows you to remotely turn off any household appliance. All these gadgets can be controlled by the user using a smartphone and voice commands. There are the first developments that allow you to use artificial intelligence to control the functionality of the "smart home". For example, change the lighting depending on how lively the user listens to music. Unmanned vehicles are another proof that the era of artificial intelligence has begun. 
Business car owners already use on-board computer features such as motion tracking, adaptive cruise control and a collision warning system that can release gas and brake on its own. In particular, Volvo, Audi, Volkswagen, Range Rover, Acura and other companies equip their cars in this way. In April 2018, the European Commission presented a strategy for artificial intelligence, which sets the main goals; they are strengthening the technological and industrial capabilities of the EU with its application in various sectors of the economy, ensuring a "proper ethical and legal framework", as well as preparation for socioeconomic change (Cabinet of Ministers of Ukraine, 2018; 2020). Ukrainian developers are active leaders in the idea of a completely different approach to the artificial intelligence development. For example, the founders of the Digital Life Lab take a slightly different approach to the problem than other researchers. According to their opinion, first you need to learn to feel the car, and only then to think logically. Only in this way can a machine, without being human, find any human qualities. And this can be achieved only by giving the car the opportunity to communicate with people so that it can get to know them better (Antonenko et al., 2019). Ukrainian startup Digital Life Lab is working on the KARA project and developing a model of empathic artificial intelligence. KARA is in the pre-testing stage, and, according to its developers, it will be characterized by recognition of the mood, emotions and the guest's feelings. In June 2020, the famous Ukrainian restaurateur Dmytro Borysov announced on his Facebook page the opening of the gastronomic platform Gastrofamily Food Market, where with the help of a bot assistant guests can choose a restaurant and dishes from the menu according to their preferences. To understand at what stage of use in restaurants is artificial intelligence, it is worth giving a few examples. The company Chowbotics plans to place Sally robot kiosks in restaurants, cafeterias, hotels, airports and medical institutions. Their work is based on stations where work machines contain about 20 plastic containers with chopped vegetables, and when choosing an order, the robot combines them into salads according to the guest's order. In addition, such a station is equipped with a touchscreen for order fulfillment and a terminal for cashless payment. Artificial intelligence calculates the chemical composition and caloric content of food, helps to choose the composition of the salad and portion size according to age, sex, allergies and the guest's preferences. An option to improve the operation of such equipment is to teach artificial intelligence to determine the balance of semi-finished products, the required amount of raw materials and make the necessary list for purchase, as well as analyze demand and plan sales and recommend improving the composition of finished culinary products. Kitchen robotics developer Miso Robotics has released the Flippy robot manipulator for turning burgers. Artificial intelligence is able to distinguish a piece of chicken from a bun, and a ready-made burger from a semi-finished product on such indicators as shape, color, temperature. The American supermarket chain Whole Foods is developing a robot barista Briggio. Flippy has proposed the name "cobot", which means cooperative robot. 
That is, if the machine detects the presence of a person in the work area, then used by the developers industrial manipulator Universal Robotics will stop immediately to prevent collisions with people and prevent possible injury. The American supermarket chain Whole Foods is developing a robot barista Briggio. Such equipment with artificial intelligence will be able to receive orders from the Internet through a personal account on the developer's website. This way, you can pay and place your order online while on the way to the supermarket. The robot-barista can make hot drinks such as lattes, tea, hot chocolate and cappuccino (McKinsey Global Institute, 2017). The Momentum Machines project has developed the Momentum Machine, which has a capacity of 400 burgers per hour, is equipped with 350 sensors and 20 computers and is 4 meters long. Such a robot will speed up maintenance and increase the owner's income. Thus, for an hour of work at an average burger price of $ 6, it is possible to get an income of $ 2,400, which is three times higher than the average income of a fast food restaurant in the United States (Antonenko et al., 2019). In San Francisco, the airport has a mobile barista robot, which moves around the airport and offers guests a choice of coffee drinks and the ability to pay by card. So now airport guests do not have to look for a coffee shop -the products are looking for those who want them. There is a semi-automatic Spyce restaurant in Boston, where robots have replaced chefs. It was created by four graduates of the Massachusetts Institute of Technology and approved by the prestigious chef of the restaurant Daniel Buluda. Spyce is considered to be the first restaurant in the world with robotic cuisine, where complex dinners are prepared ("Artificial Intelligence", 2019). The bartenders at Bionic Bar on the Royal Carribean liner not only speed up the preparation of drinks, but also serve as elements of the show (McKinsey Global Institute, 2017). Directly above the works is a panel with more than a hundred bottles of alcoholic beverages. To order, the guest must choose a drink from 30 options in the menu on the tablet or on your smartphone by downloading the mobile application. After that, customers just watch as the works mix and shake the necessary ingredients. In the American restaurant Zume Pizza, robots are used to make pizza. The sauce is dosed on the dough pieces, the next robot decomposes the ingredients, then on the conveyor they go to the robot, which distributes them evenly on the pizza, and the last robot transfers it to the oven. This station is located in a portable van, so the pizza can be prepared on the way to the customer, which reduces delivery time. The Chinese restaurant Dalu Robot in Jinan uses 12 robots. They travel around the hall on small bicycles and deliver meat and vegetables, which visitors dip in boiling broth. Each of the robots is equipped with a motion sensor that allows you to send a signal to stop the robot at the right table. In addition, they perform the functions of hostesses, as well as entertain guests by singing and dancing (Association of Ukrainian-Chinese Cooperation, 2017). The Japanese restaurant FuA-Men automated the preparation of noodles using the robot Fully Automated Ramen. The preparation of noodles takes 1 minute and 40 seconds, which are 80 servings per shift. The quality of ready meals does not differ from traditional ones. 
At the Russian company Promobot, the robot helps people with navigation, answers questions, broadcasts promotional materials and remembers everyone with whom he had to communicate. Based on a preliminary analysis of the use of artificial intelligence in restaurants, the authors have developed a robotization scheme of the technological process of such an institution on the example of a salad bar (Fig. 1). This scheme consists of three stages. The first stage is to serve visitors in the shopping area. First, they form their own order in the order area on the touchscreen terminal: in the dialog box, choose a salad (this can be a suggested recipe or created by the visitor to choose from the suggested ingredients) and drinks. After confirming the order, it is paid through the payment terminal and receives a check with the order number. The next step for the visitor is receiving an order in the distribution area through the appropriate window. The visitor identifies his order by the number of the check, which is stamped on the lid of the finished salad. The next step of the visitor is to receive the ordered drink in the appropriate machine by entering the check number on the keyboard. After receiving a full order, the visitor takes a free seat at a table in the shopping area. After consumption, the cleaner collects trays and dishes and through a special window passes them to the washing tableware, where the process of washing dishes and sorting it on the rack. The second stage is the preparation of semi-finished products in the procurement area. This process is controlled by a chef-operator, who prepares the required number of semi-finished products and cleans and cuts vegetables, fruits, meat and fish products using machines for cleaning and slicing culinary products. The sliced semi-finished products are loaded into a sorting robot, which recognizes the semi-finished product by size, shape and color, and uses special channels to transport the semi-finished products to the appropriate container. The third stage is the automatic preparation of salad in the pre-cooking production area. The artificial intelligence housed in the system analyzes the guest's order and activates the containers with the necessary ingredients, in which the dispenser dispenses the required amount of semi-finished product into a special container that stops under each container, and then sends a mixture of semi-finished products to the mixer. Here is the automatic mixing of products, filling them with dressing and unloading into the dishes, which is covered with a plastic lid, and the stamping element of the robot leaves the order number on the lid. A conveyor is connected to the robot mixer, which connects it with the dishwasher. There, the dishwasher sorts the trays and places the dishes on them, which move along the conveyor to the mixer robot. After loading the finished salad into the dishes and applying the order number, the tray with the order is transported on the conveyor to the distribution room, where it is picked up by the visitor and then sent with it to the vending machines with drinks. During the operation of the proposed system, the number of semi-finished products in the container decreases over time. The artificial intelligence of the system analyzes the hourly number of visitors and the content of orders of previous days and weeks and calculates the limit of the number of semi-finished products in the container. 
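One simple way to realize the inventory logic just described — estimating the coming hour's demand for each semi-finished product from past usage and flagging containers that have fallen below that level — is sketched below in Python. The averaging rule, the safety margin, and all names are assumptions made purely for illustration; the article does not specify how the limit is computed.

```python
def hourly_limit(past_hourly_usage, safety_factor=1.2):
    """Estimate the per-hour demand for one ingredient from recent usage counts."""
    if not past_hourly_usage:
        return 0
    average = sum(past_hourly_usage) / len(past_hourly_usage)
    return int(average * safety_factor) + 1   # round up and keep a small reserve

def containers_to_refill(stock, usage_history):
    """Return the ingredients whose containers the operator should replenish."""
    alerts = []
    for ingredient, remaining in stock.items():
        limit = hourly_limit(usage_history.get(ingredient, []))
        if remaining < limit:
            alerts.append((ingredient, remaining, limit))
    return alerts

stock = {"cucumber": 4, "feta": 20}
usage_history = {"cucumber": [9, 11, 10],   # portions used in this hour on past days
                 "feta": [6, 5, 7]}
print(containers_to_refill(stock, usage_history))  # [('cucumber', 4, 13)]
```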
Thus, when the number of semi-finished products becomes less than this limit, the operator is given a signal that it is urgent to prepare a certain semi-finished product and load it into the container. With the developed system of production activities, the required number of employees will be 5 people: a cleaner in the shopping area, a dishwasher, a packer of trays, a cook-operator of the pre-cooking area and a system administrator of artificial intelligence. Forecasting the implementation of the developed system will accelerate the process of customer service, reduce the area of production facilities and, accordingly, increase the turnover of the restaurant. Therefore, restaurateurs, who are constantly working to optimize the technological process and service, in their institution are interested in using robots, because such an innovation in the restaurant business helps to address issues of production and service and is of interest to visitors. In addition, over time, the use of artificial intelligence in restaurants will no doubt be perceived naturally. That is why now restaurateurs have the opportunity to be among the first in Ukraine to implement this innovation and the use of artificial intelligence to robotize the technological process in restaurants. Conclusions and results discussion Thus, the relevance of the artificial intelligence introduction in the activities of restaurants due to the fact that quarantine measures due to the spread of coronavirus infection force humanity to ensure social distancing. This process will ensure the absence of staff contact with guests and at the same time speed up the process of customer service and production of culinary products. Scientific works analysis of domestic and foreign researchers has shown that there is a need to study the use of artificial intelligence for the functioning of restaurants. The robotization scheme of technological process on an example of salad bar which consists of three stages has been developed: service of visitors, preparation of semifinished products and automatic preparation of finished goods. Prospects for further research are to study the possibilities of using artificial intelligence in the restaurant business on the example of robot waiters and robot chefs, the possibility of creating artificial intelligence systems such as customer-to-customer and item-to-item.
2021-09-01T15:12:33.865Z
2021-06-22T00:00:00.000
{ "year": 2021, "sha1": "ba5b738d18c7a1d039adb0a8b85d17404827273f", "oa_license": "CCBY", "oa_url": "http://restaurant-hotel.knukim.edu.ua/article/download/234831/233566", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "87585c23762a757f9043a7011ec79c6ca5625a7e", "s2fieldsofstudy": [ "Computer Science", "Business" ], "extfieldsofstudy": [] }
258461400
pes2o/s2orc
v3-fos-license
Terracini loci of curves We study subsets S of curves X whose double structure does not impose independent conditions to a linear series L, but there are divisors D in |L| singular at all points of S. These subsets form the Terracini loci of X. We investigate Terracini loci, with a special look towards their non-emptiness, mainly in the case of canonical curves, and in the case of space curves. Introduction Terracini loci T(X, L, x) of a projective variety X (over an algebraically closed field of characteristic 0) are subsets of the set S(X, x) of all reduced finite subsets S ⊂ X reg of cardinality x, with the property that the double scheme 2S on S does not impose independent conditions to the linear series L, and there are divisors in |L| passing through 2S. Terracini loci are certainly involved in the study of interpolation properties of the image of X in the map induced by L, but they also assume great importance in the theory of secant varieties to embedded varieties X, for they are connected with points of the abstract secant variety of X in which the differential of the map to the embedded secant variety drops rank. More details on the initial properties of Terracini loci can be found in [1]. In this paper, we focus the attention to Terracini loci of curves. Even if curves are never defective, so that their secant varieties always have the expected dimension, yet there are special points p in which the Terracini Lemma fails to provide the dimension of the Zariski tangent space to the secant variety. This happens typically when the set of points in X which generates p belongs to some Terracini locus. We are mainly concerned with the problem of the non-emptiness of a Terracini locus T(X, L, x). Certainly T(X, L, x) is empty when X is a rational normal curve in P r , L is the complete hyperplane linear series and x ≤ (r + 1)/2, because every subscheme of length ≤ r + 1 in X is linearly independent. We will see that, for odd r, the emptiness of T(X, L, (r + 1)/2) characterizes rational normal curves (Proposition 5.6). This means that there is a large mass of examples of non-empty Terracini loci of curves. Our analysis considers mainly two cases: canonically embedded curves (Section 4) and curves in P 3 (Section 5). In the former case, the series L is the complete canonical linear series K X , and one can easily expect that the existence of sets in T(X, K X , x) strongly depends on the geometry of linear series on the curve. We show indeed that subsets of thetacharacteristics on X prove that T(X, K X , x) is non-empty when g/2 ≤ x ≤ g − 1. For x < g/2 we see (Proposition 4.7) that the non-emptiness of T(X, K X , x) is linked to the gonality of X. For space curves, we use induction on the degree and genus, with the technique of smoothing reducible curves, to prove that T(X, L, x) is non-empty for some smooth curves of degree d and genus g ≤ d − 3, when 6 ≤ 2x ≤ d (Theorem 5.12). We also prove the existence of smooth special curves with non-empty Terracini loci in some components of the Hilbert scheme of space curves even outside the Brill-Noether range (Theorem 5.13 and Remark 5.14). We would like to thank the anonymous referees for precious observations on the preliminary versions of the paper. Preliminaries We work over an algebraically closed field of characteristic 0. 2.1. Notation. For any 0-dimensional subscheme S we denote with ℓ(S) the length of S. When S is reduced, then ℓ(S) is the cardinality of S. Let X be an integral projective variety. 
For any point P ∈ X reg , with homogeneous maximal ideal m P,X , we denote with (2P, X) the subscheme of X defined by m 2 P,X . If S = {P 1 , . . . , P x } ⊂ X is a finite set of points, then we denote with (2S, X) the non-reduced scheme 2S = x i=1 (2P i , X). When it is clear which X we refer to, then we will write simply 2S instead of (2S, X). For any scheme X and for any integer x we denote with S(X, x) the set of all reduced finite subsets S ⊂ X reg of length x. S(X, x) is an open subset of the symmetric product X (x) . Let X be a projective curve, and let S be a finite subset of X reg . When S identifies a Cartier divisor on X, by abuse, we will continue to denote it with S. We will also denote with |S| the complete linear series associated to S, and we will denote with h 0 (S) the dimension of the space of sections of the associated line bundle. We will use the same convention for 2S. For any scheme Z ⊂ P r let Z ⊆ P r denote the minimal linear subspace of P r containing Z. 2.2. Terracini loci. We recall from [1] the definition of Terracini locus of a projective variety. Definition 2.1. Let X be an integral projective variety, L a line bundle on X and V ⊆ H 0 (L) a linear subspace. Set m := dim X. Fix S ∈ S(X, x). We say that S is in the Terracini locus T(X, We say that the integer δ(S, X, L, V ) := dim V (−(2S, X)) − dim V + x(m + 1) is the (Terracini) defect of S with respect to (L, V ). When V = H 0 (L), we will drop V in the notation. We will consider, throughout the paper, mainly the case where X is a curve, i.e. m = 1. In this situation a finite set S of length x lies in the Terracini locus when dim V (−(2S, X)) > max{dim V − 2x, 0}. We can extend the definition of Terracini loci to include some non-reduced 0dimensional subschemes. Definition 2.2. Let C ⊂ P r be a smooth and connected non-degenerate curve. For each positive integer x let C (x) denote the symmetric product of x copies of C. The variety C (x) is a connected projective variety of dimension x parametrizing the degree x zero-dimensional schemes of X, and there is a non-empty Zariski open subset S(C, x) ⊂ C (x) which parametrizes subsets of cardinality x. For each positive integer x letT(C, x) denote the set of all Z ∈ C (x) such that dim 2Z ≤ 2x − 2 and 2Z = ∅. Obviously, T(C, 1) =T(C, 1) = ∅. The inclusion S(C, x) ⊂ C (x) induces an inclusion T(C, x) ⊆T(C, x). The semicontinuity theorem for cohomology gives thatT(C, x) contains the closure of T(C, Example 2.3. Fix an even integer d ≥ 4, a line D ⊂ P 2 and p ∈ D. There is a smooth degree d curve C ⊂ P 2 such that p ∈ C and D ∩ C = dp, i.e. p is a total ramification point of C with D as its tangent line. Since C has only finitely many multitangent lines, T(C, x) is finite and hence it is closed inT(C, x). Since d is even Generalities Example 3.1. Let X ⊂ P N be an integral curve. Here we take L = O X (1) and V = the linear series of hyperplanes. For a general S = {P 1 , . . . , P x } ⊂ X reg write 2S for (2S, X). The elements of V (−2S) correspond to hyperplanes containing the tangent lines to X at the points P i 's. It follows that S lies in the Terracini locus when the span of the x tangent lines at the points of X has dimension smaller than the expected one. Then by definition T(X, L, V, 1) is always empty. On the other hand, when the map induced by the linear series V is not birational onto the image the Terracini locus T(X, L, V, 1) may contain some points. Example 3.2. Let X be a smooth hyperelliptic curve of genus g ≥ 2. 
We can describe all the Terracini loci with respect to L = K X and V = H 0 (K X ). Let B ⊂ X be the set of the Weierstrass points of X. For any integer x let S(B, x) denote the set of all subsets of B with cardinality x. The set B is the ramification locus of the g 1 2 of X. Since we work in characteristic 0, ℓ(B) = 2g +2. Fix a positive integer x and take S ∈ S(X, x). If x ≥ g, then deg(K X (−(2S, X))) < 0 and hence h 0 (K X (−(2S, X))) = 0. Thus T(X, K X , x) = ∅ for all x ≥ g. Assume that 1 ≤ x ≤ g − 1. Let h : X → P 1 be the morphism associated to the g 1 2 of X. The linear system |K X | is the minimal sum of g − 1 copies of the g 1 2 of X. Hence every base-point free special line bundle on X is the sum of at most g − 1 copies of the g 1 2 . Thus T(X, K X , 1) = B and T(X, K X , g − 1)) = S(B, g − 1). More generally, S belongs to T(X, K X , x) if and only if we have that either S ∩ B = ∅ or there are p, q ∈ S such that p = q and h(p) = h(q) for these conditions are equivalent to h 0 (K X (−(2S, X))) > g − 2x. It is easy to realize that, for each x ∈ {2, . . . , g − 2}, the elements of T(X, K X , x) with (maximal) defect x are the elements of S(B, x). If e := #(S ∩ B) and S has f distinct sets For the rest of the section let us go back to the case in which X ⊂ P N is an integral curve, and we take L = O X (1) and V = the linear series of hyperplanes. Example 3.3. Let us consider what happens when x = 2 and X is a plane curve. Thus max{dim V − 2x, 0} = 0. Then S = {P, Q} ∈ S(2) belongs to the Terracini locus if and only if the tangent lines to X in P and Q coincide. Since in characteristic 0 not every tangent line is bitangent, then T(X, L, V, 2) is either empty or finite. If N > 2, the tangent lines to two general points of X span a 3-dimensional linear subspace (recall that we work in characteristic 0). The set S = {P, Q} lies in the Terracini locus T(X, L, V, 2) when the tangent lines in P, Q meet at some point P 0 , i.e. there exists a plane containing the two tangent lines. In this case, the projection of X from P 0 is a curve with (at least) two cusps. Canonically embedded curves Let us turn now to the case where X is a smooth curve of genus g and we consider the complete canonical linear series L = K X . We will describe in several cases the locus T(X, K X , x). Since we already treated the case of hyperelliptic curves in Example 3.2, we assume that g ≥ 3 and K X embeds X in P g−1 . We start with a very easy observation, which shows that we need to distinguish two cases, depending if x is smaller than g/2 or not. Proposition 4.1. A reduced set S of length x lies in T(X, K X , x) if and only if either 2x < g and h 0 (2S) > 1, i.e. the linear series |2S| is not a singleton, or 2x ≥ g and h 0 (K X − 2S) > 0, i.e. 2S is special. Proof. By definition the set S lies in T(X, K X , x) if and only if h 0 (K X − 2S) > min{0, h 0 (K X ) − 2x}. Since h 0 (K X ) = g, we distinguish between 2x < g and 2x ≥ g. In the latter case S lies in T(X, K X , x) if and only if h 0 (K X − 2S) > 0. In the former case, since by Riemann-Roch h 0 (2S) = 2x − g + 1 + h 0 (K X − 2S), S lies in T(X, K X , x) if and only if h 0 (2S) > 1. It follows from the previous proposition that the Terracini locus T(X, K X , x) is empty if x ≥ g, because in this case the degree of K X − 2S is negative. Let us consider the extremal case x = g − 1. Example 4.2. A set S (resp. 
scheme) of length g − 1 belongs to T(X, K X , g − 1) (resp.T(X, K X , g − 1)) if and only if h 0 (K X − 2S) > 0 which, for degree reasons, implies that 2S is a canonical divisor. Thus subsets S ∈ T(X, K X , g −1) correspond to divisors in some non empty linear series G such that 2G = K X , i.e. a thetacharacteristic of X. It is well known ( [5]) that X has a finite number, exactly 2 2g , theta-characteristics. A theta-characteristic is odd or even, depending on the parity of h 0 (G). The number of odd theta-characteristics is 2 g−1 (2 g − 1) while there are 2 g−1 (2 g + 1) even thetacharacteristics. Now assume that X is general in the moduli space M c g . In this case, by [5] Corollary 1.11, h 0 (G) ≤ 1 for every theta-characteristic G on X, and for each odd theta-characteristic G on X the divisor D with {D} = |G| is reduced. Thus for X ∈ M c g general the Terracini locus T(X, K X , g − 1) is finite, of cardinality 2 g−1 (2 g − 1). There are X ∈ M c g with theta-characteristics G such that h 0 (G) ≥ 2. For such curves T(X, K X , g − 1) is infinite. On the contrary, a natural question is to ask if there are X ∈ M c g such that T(X, K X , g − 1) = ∅, i.e. no reduced divisor is the zero-locus of an effective thetacharacteristic. In the case g = 3 this is equivalent to ask if there is a smooth degree 4 plane curve X with 28 flexes of higher order, i.e. 28 lines L ⊂ P 2 meeting X at a unique point. The total weight of all flexes of a smooth plane quartic is 24, because its Hessian determinant has degree 6. Thus there is no such X for g = 3. For any X we have #T(X, K X , g − 1) ≥ 2 g−1 (2 g − 1) and either #T(X, K X , g − 1) = 2 g−1 (2 g − 1) (case h 0 (G) ≤ 1 for all theta-characteristic G of X) or dimT(X, K X , g − 1) > 0. In any case, taking subsets of a theta-characteristic, we obtain immediately the following Proposition 4.3. Fix an integer x such that g/2 ≤ x ≤ g − 1. Then the Terracini locusT(X, K X , x) is non empty. Of course when X has a positive dimensional theta-characteristic, then also T(X, K X , x) is infinite for all x between g/2 and g − 1. Remark 4.4. Fix an integer x such that g/2 ≤ x ≤ g − 2. One can try to extend elements of T(X, K X , x) to elements T(X, K X , x + 1), with the addition of suitable points. Respect to this, we can observe: The differential of the rational map φ : X \X ∩ 2S → P 1 induced by the linear projection from 2S shows that there are only finitely many p ∈ X\S such that S∪{p} ∈ T(X, K X , x+1). There are at least 2 such points p, because φ extends to a morphism ψ : X → P 1 by the smoothness of X and X has at least 2 ramification points, because g > 0. Let us now consider the case x < g/2. Proposition 4.5. Assume x < (g − 1)/2, and assume that T(X, K X , x) is non empty. Then Proof. Pick S ∈ T(X, K X , x) and p general in X. Since 2x < g then by Proposition When x + 1 < g/2 , the last inequality is sufficient to conclude that S ∪ {p} sits in T (X, K X , x + 1), by Proposition 4.1 again, thus the inequality on the dimensions holds. Assume x = (g − 2)/2. Since h 0 (2(S ∪ {p})) ≥ 2, then by Riemann-Roch Thus 2(S ∪ {p}) is special, hence it belongs to T (X, K X , x + 1), by Proposition 4.1, and we conclude as before. Corollary 4.6. Fix the minimal integer x such that the Terracini locus T(X, K X , x) is non empty. Then for all y with x ≤ y ≤ g − 1 the Terracini locus T(X, K X , y) is non-empty. We saw in the previous section that T(X, K X , 1) is non-empty if and only if X is hyperelliptic. 
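For the reader's convenience, the Riemann–Roch bookkeeping behind Proposition 4.1 (and used again in the proof of Proposition 4.5) can be displayed as follows; this is only a restatement of the computation already carried out above.

```latex
% Riemann-Roch applied to the degree-2x divisor 2S on the genus-g curve X:
%   h^0(2S) - h^0(K_X - 2S) = 2x - g + 1,   with  h^0(K_X) = g.
\[
  h^0(K_X - 2S) > \max\{0,\ g - 2x\}
  \iff
  \begin{cases}
    h^0(2S) = 2x - g + 1 + h^0(K_X - 2S) > 1, & \text{if } 2x < g,\\[2pt]
    h^0(K_X - 2S) > 0,\ \text{i.e. } 2S \text{ is special}, & \text{if } 2x \ge g.
  \end{cases}
\]
```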
The last 2 propositions of this section link the gonality k of X with the minimal x such that T(X, K X , x) = ∅. Proposition 4.7. If T(X, K X , x) = ∅, then X has a linear series g 1 2x . For the converse, if X has a linear series g 1 k with k < g/2, then dim(T(X, K X , k)) ≥ 1. Proof. Recall that we are assuming g ≥ 3 and X not hyperelliptic. The first assertion is trivial if x ≥ g/2, for every X satisfies T(X, K X , x) = ∅ and every X has a linear series g 1 2x . When x < g/2, the first assertion follows immediately by Proposition 4.1. For the second assertion, consider the degree k map f : X → P 1 associated to the linear series g 1 k . For any p ∈ P 1 let D p := f −1 (p) denote the associated degree k divisor. Note that h 0 (2D p ) ≥ h 0 (O P 1 (2p)) = 3. Then, as in Proposition 4.1, a general D p belongs toT(X, K X , k). Since we work in characteristic 0, then a general D p is formed by k distinct points, hence it belongs to T(X, K X , k). Before we can refine the previous proposition, let us see what happens for trigonal curves. Example 4.8. Let X be a smooth trigonal curve of genus g ≥ 4, canonically embedded in P g−1 , and let f : X → P 1 be the degree 3 morphism associated to the g 1 3 . By the Castelnuovo's inequality [8], f is unique if g ≥ 5. Let Σ ⊂ X denote the set of all ramification points of the map and let Σ ′ ⊆ Σ denote the set of all p ∈ Σ which belong to fibers of cardinality 2. The points p ∈ Σ \ Σ ′ are called the total ramification points of f , because the fiber containing p is supported at p. Σ ′ = Σ if X is a general trigonal curve of genus g, but there are trigonal curves in which the equality fails, and also trigonal curves with Σ ′ = ∅, e.g. the degree 3 Galois coverings of P 1 . Take p ∈ Σ ′ and consider the point q p = p in the fiber of f through p. We claim that S = {p, q p } belongs to T(X, K X , 2), so that T(X, K X , 2) is non-empty. To prove the claim, consider that h 0 (2p + q p ) = 2. Then h 0 (2p + 2q p ) = h 0 (2S) ≥ 2, which proves the claim when g > 4, by Proposition 4.1. For g = 4, since h 0 (2S) ≥ 2, the divisor 2S is special by Riemann-Roch, and the claim follows again by Proposition 4.1. Definition 4.9. Given a linear series G on X which is a g 1 d , G is tamely ramified if there is a non-reduced divisor D in G in which all the points appear with coefficient ≤ 2. Proposition 4.10. Let X be a smooth curve of genus g ≥ 4, canonically embedded in P g−1 . Assume that X has a base point free pencil R ∈ Pic k (X) such that h 1 (R ⊗2 ) > 0, and let f : X → P 1 be the degree k morphism induced by |R|. ThenT(X, K X , k − 1) = ∅. Assume that the f is tamely ramified. Then T(X, K X , k − 1) = ∅. Then T(X, K X , j) = ∅, and the claim follows from Corollary 4.6. Embedded curves In this section we consider reduced and locally complete intersection curves X embedded in a projective space X ⊂ P r . Notation 5.1. We denote with N X the normal bundle of X. For brevity, we will denote with T(X, x) the Terracini locus of X with respect to the (non-necessarily complete) linear series of hyperplane divisors. For all integers g ≥ 0, r ≥ 3 and d ≥ r + g, we will denote with H(d, g, r) the set of all smooth and non-degenerate curves X ⊂ P r of degree d and genus g such that 1)). On the other hand, we will work mainly with curves in P 3 , and in this case we can drop the assumption by [3]. It is immediate, by Bezout formula, that T(X, x) is empty if x > d/2. So we analyze the case 2x ≤ d. Remark 5.2. 
Fix x with 2x < r and let X ⊂ P r be a non-degenerate irreducible curve such that T(X, x) = ∅. Then also T(X, x + 1) = ∅. Indeed if S ∈ T(X, x), then the scheme (2S, X) lies in a linear system of hyperplanes of dimension at least r − 2x + 1 ≥ 2. Thus, for p ∈ X general, the scheme (2(S ∪ {p}), X) lies in a linear system of hyperplanes of dimension at least r − 2x − 1 ≥ 0. Notation 5.3. Take X ⊂ P r . An arrow in P r is a non-reduced scheme of length 2. The set of all arrows in P r supported at p is closed in the Hilbert scheme, and it has dimension r − 1. Thus the set of all arrows in P r has dimension 2r − 1. Note that an arrow w with support p determines a line r w through p. If p ∈ X and X is smooth at p, then r w is tangent to X exactly when X contains the arrow w. In the study of Terracini loci (even not on curves) the paper [2] is very useful and we explain it in the following remark used in the proof of Proposition 5.6. Let Z be a general subscheme of X reg with s connected components of length e 1 , . . . , e s . Then we claim that the main result of [2] and its proof imply that: Namely the proof there yields that a general scheme in X (hence obviously curvilinear) which sits in no hypersurfaces of degree d imposes independent conditions to the linear system of hypersurfaces of degree d. Notice that the scheme Z is a Cartier divisor of X, and the integer dim W (−Z) is the codimension of the linear space Z in P r . For rational curves we can easily show the following. Proposition 5.5. Fix integers d ≥ r ≥ 3 and x with r ≤ 2x ≤ d. Then there is a smooth and non-degenerate rational curve X ⊂ P r such that T(X, x) = ∅. Proof. Fix a hyperplane H ⊂ P r . Let Z ⊂ H be a general union of x arrows and let Z ′ be a general set of d − 2x points. By [9, Theorem 1.6] there is a smooth and non-degenerate rational curve X ⊂ P r such that (Z ∪ Z ′ ) = X ∩ H. Let S ⊂ H be the reduction of Z. Since X is smooth and 2x ≥ r, then S ∈ T(X, x). Indeed, we have a characterization of rational normal curves in terms of Terracini loci. Proposition 5.6. Let r ≥ 3 be an odd integer and X ⊂ P r a smooth, connected and non-degenerate curve. Then X is a rational normal curve if and only if T(X, (r + 1)/2)) = ∅. Proof. Set x := (r + 1)/2. The " if " part is true, because if X is a rational normal curve each zero-dimensional scheme Z ⊂ X of degree r + 1 is linearly independent. Now assume T(X, x) = ∅. Set d := deg(X) and fix a general S ⊂ X of cardinality x−1, say S = {p 1 , . . . , p x−1 }. Let V be the linear span of the double scheme (2S, X). For all positive integers a 1 , . . . , a x−1 set V (a 1 , . . . , a x−1 ) := a 1 p 1 + · · ·+ a x−1 p x−1 . Note that V = V (2, . . . , 2). Since S is general in X, Remark 5.4 applied to the curve X gives dim V (a 1 , . . . , a x−1 ) = min{r, x − 2 + . , x − 1} such that a j = 3 and a i = 2 for all i = j, the scheme V ∩ X contains each p i with multiplicity 2. First assume (V ∩ X) red = S and take o ∈ (V ∩ X) red \ S. Since dim 2o ∪ V ≤ dim V + 1, S ∪ {o} ∈ T(X, x), a contradiction. Thus (V ∩ X) red = S. Since we proved that each p i appear with multiplicy 2 in the scheme-theoretic intersection V ∩ X, we have V ∩ X = (2S, X), which has degree 2x − 2 = r − 1. Let u denote the linear projection from V to P 1 . Since X is smooth, u |X\S extends to a morphism u ′ : X → P 1 , and the degree of u ′ is d − r + 1. The assumption T(X, x) = ∅ implies that u ′ has no ramification point, except possibly at the points of S. Fix a ∈ S, say a = p 1 . 
The point p 1 is a ramification point of u ′ only if V (4, 2, . . . , 2) is a hyperplane. This is false, because dim V (4, 2, . . . , 2) = r. Hence u ′ has no ramification points. This is possible only if d = r, hence X is a rational normal curve. Since in the rest of the paper we will often argue by induction, taking the smoothing of nodal, reducible curves, we need some preliminary results on normal bundles of reducible curves. Remark 5.7. Let X ⊂ P r be a reduced curve with only locally complete intersection singularities. N X is a vector bundle of rank (r − 1) on X and deg(N X ) = (r + 1) deg(X) + (r − 1) (1 − p a (X)). There is a map φ : T P r |X → N X which is surjective outside Sing(X). Consider the restriction to X of the Euler's sequence of T P r : (1) gives h 1 (T P r |X ) = 0. The map φ induces the following exact sequence of coherent sheaves on X: (2) gives h 1 (Im(φ)) = 0. If X is smooth, then N X = Im(φ) and hence h 1 (N X ) = 0. Now assume X singular. Consider the exact sequence Since N X /Im(φ) is supported by the finite set Sing(X), then h 1 (N X /Im(φ)) = 0. Thus (3) gives h 1 (N X ) = 0 even if the non-special curve is singular. If X is smooth and rational, then h 1 (O X ) = 0. As above we obtain h 1 (T P r (−1) |X ) = 0 and h 1 (N X (−1)) = 0. In particular we study the case of curves in P 3 . Lemma 5.8. Let C ⊂ P 3 be a reduced curve and let C ′ ⊂ P 3 be a smooth conic that meets C at i ≤ 3 points p 1 , . . . , p i ∈ C reg with C ∪ C ′ nodal at each p i . Let H be the plane spanned by C ′ . If i > 1, assume also that T p1 C H, T p2 C H and T p1 C ∩ T p2 C = ∅. 1) If i = 1 then N C∪C ′ |C ′ is the direct sum of two line bundles, one of degree 3 and one of degree 4. 2) If i = 2 then N C∪C ′ |C ′ is the direct sum of two line bundles, both of degree 4. 3) If i = 3, then N C∪C ′ |C ′ is the direct sum of two line bundles, one of degree 4 and one of degree 5. N C ′ is the direct sum of a degree 4 line bundle and a degree 2 line bundle. We prove 2) first. Set Y := T p1 C ∪ T p2 C ∪ C ′ . By [7, Corollary 3.2] the bundle N C∪C ′ |C ′ is obtained from N C ′ by making two positive elementary transformations (in the sense of [7, §2]), which depend uniquely on C ′ , p 1 , p 2 , and the lines A similar statement holds for the proof that h 1 (N C∪D ) = 0. Lemma 5.11. Let H ⊂ P 3 be a plane and let C ′ ⊂ H be a smooth conic. Fix 3 distinct points p 1 , p 2 , p 3 of C ′ and lines L 1 , L 2 , L 3 such that H ∩ L i = {p i } for all i. Let E (resp. F ) be the vector bundle obtained from N C ′ making positive elementary transformations at p 1 and p 2 (resp. p 1 , p 2 and p 3 ) with respect to L 1 and L 2 (resp. L 1 , L 2 and L 3 ). Then E is a direct sum of 2 line bundles of degree 4 and F is a direct sum of a line bundle of degree 5 and a line bundle of degree 4. Proof. Set X := C ′ ∪ L 1 ∪ L 2 and Y := X ∪ L 3 . Since H ∩ L i = {p i }, then L i is transversal to H. Thus X and Y are nodal at p 1 , p 2 and p 3 . Note that E ∼ = N X|C ′ and F ∼ = N Y |C ′ . First assume L 1 ∩ L 2 = ∅. Thus X is nodal with 3 nodes and arithmetic genus 1. Call M the plane containing L 1 ∪ L 2 . To prove that E ∼ = O C ′ (2) ⊕ O C ′ (2) it is sufficient to prove that X is the complete intersection of M ∪ H and a quadric. This is true, because h 0 (O X (2)) = 8, h 0 (O P 3 (2)) = 10 and hence h 0 (I X (2)) ≥ 2. Now assume L 1 ∩ L 2 = ∅. In this case X is contained neither in a reducible quadric nor in a quadric cone. Since h 0 (O X (2)) = 9, X is contained in a smooth quadric, Q. 
Call |O Q (1, 0)| the ruling of Q containing L 1 , and hence also containing L 2 . Since C ′ ∈ |O Q (1, 1)|, then X ∈ |O Q (3, 1)|. Since N Q ∼ = O Q (2), we have an exact sequence (5) 0 We have N X.Q ∼ = O X (3, 1) so that its restriction to C ′ has degree 4. Since (5) is an exact sequence of vector bundles, its restriction to C ′ is an exact sequence of vector bundles on C ′ ∼ = P 1 in which the leftmost and the rightmost terms are line bundles of degree 4. Since C ′ ∼ = P 1 , E is a direct sum of two line bundles of degree 4. The bundle F is obtained from E making a positive elementary transformation, and all rank 2 vector bundles on P 1 split. Hence F is a direct sum of a line bundle of degree 5 and a line bundle of degree 4. Now we are ready to prove some non-emptiness results in P 3 . Recall that H(d, g, 3) denotes the set of smooth and non-degenerate curves space curves X of degree d and genus g, such that h 1 (O X (1)) = 0. Proof. Fix a plane H ⊂ P 3 . We will find X ∈ H(d, g, 3) such that X is tangent to H at x points of H spanning H. We first dispose of the case x = ⌊d/2⌋ and d = g + 3 in steps (a) and (b), leaving the case x = ⌊d/2⌋ and d > g + 3 to step (c) and the case 3 ≤ x < ⌊d/2⌋ to step (d). (a) Assume d even and x = d/2. (a1) Assume d = 6. By Proposition 5.6 T(C, 2) = ∅ for each smooth curve C of genus 1 and degree 4. Hence by a change of coordinates we find a smooth curve C ⊂ P 3 of degree 4 and genus 1 which is tangent to H at 2 distinct points, say q 1 and q 2 . Since C is the complete intersection of 2 quadric surfaces, the normal bundle of C splits as N C ∼ = O C (2) ⊕ O C (2). Since C has genus 1, we get h 1 (N C (−1)) = 0. Fix a general q 3 ∈ H and let M ⊂ P 3 be a general plane containing q 3 . Since q 3 and M are general, we may assume that M is transversal to C, C ∩ M ∩ H = ∅, and C ∩ M spans M . Fix three distinct points {p 1 , p 2 , p 3 } ∈ C ∩ M which span M . There is a smooth conic C ′ containing {p 1 , p 2 , p 3 , q 3 }, tangent in q 3 to the line H ∩ M but not containing the fourth point p 4 of C ∩ M . Indeed, q 3 is general in H ∩ M , while there are only at most two conics passing through p 1 , p 2 , p 3 , p 4 and tangent to H ∩ M . Since M is transversal to C, the curve Y := C ∪ C ′ is nodal. By Lemma 5.8 the vector bundle N + C ′ (−1) is the direct sum of a line bundle of degree 2 and a line bundle of degree 3. Since line bundles of degree 2, 3 separate any set of three points in the smooth conic C ′ , the restriction map H 0 (C ′ , N + C ′ (−1)) → H 0 (N + C ′ (−1) |{p1,p2,p3} ) is surjective. Remark 5.10 gives h 1 (N Y (−1)) = 0, so that Y is smoothable by [7,Theorem 4.1]. By semicontinuity, a general member X 0 of a smoothing family of Y satisfies h 1 (N X0 (−1)) = 0. By construction T(Y, 3) = ∅. Yet, in order to conclude, we need more: we need to smooth Y in a family of space curves whose elements Y λ satisfies T(Y λ , 3) = ∅. In other words, we need: Claim 1: There are an affine smooth and connected curve ∆, o ∈ ∆ and a flat family {Y λ } λ∈∆ of space curves such that Y 0 = Y , the general element of the family is smooth, and T(Y λ , 3) = ∅ for all λ ∈ ∆. Proof of Claim 1: Set Z ′ := (2q 1 , C) ∪ (2q 2 , C), Z ′′ := (2q 3 , C ′ ) and Z := Z ′ ∪ Z ′′ . Note that Z ∩ C = Z ′ and Z ∩ C ′ = Z ′′ . Since q 1 , q 2 and q 3 are smooth points of Y , Z is a degree 6 Cartier divisor of Y . Thus N Y (−Z) is a rank 2 vector bundle on Y with deg(N Y (−Z)) = deg(N Y )−12. 
The vector space H 0 (N Y (−Z)) is the tangent space to the functor of deformations of Y inside P 3 in families of curves containing Z, while H 1 (N Y (−Z)) is an obstruction space of this functor [9, Th. 1.5]. To prove Claim 1 it is sufficient to find a smoothing family is a direct sum of degree 2 line bundles on C ′ ∼ = P 1 . Thus h 1 (N C ′ (−Z −p i )) = 0 for all i. In particular (with the terminology of [7] Thus h 1 (F ) = 0 for every vector bundle F on C obtained from N C (−Z − p 1 − p 2 − p 3 ) making finitely many positive elementary transformations. Consider the Mayer-Vietoris exact sequence = 0 (here we consider p 1 , p 2 , p 3 as points of the smooth curve C). The sequence Hence the surjectivity of φ gives a fortiori, in sequence (6), that h 1 (N Y (−Z)) = 0. Thus we may apply the proof of [7, Th. 4.1] since the deformation functor of Y which maintains Z fixed is unobstructed. We obtain a family {Y λ } λ∈∆ as in the statement, in which the general element contains Z, hence it is tangent to H at three points. (a2) Assume d ≥ 8 and that the theorem is true for the triples (d ′ , g ′ , x ′ ) such that x = d ′ /2, d ′ = g ′ + 3 and d ′ ≤ d − 2. Take a solution C for (d ′ , g ′ , x ′ ) = (d − 2, g − 2, x − 1) and use the proof of step (a1) with this C instead of an elliptic curve, as follows. By induction, there is a plane H which contains a set Z ′ of x − 1 arrows in C. So, we can continue the induction by taking a general element X in a family {Y λ } λ∈∆ giving a smoothing of Y and fixing Z. Notice that at any step X satisfies h 1 (N X (−Z)) = 0 by semicontinuity, since h 1 (N Y (−Z)) = 0. (c) Assume d > g + 3. If d − g − 3 is even, start with a curve of genus g and degree d ′ = g + 3, constructed as in step (a2) when d ′ is even, or constructed as in step (b) if d ′ is odd. Continue for (d − g − 3)/2 steps, by adding to the previously constructed curve C a smooth conic C ′ , tangent to H, with #(C ′ ∩ C) = 1 and C ′ ∪ C nodal. By Lemma 5.8 we always get h 1 (N C∪C ′ (−1)) = 0. After (d − g − 3)/2 steps we get the claimed curve. Assume that d − g − 3 is odd. When g = 0, start with a rational quartic, by using Proposition 5.6. When g ≥ 1, start with a curve C of genus g − 1 and degree d ′ = g + 2, constructed as in step (a2) when d ′ is even, or constructed as in step (b) if d ′ is odd. In the first step add to C a smooth conic C ′ , tangent to H, which meets the curve C at two points whose tangent lines t 1 , t 2 to C are disjoint and different from the tangent lines to C ′ . We may take such a conic C ′ in a plane which does not contain t 1 , t 2 . Then continue for (d − g − 4)/2 steps, by adding to the previously constructed curve C a smooth conic C ′ , tangent to H, which intersects C at a unique point and with C ∪ C ′ nodal. In any case, the assumptions of Lemma 5.8 hold for C ∪ C ′ . Thus by Lemma 5.8 we always get h 1 (N C∪C ′ (−1)) = 0, and we can continue the induction, by taking a smoothing of C ∪ C ′ which preserves the intersection with H. (d) Assume 3 ≤ x ≤ ⌊d/2⌋. We start with some curve Y such that T(Y, 2) = ∅ and h 1 (N Y (−1)) = 0. We take Y of genus 1 and degree 4 if d is even or genus 2 and degree 5 if g is odd. Then we continue as in steps (a), (b) and (c) above, except that in ⌊d/2⌋ − x steps we add a smooth conic C ′ not tangent to H. For space curves X ⊂ P 3 with h 1 (O X (1)) = 0 we prove the following result. Remark 5.14. 
In Theorem 5.13, for each fixed x we find g 0 such that for all g ≥ g 0 and all d ≥ 3 4 g + 3 there is a smooth curve X ⊂ P 3 of genus g and degree d with T(X, x) = ∅. The same is true for a slowing increasing function x(g) of g. Note that for a fixed x and for g ≫ x these curves X cover a range of degrees and genera larger that the Brill-Noether range d ≥ 3 4 g + 3. For the proof of Theorem 5.13, we need a series of preliminary lemmas. Lemma 5.15. Fix an integer e ∈ {1, 2, 3, 4}. Let C ⊂ P 3 be an integral and nondegenerate curve of degree d. If e = 4 assume d ≥ 4. Take a union Y ⊂ P 3 of finitely many curves such that C Y . Then there is a smooth conic C ′ such that #(C ∩ C ′ ) = e, C ∪ C ′ is nodal at each point of C ∩ C ′ , and C ′ ∩ Y = ∅. Proof. Take a general plane M ⊂ P 3 . The plane M is transversal to C, Y ∩ M is finite and Y ∩ C ∩ M = ∅. By the trisecant lemma no 3 of the d points of C ∩ M are collinear. Fix S ⊆ C ∩ M such #S = e. Since no 3 of the points of S are collinear, S is the scheme-theoretic base locus of the (5 − e)-dimensional linear space |I S (2)|. A general element of |I S (2)| is smooth. Thus there is a smooth C ′ ∈ |I S (2)| such that C ′ ∩ Y = ∅ and C ′ ∩ C = S. Since M is transversal to C and C ′ ⊂ M , then C ∪ C ′ is nodal at each point of S. Lemma 5.16. Let C ⊂ P 3 be an integral and non-degenerate curve of degree 4. For a general S ⊂ C of length 6 there is a rational normal curve T S ⊂ P 3 such that S = C ∩ T S and T S ∪ C is nodal. Moreover, if p a (C) = 1 then T S1 ∩ T S2 = ∅ for a general S 1 × S 2 ⊂ C × C such that both S 1 and S 2 have length 6. Proof. Let U denote the set of all A ⊂ P 3 such that #A = 6 and A is in linear general position. For every A ∈ U there is a unique rational normal curve T A containing A. Set U(C) := {S ∈ U | S ⊂ C}. Since C is integral and non-degenerate, U(C) is an integral quasi-projective variety of dimension 6. The set S is a general element of U(C). Since deg(C) = 3, then C = T S . We need to prove that S is equal to C ∩ T S (set-theoretically), that T ∪ C is nodal, and the last assertion, concerning a general S 1 × S 2 ⊂ C × C, when p a (C) = 1. If p a (C) = 1, the curve C is the complete intersection of 2 quadrics and (since it has at most one singular point and with embedding dimension 2) it is contained in a smooth quadric Q, say C ∈ |O Q (2, 2)|. Fix a general S ⊂ C of length 6. By the generality of S, since h 0 (O Q (1, 2)) = h 0 (O Q (2, 1)) = 6, we get h 0 (I S,Q (2, 1)) = h 0 (I S,Q (1, 2)) = 0. Thus T S Q. Bezout theorem gives S = T S ∩ Q as schemes. Thus T S ∩ C = S and T S ∪ C is nodal. If p a (C) = 0 then h 0 (I C (2)) = 1 and the unique quadric surface Q containing C is smooth [6, Ex. V.2.9], with either C ∈ |O Q (1, 3)| or C ∈ |O Q (3, 1)|. We conclude as in the case p a (C) = 1. Now we prove the last claim, for p a (C) = 1. It is sufficient to find S 1 , S 2 ∈ U(C) such that T S1 ∩ T S2 = ∅. The pencil |I C (2)| has only finitely many (i.e. 4) singular elements. Take smooth quadrics Q 1 , Q 2 ∈ |I S (2)| such that Q 1 = Q 2 . Note that C ∈ |O Qi (2, 2)|, i = 1, 2. Take a general T i ∈ |O Qi (2, 1)|. Each T i is a rational normal curve and deg(T i ∩ C) = 6. Bertini Theorem gives that S i := T i ∩ C has cardinality 6. Since S i ⊂ T i and T i is a rational normal curve, S i is in linear general position. Thus T i = T Si . Since T Si = C, we have T S1 Q 2 and T S2 Q 1 . Since T i ⊂ Q i , i = 1, 2, we get T S1 ∩ Q 2 = T S1 ∩ Q 1 ∩ Q 2 = T S1 ∩ C = S 1 . Since T S2 ⊂ Q 2 , then T S1 ∩ T S2 ⊆ T S1 ∩ Q 2 = S 1 . 
Since S 1 ∩ S 2 = ∅ and T S2 ∩ C = S 2 , we get T S1 ∩ T S2 = ∅. (1) T(X 0 , x) is non-empty, and there exists a plane H which contains a union of x arrows Z ⊂ X 0 with h 1 (N X0 (−Z)) = 0. (2) For a general choice of s subsets S 1 , . . . , S s ⊂ X 0 of cardinality 6, there are disjoint rational normal curves T 1 , . . . , T s such that for all i the union X 0 ∪ T i is nodal, and T i meets X 0 exactly at S i . Proof. We make induction on x ≥ 2. To deal with the case x = 2 (i.e. d = 8 and g = 5 − q) we start with a smooth elliptic quartic curve C. We know that H 1 (N C (−1)) = 0 and T(C, 2) is non-empty. Since T(C, 2) = ∅, there is a subset Z ⊂ C of cardinality 2 such that the tangent lines to C at the points of Z lie in a plane H. By Lemma 5.16, for a general choice of s subsets S 1 , . . . , S s ⊂ C of cardinality 6 we find disjoint rational normal curves T 1 , . . . , T s such that C ∪ T i is nodal and T i ∩ C = S i for all i. Take a general plane H ′ ⊂ P 3 . Thus H ′ is transversal to C and Z ∩ H ′ = ∅. Fix e ∈ {0, 1, 2, 3} and W ⊂ H ′ ∩C such that #W = 4−e. Take a general smooth conic C ′ ⊂ H ′ containing W . Since #W ≤ 4 and C ′ is general, shows that the restriction map ρ : surjects, so that the exact sequence proves that h 1 (N Y (−Z)) = 0. If #W ≤ 3, then the surjectivity of ρ only use that N C ′ is a direct sum of 2 line bundles of degree ≥ 2 and that N Y (−Z) |C ′ is obtained from N C ′ making positive elementary transformations. By semicontinuity, we can find a smoothing D of C ∪ C ′ which preserves Z. Thus we get curves D in H(6, g, 3), for g = #W ∈ {1, 2, 3, 4}, such that T(D, 2) = ∅, h 1 (N D (−Z)) = 0, and such that for general subsets S 1 , . . . , S s ⊂ X 0 of cardinality 6 we have the disjoint rational normal curves T 1 , . . . , T s , as in the statement. Then consider a conic C ′′ which meets D in one or in two general points. As above, in both cases Y ′ = C ′′ ∪ D satisfies h 1 (N Y ′ (−Z)) = 0, and we can find a general smoothing X 0 of Y ′ which preserves Z. X 0 has degree 8, and for its genus g we can obtain any number between 1 and 5, because #W is any integer between 1 and 4. Moreover, h 1 (N X0 (−Z)) = 0, T(X 0 , 2) = ∅, and by semicontinuity X 0 satisfies condition 2 of the statement. This concludes the case x = 2. Assume we constructed the required curve X ∈ H(2(x − 1) + 4, 2(x − 1) + 1 − q, 3). There exists a subscheme Z ⊂ X formed by x − 1 arrows which is contained in a plane H, and moreover h 1 (N X (−Z)) = 0. Take a general plane M which is transversal to both H and X, and misses Z. As in the proof of (a1) of Theorem 5.12, for p 1 , p 2 , p 3 ∈ M ∩X there exists a smooth conic C ′ passing through p 1 , p 2 , p 3 , tangent to H at p 4 / ∈ X, which misses the remaining points of M ∩ X. Take Y = X ∪ C ′ , and define Z 0 as the union of Z and the arrow at p 4 tangent to C ′ . Notice that Z 0 lies in H. As in the proof of Claim 1, the vanishing of H 1 (N X (−Z)) implies the surjectivity of the map H 0 (N Y (−Z 0 ) |X ) → H 0 (N Y (−Z 0 ) |{p1,p2,p3} ), and the analogue of sequence (6) shows that h 1 (N Y (−Z 0 )) = 0. Thus there exists a smoothing X 0 ∈ H(2x + 4, 2x + 1 − q, 3) of Y which preserves Z 0 . The existence of Z 0 ⊂ X 0 implies T(X 0 , x) = ∅. By semicontinuity h 1 (N X0 (−Z 0 )) = 0, and X 0 satisfies condition 2 of the statement. Lemma 5.19. Let C ⊂ P 3 be an integral locally complete intersection curve and Z ⊂ C reg a zero-dimensional scheme. Assume h 1 (N C (−Z)) = 0. 
Take S ⊂ C reg such that #S = 6, S ∩ Z = ∅, S is in linearly general position and the only rational normal curve T containing S meets C only at S and with C ∪ T nodal at each point of S. Set Y := C ∪ T . Then h 1 (N Y (−Z)) = 0. Proof. By assumption S is the scheme-theoretic intersection of C and T . Thus we have the following Mayer-Vietoris exact sequence Proof of Theorem 5.13. Recall that g = 2x + 1 + 5s − q, where q ∈ {0, . . . , 4}. By assumption d ≥ 2x + 4 + 3s. We first dispose of the case d = 2x + 4 + 3s. The case s = 0 is covered by Lemma 5.17. Consider a curve X 0 ∈ H(4x + 2, 2x + 1, 3) as in the statement of Lemma 5.17. Since T(X 0 , x) is non-empty, we can take a set Z of x coplanar arrows in X 0 supported at x points, with h 1 (N X0 (−Z)) = 0. Moreover, for a choice of s general subsets S 1 , . . . , S s ⊂ X 0 of cardinality 6 there are disjoint rational normal curves T 1 , . . . , T s such that for all i the union X 0 ∪ T i is nodal, and T i meets X 0 exactly at S i . Define Y = X 0 ∪ T 1 · · · ∪ T s . Then h 1 (N Y (−Z)) = 0. Arguing by induction on s one finds a smoothing X s of Y which preserves Z. The curve X s belongs to H(2x + 4 + 3s, 1 + 2x + 5s − q, 3). The existence of Z provides that T(X s , x) = ∅. By semicontinuity we also know that h 1 (N Xs (−Z)) = 0. Finally assume d = (2x + 4 + 3s) + t, for some t > 0. Then we obtain the required curve in H(d, g, 3) by induction on t. We start with X s,0 = X s . Then we construct X s,t+1 from X s,t by adding a line ℓ which meets X s,t at one general point, so that X s,t ∪ ℓ is nodal, and taking a smooth deformation which fixes Z. The condition h 1 (O X (2)) = 0 follows by applying several times Lemma 5.18 applied to smooth rational curves of degree ≤ 3.
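As a quick consistency check on the numerical bookkeeping in this last step (a routine computation, stated here only as a convenience for the reader: it uses the standard formula for the arithmetic genus of a nodal union and the fact that degree and arithmetic genus are preserved under a flat smoothing, taking deg X_0 = 2x + 4 and p_a(X_0) = 2x + 1 - q for the curve constructed in Lemma 5.17), for Y = X_0 ∪ T_1 ∪ · · · ∪ T_s with each T_i a rational normal cubic meeting X_0 at 6 nodes one has
\begin{align*}
\deg Y &= \deg X_0 + \sum_{i=1}^{s}\deg T_i = (2x+4)+3s,\\
p_a(Y) &= p_a(X_0) + \sum_{i=1}^{s}\bigl(p_a(T_i)+\#(X_0\cap T_i)-1\bigr) = (2x+1-q)+s(0+6-1) = 2x+1+5s-q,
\end{align*}
so the smoothing X_s indeed lies in H(2x+4+3s, 2x+1+5s-q, 3), as claimed.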
2023-05-04T01:15:49.106Z
2023-05-03T00:00:00.000
{ "year": 2023, "sha1": "2fa3a8d6ba09d510037fc5bd35bb033af3875df6", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "2fa3a8d6ba09d510037fc5bd35bb033af3875df6", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
229534210
pes2o/s2orc
v3-fos-license
A Bursts Contention Avoidance Scheme Based on Streamline Effect Awareness and Limited Intermediate Node Buffering in the Core Network
Abstract: In an Optical Burst Switched (OBS) network, data packets sourced from peripheral networks are assembled into huge data bursts. For each assembled data burst, an associated control signal in the form of a burst control packet (BCP) is generated and scheduled at an offset time ahead of the data burst. The offset timing allows for the pre-configuration of the required resources at all subsequent intermediate nodes prior to the actual data burst's arrival. In that way, the data burst will fly by each node and hence there is no requirement for temporary buffering at intermediate nodes. An operational requirement of an OBS network is that it be loss-less, since only then can a consistent and acceptable quality of service (QoS) be guaranteed for all the applications and services it serves as a platform for. Losses in such a network are mainly caused by improper provisioning and dimensioning of resources, leading to contentions among bursts and consequently the discarding of some of the contending data bursts. Key to both provisioning and proper dimensioning of the available resources in an optimized way is the implementation of an effective routing and wavelength assignment (RWA) scheme that will seclude any data losses due to contention occurrences.
On the basis of the effects of the streamline effect (SLE), that is, effectively secluding primary contention among flows (streams) in the network, we propose in this paper a limited intermediate buffering that couples with SLE aware prioritized RWA (LIB-PRWA) scheme that combats secondary contention as well. The scheme makes routing decisions such as selection of primary and deflection routes based on current resources states in the candidate paths. A performance comparison of the proposed scheme is carried out and simulation results demonstrate its comparative abilities to effectively reduce losses as well as maintaining both high network resources utilization as well as QoS. Keywords: Optical burst switching, streamline effect, congestion, contention, routing and wavelength assignment. I. INTRODUCTION The emergency of Internet of Things (IoT) enabled networks has resulted in a surge of various applications and services and generating massive amounts of traffic globally. This is necessitating the design and deployment of an all optical transport network infrastructure to serve as the core backbone network for the resultant diverse communications services. Such an infrastructure provides connectivity to millions of administrative, commercial, industrial as well as residential centers. The heterogeneous nature of the large volumes of traffic generated by various applications and services ideally requires an all-optical backbone network infrastructure to accommodate it. Such a network must be continuously adaptable to the changing nature of the traffic as well as its spontaneous growth with time. In so doing, it must ensure high end-to-end QoS, availability as well as provision adaptable controllability in cooperation with peripheral (service) layer networks. Utilization of dense wavelength division multiplexing (DWDM) in optical fibers has resulted in transmission bearers achieving speeds in the order of Terabits per second. However, current router switches have not been able to solve the speed mismatches between the high DWDM transmission speeds versus their low switching capabilities. Optical burst switching (OBS) is being rolled out to narrow the switching versus transmission speed gaps in current and future generation optical backbone networks. The OBS approach is based on aggregating and assembling data packets at ingress nodes into optical data bursts. A control packet (CP) is separately generated to carry control data for each assembled burst on a separate wavelength channel. It is transmitted ahead of the actual burst and will thus reach the next intermediate node within some preset offset-time [1], [2]. The magnitude of this timing is carefully set so that it is sufficient to allow for the CP's processing by a CP controller at all intermediate nodes. This also allows for the node's switch fabric pre-configuring as well as channel reservation on its output link prior to the actual arrival of the optical data burst. This prior reservation of resources eradicates the need for optical burst buffering during the switching process, otherwise this would escalate network design and operational costs. The optical burst is then switched through and its reserved resources freed and made available for other lightpath connection requests. The OBS switching paradigm is prone to both congestion and contentions. Both reactive and proactive measures may be employed in the network to try to avoid contention. 
Such measures include backpressure routing, network segmentation, as well as prioritizing the network traffic. However, the existence of any congested links may drastically aggravate network throughput and consequently its overall performance. Notable QoS metrics that degrade as a consequence of congestion occurrences are burst blocking rate and end-to-end latency. Burst contentions occurring in the core nodes may lead to some data bursts being deleted as a resolution measure. Overall, given the limited buffering at the core nodes, it is necessary that contention and congestion avoidance be jointly implemented in order to improve network throughput, thus in the process guaranteeing consistent acceptable QoS for the various applications and services. Burst assembling approaches at ingress nodes, RWA, contention/congestion resolution are key to minimizing both contention as well as congestion. II. AN OBS NODE In order to transmit data bursts in an OBS network, lightpath connections are setup between desired source and destination pairs. A typical lightpath connection request is established through a series of lightpath connections from source to destination. These will now accommodate both control data as well as the data bursts. At each optical node, functionalities such as multiplexing and demultiplexing of channels as well as wavelength routing should be supported. Fig. 1. An Example OBS core node As the data bursts are switched to intended output ports at network nodes, contentions may occur. It is therefore necessary to provision contention resolution mechanisms that will ensure burst loss minimization. We therefore assume that the architecture of each node should be designed in conformity with the operations of the priority-based intermediate buffering and routing and wavelength assignment (RWA) scheme (LIB-PRWA) which we shall describe in due course. The logical architecture of such a node is shown in Fig. 1. Typically, the edge-core joint node example is a composite edge and core nodes. Such an architecture can perform bursts assembly utilizing edge node functionalities and also forward transit bursts to intermediate nodes using core node functionalities. Arriving packets from periphery user edge networks are classified according to their destination address as well as traffic class before being forwarded to assembly queues. The node uses the segmented burst assembly algorithm as well as adjustable offset timing [1], [2]. The segmented data bursts are ultimately passed on to the available burst transmission queues (BTQs) for temporary storage, while awaiting sche-duling. Finally, they will be passed on to a scheduler for scheduling on available outgoing channels. Prior to scheduling, a CP is sent ahead at an offset time [2]. The same node can also handle transit data burst connections. The associated BCP of a transit data burst connection is processed in the routing module, normally availed at each node. If the received BCP is signaling a local terminating connection, then provision will be made to forward the data burst to one of the disassembly modules for its disassembling into individual data packets. However, in case of a transit connection, both the CP and associated data burst are rescheduled to the desired next node. This is subject to the necessary resources, such as the original wavelength being available. 
However, if contention occurs, the received data burst may have to be reticulated via the feedback unit until such time that the resources (the desired wavelength) become available; otherwise it is discarded. Though not illustrated in the generic node architecture of Fig. 1, buffer provisioning is necessary at the assembly and burst transmission queues as well as at the schedulers. Fig. 2. An Example Queuing Model of an OBS Node Whereas buffer provisioning is nominally restricted to ingress nodes, with none in the core nodes, in practice most nodes are composite, i.e. they incorporate capabilities of originating, transiting as well as terminating lightpath connections. A generic queuing model of the OBS node is provided in Fig. 2. Three types of connections, namely local, transit and feedback, are served. The buffering provisioning is implemented in the form of fiber delay lines (FDLs) and flash (electronic) memory. Both can only render deterministic as well as limited delay, even though it is often assumed that any burst losses are only due to wavelength contention rather than buffer overflows. Whereas traditionally one could assume an infinite-buffer or a pure-loss queuing model (e.g. M/M/k/∞ or M/M/k/k) when evaluating such a link, it should however be noted that the number of input flows (streams) is limited and, as such, the overall burst loss probability at an OBS link is actually lower [3]. Partly, this is because bursts within one input lightpath connection (stream) are often streamlined and only inter-stream contentions happen at the link. This is referred to as the streamline effect. III. RELATED WORK The task of contention minimization in OBS switched backbone networks is accomplished by proper dimensioning of necessary and available resources at wavelength assignment, link and path levels. The key constraint is that more than one data burst cannot be assigned the same wavelength concurrently on the same link. At wavelength assignment level, various schemes such as random wavelength assignment, first-fit (FF), minimum product, maximum sum, best-fit least loaded, least utilized, most frequently used and relative capacity loss have been explored [4]. The FF scheme generally performs relatively better in terms of burst loss probability and fairness. Furthermore, it has low computational overhead and complexity. To maximize the number of simultaneous end-to-end lightpath connections, wavelength reassignment algorithms using minimum overlap and reconfiguration techniques have been suggested [5]. However, the suggested techniques only slightly reduce the blocking probabilities. The priority-based FF offline wavelength assignment scheme proposed in [6] is geared towards maximizing the number of simultaneous connections while keeping burst losses low. With this scheme, the wavelengths to be utilized for the connection requests are prioritized according to their estimated burst loss probabilities. The priority-based FF approach requires a longer setup time as it requires extra processing time to further estimate the loss probabilities on each selected lightpath connection. At link and path levels, it is desirable that the shortest light path(s) from ingress to egress node be utilized, subject to constraints such as traffic load, congestion as well as wavelength assignment. As suggested in [7], efficient routing can be partly achieved by ensuring that path computation is optimized as much as possible.
Examples include the Dijkstra algorithm-based routing protocols such as the Open Shortest Path First (OSPF) and the Intermediate System to Intermediate System (ISIS). Whereas they always thrive to find an optimal path for each ingress to egress node pair, they however cause the same shortest links to become congested as well as be prone to contentions. With respect to the ingress-node destination pair, the longer paths remain underutilized and overall there is traffic imbalance in the network. In order to counter this, authors in [8] propose a distributed Path Computation Element (PCE) that enables routing protocols to efficiently utilize all available network links. PCE also applies software-defined networking (SDN) paradigms to separate signaling and routing paths, thus giving more network control to operators and in that way contentions are reduced overall. An algorithm called the Self-Tuned Adaptive Routing (STAR) [9], was further incorporated to enhance traffic balancing as well prevent links from being overwhelmed. A dynamic contention as well as congestion aware scheme that seeks to reduce blocking probabilities as well as boosting utilization by symmetrically distributing network traffic over all active links was proposed in [10]. Finally, in [11], the researchers proposed and investigated a per-link congestion control-based scheme that seeks to balance available network resources allocation by utilizing present and forecast demands of lightpath requests statistics. In essence, practical networks have a regularized topology and lightpath connection requests are generally random in nature. Given a fixed amount of resources (link, wavelengths, paths, as well as constraints), an increase in the traffic load results in the re-duction of the number of idle resources per link and hence this will lead to both contention as well as blockings. We propose a priority-based limited intermediate buffering and streamline effect aware prioritized routing and wavelength assignment (RWA) scheme (LIB-PRWA) to combat the problem of contention occurrences. The approach relies on prioritized grooming of local and transit lightpath connection requests. This is followed by prioritizing wavelengths according to their past performances in terms of contention occurrences on each. Finally, it assigns the wavelength to the various connection requests by further taking into consideration other resources states (such as congestion, and current traffic loads) in primary paths. Summarily, the paper's contributions are as follows: a) We introduce a burst grooming algorithm for mixing of transit and local data bursts at core nodes. As discussed later, the grooming helps in minimizing contentions. b) We propose and discuss a limited intermediate buffering and RWA (LIB-PRWA) based scheme in which contending bursts may be buffered at a core node depending on their residual hop count. As such, it will be shown that this helps to improve the fairness in terms of drop rate of different hop-count bursts. The rest of the paper is organized as follows: A short discussion on streamline effect aware RWA as well as constraints is provided in section IV. The proposed scheme as well as its key elements such as Priority based RWA, Traffic Grooming and a limited buffering node architecture model are narrated in section V. The scheme's performance analysis is provided in section VI, thereafter section VII concludes the paper. IV. 
STREAMLINE EFFECT AWARE RWA The extent to which the streamline effect affects the overall burst loss ratio in the network, for individual as well as aggregated flows, has been explored in various works, e.g. [12]. Overall, at an arbitrary node, the burst loss ratio of all data bursts constituting a stream on an individual end-to-end lightpath connection between a source (s) and destination (d) pair is the same, and contentions are rather among the various streams on the link. It has also been shown that both the burst loss ratio as well as contention become lower as the traffic arrival rate of a flow increases. On the basis of the aforesaid, for a given OBS network G(N, L) comprising N nodes and a total of L links (fibers), the objective would be to either maximize the simultaneously supported network traffic D(s,d), or minimize the unsupported traffic, for each node pair (s, d). The main objective of an SLE aware RWA scheme would be to maximize the supported traffic, subject to the available supporting network resources. Equivalently, the objective function can be stated in terms of minimizing the unsupported traffic. SLE aware RWA will thus strive to groom as much traffic as possible within a single wavelength of capacity B. The above two formulations assume that each lightpath request D(s,d) is served on a single flow path as well as a single wavelength. The following constraints will ensure the SLE. The last two equations are also an indicator of wavelength connectivity (continuity) at each node, hence we have the corresponding variables. In the last five sets of equations, all the variables are indexed with respect to the wavelength λ. In order to reduce the number of variables required to solve each set of SLE aware RWA constraints, a decomposition model approach can be used. The model partly utilizes parameters and variables obtained using the column generation model approach, which itself relies on a set of dynamically generated routing and provisioning network configurations. Each SLE compliant configuration carries a fraction of the D(s,d) traffic for a designated source and destination node pair. Specifically, the SLE-RWA decomposition model relies on key parameters and variables such as z_c, a variable denoting the number of wavelengths for which configuration c is selected; we can then rewrite equation (1) accordingly. V. PROPOSED SCHEME Before a data burst is dispatched, resources have to be provisioned for it. This involves determining the least cost path to the destination node on which the lightpath connection will be established, and assigning a wavelength to it. It is possible to avoid contention occurrences at the next and subsequent nodes by either assigning different links or different wavelengths to concurrent bursts originating from the same node. However, at the next and subsequent nodes, each lightpath connection (data burst) is likely to merge with other transit connections. In so doing, on any link, different wavelengths must be assigned to each of the bursts to avoid possible contentions, should they partially or wholly overlap in time. The SLE-aware RWA's goal is to maximize the number of simultaneous lightpath connection establishments to various source-destination node pairs within the network subject to these constraints. The contention resolution mechanisms at nodes must not escalate network costs, degrade performance (due to losses), or worsen contention and other network performance metrics in certain sections of the network.
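For orientation only, a generic form of the maximization goal just described, stated as an illustrative sketch with assumed notation (candidate path sets P_{s,d} and binary variables y) rather than as the exact formulation and constraint set used in this paper, reads:
\begin{align*}
\max\quad & \sum_{(s,d)}\;\sum_{p\in P_{s,d}}\;\sum_{\lambda=1}^{W} D_{s,d}\,y^{\lambda}_{s,d,p}\\
\text{s.t.}\quad & \sum_{p\in P_{s,d}}\;\sum_{\lambda=1}^{W} y^{\lambda}_{s,d,p}\le 1\quad\text{for every pair }(s,d),\\
& \sum_{(s,d)}\;\sum_{\substack{p\in P_{s,d}\\ \ell\in p}} y^{\lambda}_{s,d,p}\le 1\quad\text{for every link }\ell\text{ and wavelength }\lambda,\\
& y^{\lambda}_{s,d,p}\in\{0,1\},
\end{align*}
where indexing each request by a single wavelength λ along its whole path encodes the wavelength continuity requirement, and the second constraint is the per-link wavelength clash constraint.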
In certain instances, a data burst finds itself being discarded when it is only a few hops from the destination node, and this would be quite wasteful of resources. The proposed scheme involves enforcing a few measures such as traffic grooming at nodes, selection of both shortest possible paths to destination followed by the evaluation of their current resources states. The selected routes will be prioritized according to the frequency of contention occurrences as well as current network resources metrics. Limited buffering to contending bursts in the form of a feedback unit incoporated in each core node is also implemented. Fig. 4. Proposed scheme's concept As illustrated in Fig. 4, lightpath connection requests are aggregated and then groomed according to priority (low or high priority). After grooming, the LIB-PRWA scheme will choose routes (including deflection routes) based on current network state as well as frequency of contention occurrences. In the process, contentions can also be resolved by reticulating one or more of the contending data bursts via the incorporated feedback unit. The various steps are summarily discussed next: A. Aggregation The SLE aware RWA is one of the key components of the proposed scheme responsible for aggregating traffic both at burst as well as lightpath levels at the input ports. The traffic includes that generated by local (originating), transit as well as feedback (rearticulated) bursts. The aggregation utilizes the available aggregation buffers in serving mostly incoming local and feedback traffic. The traffic (bursts) are placed in each aggregation queue according to destination, priority as well as distance. This means bursts destined for a common destination node are served in the same queue. Feedback as well as other incoming high priority bursts will always have precedence over the rest. As part of the SLE awareness, bursts destined for shorter hops are accommodated ahead of those traversing longer distances. Fig. 5. An example of resolving secondary contention Time wise, this implies that when the aggregated data burst arrives at the next node, sections destined for shorter distances (including this node) are disaggregated and the now vacant window made available for new traffic aggregation as well as accommodation of any feedback traffic at this node. Overall, the process generally is designed to ensure the seclusion of any overlapping with all other incoming transit traffic bursts. In practice, secondary contention may still occur. An example is when the data burst aggregation is already planned and near completion, and the new overlapping aggregated data burst's associated BCP arrives. This normally will happen in cases where the incoming aggregated burst is very close to its destination, hence the offset time is not enough such that the aggregated burst arrives almost at the same time as the BCP. As a remedy, an adjustable scheduling mechanism that has an allowance for aggregation/ unbuffering delaying is implemented. This is illustrated in Fig. 5 B.Grooming Primarily, the purpose of grooming the connection requests is to improve on network utilization. The groomed connection requests are further prioritized so that precedence is given to requests with relatively higher priorities. In that way, more requests are likely to be successfully established simultaneously in the process. 
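As a rough illustration of the grooming and prioritization step described above, the following simplified sketch (hypothetical field and function names, in Python; it is not the authors' implementation) buckets incoming requests per destination and orders each queue so that feedback and high-priority traffic take precedence, with shorter residual hop counts served first within a class.

from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Request:
    destination: str
    priority: int          # 0 = high, 1 = low (assumed convention)
    residual_hops: int     # remaining hop count to the destination
    is_feedback: bool = False

def groom(requests):
    """Group connection requests per destination, then sort each queue so
    feedback bursts come first, then high-priority bursts, and within the
    same class those closer to their destination are served first."""
    queues = defaultdict(list)
    for r in requests:
        queues[r.destination].append(r)
    for q in queues.values():
        q.sort(key=lambda r: (not r.is_feedback, r.priority, r.residual_hops))
    return queues

Served in this order, short-haul and feedback traffic is drained early, which is consistent with the aggregation policy described for the proposed scheme.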
Contentions as well as blocking probabilities are drastically reduced as a result of the grooming and wavelength prioritization facilitated by the proposed LIB-PRWA scheme. A summary of a priority grooming algorithm is as follows [13], [14]. The path congestion state on the primary path is determined from: The proposed scheme will always give preference to paths with high connection establishment success likelihood, i.e paths in which congestion likelihood is at a minimal ( , ( min( )). The same applies to links. In weighted terms, congestion levels at any given time t can be computed from; Assuming link blocking probabilities to be independent, then at any time, the overall blocking probability is: ii) Wavelength utilization: With regards to wavelength utilization, it is generally noted that data bursts routed on paths and links that are least used are not likely to encounter any contention. Furthermore, in the unlikely event that contention occurs along the path, the limited available contention resolution mechanisms will suffice to prevent any burst discarding. At any given time, the utilization of a link is determined from [14]. iv) Intermediate Buffering: If a channel scheduler is unable to find a free desired wavelength on the next outgoing link, the data burst is discarded. In the proposed scheme, we assume that each core node incorporates an intermediate buffering provisioning to cater for those data bursts that are nearing their respective destinations. The buffering is implemented in the form of a feedback unit that incorporates FDLs at each node. Typically, data bursts that have traversed half the network's radius ) ( ,d s  are eligible for intermediate buffering. We define the network's radius as half the maximum hop count between the longest of the shortest paths possible in a given network. As argued in various literatures, e.g. in [15], lightpath connections serving a source-node destination pair spanning long hop counts are quite likely to encounter contentions and that their discarding ( as a contention resolution measure) may adversely affect the overall network throughput. Note that the discarded bursts would have already utilized significant amounts of network resources. Algorithm III: LIB-PRWA initialize input: acquire sets of new and transit connection requests from CP processing module. output: sets of lightpath connection requests; These are classified as low or high priority. Step I: acquire network metrics: congestion level, contention frequency, utilization and search for K shortest paths search for set of shortest paths, Step II: Serve all requests according to priority. Step Step IV: drop any fails end D. Limited Buffering at Nodes The generic architectural core node block diagram of Fig. 1 incorporates a feedback unit that facilitates limited buffering. Its functional queuing model equivalent as illustrated in Fig. 6 would be two nodes A and B representing the core node and its incorporated feedback unit respectively. Fig. 6. Model of the proposed feedback unit providing limited buffering Node A comprises an n n  optical switching fabric, where n -is the number of optical links at the input and output respectively. Node B represents the incorporated feedback unit that provides limited buffering and generates a traffic load  representing the contending burst that were looped back. As illustrated, node A prior to grooming, provides two ports A and B . 
In case of contention, one or more of the contending bursts from both ports A and B are looped back via the feedback unit. The probability of sending an arriving burst to port A is k, whereas that of sending it to port B is 1 − k. In order to determine the overall node blocking probability P_N (i.e. taking into account the feedback unit) we proceed as follows. We define the probability that port A is busy and the probability that port B is busy. The probability (B_3) of bursts existing in the delay of port B depends on T, where T denotes the momentary delay in the feedback unit, and a burst will find port B busy with a corresponding probability. Since the overall performance of any network is quantified taking into account factors such as the utilization (U), bandwidth (B) as well as link rate (R), we can thus define a throughput characterization factor, from which the blocking probability of the node (excluding the feedback unit) follows. The overall joint blocking probability of the node with the feedback unit can be determined by first assuming that all bursts are of equal length, that the burst arrival process at all inputs follows a Poisson distribution, and that the node can handle a maximum of B_N bursts at any given time. In this case we first determine the average number of bursts at the node as well as the average waiting time in the feedback unit. According to [16], as long as the service discipline at the node is work conserving, the fraction of bursts that are blocked is independent of the service discipline. VI. PERFORMANCE EVALUATION We commence this section by briefly evaluating the SLE-aware RWA's performance as discussed in section IV. Performance measures of interest will include burst buffering probabilities, secondary contention, as well as buffering and access delays. This will be followed by a direct performance evaluation of the overall LIB-PRWA scheme in two separate cases: when intermediate buffering is provisioned and when it is not. A. SLE Aware Aggregation Performance As already discussed in previous sections, SLE aware RWA aggregates bursts from various flows in such a manner as to reduce secondary (inter-stream) contention. The buffering, implemented in the form of feedback units, avoids burst losses, although this is at the expense of extra delays experienced by each buffered flow. The evaluation is carried out on a multi-node network comprising 11 nodes interconnected by 26 bidirectional links. The distances between nodes are given in km. Each node incorporates a feedback unit that can provide varying delays for bursts as desired. The network is implemented in OMNeT++ (version 5.4). The platform also includes OBS modules that implement both ingress and egress nodes. The ingress nodes generate and supply constant size data packets whose payloads are fixed at 100 kB. Their inter-arrival times follow a Poisson distribution. The data packets create streams that feed to aggregation queues for the generation of data bursts. The Just Enough Time (JET) principle is chosen as the signaling protocol among the various nodes, whereas burst assembling utilizes the in-built LAUC-VF algorithm. Each wavelength operates at either 10 or 100 Gbps. The traffic load intensity is varied from zero to 1 and is calculated from S_bursts(t), the aggregate size of bursts sent throughout the network, together with B_λ, a single wavelength's capacity, W, the number of wavelengths in a single link, and L, the number of links in the network.
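Only the quantities entering this load calculation are named in the text, so the following small sketch shows one plausible normalization (an assumption, not necessarily the authors' exact expression): the aggregate burst volume observed in a window is divided by the total transport capacity of the network over that window.

def offered_load(s_bursts_bits, b_lambda_bps, W, L, window_s):
    """Normalized traffic load in [0, 1] (illustrative assumption): bits actually
    sent, divided by the bits the whole network could carry in the same window
    (per-wavelength rate x wavelengths per link x links x window length)."""
    capacity_bits = b_lambda_bps * W * L * window_s
    return s_bursts_bits / capacity_bits

# Example: 2.6e12 bits sent over 26 links with 8 wavelengths at 10 Gbps in a 5 s window
# offered_load(2.6e12, 10e9, 8, 26, 5.0) -> 0.25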
In general, it is noted that a burst flow has a high likelihood of being buffered when it is traversing a relatively long hop distance. As the overall traffic increases so is the more instances of secondary contentions which can only be resolved by temporarily buffering the affected flow (see Fig. 5, for illustration purposes) till such time that desired aggregation can take place. As traffic load increases, so will be the number of secondary contentions also increase and hence more buffering likelihoods. When operating the links and paths at 100GBps, the number of flows are likely to correspondingly increase and hence the number of potential secondary contentions that would require buffering of the individual flows. Fig. 9. Access delay times Variation of access delay times for awaiting data bursts along transit nodes are plotted in Fig. 9. It is noted that at low network traffic loads, as expected, access delay times are quite small since there is lots of voids in transiting flows affording aggregation to take place. However, as the traffic load peaks above (50%) access delay times significantly increase for paths/links operating at low speeds. This is because in this case, all transiting flows tend to be filled up and hence not much voids are available to facilitate aggregation of awaiting bursts at intermediate nodes. Fig. 10. Secondary contention ratio Secondary contention ratios are plotted as a function of traffic load when operating the networks at 10 GBps and 100 GBps respectively. Generally, the magnitude of such contention flows is an indicator of the extent of aggregation at nodes since SLE aware RWA tends to avoid primary contention. As can be observed for both speeds, when traffic is increased the algorithm uses more wavelengths hence the latter's more efficient use and thus leading to higher link utilization throughout the network. Fig. 11. Nodal buffering delays One of the distinct features of SLE aware RWA is that of minimizing nodal buffering delays at all ingress nodes. As can be observed from Fig. 12, when operating at 100 GBps, minimum buffering delays are incurred at both low and high network traffic loads . This is because at low traffic loads, the voids will always be available for the aggregating of awaiting bursts at the nodal nodes on transit flows. Operating the network at higher speeds will mean more wavelengths as well as voids are available and hence the nodal buffering delays are quite low as well. A. LIB-PRWA without Intermediate Buffering In this subsection, we carry out a performance comparison of our proposed scheme versus the already existing routing ones. We assume no intermediate buffering, i.e. the incor-porated feedback units on each core node are assumed offline. , 32 and 96 in Fig. 13. As can be observed, the blocking performance in the case of random RWA is more or else identical when the number of wavelengths is varied . However, the proposed scheme's performance is relatively significant as the number of wavelengths per link is increased. Overall, the proposed LIB-PRWA approach enhances network performance by reducing blocking and at the same time increasing the throughput. As expected, random RWA algorithms' performances for varying wavelength capacities are identical. On the other hand, the LIB-PRWA algorithm outperforms the random RWA quite distinctively. It is also noted that for low values of W , the limited resources rather dictate blocking probability and not the wavelength assignment approaches implemented. 
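The qualitative effect of provisioning more wavelengths per link can be illustrated with the classical Erlang-B loss formula, used here purely as a textbook-style illustration of blocking on a single link with full wavelength conversion and not as the analytical model of the paper.

def erlang_b(erlangs, servers):
    """Erlang-B blocking probability for an offered load of `erlangs` on
    `servers` parallel channels (here, wavelengths on one link), computed
    with the numerically stable recursion."""
    b = 1.0
    for m in range(1, servers + 1):
        b = (erlangs * b) / (m + erlangs * b)
    return b

# At a fixed offered load of 8 Erlangs, blocking falls steeply with W:
# erlang_b(8, 16) ~ 4.5e-3, while erlang_b(8, 24) ~ 2.6e-6

This is consistent with the observation above that, for small W, the limited resources themselves dominate the blocking probability.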
The overall performance improvement of the LIB-PRWA with increases in W can be attributed to the degree of wavelength spatial reuse, i.e. for large values of W, an ingress node can schedule more lightpath connections (bursts) on a given link. Consequently, more lightpath connections traversing different links can be concurrently assigned the same wavelength values. Furthermore, by comparing the two schemes at low traffic levels, the LIB-PRWA has relatively better performance. This is because wavelength contentions prominently contribute to the blockings at low traffic loads, whereas as the network traffic load increases, most of the burst blockings are also caused by insufficient bandwidth. With regards to the number of nodes (hops) traversed, we note that for low traffic loads, the LIB-PRWA algorithm improves the network performance in terms of the blocking. As can be observed from Fig. 15, the two approaches more or less perform identically at high loads, indicating that no more wavelengths are available for newly generated bursts and senders have to block them immediately. B. LIB-PRWA with Intermediate Buffering In this subsection, we compare the performance of the proposed scheme when it enforces intermediate buffering. This implies that the feedback unit is functional. We set the network diameter to 8. Fig. 16 plots the average burst blocking probability as the link load is gradually increased. As anticipated, the LIB-PRWA performs better than the other two. Overall, it is noted that intermediate buffering is effective in contention resolution and consequently improving network performance in terms of blocking probabilities as well as improving fairness to those data bursts that traverse the network through high hop counts. Fig. 15. Average blocking capacity when link traffic is increased The burst loss performance is compared for the three schemes, namely the proposed scheme, the traditional OBS routing's SPF (which uses random RWA) and SPDR, in Fig. 17. Fig. 16. Average blocking probability SPF, LIB-PRWA and SPDR The average blocking performance for the three schemes is plotted as a function of the offered link traffic intensity (load). Beyond load values of 0.4, the proposed scheme's burst loss continues to be relatively low. This is because as the traffic increases, the feedback unit can no longer accommodate all the reticulated data bursts. Further, if we define the coefficient of variation (an indicator of the degree of unfairness to individual traffic connections in the network) to be the ratio of the standard deviation (σ) to the mean (μ), then the traditional scheme performs relatively better when network loading conditions are below 0.7, whereas the proposed scheme performs relatively better in highly loaded network scenarios. Fig. 18 provides a plot of the coefficient of variation of the blocking probability as the link load is varied steadily from 0.2 to about 0.8. Fig. 17. Coefficient of variation of the blocking probability The proposed scheme performs relatively better than the rest. Fig. 19 plots the end-to-end throughput for selected routing strategies considering relatively uniform as well as distance-dependent traffic. Both SPDR and the proposed scheme outperform SPF. However, the proposed scheme utilizes the available network resources much more efficiently and shows the highest throughput overall. VIII. CONCLUSIONS The paper addresses the problem of contention occurrences in OBS networks.
Frequent contention occurrences in such networks lead to high data burst losses and, consequently, a degradation of QoS for the running applications and services. In this paper we primarily distinguish two types of contention: primary and secondary. Primary contention is caused by two or more data bursts contending for the same output port wavelength at the same time, whereas secondary contention registers when the unbuffering of a previously buffered data burst flow in the feedback unit overlaps in time with another incoming burst requesting the same wavelength. A scheme combining limited buffering with SLE-aware prioritized RWA is proposed. The limited buffering at all intermediate nodes caters for those data bursts that have traversed more than half the network's radius and suddenly encounter contention; discarding such bursts as a means of contention resolution would otherwise lead to low network throughput. The SLE-aware RWA has been shown to preclude primary contention and to aid the proper aggregation processes at intermediate nodes, thus ultimately avoiding secondary contention occurrences. A queuing analysis of a typical node, taking into account local, transit as well as feedback traffic, is carried out, and the performance of the model is evaluated in terms of burst buffering probabilities, access delays and nodal delays. It is generally found that bursts traversing longer hop counts have a higher probability of being buffered at intermediate nodes, as they have greater chances of encountering contention. Nodal delays for data bursts awaiting aggregation at intermediate nodes can be minimized by operating the network at higher speeds, as this provisions more wavelengths as well as more voids for the aggregation. Overall, the proposed LIB-PRWA performs better than other equivalent schemes, with improved end-to-end blocking probabilities, throughput, and overall network resource utilization.
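The intermediate-buffering policy summarized above (buffer only those contending bursts that have already crossed more than half the network radius, rather than discarding them) can be sketched as follows. This is a minimal illustration assuming a simple hop-count test; the function and field names are my own and are not taken from the paper.

```python
# Minimal sketch of the contention-handling policy described above; the
# function and field names are illustrative, not taken from the paper.
from dataclasses import dataclass

@dataclass
class Burst:
    burst_id: int
    hops_traversed: int   # nodes already crossed by this burst

def resolve_contention(burst: Burst, network_radius: int,
                       feedback_buffer_free: bool) -> str:
    """Decide how a contending burst is handled at an intermediate node.

    Bursts that have already crossed more than half the network radius are
    temporarily buffered in the feedback unit (if space allows) rather than
    dropped, so that the resources they have already consumed are not wasted.
    """
    if burst.hops_traversed > network_radius / 2 and feedback_buffer_free:
        return "buffer"   # wait for a void on a transit flow, then aggregate
    return "drop"         # contention resolved by discarding the burst

if __name__ == "__main__":
    print(resolve_contention(Burst(1, hops_traversed=5), network_radius=8,
                             feedback_buffer_free=True))   # -> buffer
    print(resolve_contention(Burst(2, hops_traversed=2), network_radius=8,
                             feedback_buffer_free=True))   # -> drop
```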
2020-09-10T10:06:17.342Z
2020-08-10T00:00:00.000
{ "year": 2020, "sha1": "ff36d3bb52a3080eeff2660f9c466cc912c7fa94", "oa_license": null, "oa_url": "https://doi.org/10.35940/ijeat.f1214.089620", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "edef29ad078749f91928a9a2908b3267acfe318d", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
55174574
pes2o/s2orc
v3-fos-license
Data assimilation into land surface models: the implications for climate feedbacks Land surface models (LSMs) are integral components of general circulation models (GCMs), consisting of a complex framework of mathematical representations of coupled biophysical processes. Considerable variability exists between different models, with much uncertainty in their respective representations of processes and their sensitivity to changes in key variables. Data assimilation is a powerful tool that is increasingly being used to constrain LSM predictions with available observation data. The technique involves the adjustment of the model state at observation times with measurements of a predictable uncertainty, to minimize the uncertainties in the model simulations. By assimilating a single state variable into a sophisticated LSM, this article investigates the effect this has on terrestrial feedbacks to the climate system, thereby taking a wider view on the process of data assimilation and the implications for biogeochemical cycling, which is of considerable relevance to the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report. Introduction Pioneering work such as Charney et al. (1975) on the link between vegetation loss in sub-Saharan Africa and drought persistence highlighted the role that feedback mechanisms between the land surface and the atmosphere play in determining climate. Numerous studies (e.g. Zeng et al. 1999, Friedlingstein et al. 2001) have reinforced our knowledge of how land surface properties change in response to climatic forcing, the magnitude of which itself is influenced by the land surface changes. Indeed, vegetation change is accompanied by soil moisture change, which can lead to changes in properties such as surface albedo and evaporation, resulting in precipitation changes through soil moisture feedback (Koster et al. 2004, Zhang et al. 2008, Liu et al. 2010. These complex feedbacks between the terrestrial ecosystem and climate have been extensively studied using land surface models (LSMs), but remain poorly understood. LSMs calculate the surface to atmosphere fluxes of heat, water and carbon, and update the state variable of the surface and subsurface layers . They are crucial components of general circulation models (GCMs), influencing cloud cover, precipitation and atmospheric chemistry, with these coupled systems representing key tools for predicting the likely future states of the Earth's system under 618 D. Ghent et al. anthropogenic forcing (IPCC 2007). However, representation of highly complex biophysical processes in LSMs over highly heterogeneous land surfaces with limited collections of mathematical equations, and the tendency of overparameterization, infers a degree of uncertainty in their predictions (Pipunic et al. 2008). A substantial portion of this uncertainty may be attributed to the representation of land surface feedbacks within coupled climate models (Notaro 2008). Even if atmospheric greenhouse gas concentrations were stabilized, the longmemory effect associated with the climate system means that anthropogenic warming would continue through future decades and centuries. However, large uncertainties remain with respect to our understanding of biogeochemical cycle feedbacks, diminishing our ability to model climate forcing accurately. 
Significant progress has been made in reducing uncertainties associated with atmospheric change, but further consideration of the long-term changes in atmospheric chemistry and the consequences of the associated climate forcing remain a priority (Dameris et al. 2005, Cracknell and Varotsos 2007. To this end, improving the estimations in LSMs of feedbacks to the climate system represents a pertinent objective. Data assimilation may be viewed as an optimum solution for such improvements. Data assimilation is a method of minimizing some of the uncertainties inherent in all LSMs due to their approximation of the complexity in the terrestrial ecosystem. Observations, if available, from sources such as Earth Observation (EO) satellites, can be integrated into the model to update a quantity simulated by the model with the purpose of reducing the error in the model formulation. The correction applied is derived from the respective weightings of the uncertainties of both the model predictions and the observations. There has been much research focused on data assimilation into LSMs in previous years. Particular attention has been paid to assimilation of land surface temperature (LST) to constrain simulations of soil moisture and surface heat fluxes. These assimilation studies include the use of variational schemes (Caparrini et al. 2003) and variants of the Kalman filter sequential scheme, such as the ensemble Kalman filter (EnKF; Crosson et al. 2002, Huang et al. 2008, Pipunic et al. 2008, Quaife et al. 2008, first proposed by Evensen (1994). Coupled GCM land atmosphere models are important tools for climate change prediction and for assessing climate feedbacks over future decades and centuries. However, because of large uncertainties with respect to these feedbacks, an example being cloud formation, a concerted effort is required to improve the modelling of water, energy and carbon exchanges in these coupled systems, by optimizing prediction of key variables, such as soil moisture. The assimilation of observations to improve the quantification of soil moisture has long been an objective of the hydrological community (Crosson et al. 2002, Crow and Wood 2003, Huang et al. 2008. Margulis and Entekhabi (2003), for instance, assimilated skin and air temperature, plus relative humidity, to optimize the water and energy budgets of a coupled land surface-atmospheric boundary layer model. Pipunic et al. (2008) also demonstrated enhanced model estimates as a result of integrating EO observations into their land surface scheme, with improved predictions of latent and sensible heat fluxes. This focus on the moisture states of models illustrates the importance attributed to the longer memory characteristics in coupled systems. Optimization, as a result of data assimilation, thus presents an opportunity to improve our ability to predict water and energy fluxes from the land surface to the atmosphere, with the prospect of reducing climate feedback uncertainty. Moreover, the application of data assimilation in understanding and quantifying feedbacks in the climate system is not just restricted to landatmosphere interactions. The role of marine sediments and ocean biogeochemistry in the long-term regulation of atmospheric carbon has driven the development of data assimilation techniques in these systems, resulting in improved parameter estimation (Annan et al. 2005) and enhanced calibration of ocean atmosphere models (Ridgwell et al. 
2007) through, for example, the integration of phosphate and alkalinity observations. However, as in any coupled chaotic system, minor changes in a single characteristic can have far-reaching effects. This paper considers the sensitivity of related characteristics to the model update of a single variable, through the process of data assimilation. In §2, LST over two regions of the African continent (an area of West Africa (17 • W to 20 • E longitude, 4 • N to 20 • N latitude and an area of North Africa (10 • W to 33 • E longitude, 20 • N to 30 • N latitude)), is integrated into the state-of-the-art LSM JULES (Joint UK Land Environment Simulator), developed by the UK Met Office, during the period 1 January to 31 May 2007. The effect on soil moisture is discussed in §3, whereby the model simulations are compared with European Remote Sensing Satellites (ERS-1 and ERS-2) scatterometer top soil moisture observations. Finally, in §4, the implications of the data assimilation exercise on surface energy, water and carbon fluxes are considered. LST LST is the radiative skin temperature of the land, with wide-ranging influences on several biophysical processes of the terrestrial biosphere, such as the partitioning of energy into ground, sensible and latent heat fluxes (Sellers et al. 1997, Huang et al. 2008) and the emission of longwave radiation from the surface (Rhoads et al. 2001, Trigo et al. 2008, the physiological activities of leaves (Sims et al. 2008), surface dryness (Sandholt et al. 2002, Snyder et al. 2006, and stomatal conductance (Sellers et al. 1997), and its reported response as an effect of the El Niño Southern Oscillation (ENSO; Manzo-Delgado et al. 2004). Sensible heat flux (H) is a function of the difference between surface and air temperature (Rhoads et al. 2001), whereas latent heat flux (LE) is a function of surface temperature because of the influence LST expends on vapour pressure deficit (Hashimoto et al. 2008). Within the surface balance equation, LE and H are tightly coupled, and an increase in one is usually at the expense of the other. LST also has a role to play in the topic of fire modelling within LSMs. For example, it is related to fuel moisture content (Chuvieco et al. 2004), and in combination with other environmental variables can be applied in predicting fire occurrence and propagation (Manzo-Delgado et al. 2004). This is particularly important for Africa, where climate scenarios remain highly uncertain (Williams et al. 2007), most notably in the fire-dominated savannas. Here cloud-free LST pixels from the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) instrument onboard the Meteosat Second Generation (MSG) geostationary satellites, centred over the equator at an altitude of 36 000 km, are integrated into the JULES model over two regions of Africa (West Africa and North Africa) for a 5-month period in 2007. MSG-SEVIRI data SEVIRI acquires an image every 15 min, at a spatial resolution of between 3 and 5 km for the African continent. LST is generated by the Satellite Application Facility on Land Surface Analysis (LandSAF) using a Generalized Split Window (GSW) algorithm (Madeira 2002) for channels IR10.8 and IR12.0, as a linear function of clear-sky top-of-the-atmosphere (TOA) brightness temperatures. Within each scene, bareground and vegetation emissivities, previously assigned to land cover classes (Peres and DaCamara 2005), are averaged and weighted with the fraction of vegetation cover retrieved by the LandSAF (Garcia-Haro et al. 
2005) to estimate channel surface emissivity. Independent assessment of the GSW algorithm against a set of radiative transfer simulations indicated a bias-free algorithm, with random errors increasing in response to increasing viewing zenith angle (Trigo et al. 2008), and with a reported accuracy of 1.5 K (Sobrino and Romaguera 2004) for most simulations between nadir and 50°. Because clouds scatter and absorb infrared radiation, LST retrieval requires identification of cloudy/part-cloudy pixels. Clear-sky pixels are identified by the LandSAF through the application of a cloud mask, which makes use of software developed in support of the Nowcasting and Very Short-Range Forecasting Satellite Application Facility (NWC SAF; http://www.nwcsaf.org), with this information being represented in quality control flags. A complete description of the LST retrieval algorithms can be found in the LandSAF product user manual (available at http://landsaf.meteo.pt/).

Model description and data assimilation

The JULES land surface model, which has been described in detail elsewhere (e.g. Alton et al. 2007), is the community version of MOSES (Met Office Surface Exchange System). It is becoming increasingly important to the UK ecological modelling community because it can be coupled to the Hadley Centre GCM or can be driven by its output. In brief, JULES is a terrestrial gridbox model of fine temporal resolution, in which each gridbox is composed of nine surface tiles: five are plant functional types (PFTs; broadleaf trees, needleleaf trees, C3 grasses, C4 grasses and shrubs) and four are non-vegetation types (urban, inland water, bare soil and ice). Each gridbox is profiled into four soil layers that are homogeneous over the gridbox, with soil thermal characteristics being functions of soil moisture. Prognostic soil fields are updated from values for the previous time-step using the mean heat and water fluxes over the time-step, whereby the total soil moisture content within each soil layer is incremented by the evapotranspiration extracted directly from the layer by plant roots, the diffusive water flux flowing in from the layer above, and the diffusive flux flowing out to the layer below. Furthermore, the Clapp and Hornberger (1978) equations for hydraulic conductivity and soil water suction are applied in the model. The physical processes are driven by meteorological data, which update the state variables typically every 30 or 60 min, whereas the biophysical parameters remain constant over the duration of each model run. The output from JULES includes numerous variables depicting the state of the land surface in terms of water, energy and carbon fluxes. At each time-step the gridbox LST is derived from the sum of the individual tile surface temperatures multiplied by their respective fractional covers within the gridbox. Thus, the surface energy balance equation for each tile, defined by Cox et al. (1999), is given by equation (1):

SW_N + LW↓ − σT_s^4 = H + LE + G_0,    (1)

where SW_N is the net downward shortwave radiation, which is derived from the surface albedo, LW↓ is the downward longwave radiation, σ is the Stefan-Boltzmann constant, T_s is the surface temperature, H is the sensible heat flux, LE is the latent heat flux, and G_0 is the heat flux into the ground. Here LST was assimilated into JULES for a 5-month period from 1 January to 31 May 2007, by applying EnKF sequential data assimilation, which uses a Monte Carlo approach. The exact methodology, which has been applied previously, is described comprehensively in Ghent et al. (2010), with the EnKF approach implemented according to Evensen (2003). To give a brief overview: at each time-step, model estimates are nudged towards the observations based on the respective state and observation error covariance matrices, P and R. The correction to the forecast state vector is determined by the Kalman gain matrix K, defined by equation (2):

K = PH^T (HPH^T + R)^−1,    (2)

where H is the observation operator relating the true model state to the observations, taking into account the observation uncertainty. The Kalman gain matrix is applied to the difference between the model estimates and the observations according to equation (3):

ψ_a = ψ_f + K(Hψ_t + ε − Hψ_f),    (3)

where ψ_a is the updated model estimate, ψ_f is the forecast state vector, ψ_t is the true model state, and ε is the observation uncertainty. The estimate of the model state following the update is taken as the mean of the ensemble members, with the uncertainty indicated by the variance around the mean. The observation error covariance matrix is a measure of the ensemble spread of observations, with randomly generated perturbations constructed using the observation uncertainty of 1.5 K for SEVIRI LST (Sobrino and Romaguera 2004). The distribution of the model ensemble spread, from an ensemble size of 50 in this case, determines the state error covariance matrix, thereby avoiding the expensive integration of the standard Kalman filter. In this study, only perturbations of the meteorological forcing data were considered, generated from normally distributed random numbers with zero mean and unit variance following the Box-Muller transform method (Box and Muller 1958). Uncertainties in model parameterization or initial conditions were not taken into account. Meteorological forcing variables were taken from generated 6-hourly National Centers for Environmental Prediction (NCEP) reanalysis datasets (Kalnay et al. 1996), with precipitation data calibrated from monthly Tropical Rainfall Measuring Mission (TRMM) precipitation data (Kummerow et al. 1998). The model itself was run at an hourly time-step over the 5-month assimilation period, with a spatial resolution of 1° × 1°. Land-cover change was not considered in this experiment, so the fractional coverage of the surface tiles was derived from International Geosphere-Biosphere Programme (IGBP) land-cover classes and mapped onto JULES according to Dunderdale et al. (1999). Initial conditions were set from an equilibrium state following a 200-year spin-up cycle, with soil parameters derived from the International Satellite Land-Surface Climatology Project (ISLSCP) II soil data set (Global Soil Data Task 2000). To quantify the influence that LST assimilation has on the state of the modelled land surface, the changes in several variables were examined: soil moisture, evapotranspiration (ET), and net primary productivity (NPP).

Soil moisture

The partitioning of available energy into sensible heat (H) and latent heat (LE), driven by changes in the surface temperature, is influenced by the vegetative cover and the available soil moisture (Smith et al. 2006). Temperature change in soil is dependent on thermal conductivity and heat capacity. A dry soil heats up more rapidly than a wet soil because the heat capacity of water is higher than that of air, which occupies a much greater percentage of the volume in dry soil. A wet soil surface loses more LE whereas a dry soil surface loses more H.
Soil moisture exhibits a significant memory that can persist for many months, prolonging and intensifying pluvial and drought events (Notaro 2008). Moreover, soil moisture feedbacks can regulate climate change and increase our predictability of seasonal climate, yet the strength and regional significance of this feedback remains poorly understood (Zhang et al. 2008). Evidence for soil moisture-climate feedbacks includes the relationship between soil moisture and precipitation, evaporation, air temperature and cloud cover (Findell andEltahir 1997, Zhang et al. 2008). The most extensive study on soil moisture effects, the Global Land-Atmosphere Coupling Experiment (GLACE; Koster et al. 2004, Guo et al. 2006, involved 12 atmospheric GCM (AGCM) simulations and illustrated that the strong land-atmosphere coupling lies mainly in the ability of soil moisture to affect evaporation in the transition zones between dry and wet climates (Zhang et al. 2008). Identified hotspots include the Sahel, northern USA and southern Europe. Furthermore, the feedback among Intergovernmental Panel on Climate Change (IPCC) AR4 models was assessed over Europe (Seneviratne et al. 2006), with a positive correlation between soil moisture and precipitation. In other words, high soil moisture will support enhanced evaporation, increasing atmospheric water content and eventually leading to increased rainfall, although this temporal response depends on subgrid condensation processes within global models and therefore can vary substantially (Koster et al. 2004). Moreover, the strength and impact of soil moisture feedbacks are likely to differ between El Niño and La Niña events (Seneviratne et al. 2006, Notaro 2008, with vegetation interactions also being a substantial influence (Sellers et al. 1997). Future climate change, driven by increased greenhouse gas concentrations, are likely to enhance hydrological responses in these hotspots of strong positive soil moisture feedback (Notaro 2008). In respect of this, the importance of global soil moisture retrieval, and assimilation into hydrological and biophysical models, has received much recent recognition (Crow et al. 2005, Reichle and Koster 2005, Parajka et al. 2006. Here modelled and assimilated soil moisture estimations are compared with ERS scatterometer top soil moisture observations. ERS scatterometer data The ERS-1 and ERS-2 scatterometers are active C-band (5.6 GHz) microwave instruments, providing backscatter measurements sensitive to the surface soil water content without being affected by cloud cover. The surface soil moisture (SSM) data are retrieved, in a discrete 12.5 km global grid, from the radar backscattering coefficients using a change detection method developed at the Institute of Photogrammetry and Remote Sensing at the Vienna University of Technology. Scatterometer estimates are used to model the incidence angle dependency of the radar backscattering signal. Backscattering coefficients are normalized to a reference incidence angle of 40 • , with these coefficients scaled between the driest and wettest observations over the long term to produce relative SSM data ranging between 0% and 100%, with uncertainty detailed with a soil moisture noise model ). The ERS scatterometer (ESCAT) soil moisture dataset used here has undergone previous validation experiments. Wagner et al. 
(1999) tested the SSM dataset with gravimetric soil moisture measurements over field sites in the Ukraine and found mean correlations of 0.45 (0-20 cm profile) and 0.41 (0-100 cm profile). Ceballos et al. (2005) performed a more extensive validation using a network of 20 soil moisture stations located in western Spain. They found a correlation of 0.75, with a root mean square error (RMSE) (0-100 cm profile) between the scatterometer data and the average soil moisture of 2.2%. However, use of this dataset comes with the caveat that, in extreme climates, such as desert regions, biased estimates may be derived, with azimuthal viewing geometry not taken into account during retrieval (Bartalis et al. 2006). Comparison model: ESCAT In this study modelled soil moisture from the JULES model is compared with SSM scatterometer values in the top 5 cm of the soil from two separate ERS receiving stations, generating SSM 'observations' for northern hemisphere Africa: Maspalomas, covering West Africa, and Matera, covering North Africa. Since 2001, coverage of southern hemisphere Africa did not begin until mid-July 2008, and is therefore not considered during our assimilation period. Figures 1(a) and 1(b) illustrate the comparison for both the modelled state and the assimilated state carried out over the 5-month assimilation period. The SSM 'observations' derived from ESCAT for both West Africa and North Africa are lower than the equivalent modelled by the JULES LSM. It is clear following assimilation that the updated model estimates are closer to the 'observation' values. Indeed, for West Africa a 27.4% reduction in RMSE, from 16.8 to 12.2vol%, between the model soil moisture estimates and the ESCAT SSM 'observations' resulted from the assimilation process. For North Africa, the reduction in RMSE between the model soil moisture estimates and the ESCAT SSM 'observations' as a result of the assimilation process was 32.2%, from 14.6 to 9.9vol%. The modelled and assimilated runs were repeated 50 times over each region, respectively, and paired t-tests performed on the mean RMSEs showed that these reductions in RMSE were significant at the 99% confidence level. It is therefore evident that the process of data assimilation has produced a systematic reduction in the model predictions of soil moisture over both West Africa and North Africa for the period 1 January-31 May 2007. The implication is that this reduction may affect the predictions of heat, water and carbon fluxes from the land surface to the atmosphere. When coupled to the Hadley Centre GCM, this altered change in the strength of the soil moisture-climate feedback could influence the predictions of seasonal and interannual climate. Biogeochemical cycles The main aim of this investigation was to understand and quantify the impact that a change in LST has on the water, heat and carbon fluxes from the surface to the atmosphere. It has been shown that integrating SEVIRI LST into the JULES LSM for the first 5 months of 2007 over much of northern hemisphere Africa resulted in a mean reduction in surface soil moisture during this period. We now consider the effect that this integration, taking the case of West Africa as an example, has on further key fluxes of the water and carbon cycles, respectively: ET (figure 2) and NPP (figure 3). Unmistakeable mean reductions are observed for both these fluxes over the assimilation period. LST and the partitioning of surface energy into H and LE is a function of varying SSM and vegetation cover. 
Predominantly vegetated surfaces are associated with lower maximum LST values compared with bare soil (Weng et al. 2004), with surface roughness a factor (Sandholt et al. 2002). This is because increases in surface temperatures are associated with increases in H, and because in the surface balance equation more energy is partitioned into LE for higher vegetative cover. Higher H exchange is more typical of sparsely vegetated surfaces. LE is enhanced with increased ET, which is controlled by stomatal conductance (Essery et al. 2003). Stomatal conductance is affected by the quantity of photosynthetically active radiation (PAR), but is also crucially linked to the availability of moisture in the soil. A reduction in soil moisture below a critical value causes a partial closing of stomata on the underside of leaves to reduce water loss. The subsequent decrease in ET results in a decrease in LE because the drop in humidity reduces the humidity gradient between the surface and atmosphere, reducing the evaporative cooling and causing an increase in H and thus also in the surface temperature (Crucifix et al. 2005). ET is an important climate system feedback between the land surface and the atmosphere in that soil moisture anomalies can translate into precipitation anomalies through the ET rate (Shukla and Mintz 1982). This feedback on the precipitation regime could significantly influence the occurrence and persistence of pluvial and drought conditions, which in turn influences the distribution of vegetation and thus surface albedo, subsequent surface evaporation and the terrestrial carbon stocks. The terrestrial carbon cycle feedback may be an important component of future climate change (Melillo et al. 2002), with experiments such as Cox et al. (2000) inferring that these feedbacks could significantly influence climate change over the course of the next few decades. A reduction in soil moisture and associated reduction in ET impacts upon the carbon balance, leading to a reduction in NPP as suggested by Rosenzweig (1968), who postulated, in general, a positive relationship between ET and NPP. With interannual variability of NPP greater than that of heterotrophic respiration over Africa (Weber et al. 2009), the implication of a reduction in NPP over a region would be a corresponding reduction in net ecosystem productivity, and hence an altered carbon balance. However, large uncertainties in both the sign and magnitude of the carbon cycle feedbacks remain because of model simplification of the complex terrestrial system. Data assimilation is an exciting field of research offering significant benefits to land surface modelling. The rationale behind this technique is that, although both sources of information, the model and EO, are associated with uncertainty, the combination of the two sources is expected to reduce the resultant uncertainty. For highly changeable variables in time, an LSM may produce more comprehensive coverage than an EO product, which can suffer from missing data or occasional instrumentation problems. However, because validated EO products can be shown to produce more realistic representations of the ground measurements, the integration of these into LSMs may provide the best possible compromise. Furthermore, data assimilation is reliant on the accurate prediction of uncertainty in observations. 
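The uncertainty-weighting principle discussed above can be shown with a deliberately simplified, scalar perturbed-observation update. This is an illustrative sketch only, not the EnKF configuration coupled to JULES in this study: the forecast values are invented, while the 1.5 K observation uncertainty and the 50-member ensemble size are taken from the description given earlier.

```python
# Illustrative sketch: how the relative uncertainties of a model forecast and
# an observation set the weight of the analysis update for one scalar state
# (e.g. LST). Forecast numbers are arbitrary examples.
import numpy as np

rng = np.random.default_rng(0)

obs = 300.0          # observed LST (K)
obs_sigma = 1.5      # reported SEVIRI LST uncertainty (K)

# forecast ensemble standing in for the model spread (50 members, as in the study)
forecast = rng.normal(loc=303.0, scale=2.5, size=50)

p = forecast.var(ddof=1)   # state error variance estimated from the ensemble spread
r = obs_sigma ** 2         # observation error variance
k = p / (p + r)            # scalar Kalman gain: weight given to the observation

# perturbed-observation update of each ensemble member
perturbed_obs = obs + rng.normal(0.0, obs_sigma, size=forecast.size)
analysis = forecast + k * (perturbed_obs - forecast)

print(f"gain K = {k:.2f}")
print(f"forecast mean = {forecast.mean():.2f} K, analysis mean = {analysis.mean():.2f} K")
print(f"forecast spread = {forecast.std(ddof=1):.2f} K, "
      f"analysis spread = {analysis.std(ddof=1):.2f} K")
```

A gain near 1 means the analysis follows the observation closely (large model uncertainty relative to the observation), whereas a gain near 0 leaves the forecast essentially unchanged, which is why accurate specification of the observation uncertainty matters.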
EO products are generated using implicit or explicit assumptions, which may not be consistent with the assumptions made in an LSM, whereby biased observations will cause the model to depart from the correct state (Quaife et al. 2008). If remote sensing products are to be integrated more comprehensively into LSMs, then further validation work needs to be undertaken, with the accurate reporting of measurement uncertainty a priority. As highlighted in Pinheiro et al. (2006), to demonstrate how a small change can be influential: Brutsaert et al. (1993) report a 10% error in sensible heat flux as a result of an error of 0.5 K in LST; Moran and Jackson (1991) report a 10% error in ET as a result of a 1 K error in LST; and Kustas and Norman (1996) suggest that an LST error of between 1 and 3 K can lead to errors of up to 100 W m -2 in surface fluxes to the atmosphere. Because of the feedbacks between the land surface and the atmosphere, it is clear how these comparatively minor uncertainties can produce significantly different climatic conditions. Climate change can lead to both positive and negative feedbacks to the climate system. It is therefore essential that we accurately represent these feedbacks in coupled LSM GCM frameworks if we are to successfully predict future climate change. Conclusions These relationships, among others, suggest that there is potential for LST to act as surrogate for assimilating other state variables into a land surface scheme. Indeed, demand for LST observations is increasing because of its importance in regional and global ecosystem studies, and particularly its sensitivity to surface moisture conditions. Remotely sensed data from EO satellites offers the most feasible source of data to constrain and validate LSMs over large geographical regions, as this overcomes the limitation of sparsely available ground measurements. The significance of model predictions as a resource in climate policy decision making ensures the validation of increasingly employed data assimilation methods a priority. Moreover, care should be taken to quantify the changes in the entire ecosystem dynamics through updating of key variables. Although assimilation of EO data into LSMs offers the prospect of optimizing estimates of key biogeochemical states, herein lies the danger. Unless a thorough understanding and validation of the model output is performed, the possibility of the model being improved in one sense, in terms of reduced RMSEs against validation observations, but degraded elsewhere remains a distinct likelihood. In terms of the predictions of biogeochemical fluxes, the acknowledgement of the influence that data assimilation of EO data has on the feedback from LSMs to AGCMs is of great relevance to the IPCC Fifth Assessment Report.
2018-12-11T18:54:31.300Z
2011-02-01T00:00:00.000
{ "year": 2011, "sha1": "dc24cf729cceb720a2ef153dea80149f991f5bda", "oa_license": "CCBY", "oa_url": "https://figshare.com/articles/journal_contribution/Data_assimilation_into_land_surface_models_the_implications_for_climate_feedbacks/10100879/1/files/18208181.pdf", "oa_status": "GREEN", "pdf_src": "TaylorAndFrancis", "pdf_hash": "03ec1e45e9ee1588df2b57974f34496803b716c6", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
253318726
pes2o/s2orc
v3-fos-license
Analysis of Science Learning Module Development Needs PBL-Based (Problem Based Learning) to Improve HOTS This study aims to examine the needs of students in MTs Negeri 2 Ngawi, especially Class VII, for PBL-based science learning modules to improve HOTS. This development research method is R&D (Research and Development), which is adapted from the development of a 4-D model (Define, Design, Develop and Disseminate). However, this research is limited to the Define stage, which is carried out by analyzing learning procedures. The subjects of this study were MTsN 2 Ngawi science teachers and class VII students who had studied temperature and heat material. Based on the questionnaire from students, it can be concluded that 90% of students believe in the need to develop teaching materials. While the questionnaire given to teachers found the result that learning in the classroom seemed to be monotonous so that it did not support the ability of students to think at a high level, it was necessary to have teaching materials or modules as support because the books provided by the government were inadequate. In conclusion from the results of the questionnaire of students and teachers, as well as the cognitive learning outcomes of students, it is necessary to develop modules related to Temperature and Heat. The development of problem-based learning modules (PBL) is a method for teachers to improve the learning process of students, especially in terms of fostering higher-order thinking skills (HOTS). INTRODUCTION Education is one of the elements that a country must have to ensure its long-term survival by increasing its potential, quality, and human resources (Dewanti, 2019). Education is a form of realization of human cultural development that will occur continuously. This matter is by Law No. 20 of 2003 concerning the National Education System, which states that "national education functions to develop abilities and form a dignified national character and civilization to educate the nation's life, aiming to develop the potential of students to become human beings who have faith and piety in God, healthy, knowledgeable, capable and creative, independent, as well as being a democratic nobleman". Science education must be learned and mastered by junior high school students listed in the 2013 Curriculum because when students master the material they can apply it in daily life (Septiani, et al, 2019). Learning with the scientific process can improve the abilities of students in the future (Zuriyani, 2013). To improve human quality by expanding learning, namely by improving and developing teaching materials or modules. | |97 Based on interviews conducted with science teachers at MTsN 2 Ngawi, Mrs. Warsiyem S.Pd. said that in her learning science is classified as a subject that is not in demand by students because there is too much material and questions with monotonous presentations. This is also supported based on the results of the questionnaire that was distributed to students and they all stated that science is one of the subjects that is avoided. Students want to provide learning materials, especially science with more interesting presentations. They strongly agree that they are given a learning media in the form of a science module as support in taking science learning. Modules are divided into two categories, namely modules that are printed and digital modules (Dewi et al., 2017). 
Modules are used so that students can learn independently (selfinstruction) because they contain teaching concepts that are easy to understand, as a result of making them active in the learning process (active learning). According to Fitri (2017), "modules can also be a learning resource for students because there are LKS and learning activities in them". The language of the module is also simple. Modules are arranged systematically as a result of which students can follow instructions and complete activities convincingly. Modules are designed with a variety of interesting learning activities to encourage student participation in learning activities. PBL (Problem Based Learning) or the problembased learning model is one of the learning models used in this study. The PBL learning model has several benefits, including 1) challenging the ability of learners to acquire new knowledge; 2) increasing student motivation and learning activities. 3) Assist learners with knowledge transfer to the real world, 4) Facilitate the development of knowledge and a sense of responsibility for learners 5) Develop critical thinking skills and the ability to adapt to new information, 6) Provide opportunities for learners to apply their knowledge in the real world, and 7) Foster learner interest. 8) Facilitating mastery of student problem-solving concepts (Priatna, 2019). Researchers chose the PBL learning model to develop this module because the PBL learning model can improve the higher level of thinking ability (HOTS) of students (Luciana, 2016). Learners must have quality and strong human resources, and the ability to think at a high level to solve the problems faced (Khoiro, 2019). The urgency of the need to increase HOTS for learners was conveyed by Saido, G.M., et al (2015) based on the research they conducted. Uswatun (2019), found that "PBL (Problem Based Learning) affects the High Order Thinking Skills (HOTS) of students". This is supported by research conducted by Bahri et al. (2018) which shows that "the results of problem-solving skills using the PBL (Problem Based Learning) learning model are superior to students who apply the direct learning model in their learning". Sofyan and Komariah (2016) in their research obtained a response from lecturers that PBL learning is learning that is easy to plan and can support learning that is in line with a scientific approach per the application of the 2013 curriculum. According to research by Heri Retnawati (2016), "the use of problem-based learning tools is effective to improve HOTS, and problem-based learning is superior to in-person learning to improve HOTS". HOTS (High Order Thinking Skill) or commonly referred to as higher order thinking ability is a type of thinking that requires a high-level cognitive hierarchy from Bloom's Taxonomy, including analyzing (C4), evaluating (C5), and creating (C6) (Anderson & Krathwohl, 2010). While the other three cognitive domains, namely remembering (C1), understanding (C2), and memorization (C3) are the stages of low-level intellectual thinking or LOTS (Low Order Thinking Skill) (Sani, 2015). Low thinking ability is not the need of the 21st century, one which requires high-level thinking skills (Osman et al., 2013), (Turiman et | |98 al., 2012). Rofiah (2013), explained that "High Order Thinking Skill (HOTS) is a thought process that goes beyond memorization and fact reading". HOTS must also be carefully designed according to the student context and teaching materials (Nugroho, 2018). 
Temperature and heat were selected as the subject matter for research and development of learning modules to improve HOTS. The selection of material is based on several factors, including the results of daily tests, the student's scores on temperature and heat materials are not satisfactory, and there are still many students who do not pass in understanding the material as a whole. When learning, most learners only remember the questions and examples given by the teacher. The science teacher of class VII stated that the students are not used to thinking at a higher level, because they can solve the problems that have been demonstrated by the teacher, but the difficulty when the context of the problem is changed to a more difficult level. When learning Temperature and Heat, learners are given questions that require a higher level of thinking ability (HOTS), but few of the learners answer correctly. Teachers only convey knowledge based on general teaching materials, without bringing up problem-solving related to science, especially in temperature and heat materials. Students do not all have teaching materials due to the limitations of schools that do not provide them, as a result of which students have to look for their books as their learning resources. This causes the books that learners have to differ from each other. In addition, the existing infrastructure in schools is not used optimally due to the limited time of the subjects that take place. METHODS This research is a component of development research (R&D/ Research and Development) as a result of the adaptation of the 4-D model (four-D models) proposed by Sivasailam Thiagarajan, Dirothy S Semmuel, and Melvyn I Semmuel (1974). This study examines how problem-based learning modules (PBL) improve student HOTS. This research is a define stage activity that establishes and explains the needs of development research. This study only analyzed the learning process of MTsN 2 Ngawi at the Define stage to find out and collect preliminary data from the study. The define stage consists of 5 activities, namely: initial analysis, student analysis, concept analysis, task analysis, and specification of learning objectives. The initial analysis stage is used to collect information about learning carried out in the classroom so that problems are finally found in the classroom. The student analysis stage is used to find out the characteristics of the learners so that appropriate methods can be determined for learning. The concept analysis stage is used so that students can master the concepts that will be given in the concept map. The task analysis stage is carried out to determine the material and competencies that must be achieved in learning. The last stage is the specification of learning objectives, which is to determine the objectives of learning the material studied. This series of define stages was carried out to a science teacher and 30 students of class VII MTsN 2 Ngawi, East Java who had studied temperature and heat material but still did not fully understand the material because of the material he considered difficult. This stage also analyzes the needs of teachers and students to find out what is needed in learning. RESULTS AND DISCUSSION The first stage of development carried out is the defining stage (Define). At this stage, the researcher conducts an analysis of development needs through literature studies or preliminary research. 
The results of the literature obtained a determination that the accuracy of the use of learning strategies has an impact on higher-order thinking ability (Mustapa, 2014). Azahro, M. N. & Agnafia, D. This initial analysis stage was carried out using observations of the school where the research was carried out, based on observations made with science students and teachers, it was found that science was one of the lessons that were difficult for students to understand, besides that limited media and the use of reference books in learning was still limited. The student analysis stage is carried out for class VII MTsN 2 Ngawi students by looking at the learning results so far by showing the scores of the exam results that have been carried out by previous students. At this stage, it was found that more students who have not been able to learn using problem-based models are proven by the grades shown by the teacher on the problem-solving criteria, but the students are still lacking. Therefore, researchers take a problem-based model or PBL, Task analysis, this stage is carried out by researchers by adjusting KI (Core Competencies) and KD (Basic Competencies) by the 2013 curriculum as the basis for determining the content of the material. The material taken by the researcher is temperature and heat. Concept analysis, at this stage a concept map is made that is used for later science learning to be more targeted when in the classroom. The last stage is the specification of learning objectives, from all the series of analysis stages that have been carried out, it can be concluded that the purpose of this study is the development of a PBL-based science learning module on temperature and heat material for students of class VII MTsN 2 Ngawi. Meanwhile, based on the results of the development needs analysis from the questionnaire given to 30 students, around 48% of students have not completed science learning on Temperature and Heat material, which is indicated by their daily test scores. This is due to the limited number of books and other learning resources that can improve students' higher-order thinking skills (HOTS) and learning that only emphasizes remembering and not HOTS. 90% of students are not interested in science lessons, because their learning has not been adequately supported and seems monotonous. Therefore, 100% of students believe in the need to develop additional teaching materials. This is in line with research conducted by Tyas Deviana (2018), which states that teachers need teaching materials that can meet the learning needs of individual learners and bathe but are adapted to the surrounding environment. The availability of package books in schools still raises problems for students, so teachers and students need teaching materials in the form of modules to support books at school, Riawan (2020). Based on the results of the teacher needs analysis questionnaire, science learning has been running well, laboratory facilities are already in place, learning support books are available even though the number is limited as a result of hindering optimal classroom learning, and teachers have never made modules with temperature and heat materials. Only government-published Integrated IPA package books are used as guides. Classroom learning still seems to be teacher-centered and students' abilities limited such as remembering and memorizing, especially if there are daily tests, as a result of which students are less involved in learning activities. 
Classroom learning has not yet implemented models that hone learners' higher-order thinking skills (HOTS), such as problem-solving. In fact, according to Arsal (2017), learning involves several components, including people and the use of media or learning resources that can support the learning process, so that its objectives can be achieved. Based on the results of the student and teacher questionnaires, as well as the students' cognitive learning outcomes, it is necessary to develop modules related to Temperature and Heat. The development of problem-based learning (PBL) modules is one method by which teachers can improve their students' learning process, especially in terms of fostering higher-order thinking skills (HOTS). This is supported by research from Merinda (2021), which states that the development of PBL-based learning tools is highly feasible for improving HOTS. Research conducted by Widyarti et al. (2019) found that the average higher-order thinking score of students taught with a problem-based learning model was 75.89, with 40.74% of students in the excellent category, 40.74% in the good category, and 18.51% in the sufficient category of higher-order thinking skills.

CONCLUSION

Based on the results of the questionnaire analysis, it is necessary to develop a Temperature and Heat module using a problem-based learning (PBL) model to improve students' HOTS. It is recommended that this science module be given to junior high school students, because the researchers limit the production of modules to this level.

SUGGESTION

Teachers are advised to use modules and teaching materials that match the material needed by students. Schools are expected to provide the learning materials needed by teachers. Students are expected to be more active and engaged in learning activities.
2022-11-05T15:35:10.103Z
2022-09-26T00:00:00.000
{ "year": 2022, "sha1": "eca3c3c79148b01d974892622f9e8e1da1745dc7", "oa_license": "CCBYSA", "oa_url": "http://jurnalpendidikan.unisla.ac.id/index.php/SEAJ/article/download/581/pdf_1", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "c50cffdfee7b50046295bb7a94516a85055d1df0", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
17321393
pes2o/s2orc
v3-fos-license
Analysis for prevalence and physical linkages amongst integrons, ISEcp1, ISCR1, Tn21 and Tn7 encountered in Escherichia coli strains from hospitalized and non-hospitalized patients in Kenya during a 19-year period (1992–2011) Background We determined the prevalence and evidence for physical linkage amongst integrons, insertion sequences, Tn21 and Tn7 transposons in a collection of 1327 E. coli obtained over a 19-year period from patients in Kenya. Results The prevalence of class 1 integrons was 35%, class 2 integrons were detected in 3 isolates but no isolate contained a class 3 integron. Integron lacking the 3’-CS or those linked to sul3 gene or IS26 or those containing the ISCR1 were only detected in multidrug resistant (MDR) strains. The dfrAs were the most common cassettes and their prevalence was: - dfrA1(28%), dfrA12(20%), dfA17(9%), dfrA7(9%), and dfrA16(5%). The aadA were the second most abundant cassettes and their prevalence was: - aadA1(25%), aadA2(21%), and aadA5(14%). Other cassettes occurred in lower prevalence of below 5%. Prevalence of Tn21, ISEcp1, ISCR1 and IS26 was 22%, 10%, 15%, and 7% respectively. Majority of Tn21 containing integrons carried a complete set of transposition genes while class 2 integrons were borne on Tn7 transposon. The qnrA genes were detected in 34(3%) isolates while 19(1%) carried qnrB. All qnr genes were in MDR strains carrying integrons containing the ISCR1. Close to 88% of blaTEM-52 were linked to IS26 while ≥ 80% of blaCTX-Ms and blaCMYs were linked to ISEcp1. Only a few studies have identified a blaCTX-M-9 containing an ISEcp1 element as reported in this study. Multiple genetic elements, especially those borne on incIl, incFII, and incL/M plasmids, and their associated resistance genes were transferrable en bloc to E. coli strain J53 in mating experiments. Conclusions This is the first detailed study on the prevalence of selected elements implicated in evolution of resistance determinants in a large collection of clinical E. coli in Africa. Proliferation of such strains carrying multiple resistance elements is likely to compromise the use of affordable and available treatment options for majority of poor patients in Africa. There is therefore a need to monitor the spread of these highly resistant strains in developing countries through proper infection control and appropriate use of antimicrobials. Background Recent studies conducted in Kenya show that a significant proportion of E. coli strains from clinical specimens exhibit a strong multi-drug resistance (MDR) phenotype [1,2]. Fortunately, β-lactams, fluoroquinolones and aminoglycosides remain effective against a significant proportion of clinical E. coli strains in Kenya. However, recent studies have reported carriage of plasmid-borne aac(6')lb-cr and qnr genes among β-lactamase producers [1,2]. The qnr genes confer resistance to quinolones, while aac (6')-lb-cr confers reduced susceptibility to fluoroquinolones and aminoglycosides. Therefore, carbapenems remain some of the few alternative antimicrobials that are effective against strains harboring a combination of multiple β-lactamase (bla) genes and genes conferring broad-spectrum resistance to fluoroquinolones and amino-glycosides. Carbapenems may however not be readily available or affordable for many patients in Sub-Saharan Africa [3]. In a recent study, we reported carriage of integrons, IS elements, Tn21 and Tn7 in a collection of 27 E. coli strains obtained from hospitalised patients [1]. 
These strains also harbored conjugatively transferrable plasmids conferring resistance to β-lactams, fluoroquinolones, aminoglycosides and co-trimoxazole among other antimicrobials suggesting that genes encoding resistance to these antimicrobials are physically linked to each other. Carriage of physically linked elements, each containing a set of resistance genes, may increases the chances of en bloc horizontal transfer of multiple resistance determinants to susceptible strains. Carriage of multiple resistance elements may in turn confer unique advantages to the host and enable them survive a strong antimicrobial selection pressure especially in hospital settings [4]. Studies to determine the prevalence of resistance elements in a large collection of strains from Sub-Saharan Africa are still lacking. Furthermore, little is known on whether the genetic elements encountered among E. coli strains in this region are physically linked to each other. In this study, we determined the prevalence of integrons, ISEcp1, ISCR1, IS26 as well as transposons Tn21 and Tn7 in a collection of 1327 E. coli strains obtained from inpatient and outpatient populations seeking treatment in Kenyan hospitals during a 19-year period (1992-2011). We also determined genetic content of integrons and determined plasmid incompatibility groupings among strains exhibiting unique resistance phenotypes. Physical linkages among these elements and to bla genes were investigated using PCR methods. Similar analysis were done to determine if the aac(6')-lb-cr and qnr genes are physically linked to these elements. conserved sequences (3'-CS) that contains qacEΔ1 (a truncated gene encoding resistance to quaternary ammonium compounds, and sul1 encoding resistance to sulfonamides). All the three class 2 integrons contained an identical cassette array comprising dfrA1-sat2-aadA1. Prevalence of Tn21, Tn7 and IS elements The prevalence of Tn21 was 22% while Tn7 was detected in 3 isolates that also carried class 2 integrons. Prevalence of ISEcp1, ISCR1 and IS26 was 10%, 15%, and 7% respectively. A high proportion (≥ 60%) of isolates containing the IS elements and integrons were MDR (resistant to at least 3 different classes of antimicrobials), Table 4. Isolates carrying multiple elements were more likely to exhibit an MDR phenotype than those lacking such elements (p:0.0001, CI:549.5 to 2419.6, OR:1153) and isolates from urine were more likely to harbor multiple elements compared to those from blood (p:0.0001, CI:3.1 to 5.5, OR:4.1) or those from stool (p:0.0008, CI:1.2 to 2.0, OR:1.6). Although integrons, IS elements and Tn21 were detected in isolates from all specimen-types, a high proportion (69%) of these elements were detected among strains from urine of hospitalized patients. Physical linkage amongst genetic elements Figure 1 illustrates the strategy used for interrogation for physical linkages amongst genetic elements while Figure 2 illustrates some of the genetic associations identified in this study. Majority (69%) of integrons containing 3'-CS were physically linked to the Tn21 transposon while 75% of those containing a sul3 gene at the 3'-terminal were linked to IS26. This element was also linked to 80% of integrons lacking the 3'-CS, Table 5. Forty (40) isolates contained class 1 integrons linked to a single IS26 upstream the 5'-CS while in 12 isolates the integrons was flanked by two IS26 elements. All ISCR1 were detected only in MDR strains and were flanked by a pair of class 1 integron 3'-CS. 
Close to 94% of Tn21 that were linked to an integron contained a complete set of transposition genes (tnpA, tnpR and tnpM) while 89% of Tn21 with an incomplete set of these genes did not contain an integron, Table 6. All the three class 2 integrons were physically linked to Tn7. Physical linkages between resistance genes and genetic elements Figure 2 illustrates selected examples of physical linkages between bla genes and different genetic elements. Over 40% of isolates carrying bla TEM-52 , bla SHV-5 or bla CTX-M- 14 were physically linked to the IS26, Table 7. The ISEcp1 was the most common IS element associated with bla CTX-M-14, bla CTX-M −15 and bla CMY-2. One isolate contained a bla CTX-M-9 linked to this element. In all cases, the ISEcp1 was detected upstream the bla gene, Figure 2. Thirty seven (88%) of the 42 aac(6')-lb-cr were borne on integrons containing the ISCR1 while 55% were borne on integrons linked to the IS26. Twenty four (71%) of the 34 isolates carrying a qnrA gene were resistant to nalidixic acid but not to ciprofloxacin while the other 10 isolates carrying this gene and 19 carrying the qnrB subtype were resistant to both antimicrobials, Table 8. None of the isolates tested positive for qnrS. Majority (87%) of qnr genes were physically linked to either integron-associated ISCR1 or the IS26. All Isolates carrying aac(6')-lb-cr or the qnr genes contained multiple genetic elements and were all MDR. Carriage of genetic elements or combination of elements among strains exhibiting resistance to different antimicrobials tested in this study. The antimicrobials were grouped into 8 convenient groups:-β-lactams and β-lactamase inhibitors, aminoglycosides, (fluoro)quinolones, nitrofurantoin, chloramphenicol, sulphonamides, trimethoprim, and tetracyclines. Conjugative plasmids mediate en bloc transfer of multiple elements and resistance genes Multiple resistance genes and genetic elements associated with them were transferred en bloc to E. coli J53 in mating experiments, Table 9. Majority of such transferred were mediated by plasmids containing I1, L/M, XI, HI2 and the F-type replicons. These experiments further revealed that genes conferring resistance to tetracylines and chloramphenicol were also harbored in the same plasmids encoding resistance to β-lactams, (fluoro)quinolones and aminoglycosides. However, various gene combinations that had been determined to be physically linked using PCR could not be transferred in conjugation experiments using media containing different combinations of antimicrobials. Discussion The current study shows that a significant proportion of clinical E. coli strains in Kenyan are resistant to important classes of antimicrobials such as β-lactams, fluoroquinolones and aminoglycosides. These results are in agreement with those published before [1,3,5]. These MDR strains were however susceptible to carbapenems. It is easy (although illegal) to purchase antimicrobials in Kenya without prescriptions or with prescriptions not backed by laboratory investigations [6]. We hypothesize that such practices may directly or indirectly lead to emergence of highly resistant strains. A high prevalence of MDR strains from urine and all specimens from hospitalized patients may reflects a corresponding heavy use of antimicrobials among this category of patients as reported in past studies [7,8]. 
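The association analyses reported above (for example, between carriage of multiple genetic elements and the MDR phenotype, or between specimen type and element carriage) are expressed as odds ratios with confidence intervals and p-values. A minimal sketch of how such a 2×2 comparison could be computed is given below; the counts are hypothetical and are not the study's data, and the Wald interval shown is only one of several possible interval methods.

```python
# Minimal sketch (hypothetical counts, not the study's data) of an odds ratio
# and its 95% Wald confidence interval from a 2x2 table, e.g. carriage of
# multiple genetic elements versus the MDR phenotype.
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """2x2 table: exposed/outcome+ = a, exposed/outcome- = b,
                  unexposed/outcome+ = c, unexposed/outcome- = d."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

if __name__ == "__main__":
    # hypothetical example: 400 of 420 element-positive isolates are MDR,
    # versus 90 of 900 element-negative isolates
    or_, lo, hi = odds_ratio_ci(a=400, b=20, c=90, d=810)
    print(f"OR = {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```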
The majority of resistances encountered in hospital isolates were also encountered in community settings, probably because patients are often discharged from hospitals as soon as their conditions improve, even before they complete their treatment regimens (our unpublished observations). It is therefore possible that hospital strains find their way into community settings and vice versa. However, we do not rule out the possibility that some MDR phenotypes arise in community settings. The high prevalence of class 1 integrons may partially be due to their association with Tn21 elements that contain a complete set of transposition genes. Past studies show that dfrA7 and dfrA1 cassettes associated with Tn21-borne integrons are the most prevalent dfrA subtypes in Central, North and Western Africa [9][10][11][12]. In this study, however, the prevalence of dfrA7 was much lower than that of dfrA1, dfrA12 and dfrA17, in that order. The class 2 integron dfrA1-sat2-aadA1 array reported in this study is globally distributed [13]. Our results may therefore reflect regional differences or similarities in the distribution of integron cassette arrays. Such differences may arise from unique antimicrobial-use patterns in different countries. This study also demonstrates an apparent correlation between carriage of dfrA17 and resistance to multiple β-lactams, as has been reported in Tunisia [12,14] and from Northern Kenya among isolates from dog, cat and human specimens [5]. The reasons behind these correlations are yet to be elucidated. Carriage of different dfrA subtypes in our isolates, and carriage of multiple integron-associated sul genes (sul1 and sul3) in the same isolate, possibly correlates with the heavy usage of sulfonamides and trimethoprim in Kenya for the treatment of different infections and as prophylaxis against opportunistic infections among people with HIV/AIDS [15][16][17]. Some integrons, especially those lacking the 3'-CS and those containing a sul3 at the 3'-end, were linked to the IS26, possibly because this element mediates deletion of the 3'-CS at the class 1 integron 3'-terminal [18,19]. Similar results have been published in Australia, Spain and Nigeria [11,12,18,19].

Figure 2. Schematic diagram illustrating examples of physical linkages amongst genetic elements and selected genes. 1a-1f: examples of physical linkages between bla genes and multiple genetic elements such as integrons, ISEcp1 and IS26. 2a-2b: examples of physical linkages between bla genes and ISEcp1. 3a-3d: examples of physical linkages between integrons and other genetic elements (such as the ISCR1 element) that are in turn linked to bla genes and (fluoro)quinolone resistance genes. 4a-4c: examples of physical linkages between Tn21 and integrons that are in turn linked to IS elements. These illustrations are based on PCR mapping data and not sequencing; therefore, the sizes of each gene and the distances between any two genes are not drawn to scale.

Our data further suggest that strains carrying IS26-associated integrons are highly MDR, probably because the IS26 is also linked to other non-integron genes such as β-lactamase genes. Most β-lactamase genes, particularly those encoding CTX-M-14, CTX-M-15 and CMY-2, were physically linked to ISEcp1. Similar reports have been published in Tunisia [20,21], but no ISEcp1 was detected upstream of the bla CTX-M-1 among our isolates, as reported in a related study from the same country [22]. In one isolate, this element was found upstream of the bla CTX-M-9.
Reports of ISEcp1-bla CTX-M-9 linkages are rare, but such linkages have been reported in Klebsiella pneumoniae isolates in Taiwan [23]. The majority of bla TEM genes, bla TEM-52 in particular, were physically linked to the IS26, as reported in Belgium and Germany [24,25]. Taken together, these results suggest that most bla genes in our isolates are in genetic environments similar to those reported globally, but the genetic environment of bla CTX-M-9 and bla CTX-M-1 in our isolates appears to be different from those reported elsewhere. Our results further demonstrated that most bla genes are distantly linked to elements that are in turn linked to other resistance genes such as aac(6')-lb-cr and qnr. Similar reports have been published in Tunisia [20,21] and in Nigeria [11]. ISEcp1, IS26 and ISCR1 are known to mediate transposition and/or expression of multiple resistance genes in their close proximity [26][27][28][29][30][31]. Carriage of such multiple elements, each carrying a set of resistance genes, may be responsible for the observed co-resistance to multiple antimicrobials among our isolates. Conjugation experiments confirmed that multiple elements were borne on narrow host-range plasmids such as IncFII and IncHI2 or on broad host-range plasmids such as IncL/M. The types of conjugative plasmids in our isolates (especially those containing IncF-type, IncHI2, IncI1 and IncL/M replicons) were shown to confer resistances similar to those in strains from Tunisia [32] and from two other studies conducted in Kenya [1,5]. We hypothesize that plasmids of different incompatibility groups have acquired similar or identical sets of resistance genes and that this acquisition is mediated by genetic elements such as those investigated in this study. There is therefore a possibility that such elements act as genetic shuttles between plasmids of different incompatibility groupings. The similarities and differences in the genetic environments of bla, aac(6')-lb-cr and qnr genes reported in this study may reflect differences in the transposition activities of such elements. We further hypothesize that differences in antibiotic-use patterns in different regions influence the transposition activity of such elements.

Conclusions
This study reports carriage of multiple genetic elements in MDR E. coli strains and their association with selected resistance genes. Strains carrying such elements are likely to be well adapted to survive the deleterious effects of combined antimicrobial therapy. Furthermore, such MDR strains have the potential to increase morbidity and mortality among patients. It is therefore important to launch surveillance programs and to put in place measures to curtail the spread of these highly resistant strains. There is also a need to compare the genomes of strains encountered in Africa with those from other parts of the world.

PCR methods were used to screen for three genes that are crucial for transposition of Tn21: tnpA encodes a Tn21-like transposase, tnpM encodes a putative transposition regulator, and tnpR encodes a resolvase. Integrons are incorporated into the Tn21 framework adjacent to the tnpM gene.

Table 7: Analysis of physical linkages between bla genes and various genetic elements. The bla content of the isolates analyzed had been determined in a past study [3].

Table 8: The table shows the number of isolates carrying the three (fluoro)quinolone resistance genes and the proportion of such strains in which these genes were physically linked to various genetic elements and to bla genes. a: distribution of the aac(6')-lb-cr and qnr genes among strains fully susceptible to β-lactams, among those producing TEM-1 or SHV-1 with a narrow substrate range, and among those carrying genes encoding broad-spectrum β-lactamases such as bla SHV-5, bla SHV-12, bla CMY and bla CTX-Ms.

Table 9: Horizontal transfer of genetic elements and associated resistance genes from clinical strains (donors) to E. coli J53 (recipient), showing resistance profiles among donors and transconjugants, resistance to selected antimicrobials among donors, physically linked genetic elements or resistance genes detected in donors and recipients, and other genes whose linkages were not determined.

Isolates
The 1327 non-duplicate isolates were obtained sequentially from 13 healthcare facilities in Kenya between 1992 and 2011 (a 19-year period) from 654 hospitalized and 673 non-hospitalized patients. These isolates comprised 451 strains from patients with urinary tract infections (UTIs) or urinary catheters, while 371 were from the blood of patients with septicemia. Another 505 strains were from fecal specimens of patients with loose, watery or bloody diarrhea. Only one isolate per specimen per patient was included for further analysis. Among the isolates investigated in this study, 912 had been analyzed for bla genes in a past study [3], while 27 had been analyzed for selected genetic elements [1]. Ethical clearance to carry out this study was obtained from the KEMRI/National Ethics Committee (approval number SSC No. 1177).

Antimicrobial susceptibility profiles
Susceptibility profiles for all isolates were determined using antibiotic discs (Cypress Diagnostics, Langdorp, Belgium) on Mueller Hinton agar (Oxoid) according to the Clinical and Laboratory Standards Institute (CLSI) guidelines [33].

Detection of genetic elements
Figure 1 illustrates the strategy used for detection and characterization of integrons and transposons. Detection of class 1, 2 and 3 integrons and determination of carriage of the 3'-conserved sequences (3'-CS) in class 1 integrons were done as described before [34,35]. The class 1 integron variable cassette region (VCR), the region in which the resistance gene cassettes are integrated, was amplified as previously described by Dalsgaard et al. [35], while that of class 2 integrons was amplified as described by White et al. [36]. The VCRs of integrons lacking the typical 3'-CS were determined using a PCR walking strategy published before [37]. Integron cassette identity was determined using a combination of restriction fragment length polymorphism (RFLP) analysis, sequencing and published bioinformatics tools [38,39]. Detection of the ISEcp1, ISCR1, Tn21 and Tn7 elements was done as described in published studies [34,35]. Analysis of the Tn21 transposition genes tnpA, tnpR and tnpM was done as previously described by Pearson et al. [40]. The primers used in this study are presented in Table 10.

Detection of aac(6')-lb-cr and qnr genes
Screening for the aac(6')-Ib-cr gene, which confers cross-resistance to fluoroquinolones and aminoglycosides, was done using a combination of PCR, RFLP and sequencing as described by Park et al. [41]. The isolates were also screened for the quinolone resistance genes qnrA, qnrB and qnrS using PCR and sequencing strategies previously described by Wu et al. [42].
Interrogation for physical linkages between genetic elements and resistance genes
Physical linkages between integrons and transposons were determined using a combination of published primers targeting the 5'-conserved sequences (5'-CS) of class 1 integrons and those targeting the tnpM of Tn21 or those specific for tnpA7 of Tn7, Figure 1. A combination of primers targeting IS elements and those targeting the 5'-CS or the 3'-termini of integrons was used to interrogate physical linkages between integrons and IS elements. A combination of primers specific for the various genetic elements and consensus primers for bla SHV or bla TEM [43,44], bla CTX-M [45], bla CMY [46] and bla OXA [47,48] was used for determination of physical linkages between bla genes and the different genetic elements. Primers for the aac(6')-lb-cr and qnr genes were used in combination with those for the different genetic elements to analyze for their physical association. A long-range polymerase (LongAmp Taq DNA Polymerase, New England Biolabs, USA) was used in all reactions for physical linkages. A slow ramping rate of between 0.2°C/sec and 0.3°C/sec was set for the annealing step. The extension was set at 72°C for 2 min, and a final extension at 72°C for 15 min was carried out after 35-40 cycles of denaturation, annealing and extension.

Table 10: Primers used for screening the various genetic elements and for interrogating physical linkages between different genetic elements and between such elements and bla genes or (fluoro)quinolone resistance genes (e.g., primer OXA-2R: ATYCTGTTTGGCGTATCRATATTC). Y = T or C, R = G or A, S = G or C, K = G or T.

Conjugation experiments
Conjugation experiments using the sodium azide-resistant E. coli strain J53 as the recipient were done as previously described [49]. Antimicrobial susceptibility and the genetic element content of the transconjugants were determined using the same methods as those used for the corresponding donor strains. Plasmid incompatibility groupings were determined using the scheme of Carattoli et al. [50].

Statistical analysis
For the purpose of analysis, both intermediate and resistant results for antibiotic susceptibility testing were grouped together as "resistant". Differences in the proportions of isolates bearing different elements were analyzed using the chi-square test (χ2), while Fisher's exact test was used for smaller sample sizes. The odds ratios (OR) and the 95% confidence intervals (CIs) accompanying the χ2 tests were determined using the approximation of Woolf. The null hypothesis was rejected for values of p < 0.05. Statistical analysis was performed using Statgraphics Plus Version 5 (StatPoint Technologies, Inc., Warrenton, VA, USA).
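The association statistics described above (chi-square tests with odds ratios and Woolf-approximation confidence intervals, and Fisher's exact test for small counts) can be reproduced with standard statistical tooling. The sketch below is illustrative only: the 2x2 counts are hypothetical placeholders, not data from this study, and Python is used purely as an example environment rather than the software the authors employed.

```python
# Minimal sketch of a 2x2 association test: chi-square, odds ratio, and
# Woolf's approximation for the 95% confidence interval of the OR.
# The counts below are placeholders, not data from the study.
import math
from scipy.stats import chi2_contingency, fisher_exact

#                 MDR    non-MDR
table = [[120, 30],    # isolates carrying multiple elements (hypothetical)
         [40, 140]]    # isolates lacking such elements      (hypothetical)

chi2, p_chi2, dof, _ = chi2_contingency(table)

a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)
# Woolf's method: SE of ln(OR) from the reciprocals of the four cell counts.
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

# Fisher's exact test replaces the chi-square test for small cell counts.
_, p_fisher = fisher_exact(table)

print(f"chi2={chi2:.2f}, p={p_chi2:.4f}, OR={odds_ratio:.1f}, "
      f"95% CI {ci_low:.1f}-{ci_high:.1f}, Fisher p={p_fisher:.4f}")
```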
Farmers’ Perceptions of Commercial Insect-Based Feed for Sustainable Livestock Production in Kenya The utilization of insect-based feeds (IBF) as an alternative protein source is increasingly gaining momentum worldwide owing to recent concerns over the impact of food systems on the environment. However, its large-scale adoption will depend on farmers’ acceptance of its key qualities. This study evaluates farmer’s perceptions of commercial IBF products and assesses the factors that would influence its adoption. It employs principal component analysis (PCA) to develop perception indices that are subsequently used in multiple regression analysis of survey data collected from a sample of 310 farmers. Over 90% of the farmers were ready and willing to use IBF. The PCA identified feed performance, social acceptability of the use of insects in feed formulation, feed versatility and marketability of livestock products reared on IBF as the key attributes that would inform farmers’ purchase decisions. Awareness of IBF attributes, group membership, off-farm income, wealth status and education significantly influenced farmers’ perceptions of IBF. Interventions such as experimental demonstrations that increase farmers’ technical knowledge on the productivity of livestock fed on IBF are crucial to reducing farmers’ uncertainties towards acceptability of IBF. Public partnerships with resource-endowed farmers and farmer groups are recommended to improve knowledge sharing Introduction Intensification of agricultural production that improves the competitiveness and profitability of livestock enterprises is one option that can increase food production and reduce poverty in Africa [1]. Poultry, fish and pig production are the fastest growing agribusinesses in sub-Saharan Africa (SSA) providing income and employment opportunities for the population. In Kenya, the livestock sub-sector contributes about 12% to gross domestic product (GDP) and 47% of agricultural GDP [2]. In addition, 66% of Kenyan households keep at least one type of livestock with 98% of the rural households keeping poultry [3]. Poultry keeping is one of the most popular livestock enterprises in Kenya due to its low capital and space requirements. It contributes about 55% to the livestock sector GDP and 30% of the agricultural GDP, or 7.8% of Kenya's GDP [4]. The sub-sector employs about two million people [4] directly in production and marketing and indirectly through linkages with suppliers of inputs such as day-old chicks, feed and veterinary services. Kenya's poultry sub-sector can increase household incomes and contribute to food and nutrition security through the provision of eggs, meat and manure. However, its potential is hampered by the high cost of production with the cost of feed alone amounting to over 70% of the production costs [5]. Owing to the high cost of commercial feed, chicken farmers in Kenya have resorted to formulating their own feed, and/or the inappropriate administration of growth hormones [6]. The own formulated feed often does not meet the required nutritional requirements for the birds [7]. Furthermore, the country's reliance on cheap imports of feed and protein ingredients from neighboring countries makes local feed production unsustainable [3]. The situation is exacerbated by non-tariff barriers (NTBs) to trade that hamper the consistent supply of feed ingredients and unanticipated recent crises brought forth by climate change and global pandemics such as that of coronavirus disease 2019 . 
Insects have been proven to be potential alternatives to animal and plant protein sources worldwide [8]. Although insects occupy 80% of the global biodiversity and have been part of traditional delicacies for over two billion people, they are among the most underutilized feed resources [9,10]. The sustainable utilization of insects in livestock feed formulation has the potential to transform the current overreliance on fishmeal and soybean meal to a vibrant circular economy that offers employment opportunities especially for youths and women at the grassroots with effective feedbacks to the environment. The use of insect protein, particularly the black soldier fly (BSF), in livestock feed formulation is being explored globally [11][12][13][14]. Several milestones in this regard have been achieved [5,12,15,16]. In the European Union, whereas appropriate legislative steps are being initiated to integrate insect protein into feed formulation processes for poultry and pig production, the use of insects in fish feed has been approved [8,17,18]. In Kenya, reference [19] generated business models for insect-rearing for smallholder farmers in a way that would ensure profitability and environmental sustainability. Reference [20] demonstrated that the BSF is locally available in wild ecosystems and can be easily harvested for commercial feed production. Understanding the context and needs of the target groups prior to the release of the innovations facilitates a favorable reception of the technology. Therefore, initiatives on awareness creation to boost farmers' perceptions have been promoted in recent literature [21][22][23]. According to [24], understanding farmers' perceptions provides an accurate reflection of their contextual situation, which could be an impediment to the uptake of innovations. Traditionally, insects are associated with disgust [25], dirt and are considered to be pests, hence the belief that they should be eliminated from the food supply chain [26,27]. Thus, understanding farmers' perceptions of insect-based feeds (IBF) is an important starting point in initiatives that seek to improve livestock welfare through conscious feeding practices and effective management of their health [28,29]. Following [22], this study defines perception as the cognitive interpretation and understanding of the comparative characteristics of insect proteins in livestock feeds over conventional fishmeal and soybean protein. We build on the work of [30] who described the attitudes and knowledge of livestock farmers towards use of insects as a feed alternative in Kenya. This study examines the factors that can support behavioral change of livestock farmers with respect to improved and cost-effective insect-based feeds by synthesizing evidence collected from chicken farmers in Kiambu County, Kenya. The paper sought to answer two questions namely: "What do farmers think (farmer's general view) about IBF?" and "What are the factors that influence their thinking?" Several interdependent factors motivate the undertaking of the study in Kiambu County. First, livestock production is the most prioritized value chain in the county [31]. Besides being connected to nearby markets by a good network of paved roads, an important aspect for farmers' access to markets in developing countries, the county enjoys close proximity to the city of Nairobi that has a high demand for livestock products [28,32]. Reference [33] noted that more than 50% of the population in Nairobi consume chicken products. 
Moreover, the use of affordable and quality feeds like IBF can be a viable option for improving livelihoods in the county where 23% of the households live below the poverty line [34]. A principal component analysis (PCA) was used to construct five perception indices that are used in multiple linear regressions to evaluate the factors influencing farmers' perceptions on IBF. We find chicken farmers in Kiambu County, Kenya, have favourable perceptions of commercial IBF and recommend that policy interventions should be geared towards increasing farmers' technical knowledge and ability to evaluate the performance of different animal breeds reared on IBF through technical training at group level to capitalize on peer learning. The remainder of the paper is organized as follows. Section 2 presents the study methods. The empirical results and their discussions are presented in Sections 3 and 4 respectively. Finally, Section 5 concludes and draws policy recommendations. Analytical Framework This study employs multiple regression analysis to estimate the factors influencing farmers' perception of IBF in Kiambu County, Kenya. The dependent variables of the ordinary least squares (OLS) equations are the perception indices composed using a PCA, while the independent variables consist of farm/farmer and technology specific characteristics. Multiple regression is an extension of linear regression that analyses the correlation between more than one explanatory variable. According to [35], the OLS approach is used in estimating parameters in a linear model. This approach is well-suited to cases where the dependent variable is continuous and, in this case, the continuous nature of the perception indices qualifies the use of OLS. The OLS estimates have commendable statistical properties of being best linear unbiased estimators with minimum variance [35,36]. However, despite the distinction of the estimates, further model adequacy checks and validation are necessary following the linear regression to ascertain the appropriateness of the model [36]. Previous studies have applied factor scores as dependent variables in multiple linear regressions to understand farmers' perceptions. Most recently, reference [37] evaluated livestock farmers' perceptions of collaborative arrangements for manure exchange using multiple regressions based on factor analysis in Denmark. Reference [38] combined various farm and non-farm characteristics to compute factor scores that were used to elicit the determinants of coffee farmers' perceptions of risk. Other studies [39] compared dairy farmers risk perceptions with their risk management practices in Norway using a factor analysis. Whereas factor analysis reveals latent variables representing farmers' perceptions of IBF, the OLS permits in-depth exploration of the factors to consider when advising governments, farmers, research institutions and other stakeholders on IBF. The Principal Component Analysis Method The PCA method was applied in this study to generate factors with strong patterns explaining farmer's perceptions of IBF. PCA is a popular linear dimension reduction technique that reduces an excessive number of correlated variables by building a linear combination of uncorrelated variables that maximize the total variance explained. In doing so, relevant information is extracted from large data and the dimensionality of the data set is reduced by providing new and meaningful variables [40]. 
The use of PCA is validated through the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy, where a value of at least 0.6 is preferred [41]. Components with eigenvalues of at least one are retained based on the Kaiser criterion [36]. Further, the component loadings are subjected to an orthogonal varimax rotation, which produces uncorrelated factor scores for ease of interpretation. Reference [12] recommends the retention of statements with factor loadings above 0.5 for use in composing perception indices, a threshold adopted in this study. Following [42], the index was generated using the weighted sum scores criterion [43] with slight modification relevant to the study context:

P_j = Σ_k b_k (a_jk − ā_k) / S_k,

where P_j is the perception index for the jth farmer, b_k represents the weight/factor loading of the kth perception statement, a_jk is the response of the jth farmer to the kth perception statement, and ā_k and S_k are the mean and standard deviation of the kth perception statement, respectively. The index varies from −1 to +1 and has a mean of zero and a standard deviation of one.

Estimation Strategy
This study estimates five multiple regression equations. The dependent variables of the five equations are perception indices computed using the PCA method. The indices comprise four individual IBF component indices derived from the factor scores of four key IBF perception components (performance, acceptability, versatility and marketability) and a composite index of the four individual IBF components. Following [36], the OLS is specified as a linear function of the parameters:

Y_n = β_k′ X_k + ε,

where Y_n is the nth factor score, β_k denotes the vector of parameters to be estimated, X_k is the vector of farm/farmer and technology-specific characteristics such as age, gender, years of formal education, income, wealth status, awareness of animals feeding on insects for nutritional purposes and group membership, while ε captures the statistical random term that accounts for measurement error.

Data Sources and Sampling Procedure
The study used survey data from a sample of 310 households in Kiambu County selected using a three-stage sampling technique. In the first stage, three sub-counties, namely Kiambu Town, Ruiru and Thika Town, were purposively selected from a total of 12 sub-counties in the County owing to their proximity to the City of Nairobi, their engagement in diverse livestock enterprises and their large chicken populations. In the second stage, a simple random sampling procedure was used to select two wards in each of the three selected sub-counties. The selected wards were Riabai and Ndumberi (Kiambu Township); Mwihoko and Gatong'ora (Ruiru); and Gatuanyaga and Karimenu (Thika Town). Finally, a simple random sampling technique was applied to select 50 respondents in each ward from a sampling frame provided by the county livestock extension office. Following [44], an additional 15 respondents were interviewed to account for non-responses. A semi-structured questionnaire containing a mixture of open-ended questions (where the respondent provides their own answer) and closed questions (which restrict the respondent to the choices provided) was administered by trained enumerators in face-to-face interviews in March 2020. From the initially expected sample size of 315, the final sample size dropped slightly to 310 after data cleaning. Data were analyzed using SPSS 22 and STATA 14 software.
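A minimal numerical sketch of the index construction and estimation strategy described above is given below. It uses hypothetical Likert responses and hypothetical regressor names, Python rather than the SPSS/STATA workflow reported by the authors, and unrotated first-component loadings in place of the varimax-rotated loadings; the standardization term mirrors the (a_jk − ā_k)/S_k part of the index formula.

```python
# Sketch of the perception-index construction and the OLS step described
# above. Column names and the example data are hypothetical placeholders;
# the study used 18 four-point Likert items and survey characteristics.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
import statsmodels.api as sm

rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 5, size=(310, 18)),
                     columns=[f"stmt_{k}" for k in range(1, 19)])

# Standardize each statement: (a_jk - mean_k) / sd_k
z = (items - items.mean()) / items.std()

# Principal components of the standardized items; the loadings b_k of the
# first component stand in for the weights of the index formula (no varimax
# rotation here, for brevity).
pca = PCA(n_components=4).fit(z)
loadings = pca.components_            # shape (4, 18)

# Weighted-sum index for one component: P_j = sum_k b_k * z_jk
performance_index = z.values @ loadings[0]

# OLS of the index on farm/farmer characteristics (hypothetical regressors).
X = pd.DataFrame({
    "aware_ibf": rng.integers(0, 2, 310),
    "group_member": rng.integers(0, 2, 310),
    "off_farm_income": rng.integers(0, 2, 310),
    "education_years": rng.integers(0, 17, 310),
})
model = sm.OLS(performance_index, sm.add_constant(X)).fit()
print(model.summary())
```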
Since IBF is not commercially available, the respondents were provided with background information on IBF products prior to the interviews. This background information pertaining to insect-based products included a pictorial description of the insect, its life-cycle and the harvesting stage, insect inclusion in feed formulation, the resulting compounded IBF products, consumers' readiness to purchase the resulting livestock products and the expected effect of the feed on livestock production. Definition and Measurement of Variables The questionnaire included a total of 18 perception statements and respondents were asked to rate their level of agreement on a five-point Likert scale of agreement/disagreement ranging from 1 (strongly disagree) to 5 (strongly agree). Slight modifications were made to transform the responses in the five-point scale to a four-point scale by eliminating the neutral responses to reduce ambiguity and to strengthen the validity of the factor scores. The 18 perception statements are presented in Section 3 Table 4. A PCA was used to reduce and group the statements into four broad IBF perception attributes (performance, acceptability, versatility and marketability) that have 7, 6, 3 and 2 retained factors respectively (see Section 3) ( Table 5). The statements were based on a wide range of livestock performance indicators such as safety, growth, immunity, feed intake and socio-economic factors such as employment opportunities arising from the IBF value-chain, consumer acceptance of chicken reared on IBF, and environmental sustainability of the feed sources. Table 1 presents a description of the five perception indices. Each of the four individual perception indices had a mean of zero and a standard deviation of one whereas the composite index had a lower mean of approximately −0.15 and a higher standard deviation of about 7 (Table 1). The values of the scores and the overall index ranged between −3 to +3 and −17 to +17, respectively (Table 1). The farm/farmers characteristics that are later included in an OLS regression model as predictors for farmers' perception of IBF are presented in Table 2. Variables capturing a farmer's awareness of IBF attributes, off-farm sources, gender and membership to farmers' groups were measured as dummy variables. Age, wealth status (index) and education were measured as continuous variables. The wealth index was computed using the PCA method following [42]. Four items were used for the estimation of wealth index; animal housing structure [45]; ownership of a television set [46]; land size (above one acre) [47] and the total number of livestock units owned [48]. Since the index ranges from −1 to +1, any household with a positive wealth index was classified as being wealthy. Descriptive Results A summary of the socio-economic characteristics of the respondents is presented in Table 3. Over three-quarters of the households' heads were male and with an average age of 50 years. Household heads had an average of 12 years of formal education which corresponds to the attainment of a secondary school level of education. Eighty-one percent of the farmers had off-farm income sources that complemented their household income while 46% of the farmers were reportedly wealthy. Seventy-two percent of the respondents were members of farmer groups through which they procured inputs and marketed output. While 70% of the farmers were aware of the IBF attributes, nearly all respondents were willing to use commercial IBF once available in the market. 
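The asset-based wealth index described in the preceding section can be sketched in the same way. The four indicators below are stand-ins for the study's items (animal housing structure, television ownership, land size and livestock units), and the data are simulated, so this is only an illustration of the classification rule, not a reproduction of the study's index.

```python
# Sketch of the asset-based wealth index: first principal component of four
# (hypothetical) asset indicators, with households scoring above zero
# classified as wealthy.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
assets = pd.DataFrame({
    "improved_animal_housing": rng.integers(0, 2, 310),
    "owns_television": rng.integers(0, 2, 310),
    "land_above_one_acre": rng.integers(0, 2, 310),
    "tropical_livestock_units": rng.gamma(2.0, 1.5, 310),
})

z = (assets - assets.mean()) / assets.std()
wealth_index = PCA(n_components=1).fit_transform(z).ravel()
is_wealthy = wealth_index > 0          # positive index -> "wealthy"
print(f"Share classified as wealthy: {is_wealthy.mean():.2f}")
```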
Rankings of Farmers' Perceptions of Insect-Based Feeds (IBF) The rankings of the farmers' level of agreement with the importance of various IBF attributes are presented in Table 4. The mean scores ranged between 1.89 and 3.50 with values closer to four indicating more favorable perceptions and values closer to one suggesting less favorable perceptions of IBF, based on a four-point Likert scale. The statement, "I am willing to use IBF once it is commercially available" had the highest mean score ranking of 3.5. The expectation that IBF will lead to employment creation was favorably perceived as indicated by the mean score of 3.43. The mean level of agreement with statements concerning religious and cultural appropriateness of IBF were also high (3.42 and 3.41 respectively), indicating favorable societal acceptance of IBF. Government approval and ability to differentiate the new feed from the conventional feed were also important considerations for farmers (mean scores of 3.29 and 3.27 respectively). Farmers' perception of consumer acceptance of chicken products reared on IBF received a mean score of 3.08 suggesting that consumers would have favorable perceptions on livestock products derived from insect-based feeds. However, this finding is in contrast to earlier studies by [43,49] who noted that farmers were uncertain about whether consumers would accept these products. One plausible explanation for this finding is that meat consumers in Kenya were ready to purchase meat products reared on IBF as noted by [50]. The belief that livestock will have improved feed intake and better tolerance towards diseases ranked moderately at 2.81 and 2.66, respectively. Principal Components of Farmers Perceptions of IBF and Their Associated Loadings Results of the retained principal components and their respective loadings from each of the 18 perception statements are presented in Table 5. The KMO test of sampling adequacy was 0.856 which is within the recommended threshold of 0.6 to 1 [41]. The Bartlett's test of sphericity was significant at a 1% level, implying that the items in each group had significant relationship. Further, the Cronbach's alpha, a measure of internal consistency, for each factor score was above 0.5 hence the perception statements were reliable for PCA. Based on the Kaiser criterion [41], the retained factors cumulatively explained about 64% of the variation. The performance component explained the maximum variation of about 35% with eight items showing factor loadings above the threshold of 0.5 for retention of statements. Farmers typically agreed with statements such as, "IBF will be more sustainable", "IBF is safe for livestock use" and "Livestock will have improved immunity". The component of acceptability explained 11.84% of the cumulative variation and recorded five statements with factor loadings above the 0.5 threshold. It was common for farmers to indicate that "I will use IBF when the government approves it", "IBF is acceptable in my religion", "IBF is acceptable in my culture" and "IBF will create employment opportunities". Two statements namely; "IBF should be fed to all types of livestock" and "IBF should be fed to young livestock" satisfied the 0.5 factor loading threshold and had the highest contribution to the component on versatility which explained about 9% of the variation. This is understandable because farmers keep different breeds of animals on the same farm. 
Marketability recorded two statements with factor loadings above 0.5 and explained the least variation, approximately 7%, in the analysis.

Econometric Results
The results of the multiple linear regression analysis are presented in Table 6. The factors influencing the individual IBF perception components are in agreement with those of the composite index. However, the coefficients of the latter model are larger than those of the former models, possibly because of the effect of aggregation. The adjusted R-squared values, which measure goodness of fit, were low (2% to 26%) but within the range of similar studies. For instance, references [37,38,51] have reported values as low as 1% for linear regression models of survey data. According to [35], it is not unusual to observe low goodness of fit in regression analysis using cross-sectional data and in behavioral studies. All the models except that of versatility were significant at the 1% level. Model diagnostic tests were performed to ascertain the absence of correlations among the factor scores and to further justify the use of individual linear regressions (Appendix A). Overall, awareness, off-farm income, wealth status and group membership positively and significantly influenced farmers' perceptions of commercial IBF, at least at the 5% level. Farmers who were aware of the IBF attributes were more likely to have favourable perceptions of IBF than their counterparts who were not aware. This finding held true for all the perception indices except that of the versatility factors. Similarly, farmers who had an off-farm income source were more likely to have more favourable perceptions of commercial IBF than farmers who did not have an off-farm income source. This was found to hold for the composite index, the performance index and the acceptability index. Wealthier farmers had a higher likelihood of having more favourable perceptions of commercial IBF than their less wealthy counterparts. This was the case for the composite, performance and versatility indices. Finally, households that were members of farmer groups were more likely to have more favourable perceptions of IBF than households that were not members of farmer groups. This latter finding holds for the composite, performance and acceptability indices.

Notes: *** and ** denote statistical significance of variables and models at the 1% and 5% levels, respectively. Standard errors are presented in parentheses. Source: Survey Data (2020).

Discussion
In conformity with our expectations, we found that a majority of the chicken farmers in this study had positive perceptions of IBF. Almost all respondents in this study were willing to use commercial IBF once available in the market. The statement "I am willing to use IBF once it is commercially available" had a mean score ranking of 3.5 out of a possible 4, further reinforcing farmers' acceptance of IBF. Moreover, farmers expected that the introduction of IBF would lead to employment creation, as indicated by the mean score of 3.43. Studies by [19,43] observed that farmers and other stakeholders are willing to rear insects for income diversification and other economic benefits. The PCA method was used to compute four perception indices (performance, acceptability, versatility and marketability) from the factors retained out of the 18 perception statements. The retained factors cumulatively explained about 64% of the variation, and the four indices were used as dependent variables in the regression analysis.
We found awareness, off-farm income, wealth status and group membership to positively and significantly influence farmer's perceptions of commercial IBF at least at the 5% level (Table 6). These findings suggest that commercial IBF was perceived to be more important than conventional chicken feed by farmers who were aware of the IBF attributes, who had an off-farm income source, were wealthy and those who were members of farmers groups. The performance aspects of IBF such as improved feed intake and improved immunity of livestock reared on IBF were perceived to be more important to the farmers who were aware of IBF attributes. This implies that awareness creation and dissemination is important in promoting use of IBF among chicken farmers in Kenya. Our findings are supported by [30,49] who reported that prior exposure to a particular insect positively contributed to farmers' willingness to use IBF. Similarly, the performance aspects of IBF were perceived to be more important by farmers who belonged to groups than those who were not members of any group. Groups play a crucial role in the transfer of information particularly among smallholder farmers who are often members of more than one group [52]. Wealthier farmers and those with access to off-farm income sources perceived the performance aspects of IBF to be more important than their less wealthy counterparts and those with no access to off-farm income respectively. The acceptability elements of IBF were more important to farmers with prior awareness of the nutritional benefits of feeding chicken on insects and those belonging to farmer groups than their counterparts who were not aware. Farmers with off-farm income sources were more keen on the acceptability elements of IBF than those without an off-farm income source possibly because the supplementary income would allow them to purchase IBF once it is commercially available. This is in line with the finding by [53] that farmers with offfarm sources had more positive attitudes towards new technologies. The versatility features of IBF were more important for wealthy farmers than their less endowed counterparts. Similarly, the more educated farmers perceived the versatility features of IBF to be more important than their less-educated counterparts. High literacy levels facilitate the search, access and comprehension of new and existing information. Educated farmers perceive market research as a critical component to safeguard against economic losses experienced during distress sales [54]. Finally, the marketability aspects of IBF were perceived to be more important by the more educated farmers and those that were aware of the fact that livestock feed on insects for nutritional benefits than their less-educated counterparts and those who are not aware of this. This might be attributed to their high level of literacy and resource endowments which allow them to access and synthesize market information and to purchase high valued livestock breeds. Characteristics such as consumer acceptance of meat and eggs from chicken reared on IBF and the ability of these products to fetch higher prices in the market were rated highly by more educated farmers than their less educated counterparts. Conclusions and Policy Recommendations This paper evaluates farmer's perceptions of commercial IBF in Kiambu County, Kenya. It employs a PCA to construct perception indices that are used in multiple linear regressions on a sample of farmers selected using a multistage sampling procedure. 
A sample of 310 farmers was used. We find chicken farmers in Kiambu County, Kenya, to have positive perceptions on commercial IBF. Our findings revealed favourable patterns of farmers' perceptions of commercial IBF in Kenya with regard to feed performance, social acceptability of the IBF feed, versatility of the feed and marketability of meat and egg from chicken raised on the novel insect-based feeds. Farmers' awareness of IBF attributes, membership to groups, education, off-farm income sources and their wealth status were the most important drivers of their perceptions on IBF. However, it should be noted that these findings are context-specific and might not be applicable in countries with different cultural backgrounds. Future studies should explore coverage of more counties to improve the applicability of the results. Given that perceptions are based on exposure to knowledge, the study recommends that policy interventions by county governments in Kenya should be geared towards increasing farmers' technical knowledge and ability to evaluate the performance of different animal breeds reared on IBF through technical trainings at group level to capitalize on peer learning. Interventions such as experimental demonstrations that increase farmers' technical knowledge on the productivity of livestock fed on IBF are crucial in reducing farmers' uncertainties towards acceptability of IBF. Public-private partnerships with resource-endowed farmers and farmer groups are recommended to improve knowledge sharing on IBF. Moreover, since such policy measure might set the backdrop for adoption of insect-based animal feeds, our findings would help shape the institutional, legal, regulatory, financial and economic aspects that affect farmers and commercial influencers. Acknowledgments: The authors gratefully acknowledge contributions from Henry Mwololo for his revisions on the earlier draft, and the livestock extension officers of Kiambu County as well as the farmers for their willingness to assist in the provision of data for this research. Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Therefore, the views expressed herein do not necessarily reflect the official opinion of the donors. Notes: *** denotes statistical significance at the 1% level. Source: Survey Data (2020).
Massive MIMO As Extreme Learning Machine This work shows that massive multiple-input multiple-output (MIMO) with low-resolution analog-to-digital converters (ADCs) forms a natural extreme learning machine (ELM), where the massive number of receive antennas act as hidden nodes of the ELM, and the low-resolution ADCs serve as the activation function of the ELM. It is demonstrated that by adding biases to received signals and optimizing the ELM output weights, the system can effectively tackle hardware impairments, e.g., the power amplifier nonlinearity at transmitter side. It is interesting that the low-resolution ADCs can bring benefit to the receiver in handling nonlinear impairments, and the most computation-intensive part of the ELM is naturally accomplished by signal transmission and reception. I. INTRODUCTION M ASSIVE multiple-input multiple-output (MIMO), where the base station is equipped with a large number of antennas, is a promising technology for 5G and future generation wireless communications [1]. However, numerous radio frequency chains in massive MIMO lead to high power consumption. To address this challenge, low-resolution analog-to-digital converters (ADCs) can be used [2], [3]. Besides the hardware imperfection at base station, there are also hardware impairments at the user side. For example, the use of cheap power amplifier may lead to nonlinear distortions to the transmitted signals. The hardware impairments have to be properly handled to avoid severe performance degradation. Many investigations have been carried out, e.g., the works in [4], [5], which either address the impairments at transmitter side or receiver side. There are few works addressing the impairments at both user and base station sides. In this work, we bring up a brand-new method to address the challenges in massive MIMO by treating massive MIMO as a natural extreme learning machine (ELM). ELM is a single-hidden layer feedforward neural network, where the input weights and biases are randomly initialized and fixed [6]. The parameters to be learned in ELM are the output weights, which boils down to solving a linear system, making ELM fast in learning. ELM has been investigated for light emitting diode (LED) communications in our previous works [7], [8] to tackle LED nonlinearity and/or cross-LED interference. We have designed ELM based non-iterative and iterative receivers [8], and our investigations demonstrate that ELM is very effective to handle nonlinearity, delivering much better performance than polynomial based techniques [9], [10]. ELM has also been used for channel estimation and detection for OFDM systems [11], [12]. In this work, we consider the uplink of a massive MIMO system where transmitted signals of users suffer from nonlinear distortions, and the base station is equipped with a massive number of antennas with low-resolution ADCs. It is interesting that the massive MIMO itself can be treated as (part of) an ELM. In particular, the transmit antennas of users can be regarded as the input nodes of the ELM, the massive number of antennas at base station acts as the hidden nodes of the ELM, so the massive MIMO channel matrix functions as the input weight matrix of the ELM. Furthermore, the lowresolution ADCs serve as the activation function of the ELM. Then we add biases to the received signals before analog-todigital conversion and obtain the output weights of the ELM with training signals. 
We show that the ELM can effectively handle the nonlinear impairments and, particularly, that the low-resolution ADCs are helpful in handling the nonlinear distortion at the transmitter side. The rest of the paper is organized as follows. In Section II, the signal model for massive MIMO with hardware impairments is presented. ELM is briefly introduced in Section III. In Section IV, an ELM receiver is borrowed from [7] for massive MIMO detection. In Section V, the new ELM based receiver is proposed, where the massive MIMO itself is treated as part of the ELM. Simulation results are provided in Section VI, followed by conclusions in Section VII.

II. MASSIVE MIMO WITH HARDWARE IMPAIRMENTS
We consider the uplink transmission in a massive MIMO system with K active users. Assume that each user has a single antenna and the base station is equipped with N antennas, where N can be much larger than K. In this work, we particularly consider the nonlinear distortion of power amplifiers at the transmitter (user) side and low-resolution ADCs at the receiver (base station) side. The nonlinear distortion of the power amplifier can be characterized by the nonlinear amplitude-to-amplitude conversion (AM/AM) and amplitude-to-phase conversion (AM/PM) [13]

A(a) = α_a a / (1 + β_a a²), Φ(a) = α_φ a² / (1 + β_φ a²),

where a is the amplitude of the signal input to the power amplifier, and A(a) and Φ(a) represent the amplitude distortion and phase distortion of the power amplifier, respectively. The received baseband signal vector at sampling time instant m can be represented as

y[m] = H f(x[m]) + n[m],

where x[m] = [x_1[m], x_2[m], . . . , x_K[m]]^T denotes the transmitted signal vector of the K active users, ()^T denotes the transpose operation, H denotes the N × K channel matrix, n[m] denotes an additive white Gaussian noise vector, and f(x) is an element-wise function that accounts for the distortions of the power amplifier to the amplitude and phase of the transmitted signal, i.e., f(x) = A(|x|) e^{j(∠x + Φ(|x|))}. After the low-resolution ADC, the signal can be represented as

r[m] = Q(y[m]),

where Q(.) denotes the quantization operation. The aim of the receiver is to recover x[m] based on the quantized signal r[m].

III. EXTREME LEARNING MACHINE
The structure of ELM is shown in Fig. 1. ELM is a single-hidden-layer feedforward neural network, where the input weights {ω_lu} and biases {b_l} are randomly initialized and fixed without tuning [6]. The parameters to be learned in ELM are the output weights, and hence ELM can be formulated as a linear model with respect to the parameters, which boils down to solving a linear system, making ELM efficient in learning. For an input vector r[m] = [r_1[m], r_2[m], . . . , r_U[m]]^T, the vth output of the ELM shown in Fig. 1 can be expressed as

o_v[m] = Σ_{l=1}^{L} β_vl g(ω_l^T r[m] + b_l),

where L is the number of hidden nodes, ω_l = [ω_l1, ω_l2, . . . , ω_lU]^T is the input weight vector that connects all input nodes to the lth hidden node, b_l is the bias of the lth hidden node, g(.) is the activation function of the hidden layer, and β_vl denotes the output weight that connects the lth hidden node and the vth output node. We can express the M equations in (5) in matrix form as o_v = G β_v, with G = [g[1], g[2], . . . , g[M]]^T, where g[m] = [g(ω_1^T r[m] + b_1), . . . , g(ω_L^T r[m] + b_L)]^T is the hidden layer output vector at time instant m. ELM randomly selects input weights and biases, and the output weights β_v are obtained by minimizing the cost function ||G β_v − t_v||² + γ ||β_v||², where t_v denotes the vector of training targets for the vth output node. The regularized smallest norm least-squares solution is given by [14]

β_v = (G^T G + γ I)^{-1} G^T t_v,

where I is an identity matrix and γ is a regularization parameter.

IV. ELM RECEIVER BORROWED FROM [7]
In [7], we proposed an ELM based receiver to handle both the LED nonlinearity and cross-LED interference in MIMO LED communications. Here, we borrow the ELM receiver in [7]. As shown in Fig. 2, the real and imaginary parts of the quantized received signals form the input to the ELM. Then, the output weight vectors {β Re k , β Im k , k = 1, 2, . . .
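The ELM training step described above, with fixed random input weights and biases, a nonlinear hidden layer, and output weights obtained by regularized least squares, can be sketched numerically as follows. The dimensions, the sigmoid activation and the value of γ are placeholders rather than the paper's settings, and the data are random stand-ins for received signals and training targets.

```python
# Minimal ELM sketch: random, fixed input weights and biases; only the
# output weights are learned by regularized least squares. Sizes, the
# sigmoid activation and gamma are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
U, L_hidden, V, M = 20, 512, 10, 3000   # inputs, hidden nodes, outputs, samples

R = rng.standard_normal((M, U))          # input vectors r[m], one per row
T = rng.standard_normal((M, V))          # training targets t_v[m]

W = rng.uniform(-0.1, 0.1, (U, L_hidden))   # fixed random input weights
b = rng.uniform(-0.1, 0.1, L_hidden)        # fixed random biases

def g(x):
    # Activation function of the hidden layer (sigmoid as an example).
    return 1.0 / (1.0 + np.exp(-x))

G = g(R @ W + b)                          # hidden-layer output matrix (M x L)

gamma = 1e-2                              # regularization parameter
# Regularized least-squares output weights: (G^T G + gamma*I)^-1 G^T T
beta = np.linalg.solve(G.T @ G + gamma * np.eye(L_hidden), G.T @ T)

T_hat = G @ beta                          # ELM outputs on the training data
print("training MSE:", np.mean((T_hat - T) ** 2))
```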
, K } can be obtained using (9), and each pair of weight vectors correspond to a user. Then, the trained ELM can be used to detect the transmitted data of each user, i.e., the estimator for x k [m] can be represented as where Ω ∈ R L×2N and b ∈ R L . Then, the decision based oñ x k [m] can be expressed aŝ where c belongs to the symbol alphabet. It can be seen in (10) that, intensive calculations are involved in the product of the input weight matrix Ω and the input data vector r [m], leading to a quadratic complexity O(LN). As we proposed in [7], we can put a constrain on the structure of Ω, i.e., it is a (partial) circulant input weight matrix, enabling an implementation using the fast Fourier transform (FFT) with significantly reduced complexity. Refer to [7] for details. A. New ELM Based Massive MIMO We treat massive MIMO with low-resolution ADCs as a natural ELM, based on which a new receiver is designed. It is noted that the idea and receiver here are completely different from those in Section IV. Figure 3 illustrates the ELM based massive MIMO system where the transmit antennas, massive MIMO channel and receive antennas serve as part of the ELM. As a common assumption in massive MIMO, we assume that the number of active users K is less than the number of receive antennas N at base station. By comparing Fig. 3 with the ELM in Fig. 1, the K transmit antennas are analogous to the input nodes of the ELM, and the signals are transmitted over the air, which are picked up by the receive antennas. We treat the receive antennas as the hidden nodes of the ELM, and the channel matrix H is analogous to the input weight matrix Ω of the ELM. To mimic the ELM, we add a bias b n to the received signal y n at each receive antenna. Then the biased signals are input to the low-resolution ADCs. Hence the signal vector after ADC can be represented as where s[m] = f (x[m]) is the distorted signal vector. We treat Q(.) as the activation function, and the only difference between (12) and (8) So, β Re k and β Im k can be obtained by solving two regularized LS problems, i.e., β Im where t Re ] T is the training sequence of the kth user. The trained output weights can be applied to received signals to estimate the transmitted data of each user, i.e., Then, the decision based onx k [m] can be expressed aŝ where c belongs to the symbol alphabet. B. Comparisons with Receivers and Remarks 1) Conventional ZF receiver with perfect channel state information: With the perfect knowledge of the channel matrix H, the weight of the ZF detector can be represented as where h k is the channel vector of the kth user, () H denotes the conjugate transpose, and () * denotes the conjugate operation. The detector simply ignores the nonlinear distortion to the transmitted signal and the impact of the low-resolution ADCs at the receiver side, which leads to very poor performance as shown in Section VI. 2) Conventional ZF receiver with training.: The detector is directly trained using training signals. In this case the weight of the detector can be expressed as where R = [r [1], r [2], . . . , r[M]] T . It is interesting that the directly trained detector performs slightly better than the detector with perfect H, as shown in the Section VI. This is because the training considers the impact of nonlinearity, although it is still a linear one. 3) ELM receiver in Section IV: As shown in Fig. 2, the ELM receiver in Section IV treats the quantized received signals as the input, and it needs a large number of hidden nodes. 
The new ELM based receiver shown in Fig. 3 is very different. In the new ELM based receiver, the multiplication of the input weight matrix with the input vector is naturally accomplished by signal transmission over the air, and the output of the ADCs is the output of the activation function. Clearly, compared to the ELM receiver in Section IV, the new ELM based receiver has lower complexity of training and significantly lower complexity of detection. Once trained, the new ELM based receiver only needs to carry out (16) and (17) for detection. However, the ELM receiver in Section IV needs to carry out (10) and (11) for detection, which involves matrixvector multiplication. In addition, as shown in Section VI, the new ELM based receiver can even achieve considerably better performance. VI. SIMULATION RESULTS We consider a massive MIMO system with K = 10 transmit antennas and N = 256 receive antennas, and 16-QAM is used. According to [13], the parameter setting for the power amplifier nonlinearity is as follows: α a = 1.96, β a = 0.99, α φ = 2.53 and β φ = 2.82. ADCs with 6bit quantization are used. The signal-to-noise ratio (SNR) is defined as SN R = P s /P n , where P s is the power of the signal (per transmit antenna), and P n is the power of the noise (per receive antenna). To train the ELM and ZF receivers, training signals with length 3000 are used. We assume rich scatter environments, and the elements of H are independently drawn from a proper complex Gaussian distribution with mean 0 and variance 1. Figure 4 shows the symbol error rate (SER) performance of the new ELM based receiver, ELM receiver in Section IV and conventional ZF receivers with perfect channel state information and training. For the ELM receiver in Section IV, 512 hidden nodes are used, and the input weights and the biases are drawn uniformly from [-0.1,0.1]. For the new ELM based receiver, the biases are also drawn uniformly from [-0.1,0.1]. It can be clearly seen from Fig. 4 that the ZF receivers have poor performance due to their weak capability to mitigate nonlinearity. In comparison, the ELM receiver in Section IV can effectively handle the hardware impairments. It also can be seen that the new ELM based receiver delivers the best performance, with significantly lower complexity compared to the ELM receiver in Section IV. To examine the impact of received signal biasing and low resolution ADCs, we carry out an interesting experiment. We assume a trained ZF receiver without quantization (i.e., infinite number of bits for ADC) is used. One ZF detector is trained without adding biases to the received signal, and the other one is trained with biased received signal. The results are shown in Fig. 5. It is interesting that the ZF with biased received signal performs better than the ZF without biasing. This demonstrates that, even for a linear detector, adding biases to the received signal is helpful in dealing with the nonlinear distortion at the transmitter side. It is also interesting that, from Fig. 5, the ZF detector with biasing delivers performance much worse than that of the new ELM based receiver. This indicates that the low resolution ADCs are even helpful to deal with the nonlinear distortion when they are exploited as activation function of the ELM. 
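A much-simplified end-to-end sketch of the receiver evaluated above is given below: the channel matrix plays the role of the fixed input weights, biased received signals pass through a coarse uniform quantizer standing in for the low-resolution ADCs, and only the output weights are trained. Real-valued BPSK symbols, an AM/AM-only Saleh-type distortion and the uniform quantizer are simplifications introduced here, and the parameter values are placeholders, so this is an illustration of the idea rather than a reproduction of the reported results.

```python
# Simplified simulation sketch of the "channel as hidden layer, ADC as
# activation" receiver. BPSK, a real-valued channel and a uniform quantizer
# are simplifications, not the paper's exact setup.
import numpy as np

rng = np.random.default_rng(0)
K, N, M_train, M_test = 10, 256, 3000, 2000
snr_db, gamma = 20.0, 1e-2

def saleh_amplitude(a, alpha=1.96, beta=0.99):
    # AM/AM distortion of the power amplifier (phase distortion omitted here).
    return alpha * a / (1.0 + beta * a ** 2)

def quantize(y, bits=6, vmax=8.0):
    # Uniform quantizer standing in for the low-resolution ADC.
    levels = 2 ** bits
    step = 2 * vmax / levels
    return np.clip(np.round(y / step), -(levels // 2), levels // 2 - 1) * step

H = rng.standard_normal((N, K))                 # channel = ELM input weights
b = rng.uniform(-0.1, 0.1, (N, 1))              # biases added before the ADC

def receive(X):
    S = np.sign(X) * saleh_amplitude(np.abs(X)) # PA distortion at the users
    noise_std = 10 ** (-snr_db / 20)
    Y = H @ S + noise_std * rng.standard_normal((N, X.shape[1]))
    return quantize(Y + b)                      # ADC output = hidden layer

X_train = rng.choice([-1.0, 1.0], (K, M_train)) # BPSK training symbols
G = receive(X_train).T                          # hidden-layer outputs (M x N)
# Output weights by regularized least squares, one column per user.
B = np.linalg.solve(G.T @ G + gamma * np.eye(N), G.T @ X_train.T)

X_test = rng.choice([-1.0, 1.0], (K, M_test))
X_hat = np.sign(receive(X_test).T @ B).T        # detection: project and slice
print("symbol error rate:", np.mean(X_hat != X_test))
```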
As a final remark, we note that, because ELM allows fast learning (only the output weights need to be updated), an adaptive ELM receiver can be developed to handle time-varying massive MIMO channels; once the output weights are initialized, such a receiver requires shorter training sequences. VII. CONCLUSION In this letter we have shown that massive MIMO with low-resolution ADCs can be treated as a natural ELM, where the massive number of receive antennas act as the hidden nodes and the ADCs act as the activation function of the ELM. By adding biases to the received signals and optimizing the output weights, the ELM can effectively handle hardware impairments in massive MIMO. The effectiveness of the receiver has been demonstrated through simulations.
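The adaptive variant mentioned in the final remark is not specified in the letter; one plausible realisation, sketched here purely as an assumption on our part, is a recursive least-squares update that touches only the output weights as new pilot symbols arrive.

import numpy as np

class RLSOutputWeights:
    # Recursive least-squares update of the ELM output weights only, for slowly
    # time-varying channels (illustrative; the forgetting factor and the
    # initialisation are our assumptions).
    def __init__(self, n_hidden, n_out, forget=0.999, delta=100.0):
        self.Beta = np.zeros((n_hidden, n_out))
        self.P = delta * np.eye(n_hidden)    # estimate of the inverse input correlation
        self.forget = forget

    def update(self, g, t):
        # g: real-stacked ADC outputs for one symbol period, t: known pilot target
        Pg = self.P @ g
        k = Pg / (self.forget + g @ Pg)          # gain vector
        err = t - g @ self.Beta
        self.Beta = self.Beta + np.outer(k, err)
        self.P = (self.P - np.outer(k, Pg)) / self.forget
        return self.Beta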
2020-07-02T01:01:47.145Z
2020-07-01T00:00:00.000
{ "year": 2020, "sha1": "81bd53a98fa8751587073a2ca6444f2ae93a99bf", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2007.00221", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d2f6e7d8bd25b7065fb4e1bc45584ad9db113fe6", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Engineering", "Computer Science" ] }
234950017
pes2o/s2orc
v3-fos-license
Large neck metastasis with unknown primary tumor - a case report Introduction. Metastatic head and neck carcinoma from an unknown primary tumor is defined as a metastatic disease in the neck?s lymph nodes without evidence of a primary tumor after appropriate investigation. Multiple national guidelines recommend that essential steps in diagnostic protocols involve a detailed clinical exam with radiological imaging, fine-needle aspiration (FNA) biopsy of the cervical tumor, panendoscopy with palatine and lingual tonsillectomy, immunohistochemical staining, and human papillomavirus (HPV) detection. Treatment of head and neck carcinomas of unknown primary (CUPs) origin involves surgery (neck dissection) with radiotherapy, while some authors recommend chemo-radiotherapy in cases of the advanced regional disease. Case report. A 44-year old male was referred to the tertiary medical center because of a large ulcero-infiltrative cervical mass on the right side. Examination of the head and neck and flexible nasopharyngolaryngeal endoscopy was conducted, followed by computed tomography (CT) of the head, neck, and thorax with intravenous contrast. The primary localization of the tumor was not confirmed by these diagnostic methods. Open biopsy of the neck mass confirmed histopathology diagnosis of metastatic squamous cell carcinoma. Results of panendoscopy with biopsies and bilateral tonsillectomy were negative for malignancy. Treatment included extended radical neck dissection with reconstruction and postoperative ipsilateral radiotherapy. Five years after the first surgery, the patient presented with an extensive pharyngolaryngeal tumor. Biopsy with histopathology examination confirmed the diagnosis of squamous cell carcinoma. Conclusion. A structured step-by-step diagnostic approach in identifying the primary site of the metastatic head and neck carcinoma is mandatory. Substantial advances in diagnostics and operative techniques have increased the likelihood of primary tumor identification, as well as detection of regional and systemic spread of the disease. Purpose of adherence to guidelines results in higher overall-survival and longer regional disease-free survival in these patients. Treatment of head and neck SCCUPs prioritizes loco-regional control. Initial recommendations involve surgery (neck dissection) with radiotherapy. 1,2,9 The importance of chemo-radiotherapy is stressed for N2, N3, and metastases with extracapsular extension. 1,10 Treatment remains heterogeneous and still based on retrospective studies, clinical experience, and institutional policies. We present a case of squamous cell carcinoma neck metastasis with an unknown primary tumor to illustrate the importance of a structured diagnostic protocol and appropriate treatment choice in achieving better overall and disease-free survival. Case report A 44-year old male patient was referred to our clinic with a painless large ulcerousinfiltrative cervical mass on the right side. The neck mass appeared four months prior referral. On admission, he did not report any other relevant symptoms in the head and neck region or any comorbidities or allergies. He was a heavy smoker (up to 60 cigarettes a day for 20 years) and frequently consumed alcohol (over 500ml of spirits a day for over 15 years). 6 We conducted a complete and careful clinical otorhinolaryngology examination, followed by flexible nasopharyngolaryngeal endoscopy. Clinical findings appeared normal. 
Prior to hospitalization, computed tomography (CT) of the head, neck, and thorax with intravenous contrast was done. Imaging findings indicated nodal metastatic disease in the right neck, with central necrosis, infiltration of adjacent muscles, internal jugular vein, and skin. All parts of the pharynx and larynx were without pathological findings. (Figure 1) Discussion The failure to detect the primary tumor location in a patient with metastatic head and neck cancer poses a clinical challenge that can affect the course of treatment and disease prognosis. New recommendations were made in recent guidelines, but weren't applied in the case presented above, which further illustrates their importance in the diagnostic protocol, choice of treatment, and better overall and disease-free survival. After clinical examination and diagnostic imaging, fine-needle aspiration (FNA) biopsy is a crucial step in assessing neck nodal mass in SCCUP. The American Joint Committee on Cancer (AJCC) 3 recommended adding HPV staining to the diagnostic work-up. HPV specific marker p16 positive immunohistochemical staining would indicate a potential oropharyngeal primary tumor (palatine tonsil and base of the tongue). Lymph nodes metastases in SCCUP were positive for HVP in 7.8 to 30%. 6,9 In Serbia, oropharyngeal carcinoma were positive for p16 HPV in 45%. 10,11 A positive p16 result should at least be followed by HPV specific testing (in situ hybridization or PCR), especially in cases where no non-keratinizing histology or lymph nodes are not found in II or/and III region. PET/CT is recommended in all patients where conventional imaging failed to identify the tumor's primary site. PET/CT has high sensitivity (up to 88.3%) and negative predictive value (from 68.9% to 93%), which makes it an excellent complementary diagnostic tool. 7,12,13 Diagnostic protocols that use preoperative PET/CT preceding panendoscopy with directed biopsies resulted in detection of the primary lesion in over 90% of the patients. 12 to deep tonsil biopsies where the identification rate was only 3%. 15 Bilateral tonsillectomy is preferred to unilateral due to a possible bilateral and contralateral tumor location in 15% of the cases with tonsillar malignancies. 16 Recommendation on lingual tonsillectomy is still not firmly established. With advances in operative techniques that include transoral laser microsurgery and transoral robotic surgery, lingual tonsillectomy provided a tumor detection rate of 56% in patients with SCCUP. 8 Bilateral tonsillectomy should always be performed in cases of SSCUP, while in the presented case, only blind biopsies were done in the absence of the evident tumor site. In this case, the pharyngolaryngeal tumor was considered a secondary primary, but we cannot exclude the possibility of the contralateral recurrent disease if the tonsils were positive for occult carcinoma. Further treatment in patients with unknown primary carcinoma with neck metastases involves neck dissection followed by postoperative radiotherapy (RT) or consideration of chemo-radiotherapy. 1,15 Multiple retrospective studies had inconsistent results regarding radiotherapy field size. Some reports reported that patients who underwent bilateral RT did not have significantly better overall survival and regional recurrence compared to patients treated with unilateral radiotherapy to the neck and mucosal surfaces. On the other hand, some studies favor bilateral nodal and mucosal irradiation. 
17,18 The NCCN recommends chemo-radiotherapy in N2/N3 disease cases with extracapsular extension (ECE) 14, although it should be noted that no randomized trials have demonstrated the superiority of this treatment over radiotherapy alone. Due to the low incidence of the disease and the lack of high-quality evidence, clear clinical management protocols are not available. Conclusion Substantial advances in diagnostics and operative techniques have increased the likelihood of primary tumor identification, as well as detection of regional and systemic spread of the disease. If a CT or MRI does not identify a primary site, PET-CT should be performed before surgical endoscopy and biopsies. In cases of SCCUP, bilateral tonsillectomy with lingual tonsillectomy is indicated during panendoscopy. Although high-quality evidence for treatment protocols is lacking, patients with more advanced stages of regional disease should be considered for chemo-radiotherapy in addition to surgery and radiotherapy.
2021-05-22T00:02:56.639Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "331664bd50e95819eb95a38633968bf6b24ae227", "oa_license": "CCBYSA", "oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=0042-84502100037D", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "ae9175db34e6d7ac53883a823054c37690b7c59d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
221367067
pes2o/s2orc
v3-fos-license
A new step toward tuberculosis vaccine? Understanding the role of the immune system in controlling tuberculosis (TB) infection is pivotal to reach the goal of full elimination of the disease by 2050, as highlighted by World Health Organization (WHO) [1]. TB diagnosis in children is particularly challenging, especially in limited resource settings, and studies which aim to identify target population to address efforts are crucial. Host's characteristics in determining disease control and its severity are increasingly studied, with the main aim to identify biomarkers of TB "resistance", especially in the pediatric age. Basu Roy et al. recently published on EBioMedicine a study on a cohort of Gambian pairs of children (n = 58) exposed to the same index case with different infection status (infected and uninfected). This study emphasizes the importance that a discordant infection status could be related to the unique characteristics of the individual to inhibit mycobacterial growth [2]. The selection process of included children and the elimination of possible confounding factors give strength to what can be considered as an important milestone surrounding this fundamental topic [2]. Using a mycobacterial growth inhibition assay, bacterial growth was evaluated at baseline and at 96 h and a quantitative analysis of IL-1a, IL-1b, IL-10, IFN-g and TNFa was performed [2].
The test used, autoluminescent BCG growth monitoring in whole blood, has been described by the same group of authors in a recent publication [3]. It permits, with a small amount of blood (225 ml), serial measurements to be obtained (after 1 h and at 24, 48, 72 and 96 h), with quantification of luminescence related to bacterial colony forming units (CFUs) [3]. While mycobacterial control was superior in uninfected children at one hour, suggesting a role for both adaptive and innate immune responses, children with mycobacterial infection showed superior control at 96 h, with a greater role of adaptive responses [2]. Regarding cytokine production, uninfected children produced less BCG-specific interferon gamma compared to infected children, mirroring the infection status [2]. Moreover, infected children were significantly older than uninfected children and had longer exposure to the smear-positive index case. Historically, the most important role in tuberculosis immunity has been attributed to the T-cell-mediated response, with CD4+ T cells playing a crucial role both in the control of infection and in tissue damage during TB infection [4,5]. Together with the adaptive immune response, innate immune cells are crucial for TB infection control [4,5]. Nevertheless, unanswered questions remain. In fact, as stated by the authors, complete knowledge of the immune response to TB is still lacking and, consequently, an effective vaccine remains distant. It is well known that the only available vaccine, Bacillus Calmette-Guérin (BCG), used for the first time in 1921, confers significant protection against TB meningitis and miliary tuberculosis, especially in children under 5 years of age, with a variable level of protection against pulmonary TB, ranging from 0 to 80% [6]. In addition, BCG scarring has been associated with lower morbidity and mortality compared with children without a scar [7,8]. Up to now, several new TB vaccines are in the pipeline, with incomplete results regarding safety and efficacy in the pediatric age [9]. Mechanisms required for mycobacterial killing are still under investigation and large studies in different populations are needed to help vaccine development. The intriguing results provided by this study, using a tool that helps to understand the in vivo interactions between the host and Mycobacterium spp., pave the way for new studies on TB vaccine development with direct applicability in clinical practice. Contributors CT and LG both conceived and wrote the manuscript. LG approved the final version. Declaration of Interests The authors have nothing to declare.
2020-08-27T09:09:05.879Z
2020-08-25T00:00:00.000
{ "year": 2020, "sha1": "d741e6bef99dffac83ae332c57ce587beb5e7454", "oa_license": "CCBYNCND", "oa_url": "http://www.thelancet.com/article/S2352396420303418/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "83b123f21f501483d1b0b9897dcd3ee8a71fb264", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
8420114
pes2o/s2orc
v3-fos-license
Assessment of Learning Style in a Sample of Saudi Medical Students CONFLICT OF INTEREST: NONE DECLARED Background By knowing the different students’ learning styles, teachers can plan their instruction carefully in ways that are capitalized on student preferences. The current research is done to determine specific learning styles of students. Method This cross sectional study was conducted in Al Ahsa College of Medicine from 2011 to 2012. A sample of 518 students completed a questionnaire based on Kolb inventory (LSI 2) to determine their learning style. A spreadsheet was prepared to compute all the information to get the cumulative scores of learning abilities and identify the learning styles. Results The mean values of the learning abilities; active experimentation (AE), reflective observation (RO), abstract conceptualizing (AC) or concrete experience (CE) for male students were 35, 28, 30 and 26 respectively while they were 31, 30, 31 and 29 respectively for female students. There were significant difference between male and female students regarding the mean values of AE-RO (6.7 vs 1.5) and AC-CE (4.1 vs 2.1). This indicated that the style of male students were more convergent and accommodating than those of female students. The female had more assimilating and divergent styles. Conclusion Learning style in Saudi medical students showed difference between males and females in the early college years. Most male students had convergent and accommodating learning styles, while the female dominant learning styles were divergent and assimilating. Planning and implementation of instruction need to consider these findings. introduction Learning styles refer to cognitive, affective, and physiological behaviors that perform as relatively stable indicators of how people perceive, interplay with, and respond to their environment in learning situations by recalling their stored information (1,2). The current learning style models in the literature represent the three layers onion metaphor consisting of; instructional preferences through which they perceive information (outermost layer), information processing (middle layer) and personality (innermost layer) (3). Many instruments were designed to measure different learning styles. One of the famous instruments concerned with the middle layer is the Kolb learning style model (4). In Kolb's model of experiential learning, learning involves a group of human activities including feeling, re-flecting, thinking, and doing, where the person is required to employ each of the four key learning abilities: concrete experience (CE), reflective observation (RO), abstract conceptualization (AC), and active experimentation (AE). Individuals develop specialized preferences for such activities and abilities; that are called learning styles (5). Any learning style is neither preferable nor inferior to another, but is simply different, with different characteristic strengths and weaknesses (6). Education in Saudi Arabia is, notably divided along the line of gender. The division is in line with the attitudes of the Saudi society which is based on the Islamic principles that prohibit intermingling between men and women (7). Our institute, Al Ahsa medical College, King Faisal University is composed of two separate departments; one for male and one for females to conform to the local cultural norm. The college started in 2001 with a traditional discipline based curriculum. 
During 2011-2012, the institution introduced the problem based learning (PBL) curriculum adopted from the University of Groningen, Netherlands. One of the most common concerns all over the world is the dissatisfaction of both the students and teachers regarding teaching and assessment. Multiple variables may affect this phenomenon. The current research was done to determine the student learning styles, and find if there was any difference between male and female students. Settings This study was conducted in Al Ahsa College of Medicine, King Faisal University, Saudi Arabia from original paper SuMMaRy Background: By knowing the different students' learning styles, teachers can plan their instruction carefully in ways that are capitalized on student preferences. The current research is done to determine specific learning styles of students. Method: This cross sectional study was conducted in al ahsa College of Medicine from 2011 to 2012. a sample of 518 students completed a questionnaire based on Kolb inventory (LSI 2) to determine their learning style. a spreadsheet was prepared to compute all the information to get the cumulative scores of learning abilities and identify the learning styles. Results: The mean values of the learning abilities; active experimentation (aE), reflective observation (RO), abstract conceptualizing (aC) or concrete experience (CE) for male students were 35, 28, 30 and 26 respectively while they were 31, 30, 31 and 29 respectively for female students. There were significant difference between male and female students regarding the mean values of aE-RO (6.7 vs 1.5) and aC-CE (4.1 vs 2.1). This indicated that the style of male students were more convergent and accommodating than those of female students. The female had more assimilating and divergent styles. Conclusion: Learning style in Saudi medical students showed difference between males and females in the early college years. Most male students had convergent and accommodating learning styles, while the female dominant learning styles were divergent and assimilating. Planning and implementation of instruction need to consider these findings. Subjects and Study design A cross sectional design was used in this study. The population was the all students enrolled in the College of Medicine. The sample included all the students who accepted to share and returned a filled survey form. It included a total of 518 respondents from different academic years in the College (307 males and 211 females). The instructions for completing the form were clarified, to avoid random and chance bias during filling. The male and female researchers agreed on standard steps of explanation, assurance, form distribution and collection. The form was self-scored by the students. After explaining the aims of the study and the methods of data collection, all students were asked to return the distributed forms anonymously with only denoting the academic year. The female researcher helped to assure the female students and guarantee the same degree of non-biased form filling. Instruments for determination of Learning Style Kolb learning style inventory (LSI 2) (8) was used to collect the initial answers and ranking of each participant. Calculations were done to reach to the actual learning style. Validity and reliability of the LSI was previously evaluated and proved (9,10,11). The LSI is composed of 12 questions with four options from A-D per question. 
Each respondent was requested to complete the 12 sentences by ranking the four choices by assigning 4 to the phrase that is most like him, 3 to the one that next describes him, 2 to the next, and finally, 1 to the ending that is least descriptive of him. Each of these choices, correspond to one of the four learning abilities in a random and non-uniform pattern. The LSI employs a forced-choice method by which to measure an individual learning orientation toward four learning abilities representing a repetitive four-step cyclical process: concrete experience (feeling) (CE), reflective observation (watching) (RO), abstract conceptualization (thinking) (AC), and active experimentation (doing) (AE) (8) . Calculations for determination of Learning Style An Excel © spreadsheet was prepared by the second author to compute all the information and identify the cumulative score of each learning style category. The least possible score is 12 and the highest possible score is 48. The greater the score of the learning ability, the more significant that the students prefer to learn through this ability. The mean values of these abilities were plotted per all group and academic years in males and females on the X axis representing the AE and RO, while the Y axis represented AC and CE abilities. Learning style was presented as a diamond graph. Furthermore, the scores were subtracted from one sum to the other in two dialectical or opposite abilities which describe a relative preferred way of learning. The value of AE-RO also shows how a person transforms and processes his learning experience with active experimentation abilities or reflective observation. The value of AC-CE represents how a person grasps learning experience either with abstract conceptualization or with concrete experience. The values of these subtractions were represented on a scale between +36 to -36. The + 36 or -36 comes from subtraction of 48-12 or 12-48. The mean AE-RO and AC-CE values were plotted on a scatter gram in relation to x and y axes respectively. Kolb put cut-off points of 5.9 for AE-RO and 4.3 for AC-CE as the LSI normative scores, at which X and Y axes cross. (5,8). A combination of two values of AE-RO and AC-CE determines which of four learning styles persons prefer to use. The four learning styles were represented as convergent (CON), Assimilating (ASM), Divergent (DIV) and Accommodating (ACM) Statistical analysis Computing the values of learning styles was done by data entry to Microsoft Excel © spreadsheet. (Microsoft Corporation, USA). The values of the 4 dimensions; AE (Active Experimentation or Doer), RO (Reflective Observation or Watcher), AC (Abstract Conceptualizing or Thinker) and CE (Concrete Experience or Feeler) for all students were estimated. The mean values of AE, RO, AC and CE per academic year and gender were calculated and graphed to present the learning style area. The values of AE-RO and AC-CE for all students were plotted at the X and Y axes respectively. Crossing at AE-RO (5.9) and AC-CE (4.3) was graphed to decide the four quadrants of learning ability; convergent, assimilating, divergent and Accommodating learning styles. These values were decided based on the different cut off scores of the norms as presented by Kolb. SPSS (SPSS Inc, Chicago, Illinois) was used to get the descriptive statistics on the AE-RO and AC-CE and to perform t-test to determine if the differences in the scores between the learning styles. Significance was considered at p < .05 in this study. 
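As an illustration of this scoring procedure, the sketch below computes the cumulative ability scores, the AE-RO and AC-CE combination scores, and the resulting style using the cut-off values quoted above. The item-to-ability key shown is a uniform placeholder only; the actual Kolb LSI 2 key assigns the four endings to the four abilities in a non-uniform, item-specific pattern and must be substituted for real scoring.

import numpy as np

ABILITIES = ["CE", "RO", "AC", "AE"]

# Placeholder scoring key: for each of the 12 LSI items, which ability each of the
# four ranked endings measures.  The real Kolb LSI 2 key is item-specific.
ITEM_KEY = [["CE", "RO", "AC", "AE"] for _ in range(12)]

def score_lsi(rankings, key=ITEM_KEY):
    # rankings: 12 x 4 ranks (4 = most like me ... 1 = least like me), columns
    # ordered to match the endings listed in `key` for each item.
    totals = {a: 0 for a in ABILITIES}
    for item_ranks, item_key in zip(rankings, key):
        for rank, ability in zip(item_ranks, item_key):
            totals[ability] += int(rank)         # each ability total falls between 12 and 48

    ac_ce = totals["AC"] - totals["CE"]          # grasping dimension
    ae_ro = totals["AE"] - totals["RO"]          # transforming dimension

    # Normative cut-offs quoted in the text: AC-CE = 4.3 and AE-RO = 5.9
    if ac_ce > 4.3:
        style = "Converging" if ae_ro > 5.9 else "Assimilating"
    else:
        style = "Accommodating" if ae_ro > 5.9 else "Diverging"
    return totals, ac_ce, ae_ro, style

# Example: a respondent who always ranks the "doing" (AE) ending highest
print(score_lsi(np.array([[1, 2, 3, 4]] * 12)))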
Table 1 presented the number and percentage of sharing students divided by the academic year and by gender. The total students sharing in this study represented 65% of total students. discussion To the knowledge of the authors, no such study was done in the Gulf region to assess a whole Medical College learning style. Medical colleges usually attract a group of the best ranked students from the scientifi c discipline of secondary schools. Students were enrolled in medical colleges according to the competitive ranking and interest. With the idea that medical students had the basic minimum scientifi c thinking suitable for medical study, the researchers aimed to assess their learning style as a part of the educational policy to determine the coping abilities of the students. A comparative study was done between male and was also done. Th e pattern for male students was similar in second, third, fourth and fi ft h year level which were similar to the overall College male students. Th e outcome was nearly central balanced pattern with nearly equal sharing from all quadrants, signifying balanced learning style. Female students' pattern was deviated towards the concrete experience or feeler styles in the second and third years. Th e study also showed deviation to the refl ective observer or refl ective pattern and it became nearly balanced and concentrated in the center of the graph during graduation. Considering that each individual has his own learning preferences, yet, this variation and change in female students cannot be clued to true diff erent styles in diff erent cohorts. As stated by Cuthbert, P. 2005, we cannot exclude the eff ect of learner past experience in aff ecting his response to this questionnaire items, hence aff ecting the results (12). Th e mean values of the 4 dimensions of learning abilities; AE, RO, AC or CE for all College students per gender were presented. Th ese values were estimated and plotted to produce different diamond shaped areas of learning preferences; the pattern for male students was nearly similar in all years and similar to the overall College. They had a nearly central balanced pattern. female students' pattern was deviated towards the concrete experience or feeler styles in second and third years. The graph showed deviation to the refl ective observer or refl ective pattern till it became nearly balanced and concentrated in the center of the graph at graduation. were 35, 28, 30 and 26 respectively for male students and they were 31, 30, 31 and 29 respectively for female students. Th e mean value of AE-RO and the AC-CE were 6.7 and 4.1 for male students and 1.5 and 2.1 for female students. Th e representations of male students in the convergent and accommodating quadrants were more than those of female students. Th e reverse was evident for the assimilating and divergent styles which were more dominant in female students. Th e learning styles in males were convergent (CON) [ [42]. Th e overall representation of learning styles in our sample was 31.3% convergent, 15% assimilating, 33.4% divergent and 20.3% accommodating. Hall (1976) proposed a cultural classifi cation of high-context and low context cultures, based on how in each individual identity rests on total communication frameworks. Arabic countries belong to high context cultures that are associated with the CE mode; therefore, their members tend to learn through feeling in proximate contexts. (13) Th is supports the fi nding in the female students during early college years. 
However, later on, all the students' male and female showed the balanced pattern without any deviation to the high or low context pattern. All the students, especially males are more exposed to diff erent Western educational and cultural views. Th e mean values of AE, RO, AC or CE were 35, 28, 30 and 26 respectively for male students and they were 31, 30, 31 and 29 respectively for female students. Female students had more tendencies towards RO and CE learning abilities. Th is can be supported with shame and guilt theory. Shame process is more associated with the CE or feeler abilities. Also, guilt understanding imposes the use of internal verbal expression with more tendencies to be present in individuals with the RO or watcher abilities (14). Th e fi ndings of our study showed that the majority of convergent and accommodating learning styles were seen in male students more than females [85 % and 60% respectively] while the majority of divergent and assimilating styles were seen in female students more [60 % and 56% respectively]. Th ese fi ndings were M f M f AE 35 28 33 29 36 31 39 34 32 33 35 31 33 Ro 29 31 30 30 27 29 27 29 29 29 28 30 29 CE 31 31 31 31 31 31 29 31 27 29 30 31 30 AC 25 31 26 31 26 29 25 26 29 28 26 29 27 AE-Ro 5.6 -2.9 3.6 -1. 3 supported by Markus and Kitayama (1991) who examined the self-construal across cultures and the interdependent-self and independent-self, patterns. (15). People with interdependent-self are likely to express their learning preference of the CE and RO abilities (Divergent Style). In contrast , the American and western European independent-self is seen as an entity that contains important characteristic attributes and as that which is separate from context. Th ey involve the two learning abilities of AC and AE (Convergent style) with reliance upon clear concepts and distinct logic in their minds. (16). Barmeyer . Th e two combination mean scores, AE-RO and AC-CE, for the overall sample and for each stream, indicated that all emphasized active experimentation over refl ection and abstractness over concreteness. (19). In this study, the males showed a learning style which is more similar to the independent-self Western group. Convergers combine abstract conceptualization with active experimentation. Th ey apply their knowledge to examine problems and arrive at solutions in a hypothetic-deductive manner. Th ey prefer practical application of ideas and work on technical problems. On the other hand, the female students expressed a more tendency to the interdependent-self pattern. Divergers are described as imaginative, emotional, people-oriented, and culturally interested and also view experiences from diff erent perspectives using divergent thinking. Despite the students are derived from the same culture, there was a diff erence in the dominant learning style between male and female students. Robinson (2007) supported cultural variability among groups and cited the advantages of learning style diff erences. Diff erences may vary within cultural groups as well as between them (20). On the contrary to these cultural diff erences, Zualkernan, (2006) studied participants from computer programming and engineering. He compared participants studying at an American Midwestern University in the United Arab Emirates, with students from an American background. Both groups responded to the Felder Solomen index of learning styles. Th e researchers showed no signifi cant diff erences in learning style between both groups. (21). 
Joy and Kolb (2009) aimed to examine the role that culture plays in the way individuals learn. The sample of 533 students from seven countries responded to the Kolb inventory. There was a significant interaction between culture and AC-CE. There was no significant interaction between culture and AE-RO (22). conclusions Learning style in Saudi medical students showed differences between males and females in the early college years. Most male students had convergent and accommodating learning styles, while the dominant learning styles in female students were divergent and assimilating. Planning and implementation of instruction need to consider these findings. It is not enough to develop an awareness of one's own learning style (for the student) and an awareness of the learning styles of a population of students (for the teacher); this awareness must be translated into a zone of comfort for learning and teaching strategies, respectively. Correlation with student performance in different parts of the curriculum will be our next research. Study Limitations This research was based on a sample from our College, including about 500 students. The study depended on a self-filled questionnaire. Confounding factors such as student feeling, mood and personal perspectives may affect the findings. The multiple academic levels of the students may affect the way they think and answer the questions of the learning style inventory, because the experience gained through College education may influence the answers.
2014-10-01T00:00:00.000Z
2013-06-01T00:00:00.000
{ "year": 2013, "sha1": "3fad3d6d013d4d66afe0e2054f72fea8e8382146", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc3766540?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "3fad3d6d013d4d66afe0e2054f72fea8e8382146", "s2fieldsofstudy": [ "Education", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
80345818
pes2o/s2orc
v3-fos-license
Pilot Study to Assess the Potential of New Moisturizing Agents for Oral Dryness 1Department of Oral Function and Rehabilitation, Nihon University School of Dentistry at Matsudo, Matsudo, Chiba 271-8587, Japan 2Faculty of Law, Seiwa University, Kisarazu, Chiba 292-8555, Japan 3Research & Development Headquarters, Research Laboratories, Earth Chemical Co., Ltd., Ako, Hyogo 678-0192, Japan 4Research & Development Headquarters, Development Department, Earth Chemical Co., Ltd., Ako, Hyogo 678-0192, Japan Introduction Saliva is known to play important roles in maintaining oral health through antibacterial activity, natural purification and mucosal protective effects. Nevertheless, complaints about oral dryness and xerostomia have become increasingly common in recent years, and these symptoms are frequently seen in the elderly (1)(2)(3). In addition, some studies have indicated that oral dryness is caused by decreases in saliva production, adversely affecting oral health and function (4)(5)(6)(7). Some methods used for xerostomia include administration of artificial saliva and the use of pharmacological agents to improve saliva secretion (8,9). However, results in those studies have indicated that either patients with xerostomia experienced little benefit, or the duration of administration was insufficient to remedy oral dryness. In recent years, several varieties of oral moisturizing gel to relieve oral dryness have gradually come into use (10,11). These moisturizing gels are easier to retain in the oral cavity than artificial saliva, and such gels can easily be used by a caregiver or by elderly individuals themselves. These oral moisturizing products have therefore been made available to nursing-care facilities and for at-home care. Some studies have shown that oral moisturizing gels for xerostomia offer evanescent advantages for xerostomia caused by preoperative radiotherapy or Sjögrenʼs syndrome (12)(13)(14). Moisturizing gels have various effects, supplying moisture within the oral cavity, wherein the moisturizing components retained in the oral cavity are used to relieve the symptoms of xerostomia. This type of product used to relieve symptoms of xerostomia was developed to moisturize the inside of the oral cavity. However, simply moisturizing the oral cavity is insufficient for xerostomia care. A comprehensive approach should aim to achieve the following three goals: 1)increase salivation to supply moisturizing effects(promoting saliva secretion); 2)maintain moisture in the oral cavity(moisturizing effect); and 3)clean the oral cavity(cleaning effect). However, at present no moisturizing gels simultaneously promote saliva secretion, or offer moisturizing and cleaning effects. We therefore aimed to develop a new mouthwash with these effects. The present study was undertaken to clarify the various effects of the components and their characteristics. New mouthwash The new mouthwash tested in this study was based on MONDAHMIN ® non-alcohol (Earth Chemical, Tokyo, Japan), with the addition of polyoxyethylene cetyl ether (CETETH-25), ethylene diamine tetra-acetic acid disodium (EDTA-2Na salt) and cetylpyridinium chloride (CPC). MONDAHMIN ® is well known to show high detergency and strong sterilizing properties (15), and is a popular, commercially available sterile cleaning agent in Japan. The new mouthwash also includes seaweed extract to promote saliva secretion and betaine for moisturizing effects. 
In addition, these materials were extracted from natural seaweed, but information on the density of these additives is confidential. Wako Pure Chemical Industries)at 37℃ and under 5% CO 2 . Cells were cultured in a 96-well plate for 2 days until confluence. After removing the medium, cells were washed using phosphate-buffered saline(PBS), and incubated for 15 min in 100 l of test solution contain moisturizing ingredients or PBS alone as a control. These solutions were aspirated, and cells were left to dry for 10 min at room temperature (temperature, 30℃; relative humidity, 45%). Next, 100 l of the above medium was added and 10 l of Cell Counting Kit- These analyses were performed using SPSS Statistics version 20 software(IBM Japan, Tokyo, Japan). Values of p < 0.05 were considered statistically significant. Promotion of saliva secretion Saliva secretion was significantly higher with the new mouthwash(5.62±2.07 g/2 min)than in the control group with purified water(4.88±1.97 g/2 min; p=0.0051)( Table 1). Discussion In order to achieve comprehensive treatment of xerostomia, this study developed the following three concepts for a new mouthwash: 1)promote saliva secretion to increase the wetting effect (saliva secretion effect); 2) maintain the moisturizing effect in the oral cavity(moisturizing effect); and 3) remove food residue from the oral cavity (cleaning effect). The experimental results showed that the new mouthwash benefits saliva secretion, moisturizing and cleaning, combining to offer a new type of mouthwash. Promotion of saliva secretion Due to the reduced amount of saliva and reduced saliva self-purification function, causing the emergence of various diseases, promotion of saliva secretion can be considered the ideal countermeasure to xerostomia. To improve saliva flow, drugs used to promote saliva secretion have contained cevimeline hydrochloride hydrate (19) and/or pilocarpine hydrochloride hydrate (20).Cevimeline hydrochloride works by directly stimulating muscarinic M3 receptors in the acinar cells of the salivary glands, thus stimulating the secretion of serous saliva (21). However, while the efficacy of cevimeline has been demonstrated, side effects such as nausea and vomiting have been reported (22). This drug treatment is thus not necessarily effective for everyone. We noticed that theʠumamiʡtaste component could promote saliva secretion in a form that can be simply and easily applied to anyone. We all know that taste and saliva secretion are associated. Reports have indicated that of the five basic tastes (sweet, bitter, sour, salty, and umami), umami is the best in promoting saliva secretion effects (23). Glutaminic acid, inosinic acid, and guanylic acid are taste ingredients that provide the umami taste. Sasano et al. also reported that palpitation, sweating, nausea, diarrhea and dizziness have all been observed in elderly patients taking parasympathomimetic drugs. To circumvent this problem, glutamate, which produces umami taste, was demonstrated to increase salivary secretion and thereby improve hypogeusia by enhancing the gustatory-salivary reflex (23). We found that makonbu, which has been used since ancient Japanese times to refine broth, contains very high levels of umami components. The components of various seaweed extracts can be used as food material, and in particular, the high salivation promotion effect of makonbuextract is well known in the Japanese food known as washoku. 
Our results showed that, when compared with water, the new mouthwash containing seaweed extract had marked effects in promoting saliva secretion. Moisturizing effects Glycerin and cellulose derivatives are often used to moisturize the inside of the oral cavity. These derivatives can be applied as liquids with sticky characteristics to increase the retention time in the oral cavity (24). Apart from such moisturizing effects, the aim should be to include components that protect and reduce irritation of the oral mucosa. We noticed that betaine (trimethylglycine), a natural substance derived from beet, has very good water retention characteristics. At the same time, this substance can protect the oral mucosa from irritation (25). However, betaine lacks the quality of stickiness, and is thus poorly remained in the oral cavity. To allow betaine to remain in the oral cavity, addition of sodium hyaluronate to the test solution improves retention and the moisturizing effect (26). In this study, we used a human oral squamous cell carcinoma cell line(Ca9-22), in accordance with the method of Mori et al. Ca9-22 reflects intraoral mucosa, as it was derived from human epithelial cells. We found that viable cell counts after drying were significantly better with the new mouthwash containing moisturizing ingredients (sodium hyaluronate, betaine)than without. Our preparation was thus formulated using betaine and sodium hyaluronate with the expectation of improved moisturizing and protective effects. Cleaning effects Due to the reduced saliva secretion in xerostomia, fluidity in the oral cavity is reduced, causing the inside of the oral cavity to become prone to the accumulation of food residue, peeling of the epithelium, and pro-inflammatory depositions (27). In the present study, the cleaning effects of the new mouthwash were evaluated using a simulation made of vegetable oils, lard, flour and other food sources. The resulting residue was not easily removed using water, but the new mouthwash showed much better cleaning effects, suggesting increased efficacy for cleaning the oral cavity. The new mouthwash was based on MONDAHMIN ® , which is currently on the market in Japan, with the addition of polyoxyethylene cetyl ether (CETETH-25), ethylene diamine tetra-acetic acid disodium(EDTA-2Na salt)and cetylpyridinium chloride. The active effect of CETETH-25 is to remove oils from the oral cavity (28), while the EDTA-2Na salt acts to enhance the cleaning effect. Moreover, cetylpyridinium chloride has both a sterilization effect and a decay prevention effect (29). In combination, this provides disinfection and antisepsis (30). The new mouthwash also included seaweed extract to promote saliva secretion and betaine for moisturizing effects. Table 3 shows that the detergency of the new mouthwash was comparable to that of the currently marketed mouthwash. None of these ingredients have problems in terms of safety or adverse effects. From the above results, our cleaning test results appear mainly due to the CETETH-25, with EDTA-2Na providing supporting effects. Overall effects The results indicate that this new mouthwash provides very good saliva secretion, moisturizing and cleaning effects. Unlike the low-level, traditional oral function moisturizers focused on improvingʠwashingʡandʠmoisturizingʡ , this mouthwash facilitates recovery of the original oral function through the promotion of saliva secretion. This represents a major characteristic of the new mouthwash. 
If symptoms can be alleviated in the elderly and individuals prone to xerostomia, this mouthwash will provide a great benefit in maintaining or improving the oral environment. One of the limitations of this study is that we were unable to use various controls. In the future, actual use of the new mouthwash on patients with xerostomia, young individuals and the elderly will be investigated in order to validate the present findings. Conclusion The components of this new mouthwash promote saliva secretion, and have moisturizing and cleaning effects, suggesting its promise as an effective mouthwash.
2019-03-17T13:10:56.964Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "497b7f8082846124db08c3404734adfaae59dbd1", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/ijoms/16/2/16_25/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "e6143866c08960090ad4a4874bf668ec74a90d46", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Medicine" ] }
252677270
pes2o/s2orc
v3-fos-license
Calcific Enthesopathy of the Superior Extensor Retinaculum – An Unusual Cause of Medial Ankle Pain Abstract Aim of the study Ankle pain can present a clinical dilemma to the foot and ankle surgeons, with a multitude of entities to which the symptoms could potentially be attributed. Enthesopathy around the ankle joint could be due to overuse, injury, inflammation or infection. Calcific ligamentous enthesopathy around the ankle is a well-recognised condition with a spectrum of causes. Case description To our knowledge, a clinically symptomatic presentation of calcific enthesopathy specifically affecting the entheses of the superior extensor retinaculum has not been described in the literature. We report the first case of symptomatic calcific enthesopathy of the superior extensor retinaculum in a healthy young female, and highlight the role of radiological interventions in its diagnosis. The condition was managed successfully by ultrasound-guided barbotage. Conclusions Calcific enthesopathy of the attachment of the superior extensor retinaculum is a rare condition that should be considered in the differential diagnosis of patients with medial ankle pain. Introduction The region around the ankle joint is an intricate area stabilised by static osseous and dynamic structures. It consists of ligament complexes, traversing tendons, and important neurovascular structures. Consequently, pathologies arising from all these structures can pose a diagnostic dilemma for a clinician in patients presenting with ankle pain. A thorough understanding of the anatomy and pathological entities pertaining to the area is key to reaching a definitive diagnosis and, consequently, ensuring appropriate patient management. The movement of the ankle is dependent upon the contraction of the flexor, extensor, and peroneal groups of muscles (1) . The retinacula of the ankle are regions of localised thickening of the investing deep fascia. They hold the coursing tendons of the leg and foot at the ankle to ensure efficient functioning and prevent bowstringing. These consist of the extensor retinaculum (superior and inferior), the peroneal retinaculum, and the flexor retinaculum (2) . The extensor retinaculum is derived from the superficial crural aponeurosis of the leg and can be divided into the transverse rectangular superior and the more complex X-or Y-shaped inferior extensor retinaculum (1,3,4) . The superior (proximal) extensor retinaculum (SER) attaches at the anterior tibial crest and the medial malleolus medially and the anterior border of the fibula and the lateral malleolus laterally. This is approximately 3 cm proximal to the tibiotalar joint (5) . Its deep relations include the following extensor tendons (from medial to lateral): tibialis anterior, extensor hallucis longus, extensor digitorum longus, the dorsalis pedis vessels, the deep peroneal nerve, and the peroneus tertius. The tibialis anterior tendon may e237 J Ultrason 2022; 22: e236-e239 Calcific enthesopathy of the superior extensor retinaculum -an unusual cause of medial ankle pain run in a separate tunnel formed from the superficial and deep fibres in 25-29% of cases (4,1) . Ankle pathologies presenting as ankle pain involve a spectrum of acute and/or chronic conditions. Ankle injuries involving soft tissue ligament and/or osseous elements are well described in the literature. 
Symptomatic presentations of ligament calcifications have been reported at the lateral collateral (6) and medial collateral ligaments of the knee (7) , the ulnar collateral ligament of the elbow (8) and the spine (9,10) . To date, calcifications of the SER itself have not been described in the literature. We present a rare case of calcification of the SER in a 51-year-old female that was managed successfully by ultrasound-guided (USS) barbotage. Case report A 51-year-old , fit and otherwise healthy, with no co-morbidities, presented with an insidious-onset ankle pain localised to the anterior and medial aspects of the left ankle joint, persisting for over a year. This was associated with the development of a firm swelling over the medial aspect of the ankle. The swelling itself did not prevent the patient from walking or activities of daily living. However, it was painful on pressure, and she was concerned enough to seek medical advice. There was no history of specific injury or trauma that preceded the onset of symptoms. On clinical examination, there was no erythema or redness around the left ankle joint. There was moderate focal tenderness at the anteromedial aspect of the tibia, corresponding with the attachment of the SER. The ankle revealed a full range of motion, with no features of instability. The patient was able to stand on tiptoes and perform a single heel rise test. Radiographs of the left ankle revealed no osseous abnormality. She underwent a magnetic resonance imaging (MRI) scan of the ankle, which demonstrated a 10 mm hypointense homogenous lesion on T1-weighted sequences, representing calcification at the attachment of the SER at the medial malleolus (Fig. 1) and the medial attachment of the SER. There was low signal on fluid-sensitive sequences, with mild perilesional oedema. No osseous oedema of the medial malleolus was noted. The tibiotalar joint and other joints were unremarkable. The tibialis posterior, the flexor digitorum longus, the flexor hallucis longus, and the peroneal and anterior tendon complexes were intact. The medial and lateral ligament complexes, the plantar fascia and the Achilles tendon, as well as the sinus tarsi, were normal. Discussion The pathogenesis of calcification of ligaments and tendons is often unclear. Rotator cuff calcific tendinopathy is a very common condition and hence calcific tendinopathy is thought to be a cell-mediated disease in which tenocytes transform into chondrocytes and induce calcification within the tendons (5) . However, it can also be argued that it is a degenerative process involving tendon fibres which undergo necrotic changes progressing to calcification. On the other hand, this hypothesis can be challenged by the fact that some calcification can resolve spontaneously with restoration of the gross tendon morphology (5) . Nevertheless, it is well understood that calcification occurring due to the deposition of calcium hydroxyapatite crystals at the attachment sites of tendons and ligaments can cause pain and disability, especially if unrecognised. Calcific tendinitis is rare in the foot and ankle. Only a few case reports have been published in the literature to address medial ankle and foot pain from calcific tendinitis of the posterior tibialis tendon at the navicular insertion (11) . Symptomatic ligament calcification is even more uncommon in this anatomical location. Our literature search through the PubMed database revealed no reported cases of calcification/mineralisation of the SER. 
Thickening and scarring of the ankle retinacula have been described in asymptomatic football players, with the probable mechanism involving repeated sub-maximal stress on normal tissue inducing local tissue inflammation and scar tissue formation in the long term (1,12,13). Ding et al. (14) reported a retrospective review of seven cases of traumatic avulsion of the SER with subperiosteal haematoma, seen as a hypoechoic lenticular structure responsible for elevation of the hyperechoic periosteum at the fibular insertion of the SER. The SER itself was thickened and hypoechoic, without mineralisation. Our patient was a healthy middle-aged female and, even though she had a known pathology, it was present at an unusual site. We hypothesise that calcification of the SER could be idiopathic or post-traumatic due to a missed ankle sprain. Our case is unique in that not only is it the first reported case of calcification affecting the medial attachment of the SER, but also USS-guided barbotage allowed successful management of the patient's symptoms. This case report highlights the need for a high index of suspicion about unusual causes of medial ankle pain and confirms the versatility of USS as a therapeutic modality in patient management. Conclusion Calcific enthesopathy of the medial attachment of the superior extensor retinaculum is a rare condition and should be considered in the differential diagnosis of patients presenting with medial ankle pain. Funding of the study No funding to declare. Conflict of interest The authors do not report any financial or personal connections with other persons or organizations which might negatively affect the contents of this publication and/or claim authorship rights to this publication.
2022-10-03T15:09:34.647Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "fda1609057effdb199dbc4fca635e8e629e49844", "oa_license": "CCBYNCND", "oa_url": "http://www.jultrason.pl/artykul.php?a=1088", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "10399f130845a8ab448e4d4c588fd378b58c0110", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
49545932
pes2o/s2orc
v3-fos-license
Endoscopic assisted electro-cauterization to treat an acquired pharyngeal ostium stenosis in a horse : a case report An 8-year-old Italian saddle-horse gelding with a history of left guttural pouch empyema was referred to the clinic. Endoscopic examination showed a stenosis of the left pharingeal ostium that was treated with an endoscopic assisted electro-cauterization. Endoscopic follow-ups were performed before discharge and at six months after stenosis removal. No recurrence was observed, confirming the patency of the pharyngeal orifice. Guttural pouches are air-filled pharyngo-timpanic or auditory tube diverticula that, in the adult horse, have a volume varying from 300 to 600 milliliters (Munoz et al., 2008).They are located in the caudal area of the head and extend ventro-dorsally from the pharynx to the base of the skull and oro-aborally from the dorsal pharyngeal recess to the atlanto-occipital joint.Moreover, they appear divided by the stylohyoid bone into two compartments: lateral and medial.Each pouch is separated from the contralateral one by a median septum and, dorsally, by the rectus capitis and longus capitis muscles.On the lateral side, the guttural pouch confines the pterigo-pharingeus, levator and tensor veli palatini muscles, stylohyoideus and occipitohyoideus muscles, the ventral belly of the digastricus muscle, as well as the parotid and mandibular salivary glands (Getty, 1982;Hardy and Leveille, 2003).The guttural pouch is lined with a pseudostratified ciliated epithelium containing goblet cells.The main arteries (internal carotid, external carotid and maxillary arteries) and nerves (IX, X, XI and XII cranial nerves, sympathetic nerves and cranial cervical ganglion) are located inside each guttural pouch, under the mucosa fold, and are easily visualized with an endoscope (Getty, 1982;Freeman, 1999).Moreover, retropharyngeal lymph nodes and the recurrent laryngeal nerve are clearly visible in the ventral side of the medial compartment, under the mucosa (Getty, 1982).The dorsal portion is continuous with the pharynx through the pharyngeal orifices (pharyngeal ostium) of the auditory tubes.These openings are equal in size and are covered by the medial cartilage lamina, placed dorso-laterally to the rynopharynx.Exudate and biological materials that may accumulate in the pouch are removed daily by a dynamic mechanism of mucociliar clearance and through the uplift of the pouch base to the pharyngeal orifice during the swallowing phase. Guttural pouches are particularly developed in horses and donkeys, but are also present in other animals like tapirs, some rhinoceros (except for white rhinoceros), certain bats, the South American forest mouse and hyraxes (Getty, 1982;Alsafy et al., 2008).The function of the guttural pouch is not yet completely understood but different suggestions have been made, including a role in equilibration across the tympanic membrane, in inspirated air warming, as a resonance chamber for equine whinny, and as a device for head flotation (Briggs, 2000).Recently, Baptiste et al. 
(2000) suggested that the function of the guttural pouches may be connected with cooling the blood that supplies the brain through the internal and external carotid arteries. This mechanism would provide heat exchange between the blood flowing through the arteries located under the guttural mucosa and the air of the pouch, which, during physical activity, is refreshed more quickly because of the orifice openings (Baptiste et al., 2000). However, a more recent study by Mitchell et al. (2006) rejected this hypothesis on the basis of direct measurement of extracranial and intracranial blood temperature. To achieve this, dedicated probes were positioned in the common carotid artery, in the jugular vein and close to the hypothalamus (Mitchell et al., 2006). No temperature decrement between the intracranial and extracranial blood was observed during physical activity or at rest (Mitchell et al., 2006).
The incidence of guttural pouch diseases in the horse, such as empyema, tympany and mycosis, is rather low (Carmalt, 2002; Schaaf et al., 2006), and the literature consists of several articles reporting a small number of cases (Freeman, 2006; Perkins et al., 2006; Schambourg et al., 2006) rather than retrospective studies with large numbers of cases (Judy et al., 1999). Inflammatory processes of the guttural pouch can result in empyema, which is sometimes complicated by chondroids and exudate conglomerates. Moreover, chronic guttural pouch empyema may occasionally involve stenosis or complete closure of the pharyngeal ostium (Gehlen and Ohnesorge, 2005; Perkins et al., 2006).
We describe a case of chronic purulent guttural pouch inflammation with stenosis of the pharyngeal ostium treated with endoscopic-assisted electro-cauterization to remove the stenotic tissue.
Case description
An 8-year-old Italian saddle-horse gelding was referred to the University Clinic with a history of about 45 days of unilateral left nasal discharge and a swelling in the left retromandibular region. One month before our visit, the referring veterinarian had submitted the horse to daily flushings with physiological solution administered through endoscopic-assisted catheterization of the left guttural pouch. Following poor clinical results, 15 days before our visit, the veterinarian performed surgical drainage by creating a permanent fistula approaching the guttural pouch over Viborg's triangle. After fistulation, the owner observed a mild reduction in the nasal discharge, with a persistent swelling of the left retromandibular region and purulent material flowing out of the fistula. During this period the horse was excluded from competitive activity and was put out to rest in the paddock.
The owner observed a slight decrease in weight, but no respiratory noises, laryngeal or pharyngeal dysfunction, or dysphagia. On the day of our visit, the horse showed a good general condition with no nasal discharge, hyperaemic conjunctival and oral mucosae and, in the retromandibular region, a fistula with thickened edges. Emission of a purulent exudate from the fistula was also observed. Around the fistula the skin was hairless, thickened, hyperaemic and covered with a conglomerate of purulent exudate. Standard preoperative blood parameters were within the normal range.
An x-ray examination of the retromandibular area was performed with the horse standing, in lateral view (75 kV, 12 mAs). A thickening of the left guttural pouch wall and imperfect superimposition of the two guttural pouches were visualized. No areas of calcification were observed within the pouch.
An endoscopic examination in the standing position was performed with a Pentax Eg 290-kP flexible endoscope (9.8 mm diameter) and a Pentax EG-1870 (6 mm diameter), with the patient sedated with detomidine (Domosedan, 0.01 mg/kg i.v.) and butorphanol (Dolorex, 0.05 mg/kg i.v.). The examination showed mild hyperaemia of the rhinopharyngeal mucosa with a reduced dorsal pharyngeal recess. No abnormalities were visualized in the larynx with the horse at rest or after the "slap test". Normal pharyngeal morphology was observed and no exudate was visualized close to the pharyngeal ostium or on the pharyngeal wall and lumen. Examination of the right guttural pouch was performed through the right nostril. The pouch appeared within normal limits, with no abnormalities. Examination of the left guttural pouch was performed by introducing the endoscope through the left nostril; endoscope progression was possible for approximately one centimetre, up to the cartilaginous portion of the left guttural pouch opening; beyond that, a ductal stenosis was visualized, consisting of whitish fibrotic tissue and granulation tissue lying in a transverse position (Figure 1). An opening of about 2 mm was visualized on the ventral side of the stenotic area of the left guttural pouch.
Following the endoscopic inspection, a pharyngeal ostium stenosis of the left auditory tube was diagnosed. We concluded that the small opening would probably permit the introduction of a 2 mm catheter under endoscopic guidance (Figure 2). Removal of the stenotic tissue was recommended.
The stenosis was removed under endoscopic guidance by electro-cauterization (EXCELL 250 MCDS, Alsa s.r.l., Italy) of the stenotic tissue. The horse was in the standing position under sedation with detomidine and butorphanol, as described above. The aim of this procedure was to reconstruct the stenotic tube by plastic surgery. When a diameter of 1 cm was achieved, approximately 2 h after the beginning of the procedure, the Pentax EG-1870 endoscope was introduced into the left guttural pouch (Figure 3). An abundant grayish exudate was observed in the medial portion of this pouch. It was aspirated with a catheter introduced alongside the endoscope and submitted to the laboratory for bacterial culture. The pouch-emptying manoeuvre excluded the presence of chondroids, confirming the x-ray results, and revealed marked hyperaemia of the mucosa with some erosive lesions. Subsequently, a Foley catheter (30 F, 180 cm long) was introduced through the ventral meatus of the left nasal cavity under endoscopic guidance. It was advanced until it reached the pharynx and was positioned in front of the left pharyngeal ostium. Opening of the guttural pouch orifice was achieved by introducing a catheter into the instrument channel of the endoscope and simultaneously rotating the flexible endoscope, which resulted in abduction of the fibrocartilaginous ostium and allowed the introduction of the Foley catheter into the guttural pouch (Figure 4). The correct positioning of the Foley catheter inside the guttural pouch and the distension of the catheter balloon with 10 millilitres of physiological solution were checked with the Pentax EG-1870 endoscope introduced into the pouch through the fistula at Viborg's triangle. The external extremity of the Foley catheter was fixed to the lateral margin of the left nostril with a finger-trap suture. Culture of the exudate from the guttural pouch resulted in the isolation of numerous colonies of Streptococcus equi ssp. equi. Postoperative treatment
included antibiotics (penicillin G procaine, 22 000 IU/kg i.m., q 12 h, and gentamicin sulfate, 3 mg/kg i.m., q 12 h) for seven days and anti-inflammatory drugs (flunixin meglumine, 1 mg/kg i.v., and dexamethasone, 0.04 mg/kg i.m.) for three days. For the first three days, daily flushings were performed with physiological solution through the fistula at Viborg's triangle, and these were repeated through the Foley catheter for the remaining 12 days. Spontaneous closure of the fistula at Viborg's triangle was observed during the postoperative treatment. Moreover, a good clinical condition and no signs of dysphagia or respiratory noises were recorded. On the fifteenth day after resolution of the stenosis, the Foley catheter was deflated and removed, and the horse was discharged and maintained at rest for a month, with subsequent progressive training. Endoscopic follow-ups were performed before discharge and at six months after stenosis removal. They excluded any recurrence and confirmed the patency of the pharyngeal orifice and auditory tube.
DISCUSSION AND CONCLUSIONS
Empyema is the most common disease affecting the equine guttural pouches. It leads to a purulent collection due to the multiplication of pathogenic bacteria such as Streptococcus spp., for which the anatomical structure of the guttural pouches provides a perfect reservoir for months or even years, even after the apparent resolution of clinical signs (Carmalt, 2002). Other factors can also lead to the development of this disease, such as drugs, trauma involving the stylohyoid bone, and congenital or acquired stenosis of the pharyngeal orifice (Freeman, 1980). A final diagnosis can be achieved with a correct physical examination and diagnostic imaging techniques (x-ray and endoscopy). Studies performed on a large number of clinical cases to compare x-ray and endoscopic examinations have not observed any significant difference between the results obtained with the two techniques (Judy et al., 1999); however, endoscopy is preferred over radiology because it allows direct visualization of the guttural pouch lumen and mucosa as well as of pathological internal conditions (exudate or lesions). The endoscopic approach to the guttural pouches is easy to perform, but it may be obstructed when stenotic or fibrotic processes have occurred in the pharyngeal orifice. Stenosis of the pharyngeal orifice has been described in a few papers (Gehlen and Ohnesorge, 2005; Perkins et al., 2006). It is not a common condition, but it may complicate the treatment of empyema and must be resolved in order to achieve complete recovery. In our clinical case it is possible that the stenosis occurred due to chronic inflammatory processes in the guttural pouch and repeated tissue trauma following several endoscopically assisted catheterizations. To avoid this pathological condition, Perkins and Schumacher (2007) suggested, as an elective treatment for guttural pouch empyema, the introduction and fixation of a permanent catheter (Chambers catheter or Foley catheter) for the purpose of performing repeated flushings.
The selection of the correct treatment for guttural pouch empyema also requires evaluating the presence or absence of chondroids, or of a concomitant stenosis that may interfere with pouch drainage. The elective treatment of empyema that is not complicated by chondroids consists of repeated flushings of the guttural pouch with balanced polyionic solutions combined with antimicrobials or antiseptics (Perkins and Schumacher, 2007). Systemic antimicrobial therapy can be administered, but the antibiotic effect inside the guttural pouches may be reduced by the presence of copious amounts of purulent exudate (Perkins and Schumacher, 2007). A rare complication of repeated flushings of guttural pouches containing chondroids is pouch rupture, as previously described by Fogle et al. (2007). However, when conservative treatment does not resolve the condition, removal of the stenotic tissue is required. The endoscopic-assisted electro-cauterization used here aims at restoring the normal anatomical conformation of the pharyngeal ostium and its physiological function, and it avoids complications due to tube fistulation that could occur when the cartilaginous portion of the ostium is removed. Moreover, this technique appears easy to perform and is well tolerated by the horse both during the procedure and in the post-operative period, although it requires fixation of a Foley catheter to the nostril for fifteen days. The endoscopic procedure does not require general anaesthesia, avoiding the complications often described for the other techniques (Gehlen and Ohnesorge, 2005), and it provides an easier and safer approach to the lesion compared with traditional methods. It is the authors' opinion that the good outcome observed in this clinical case was also due to the short period which had elapsed between the development of the stenosis and its diagnosis and treatment. For this reason, it is emphasized that horses with empyema of the guttural pouches should be examined frequently, in order to treat any stenosis at its onset.
Figure 2. Indirect evaluation of the stenotic area with a 2 mm catheter introduced under endoscopic guidance.
Query Completion Using Bandits for Engines Aggregation Assisting users by suggesting completed queries as they type is a common feature of search systems known as query auto-completion. A query auto-completion engine may use prior signals and available information (e.g., user is anonymous, user has a history, user visited the site before the search or not, etc.) in order to improve its recommendations. There are many possible strategies for query auto-completion and a challenge is to design one optimal engine that considers and uses all available information. When different strategies are used to produce the suggestions, it becomes hard to rank these heterogeneous suggestions. An alternative strategy could be to aggregate several engines in order to enhance the diversity of recommendations by combining the capacity of each engine to digest available information differently, while keeping the simplicity of each engine. The main objective of this research is therefore to find such mixture of query completion engines that would beat any engine taken alone. We tackle this problem under the bandits setting and evaluate four strategies to overcome this challenge. Experiments conducted on three real datasets show that a mixture of engines can outperform a single engine. Introduction A common feature in search systems is to assist users in formulating their queries by suggesting completed queries as they type. This is known as query auto-completion (QAC). The typical QAC problem consists in providing a user with the top-K completion suggestions taken from a set of possible suggestions, given a user-provided query prefix and using prior signals for ranking completion suggestions [11]. For example, a QAC engine could generate for the user input "que" the suggestions 1) "query", 2) "question", and 3) "query results". It is an important feature that provides many advantages: users can write queries faster, write more precise and complete queries, use the right vocabulary, avoid typos, and execute queries that have proven to be successful in the past. Moreover, it has the side effect of standardizing the queries, which helps an adaptive search system learning the best documents to return for each query. Much work has been done in order to design good query completion engines that consider contextual information (e.g. [1,3,8,9,11]), which might include the status of the user (anonymous or logged in), user history, Web pages visited prior to the search, and much more. There are many possible strategies for QAC and a challenge is to design an engine that uses all available information in order to recommend diverse relevant suggestions given all this knowledge. Inspired by resource aggregation techniques [7], a strategy could be to aggregate several engines instead of aiming for one optimal engine. More specifically, each position of the suggestions list could be assigned to an engine and filled with a suggestion provided by this engine. This could enhance the diversity of suggestions by combining the strengths of different engines, each using the contextual information in its own specific way. The main objective of this research is thus to find a mixture of QAC engines that would beat any engine taken alone trying to consider all information at once. Constraint is that the learning process must be performed online, that is without an a priori learning phase before deployment. To achieve these objectives, we propose bandits-based techniques adapted from previous work. 
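As a toy illustration of the kind of single-strategy engine discussed above, the following minimal Python sketch ranks previously seen queries matching the typed prefix by frequency. The query log, class name and suggestions are invented for illustration and are not part of the system described in this paper.

    from collections import Counter

    class PrefixPopularityEngine:
        """Toy QAC engine: rank past queries that start with the prefix by frequency."""
        def __init__(self, query_log):
            self.counts = Counter(query_log)

        def suggest(self, prefix, k=3):
            matches = [(q, n) for q, n in self.counts.items() if q.startswith(prefix)]
            matches.sort(key=lambda qn: -qn[1])   # most frequent completions first
            return [q for q, _ in matches[:k]]

    # Hypothetical log; a production engine would also use user history, context, etc.
    engine = PrefixPopularityEngine(["query", "query", "question", "query results", "quota"])
    print(engine.suggest("que"))   # -> ['query', 'question', 'query results']

A real deployment would combine several such engines, each digesting different contextual signals, which is exactly the aggregation problem addressed below.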
Related Works
Bandits-based techniques have previously been considered to tackle the query suggestion problem, where the goal is to suggest additional queries to a user given their past queries [4]. Bandits algorithms in this setting were used to learn a mapping from each query to the top-K most relevant other queries. This would lead to a very large model, that is, one mapping per possible query, and it would not necessarily be useful since many queries might occur a single time in history. Therefore, it was limited to the most frequent queries, which made sense for the query suggestion problem where full query terms are considered. However, it was found to be limiting in the QAC problem, where the most frequent queries are short sub-queries that are very common among multiple query terms. Bandits-based techniques have also been considered for the recommendation problem, where the goal is to answer the search query of a user with a list of several items. Bandits algorithms in this setting were used to learn a mapping from each query to the top-K most relevant links or documents. Previous research mainly addressed the issue of redistributing feedback given the position of click occurrence(s). The same kinds of questions arise for the QAC problem, and the models in the approach proposed below are based on techniques from this field.
QAC as Mixture of Engines
We tackle the QAC problem using a mixture of engines (QAC-ME), which we formalize as follows. Let E denote a set of QAC engines. At each time step t, a list of M auto-completion suggestions is displayed to the user according to the current user-provided query prefix p_t. Let a good suggestion denote a suggestion that would please the user. The user satisfaction toward a suggestion can be measured through user clicks. Let c_t ∈ {1, . . . , M + 1} denote the position of the suggestion that is clicked by the user, if any; otherwise c_t = M + 1. The goal is to maximize the number of user clicks over time. Let S_{e,t} denote the set of suggestions provided by engine e at time t using p_t and possibly other contextual information. Items in S_{e,t} are ordered by relevance such that the top-K items correspond to the first K items in the set. We want to assign an engine to each position of the QAC list such that this engine is in charge of providing the suggestion displayed in this position. Let e_{m,t} ∈ E denote the engine designated to fill position m and let q_{m,t} ∈ S_{e_{m,t},t} denote the suggestion assigned to position m. Duplicate suggestions are forbidden, meaning that q_{m,t} is the most relevant suggestion from e_{m,t} such that q_{m,t} ≠ q_{i,t} for i ≤ m − 1. The goal is to design an algorithm that selects the engine to use at each position in order to maximize the probability that the user clicks on any suggestion from the list, that is, the probability that c_t ≠ M + 1. Note that it has been observed that the probability of getting a click on an item decays with the rank of the item in a list [2]; for example, a good suggestion in position 1 has a higher click probability than the same suggestion in position 3.
Approaches
The ranked model (Alg. 1), based on the ranked bandits algorithm [6] for query recommendation, handles each position as an independent bandits problem, instantiating one bandits algorithm ϕ_m for each position m. The ranked model does not share information from feedback gathered on the same engine placed at different positions.
In contrast, the cascade model (Alg. 2), based on the cascade bandits algorithm [5] for query recommendation, uses one single bandits algorithm ϕ for the whole setting.

Algorithm 1 Ranked Bandits for QAC-ME
 1: initialize ϕ_1(E), . . . , ϕ_M(E)
 2: for all episodes t do
 3:   receive prefix p_t from user
 4:   for m = 1, . . . , M do
 5:     e_{m,t} ← select(ϕ_m)
 6:     repeat
 7:       q_{m,t} ← suggestion of engine e_{m,t} for prefix p_t
 8:     until q_{m,t} ∉ {q_{1,t}, . . . , q_{m−1,t}}
 9:   end for
10:   display {q_{1,t}, . . . , q_{M,t}} to user and get click index c_t
11:   update ϕ_{c_t} with outcome 1 for action e_{c_t,t}
12:   update ϕ_m with outcome 0 for action e_{m,t}, ∀m ≠ c_t
13: end for

Algorithm 2 Cascade Bandits for QAC-ME
 1: initialize ϕ(E)
 2: for all episodes t do
 3:   receive prefix p_t from user
 4:   for m = 1, . . . , M do
 5:     e_{m,t} ← select(ϕ)
 6:     repeat
 7:       q_{m,t} ← suggestion of engine e_{m,t} for prefix p_t
 8:     until q_{m,t} ∉ {q_{1,t}, . . . , q_{m−1,t}}
 9:   end for
10:   display {q_{1,t}, . . . , q_{M,t}} to user and get click index c_t
11:   update ϕ with outcome 1 for action e_{c_t,t}
12:   update ϕ with outcome 0 for action e_{m,t}, ∀m < c_t
13: end for

Algorithm 3 Explicit Ranked Bandits for QAC-ME
 ...
 9:   update ϕ with outcome 1 for action (e_{c_t,t}, i_{c_t,t})
10:   update ϕ with outcome 0 for action (e_{m,t}, i_{m,t}), ∀m < c_t
11: end for

The cascade model merges all information obtained for a given engine regardless of the location of the engine when feedback was gathered. This should speed up the learning process, but it also assumes independence between engine performance and engine location in the list, which might not be true in practice. Note that neither ϕ_m (ranked) nor ϕ (cascade) considers which engines are assigned to positions 1 to m − 1, or which suggestions are placed in these positions, when selecting e_{m,t}. Let the rank of the suggestion for engine e denote its index in S_{e,t}. Obviously, if engine e is asked to fill position m (Algs. 1 and 2, line 7), it will recommend its most relevant suggestion, that is, the first suggestion in S_{e,t}, or rank 1. However, because showing duplicate suggestions to the user is forbidden, engine e is asked for its next suggestion until a new, unique suggestion is provided (Algs. 1 and 2, line 8). Given that M positions must be filled, engines might be forced to recommend up to their M-th best suggestion. An easy example is when the same engine is assigned to fill all M positions: it will obviously recommend its top-M suggestions. It is natural to assume that the probability of showing a good suggestion for an engine may vary given the rank of the suggestion that is actually shown. We address this concern by making the rank of each suggestion placed by a given engine explicit. Let j_{e,t}(m) denote the rank of the most relevant recommendation q from engine e such that q ≠ q_{i,t} for i ≤ m − 1. Algs. 3 and 4 respectively extend Algs. 1 and 2 to the explicit suggestion rank setting. Instead of learning a general outcome distribution per engine, the refined learning process aims at learning one outcome distribution for each suggestion rank per engine. Notice that even though the explicit cascade model still has only one single bandits algorithm that manages all positions, its set of available actions differs from one position to another. These explicit variants might converge slower than their original, non-explicit counterparts because they share less information. However, even though observation gathering takes more time, we would expect these explicit variants to be more robust to suggestions skipped when avoiding duplicates and to high performance variance across suggestion ranks in engines.
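To make the selection-and-update loop concrete, here is a minimal Python sketch of the cascade model (Alg. 2) driven by a Beta-Bernoulli Thompson sampling bandit over engines. It is an illustration only: the BetaTS class, the cascade_episode function, the engines dictionary and the click-feedback callback are assumptions of this sketch, not components of the deployed system described in this paper.

    import random

    class BetaTS:
        """Beta-Bernoulli Thompson sampling over a finite set of actions (here, engines)."""
        def __init__(self, actions):
            self.ab = {a: [1.0, 1.0] for a in actions}   # Beta(1, 1) priors

        def select(self):
            # Sample a click probability for each engine and play the argmax.
            return max(self.ab, key=lambda a: random.betavariate(*self.ab[a]))

        def update(self, action, reward):
            self.ab[action][0] += reward                 # alpha accumulates clicks
            self.ab[action][1] += 1.0 - reward           # beta accumulates non-clicks

    def cascade_episode(bandit, engines, prefix, M, get_click):
        """One episode of Alg. 2: fill M positions, observe the click index, update the bandit."""
        chosen, shown = [], []
        for _ in range(M):
            e = bandit.select()
            # Lines 6-8 of Alg. 2: take the engine's best suggestion not already displayed
            # (engines are assumed to return enough distinct suggestions).
            suggestion = next(s for s in engines[e](prefix) if s not in shown)
            chosen.append(e)
            shown.append(suggestion)
        c = get_click(shown)                 # clicked position in 0..M-1, or M if no click
        if c < M:
            bandit.update(chosen[c], 1.0)    # outcome 1 for the engine at the clicked position
        for m in range(c):                   # outcome 0 for every position above the click
            bandit.update(chosen[m], 0.0)
        return shown, c

Here engines is assumed to be a dict mapping engine names to callables that return ranked suggestion lists for a prefix. The ranked variant (Alg. 1) would simply keep one BetaTS instance per position and, on a click, give outcome 0 to every non-clicked position instead of only those above the click.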
Application
We tackle the problem of learning a mixture of four real engines to fill M = 5 positions of a QAC field, using three real datasets built by taking full-length queries performed on websites over a one-month period and splitting them into (query prefix, full query) tuples. Three different clients were chosen for their diversity and representativeness of real-life situations:
• Dataset 1: website with low traffic (13k queries per month);
• Dataset 2: website with high traffic (1.2M queries per month) and long queries on average;
• Dataset 3: website with high traffic (1.1M queries per month) and short queries on average.
The Ranked (Alg. 1), Ranked Explicit (Alg. 3), Cascade (Alg. 2), and Cascade Explicit (Alg. 4) strategies are compared against two baselines: the basic engine that is currently deployed by the company and a random mixture assigning engines at random to each position. Note that the basic engine is part of the four engines available for the mixture. Each engine is designed to consider different contextual information, such as user history, previous queries (for this user and for all users), most popular searches, dictionary entries, and more. The well-known Thompson sampling (TS) [10] bandits algorithm is used for ϕ. TS maintains a posterior distribution on the outcome probability of each action given past observations and selects actions according to their probability of being optimal using a sampling procedure. It has been considered previously for the query recommendation problem [4]. Bernoulli outcomes with conjugate Beta priors and posteriors are used here. Approaches are compared based on the average number of clicks they manage to gather after a trial period and the corresponding increase in the number of clicks w.r.t. the currently deployed solution, that is, the basic engine without mixture. The experiment is run over 10,000 query prefixes (episodes) and each experiment is repeated five times. On episode t, a tuple (p_t, z_t) is sampled from the dataset, where z_t is the full query. We consider that a user click happens in position m if q_{m,t} = z_t. Tab. 1 shows the results (averaged over the five runs) for the three datasets. We observe that Cascade Explicit and Ranked Explicit manage to gather many more clicks than the other strategies on datasets 1 and 2, leading to large increases with respect to the original basic strategy (up to 48%). Even the random strategy performs well compared to the current basic engine on dataset 1. This highlights the potential benefits of a mixture for providing a diverse list of suggestions to the user. The improved performance of the explicit algorithms compared with their non-explicit variants leads us to believe that there is a benefit in modeling the expected click probability of each rank independently. We also observe that Ranked does not beat Cascade when the rank is explicit. On dataset 3, it appears that none of the strategies is able to beat the basic strategy. This was expected given that this dataset was generated from data acquired while this engine was running and proposing auto-completions to users.
In order to validate this hypothesis, we perform additional experiments where we run each possible mixture of the four engines over the M = 5 positions, that is, 4^5 = 1024 mixtures, over 1000 query prefixes (episodes). Figure 1 shows the total number of clicks obtained with each mixture on datasets 1 and 3, where mixtures have been ordered by decreasing number of clicks. Note that a single run per mixture was performed, meaning that these results are noisy and that the ordering of the mixtures is not absolute. The position of the basic strategy is shown by the red dot. We observe that the basic strategy is far from being optimal on dataset 1, while it is very close to the top (6th position) on dataset 3. This confirms why none of the proposed strategies could beat the basic strategy on this dataset. We also note that non-explicit algorithms converge faster than their explicit counterparts. Note: though Cascade Explicit always seems to beat Ranked Explicit, a Welch's t-test revealed that the null hypothesis cannot be rejected in this case. Further replications should be performed before drawing additional conclusions.
Conclusion
These preliminary results show the potential of mixing query completion engines with bandits-based approaches for improving the quality of the suggestions in the QAC problem, and that a mixture adapts better to the large range of usage contexts. Bandits algorithms have been shown to be efficient, flexible, and fast at learning which engine to use where and when. Fancier bandits frameworks, such as sleeping bandits and structured bandits, should also be considered for this problem as they might be better adapted to the dynamics of this application than standard bandits. Results also show a limitation of the offline evaluation setting, that is, the dependency upon the approach used for gathering the data. This should be a motivation for further, online, experiments. Future work includes A/B testing of the strategies on the live system, as it would allow us to validate the results presented in this paper. It would also allow us to evaluate the bias introduced by offline evaluation and to take into account the real click-probability decay pattern. The integration of the bandits algorithms in the company's QAC product is already planned, replacing the basic engine in order to provide heterogeneous suggestions based on the context. Additional experiments will also be conducted to apply similar approaches to document recommendation. Finally, it would be interesting to provide a theoretical analysis in order to obtain guarantees and regret bounds.
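The offline replay protocol used in these experiments, where a click is counted in position m whenever q_{m,t} equals the logged full query z_t, can be sketched as below. The logged tuples, the two toy engines and the use of a uniform-random mixture (the random baseline above) are invented placeholders for illustration only.

    import random

    def replay(engines, log, M=5):
        """Offline replay: count a click whenever the logged full query appears in the shown list."""
        clicks = 0
        for prefix, full_query in log:
            shown = []
            for _ in range(M):
                e = random.choice(engines)    # random mixture baseline
                for s in e(prefix):
                    if s not in shown:        # skip duplicates, as in Algs. 1-4
                        shown.append(s)
                        break
            clicks += full_query in shown     # click iff q_{m,t} = z_t for some position m
        return clicks

    # Two toy engines and a tiny (prefix, full query) log, all hypothetical.
    popular = lambda p: [q for q in ["query", "query results", "quota"] if q.startswith(p)]
    history = lambda p: [q for q in ["question", "query", "queue"] if q.startswith(p)]
    log = [("que", "query"), ("qu", "queue"), ("que", "question")]
    print(replay([popular, history], log, M=3))

As noted in the conclusion, such replay depends on the policy that generated the log, which is why online A/B testing remains necessary.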
Water Adsorption on MO2 (M = Ti, Ru, and Ir) Surfaces. Importance of Octahedral Distortion and Cooperative Effects Understanding metal oxide MO2 (M = Ti, Ru, and Ir)–water interfaces is essential to assess the catalytic behavior of these materials. The present study analyzes the H2O–MO2 interactions at the most abundant (110) and (011) surfaces, at two different water coverages: isolated water molecules and full monolayer, by means of Perdew–Burke–Ernzerhof-D2 static calculations and ab initio molecular dynamics (AIMD) simulations. Results indicate that adsorption preferably occurs in its molecular form on (110)-TiO2 and in its dissociative form on (110)-RuO2 and (110)-IrO2. The opposite trend is observed at the (011) facet. This different behavior is related to the kind of octahedral distortion observed in the bulk of these materials (tetragonal elongation for TiO2 and tetragonal compression for RuO2 and IrO2) and to the different nature of the vacant sites created, axial on (110) and equatorial on (011). For the monolayer, additional effects such as cooperative H-bond interactions and cooperative adsorption come into play in determining the degree of deprotonation. For TiO2, AIMD indicates that the water monolayer is fully undissociated at both (110) and (011) surfaces, whereas for RuO2, water monolayer exhibits a 50% dissociation, the formation of H3O2– motifs being essential. Finally, on (110)-IrO2, the main monolayer configuration is the fully dissociated one, whereas on (011)-IrO2, it exhibits a degree of dissociation that ranges between 50 and 75%. Overall, the present study shows that the degree of water dissociation results from a delicate balance between the H2O–MO2 intrinsic interaction and cooperative hydrogen bonding and adsorption effects. ■ INTRODUCTION Rutile-like TiO 2 , RuO 2 , and IrO 2 are isostructural metal oxides with a large number of applications. As an example, they have all been studied for energy conversion and storage from water oxidation either photo-or electrocatalytically. 1−6 Many of these applications imply the existence of a material−water interface, whose understanding is crucial for the rationalization of the global catalytic process. Therefore, the knowledge of the intrinsic water−MO 2 interaction as well as how this interaction varies when increasing the water coverage is of high importance. In this context, several works dealing with H 2 O−TiO 2 , 7−17 H 2 O−RuO 2 , 12,13,18−23 and H 2 O−IrO 2 13,24,25 interfaces have been reported in the literature. The obtained results allowed determining the different behaviors as a function of the material and the exposed surface. However, some controversy still exists on the amount of water that dissociates after adsorption on the different surfaces and materials. 7 Among the three materials, titanium oxide is the most intensively explored one. The most stable (110) surface has centered most of the available investigations, which mainly focused on understanding whether water adsorption on this surface dissociates generating H + /OH − species. 7,9 The Odefective surfaces have been observed to favor water dissociation, and thus, surface preparation appears to be extremely delicate to determine the intrinsic water−surface interface. [14][15][16]26 Moreover, results are also sensitive to the experimental conditions and techniques used for the analysis. 
9 In this context, several contributions based on high-resolution electron energy loss spectroscopy, temperature-programmed desorption, infrared reflection absorption spectroscopy, and scanning tunneling microscopy (STM) data are in agreement with water adsorbed molecularly on (110), 27−31 particularly at low water coverages. Other experiments performed at higher coverages with X-ray photoelectron spectroscopy (XPS) and photoelectron diffraction techniques suggest that water partially dissociates. 32,33 The reader is referred to the excellent review of U. Diebold for a detailed survey of water adsorption on TiO 2 surfaces, the performance of different techniques, and surface preparation. 34 On the other side, density functional theory (DFT) calculations show that the energy difference between the molecular and dissociative adsorption forms is very small and sensitive to the surface model and level of theory used, particularly, the thickness of the slab model, which exhibits an even−odd behavior, the DFT functional, or the inclusion of the U Hubbard correction. 9,10,13,28,35−40 This turns several contributions in favor of one or the other situation. The influence of water coverage on the degree of water dissociation on the (110)-TiO 2 surface has been less addressed and most of the calculations including a water monolayer or more suggest that water adsorbs mainly in its molecular form. 41,42 Although the interaction between water and the (011) surface of titania has been much less studied, water adsorption at this surface is also relevant because of its large contribution to the Wulff construction. STM, X-ray diffraction measurements, and DFT calculations have shown that the (011) surface suffers an important reconstruction when prepared in ultrahigh vacuum conditions. 11,43−45 This reconstruction is, however, reversed when exposing the distorted (011) surface to liquid water and the original facet is recovered. 11,45 From a computational point of view, calculations suggest that water adsorption at low coverages occurs through the dissociative mode, although increasing the water coverage to the full monolayer seems to decrease the preference for the dissociative form. 11,45−47 Ruthenium oxide has also been largely studied and again most studies focus on the (110) surface. 12,13,[18][19][20][21][22][23]48 For this (110) surface, it is accepted that at very low coverages, single water adsorbs on unsaturated Ru centers, establishing an equilibrium between molecular and dissociated water molecules. 18−20 Moreover, STM and DFT calculations also indicate that increasing the water coverage allows the formation of water dimers that are adsorbed in contiguous unsaturated Ru centers. 18,19,22 These dimers enclose one molecular and one dissociative water, leading to the formation of H 3 O 2 − motifs or hydrogen-bonded (H 3 O 2 − ) n chains at higher water coverages. The other (011), (100), and (001) surfaces have only been studied in detail very recently. 21,22 XPS, in situ surface diffraction, and DFT calculations suggest a mixed molecular/ dissociative arrangement at high coverages, the ratio between the two forms varying depending on the surface. The two most stable surfaces [(110) and (011)] are more prone to dissociate the adsorbed water, whereas the (001) and (100) surfaces mostly present the molecular form. 
In our previous paper, 22 observed trends were rationalized by the combination of three factors: (i) the intrinsic acid−base properties of each surface; (ii) the presence of strong cooperative effects; and (iii) an increase of the surface oxygen bridge (O br ) basicity by the adsorption of water. Finally, H 2 O−IrO 2 interaction has been much less studied. 13,24,25 To our knowledge, only the adsorption of water on the most stable (110) facet has been addressed by means of DFT calculations. Results suggest that the interaction energy between H 2 O and IrO 2 is significantly stronger than in the other two materials and that the adsorbed molecules tend to dissociate. With the aim of analyzing how the nature of the metal oxide influences the water adsorption and, in particular, the degree of dissociation, we studied the rutile-like H 2 O−MO 2 interaction (M = Ti, Ru, or Ir) at the (110) and (011) surfaces, the ones that contribute the most in the Wulff construction of the three materials, considering two different water coverages: isolated water molecules and full monolayer. Moreover, thermal effects and proton mobility on the surface are analyzed by performing ab initio molecular dynamics (AIMD) of the full monolayer coverage. Results show that there are three key factors for determining the degree of dissociative water over the surface: (i) the H 2 O−MO 2 intrinsic interaction, (ii) the different octahedral distortion of TiO 2 with respect to RuO 2 and IrO 2 , and (iii) the presence of hydrogen bonding and adsorption cooperative effects. ■ RESULTS AND DISCUSSION As mentioned above, the main goal of this paper is to compare the properties of MO 2 (M = Ti, Ru, and Ir) upon interacting with water. For that, we first addressed the structural and electronic properties of the bulk and main crystallographic surfaces. Second, we considered the adsorption of one single water molecule at each of the two selected surfaces, with the ultimate goal of comparing the intrinsic water interaction among the three metal oxides. Third, we studied the water monolayer adsorption and evaluated the degree of deprotonation in each case. For that, we carried out, in addition to static calculations, AIMD simulations to address the influence of thermal effects. MO 2 Bulk and Surfaces. All three considered metal oxides MO 2 (M = Ti, Ru, and Ir) crystallize in a rutile structure, tetragonal with space group P4 2 /mnm (see Figure 1a). 49 Titanium dioxide exhibits two other (thermodynamically metastable) crystalline phases in nature: anatase (tetragonal, I4 1 /amd) and brookite (rhombohedral, Pbca). However, for comparison, the present work will only consider the water adsorption on the rutile polymorph of TiO 2 . In the bulk structure, metal cations, M 4+ , show a distorted octahedral coordination and O 2− atoms display a trigonal planar environment. Main distances, cell parameters, and computed net charges of the metal and oxygen atoms for the three metal oxides are given in Table 1. Concerning the bulk, it can be observed that both M−O distances and cell parameters are in very good agreement with the experimentally determined values, 49 deviations being less than 1.5%. The M−O distances range between 1.95 and 2.01 Å, and the largest ones correspond to IrO 2 , as expected. 
As found experimentally, Ti 4+ exhibits a distorted octahedral environment with four shorter Ti−O distances in the equatorial plane and two longer Ti−O axial distances (tetragonal elongation), whereas the opposite situation with four longer equatorial M−O distances and two shorter axial ones (tetragonal compression) is observed for RuO 2 and IrO 2 . These differences may be related to the electronic configuration of the metal in each metal oxide. That is, the electronic configuration of Ti is 4s 2 3d 2 , that of Ru is 5s 2 4d 6 , and that of Ir 6s 1 5d 8 . Although the metal ion can be formally considered as M 4+ and thus there are no d electrons in Ti 4+ , Ru 4+ has four d electrons, and Ir 4+ has five d electrons, there is a certain covalent character, and in an octahedral ligand field, this can lead to different geometrical distortions for early and late transition metals. Furthermore, for both RuO 2 and IrO 2 , spinpolarized Perdew−Burke−Ernzerhof (PBE)-D calculations indicate that magnetization of each metal ion is equal to zero, as found previously at this level of theory. 50−52 Finally, and as expected, computed charges indicate that TiO 2 is significantly more ionic than RuO 2 and IrO 2 . Slab models for different crystallographic orientations [(110), (011), (100), and (001)] were built cutting out the slab from the optimized bulk structure. Surface energies, main M−O distances of the outermost layer, and net atomic charges are given in Table 2. Values corresponding to the internal layers are very similar to the bulk values and thus have not been included and will not be discussed further. In all cases, bulk cutting to generate slab models leads to two-coordinated bridging oxygen O br at the surface, as well as five-coordinated M 5c sites for Computed values for the surface energies are similar to those previously reported in the literature and follow the same trend. 53 That is, the smallest surface energy corresponds to the (110) facet, whereas the largest one corresponds to the (001) one with M 4c unsaturated metal centers. The remaining two surfaces show intermediate values, and their relative order depends on the material. For TiO 2 , (100) is more stable than the (011) one, whereas for RuO 2 and IrO 2 , the relative stability is reversed. Despite that, the contribution of the (100) surface to the TiO 2 Wulff shape is zero, whereas (011) accounts for 29.3% because of symmetry equivalences. Overall, it can be observed in Figure 2 and Table 2 that the (110) and (011) facets contribute by more than an 80% of the total surface, and thus, these two surfaces are the ones considered to analyze the water adsorption. Adsorption of Isolated Water Molecules. As mentioned above, the aim of the present work is to get insights into the different behavior of MO 2 (M = Ti, Ru, and Ir) upon interacting with water. For that, we first studied the adsorption of a single water molecule onto the two surfaces that contribute the most to the Wulff shape ( Figure 2): the (110) and (011) surfaces ( Figure 1B). This corresponds to a water coverage of 1/4 for (110) and of 1/8 for (011). The preferred adsorption configuration at the (110) and (011) surfaces of MO 2 is that in which the water molecule binds through its O atom with the undercoordinated M 5c sites. 10 This interaction increases the water acidity, leading to the formation of a hydrogen bond between one H atom of the water molecule and the nearest undercoordinated O br . 
This interaction can also lead to a dissociative OH−/H+ adsorbed form, with the water deprotonated and the Obr protonated. Thus, both adsorbed forms, molecular (mol) and dissociative (diss), have been considered on each of the two surfaces (see Figure 3). Adsorption energies, relative stabilities between the two forms, and main structural parameters are given in Table 3. Regarding the (110) surface, both the molecular and dissociative minima were localized for TiO2 and RuO2, whereas only the adsorbed dissociative form was located in the case of IrO2, in agreement with previous calculations.9,13,18 Indeed, all attempts to optimize the molecularly adsorbed minimum collapsed to the dissociatively adsorbed species. Furthermore, results show that while the molecular form is more stable for TiO2, the dissociative form is more stable for RuO2 and is the only minimum for IrO2. This is related to the strength of the M−H2O interaction, which follows the trend TiO2 < RuO2 < IrO2; that is, the adsorption energy increases (in absolute value) from TiO2 to IrO2. As a consequence, the acidity of the water molecule upon interacting with the undercoordinated M5c sites of the surface exhibits a larger increase when it is adsorbed on IrO2 than on RuO2 or on TiO2. This is in agreement with the computed charge of the water molecule in H2O−MO2 (M = Ti, Ru, and Ir), which increases from M = Ti to Ir. Thus, despite the higher basicity of Obr in TiO2, as indicated by the net atomic charge of Obr (Table 3) and the density of states (DOS), which shows that the p bands of Obr in TiO2 lie at higher energy (see Figure 4), the increase of water acidity upon adsorption is not sufficient to favor the dissociative form. The computed preference of a single water molecule for the molecularly adsorbed form on (110)-TiO2 is in agreement with the recent, carefully conducted molecular beam/STM experiments by Wang et al.,9 which determine that molecular adsorption is preferred by 0.035 eV. It should be noted that whether water adsorbs in a molecular or a dissociative form on a defect-free TiO2(110) surface has been a subject of intense debate.7 From a computational point of view, different works agree on the fact that the energy difference between mol and diss is small and sensitive to the computational approximation.9,10,36,37 A significantly different behavior is observed for the (011) surface. At this surface, the diss form is the most stable adsorbed species on TiO2, whereas the mol form is the preferred one on RuO2 and IrO2. This can be related to the different behavior observed for the water adsorption energies on (110) and (011). The interaction energy of water on TiO2(110) is smaller than on TiO2(011), whereas for RuO2 and IrO2 the reverse trend is observed. Such differences arise from the nature of the vacant site (axial or equatorial) on each surface and from the octahedral distortion observed in the bulk of each material: tetragonal elongation for TiO2 (axial bonds are longer than equatorial ones) and tetragonal compression for RuO2 and IrO2 (axial bonds are shorter than equatorial ones). As mentioned above, the vacant site of M5c in the (110) surface is axial, whereas the vacant site of M5c in the (011) surface is equatorial. Thus, for TiO2, the interaction of water with the axial vacant site of M5c in (110)-TiO2 is weaker than the interaction with the equatorial vacant site of (011)-TiO2, whereas the opposite is observed for RuO2 and IrO2.
The larger the interaction is, the larger the increase of acidity of the water molecule, which would explain that the proton transfer to O br occurs more easily on (011) than on (110) for TiO 2 and on (110) than on (011) for RuO 2 and IrO 2 . Among these latter materials, relative energies indicate that deprotonation on (011), although unfavorable in both cases, is less difficult on IrO 2 (7.0 kJ mol −1 ) than on RuO 2 (12.6 kJ mol −1 ) as found for (110 For IrO 2 , we could not locate the mol form on the (110) surface. The trends observed on the electron transfer from water to the metal oxides are in agreement with the fact that metal d-bands above the Fermi level are higher for TiO 2 than for RuO 2 and IrO 2 (see Figure 4), which would explain the smaller electron donation from water to the metal and weaker bond in the former case. Overall, the ability of (110) and (011) surfaces to induce dissociation of an interacting water molecule seems to be ultimately related to (i) the kind of octahedral distortion observed in the bulk of these materials (tetragonal elongation for TiO 2 and tetragonal compression for RuO 2 and IrO 2 ) and (ii) the different nature of the vacant sites created (axial or equatorial) on these surfaces. Adsorption of Water Monolayer. Previous section has shown that the relative stability between the mol and diss adsorbed forms of water on the (110) and (011) surfaces of MO 2 (M = Ti, Ru, and Ir) depends on the increase of water acidity upon adsorption and on the basicity of O br . These factors depend on the metal oxide and on the different nature of the M 5c vacant at each surface. At higher water coverages, however, cooperative H-bond interactions can come into play in determining whether deprotonation occurs or not. Thus, we have analyzed the structure of a water monolayer on each surface and metal oxide. For that, we have added one water molecule at each of the unsaturated metal centers; that is, four molecules per unit cell at the (110) surface and eight water molecules at the (011) one. We considered all possible combinations of mol and diss water molecules as initial structures. For instance, for the (110) surface, we considered seven possible structures: (i) 4 undissociated (molecular) water molecules (4mol), (ii) 3 molecular and 1 dissociated (3mol/1diss), (iii) 2mol/2diss, (iv) 1mol/3diss, and (v) 4diss. Note that for the 2mol/2diss configuration three different starting situations are possible, two in which the two equal molecules (mol or diss) are neighbors and another one in which they are not. All possible combinations were also considered for the (011) surface. Adsorption energies per water molecule and structural parameters of main configurations are given in Table 4. Figure 5 shows the optimized structures of the most stable configuration of each material and surface. First of all, it can be observed that the adsorption energy per water molecule in the monolayer is in all cases except (110)-IrO 2 and (011)-TiO 2 larger (in absolute value) than that of a single water molecule, which indicates the presence of cooperative effects as a result of the formation of H-bond chains of moderate strength. Indeed, Figure 5 shows that two parallel H-bond chains separated by O br are formed for both the (110) and (011) surfaces. Increases on the adsorption energy per water molecule range from 10 to 30 kJ mol −1 and result from a subtle balance between the changes induced on the water−surface interaction, which decreases as indicated by Figure 5 for water labels. 
the increase of the M−O distance, and the stabilizing H-bond interactions between the water molecules in the monolayer. At the (110)-IrO 2 surface, the adsorption energy per water molecule is essentially the same than that obtained for the isolated water molecule because the H-bonding at this surface is the weakest one; that is, the O w1 −H w2 distance (2.350 Å) is the largest one. At the (011)-TiO 2 surface, the adsorption energy is smaller in the monolayer because there is a significant increase of the M−O distance (see Tables 3 and 4). Most stable monolayer configuration depends on the material and on the surface. For (110)-TiO 2 , the most stable arrangement is that in which no water molecules are dissociated (4mol), whereas for (110)-RuO 2 , the preferred configuration has a 50% degree of deprotonation (2mol/2diss) and for (110)-IrO 2 , the only configuration located exhibits a 100% degree of deprotonation (4diss). This trend is in agreement with what was found for the adsorption of a single water molecule, which showed that the preference for dissociation increases from TiO 2 to RuO 2 and to IrO 2 . Indeed, for the latter material only the dissociated form was localized (see Table 3). For RuO 2 , the 2mol/2diss situation is more stable than the 4diss, despite the dissociation of a single water molecule being the preferred situation (Table 3), because it allows forming very stable H 3 O 2 − species. That is, this 2mol/ 2diss configuration encloses two H 3 O 2 − species resulting from deprotonation of two water molecules to two O br . This deprotonation leads to OH − species that, due to their higher basicity, establish strong hydrogen bonds (∼1.7 Å) with the undissociated water molecules. Furthermore, as already seen previously for H 2 O−RuO 2 , 21 the M−OH interaction involves a significant electron donation to the surface that accumulates on the O br so that the charge of the H 3 O 2 − species is smaller than 1. For IrO 2 , the water−surface interaction dominates, in agreement with its much larger interaction energy (see Table 3) and higher surface energy (Table 2) and all water molecules dissociate. In this situation, H-bonding between metalcoordinated OH− is very weak, and thus, adsorption energy per water molecule is essentially the same as that of an isolated molecule. For the (011) surface, we observe similar trends; that is, the most stable arrangement (011)-TiO 2 is that in which no water molecules are dissociated (8mol), whereas for (011)-RuO 2 and (011)-IrO 2 , the preferred configurations exhibit a 50% (4mol/ 4diss) and 75% (2mol/6diss) degree of deprotonation, respectively. In this case, the observed trends are not in agreement with that found for the adsorption of a single water molecule, which shows that the dissociated form is the preferred situation for TiO 2 and the molecular form the preferred one for RuO 2 and IrO 2 . This is due to the fact that the presence of the monolayer modifies the water−surface interaction as compared to that with a single molecule. For TiO 2 , the adsorption energy per water molecule in the monolayer is smaller than in the single water adsorption. Indeed, the Ti−O distance is significantly larger in the former (2.24 vs 2.10 Å). Such an increase in the M−O distance is produced to establish an efficient H-bond network but weakens the water−surface interaction, leading to a smaller increase of water acidity that hinders deprotonation. 
Still, the configuration with 50% dissociation, that is, with two H 3 O 2 − and two protonated O br , is only 3 kJ mol −1 less stable than the fully undissociated one (Table 4), and thus, the present results are not conclusive about whether water monolayer at the (110)-TiO 2 surface is dissociated or not. Furthermore, thermal effects need to be taken into account. For (011)-RuO 2 , the most stable configuration encloses four molecular and four dissociated waters; that is, it shows a 50% degree of deprotonation, as in the (110) surface. Note that we have not been able to localize a minimum corresponding to the fully undissociated monolayer. Attempts to optimize such a structure collapsed to (6mol/2diss), the second most representative structure. This behavior is in contrast with the fact that isolated water prefers a molecular adsorbed form. However, formation of H 3 O 2 − species, with a strong H-bond between the OH − and the undissociated water molecule, is particularly favorable. A 50% deprotonation is the preferred situation as it maximizes the number of H 3 O 2 − units and hence H-bond cooperative effects. For (011)-IrO 2 , the preferred configuration shows a 75% deprotonation (2mol/6diss), despite the molecular form being the most stable for the isolated water molecule, although with a lower relative energy as compared to RuO 2 . This 75% deprotonation does not maximize the number of H 3 O 2 − , as one would expect, because water surface interaction dominates over H-bonding. Furthermore, as already seen for RuO 2 , 21 adsorption cooperative effects may induce deprotonation. That is, deprotonation of one water molecule favors deprotonation of a neighbor adsorbed water because of the increase of the metal Lewis acidity as a result of the protonation of O br . Overall, the degree of deprotonation results from a subtle balance between H-bond cooperativity and adsorption cooperativity. In the case of IrO 2 , the latter effect is larger because of the larger interaction with the metal sites. Present results show that relative energies corresponding to lower mol/diss configuration arrangements per water molecule are small (3−16 kJ mol −1 ) and thus may contribute to the behavior of the water−metal oxide interface. On the other hand, thermal effects may modify the relative stability of these configurations. Because of that, we have run AIMDs up to 8 ps (1 ps equilibration) for all metal oxides and the two surfaces starting from the most stable monolayer obtained with static Table 4. calculations. Figure 6 shows the H-bond distances corresponding to two interacting water molecules and those between these water molecules and O br along the simulation. M−O distances are reported in Figure S1 of the Supporting Information. Table 5 shows the frequency of each possible configuration, considering that proton transfer to an O br occurs if the H-bond distance is smaller than 1.2 Å. For (110)-TiO 2 and (110)-RuO 2 , the most stable configurations (4mol and 2mol/2diss, respectively) remain along the 7−8 ps simulation. For (110)-TiO 2 , the O br −H w Hbond distance oscillates around 1.7 Å, whereas the H-bond distance between the two water molecules oscillates around 2.1 Å. In the latter case, oscillations are larger because of the weaker H-bond. For (110)-RuO 2 , H-bond distances are consistent with the presence of H 3 O 2 − species and a protonated O br almost all along the simulation. Note that the frequency of the 2mol/2diss is 99.9% and only the 1mol/3diss arrangement appears in 0.1%. 
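The configuration bookkeeping behind Table 5, where a proton is counted as transferred to Obr whenever the corresponding O−H distance drops below 1.2 Å, can be sketched with a few lines of Python. The distance arrays below are invented placeholders rather than actual AIMD output, and the function name is an assumption of this sketch.

    from collections import Counter

    def classify_frames(obr_h_distances, cutoff=1.2):
        """Label each AIMD frame as 'Xmol/Ydiss' from the Obr-H distances of the adsorbed waters.

        obr_h_distances: list of frames; each frame lists one Obr-H distance (in Angstrom)
        per adsorbed water molecule. A distance below the cutoff means the proton sits on
        Obr, i.e. that water is counted as dissociated.
        """
        labels = []
        for frame in obr_h_distances:
            n_diss = sum(d < cutoff for d in frame)
            n_mol = len(frame) - n_diss
            labels.append(f"{n_mol}mol/{n_diss}diss")
        return Counter(labels)

    # Three made-up frames for a four-water monolayer.
    frames = [[1.05, 1.68, 1.02, 1.71], [1.04, 1.66, 1.03, 1.10], [1.06, 1.72, 1.01, 1.69]]
    freq = classify_frames(frames)
    print({k: 100.0 * v / len(frames) for k, v in freq.items()})   # frequencies in percent

Applying the same tally over the 7-8 ps trajectories is what yields frequencies such as the 99.9% 2mol/2diss population reported above for (110)-RuO2.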
For IrO 2 , we observed a larger proton mobility. As predicted by static calculations, the main configuration is a fully dissociated monolayer (4diss) with an 84% frequency. Noticeably, there is a non-negligible frequency of the 1mol/3diss (11.4%) and of the 2mol/2diss (4.8%). This indicates that thermal effects disfavor deprotonation, because of the increase of the M−O w distances, which reduces the water−surface interaction and thus the water acidity. The (011) surface shows a larger proton mobility compared to the (110) one. For TiO 2 and RuO 2 , most stable configurations (8mol and 4mol/4diss, respectively) obtained from static calculations remain the most frequent arrangement (94 and 72%). However, for IrO 2 , the most stable configuration (2mol/6diss) is no longer the main one when including thermal effects. Indeed, three configurations account for a frequency of 95%: 4mol/4diss with 33%, 3mol/5diss with 43%, and 2mol/6diss with 19%. Note that the most frequent configuration (3mol/5diss) does not correspond to the most stable one obtained from static calculations (2mol/6diss). These results again show that thermal effects tend to decrease the M−OH 2 interactions, thereby increasing the percentage of molecular water. ■ CONCLUSIONS The present study analyzes the H 2 O−MO 2 (M = Ti, Ru, and Ir) interactions by means of periodic DFT (PBE-D2) calculations. Adsorption of both an isolated water molecule and a full monolayer on the two surfaces that mostly contribute to the Wulff shape, the (110) and (011) surfaces, has been addressed. Results indicate that the adsorption of a single molecule preferably occurs in its molecular form on the (110)-TiO 2 surface and in its dissociative form on (110)-RuO 2 and (110)-IrO 2 . However, the opposite trend is observed on the (011) surface; that is, water prefers to adsorb in its dissociative form on (011)-TiO 2 and in its molecular form on (011)-RuO 2 and (011)-IrO 2 . This is related to the kind of octahedral distortion observed in the bulk of these materials (tetragonal elongation for TiO 2 and tetragonal compression for RuO 2 and IrO 2 ) and to the different nature of the vacant sites created on these surfaces, axial on (110) and equatorial on (011). Thus, water adsorption on TiO 2 leads to longer M−O distances on (110) than on (011), and consequently, the increase of water acidity (and possible dissociation) is larger on (011). The opposite is observed for RuO 2 and IrO 2 with longer M−O distances on (011). Furthermore, adsorption energies (in absolute value) increase from TiO 2 to RuO 2 and IrO 2 , along with the electron transfer from the water molecule to MO 2 . For the monolayer, in addition to the intrinsic water adsorption, other effects such as cooperative H-bond interactions, particularly the formation of H 3 O 2 − species, and cooperative adsorption come into play in determining whether deprotonation occurs or not. Furthermore, thermal effects seem to favor configurations with a smaller degree of dissociation because of an enlargement of M−O distances, which leads to a smaller increase of water acidity. For TiO 2 , water monolayer is fully undissociated on both (110) and (011) surfaces, whereas for RuO 2 , water monolayer exhibits a 50% dissociation, the formation of H 3 O 2 − motifs being essential. Finally, on (110)-IrO 2 , the main monolayer configuration is the fully dissociated one, whereas on (011)-IrO 2 , it exhibits a degree of dissociation that ranges from 50 to 75%. 
Overall, the present study shows that several effects, in addition to the intrinsic water adsorption, are responsible for the degree of dissociation of adsorbed water on MO 2 (M = Ti, Ru, and Ir), with IrO 2 being the most prone to induce dissociation. ■ COMPUTATIONAL DETAILS Periodic boundary DFT calculations were carried out with the Vienna ab initio simulation package (VASP) code. 54,55 All calculations were performed with the GGA PBE functional 56 plus Grimme's D2 correction 57 for dispersion and using the projector augmented wave pseudopotentials 58,59 to describe ionic cores and valence electrons through a plane wave basis with a kinetic energy cutoff equal to 500 eV. The above computational parameters ensure a good agreement with experimental cell parameters of the bulk structures for all the studied materials (TiO 2 , RuO 2 , IrO 2 ). Moreover, the inclusion of dispersion corrections is essential to properly describe adsorption processes 60 and bulk water. 61,62 Bulk calculations were performed considering a K-point mesh for the Brillouin zone of (8,8,8), (15,15,15), and (9,9,9) for TiO 2 , RuO 2 , and IrO 2 , respectively, employing the Monkhorst−Pack (MP) grid, 63 whereas slab calculations were performed considering a MP K-point mesh of (3,3,1), (6,6,1), and (4,4,1) for TiO 2 , RuO 2 , and IrO 2 , respectively. The cutoff and K-point mesh were chosen according to the best cost/accuracy strategy for both cell parameters and surface energies. The energy convergence criteria for electronic and geometry relaxations were fixed to 10 −5 and 10 −4 eV, respectively. Because the water adsorption mode on TiO 2 has been controversial and sensitive to the computational approach, 7,10 we have performed additional calculations for this system with the hybrid functional PBE0 64 and considering the D3 correction for dispersion 65 (see Table S1 of the Supporting Information). Results obtained at the PBE0-D2 and PBE-D3 levels of theory show that although adsorption energies can vary up to 13 kJ mol −1 with respect to PBE-D2, relative energies between the molecular and dissociative forms follow the same trend and vary less than 3.5 kJ mol −1 , which shows the robustness of the chosen approximation. Surface models of the main crystallographic orientations were built by cutting out the slab from the optimized bulk structure. Slabs were constructed considering a (2 × 1) supercell and a four-layer thickness, the minimum thickness for a reasonably converged surface energy (see Figure S2 of the Supporting Information). The c value was set to 35 Å, ensuring an interlayer distance of at least 21 Å to minimize the interaction between replicas in the (h k l) perpendicular direction. Atom positions were fully relaxed in the optimization process. Surface energies of the (110) and (011) facets were computed through the following equation: γ = (E slab − N E bulk )/(2A) (1), where E slab is the energy corresponding to the relaxed surface without optimizing the bulk cell parameters; E bulk is the fully relaxed bulk energy; N is the number of formula units in the slab per formula unit in the bulk unit cell; and 2A accounts for the two cross-sectional areas (top and bottom surfaces) of the slab. Water adsorption on the two most stable facets of the rutile polymorph, that is, (110) and (011), was simulated with a (2 × 2) supercell in both the low and high coverage regimes.
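As a worked illustration of Eq. (1), the short helper below evaluates the surface energy from slab and bulk total energies; the unit conversion to J/m 2 is standard, but the numerical inputs shown are placeholders rather than values from this work.

```python
def surface_energy(e_slab_eV, e_bulk_per_fu_eV, n_formula_units, area_A2):
    """gamma = (E_slab - N * E_bulk) / (2A), converted from eV/A^2 to J/m^2."""
    EV_TO_J = 1.602176634e-19
    A2_TO_M2 = 1.0e-20
    gamma_eV_per_A2 = (e_slab_eV - n_formula_units * e_bulk_per_fu_eV) / (2.0 * area_A2)
    return gamma_eV_per_A2 * EV_TO_J / A2_TO_M2

# Hypothetical four-layer (2 x 1) slab with 8 formula units and an 80 A^2 cross section:
print(round(surface_energy(-210.0, -26.5, 8, 80.0), 2), "J/m^2")
```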
The low coverage regime corresponds to the adsorption of only one water molecule per unit cell, whereas in the high coverage regime, all the outermost (undercoordinated) metal atoms were saturated with water molecules, leading to a water monolayer. Reported adsorption energies are normalized per water molecule, according to the following equation: E ads = [E (hkl)+H 2 O − E (hkl) − n (H 2 O) E H 2 O ]/n (H 2 O) , where E (hkl)+H 2 O is the total energy of the slab with the adsorbed water, E (hkl) is the total energy of the slab model, E H 2 O is the total energy of an isolated water inside a 15 × 15 × 15 Å 3 cubic box, and n (H 2 O) is the number of water molecules adsorbed on the surface. The properties of these materials and their propensity to induce water deprotonation are discussed in terms of pDOS and Bader charge analysis. 66−68 At this point, it is worth mentioning that TiO 2 is particularly sensitive to the computational model used. 7 Concerning the slab thickness of the (110) surface, our calculations show, as found previously, 36 an even−odd oscillation in the water adsorption energy with the number of TiO 2 layers (see Table S2 of the Supporting Information). The most accurate results, given by a six- or seven-layer TiO 2 slab, predict the molecular form as the more stable one, as found experimentally. 9 For computational reasons, and for consistency with RuO 2 and IrO 2 , the present calculations correspond to a four-layer TiO 2 slab. This model provides the correct relative stability between the molecular and dissociative forms, although with a relative energy (34.7 kJ mol −1 ) that is significantly higher than that recently determined by combining supersonic molecular beam, STM, and AIMD (3.5 kJ mol −1 ). 9 Regarding the (011) surface, it is worth mentioning that, depending on the number of layers, the surface may present a significant reconstruction, as observed experimentally. 11 This involves the cleavage of two internal Ti−O bonds to strengthen the Ti−O bonds with the surface undercoordinated Ti 5c sites (see Figure S3 of the Supporting Information). The energy difference between the non-reconstructed and reconstructed surfaces is small (Figure S4), and the former surface has a higher surface energy. AIMDs were carried out on the most stable water monolayer structures (i.e., those with the most stable degree of deprotonation) for both the (110) and (011) surfaces for all the materials. The energy convergence criteria were fixed to 10 −4 eV. AIMDs were carried out considering an equilibration period of 1 ps (1000 steps of 1 fs) and a production period of 7 ps (7000 steps of 1 fs) in the NVT ensemble. During both the equilibration and the production periods, only the water monolayer and the first layer of the surface were allowed to move according to the equations of motion, while atoms of the remaining surface layers were maintained at fixed positions. This option was chosen in order to avoid unrealistic deformation of the structure of the slabs and to simulate the actual rigidity of the material. Supporting Information: The Supporting Information is available free of charge on the ACS Publications website at DOI: 10.1021/acsomega.8b03350. PBE-D2 surface energies with different slab thicknesses; optimized structures for the TiO 2 (011) rutile surface; M−O distances along the AIMD; and relative energies between molecular and dissociative adsorbed forms on the TiO 2 (110) rutile surface, as a function of the number of layers (PDF)
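For completeness, the per-molecule adsorption-energy normalization defined above can be scripted in the same way; the energies below are placeholders (in eV), and the eV-to-kJ mol −1 conversion factor is the standard one, so the printout only shows how such a value would be obtained, not a result of this work.

```python
def adsorption_energy_per_water(e_slab_plus_water, e_slab, e_water, n_water):
    """E_ads = [E_(hkl)+H2O - E_(hkl) - n * E_H2O] / n; negative means favorable."""
    return (e_slab_plus_water - e_slab - n_water * e_water) / n_water

# Hypothetical monolayer of 4 waters on a (2 x 2) slab:
e_ads_eV = adsorption_energy_per_water(-268.0, -210.0, -14.2, 4)
print(f"{e_ads_eV:.3f} eV per water  ({e_ads_eV * 96.485:.1f} kJ/mol)")
```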
Generalizing the Mean and Variance to Categorical Data Using Metrics Researchers have developed ways to generalize the mean and variance to situations in which a data metric is available. We apply the tools developed in Pennec (2006) to categorical data, and show the generality of this approach by considering two quite different applications. First, spelling variability in Middle English is quantified. Second, variability of a finite group (in the sense of group theory) is defined and applied to an example. Introduction The usual sample mean and variance are appropriate for real-valued numerical data. However, data can also lie on Riemannian manifolds. For example, angular data such as longitudes of cities are naturally plotted on a circle, which is a simple manifold. Many other examples are considered in Fisher (1996) and Mardia and Jupp (2000). The key here is that angles (in radians) are not two dimensional points that lie approximately on a circle but are values that represent arc length from a labeled point and are inherently circular. Generalizations of standard statistical ideas such as mean and variance have been developed for manifolds in Pennec (1999) and Pennec (2006). It turns out that this approach can be extended to categorical data sets that have a metric associated with them. We show how this is done and illustrate it with two concrete examples: measuring spelling variability in Middle English and defining the variance of a finite group (in the sense of group theory). Method The usual mean can be defined in terms of minimizing the sum of squares in Equation (2.1) with respect to c. The minimizing value is the mean, call this µ, and f(µ)/n is the population variance, as shown in (2.2). Section 4.1 of Pennec (2006) extends this idea to data on a manifold by replacing the absolute value sign, which is the distance function on the real numbers, with a geodesic distance on a manifold, denoted by the function d in Equation (2.3). The generalization of the mean is the minimizing value(s) of f, and the generalized variance is the minimum value of f in Equation (2.4). Note that there need not be a unique solution, c min . Moreover, there might be more than one geodesic through two points, so d(x,c) means the minimum arc length between x and c over all geodesics that connect these. Finally, for this definition to work, we must have a connected manifold with neither boundary nor singular points: details are in Pennec (2006). Although we don't use the following ideas, Pennec (2006) goes on to define probability distributions and expectations on a manifold, which allows him to define covariance matrices, to generalize the normal distribution to a manifold by using maximum entropy, and to formulate a generalized χ 2 law. All these are defined intrinsically as opposed to using definitions that embed the manifold in a higher dimensional space. For example, one can think of a circle as a one-dimensional manifold (an intrinsic point of view) or as a graph embedded in R 2 . To make this distinction clear, an example of circular data variability is given below. This is important because intrinsic geometry in general relativity revolutionized physics. Since many statistical procedures can be viewed geometrically (for example, this is systematically done in Saville and Wood (1997)), and because the intrinsic methods of differential geometry have already been applied to statistics (for instance, see Murray and Rice (1993)), this could be a fruitful point of view for statisticians. 
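A minimal computational sketch of this generalization is given below: for a metric d supplied as a function, it minimizes f(c) = Σ d(x i , c) p over a set of candidate centers (p = 2 gives the mean/variance analogue of Equations (2.3) and (2.4); p = 1 gives the median analogue used later). Restricting the candidates to the observed values, as is done for the word examples below, and the tolerance used to detect ties are our own implementation choices.

```python
def generalized_center(data, metric, p=2, candidates=None, tol=1e-12):
    """Return the minimizing center(s) of f(c) = sum_i metric(x_i, c)**p and min f."""
    candidates = data if candidates is None else candidates
    best_value, best_centers = float("inf"), []
    for c in candidates:
        f_c = sum(metric(x, c) ** p for x in data)
        if f_c < best_value - tol:
            best_value, best_centers = f_c, [c]
        elif abs(f_c - best_value) <= tol:
            best_centers.append(c)
    return best_centers, best_value

# Sanity check with ordinary numbers and the absolute-value metric (candidates
# restricted to the data themselves):
print(generalized_center([1.0, 2.0, 6.0], lambda a, b: abs(a - b), p=2))
```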
Extending Pennec's Theory to Discrete Data There is nothing special about numerical data in Equations (2.3) and (2.4). The same formulas can be used if x and c are categorical data as long as a metric function, d, is available. Moreover, squaring d corresponds to the L 2 -norm for data vectors, but other distance functions could be considered. For example, Equation (2.5) corresponds to the L 1 -norm, and minimizing this f results in the median. Because the median is more robust to outliers than the mean, we compare the results from Equations (2.3) and (2.4) to those of (2.6) and (2.7) in the examples below. In addition, although we do not use this, it is clear that a further generalization is possible by replacing the exponent 2 in Equations (2.1) and (2.3) with p, which is the L p -norm of the data when considered as a vector. To illustrate the above theory, we consider three data examples. First, we consider a toy problem using a small data set to compare the methods discussed above. Second, we compute spelling variability using the Levenshtein edit distance, which is one of many text metrics. Finally, we compute finite group variability using the word metric from group theory. Manifold Example with Circular Data To illustrate Pennec's (2006) generalization of the mean and variance, we consider a data set consisting of angles, which can be viewed as values on a circle; such data are called circular data in Fisher (1996) or directional data in Mardia and Jupp (2000). Once this concrete example is understood, the generalizations to categorical data are straightforward. Consider the angles {-2.12, -1.08, 0.016, 0.99, 2.08, 3.14}, which were generated by adding uniform noise to {-2π/3, -π/3, 0, π/3, 2π/3, π}. The traditional method of finding a mean direction is adding the unit vectors e iθ for the values in the data set to produce a resultant vector. The solution is the angle this makes with the positive x-axis (unless the sum is 0, which gives no solution). In this case, the resultant is (0.0104, -0.00815), which gives an answer of -0.66 radians. Note that just averaging these angles as numbers gives the very different answer of 0.504 radians. For a circle, the distance between two angles is given by Equation (2.8), d(θ 1 , θ 2 ) = min(|θ 1 − θ 2 |, 2π − |θ 1 − θ 2 |). The minimization is needed because two angles create two arcs on the circle, and the shortest of these is the distance between the angles. For example, if θ 1 = 1.5 and θ 2 = -1.5, the difference is 3 and, since 3 < π, this is also the distance; however, if θ 1 = 3 and θ 2 = -3, the difference is 6 but the distance between them is 2π − 6 ≈ 0.28. We can minimize Equation (2.3) numerically to get a mean direction of -1.59. However, it is informative to look at a plot of (2.3), which is shown in Figure 1. Note that the data values correspond to the spikes, and all the local minima are close to halfway between the data values. The y-coordinate of the global minimum is 19.0, which is the variability measure, although in practice one might divide this by the size of the data set. Finally, using Equation (2.6) produces an interval from -1.08 to -2.12. Since this is a generalization of the median, it is not surprising that a data set of even size produces a non-unique answer, which is not the case for an odd-sized sample. Why are the answers above so different? First, treating angles as numbers is incorrect. Second, the six data points are almost {-2π/3, -π/3, 0, π/3, 2π/3, π}, which causes both the traditional resultant vector method and Equation (2.8) to be sensitive to small perturbations. This makes it easy to find examples where these methods do not agree.
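The circular-data computation just described can be reproduced with a few lines of code; the grid search below is a simple stand-in for the numerical minimization mentioned in the text (the paper does not say which optimizer was used), and it should recover values close to those reported above (a resultant direction near −0.66, a metric-based mean near −1.59, and a minimum of about 19.0).

```python
import numpy as np

angles = np.array([-2.12, -1.08, 0.016, 0.99, 2.08, 3.14])

# Traditional mean direction: angle of the resultant of the unit vectors.
resultant_dir = np.arctan2(np.sin(angles).sum(), np.cos(angles).sum())

# Arc-length metric on the circle, Eq. (2.8).
def circ_dist(a, b):
    d = np.abs(a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

# Metric-based mean direction: minimize f(c) = sum_i d(x_i, c)^2 on a fine grid.
grid = np.linspace(-np.pi, np.pi, 100001)
f_vals = (circ_dist(angles[:, None], grid[None, :]) ** 2).sum(axis=0)
print("resultant direction:", round(float(resultant_dir), 3))
print("metric-based mean:", round(float(grid[f_vals.argmin()]), 3),
      "| variability:", round(float(f_vals.min()), 1))
```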
Applications to Discrete Data As discussed in Section 2.1, the optimization approach to variability can be extended to any type of discrete data that has a metric. Metrics have been well studied in mathematics, statistics, and applications, so this approach has many uses, two of which are given below. Spelling Variability Using Levenshtein Edit Distance The history of English is split into three periods. First is Old English, which runs from roughly 450 (all dates are in the Common Era) through 1100, just after the Norman Conquest. The Anglo-Saxon Chronicle states that the earliest settlers were the Angles, Saxons, and Jutes, which suggests that there were dialects in Britain from the start. Middle English was used from about 1100 through 1500, ending just after the introduction of printing in England by William Caxton. It has several dialects, and it changed over time, both of which cause spelling variability. Figure 2 shows the first four lines of the General Prologue of The Canterbury Tales for four different manuscripts, and no two of these are identical. However, at that time, there were no dictionaries or other references that prescribed a standard orthography, so even within one manuscript spelling variations are common. Third, Modern English starts around 1500, and by 1700 it is much like Present Day English. It is in this time period that modern dictionaries and grammars are developed and the idea of editorial standards take over. Of these three periods, Middle English is the most variable, and it is well known that spelling variability decreases over time as book publishing and reference works become widely used. The question we address is how this variability can be explicitly quantified. As discussed in Section 2, we use a string metric to do this. Defining Levenshtein edit distance Levenshtein edit distance (we drop his name below) is defined to be the minimum cost of changing one string into another where letter copying has cost 0; adding, deleting, or substituting a letter all have cost 1. This can be computed by dynamic programming, which requires roughly m*n steps, where m and n are the lengths of the two strings. Additional information is given in Chapter 6 of Russell (2011). For DNA matching in bioinformatics, this is computationally expensive because the strings can be long, but for English words, this is quick to compute. The intuition behind this algorithm is to align two different strings as closely as possible, after which the letters that do not match are added, deleted, or substituted as needed. It turns out that it is enough to focus on initial substrings, starting with the empty string. Figure 3 shows an example where the edit distance between "OLD" and "HALDE" (a Middle English form of the word "OLD") is found to be 3. The optimal path is shown in red, and moving one square to the right means adding a letter, moving one square diagonally means a letter substitution, and moving one square downwards would mean deleting a letter. Intuitively, the cost is 3 because (1) "LD" is in both words and copying has no cost; (2) "O" is changed to "HA" at a cost of 2, and (3) "E" is added at the end at a cost of 1. The cost of changing "OLD" to "HALDE" is 3 as shown by the path of red numbers. Hence the edit distance between these two strings is 3. It turns out that edit distance is a distance function in terms of the mathematical definition of metric space. That is, the following are true. First, EditDistance[s1, s2] ≥ 0 because all the costs are non-negative. 
Second, EditDistance[s1, s2] = 0 exactly when s1 = s2 because the only zero-cost operation is copying. Third, EditDistance[s1, s2] = EditDistance[s2, s1] because (1) copying and substitution are their own inverses and (2) adding and deleting are inverses of each other. That is, any transformation from s1 to s2 can be reversed to change s2 to s1 at the same cost. Fourth, EditDistance[s1, s2] + EditDistance[s2, s3] ≥ EditDistance[s1, s3] because, by definition, edit distance is the minimum cost over all string transformation paths, which includes paths going through s2. Variability of the Middle English forms of the word "OLD" Using edit distance, we now compute the variability of the forms of the word "OLD" appearing in the Linguistic Atlas of Late Mediaeval English (LALME), McIntosh et al. (1986). These are {aeld, aelde, ald, alde, alld, aulde, awlde, eeld, eelde, eld, elde, hald, halde, held, helde, hold, holde, hoolde, old, olde, oold, oolde, ould, wold, woold}. Each row and column of the matrix in Figure 4 stands for one of these words, and each entry is the respective edit distance. To find the median word using Equation (2.6), we find the row with the smallest sum, which gives two solutions: "hold" (16th row) and "old" (19th row) because both sum to 48. To find the mean word using (2.3), we find the row with the smallest sum of squares, which is 108 and corresponds to "hold." Figure 4: The edit distances between each pair of word forms for "OLD" in the LALME. The above method can now be used to compare variabilities. For example, according to Chaucer et al. (1987), Chaucer only uses four forms: {olde, old, oold, oolde}. The matrix of distances is a submatrix of Figure 4, and one finds that all the words are means as well as medians. The variability produced by Equation (2.6) is 4, and by (2.3) is 6, so Chaucer is less variable than LALME, which is expected since his texts are a subset of the latter's. Variability of Finite Groups Using the Word Metric A group is a set with a binary associative operation that satisfies the following: (1) the set is closed under the operation; (2) there is an identity element; and (3) every element has an inverse. For example, the set {0, 1, …, n-1} with the operation addition modulo n is a group. Similarly, the set {1, 2, …, p-1} with the operation multiplication modulo p is a group when p is prime, which is required so that there are multiplicative inverses modulo p. For more on this definition, see any introductory text on abstract algebra such as Chapter 1 of Fraleigh (2003). Groups can be defined with generators and a set of relationships, an approach called combinatorial group theory: see Miller (2004). For example, {0, 1, …, n-1} with addition modulo n is generated by 1. That is, repeatedly adding 1 generates all these values. It is also generated by -1 and any other value relatively prime to n. Another example is {1, 2, …, p-1} with multiplication modulo a prime, p, which is generated by any primitive root, a concept from number theory. A last example is the dihedral group of size 2n, which has two generators that satisfy a^n = 1, b^2 = 1, and (ab)^2 = 1. In general, let G be the group and let S be a set of generators, and assume that S is closed under inverses; this is not required, but it simplifies matters below. There is a standard distance for a group presented via generators and a list of identities, which is called the word metric. This is given by Equation (3.1), d(a, b) = min{k : b = s k ⋯ s 2 s 1 a}, where the s i are in the generating set S.
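The dynamic program described earlier and the mean/median word selection from the distance matrix are both easy to script, as sketched below; applied to the LALME and Chaucer forms listed above, it should reproduce figures such as EditDistance("old", "halde") = 3 and a Chaucer variability of 4 under the median criterion. The function names are our own.

```python
def edit_distance(s1, s2):
    """Levenshtein distance: copy costs 0; add, delete, and substitute cost 1."""
    m, n = len(s1), len(s2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                                  # delete all of s1[:i]
    for j in range(n + 1):
        dp[0][j] = j                                  # insert all of s2[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if s1[i - 1] == s2[j - 1] else 1  # copy or substitute
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + sub)
    return dp[m][n]

def word_center(words, power):
    """power=2 -> mean word(s) (Eq. 2.3); power=1 -> median word(s) (Eq. 2.6)."""
    sums = {w: sum(edit_distance(w, v) ** power for v in words) for w in words}
    best = min(sums.values())
    return [w for w, s in sums.items() if s == best], best

print(edit_distance("old", "halde"))                       # expected: 3
print(word_center(["olde", "old", "oold", "oolde"], 1))    # all four tie at 4
```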
Note a similarity with edit distance: both use the minimum number of operations to change one object into another. The easiest way to compute word distances is to create a Cayley graph using the generator set, S, and then use the fact that the usual distance between two vertices on this graph is the same as the word distance just defined. Figure 5 does this for the cyclic group of order 15, C 15 , where S = {1, -1}, and Figure 6 does this for the direct product of cyclic groups, C 3 x C 5 , where S = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}}. Finally, notice that C 15 and C 3 x C 5 are isomorphic, but because we are using two different sets of generators, the graphs in Figures 5 and 6 are different. For each matrix in Figure 7, the rows have the same numbers, just rotated. So by Equation (2.6), every vertex is a median, and the variability is the sum of any row, which is 56 for C 15 and 28 for C 3 x C 5 . By (2.3), every vertex is a mean, and the variability is the sum of the squares of each row, which is 280 for C 15 , and 64 for C 3 x C 5 . Since each group has order 15 (the number of vertices), directly comparing these values is valid. Note that graph theory uses the concept of average distance, and this corresponds to (2.6) for this example because the average is a linear function of the sum. Finally, the above method depends critically on the choice of generators, S. An extreme example is using S = G, then for all a and b distinct, d(a,b) = 1 because s = ba -1 can be used in Equation (3.1). This makes the Cayley graph the complete graph, so any two groups of equal size would be equal in variability. In the above cases, natural sets of generators were used, so C 3 x C 5 is less variable than C 15 . These are the distance matrices for C 15 and C 3 x C 5 , respectively. Note that the entries in the latter are mostly smaller than those in the former. Discussion The Equations (2.3) and (2.4) as well as (2.6) and (2.7) allow the definition of a typical value and variability for discrete data. Because much basic statistical theory is based on means and variances, the above generalizations can also be used to create hypothesis tests. For example, the F test, which uses the ratio of variances, is one way to decide whether or not H 0 : σ 1 2 = σ 2 2 is true. This suggests using Equation (4.1) as a generalization. The distribution of (4.1) is unknown in general, but this could be estimated by a permutation test. That is, take a random sample of size m from{x 1 , x 2 , …,x m , y 1 , y 2 , …,y n }, use this sample as the xs, and the rest as the ys. Repeating this many times gives an empirical sampling distribution, which can be used to estimate the p-value. Finally, there are many metrics that already exist, some of which already have known uses in statistics. For instance, Diaconis (1988) discusses many metrics of the symmetric group, S n , and these are related to rank-based statistical techniques. For example, he shows that Spearman's rank correlation is equivalent to the L 2 -norm on S n . The generality of the above approach will make further applications to categorical data easy to find.
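The Cayley-graph route to the word metric described above can be sketched as follows for finite abelian groups written additively; building the graph from a generator set and summing a row of graph distances (or its squares) should reproduce the variabilities reported for C 15 and C 3 x C 5 (56/280 and 28/64). The helper assumes the networkx library and is only an illustration.

```python
import itertools
import networkx as nx

def cayley_variability(moduli, generators):
    """Row sum (Eq. 2.6) and row sum of squares (Eq. 2.3) of word-metric distances."""
    elements = list(itertools.product(*[range(m) for m in moduli]))
    G = nx.Graph()
    G.add_nodes_from(elements)
    for g in elements:
        for s in generators:
            h = tuple((gi + si) % m for gi, si, m in zip(g, s, moduli))
            G.add_edge(g, h)                 # edge between g and g + s
    dist = nx.single_source_shortest_path_length(G, elements[0])
    row = [dist[e] for e in elements]        # any row works, by symmetry
    return sum(row), sum(d * d for d in row)

print(cayley_variability((15,), [(1,), (-1,)]))                         # C15
print(cayley_variability((3, 5), [(1, 0), (-1, 0), (0, 1), (0, -1)]))   # C3 x C5
```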
Wilkie's Syndrome following Chemotherapy: A Case Report and a Review of Literature Superior mesenteric artery (SMA) syndrome is a rare etiology of upper gastrointestinal obstruction. The measured angle between the SMA and the aorta is typically between 38 and 65° and maintained by mesenteric fat. Excessive fat loss can lead to intestinal obstruction due to an exaggerated acute angularity of the SMA, compressing the third part of the duodenum. We present a 22-year-old female with a history of aplastic anemia, status post bone-marrow transplant, who presented with intractable nausea and had confirmed SMA syndrome on CT angiography. Subsequently, the patient underwent nasogastric decompression and successful laparoscopic duodenojejunostomy. Introduction Superior mesenteric artery (SMA) syndrome, or Wilkie's syndrome, is a rare etiology of upper gastrointestinal obstruction, with potentially nonspecific symptoms on initial presentation. The prevalence of SMA syndrome in the general population is estimated at 0.013%-0.3% [1]. Normally, the measured angle between the SMA and the aorta is 38-65°, maintained by mesenteric fat padding. The pathological mechanism of intestinal obstruction is an exaggerated acute angularity of the SMA, leading to compression of the third part of the duodenum between the SMA and aorta, caused by excessive fat loss. There are multiple predisposing factors that can increase the angularity, leading to duodenal compression, of which the most common is significant weight loss [1]. However, substantial weight loss and low BMI are not required for development of SMA syndrome. Gastroenterologists should be cognizant of other underlying medical disorders, previous surgeries, and psychological disorders that can heighten a patient's risk. Specifically, unexpected cases have been reported due to anorexia nervosa [2], spinal cord injury and kyphoscoliosis, and bariatric surgery [1]. However, there are limited cases reported of SMA syndrome subsequent to chemotherapy. Case Presentation We present a 22-year-old female with no prior comorbidities who clinically manifested with lethargy, menometrorrhagia, and pancytopenia, and received a subsequent diagnosis of aplastic anemia. She was scheduled for a stem-cell transplant from a fully matched, unrelated donor. At the time of the initial medical oncology evaluation, she weighed 49.5 kg and was 160.7 cm tall (BMI = 19.2). Subsequently, she underwent a conditioning regimen of cyclophosphamide-fludarabine-antithymocyte globulin (Cy/Flu/ATG) prior to her allogenic bone-marrow transplant. She received all 5 days of scheduled fludarabine as well as 2 days of scheduled cyclophosphamide. From day 9 posttransplant, her weight continued to drop precipitously to 48.2 kg, then to 47.9 kg, 46.6 kg, and 43.2 kg (on days 12, 14, and 50, respectively). She was drinking a suboptimal amount of fluid (30-40 oz per day), and she reported no appetite and had minimal desire for food items she once enjoyed. She was refractory to antiemetics, as well as dronabinol for appetite stimulation. On consultation to gastroenterology, she complained of persistent nausea and vomiting episodes with any oral intake. Additionally, the only nutrition she was taking in regularly was 2 protein shakes with a total of 550 calories per shake. Finally, due to the intractable and intolerable nausea and vomiting, she presented to the emergency department (at a weight of 42.1 kg). Physical examination revealed peri-umbilical tenderness but was otherwise normal.
Further history revealed a similar constellation of symptoms approximately 1 month prior, with a normal esophagogastroduodenoscopy, and resolution of symptoms with supportive care. During this hospitalization, however, she was initiated on total parenteral nutrition due to malnutrition and inability to maintain >50% of caloric requirements by mouth. She was extremely fatigued, even requiring assistance to use the commode. During hospitalization, she continued to have asymptomatic sinus tachycardia, which cardiology believed to be a physiologic response to her volume-depleted state and acute dehydration. Her tachycardia improved with resolution of the pain and vomiting and with increasing physical strength. On readmission imaging, CT scan showed a massively distended stomach and first and second portions of the duodenum, with collapse of the third portion of the duodenum as it traverses the SMA (Figure 1). CT angiography was obtained, revealing acute angulation of the SMA-aortic takeoff at approximately 20 degrees (Figures 2 and 3). A diagnosis of SMA syndrome was made. It was deduced that the chemotherapy-induced nausea, vomiting, and mucositis had led to appreciable weight loss and absence of the mesenteric fat pad, resulting in SMA syndrome. The patient underwent nasogastric decompression. By the day of surgical treatment, her weight had dropped to 39.5 kg. Her nadir was 38.8 kg (BMI = 15) 4 days post-op. She underwent a laparoscopic duodenojejunostomy resulting in a successful bypass. Additionally, with surgical treatment and discontinuation of the chemotherapy, her weight gradually progressed to 47 kg. The patient was then followed for resolution of the preoperative symptoms. She demonstrated appropriate emptying of the duodenum, and her subsequent follow-up was uneventful. Discussion Very few cases of SMA syndrome postchemotherapy have been reported in the literature. Table 1 offers a review. Each highlights the noteworthy complications in patients receiving treatment for malignancy, who may experience weight loss and severe emesis, which likely confound the presentation of the disease. Physicians should be wary of this rare etiology of duodenal obstruction and its clinical manifestations. Usually, the diagnosis can be confused with various motility-related or anatomic causes of duodenal obstruction. Although rare, a high index of suspicion should be employed for young patients presenting with signs and symptoms of intestinal obstruction and a history of weight loss, to allow inclusion of SMA (Wilkie's) syndrome in the differential diagnosis. Delayed diagnosis can lead to devastating outcomes, including death from complications related to electrolyte abnormalities or perforation. Furthermore, in patients who present with persistent emesis, SMA syndrome can precipitate hypovolemia, hypokalemia, and metabolic alkalosis. In order to confirm the diagnosis in a patient with symptoms suggestive of SMA syndrome, radiological studies are required, which may include upper gastrointestinal (GI) barium studies alongside simultaneous angiography [3] or an abdominal CT for noninvasive, enhanced anatomical detail. However, even with imaging, the diagnosis may be easily overlooked [4], as radiological findings do not always correlate with clinical findings [5]. Additionally, some patients may experience intermittent symptomatic compression of the duodenum, leading to a delay in diagnosis [1].
This was illustrated with our patient, who presented with similar symptoms 1 month prior to diagnosis, with no anatomic abnormalities on esophagogastroduodenoscopy. Depending on initial management, it is recommended that once the patient is symptom free and patency of the duodenum is achieved, a dietary regimen is begun while cautiously monitoring clinically and biochemically for refeeding syndrome. Some suggest reversal of weight loss as initial therapy; however, this is based on limited evidence. In one of the largest case series of SMA syndrome, the authors concluded that surgical management is superior to medical therapy with respect to success and recurrence, and surgery should be employed when conservative management fails, with lack of consensus on timing for surgical management [3]. Medical therapy consists of supportive care such as total parenteral nutrition or small-volume meals and postural changes [6]. Ultimately, in those who do not improve with conservative management, the surgical options include mobilization of the ligament of Treitz (Strong's procedure) [7], gastrojejunostomy, and laparoscopic duodenojejunostomy. Of those options, duodenojejunostomy may provide the best results [8], which is reflected in the successful outcome in our patient. In summary, physicians should be wary of this rare etiology of duodenal obstruction and its clinical manifestations. Although rare, a high index of suspicion should be employed for patients presenting with signs and symptoms of intestinal obstruction in the setting of chemotherapy, to allow inclusion of SMA syndrome in the differential. Data Availability Data are readily available, and readers may access the data supporting the conclusions of the study by directly emailing the corresponding author. Consent Patient consent was obtained for publication of case details. Disclosure Corsi and Abu-Heija are co-first authors. Conflicts of Interest The authors have no conflicts of interest. Authors' Contributions NC, AAH, and MC were involved in designing, reviewing the literature, and writing up the case. AAH, AR, and ME reviewed the manuscript and provided critical specialist input. All authors approved the final draft. Nicholas J. Corsi and Ahmad A. Abu-Heija contributed equally to this work.
HYDRAULIC NETWORK MODELING AND OPTIMIZATION OF PUMPING SYSTEMS Mitul Jani 1 , Shridhar Manure 2 and M G Dave 2 . 1. M. Tech. (Energy System), Dept. of Electrical Engg., Nirma University. 2. Bureau of Energy Efficiency (B.E.E.), Accredited Energy Auditor. Hydraulic model:- The hydraulic network simulation is done on the EPANET platform, using an underground reservoir (R1), 4 pumps (P1-P4), 4 valves (V1-V4), 13 delivery nodes (D1-D13), and 17 pipelines, viz. 13 for delivery (L1-L13) and 4 for pump suction (F1-F4). The network developed as per the actual configuration of the WDS is shown in Figure 1, and details are given in Table 1. Existing condition analysis:- In order to calculate the efficiency of a pump, its performance parameters must be measured as per standard practices. These measurements should not be instantaneous and hence must be taken as an average of measurements over some span of running hours. In this case, the measurements are taken for pressure, flow, and power over one hour for each pump, and the efficiency is calculated using Eq. (1), where Q = flow rate of fluid (in m 3 /hr or CMH), h = total head developed by the pump (in m), g = gravitational acceleration (in m/s 2 ), ρ = density of fluid (in kg/m 3 ), ƞ m = motor efficiency, and P = power input to the motor. Using the average values of the one-hour measurements, the efficiency obtained for each pump is summarized in Table 2. The measurements taken at site are fed into the EPANET network module in order to evaluate the overall performance of the hydraulic system. The per-unit cost of electricity used for calculating the operating cost is taken as Rs. 6.25/kWh, the density of water was found to be 992 kg/m 3 , and the acceleration due to gravity is taken as 9.81 m/s 2 . The motor efficiency was calculated after conducting a no-load test to find the fixed losses and a full-load test for the copper losses; after applying the de-rating factor, the motor efficiency was calculated to be 90%. The pumps are operated for 8 hours a day, which gives a per-day utilization / usage factor of 33%. The results of the simulation are given in Table 3. The pressure / head loss of the system components is shown in Figure 2. The velocity of water in the pipelines is an important parameter for evaluating frictional losses; the flow through each link and the unit head loss under the existing conditions modeled in EPANET are summarized in Table 4. These parameters are very useful in order to know the operating conditions of the network. They determine the scope of improvement in network operation and the possibilities of future expansions or modifications as per the application demand. It is important to know how the network operates and in what condition so as to explore any avenue for optimization. The velocity of water in each pipeline and the pressure available at the different nodes / delivery points are shown in Figure 3. 1) The condition of the impellers of Pump Nos. 3 and 4 might have deteriorated, which is supported by the fact that these pumps are producing a head of only 28 m, due to which they are not able to develop their rated hydraulic power. 3) The velocity of water in all the pipelines was less than 2.0 m/s, which is satisfactory as per the design standards, as a high velocity may incur more frictional loss.
4) The velocities are within limits; however, pipelines L5, L9 and L12 have a high unit head loss due to the fact that their diameters are smaller (275 mm). 5) Pipelines L5, L9, and L12 should be replaced with 350 mm pipes so that the head loss is reduced; the loss coefficient "C" used in the Hazen-Williams equation will also increase, and hence the overall head loss of the pipeline will reduce since the frictional losses would decrease. 6) The demand nodes D8, D13 and D11 are affected due to the poor performance of these pipelines; the pressure of water available at these nodes is very low as compared to the other demand nodes. 7) Due to low pressure at the demand nodes, the locality may not receive sufficient supply. Also, if demand increases, the pump may not be able to deliver up to that node. 8) The pressure-head loss in the pipelines leads to a waste of the energy that is provided to the fluid by the pump and hence must be dealt with. Impact of Change in Pipeline on the Network:- From the hydraulic network simulation, it is evident that pipelines L5, L9, and L12 have a significantly high head loss per unit length. If the diameter of these pipelines is increased from 275 mm to 350 mm, it would reduce the frictional head loss in the pipelines and increase the pressure of water supplied at the farthest end node, as evident from Figure 4. This simulation shows how significantly this measure would improve the water distribution, also taking into consideration the future expansion of the network and the increase in water demand, in order to study / examine the network behaviour under enhanced operating capabilities of the system. The change in pipeline diameter would yield the results obtained from the simulation given in Table 5. The results clearly show that the velocity of water in the pipelines is significantly reduced and hence the frictional head loss is reduced as well. The effect of the change in pipe dimension on pressure is also quite significant at the immediate demand node corresponding to the pipe, and it also increases the pressure at the last node of supply, as the head loss for the other pipelines is fixed. The results show that the pressure at delivery nodes D13, D8 and D11, fed from pipelines L5, L9 and L12 respectively, changes significantly due to the change in pipe dimension, and because of that, the end delivery nodes (D13, D8 and D11) have an increased pressure of supplied water. The difference in the values of pressure at the affected nodes is shown in Table 6. Corro-coating is a resin-based chemical material applied on metals / alloys that are exposed to or are in contact with water / moisture. In this case, the impeller and its casing might have been affected by corrosion. The resin-based chemical is applied as a coating to the internal parts of the pump. Together, these measures would increase the efficiency of the pump by approximately 5% in practice, against the 10% claimed by the manufacturer of the resin coating material. It is the simplest optimization measure: there is no requirement to make major changes in the network components, and the network performance is optimized simply by improving pump performance. The existing pumping network is estimated to work on the following improved parameters after implementation of the suggested measures, as shown in Table 7.
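The two calculations underlying these observations can be sketched as follows: the pump efficiency of Eq. (1) from the field measurements, and the Hazen-Williams unit head loss used to compare the 275 mm and 350 mm options. The unit conventions (Q in m 3 /h, P in kW), the roughness coefficient C = 140, and all numerical inputs are assumptions for illustration, not figures from this study.

```python
def pump_efficiency(q_m3h, head_m, power_kw, motor_eff, rho=992.0, g=9.81):
    """Pump efficiency from measured flow, head, and motor input power.

    Hydraulic power (kW) = rho * g * Q * H / 3.6e6 with Q in m3/h;
    shaft power (kW) = motor input power * motor efficiency.
    """
    hydraulic_kw = rho * g * q_m3h * head_m / 3.6e6
    return hydraulic_kw / (power_kw * motor_eff)

def hazen_williams_unit_headloss(q_m3h, diameter_mm, c=140.0):
    """Unit head loss in m per km of pipe (SI form of the Hazen-Williams equation)."""
    q = q_m3h / 3600.0                       # m3/s
    d = diameter_mm / 1000.0                 # m
    return 1000.0 * 10.67 * q ** 1.852 / (c ** 1.852 * d ** 4.87)

print(f"pump efficiency ~ {pump_efficiency(900, 35, 120, 0.90):.0%}")
print("unit head loss, 275 mm:", round(hazen_williams_unit_headloss(400, 275), 1), "m/km")
print("unit head loss, 350 mm:", round(hazen_williams_unit_headloss(400, 350), 1), "m/km")
```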
It is also suggested that, for better performance and system operation, overhauling and maintenance of the pumps should be done regularly and the system must be diagnosed properly in order to identify any problems, issues, or factors that lead to degradation of the efficiency of the pumping system. Timely maintenance of the pipes is also an important factor, and so is the cleaning of the strainer of the pump, which is present on the suction side in order to filter out sludge / solid particles / waste, if present in the reservoir. The improved flow and pressure of the network after implementation of the energy saving measures are projected to be as shown in the network in Figure 5. The overall efficiency of the network has improved, as is evident from the calculated results given in Table 8. The summary of calculations for the economic feasibility of implementing these measures is shown in Table 9. The total energy savings from these measures, for the present operational practice, are projected to be 1,86,180 kWh and, in monetary terms, Rs. 11.65 lakh, with an estimated investment of Rs. 34 lakh and an average simple payback period of approximately 3 years. [2] Construction of an Overhead Tank and pumping with an Existing Pump (1 x 200 HP):- In the current scenario, the WDS does not have any facility or infrastructure to supply water to the locality for 24 hours. Water is supplied in two 4-hour slots a day and hence for only 8 hours a day. If an overhead tank is constructed on the premises of the WDS and an existing 200 HP pump, after overhauling (in order to improve its efficiency), is used to fill the tank, which in turn supplies water to the network, this would not only reduce the number of pumps operating and save energy, but also provide 24 hours of water supply to the locality. As per the data available from the network, in order to cater to the demand of the locality, the capacity of the overhead tank should be approximately 45 lakh litres to meet the demand for 24 hours. According to the capacity and pressure head requirement of the network, the dimensions as well as the elevation required for the concrete tank are obtained as given in Table 10. The cost of constructing the overhead tank, along with the civil, plumbing, and electrical costs, is obtained by a weighted-average method, as given in Table 11. In this N.O.M., only one pump of 200 HP supplies water to the tank through a pipeline, as per the level of water inside the overhead tank. This measure would save energy in terms of the pumping requirement, and the overhead tank would provide the network with 24 hours of uninterrupted water supply. In order to simulate this inside the network built in EPANET, the demand of each node over 24 hours has been distributed into average hourly demands with variable multiplying factors. This gives an idea of the hourly performance of the network as well as improves the distribution capacity of the network by providing uninterrupted supply. The modified network, as simulated in EPANET, shows the pressure at individual nodes and the flow in individual links, as shown in Figure 6. After the overhauling, the pump works at an improved efficiency of around 72%, with estimated performance parameters as given in Table 12. The pump is required to run for 12 hours during the day in order to pump water to the overhead tank as per demand and water level variations. The techno-economic aspects of this N.O.M., as calculated in EPANET, are given in Table 13.
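The economics quoted above follow from simple-payback arithmetic, sketched below with the tariff, energy-saving, and investment figures given for the first set of measures; the helper itself is only an illustration of the calculation, and the same routine can be applied to the other N.O.M. options.

```python
def simple_payback(energy_saved_kwh, tariff_rs_per_kwh, investment_rs):
    """Annual monetary savings and simple payback period in years."""
    annual_savings_rs = energy_saved_kwh * tariff_rs_per_kwh
    return annual_savings_rs, investment_rs / annual_savings_rs

savings_rs, payback_years = simple_payback(186_180, 6.25, 3_400_000)
print(f"annual savings: Rs {savings_rs / 1e5:.2f} lakh, simple payback ~ {payback_years:.1f} years")
```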
A summary of the feasibility study and the economic advantages of this N.O.M. is given in Table 14. The new pumps work at an efficiency of around 71%, with estimated performance parameters as given in Table 15. The pumps operate for 13 hours during the day to pump water to the overhead tank as per demand and water level variations. The techno-economic aspects of this N.O.M., as calculated in EPANET, are given in Table 16. A summary of the feasibility study and the economic advantages of this N.O.M. is given in Table 17. Summary:- A summary of all the Network Optimization Measures (N.O.M.) is given in Table 18. It gives a comparison of the different optimization techniques for the water distribution network of the WDS. The most suitable option can be chosen on the basis of the requirements of the application. It also depends on various factors such as the reliability of network supply, power consumption, scope of energy savings, amount of investment, and suitability for the site; these and other considerations are to be kept in view while undertaking the hydraulic study of a large and complex network.
The Holistic Review on Occurrence, Biology, Diagnosis, and Treatment of Oral Squamous Cell Carcinoma Oral squamous cell carcinoma (OSCC) is a prevalent type of head and neck cancer. It is widespread and associated with a high death rate of around 50% in some regions of the world. In this review, we discuss the likelihood of developing OSCC and the impact of age. Prior to examining the vast array of diagnostic indicators, a brief explanation of the biology of the disease is provided. Finally, the therapeutic strategies for OSCC are listed. The complete literature for this study was compiled by searching Google Scholar and PubMed using the terms "OSCC," "oral squamous cell carcinoma," "diagnosis of OSCC," "oral cancer," and "biomarkers and OSCC." The research finds that OSCC has several critical parameters with considerable room for additional in-depth study. Introduction And Background Various concepts of the most recent research in oral squamous cell carcinoma are presented in Figure 1. The Global Burden of Disease Study in the 10 most populated countries suggests trends and gender differences in the mortality rate of oral cancer [1]. The following review provides a systematic picture of the research undertaken on the fundamental concepts, which are segregated into four key sections. The first section introduces this critical issue and delves into the occurrence of OSCC and how its risk is influenced by age. This is followed by the biology section, which gives a peek into the role of viruses and the aspects of the microbiome and the bacteriome. The penultimate section deals with the current research on diagnosing OSCC via a wide range of biomarkers. The last section summarizes the various treatments prescribed and the areas of interest pursued. Occurrence and influence of age The most common head and neck malignancy worldwide is oral cavity squamous cell carcinoma (OCSCC) [2]. It accounts for approximately 1% of cancer cases that are newly diagnosed every year in the United States [3]. On a global scale, these cancers rank sixth among the most commonly observed types of cancer. Roughly 90% of these cancers are histologically squamous cell carcinomas, referred to as oropharyngeal squamous cell carcinoma (OPSCC) [4,5]. To get a perspective on the enormity of this domain, after removing duplicates, 5247, 2167, and 153 articles were found across three databases, including PubMed, Scopus, and Embase. Panda et al. used SPSS's chi-square test to compare the variations in OPSCC staging and grading between two age groups. Statistical significance was defined as a p-value of 0.05. The number or percentage of overall survival (OS), disease-free survival (DFS), recurrence, distant metastasis (DM), and second primary (SP) events in both cohorts were combined to create the odds ratio (OR), which was then used to conduct the meta-analysis. Trials were further divided into matched and mismatched studies for one or more criteria, such as age, gender, site, tumor, node, metastasis (TNM) staging, and treatments offered, in order to perform subgroup analysis. The funnel plot in RevMan version 5.3 (Copenhagen, Denmark: Cochrane Collaboration) was used to evaluate publication bias. In young patients, there were 49% higher odds of recurrence in unmatched subgroup analysis and 90% higher risks of metastasis in matched subgroup analysis. Young age may be taken into account as a separate determinant of recurrence and distant metastasis (DM), according to the results, although additional matched studies are needed to confirm this link.
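As a small illustration of the odds-ratio comparison used in the meta-analysis described above, the sketch below computes an OR and a Wald-type 95% confidence interval from a 2 x 2 event table; the event counts are entirely hypothetical and the log-OR normal approximation is our own choice, not the exact procedure of the cited study.

```python
import math

def odds_ratio_ci(events_a, total_a, events_b, total_b, z=1.96):
    """Odds ratio (group A vs group B) with a Wald 95% CI on the log scale."""
    a, b = events_a, total_a - events_a
    c, d = events_b, total_b - events_b
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return round(or_, 2), (round(lo, 2), round(hi, 2))

# Hypothetical recurrence counts in young vs older cohorts:
print(odds_ratio_ci(events_a=30, total_a=100, events_b=22, total_b=100))
```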
A significantly better overall survival (OS) was observed in younger patients compared to adults. The heterogeneity ranged from moderate to severe. The Surveillance, Epidemiology, and End Results (SEER) database analysis noted an increase in the average annual percentage of the incidence of oral tongue squamous cell carcinoma (OTSCC) to be more significant in men at 1.2% than in women at 0.5%, and in patients below 45 years vs. above (1.6% vs. 0.9%, respectively) from 1973 to 2010 [5]. Young patients had a non-significant tendency toward lower recurrence-free survival. Also, no appreciable difference was observed in relapse-free survival by age. According to this study, young patients with OTSCC may have a higher risk of recurrence than older patients [6]. Another important study to support this finding was the report of Garavello et al., who found the five-year DFS rates to be 34% for the young compared to 58% among the old cohorts with a p=0.003 [7]. Numerous probable causes of OTSCC's poor prognosis have been uncovered through molecular investigations. However, the study's sample size restrictions and design make it impossible to draw any definitive conclusions about age-related differences [8]. Younger patients did not have worse survival outcomes than older patients, according to a meta-analysis of nine trials (HR: 0.97; 95% confidence intervals (CI): 0. 66-1.41). This study revealed that among the OCSCC patients receiving final therapy, young age is not a poor predictive survival factor due to the benefit of integrating the existing information in the systematic meta-analysis. These studies were the subject of a meta-analysis of overall survival hazard ratios, which revealed a pooled hazard ratio of 0.95. These data also imply that young individuals have comparable oncologic outcomes to older patients with a little higher age barrier. Against this backdrop, it is vital to understand the biology of the disease to probe further diagnosis and treatment. Role of the Virus Human papillomavirus (HPV) and OCC: Although the link between the human papillomavirus (HPV) and uterine cervix and anogenital carcinomas is well known, its role in the emergence of oral squamous cell carcinomas is still debatable. In this sense, reference lists were manually screened in the citation tracking process [9]. Studies performed on people were cohort, case-control, or cross-sectional, evaluated the HPV oncogenic activity by the E6 and E7 mRNA, contained primary oral SCC (OSCC), and/or included a biopsy to confirm the diagnosis were all considered eligible. Because none of the included research was longitudinal and none of the cross-sectional studies had a control group, it could not determine if HPV infection was related to OSCC [10]. Seventeen instances (4.4%) tested positive for HPV/mRNA. Two examples of HPV-18 and 14 cases of HPV-16 were both positive [11]. Because none of the five studies considered were longitudinal or cross-sectional and lacked a control group, it could not determine if HPV infection was related to OSCC [10]. Hence, further studies on the role of HPV infection and its relation to OSCC are paramount and could be scope for other research groups. Role of smoking and HPV: Skoulakis et al. explored the synergistic role of smoking and the human papillomavirus (HPV) in developing cancer of the head and neck [12]. Smoking was less common in the HPVpositive group than in the HPV-negative group. 
So probably, there is no significant role of smoking in the pathogenesis of "head and neck squamous cell carcinoma" (HNSCC). Maxwell et al. evaluated the role of tobacco on recurrence among HPV-positive patients who had oropharyngeal cancer (OPC) [13]. They noted a statistically significant higher risk of recurrence in current smokers than those who had never smoked. Skoulakis et al. determined that smoking is statistically more observed in HPV-negative than positive groups of HNSCC patients [12]. To fully explain the pathophysiology of HNSCC and the likely carcinogenetic pathways that are brought on by smoking and HPV, more research is, however, required. Relationship Between Epstein-Barr Virus and OSCC Apart from both quantitative and qualitative assessments of the Epstein-Barr virus (EBV) association with OSCC, the meta-analysis by Sivakumar et al. affirmed the association between EBV and OSCC [14]. Polymerase chain reaction, in situ hybridization, and immunohistochemistry were among the diagnostic techniques performed. Latent membrane protein (LMP)-1, EBV-determined nuclear antigen-1, and EBVencoded small non-polyadenylated RNA-2 were among the diagnostic targets. The results of the metaanalysis revealed a connection between OSCC and EBV. However, given the several crucial limitations of the studies undertaken, there is a need for further validation of the association for any conclusive inference. High-Throughput Nucleotide Sequencing for Bacteriome Studies Cancer is a significant disease in modern times. Given their favorable economic and social structures and the evident aging of their populations, it is the leading cause of mortality in developed countries, particularly [15][16][17]. The chosen studies diverged slightly from the main goals of this review in that they used next-generation sequencing for the microbial analysis and addressed the broad topic of the connection between oral squamous cell cancer (OSCC) and microbiota. Several articles focused primarily on comparing the oral microbiota in OSCC versus typical tissue samples [18]. Three of them had additional objectives -to make a correlation between oral cancer and certain life habits as proposed by Lee et al. [19], to analyze the genomics and metabolic pathways in microbes that are associated with OSCC [20], and to evaluate the potential growth of the bacteria's pro-inflammatory factors in their OSCC samples, chiefly by Perera et al. [20]. Most studies detected microorganisms related to inflammatory responses in the OSCC samples, like Fusobacterium nucleatum and Pseudomonas aeruginosa. While the former is linked to the OSCC of the tongue, the latter is associated with the OSCC of gingiva in addition to the tongue, for at least one study. Additionally, numerous bacteria that metabolize ethanol to create acetaldehyde, including Neisseria spp., Rothia mucilaginosa, and Streptococcus mitis, were discovered in the OSCC samples. However, the studies yielded no consensus on the hypothesis, given that often, they were found in a larger quantity within the non-tumor controls. Association of Microbiome The significance of microorganisms in the etiology of oral squamous cell carcinoma has garnered particular attention because periodontal disease is a microbial condition. Sami et al. offer one comprehensive review [21]. Several bacterial species have been identified in the oral squamous cell carcinoma (OSCC) samples [22]. 
These include relatively rare species that inhabit the oral cavity like Bacteroides fragilis [23], and bacteria earlier unnamed like Actinomyces and Streptococcus [24,25]. In addition, the environmental species were observed like Dietzia psychralcaliphila and Gordonia sputi. More thorough studies have been conducted on a few of these species, including Porphyromonas gingivalis and Fusobacterium nucleatum. Most haven't, nevertheless, been thought about in terms of both their singleton and polymicrobial functions in the OSCCassociated microbiome. The precise processes by which the oral microbiome may contribute to the development of OSCC are yet not fully understood [26]. Bacterial Dysbiosis -Culture-Independent Studies According to data collected from 731 cases and 809 controls, there was no steady amelioration of any unique taxon in the oropharyngeal or oral malignancies, albeit common taxa could be distinguished between investigations. While several studies found a link between dysbiosis and oral/oropharyngeal cancer, the analytical and methodological differences made it impossible to produce a consistent summary. This emphasizes the need and scope for greater quality research with standardized methodology and reporting. More than 30% of the non-tumor tissue included the bacteria Granulicatella adiacens, Porphyromonas gingivalis, Sphingomonas spp. PC5, and Streptococcus mitis/oralis [27]. One study reported using reagent controls to establish the lack of bacterial contamination. However, this influences the data interpretation [28]. Initial microbiome research on oral cancer has shown altered bacterial populations, including pathogens of known importance. This may indicate that bacterial genome-associated inflammatory alterations play a role in mouth cancer as a contributing factor. Identification of the specific changes in a microbe is indeed a positive step towards the development of salivary-based biomarkers of microbes in the clinical evaluation of the progress of oral cancer. Role of Porphyromonas Gingivalis OSCC is the widely observed malignant neoplasm of the oral region [29]. This study focused on the mechanisms that P. gingivalis plays in the development, upkeep, and/or maintenance of OSCC. In a murine model, Gallimidi et al. showed that P. gingivalis-infected OSCC tumors that were 4NQO-induced were noticeably more widespread and invasive, with strong expression of IL-6 [30]. The PAR4 receptor-induced over-expression of the MMP9 via kinase-dependent signaling pathways of p38MAPK and ERK1/2. PAR2 and PAR4 were both found to be required for increasing the OSCC cell invasion potential. Streptococcus gordonii and P. gingivalis can interact to create communities, which then colonize the tooth plaque. Because of its damaging effects on periodontal tissues, P. gingivalis gains from its interaction and coaggregation in the subgingival plaque [31]. The bacterium's effect in the oral epithelial cells could vary based on the phase development of OSCC, as such alterations were absent in the non-diseased gingival keratinocytes. This is further attested by the study of Liu et al. who demonstrated a novel mechanism of how P. gingivalis stimulates the immune evasion of OSCC via the protection of cancer from any viable macrophage attack [32]. The most recent research highlight the simplicity of managing periodontal disease (PD) as the necessary means of preventing OSCC. However, more research on human subjects is required to estimate the actual oncogenic risks from the infection of P. 
gingivalis in oral malignancy. This could also expand the scope of OSCC development, including determining the tumor's location and stage. Proteomic Markers The neck and head squamous cell cancer (HNSCC) is a highly prevalent malignancy linked to chewing tobacco. Over the last two decades, researchers have discovered an increasing number of HNSCC patients with positive human papillomavirus (HPV) tumors that appear in younger people and those who consume less or no alcohol or tobacco. The relationship in the oropharynx is more vital than that in the oral cavity [33]. These articles were divided into subsections listed below, followed by a list of all detected protein biomarkers and a brief explanation of their significance. Clinical applications of biomarkers include detecting, diagnosing, and monitoring disease activity and evaluating therapy efficacy. Tung et al. reported the reduction of vitamin D-binding protein in OSCC plasma, suggesting differential regulation across different species [34]. Role of Glucose Transporters The solute carriers' major facilitator superfamily has approximately 400 members, including glucose transporters (GLUTs) [35]. The distribution of glucose and other hexoses to metabolically active cells depends critically on the control of the expression of glucose transporter proteins. Two significant proteins in this class are glucose transporters 1 (GLUT-1) and glucose transporters 3 (GLUT-3) [36]. GLUT-1 expression in the Tca8113 and CAL27 cell lines was significantly higher than that in the normal oral keratinocytes (NOK) cell line, natural killer (NK). No matter if the tumor was in an early or late stage or whether it had a low or high tumor grade, GLUT-3 expression was always excessive in the deep invasive front [37]. Accordingly, there seemed no link between GLUT-3 and tumor grade [38]. GLUT-3 is the second largest researched transporter, albeit with limited research. Mixed results were found from mRNA investigations when cell lines expressed GLUT-3 [39], and frequently overexpressed in oral squamous cell carcinoma (OSCC) tumors than the adjoining healthy tissues [40,41], with occasional exceptions [42]. GLUT-1 and maybe GLUT-3 are the only two glucose transporters extensively examined in OSCC and healthy oral keratinocytes. In a different investigation by Kunkel et al., the positive cell proportion was more accurate at predicting the prognosis than the intensity of GLUT-1 staining [43]. Compared to those with cell positivity of >50%, those with cell positivity at 50% demonstrated a median survival of 138 months (p=0.0034). Clinical decision-making may benefit significantly from a greater understanding of these proteins' connections to illness development, resistance to treatment, and prognosis. Prognostic Biomarkers It is crucial to find accurate prognostic biomarkers for detecting oral tongue squamous cell carcinoma (OTSCC) to predict the tumor's behavior more accurately and direct the subsequent therapy decisions. There were 174 investigations carried out during the previous three decades, and 184 biomarkers were assessed for the prognostication of OTSCC. Numerous biomarkers have been proposed as helpful prognosticators for OTSCC, but the methodology and reporting quality of the original studies is generally subpar, making it impossible to draw definitive conclusions. OTSCC is increasing in incidence and has an aggressive clinical behavior with a relatively poor prognosis [44,45]. 
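Findings such as the 138-month median survival reported above for tumors with a lower proportion of GLUT-1-positive cells rest on standard Kaplan-Meier estimates and log-rank comparisons. The sketch below shows how such a comparison could be set up; it assumes the third-party lifelines package, and the follow-up times and event indicators are invented rather than taken from Kunkel et al.

```python
# Minimal Kaplan-Meier / log-rank sketch for a GLUT-1 "low" vs "high" comparison.
# Requires the `lifelines` package; all numbers below are illustrative only.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Months of follow-up and event indicators (1 = death observed, 0 = censored)
low_t, low_e = [140, 150, 138, 160, 145], [1, 0, 1, 0, 1]    # <=50% positive cells
high_t, high_e = [60, 45, 80, 30, 95], [1, 1, 1, 1, 0]       # >50% positive cells

kmf = KaplanMeierFitter()
kmf.fit(low_t, event_observed=low_e, label="GLUT-1 <=50%")
print("median survival (months):", kmf.median_survival_time_)

result = logrank_test(low_t, high_t, event_observed_A=low_e, event_observed_B=high_e)
print("log-rank p-value:", result.p_value)
```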
When a biomarker proved to be statistically "nonsignificant" in an unadjusted analysis, it was typical to reject it from an adjusted analysis using Cox regression. In contrast, biomarkers that were "significant" in an unadjusted study were frequently included in an adjusted analysis. Numerous immunohistological indicators examined in OTSCC and buccal cancer samples did not predict survival in OTSCC, although some did in buccal carcinoma [45][46][47]. Malondialdehyde -Oxidative Stress Marker Squamous cell carcinoma (SCC) is an oral malignancy widely observed. The endogenous formation of malondialdehyde (MDA) during lipid peroxidation is an appropriate biomarker for endogenous DNA damage [48]. The degree of tissue damage caused by oxidative stress may be determined by estimating the lipid peroxidation by-products in the OSCC group. The research typically revealed a prominent increase of malondialdehyde in OSCC-positive cohorts than in the control healthy group. Nevertheless, to ascertain that MDA is a potential biomarker for oxidative stress and a valid prognostic marker of OSCC, this calls for a study of a grander scale with controls more evenly-balanced and equidistribution samples between the various histological grades and clinical stages of OSCC. CircRNAs Circular RNAs (CircRNAs), a newly discovered non-coding RNA, have been linked to carcinogenesis, metastasis, and cancer progression. They may be potential biomarkers for detecting OSCC. The post-test probability of the circRNAs was calculated using Fagan's nomogram [42]. The post-test probability increased to 47% from 20% with a positive likelihood ratio of 4 and decreased to 8% with a negative likelihood ratio of 0.33. Accordingly, it can be suggested that circRNAs are an effective and reliable diagnostic biomarker. Multiple studies have shown that dysregulated circRNAs are crucial for cancer cell proliferation, metastasis, and incidence. Compared to those who used tissue samples to diagnose OSCC patients, the use of plasma and saliva specimens demonstrated a better efficacy, with no heterogeneity. Histopathological Features In a crucial study, the criteria for exclusion of studies included -alternative tumors other than OSCC [49], samples that contained biopsies [50,51], immunohistochemistry-based investigations [51], histological grading systems used for analysis [52], reports of univariate survival analysis [53], studies based solely on association analysis [54], omitted the hazards for OS (HR) and/or its 95% confidence interval (CI), and reviews of associated literature [55], conference abstracts and letters [56]. During the title and abstract screening, 2490 research were included. Of these, 2074 studies were eliminated, leaving 416 studies that satisfied the requirements for full-text screening. A promising biomarker should be precise, quantifiable, relevant, accessible, and affordable. Even though this is a rapidly evolving field with standard practice for some cancers, the therapeutic approach to OSCC and its prognosis still rely on tumor, node, metastasis (TNM) clinical staging. It was useful to probe the review of the impact of histopathological traits on hematoxylin and eosin (HE)-stained slides as prognostic indicators for OSCC patients. Perineural invasion (PNI) and disease-specific survival (DSS) were significantly correlated in a meta-analysis of 7523 individuals from 26 studies. Depth of invasion (DOI) in OSCC has only been the subject of one previous meta-analysis. 
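The Fagan's nomogram used for the circRNA analysis above is simply Bayes' theorem applied on the odds scale, so the quoted post-test probabilities can be reproduced with a few lines of arithmetic. The sketch below assumes the stated 20% pre-test probability and the rounded likelihood ratios of 4 and 0.33; with these inputs the positive post-test probability comes out near 50% rather than the reported 47%, which suggests the exact positive likelihood ratio was slightly below 4.

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Bayes' theorem on the odds scale, as encoded graphically by Fagan's nomogram."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Using the figures quoted for the circRNA meta-analysis:
print(post_test_probability(0.20, 4.0))   # ~0.50 with LR+ = 4 (reported: 47%)
print(post_test_probability(0.20, 0.33))  # ~0.08 with LR- = 0.33 (reported: 8%)
```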
Regardless of the cutoff point, that DOI meta-analysis found substantially higher risks of lymph node metastasis at diagnosis and of recurrence in tumors with a high DOI [57]. CAIX Expression Hypoxia is one of the most challenging conditions under which cells and the extracellular matrix must maintain homeostasis. Much research has investigated the prognostic value of carbonic anhydrase IX (CAIX) in varying cancer types, including OSCC [58]. The PECO framework-based investigation into the predictive significance of tumoral CAIX immunohistochemistry expression in patients with OSCC is a significant article in this field. The analysis returned a pooled hazard ratio (HR) approximately twice as high for the Asian group (HR = 2.01, 95% confidence interval (CI): 1.42-2.86) as for the non-Asian group. Here, the correlation between CAIX overexpression and worse OS and disease-free survival (DFS) in OSCC patients was confirmed, with overexpression implying an increase in the overall risk of mortality of around 50%. S100 Proteins Oral cancer is a significant health issue among the general public [59]. To review the literature in this domain, a detailed search was strategized for every database with free-text words and MeSH (Medical Subject Headings) combinations. The findings showed a significant increase in the levels of S100A7 in three studies [60][61][62]. In comparison, overexpression was reported for S100A2 [60,62], A9 [63,64], and A12 in oral squamous cell carcinoma (OSCC) patients compared to healthy control cohorts [65,66]. In contrast, the quantitative analysis demonstrated underexpression of S100A8 [62,67], A9 [62,67], and A14 in two studies each in OSCC patients as against healthy subjects [62,66]. It is noteworthy that all studies report the overexpression of S100A7 in OSCC patients relative to healthy individuals [61][62][63]. Accordingly, it is postulated that increased S100A7 protein expression is linked to the onset of oral cancer, making the secreted protein a potential OSCC biomarker. Unfortunately, the overall sample size of the studies was small and strongly influenced the interpretation of the findings. It remains unclear whether the up- or downregulation of particular S100 protein members acts as a diagnostic sign in OSCC. CYFRA 21-1 and MMP-9 as Salivary Biomarkers Numerous techniques and tests can identify OSCC. In patients presenting with clinically obvious lesions, the diagnostic accuracy of several methods, including oral cytology, vital staining, oral spectroscopy, and light-based detection, has been assessed by a Cochrane systematic review in a dental environment [68]. When the techniques of participant recruitment are ignored, studies that compare changed expression of a particular salivary biomarker between healthy "control" participants and "cases" with OSCC may produce false results [68]. CD68 and CD163 Tumor-Associated Macrophages One common neoplasm in humans is squamous cell carcinoma of the head and neck (SCCHN) [74].
An important study based on the following criteria for the qualitative and quantitative analysis was conducted: (i) prospective/retrospective cohort studies that analyzed the cluster of differentiation (CD)68 + and/or CD163 + tumor-associated macrophages (TAMs) expressed in clinical dissections of SCCHN; (ii) minimum population of 20 patients in each study; (iii) semiquantitative determination using immunohistochemistry (IHC); (iv) studies that determined the TAM correlation to patients' prognosis on at least one of these parameters -disease-free survival (DFS), overall survival (OS) and progression-free survival (PFS). While CD68 + TAMs were assessed in 12 studies, eight studies were analyzed for their CD163 + TAMs. Notably, four of these studies evaluated both TAM subpopulations [75][76][77]. The meta-analysis demonstrated the excess CD163 + TAM and negligible CD68 + TAM, correlating to the poor survival of HNSCC patients. In accordance with previous observations of other immunological indicators, including the programmed cell death ligand 1 (PDL1), both TAMs were more frequently expressed in females than in males [78,79]. Here, it was shown that stromal CD163 + TAMs are associated with a worse prognosis in SCCHN patients. Treatment The disease stage, location, and the patient's general health status affect how OSCC is treated. An extensive evaluation of the various treatment techniques is given by Gharat et al. [80]. Inhibitors of the epidermal growth factor receptor (EGFR) and cyclooxygenase-2 (COX-2) enzymes, photodynamic therapy, chemoprevention, nanocarrier-based drug delivery technology, polymeric nanoparticles, nanoemulsion, solid lipid nanoparticles, nanolipid carriers, carbon nanotubes, nanoliposomes, metallic theranostic nanoparticles, hydrogels, cyclodextrin based system, liquid crystals, and surface-engineered particulate system are among them. The majority of oral squamous cell carcinoma (OSCC) cancers overexpress the epidermal growth factor receptor (EGFR/ErbB1/HER1), and links have been made between higher expression levels and an aggressive phenotype, a poor prognosis, and resistance to anticancer therapy [81]. In OSCC, prostaglandin E2 (PGE2) release is promoted by COX-2 overexpression. This stimulates the cell surface receptors (EP1, EP2, EP3, and EP4) to encourage OSCC development [82]. Accordingly, EGFR and COX-2 inhibitors have been probed as potential therapeutics. The existing treatment modalities have brought about the main issues relating to non-specific cell death for OSCC, such as chemotherapy, radiation, invasive surgery, and photodynamic therapy. As a result, surface engineering has recently made it possible for scientists to create a variety of nanoparticles with the necessary targeting, programmed-release, and imaging properties, thus advancing the field of nanotheranostics. Despite all these benefits, additional research is required to determine nanotechnology's practical application and efficacy for OSCC management. There aren't many studies on the direct site for treating OSCC using the nanoparticulate method via the oral cavity or the buccal mucosa. Researchers in this sector have the chance to investigate nanoparticulate systems further to enhance medicine delivery and patient quality of life. Conclusions The review of the current literature on OSCC provides a clear insight into the current standing in the domain. While demonstrating the necessity for deeper exploration, our review notes the holistic aspects of this issue. 
The influence of age, smoking, and viral infection is detailed, while the hypotheses yet to be confirmed in the different aspects of the disease are delineated. An exhaustive account of the biomarkers used for OSCC diagnosis is discussed, outlining their comparative challenges and successes. In conclusion, this review strives to highlight the frontiers of research on OSCC and the remaining gaps that need to be resolved. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2022-10-14T15:24:10.335Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "0e1e24783e1d62560791427d0bb0c6d179797f98", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/115082-the-holistic-review-on-occurrence-biology-diagnosis-and-treatment-of-oral-squamous-cell-carcinoma.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "1377fab62e0442db5e823e30c1896a54acaee952", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
208006224
pes2o/s2orc
v3-fos-license
The Snyder model and quantum field theory

We review the main features of the relativistic Snyder model and its generalizations. We discuss the quantum field theory on this background using the standard formalism of noncommutative QFT and discuss the possibility of obtaining a finite theory.

Introduction Since the origin of quantum field theory (QFT) there have been proposals to add a new scale of length to the theory in order to solve the problems connected to UV divergences. Later, attempts to build a theory of quantum gravity also proved the necessity of introducing a length scale, which has been identified with the Planck length $L_p = \sqrt{\hbar G/c^3} \sim 1.6\cdot 10^{-35}$ m [1]. A naive application of this idea, like a lattice field theory, would however break Lorentz invariance. A way to reconcile discreteness of spacetime with Lorentz invariance was proposed by Snyder [2] a long time ago. This was the first example of a noncommutative geometry: the length scale should enter the theory through the commutators of spacetime coordinates. Noncommutative geometries were however not investigated for a long time, until they revived due to mathematical [3] and physical [4] progress. Their present understanding is based on the formalism of Hopf algebras [5]. In particular, QFT on noncommutative backgrounds has been largely studied [4]. In most cases, a surprising phenomenon, called UV/IR mixing, occurs: the counterterms needed for the UV regularization diverge for vanishing incoming momenta, inducing an IR divergence. Noncommutative geometries also admit a sort of dual representation on momentum space in theories of doubly special relativity (DSR) [6]. Here a fundamental mass scale is introduced, that causes the curvature of momentum space [7], and the deformation of both the Poincaré group and the dispersion relations of the particles. The Snyder model can also be seen as a DSR model, where the Poincaré invariance and the dispersion relations are undeformed. As mentioned above, Snyder's idea was almost abandoned with the introduction of renormalization techniques, with the exception of some Russian authors in the sixties [8]. It revived more recently, when noncommutative geometry became an important topic of research. However, in spite of several attempts using various methods [8,9], the issue of finiteness of Snyder field theory has not been established up to now. Here we review an attempt to investigate this topic using the formalism of noncommutative QFT [10,11].

The Snyder model The most notable feature of the Snyder model is that, in contrast with most examples of noncommutative geometry, it preserves the full Poincaré invariance. In fact, it is based on the Snyder algebra, a deformation of the Lorentz algebra acting on phase space, generated by positions $x_\mu$, momenta $p_\mu$ and Lorentz generators $J_{\mu\nu}$, that obey the Poincaré commutation relations together with the standard Lorentz action on position and a deformation of the Heisenberg algebra (preserving the Jacobi identities), $[x_\mu, x_\nu] = i\beta J_{\mu\nu}$ and $[x_\mu, p_\nu] = i(\eta_{\mu\nu} + \beta p_\mu p_\nu)$, where $\beta$ is a parameter of the order of the square of the Planck length and $\eta_{\mu\nu} = \mathrm{diag}(-1, 1, 1, 1)$. The generators $J_{\mu\nu}$ are realized in the standard way as $J_{\mu\nu} = x_\mu p_\nu - x_\nu p_\mu$. In contrast with most models of noncommutative geometry, the commutators (4) are functions of the phase space variables: this allows them to be compatible with a linear action of the Lorentz symmetry on phase space. However, translations act in a nontrivial way on position variables.
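As an illustrative consistency check on the algebra just described, its classical (Poisson-bracket) analogue can be verified symbolically. The sketch below assumes the widely used realization x_µ = ξ_µ + β(ξ·p)p_µ on canonical variables (ξ^µ, p_µ) and checks that the brackets close on {x_µ, x_ν} = β J_µν and {x_µ, p_ν} = η_µν + β p_µ p_ν; this is only a classical cross-check, not the operator derivation of the original references, and the variable names are conventions of the script.

```python
# Classical (Poisson-bracket) check of the Snyder brackets, using the realization
# x_mu = xi_mu + beta*(xi.p)*p_mu on canonical variables with {xi^mu, p_nu} = delta^mu_nu.
import sympy as sp

beta = sp.symbols('beta')
xi = sp.symbols('xi0:4')              # contravariant xi^mu
p = sp.symbols('p0:4')                # covariant p_mu
eta = sp.diag(-1, 1, 1, 1)            # Minkowski metric, signature (-,+,+,+)

def pb(f, g):
    """Canonical Poisson bracket."""
    return sum(sp.diff(f, xi[a]) * sp.diff(g, p[a])
               - sp.diff(f, p[a]) * sp.diff(g, xi[a]) for a in range(4))

xi_low = [sum(eta[m, n] * xi[n] for n in range(4)) for m in range(4)]   # xi_mu
xip = sum(xi[m] * p[m] for m in range(4))                               # xi.p
x = [xi_low[m] + beta * xip * p[m] for m in range(4)]                   # Snyder x_mu
J = [[xi_low[m] * p[n] - xi_low[n] * p[m] for n in range(4)] for m in range(4)]

for m in range(4):
    for n in range(4):
        assert sp.simplify(pb(x[m], x[n]) - beta * J[m][n]) == 0
        assert sp.simplify(pb(x[m], p[n]) - (eta[m, n] + beta * p[m] * p[n])) == 0
print("classical Snyder brackets close as expected")
```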
It is important to remark that, depending on the sign of the coupling constant β, two rather different models can arise: They have very different properties. For example, the Snyder model has a discrete spatial structure and a continuous time spectrum, while the opposite holds for anti-Snyder. The subalgebra generated by J µν and x µ is isomorphic to the de Sitter/anti-de Sitter algebra, and hence the Snyder/anti-Snyder momentum spaces have the same geometry as de Sitter/anti-de Sitter spacetime respectively. In fact, the Snyder momentum space can be represented as a hyperboloid H of equation embedded in a 5D space of coordinates ζ A with signature (−, +, +, +, +), or equivalently as a coset space SO(1, 4)/SO(1, 3). The Snyder commutation relations are recovered through the choice of isotropic (Beltrami) coordinates on H and the identification where M AB are the Lorentz generators in 5D. Note that this construction implies that p 2 < 1/β, and hence the existence of a maximal mass, of the order of the Planck mass, for elementary particles. This is a common feature in models with curved momentum space [7]. The momentum space of the anti-Snyder model can be represented analogously, as a hyperboloid of equation (5) with β < 0, embedded in a 5D space of coordinates ζ A with signature (+, −, −, −, +), or equivalently as a coset space SO(2, 3)/SO(1, 3). Again, anti-Snyder commutation relations are recovered through the choice of isotropic coordinates (6) and the identification (7). An important difference from the previous case is that the momentum squared is now unbounded. In the following we shall concentrate on Snyder space, but most results hold also for β < 0. Generalizations of the Snyder model The Snyder model can be generalized by choosing different isotropic parametrizations of the momentum space, but maintaining the identification x µ = M µ4 . In this way, eqs. (1-3) and the position commutation relations still hold, but [x µ , p ν ] is deformed. For example, choosing p µ = ζ µ , one obtains [12] The most general choice that preserves the Poincaré invariance is [13] p Algebraically, the same models can also be obtained by deforming the Heisenberg algebra as [12,14] The function φ 1 and φ 2 are arbitrary, but the Jacobi identity implies A different kind of generalization is obtained by choosing a curved spacetime (de Sitter) background, imposing nontrivial commutation relations between the momentum variables, with α proportional to the cosmological constant. This idea goes back to Yang [15], but was later elaborated in a more compelling way in [16]. We call this generalization Snyder-de Sitter (SdS) model. The other commutation relations are unchanged, except that now, by the Jacobi identities, This model depends on two invariant scales besides the speed of light, that are usually identified with the Planck mass and the cosmological constant, from which the alternative name name triply special relativity, proposed in [16] for this model. It must be noted that, in order to have real structure constants, both α and β must have the same sign. There are indications that the introduction of α might be necessary to obtain a well-behaved low-energy limit of quantum gravity theories [16]. An interesting property of the SdS model is its duality for the exchange αx ↔ βp [17], that realizes the Born reciprocity [18]. 
The phase space can be embedded in a 6D space as Alternatively, one can construct the SdS algebra directly from that of Snyder by the nonunitary transformation wherex µ ,p µ are generators of the Snyder algebra and λ a free parameter [19]. Phenomenological applications A wide literature considers the phenomenological implications of the nonrelativistic Snyder model, especially in connection with the generalized uncertainty principle (GUP) [20]. However, here we are interested in the relativistic case, which has obtained much less consideration. Some consequences are: -Deformed relativistic uncertainty relations: from the deformed Heisenberg algebra one gets The spatial components essentially coincide with those considered in GUP. -Modification of perihelion shift of planetary orbits [21]: provided that the model is applicable to macroscopic phenomena, on a Schwarzschild backgorund the perhihelion shift gets an additional contribution, δθ = δθ rel 1 + 5 3 βm 2 , where m is the mass of the planet. This correction clearly breaks the equivalence principle at Planck scales. -DSR-like effects [22]: no effects of time delay of cosmological photons occur, contrary to other models derived from noncommutative geometry [23], but some higher-order effects are still present. Hopf algebras In the study of noncommutative models an important tool is given by the Hopf algebra formalism [5], especially in relation with QFT. Since in noncommutative geometry spacetime coordinates are noncommuting operators, the composition of two plane waves e ip·x and e iq·x gives rise to nontrivial addition rules for the momenta, denoted by p ⊕ q, that are described by the coproduct structure of a Hopf algebra, ∆(p, q). The addition law is in general noncommutative. Moreover, the opposite of the momentum is determined by the antipode of the Hopf algebra, S(p), such that p ⊕ S(p) = S(p) ⊕ p = 0. The algebra associated to the Snyder model can be calculated (classically) using the geometric representation of the momentum space as a coset space mentioned above and calculating the action of the group multiplication on it [24]. Alternatively, one can use the algebraic formalism of realizations [12]: a realization of the noncommutative coordinates x µ is defined in terms of coordinates by assigning a function x µ (ξ µ , p µ ) that satisfies the Snyder commutation relations. The x µ and p µ are now interpreted as operators acting on function of x µ , as In particular, it is easy to show that the most general realization of the Snyder model is given by [14] x where the function χ(βp 2 ) is arbitrary and does not contribute to the commutators, but takes into account ambiguities arising from operator ordering of ξ µ and p µ . In general, it can be shown that for any noncommutative model, [25] e ik·x ⊲ e iq·ξ = e iP(k,q)·ξ+iQ(k,q) , where the functions P µ and Q can be deduced from the realization. Moreover, with K µ (k) ≡ P µ (k, 0) and J (k) ≡ Q(k, 0). The generalized addition of momenta is then given by with D µ (k, q) = P µ (K −1 (k), q), and the coproduct is simply Note that D µ is independent of χ. Moreover, the antipode S(p µ ), is −p µ for all (generalized) Snyder models. A fundamental property of the Snyder addition law is that it is nonassociative. Hence the algebra is noncoassociative, so strictly not a Hopf algebra. 
For the calculations, it is also useful to define a star product, that gives a representation of the product of functions of the noncommutative coordinates x in terms of a deformation of the product of functions of the commuting coordinates ξ. In particular, from the previous results one can calculate the star product of two plane waves: where We consider now a Hermitean realization of the Snyder commutation relations The request of Hermiticity will be important for the field theory. We get and hence e ik·ξ ⋆ e iq·ξ = e iD(k,q)·ξ (1 − β k·q) 5/2 . QFT in Snyder space Let us consider a QFT for a scalar field φ on a Snyder space. Usually, field theories in noncommutative spaces are constructed by continuing to Euclidean signature and writing the action in terms of the star product [4]. In fact, the action functional for a free massive real scalar field φ(x) can be defined through the star product as [14] S free [φ] = 1 2 The star product of two real scalar fields φ(ξ) and ψ(ξ) can be computed by expanding them in Fourier series, Then, using (26), But The two (1 + βk 2 ) 5/2 factors cancel and then [14], This is called cyclicity property, and occurs also in other noncommutative models; it follows that the free theory is identical to the commutative one, The propagator is therefore the standard one Notice that the cyclicity property is a consequence of our choice of a Hermitian representation for the operator x, and can be related to the choice of the correct measure in the curved momentum space. The interacting theory is much more difficult to investigate. Several problems arise: -The addition law of momenta is noncommutative and nonassociative, therefore one must define some ordering for the lines entering a vertex and then take an average. -The conservation law of momentum is deformed at vertices, so loop effects may lead to nonconservation of momentum in a propagator. For example, let us consider the simplest case, a φ 4 theory [10] The parentheses are necessary because the star product is nonassociative. Our definition fixes this ambiguity, but other choices are possible. With this choice, the 4-point vertex function turns out to be where D 4 (q 1 , q 2 , q 3 , q 4 ) = q 1 + D(q 2 , D(q 3 , q 4 )), g 3 (q 1 , q 2 , q 3 , q 4 ) = e iG(q2,D(q3,q4)) e iG(q3,q4) , and σ denotes all possible permutations of the momenta entering the vertex. With the expressions of the propagator and the vertex one can compute Feynman diagrams. For example, the one-loop two-point function depicted in fig. 1 in position space is given by [10]. However, the effects of momentum nonconservation cancel out. Attempting instead a calculation at all orders in β, not all diagrams can be explicitly computed [11]. It can be shown, however, that the divergences are suppressed with respect to the noncommutative theory and there are even indications that the integrals might be finite, at least for the interaction (34). If instead, renormalization is necessary, the phenomenon of UV/IR mixing could still be present, as in other noncommutative models [4]. We recall however that a model that avoids this problem in Moyal theory was proposed by Grosse and Wulkenhaar [26] (GW model). Its main characteristic is that, besides the kinetic and interaction terms, its action also contains a term proportional to φ x 2 φ. A similar mechanism can be recovered in Snyder theory by considering a curved background (SdS model) [27]. 
In fact, using the relation (14) between the SdS and Snyder algebra with λ = 0, and the realization (24) of the Snyder algebra, the action can be reduced, at zeroth order in α and β, to the form given in eq. (38). The action (38) is identical to that of the free GW model. One may therefore hope that also in this case the IR divergences are suppressed and one can obtain a renormalizable theory.

Conclusions We have reviewed the present status of research on the Snyder model, the earliest example of noncommutative geometry and the only one that preserves Lorentz invariance. In particular, we concentrated on the definition of a quantum field theory in accord with the standard noncommutative formalism, and on the issue of renormalizability. It turns out that, although an exact calculation has not been performed in full, there is good evidence of renormalizability and absence of UV/IR mixing, at least in the SdS model.
2019-10-20T16:18:21.000Z
2019-10-20T00:00:00.000
{ "year": 2019, "sha1": "82b4b7a8aca0183dd988f553cf73fe1099b0cdeb", "oa_license": null, "oa_url": "https://ujp.bitp.kiev.ua/index.php/ujp/article/download/2019506/1488", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "82b4b7a8aca0183dd988f553cf73fe1099b0cdeb", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
257748451
pes2o/s2orc
v3-fos-license
Identification and Characterization of Rhipicephalus microplus ATAQ Homolog from Haemaphysalis longicornis Ticks and Its Immunogenic Potential as an Anti-Tick Vaccine Candidate Molecule : Although vaccines are one of the environmentally friendly means to prevent the spread of ticks, there is currently no commercial vaccine effective against Haemaphysalis longicornis ticks. In this study, we identified, characterized, localized, and evaluated the expression patterns, and tested the immunogenic potential of a homologue of Rhipicephalus microplus ATAQ in H. longicornis (HlATAQ). HlATAQ was identified as a 654 amino acid-long protein present throughout the midgut and in Malpighian tubule cells and containing six full and one partial EGF-like domains. HlATAQ was genetically distant (homology < 50%) from previously reported ATAQ proteins and was expressed throughout tick life stages. Its expression steadily increased ( p < 0.001) during feeding, reached a peak, and then decreased slightly with engorgement. Silencing of HlATAQ did not result in a phenotype that was significantly different from the control ticks. However, H. longicornis female ticks fed on a rabbit immunized with recombinant HlATAQ showed significantly longer blood-feeding periods, higher body weight at engorgement, higher egg mass, and longer pre-oviposition and egg hatching periods than control ticks. These findings indicate that the ATAQ protein plays a role in the blood-feeding-related physiological processes in the midgut and Malpighian tubules and antibodies directed against it may affect these tissues and disrupt tick engorgement and oviposition. Introduction Haemaphysalis longicornis Neumann, 1901 (Acari: Ixodidae), commonly called the Asian longhorned tick, is a three-host tick of the Metastriata family. It exhibits parthenogenetic and bisexual phenotypes and feeds on a wide variety of hosts including wildlife, livestock, companion animals, and humans. Haemaphysalis longicornis populations are endemic in Australia, New Zealand, New Caledonia, Fiji, the Korean Peninsula, Northeastern China, Northeastern Russia, and Japan [1,2]. Parthenogenetic phenotypes were first reported in the USA in 2017 and are now established in several states [3]. In all these regions, the Asian longhorned tick is perceived as an economic, veterinary, and public health concern because of its ability to infest hosts in large numbers, cause damage to hides, and transmit several infectious disease agents to animals and humans [1,[4][5][6]. Female Japanese white rabbits (specific-pathogen-free animals, 18 weeks old purchased from Japan SLC) were used for the animal experiments. The rabbits were kept in a room with a temperature of 25 • C, humidity of 40%, and controlled lighting (period of light from 6:00 to 19:00 h). Throughout the course of the experiments, the rabbits had ad libitum access to water and commercial pellets (CR-3; CLEA Japan, Tokyo, Japan). Ethical Statement The experimental design and management of animals were approved by the Experimental Animal Committee of Obihiro University of Agriculture and Veterinary Medicine (Animal experiment approval numbers: 19-74, 19-224, 20-85, 21-40, 21-226). Identification of HlATAQ cDNA An Expressed Sequence Tags (EST) database which was previously constructed in our laboratory using the midgut cDNA library of semi-engorged female H. longicornis [22][23][24] was searched to identify EST sharing similarities with R. microplus Bm86. 
Sequences of identified EST were examined using the BLASTX sequence homology search of NCBI (National Center for Biotechnology Information, National Institute of Health, http://blast. ncbi.nlm.nih.gov/Blast.cgi, accessed on 20 June 2019). Upon identification of the EST sharing homology with the R. microplus ATAQ gene, the Escherichia coli clone containing the corresponding plasmid DNA was selected for further analysis. The selected E. coli clone was cultured overnight in a Luria-Bertani (LB) medium with 50 mg/mL ampicillin sodium at 37 • C and its pGCAP1 plasmids were extracted using the QIAGEN Plasmid mini kit (Qiagen, Hilden, Germany). The length of the target gene was first estimated through enzymatic digestion and then submitted to sequencing. In detail, the extracted plasmids were digested with EcoRI and NotI restriction enzymes (Nippon gene, Tokyo, Japan), and the product was electrophoresed on 1.5% agarose gel, stained in ethidium bromide solution (Nacalai Tesque, Kyoto, Japan), and visualized under UV transilluminator (Printgraph AE-6905CF; Atto, Tokyo, Japan). The sequence of the cDNA was determined by repeatedly sequencing the plasmids with pGCAP1 vector-specific primers and target gene-specific primers (Table 1). Afterward, the obtained overlapping partial sequences were aligned using GENETYX ® ver. 7 (GENETYX, Tokyo, Japan), and the full-length cDNA sequence was identified. Genespecific primers were designed using the primer walking method. All sequences were obtained by performing Sanger sequencing using the BigDye™ Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems, Foster City, CA, USA) and ABI Prism 3100 Genetic Analyzer (Applied Biosystems). To annotate the different sections of the full-length cDNA, the translate tool of the Expasy proteonomic server (http://www.expasy.ch/tools/pi_tool.html, accessed on 28 August 2019) was used to identify the Open Reading Frame (ORF), and the other sections were identified by referring to the reported features of cDNAs produced by the vector caping method [22,23]. The cDNA sequence was designated as the H. longicornis ATAQ gene (HlATAQ) and was registered in the NCBI GenBank database under the accession number: ON210133. The homology of HlATAQ to previously published sequences was assessed using the BLASTp algorithm of the NCBI GenBank database. All ATAQ sequences deposited in GenBank were aligned with HlATAQ and an identity/similarity matrix of the ATAQ protein family was generated using the SIAS tool (http://imed.med.ucm. es/Tools/sias.html, accessed on 1 March 2023). A phylogenetic tree was constructed using the HlATAQ sequence, the ATAQ sequences from other tick genera, and the reference sequence of the Bm86 homologue from H. longicornis (Hl86) deposited in the GenBank database. Sequence alignments were created and tested with the web-based program GUIDANCE 2 [25] and the phylogenetic tree was inferred by the maximum likelihood method using MEGA X [26]. Analysis of HlATAQ Expression by Real-Time PCR The expression patterns of the HlATAQ gene were investigated by performing real-time PCR analysis on total RNA extracted from different tick developmental stages and from adult tick midguts and Malpigian tubules. In detail, the transcription levels of HlATAQ were examined in eggs, unfed larvae, engorged larvae, unfed nymphs, engorged nymphs, unfed females, females at the slow feeding stage, females at the rapid feeding stage, and engorged females. 
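The ORF itself was located with the Expasy translate tool, as described above. For readers who prefer a scriptable equivalent, the hedged Biopython sketch below scans a cDNA for the longest ATG-initiated open reading frame on the forward strand and translates it; the input sequence is a short made-up example, not the 2371 bp HlATAQ cDNA, and the function name is only a convenience of the script.

```python
# Sketch of forward-strand ORF identification on a cDNA, analogous to what the
# Expasy translate tool reports. Requires Biopython; the sequence is a toy example.
from Bio.Seq import Seq

def longest_forward_orf(cdna: Seq) -> str:
    best = ""
    for frame in range(3):
        trimmed = cdna[frame: frame + 3 * ((len(cdna) - frame) // 3)]
        protein = str(trimmed.translate())            # standard codon table
        for fragment in protein.split("*"):           # split at stop codons
            start = fragment.find("M")
            if start != -1 and len(fragment[start:]) > len(best):
                best = fragment[start:]
    return best

cdna = Seq("GGATGCCTGAAATCGTTTTCCTGTGCGCCTGTCTGGCCTAAGG")
print(longest_forward_orf(cdna))  # -> MPEIVFLCACLA
```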
To assess the expression levels in the midgut and Malpighian tubules, the tissues of female ticks at different blood-feeding phases were collected. The midgut samples were collected on day 0 (unfed), day 2 (slow feeding stage), day 4 (rapid feeding stage), and at the engorgement stage. Meanwhile, Malpighian tubules were collected on day 0, day 2, and day 6, and after detachment of fully fed ticks. Total RNA was extracted from whole ticks and tick tissues according to the standard protocol of TRI reagent ® (Sigma-Aldrich, St. Louis, MO, USA). Total RNA samples were then subjected to DNase treatment using TURBO DNA-free™ (Thermo Fisher Scientific, Waltham, MA, USA), and concentrations and purity were determined with a Nanodrop TM 2000 Spectrophotometer (Thermo Fisher Scientific). Thereafter, cDNA was synthesized from DNA-free RNA using the ReverTra Ace ® qPCR RT Kit (Toyobo, Osaka, Japan) according to the manufacturer's directions. The cDNA was stored at −20 • C until use in real-time PCR. The real-time PCR assays were performed according to the standard protocol using a 7300 Real-Time PCR System (Applied Biosystems) and THUNDERBIRD ® SYBR ® qPCR Mix (Toyobo). The gene-specific primer sets employed are indicated in Table 1. To calculate the relative expression levels of HlATAQ, standard curves were generated using twofold serial dilutions of the cDNA of unfed female H. longicornis and cycling conditions comprising a 10 min heat denaturation and polymerase activation step at 95 • C followed by 40 cycles of a denaturation step at 95 • C for 15 s, and an annealing/extension step at 60 • C for 60 s. The specificity of PCR primers was confirmed by a melting curve analysis. Data were collected with the 7300 system SDS software version 1.4 for Windows (Applied Biosystems) and analyzed using Microsoft Excel [27]. The Haemaphysalis longicornis actin, glyceraldehyde-3-phosphate dehydrogenase gene (GAPDH), L23, and P0 were evaluated as candidate internal control genes. As a result of evaluating the expression stability of the internal control candidate genes in all samples, P0 was the most stable. The expression data were normalized using the H. longicornis ribosomal protein P0 (HlP0) (accession number EU048401) as the reference gene. RNA Interference RNA interference (RNAi) experiments were conducted to assess the effect of silencing HlATAQ on H. longicornis life cycle parameters. Double-stranded RNA (dsRNA) for RNAi was synthesized for two different regions of the gene. Briefly, the pGCAP1 plasmids con-taining the HlATAQ full-length cDNA were transformed into ECOS TM Competent E. coli DH5α (Nippon Gene) which were cultured overnight. Afterward, the plasmids were extracted and purified from cultured competent cells, following the protocol of NucleoSpin ® Plasmid EasyPure (Takara Bio, Shiga, Japan). Oligonucleotide primers including T7 promoter sequences at the 5 end were then used to PCR-amplify two regions of HlATAQ (HlATAQ-2 (574 bp) and HlATAQ-4 (567 bp)) from the cDNA plasmids (Table 1). The amplifications were performed in 50-µL PCR reaction mixtures containing 5.0 µL of 10× PCR Buffer for KOD-Plus-Neo, 3.0 µL of 25 mM MgSO 4 , 5.0 µL of 2 mM dNTPs, 1.5 µL each of 10 µM forward and reverse primers, 1.0 µL of KOD-Plus-Neo polymerase (1.0 U/µL) (Toyobo), and 0.2 µL of plasmid and sterile water. PCR conditions were set at 94 • C for 2 min, followed by 40 cycles of 98 • C for 10 s, 64 • C for 30 s, and 68 • C for 15 s. 
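Returning briefly to the real-time PCR analysis described above, relative expression from two-fold dilution standard curves amounts to fitting Ct against log2 of the relative input for both the target and the HlP0 reference, inverting the curves for each sample, and taking the ratio. The numerical sketch below only illustrates that arithmetic; the Ct values and sample labels are invented, not data from this study.

```python
# Sketch of standard-curve relative quantification with normalization to a
# reference gene (here labelled as HlP0). All Ct values are invented.
import numpy as np

def standard_curve(log2_dilutions, cts):
    """Fit Ct = slope * log2(relative input) + intercept."""
    slope, intercept = np.polyfit(log2_dilutions, cts, 1)
    return slope, intercept

def relative_quantity(ct, slope, intercept):
    """Invert the standard curve to obtain a quantity relative to the calibrator."""
    return 2.0 ** ((ct - intercept) / slope)

# Two-fold serial dilutions of the calibrator cDNA (1, 1/2, 1/4, 1/8, 1/16)
dil = np.log2([1, 0.5, 0.25, 0.125, 0.0625])
target_curve = standard_curve(dil, [22.1, 23.2, 24.1, 25.2, 26.1])   # HlATAQ
ref_curve = standard_curve(dil, [18.0, 19.0, 20.1, 21.0, 22.0])      # HlP0

target_q = relative_quantity(20.3, *target_curve)
ref_q = relative_quantity(18.4, *ref_curve)
print("normalized HlATAQ expression:", target_q / ref_q)
```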
The PCR products were subjected to gel electrophoresis, then extracted and purified using the NucleoSpin ® Gel and PCR Clean-up kit (Takara Bio), phenol/chloroform/isoamyl alcohol (25:24:1), and 3M sodium acetate and ethachinmate (Nippon Gene). The purified DNAs were used to synthesize two dsRNAs named HlATAQ-2 dsRNA and HlATAQ-4 dsRNA, with a T7 RiboMax™ Express RNAi System (Promega, Madison, WI, USA) according to the standard protocol. The dsRNA of the firefly luciferase (Luc) gene [28] was used as the negative control. The size and quality of dsRNAs were confirmed by electrophoresis on 1.5% agarose gel. The dsRNA aliquots were stored at −80 • C until use. HlATAQ-2 dsRNA, HlATAQ-4 dsRNA, or Luc dsRNA (1 µg/tick) was injected from the fourth coxae into the hemocoel of unfed female H. longicornis fixed on a glass slide with adhesive tape [29]. The injections were performed with 10 µL microcapillaries (Drummond Scientific Company, Broomall, PA, USA) drawn to fine-point needles by heating. After dsRNA injection, the ticks were left for 24 h in an incubator set at 25 • C and then simultaneously fed on the ears of Japanese white rabbits. The rabbits were monitored daily. The ticks were left to feed until engorgement and collected when they dropped from the host. Engorged ticks were put in an incubator set at 25 • C and allowed to lay eggs which were collected, transferred to individual containers on the 20th day after the start of oviposition, and incubated until hatching. For each of the injected ticks, the length of the blood feeding period (days), preoviposition period (days), the oviposition to egg hatching period (Egg hatching period; days), body weight at engorgement (mg), and egg mass at 20 days after oviposition (mg) were recorded. In addition, to evaluate the efficiency of HlATAQ knockdown, the expression level of the HlATAQ gene was examined by real-time PCR using total RNAs extracted from injected ticks collected on the 4th day of blood. The real-time PCR assays were carried out as described above. Localization of HlATAQ Protein by Immunohistochemistry HlATAQ localization was performed on midgut and Malpighian tubules removed from 5-day-fed female ticks dissected under a stereomicroscope (SZX16; Olympus, Tokyo, Japan). The collected midgut and Malpighian tubules were immersed and fixed in 4% paraformaldehyde at 4 • C overnight. The next day, the tissues were immersed in PBS and left to stand at 4 • C for 24 h. Then, to prevent the formation of ice crystals during freezing, the sample mixture was immersed in a 5% sucrose solution, 10% sucrose solution, 15% sucrose solution, and 20% sucrose solution every other day and allowed to stand at 4 • C for 24 h. The samples were embedded in the OCT compound (Sakura Finetech Japan, Tokyo, Japan). Frozen sections (10 µm thick) were prepared with a cryostat (CM3050 S; Leica, Wetzlar, Germany). The sections were air-dried for 1 h, washed with PBS, blocked with 5% skim milk (Wako, Osaka, Japan) (1 h at room temperature), and incubated with anti-HlATAQ peptide antibodies (Eurofin Genomics, Tokyo, Japan) as a primary antibody or serum of naïve mouse as control, both diluted 1: 100 with 5% skim milk solution. Alexa Fluor ® 594 goat anti-mouse IgG (Thermo Fisher Scientific) diluted 1: 1,000 was used as the secondary antibody. 
The sections were then mounted in ProLong ® Diamond Antifade Mountant with Microorganisms 2023, 11, 822 7 of 23 DAPI (Thermo Fisher Scientific), covered with a cover glass, left overnight in a dark place, and later observed under a fluorescence microscope (BZ-9000; KEYENCE, Osaka, Japan). Expression and Purification of Recombinant HlATAQ Two HlATAQ recombinant proteins, one covering the whole ORF (rHlATAQ) and the second covering a truncated portion of the ORF (rtHlATAQ), were produced in this study. To obtain rHlATAQ, HlATAQ was PCR-amplified using a set of forward and reverse primers (rHlATAQF, rHlATAQR; Table 1) to add restriction enzyme sites. After double digestion with XhoI and BamHI restriction enzymes, the PCR amplicon was inserted in a pET28a plasmid (Novagen, Madison, WI, USA). After confirmation of correct insertion of HlATAQ by sequencing analysis (primers set: T7 promoter, rHlATAQ middle, T7 terminator; Table 1), the constructed plasmid was transformed into the E. coli BL21 (DE3) pLySs strain (Thermo Fisher Scientific) for protein expression. To produce rtHlATAQ, the full amino acid sequence of HlATAQ was analyzed to identify potentially immunogenic peptides. The Phyre2 (Protein Homology/analogY Recognition Engine V 2.0; http://www.sbg.bio.ic.ac.uk/phyre2, accessed on 15 December 2021) and BepiPred-2.0 program (IEDB Analysis Resource) were used to analyze and predict the linear B-cell epitopes of the protein, and the similarity of candidate peptides to published ATAQ proteins was assessed. The identified immunogenic peptide was amplified from HlATAQ and inserted into a pCold-ProS2 plasmid (Takara Bio), after double digestion with XhoI and BamHI. Upon confirmation of the plasmid insert by sequence analysis, E. coli BL21(DE3) was used for the transformation of pCold-ProS2-HlATAQ plasmid and protein expression. The non-inserted plasmid served as a control (rProS2). For the sequence confirmation prior to protein expression, constructed plasmids were first multiplied by transformation into the E. coli DH5 alpha strain, then purified and subjected to sequencing using the BigDye™ Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems) and ABI Prism 3100 Genetic Analyzer (Applied Biosystems). Expression and purification of recombinant proteins were verified at each step, using sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). SDS-PAGE results were visualized by the Coomassie blue staining method. Vaccine Experiments A small-scale vaccination trial was performed to evaluate the effectiveness of HlATAQ as an anti-tick vaccine. Two Japanese white rabbits, one immunized with recombinant HlATAQ and the other immunized with rProS2 protein (control rabbit) were experimentally infested with H. longicornis, and the conditions of ticks and rabbits were monitored. Rabbit immunization, tick infestation, and evaluation of vaccination effect were carried out as follows. Upon purification, the recombinant proteins were confirmed by a western blotting analysis using anti-ProS2 mouse monoclonal IgG as the primary antibody and horseradish peroxidase-conjugated sheep anti-mouse IgG as a secondary antibody. Afterward, the proteins were dialyzed by Slide-A-Lyzer™ Dialysis Cassettes (20K MWCO) (Thermo Fisher Scientific), and protein concentrations were measured by a fluorometer, Qubit 3.0 (Thermo Fisher Scientific), prior to immunization. 
For immunization, each rabbit received an intradermal injection of 300 µg recombinant HlATAQ or rProS2 protein adjuvanted with TiterMax gold (Sigma-Aldrich) in a 1:1 ratio. On day 14 after the primary immunization, each rabbit was boosted with the same dose of the antigen emulsified with the adjuvant. Prior to immunization, after each vaccination and 14 days after the tick challenge, blood was collected from the ear vein of the rabbits. Sera were prepared and their reactivity against recombinant HlATAQ or rProS2 antigens was verified by SDS-PAGE. In addition, sera antibody titers were measured using an enzyme-linked immunosorbent assay (ELISA) with sheep anti-rabbit IgG (H+L) as a secondary antibody. Tick infestation was performed 16 days after the boost immunization. Each rabbit was infested with 30 female H. longicornis using the ear bag method [30]. Ticks were left to feed until engorgement. After dropping, each tick was weighed and then monitored for survival rate, egg laying, and subsequent hatching to larvae. To evaluate the effect of vaccination, the distribution of the blood-feeding period (days), body weight at engorgement (mg), pre-oviposition period (days), egg mass (mg), egg mass to body weight ratio, and egg hatching period (days) of ticks infested on recombinant HlATAQ-immunized rabbit were compared to the values obtained for the ticks infested on the control rabbit. Statistical Analysis The data obtained in the gene expression analysis, RNAi experiments, and vaccine experiments were analyzed using Microsoft Excel 2010, the statistical software R version 4.1.1 (R Foundation for Statistical Computing, Vienna, Austria), and SPSS Statistics for Windows, v26.0 (IBM Crop., Armonk, NY, USA). Differences between the two groups were analyzed using the Student's t-test, Welch's t-test, or the Mann-Whitney test according to the distribution of data. Comparisons of values obtained in more than two groups were performed using the one-way ANOVA test with a post-hoc Bonferroni for multiple pairwise comparisons or the Tukey Honestly Significant Difference test (Tukey's HSD). A p-value of less than 0.05 was considered statistically significant. Identification of HlATAQ Searching the H. longicornis midgut EST database, the ESTs of two cDNA clones (S02086B-17_A10 and S02086B-06_k17) with sequences sharing similarity with R. microplus Bm86 were identified. The EST of S02086B-17_A10 shared 86.36% identity with the Bm86 homologue previously characterized in H. longicornis (Hl 86) [23] and therefore was excluded from further analysis. The closest BLASTX match (64.56%) for the EST of S02086B-06_k17 was the ATAQ protein characterized in Dermacentor reticulatus ticks. Identities to the other ATAQ proteins ranged from 63.64% to 43.45%. S02086B-06_k17 was considered as the clone containing the BmATAQ homologue of H. longicornis and was sequenced to obtain the full-length cDNA sequence. The complete nucleotide sequence of the cDNA was 2371 bp including a cap, a 5 UTR, a protein coding sequence (ORF), a 3 UTR, and a poly (A) tail. The ORF was 1965 bp-long and ranged from an ATG start codon at position 118 to a TAG stop codon at position 2082 ( Figure 1). The nucleotide and encoded protein were designated as HlATAQ and HlATAQ, respectively. When the H. longicornis genome was searched using the Genome Blast feature of the NCBI, HlATAQ showed 96.79% identity (95% coverage) with two sequences (CM023449, JABSTR010000004) registered as H. 
longicornis isolate HaeL-2018 chromosome 2 whole genome shotgun sequences. It is therefore plausible that HlATAQ locates on chromosome 2 of H. longicornis. Figure 1. Nucleotide sequence of HlATAQ cDNA full length (2371 bp). The components of the cDNA were identified by referring to the features of cDNA produced using the vector capping method [22] and examining the amino acid translation. UTR: untranslated region. Characterization of HlATAQ Amino Acid Sequence The protein encoded by HlATAQ was composed of 654 amino acids, had a predicted molecular weight of 70.6 kDa, and an isoelectric point of 4.6. The bioinformatic analysis showed that HlATAQ had a signal peptide at the N-terminus, a transmembrane domain, and an intracellular domain at the C-terminus. The YFNATAQRCYH signature peptide of the ATAQ protein family was found at amino acid position 60-70, and seven EGF-like domains were identified. Five of the EGF-like domains completely fell into the pattern of the EGF-like region Cys-Xaa (3, 9)-Cys-Xaa (3, 6)-Cys-Xaa (8, 11)-Cys-Xaa (0, 1)-Cys-Xaa (5, 15)-Cys, where Xaa is any amino acid other than cysteine and six cysteines are present in one EGF-like domain [16]. One EGF-like domain had six cysteines; however, instead of the expected three to nine amino acids, only one amino acid separated the first two cysteines. The remaining EGF-like domain contained four cysteines, which might correspond to a partial EGF-like domain as reported in other Bm86 and BmATAQ homologous proteins [16,23,31]. Eight potential N-glycosylation sites and 12 potential O-glycosylation sites were also identified in the HlATAQ sequence. The components and the map of HlATAQ are presented in Figure 2A,B.
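The EGF-like consensus cited above, Cys-Xaa(3,9)-Cys-Xaa(3,6)-Cys-Xaa(8,11)-Cys-Xaa(0,1)-Cys-Xaa(5,15)-Cys, maps naturally onto a regular expression. The sketch below is only an illustration of how such a pattern could be scanned against a protein sequence; the example sequence is a made-up placeholder, not the HlATAQ sequence, and this is not the authors' actual analysis pipeline.

```python
# Scan a protein sequence for the Bm86/ATAQ EGF-like consensus
# C-X(3,9)-C-X(3,6)-C-X(8,11)-C-X(0,1)-C-X(5,15)-C, where X is any residue other than Cys.
import re

EGF_CONSENSUS = re.compile(
    r"C[^C]{3,9}C[^C]{3,6}C[^C]{8,11}C[^C]{0,1}C[^C]{5,15}C"
)

def find_egf_like_domains(protein_seq):
    # Non-overlapping scan; returns 1-based start, end, and the matched segment.
    return [(m.start() + 1, m.end(), m.group()) for m in EGF_CONSENSUS.finditer(protein_seq)]

seq = "MKTACAAAACDDDDCEEEEEEEEECFCGGGGGGGC"   # toy sequence containing one consensus match
for start, end, dom in find_egf_like_domains(seq):
    print(f"EGF-like domain at {start}-{end}: {dom}")
```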
Comparisons of the amino acid sequences of HlATAQ and ATAQ proteins from other tick species showed that all have six full and one partial EGF-like domains. HlATAQ had the highest molecular weight and number of glycosylation sites. Interestingly, except for H. elliptica ATAQ (HeATAQ) and A. variegatum ATAQ (AvATAQ), which had a GPI anchor, all ATAQ proteins had a transmembrane domain (Table 2). A comparison of the protein structure of HlATAQ with Bm86 and Hl86 showed that both HlATAQ and Hl86 had seven EGF-like domains while Bm86 had nine. HlATAQ had a similar molecular weight, but more glycosylation sites than Bm86 (Table 2). BLAST and Phylogenetic Analyses of HlATAQ Amino Acid Sequence The BLASTp analysis of HlATAQ did not return significant hits other than ATAQ and Bm86 homologous sequences. However, HlATAQ showed a low degree (<50%) of homology to previously known ATAQ proteins. The closest BLASTp match (47.79% identity; 99% coverage) of HlATAQ was the D. reticulatus ATAQ protein. The analysis of the proximity among tick genera showed that H. longicornis ATAQ shared higher identity/similarity with Dermacentor spp. and H. m. marginatum than other species including H. elliptica (Table 3). In agreement, an amino acid sequence-based phylogenetic tree located HlATAQ on a divergent branch between the Dermacentor spp. clade and the H. elliptica/A. variegatum clade (Figure 3A).
Conversely, in the tree inferred from the genes, HlATAQ formed a clade with H. elliptica and A. variegatum sequences (Figure 3B). The ATAQ sequences from the Rhipicephalus species, however, were consistently located in one clade, genetically distant from the H. longicornis sequence. The alignment of the deduced amino acid sequences of the ATAQ and Bm86 homologue sequences included in the phylogenetic analysis is shown in the Supplementary Material Table S1. Expression Patterns of HlATAQ in Developmental Stages, Feeding Phases, and Tissues The transcription levels of HlATAQ mRNA were measured in unfed and feeding or engorged specimens of all development stages. In addition, the variation of expression throughout feeding phases was examined in the midgut and Malpighian tissues, which are the reported main expression sites of this protein [16]. HlATAQ mRNA was expressed constantly throughout the developmental stages of H. longicornis (Figure 4A). The highest expression level was recorded in engorged larvae, while the lowest was found in eggs. Notably, in larvae, nymphs as well as female ticks, blood-feeding significantly upregulated the expression of HlATAQ. However, the metamorphosis of engorged larvae to nymphs was followed by a significant decrease in HlATAQ transcripts. The analysis of each feeding phase in female ticks showed that HlATAQ expression steadily increased (p < 0.001) during the course of feeding, reached a peak, then decreased slightly with engorgement. In the tick tissue analysis, HlATAQ mRNA transcripts were detected in both midgut and Malpighian tubules throughout the blood-feeding stages (Figure 4B). Expression levels in both tissues significantly peaked from the unfed period to the slow feeding period, and then decreased at full engorgement (p < 0.001). However, the suppression of HlATAQ gene expression by RNAi did not affect the tick phenotype. The recorded lengths of the blood-feeding period, pre-oviposition period, and oviposition to egg hatching period, the body weight at engorgement, and egg mass did not show any statistically significant differences between the RNAi and control ticks (Table 4).
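Expression comparisons such as those reported above rest on normalized transcript levels. As a purely illustrative sketch (it is an assumption here that a reference-gene-normalized, 2^-ΔΔCt-style calculation was used; the actual quantification procedure is described in the methods of the original paper), relative expression could be computed as follows, with invented Ct values.

```python
# Illustrative relative-expression calculation (2^-ΔΔCt style).
# Ct values below are invented placeholders, not data from the study.
import statistics as st

def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    """Fold change of the target gene versus a calibrator sample (e.g. unfed ticks)."""
    d_ct_sample = st.mean(ct_target) - st.mean(ct_reference)
    d_ct_calibrator = st.mean(ct_target_cal) - st.mean(ct_reference_cal)
    return 2 ** -(d_ct_sample - d_ct_calibrator)

# Hypothetical Ct values: target and reference gene in fed midgut vs an unfed calibrator.
fold = relative_expression(
    ct_target=[22.1, 22.4, 21.9], ct_reference=[18.0, 18.2, 17.9],
    ct_target_cal=[25.6, 25.9, 25.4], ct_reference_cal=[18.1, 18.0, 18.3],
)
print(f"fold change vs unfed calibrator: {fold:.1f}")
```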
Detection of HlATAQ Protein in Tick Tissues Immunostaining with the anti-HlATAQ peptide serum was used to localize HlATAQ in the midgut and Malpighian tubules of female ticks. The tissues of the 5-day-fed ticks showed clearly higher fluorescence with the anti-HlATAQ serum than with the naïve mouse serum (Figure 6). Strong reactions to anti-HlATAQ serum were found throughout the cells of the midgut (Figure 6A) and Malpighian tubules (Figure 6B). In the Malpighian tubules, HlATAQ fluorescence intensity was particularly strong in the basal membrane (Figure 6B). Expression, Purification, and Verification of Recombinant Proteins The Phyre2 prediction software revealed that the folded structure of ATAQ is highly similar (78% coverage with 99% confidence) to apolipoprotein E receptor 2 (apoER2), a low-density lipoprotein receptor. The N-terminal region of apoER2 reportedly contains ligand-binding domains [32,33] and therefore, we hypothesized that the N-terminal region of HlATAQ would be critical for the function of the protein. Further analysis of the seven EGF-like domains located in the N-terminal region of the HlATAQ protein, with the BepiPred-2.0 program, showed the presence of many B-cell epitopes throughout the domains. Therefore, along with the whole ORF of the protein, the N-terminal region containing EGF-like domains one to five (amino acids 29-249; length: 221 amino acids) was selected for the production of recombinant HlATAQ protein. The truncated protein shared 47-75% similarity with previously published ATAQ proteins and was PCR amplified from HlATAQ cDNA with the primer pairs rtHlATAQF and rtHlATAQR (Table 1). Both the rHlATAQ and the rtHlATAQ plasmids were successfully transformed in E. coli BL21 (DE3). However, the expression of rHlATAQ was not successful and therefore only rtHlATAQ was expressed. After the removal of the His and ProS2 tags, the aggregation of rtHlATAQ protein was high.
Hence, to keep its water solubility, rtHlATAQ protein without removal of the His-ProS2 tags was purified along with recombinant His-ProS2 protein as a control. The expected molecular weights of the purified rtHlATAQ and rProS2 proteins were 48.3 kDa and 26.0 kDa, respectively (Figure S1A,B). Purified rtHlATAQ and rProS2 proteins with the expected molecular weights were detected by anti-rProS2 mouse monoclonal IgG (Figure S1C). Afterward, the purified proteins were dialyzed (Figure S1D) and used for rabbit immunization. Immune Response of Rabbit to Vaccination with Recombinant Proteins The immune responses generated by rtHlATAQ and rProS2 vaccinations were verified by the identification of purified rtHlATAQ and rProS2 proteins with expected sizes using rabbit sera collected 11 days after the boost vaccinations (Figure S2). Since rtHlATAQ was purified without the removal of the His-ProS2 tags, we confirmed the specificity of the rabbit anti-rtHlATAQ antibodies. The analysis of the reactivity against rtHlATAQ and rProS2 of sera collected before immunization, 11 days after primary vaccination, 11 days after boost vaccination, and 14 days after tick challenge showed that the rtHlATAQ-immunized rabbit had antibodies specific to the protein. The reactivity of rtHlATAQ-immunized rabbit sera against the rtHlATAQ antigen was significantly greater (p < 0.01) than that of rProS2-rabbit sera against rtHlATAQ and rProS2 antigens (Figure 7). In Figure 7, antibody titers of pre-immune sera and of sera collected 11 days after primary vaccination, 11 days after boost vaccination, and 14 days after tick infestation are shown at various dilutions (1:100, 1:1000, 1:10,000). A comparison of OD values was performed using the one-way ANOVA test with a post-hoc Tukey Honestly Significant Difference test. In each comparison, red rectangles indicate the highest OD values of reactivity against rtHlATAQ or rProS2 (20 ng each well) for sera diluted at the same ratio. A significant difference was considered when the p-value was less than 0.05. Figure 8 shows the effects of rHlATAQ vaccination. Compared to ticks fed on the rProS2-immunized rabbit (control), the ticks fed on the rtHlATAQ-immunized rabbit showed a significantly longer blood-feeding period (Mann-Whitney U-test: u = 169, z = 4.14703, p < 0.01) and higher body weight at engorgement (Mann-Whitney U-test: u = 234.5, z = 3.17865, p < 0.01). Similarly, the pre-oviposition periods of ticks detached from the rtHlATAQ-vaccinated rabbit tended to be significantly longer (Mann-Whitney U-test: u = 379.5, z = 4.14703, p < 0.01). The eggs laid by ticks fed on the rtHlATAQ-immunized rabbit had a higher egg mass (Tukey HSD test: Q = 3.8122, p < 0.01) and took a significantly longer time to hatch than those from the control group (Tukey HSD test: Q = 4.2538, p < 0.01). Egg mass to body weight ratio values were not significantly different between the two tick groups. The blood-feeding period, body weight at engorgement, pre-oviposition period, egg mass, egg mass to body weight ratio, and egg hatching period of the two tick groups are detailed in Table S2.
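The vaccine-effect comparisons above report Mann-Whitney U statistics together with normal-approximation z values. The sketch below shows, with made-up group values, how such a U statistic and its z approximation can be obtained in Python/SciPy; it is not a reproduction of the study data, of the authors' software, or of the exact reported output.

```python
# Mann-Whitney U with a normal-approximation z, as reported for the vaccine trial.
import math
from scipy import stats

def mann_whitney_with_z(group_a, group_b):
    n1, n2 = len(group_a), len(group_b)
    res = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
    u = res.statistic
    mu_u = n1 * n2 / 2.0
    sigma_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)   # normal approximation, ignoring ties
    z = (u - mu_u) / sigma_u
    return u, z, res.pvalue

# Hypothetical blood-feeding periods (days) for ticks fed on vaccinated vs control rabbit.
vaccinated = [11, 12, 12, 13, 14, 15, 15, 16]
control    = [ 8,  9,  9, 10, 10, 11, 11, 12]
print(mann_whitney_with_z(vaccinated, control))
```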
A comparison between the two groups for each parameter was performed using the Mann-Whitney test or the Tukey Honestly Significant Difference test. A p-value less than 0.05 was considered as a significant difference. Discussion In this study, ATAQ, a tick protein structurally related to Bm86 and a vaccine candidate, was investigated in H. longicornis. Currently, characteristics of ATAQ proteins are available for tick species from the Rhipicephalus (5 species), Hyalomma (one species), Dermacentor (2 species), Amblyomma (one species), and Haemaphysalis (one species) genera [16]. Two proteins (XM 037660470 and XP037574698), predicted from the whole genome sequencing of R. sanguineus (LOC119393444) and D. silvarum (LOC119456952) and registered in GenBank as "glycoprotein antigen BM86-like", share a high sequence identity with ATAQ proteins. The HlATAQ in this study is therefore the second identification in the Haemaphysalis genus and the 13th tick species in which the protein is confirmed.
Haemaphysalis longicornis ATAQ shared most of the structural features of previously reported ATAQ proteins and appeared to be the longest of all. Previous studies [16,20] showed that except for one of the two R. appendiculatus ATAQ (RaATAQ-2), all Rhipicephalus spp. ATAQ had the same length, whereas, variation in length was observed in other genera. A similar variation in protein length had also been observed in Bm86 homologues [16] and may relate to the interspecies and inter-genera genetic variations of the ATAQ and Bm86 protein family. It is one of the reported causes of variation in the efficacy of Bm86-and ATAQ-based vaccines [20,34]. Another interesting finding was that, instead of a GPI anchor similar to H. elliptica ATAQ, HlATAQ had a TM anchor similar to Rhipicephalus, Hyalomma, and Dermacentor spp. Likewise, among Bm86 homologues, some proteins have GPI whereas others have a TM anchor [16]. Other protein families such as the cadherin superfamily [35] and the carcinoembryonic antigen (CAE) gene family also have GPI-and TM-anchored members. The fact that mutations in the transmembrane domain of the CAE family resulted in a shift from transmembrane-to GPI-anchorage [36] suggests that the TM found in H. longicornis ATAQ could be the ancestral domain, from which the GPI anchor of H. elliptica ATAQ could have derived following mutation events. GPI anchors of glycoproteins are rich in sphingolipids and cholesterol. They reportedly do not have uniform physical properties, interact with transmembrane proteins, and are involved in the transport of anchored proteins to the apical surface of epithelial cells [37,38]. Although it is unclear how the functions of ATAQ proteins differ depending on the type of anchor, a previous study suggests that replacing the GPI anchor with a transmembrane domain does not affect the function of certain proteins. In addition, some GPI-anchored proteins occur naturally in both GPI-anchored and transmembrane isoforms, which do not show functional dissimilarities. Meanwhile, the function of some GPI-anchored proteins is abolished when the GPI-anchor is replaced by a transmembrane protein domain [39]. Further studies on the function of ATAQ and Bm86 proteins and a comparison of GPI-and TM-anchored protein efficiency will certainly help in understanding the importance of the anchor. The multiple EGF-like domain is a key feature of the Bm86 and ATAQ protein families. The consensus sequence of the full EGF-like domain of the Bm86 family was first defined as Cys-Xaa (4, 8)-Cys-Xaa(3, 6)-Cys-Xaa(8, 11)-Cys-Xaa(0, 1)-Cys-Xaa(5, 15)-Cys based on the sequence of Bm86 from R. microplus [31]. Following the characterization of Bm86 and ATAQ proteins from several other tick species, the number of amino acids separating the first two Cys was updated to Xaa (3,9) [16]. Based on the EGF-like domains of HlATAQ, we suggest the number of amino acids separating the first two Cys is updated to Xaa (1,9). Noteworthy, improvement of the consensus formula always occurred in the same section of the protein. Whether this variation has an impact on the protein function remains unclear and could be an interesting topic for further investigation. The finding that the HlATAQ gene is expressed at all developmental stages is in accordance with the data of previously identified ATAQ proteins [16]. However, the expression patterns were different. Compared to BmATAQ, the HlATAQ transcript level was more variable throughout the developmental stages. 
Meanwhile, opposite to RaATAQ, the expression level of HlATAQ was lower in unfed than feeding ticks [16]. These differences might indicate tick species-based variations of the function of ATAQ. To our knowledge, this study is the first evaluation of ATAQ protein transcript levels in tissues at different feeding phases. The expression of the HlATAQ gene in tissues was performed only in the midgut and Malpighian tubule for two reasons: (1) previously identified ATAQ proteins were exclusively expressed in these tissues; (2) since the midgut and the Malpighian tubule are, respectively, the organ in charge of blood digestion and osmotic pressure regulation [40], ATAQ expression levels in these tissues could be useful to evaluate the potential effect of an ATAQ-based vaccine. The observed variation of expression level in the midgut and Malpighian tubule indicates a relationship between blood-feeding and HlATAQ expression kinetics. We, therefore, inferred that disturbing the HlATAQ expression would affect tick blood meal and the related life parameters. Our immunostaining experiments provide clarification on the localization of ATAQ in the midgut and Malpighian tubules. Extracellular proteins with EGF-like domains are generally involved in blood coagulation and complement cascades or are associated with the regulation of cell growth [31]. Bm86 reportedly resembles the proteins involved in cell growth and is expressed by stem cells and/or prodigest cells of the midgut epithelium [41,42]. ATAQ and Bm86 structural similarities may explain the localization of HlATAQ throughout the midgut cell. Meanwhile, because Malpighian tubules are derived from the endoderm and are thought to have originated as an extension of the intestine [43], their histological similarity with the midgut may explain the observed localization of HlATAQ. The high fluorescence intensity in Malpighian tubules' basal membrane supports the expression of ATAQ protein by stem cells as hypothesized by Nijhof et al. [16]. Bm86 proteins from Ixodids were reportedly exclusively expressed in the midgut, but the argasid tick Bm86 homologue (Os86) could be identified in both midgut and Malpighian tubules. Although currently characterized ATAQ proteins were found in both midgut and Malpighian tubules, studies on other tick tissues and ATAQ proteins from other tick species are required for a thorough understanding of the distribution of these proteins. The structure and expression patterns of ATAQ protein support a role in blood feeding. Therefore, the fact that HlATAQ-silenced ticks did not phenotypically differ from control ticks was surprising. It was, however, expected because, in the previous study, silencing of R. evertsi evertsi ATAQ did not result in a significantly different phenotype [16]. Similarly, no significant effect was obtained in RNAi of Bm86 [44], Ree86 alone, or Ree86 and ReeATAQ [16]. In contrast, silencing of Hl86 resulted in ticks characterized by significantly reduced weight at engorgement, but a similar blood feeding period, egg weight, or egg hatching ability with the control ticks [23]. Though it is unclear why the effects of silencing Hl86 and HlATAQ differed, these results may indicate that Bm86/ATAQ are components of a group of proteins with similar functions for which expression can be compensated in case of downregulation as hypothesized for some salivary gland genes [45]. 
The results of rabbit immunization with rtHlATAQ protein (domains 1-5), in agreement with the protective effect reported for the synthetic Rhipicephalus ATAQ peptide [19], support the potential of the ATAQ protein as an anti-tick vaccine. The physiological functions of ATAQ EGF-like domains are not well understood. However, it has been reported that human EGF-like domains have different functional roles and that derived forms exist [46]. ApoER2, for example, is known for its role in platelet aggregation and hemostasis, with its EGF domains serving as ligand-binding domains [47,48]. Meanwhile, Bm86 has been described as a cell membrane-bound ligand potentially transmitting positional or cell-type information to adjacent cells and influencing the cell lineage of those adjacent cells [31]. ATAQ proteins might have similar functions. Hence, in this study, we targeted the EGF-like domains, the potential ligand-binding region of HlATAQ, with the hope that antibodies directed against it would affect protein functions, leading to a vaccine effect. Immunological experiments using the recombinant EGF-like domain of the Bm86 ortholog of Ixodes scapularis (Is86), showed an effect on I. scapularis blood sucking and molting [49]. It is therefore reasonable to presume that antibodies targeting the EGF-like domain of HlATAQ could have generated the physiological effect observed on the ticks. Our hypothesis is that like the Bm86 vaccine destroys the midgut epithelium [31], anti-ATAQ antibodies may reach the midgut from the host blood and not only destroy the midgut but also damage the Malpighian tubule via the hemolymph. Such actions would have disturbed the blood-feeding process resulting in a longer feeding time to compensate for the defect, with consecutive effects on the weight at engorgement, delays in egg laying, higher egg mass, and delayed egg hatching. Noteworthy, contrasting with the effect of recombinant ATAQ, vaccination with recombinant Bm86 led to a reduction in the number of engorged ticks, lower engorgement weights, and a decrease in the number of oviposited eggs [16]. The results reported here give a hint at the immunogenic potential of HlATAQ, but are not fully conclusive. The experiments were performed on only one rabbit, and there were no repeats. Previous tick vaccine studies indicated protein genetic diversity [20,50] and the balance between host complement system-IgG antibodies' collaborative action and tick protease-protease inhibitors [51] as factors determining the protective effect of Bm86/ATAQ-based vaccines. Thus, more elaborated and comprehensive studies covering HlATAQ genetic diversity, investigating several vaccination designs (single or several recombinant proteins, type of adjuvant, number of boost vaccinations), and the larger number of vaccinated animals are needed for the clarification of ATAQ potential as a commercial vaccine. Conclusions To sum up, the full-length cDNA sequence of the ATAQ gene of H. longicornis was identified using the midgut cDNA library of semi-engorged female ticks. The corresponding protein was localized, functionally characterized, and then evaluated in a vaccine experiment. HlATAQ, which shared the same structure with previously identified ATAQ proteins, was located in the midgut digestive cells and Malpighian tubule cells and expressed at all life stages. Its expression seemed upregulated by blood-feeding; however, gene silencing did not affect tick phenotype. 
Ticks fed on a rabbit immunized with recombinant HlATAQ tended to have an extension of the blood feeding period, accompanied by changes in pre-oviposition and egg hatching periods. Based on these findings, ATAQ is considered to be an anti-tick vaccine candidate which deserves future studies to discover its role in tick feeding and potential use for tick control. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/microorganisms11040822/s1, Figure S1: Expression, purification and verification of immune response to recombinant proteins; Figure S2: Confirmation of the specificity of the reactivity of rtHlATAQ-and rProS2-immunized rabbit sera against rtHlATAQ and rProS2 proteins; Table S1: Sequence alignment of the deduced amino acid sequences of the ATAQ and Bm86 homologues included in the phylogenetic analysis; Table S2: Effects of rHlATAQ vaccination on H. longicornis infestation in rabbits.
2023-04-30T05:46:15.150Z
0001-01-01T00:00:00.000
{ "year": 2023, "sha1": "c1554c5206182dd082a3f3d121e92fd4b961a3c2", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-2607/11/4/822/pdf?version=1679562423", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c1554c5206182dd082a3f3d121e92fd4b961a3c2", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
119296706
pes2o/s2orc
v3-fos-license
Gravitational Baryogenesis in Anisotropic Universe The interaction between the Ricci scalar curvature and the baryon number current dynamically breaks CPT in an expanding universe and leads to baryon asymmetry. Using this kind of interaction, we study gravitational baryogenesis in the Bianchi type I universe. We find the effect of the anisotropy of the universe on the baryon asymmetry for the case in which the equation of state parameter, ω, is time dependent. Introduction The theoretical prediction of antimatter is one of the most impressive discoveries of quantum field theory, made by Paul Dirac about 80 years ago [1]. Some scientists thought that "maybe there exists a completely new universe made of antimatter" because they believed that there is a symmetry between matter and antimatter. Our present point of view on the symmetry between matter and antimatter is very different, even opposite. The absence of γ-ray emission from matter-antimatter annihilation [2], the theory of Big Bang nucleosynthesis [3], and the measurements of the cosmic microwave background [4] indicate that there is more matter than antimatter in the universe. So we are sure that antimatter exists, but we believe that there is an asymmetry between matter and antimatter. The origin of the difference between the number density of baryons and anti-baryons is still an open problem in particle physics and cosmology. Observational results yield that the ratio of the baryon number to entropy density is approximately n_b/s ∼ 10^{-10}. The standard mechanism of baryogenesis is based on the following three principles, as formulated in 1967 by A. D. Sakharov [5]: 1. Non-conservation of baryons. It is predicted theoretically by grand unification [6] and even by the standard electroweak theory [7]. 2. Breaking of the symmetry between particles and antiparticles, i.e. C and CP. CP violation was observed in experiment in 1964 [8]. Breaking of C-invariance was found earlier, immediately after the discovery of parity non-conservation [9]. 3. Deviation from thermal equilibrium. This is fulfilled in a nonstationary, expanding universe for massive particles or due to possible first-order phase transitions. Similarly, in [10], a mechanism for baryon asymmetry was proposed. The authors introduced an interaction between the Ricci scalar curvature and any current that leads to a net B − L charge in equilibrium (L is lepton number), which dynamically violates CPT symmetry in an expanding Friedmann Robertson Walker (FRW) universe. The proposed interaction shifts the energy of a baryon relative to an antibaryon, giving rise to a non-zero baryon number density in thermal equilibrium. The authors of [11] studied the mechanism of baryon asymmetry proposed in [10] for the case in which the equation of state parameter of the universe, ω, is time dependent. In this paper, we study gravitational baryogenesis in the Bianchi type I universe. We assume the universe is filled with two components of perfect fluid and study this model for different cases. We will study the effect of anisotropy and of the interaction between two different components of perfect fluid on Ṙ and, consequently, on gravitational baryogenesis. Preliminary The gravitational field in our model is given by a Bianchi type I metric, where the metric functions A, B, C are functions of time t only.
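The line element itself is not reproduced in this extracted text. For orientation, the standard form of the Bianchi type I metric and the usual expansion quantities are recalled below; the authors' exact sign conventions and equation numbering (Eqs. (1)-(4)) may differ, so this should be read as a reminder of the textbook definitions rather than a quotation of the paper.

```latex
% Standard Bianchi type I line element and expansion quantities (textbook conventions).
ds^{2} = dt^{2} - A^{2}(t)\,dx^{2} - B^{2}(t)\,dy^{2} - C^{2}(t)\,dz^{2},
\qquad a = (ABC)^{1/3},
\qquad H = \frac{\dot a}{a}
          = \frac{1}{3}\!\left(\frac{\dot A}{A} + \frac{\dot B}{B} + \frac{\dot C}{C}\right),
\qquad \theta = 3H,
\qquad \sigma^{2} = \tfrac{1}{2}\,\sigma_{ij}\sigma^{ij}.
```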
We assume the matter is a perfect fluid; the energy-momentum tensor is then given by Eq. (2), where u^ν is the four-velocity, satisfying the usual normalization condition, ρ is the total energy density of the perfect fluid, and p is the corresponding pressure. p and ρ are related by an equation of state, p = ωρ. One can obtain the Einstein field equations for the BI space-time, Eqs. (5)-(8), where G is the Newtonian gravitational constant and an over-dot means differentiation with respect to t. Using Eqs. (5)-(8), we can obtain the Hubble parameter, where a = (ABC)^{1/3} is the scale factor, σ_{ij} is the shear tensor, which describes the rate of distortion of the matter flow, and θ = u^j_{;j} is the scalar expansion. The equation of state parameter, ω, can be expressed in terms of the Hubble parameter and the shear tensor. We obtain the Ricci scalar, and by differentiating R and using Eqs. (11) and (12), it is shown that Ṙ takes the form of Eq. (15), where M_p ≃ 1.22 × 10^{19} GeV is the Planck mass. If the space-time is isotropic, σ = 0, Eq. (15) reduces to the result of [11]. Also, if ω̇ = 0, only the first term remains, and it is zero at ω = 1/3 and at ω = −1. Therefore, by taking ω̇ into account, we have baryon asymmetry at ω = 1/3 and at ω = −1, because Ṙ ≠ 0. 3 Perfect Fluid with Interaction In the following we consider a universe dominated by two interacting perfect fluids with equations of state p_d = γ_d ρ_d and p_m = γ_m ρ_m. We assume that the conservation relations of energy for these two components are given by Eqs. (18) and (19), where Γ_1 ρ_d + Γ_2 ρ_m is the source term of the interaction and Γ_1 and Γ_2 may be time dependent [19], [20], [21], [22], [23]. Although Eqs. (18) and (19) do not individually satisfy the conservation equation, we have Eq. (20), where ρ = ρ_d + ρ_m, p = p_d + p_m, and r = ρ_m/ρ_d. Using Eqs. (16)-(20), we obtain the expression for ω̇ given in Eq. (22). From the third term of Eq. (22), it is seen that even for constant equations of state of the components, ω varies with time. This is because the universe is supposed to be filled with components with different equation of state parameters. Substituting Eq. (22) into Eq. (15) gives Eq. (23). We want to check this result for some specific components. Radiation Dominant In this subsection we suppose that one of the fluid components corresponds to radiation. To do this, we take γ_m = 1/3, so that γ̇_m = 0, and therefore Eq. (23) reduces accordingly. Choosing Γ_1 = λ_1 θ and Γ_2 = λ_2 θ, with λ_1, λ_2 ∈ ℝ [20], [21], [22], [23], one arrives at Eq. (25). We assume that the other component filling the universe is a massive scalar field of mass m, with a time-dependent equation of state parameter, interacting with radiation. The time-dependent equation of state parameter of the massive scalar field in an anisotropic universe is defined with V(φ) = (1/2)m²φ². The interaction between the scalar field and radiation is given by Eqs. (18) and (19) with γ_m = 1/3. By defining z = (1 − γ_d)ρ_d/2 and using ż = mρ_d(1 − γ_d²)^{1/2}, which was defined in [24], we can obtain γ̇_d (Eq. (27)). Finally, by substituting Eq. (27) into Eq. (25), we get the corresponding expression for Ṙ. For scalar field dominance, which is equivalent to r → 0, we obtain the corresponding limit. For the case that φ̇² ≫ m²φ² (φ̇² ≪ m²φ²) we have γ_d = 1 (−1), so that γ̇_d = 0, and therefore we obtain Eq. (31). Eq. (31) shows that if there is no interaction source term with dark matter, i.e. Γ_1 = 0, then Ṙ_φ ≃ 0, and in this case there is no gravitational source for asymmetry in baryon number. On the other hand, for radiation dominance, i.e., r → ∞, we obtain Eq. (32); we see that Ṙ = 0 if λ_2 = 0. It is seen that for an isotropic universe, σ = 0, Eq. (32) reduces to the result obtained in [11]. We can obtain ρ_R as a function of the equilibrium temperature, T.
The radiation energy density is related to T as ρ_R = K_R T⁴ [17], [18], where K_R is proportional to the total number of effective degrees of freedom. So we can express Ṙ in terms of the temperature (Eq. (34)). Gravitational Baryogenesis in Anisotropic Universe The authors of [10] introduced a mechanism to generate baryon asymmetry. Their mechanism is based on an interaction between the derivative of the Ricci scalar and the baryon number current, J^μ, where M_* is a cut-off characterizing the energy scale of the effective theory and ǫ = ±1. This interaction violates CP. The baryon number density in thermal equilibrium has been worked out in detail in [10]. It leads to an expression in which μ_B is a chemical potential, with μ_B = −μ_B̄ = −ǫṘ/M_*², and g_b ≃ 1 is the number of internal degrees of freedom of baryons. According to [17], the entropy density of the universe is given by S = (2π²/45)g_s T³, where g_s ≃ 106. The ratio n_b/S in the limit T ≫ m_b and T ≫ μ_b is given by Eq. (37), where T_D is called the decoupling temperature; in the expanding universe the baryon number violation decouples at the temperature T_D. Therefore the baryon asymmetry in terms of temperature can be determined from Eqs. (34) and (37), where α = M_*/M_p. Baryogenesis without Interaction In this subsection we assume γ_m = γ_R = 1/3. We assume γ_d > 1/3, which corresponds to a non-thermal component that decreases more rapidly than radiation [10]. If there is no interaction between these two components of the universe, i.e. Γ_1 = Γ_2 = 0, then we have Eqs. (39) and (40). In this case we can arrive at an expression for the evolution of r; it is clearly seen that for γ_d > 1/3, ṙ > 0. This means that ρ_d decreases faster than ρ_R. From Eq. (40) we can obtain ρ_R ∝ (ABC)^{−4/3} ∝ a^{−4}, and then the temperature redshifts as T ∝ a^{−1} ∝ (ABC)^{−1/3}; also, from Eq. (39) one can obtain ρ_d ∝ T^{3(1+γ_d)} [18]. We then suppose a reference temperature T_R and obtain the corresponding expression, assuming T_D = ηT_R. The case η ≫ 1 is equivalent to the state in which r → 0. Hence we obtain the corresponding limit. For η ≪ 1, we arrive at an expression from which it is clearly seen that if γ_d > 1/3, then n_b/s > 0. Conclusion The main purpose of the present work has been to explore the consequences of using the anisotropy of the metric, (1), as input in Einstein's equations, assuming that the cosmic fluid is a perfect fluid. The expression for the energy-momentum tensor T_μν is given in (2). The cosmological constant Λ has been set equal to zero. We have obtained the following results from our study. 1. We show that a universe dominated by two interacting perfect fluids has a curvature that varies with time, and the effect of the anisotropic space-time is remarkable. 2. We have obtained Ṙ for the radiation-dominant regime, and the effect of the anisotropy of space-time is clearly seen in it. 3. We assume that one of the components filling the universe is a massive scalar field. We have shown that the effect of the shear tensor on Ṙ is notable; also, for the scalar-field-dominant case in which the kinetic term is negligible with respect to the potential term, we have obtained Ṙ = f(ρ_R, σ)λ_1. This result shows that in this case, if Γ_1 = 0, there is no gravitational source for asymmetry in baryon number.
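Several of the displayed equations of this section were lost in extraction. For reference, in the notation of the gravitational-baryogenesis proposal of [10], the CPT-violating interaction and the resulting baryon-to-entropy ratio are typically written as below; these are the standard textbook forms and may differ in sign conventions or prefactors from the elided equations of this paper.

```latex
% Standard gravitational-baryogenesis relations (after Davoudiasl et al. [10]);
% conventions may differ slightly from the equations elided above.
\frac{\epsilon}{M_{*}^{2}}\int d^{4}x\,\sqrt{-g}\,(\partial_{\mu}R)\,J^{\mu},
\qquad
n_{B} \simeq \frac{g_{b}}{6}\,\mu_{B}T^{2},
\qquad
\frac{n_{b}}{s} \simeq -\,\epsilon\,\frac{15\,g_{b}}{4\pi^{2}g_{s}}\,
\left.\frac{\dot R}{M_{*}^{2}\,T}\right|_{T_{D}}.
```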
2010-10-24T14:32:27.000Z
2010-10-24T00:00:00.000
{ "year": 2010, "sha1": "8f49b2a2ef0de9c35f50751ad36eba7e247bafc5", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1010.4966", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "8f49b2a2ef0de9c35f50751ad36eba7e247bafc5", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
133655031
pes2o/s2orc
v3-fos-license
Modelling nutrient requirements for pigs to optimize feed efficiency Improvement of feed efficiency is crucial if pig production is to meet the challenge of sustainability in terms of production costs and environmental impact. . This implies to precisely know the nutrient requirements of sows and growing pigs to develop adapted feeding strategies and thus optimize performance. This chapter describes existing modelling approaches developed to predict the nutrient requirement of a single individual animal (growing pig or sow) in terms of protein / amino acids, energy and minerals, and depending on characteristics of the pig and the feed, and environmental conditions. The chapter proposes and explains the integration of individual variability among animals into models for pig feeding, its application in precision feeding, and illustrates via a case study the relevance of the application of these models for improving feed efficiency. Introduction Animal production is continuously facing the challenge of sustainability. In the next decades, world animal production is expected to increase by about 70% to satisfy the increased demand for animal protein, as anticipated by FAO (2011). Meeting this demand in a sustainable way requires increasing the efficiency of animal production. Additionally, livestock production systems have also to integrate the dimensions of animal health and welfare, food quality and security, environment, and consumer and citizen expectations to ensure their sustainability. In pig production, feed and feeding are major levers to control performance, with immediate and reversible effects. Feed represents a major part of the production costs (typically 60 to 70%) and thus largely affects economic results. Feed is also largely implicated in other sustainability pillars by its action on performance, animal welfare and health, product quality and environmental impact. For most nutrients, the efficiency with which animals transform dietary inputs to animal products is relatively low. For protein, this efficiency rarely exceeds 50%, while for phosphorus and energy these efficiencies are even lower. This implies that knowledge of nutritional requirements, combined with their availability in feed ingredients, is of major importance to develop feeding strategies that contribute to improving the feed efficiency. The requirement of nutrients such as amino acids or minerals, when all other nutrients are provided at adequate levels, can be defined as the amount needed for specified production purposes such as growth, protein deposition, milk production or maintenance (Fuller, 2004). Nutrient requirements are affected by factors related to the animal (e.g. genetic, age, weight, sex, social status and health), the feed (e.g. feed allowance, nutrient composition and digestibility) and housing conditions (e.g. ambient temperature and space allowance) (Noblet and Quiniou, 1999). Two methods are generally used to determine the nutrient requirements of pigs: the empirical and the factorial methods (Patience et al., 1995). A comparison of these two methods has been given by Hauschild et al. (2010b) and Pomar et al. (2013). In the empirical approach, the requirement for a nutrient is determined by feeding groups of pigs with increasing levels of this nutrient while measuring one or several performance traits during a given time interval. With this method, the nutrient requirement corresponds to the population requirement for the considered performance and time interval. 
The estimated requirement may depend on the measured performance trait and on the statistical model used to estimate the population requirement (Pomar et al., 2013). Differences in pig characteristics (e.g. adiposity in growing pigs and prolificacy in sows) and the effect of the environment on the requirement limit the possibility of extrapolating results to other production situations. This is why the factorial approach is generally preferred and considered as a reference method. With this method, the estimated nutrient requirement includes requirements for maintenance and production, and the efficiency of nutrient use (Fuller and Chamberlain, 1982; van Milgen and Noblet, 2003). Requirements are assumed to be the amount of the given nutrient that will allow the animal to perform its required functions normally, without limiting growth (Pomar et al., 2013). As illustrated for lysine in Pomar et al. (2013), the requirement for a nutrient is calculated as the sum of lysine requirements for given functions (e.g. basal endogenous losses and daily gain), expressed as the quantity of the nutrient required per unit of feed intake, body weight (BW) or daily gain. Whereas the empirical approach is used to obtain the requirement at a population level, the factorial approach allows estimating the requirement of an individual animal at a given stage. Applying the factorial method to determine the requirement of a population requires accounting for variability among pigs in the population. Indeed, using the requirement of the average pig to feed a population of pigs implies that half of the population will receive less of the nutrient than required and the performance of the population will be lower than expected (Brossard et al., 2009; Hauschild et al., 2010b). An alternative is to use a lower nutrient efficiency for the population than for individuals (NRC, 2012) or to increase the requirement by a given percentage. However, defining the population requirement is difficult, depending on whether it is considered as the nutrient amount required to satisfy the average pig, the most demanding pigs or a given proportion of pigs. Even if the factorial method allows for a more mechanistic determination of the requirement for a given level of performance, its application is not straightforward because of the dependency of this performance level on pig characteristics, nutritional supply, housing conditions, and their interactions (Noblet et al., 2016). Modelling approaches based on the factorial approach have been developed since the 1970s to predict the response of growing pigs or sows to the nutrient supply. These models have been developed to simulate the performance of a single animal and can be used as decision support tools to assess the nutrient requirement and identify appropriate feeding strategies (e.g. van Milgen et al., 2008; NRC, 2012). However, between-animal variation has been shown to affect the population response, the efficiency of nutrient utilization and, consequently, the optimal nutrient supply for the population (Leclercq and Beaumont, 2000; Pomar et al., 2003; Brossard et al., 2009). Therefore, stochasticity has been introduced in models to deal with variability and to simulate responses of groups of pigs (e.g. Ferguson et al., 1997; Knap, 1999; Pomar et al., 2003; Vautier et al., 2013). These approaches have shown that variation among animals needs to be considered to improve nutrient efficiency on-farm.
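To make the contrast between deterministic and stochastic use of such models concrete, a stochastic population simulation simply draws the profile parameters of each simulated pig from a distribution before running the individual model for each animal. The sketch below illustrates only that sampling step; the parameter names, means and (co)variances are assumptions chosen for illustration and do not come from any published model.

```python
# Sketch of how between-animal variability can be introduced into a pig growth model:
# draw each pig's profile parameters from a multivariate normal, then simulate each pig.
import numpy as np

rng = np.random.default_rng(seed=1)

# Illustrative population means and covariance for two profile parameters:
# mean protein deposition (g/d) and ad libitum feed intake at 60 kg (kg/d).
mean = np.array([150.0, 2.2])
cov = np.array([[15.0**2, 0.5 * 15.0 * 0.15],
                [0.5 * 15.0 * 0.15, 0.15**2]])

population = rng.multivariate_normal(mean, cov, size=1000)

# A deterministic individual model would be run for each simulated pig here;
# as a placeholder, only the generated population is summarized.
pd_mean, fi_mean = population.mean(axis=0)
print(f"simulated pigs: {len(population)}, mean PD = {pd_mean:.0f} g/d, mean intake = {fi_mean:.2f} kg/d")
```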
Assessing variation among animals is possible through the development of monitoring devices that allow one to characterize individual animals on-farm, and through the development of feeding devices that are controlled through real-time decision support tools. In this chapter, we will briefly present the existing modelling approaches developed to predict nutrient requirements of a single individual (growing pig or sow) in terms of protein/amino acids, energy, and minerals. We will then present how variability among animals can be integrated in modelling nutrient requirements. We will also illustrate how these models can be used in the development of precision feeding systems that allow improvement of feed efficiency. 2 Modelling pig nutrient requirements As described in the introduction, the factorial approach allows defining requirements for a given level of performance in a given environment. However, this performance level depends on several factors such as pig characteristics (e.g. age, weight, sex, health status and genetic potential), feeding level and feed quality, and housing conditions. Moreover, this method does not account for the possible interactions between these factors. Defining nutrient requirements more precisely requires establishing response curves between nutritional supplies and performance according to the physiological status of the pig. Modelling approaches allow integrating these response curves to predict performance (outputs from the model) from information on feed quantity and quality, animal characteristics, and housing conditions (inputs to the model) on the basis of a set of equations. Different types of models exist (France and Thornley, 1984; Sauvant, 2005). Empirical models, also called 'black box' models, relate inputs to outputs without relying on the underlying biological mechanisms. Mechanistic models include underlying biological mechanisms to calculate outputs from inputs; this implies that these models are more complex and typically use numerous equations and parameters. Models predicting outputs at a given point in time or space are called static, which contrasts with dynamic models that describe the dynamic nature of the response over a given period (e.g. the fattening period). Finally, deterministic models have a fixed set of parameters and therefore do not account for variability among animals, the same set of parameter values for inputs implying the same values for outputs. In contrast, stochastic models integrate variability in model parameters, which offers the possibility to account for variation among animals in a population. Growing pig models Since the 1970s, different models, most of which are dynamic, deterministic, and more or less mechanistic, have been developed to simulate pig growth and to determine nutrient requirements. They are often based on the association of empirical equations and mechanistic descriptions of physiological or biochemical functions of the animal. The concepts used in these models have been summarized and compared by several authors (Bastianelli and Sauvant, 1997; Ferguson, 2006; Kyriazakis and Sandberg, 2006; Luiting and Knap, 2006; van Milgen et al., 2012). Whittemore and Fawcett (1974, 1976) proposed one of the first nutritional models describing pig growth, with the objective of predicting body protein and lipid deposition from the energy and protein intake.
The basic concepts underlying this semi-mechanistic model can be summarized as follows (van Milgen et al., 2012):
- Growth is determined from modelling lipid and protein deposition; BW, water, and ash deposition, and fat and lean weight are calculated empirically from body protein and lipid weight
- The model considers an upper limit to protein deposition in growing pigs, which is constant during growth (i.e. PDmax, 110 g/day from 20 to 100 kg)
- There is a minimum lipid-to-protein deposition ratio (minimum LD:PD)
- The actual protein deposition is determined by the last two factors and by the quality and quantity of ingested protein (i.e. the supply of essential amino acids)
- Feed intake is a model input; energy not used for maintenance or protein deposition is used for lipid deposition, which is therefore considered as an energy sink.
An important concept used in this model was the linear-plateau relationship between ingested energy and protein deposition, with protein deposition being dependent on (linear part, minimum LD:PD) or independent of (plateau or PDmax) the energy supply. Body weight and carcass traits are determined from these depositions. Compared to the approach of Whittemore and Fawcett, various modifications have been proposed, for instance concerning the change in PDmax and the relationship between ingested energy and protein deposition during growth (NRC, 2012). In these models, feed intake can be considered as an input ('push' approach) or an output ('pull' approach). In the 'push' approach, ad libitum feed intake is typically modelled as a function of BW through linear, exponential, power, or gamma functions (e.g., Moughan et al., 1987; Pomar et al., 1991a; de Lange, 1995; van Milgen et al., 2008; NRC, 2012). Because lipid deposition is considered as an energy sink, energy not used for maintenance and protein deposition is deposited as lipid. Unless a specific control of energy intake is included in the model (e.g. with the Gamma function where animals eat for maintenance when they approach maturity), there will be no constraint on the way lipid deposition will evolve during growth. In the models of Black et al. (1986), Ferguson et al. (1994), Emmans (1997), Wellock et al. (2003) and Yoosuk et al. (2011), a 'pull' approach was used where feed intake is calculated to satisfy the requirements of maintenance and of the potential protein and lipid deposition. In these models, explicit response curves (e.g. Gompertz functions) are used to describe the change in the potential protein and lipid deposition during growth. Actual intake is then calculated from predicted feed intake by applying constraints such as the ingestion capacity of the animal, feed characteristics (e.g. nutrient composition, density and water holding capacity) or environmental conditions (e.g. temperature). In this approach, when the animal approaches maturity, protein and lipid deposition tend towards zero and feed intake corresponds to the energy expenditure for maintenance. The different models presented here predict growth, as affected by nutrient supply, animal characteristics and, in some cases, housing conditions. These models allow one to calculate nutrient requirements to achieve this growth and identify possible limiting factors. Most of these models depend on a large number of parameters that are difficult to obtain or estimate under practical conditions. Although these models are used in a research context, practical application of models requires that these models are easily accessible and user-friendly (e.g.
as a dedicated software tool) and that required model inputs can be provided by the user (e.g. TMV (Werkgroep, 1991), NRC (2012) and InraPorc (van Milgen et al., 2008)). For example, the InraPorc software summarizes the phenotypic potential of a growing pig (i.e. growth potential and ad libitum feed intake) by five model parameters. These parameters can be estimated through a statistical routine using on-farm recorded data of body weight and feed intake. Tools such as InraPorc can be used to estimate nutrient and energy requirements and utilization, evaluate the consequence of different feeding strategies, and identify feeding strategies that allow one to improve performance traits such as feed efficiency. An example of the change in nutrient and energy utilization during growth obtained from InraPorc is given in Fig. 1 for the utilization of digestible energy (Fig. 1a) and standardized ileal digestible (SID) lysine (Fig. 1b) as a function of BW. The use of InraPorc to adapt feeding strategies has been illustrated by simulating the performance of a pig between 27 and 100 kg BW having an average daily feed intake of 2.74 kg/d and an average daily gain (ADG) of 1.10 kg/d. Diets were formulated on a least-cost basis using feed ingredient prices of May 2008 in France and a net energy content of 10 MJ/kg. A first diet (D1) was formulated to contain 9.2 g SID lysine, a level 10% higher than the requirement of the animal at the start of the studied period to account for the requirement of the population (see Section 3.1). Two other diets (D2 and D3) were formulated with lower SID lysine levels corresponding to the requirement at around 60 kg BW and at the end of the studied period. Three feeding strategies were then defined: a single-phase strategy with D1 (SP), a two-phase strategy with D1 offered until 60 kg BW and D2 for higher body weights (TP), and a multiphase strategy (MP) with a progressive replacement of D1 by D3 to follow as much as possible the changes in the SID lysine requirement. Growth performance obtained for the three strategies was identical as the diets were formulated to avoid deficiencies. However, total nitrogen (N) intake decreased from 4.79 kg for SP to 3.86 kg for MP. Consequently, N excretion decreased from 3.01 kg for SP to 2.08 kg for MP, and the efficiency of N utilization was increased. In addition, better adapting the nutrient supply to the requirement reduced feed costs (in comparison with the SP strategy) by 5.4% for TP and by 10.8% for MP. This example shows the value of modelling nutrient requirements to improve feed efficiency. In contrast to models for growing pigs, only a few models describing nutrient utilization in reproductive sows have been developed (Williams et al., 1985; Dourmad, 1987; Pomar et al., 1991b; Pettigrew et al., 1992; Dourmad et al., 2008; NRC, 2012; Hansen et al., 2014), and most of these are research models. Some of these are essentially calculation models to determine nutrient requirements (e.g. NRC, 2012). More mechanistic models also exist, such as InraPorc and the model of Hansen et al. (2014). These two models describe energy and nutrient partitioning on a daily basis using a similar structure. The calculation of nutrient requirements, which is differentiated between gestation and lactation, is based on a factorial approach. As for growing pigs, the sow models are based on body lipid and protein mass that evolve as a function of the flows of (metabolizable) energy and digestible amino acids.
During gestation, requirements for maintenance (including activity and thermoregulation) and foetal/uterus and maternal growth are modelled. During lactation, nutrient and energy requirements are based on the requirements for maintenance, milk production and maternal growth. Priority of nutrient use is given to maintenance, foetal growth and milk production. Body reserves can be mobilized to contribute to these functions in case the nutrient supply is lower than the requirement during lactation. Conversely, body reserves can be (re)constituted during the next gestation when the nutrient supply exceeds the requirements for maintenance and foetal growth. Compared to other models, the InraPorc sow model allows calibrating parameters from on-farm data of reproductive performance, feeding practices and housing conditions. These parameters can be used to calculate nutrient requirements by the factorial approach, and to simulate the dynamic response of sows to different feeding strategies. An example of the SID lysine utilization by sows over four parities is given in Fig. 2 ( Dourmad et al., 2015a). Three different feeding strategies were used that differed in nutrient supplies during gestation. With the first feeding strategy ( Fig. 2a) a single gestation diet was used during the entire gestation period and a lactation diet during lactation. The feeding level during gestation increased by 400 g/d during the last 3 weeks of gestation and was adjusted to body condition at mating and the target body condition at farrowing. During lactation, feed intake was assumed to be close to ad libitum. Diets were formulated on a least-cost basis. The Fig. 2a indicates that the SID lysine requirement increases during gestation but decreases with parity (because the sows attain mature BW). Consequently, for gestating sows receiving the same diet independently of parity, the amino acid and protein supplies exceed the requirement, especially during the beginning of gestation and more so in older sows. To test the possibility of reducing this excess, a second strategy was simulated using two different diets for gestating sows, depending on parity and gestation stage, and differing in amino acid and protein contents (Fig. 2b). The first diet contained 3.8 g SID lysine and 102 g crude protein (CP) per kg of feed and was used during the first 80 days of gestation, except for first parity sows. The second diet contained 5.5 g SID lysine and 145 g CP per kg of feed and was used in first parity sows throughout gestation, and in other sows from day 80 of gestation. Other amino acids were supplied according to the ideal protein requirement. With the two-phase strategy, total consumption of CP and SID lysine were reduced by 10 and 11%, respectively, with an associated average reduction of 15% of N excretion. These results indicate that the amino acid supplies were much better adjusted to the sow's requirements with this strategy. Further improvements have been tested using a multiphase feeding during gestation. Two gestation diets were formulated differing in amino acid and CP contents. The first and the second diets contained 3.0 g SID lysine and 99.7 g CP, and 5.5 g SID lysine and 145 g CP per kg of feed, respectively. The two diets were mixed in adequate proportions to meet, on a daily basis, the amino acid (and digestible P) requirement (Fig. 2c). Compared to the single diet feeding strategy, intake of CP and SID lysine and N excretion were reduced by 14, 17 and 2% respectively, with the multiphase strategy. 
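The priority rules just described for the sow (maintenance and milk first, with body reserves buffering any shortfall during lactation and being reconstituted during gestation) can be illustrated with a minimal sketch. The maintenance coefficient, milk-energy efficiency and example figures below are hypothetical placeholders; they are not taken from InraPorc, Hansen et al. (2014) or the simulations reported here.

def lactation_energy_balance(me_intake_mj, milk_energy_mj, bw_kg,
                             maint_coeff=0.44,       # MJ ME per kg BW^0.75 per day (placeholder value)
                             milk_efficiency=0.70):  # efficiency of ME use for milk (placeholder value)
    """Partition daily ME intake between maintenance and milk; the remainder is the
    body-reserve balance (negative = mobilization, positive = reconstitution)."""
    maintenance = maint_coeff * bw_kg ** 0.75
    milk_cost = milk_energy_mj / milk_efficiency
    return me_intake_mj - maintenance - milk_cost

# Example: a 230 kg sow eating 90 MJ ME/day while exporting 50 MJ/day in milk
print(round(lactation_energy_balance(90, 50, 230), 1), "MJ/day body-reserve balance")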
Compared with the one-phase feeding strategy (and without considering the possible extra cost for applying the feed changes), the calculated feed cost was 6% lower with the two-phase strategy, and 8% lower with multiphase feeding. Modelling nutrient requirements can thus be used to develop feeding strategies to improve feed efficiency and to reduce N excretion and feed costs. Mineral requirement modelling As described above, many current growth models for pigs consider that BW, water and ash deposition, and fat and lean weight are determined from body protein and lipid weight, as suggested by Whittemore and Fawcett (1976). Van Milgen et al. (2008) and NRC (2012) estimate empty BW and BW from protein and lipid body mass without explicitly determining water and ash mass. As indicated by Létourneau-Montminy et al. (2015), these approaches imply that body ash (i.e. all body minerals) mass and growth are driven by growth of body protein and/or lipid. Létourneau-Montminy et al. (2015) observed that pigs fed deficient diets modulated growth of soft tissues and bone mineralization independently (Pomar et al., 2006;Rousseau, 2013). Consequently, total body ash can be reduced without necessarily reducing mineral deposition and growth of soft tissues. Minerals are of great importance in pig nutrition. For instance, phosphorus (P) has an important role in bone development and metabolism of growing pigs (NRC 2012). It also has an important economic value, being the third most expensive nutrient in pig diets after energy and protein. Indeed, due to the low digestibility of dietary P of plant origin (i.e. phytate), diets are supplemented with expensive, non-renewable inorganic sources of P to meet the P requirements (Selle et al., 2011). The alternative, often used in practice, is to incorporate phytase enzymes which increase P digestibility. The high excretion of P, due to its oversupply and its low digestibility, also contributes to the environmental impact of pig production through eutrophication (Selle et al., 2011). These different elements argue to use precise modelling approaches to determine mineral requirements in order to be able to achieve an optimized supply, with minimized excretion. The factorial approach of mineral requirements has been integrated in models for growing pigs and sows. For instance, Jondreville and Dourmad (2005) estimated P requirements for maintenance and production for different physiological stages. These principles have been integrated in models such as InraPorc for growing pigs and sows using the concept of apparently digestible P and considering the effect of diet form (pellet or mash) and phytase addition on digestibility. As summarized by Noblet et al. (2016), the P requirement for growth is based on the BW of animals. Export of P through milk is estimated from milk protein production while P requirement for conceptus growth is estimated from protein deposition in foetuses. The approach proposed by Jondreville and Dourmad (2005) allows one to adjust the dietary digestible P supply to pig performance and physiological status, and to evaluate the effect of performance level on apparent digestible P requirement. For instance, a decrease of 0.2 points in feed conversion ratio in growing pig requires an increase of 0.2 g/kg in dietary P concentration. As indicated earlier, estimating mineral requirements from performance (i.e. BW or BW gain) has some limitations. 
Consequently, specific models have been developed, mainly for P, to allow the body mineral content to vary independently of protein and lipid mass and to allow simulating different phases of P deficiency (e.g. a phase of bone weakening while performance is maintained followed by a phase of growth reduction) and compensatory bone mineralization (Fernandez, 1995; Schulin-Zeuthen et al., 2005; Létourneau-Montminy et al., 2007; Symeou et al., 2014a,b). These models are deterministic and mechanistic as they explicitly consider the mechanisms of P intake, digestion, retention and excretion (e.g. Symeou et al., 2014a,b), and also of Ca and other minerals and natural or microbial phytase addition (e.g. Létourneau-Montminy et al., 2015). Steps of mineral digestion in the stomach, and in the small and large intestine, can be differentiated (e.g. Symeou et al., 2014a,b), as well as excretion in the urine and faeces. These models are mainly oriented towards research but are intended to be included in decision support tools and can be applied to develop feeding strategies in relation to growth performance, bone mineralization and optimization of P retention while minimizing P excretion.

3 Population, variability and feed requirement modelling

3.1 Interest of including variability in pig models

As described above, several models exist to determine nutritional requirements. However, these models are mainly deterministic, representative of an average animal. Furthermore, parameter values used in these models are obtained from experiments conducted with groups of pigs, for which the average animal is supposed to be representative (Pomar et al., 2013). Different arguments can be advanced in favour of considering not only the average animal but also the variability among animals. Knap (1995) pointed out some arguments for considering variability in models. For example, the profitability of production systems can be largely affected by the extent of variability of performance traits. Moreover, the change from one production system to another can have minor effects on mean production levels but important consequences on their variability. For example, offering feed ad libitum in the finishing period can have limited impact on the population mean of ADG in comparison with a restricted feed allowance. However, it can induce an increase in the variability in ADG among animals, influencing the BW and lean content distribution at slaughter and thus impacting carcass payment. Additionally, when using a model to replace experimental comparisons by simulations, the inclusion of variability in models is needed as a proper significance testing of differences between production systems requires statistical tests based on knowledge of variability within and between systems. Indeed, a difference between treatment means is meaningful only if it can be compared to the variation within each treatment. A last example is the study of the relationships between performance criteria, for which covariance between these criteria, and thus variability, must be accounted for. Recent studies have demonstrated the importance of considering variability among animals to evaluate biological responses and in defining nutritional programmes (Main et al., 2008; Brossard et al., 2009; Vautier et al., 2013). Indeed, between-animal variation determines the population response and, therefore, the overall efficiency of nutrient utilization and optimal nutrient levels (Leclercq and Beaumont, 2000; Pomar et al., 2003; Brossard et al., 2009). Pomar et al.
(2003) and Pomar (2005) illustrated this point by using a growth model based on those of Knap (1999) and Wellock et al. (2003). The inputs of the model are the diet composition and the pig genotype, which is described by three independent parameters. This model was used to simulate the growth of 2500 pigs originating from five different populations with the same mean genetic potential (i.e. the same mean values for each of the three model parameters) but with different genetic variances. Populations were generated randomly to obtain for each parameter 0, 0.5, 1, 1.5 and 2 times the estimated genetic variance from a reference population. Consequently, the null variance population corresponded to a population of fully identical animals, while the other populations corresponded to more or less heterogeneous populations. The performance of pigs was simulated during one day considering that all pigs were 50 kg BW. Simulations were carried out with 11 diets, with intake of ideal protein varying from 212 to 290 g/d. For the null variance population, the response of protein deposition to increasing protein supply had a linear-plateau shape, as stated in the single animal model (Fig. 3a). With an increasing variance of parameters, the response became increasingly curvilinear-plateau and the protein level required to attain the plateau increased as the variance increased. The ideal protein requirement, defined as the nutrient level required to maximize protein deposition, increased from 235 g/d to 251 g/d when the variance increased from 0 to 1 times the reference variance. For higher variance, the requirement value was difficult to estimate because some animals in the population did not receive sufficient protein to express their potential (Fig. 3b). The average animal is represented by the null variance population, for which the requirement is met for a supply of 235 g/d. For the other populations, this supply allowed covering the requirements of 50% of the population. This percentage of pigs with their requirement met decreased when the protein supply was reduced, the extent of this decrease being higher when the population variance increased. Similarly, the proportion of over-fed pigs increased when the protein supply increased. In the same way, Brossard et al. (2009) also showed that the SID lysine requirement varies between pigs in a population and that the percentage of pigs for which the requirement was met can vary greatly with the feeding strategy (e.g. the number of feeding phases) and the growth period.

3.2 Integrating variability in feed requirement modelling

To account for the variability among pigs in determining nutrient requirements and to study the effect of feeding strategies on performance, different models have been developed to simulate the growth of a pig population (e.g. Ferguson et al., 1997; Knap, 1999; Pomar et al., 2003; Wellock et al., 2004; Brossard et al., 2009; Symeou et al., 2016a,b). These approaches are based on the knowledge of mean values and variability (mainly through variance, rarely through covariance) of model parameters. Different methods can be used to estimate these values. Schinckel and de Lange (1996) proposed repeated measurements of body composition using ultrasound or tomography to assess individual variability during growth. Methods of inverse modelling have also been developed to obtain these parameter values for a population or for individual animals (Doeschl-Wilson et al., 2006; Vautier et al., 2013).
For instance, model parameters can be adjusted iteratively through optimization by comparing real data (e.g. BW, feed intake, protein or lipid mass) to model outputs to minimize the difference between predicted and real values. Once the mean and the variance of parameter values are known, different methods can be applied to integrate variability in models. For instance, combinations of a limited number of parameters can be generated randomly from mean and variance values of parameters. Parameters are assumed to have normal and independent distributions and the coefficient of variation (CV) of the parameters is fixed by the authors. The lack of information constrained the authors to ignore the covariance between parameters (e.g. Ferguson et al., 1997; Pomar et al., 2003; Wellock et al., 2004; Symeou et al., 2016a,b). Indeed, Ferguson et al. (1997) considered it too costly and difficult to develop experimental procedures to obtain a correct estimation of CV and correlations between parameters. However, not considering covariance between parameters induces an overestimation of simulated variability. For example, a parameter describing feed intake is undoubtedly correlated to a parameter describing growth because a pig eating more than average is also likely to grow faster than the average pig. Therefore, introducing variability in models requires knowledge not only of the variance in parameter values but also of the covariance between parameters (Ferguson et al., 1997; Kyriazakis, 1999). Consequently, other methods have been developed to integrate covariance between parameters. Morel et al. (2010, 2012) developed an approach based on a generic variance-covariance matrix associated with distribution laws of the parameters. The generic variance-covariance matrix was calculated as a median from matrices obtained from 40 subpopulations of pigs differing in sex and cross-breeds, to ensure the genericity of the relationships between parameters and to avoid the use of a particular variance-covariance structure. This generic matrix and the distribution laws can be combined with mean parameters obtained on-farm to generate virtual populations of pigs with realistic mean performance and variability. Finally, Schinckel et al. (2003) and Strathe et al. (2009, 2010) developed stochastic growth models using nonlinear mixed equations, integrating a mean effect of the population and random effects due to individuals. Their method allows one to obtain a mean value for each parameter but also individual values and correlation and variance-covariance structures that can be used to generate populations and perform stochastic simulations and predictions of mean and standard deviation of performance.

3.3 Applying stochastic modelling to feed efficiency improvement

The stochastic growth models presented above can be used to simulate the effect of different feeding strategies on animal performance but also on economic results and environmental impacts. Morel et al. (2010, 2012) applied their model to investigate how pig genotype (i.e. lean, normal and fat), population size and variability, feed costs, carcass payment scheme, and feed allowance (i.e. restricted vs ad libitum) affect the gross margin and/or N retention that are optimized through feeding strategies (i.e. number of diets, energy and amino acid content, quantity and duration during which each diet is fed). Morel et al. (2010) randomly generated populations of 1 to 625 pigs by varying four parameters of a growth model.
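A minimal sketch of this kind of stochastic simulation is given below: a virtual population is drawn from a multivariate normal distribution so that the covariance between a growth-potential parameter and an efficiency parameter is respected, and the population-average protein deposition response to protein supply is then computed. The parameter names, the numerical values and the simple min() response rule are illustrative assumptions only; they do not reproduce the models of Morel et al., Pomar et al. or any other cited author.

import numpy as np

rng = np.random.default_rng(seed=1)

# Illustrative population: mean PDmax (g/day) and mean marginal efficiency of protein use
mean = np.array([150.0, 0.55])
# Variance-covariance matrix (placeholder values); the positive covariance links
# a high deposition potential to a high efficiency
cov = np.array([[225.0, 0.45],
                [0.45, 0.0025]])
population = rng.multivariate_normal(mean, cov, size=1000)  # one row per virtual pig

for supply in range(200, 361, 20):  # ideal protein supply, g/day
    # each pig follows its own linear-plateau response
    pd_each = np.minimum(population[:, 1] * supply, population[:, 0])
    print(supply, round(pd_each.mean(), 1))

# The population mean bends smoothly towards its plateau (a curvilinear-plateau response),
# whereas a single 'average' pig would show a sharp linear-plateau break.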
Variation of these parameters around the mean was modelled by a CV of 0% (single pig population), 5, 10 or 20%. To generate populations, the parameters were considered either independent or correlated with an associated covariance matrix for the parameters, as described in Section 3.2. Populations were generated for lean, normal or fat pig genotypes. Diets used changed each week; levels of digestible energy (DE) and ratio digestible lysine (Lysd)/DE used for formulation of diets were determined by nonlinear optimization with a genetic algorithm on the basis of simulated performance between 20 and 85 kg BW with an objective of maximizing gross margin (i.e. carcass value minus feed costs and piglet value). Their results showed that lean pigs allowed one to obtain the highest gross margins. The variability in gross margin between simulation runs increased with increasing population variability but decreased with increasing population size. When the covariance was introduced, difference in optimal Lysd/DE ratio varied from −4% to 50% between single pig population (average pig) and population of 125 pigs, depending on the pig type. These authors also noticed that including variance increased optimal values for Lysd/DE ratio. In conclusion, this study showed the importance of the knowledge and the inclusion of variability in the optimization of gross margin through adapted feeding strategies. Brossard et al. (2010) used a stochastic model for an economic and environmental analysis of pig production. They simulated the effect of changing the SID Lys/net energy (NE) ratio from 85 to 115% of the average population requirement on growth performance, economic results, and N excretion in two contexts of costs of feed ingredients (high or moderate). These different Lys/NE ratios were combined with three different feeding strategies: one strategy with a single diet formulated to meet the mean requirement at the beginning of the growth period; a two-phase strategy with a grower diet and a finisher diet formulated to meet the mean requirement at the beginning and at the middle of growth period; a multiphase strategy with daily adjustments of the diet to the mean requirement of the population. Performance and N excretion were simulated from 27 to 112 kg BW with InraPorc®. Gross margin was calculated from carcass value and feed and labour costs. Maximal growth performance was observed with a SID Lys/NE ratio of 105 to 115% of the mean requirement (Table 1) illustrating the fact that the nutrient supply required to maximize growth performance is higher than mean population requirement. Reducing the SID Lys/NE ratio below 100% of the mean requirement reduced growth performance and economic results, with the effect being more marked for the two-phase and multiphase strategy. It also increased N excretion for the multiphase strategy but decreased N excretion for the single diet strategy. Indeed, the daily adjustment of SID Lys supply below the mean requirement implied that a major part of the population encountered a SID Lys deficiency, consequently reducing the N efficiency. In contrast, with the single diet strategy, reducing SID Lys supply reduced the SID Lys oversupply and thus the N excretion. When the SID Lys/NE ratio in diets was increased above 100% of the mean requirement, economic result was improved for two-phase and multiphase strategy with an optimum with a SID Lys/NE ratio of 110 to 115% depending on cost context and strategy. 
For the multiphase strategy, this economic improvement was accompanied by a small reduction in N excretion, whereas the two other strategies induced higher N excretion. (Table 1 notes: in the two-phase strategy, the diet was changed at 112 days of age, the two diets containing 0.762 g and 0.635 g SID Lys/MJ NE, respectively, for the 100% reference level; in the multiphase strategy, the Lys/NE ratio was changed daily according to requirements, with an NE supply of 9.71 MJ and, for the 100% reference level, a SID Lys supply of 0.762 g/MJ NE and 0.531 g/MJ NE at the beginning and at the end of the growing-finishing period, respectively; differences are expressed relative to the 100% Lys/NE level of the one-phase strategy.)

4 Towards precision feeding

As indicated above, modelling approaches exist that deal with variability in requirements among individual pigs. This allows one to define how to feed a population, on the basis of a reference (e.g. the mean animal) and of simulations on populations generated using the knowledge of mean or individual values of parameters and/or variance-covariance relationships between parameters. Even if these approaches are useful to explore optimal feeding strategies, some difficulties can arise when applying them. Depending on the approach followed, obtaining appropriate parameters or variance-covariance knowledge for a given population can be difficult. Moreover, parameters obtained for one population cannot simply be adapted to another population due to differences in parameter variability or housing conditions. To account for variability among pigs without dealing with issues such as the covariance structure between parameters, a new approach for nutrient requirement modelling is offered by precision feeding, concomitantly with the development of technologies in the field of precision farming. Precision feeding is based on the dynamic adjustment (if possible day by day) of dietary nutrient supplies to requirements, at a group or at an individual level. In this approach, individual pigs are treated as such and each pig/group is to be modelled individually. The purpose is to improve feed efficiency whilst reducing feed cost and environmental impact. To provide daily and individually tailored diets, this technique needs to include the following elements (Pomar et al., 2009, 2013): real-time data collection is needed in terms of BW and feed intake through automatic devices. To obtain model parameters per individual and to predict nutrient requirements and performance individually and on a daily basis using real-time data, models have to evolve to integrate 'real' growth and feed intake patterns that may differ from the expected 'theoretical' patterns. Hauschild et al. (2010a, 2012) developed a prediction model combining a statistical real-time estimation of expected BW gain and feed intake (depending on realized performance during the preceding days) with a mechanistic model predicting protein deposition and NE intake and calculating by the factorial method the amino acid and mineral requirements to sustain this performance. Knowing the requirements for a pen or an individual pig, an optimal blend of feeds can be defined and distributed to individual pigs. Pomar et al. (2010) used this model to simulate the effect of applying precision feeding at an individual level.
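The blending step mentioned above can be sketched as follows: given the estimated requirement of an individual pig for the day (expressed here as g SID lysine per kg of feed) and the nutrient densities of a high-density feed A and a low-density feed B, the proportion of feed A follows from a simple linear mixing equation, clipped to the feasible range. The densities and the example requirement are placeholder figures, not those used in the cited experiments.

def blend_proportion(required_lys_g_per_kg, lys_a=11.0, lys_b=5.0):
    """Proportion of feed A (high nutrient density) to blend with feed B (low density)
    so that p * lys_a + (1 - p) * lys_b matches the required lysine concentration."""
    p = (required_lys_g_per_kg - lys_b) / (lys_a - lys_b)
    return min(max(p, 0.0), 1.0)  # the blend cannot contain less than 0% or more than 100% of feed A

# Example: a pig whose estimated requirement today is 8.3 g SID lysine per kg of feed
p = blend_proportion(8.3)
print(f"{p:.2f} of feed A and {1 - p:.2f} of feed B")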
The results of Pomar et al. (2010) indicated that precision feeding, by daily and individual adjustment of nutrient supplies to requirements, reduced feed cost by 11% and N and P excretion by more than 38%, compared to three-phase feeding applied to the whole group. This study indicates the potential of precision feeding in improving feed efficiency. The application of precision feeding using real-time modelling of nutrient requirements was tested for growing-finishing pigs over an 84-day fattening period. Pigs received a classical three-phase feeding programme (3P) obtained by blending fixed proportions of feeds A (high nutrient density) and B (low nutrient density), or a daily-phase feeding programme in which the blended proportions of diets A and B were adjusted daily to meet the estimated nutritional requirements of each individual pig (multiphase individual feeding, MPI). The performance traits (ADG, average daily feed intake, gain-to-feed ratio and N and P retention) obtained with the MPI programme were similar to those obtained for the 3P programme. However, compared with the 3P programme, the application of the MPI programme reduced the SID Lys intake by 27%, the estimated N excretion by 22%, and the estimated phosphorus excretion by 27% (Table 2; Andretta et al., 2014). In Andretta et al. (2016), the application of the MPI programme reduced SID lysine intake, estimated N excretion and feeding costs by 26, 30 and 10%, respectively. Even if the reduction in excretion and feed costs is smaller than that estimated by simulation, these results confirm the possibility of improving feed efficiency (for instance in terms of feed cost and environmental impact) with the combination of real-time modelling of requirements with precision feeding. This approach has yet to be improved for further on-farm application, so as to integrate more information on the animal (e.g. composition of BW gain, health status and behaviour) and on its environment (e.g. temperature and ventilation rate) made available by the development of sensors, which can improve the accuracy of predictions. Additionally, more complex objectives for feeding strategies could be integrated, such as body composition, expected weight at a fixed age to plan slaughter departures, and so on. (Table 2, final row: phosphorus excretion was 6.9 g/d for 3P and 5.1 g/d for MPI; different subscripts within a row indicate a statistical difference, P < 0.05.)

5 Case study

Using modelling approaches to determine feed requirements and to adapt feeding strategies has been shown by simulation to improve feed efficiency (see Section 3.3). Some studies have also been developed to assess the practical application of these modelling approaches. For instance, Brossard et al. (2014) used a herd modelling approach to evaluate different feeding strategies to control or reduce variability among pigs at slaughter. Indeed, the variability in BW among animals complicates the management of slaughter departures. These departures are planned so as to deliver each time a sufficient number of animals within the BW range that allows maximal carcass payment. Controlling the variability of BW at slaughter is thus important to be able to deliver a maximal number of animals in the targeted BW range and to avoid too light or too heavy pigs. The applicability and accuracy of this approach were assessed in an experimental study using some of the feeding strategies evaluated by simulation.
The InraPorc model was used to perform simulations on 10 batches, each of 84 cross-bred pigs (half barrows and half gilts), to characterize the effect of feeding strategies differing in the amino acid supply or feed allowance on the mean and variation in growth rate and slaughter weight. In the simulations concerning the effect of feed allowance, pigs were offered feed ad libitum or were restricted (increase in feed allowance by 27 g/day up to a maximum of 2.4 and 2.7 kg/day for gilts and barrows, respectively). A two-phase feeding strategy was applied to all animals, with 0.9 and 0.7 g of digestible lysine per MJ of net energy (NE) in diets provided before or after 65 kg BW, respectively. Pigs were assumed to be slaughtered in two departures, with a mean BW at departure of 112 kg for the whole population. Results indicated that a feed restriction reduces the CV of BW at first departure for slaughter (BW1) and at slaughter by 34% (from 9.0 to 5.9%) and 26% (from 7.9 to 5.8%), respectively. Growth performance obtained from in silico simulations using ad libitum and restricted feeding plans was compared with results obtained in an in vivo experiment on a batch of 168 pigs when applying exactly the same feeding and slaughtering strategies. Actual growth was similar to that obtained by simulation. The CV of BW1 was also similar in vivo and in silico for the ad libitum feeding strategy but was slightly underestimated, by 1 percentage point, in silico for the restriction strategy. This study confirms the relevance of using simulations to predict the level and variability in performance of group-housed pigs, and the possibility of controlling variability through feeding strategies.

6 Conclusion and future trends

A precise knowledge of nutrient requirements in growing pigs and reproductive sows is required to better adapt feeding strategies and thus increase feed efficiency and economic results, and reduce the environmental impact of pig production. Modelling approaches have been developed to integrate this knowledge and allow the possible interactions between the pig and production conditions to be taken into account. Recent developments integrating the effect of individual variability on requirement estimations have allowed these interactions to be better accounted for. Even though additional research is still required, modelling approaches appear to be interesting alternatives and complements to experimentation. Modelling nutrient requirements is a powerful tool to test the effect of different feeding strategies on a set of criteria of interest (e.g. efficiency of nutrient utilization (and thus environmental impact), economic results). Testing a large set of feeding strategies by actual experiments would be too expensive and complicated in field situations. Moreover, results of virtual experiments give indications or trends for future improvements in feeding practices depending on farm situation and animal potential. Research models can be included in decision support tools for technicians and farmers and can also be directly integrated into automatized systems such as precision feeding systems. However, and even if current models already allow good predictions of animal requirements and performance, further developments are still in progress for their improvement. Current nutritional models allow defining nutrient requirements depending on pig characteristics. Some models for growing pigs now include variability among animals in terms of growth and intake potentials.
Also models for sows can account for variability among animals and within a given animal depending on parity or gestation/lactation stage. Future research for these models is aimed to integrate other sources of variation. For sows, the effect of ambient temperature, activity, parity or litter size on requirements are still to be refined and integrated more properly in models (Ngo et al., 2012;Dourmad et al., 2015b). In the same way, growing pig models have to be refined to better integrate the effect of ambient temperature (Wellock et al., 2003;Renaudeau et al., 2011), animal activity or capacity to deal with different stress sources linked to their social environment (e.g. group size, surface per animal, mixing period, as in the model proposed by Wellock et al., 2004) or health status. The effect of health status on resource allocation and the associated mechanisms have to be investigated to be integrated in models. Concerning minerals, comprehensive models were made available recently (e.g. Létourneau-Montminy et al., 2015) but refinements are still required to better account for changes in body mineral reserves and the mechanisms involved in the regulation of absorption. This will allow accounting for resorptionabsorption phases in sows or compensation between phases in fattening pigs. Availability of sufficient and adapted data is required to support these developments and their application for improving feed/nutrient efficiency. The current development on sensors and data collection to characterize pigs at an individual level or on their environment will allow the access in real time and with a higher frequency to more precise data on classical characteristics (e.g. feed intake, BW and ambient temperature) but also to other new traits such as behaviour, body composition and health status. The prediction of nutrient requirements will then be supported by models integrating these different types and sources of data, on the basis of historical data and also in real time. It can be imagined that the requirements will be defined not only for simple production objectives (e.g. feed intake and growth) but also for multicriteria objectives such as economical return and environmental impact. -Every year, the European Federation of Animal Science (EAAP, www.eaap.org) organizes its annual meeting with sessions on pigs with a focus on feed use or feed efficiency, and also since 2016, a dedicated commission for PLF (precision livestock farming). Every five years, the international Workshop 'Modelling Nutrient Digestion and Utilization in Farm Animals' (Modnut) is organized (see http://www.jackiekyte.com.au/modnut2014/for the last one). -The EU H2020 Feed-a-Gene project (www.feed-a-gene.eu/) aims to better adapt different components of monogastric livestock production systems (i.e. pigs, poultry and rabbits) to improve the overall efficiency and sustainability. This includes the modelling of biological functions with an emphasis on feed use mechanisms to better understand mechanisms of feed efficiency.
2019-04-27T13:07:31.600Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "008e010c35715075da02f1dba93817a90db002aa", "oa_license": "CCBYNC", "oa_url": "https://hal.archives-ouvertes.fr/hal-01644029/file/Pig%20Meat_10_Brossard_postprint.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "008e010c35715075da02f1dba93817a90db002aa", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Biology" ], "extfieldsofstudy": [ "Environmental Science" ] }
245102778
pes2o/s2orc
v3-fos-license
Efficient Everolimus Treatment for Metastatic Castration Resistant Prostate Cancer with AKT1 Mutation: A Case Report Abstract Metastatic castration resistant prostate cancer (mCRPC), the advanced stage of prostate cancer (PCa), develops resistance to first line androgen deprivation therapy (ADT). Aberrant androgen receptor (AR) and PI3K-Akt-mTOR signaling pathway are responsible for the development and progression of mCRPC. We herein describe a case of a 64-year-old male mCRPC patient with somatic AKT1 and AR mutations. The patient, who had been heavily pretreated by ADT and AR inhibitors, showed stable disease progression when he received everolimus, an mTOR inhibitor. The PSA level dropped drastically from 1493.0 ng/mL to 237.6 ng/mL, after 3 months of treatment. The overall survival (OS) was 43 months, of which the progression-free survival (PFS) with everolimus treatment was 7 months. The administration of mTOR inhibitor, everolimus, could achieve good clinical responses along with prolonging PFS for mCRPC patients harboring AKT1 mutations. Technology in precision medicine, such as targeted next-generation sequencing (NGS) of cancer-relevant genes, has promising function in personalized therapy. Introduction Prostate cancer (PCa), an epithelial malignant tumor developing in the prostate, is the second most common malignant tumor among men in the world. 1 Androgen deprivation therapy (ADT) via chemical or surgical castration, as the first-line treatment for metastatic PCa, can temporarily achieve good clinical responses. However, subsequent androgen deprivation resistance is widely observed in metastatic PCa patients receiving ADT, resulting in metastasis castration-resistant prostate cancer (mCRPC). 2 A combination of mutated genes (such as TP53, RB1, PTEN, and BRCA1/2), activated signaling pathways (such as PI3K/AKT/mTOR, WNT/βcatenin, and SRC), and other mechanisms are involved in the progression and evolution of PCa. 3 A majority of castration resistant prostate cancer (CRPC) patients are characterized with mutations and copy number alterations of genes related to the PI3K-Akt-mTOR signaling pathway, 4 which plays a crucial role in castration-resistance and CRPC development. 5 Clinical trials have been launched to investigate whether mTOR, pan/selective-PI3K and Akt inhibitors are novel targeted therapy agents of mCRPC. 6 Everolimus (RAD001), an mTOR inhibitor, can effectively limit tumor growth via inhibiting cell proliferation, angiogenesis, and tumor cell autophagy. 7 Food and Drug Administration (FDA) has approved the application of everolimus in advanced renal cell carcinoma, breast cancer, and other tumors. 8 However, few case reports have clarified its role in mCRPC. Herein, we report a case of everolimus treatment against mCRPC in a patient with AKT1 and AR mutations, to provide mCRPC patients with an alternative treatment option. Case Presentation A 64-year-old man with a 2-year history of dysuria presented increased dysuria accompanying pain in his left leg. He was diagnosed as PCa with a high Gleason score (5 plus 3) in April 2016, by undergoing pathological tissue biopsy. His serum prostate-specific antigen (PSA) was over 100 ng/mL, and emission computed tomography (ECT) imaging showed systemic bone metastasis. Throughout the treatment period, we monitored the PSA level to evaluate the therapeutic effect ( Figure 1). 
Prostatectomy was performed, followed by 4 months of flutamide (250 mg three times a day), continued goserelin (3.6 mg every 4 weeks until death), and 6 weeks of chemotherapy with docetaxel (75 mg/m2 every 3 weeks) combined with zoledronic acid therapy (4 mg every 4 weeks until month 36). At the end of month 4, the PSA level had declined from 125.20 ng/mL to 0.02 ng/mL, and the lung metastasis had also shrunk from involving the whole lung to a small lesion; however, testosterone remained at castrate level throughout the drug treatment period. From month 8 to month 14, the patient was treated with bicalutamide (50 mg once a day) as an antiandrogen drug, and the PSA level rose gradually from 0.63 ng/mL to 14.51 ng/mL. To counter the increasing PSA, the patient received a 9-month combination treatment of abiraterone (1 g once a day) and prednisone (5 mg twice a day), during which the PSA level finally increased to 26.64 ng/mL, despite a temporary decline to 15.00 ng/mL. Although we measured a transient decrease of PSA in this period, no appreciable improvement was observed in pulmonary pathology imaging. The patient was then treated with docetaxel (75 mg/m2) plus prednisone (5 mg twice a day) for 5 months, carboplatin (area under the curve: 5) for 3 months, and enzalutamide (160 mg once a day) for 3 months, sequentially. The metastases in the lungs and bone kept progressing, accompanied by an increasing PSA level which reached 45.53 ng/mL, 92.07 ng/mL, and 169.6 ng/mL, respectively, after completing each of these three treatments. Based on these poor responses, olaparib (300 mg twice a day) combined with abiraterone (1 g once a day) was given to the patient for 4 months; however, PSA peaked at 1,493.0 ng/mL. Thus, a tumor tissue biopsy of the right lung was sampled and examined by immunohistochemistry. Due to the positive staining results of both PSA and prostate specific membrane antigen (PSMA), we found this adenocarcinoma sample in the lung was derived from PCa. To clarify the genomic profile of the tumor tissue, the lung metastasis and plasma samples underwent NGS analysis (Nanjing Geneseeq Technology Inc., Nanjing, Jiangsu, China) for 425 cancer-relevant genes. (Figure 1 caption: the surveillance of PSA level since initial diagnosis; the red arrow represents the prostatectomy, and the red star represents death; the line segments and their lengths represent the therapeutic regimens and therapy times, respectively; the PSA level at death was unknown.) An AKT1 mutation was observed in both tissue and plasma samples, while an AR mutation was only detected in the tissue sample (Figure 2 and Supplementary Table 1). According to the sequencing result, oral administration of everolimus (10 mg once daily) was started and continued until death. After receiving this AKT1 mutation-targeted treatment for 3 months, the PSA level drastically dropped from 1,493.0 ng/mL, the highest PSA level, to 237.6 ng/mL. CT imaging also revealed that the size of the lung lesion was remarkably reduced, and the size of the bone metastasis remained stable (Figure 3). In the following 3 months, the PSA level remained stable, under 300 ng/mL, before leaping to 880.3 ng/mL. Unfortunately, the patient died in January 2020. Supplementary Table 2 summarizes the dosage and frequency of medication.

Discussion

ADT is the internationally recognized standard treatment for PCa. 9 After 18~24 months of ADT treatment, most PCa cases irreversibly develop into mCRPC, of which the median survival time is around 1~2 years. 10
Herein, we report the treatment process of an mCRPC patient with AKT1 and AR mutations and reveal that everolimus was a potential option for limiting mCRPC progression. This mCRPC patient successively experienced multiple approved therapies, including docetaxel, abiraterone, and enzalutamide. However, all of them yielded to unstoppable disease progression. Genomic profiling, using NGS, detected an AR deletion and an AKT1 (E17K) mutation in metastatic lung tumor tissue and plasma samples. An aberrant AR signaling pathway is widely considered as a dominant driver of PCa. 11 Increased expression of androgen-synthesizing enzymes continuously sustained elevated androgen levels and contributed to AR activation. 12 The detected AR deletion c.*612_2173 +34del included exons 5-7 and a part of exon 8 that code for the ligand binding domain interacting with dihydrotestosterone. This deletion was very likely to be associated with the AR splice variants (AR-Vs) AR-v567es or AR-V12. 13 These two types of AR-Vs are considered to be constitutively active and contribute to ADT resistance. 14 Additionally, AR-Vs could not interact with heat shock protein or combine with androgen response elements in the nucleus to regulate PSA expression. 15 According to previous studies, 16 we deduced that bicalutamide resistance was attributable to AR-Vs. Moreover, various clinical studies found that abiraterone and enzalutamide had no satisfactory long-term therapeutic effect in mCRPC patients with AR-Vs. 17 These findings could explain the high-level PSA of the patient, especially in the late stage of treatment. It is also notable that the PI3K-AKT-mTOR signaling pathway is altered in almost 100% of advanced-stage PCas. 18 Studies have revealed the abnormal activities of AKT and mTOR proteins in prostate cancer tissue, which implies the crucial role of PI3K-AKT-mTOR in the occurrence and development of PCa. 19,20 The crosstalk signaling between the AR and PI3K-AKT-mTOR pathways is hyperactive in mCRPC. Hence, inhibitors of the above two pathways would produce a promising therapeutic effect in mCRPC. Of note, AKT1 (E17K) mutations stimulate downstream signals of PI3K-AKT-mTOR that cause tumor cells to become transformed. 21 Also, AKT1 (E17K) mutant oncoproteins can selectively destroy rare, quiescent, chemotherapy-resistant, and tumor-promoting AKT1-low quiescent cancer cells (QCC). 22 Multiple investigations have reported remarkably longer survival times after everolimus treatment in patients carrying AKT1 (E17K) mutations. 23 Hence, we adopted everolimus to halt disease progression in this case with ADT treatment resistance. The effectiveness of everolimus was demonstrated by the drop in PSA level and the reduced size of the lung lesion. The overall survival (OS) of this patient was 43 months, including the 7-month progression-free survival (PFS) with everolimus. The patient's OS was longer than the median OS (around 20 months) of mCRPC patients with no effective treatments. 24,25 Such evidence indicates that everolimus could effectively alleviate mCRPC progression. Some Phase II clinical trials showed that mCRPC patients did not benefit from everolimus; 26,27 however, patients in these studies did not undergo genomic profiling and their AKT mutation status was unknown. Thus, our case suggests that either tumor tissue or plasma samples of mCRPC patients should undergo comprehensive genomic profiling before patients are treated with targeted therapy such as everolimus.
In summary, this case report adds to the evidence for the clinical application of everolimus against mCRPC. The sharply decreased PSA level was very likely to be associated with the administration of everolimus in this mCRPC patient harboring AR and AKT1 E17K mutations. He also achieved relatively good responses to everolimus, including reduced or stable tumor size of metastases, and a longer PFS of 7 months. Clinical trials enrolling patients with mutated PI3K-AKT-mTOR and AR signaling pathways should be launched to further investigate the efficacy of everolimus in mCRPC.

Data Sharing Statement

Additional data and materials related to the genetic tests, pathologic reports, treatment information, and images are available for review upon reasonable request.

Ethics Approval and Consent to Participate

This research was approved by the ethics committee of the Second Affiliated Hospital of Dalian Medical University. Written informed consent for publication of the clinical details and images was obtained from the patient; institutional approval was not required for this publication.
2021-12-12T17:36:52.355Z
2021-12-01T00:00:00.000
{ "year": 2021, "sha1": "a242fc7b54175833a54f90d90accde7699e5573e", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=76582", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c770d077f06148e3ae3d29f9bc9016cffad3d02d", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
203442049
pes2o/s2orc
v3-fos-license
A hope to trust. Educational leadership to support mature students’ inclusion in higher education: an experience from Surrey, England ABSTRACT This article discusses a Transition Programme to support the inclusion of mature students in Higher Education. The Transition Programme was designed and it is currently provided by a Higher Education institution in Surrey, South-East of England. An outcome of innovative educational leadership, the Transition Programme’ successfully solved the paradox of selection for admission to Higher Education programmes, in particular with regard to mature students. The English Higher Education system offers an interesting case for discussion, being caught between the principle of inclusiveness within a ‘widening participation’ agenda and the contrasting selective principle of ‘recruiting with integrity’. The article is motivated by two main aims. The first aim is to contextualize sociologically, within a discussion on the related concepts of hope, trust and risk, the motivations underpinning mature applicants’ choice to enter Higher Education. The second aim of the article is to argue for the capability of educational leadership to generate positive change supporting mature applicants’ trust in hope for a successful inclusion in Higher Education. half of the twentieth century. In the British HE system, honours degrees are the most common undergraduate courses. However, they are not the only way to achieve a higher education qualification. Students can also complete a FdA which lasts two years and combines academic study with workplace learning. A FdA is a combined academic and vocational qualification in higher education, equivalent to two thirds of an honours bachelor's degree, introduced by the government of the United Kingdom in September 2001. It is always possible for students to return to their studies at a later date, because there is no time limit on topping up a FdA with a third year of study, towards the achievement of a Bachelor's degree. In the HE institution where the TP has been implemented, concerns had been raised regarding the potential impact of academic skills-based selection on applicants whose unique profile could not be fully measured or appreciated. The problem that the TP was designed to solve is that mature students deciding to re-enter education were at risk of being deprived of hope and harmed in their trust in education. However, the importance of the TP goes far beyond the solution of organizational dilemmas concerning the management of recruitment. The TP offers applicants who do not fully meet 'traditional' academic standards as defined by the HE sector and professional bodies, extraprovision to advance their academic profile and skills before they access HE. Mature applicants' transferable skills, agentic thinking and capability are acknowledged to support the offer of a place, whilst the TP secures the academic skills necessary to succeed in HE. The goal, both academic and ethical, of the TP is to offer mature applicants the possibility to transition and undertake academic progression building on what they already know and can do. This is in stark contrast with an approach based on a deficit approach of educational assessment that focuses on what mature applicants do not do or do not know (for an assessment of the 'deficit approach' see Snyder, Ritschel, Rand, & Berg, 2006). The article is motivated by two main aims. 
The first aim is to contextualize sociologically the motivations underpinning mature applicants' choice to enter Higher Education within a discussion on risk. Risk is discussed in its relationships with possible negative implications of selective processes for mature applicants' trust in the Higher Education system. The culture of modernity has conceptualized risk as the link between decision-making in the present and possible negative outcomes of that decision-making in the future (Beck, 2013). Risk therefore entails responsibility and accountability, at least at the level of self-reflection, for the decision-maker. Negative outcomes of selective processes may not only harm the academic career of the applicants, but also their well-being, self-esteem and happiness. Connected to the risk of failure generated by selection, the pivotal function of trust and hope as a support for students' risk-taking behavior is discussed. The second aim of the article is to argue the capability of educational leadership to generate positive change supporting mature applicants' hope. The TP is discussed for this scope, exploring pedagogical innovation and creative leadership that successfully transformed educational selection into an opportunity for recognition and celebration of applicants' diverse identities, experience and skills. The decision to access HE is an important event that shapes complex narratives of the self in the professional, academic and personal dimensions; negative consequences of the decision can damage not only academic careers but also self-esteem, self-perceptions of individual value and trust in hope. The TP is considered an important pedagogical and organizational innovation and a good example of the impact that education leadership can have, well beyond academia. The TP aims to empower applicants by reducing the risk generated by selection while equipping them with academic skills that protect from subsequent failures in the academic journey. Literature review to build a conceptual framework Conceptual framework 1: a sociological discussion on hope and risk 'What happens to declined applicants?' 'What happens to their hopes and desire to progress in their lives and careers?' These were the questions that provoked one of the two authors of the article, who was at the time the newly-appointed manager of the Foundation Degree, to review rejected applications over a five years period. This was a decision and a process driven by concern for the harm that a rigid form of selection could do to the well-being of mature students aspiring to enter HE. There was a need to re-establish social connectiveness and support (Idan & Margalit, 2013). The outcomes of the analysis were used to design a 12-weeks program aimed to provide academic skills to applicants who would have been otherwise rejected, notwithstanding their motivation and professional experience. The TP was born; since the inauguration of the TP, applicants who would have been unable to progress academically due to 'not meeting a set of academic-based criteria' have been offered the opportunity to have their professional skills recognized and acknowledged, whilst enhanced by academic skills provided by the TP. This article argues that the TP positively engages with mature applicants' risk-taking attitude, their perception of themselves as a learner, as well as their self-esteem, resilience, trust in the system and that peculiar perspective on the future called 'hope'. 
Hope is a requisite for risky decision-making, in situations where the outcomes of decisions are most uncertain (Coleman, 2017). The decision to access HE without the support of a standard academic background is a good example of a positive attitude towards risk, because applicants add uncertainty to a life-world that otherwise would be connoted by the relative safety of well-known professional routines. Without hope and hopeful thinking, it is unlikely that mature applicants would choose to venture into the exploration of HE. Hope, as well as trust and faith, supports risky decisions and provides the motivation to challenge the familiar world of ordinary experiences (Kwong, 2019). Like trust, however, hope is susceptible to disappointment because, unlike faith, hope does not operate in counterfactual ways. Milona (2019) convincingly argues that hopes are composed of a desire and a belief that the object of the desire is possible, reinforcing the theoretical position that counterfactual thinking does not fall within the realm of hope. For instance, failure can invite us to revoke hope, maybe to retreat to more familiar worlds (Stockdale, 2017). Individuals fluctuate between low and high levels of hope (Snyder, Ilardi, & Cheavens, 2000), determined by blueprints of behavior formed during life experiences and realms of reality that influence how outcomes of risky decisions motivated by hope are reacted to (Ratcliffe, 2013). Individuals more inclined towards high hope are found to respond more positively to challenges, using their resilience to 'bounce back' faster in comparison to individuals inclined to have low hope (Martin, 2011). Higher levels of personal hope motivate positive outlooks and self-perception so that change, expectations and achievements are managed within a 'can-do, do-do and will-do' attitude. Hope is both individual and social; individual because it can support individual decisions in situations of uncertainty and risk, social because, like the decision to trust, hope is influenced by personal experiences, contexts and interaction with others (Luhmann, 2005). It is here argued that the TP empowers hope by offering an alternative to the restrictive principles of selection on access to the FdA. Instead of being rejected, applicants can access the FdA even if they do not fully meet academic requirements, and the TP will prepare them before the beginning of the main taught program, the FdA in Early Childhood Studies. By reducing the risk of failure on access to HE, the TP makes risky personal projects more credible and supports applicants' trust in hope. Trust in hope can also be conceptualized as a rationalization of hope: as the likelihood of the hoped-for outcome, in this case success in HE, increases, hope is transformed into a rational foundation for decision-making (Schleifer McCormick, 2017). For this reason, it is believed that the design, implementation and development of the TP are an example of progressive educational leadership that deserves to be discussed. Even the most successful academic journey starts with the shock of leaving familiar life-worlds (displacement, see Giddens, 1991) and inherited identitarian labels (Fass, 2004, 2007) to enter a largely unknown territory such as HE (Risquez, Moore, & Morley, 2007).
For mature applicants returning to education, the goal is to reach a destination from where it is possible to claim an empowered identity (Fleming, Loxley, Kenny, & Finigan, 2010), returning to the familiar world with the confidence and resources needed to reshape it (re-territorialization, see Deleuze & Guattari, 2004). Hope is here defined as a capability to project oneself into the future with some degree of indifference towards the risk of failure (Milona, 2019). Hope is therefore particularly necessary for mature applicants who leave safer, more familiar, life worlds (Taylor & Harris-Evans, 2018) to move into the unfamiliar, where their social networks are often unable to offer protection or support (Koranteng, Wiafe, & Kuada, 2018). However, hope can be harmed from the beginning of the journey, when the selection for accessing HE is not designed to value the skills and knowledges of mature applicants if they do not meet academic standards (Ramsey & Brown, 2018). From a perspective centered on the applicants' subjective experiences, the TP can be understood as a leadership-driven innovation that transforms selective access to HE from a social situation where hope is challenged to a different social situation, where hope is celebrated and reinforced through support offered to applicants to strengthen their academic profile. This latter point is probably the core message that the article aims to develop in the next sections, but not before some further conceptual clarifications.
Conceptual framework 2: a sociological examination of selective access to higher education provision
This article develops a discussion on the interplay between hope and organizational procedures, arguing that such relationships are not necessarily conflictual, as demonstrated by the TP. The TP is part of the professional experience of one of the authors as a tutor, manager and leader of a degree program in Early Childhood Studies (ECS), characterized by a sizeable population of mature applicants. Associated with the ECS undergraduate provision, a Foundation Degree Programme (FdA) is also offered, representing the program of choice of mature learners working within the Early Years sector. Applicants who undertake the FdA are often the first in their family to enter HE, and may not have possessed entry qualifications or opportunities to access HE as school leavers. The selection of applicants is based on the evaluation of a minimal set of academic skills and professional qualifications as well as experiences of working with young children, assessed through documentation presented by the candidate at interview and upon application. As for all selective processes in the education system, making selection to access the FdA an object of sociological analysis allows a rich variety of intellectual stimuli to emerge. The selection of FdA applicants is a regulated decision, enshrined in organizational procedures. Three sets of criteria define the general framework of the selection. First, the selection must meet the requirements of 'recruitment with integrity' (applicants should be recruited only if they meet the minimum requirements to undertake the program of study). Second, selection must at the same time secure the principle of 'widening participation' (HEFCE, 2014, 2015, 2017). Already at the level of the most general criteria, it is evident that widening participation and recruitment with integrity can coexist only in a paradoxical relationship.
As selection must be implemented, two contrasting forces clash. Widening participation entails adjusting, often lowering, the threshold for admission, particularly regarding academic skills (Evans, Rees, Taylor, & Wright, 2019). On the contrary, recruiting with integrity means that applicants who are not expected to possess the minimal skills or experiences to succeed should not be admitted onto the program of choice. The paradox is generated in the combination of an inclusive force, widening participation, with a selective force, recruitment with integrity. The necessity to solve such a paradox was one of the main drivers for the design of the TP as an instance of pedagogical and organizational innovation. The third set of criteria consists of the subject benchmark statement for Early Childhood Studies in England and Wales. The subject benchmark defines the expected Early Years professionals' competencies in terms of knowledge and skills (Early Childhood Graduate Practitioner Competencies, 2018; Quality Assurance Agency for Higher Education [QAA], 2014). The subject benchmark clarifies what professionals must know as well as what professionals must be able to apply in practice (Lumsden, 2018). The benchmark contributes to profiling the persona of an ideal Early Years professional. By doing so, the subject benchmark makes it possible to measure the similarity between the empirical applicant, in this case to the FdA, and the idealized professional, and from that to assess the credibility of the candidature. The subject benchmark therefore represents another selective factor which is added to the recruitment with integrity principle, limiting the scope of the widening participation approach. Due to contrasting forces and the somewhat contradictory nature of guidelines, the position of applicants to the FdA is most uncertain. Facing a binary decision is surely expected by the applicants, because the success/failure distinction is so intrinsic to modern society that any selection process with either outcome appears natural and unavoidable. The paradoxical relationship between widening participation on the one hand, and recruiting with integrity and subject benchmarks on the other hand, is a Gordian knot. Cutting the knot can be a practical and philosophical aim for educational leaders, a motivation for professional and personal engagement.
Conceptual framework 3: a sociological analysis of the function of trust in decision-making processes
The decision of entering HE necessitates hope, and hope necessitates a minimum of trust. Applicants to the FdA need to trust both the specific institution and the HE system. The applicants do not need to trust specific individuals, any more than a passenger needs to trust the engineers who design airplanes. The applicants need another form of trust, directed to the functional elements of the system, for instance, trust in teaching, trust in student services, trust in assessment procedures. No applicant can consider all the complexities of the unfamiliar world of HE, therefore knowledge cannot support decision-making; rather, applicants will engage with teaching, student services, assessment or any other social situation and institution based on their trust in the system. It is agreed with Luhmann (2005) that trust in the system is easier to acquire than trust in the person. The great majority of individuals cannot but trust expert systems in order to participate in basic social activities.
It is an attitude of trust that transcends personal decision-making (Faulkner, 2015). Trust commitments towards expertise are a necessity for inclusion in social processes that are often made more opaque by hyper-complex technology and organizational arrangements (Hawley, 2014). Trust in the system, differently from trust in the person, is a necessity for participation in most social contexts (Giddens, 1991). However, it is also true that the system of HE occupies a peculiar position. Differently from other systems, for instance healthcare but also compulsory education, trust in HE systems is not imposed by dependence, that is, by the fact that the subject has no credible alternative but to trust the system he or she must participate in (Baraldi & Corsi, 2017; Luhmann, 2005). Access into HE, and this is particularly true for mature students who might otherwise be living in a stable familiar world, still represents a risky decision (Trowler, 2015), a decision that is perceived as even riskier because it is not necessary (Beck, 2013; Luhmann, 1991). With support from D'Cruz's recent research (2018), it is here argued that trust must be thought of as domain-specific, and trust in HE is intrinsically fragile and always revocable (Boronski & Hassan, 2015). The application and admission process for entering the FdA program is therefore considered an interesting example not only for the development of more inclusive and refined selective processes, but also as a window onto an important social situation such as selective interviewing. Selection is the first point of access to the HE system for many applicants and the first point where trust in the system of HE can be lost or reinforced, making hope towards academic progression more credible. This statement deserves more discussion. The selective interview that applicants to the FdA face is the intersection between trust in the system and experience of participation in the system. Interactions during the interview and its outcome will impact on the applicant's well-being, but also on the system. Giddens' theory of trust (1991) is particularly helpful here, with the concept of faceless commitments. Faceless commitments are a product of modernity and depend on trust in the system. Giddens argues that any subject must trust systems, because it is simply impossible to return to a situation where familiarity, that is, full first-hand knowledge, extends to all experiential domains. HE is accessed through a system that is largely non-transparent for the applicants (as well as for the professionals working in it!). Participation in a selective interview demands trust based on faceless commitments. However, the interesting aspect of Giddens' theory is that it does not underestimate the connections between trust in the system and trust in the person. The selective interview is a situation where an applicant who is largely ignorant of most aspects of the system nevertheless encounters the system face to face. The selective interview is thus an empirical example of an intense interaction that either reinforces trust or reinforces skeptical attitudes towards systems. Giddens calls these situations access points, where the trusting relationship between the individual and the system becomes real or, using the vocabulary of Idealistic philosophy, where the relationship is actualized. Access points are situations where the individual evaluates the trustworthiness of the system (O'Neill, 2018).
The applicants' trust, or distrust, towards the HE system is not indifferent to lived experiences of the selective interview, because trust is necessarily relational and levels of trust depend on specific life experiences; Domenicucci and Holton describe the interactive expansion or retreat of trust as a two-place relation (Domenicucci & Holton, 2017). If the perspective of the system is taken instead of the applicants', access points are still crucial spaces, characterized by a tension between the system and lay skepticism that makes the system vulnerable. Entering the interview room, the candidate has a limited knowledge of the system's culture, expectations and procedures. This applies both to the HE system and to the specific HE institution as a local system, because each individual organization has specific sets of rules and procedures that differentiate it from the social environment surrounding it (Luhmann, 1982). The interviewer represents the expert system that the applicant must trust in exchange for his/her inclusion in it. During the interview, the interaction can either strengthen the applicant's trust or alternatively awaken suspicions and distrust. During selective interviews, emotional labor (Hochschild, 1983) is invested by the interviewee and the interviewer, who manage their emotions and feelings in accordance with organizational expectations. Hochschild offers us a sociological understanding of emotional labor, inviting us to consider i) how emotions are regulated and expressed by individuals and ii) how the organization relates to roles, responsibilities and interactions. A selective interview, for instance, differs from a medical interview. Whilst the patient is not assessed (at least not primarily assessed) regarding the quality of his/her 'patientness', the applicant to the FdA is indeed assessed as a future student and a potential qualified professional. The selective interview is the access point to HE and the HE institution where experiences are made that will impact on the applicant's trust in the institution, on the applicant's trust in the HE system and on the applicant's trust in her/his own possibility of participating in HE. The selective interview has implications for the credibility of trust in hope.
Conceptual framework 4: who is non-traditional? The semantics of mature students as they transition into higher education
Hope supports the risky decision to enter HE; trust is needed to participate in HE from the first contact with the point of access, the selective interview. However, neither hope nor trust can change organizations until they are enshrined in decisions. The second section of the article discusses how educational leadership can promote mature applicants' trust and hope. Like hope, selection leans to the future; like trust, selection looks back to the past and relies on expectations. The academic and professional biographies of the applicants represent the influence that the past holds on the selection process from the institutional point of view, and this is probably as important as the influence that the past holds on the motivations and decisions of the applicants. After all, the aim of the widening participation agenda, that is, to overcome 'dispositional barriers' formed during previous education and/or life experiences (Department of Employment and Learning, 2010), connects hope in the future with decisions made in the present and past experiences.
Moreover, the professionals at the access point to HE may be influenced by the past, for instance by inherited categorizations or established procedures and criteria that limit the possibility for decision-making. Such factors contribute towards expectations and projections on encountering the applicants during selective interviews (Snyder et al., 2000). The acknowledgment of the importance of selection as the access point to HE required one of the authors, in her role as manager and educational leader, to pay a great deal of attention to the cues, often to be found in language, for categorizations and stereotypes about the applicants that could affect their experience of the access point to the institution. An example of her attention to the empirical cues for categorizations that can obscure the unique, multi-dimensional person of each candidate consists in her challenge to the label of 'non-traditional' attached to applicants to the FdA. Words do not only mirror but constitute social reality; moreover, the meaning of words is always a two-sided coin: on one side, what the words signify; on the other side, what the same words do not signify. This applies to 'non-traditional'; the meaning of non-traditional cannot be separated from the reference to what 'non-traditional' is not, that is, 'not non-traditional', 'traditional' and, with a small semantic leap, 'normal'. Consequently, words hold meaning whilst at the same time projecting inherited cultural beliefs and dominant structures that are value-laden. Bakhtin's double-voiced discourse (1994) illuminates how words and dialog embed and reinforce power, conforming both consciously and unconsciously. For instance, the term 'non-traditional' is used to categorize a group of students entering HE mainly via vocational routes as opposed to traditional students progressing into HE from school education. The polarisation between traditional and non-traditional generates social semantics, such as a deficit model for those students/applicants who are non-traditional, as opposed to the 'normal' traditional students. A deficit model is the matrix to produce knowledge on non-traditional applicants, for instance, their need to catch up or make good, the need to reform themselves, the very idea that they are defined by their condition of need. As an alternative to this semantic field, another perspective is based on the idea that an applicant's decision to enter HE is an eloquent claim of empowerment, hope and trust. This perspective clearly invites new language to refer to the applicants, new words that do not entail a semantics of deficit, inadequacy, inferiority (Eraut, 1994). In her role as a manager and leader, the author chose the term 'mature' to replace 'non-traditional' in the planning and implementation of selection to enter the FdA.
Conceptual framework 5: mature students and the risk of moving to unfamiliar worlds
The decision to access HE entails risking some degree of security regarding professional identity, emotional safety and self-esteem (Mallman & Lee, 2016). The acknowledgment of the delicate position of the applicants has already been presented as the ethical drive for one of the authors, as an educational leader, to innovate the selective procedure for access to the FdA program. Security refers to the continuity of self-identity throughout time; security is produced by, and produces, what Luhmann (1988) defines as the world of familiarity, 'one's haven and space'.
A sense of undisputed reliability of persons and things is pivotal for security. However, the sense of reliability clearly depends on repetition and stability and it is therefore hindered during the movement towards, and immersion into, HE, which represents a highly unfamiliar world (Markle, 2015). Indeed, it is pertinent to wonder what motivates applicants to make the risky decision to trade the familiar known for the unknown. In previous sections, selection and trust have been discussed in a genuinely sociological fashion, that is, they have been explained with regard to their function, in the classic sense of their ability to solve a social problem (see Radcliffe-Brown, 1952; Parsons, 1957). However, some more conceptual work is needed. Selection describes how the process works, trust accounts for the basic conditions of applicants' participation in the HE system. What is missing is a concept to explain what underpins the decision to apply for the FdA. In other words, there is a need for a conceptual tool to explain applicants' motivation. Hope lends itself to that important function. Like the concept of selection and the concept of trust, hope is discussed from a sociological perspective. In a social world that is too complex for any observer to compute, in a social world where any choice is made from a position of partial ignorance with regard both to its presuppositions and its consequences (Luhmann, 2005), hope is a basic need. Where there is hope there are increased possibilities for experience and action, because hope constitutes an effective support for risky decision-making. Complexity and uncertainty create the need for hope (Snyder, Rand, & Sigmon, 2002). In many situations, of course, one can choose whether to hope, or not. But it is argued here that a complete absence of hope (hopelessness) would prevent participation in a complex society molded by risks (Beck, 2013). Hope can be used to describe an attitude or disposition (Dunn, 1994) towards risk-taking. Gilman, Dooley and Florell (2006) consider how hope dispositions can offer an insight into risk-taking attitudes. It is true that a gulf between the intended outcomes of purposeful action and the empirical reality of unavoidable uncertainty can be addressed through rational choices in the form of planning. This is the case for organizational behavior (Hernes, 2015). Planning is the reconstruction of an unknown future as a horizon of limited possibilities linked to choices, a horizon limited enough to be fully embraced by rational thinking. In other words, planning is a tool to create an artificial future-in-the-present, where uncertainty and risks can be reduced. However, planning does not provide motivation to run a risk in situations where other options would be available, such as remaining in a more familiar world. 'If we always do what we always did, we will always get what we always got' is an eloquent adage to motivate risky decision-making, but what about when what we got offers us relative safety? This is an important point for discussion. The applicants to the FdA could remain within the boundaries of a familiar world, where the need for risky decision-making is limited (Mitchell, Kensler, & Tschannen-Moran, 2018). Utilizing the tools offered by phenomenology, familiarity can be defined as a subject's general view of lived life. In the familiar world, which is full of 'facts' that support expectations, trust is not problematized, and the need for hope is greatly reduced.
However, the modern condition is one where the achievement of a totally familiar world is impossible. Participation in society forces risky decisions to be made. Familiarity is not built to reduce complexity; it is an alternative to complexity that appears to play a residual role in social contexts characterized by new possibilities, dependency, intransparency and lack of integration. In their movement from a familiar to an unfamiliar world, planning might have helped some applicants to rationalize their decision to apply to the FdA. However, whilst planning can support decision-making, it cannot motivate risky decision-making. This is the function of hope. The paralyzing uncertainty of an unfamiliar world where applicants are to sail uncharted waters cannot be reduced by planning, because planning is not credible within a largely unknown environment. Uncertainty is particularly high within HE, where complexity is reproduced at the level of the individual organizations (Latta, 2019). However, when uncertainty cannot be reduced by knowledge, it can still be reduced by hope. Hope allows some degree of indifference towards risks that cannot be erased because they do partially depend on the subject's choices. From another angle, hope can be understood as support to decision-making in situations of risk that generate further cycles of risk-taking. The applicants to the FdA are obviously aware that possible negative consequences of the decision to apply would not have arisen within their familiar world. Hope supports the acceptance of unnecessary risks in situations where the alternative offered by the security of the familiar world seems to make the risk almost unacceptable. Underpinning applicants' need for hope are self-produced risks forcing them to make new decisions that were not necessary in their somewhat smaller familiar world. This article conceptualizes hope within a modern society that is characterized by contingent structures and changing conditions (Luhmann, 2005). In the absence of hope, individuals' lives in a complex society would be led to, and led by, feelings of dissatisfaction and alienation. The absence of hope reduces the range of possibilities for rational action. Without hope, the subject does not have the attitude that enables him/her to take unnecessary, but potentially rewarding, risks. Generated through a robust theoretical discussion, this statement is considered an effective way to present the relationship between hope, the decision to enter HE and the position within the HE system of the applicants to the FdA, at once robust and fragile.
Promoting mature students' trust in hope towards inclusion in higher education: an experience from Surrey, South-East of England
Methodology
Building on interviews with applicants as well as on the analysis of reflective observations in personal journals compiled by students during their academic journey, the second section of the article develops a conceptual framework to enable the transfer of experiences and knowledge matured in the implementation of the TP to other similar situations. Data consist of 24 application interviews and 24 reflective journals. Interviews were audiotaped with permission granted by the applicants, as specified in the ethics section. Reflective journals represented, and still represent, a requirement of the FdA program for students to record and interact with their thinking, experiences and challenges during the learning activities and independent study.
Through reflective journals, professional and academic areas of daily life were used to reflect on, learn from and link to theory. Although reflective journals were discussed during classroom activities such as workshops and peer-reflections, a specific request for the use of the journals as sources of data for research was made to the students at the beginning of the TP, with a clear indication that denying permission or subsequently withdrawing would not have any impact on the student's learning journey and personal academic progression. A copy of the reflective journal of each participant was collected at the end of the TP. Participants in the research were adults, aged between 25 and 51 at the moment of data collection and outside of institutionalized education for at least 5 years. With the exception of one, all participants were female. This unequal distribution by gender reflected the nature of the Early Childhood workforce at the time of data collection, and still reflects it. Characteristically for FdA applicants, no participants had completed a HE program of studies at the moment of the interview. Due to the vocational nature of the FdA, all applicants, and participants in the research, were employed in the Early Years sector in registered nurseries, in school nurseries as teaching assistants, or as registered childminders. For the applicants, a FdA, and possibly a subsequent full degree, presented an opportunity for professional progression towards managerial positions, as well as towards teaching qualifications. Two non-probabilistic sampling methods have been used in the research. The first one was purposive sampling. The research discussed here is limited only to mature prospective students who applied to the FdA program in ECS offered by the HE institution where one of the authors was working as program manager. The nature of FdA programs contributed to the exclusion of younger individuals taking a traditional progression route from postsecondary education, as those subjects apply to an ordinary undergraduate program. The second sampling method, connected to the first, was convenience sampling. For this research, participants meeting the selection criteria of the purposive sampling were approached and asked for permission to use their application interview and their reflective journals as sources of data for possible future research; their inclusion in the research depended on their willingness to accept. The research was based on a methodological choice, that is, to consider applicants' and students' narratives as a pivotal resource to allow a phenomenological description of the semantics of participation in HE. Narratives are conceived as social constructions, in which the observed reality is interpreted and presented at once through a series of stories that express knowledge and constitute the context for the production of knowledge, including knowledge about the self. In line with previous research on the narrative construction of identities (see for instance Amadasi & Iervese, 2018), the production of narrative is approached as a form of agency towards the construction and negotiation of identities through personal stories (Bamberg & Georgakopoulou, 2008). In particular, two types of narratives appear to be tightly intertwined in the production and socialization of identities through stories.
1) narratives of personal life, which concern pivotal events that define personal biographies, underpinning 2) narratives of the self, concerning opinions, emotions and relationships (Somers, 1994). Narrative analysis is an extension of the interpretive approaches within the social sciences. Narratives lend themselves to a qualitative enquiry in order to capture the rich data within stories. Narrative analysis takes the story itself as the object of study; thus, the focus is on how individuals or groups make sense of events and actions in their lives through examining the stories they produce (Riessman, 1993). This approach to study is not new to qualitative sociology. Sociology has had a history of ethnographic study including the analysis of personal accounts. However, with ethnography it is the events described and not the stories created that are the object of investigation: language is viewed as a medium that reflects singular meanings. Under the narrative movement and criticisms of positivism, the question of textual objectivity has been challenged by social constructionism (Gergen, 1997), encouraging many to approach narratives as social constructions that are social in the sense that they are exchanged between people. Narratives constitute rather than represent reality. As such, life stories are a linguistic unit involved in social interactions and are therefore cultural products, in their content and form (Linde, 2001). Language is therefore seen as deeply constitutive of reality, not merely a device for establishing meaning. Stories do not reflect the world out there, but are constructed, rhetorical, and interpretive (Riessman, 1993), lending themselves to a phenomenological analysis. Linde's concept of life stories as cultural products and Riessman's interactive rhetoric make it possible to approach interviews as the product of a dialog co-constructed and continuously re-interpreted by the researcher and the participants. The narrative approach to the analysis of interviews and reflective journals applied in the research hereby presented is posited to have the ability to capture social representations 'in the making'. Narrative analysis is well suited to study subjectivity and identity largely because of the importance given to imagination and the human involvement in constructing a story (Rosenwald & Ochberg, 1992). This research was underpinned by a robust and sound ethical framework. The research was approved by the College's Ethics Committee, based on the provision of information sheets and informed consent forms to applicants to the FdA at the moment of their interview. Consent was sought relating to the collection and processing of data produced through application interviews and reflective journals and collected in written form. Participants involved in the research were fully informed about any possible future use of the data. Ethical practice prevented any opportunity for participants to be singled out as individuals. The informed consent contained an explicit request to agree to the use of video to disseminate the results of the research, specifying the level of dissemination. All research activities were undertaken ensuring there was no harm to participants, and data were stored securely adhering to the Data Protection Act 1998, which was the most up-to-date data protection regulation at the time of the research. However, upon examination, it is evident that the storing and protection of data would be fully compatible with the new EU General Data Protection Regulation, promulgated in 2018.
Research requires sensitivity regarding power and vulnerability that relates to social relationships; this research is based on data collected within an ethical framework that can be described as doing 'research with' participants, rather than 'research on'. All participants were offered the opportunity to ask about the implications of the use of data for possible future research. The consent to the use of application interviews and reflective journals as data was completely voluntary, and applicants were informed that refusal to allow their interview or reflective journal to be used as data was not going to be detrimental to the outcome of the application. Any potential breach of confidentiality and data protection in relation to research data was minimized using protected storage. The audio-recorded interviews and copies of the reflective journals, collected at the end of the academic year, were stored in a locked cabinet within the college's facilities, accessible only by staff. Raw data were accessible only by the researcher, and processed data do not include any reference to personal data, in order to prevent identification. All references to participants or third parties have been completely anonymized in the data presented as part of the discussion within this article or any other event of public dissemination. The discussion of data is organized in two sections. In the first section, the paradoxical coexistence of the principles of inclusiveness and recruitment with integrity will be discussed in its relationship with applicants' ascribed identity within the largely unknown environment of HE. Applicants' expectations and reflections on risk and hope will be discussed, also considering current literature on the position of mature learners in education. Transcripts from the application interviews will be chiefly used as sources of data. In the second section, students' experience of the TP will be analyzed based on the reflective journals they had compiled during the TP. In line with the pioneering work of Risquez et al. (2007), as well as in light of the methodological framework underpinning the research presented in this article, the analysis focused on the narrative construction of students' identity as they moved into HE.
Results and discussion 1: cutting the knot. Combining selection with inclusiveness
FdA programs were introduced in 2001 as a pillar of the 'widening participation' agenda of the Labour government. FdAs are equivalent to the first two years of an honours bachelor's degree; their purpose is to enable those working in a specific sector to progress academically, professionally and personally by systematizing, and exposing to critical reflection, their experience, skills and expertise. On completion of two years of full-time studies (always organized to allow employed people to attend the lectures and seminars, for example, by scheduling them one evening a week and one Saturday a month), students can choose to undertake a top-up year towards an honours degree. While government policies over the best part of two decades have recognized the need for more inclusive HE to support social mobility, the paradoxical nature of the selection processes has often failed to create the conditions for inclusion.
Cutting the Gordian knot at the intersection of inclusiveness and selection was the aim of the innovation discussed in this article, combining: 1) effective delivery with positive outcomes for student achievement and experience (according to the 'recruiting with integrity' principle; see also the UK Quality Code by the Standing Committee for Quality Assessment, 2018); and 2) inclusiveness to empower and celebrate applicants' hope and trust. The selection process is the point of access where applicants' hope encounters the reality of organizations, with a dramatic impact on their trust. How was the Gordian knot cut? The solution developed was to bypass the unsolvable dilemma between selection and inclusion by incorporating inclusion into selection. Incorporating inclusiveness into selection was underpinned by an understanding of the selection process not as a gatekeeper but as the first step in the learning journey: selection not as the nemesis of hope, but as its celebration. The combination of inclusiveness and selectivity has been achieved through the design and implementation of the 'Transition Programme/TP'. The TP was planned as a compulsory short course for applicants who did not fully meet the academic standards at the point of their enrolment. The first challenge for the TP was that the majority of FdA mature applicants had not undertaken previous academic study and were often without GCSEs in core subjects identified by HEFCE (2017), by the Children's Workforce Development Council (2010) and by the QAA (2014). The severity of the challenge was clear to the applicants as well. This is suggested by the following comments shared by applicants in their reflective journals. Reflective journals are a requirement of the FdA program for students to record and interact with their thinking, experiences and challenges. Professional and academic areas of daily life were used to reflect on, learn from and link to theory. The comments from the reflective journals also suggest the dynamic coexistence of hope (otherwise the applicant would have never moved towards HE) and negative expectations based on reflective narratives of past educational experiences.
• I never thought I would be able to undertake a degree
• No one in my family has ever been to college or university
• I don't know if I can do it, am I clever enough?
• What happens if I fail my first assignment will I get chucked out?
• I'm not very good at maths and never will be
• I'm not very good at writing or English
• What happens if I don't get onto the program, what else can I do? Is that
When asked why she wanted to access the FdA program, an applicant commented during the interview:
I never had an opportunity to continue studying at 16, there was never any other expectation other than getting a job. University was never on the landscape for my parents or me. In fact, even at school I never had any conversation about the possibility of going to university. Those children that did were high flyers, in all the top classes and sets. They had an air about them that we all knew meant they wouldn't be working at 16 like the rest of us . . . it was unsaid . . . or just expected
The comments and the excerpt from an interview indicate that not only the culture of HE, but also applicants' self-identities, were focused on deficit from the past rather than hope for the future. Penn (2005), Ben-Ari (1996), and Wolf (1990) refer to forms of acculturation to account for the power of negative expectations and their influence towards risk-avoiding behavior.
I want to go into teaching and see this as a route where I can continue earning money whilst I study. I like the idea learning with those in practice so I can learn from them to develop my thinking. I am not a great writer, I can say something clearly, present my ideas but writing it down into a structured construct is challenging for this has put me off applying, because I think I might not pass
What emerges from the excerpt is a battle between hope and the expectation of negative outcomes of a decision based on hope. The applicant is pointing to what she could not or had not, rather than what she has achieved in her profession. There is hope, there are aims and goals, but what is missing is the empowerment and confirmation of hope based on the celebration of the things that applicants are good at (Mezirow, 1991). Career progression is an important factor of motivation for mature students, particularly if the access to HE is integrated into a detailed and timed life-career plan (Wong, 2018). The importance of professional development for progression is clearly presented by applicants as a determinant of the risky decision to enter HE; this is explained by Smith (2018) as the pursuit of social and cultural capital in order to align self-identity and aspirations with the position in the professional settings.
I will learn a great deal by doing a degree to develop my skills much more . . . that is my inspiration . . . to learn more and be acknowledged financially and professionally for what I do and who I am.
However, mature students can be very sensitive to what they perceive as a mismatch between hope and the reality of their position vis-à-vis academia (Ramsey & Brown, 2018).
I have high personal expectations of myself although pressure from life at times has prevented me to reach my goals. Also, my high expectations put fear into me which prevents me from moving forward
Differently from many 'bridge programmes', the TP is not only, and not primarily, designed to provide academic skills to mature students to enable them to achieve their ambitions; rather, the TP challenges the narratives of the self constructed by applicants, valuing their existing knowledge and experience as part of their academic progression, developing teaching and learning activities on case-studies related to them. An applicant, when invited to present something positive about herself to capture her strengths to match selective application criteria, was particularly negative about her profile. As one of the authors asked in her role as admissions tutor: So, with all of those barriers you have self-assessed and identified, what influenced you to still apply for this programme? What inner strengths do you have inside you that took no notice of those barriers and got you to this stage of your progression? The applicant looked at her, unable to answer, and after one minute tears rolled from her eyes. The question was about hope, but the applicant could not translate her hope into confidence, finding it easier to focus on her challenges. Such an emotional response communicates that previous educational experiences have been negative; at the same time, the same response communicates that the applicant knows that there is more to her than the label she identifies herself with (Ashforth, 1995). Prensky (2001, 1) reminds us that 'our students have changed radically. Today's students are no longer the people our educational system was designed to teach'. Yet, the need to label and measure continues.
Not only narratives of the self that position students in a disadvantaged position in HE, but also expectations concerning the HE environment can disempower mature students, as suggested by Katartzi and Hayward (2019):
I have only realised after all this time that full time at uni is not Monday to Friday, 9 to 5. My idea of university was that I had to attend full time, attend class every day, and learn something from scratch. My perception of what a university student should be did not match who I am. I didn't think my profile or past experiences were good enough or 'right'. I didn't think I would fit in, belong or connect with those studying at Uni
As suggested by the excerpt, entering HE for mature students with a solid professional position implies the movement from an established position within a familiar lifeworld to an unfamiliar world; this generates an acute sensitivity to risk. Interestingly, the excerpt also displays an expectation concerning HE's disregard for the applicant's existing professional and vocational knowledge and skills. 'Learn something from scratch' suggests the idea that professional knowledge and experiences are not relevant in HE. As explained in the first section of the article, one of the TP's aims is to challenge such assumptions, valuing what mature applicants can bring into the FdA from the first contact between applicants and HE, the selective interview. It is believed that this is one of the innovative and unique aspects of the TP: combining the provision of academic skills, as many other programs do, with the recognition and celebration of the diverse knowledges that mature students bring with them. Academic skills are provided through workshops and exercises that are linked with the professional experiences of mature students, rather than prescriptive 'how to study' handbooks. The stories of colleagues who have undertaken HE studies can contribute to making the unfamiliar more familiar, reinforcing hope and motivation.
I want to learn more about what I do with children and why I do it. I really enjoy my job and I like impacting on the lives of young children although after undertaking a Diploma and starting work I have not studied further. I have some colleagues at work who are doing a degree and they have inspired me. They talk about what they do and I would like to do that. They moan about assignment deadlines but when they talk about what they are studying and doing, I like the ideas they bring back to the nursery and I think they are good ideas
However, not many applicants seemed to be included in professional or personal networks that offer examples of successful transition into HE. In most cases, applicants' narratives concerned an individual and often solitary struggle between prescribed identity and ascribed identity. The excerpt below, taken from an application interview, suggests that the gulf between the identities can be a source of motivation (Mallman & Lee, 2016), even if the decision to enter HE can generate some instability in the construction of self ('Now I don't know myself fully, really').
I want to look at myself differently, differently from how I am seen by my family, friends and particularly work colleagues. They all think they know me and know what I can do, but they really don't seem to know me fully. Now I don't know myself fully really. I want to do this, and I am making the opportunity for this to happen not.
I have found the right time in my life but also in my head
In some circumstances, applicants narrate their decision as a challenge not only to prescribed identities but also to unfair practices that they experience. HE can be an instrument to force management to acknowledge the voice of the applicant as a professional.
Recently, I planned parent engagement meetings and our nursery home visits and during the welcome meeting it was the teachers who were acknowledged by the Head during the welcome introduction and I was not even mentioned or considered for organisation which I feel was totally poor practice
An instrumental approach to HE is not at odds with issues of self-identity; unfairness in the workplace relates both to economic treatment and to the lack of acknowledgment of the applicant's professional status. HE can be an investment in view of financial returns (Tomlinson, 2017), but it is also the starting point of a narrative of rebellion against prescribed identities that do not match how applicants see themselves.
To be honest, I run the nursery in school and I am training staff on double my salary (or more) who haven't got a clue about early years and learning through play. This is probably a big motivation for me to apply. They have a degree and are getting paid so much more than me, for having less skills, experience or creative ideas of how to engage with children. I really don't mind training them up. I enjoy it although it irritates me that I have to do a degree to be acknowledged for what I do, for what I know. Those people managing me have no idea at times!
The commitment of the TP to value and celebrate the existing skills and knowledge of applicants is therefore not only addressed to challenging situations of low self-esteem and fear; the TP also aims to present HE as an environment where self-narratives of high professional stature are acknowledged. This is connected to research that invites us to conceptualize mature students' transition to HE utilizing Deleuze and Guattari's idea of deterritorialization: the established professional finds him or herself surrounded by an environment where knowledge and experiences developed in other environments seem inapplicable (Taylor & Harris-Evans, 2018). Based on this analysis, and although the de-territorialization effect cannot be prevented, as it relates to an ontological shift in the life-story of the applicant, the TP was designed to contain the negative effect of de-territorialization by presenting an image of the new territory, that is, HE as a territory where previously developed skills and knowledge still had value and could be used. The TP is an example of educational leadership transforming HE into a habitable territory for mature students, and an example of the potentiality of innovation at the level of pedagogical practices as previously discussed by Uslu and Arslan (2018). In the case of the TP, the pedagogical innovation that generated change at organizational level consists in the provision of academic skills based on discussion of, and reflection on, the professional experiences of students (see Figure 1).
Results and discussion 2: 'working with students, not probing applicants'.
The organization of the transition program
The architecture of the TP was based on the idea that instead of setting mature applicants up to fail by measuring their academic profile at the point of access through the selective interview, it is possible to tackle personal and academic barriers (considered to be intertwined, see section 2.2) during the FdA program, that is, from inside HE. McKenzie's (1990) reflective framework was utilized to transform the selective interviews into data to inform the development of the TP, which was designed as a program of transformational learning, where the development of knowledge and skills supports change in the student's self-identity (Mason, 2018). To implement a transformational learning framework, the TP devises three strategies: 1) to offer extended time to complete the two-year program; 2) to identify short courses or training opportunities to address skills deficits; 3) to strengthen the relationship between academic mentors and employers to offer work-based learning. Analysis of vocational and academic skills during recruitment enabled leadership to open a progression route constructed specifically to meet mature students' needs, without neglecting the institution's statutory duties. For over a decade, the TP has been offering applicants a bespoke program, transforming selection from a barrier into the first step inside HE, and therefore into a celebration of hope. In other words, the TP transformed potentially inadequate applicants to be excluded from HE into students to be supported from within HE. The evaluation of interview records from previous cohorts of applicants and the review of online level two tests that applicants are required to take before the interview takes place (in English HE, level two corresponds to the academic standards for successful completion of secondary education or equivalent) suggested to the leadership of the FdA that underpinning the lack of academic qualifications was the need for support regarding spelling, grammar, essay structure, ICT, online research, how to use library resources, speaking in front of peers, and the difference between descriptive and evaluative writing. The results of the evaluation of interview records and the review of online tests align with similar results produced in other contexts (Faulkner, Fitzmaurice, & Hannigan, 2016; Hay, Tan, & Whaites, 2010; Smith, 2018). The TP was developed as a modular and sequential provision; modular because the academic barriers are challenged by tutors and students through specialized modules; sequential because the TP is organized to allow the use of newly achieved academic skills as a tool to overcome further barriers. The sequential organization of the TP is influenced by Skemp's (1989) relational learning theory of purposeful deep-level learning to support the identified skills deficits. The delivery of the modules is largely guided by Fowler and Robbins' (2006) idea of mentor-mentee cooperative reflection, where perspectives and outcomes are challenged, guided and provoked to consider other possibilities. The choice of looking at the delivery as a social situation involving a mentor and a mentee rather than a teacher and a learner was quite innovative with respect to the culture of the institution; however, the idea of cooperative reflection entails something more: according to Fowler and Robbins, whilst the mentor's expertise has the responsibility to offer a framework of support, the mentee and the mentor learn from each other.
Cooperative reflection is therefore an alternative to the well-known scaffolding model, because the mentor is prepared to 'dance with the mentee', following the mentee's lead, for instance utilizing the mentee's reflection on personal and professional experiences as a foundation for learning. Learning becomes a co-written and always changing textbook (Schön, 1987). Within a cooperative reflection model, pedagogical leadership supports students' self-efficacy and self-determination; learning and teaching are directed to empowering students' capabilities, ultimately making their hope more credible as they progress into HE. The excerpt below, taken from a reflective journal entry that a student wrote at the end of the TP, illustrates its empowering effect, fulfilling the aim of working to enrich, not to erase, students' self-identities.
Constructing a development plan that focuses on things I want to get better at or things that I am not doing so well at is so powerful to make me much more aware of my values and belief system. The reflective self-auditing process provoked me to unpick layers that build up and contribute to outcomes, enabling me to not see everything as one outcome but rather there are many contributing challenges that make up that an outcome. To reflect on those things in my personal, professional and academic life it exhausting and invigorating, I am more proactive in my thinking things through to make changes rather than expecting thing. Plus my confidence is stronger now!
The TP represents an alternative to traditional bridge programs that operate within a 'filling the gap' framework. Rather than looking at what the students do not have, the TP looks at what the students have, and links academic development to the knowledges that students bring to the classroom: academic success is pursued through, and not despite, mature students' identities. The TP adapts and implements for and with mature students in HE the work of self-efficacy models with younger students in secondary education (Gannouni & Ramboarison-Lalao, 2018). The rationale of the TP and its ethical drive is to empower applicants at the point of access into HE, in order to create a situation where hope is associated with success and is therefore reinforced. The TP secures the development of the required academic skills in the course of the FdA by working with students after admission, as opposed to the established selective procedure that would ask applicants to demonstrate academic skills as a condition for admission. It is a program aimed at supporting mature students in building cultural capital (academic skills) and social capital (participation in HE), which are recognized by recent research as the main motivation for the risky decision to enter HE (Smith, 2018). The excerpt below is a reflective journal entry that suggests the multidimensional impact of the TP both on the construction of cultural capital (self-reflection) and on the construction of social capital (interacting with peers).
Talking with peers about my concern and lack of confidence when engaging with parents has enabled me to realise how other people also feel quite similar but in a variety of ways or in different circumstances. I thought it was just me who becomes so nervous. Somehow talking it through and using SWOTs to analyse my own barriers has boosted my confidence. I have now tried so many ways to try out new approaches and mind set.
I now state my point, argue if I don't agree and have a firmer understanding that often there isn't right or wrong but merely different opinions. The TP enabled me to break challenges down much more, linking theory to understand what makes me so nervous and what my fears were. This has been liberating, not only at work, but personally as well As suggested by the excerpt, building cultural and social capital is not only related to professional careers (Tomlinson, 2017) but also to the development of communication skills and possibly more active forms of participation in the public debate, outside the professional context. See also the reflective journal entry below: I didn't realise how many forms of communication there are. Rather than just talking, it's the emotional literacy, the language and power or words that are used pragmatically to control or limit. This topic has totally opened up so much opportunity, and so much insight The TP may also be approached from a perspective focused on its positive impact on social justice, which is one of the most discussed points in the debate around innovative educational leadership (DeMatthews, 2018). A broader concept of social justice, for instance, includes students' capability to renegotiate their position within inherited structures towards self-realization and the construction of more complex and dynamic identities (see also Mason, 2018). This is what the excerpt below suggests: Family said they would support me whilst I do a degree although in hindsight we didn't know how much things would change. I am not at home as much so others have to pick up cooking, cleaning, shopping etc which they didn't want to. This has caused a lot of challenges. And at work, at work I have to leave earlier one day a week and my manager is making it very difficult for me. Hence I have developed negotiation skills, argument and use of language to move things forward whilst being able to relate these points to psychology and sociological theory Working with students, not probing applicants, could describe the innovation brought by the TP. As a 'bridging course' between academic levels, the TP offers a solution to the paradoxical coexistence of selectivity and integrity in recruitment on the one hand, and inclusiveness and empowerment of mature students on the other hand. Hope can be damaged by negative outcomes of decisions based on it; however, hope can be reinforced when decisions based on it prove to be successful. Like trust, hope can be learnt, experience after experience, until it becomes a structure that orientates social behavior. Since its inception, funding for the TP has been provided by students' fees as well as by local stakeholders and boroughs aiming to upskill the workforce. Once the TP was designed, a first cohort of 18 students was contacted after their interviews, offering them a place in the FdA Early Years on condition of undertaking a 12-week course designed to support their academic skills. In a nutshell, this is the solution to the paradox of selection and inclusivity that innovative educational leadership brought to the fore: embedding inclusive practices (the TP) into selection (conditional admission). Twelve students who previously would not have been accepted onto the program agreed to undertake the TP. From the first year, the face-to-face delivery of the TP has been scheduled to meet the needs of students who are employed, running from 6 to 9 pm in the evening. 
The organization of each session (with a high degree of flexibility to accommodate the personal input from the student) is based on a first part focussed on induction to the use of computers and to library and college resources; a second part is devoted to reflective professional and personal skills. During the second part of the session, students undertake SWOT analysis to identify personal, professional and academic Strengths, Weaknesses, Opportunities and Threats. Academic skills are then introduced through reflection on activities and case studies, and dialog is fostered within a community of practice framework that promotes participation (Fleer, 2003). Both parts of the sessions are designed to help develop a learning community which, as suggested by UNESCO (2003), is considered vital to provide students with the skills to function effectively in a dynamic, information-rich and continuously changing environment. The TP is careful not to introduce an ideological contrast between academia and employment, between knowledge and skills. On the contrary, the TP looks at how students already analyze, evaluate, argue and synthesize within their personal or professional life, building academic skills from that. The aim of the TP is to enable students by translating already applied skills from life experiences and practice into academia. An example of this pedagogical strategy concerns the empowerment of students' reflective skills through the inclusion of others' perspectives, towards the development of dialogical intelligence: dedicated sessions where facilitation of dialog is implemented, presenting students with role-models within a participatory and collegial framework for reflection. An entry from a reflective journal highlights the wide-ranging impact of facilitated dialog: Conscious and unconscious theory along with my own thinking has probably been one of the biggest challenges for me on this course. Exploring that others do not see things in the same way as me was an eye opener. It actually made me angry, emotional and at times in disbelief that others thought so differently or could be so flexible. I smile looking back at how inflexible or green I was at the beginning and when I reread my journals I cringe a bit but smile and love how much I have grown. This learning impacts on my well-being as well as my practice and personal role as a mum and partner During the delivery of the TP, the teaching staff gather data and feedback from students, asking them to explain what challenges they experienced and felt could prevent progression. Anonymized questionnaires are distributed monthly to allow students to evaluate the TP in light of their needs. Combined with the reflective journals, evaluative questionnaires allow a continuing analysis of what students may want from the program, in line with student-centered pedagogies in HE (Katartzi & Hayward, 2019; Markle, 2015; Wong, 2018). 
Continued student feedback allows reflection on action to be undertaken (Altrichter & Posch, 2000; Dewey, 1966; Gannouni & Ramboarison-Lalao, 2018; Moon, 1999; Schön, 1987), enabling what Loughran and Berry (2005, 2) define as a 'developing pedagogy', that is, a pedagogy organised with a curricular focus based on explicitly modelling particular aspects of teaching so that we can unpack these aspects of teaching with our students through professional critiques of practice. One example of developing pedagogy for transformative learning was the inclusion of time-management skills in the TP, following students' feedback through reflective journals; for almost a decade now, time management has been a core component of the TP, with a positive impact on students beyond academia, as suggested by the excerpt below: Time management has been a challenge for me to ensure work, family and University commitments are met. Reflection and the use of SWOT analysis has enabled me to look at how I utilise my time (or not) and how 'time thieves' take my time or how I let them. Based on feedback and observations, translation between academic and professional life has become the main tool for teaching, proving to be extremely effective and supportive in establishing hope, realizing ambition and increasing students' recognition of what they were already able to do and would be able to build upon in the future. Conclusion: a paradox in higher education solved through innovative leadership In conclusion, and upon reflection, the authors would suggest that the article fulfilled its two main aims. The first aim was to contextualize sociologically the motivations underpinning mature applicants' choice to enter Higher Education within a discussion of hope, trust and risk, presented as three interrelated concepts. The article provided conceptual clarification to enable the reader to use hope as a tool to contextualize sociologically the motivations underpinning mature applicants' choice to access Higher Education by approaching that choice as a movement from the familiar world to a more complex world, characterized not by repetition of patterns and behaviors but by risky decisions. The second aim of the article was to argue the capability of educational leadership to generate positive change supporting mature applicants', and students', hope. This aim was achieved by presenting the Transition Programme, a project of pedagogical innovation designed to promote inclusiveness while securing recruitment with integrity into Higher Education. From an institutional perspective, the Transition Programme is designed to transform a selective process from a stressful and potentially hurtful clash between applicants' hope and institutional rules into a celebration of mature applicants' hope. However, and the authors believe most importantly, the Transition Programme has been, and still represents, an investment to preserve and celebrate applicants' trust in their own hope. The importance of this aspect cannot be overestimated because, linking back to the theoretical discussion in the first part of the article, hope is a necessary tool to support decision-making in complex, unfamiliar and therefore uncertain environments. Since its inception, the TP has been delivered to many cohorts of students. The most interesting piece of data refers to the observable correlation between less restrictive selection at the point of access and increased levels of retention and progression from the FdA to a full degree. 
An apparently impossible coexistence of inclusiveness for wider participation and selectivity entailed in recruitment with integrity was secured by the Transition Programme, protecting applicants' hope from the Moloch of bureaucratized selection. When applied in organized and resourced strategies, educational leadership can successfully implement a more inclusive and more empowering education. Disclosure statement No potential conflict of interest was reported by the authors. Notes on contributors Federico Farini is Senior Lecturer in Sociology at the University of Northampton and UK lead for the Horizon2020 Project 'Child-UP'. From 2015 to 2017 he worked as Senior Lecturer
2019-09-17T03:08:07.619Z
2019-09-09T00:00:00.000
{ "year": 2021, "sha1": "50e61bf36e039a241ae1bae57c799b2d4ce91369", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/13603124.2019.1657592?needAccess=true", "oa_status": "HYBRID", "pdf_src": "TaylorAndFrancis", "pdf_hash": "c207c98389098087f20342c1a0951e6a5ab56d6f", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Sociology" ] }
73471661
pes2o/s2orc
v3-fos-license
Indirect treatment comparisons including network meta-analysis: Lenvatinib plus everolimus for the second-line treatment of advanced/metastatic renal cell carcinoma Background In the absence of clinical trials providing direct efficacy results, this study compares different methods of indirect treatment comparison (ITC), and their respective impacts on efficacy estimates for lenvatinib (LEN) plus everolimus (EVE) combination therapy compared to other second-line treatments for advanced/metastatic renal cell carcinoma (a/mRCC). Methods Using EVE alone as the common comparator, the Bucher method for ITC compared LEN + EVE with cabozantinib (CAB), nivolumab (NIV), placebo (PBO) and axitinib (AXI). Hazard ratios (HR) for overall survival (OS) and progression-free survival (PFS) estimated the impact of applying three versions of the LEN+EVE trial data in separate ITCs. Last, to overcome exchangeability bias and potential violations to the proportional hazards assumption, a network meta-analysis using fractional polynomials was performed. Results Bucher ITCs demonstrated LEN + EVE superiority over EVE for PFS, indirect superiority to NIV, AXI, and PBO, and no difference to CAB. For OS, LEN + EVE was superior to EVE and indirectly superior to PBO, applying original HOPE 205 data. Using European Medicines Agency data, LEN + EVE was directly superior to EVE for OS. Fractional polynomial HRs for PFS and OS substantially overlapped with Bucher estimates, demonstrating LEN+EVE superiority over EVE, alone, NIV, and CAB. However, there were no statistically significant results as the credible intervals for HR crossed 1.0. Conclusions Comparing three Bucher ITCs, LEN + EVE demonstrated superior PFS when indirectly compared to NIV, AXI, and PBO, and mixed results for OS. While fractional polynomial modelling for PFS and OS failed to find statistically significant differences in LEN + EVE efficacy, the overall HR trends were comparable. Introduction In the United States, approximately 63,990 new cases will occur and 14,400 people will die from kidney and renal pelvis cancer in 2017 [1]. Renal cell carcinoma (RCC) is the most prevalent form of kidney cancer, diagnosed in approximately 90% of cases [2]. Treatment for advanced/metastatic RCC (a/mRCC) typically consists of single agents, however the combination of lenvatinib plus everolimus (LEN+EVE) demonstrated significant improvements in progression-free survival (PFS) compared to EVE as monotherapy among second-line a/mRCC patients (HOPE 205 trial, NCT01136733, ) [3]. Currently, there are no other direct, head-to-head clinical trials comparing combination LEN + EVE to active comparators. Therefore, indirect treatment comparison (ITC) may be useful for informing of LEN+EVE efficacy in the absence of clinical trials. The aim of this study was to compare overall survival (OS) and PFS efficacy outcomes among relevant second-line a/mRCC drug therapies: LEN+EVE to axitinib (AXI), cabozantinib (CAB), EVE, nivolumab (NIV), placebo (PBO) and by applying different methods of ITC analysis. Our objectives were: (1) Compare Bucher ITC's for three different versions of HOPE 205 data: original trial data [3], extended OS data (European Medicines Agency 2016) [4], and extended OS data with re-stratification of the models (FDA 2016) [5] (2) Compare these efficacy estimates to network meta-analysis (NMA) results using fractional polynomials, where the proportionality of hazards can vary. 
Materials and methods A systematic literature review was conducted to gather relevant data on second-line a/mRCC drug therapies as described in the Supporting Information. Data sources for LEN+EVE efficacy: Three versions of HOPE 205 trial data The original study reported hazard ratios (HR) for OS and PFS of LEN+EVE versus LEN and LEN+EVE versus EVE among participants with prior VEGF therapy. Additionally, more mature OS data were submitted to the European Medicines Agency (EMA) and the Food and Drug Administration (FDA), along with PFS outcomes reassessed by independent reviewers [3][4][5]. Furthermore, the FDA accepted the OS and PFS hazard models with different values for the stratification factors compared to the EMA submission and original trial analysis. Consequently, three sets of efficacy results comparing LEN+EVE to EVE are summarized in Table 1. 1. Original Motzer publication reporting post-hoc OS trial data (December 2014). Trial investigators reviewed MRI and CT scans for disease progression. PFS data ended in June 2014, while OS data were extended to December 2014. HRs with 95% confidence intervals (CIs) were estimated for PFS and OS using stratified Cox regression models and the Efron method for tied events. Patients were stratified by hemoglobin level and corrected serum calcium. Additional trial characteristics are listed in S5-S7 Tables, in the Supporting Information. 2. EMA [4] request for independent radiological review (IRR) and OS (July 2015). For the PFS data, the EMA requested independent reviewers blinded to treatment. For OS, data were extended to July 2015. Statistical methods remained the same, and the independent reviewer-led results for PFS were published in a follow-up Motzer et al article (2016) [6]. The updated OS results were included in EMA online product information [4]. 3. FDA re-analysis of EMA data for PFS (June 2014) and OS (July 2015). While the FDA accepted the same data as the EMA request, the FDA required a change in stratification factors based on the data recorded in the randomization system instead of using case report forms. After this amendment, the proportionality of hazards assumption was still considered satisfied. Proportionality of hazards assumption Due to the three different sets of OS efficacy results for the various submissions, it was decided to separately apply versions 1, 2, and 3 (Table 1) of the HOPE 205 trial data in three separate ITCs for comparison. To obtain comparative efficacy results without risk of bias from proportionality violations, an NMA applying parametric fractional polynomials was conducted. 
After reviewing the a/mRCC studies being compared, this NMA fractional polynomial technique was requested by the Evidence Review Group (ERG) of the National Institute for Health and Care Excellence (NICE). Based on previous findings from NICE assessment committees for CAB (GID-TA10075 [7]) and NIV (TA417 [8]), the ERG advised considering AXI as having similar efficacy to EVE. This network omitted the RECORD-1, TARGET, and AXIS trials, whose participants did not have prior anti-VEGF therapy, were of lower risk, and may have crossed over to investigational treatments within RECORD-1 and TARGET [9][10][11]. Thus, a more homogeneous network of trials with greater across-trial comparability would be analysed. Statistical analysis for the Bucher indirect treatment comparisons The ITCs were performed in Microsoft Excel 2010. Data and results were verified by two quality control reviewers. Efficacy outcomes from the SLR that entered into the ITCs included published HRs for PFS and OS, with two-sided 95% CIs on the natural log scale. Standard errors (SE) were calculated on the log scale from the difference between the log HR and the log of the 95% confidence limit, divided by 1.96. For trials reporting patient crossover (TARGET and RECORD-1), adjusted results available from publications entered the ITCs. ITCs used the Bucher method with EVE as the common comparator, and proportionality of hazards within trials was assumed [12]. Statistical analysis for the network meta-analyses with fractional polynomials First, survival data were digitally extracted from the published Kaplan-Meier curves for CHECKMATE-025 and METEOR using the UnGraph software package [13][14][15]. Where insufficient details were available on the published Kaplan-Meier curves (e.g., overlapping censor symbols, number censored not reported [CHECKMATE-025]), the method of Guyot et al (2012) was used [16]. The estimated survival functions apply a wide family of models including Weibull and Gompertz distributions in a method described by Jansen (2011) [17]. First-order and second-order polynomials using fixed effects estimated the treatment effect with multiple parameters using the Markov Chain Monte Carlo (MCMC) method in WinBUGS [18]. Two chains were run for 50,000 iterations and discarded as "burn-in," and then the model was run for a further 50,000 iterations for inference. Non-informative priors were used, and convergence was confirmed with diagnostic plots and the Gelman-Rubin statistic. The powers for the fractional polynomials were chosen from the set: -2, -1, -0.5, 0, 0.5, 1, and 2. The Deviance Information Criterion (DIC) was used to compare the goodness of fit. Systematic literature review Final selection of included and excluded studies is described in the Supporting Information. To enable indirect comparison of AXI with the HOPE 205 trial, a multi-step ITC was designed, connected by adding the TARGET trial (SOR vs PBO; Escudier 2009) [9]. This resulted in six studies for ITC, as listed in Table 2. Patient population for Bucher indirect treatment comparisons Across the selected trials, patient characteristics (median age, gender, history of prior nephrectomy) were considered similar enough for ITC. As all patients were considered equally likely to be given any treatment in the network, adherence to transitivity was considered sufficient. However, some differences still prevailed. The average patient in the HOPE 205 [15] and AXIS (AXI vs SOR; Motzer 2013) [11] trials had greater disease severity as measured by Eastern Cooperative Oncology Group performance status, and a greater proportion of patients in the other trials (CHECKMATE-025 [14] and RECORD-1 [10]) had received more than one prior anti-VEGF therapy. 
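The adjusted indirect comparison described above follows the standard Bucher construction. A minimal sketch of that calculation is given below (in Python rather than the Excel workbook the authors used), assuming only that each trial reports a hazard ratio against the common comparator with a 95% CI; the input numbers are purely illustrative and are not the trial estimates.

```python
import math

def bucher_itc(hr_a_vs_c, ci_a_vs_c, hr_b_vs_c, ci_b_vs_c, z=1.96):
    """Indirect hazard ratio of A vs B through the common comparator C (Bucher method)."""
    log_a, log_b = math.log(hr_a_vs_c), math.log(hr_b_vs_c)
    # Standard errors on the log scale, recovered from the reported 95% CI widths.
    se_a = (math.log(ci_a_vs_c[1]) - math.log(ci_a_vs_c[0])) / (2 * z)
    se_b = (math.log(ci_b_vs_c[1]) - math.log(ci_b_vs_c[0])) / (2 * z)
    log_ab = log_a - log_b                    # indirect log HR of A vs B
    se_ab = math.sqrt(se_a ** 2 + se_b ** 2)  # variances add for independent trials
    return (math.exp(log_ab),
            math.exp(log_ab - z * se_ab),
            math.exp(log_ab + z * se_ab))

# Illustrative inputs only: A = combination vs common comparator from one trial,
# B = active comparator vs common comparator from another trial.
hr, lo, hi = bucher_itc(0.45, (0.27, 0.75), 0.85, (0.70, 1.03))
print(f"indirect HR A vs B = {hr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```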
The SOR trials (TARGET, AXIS) did not require failure of prior anti-VEGF therapy. All patients in TARGET and approximately 30% of patients in AXIS had no prior anti-VEGF therapies. Three indirect treatment comparisons using the Bucher method progression-free survival The HOPE 205, CHECKMATE-025 and METEOR trials used RECIST 1.1 criteria to assess response, and RECORD-1, TARGET and AXIS used RECIST 1.0. Indirect estimates of HRs for PFS of LEN + EVE versus other treatments are presented in Table 3 below. Consistency across trials was assessed by visually examining the comparable median PFS in patients treated with EVE (Supporting Information S8 Table). However, there was a lack of direct evidence comparing LEN+EVE to either CAB, AXI, or SOR, which limited the ability to statistically report consistency. Graphically, this limitation is evident from the networks not containing any closed loops. Median PFS was higher in HOPE 205 than in the other three EVE involved studies even though patients in HOPE 205 were of higher risk and worse performance status. Across trials, the median PFS as well as the overall response rate (Supporting Information S8 and S10 Tables, respectively) did not vary greatly by method of radiologic review (INV versus IRR). When available, IRR-derived results entered the ITCs. For all versions of the HOPE 205 trial results, LEN + EVE was found to be superior to EVE alone, and indirectly superior to both NIV and PBO (Table 3). There were marginal differences in PFS between LEN + EVE and CAB. LEN + EVE was shown to be superior to AXI, though potential effect modification of none versus one prior VEGF-therapy may have biased results. However, the AXIS study did not report an estimate of the interaction term. Overall survival Indirect estimates of OS HRs comparing LEN+EVE therapy versus other treatments, after adjustment for patient cross-over, are presented in Table 4. No statistically significant differences were observed for LEN+EVE versus NIV, LEN+EVE versus CAB, or LEN+EVE versus AXI. Compared to PBO, LEN+EVE was significantly superior applying the results from the Motzer publication (Dataset Version "1") and ITT results for the placebo controlled trials RECORD-1 and TARGET (Supporting Information S11 Table). LEN+EVE was superior to EVE (based on the HOPE 205 trial and EMA's mature OS dataset). Of note, more mature OS data did not result in improved LEN +EVE efficacy estimates. Furthermore, as with the analysis of PFS, the multi-step indirect comparison of LEN+EVE to AXI was in potential violation of the exchangeability assumption based on prior anti-VEGF status. OS results as reported by the individual trials are listed in Supporting Information S9 Table. Results from the network meta-analysis applying fractional polynomial survival curves From the digital extraction of the published Kaplan-Meier curves, the proportional hazards assumption was found to be violated for PFS in CHECKMATE-025 and METEOR studies. The test for proportional hazards for PFS was not statistically significant for HOPE 205. However, the test was underpowered due to the sample size, and the diagnostic plots were similar to the other studies in violation. The log-cumulative curves suggest a change in hazards around seven weeks (~exp(2) = 7), which is likely to be due to interval censoring; the first protocol specified assessment of response was at eight weeks in all trials. The proportional hazard assumptions held for OS within the HOPE 205 and METEOR trials, but not for CHECK-MATE-025. 
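Because the fractional polynomial NMA lets the log-hazard difference vary with time, the hazard ratios reported in the next section are curves rather than single numbers. The sketch below shows how such a time-varying HR is evaluated from first- or second-order fractional polynomial coefficients, following the general form of the Jansen (2011) approach; the coefficient values are invented for illustration and are not the fitted posterior estimates from this analysis.

```python
import numpy as np

def fp2_log_hr(t, d0, d1, d2=0.0, p1=0, p2=None):
    """Time-varying log hazard ratio under a fractional polynomial model.

    Powers are taken from {-2, -1, -0.5, 0, 0.5, 1, 2}; a power of 0 denotes ln(t),
    and a repeated power p contributes an extra t**p * ln(t) term (second-order model).
    """
    b1 = np.log(t) if p1 == 0 else t ** p1
    if p2 is None:              # first-order model
        return d0 + d1 * b1
    if p2 == p1:                # repeated power
        b2 = b1 * np.log(t)
    else:
        b2 = np.log(t) if p2 == 0 else t ** p2
    return d0 + d1 * b1 + d2 * b2

months = np.linspace(0.5, 24, 200)
# Coefficients below are hypothetical, chosen only to show the shape of the output.
hr_pfs = np.exp(fp2_log_hr(months, d0=-0.2, d1=0.4, d2=-0.1, p1=-2, p2=-2))
hr_os = np.exp(fp2_log_hr(months, d0=-0.1, d1=0.3, p1=-1))
```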
With the assumption of similar efficacy between AXI and EVE monotherapy to increase network certainty, the final NMA included three trials for four treatment comparisons (Fig 2). INV assessment of PFS was available for CHECKMATE-025 (NIV vs EVE) [3], and IRR review was available for METEOR (CAB vs EVE) [14]. To avoid informative censoring of progressed patients from the HOPE 205 radiologic review, the investigator assessment of PFS was applied (Dataset Version "1"). For OS, the mature July 2015 data set (Dataset Version "2") was applied. A second-order fractional polynomial (P1 = -2, P2 = -2) provided the best fit for PFS (DIC = 777.2). A first-order fractional polynomial (P1 = -1) provided the best fit for OS (DIC = 640.23); however, visual inspection of the Kaplan-Meier overlay demonstrated an underestimated survival for NIV. The hazard ratios over time (x-axis) for PFS resulting from this model are presented in Fig 3 and show that LEN + EVE is superior (HR < 1 on the vertical axis) to EVE monotherapy, CAB and NIV from about two months. However, the 95% credible intervals (dotted lines) cross 1, indicating these differences are not statistically significant. While the fixed-effect models for PFS fit the Kaplan-Meier data well, random-effects models were not explored due to expected instability arising from the small network. The hazard ratios over time (x-axis) for OS (Fig 4) show that LEN + EVE is numerically superior to EVE monotherapy from around two months, and to CAB and NIV from approximately eight months. However, similar to the PFS comparisons, the 95% credible intervals crossed 1, indicating these differences are not statistically significant. While first-order polynomials assume a monotonic association of treatment and effect, second-order polynomials were explored but did not result in better fitting models (as indicated by DIC). The HOPE 205 trial (32 events among 51 patients) was smaller than CHECKMATE-025 and METEOR (215 and 178 events for NIV and CAB, respectively). The second best fitting model (P1 = -2, P2 = 0) did not provide a good fit for LEN + EVE. While alternate second-order models with higher powers (P1 ≥ -1) provided better fit for LEN + EVE, there was not a better overall fit. Discussion A summary of the three Bucher ITCs compared the efficacy of LEN + EVE to other second-line a/mRCC therapies when applying three different data sets derived from the HOPE 205 trial. Furthermore, results were compared to a Bayesian NMA relaxing the proportional hazards assumption. Based on the three Bucher ITCs, for PFS, the LEN + EVE combination was directly superior to EVE, and indirectly superior to NIV, AXI, and PBO. For OS, there were no statistically significant differences between LEN + EVE versus NIV, CAB, or AXI, and mixed results for the comparisons with EVE and with PBO. The fractional polynomial NMA resulted in comparable HR estimates, with LEN+EVE superiority over EVE, CAB, and NIV from two months for PFS and two (EVE) to eight (CAB) months for OS. However, with the added model parameters for time, the comparisons were not statistically significant. Impact of data sets on estimates Three separate data sets emerged from the original July 2014 HOPE 205 (Motzer 2015) study as a result of requests from the FDA and EMA. The current analysis found ITC estimates to be marginally impacted by which version of the HOPE 205 trial data were applied. The post-hoc IRR estimates requested by the FDA resulted in higher hazard ratios compared to the EMA INV assessments (EMA INV results not presented) [5]. 
The IRR-derived estimates were, however, found to be complementary to the initial INV-derived estimates from the HOPE 205 original data. Likewise, a meta-analysis by Amit et al. (2011) [19] and an FDA retrospective study in 2012 on INV versus IRR for PFS estimates reported a high degree of correlation between the two methods. Although there may be patient-level discrepancies between the two methods, regulatory authorities based their approvals of combination LEN+EVE on the overall population trends. The EMA calculation of PFS and OS based on stratified data from case report forms is consistent with the HOPE 205 protocol (HOPE trial; Eisai SAP E7080-G000-205). Uniquely, the FDA used stratification data recorded within the electronic randomization system. However, the comparability between electronic and paper administration for PROs was demonstrated in a 2015 meta-analysis by Muehlhausen et al [20]. While FDA-derived results had lower PFS point estimates and higher OS point estimates than the original HOPE 205 results and EMA-derived results, the statistical trend was overall consistent across the three sets of results. With the fractional polynomial method, HR trends reflected the Bucher-derived results. However, relaxing the proportional hazards assumption allowed the HR to vary over time, and the increase in model parameters contributed to a decrease in power. Furthermore, with a small number of events in the HOPE 205 trial compared to the rest of the trials in the reduced network, there may have been insufficient data to robustly fit one family of curves across the four treatments. Future analyses could extend the fractional polynomial modelling to permit a different family for each treatment. Limitations The validity of any ITC is dependent on the exchangeability of patient baseline characteristics across the trials [21]. While HOPE 205 was an open-label, Phase 2 study with a smaller sample size, the patient populations were comparable to the CHECKMATE-025 and METEOR trials, with all patients having previously failed VEGF-therapy or SUN. Within the broader networks used for the Bucher ITCs, the RECORD-1, AXIS, and TARGET studies were conducted in earlier time periods, with different patient populations, prior treatment failure history, and/or trial design features (crossover). For instance, the TARGET study found that additional prior therapy use was associated with worse OS and PFS. Therefore, the assumption of similar distributions of patient baseline characteristics (i.e., potential effect modifiers) required to produce robust ITC estimates may be violated, potentially limiting the relevance of the final results [22]. Furthermore, confounding effects from subsequent therapies (Supporting Information S7 Table) and continuing investigational treatment after progression may have increased within-trial uncertainty around the result estimates. On the other hand, the fractional polynomial method omitted the TARGET and AXIS trials from the network, on the basis that AXI and EVE had comparable efficacy. Consequently, sources of bias from crossover and patient population differences were reduced. Furthermore, this smaller network consisted of all direct comparisons. Thus, Bucher-derived point estimates from this smaller set of trials would be comparable to the relevant results presented from the larger network. Consequently, comparison between Bucher and fractional polynomial results was appropriate. 
Conclusions As evidence for the violation of the proportionality assumption was not strong for OS, the Bucher ITCs and Bayesian fractional polynomial HR trends were similar when comparing LEN + EVE to other therapies in second-line a/mRCC. However, sources of potential biases and increased standard errors for both indirect methodological approaches ultimately contributed to wider confidence and credible intervals. For PFS, Bucher analysis found HRs demonstrating indirect superiority of LEN + EVE over AXI, NIV, and PBO for all versions of HOPE 205 trial data. For both methods, conclusions concerning LEN + EVE superiority for OS were challenged by the small trial size and number of events. Consequently, the fractional polynomial approach found numeric but not statistical significance in comparative effects. The nature of fractional polynomial modelling with added parameters for time-varying HRs produced substantially wider credible intervals; therefore, assessment at each timepoint is also important to consider. S9 Table. Overall survival as reported in the individual trials. CI, confidence interval; HR, hazard ratio; ITC, indirect treatment comparison; NR, not reported; OS, overall survival; RPSFT, rank preserving structural failure time; VEGF, vascular endothelial growth factor; vs, versus. a 98.5% confidence interval. (DOCX) S10 Table. Overall response rate as reported in the individual trials. CI, confidence interval; n/N, number with event/number in efficacy population; NR, not reported; ORR, overall response rate; VEGF, vascular endothelial growth factor. (DOCX) S11 Table. Overall survival applying ITT results from TARGET and RECORD-1. * Indicates significance at a 5% significance level. CI, confidence interval; LEN, lenvatinib; EVE, everolimus; EMA, European Medicines Agency; FDA, Food and Drug Administration.
2019-03-11T17:24:35.861Z
2019-03-05T00:00:00.000
{ "year": 2019, "sha1": "206f9c8a08fba2c4afb3f6bead5620cdc97a00f1", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0212899&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "206f9c8a08fba2c4afb3f6bead5620cdc97a00f1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119077333
pes2o/s2orc
v3-fos-license
The Measure in Euclidean Quantum Gravity In this article a description is given of the measure in the Euclidean path-integral in quantum gravity, and recent results using the Faddeev-Popov method of gauge fixing. The results suggest that the effective action is finite and positive. II. THE GRAVITY PATH-INTEGRAL In exact analogy to Feynman's initial formulation, a path-integral for a space-time to evolve from the boundary metric h^1_{μν} defined on an initial surface Σ_1 to another metric h^2_{μν} defined on a final surface Σ_2 is given by the integral over metrics weighted by exp(-S), where Einstein's action, defined using the determinant g and the scalar Ricci curvature R of the metrics, appears in the exponent of the integrand. However, despite the simplicity of its definition, the path-integral has proved very difficult to evaluate. (i) Analytic continuation in time does not make sense, as a 'time' coordinate is not uniquely identified in a diffeomorphism invariant theory. (ii) The gravitational action is non-quadratic and even non-polynomial due to the √g term, whereas we can only compute integrals which are essentially Gaussian. (iii) Perturbation about a classical metric with well defined time and metric does not make sense, as Feynman diagrams diverge at each order in the perturbation theory and the theory is non-renormalisable. (iv) If one formulates a Euclidean path-integral ab initio, one finds that the gravitational action is unbounded from below, and the exponent or the probability for certain configurations diverges, making the 'integral' including those configurations meaningless. (v) The gravitational action is diffeomorphism invariant, and hence geometries with the same action are overcounted in the measure. Several physicists have been working to solve these issues, or to formulate quantum gravity using new approaches. In this article I shall compute the scalar sector of the Faddeev-Popov measure and in the process examine the resolution of the unboundedness of the Euclidean action in the gravitational path-integral. As the 'Wick rotation' from Lorentzian to Euclidean metrics is difficult to implement, one can work directly with Euclidean (positive definite) metrics and compute the Euclidean path-integral. A. Faddeev-Popov Measure The Euclidean gravitational action as formulated by Einstein, S = -(1/16πG) ∫ √g R, is invariant under transformations of the metric g'_{μν} = (∂x^λ/∂x'^μ)(∂x^ρ/∂x'^ν) g_{λρ}, the general coordinate transformations or diffeomorphisms of the manifold. In an infinitesimal transformation, implemented as x^μ → x^μ + ξ^μ(x), the ξ^μ are the vector fields, the generators of the diffeomorphisms. Thus configurations which are related by diffeomorphisms leave the action invariant, and represent redundant or unphysical degrees of freedom. The measure however counts them as distinct geometries, and one has to factor out the diffeomorphism group. The path-integral written in terms of physical degrees of freedom is thus Z = ∫ [Dg_{μν}/Diff(M)] exp(-S[g]), where Diff(M) is the diffeomorphism group of the underlying manifold M. Apart from the diffeomorphism group, there also exists another set of transformations of the metric, the 'conformal transformations'. These of course leave neither the metric nor the action invariant (except for special cases where conformal Killing orbits leave the metric invariant). In [1] the Einstein action was shown to be unbounded from below due to the conformal mode of the metric. We will discuss this in detail in this article. 
To isolate the physical measure, it is always useful to talk about DeWitt superspace, which is a manifold comprising of points which are metrics. Taking one particular slice of this manifold identifies a unique set of metrics, and rest of the superspace can be obtained by using the diffeomorphisms and/or conformal orbits. Thus, in a infinitesimal neighborhood of the slice, one can use the cotangent space element δg µν = h µν , and write a coordinate The h ⊥ represents the traceless sector of the theory and lies on the gauge slice (the conformal mode has been factored out from this mode of the metric and written isolated as the third term), a diffeomorphism generated by ξ µ and a conformal transformation generated by φ. This is further written in a more useful way where clearly, the trace of the diffeomorphisms is also isolated as it contributes to the conformal sector of the theory and the operator Lξ maps vectors to traceless two tensors, and adds to h ⊥ µν . The implementation of this coordinate transformation in the measure in the cotangent space of metrics leads to the identification of a Jacobian, which is the Faddeev-Popov determinant. The same determinant appears in the base space of the metrics, and using this the measure in the path-integral is written in terms of the physical metrics and the Jacobian, and the diffeomorphism group is factored out. The path integral is thus: The Faddeev-Popov determinant can be calculated formally as a functional determinant, however, evaluating the determinant as a function of the physical metric is a task which requires regularisation and appropriate use of new techniques. A non-perturbative evaluation of the determinant exists for the scalar sector of the determinant [3]. We will discuss some subtleties in that calculation in the next few sections. B. The unboundedness of the Euclidean action The Euclidean gravitational action or the action for positive definite metric comprises of If one rewrites the action in terms of the conformal decomposition of the metric (3) then the gravity action reduces to The kinetic term of the conformal mode is positive definite, and hence the Euclidean action can assume as negative values as possible for a rapidly varying conformal factor. This pathology can be assumed to be a signature of presence of redundant degrees of freedom, but in case of Einstein gravity the conformal transformation is not a symmetry of the action and thus the 'redundancy' which this represents is not obvious. However, in this paper we investigate the path-integral written in terms of physical variables and examine the case if the negative infinity is indeed cancelled. Does the conformal mode however uniquely isolate any divergence in the Euclidean action? The √ḡ e 2φR term in the action has the 'kinetic terms' of the conformally equivalence class of metrics. For compact manifolds we can use the Yamabe Conjecture to reduce thē R to a constant, and hence this term cannot make the action unbounded by itself, and thus we ignore the contribution. In case of arbitrary manifolds with boundaries, like the case of flat space time, a slightly more careful treatment is required. We take the form of the Ricci scalar given in terms of the derivatives of the metric In the above the second term is potentially problematic, to analyse the terms in a algebraic way, we rewrite the second term in terms of the derivative of the determinant of the metric g. 
This makes the action take the form: Thus a potential divergent contribution to the action might come from the second term of the above for rapidly varyingḡ (the other terms in the above donot contribute with a unique sign, and thus in the action integral negative contributions will cancel positive contributions instead of summing up to give unbounded behavior). In case of non-compact geometries we might take theḡ µν to be unimodular and then the divergence of the Euclidean action will be concentrated in the conformal mode term. In case of arbitraryḡ µν , the term has the same sign as the conformal mode term, and thus as discussed in this article, any remnant divergence from theḡ µν can be cancelled by the contribution from the measure. However, a more careful analysis of the divergent terms in the Euclidean action are required. For the purposes of this article we will see that the solution given cures problems due to any divergent Euclidean action. C. Gauge Fixing The Faddeev-Popov determinant can be obtained using a Gaussian Normalisation condition on the measure [2]. where G µνρτ is the DeWitt supermetric obtained in terms of the background metric as This Gaussian normalisation condition is slightly different from the condition in [2], where they have a half in the exponent, and thus it makes a difference to the normalization constant numerics in the Jacobian. Interchanging the coordinates of the tangent space leads to the determination of a Jacobian (a function of the metric), which is then determined as, The scalar product in the exponent breaks up into The integration over each of the modes gives a determinant. In case of the tensor determinant, it has to be evaluated using the projection to the gauge fixed modes. The gauge fixing is obtained by putting a gauge condition on the h µν . This in case of the covariant gauge choice can be written as e.g. in the case of the orthogonal gauge [2], Using this, the Faddeev-Popov determinant was obtained in [2] from (15) as If we confine ourselves to the Landau gauge, F † = −2∇ µ the vector determinant of (18) is This vector determinant is equivalent to the inverse of where ξ is a vector field. In case we write the ξ µ =ξ µ + ∇ µφ , we can isolate a scalar determinant in the above The scalar sector of the Faddeev-Popov determinant is thus C is the constant in the De-Witt metric which determines the signature. Where we have absorbed a factor of two in the prefactor in the operator expression to retain the conventions of [3]. The addition of det s (−∇ 2 ) 1/2 is new in this calculation, this factor was ignored in the initial version of [3] in the transformation to the scalar mode of the diffeomorphism generator. III. EVALUATION OF THE SCALAR MEASURE One has to compute the functional determinant (22) using known techniques like the heat kernel equation. We ignore the finite scalar determinant which multiplies the determinant of the fourth order functional operator in the subsequent discussion. It changes some constants in the effective action. This calculation was done in [3] and we clarify the calculation in this article. We use the heat kernel regularisation which is defined in order to achieve a ζ function regularisation of the determinant. Given an operator F with eigenvalues λ, the zeta function of p is defined to be Clearly, the determinant of the operator, would be given by and thus Other regularisation schemes can also be used if required, thus the exact answer would be particular to the way of regularisation. 
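The displayed equations for the zeta-function regularisation appear to have been lost in extraction at this point. For reference, the standard relations that the surrounding text is describing are sketched below in the usual conventions (this is a reconstruction of the generic definitions, not of the paper's numbered equations):

```latex
\zeta_F(p) = \sum_n \lambda_n^{-p}, \qquad
\ln\det F = \sum_n \ln\lambda_n = -\zeta_F'(0), \qquad
\det F = e^{-\zeta_F'(0)} .
```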
From (25), the Faddeev-Popov determinant can be written as an exponential and adds to the classical action in the 'weight' of the path-integral creating an effective action. Since the Faddeev-Popov has the square root of the determinant appearing in the measure, the exact terms which appear in the effective action for the scalar determinant is Γ trace = 1 2 ζ ′ (0). Thus one has to find ζ ′ (0) for the scalar determinant of (22). The zeta function for a given operator is appropriately written in terms of the 'Heat Kernel'. The Heat Kernel is precisely the term U(t, x, x ′ ) = exp(−tF )(x, x ′ ), t is a parameter and x, x ′ represent coordinates of the manifold in which the operator is defined. The heat kernel satisfies a diffusion differential equation, and can be solved term by term of a series Σa n t n [4]. In [3] the operator was slightly transformed by scaling the Heat Kernel by exp(−tQ), Q being a function of curvature [3] The zeta function and its derivative are thus: The finite term as p → 0 appears in the last term of the derivative of the zeta function (as we know this might not be the unique the way to extract the finite term, this is given in details in [3]). The finite term (regulator independent) is remarkably proportional to a 1 and is obtained from (28) in [3] (using a regularisation ζ(0) = −1/2.) Thus in the effective action Γ trace + Γ classical = 1 2 ζ ′ (0) + Γ classical we get to first order − 1 64π 2 Tr x a 1 + Γ classical . In the subsequent discussion, we fix the a 1 in the weak gravity and strong gravity regimes and find the Γ trace + Γ classical . These regimes are identified in the following way, the scalar operator in (22) is taken thus By using the commutation relations, one obtains We take two limits, one where there is weak field gravity, and one obtains ∇ µ R µν ∼ 0, here (32) is approximated by And where gravity is strong, one obtains And thus we approximate the scalar determinant by two determinants and These regimes are identified for the entire metric and not the conformally transformed metric. The first one (35) can be evaluated using the Heat Kernel for the Laplacian for arbitrary space times, under certain boundary conditions, by splitting the fourth order operator into product of two Laplacians whose heat kernel expansions are well known [4]. (i) ∇ µ R µν ≈ 0, the Faddeev-Popov determinant is the square root of In the factorised form, the first scalar determinant is a divergent one for C < −1/2, and the second one is a convergent one. We discuss the divergent determinant's contribution to the effective action as this should cancel the divergence from the classical action. The first scalar determinant is of a Laplacian and thus we use the heat kernel of a Laplacian, and analytically continue to the divergent regime of C < −1/2. The a 1 coefficient of the Laplacian is well known, and to quote [4,5] a 1 = 1 6 R What is interesting is that this term is dimensional and it is obvious that the space of measures does not have any length scales like the Planck length to make the term dimensionless in the exponential. To retain the correct dimensions, one had to introduce in the definition of the measure and the determinants a scale. I scale the coefficient a 1 by a appropriate length squared :l 2 un (in [3] this had been taken to be Planck length squared) to get the exponential dimensionless and restore the 16πG in the classical action. 
Writing R in terms ofR and (∇φ) 2 , and using the constants in the determinant one gets as the coefficient of the kinetic term of the conformal mode in the effective action (Γ trace + Γ classical ) Thus the positive action takes over at (1 + 2C) > −l 2 un π/2l 2 p . From Einstein' action C = −2, and Euclidean Einstein gravity has a positive definite effective action for l un = l p . The number −π/2 might differ for different regularisation schemes, (and conventions for defining determinants from Gaussian integrals) but it is indeed a finite number, and we should be in the realm of a convergent path-integral for Euclidean quantum gravity. Note that the effect of this calculation has been in the end change of the overall sign of the action by a minus sign. This achieves the required 'convergent behavior' of the exponent in the path-integrand. This contribution from the measure has the right sign, irrespective of what the sign of the classical bare action is. We saw that the classical bare action can be unbounded from below, and this change in sign makes those divergent actions contribute with convergent weights to the gravitational path-integral. From a premature analysis of the classical action it seems that the wildly divergent terms appear with a negative sign and thus reversing this sign makes the path-integral potentially convergent. (ii) In the case of the regime ∇ µ R µν ≫ 1, the relevant operator is (36) and the coefficient a 1 is determined for −8∇ µ R µν ∇ ν (with a scaling and in anticipation that the (1+2C) prefactor will be negative) from [3] We find that −R 2 term has exactly the sign required to cancel the contribution from the (∇φ) 2 term in the classical action, as writing R in terms ofR and∇φ, one finds (∇φ) 4 from R 2 , which dominate for configurations with rapidly varying φ. The other terms do not give divergent negative terms. Any further negative divergence fromḡ is also rendered positive by this squared term. Thus Γ trace + Γ classical in this regime emerges as positive definite. Note in this non-perturbative regime, the effective action at this order does not merely change by an overall minus sign but has additional non-trivial contributions proportional to R 2 and higher derivative terms, which dominate over the classical bare action. IV. CONCLUSION In this article we clarified and verified some calculations of [3] regarding the scalar measure in the gravitational path-integral. We showed by an exact computation in the orthogonal gauge, the scalar operator which appears in the measure is the same as obtained in [3]. This is not surprising as the scalar operator was obtained for the generic case in [3] and should be the same in any covariant calculation. We clarified the fact that the Euclidean action can be unbounded only from below by isolating the potentially diverging terms in the calculation and the resolution given in [3] though a more careful analysis has to be done. We then explained the results of [3] in evaluating the scalar measure and used that to observe that the effective action in the gravitational action is positive. What is very interesting is the emergence of finite terms by regularising the determinant in a non-perturbative way. Work is in progress in obtaining a regularisation of the entire Faddeev-Popov determinant.
2011-06-08T21:58:59.000Z
2011-06-08T00:00:00.000
{ "year": 2011, "sha1": "4316d86fe796daecb07df9b3a3a8609e937b56be", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "4316d86fe796daecb07df9b3a3a8609e937b56be", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
226336941
pes2o/s2orc
v3-fos-license
Sexual selection for bright females prevails under light pollution Christina ELGERT*, Topi K. LEHTONEN, Arja KAITALA, and Ulrika CANDOLIN Organismal and Evolutionary Biology, University of Helsinki, Helsinki, PO Box 65, 00014, Finland, Tvärminne Zoological Station, University of Helsinki, J.A. Palménin tie 260, Hanko, 10900, Finland, Department of Ecology and Genetics, University of Oulu, Oulu, PO Box 3000, 90014, Finland The functions of sexually selected traits are particularly sensitive to changes in the environment because the traits have evolved to increase mating success under local environmental conditions (Rosenthal and Stuart-Fox 2012). When environmental conditions change, previously reliable signals may become less reliable or harder to detect and evaluate. Because the correct expression, transmission, and interpretation of sexual signals typically influence mate choice outcomes, impediments to sexual signals can change both the strength and the direction of sexual selection (Rosenthal and Stuart-Fox 2012). Artificial light is a major anthropogenic disturbance that is intensifying around the world and has high potential to negatively impact wildlife, for example by hampering the expression and detection of sexual signals. For instance, the bioluminescent signals of fireflies are often inhibited or obscured by artificial illumination (Rosenthal and Stuart-Fox 2012; Owens et al. 2020). The evolution of more detectable signals could, at least partly, mitigate the negative effect of artificial light on mate attraction. However, whether sexual selection for signal conspicuousness will result in an evolutionary response depends on the heritability of the signal and the factors that constrain signal evolution. These include physiological and morphological limitations, costs of signaling, and trade-offs in allocation of energy to different traits (Andersson 1994; Jennions et al. 2001). We investigated whether artificial light alters sexual selection on signal intensity, in this case glow brightness, in the European common glow-worm Lampyris noctiluca. To attract flying males, flightless females emit a continuous cold light from a lantern on the underside of their abdomen. Females benefit from mating rapidly because they only have a limited amount of available resources after emerging as adults and lose eggs each day until they mate (Wing 1989). Females with a brighter glow are quicker to attract a male (Hopkins et al. 2015; Lehtonen and Kaitala 2020), and signal brightness correlates with body size (Hopkins et al. 2015). Here, we assessed, in a field experiment, the effects of three different intensities of artificial light, control (0.1-0.6 lux, N = 21), intermediate (7-10 lux, N = 23), and high (16-20 lux, N = 20) (Figure 1), on glow-worm female mate attraction success (see Supplementary Material for additional information). The artificial light levels were chosen to mimic those of low- to medium-intensity street lights at the street level, with typical values ranging between 10 and 60 lux. The intensity of typical moonlight, in turn, is only 0.05-0.1 lux (Kyba et al. 2017). Two rivalling signalers, i.e., dummy females that were designed to trap males attracted to them, were placed at an equal distance from the source of light (Figure 1A). 
The two dummy females differed in signal brightness, with peak glow intensities of 0.016 mW/nm and 0.13 mW/nm, mimicking a dim and a very bright wild female, respectively (Hopkins et al. 2015; Lehtonen and Kaitala 2020; unpublished spectrophotometer data from 56 wild females by A-M Borshagovski). The experiment was performed at 4 sites, resulting in 4 replicates per night, with the treatments rotating among the sites (see Supplementary Material for further methodological details). The interaction term between dummy brightness and intensity of artificial light was nonsignificant and removed from the model (generalized linear mixed model: χ²₂ = 0.2601, P = 0.88). The refitted model showed that the probability of a dummy female attracting males depended on both its brightness (χ²₁ = 51.96, P < 0.001) and the intensity of the artificial light (χ²₂ = 35.39, P < 0.001): the brighter dummy female was more likely to attract males, and the likelihood of successful mate attraction decreased with artificial light intensity (Figure 1B and Supplementary Table S1). Both artificial light intensities reduced mate attraction success compared to the control (intermediate light intensity: Z = −2.731, P = 0.017, high light intensity: Z = −3.972, P < 0.001; Supplementary Table S1). The high light intensity had a stronger negative effect than the intermediate light intensity (Z = −2.441, P = 0.038; Figure 1B and Supplementary Table S1). The dimmer dummy female did not attract any males in the presence of artificial light (Figure 1B). Our results show that while females are less likely to attract males under artificial light, sexual selection for brighter signals nevertheless continues to operate. Hence, the results indicate that sexual selection has the potential to promote the evolution of brighter signals under artificial light. The negative effect of artificial light on female mate attraction is in line with earlier findings on effects of street lights on mate attraction in Lampyrids (Bird and Parker 2014; Elgert et al. 2020). The results also show that the negative effect increases with the intensity of artificial light as demonstrated by the diminishing success of the brighter dummy females (Figure 1B). Interestingly, the dimmer females attracted no males in the presence of artificial light. This could be because males either actively selected the brighter of the two females or because they failed to detect the dimmer female and, hence, passively selected the brighter one. Overall, our study shows that the negative impact of artificial light on glow-worm mate attraction increases with the intensity of artificial light, but sexual selection for brighter signals nevertheless prevails. If the selection results in an evolutionary response, it would mitigate the negative effect of light pollution on mating success. However, several factors could restrict an evolutionary response. The costs of signals, for instance those arising from signal production and predation risk, are likely to increase with brightness, which could constrain the evolution of brighter signals. In addition, signal brightness correlates positively with body size (Hopkins et al. 2015), and an increase in signal brightness may need to be traded-off against other fitness-related traits, such as shorter larval development time. The heritability of the signal, and thus the potential for evolutionary change, is not known. More research is therefore needed on the factors that constrain evolutionary responses to artificial light. 
In this respect, our finding that sexual selection for brighter signals prevails under artificial light builds a foundation for further studies on the mechanisms that promote or hinder adaptation to light pollution in the common glow-worm.
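As a companion to the reported statistics, the following is a minimal analysis sketch in Python. It uses a plain fixed-effects logistic regression with a likelihood-ratio test for the brightness × light interaction; the study itself fitted a generalized linear mixed model with the replicate structure handled as random effects, and the file name and column names ('attracted', 'brightness', 'light') are hypothetical placeholders, not the authors' data or code.

```python
# Simplified sketch of the analysis structure: probability of a dummy female
# attracting at least one male as a function of its brightness (dim/bright)
# and the artificial-light treatment (control/intermediate/high).
# Fixed effects only; the original analysis was a generalized linear mixed model.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("glowworm_trials.csv")   # hypothetical input file

full = smf.glm("attracted ~ C(brightness) * C(light)",
               data=df, family=sm.families.Binomial()).fit()
additive = smf.glm("attracted ~ C(brightness) + C(light)",
                   data=df, family=sm.families.Binomial()).fit()

# Likelihood-ratio test for the interaction term; a nonsignificant result
# justifies refitting and interpreting the additive model, as done in the text.
lr = 2 * (full.llf - additive.llf)
df_diff = full.df_model - additive.df_model
print(f"interaction: chi2({df_diff:.0f}) = {lr:.2f}, "
      f"P = {stats.chi2.sf(lr, df_diff):.3f}")
print(additive.summary())
```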
2020-10-29T09:02:43.140Z
2020-10-22T00:00:00.000
{ "year": 2020, "sha1": "63b5425d0c12873fa632c458fc0a0c6895bc56eb", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/cz/article-pdf/67/3/329/38606170/zoaa071.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "76caddae50d43ec9f6f450739db66dc1bb6e8691", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine", "Geography" ] }
121909161
pes2o/s2orc
v3-fos-license
Anisotropy Constant Required for Thermally Assisted Magnetic Recording An anisotropy constant Ku required for thermally assisted magnetic recording (TAMR) was calculated. First, a heat transfer simulation was carried out to obtain the thermal gradient of media at the time of writing for the Ku calculation. Both the Curie temperature Tc and Ku should be designated for TAMR media design. Ku is a function of Tc. Therefore, we introduce a parameter Ku / Kbulk that shows the intrinsic ratio of film Ku to bulk Ku. From the information stability conditions, we estimated whether or not the media have the potential for TAMR. The limiting factor is the thermal gradient related to the information stability on the trailing side located 1 bit from the writing position during writing and at the adjacent track during rewriting, and the limiting factor determines the minimum Ku / Kbulk. Therefore, the minimum Ku / Kbulk strongly depends on the thermal gradient. Consideration was also given to the dependence of the grain number per bit on Ku / Kbulk, the dependence of the standard deviation of the grain size on Ku / Kbulk, the dependence of the thickness of the recording layer on Ku / Kbulk, and the dependence of the writing temperature on Ku / Kbulk. I ntro duction Simultaneously satisfying the three conditions of minute magnetic grains, an appropriate thermal stability factor, and suitable coercivity is difficult with high-density magnetic recording 1) beyond several Tbits/inch 2 (Tbpsi). Thermally assisted magnetic recording (TAMR) has been proposed as a way of solving this problem. TAMR is a recording method in which the medium is heated to reduce the coercivity at the time of writing. We have already proposed a new media design method for 2 Tbpsi TAMR 2) that takes account of information stability during writing 3) . Many TAMR media design parameters are related to each other in a complex manner. This new method can reveal the relationships between such parameters as anisotropy, Curie temperature, thickness and writing temperature. In our earlier work, we dealt with a hypothetical medium in which the magnetization, anisotropy, and Curie temperature are independent of each other. However, the magnetization and anisotropy are functions of the Curie temperature in actual materials. We have also shown that the bit error rate (bER) after 10 years of archiving in granular media is a function of the grain number per bit n and the standard deviation of the grain size D as well as the thermal stability factor TSF 10 4) . In this study, we simulated our media design for 4 Tbpsi TAMR with this new method. We also considered improvement of the design method, the dependence of the magnetic properties on the Curie temperature, and bER dependence on TSF 10 , n, and D . As a result, we determined the anisotropy constant required for 4 Tbpsi TAMR. Hea t Trans fer Simula tion To deduce the required anisotropy constant, we must first determine the thermal gradient of media at the time of writing. A heat transfer simulation was carried out using Poynting for Optics (Fujitsu Ltd.). Figure 1 is a schematic illustration of the structure of a medium that consists of four layers, that is, a recording layer RL (Fe-Pt base, t nm), interlayer 1 IL1 (MgO base, 5 nm), interlayer 2 IL2 (Cr base, 10 nm), and a heat-sink layer HSL (Cu base, 30 nm). The x , y , and z axes are the down-track, the cross-track, and the thickness directions, respectively. 
The writing temperature T w is defined at the heat-spot edge and at the center of the RL layer in thickness direction, and the two positions of T w in Fig. 1 are at a distance of d w in the cross-track direction where d w is defined as the heat-spot diameter. T max is defined as the maximum surface temperature. (a) Calculation conditions, (b) optical constants, and (c) thermal constants used in the simulation are summarized in Table 1. Ambient temperature T a is the maximum working temperature of the hard drive, and is assumed to be 330 K. A perfect conductor with a 12 nm 12 nm aperture is positioned 2 nm above the RL surface to generate a light spot with a diameter of several nanometers, and circularly polarized light at a wavelength of 780 nm is irradiated. Figure 2 shows the beam profile for the cross-track direction, and the light-spot diameter d L (FWHM) is about 9.0 nm. The profile for the down-track direction is almost the same as that in Fig. 2. The calculated temperature profiles at the surface z = 0 nm, and at 4 and 8 nm below the surface are shown in Fig. 3 for the cross-track direction. The two positions of T w in Fig. 3 correspond with those in Fig. 1. A significant temperature gradient can be seen for the RL thickness direction. The temperature profile near T w for the down-track direction is almost the same as that in Fig. 3. Figure 4 (a) shows the calculated thermal gradients T / x for the down-track direction and T / y for the cross-track direction as a function of the RL thickness t , and T / x T / y . The dependence of T max on t is shown in Fig. 4 (b). As t increases, the thermal gradient becomes smaller, and the maximum surface temperature becomes higher since the thick RL works adiabatically. The white regions indicate upward or downward magnetization, and the gray regions indicate the magnetization transition. This figure shows the situation after many rewriting operations on the i th track. The transition region spread to adjacent tracks as a result of the repetition of rewriting. Therefore, the stability of information on adjacent tracks during rewriting should be considered. The maximum temperature at which information can be held at an adjacent track is defined as T adj . The distance y from the isothermal T w to the position of T adj was assumed to be y = d T d w + a m / 2, where a m is the mean grain size. Then (T w T adj ) / y T / y is the minimum thermal gradient required by the medium for the cross-track direction, and it should be smaller than the thermal gradient T / y calculated by the heat transfer simulation. T adj can be calculated from the thermal stability factor as mentioned in 4 4 .4. Of course, the minimum thermal gradient required by the medium for the down-track direction T / x should also be considered. The maximum temperature that can hold the information recorded in 1 bit former during writing is defined as T rec . The distance x from the isothermal T w to the position of T rec was assumed to be the bit pitch Thus we can obtain d B and d T : The head field around the position at T w , T rec or T adj is H w , and H w is determined from taking the stray field from the surrounding magnetization M s (300 K) into consideration. The writing-head configuration is shown in Fig. 5 (b). It is also assumed that the main-pole size of the head is 600 nm (down-track direction) 300 nm (cross-track direction). The writing position as shown in Fig. 5 (a) is located on the trailing side of the main pole. 
The maximum temperature that can hold the information under the head field during rewriting is defined as T adj , and T adj = T a . From T adj = T a , the maximum head field H adj that can hold the information can be deduced from the thermal stability factor as mentioned in 4 4.5, and H adj should be higher than H w due to the geometrical restriction of the head field. The user areal density and the bit area S were assumed to be 4 Tbpsi and 140 nm 2 , respectively. The simulation was carried out for grain numbers per bit n of 4, 5, 6, and 7. The non-magnetic spacing between grains is 1 nm, and the mean grain size a m can be calculated from S / n . The temperature dependence of the magnetization M s was determined using the mean field theory 5) , and that of the anisotropy constant K u was assumed to be proportional to M s 2 6) . The M s of Fe-Pt with a Curie temperature T c of 770 K was assumed to be 1000 emu/cm 3 at 300 K. And the M s (300 K) of the medium is reduced with reducing T c . Both T c and K u should be designated for TAMR media design. K u (T c ) at room temperature is a function of T c as shown in Fig. 6. The closed circles in the figure are an example experimental result for Fe-Ni-Pt films 6) . K u (T c , T ) is also a function of temperature T . It is intrinsic that K u is reduced by reducing T c since K u (T c = RT , T = RT ) = 0 ( RT : room temperature). The K u value of bulk Fe-Pt was assumed to be 70 Merg/cm 3 , and is shown as a closed triangle in the figure. The K u value of Fe-Pt film is generally smaller than that of bulk Fe-Pt. Therefore, we introduce a parameter K u / K bulk that shows the intrinsic ratio of film K u to bulk K u . The solid line is a result calculated using the mean field theory for increasing K u / K bulk is challenging. Therefore, it is necessary to design a smaller K u / K bulk medium. Thermal Stabili ty Factor The thermal stability factor K is generally a function of temperature T and magnetic field H , and is expressed as where V = a 2 t ( a: grain size) and k are the grain De termina tion o f com posi tion We simulated the media design using the mean field theory for Fe-Pt by Cu simple dilution rather than using that for Fe-Pt by the Ni substitution of Fe because the calculation was simpler. x can be determined unambiguously from the writing temperature T w under a certain K u / K bulk . Writing completion is defined as the state in which the direction of the recorded magnetization is stable for a certain time under head field H w at temperature T w . For example, is x / v = (7.5 nm) / (10 m/s) = 0.75 ns (linear velocity v = 10 m/s). Since H w is parallel to M s , the thermal stability factor K + (T w , H w ) is given by = 0.75 10 9 3.2 10 8 = exp(K + (T w , H w )) exp(TSF 10 ) , then K + (T w , H w ) = TSF 10 ln = TSF 10 40.6 , where TSF 10 is the thermal stability factor calculated using many bits 7) , in which each bit has n grains and each grain is of various sizes, under the condition of 10 -3 bit error rate. TSF 10 is calculated statistically using grain-error probability P = 1 exp( f 0 exp( TSF 10 (a / a m ) 2 )) ( f 0 = 10 11 s -1 : attempt frequency) for exactly = 10 years of archiving. TSF 10 is a function of the grain number per bit n and the standard deviation of the grain size D , and TSF 10 has no relation with K u since TSF 10 is calculated statistically. TSF 10 is for the grain of mean size a = a m . Then K + (T w , H w ) becomes where V m = a m 2 t is the mean grain volume. From Eq. 
(4), the Cu composition x can be determined. If x = 6.5 nm, K + (T w , H w ) = TSF 10 40.7, and the difference of 0.1 in K + (T w , H w ) corresponds to a temperature difference of less than 1 K. Therefore, K + (T w , H w ) = TSF 10 40.6 is applicable in this simulation in spite of a few changes of x . Con di tion (1) We estimated from the following four conditions whether or not the media have the potential for TAMR. That is, information stability (1) for 10 years of archiving, (2) on the trailing side located 1 bit from the writing position during writing, (3) in an adjacent track during rewriting, and (4) under the main pole during rewriting. The limiting factor under these conditions determines the minimum K u / K bulk . The first condition, namely the information stability for 10 years archiving, is expressed using Con di tion (2) The second condition, namely the information stability on the trailing side located 1 bit from the writing position during writing, determines the minimum thermal gradient T / x required by the medium for the down-track direction. The grains on the trailing side from the position of T w are exposed to the head field H w during the cooling process after the writing has taken place for a certain time at a certain temperature. Of course, the information recorded in the grains should be preserved. The maximum temperature that can hold the information is T rec as defined in 3 3. Since H w may be antiparallel to M s , T rec can be calculated using the thermal stability factor K (T rec , H w ) as H w H c (T rec ) 2 = TSF 10 40.6 . (6) Then T / x = (T w T rec ) / x should be smaller than the thermal gradient T / x calculated by the heat transfer simulation. Con di tion (3) The third condition, namely the information stability in an adjacent track during rewriting, determines the minimum thermal gradient T / y required by the medium for the cross-track direction. The maximum temperature that can hold the information in an adjacent track is T adj as defined in 3 3. Assuming that the maximum number of rewriting operations is 10 4 , T adj can be calculated using the thermal stability factor K (T adj , H w ) as follows: 10 4 = exp(K (T adj , H w )) exp(TSF 10 ) , then Then T / y = (T w T adj ) / y should be smaller than the thermal gradient T / y calculated by the heat transfer simulation. Con di tion (4) The fourth condition, namely the information stability under the main pole during rewriting, determines the maximum head field H adj that can hold the information. The grains in the adjacent tracks are exposed to the head field at each time of rewriting, for a certain time at temperature T adj = T a . From the assumption, is (600 nm) / (10 m/s) = 60 ns. The head field is simultaneously applied to about other 15 tracks at each time of rewriting. Since the head field may be antiparallel to M s , H adj can be calculated using the thermal stability factor K ( T adj = T a , H adj ) as follows: These limiting factors determine the minimum For example, TSF 10 is 63, and K u V m / kT (T a ) is 97 for n = 4. If the medium has K u V m / kT (T a ) = 63, it can archive information for 10 years. However, a medium with K u V m / kT (T a ) = 63 is not suitable for TAMR, and K u V m / kT (T a ) = 97 is necessary. If the thermal gradient T / x(y) can be increased, for example to 13.5 K/nm, a medium with K u V m / kT (T a ) = 64 becomes suitable for TAMR, and K u / K bulk can be reduced to 0.55 from 0.87. 
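Several of the relations above arrive garbled from the source layout; they can be restated compactly. The sketch below assumes the writing-completion condition reads K+(Tw, Hw) = TSF10 − ln(τ/τx) with τ = 10 years and τx = xB/v, that the per-grain error probability for archiving is P = 1 − exp(−τ f0 exp(−TSF10 (a/am)²)), and that conditions (2) and (3) compare (Tw − Trec)/Δx and (Tw − Tadj)/Δy against the simulated thermal gradients. The temperatures, pitches, gradient, and TSF10 values used here are hypothetical placeholders rather than the paper's results; treat the whole block as a reconstruction.

```python
import math

# --- Writing-completion condition (reconstructed; see lead-in) -------------
tau = 10 * 365.25 * 24 * 3600      # 10 years of archiving, ~3.2e8 s
x_B = 7.5e-9                        # bit pitch [m], value quoted in the text
v = 10.0                            # linear velocity [m/s]
tau_x = x_B / v                     # time under the write field, 0.75 ns
offset = math.log(tau / tau_x)      # ~40.6, matching the quoted K+ = TSF10 - 40.6
print(f"K+(Tw, Hw) = TSF10 - {offset:.1f}")

# --- Per-grain error probability for 10-year archiving ----------------------
# P = 1 - exp(-tau * f0 * exp(-TSF10 * (a/am)**2)), attempt frequency f0 = 1e11 /s.
def grain_error_probability(tsf10, size_ratio=1.0, f0=1e11, t=tau):
    return 1.0 - math.exp(-t * f0 * math.exp(-tsf10 * size_ratio**2))

for tsf10 in (55, 63, 70):          # illustrative TSF10 values, not design results
    print(f"TSF10 = {tsf10}: P(mean-size grain) = "
          f"{grain_error_probability(tsf10):.2e}")

# --- Conditions (2) and (3): minimum thermal gradients ----------------------
# T_rec and T_adj would be solved from the thermal-stability relations, e.g.
# (Ku(T) V / kT) * (1 - Hw / Hc(T))**2 = TSF10 - 40.6; here they are placeholders.
T_w, T_rec, T_adj = 570.0, 490.0, 460.0   # [K], hypothetical
d_T, d_w, a_m = 18.0, 9.0, 5.0            # track pitch, spot diameter, grain size [nm], hypothetical
x_B_nm = 7.5                              # bit pitch [nm]

grad_x_required = (T_w - T_rec) / x_B_nm                 # down-track [K/nm]
grad_y_required = (T_w - T_adj) / (d_T - d_w + a_m / 2)  # cross-track [K/nm]
grad_simulated = 11.0                                    # from the heat-transfer model, placeholder

print(f"required dT/dx = {grad_x_required:.1f} K/nm, "
      f"required dT/dy = {grad_y_required:.1f} K/nm, "
      f"medium is writable: {max(grad_x_required, grad_y_required) < grad_simulated}")
```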
As a result, the minimum K u / K bulk value strongly depends on the thermal gradient T / x(y) . If the thermal gradient exceeds 13.5 K/nm, the limiting factors will change from conditions (2) and (3) to condition (1) or (4). As n decreases, the required TSF 10 becomes larger. However, both the grain size a m and the volume become larger. As a result, the required K u , and therefore the required K u / K bulk , become smaller by decreasing n 7) . The minimum n value can be determined by the SN ratio of the reproduced signal, that is the complex matter of signal processing. Table 2 Calculation results of TAMR media design for various grain numbers per bit n. De pendence of s tan dard devia tion of grain size The dependence of the required K u / K bulk value on the standard deviation of the grain size D is shown in Table 3. As D increases, the required TSF 10 becomes larger. Therefore, the required K u / K bulk becomes larger by increasing D . De pendence of RL thickne ss Although the grain volume increases by increasing the RL thickness t , the required K u / K bulk values are almost the same as those shown in Table 4. Figure 7 shows the dependence of T / x(y) on K u / K bulk for each t . As K u / K bulk increases, T / x(y) becomes smaller. The dotted lines in the figure show T / x(y) for each t , and T / x(y) should be smaller than T / x(y) . If T / x(y) is constant in spite of the change of t , K u / K bulk can be decreased by increasing t . However, T / x(y) is reduced by increasing t due to the adiabatic effect of RL. Therefore, increasing the thickness has little merit as regards reducing K u / K bulk . De pendence of writin g tem perature The dependence of the required K u / K bulk on the writing temperature T w is shown in Table 5. Figure 8 shows the dependence of T / x(y) on K u / K bulk for each T w . The dotted lines in the figure show T / x(y) for each T w . As T w increases, T / x(y) becomes larger as shown in Fig. 4 (a). Therefore, increasing the writing temperature T w is effective in reducing K u / K bulk . T / x(y) by medium as a function of intrinsic anisotropy constant K u / K bulk . The calculation parameter is writing temperature T w . Conclusion s The anisotropy constant K u required for thermally assisted magnetic recording (TAMR) at 4 Tbpsi was evaluated by simulation. We introduced a parameter K u / K bulk that shows the intrinsic ratio of film K u to bulk K u . It is necessary to design a smaller K u / K bulk medium. The limiting factor whether or not the media have the potential for TAMR is the thermal gradient T / x(y) calculated by a heat transfer simulation, and it determines the minimum K u / K bulk . (1) Dependence of grain number per bit n As n decreases, the thermal stability factor TSF 10 calculated statistically becomes larger. However, the grain volume becomes larger. As a result, K u / K bulk becomes smaller by reducing n. (2) Dependence of standard deviation of grain size D As D increases, TSF 10 becomes larger. Therefore, K u / K bulk becomes larger with increasing D . (3) Dependence of recording layer (RL) thickness t Although the grain volume increases with increasing t , T / x(y) becomes smaller with increasing t due to the adiabatic effect of RL. Therefore, the K u / K bulk values are almost the same. (4) Dependence of writing temperature T w As T w increases, T / x(y) becomes larger. Therefore, increasing T w is effective in reducing K u / K bulk . 
Though optimum conditions cannot be decided, since there are many trade-offs among the design conditions, it can be pointed out that increasing K u / K bulk through an improvement of media preparation is necessary to realize 4 Tbpsi TAMR. An investigation of media structures with a large thermal gradient ∂T/∂x(y) will also be effective in realizing 4 Tbpsi TAMR.
2019-04-20T13:13:58.987Z
2015-01-01T00:00:00.000
{ "year": 2015, "sha1": "f5800fb1ea86409a3472d9dcbdd771ec6d9f21b6", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/msjmag/39/1/39_1412R002/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "e1ba44286bb159d2514392d7fa40284f17e86d94", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
263796622
pes2o/s2orc
v3-fos-license
A loss for photoemission is a gain for Auger: Direct experimental evidence of crystal-field splitting and charge transfer in photoelectron spectroscopy We find a new 5 eV satellite in the Ti 1s photoelectron spectrum of the transition-metal oxide SrTiO$_3$. This satellite appears in addition to the well-studied 13 eV structure that is typically associated with the Ti 2p core line. We give direct experimental evidence that the presence of two satellites is due to the crystal-field splitting of the metal 3d orbitals. They originate from ligand 2p t$_{2g}$ $\rightarrow$ metal 3d t$_{2g}$ and ligand 2p e$_g$ $\rightarrow$ metal 3d e$_g$ monopole charge-transfer excitations within the sudden approximation of quantum mechanics. This assignment is made by the energetics of the resonant and high-energy threshold behaviors of the Ti K-L$_2$L$_3$ Auger decay that follows Ti 1s photo-ionization. The discovery of photoelectron satellites, the structures occurring on the high-binding-energy side of a principle photoelectron-core line, dates back to the early work of Siegbahn [1] and Carlson [2] on rare gases.They are a unique example of the sudden approximation of quantum mechanics, and they illustrate the electron correlations that occur in atoms, molecules, and solids.In fact, the nature and presence of such satellites have been used to establish whether transition-metal compounds are either the charge-transfer or Mott-Hubbard type, being effectively described by both model and Anderson-impurity Hamiltonians [3]. Despite the numerous theoretical descriptions of this unique manybody phenomenon, there has been no direct, experimental evidence that charge is actually transferred (hops) from the ligand to the metal ion during the core-photoionization process, a theoretical ansatz that is so central to the physics discussed in these works.For example, it is certain that the satellites occurring in rare gases must be of the shake-up/shake-off origin [4], whereas the chemical bonding in molecules and solids can lead to either "shake-on" or "shake-off." In this Letter, we employ hard x-ray photoelectron spectroscopy (HAXPES) to study the satellite structure that appears in both the Ti 1s and Ti 2p core levels of the transition-metal oxide SrTiO 3 .We find a new photoemission satellite in the Ti 1s spectrum that is unresolved in the Ti 2p spectrum due to the spin-orbit splitting of the Ti 2p shell.The resonant and high-energy threshold behaviors of the Ti K-L 2 L 3 Auger decay demonstrate that the presence of two satellites is a direct consequence of the crystal-field splitting of the metal 3d ion and uniquely identifies them as ligand-to-metal charge transfer.High-energy resonant photoelectron spectroscopy is therefore shown to be a powerful experimental method to study the nature of such transitions because the final state of the resonant decay retains the memory of the initial-state excitation. Figure 1 shows the Ti 1s and Ti 2p core-level HAXPES spectra recorded with photon energies hν = 5597 eV and hν = 4967 eV, respectively, from a SrTiO 3 single-crystal surface.Data were recorded at the NIST beamline X24A that is equipped with a Si(111) double-crystal monochromator and a hemispherical electron analyzer.Details of the beamline and vacuum system have been given previously [5].The sample was etched in buffered-oxide etch for 10 minutes prior to introduction to the vacuum system; it was then annealed at 730º C for 30 minutes. 
The well-studied 13 eV Ti satellite [6,7,8,9,10,11] is indicated in both spectra; as seen, it has a binding energy of approximately 13 eV relative to the Ti 1s, Ti 2p 1/2 , and Ti 2p 3/2 core lines.The Ti 2p spectrum is similar to what is reported in the literature, with the exception that our high-resolution spectrum shows a hint of a second low-energy satellite that appears as a shoulder on the high bindingenergy side of the 2p 1/2 component and as a shoulder on the low binding-energy side of the 13 eV satellite that is associated with the 2p 3/2 line.The Ti 1s spectrum, on the other hand, shows two distinct satellites: The 13 eV structure together with a lower-energy, smallerintensity feature occurring at approximately 5 eV below the main line.Clearly, the spin-orbit splitting of the Ti 2p core level is the reason this lower-energy satellite has not been resolved previously; it is also not resolved in the Ti 2s spectrum [8] on account of the anomalously large Coster-Kronig decay width of the 2s shell [12].Fig. 1.Ti 1s photoelectron spectrum recorded with photon energy hν = 5597 eV and Ti 2p photoelectron spectrum recorded with photon energy hν = 4967 eV from cubic SrTiO3.Note the two satellite structures that appear at approximately 5 eV and 13 eV lower-kinetic (higher-binding) energy in the 1s spectrum relative to the main 1s core line.The higher energy satellite is mirrored in the Ti 2p spectrum and occurs at approximately the same energies relative to the Ti 2p1/2 and the Ti 2p3/2 core lines.The data have been normalized to equal peak height. Before we present our resonant data, it is important to elucidate the different electronic transitions that will be studied.Figure 2 shows Ti K x-ray-absorption near-edge spectra for single-crystal SrTiO 3 [13].The data are plotted for different sample geometries relative to the incident synchrotron-beam wave vector q and synchrotron-beam polarization vector e.By orienting the direction of the electric-field polarization and wave vector, the different dipole and quadrupole transitions of the Ti 1s electron may be selected as shown in the figure .In cubic materials, such as SrTiO 3 , the intensity of dipole transitions is invariant with respect to q and e [14].As shown by their sensitivity to sample geometry, the first two peaks of the spectra are dipole-forbidden transitions of the Ti 1s electron to the Ti 3d derived t 2g (d xy , d yz , and d zx ) and e g (d 3z 2 -r 2, and d x 2 -y 2) unoccupied molecular orbitals.The energy difference between the two peaks is approximately 2.2 eV, which corresponds to the crystal-field splitting of the metal 3d shell.This splitting results from the different orbital overlap between the Ti 3d orbitals and the ligand 2p orbitals that are strong functions of symmetry; it stabilizes (destabilizes) the occupied (unoccupied) e g orbitals more strongly than the occupied (unoccupied) t 2g orbitals.In a simple ionic picture, this is because the metal e g orbitals point towards the ligand atoms while the metal t 2g orbitals point between them [15]. Note that the t 2g and e g transitions are mirrored at higher-photon energy (by approximately 5 eV), and we have indicated these transitions as t′ 2g and e′ g , respectively, in the figure.These higherenergy transitions, however, show no geometry dependence, indicating that they are dipole allowed.In the theoretical work of Vedrinskii et al. [16] and Vankó et al. 
[17], similar features in both the SrTiO 3 and LaCoO 3 spectra were attributed to 1s transitions to the metal 3d orbitals on neighboring metal atoms via oxygen-mediated intersite hybridization: M(4p)-O(2p)-M′(3d).The fact that they appear at higher-photon energy than the direct (local) 1s to 3d transitions is attributed to the reduced core-hole attraction on the neighboring metal sites, although their increased energy can also be explained in a simple molecular-orbital picture as the metal 4p orbitals lie at higher energy than the metal 3d orbitals and hence so would their hybrid.The fact that these transitions are dipole allowed and show such small intensity in the absorption spectra also demonstrates that most of their character originates from the neighboring metal-ion 3d states.Note that all of these features lie well below the main 1s to 4p absorption edge that occurs at 4984 eV in SrTiO 3 In our resonant measurements, we set the photon energy to the energies of the above transitions and measure high-resolution Ti K-L 2 L 3 Auger spectra.As in the study of Danger et al. [18], we report only the 1 D 2 K-L 2 L 3 peak as it is the most intense and other peaks exhibit similar behavior.Figure 3 shows Ti K-L 2 L 3 ( 1 D 2 ) Auger spectra recorded with the photon energy set to the 1s → t 2g quadrupole transition (hυ = 4968.3eV), the 1s → t′ 2g dipole transition (hυ = 4973.7 eV), and the 1s → 4p dipole transition (hυ = 4984.0eV), as indicated.Note the similar kinetic energies of the Auger transition recorded at the latter two photon energies, despite the fact that the former of the two transitions lies below the primary 4p edge or "white line" and would therefore typically be considered a bound state.Note as well the large energy shift (by a full 2 eV) of the Auger peak recorded with the photon energy set to the quadrupole 1s → t 2g transition; this shift is the same when the photon energy is set to the quadrupole 1s → e g transition (not shown), and it is also noticeably narrower for both transitions as expected [19,20]. Clearly, these data demonstrate the localized nature of the metal 3d states in this transition-metal oxide.Promotion of the spectator electron to either the metal 3d t 2g or the metal 3d e g orbitals is both sufficiently localized and long lived to fully screen the photo-hole, and the Auger decay in this case more closely resembles direct photoionization because its final state leaves only one photo-hole on the Ti ion.In the case of the two dipole transitions, the strong intersite hybridization delocalizes the spectator electron, leaving little or no spectator-electron density on the absorbing atom with which to screen the photo-hole.It is interesting to note that the energy of the resonant Auger decay recorded at the nonlocal 1s → 3d′ transition occurs at slightly lower kinetic energy than at the 1s → 4p transition, again consistent with the assignment that the latter transition is to orbitals that have the majority of their character on neighboring metal sites.Such dispersive energy shifts have been used previously to study electron dynamics in adsorbed systems [21,22,23]. 
Figure 4 again shows high-resolution Ti K-L 2 L 3 Auger spectra, but now plotted as a function of excess photon energy above the 1s → 4p threshold as indicated.Note that the energy of the 1s → 4p threshold is a full 15 eV above the 1s → t 2g transition, and consequently it is well above the resonant Raman regime that has been studied in the past [20,24,25,18]; our high-energy threshold data therefore probe the electron dynamics that occur as the Ti 1s electron transits to the continuum, as opposed to the resonant behavior that occurs when it is trapped in the 3d bound state below it [26].Clearly, there is an additional feature at the high kinetic-energy side of the main Auger peak that turns on discretely with excess photon energies above the 1s → 4p edge.From the difference spectrum shown in the inset, we find that the kinetic energy of this feature coincides with the kinetic energy of the Auger peak when the photon energy is set to either the t 2g or e g local 1s → 3d resonance; i.e., in the presence of the screening charge of the t 2g or e g 3d spectator electron.We note that if this additional intensity were due to an intrinsic satellite or loss feature associated with the primary or diagram Auger decay, it would occur at a kinetic energy below the main line.Consequently, it must reflect the same well-screened initial states of the Auger decay but that turn on discretely at the excess photon energies above the 4p threshold that are equal to the binding energies of the two Ti photoelectron satellites; i.e., E + ΔE 1 and E + ΔE 2 where E is the threshold for the core ionization and ΔE is the additional energy required for the "shake" [27].Similar threshold phenomenon has been observed for K-L 2 L 3 Auger decay of Ar gas [25] and Cu and Ni metals [28], but to our knowledge this is the first time that such a satellite has been observed in a solid on the highkinetic (low-binding) energy side of the primary Auger peak clearly identifying it as a "shake-on" rather than a "shake-off" charge process. Figure 5 illustrates the three possible transitions pertaining to the Auger decay: K-L 2 L 3 Auger decay following direct 1s photoionization, K-L 2 L 3 resonant Auger decay following promotion of the 1s electron to the unoccupied metal 3d orbitals, and K-L 2 L 3 Auger decay following 1s photoionization and O 2p to Ti 3d ligandto-metal charge transfer.Note the similarity (and hence the equality of final-state energies) of the initial states of the Auger decay in the latter two cases [29]; clearly, the high-energy structure observed in the Ti K-L 2 L 3 Auger spectrum is due to the contribution from the well-screened initial states that are created by the t 2g and e g ligand-tometal charge transfer that occur for energies above the Ti 4p edge. We will now discuss our data within the context of the available theoretical calculations.Ikemoto et al. 
[6] found satellite structure in the Ti 2p core ionization of Ti based transition-metal oxides and attributed it to O 2p → Ti 4s shake-up.Kim and Winograd [7] inter- -preted similar data in terms of optical absorption spectra, and they assigned the observed satellites to monopole O 2p e g → Ti 3d e g , O 2p a 1g → Ti 4s a 1g , and O 2s e g → Ti 3d e g transitions in order of increasing energy.These authors concluded that the first shake-up satellite occurring on the high binding-energy side of a transitionmetal compound must be due to ligand 2p e g → metal 3d e g monopole transitions because the probability of the first allowed shake-up, 2p t 2g → 3d t 2g , is too low to be observed.Sen et al. [8] disagreed with this hypothesis and argued that all such monopole transitions should be observed, and an anion-exciton model was later proposed by de Boer et al. [30] that challenged the conventional wisdom that the satellites are due to ligand-to-metal charge transfer.Since then, the 13 eV satellite has been reproduced by the full multiplet charge-transfer theory using a configuration-interaction wave function for a TiO 6 octahedral cluster and an Andersonimpurity Hamiltonian [8].In these calculations, the on-site metal d-d Coulomb repulsion energy U, the charge-transfer energy Δ, and the ligand-metal p-d hybridization energy V are fitting parameters.This treatment was adopted by Bocquet et al. [10] and by Zimmermann et al. [11] who used a more complete configuration-interaction basis set but neglected both the core-hole d-electron multiplet effects and the crystal-field splitting of the metal 3d orbitals.These calculations consequently predict only the existence of the high-energy 13 eV structure. Recently, Kas et al. [31] have applied an ab initio real-time cumulant approach for charge-transfer satellites in x-ray photoemission data.Their calculation reproduces the Ti 2p core lines and the high-energy 13 eV satellite.The 5 eV satellite is again either missed or obscured by the spin-orbit splitting of the Ti 2p level.A major advantage of this theoretical treatment over previous work, however, is that it is a real-space approach based on densityfunctional theory (DFT) in which the cumulant representation describes the transfer of spectral weight from the main quasi-particle or core peak to the satellite.Afforded by the calculation is the charge density of the 13 eV ligand-to-metal excitation that clearly identifies it as e g symmetry, bearing stunning resemblance to the e g set of molecular orbitals for this system [32]. 
From group theory and molecular-orbital considerations, electronic transitions with e g symmetry should naturally lie at lowerkinetic energy (higher-binding energy relative to the main line) than those with t 2g symmetry.This is because the energy required to excite an electron from an occupied O 2p e g level to an unoccupied metal 3d e g level is significantly greater than the energy required to excite an electron from an occupied O 2p t 2g level to an unoccupied metal 3d t 2g level.Likewise, because the transition probability in the sudden approximation is given by the square of the overlap between the initial and final states (hence the monopole-selection rules), the e g transition will be more intense than the t 2g transition, once again because the metal 3d e g orbitals point towards the ligand 2p e g orbitals.Consequently, it is clear that the smaller-intensity 5 eV satellite observed in our data is due to ligand O 2p t 2g → metal 3d t 2g transitions and the larger-intensity 13 eV satellite is due to ligand O 2p e g → metal 3d e g transitions.As the formal-charge state of the Ti ion in this material is Ti 4+ , both transitions should be observable in high-resolution photoelectron spectra. In conclusion, we have identified a previously un-resolved photoelectron satellite in the Ti 1s core-level spectrum of the transition-metal oxide SrTiO 3 , and we have examined the photonenergy dependence of the Ti K-L 2 L 3 Auger decay within the vicinity of the Ti K edge.Our data reveal a low binding-energy feature in the Auger spectrum that is concurrent in energy to the Auger peak measured at both the Ti 1s → t 2g and the Ti 1s → e g quadrupole transitions.This feature turns on discretely with excess photon energies above the 4p threshold that are equivalent to the two satellite binding energies thereby uniquely identifying them as O 2p t 2g → Ti 3d t 2g and O 2p e g → Ti 3d e g monopole ligand-to-metal charge transfer.The presence of two distinct satellites is due to the crystalfield splitting of the Ti 3d orbitals, and this assignment is consistent with recent ab initio theoretical calculations that predict the energy, intensity, and charge density of the higher-energy e g excitation.This work therefore points to a new direction on how photoelectronsatellite structure and the threshold behavior of Auger spectra may be used to study chemical bonding and orbital occupation in the solid state. This work was performed at the National Synchrotron Light Source which is supported by the U.S. Department of Energy.Additional support was provided by the National Institute of Standards and Technology.The authors thank Dr. Eric Shirley for useful discussions and sharing unpublished calculations at the Ti 1s near edge. Fig. 2 . Fig.2.Polarization dependence of the Ti K (1s) x-ray absorption spectra for cubic SrTiO3 showing the pre-edge structures below the 1s → 4p threshold.The inset shows the full near-edge region.Transitions to the t2g, eg, t′2g, e′g, and 4p levels are indicated (see text). Fig. 3 . Fig.3.Resonant Ti K-L2L3 Auger spectra recorded with photon energy set to the Ti 1s → t2g, 1s → t′2g, and 1s → 4p transitions indicated in Figure 2. The data have been normalized to equal peak height. Fig. 4 . 
Ti K-L2L3 Auger spectra recorded with photon energy set to the Ti 1s → 4p transition and with excess energy above threshold as indicated. Note the kink and the additional intensity on the high-kinetic-energy side of the Auger peak that turns on discretely at 5 eV and then again at 17 eV excess photon energy. The inset shows the difference spectrum between the spectrum recorded at the 1s → 4p transition and 17 eV above it. The data have been normalized to equal peak height and have had an integrated background removed.

Fig. 5. Illustrations of initial and final states for: (a) Ti K-L2L3 Auger decay following 1s core photo-excitation that leaves the Ti ion doubly ionized; (b) Ti K-L2L3 Auger decay following resonant excitation of the 1s electron to the metal 3d orbitals; (c) Ti K-L2L3 Auger decay following 1s photo-excitation accompanied by charge transfer from the ligand O 2p to the metal Ti 3d orbitals. Note the similarity of the initial and final states for the Auger decays in (b) and (c) that leave the Ti ion singly ionized.
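The threshold argument in the discussion above reduces to simple energy bookkeeping: the well-screened Auger channel should open once the photon energy exceeds the core-ionization threshold E by a satellite binding energy ΔE. Below is a minimal sketch, taking E as the quoted 1s → 4p edge and using the 5 eV and 13 eV satellite energies; the computed turn-on energies are an illustration of the argument, not values extracted from the measured spectra.

```python
# Energy bookkeeping for the "shake-on" thresholds (sketch).  The extra Auger
# intensity is expected to switch on at photon energies E + dE, where E is the
# core-ionization (1s -> 4p) threshold and dE a satellite binding energy.
E_edge = 4984.0                                   # Ti 1s -> 4p edge in SrTiO3 [eV]
satellites = {"O 2p t2g -> Ti 3d t2g": 5.0,       # low-energy satellite [eV]
              "O 2p eg  -> Ti 3d eg": 13.0}       # well-known 13 eV satellite [eV]

for label, dE in satellites.items():
    print(f"{label}: expected turn-on near hv = {E_edge + dE:.0f} eV "
          f"(excess energy {dE:.0f} eV)")
```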
2019-04-13T10:45:14.358Z
2015-01-22T00:00:00.000
{ "year": 2015, "sha1": "233d12f4652f4e4b826e1dcb7724d896ad34efad", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "233d12f4652f4e4b826e1dcb7724d896ad34efad", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
270463175
pes2o/s2orc
v3-fos-license
Hepatitis C Elimination. Are we on course for 2030? vaccination and contact tracing. Last, but not the least, is the global implementation Introduction H epatitis C is a major public health problem in Pakistan. At present, Pakistan has the highest disease burden 1 due to chronic hepatitis C (CHC) virus infection.HCV prevalence in Pakistan was studied at the national level in 2008.This sero-survey was conducted by the Pakistan Medical 2 Research Council (PMRC) and the prevalence of Hepatitis C antibodies was estimated to be approximately 5% (8 million).However, recently, other prevalence studies done in Pakistan have shown an increasing trend in chronic hepatitis C (CHC) plus Velpatasvir.In the current scenario, the bottleneck remains that very few people are aware of their disease due to the asymptomatic nature and therefore the unaware patients do not seek active testing and treatment, thereby creating a large pool of untreated and undiagnosed people in the community. Taking notice of this high burden of disease yet the ease to eliminate the hepatitis C virus, virtually within a few weeks of treatment, the Prime Minister (PM) of Pakistan, has agreed to launch a hepatitis C virus elimination programme to be 13 implemented over the next 5 years (till 2030), which has also received endorsement by the President of Pakistan.The salient features of the PM Programme are that all patients above the age of 12 years will be screened using rapid HCV testing.It is assumed that 10% of the population will be positive (17 million) for chronic hepatitis C. Following this, these 17 million HCV-positive patients will undergo PCR testing and if this hypothesis is correct then 60% may have the virus and thus would need treatment, we are looking at treating 15 million people free of cost.This scientific modeling/mathematics needed the support of the modelers.Therefore, Homie Razavi from the Center of Disease Analysis (CDA) was requested by the Federal government to undertake the modeling for Pakistan.Likewise new data on hepatitis C released by the Polaris Observatory, also shows that 9 countries Australia, Brazil, Egypt, Georgia, Germany, Iceland, Japan, Netherlands, and Qatar are on track to meet the WHO target of eliminating 22 hepatitis C with the UK not far behind.In Pakistan, the PM Programme is being launched as a support program to the already existing provincial hepatitis programs.The PM program shall procure the commodities i.e rapid tests, PCR and the treatment and shall pass on to the provinces on their need/demand base.Monitoring of the utilization of these commodities shall be done by the provinces and the federal government.The federal health ministry shall be the custodian of the project.Provinces will enhance their capacity to screen, test and treat more patients and strengthen infection prevention i.e safe blood, safe injections and infection control in the health care settings.The plan is to screen all population over 12 years of age using health facility-based screening along with community screening.All those who are found to be reactive on the rapid test shall have their blood collected immediately (reflex testing) for HCVRNA, CBC and AST> CBC and AST shall be used to calculate the APIR score for deciding the duration of therapy (APRI <1.5 receive 12 weeks treatment and those with >1,5 receive 24 weeks treatment.Private sector engagement along with engaging other stake holders like department of education, religious affairs, water and sanitation, NGOs, corporate sector, airlines, 
railways, and the army will help in making this huge undertaking possible. Hence, by adhering to this plan we can achieve elimination by 2030.

Conclusion
In conclusion, the 2030 hepatitis elimination targets seem difficult to achieve in Pakistan, especially at the current pace. However, combining the PM's programme for hepatitis C elimination with other similar stakeholders, such as the provincial hepatitis programmes, harm reduction, blood safety, and infectious disease control services, can bring a huge change in the current momentum. In the PM programme alone, screening, testing, and treatment will be increased to 8 times the present figures, and new infections will thereby be reduced 5-fold. Hence, abiding by this combined approach (one-time mass screening plus follow-through with active treatment), we should be successful in toppling the hepatitis C (HCV) virus infection dominos. This in turn should achieve an almost 80% reduction in incidence and a 65% reduction in liver-related mortality secondary to chronic hepatitis C infection by the year 2030, placing us closer to our target of meeting the WHO HCV elimination targets by 2030. The model estimates that each year Pakistan has to treat 1 million cases to achieve the 2030 elimination targets. Similarly, new cases of HCV must be reduced by enhancing efforts to 10 times the present rate. The cost of 12 weeks of therapy with Sofosbuvir and Daclatasvir is relatively lower than that of Sofosbuvir plus Velpatasvir.
2024-06-14T15:02:44.584Z
2024-03-30T00:00:00.000
{ "year": 2024, "sha1": "5e7afc567c9bcb90d67614c20e956522e5b7be8d", "oa_license": "CCBY", "oa_url": "https://www.annalskemu.org/journal/index.php/annals/article/download/5738/3296", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1030d123dc840f3c1b576b201d37ef961c517dd8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
52230647
pes2o/s2orc
v3-fos-license
Relative competitiveness of irrigated rice cultivars and Aeschynomene denticulata This study evaluated the relative competitive ability of rice cultivars in the presence of a joint-vetch (Aeschynomene denticulata) biotype, at different replacement levels of plants in the association. The experiments were conducted in a randomized complete block design with four replications. First, it was determined the population of plants in which the final dry mass remains constant, both for the rice and for the joint-vetch (24 plants per pot). Later, two experiments were carried out to evaluate the competitiveness of the rice cultivars BRS Querência and BRS Sinuelo CL with joint-vetch plants, both conducted in replacement series, in different crop and weed combinations, varying the relative proportions of plants per pot (24:0, 18:6, 12:12, 6:18 and 0:24). Competitiveness of each species was analyzed by diagrams applied to replacement experiments and by the relative competitiveness indices. Fifty days after the emergence, tillering or number of leaves, height, leaf area and shoot dry mass were determined. Competition occurred between the rice cultivars and the joint-vetch, both were adversely affected, irrespective of the plant proportion. This resulted in reductions in all evaluated variables. Different competitive abilities were observed between rice cultivars in the presence of joint-vetch. The ‘BRS Querência’ was more competitive than the ‘BRS Sinuelo CL’ for all plant proportions and variables tested. INTRODUCTION The Brazilian production of rice (Oryza sativa L.) is about 12 million tonnes (Silva et al., 2013), representing a source of income and job in many farms in the country, besides being one of the staple cereal foods most consumed by the population.However, rice cultivation productivity in Brazil is still greatly affected by biotic and abiotic factors. Among the biotic factors that contribute most to the damage to rice crops stand out weeds, by competing with the crop for resources available in the environment such as water, light and nutrients.Negative effects caused by weed interference are observed in the growth, development of crops, production and grain quality for the industry (Agostinetto et al., 2010(Agostinetto et al., , 2013;;Fleck et al., 2008;Galon et al., 2011).It is noteworthy that weeds can be hosts for pests and diseases and produce allelopathic effects, even causing losses of 52% to 100% in rice crops if no control measures are adopted, increasing the cost of production and negatively influencing the efficiency of crop irrigation (Fleck et al., 2008;Silva & Durigan, 2009). 
Aeschynomene is a genus of weed infesting rice, containing many species, also commonly called joint-vetch in rice producing areas in southern Brazil (Fleck et al., 2008).It is present in almost 30% of the area cultivated with rice in Rio Grande do Sul State, causing great losses in productivity and quality of the harvested grain, especially for the regions of the South Coast, Central Depression and West Border, which have the highest infestation of the weed (Andres & Theisen, 2009).Given the poor control of weeds in these regions and also considering the annual cycle, dispersal and propagation by seeds and the difficulty of crop rotation makes joint-vetch the most problematic weed in rice cultivation (Lazaroto et al., 2008).Another factor that enhances the negative effects of competition of weeds and rice is the low competitive ability of this cereal (Velho et al., 2012).Andres & Theisen (2009) observed reduction of over 57% in the grain yield of the rice cultivar BRS Querencia, Bragantia, Campinas, v.74, n. 1, p.67-74, 2015 under competition with joint-vetch at a density between 25 and 31 plants m -2 . In crops, the population of cultivated plants is usually constant whereas the population of weeds varies according to the soil seed bank and the environmental conditions that change the infestation level (Agostinetto et al., 2008(Agostinetto et al., , 2013;;Galon et al., 2011).Thus, in studies on competition, it is necessary to evaluate not only the plant population in the competitive process, but also the influence of the variation in the proportion between the species (Christoffoleti & Victória, 1996). Determination of competitive interactions between crops and weeds requires appropriate experimental designs and methods of analysis, and the conventional replacement series experiments are the most used to clarify these relationships (Agostinetto et al., 2013;Cralle et al., 2003;Crotser & Witt, 2000;Estorninos et al., 2002;Roush et al., 1989;Vida et al., 2006).In these experiments, the crops generally achieve greater competitive ability than weeds.In the field, the effect of weed on culture is mainly related to the level of infestation and not to its individual competitive ability (Vilà et al., 2004).However, when there is competition between individuals of the same genus and/or species, the competitive advantage of the crop can be changed, once both exploit the same ecological niche. In this context, considering the wide distribution, infestation and losses caused by joint-vetch to irrigated rice fields in Rio Grande do Sul State, associated with lack of studies on the interference of this plant on the irrigated rice, this study evaluated the competitive ability between irrigated rice cultivars and a biotype of joint-vetch (Aeschynomene denticulata), at different proportions of plants in the association. In all experiments, we adopted the randomized complete block with four replications.Competitors tested included the rice cultivars recommended for cultivation in the Rio Grande do Sul State, namely BRS Sinuelo CL (Clearfield ® ) of mid-cycle (121-135 days), with modern type plants, good tolerance to lodging and diseases, smooth leaves, thin long grain with smooth hulls (SOSBAI, 2010) and the BRS Querência early cycle (106-120 days) with "modern American", smooth leaves and grain strong stems, high tillering capacity, large number of fertile spikelets, with moderate resistance to diseases (SOSBAI, 2010).These cultivars competed with a biotype of joint-vetch (Aeschynomene denticulata). 
Preliminarily, both for the rice and the joint-vetch in monoculture, we performed an experiment to estimate the plant population in which the dry mass production remains constant.To this end, we used populations of 1, 2, 4, 8, 16, 24, 32, 40, 48, 56 and 64 plants per pot (equivalent to 25, 49, 98, 196, 392, 587, 784, 980, 1,176, 1,372 and 1,568 plants m -2 ).The constant final production was achieved with a population of 24 plants per pot, for all cultivars tested under competition with joint-vetch, which amounted to 587 plants m -2 (data not shown). Two other replacement series experiments were conducted to evaluate the competitiveness of rice cultivars BRS Querência and BRS Sinuelo CL with joint-vetch plants, with different combinations of cultivars and the weed biotype, varying the relative proportions of plants per pot (24:0, 18:6, 24:12, 6:18, 0:24), maintaining a constant total number of plants (24 plants per pot).To establish the desired populations for each treatment and achieve uniformity of seedlings, seeds were previously allocated on trays, and later transplanted to pots. Fifty days after the emergence, we performed the measurement of rice tillering or the number of composite leaves of joint-vetch, height, leaf area (LA) and shoot dry mass (DM) of the rice and of the joint-vetch.The number of tillers (rice) or composite from the soil surface to the last fully expanded leaf.Leaf area was determined with the aid of an electronic integrator (Licor 3100), by collecting all the plants in each treatment.After quantification of LA, shoots were placed in paper bags and dried in a forced air circulation oven at 60 ± 5 °C to constant mass. Data were analyzed by graphs illustrating the variation or relative yield (Bianchi et al., 2006;Cousens, 1991;Radosevich, 1987;Roush et al., 1989).This procedure, also known as the conventional method for replacement experiments, involves the construction of a diagram based on the relative and total yield or variations (PR and PRT, respectively).When the result of PR is a straight line, it means that the species have equivalent abilities.If the result of PR is a concave line, it indicates loss in growth of one or both species.On the contrary, if the PR shows a convex line, it indicates advantage in growth of one or both species.When the PRT is equal to 1 (straight line), there is competition for resources; if it is higher than 1 (convex line), the competition is avoided.If the PRT is less than 1 (concave line), there is mutual impairment of growth (Cousens, 1991). 
Indices of relative competitiveness (CR), relative clustering coefficient (K) and aggressiveness (A) were calculated, in which CR is the comparative growth of rice cultivars (X) in relation to joint-vetch (Y); K indicates the relative dominance of one species over another, and A indicates the most aggressive species.Thus, CR, K and A indices indicate the most competitive species and their joint interpretation indicates with greater precision the competitiveness of species (Cousens, 1991).The rice cultivars X are more competitive than the joint-vetch Y when CR> 1, K x > K y and A> 0; in turn, the joint-vetch Y is more competitive than rice cultivars X when CR <1, K x < K y and A <0 (Hoffman & Buhler, 2002).To calculate these indices, we used 50:50 proportions of the species involved in the experiment (joint-vetch and rice), or populations of 12:12 plants per pot, using the equations: CR= PR x /PR y ; K x = PR x /(1-PR x ); K y = PR y /(1-PR y ); A= PR x -PR y , according to Cousens & O'Neill (1993). The statistical analysis of yield or relative variation included the calculation of the differences in the RP values (DPR) obtained in the proportions 25%, 50% and 75%, relative to values belonging to the hypothetical straight line at the respective proportions, namely 0.25; 0.50 and 0.75 for PR (Bianchi et al., 2006;Fleck et al., 2008).We used the t-test to test the differences in the indices DPR, PRT, CR, K and A (Hoffman & Buhler, 2002;Roush et al., 1989).The null hypothesis to test the differences in DPR and A is that mean values are equal to zero (H o = 0); for PRT and CR, mean values are equal to 1 (H o = 1); and for K, the mean differences between K x and K y are equal to zero [H o = (K x -K y ) = 0].The criterion for considering the curves PR and PRT different from hypothetical lines was the occurrence of significant differences by the t-test in at least two proportions (Bianchi et al., 2006;Fleck et al., 2008).Similarly, for the indices CR, K and A, there are differences in competitiveness when at least two of them showa significant difference by the t-test. The results obtained for rice tillering or number of composite leaves of joint-vetch, plant height, leaf area and dry mass, expressed as mean values per treatment, were subjected to analysis of variance by F-test and, whenever significant, means were compared by Dunnett's test, considering monocultures as controls in such comparisons.For all statistical analysis we adopted p≤0.05. RESULTS AND DISCUSSION The graphical results of the lines of relative yield (PR) in relation to the expected lines, showed that rice cultivars BRS Querência and BRS Sinuelo CL when combined with the joint-vetch (competitor) have similar competitive ability for all the variables studied and at the different proportions tested (Figure 1).Total relative yield (PRT) was less than one, with significant differences for all combinations tested, evidencing a mutual impairment in all variables and proportions analyzed (Figure 1; Table 1). Among the variables, the tillering or the relative number of leaves, the leaf area (LA) and the relative dry mass (DM) showed greater reductions in the PR curve than the relative plant height (Figure 1).The same was observed by Fleck et al. 
(2008), who reported that the height of rice plants was slightly reduced when in competition with red rice.This trend in relation to height is probably associated with the plant strategy to capture more light, thereby forming stems with longer internodes in the case of rice and more etiolated stems in the case of joint-vetch, with lower investment of energy for growth and development.This is because light is the main limiting resource in the community (Almeida & Mundstock, 2001), with a key role in the initial response of a plant with higher competitive potential (Galon et al., 2011;Page et al., 2010).In wheat, Agostinetto et al. (2008) and Rigoli et al. (2008) registered an increase in the height of the plants when in competition with ryegrass or radish.Moreover, Wandscheer et al. (2013) evaluated the competitive ability of corn intercropped with goosegrass (Eleusine indica) and reported that the weed exhibited better competitive results than the crop. In general, the differences relating to rice tillering and joint-vetch leaf number, plant height (EP), AF and MS (Table 1) showed a greater loss of PR in the competitor in comparison with rice cultivars, except for the number of leaves and the EP, in which the joint-vetch was more competitive than the cultivar BRS Sinuelo CL at the proportion 75:25.The competitive ability of rice cultivars, in the presence of the weed, varied depending on the variable (Table 1).There were major differences in tillering and AF, with superiority of the cultivar BRS Querência to BRS Sinuelo CL, especially with a higher proportion of the species in relation to the competitor, with a reduction in this difference as the proportion of competitor was increased.According to SOSBAI (2010), the cultivar BRS Querência stands out compared to other rice cultivars given the high tillering capacity, which corroborates the results found herein.Similar results were verified by Agostinetto et al. (2008), Bianchi et al. (2006), Galon et al. (2011) and Wandscheer et al. (2013), which also observed, when working with many species, the existence of competitive variability according to the development cycle and the intrinsic characteristics of each cultivar when in competition with weeds. For the variables AF, EP and MS, rice cultivars had a similar competitiveness (Table 1).It can be inferred that as they are different cultivars, they express different behavior in the presence of joint-vetch, especially the difference in the cycle, early and medium, respectively for the BRS Querência and the BRS Sinuelo CL.Agostinetto et al. (2013) investigated rice and soybeans competing with crabgrass (Digitaria ciliaris) and observed that the intraspecific competition prevailed for crops, while for the weed, prevailed interspecific competition as the most harmful. Tillering, EP, AF and MS of rice cultivars BRS Querência and BRS Sinuelo CL were reduced when competed with joint-vetch in all tested proportions (Table 2).There was increasing loss to the crop, for the variables evaluated, with increasing proportion of the competitor.The higher the proportion of plants in the association (competitor or crop), the greater the losses in variables of rice or even weed; this demonstrates again the competition of species for the same resources.The results of this study are similar to several others in the literature, such as Cerqueira et al. (2013) when analyzed Spermacoce verticillata competing with rice cultivars Jatobá and Catetão; Agostinetto et al. 
(2008), who evaluated rice competing with barnyardgrass; and Galon et al. (2011), who studied barley in competition with ryegrass. All these authors found greater reductions in morphological variables with increasing proportion of competitor plants in the association.

BRS Querência, in the presence of different proportions of joint-vetch, experienced smaller losses than BRS Sinuelo CL. At the proportion 75:25 (rice:competitor), the number of tillers of BRS Querência was reduced by 26%, while that of BRS Sinuelo CL was reduced by 44% (Table 2).

Likewise, joint-vetch showed greater suppression in the number of leaves when competing with BRS Querência (95%) than with BRS Sinuelo CL (83%) at 75:25 (rice:competitor), demonstrating the higher competitiveness of BRS Querência compared to BRS Sinuelo CL. Nevertheless, for the other variables, the superiority of BRS Querência over BRS Sinuelo CL is not so evident. Importantly, the competition was detrimental to both rice cultivars; according to Bianchi et al. (2006), competition affects production quantitatively and qualitatively because it changes the efficiency of use of environmental resources, such as water, light, CO2 and nutrients. Also, in a plant community, the advantage in competition for resources goes to the species that establish first or that possess intrinsic characteristics favoring competitive ability (height, growth rate, number of tillers, among others).

In addition, when evaluating the relative competitiveness index (CR), we found higher growth in both rice cultivars compared with joint-vetch for all variables (Table 3). Between the cultivars, BRS Querência presented greater competitive ability than BRS Sinuelo CL for all variables tested under competition with joint-vetch. The same was observed for the relative clustering coefficients (K); in all cases, rice showed the higher values for this coefficient (Table 3). Regarding aggressiveness (A), rice was more competitive in all cases. For this index (A), BRS Querência was more aggressive than BRS Sinuelo CL in the presence of the competitor joint-vetch (Table 3).

In the joint interpretation of the graphical analysis of the relative variables and their significance in relation to the equivalent values (Figure 1 and Table 1), the morphological variables (Table 2) and the competitiveness indices (Table 3), there was, in general, a negative interaction between the species, affecting both rice cultivars and the competitor (joint-vetch). Nonetheless, in this case, the competitor suffered greater losses than rice, especially when associated

Table 1. Relative differences for the variables tillering (rice) or number of leaves (joint-vetch), height, leaf area and shoot dry mass of the rice cultivars BRS Querência and BRS Sinuelo CL and of joint-vetch, at 50 days after emergence. UNIPAMPA, Itaqui (RS), 2011/2012. * Significant difference by the t-test (p ≤ 0.05). Values in brackets represent the standard error of the mean.

Table 2. Differences between associated and non-associated plants of the rice cultivars BRS Querência and BRS Sinuelo CL and joint-vetch for the variables tillering (rice) or number of leaves (joint-vetch), height, leaf area and shoot dry mass, at 50 days after emergence. UNIPAMPA, Itaqui (RS), 2011/2012. * Means differ from the control (T) by Dunnett's test (p ≤ 0.05).

Table 3.
Indices of competitiveness between the rice cultivars and the competitor, expressed by relative competitiveness (CR), relative clustering coefficient (K) and aggressiveness (A), obtained in experiments conducted in a replacement series. * Significant difference by the t-test (p ≤ 0.05). Values in brackets represent the standard error of the mean. Kx and Ky are the relative clustering coefficients of the rice cultivars and the competitor, respectively.
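To make the index definitions from the methods concrete, a minimal sketch follows of how CR, Kx, Ky and A could be computed from the relative yields of the 50:50 mixture, together with the one-sample t-tests against the null values described above. The numbers and variable names are hypothetical placeholders used only to illustrate the arithmetic; they are not data from this experiment.

```python
import numpy as np
from scipy import stats

# Hypothetical relative yields (PR) of rice (x) and joint-vetch (y)
# in the 50:50 mixture, one value per replicate.
pr_x = np.array([0.42, 0.45, 0.40, 0.44])   # rice cultivar
pr_y = np.array([0.28, 0.30, 0.26, 0.29])   # joint-vetch

# Competitiveness indices per replicate (Cousens & O'Neill, 1993).
cr = pr_x / pr_y                 # relative competitiveness
k_x = pr_x / (1 - pr_x)          # relative clustering coefficient, rice
k_y = pr_y / (1 - pr_y)          # relative clustering coefficient, joint-vetch
a = pr_x - pr_y                  # aggressiveness

# One-sample t-tests against the null hypotheses used in the analysis:
# CR = 1, A = 0 and (Kx - Ky) = 0.
print(stats.ttest_1samp(cr, popmean=1.0))        # H0: CR = 1
print(stats.ttest_1samp(a, popmean=0.0))         # H0: A = 0
print(stats.ttest_1samp(k_x - k_y, popmean=0.0)) # H0: Kx - Ky = 0
```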
2018-09-13T19:38:19.795Z
2015-03-01T00:00:00.000
{ "year": 2015, "sha1": "f1aab01686d17e641b169d475dd4b9fcf282f6cc", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/brag/a/ByCV3mxmVqVvLGfVfxP3yps/?format=pdf&lang=pt", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f1aab01686d17e641b169d475dd4b9fcf282f6cc", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
234626169
pes2o/s2orc
v3-fos-license
Cranioesophageal Pythiosis in a Horse Background: Pythiosis is a chronic inflammatory disease caused by the oomycete Pythium insidiosum. The illness affects several species, including humans and horses. Equines are the most affected species, with no predisposition for breed, gender, or age. The disease usually presents in cutaneous and subcutaneous forms, and the lesions, which grow quickly and are hard to treat, are located mainly on the extremities. The diagnosis is made via epidemiology, clinical signs, and the macroscopic and microscopic aspects of the lesion. This study describes a case of cranioesophageal pythiosis in a horse, examining its epidemiological, clinical and pathological characteristics. Case: A 12-year-old male quarter horse, weighing 515 kg, was taken to the Veterinary Hospital at the University Center of Espírito Santo (UNESC). The horse had a volume increase in the cranioesophageal region, coughing, difficulty breathing, and a runny nose. On clinical examination, the horse showed enlargement of the submandibular and retropharyngeal lymph nodes, subcutaneous edema in the larynx region, and a temperature of 38.2°C. According to the owner, the cough was recurrent and had lasted about 12 months even after treatment with different kinds of antimicrobials. On radiographic examination, there was a marked decrease in the tracheal lumen and increased soft tissue radiopacity in the region adjacent to the narrowing. The animal was taken to surgery to remove the mass but died because of complications during surgery. The animal's owner did not allow necropsy, but a fragment of the mass in the cranioesophageal region was removed and sent for histological examination. The fragment was fixed in 10% formalin and processed using routine histological methods. Macroscopically, the mass was light yellowish and ulcerated, and it measured 7.0 × 5.0 × 5.0 cm. In the middle of the ulcerated areas, there were yellow and firm granular structures that were consistent with kunkers. Histologically, extending from the tracheal adventitia to the thyroid, there were large numbers of lymphocytes, macrophages, neutrophils, eosinophils, and multinucleated cells (foreign body type) and a well-defined focus of coagulative necrosis, which was surrounded by a thin border of macrophages. Within the necrotic areas, there were negative images of tubuliform hyphae. Grocott's methenamine silver staining showed hyphae with irregular branches, rare septa, and smooth and parallel walls, which were impregnated by silver. Histological sections of the mass were subjected to immunohistochemistry. Hyphae were positive for Pythium insidiosum. Discussion: The diagnosis of pythiosis was based on the macroscopic and histological findings and positive immunostaining for Pythium insidiosum. This report shows an unusual location of the disease in the horse, which made the clinical diagnosis complex. Extracutaneous forms of pythiosis in horses are less frequent than cutaneous forms. The etiopathogenesis of these forms is still unclear, but it has been suggested that previous lesions in the intestinal mucosa caused by plant material or pathogens may be predisposing factors for the appearance of the enteric form of the disease. It was not possible to observe whether the animal's other organs were affected because a necropsy could not be performed. The agent probably penetrated the esophageal epithelium and spread throughout the trachea and thyroid, but its origin cannot be determined.
The radiographic findings in this study are compatible with neoplasms. However, inflammatory processes such as those caused by pythiosis should be included in the differential diagnosis of horses with swelling in the cranial portion of the esophagus.

INTRODUCTION

Pythiosis is a chronic inflammatory disease that is caused by the aquatic oomycete Pythium insidiosum [14]. Pythium species are plant pathogens, but they occasionally cause disease in several animal species [8]. In nature, P. insidiosum completes its biological cycle in stagnant waters. In these environments, the microorganism colonizes aquatic plants for its development and asexual reproduction, with the formation of infectious zoospores. Susceptible hosts acquire the infection upon contact with contaminated environments [9]. Pythiosis occurs in different regions around the world [3]. In Brazil, the disease in horses is considered to be endemic in the Brazilian Pantanal [14] and Rio Grande do Sul [7]. Horses are the species most affected by the disease, and there is no predisposition for breed, sex, or age. The disease usually presents in cutaneous and subcutaneous forms [3], but effects on other organs have been described [5,11,14]. The lesions are characterized by ulcerated masses of varied sizes, which drain serosanguineous fluid. These lesions are usually located on parts of the body that are in contact with water. In these masses, there are light yellowish concretions of necrotic material called kunkers [3]; microscopically, areas of necrosis are observed, with eosinophils, collagenolysis, fibrosis, and a Splendore-Hoeppli reaction involving hyphae [8], which are better visualized with silver-based stains [8]. Molecular and immunohistochemical methods can confirm the diagnosis [8]. The aim was to describe a case of pythiosis in the cranioesophageal region of a horse and to address its epidemiological, clinical, and pathological characteristics.

CASE

A male 12-year-old quarter horse weighing 515 kg was taken to the Veterinary Hospital of the University Center of Espírito Santo (UNESC) and presented with an increased volume in the cranioesophageal region, coughing, difficulty breathing, and runny nose. Clinical examination showed enlarged submandibular and retropharyngeal lymph nodes and subcutaneous edema in the larynx region, and the temperature was 38.2°C. The owner indicated that the cough was recurrent and had lasted about 12 months even with different antimicrobial treatments. This horse lived in a stable and was only used for racing and training. The animal ate and drank in the stall. The water came from an artesian well on the property. A radiograph showed a marked decrease in the tracheal lumen with increased soft tissue radiopacity in the region adjacent to the narrowing (Figure 1A). The complete blood count and biochemical profile showed no alterations apart from leukocytosis resulting from neutrophilia on the leukogram. The animal was taken to surgery to remove the mass, but died because of complications during surgery. The animal's owner did not allow necropsy, but a fragment of the cranioesophageal mass was removed and sent for histological examination. The fragment was fixed in 10% formalin, processed routinely for histological analysis and stained with hematoxylin and eosin (HE). Histological sections were also subjected to special histochemical staining using Grocott's methenamine silver nitrate. Macroscopically, the mass was light yellowish, ulcerated, and measured 7.0 × 5.0 × 5.0 cm.
In the middle of the ulcerated areas, there were yellow and firm granular structures, which were consistent with kunkers (Figure 1B). Histologically, extending from the tracheal adventitia to the thyroid, there were large numbers of lymphocytes, macrophages, neutrophils, eosinophils, and multinucleated cells (foreign body type) and a well-defined focus of coagulative necrosis, which was surrounded by a thin border of macrophages (Figure 1C). Within the necrotic areas, there were negative images of tubuliform hyphae (Figure 1C, insert). Grocott's methenamine silver staining showed that the hyphae had irregular branches, rare septa, and smooth and parallel walls, and were impregnated by silver (Figure 2A).

DISCUSSION

The diagnosis of pythiosis was based on the macroscopic and histological findings and positive immunostaining for P. insidiosum. The location of the disease in this horse was unusual, which made the clinical diagnosis complex, because the volume increase and radiographic changes were suggestive of neoplasm. Extracutaneous forms of pythiosis in horses are less frequent than cutaneous forms, and the etiopathogenesis of these forms is not well understood. It has been suggested that previous lesions in the gastrointestinal mucosa caused by plant material or pathogens may be predisposing factors for the appearance of the extracutaneous form of pythiosis [10,11]. In addition, there are reports that the disease can occur because of active penetration of the agent into the tissues [1]. The agent likely penetrated the esophageal epithelium directly or through some injury caused by coarse food, and it spread around the trachea and thyroid; however, its precise origin cannot be determined. In this study, it was not possible to observe whether the animal's other organs were affected because necropsy could not be performed. Extracutaneous pythiosis has been described in other species, including lung injury in jaguars [2] and canines [6] and sublingual pythiosis in felines [4]. The cough and breathing difficulties observed in this horse probably resulted from compression of the trachea caused by the proliferative inflammatory lesion. A similar case was observed in a horse with severe breathing difficulties caused by pythiosis in Rio Grande do Sul; in that case, the disease affected the soft palate and caused nasopharyngeal obliteration, which resulted in breathing difficulties [12]. The radiographic findings in this study are compatible with neoplasm. However, inflammatory processes such as those caused by pythiosis should be included in the differential diagnosis of horses with swelling in the cranial portion of the esophagus.

Acknowledgements. This work was supported by Fundação de Amparo à Pesquisa Inovação do Espírito Santo - FAPES.

Declaration of interest. The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper.
2021-05-17T00:02:48.842Z
2020-11-15T00:00:00.000
{ "year": 2020, "sha1": "c8810b9447fd9691104795cb77319453a3fa2b23", "oa_license": "CCBY", "oa_url": "https://seer.ufrgs.br/ActaScientiaeVeterinariae/article/download/102669/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7c88c795917e62b5e28fc2b22b70426e6cccfe84", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
212708441
pes2o/s2orc
v3-fos-license
Fibroblast Interaction with Different Abutment Surfaces: In Vitro Study Background: Attaining an effective mucosal attachment to the transmucosal part of the implant could protect the peri-implant bone. Aim: To evaluate whether chairside surface treatments (plasma of Argon and ultraviolet light) may affect fibroblast adhesion on different titanium surfaces designed for soft tissue healing. Methods: Grade 5 titanium discs with four different surface topographies were subdivided into three groups: argon plasma, ultraviolet light, and no treatment. Cell morphology and adhesion tests were performed at 20 min, 24 h, and 72 h. Results: Qualitative SEM observation of the surfaces was in accordance with the anticipated features. Roughness values ranged from smooth (MAC Sa = 0.2) to very rough (XA Sa = 21). At 20 min, all the untreated surfaces presented hemispherical cells with reduced filopodia, while the cells on treated samples were more spread with broad lamellipodia. However, these differences in spreading behavior disappeared at 24 h and 72 h. Argon plasma, but not UV, significantly increased the number of fibroblasts independently of the surface type, but only at 20 min. Statistically, no combination of surface and treatment favored greater cell adhesion. Conclusions: The data showed potential biological benefits of treating implant abutment surfaces with plasma of argon in relation to early-stage cell adhesion.

Introduction

The long-term success of dental implants depends, among other factors, on the establishment and maintenance of crestal bone levels in relation to the formation of a soft tissue barrier [1]. The peri-implant soft tissues have become a major concern in recent years, as the presence of an effective mucosal attachment at the transmucosal part of the implant can protect the peri-implant bone from bacterial contamination and the oral environment, preventing peri-implant pathologies [1][2][3]. This aspect is of particular relevance in the aesthetic area, where the stability of the gingival margin and papilla is related to the maintenance of crestal bone [3].

Microscopic and Topographic Analysis

On SEM observation, all surfaces were clean, without visible contaminations (Figure 1). Topographic analysis of the machined disc MAC highlighted a smooth surface with circular micro-threads with a depth lower than 2 µm (Figure 1A), with a Sa mean value of 0.2. The other tested surfaces all appeared grooved because of the presence of variably deep parallel micrometric sulci (hence the "micro-grooved" pattern). The UTM surface presented a particular threading with a triangular profile and a pitch of 50 µm (Figure 1B). The XA surface (Figure 1C) was more deeply threaded, with a pitch of 80 µm and a depth of 50 µm. The Sa mean value of this surface was 21, and its Sdr 119%, consistent with its micro-topography. Although anodized, the UTM-Y surface (Figure 1D) resembled that of UTM, whose Sa and Sdr were respectively 0.6 and 2.8%, the same values as UTM.

Wettability

The surfaces were tested for their wetting properties by optical contact angle (OCA) measurements of water drops. As reported in Table 1, MAC showed an average contact angle value of 77°, while UTM and UTM-Y were progressively more hydrophobic (H2O CA being respectively 82° and 96°). On the contrary, XA was the most hydrophilic of all the surfaces.
Cell Morphology and Scanning Electron Microscope Analysis

The behavior of the cells on the different surfaces at the different time points is depicted in Figures 2-5. The analysis of MAC samples suggests that plasma treatment affected the growth/morphology of the cells only during the first phases of their adhesion, while UV treatment did not exert any effect. At T0 (20 min), the cells seeded on MAC (Figures 2 and 5) were hemispherical with reduced and delicate filopodia, while the cells grown on the same plasma-treated surfaces became spindle-shaped with broad lamellipodia and rare filopodia. The effect of plasma of Argon was confirmed also in UTM, UTM-Y, and XA samples: treated discs showed cells with a more spread and extended shape compared to the untreated samples (Figures 2 and 5). Notably, in both control and treated UTM discs, the cells were more numerous on the flanks of the grooved surface. In XA control samples the cells were attached almost exclusively to the bottom of the grooved surface, and some of them showed a spreading morphology (Figures 2 and 5). In the treated samples, cells were more elongated compared to the control and appeared more evenly distributed throughout the sample, adhering not only to the bottom but also to the top and the flanks of the grooves. At T1 (24 h), all samples, both treated and untreated, presented abundant and well-developed cells that exhibited a spread morphology and several cellular extensions (Figures 3 and 5). Fibroblasts in treated and untreated samples were comparable, except for a slight increase in cell area in the plasma-treated samples (Figures 3 and 5). In both control and plasma-treated samples of UTM and UTM-Y, several cells showed a flat, spindle form and appeared preferentially distributed between the ridges, connecting the flanks of the grooved surface (Figures 3 and 5). Fibroblasts grown on the XA control disc appeared as flattened cells, located at the bottom of the ridges, or as elongated cells, between the flanks of the ridges. In the treated samples, instead, many cells grew on the crest of the ridges, showing an unusual morphology (fusiform but markedly swollen in the center), as portrayed in Figures 3 and 5. At T2 (72 h), all the specimens showed the same spread morphology with a similar covered area, but exhibited increased growth compared to the T1 samples. At this time point, the quantitative differences previously described between treated and control discs were strongly reduced (Figures 4 and 5). It is worth noting that only on treated XA surfaces did the cells continue to show the peculiar spindle morphology and grow on the top of the ridges. On the contrary, in the untreated discs, the cells were flattened and located between the ridges or at the bottom of the groove (Figures 4 and 5).

Cell Adhesion

As for the surface treatment, at T0, in all the different titanium samples, plasma of Argon significantly increased the number of adherent fibroblasts compared to the controls (Tables 2 and 3; Figure 6). This difference, however, was no longer statistically significant at T1 and T2 (Tables 2 and 3). Ultraviolet light was less effective than plasma of Argon.
Indeed, at T1, the number of adherent cells was similar to the control and, at T1 and T2, the number of adherent fibroblasts, although higher on UV-treated than untreated surfaces, was not different in a statistically significant way (Tables 2-4; Figure 6). As regards the surface type, no difference could be detected at any time point (Table 4).

Focused Ion Beam Evaluation of Fibroblast Layers

After 72 h, complete coverage of the discs' surface was observed for the UTM and UTM-Y samples. Hence, the FIB column was used to perform a selective ablation of these samples to evaluate the difference in cell layer thickness. The growth pattern of the cells appeared the same in both treated and untreated samples, with fibroblasts completely adhering to the top of the titanium crests. However, UTM and UTM-Y discs treated with either plasma or UV displayed a slight increase in cell layer thickness compared to the untreated discs, with an increase ranging from 30% to 50% (Figure 7).

Discussion

The establishment and maintenance of an efficient soft tissue seal around dental implants and abutments are hallmarks of implant success [22]. The surface properties of implant elements have proven to influence the quality of this mucosal attachment. To improve the interaction between recipient tissues and implant components, different surface modifications have been proposed, among which topography, roughness, and chemical qualities have been the research focus [23,24]. While the effects of surface roughness on the bone response have been widely discussed [25][26][27], scientific data are scarcer with regard to the impact of surface roughness on soft tissue attachment. Some histological investigations in humans and animals support the notion that moderately rough surfaces may favor soft tissue integration [28][29][30].
This observation is in accordance with the present study, where micro-grooved surfaces promoted higher levels of cell adhesion than MAC, although only XA reached statistical significance. Besides roughness, chemical surface modifications such as those obtained by anodization of grade 5 titanium have been suggested to enhance the early biological response of gingival cells compared with machined smooth surfaces [31]. The anodization process was used here to modify UTM, attaining UTM-Y, endowed with a yellowish color (hence the suffix Y). These micro-grooved and anodized micro-grooved surfaces had identical roughness features, but they differed in terms of wettability (Table 1). The anodization process indeed promoted a slight shift of UTM-Y (H2O CA = 96°) toward the hydrophobic regime compared to UTM (H2O CA = 82°). Surface energy plays a relevant role among the surface properties of implant components [32]. More specifically, hydrophilicity may promote cell adhesion, being beneficial during the early stage of wound healing [33]. Both UV light and plasma cleaning can increase surface wettability [19]. In this in vitro model, significantly higher values of cell adhesion could be detected due to the plasma treatment at 20 min, irrespective of the type of surface. The differences, however, lost their statistical significance with time. Interestingly, the wettability of the pristine surfaces, albeit quite different, did not affect fibroblast adhesion in a statistically significant way (Table 1). The data presented here are in line with previously reported outcomes evidencing that plasma treatment is capable of enhancing cell adhesion on titanium surfaces, mostly at the early stage [18,34]. On the other hand, no differences were found between the ultraviolet light group and untreated discs in terms of cell adhesion, unlike other studies where UV treatment seemed to increase this parameter [21]. Whether wettability, which is indeed influenced by both surface topography and chemical composition [35], is sufficient to predict the biological outcome is still a matter of debate. For instance, Gittens et al. [36], in a landmark paper, stated that "available techniques to measure surface wettability are not reliable on clinically relevant, rough surfaces," and they noticed that the behavior of the cell model used was dependent on its differentiation state. Although these authors were working with osteoblast-lineage cells, the consideration sheds light on the delicacy of any cell model, fibroblasts included. In this study, fibroblasts preferentially adhered to the peaks of the roughened surfaces. According to Chang et al. [37], one may speculate that fibroblasts reacted to a fibronectin density possibly higher on activated surfaces than on the untreated controls by forming more adhesion complexes. Whatever the mechanism involved at T0, at T2 no differences could be found between treatment groups. A reasonable explanation could be the duration of the effects of plasma of argon: this treatment is very powerful, but it tends to be most active for a limited period of time, which may not be detrimental for chairside usage. As pointed out elsewhere [21], another aspect worthy of consideration is the saturation effect owing to rapid cell growth on a small surface like a disc. The present work tried to overcome, at least in part, this usual drawback of in vitro studies by resorting to a strong statistical setting. Furthermore, the thickness of the cells overgrowing at T2 was considered carefully.
The FIB column ablation allowed us to show a slight difference in terms of cell stratification, which may suggest promising results for longer observation time points. A bi-dimensional observation, indeed, may fail to detect "vertical growth" following cell stratification. In the present study, FIB, by ablating part of the cell carpet, allowed a tridimensional observation of the cell layers, thus highlighting any possible difference among samples. In particular, UTM and UTM-Y discs were the only samples completely covered with cells at T2, and they displayed a slight increase in cell layer thickness compared to untreated discs, independently of the type of treatment received (UV or plasma of Argon). This qualitative observation suggested that FIB/SEM might be useful for analyzing cell layering in future studies. However, the main limitation of this research is, obviously, the in vitro model itself. In spite of the positive and encouraging outcomes, this study must be confirmed in vivo and, even more compellingly, through clinical trials. Finally, the selected micro-topographies exemplified three of the most common families adopted on abutments (smooth, micro-grooved, anodized), but they did not represent all the possibilities available on the market.

Sample Size

A power analysis was performed by referring to a similar preclinical study that investigated the same topic [38]. Based on these data, mean fibroblast adhesion values of 181 ± 37 and 135 ± 26 at 20 min (p = 0.0039) were projected by setting effect size dz = 1.438, error probability alpha = 0.05, and power = 0.95 (1-beta error probability), resulting in 12 samples per sub-group (G*Power 3.1.7 for Mac OS X Yosemite, version 10.10.3).

Sample Preparation

As portrayed in Figure 8, 884 serially numbered, sterile discs (Sweden & Martina), made of grade 5 titanium, with four different surface topographies were used for this study: machined (MAC); "micro-grooved" Ultrathin Threaded Microsurface (UTM); "micro-grooved" Anodized Ultrathin Threaded Microsurface (UTM-Y); "micro-grooved" Thin Machined (XA). All discs had a diameter of 10 mm and a height of 3 mm. After manufacturing, all the titanium discs underwent the same standard cleaning and sterilization procedure that is used for commercial dental implants. Three discs per surface type underwent surface and micro-topography analyses. Two discs per surface type underwent an analysis of wettability. The remaining 864 titanium discs, i.e., 216 per each of the 4 surfaces, were randomly allocated into three sub-groups of 288 samples as follows:
i. Argon plasma treatment at 8 W and atmospheric pressure for 6 min, using a plasma reactor (Plasma R, Diener Electronic GmbH, Ebhausen, Germany) (test group 1, TG1);
ii. UV treatment [ultraviolet light treatment (Toshiba, Tokyo, Japan) for 3 h (15 W) at ambient conditions; intensity: 0.1 mW/cm2 (λ = 360 ± 20 nm) and 2 mW/cm2 (λ = 250 ± 20 nm)] (test group 2, TG2);
iii. No treatment (control group, CG).
Every treatment subgroup counted a total of 72 samples per surface and was further subdivided into either a cell adhesion group (n = 36) or a cell morphology group (n = 36).
Finally, three computer-generated randomization lists (Random Number Generator Pro 2.08 for Windows, Segobit Software, http://www.segobit.com/) were used to randomly allocate the titanium discs into three sub-groups (T0, T1, T2), each consisting of an equal number of 12 titanium discs. All the computer-generated randomization lists were prepared in advance by an external investigator not involved in the study, and an independent consultant prepared all of the envelopes containing the numbers for randomization, which were opened immediately before the testing procedures.

Topographic Analysis

Area surface roughness parameters at different sites of the implant were obtained by scanning electron microscope (SEM), using an EVO MA 10 SEM (Zeiss, Oberkochen, Germany). In particular, the Stereo Scanning Electron Microscope (SSEM) technique was used. This approach exploits the basic principle of stereo vision to turn conventional SEM images into three-dimensional surface topography reconstructions. Two images of the same field of view are acquired after eucentric rotation at a given angle. This is obtained by changing the angle between the sample and the electron source, by tilting the table bearing the sample. The tilting angle is set and controlled by the instrument control software. The recorded incoming data were the pair of images obtained (stereo pair), the size of the field of view, and the tilting angle, and they were processed using dedicated software (Mex 6.0, Alicona Imaging, Chicago, IL, USA). The three-dimensional images obtained by this process allowed us to measure height profiles or areas and to calculate the different roughness parameters defined by the relevant literature and standards (Table 1). In the present analysis, the SEM images used to build the stereo pairs were obtained at 2000×. Roughness parameters according to ISO 25178 were obtained from reconstructed images of an 80 × 110 micrometer area.

Wettability

To assess the wetting properties of the samples, the optical contact angle (OCA) of a sessile water drop (1 µL in volume) was measured with the OCAH 200 (DataPhysics Instruments GmbH, Filderstadt, Germany).
The integrated high-resolution camera allowed us to acquire the image of the drop on each specimen, while the drop profiles were extracted and fitted with dedicated software (SCA20) through the Young-Laplace method. Contact angles between the fitted function and the baseline were calculated at the liquid-solid interface [39,40].

Cell Culture

To characterize the biological response in vitro, Normal Human Dermal Fibroblasts (NHDF) were used. Fibroblasts were maintained in Dulbecco's Modified Eagle Medium (DMEM). These cells represent an excellent model for studying the dynamics of fibroblast adhesion [31,41-44] and behave in culture similarly to gingival fibroblasts despite some differences; indeed, their fundamental characteristics are almost identical [41]. The culture medium was supplemented with 10% fetal bovine serum (Life Technologies, Milan, Italy), 100 U/mL penicillin and 100 mg/mL streptomycin; cells were passaged at subconfluency to prevent contact inhibition and were kept under a humidified atmosphere of 5% CO2 in air at 37 °C.

Cell Adhesion

Cell adhesion was evaluated on 648 titanium samples using 48-well plates (BD, Milan, Italy). Cells were detached using trypsin for 3 min, carefully counted, and seeded at 3 × 10³ cells/well in 1 mL of growth medium on the different samples. The 48-well plates were kept at 37 °C, 0.5% CO2, for 20 min, 24 h and 72 h. Before and after fixation in 4% paraformaldehyde in PBS for 15 min at room temperature, cells were washed two times with PBS and then stained with 1 µM DAPI (Molecular Probes, Eugene, CA, USA) for 15 min at 37 °C to detect cell nuclei. Samples were analyzed using a Nikon Eclipse T-E microscope with a 4× objective. Cell nuclei were then counted using ImageJ (NIH) software with the "Analyze Particles" tool [51,52].

Scanning Electron Microscope/Focused Ion Beam Analysis

To test whether the in vitro conditions at the longer time points could generate different cell layering, samples at T2 were dehydrated with a graded ethanol series, air-dried, and secured to an aluminum stub with a conductive adhesive carbon disc. Subsequently, the specimens were sputter-coated with a thin layer (30 nm) of gold using a K550 sputter coater (Emitech, Kent, UK) and examined with the Dual Beam Helios Nanolab 600 (FEI, Hillsboro, OR, USA). Micrographs of the samples were acquired by detecting secondary electrons, using an operating voltage of 5 kV and an applied current of 0.17 nA. Additionally, the Focused Ion Beam (FIB) column was used to selectively ablate a small region of the cell layer, making it possible to evaluate its thickness and the interaction between the fibroblasts and the titanium surface.

Statistical Analysis

Data were recorded on an Excel 2011 data sheet (Microsoft Corporation, Redmond, WA, USA) and analyzed using Statistical Analysis Software (SAS; Cary, NC, USA) and GraphPad Prism 6 [53][54][55]. The following independent variables were considered: (1) the type of surface, (2) the type of treatment, and (3) the time of assessment. The number of fibroblasts on the in vitro titanium discs was the primary dependent outcome variable. Data were expressed as means and 95% confidence intervals. To compare the effect of type of surface, type of treatment, time of assessment, and their interaction on the main outcome variable, a general linear model was fitted, using three-way ANOVA with Tukey's corrections for multiple comparisons [56,57].
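As an illustration of the statistical model just described, the following sketch shows how a three-way ANOVA with Tukey post hoc comparisons could be set up in Python with statsmodels. The study itself used SAS and GraphPad Prism; the data file, column names and factor levels below are hypothetical placeholders, not the study data.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical long-format table: one row per disc, with the adherent-cell
# count and the three factors of the design (surface, treatment, time).
df = pd.read_csv("fibroblast_counts.csv")  # columns: cells, surface, treatment, time

# General linear model with all main effects and interactions (three-way ANOVA).
model = smf.ols("cells ~ C(surface) * C(treatment) * C(time)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey HSD post hoc comparison for one factor (here: treatment).
tukey = pairwise_tukeyhsd(endog=df["cells"], groups=df["treatment"], alpha=0.05)
print(tukey.summary())
```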
Conclusion Within its limitations, this in vitro study highlighted the capability of micro-grooved surfaces to attract and distribute cells, suggesting the potential biological benefits of treating implant surfaces with the plasma of argon in relation to early-stage cell adhesion. The positive reported outcomes encourage the use of micro-grooved surfaces and bio-activation in in vivo studies. Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
2020-03-15T13:03:12.639Z
2020-03-01T00:00:00.000
{ "year": 2020, "sha1": "30eb173bd846c2d625f5b08b41c822df6ef581f0", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/21/6/1919/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c3085d4fd7aafc436a3dee8e84a68f3f67a881e4", "s2fieldsofstudy": [ "Medicine", "Materials Science" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
199571460
pes2o/s2orc
v3-fos-license
Assessment of GeneXpert MTB/RIF Performance by Type and Level of Health-Care Facilities in Nigeria Background: Tuberculosis (TB) is the leading cause of death from a single infectious agent, ranking above HIV/AIDS, with an estimated global burden of 10.4 million cases and 600,000 new cases of drug resistance. Only a fraction of drug-susceptible and about 25% of drug-resistant TB cases are detected and treated. Nigeria is ranked 7th among the high TB burden countries, notifies only a small proportion of its estimated cases, and contributes about 8% of the globally missed cases. Setting: Nigeria has adopted GeneXpert MTB/RIF as a primary diagnostic tool; however, geographical coverage of GeneXpert machines by LGAs stands at 48%, with varied access and utilization. Objectives: To assess the association between the type and level of health facilities implementing GeneXpert MTB/RIF and the performance outcomes of the machines in Nigeria. Study Design: Retrospective secondary data analysis of GeneXpert performance for 2017 from the GxAlert database. The independent variables were the type and level of health-care facilities, and the dependent variables were GeneXpert performance indicators (utilization, successful tests, error rates, MTB detected, and rifampicin resistance detected). Results: Only 366 health-care facilities are currently implementing and reporting GeneXpert performance; 86.9% are public and 13.1% private health-care facilities, and only 6.3% of the facilities are primary health care. Of 354,321 tests conducted in 2017, 91.5% were successful, and among the unsuccessful tests, 6.8% were errors. The yield was 16.8% MTB detected (54,713), among which 6.8% had rifampicin resistance. The GeneXpert utilization rate was higher among private health-care facilities (55.8%) compared to 33.3% among public health-care facilities. There was a statistically significant difference in the number of successful tests between public and private health facility-based machines, as determined by one-way ANOVA (F(1,2) = 21.81, P = 0.02), and between primary-, secondary- and tertiary-level health facility-based machines (F(1,2) = 41.24, P < 0.01). Conclusion: Nigeria, with very low TB coverage, should rapidly scale up and decentralize GeneXpert services to the private sector.

and the federal capital territory; despite the large number of GeneXpert machines in the country, the average utilization rate is 27%. 9 The performance challenges in the field were a weak enabling environment for optimal utilization of the machines (infrastructure and human resources [HR]) and programmatic issues related to identification, referral, and sample transportation; appropriate placement of the machines; and maintenance. [10][11][12] Therefore, this study aims to conduct a retrospective secondary data analysis of the performance of GeneXpert MTB/RIF in different types and levels of health-care facilities in Nigeria.

Study design

The study design was a retrospective secondary data analysis of the GxAlert database. The target was all GeneXpert machines connected to the web-based GxAlert system nationwide; this included both private and public health-care facilities and the different health facility levels (primary, secondary, and tertiary). Data were reviewed from January to December 2017. The GxAlert software (SystemOne, Northampton, MA, United States) connects the GeneXpert machine to the Internet, allowing transfer of results to a central, secure database in real time. 13 The automated database was developed for different reporting indicators across all machines and was monitored by a designated individual.
Primary data from patients were uploaded into the individual computers connected to the GeneXpert machines by laboratory staff. All results from the machines were captured in the database based on the unique identification number assigned to each patient. The GxAlert database provides information on the following performance indicators: utilization rate of the machine (number of tests conducted per day/month); proportion of successful tests conducted; error rates; invalid results; MTB detected and rifampicin resistance detected; and type and level of health-care facility, disaggregated into private, public, primary, secondary, and tertiary health-care facilities. Exclusion criteria were machines installed in the last quarter of 2017, machines offline for more than a quarter, and machines primarily used for research activities. The private health facilities were either hospitals or stand-alone laboratories; these facilities could be private for-profit or faith-based (nonprofit), whereas the designation as primary, secondary, or tertiary is based on government classification. The utilization rate of the machine was calculated based on a 2-h turnaround time per test, assuming an average 6-h working period per day within the laboratory (i.e., a four-module machine should run 12 tests per day); over an average of 200 working days per year (excluding weekends and public holidays), this corresponds to 2,400 tests per year. The successful test rate was the proportion of tests with appropriate results (all results minus errors, invalid, and indeterminate results); the error rate was the proportion of tested samples with an error result as determined by the machine; the invalid rate was the proportion of all tests with invalid results; and, lastly, the indeterminate rate was the proportion of all tested samples with an indeterminate result. The GxAlert system was designed with a quality control mechanism to ensure validity and reliability of the data. Quality control measures included the use of a computer barcode scanner to reduce transcription errors, automatic generation of all outcome indicators (test results including errors, invalid, indeterminate, MTB detected, and rifampicin resistance detected) by the GeneXpert machine, and direct upload of indicators to the GxAlert platform.

Statistical analysis

The database from January to December 2017 was reviewed to identify facilities that met the inclusion criteria, which were functional Internet connectivity, reporting for a minimum of three quarters, and complete identification variables on facility level and type. Data cleaning, validation, and quality improvement were done by exporting GxAlert data to Statistical Package for the Social Sciences v16 software (SPSS Inc., Chicago, IL, USA). Each variable was checked for accuracy and consistency, and frequency tables were generated to check for outliers and errors. The level and type of the health-care facilities were double-checked against the National Health-Care Facilities directory.

Strength and limitation of the study design

The strength of this method is the use of an existing database. Variables within the database were automatically uploaded from the GeneXpert machines, thereby reducing the recording and transcription errors that would arise if data were collected manually by staff. All GeneXpert machine performance variables were generated by the machine, eliminating errors from human interpretation and documentation of the results.
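The indicator definitions given earlier in this section can be restated as a short computational sketch. The function and field names below are hypothetical, and expressing utilization as a percentage of the 2,400-test annual benchmark is an assumption consistent with how rates are reported in the results.

```python
def expected_annual_capacity(modules=4, runs_per_module_per_day=3, working_days=200):
    """Expected tests per year: a 2-h run within a 6-h working day gives 3 runs per module per day."""
    return modules * runs_per_module_per_day * working_days  # 4 * 3 * 200 = 2400

def performance_indicators(total, errors, invalid, indeterminate, modules=4):
    """Return the utilization and test-outcome rates defined in the study design."""
    successful = total - errors - invalid - indeterminate
    capacity = expected_annual_capacity(modules=modules)
    return {
        "utilization_rate_pct": 100 * total / capacity,
        "successful_rate_pct": 100 * successful / total,
        "error_rate_pct": 100 * errors / total,
        "invalid_rate_pct": 100 * invalid / total,
        "indeterminate_rate_pct": 100 * indeterminate / total,
    }

# Hypothetical example for one four-module machine over a year.
print(performance_indicators(total=810, errors=55, invalid=12, indeterminate=5))
```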
A limitation of the design is that the analysis was restricted to the variables generated by the machines; other important variables, such as overall facility utilization (general outpatient utilization), turnaround time for maintenance, logistics and supply management of cartridges, and other information documented on the GeneXpert sample request form, were not captured in the analysis. The database had no information on the number of staff performing the GeneXpert assay, staff competency, training and supervision provided to the facility, or whether the laboratory was a stand-alone TB laboratory or integrated with other laboratory services.

Selection/information bias

A likely inherent selection bias for the performance outcome is that machines in primary health-care facilities are likely to have more infrastructure and HR challenges than those in secondary and tertiary health facilities; the utilization rate of GeneXpert machines in private health facilities could equally be influenced by service charges (cost). The interpretation of the association between performance and type of health-care facility should therefore not be made in isolation; it is critical to describe the general context of the different types of health-care facilities. The fact that biodata of the patients and of the health-care workers at the facilities were not captured reduced the chances of information bias.

Ethical consideration

There are three critical ethical considerations in this study: the overall ownership of the database rests with the national TB program; multiple partners are responsible for procurement, installation, and maintenance of the GeneXpert machines nationwide; and different health-care facilities, including the private sector, are involved in providing the services. Permission and consent to use the database were granted by the national TB program, while data analysis was done without unique identification of facility names or of the partners supporting the facilities. The only identification used was the designation of health-care facilities as public, private, primary, secondary, or tertiary. Patient-level details were excluded during the analysis.

RESULTS

Of 366 GeneXpert MTB/RIF machines uploading data during January-December 2017, 318 (86.9%) were in public and 48 (13.1%) in private health-care facilities. Among these, 23 (6.3%) were categorized as primary-level, 287 (78.4%) as secondary-level, and 56 (15.3%) as tertiary-level health-care facilities [Table 1]. The overall number of tests performed in 2017 was 354,321, of which 91.5% had a successful test outcome. The proportion of tests performed was highest among GeneXpert MTB/RIF machines in the secondary health-care facilities. The mean age of the patients tested was 35 years. Among 64,389 patients with known HIV status, the HIV positivity rate was 23.1%. Of 90,783 patients with documented gender, 49.1% were female. The distribution of GeneXpert MTB/RIF machines by type and level of health-care facility and their performance outcomes are shown in Table 2. There were significant differences in the proportions of error, invalid, and indeterminate test outcomes by the level of health facility (primary, secondary, and tertiary) implementing the GeneXpert MTB/RIF machines. The proportion of error test outcomes was higher for GeneXpert machines in private health facilities than for those in public health facilities, but this difference was not statistically significant [Table 2].
However, machines in primary health facilities had the lowest proportion of successful test outcomes (89.3%), compared to 91.6% in secondary-level and 92.2% in tertiary-level facilities (P < 0.01). Tertiary health facility-based machines had the lowest proportions of MTB-positive (P < 0.01) and unsuccessful (P < 0.01) test outcomes compared to those in primary- and secondary-level health-care facilities [Table 2]. Among the 64,389 patients with known HIV status, the positivity rate was higher among patients tested with private health facility-based machines than with public health facility-based machines (P < 0.01). The GeneXpert machine utilization rate by type and level of health-care facility is shown in Table 3. The overall machine utilization rate was 33.6%. The utilization rate of secondary health facility-based machines was lower than that of machines in primary-level and tertiary-level health-care facilities. The GeneXpert utilization rate was higher in private health facilities than in public health facilities [Table 3]. One-way between-groups analysis of variance (ANOVA) was conducted to explore the association of type and level of health-care facility implementing the GeneXpert MTB/RIF machine with performance outcomes [Tables 4 and 5]. There was a statistically significant difference in the number of successful test outcomes between public and private health facility-based machines, as determined by one-way ANOVA (F(1,2) = 21.81, P = 0.02), and between primary-, secondary-, and tertiary-level health facility-based machines (F(1,2) = 41.24, P < 0.01) [Table 4]. Post hoc comparisons using Tukey's honestly significant difference (HSD) test revealed that the mean number of successful test outcomes for secondary health facility-based machines (1.83 ± 0.37, P < 0.01) differed significantly from those for primary (1.84 ± 0.36, P < 0.01) and tertiary (1.84 ± 0.36, P < 0.01) level health-care facilities. There was no statistically significant difference in the number of successful test outcomes between the primary- and tertiary-level health facility-based machines (P = 0.83). While there was no significant difference between the mean numbers of unsuccessful test outcomes in public and private health facility-based machines [Table 5], primary-level health facility-based machines had the highest proportion of unsuccessful test outcomes (F(1,2) = 29.04, P < 0.01) compared to those in secondary- and tertiary-level health facility-based machines. Further analysis using Tukey's post hoc test indicated that the mean differences in the number of unsuccessful tests between the three groups (primary, secondary, and tertiary) were statistically significant (P < 0.01).
DISCUSSION

One major challenge of the TB response in Nigeria is low case finding in both adults and children. The country was recently ranked 7th among the 30 high TB burden countries and 2nd in Africa, and it contributes 8% of the missing TB cases globally. 1 Despite the increasing burden of TB, high unmet needs, and the giant stride of adopting the GeneXpert machine as the primary diagnostic tool, in 2016 Nigeria had one of the lowest case detection rates among the high TB burden countries, with a suboptimal GeneXpert machine utilization rate. The purpose of this study was to determine whether the type and level of health-care facility influence the utilization and quality of GeneXpert services. This would provide useful information and guidance to stakeholders on effective policies for improved detection and treatment of TB. Our findings showed that, although the majority of GeneXpert machines are in the public sector, the utilization rate was higher among the few in the private sector. There was no significant difference between the rates of unsuccessful tests emanating from public and private health-care facilities. However, the error rate from private health-care facilities was slightly higher than the rate in public health-care facilities. The high utilization rate of GeneXpert machines in private health-care facilities reflects the health-seeking behavior of TB patients. Studies have shown that TB patients tend to seek medical care in accessible, less expensive, responsive, and patient-friendly health facilities. [14][15][16] Patients may also use private health-care facilities far from home for fear of stigmatization and to protect confidentiality. 17,18 Geographic location was also an important determinant of an individual's choice of health-care provider. People residing in urban locations, where there is a greater number and variety of private providers, tend to use private facilities more. 19 In line with the above, an adequate number and proper distribution of TB diagnostic and treatment centers would improve access to TB services. This is not the case in Nigeria, as reports have demonstrated a skewed distribution of GeneXpert machines in a limited number of health facilities. 20 Ukwaja et al., 2013, in Ekiti State, Nigeria, reported that more than nine-tenths of patients walked for over 1 h from their homes to access the nearest public health-care facility, reflecting the inadequate number of public health-care facilities in rural settings where they are preferred. 18 The national TB program and other stakeholders need to adopt a patient-centered approach when developing strategies and policies for increasing access to health services, including GeneXpert machine placement. Another important reason for high machine utilization in the private sector is the business-oriented nature of this sector. Although TB diagnostic and treatment services are provided free of charge in private facilities, all patients, irrespective of their health problem, pay administrative fees. 18 In addition, private facilities need to remain in business; as such, infrastructure is often maintained to ensure continuous services with limited interruptions arising from strikes, holidays, staff attrition, and the incessant power outages constantly observed in public health facilities.
This is in line with a previous report that the private sector was more efficient, accountable, or medically effective than the public sector. 21 Some authors have claimed that diagnostic accuracy and adherence to medical management standards are worse among private than public sector care providers, 22 such that private sector providers carry a greater risk of delivering low-quality care. This is at variance with our study, which showed no significant difference between the rates of unsuccessful tests conducted in the two sectors. The slightly higher error rate was largely associated with increased internal temperature of the machine arising from frequent and prolonged usage; consequently, machines with high utilization rates recorded a higher proportion of temperature-related errors. The study also recorded more machine placements but lower utilization in secondary-level facilities than at other levels, with a greater proportion of errors and unsuccessful test outcomes occurring at the primary health facility level. This result is in agreement with previous research conducted in Nigeria by Gidado et al., 2018, which indicated that more machines were installed in secondary- and tertiary-level health facilities on the assumption that they have a relatively stable power supply to sustain the operation of the GeneXpert machine. 20 In the Nigerian health system, services are provided at three levels, namely primary, secondary, and tertiary. The local government areas (LGAs) provide the primary level of care, state governments provide the secondary level of care and technical guidance to the LGAs, and the federal government is responsible for the tertiary level of care as well as policy formulation and technical guidance to the states. 23 The majority of the secondary-level health facilities under the state governments are fully integrated into the TB program, 18 so it was relatively easy to secure their commitment and buy-in during the initial rollout of GeneXpert technology. The usefulness of the GeneXpert test in intensified case finding has been demonstrated; 24,25 however, in agreement with Agizew, 2017, its usefulness depends largely on the proportion of valid test outcomes. 26 An error rate beyond the acceptable limit is an indication of poor performance. In contrast with a recent study conducted in Botswana, where no difference was recorded in the error rates emanating from peripheral and centralized laboratories, 26 our study demonstrated a significantly higher error rate among the primary health-care facilities. This may be attributed to lower-skilled staff in primary health-care facilities; such staff require consistent mentoring for skill advancement and better insight into GeneXpert technology. Furthermore, there has been poor counterpart funding and commitment from government for infrastructural maintenance, so the basic infrastructure and other requirements for optimal performance of the GeneXpert machine are not readily available in most primary health-care facilities.

Conclusion

The involvement of more private health facilities, both faith-based and private-for-profit, in the diagnosis and early referral of patients with pulmonary symptoms could increase case detection.
Furthermore, continuous mentoring of GeneXpert operators, particularly in primary-level health facilities, infrastructural maintenance, and the scaling up of TB diagnostic services to densely populated areas for increased access are some of the factors that would improve GeneXpert utilization and the quality of the test.
Sorting out the Most Confusing English Phrasal Verbs

In this paper, we investigate a full-fledged supervised machine learning framework for identifying English phrasal verbs in a given context. We concentrate on those that we define as the most confusing phrasal verbs, in the sense that they are the most commonly used ones whose occurrence may correspond either to a true phrasal verb or to an alignment of a simple verb with a preposition. We construct a benchmark dataset (http://cogcomp.cs.illinois.edu/page/resources/PVC Data) with 1,348 sentences from the BNC, annotated via an Internet crowdsourcing platform. This dataset is further split into two groups: a more idiomatic group, which consists of those that tend to be used as a true phrasal verb, and a more compositional group, which tends to be used either way. We build a discriminative classifier with easily available lexical and syntactic features and test it over the datasets. The classifier overall achieves 79.4% accuracy, a 41.1% error reduction compared to the 65% corpus majority baseline. However, it is even more interesting to discover that the classifier learns more from the more compositional examples than from the idiomatic ones.

Introduction

Phrasal verbs in English are syntactically defined as combinations of verbs and prepositions or particles, but semantically their meanings are generally not the direct sum of their parts. For example, give in means submit or yield in the sentence "Adam's saying it's important to stand firm, not give in to terrorists." Adam was not giving anything and he was not in anywhere either. Kolln and Funk (1998) use the test of meaning to detect English phrasal verbs, i.e., each phrasal verb could be replaced by a single verb with the same general meaning, for example, using yield to replace give in in the aforementioned sentence. To confuse the issue even further, some phrasal verbs, for example give in in the following two sentences, are used either as a true phrasal verb (the first sentence) or not (the second sentence), even though their surface forms look cosmetically similar.

1. How many Englishmen gave in to their emotions like that?
2. It is just this denial of anything beyond what is directly given in experience that marks Berkeley out as an empiricist.

This paper aims to build an automatic learner which can distinguish a true phrasal verb from its orthographically identical construction with a verb and a prepositional phrase. Similar to other types of MultiWord Expressions (MWEs) (Sag et al., 2002), the syntactic complexity and semantic idiosyncrasies of phrasal verbs pose many particular challenges in empirical Natural Language Processing (NLP). Even though a few previous works have explored this identification problem empirically (Li et al., 2003; Kim and Baldwin, 2009) and theoretically (Jackendoff, 2002), we argue in this paper that this context-sensitive identification problem is not as easy as previously suggested, especially for those more compositional phrasal verbs which are used either way in the corpus, as a true phrasal verb or as a simplex verb with a preposition. In addition, there is still a lack of adequate resources or benchmark datasets to identify and treat phrasal verbs within a given context. This research is also an attempt to bridge this gap by constructing a publicly available dataset which focuses on some of the most commonly used phrasal verbs within their most confusing contexts.
Our study in this paper focuses on six of the most frequently used verbs, take, make, have, get, do, and give, and their combinations with nineteen common prepositions or particles, such as on, in, and up. We categorize these phrasal verbs according to their continuum of compositionality, splitting them into two groups based on the biggest gap within this scale, and build a discriminative learner which uses easily available syntactic and lexical features to analyze them comparatively. This learner achieves 79.4% overall accuracy for the whole dataset and learns the most from the more compositional data, with a 51.2% error reduction over its 46.6% baseline.

Related Work

Phrasal verbs in English were observed, more than two hundred and fifty years ago, to be a kind of composition that is used frequently and constitutes the greatest difficulty for language learners, as noted in the preface of Samuel Johnson's Dictionary of the English Language. They have also been well studied in modern linguistics since early days (Bolinger, 1971; Kolln and Funk, 1998; Jackendoff, 2002). Careful linguistic descriptions and investigations reveal a wide range of English phrasal verbs that are syntactically uniform but diverge largely in semantics, argument structure, and lexical status. The complexity and idiosyncrasies of English phrasal verbs also pose a special challenge to computational linguistics and attract a considerable amount of interest and investigation for their extraction, disambiguation, and identification. Recent computational research on English phrasal verbs has focused on increasing coverage and scalability, either by extracting unlisted phrasal verbs from large corpora (Villavicencio, 2003; Villavicencio, 2006) or by constructing productive lexical rules to generate new cases (Villavicencio and Copestake, 2003). Other researchers follow the semantic regularities of the particles associated with these phrasal verbs and concentrate on the disambiguation of phrasal verb semantics, such as the investigation of the most common particle up by Cook and Stevenson (2006). Research on token identification of phrasal verbs is much sparser than research on extraction. Li et al. (2003) describe a simple system based on regular expressions. Such a method requires manually constructed patterns and cannot make predictions for out-of-vocabulary phrasal verbs; thus, it is hard to adapt directly to other NLP applications. Kim and Baldwin (2009) propose a memory-based system with post-processed linguistic features such as selectional preferences. Their system assumes perfect parser outputs and requires laborious human corrections to them. The research presented in this paper differs from these previous identification works mainly in two aspects. First, our learning system is fully automatic in the sense that no human intervention is needed: there is no need to construct regular patterns or to correct parser mistakes. Second, we focus our attention on the comparison of two groups of phrasal verbs, the more idiomatic group and the more compositional group. We argue that while more idiomatic phrasal verbs may be easier to identify and can reach above 90% accuracy, there is still much room to learn for those more compositional phrasal verbs, which tend to be used either positively or negatively depending on the given context.
Identification of English Phrasal Verbs

We formulate the context-sensitive English phrasal verb identification task as a supervised binary classification problem. For each target candidate within a sentence, the classifier decides if it is a true phrasal verb or a simplex verb with a preposition. The learning algorithm we use is the soft-margin SVM with L2-loss, and the learning package we use is LIBLINEAR (Chang and Lin, 2001). Three types of features are used in this discriminative model. (1) Words: given the window from the chunk before to the chunk after the target phrase, the Words feature consists of the surface string of every shallow chunk within that window; it can be an n-word chunk or a single word, depending on the chunk's bracketing. (2) ChunkLabel: the chunk names within the given window, such as VP, PP, etc. (3) ParserBigram: the bigram of the nonterminal labels of the parents of the verb and the particle. For example, from the partial tree (VP (VB get) (PP (IN through) (NP (DT the) (NN day)))), the parent label for the verb get is VP and the parent label for the particle through is PP; thus, this feature value is VP-PP. Our feature extractor is implemented in Java through a publicly available NLP library via the tool called Curator (Clarke et al., 2012). The shallow parser is publicly available (Punyakanok and Roth, 2001) and the parser we use is from Charniak and Johnson (2005).

Data Preparation and Annotation

All sentences in our dataset are extracted from the BNC (XML Edition), a balanced synchronic corpus containing 100 million words collected from various sources of British English. We first construct a list of phrasal verbs for the six verbs that we are interested in from two resources, WN3.0 (Fellbaum, 1998) and DIRECT. Since these targeted verbs are also commonly used in English Light Verb Constructions (LVCs), we filter out LVCs from our list using a publicly available LVC corpus (Tu and Roth, 2011). The resulting list consists of a total of 245 phrasal verbs. We then search over the BNC and find sentences for all of them. We set the frequency threshold to 25 and generate a list of 122 phrasal verbs. Finally, we manually pick out 23 of these phrasal verbs and randomly sample 10% of the extracted sentences for each of them for annotation. The annotation is done through a crowdsourcing platform. The annotators are asked to identify true phrasal verbs within a sentence. The reported inter-annotator agreement is 84.5% and the gold average accuracy is 88%. These numbers indicate the good quality of the annotation. The final corpus consists of 1,348 sentences, among which 65% contain a true phrasal verb and 35% contain a simplex verb-preposition combination. Table 1 lists all verbs in the dataset; Total is the total number of sentences annotated for that phrasal verb, and Positive indicates the number of examples which are annotated as containing true phrasal verb usage. In this table, the decreasing percentage of true phrasal verb usage within the dataset indicates the increasing compositionality of these phrasal verbs. The natural division line on this scale is the biggest percentage gap (about 10%), between make out and get at; hence, the two groups are split over that gap. The more idiomatic group consists of the first 11 verbs with 554 sentences, 91% of which include true phrasal verb usage. This data group is strongly biased toward positive examples.
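The actual pipeline described above is implemented in Java on top of Curator and LIBLINEAR; the following is only a minimal Python sketch of the same setup, assuming the window chunks, chunk labels, and parse-parent labels have already been produced by external shallow and full parsers (the candidate dictionaries below are hypothetical, and scikit-learn's LinearSVC stands in for LIBLINEAR's L2-loss SVM).

```python
# Minimal sketch of the classification setup described above; this is not the
# authors' Java/Curator pipeline. It assumes each verb-particle candidate has
# already been annotated with shallow chunks and parse-parent labels.
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC  # L2-loss soft-margin linear SVM

def extract_features(candidate):
    """Map one candidate to the three feature types: Words, ChunkLabel, ParserBigram."""
    feats = {}
    # (1) Words: surface strings of the chunks in the window around the target
    for i, chunk in enumerate(candidate["window_chunks"]):
        feats[f"word_{i}={chunk}"] = 1.0
    # (2) ChunkLabel: chunk names (VP, PP, NP, ...) in the same window
    for i, label in enumerate(candidate["window_chunk_labels"]):
        feats[f"chunk_{i}={label}"] = 1.0
    # (3) ParserBigram: parent labels of the verb and the particle, e.g. "VP-PP"
    feats["parents=" + candidate["verb_parent"] + "-" + candidate["particle_parent"]] = 1.0
    return feats

# Two hypothetical pre-annotated candidates (true phrasal verb vs. verb + preposition)
candidates = [
    {"window_chunks": ["to", "get through", "the day"],
     "window_chunk_labels": ["TO", "VP", "NP"],
     "verb_parent": "VP", "particle_parent": "PP"},
    {"window_chunks": ["is", "given", "in experience"],
     "window_chunk_labels": ["VP", "VP", "PP"],
     "verb_parent": "VP", "particle_parent": "PP"},
]
labels = [1, 0]  # 1 = true phrasal verb, 0 = simplex verb + preposition

model = make_pipeline(DictVectorizer(), LinearSVC(loss="squared_hinge", C=1.0))
model.fit([extract_features(c) for c in candidates], labels)
print(model.predict([extract_features(candidates[0])]))
```

With the full 1,348-sentence corpus, 5-fold cross-validation accuracy could then be computed with sklearn.model_selection.cross_val_score over the same pipeline.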
The more compositional data group has 12 verbs with 794 examples, and only 46.6% of them contain true phrasal verb usage; therefore, this data group is more balanced with respect to positive and negative usage of the phrasal verbs.

Experimental Results and Discussion

Our results are computed via 5-fold cross-validation. We plot the classifier performance with respect to the overall dataset, the more compositional group, and the more idiomatic group in Figure 1. The classifier improves only 0.6% when evaluated on the idiomatic group. Phrasal verbs in this dataset are more biased toward behaving like an idiom regardless of their contexts and thus are more likely to be captured by rules or patterns; we assume this may explain some high numbers reported in previous works. However, our classifier is more effective over the more compositional group and reaches 73.9% accuracy, a 51.1% error reduction compared to its majority baseline. Phrasal verbs in this set tend to be used about equally often as a true phrasal verb and as a simplex verb-preposition combination, depending on their context. We argue that phrasal verbs such as these pose a real challenge for building an automatic context-sensitive phrasal verb classifier. The overall accuracy of our preliminary classifier is about 79.4% when it is evaluated over all examples from the two groups. Finally, we conduct an ablation analysis to explore the contributions of the three types of features in our model; their accuracies with respect to each data group are listed in Table 2, with the best performance boldfaced. Each feature type is used individually in the classifier. The Words feature type is the most effective with respect to the idiomatic group and the overall dataset, while the chunk feature is more effective on the compositional group, which is consistent with the linguistic intuition that in negative examples the verb and the preposition usually do not belong to the same syntactic chunk.

Conclusion

In this paper, we build a discriminative learner to identify English phrasal verbs in a given context. Our contributions are threefold. We construct a publicly available context-sensitive English phrasal verb dataset with 1,348 sentences from the BNC. We split the dataset into two groups according to their tendency toward idiosyncrasy and compositionality, and build a discriminative learner which uses easily available syntactic and lexical features to analyze them comparatively. We demonstrate empirically that the high accuracy achieved by such models may be due to the stronger idiomatic tendency of these phrasal verbs; for many of the more ambiguous cases, a classifier learns more from the compositional examples, and these phrasal verbs are shown to be more challenging.
Reconceptualizing Integrated Knowledge Translation goals: a case study on basic and clinical science investigating the causes and consequences of food allergy

Background: Integrated Knowledge Translation (IKT) and other forms of research co-production are increasingly recognized as valuable approaches to knowledge creation, as a way to better facilitate the implementation of scientific findings. However, the nature of some scientific work can preclude direct knowledge to action as a likely outcome. Do IKT approaches have value in such cases?

Methods: This study used a qualitative case study approach to better understand the function of IKT in a non-traditional application: basic and clinical science investigating the causes and consequences of food allergy. Building off previous baseline findings, data were obtained through in-depth interviews with project scientists and steering committee members and complemented by researcher observation. Data were analyzed through an integrated approach to understand how well participants perceived the stipulated project IKT outcomes had been met and to better understand the relationship between different forms of IKT goals, outcomes, and impacts.

Results: We propose a conceptual model which builds temporal continuity into IKT work and understands success beyond the truncated timelines of any one project. The model proposes that project IKT goals be conceptualized through three metaphorical tower blocks: foundational (changing the culture for both scientists and knowledge-users), laying the groundwork (building relationships and networks and sparking scientific inquiry), and putting scientific knowledge to action. Based on this model, this case study demonstrated notable success at the foundational and intermediate blocks, though it did not turn basic and clinical research knowledge into actionable outcomes within the project timespan.

Conclusions: We find that current IKT literature, which situates success as filling a knowledge to action gap, is conceptually inadequate for understanding the full contributions of IKT activities. This work highlights the need to build cultural and scientific familiarity with IKT in order to better enable knowledge to action translation. Improving understanding and communication of science and empowering knowledge-users to engage with the research agenda are long-term strategies to build towards knowledge implementation and lay the groundwork for many future research projects.

Supplementary Information: The online version contains supplementary material available at 10.1186/s43058-023-00473-9.

• While Integrated Knowledge Translation/co-production approaches to research may address some barriers to implementation, the nature of some scientific work can preclude direct knowledge to action as a likely outcome.
• Current literature poorly theorizes how multiple research projects may build towards communal translation goals.
• Grounded in a case study investigating the causes and consequences of food allergy, we provide a conceptual model which theorizes Integrated Knowledge Translation outcomes and impacts beyond the domain of a single project.
• Integrated knowledge translation work should recognize changing the scientific culture as a first, not last, step in facilitating implementation.
Background

Integrated Knowledge Translation (IKT) is one form of co-production where knowledge creation is a collaborative effort between researchers and users who "have the authority to implement the research recommendations" [1]; it is action-oriented and solutions-focused [2]. IKT, therefore, addresses insufficient implementation of research findings by recognizing that how we conduct science shapes subsequent uptake and impact [3], both in steering research to relevant issues and in priming knowledge-users to understand and incorporate findings in their own work [4-6]. IKT can be more challenging as researchers navigate the intricacies of partnership building and relationship management [7,8], and there is a need to better understand outcomes and impacts [9,10], what Kothari and Wathen see as the next major frontier in IKT science [11]. One of the challenges here may be poor articulation in the literature between the goals of IKT (that is, the predefined purpose of enacting IKT; see Table 1), the conditions that drive success, and the outcomes and impacts that emerge. Hence, the literature calls for more theoretically informed approaches to IKT [11,12].

Table 1. Overview of important terms used to inform this work
Outputs: The direct products/deliverables resulting from research activities. Outputs lead to outcomes. Includes reports, publications, presentations, communication strategies, education and training strategies, relationship-building strategies, and more.
Outcomes: Changes resulting from research activities and outputs; can include short-term, intermediate, and longer-term outcomes. Outcomes are measurable.
Impacts: Often used synonymously with outcomes, or as a collective term to encompass outputs, uses, and outcomes (e.g., [13]). Here, we define impacts as a futuristic, identifiable benefit to, or positive influence on, society and other domains [14].
Goals: The predefined target outcomes and impacts of the research activities.
Knowledge Translation (KT): A dynamic and iterative process that includes synthesis, dissemination, exchange, and ethically sound application of knowledge. This process takes place within a complex system of interactions between researchers and knowledge-users, which may vary in intensity, complexity, and level of engagement depending on the nature of the research and the findings as well as the needs of the particular knowledge-user. Usually framed as either end-of-grant or integrated [15,16].

A range of perspectives is used to understand the outcomes and impacts of IKT, when reported [20]. Reflecting IKT's origins, outcomes are often discussed as immediate knowledge to action [10,21] or Knowledge Translation (KT) products [8]. But research has also identified a variety of less tangible yet equally important outcomes such as tacit knowledge, social or
relational capital, and the approach as a means of building relationships between knowledge communities [11,22]. A recent umbrella review on research partnerships sorts outcomes and impacts by scale, such as the individual level (either research scientists or stakeholders), the partnership level (relationships between scientists and stakeholders), the community level, or the research process more wholistically [14]. Beckett and colleagues likewise classify outcomes/impacts at various scales (individual, group, organizational, societal) but additionally consider "paradigmatic" contributions in which outcomes/impacts completely shift how the world is understood, what is legitimate knowledge, and the relationships between knowledge, research, and practice/policy [13]. This parallels Kothari and Wathen's description of appreciating other points of view to converge on a newly created shared perspective [11]. The whole becomes more than the sum of its parts.

Many identified IKT outcomes hint at future programs of work without explicitly conceptualizing that process. For example, Sibbald and colleagues describe how "for many researchers and knowledge-users, the impact was more about laying the groundwork for future research" [9]. On the other hand, many of the characteristics associated with successful IKT partnerships hint at previous IKT work without making this connection explicit, e.g., partnerships built on existing relationships, or the participation of skilled researchers (experienced in IKT) [23]. And, generally, co-production narratives recognize that partnership relationships that start from scratch require significantly more time investment in the early stages for learning and training, developing relationships, building trust, etc., and this may be a barrier to success [24]. Altogether, this creates a landscape in which the detailed inputs, outcomes, and impacts of IKT are documented but little is conceptually or theoretically understood about the relationship of these components to each other, especially beyond the timeline of a single project. Table 1 provides an overview of important terms used to inform this work.

Where many research projects may use IKT approaches for clear knowledge to action outcomes, the pertinence of these IKT goals in other forms of health research is less clear. As explored elsewhere [25,26], the nature of some scientific investigations (e.g., basic, laboratory, discovery research) is slow and incremental and can preclude direct knowledge to action as a likely outcome, and hence as a measure of success. This paper responds to that gap in the literature through a case study of a notably unorthodox application of IKT, that is, its use in a grouping of basic and clinical science substudies investigating the causes and consequences of food allergy. Our purposes here are to (1) explore the IKT outcomes of this case and (2) understand these findings within broader conceptualizations of IKT success.

Case study: IKT in the GET-FACTS project

This is a case study of the IKT "experiment" within the GET-FACTS project, a 5-year biomedical research project funded by the Canadian Institutes of Health Research and led by a coalition of basic and clinical scientists to assess the causes and consequences of food allergy. As represented in Fig.
1, the case study's focus was on the wholistic IKT experiences of the GET-FACTS project, inclusive of the core biomedical research components alongside the IKT activities (e.g., creation of the steering committee, setting target IKT outcomes). The timeline of the case study therefore parallels the duration of the GET-FACTS research activities.

Fig. 1 The case study: IKT in the GET-FACTS project

Food allergies are a potentially life-threatening chronic health issue and public health concern [27], with significant impacts on affected individuals, their families, social and educational settings, and communities, as well as the healthcare system more broadly [28]. Within the global food allergy community, there was a pressing need to better connect knowledge-users with the science after standard clinical recommendations for allergy-prone children to delay the introduction of potentially allergenic foods, such as peanuts, were turned upside down. Though there were hints that previous recommendations were misguided [29], a landmark study in 2015 found that infants with delayed peanut introduction were actually at greater risk of developing food allergies, counter to previous understanding [30]. Clinicians and knowledge-user organizations worldwide scrambled to update and communicate new recommendations while assuring parents they had acted appropriately given the body of knowledge at the time [31-34]. There was recognition from all parties of the importance of knowledge-users being closer to the science and of scientists being able to better communicate their processes to knowledge-users.

Enter the GET-FACTS project. Originally conceived as a united but distinct collection of basic and clinical substudies within three major pillars of investigation (genetic determinants of food allergy and tolerance, environmental impact on functional and immunological tolerance to foods, and novel biomarkers to assess allergy and tolerance), the co-investigators identified the need for active involvement of knowledge-users to ensure the usability of the research. As such, social scientists with expertise in knowledge translation joined the research team, and a knowledge-user-driven IKT agenda was built into the protocol to work on behalf of and alongside the basic and clinical researchers towards achieving IKT goals. A knowledge-user steering committee was created to learn about GET-FACTS research as it was unfolding, assist in interpretation, and identify IKT opportunities and potential outcomes associated with the project. Six representatives from Canadian food allergy policy and advocacy participated in drafting the project proposal and, after it was successful, additional members were invited to ensure comprehensive representation. In total, eight knowledge-users sat on the steering committee, representing organizations involved in food allergy information and advocacy, Canadian health and wellbeing information and advocacy, science policy brokering, government, and public health. Three steering committee members additionally brought personal experience navigating their own or an immediate family member's anaphylactic food allergy.
As part of the case study investigation, year-1 qualitative interviews were conducted with all basic and clinical scientists and steering committee members to better understand the base level of knowledge regarding IKT and expectations for the project work. While not the focus of this paper, the results, presented in depth elsewhere [35,36], provide important context for the direction of IKT activities. Baseline assessments demonstrated optimism from both scientists and steering committee members about the potential outcomes of this IKT application; however, a number of flags emerged. Scientists on the project were aware of the term Knowledge Translation (KT) but primarily framed the activity as "bench to bedside" clinical applications or end-of-grant dissemination activities to the public, having no "integrated" research experiences to draw on. Where project clinicians had experience working with non-researchers (e.g., patients in clinic, patients enrolled in studies), experience among basic scientists was notably limited, to which one participant reflected, "we've never received any formal training in that area" [35]. In the context of IKT, a number of scientists expressed concern about findings being translated too soon without proper contextualization, the relatively slow pace of scientific advancement (particularly so for basic science), and the public's need for quick and immediate answers. Steering committee members, in contrast, had experience with science-policy bridging but raised concerns that, many months into the project, they did not feel knowledgeable about or connected to the science, and they asked for more regular touch points between the scientists and the steering committee. Notably, steering committee members envisioned success (creating a concrete resource for others to draw on, replication of this IKT approach in future research projects) and, conversely, failure (if the approach did not influence the conduct of future research projects) in very future-focused terms [36].

A number of actions were taken in response to this feedback. More touchpoints were created beyond the once or twice annual all-researcher meeting and presentations. Senior scientists representing the various GET-FACTS sub-studies led webinars with time for discussion with the steering committee. This aimed to strengthen steering committee members' familiarity with the project's scientific activities, improve communication, and better build relationships. In addition to the steering committee-created Terms of Reference, which included project outcomes/impacts of interest, an exhaustive process was undertaken with project members (both scientists and steering committee) to better articulate their IKT goals and operationalize activities, outputs, and outcomes through a Performance Measurement Framework (PMF).
The completed PMF outlined three streams of activities and outputs (communication and education, networking and relationship building, evaluation and accountability) to inform envisioned short-term, intermediate, and long-term outcomes. The five PMF-articulated short-term target outcomes were as follows: (1) steering committee members have a greater awareness, understanding, and knowledge about research, (2) steering committee members feel more empowered to contribute to the research process, (3) scientists have increased awareness, understanding, and knowledge about IKT, (4) scientists feel more empowered to contribute to the IKT process, and (5) relationships are strengthened between project scientists and steering committee members. The sole PMF-articulated intermediate target outcome was the creation of an IKT approach for replication in similar types of research. The three PMF-articulated long-term target outcomes were (1) evidence-informed policy and decision making, (2) future scientific research shaped by end users in collaboration with scientists, and (3) maximized choice and minimized risk for Canadians affected by food allergies. Through this process, the steering committee and research team reflected that these target outcomes did not align with common IKT action-focused outcomes (e.g., the outcomes paid little attention to specific knowledge products or to specific changes in food allergy policy and practice) and instead articulated a program of work focused on the nature of research and on building relationships between knowledge-creators and knowledge-users.

Methods

This was a qualitative case study [37,38] of the GET-FACTS research project, with the objective of better understanding the function of IKT approaches within basic and clinical biomedical research. A case study "investigates a contemporary phenomenon within its real-life context and addresses a situation in which the boundaries between phenomenon and context are not clearly evident" [39] and is valuable for capturing that complexity and providing in-depth understanding of phenomena [38,40]. In line with Stake's [41] and Merriam's [42] social constructivist approach to case studies, the researcher may also have a personal interaction with the case, and the context of the case may shift over time, as was relevant in this study. We reported the study using the COREQ (Consolidated criteria for reporting qualitative research) checklist [43] (Additional file 1).
Data collection and analysis

As noted above, baseline in-depth interviews conducted in year 1 with all core GET-FACTS scientists and steering committee members have been reported [35,36], as have early reflections on the challenges of doing IKT in basic and clinical research contexts [25]. These early insights provide important framing for this summative report. Data here were generated from semi-structured in-depth interviews conducted at the conclusion of the project with all (100% response) twelve core GET-FACTS basic and clinical scientists and eight steering committee members. Participants were made aware at in-person project meetings of the opportunity for concluding interviews and were later contacted by ES through email with detailed information. Interview questions queried awareness, understanding, and knowledge about research and IKT, perceived overall project outcomes, and the nature of relationships between researchers and knowledge-users over the life of the project. Being external to the project, ES, a female PhD student with a master's degree trained by SJE in qualitative interview techniques, conducted the interviews to minimize positive response bias. Interviews averaged around 50 min, were conducted over the phone, recorded, and later transcribed verbatim. SJE, AEC, and JD participated in GET-FACTS IKT activities and research meetings; AEC is also a core GET-FACTS clinical research scientist. SJE, AEC, and JD therefore also contribute to the case study analysis with embedded researcher observations. Research ethics board approval was received prior to all research activities (University of Waterloo ORE# 19735).

We adopted an integrated approach for the analysis [44], which was completed in a two-part process. First, in order to understand how well participants perceived the IKT target outcomes to have been met, we created a coding framework which encompassed the same coding as in the baseline qualitative interviews and also included coding relevant to an assessment of the project outcomes as developed mid-project. The coding, therefore, was largely deductive (based on a pre-established coding framework), but sub-codes that emerged inductively were incorporated into the analysis [44]. ES coded each transcript in full, verified for reliability with JD, as overseen by SJE.
The second part of the analysis sought to conceptualize the relationship between different forms of IKT goals, outcomes, and impacts. For this, we enacted an iterative analysis as a back-and-forth conversation [45] between the qualitative data, the literature, and our experiences as researchers working in IKT research broadly and in the GET-FACTS IKT case study specifically. Iterative analyses embrace the researcher's reflexivity as an important part of the process [46,47], acknowledging the interaction between what the "data are telling me" and "what I want to know" [46]. Qualitative transcripts were cleared of previous analysis and coded openly for any references to outcomes and impacts as perceived by participants (conceptual codes), the links between these codes (relationship codes), and any direction associated with these links (participant perspective) [44]. JD completed all initial coding, verified for reliability with ES, as overseen by SJE. Codes, and the relationships between them, were sorted and discussed by the research team in the context of the IKT and co-production literature. Though the first-phase coding was "removed," the findings informed the views of JD, SJE, and ES in discussion of broader theorizing on IKT outcomes as part of the iterative "back and forth" reflection. The research team finalized the model as a simple but productive way to conceptualize IKT outcomes, reflecting a trifecta of the data, our research experiences, and the broader literature. As a final step in this process, the findings from phase 1 of the analysis were mapped onto the conceptualization borne out of phase 2. The results of this conceptualization are therefore presented first and then used to structure the findings on the GET-FACTS target outcomes.

The Tower Model

The need for reframed thinking was evident through the GET-FACTS project, both in the challenges faced in year 1 (e.g., a lack of scientific cultural frameworks for incorporating co-production, poor understanding of the rigor of IKT science) and in how participants constructed outcomes and impacts at the end of the project. We identified as a gap in the literature the need to better understand how one project can support IKT co-production in subsequent projects. We noted that the words "build" or "building" emerged frequently in the interviews, used by all but one steering committee member and two scientists. Conceptually, participants described some IKT outcomes as building upon other IKT outcomes, and project goals are best achieved when all outcomes are done well and activities build "up" over time. This informed the visual metaphor for our model as a tower of building blocks. The conceptual model (Fig.
2) identifies three distinct levels or "blocks" of success created through IKT work. It recognizes success stemming from one project as building blocks towards success in future projects, creating a temporal continuity that has been poorly conceived of to date. The foundational block, titled "changing the culture," includes outcomes/impacts that build understandings of the scientific process, legitimize IKT science, and validate connection and collaboration with and between scientists and knowledge-users as a fundamental aspect of research. It also represents "paradigmatic" shifts [13] in which outcomes/impacts shift how the world is understood, what is legitimate knowledge, and the relationships between knowledge, research, and practice/policy. This can occur at multiple scales, from individual to institutional. The intermediate block(s) build off the former with more targeted applications. Titled "laying the groundwork," this block includes outcomes that enable future research and IKT projects, such as the building of relationships and collaborations between scientists and knowledge-users in specific contexts (e.g., food allergy scientists' relationships with food allergy knowledge-user organizations) or the generation of research areas or questions to be explored in future projects. The upper block(s), titled "knowledge to action," encapsulate more traditional KT outcomes and reflect knowledge to action from a single project, such as knowledge tools, knowledge dissemination, or changes to policy or clinical practice; they additionally include the influence that knowledge-users have on the science of that project (e.g., patient recruitment, reframed interpretation of findings). Any block or multiple blocks may be the IKT goal (targeted outcome) for a project, but building strength from the foundation up not only enables actionable outcomes from a single project but also supports building further with future projects.

Fig. 2 The Tower Model for conceptualizing goals of research

IKT in the GET-FACTS project: target and realized IKT outcomes

GET-FACTS scientists and steering committee members participated in an extensive exercise to articulate a joint vision for IKT activities, which resulted in five target short-term outcomes. The realization of these outcomes was gauged through the case study's concluding qualitative interviews. The intermediate and long-term outcomes envisioned by the steering committee and scientists have timelines beyond the scope of this case study but, as goals, are relevant in connection to the Tower Model. Table 2 re-imagines these envisioned outcomes (short, intermediate, and long term) in relation to the conceptual addition of the Tower Model. Table 2 demonstrates that the GET-FACTS target IKT outcomes were primarily concerned with what the Tower Model classifies as foundational. Additionally, while there was also attention to the intermediate block, the top block of "knowledge to action" was only considered in relation to long-term outcomes. An overview of perceived study outcomes, as assessed through the qualitative interviews with project scientists and steering committee members, is presented in Table 3. Results described here are structured by the three tower blocks and illustrated with quotations from scientist (Sci) and steering committee (SC) participants.
Changing the culture

Scientists self-described as having expanded their knowledge or understanding of IKT as a result of this project, and comparison with the case study's baseline findings [35] supports evidence of this change. For instance, when asked broad questions about the larger concept of KT (e.g., whose job is it to do KT? What is the role of basic/clinical scientists in KT?), scientists originally emphasized other actors as being responsible for KT, whereas by the concluding interviews analyzed here, far greater emphasis was placed on the role of scientists in KT, KT as a shared responsibility, and the importance of two-way dialogue.

Scientists described a change in their perception of IKT as a rigorous and validated body of knowledge, and two even described that their "biggest surprise" in this process was learning the scientific validity of IKT:

Well the entire program was new to me [laughs]! You know, when GET-FACTS started, IKT was kinda something that fluffy people did in corners and not really a hard science. And so I think that the team has done really well at changing perceptions and demonstrating the validity and value of that kind of reorientation of a research question. Which is really what it is. So that was a surprise to me. (Sci 1)

In contrast, steering committee members described familiarity with the rigor of IKT, but many expressed surprise at the openness of the scientists in participating.

Scientists demonstrated fundamental shifts in their knowledge and understandings regarding KT broadly and IKT specifically. For instance, all scientists were able to critically describe advantages and challenges of doing IKT work. The advantages most cited were that it integrates multiple perspectives, knowledge-users receive more accurate information, research is more meaningful to knowledge-users, and knowledge-users are aware of the research process. The challenges most cited were the difficulty of integrating multiple perspectives, the required time investment, and the challenge of communicating science. Additionally, scientists identified specific strategies for scientists and knowledge-users to work together which reflect the literature on IKT best practices, such as involving knowledge-users from the very beginning, communicating and engaging regularly, and including those experienced in IKT.

Though not as much of a shift from the baseline data, steering committee members likewise articulated greater knowledge of IKT stemming from their involvement in the project. All were able to describe differences between end-of-grant and IKT approaches to KT, to describe both advantages and challenges of doing IKT work, and to describe the GET-FACTS approach to IKT in detail.
Table 2. GET-FACTS target outcomes mapped to the Tower Model

Target short-term outcomes (goals):
- Steering committee members have greater awareness, understanding, and knowledge about research (Foundational block, "changing the culture")
- Steering committee members feel more empowered to contribute to the scientific process (Foundational block, "changing the culture")
- Scientists have increased awareness, understanding, and knowledge about integrated Knowledge Translation (Foundational block, "changing the culture")
- Scientists feel more empowered to contribute to the integrated knowledge translation process (Foundational block, "changing the culture")
- Strengthened relationships between GET-FACTS project scientists and steering committee members (Intermediate block, "laying the groundwork")

Target intermediate outcome (goal):
- Creation of detailed IKT approach (Intermediate block, "laying the groundwork," and top block, "knowledge to action")

Target long-term outcomes (goals):
- Evidence-informed policy and decision making (Top block, "knowledge to action")
- Future scientific research is shaped by end users in collaboration with scientists (Intermediate block, "laying the groundwork," and foundational block, "changing the culture")
- Maximize choice and minimize risk for Canadians affected by food allergies (Top block, "knowledge to action")

Table 3. GET-FACTS IKT outcomes

♦ Scientists (7/12) identify this project as changing their knowledge or understanding of KT. This is supported by analysis of questions on KT (e.g., whose job is it to do KT? What is the role of basic/clinical scientists in KT?). Compared to baseline interviews, scientists now place far greater emphasis on the role of scientists in KT (8/12), KT as a shared responsibility (7/12), and the importance of two-way dialogue (6/12)
♦ Scientists (4/12) identify specific strategies for scientists and knowledge-users to work together which reflect the literature on IKT best practices: e.g., involve knowledge-users from the very beginning (4/12), regular communication and engagement (4/12), include those experienced in IKT (2/12)
♦ Scientists reference a change in science practice broadly (4/12) and similar research being conducted using IKT (3/12) as markers of success from the project
♦ Scientists note their "biggest surprise" during the IKT process was learning the scientific validity of KT science (2/12)
♦ Steering committee members are all able to describe the GET-FACTS IKT activities (8/8)
♦ Steering committee members are all able to describe differences between end-of-grant and IKT approaches to KT, and are able to describe both advantages and challenges of doing IKT work (8/8)
♦ Steering committee members describe greater understanding of science (specifically science related to food allergy) compared to before the project (5/8), though some describe no change (2/8)
♦ Steering committee members articulate limitations of scientific research findings (broadly) with regards to their own work and discuss broad implications for how to build research into their organizations and communications strategies (5/8)
♦ Steering committee members describe similar research being done with IKT as a marker of success from the project (4/8)
♦ Steering committee members note their "biggest surprise" during the IKT process was the willingness of scientists to participate (3/8)

In a similar vein, steering committee members, unprompted, articulated limitations of scientific research findings with regard to their own work and discussed broad implications for how to build research into their organizations and communications strategies:
[In regards to the study which upended food allergy recommendations [30]] Many, many of our parents are still very, very confused and we have to tell them that you know what, what you're doing is fine. This is a very, very small test subject matter. There needs to be more studies done before it becomes conclusive. (SC 7)

Finally, where both scientists and steering committee members saw it as an important marker of success from the project that similar research will be conducted using IKT, scientists additionally pointed to success as a change in science practice broadly:

Well I think firstly it sensitizes people about the fact that scientists think one way, people who are affected with a disease think another way, stakeholders and policy makers think a different way. If you don't meet the minds, if people are not all talking to each other, you can't be sensitive to how other people look at problems, how other people need information to solve problems, you know. It's really crucial to not just think you know everything and not think you have all the answers. And I think that's what, to me, this was the most valuable part of this whole exercise, to give you a little bit of humbleness and say, we don't have everything, we don't know all the issues that go into a family or policy maker or group that is trying to sensitize families to best practices, to educate families. How are we going to really build an ecosystem of health together? And this, I think, was the most valuable message of the integrated KT project that we've put into place. (Sci 10)

Laying the groundwork

Both scientists and steering committee members described new and/or strengthened relationships with knowledge-user organizations because of their involvement in GET-FACTS. While the majority of scientists described the building of these relationships as a marker of success from the project, none had specifically identified next research opportunities stemming from these relationships.

Interactions with knowledge-users on the project influenced thinking and the directions future research projects will take:

So I mean, I can't say I walked out of a GET-FACTS meeting and into the lab and did a new experiment. It's not that direct a line, but it's the generation of ideas that sort of gets you thinking in other directions… (Sci 8)

And one steering committee member recalled a moment where, very evidently, the collaboration between scientists and knowledge-users sparked new directions:

I don't know how the scientists feel but, from my point of view, I mean we had a magical moment back [at the last researcher meeting]. We had all these scientists in a room who had done presentations for us and we got to know a little bit over the years and who are just amazing people. They for the first time ever, they all had a thought at the same time that there was something that they could be working on, a research root that they should be working on. And they could make it happen because there was some sort of application due right away and somebody ran off to fill in the application so they wouldn't miss the timeline. And for a brief moment all of us were on the same page, working on a common issue. And for me, that's… that was a magical moment. (SC 1)
A number of scientists described specific intent to adapt the GET-FACTS approach to IKT in upcoming projects that were in formation:

I'm actually going to implement this in a large project that we put forward. I think right from the beginning of the project you have to embrace the concept that you need end users and people who are non-scientists in the planning and in the implementation process of your grants. Knowledge translation is not just publishing a paper or going to a conference where there's a bunch of medical personnel who are going to listen to and you know, you're preaching to the converted. Knowledge translation really is all about making sure that there is a concerted effort right from the beginning of any project to have your stakeholders on board, especially if it's a large clinical project. I mean, perhaps some basic science studies don't need this, but we certainly learned our lesson and I'm implementing that in another large clinical setting that I'm working on now. (Sci 10)

Steering committee members similarly expressed interest in adapting the GET-FACTS approach to IKT for future projects:

I actually have a note to call [IKT lead] and ask if I can share the framework with some of my [colleagues]… it's a big national project that's going underway and they desperately need some integrated knowledge mobilization work. (SC 5)

Finally, most of the scientists and steering committee members were able to describe the GET-FACTS approach to IKT with specificity, suggesting their knowledge could be translated to future research projects.

Knowledge to action

Overall, participants perceived low levels of direct action (e.g., dissemination, tools, changes to policy or practice) emanating from the project through the IKT process at the time of the interviews. As described in Table 2, the IKT PMF focused largely on the foundational and intermediate tower blocks as outcomes, and knowledge to action was not pinpointed as a short-term outcome goal (one exception here was the creation of the IKT PMF itself, as a tool for future research programs). This also reflected the unpublished status of much of the science in the project and the steering committee's agreement to keep all information in confidence until approved for public use (peer-reviewed publication):

I haven't seen any of the output from any of the folks other than what they presented to us which was still work in progress. (SC 1)

However, a majority of the scientists and steering committee members described optimism that the IKT approach would have positive impacts on dissemination at the appropriate time. A limited number of scientists identified change in their research focus or process, and none identified improved access to resources such as patients or samples, though there were discussions of patient recruitment for genetic analysis at the mid-point of the project. However, steering committee members described having subtle impacts on the framing and analysis of research findings through their interactions and the sharing of world views:

Questions that the stakeholders or the steering committee members asked the researchers is a different angle or different interpretation of the results that the researchers hadn't necessarily thought of. (SC 8)

Finally, one scientist expressed disappointment at the focus of IKT activities:

I didn't realize [the intent] was just to conduct kind of a conceptual framework and to kind of have this be an esoteric project.
I had thought that it was actually meant to really have that kind of impact and to actually have that translation to the actual research teams. And, I don't think that was the intent, I think I had misunderstood. And if it was the intent, that unequivocally was a huge fail… (Sci 5)

This sentiment underscores the need to ensure expectations are clearly articulated and communicated among all participants in IKT projects.

Discussion

Changing the culture is a marathon, not a sprint; not only that, it is foundational to IKT [25]. The GET-FACTS project scientists and steering committee members together decided to put culture change at the forefront of the agenda. Over the course of this 5-year case study, significant change occurred in how research scientists understand knowledge and IKT co-production and in how knowledge-users understand science itself and their role in it. For example, where baseline data revealed many scientists having dismissive or skeptical attitudes towards the engagement of knowledge-users in the research process, concluding interviews revealed a transformation in the attitudes of scientists towards such engagement. This change marks a dramatic epistemological shift away from linear, positivistic understandings of knowledge towards the more complex and interpretive epistemological foundations of co-creation. While there is certainly more work to be done for broader change in the norms of scientific culture, the shifts here align with what Beckett et al. [13] describe as paradigmatic.

Less success was evident in moving knowledge into action (the Tower Model's top block). This reflects the convergence of many factors, including the slow, incremental nature of basic science, the lack of relationships between scientists and knowledge-users at the onset of the project, and the need to prioritize foundational/cultural change as the primary focus of efforts. While the vast majority of feedback from scientists and knowledge-users was supportive of the IKT activities, this was not universal, and disappointment was expressed by one scientist about the lack of knowledge products emerging from IKT activities. Further conversation on the purpose and measures of success in IKT is necessary. The IKT literature values the tacit outcomes of co-production [11,15] but, ultimately, scientists, knowledge-users, and funders expect the time and expense of partnering to be for something; this may be challenging to demonstrate over short timeframes.

IKT goals (target outcomes) will vary for every project. The Tower Model is conceptualized to help researchers reflect on these goals and ask: where are you going and why? If the goal is meaningful and actionable change to policy and practice, do you have the foundation in place to facilitate that change? Such considerations draw from the insights of many other approaches to co-production, which have clearly demonstrated the vital importance of building projects on foundations of trust [8,16,48-50]. The Tower Model, however, aligns with IKT practices of prioritizing knowledge to action outcomes as important, albeit here through an extended timeline. Further, complementary to existing work, the model focuses on bringing research to a point where planned action theories, such as the Knowledge to Action (KTA) framework, may be used to enhance implementation efforts [51,52].
Researchers who are trained to think about IKT success through multiple projects over the course of time may spend early-years energy focusing on the lower foundational and groundwork blocks in order to build the capacity for meaningful action in later projects. Knowing the challenging and time-consuming nature of this type of work [12], researchers do themselves a disservice to have to start from scratch every time. There is also a risk of falling victim to what Kothari and Wathen [15] call the positivity bias in IKT: that is, where scientists and knowledge-users enter co-production partnerships assuming there will be definitive evidence on a specific problem and they will together generate solutions that work. This does not reflect the messy and sometimes disappointing and inconclusive reality of doing research. It is therefore imperative that IKT work recognizes success in many forms, as building blocks to future research and future implementation efforts.

Where agencies fund in short increments (1, 2, or, as in this case, 5 years), strong foundations and partnerships are established just as project timelines conclude. Evidence suggests that the periods between funded research may strain co-production relationships [50,53], as resources are understood as critical to maintain strong researcher-knowledge user partnerships [54]. In many cases, meaningful solutions-focused outcomes would be best supported over extended research timelines, and funders should consider this in program implementation. For instance, Centre of Excellence (CoE) schemes, which already focus energy towards knowledge and competence building and cross-community transfer [55], could be harnessed to simultaneously build IKT-specific expectations over the long term (e.g., years 1-5 a focus on foundational cultural change, years 6-10 a focus on strong partnership and networking, years 10+ a focus on relevant knowledge products).

There are a few limitations of this work that bear highlighting. First, there are many good reasons to engage in knowledge co-production, but the Tower Model reflects the IKT emphasis on being action oriented and solutions focused and filling the knowledge to action gap. Not all co-production projects will find this as the north star and this should be interpreted appropriately. Second, despite efforts to minimize positive response bias, we must acknowledge this may be at play in some instances. Finally, though situated within the broader literature, the Tower Model is primarily based on one central IKT project. Further testing and discussion of the model are welcome and encouraged.

Conclusions

In the wake of revelations on the causes of food allergy [30], a disconnect appeared between the nature of incremental research science and the need for public policy to ensure health and safety for those with food allergy. In this context, the GET-FACTS IKT project sought to maximize the impact of its research through deep change in how scientists and knowledge-users understand each other's worldviews. Improving understanding and communication of science and empowering knowledge-users to engage with the research agenda is a long-term strategy to build towards knowledge implementation. Activities focused on these goals build a foundation and lay groundwork for many future basic and clinical research projects, with tremendous potential to ripple onward into meaningful policy and practice.

Table 3. GET-FACTS IKT outcomes
BRCA1 negatively regulates IGF-1 expression through an estrogen-responsive element-like site

The insulin-like growth factor-1 receptor (IGF-1R) signaling pathway is critical for both normal mammary gland development and malignant transformation. It has been reported that IGF-1 stimulates breast cancer cell proliferation and is upregulated in tumors with BRCA1/2 mutations. We report here that IGF-1 is negatively regulated by BRCA1 at the transcriptional level in human breast cancer cells. BRCA1 knockdown (BRCA1-KD) induces the expression of IGF-1 mRNA in MCF7 cells in an estrogen receptor α (ERα)-dependent manner. We found that both BRCA1 and ERα bind to the endogenous IGF-1 promoter region containing an estrogen-responsive element-like (EREL) site. BRCA1-KD does not significantly affect ERα binding on the IGF-1 promoter. Reporter analysis demonstrates that BRCA1 could regulate IGF-1 transcripts via this EREL site. In addition, enzyme-linked immunosorbent assay revealed that de-repression of IGF-1 transcription by BRCA1-KD increases the level of extracellular IGF-1 protein, and secreted IGF-1 seems to increase the phospho-IGF-1Rβ and activate its downstream signaling pathway. Blocking the IGF-1/IGF-1R/phosphoinositide 3-kinase (PI3K)/AKT pathway either by a neutralizing antibody or by small-molecule inhibitors preferentially reduces the proliferation of BRCA1-KD cells. Furthermore, the IGF-1-EREL-Luc reporter assay demonstrates that various inhibitors, which can inhibit the IGF-1R pathway, can suppress this reporter activity. These findings suggest that BRCA1 defectiveness keeps turning on IGF-1/PI3K/AKT signaling, which significantly contributes to increased cell survival and proliferation.

Introduction

Phospho-IGF-1R is detected by immunostaining in about half of breast tumors irrespective of their subtypes, which is associated with poor outcome. 5 The transcriptional regulation of human IGF-1 is not well understood yet. Although mouse Igf-1 is regulated by estrogen via direct binding of estrogen receptor α (ERα) to estrogen-responsive elements (EREs) in its promoter, 6 there is no known consensus ERE in the human IGF-1 promoter. 7,8 Chromatin immunoprecipitation (ChIP) analysis, however, demonstrates that ERα binds to the human IGF-1 promoter region, 8 and human IGF-1 mRNA expression is activated by estrogen in human ovarian and breast cancer cell lines. 7,8 Furthermore, intratumoral IGF-1 protein is elevated in breast cancer patients carrying breast cancer susceptibility gene 1/2 (BRCA1/2) mutations. 9 Although it has been shown that siRNA-based BRCA1 knockdown (BRCA1-KD) induces intracellular IGF-1 levels in primary human mammary gland cells, 9 the underlying molecular mechanism in human normal or tumor cells still remains to be determined.

Germline mutations in the BRCA1 gene drastically increase the risk of breast and ovarian cancers in the individuals who carry them. 10,11 In addition, the level of BRCA1 protein is also often decreased or absent in sporadic breast and ovarian cancers. 12,13 As a tumor suppressor, BRCA1 is involved in the regulation of cell-cycle progression, DNA damage and repair, and maintenance of genomic integrity. 14 Although BRCA1 is not a sequence-specific DNA-binding protein, it functions as a transcriptional modulator via physical interaction with various transcription factors (such as ERα, p53, STAT1, c-Myc, and ZBRK1) and regulates their target gene expression. 15
ERα, a member of the steroid hormone receptor superfamily, is activated by estrogen and has important roles in normal development and tumorigenesis of the breast. 2 BRCA1 interacts with ERα and represses ERα-mediated transcriptional activity either in an estradiol (E2)-dependent 16 or -independent manner. 17 In this paper, we report that BRCA1 represses IGF-1 transcription in an ERα-dependent manner. Our study also suggests that de-repression of IGF-1 transcription by BRCA1 knockdown may induce a positive-feedback loop in an autocrine manner and result in further activation of IGF-1 transcripts through the IGF-1R/PI3K/AKT pathway.

Results

Expression of IGF-1 is negatively regulated by BRCA1. In order to identify genes regulated by BRCA1, we performed microarray analysis using RNA samples from MCF7 cells transfected with siRNA (control versus BRCA1). One of the genes that were significantly upregulated by BRCA1-KD was IGF-1 (data not shown). To further confirm this, we performed quantitative real-time PCR (qRT-PCR) analysis and found that BRCA1-KD significantly increased the level of IGF-1 mRNA in the human breast cancer cell line MCF7 and the prostate cancer cell line DU145, both of which are ERα-positive (Figures 1a and b). However, BRCA1-KD did not significantly change the expression of IGF-2, IGF-1R, and IRS-1 in MCF7 cells (Supplementary Figure S1). Interestingly, BRCA1-KD did not affect IGF-1 gene expression in two ER-negative breast cancer cell lines, MCF10A and MDA-MB-231 (data not shown), suggesting the potential involvement of ERα in the regulation of IGF-1 by BRCA1. In addition, overexpression of wild-type BRCA1 significantly decreased the level of IGF-1 mRNA in MCF7 cells (Figure 1c). To further evaluate estradiol (E2) dependency, we performed qRT-PCR analysis with MCF7 cells treated with siRNA (control versus BRCA1) under either normal growth or E2-stimulated conditions in the absence or presence of an antiestrogen, ICI182780. Under normal growth conditions, BRCA1-KD-induced IGF-1 mRNA expression was significantly but not completely reduced by ICI182780 (Figure 1d), whereas treatment with ICI182780 nearly completely abolished BRCA1-KD-induced IGF-1 mRNA expression in E2-stimulated MCF7 cells (Figure 1e). These results suggest that the induction of IGF-1 mRNA expression is estrogen-dependent in BRCA1-KD MCF7 cells under E2-stimulated conditions. ICI182780 also reduced IGF-1 mRNA expression levels of control-siRNA-treated MCF7 cells in both normal growth and E2-stimulated conditions. Under these conditions, administration of ICI182780 reduced the BRCA1 mRNA expression level in control-siRNA-treated MCF7 cells in both conditions (Figures 1d and e). It has been reported that ICI182780 inhibited E2-induced BRCA1 mRNA induction in ER-positive cells. 18

BRCA1 represses the human IGF-1 promoter through an ERE-like site. Although it is reported that human IGF-1 gene expression is regulated by estrogen in human ovarian and breast cancer cell lines, no known consensus ERE site has been reported in the human IGF-1 promoter. 7,8 Interestingly, the chicken IGF-1 promoter contains an ERE-like (EREL) site, but a reporter construct containing mutations of this EREL site is still activated by estrogen in human hepatocellular carcinoma HepG2 cells. 19
Sasaki et al., 8 however, subsequently demonstrated by ChIP analysis that ERα binds to the human IGF-1 promoter region (−111 to −312) containing this EREL site in human ovarian cancer cell lines and described that this region contains an activator protein 1 (AP1) site. To identify potential sequence elements that involve E2-dependent regulation of the human IGF-1 promoter, we performed sequence analysis of this IGF-1 promoter region. Sequence analysis of this region failed to identify a consensus ERE (GGTCAnnnTGACC) or AP1 (T(T/G)AGTCAG) site. Instead, an EREL site, as previously identified in the chicken IGF-1 promoter, 19 is highly conserved in human, mouse, and chicken IGF-1 promoters (Figure 2a). To determine whether BRCA1 and/or ERα binds to this region, we further performed ChIP analysis under E2-stimulated conditions. The ChIP assay revealed the occupation of both ERα and BRCA1 on the IGF-1 promoter region containing this EREL site in MCF7 cells (Figures 2b and c). BRCA1-KD abolished the interaction of BRCA1 with the human IGF-1 promoter in an E2-independent manner (Figures 2b and c). BRCA1-KD itself did not significantly affect ERα binding to the human IGF-1 promoter in MCF7 cells under estrogen-deprived conditions (Figure 2c, lower). On the contrary, stimulation by E2 markedly increased ERα binding on the IGF-1 promoter in both control and BRCA1-KD MCF7 cells (Figures 2b and c). As expected, the antiestrogen ICI182780 reduced E2-induced ERα binding to the IGF-1 promoter. Next, we prepared three different reporter constructs of the human IGF-1 promoter: (1) IGF-1-1kb-Luc, (2) an IGF-1-EREL-Luc construct containing one copy of the wild-type EREL sequence, and (3) an IGF-1-EREL-Luc construct containing the mutant EREL sequence. In estrogen-deprived MCF7 cells, E2 administration induced reporter activities from both IGF-1-1kb-Luc (Supplementary Figure S2a) and wild-type IGF-1-EREL-Luc (Figure 3a) in a dose-dependent manner. Mutation of the EREL site completely abolished E2-induced expression of the reporter gene (Figure 3a). Under these conditions, we found that 10 nM of E2 induced an approximately five-fold increase in the reporter activity from a control reporter containing a consensus ERE element (Figure 3b). In addition, transient overexpression of BRCA1 suppressed the E2-induced wild-type IGF-1-EREL-Luc reporter activity in a dose-independent manner, whereas little or no effect was observed in the absence of E2 (Figure 3c). Consistently, BRCA1-KD increased the reporter activity from wild-type IGF-1-EREL-Luc in MCF7 cells under E2-stimulated conditions (Figure 3d). Interestingly, BRCA1-KD could induce IGF-1-EREL-Luc reporter activity even in the absence of E2 stimulation (Figure 3d).

Carboxy-terminal domain of BRCA1 has important roles in the regulation of the human IGF-1 promoter. A previous study reports that the amino-terminus of BRCA1 interacts with ERα, whereas the carboxy-terminus of BRCA1 functions as a transcriptional repression domain using a consensus ERE-Luc promoter reporter gene. 20 To determine the effects of BRCA1 tumor-associated mutations on transcriptional regulation by the EREL site of the human IGF-1 promoter, we performed reporter gene assays with wild-type IGF-1-EREL-Luc in the presence of various BRCA1 mutants and wild-type BRCA1 (Figure 4). The wild-type BRCA1 suppressed this reporter activity in the presence of E2 (38.4 ± 2.0%), compared with the pCDNA3-transfected control (100 ± 3.7%).
A tumor-associated BRCA1 mutant carrying the T300G mutation in the amino-terminal RING domain suppressed the reporter activity to similar levels as wild-type BRCA1 (30.7 ± 3.1%). However, one carboxy-terminal BRCA1 mutant, 5382insC, showed markedly reduced suppression of the reporter activity. Similarly, another carboxy-terminal BRCA1 mutant, 5677insA (Y1853term), also showed reduced suppression of reporter activity (74.0 ± 2.4%). Because all three BRCA1 mutants are known to physically interact with ERα, 20 these results indicate that the carboxyl-terminal domain of BRCA1 is important in repressing IGF-1-EREL-Luc reporter activity. In addition, a carboxy-terminal deletion mutant (BamHI (NT); aa 1-1313) did not repress reporter activity at all (107.0 ± 4.9%). These results suggest that the intact carboxy-terminal repression domain has important functions in suppressing E2-induced IGF-1 reporter activity. Interestingly, the carboxy-terminal domain of BRCA1 (BamHI (CT); aa 1314-1863), which lacks the ERα-interacting domain, still has partial repression activity on the wild-type IGF-1-EREL-Luc reporter (61.9 ± 3.5%).

Secreted IGF-1 autocrinely activates the IGF-1R pathway in BRCA1-KD MCF7 cells. To determine the effect of BRCA1-KD on IGF-1 secretion, we measured IGF-1 protein in the culture medium by enzyme-linked immunosorbent assay (ELISA). The culture media harvested from cells treated with siRNA (control versus BRCA1) for 72 h were subjected to ELISA analysis. The amount of the secreted IGF-1 protein was significantly increased in BRCA1-KD MCF7 cells and administration of an IGF-1 neutralizing antibody completely reduced the secreted IGF-1 protein in these cells (Figure 5a). In addition, BRCA1-KD also induced IGF-1 secretion in another ERα-positive cell line, DU145, but not in ERα-negative MCF10A cells (Figure 5b). To further evaluate the effect of IGF-1 induction by BRCA1-KD, we performed western blot analysis. BRCA1-KD induced phospho-IGF-1Rβ (Y1135), while there were barely detectable levels of phospho-IGF-1Rβ in control-siRNA-treated cells (Figure 6a). Phosphorylation of AKT, a downstream effector of the IGF-1R pathway, at S473 was also increased by BRCA1-KD. An increase in phospho-IGF-1Rβ was also observed in the BRCA1-KD DU145 cells (Figure 6b), whereas no significant increase of phospho-IGF-1Rβ was observed in BRCA1-KD MCF10A cells (Figure 6b). Consistently, overexpression of wild-type BRCA1 in MCF7 cells further decreased basal levels of phospho-IGF-1Rβ (Supplementary Figure S3a).

Discussion

There are several prior studies implicating BRCA1 in the regulation of the IGF-1R pathway: (a) BRCA1 negatively regulates IGF-1R transcription via the Sp1 transcription factor; 23 (b) mRNA expression of several IGF-1R axis members (including Igf-1, Irs-1, Igf-1r, and Igfbp2) increases in the Brca1Δ11/Δ11 p53+/− mouse model; 24 and (c) intratumoral IGF-1 protein is upregulated in clinical samples of breast cancer patients with BRCA1/2 mutations. 9 In comparison, our study demonstrated that among IGF axis members (IGF-1, IGF-2, IRS-1, and IGF-1R), IGF-1 is the only transcript that is regulated by BRCA1 in the MCF7 human breast cancer cell line. In contrast to a previous finding in prostate cancer, 25 IGF-1R mRNA levels are not significantly affected by BRCA1-KD in MCF7 cells. Currently, these discrepancies are not understood; differences in the induction of IGF axis members by BRCA1 loss may be due to unidentified genetic backgrounds of human versus mouse or breast versus prostate cells.
Several lines of evidence support the cross-talk between IGF-1R and ERα at different levels. 2,26 For example, IGF-1 induces transcriptional activation of ERα-target genes 21,22 and ERα can be activated by downstream factors of IGF-1R such as MAPK or AKT. 2 In addition, ERα can activate IGF-1R signaling not only by transcriptional activation but also by non-genomic function. Membrane ERα can rapidly induce activation of several kinases including PI3K, ERK, and AKT. 2,26 Our data showed that transcriptional activation of the IGF-1 promoter, induced by BRCA1-KD, is downregulated not only by IGF-1R inhibitors (either a neutralizing IGF-1 antibody or an IGF-1R inhibitor) but also by inhibitors targeting PI3K or AKT. Although the non-genomic function of ERα in the absence of BRCA1 needs further investigation, our data suggest that IGF-1 might be produced at higher levels in breast cancers with a loss of BRCA1 function, which may induce a 'positive-feedback loop' in activating IGF-1R/PI3K/AKT/ERα signaling. In our studies, when endogenous BRCA1 was knocked down, proliferation seemed to rely more on the IGF-1R pathway in MCF7 and ZR-75-1 cells. In fact, phospho-IGF-1R is detected in about 50% of breast cancer cells irrespective of their subtypes and is associated with poor survival rate. 5 Our results suggest that the IGF-1R signaling pathway could be aberrantly overactivated in ERα-positive breast cancer cells with defective BRCA1 (e.g. its low expression level, point mutation, and so on). Therefore, targeting the IGF-1R pathway in various ways could be a potential option for prevention or therapy of BRCA1-defective breast cancers. It is noteworthy that levels of BRCA1 are reduced in sporadic breast cancers without BRCA1 mutations. 12,13 Although most of the established human breast cancer cell lines carrying BRCA1 mutations are ERα-negative, three-dimensionally cultured primary mammary epithelial cells from BRCA1 mutation carriers have heterogeneous ERα status: 32% ERα-negative, 44% mixed, 24% ERα-positive versus 90% ERα-positive in controls. 27 Recently, ER-positive tumors have been identified in BRCA1 mutation carriers that are ≥50 years old at the time of first diagnosis of breast cancer. 28 It has also been reported that approximately 10-36% of breast cancers that occur in BRCA1 mutation carriers are ER-positive. 28 We demonstrated that an insertional mutation (5382insC) and a carboxy-terminal deletion construct of BRCA1 are defective in their ability to suppress wild-type IGF-1-EREL-Luc reporter activity. The 5382insC mutation occurs in approximately 0.4% of the Ashkenazi Jewish population 29 and is the most frequently observed BRCA1 mutation in non-Jewish populations. 30 Like wild-type BRCA1, these BRCA1 mutants still physically interact with ERα by their amino-terminal domains, 20 but their mutation/deletion in the carboxyl-terminal domain may abolish their suppressive function on ERα-mediated transcriptional regulation. We found that BRCA1 has little or no effect on E2-induced binding of ERα to the IGF-1 promoter (Figures 2b and c). This result implies that the binding of ligand-bound ERα to the IGF-1 promoter is independent of BRCA1 binding. Then, how does BRCA1 regulate IGF-1 transcription? Previously, it was postulated that the transcriptional repression of ERα by BRCA1 occurs through estrogen-independent interaction between the amino-terminus of BRCA1 and the carboxy-terminal activation domain (AF-2) of ERα. 20
It was subsequently shown that p300 and cyclin D1 may compete with BRCA1 for ERα-binding and reverse BRCA1-mediated repression of ERα transcriptional activity. 31,32 In our reporter assay, however, the carboxy-terminus-truncated BRCA1 completely lost the repression activity on E2-induced IGF-1-EREL-Luc transcription. These results suggest that the carboxy-terminal repression domain of BRCA1 is further required to suppress E2-dependent ERα transactivation of the IGF-1 promoter. The function of the carboxy-terminal repression domain of BRCA1 is not well understood yet, but BRCA1 interacts with several factors through this domain. In fact, BRCA1 interacts with the transcriptional repressor CtIP 33 and the histone deacetylase complex including HDAC1 and HDAC2 34 through this domain. It has also been reported that association of BRCA1 with HDAC2 epigenetically represses oncogenic microRNA-155 via deacetylation of histones H2A and H3 on its promoter. 35 In our data, BRCA1-KD itself induced IGF-1-EREL-Luc reporter activity in the absence of E2 (Figure 3d). Taken together, transcriptional corepressors may be recruited to ERα by BRCA1. Our data also indicate that transcription factors other than ERα might regulate IGF-1 transcription in BRCA1-defective cancers. First, BRCA1-KD-induced IGF-1 mRNA expression is partially reduced by the estrogen antagonist ICI182780 in MCF7 cells under normal growth conditions, whereas expression of IGF-1 mRNA is completely reduced by ICI182780 in E2-stimulated BRCA1-KD MCF7 cells. This result suggests that other transcription factors, which are activated by serum-containing factors, may induce the expression of IGF-1 mRNA in BRCA1-KD MCF7 cells. Second, the carboxy-terminus of BRCA1 (BamHI (CT); aa 1314-1863), lacking the ERα-interacting domain, still partially represses E2-induced transcription of the wild-type IGF-1-EREL-Luc reporter in MCF7 cells. It is possible that BRCA1 interacts with other transcription factors through its carboxy-terminal repression domain in the regulation of the IGF-1 promoter. Third, the effects of tumor-associated BRCA1 mutants are different between the EREL site of IGF-1 and the consensus ERE. The BRCA1 T300G mutant represses the IGF-1-EREL-Luc reporter activity as strongly as wild-type BRCA1, but did not suppress the consensus ERE-Luc activity. 20 These discrepancies also indicate that additional mechanisms, including factors other than ERα, may exist in the regulation of the IGF-1 promoter. As reported, the EREL site has sequence homology to both consensus ERE and AP1 sequences. 19 Interestingly, it has been reported that BRCA1 can interact with the AP1 family proteins Jun B and Jun D. 36 Thus, transcription factors such as the AP1 family proteins may have important roles through these sequences. Further studies are required to fully understand the exact molecular mechanism by which BRCA1 regulates IGF-1 transcripts on its promoter elements. Recently, it has been shown that the tumor-suppressor function of BRCA1 depends on the BRCA1 carboxy-terminal (BRCT) domain in a mouse model. 37 In our data, the carboxy-terminal domain of BRCA1 is required to repress E2-dependent activation of the IGF-1 promoter. Our data also suggest that dysregulation of IGF-1 expression by loss of BRCA1 function may induce a positive-feedback loop, resulting in further activation of IGF-1 transcripts through the IGF-1R/PI3K/AKT pathway.
Taken together, the failure of several BRCA1 mutants in suppressing IGF-1 expression may be critical in the development, survival, and/or proliferation of certain types of ER-positive breast cancer.

DNA transfection. Expression vectors for wild-type or mutant BRCA1 protein are described previously. 38,39 DNA transfection was performed using Lipofectamine Plus (Invitrogen) as described previously. 39 After 24 h of transfection, cells were plated into either 24- or 48-well plates with normal growth medium. The day after plating, cells were treated with normal growth media containing inhibitors for 48-72 h. All experiments were performed in triplicate and MTT assay was used to measure the viability of cells.

Conflict of Interest

The authors declare no conflict of interest.
Promoting Preventive Behaviors of Nosocomial Infections in Nurses: The Effect of an Educational Program Based on the Health Belief Model

OBJECTIVES: To determine the effect of an educational program based on the Health Belief Model (HBM) on promoting preventive behaviors of nosocomial infections in nurses.

METHODS: In this randomized controlled trial study, 120 nurses working in a hospital in Fasa City, Fars (Iran) were enrolled. The intervention group (n=60) received an educational program based on the HBM while the control group (n=60) did not receive it. A questionnaire consisting of demographic information and HBM constructs (knowledge, perceived susceptibility, perceived severity, perceived benefits, perceived barriers, self-efficacy, performance and cues to action) was used to measure changes toward the prevention of nosocomial infections before, immediately after the intervention and four months after the end of the intervention.

RESULTS: Immediately and four months after the intervention, the intervention group showed a significant increase in knowledge, perceived susceptibility, perceived severity, perceived benefits, self-efficacy, cues to action and performance compared to the control group.

CONCLUSIONS: This study in nurses showed the effectiveness of the educational program based on the HBM in promoting preventive behaviors of nosocomial infections. Hence, this model can act as a framework for designing and implementing educational interventions for the prevention of nosocomial infections.

Introduction

Nosocomial Infections (NIs) are considered as one of the most important problems of health centers around the world, especially in developing countries. They have been associated with consequences such as increased length of hospitalization and increased health care costs (1) and are considered a threat as they increase the spread of infection in the community.
(2) A reduction in NIs incidence can help retrieve patient health and improve economic efficiency. (3) These infections are a major cause of mortality and increased complications among hospitalized patients. Reports show two million NIs resulting in 19 000 deaths among patients and each year. (4) According to the World Health Organization, 7.1 million cases of NIs occur annually and 1 out of every 20 people is infected in hospital leading to the death of 99 thousand people and imposing about 32-26 million dollars on societies. (5) Studies in the East Mediterranean and Southeast Asian regions showed that 11.8 percent of hospitalized patients were affected by NIs. (6) The main objective is to reduce the risk of acquiring NIs by patients, hospital staff, and patients' companions, and preventing transmission of infection by hospital staff and the patients' companions. Nurses play a crucial role in the control and prevention of NIs because they have the highest contribution to the treatment and care of the patient. (7) Nurses can take appropriate measures in this regard such as disinfecting the skin, wearing gloves and masks, changing infusion sets, using caution measures, separating patients, using standard precautions, observing hand hygiene, preventing inadvertent contact with the needle sticks, avoiding exposure to infected respiratory secretions, and applying the principles of infection prevention in hospitalized patients. (8) Findings by Ghadamgahi et al. (3) suggest that many nurses do not have enough knowledge about controlling NIs. Therefore, continuous education is needed to raise nurses' awareness about NIs that can help reduce such infections. (9) Any training program designed to improve performance of nurses with the purpose of reducing NIs would be counterproductive without their awareness about their own practices and attitudes towards NIs control. (10) In order to achieve this goal, it will be necessary to identify factors influencing behavior. Researchers use behavioral models to identify factors associated with behavior. (11) Theories can guide the performance of health educators and can be used during various stages of planning, implementation, and evaluation of a program. (12) Since the 1950s to the present, the Health Belief Model (HBM) has been widely used as a conceptual framework in studies related to health behavior to explain change or maintenance of health-related behaviors and also to guide the interventions related to health behaviors. (13) According to this model, a person adopts preventive health behavior under the influence of factors such as perceived susceptibility, perceived severity, perceived benefits, perceived barriers, cues to action and efficacy. (14) In this model, perceived susceptibility, i.e. attitudes regarding their vulnerability and exposure to the risk of acquiring NIs, as well as Perceived severity, i.e. attitude about the severity and complications of NIs, are measured. The sum of these two factors is the nurses' perceived threat of NIs. Besides perceived threat, the perceived benefits and barriers, i.e. the analysis of the benefits of adopting preventive behaviors against NIs, analysis of potential barriers to appropriate preventive measures, and nurses' perceived capabilities for doing preventive behaviors; as well as cues to action, i.e. doctors and health workers, educational materials, and television and radio, help guide nurses to preventive practices against NIs. 
Zeigheimat showed the effectiveness of health belief model-based education on healthcare behaviors of nursing staff in controlling NIs. (15) Since preventing NIs is a global priority and training members of the health team, especially nurses, can play a significant role in the prevention and control of NIs, this study aimed to determine the effect of education based on the Health Belief Model on promoting nurses' preventive behaviors about NIs.

Methods

This randomized controlled trial study was conducted on 120 nurses working in Vali-e-Asr hospital, Fasa City, Fars (Iran) in 2016. Sample size was estimated based on a previous study by Zigheimat et al.: (15) in the intervention group of that study, the mean and standard deviation of practice before and after the intervention were 37.88 ± 5.78 and 41.9 ± 5.42, respectively. Then, based on the mentioned study and considering β = 0.90, α = 0.05, S1 = 5.78, S2 = 5.42, μ1 = 37.88, and μ2 = 41.9, 55 subjects were estimated to be needed in each group. Therefore, 60 subjects were recruited in each group to compensate for possible attrition. After obtaining written consent from the university authorities, the researcher obtained the consent of hospital officials as well. Random allocation of the intervention method was used to select 120 nurses who were assigned to intervention (n=60) and control groups (n=60). Figure 1 presents the study flow diagram. At the beginning of the study, one of the researchers introduced himself to the participants and explained the objectives of the study to them. The pre-test questionnaire was administered to both groups. Inclusion criteria were: having at least a diploma in nursing, consent to participate in the study, and work experience for at least three months in the ward. Exclusion criteria encompassed refusing to continue participation in the study and lack of cooperation due to illness and leave. The educational intervention for the intervention group consisted of eight training sessions of 55-60 minutes including lecture, group discussion, questions and answers, as well as posters, pamphlets, videos, and PowerPoint presentations. The focus of the sessions was as follows: first and second: introduction to NIs and symptoms and complications of infections; third and fourth: the principles of proper sterilization of hands, use of gloves and masks, isolation of patients, hand hygiene, hospital waste disposal, and prevention of contact with contaminated respiratory secretions; fifth and sixth: benefits and barriers to the use of standard precautions; seventh: the role of self-efficacy, standard precautions, and adoption of preventive behaviors against NIs were discussed; and, eighth: the past materials were reviewed, some manuals were distributed among the participants, and educational resources were introduced to them. Training sessions were held in the hospital's conference room and were planned in a way that did not interfere with their schedules. Immediately after the intervention, both intervention and control groups completed the questionnaire. To protect and promote the efforts of individuals in the intervention group, a training SMS (Short Message Service) about NIs was sent to each participant in the intervention group on a weekly basis. They also participated in a monthly training session held for retraining and follow-up activities. Four months later, both groups (intervention and control) completed the questionnaire.
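The sample-size estimate quoted above can be illustrated with the standard normal-approximation formula for comparing two means. The sketch below is only a worked illustration using the figures quoted from Zigheimat et al.; the exact formula, power convention and rounding the authors used are not given in this excerpt, so the published figure of 55 per group need not match the output exactly.

```python
from scipy.stats import norm

# Inputs quoted in the text (from Zigheimat et al.): practice scores
mu1, mu2 = 37.88, 41.9        # group means
s1, s2 = 5.78, 5.42           # group standard deviations
alpha, power = 0.05, 0.90     # two-sided alpha and target power

# Normal-approximation sample size per group for comparing two means:
# n = (z_{1-alpha/2} + z_{power})^2 * (s1^2 + s2^2) / (mu1 - mu2)^2
z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)
n_per_group = (z_alpha + z_beta) ** 2 * (s1 ** 2 + s2 ** 2) / (mu1 - mu2) ** 2

print(round(n_per_group))  # ballpark n per group; the published figure (55)
                           # will differ if another formula or adjustment was used
```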
The questionnaire was developed based on the Health Belief Model and included: knowledge, perceived susceptibility, perceived severity, perceived benefits, perceived barriers, cues to action, a performance checklist, and demographic information. The performance of nurses regarding the adoption of preventive behaviors against NIs based on standard precautions was examined. The questionnaire was developed based on other studies and sources, (3,8,15) the Iranian National Nosocomial Infections Surveillance (NNIS), and also a survey of 15 experienced professors and faculty members in the field. In this regard, the panel of experts was asked to evaluate the clarity, simplicity, and relatedness of all items. Based on their comments, some items were excluded and some were modified. The content validity of the instrument was confirmed and the CVI was 85 percent for susceptibility, 87 percent for perceived severity, 81 percent for perceived benefits, 81 percent for perceived barriers, 89 percent for cues to action, 84 percent for perceived self-efficacy, and 92 percent for performance. The reliability of the questionnaire was also measured and the Cronbach's alpha coefficient was higher than 0.7 in all areas of the questionnaire. The high internal consistency of constructs was also confirmed (perceived susceptibility = 0.82, perceived severity = 0.78, perceived benefits = 0.8, perceived barriers = 0.79, perceived self-efficacy = 0.81, cues to action = 0.83, and performance = 0.84). The test-retest method was used to examine the reliability for stability of the instrument. To this end, first, the questionnaire was administered twice to 10 subjects with a 10-day interval. Then, the correlation between the two sets of test scores was calculated: the Intraclass Correlation Coefficient (ICC) was 0.80 for perceived susceptibility, 0.75 for perceived severity, 0.82 for perceived benefits, 0.81 for perceived barriers, 0.82 for cues to action, 0.80 for perceived self-efficacy, and 0.83 for performance. Besides the HBM constructs, the questionnaire collected demographic data on age, sex, education, work history, and workplace of subjects. The items included: 10 items on awareness (True=1 and False=0); 5 items on perceived susceptibility, 5 items on perceived severity, 5 items on perceived self-efficacy, 5 items on perceived benefits, 5 items on perceived barriers, and 5 items on cues to action (a five-point Likert scale from strongly agree to strongly disagree). The performance checklist included 17 items (Yes=1 and No=0, for a total score of 0-17). For ethical considerations, the approval of the Fasa University of Medical Sciences and the consent of all nurses participating in the study were obtained. The participants were assured that their information would remain confidential. After the study, some educational booklets on preventive behaviors against NIs were distributed among the nurses. The collected data were coded and analyzed via SPSS version 22 using Chi-square, independent samples t-test, Mann-Whitney, Wilcoxon, and Repeated Measures ANOVA at a significance level of 0.05.

Results

The mean age of subjects was 27.8±5.5 years in the intervention group and 28.12±5.3 years in the control group. The mean work experience period was 10.1±5.2 years in the intervention group and 9.9±5.6 years in the control group.
The independent t-test showed no significant difference between the two groups. Table 1 shows the demographic data including sex, education level, employment status, marital status, and place of work. The predominant charachteristics of the groups were: married women, with Bachelor's education, with contractual employment, and they work principally in emergency and medical-surgical services. Based on chi-square test there was no significant difference between the two groups. The results showed no significant differences between the intervention and control groups before the intervention in terms of level of knowledge, perceived susceptibility, perceived severity, perceived benefits, perceived barriers, selfefficacy, performance and cues to action. However, immediately after the intervention and four months after the intervention, the intervention group showed a significant improvement compared to the control group in each of these areas except perceived barriers. The perceived barriers component significantly decreased for the intervention group compared to the control group (compare the difference between the mean score of the groups) ( Table 2). Discussion Reducing incompliance with hygiene guidelines is considered as the end goal of health education. The Health Belief Model used in this study as the theoretical framework, is an applied model, which has been widely used by various scholars for planning and evaluating interventions aimed at behavior change. (16) The results of the present study, which was carried out to evaluate the effect of HBMbased education on nurses' preventive behaviors against NIs, confirm the efficiency of this model in changing the nurses' behavior. The findings showed a significant increase in nurses' awareness four months after the educational intervention, while no significant change was observed in control group in this area. This finding of this study is consistent with the results of other studies, such as Zigheimat, (15) Suchitra, (17) and Ghaffari. (18) In a study conducted in India by Sabane(19) on 108 nursing students, the subjects' awareness increased significantly after the intervention. The increase in intervention group's awareness after educational intervention shows the effects of the training sessions on the nurses. Training and awareness raising are among the most effective methods of combating NIs. Obviously, continuous awareness alongside effective methods of disinfection and sterilization can decrease infections. The mean scores of the intervention group on perceived susceptibility and perceived severity (perceived threat) after educational intervention showed a significant increase compared to before the intervention, but did not change in the control group. Zigheimat's study showed that in order to strengthen the health beliefs of nurses, one of the fundamental steps would be to create a sense of vulnerability to NIs among them. (15) The results of this study are consistent with theory-based studies by Gorman et al. (20) and Tehrani et al. (21) Health Belief Model is a useful model to interpret communities' responses to infectious disease. (22) Highlighting the severity of a condition (disease) in the community can cause people to see themselves at a higher risk and probably to take a series of health-related measures. The results of this study showed that the intervention group's scores on the perceived benefits in the post-test increased significantly compared to the control group, but their score on the perceived barriers decreased. 
Since the nursing staff are at a risk due to the nature of their job, which requires dealing with patients and doing risky behaviors, one of the fundamental steps in order to create a positive attitude among them and strengthen their health beliefs is to create a sense of being vulnerable to such infections among them and to highlight the benefits of such an attitude for nurses and the reduced costs for patients and hospitals. In addition, the nurses' perceived barriers should be reduced by regular interventions. Zigheimat et al. found that the perceived benefits of nurses increased after the educational intervention, but their perceived barriers declined. (15) Noruzi et al. (23) showed that the perceived benefits of nurses were mostly about their own and their families' health interests as well as the treatment of the patients. Shalanski's study (24) showed that perceived barriers were the most important obstacle for adopting new behaviors. Ghadamgahi et al. (3) and Ghanbari et al. (25) evaluated the perceived benefits and perceived barriers of nurses and found that they were at an acceptable level. The results of this study showed that the self-efficacy of nurses in the intervention group enhanced after educational interventions. Pike recommended that self-efficacy could be used in the clinical environment for stimulating and motivating nursing students for professional development. (26) In another study, self-efficacy was mentioned as an important factor in academic nursing education. (27) Results of the study by Zigheimat et al. (15) showed an increase in self-efficacy of nurses in the intervention group in controlling NIs after educational intervention compared to the control group. The results of this study in this regard are consistent with other studies. (21,28,29) The mean score of the intervention group on cues to action showed a significant increase after educational intervention compared to the control group. Zigheimat et al. (15) also found that cues to action mean score increased after educational intervention. In studies by Masood Hussain (30) and Boyce (31) training seminars were found to be the most important cues to action. Ghanbari (25) found that workshops were the most important cues to action. (25) Jeihooni et al. (32) found an increase in the mean scores on cues to action after the educational intervention. In this study, the mean score of the intervention group on preventive behaviors against NIs increased after the intervention. In the same line, Suchitra et al. (17) concluded in their study that education had a positive impact on performance of health care workers regarding NIs. They also pointed to the need to develop a system of continuing education for health care workers (17) . Zigheimat et al. (15) suggested that nurses' performance in controlling NIs increased after the intervention in the intervention group compared to the control group. Studies by Kaewchana (33) and Fazrzan (34) showed that training improved the subjects' hand washing performance. Javaheri Tehrani (21) showed that training based on HBM enhanced women's performance regarding the urinary tract infection. The results of this study show the effectiveness of the intervention program and the need for educational interventions on preventive behaviors against NIs. As a result of the HBM-based education, the intervention group's scores on the components improved significantly leading to better NIs preventive behaviors. 
Given the importance of NIs, the need for fundamental solutions and proper planning to prevent them is felt. Educational programs for nurses, doctors and other health care personnel as well as educational programs on radio and television are essential. Therefore, educational interventions should improve the perceived susceptibility and severity of people about compliance with NIs control standards. Analyzing the benefits associated with infection control standards, removing barriers, and increasing self-efficacy and cues to action among nurses can help them change their behavior in controlling NIs. Periodic and in-service training should also be held for nurses based on the HBM and other health training and promotion models. One of the limitations of this study was selfreporting of performance on NIs control by nurses.
Efficacy of Intrathecal Morphine in a Model of Surgical Pain in Rats Concerns over interactions between analgesics and experimental outcomes are a major reason for withholding opioids from rats undergoing surgical procedures. Only a fraction of morphine injected intravenously reaches receptors responsible for analgesia in the central nervous system. Intrathecal administration of morphine may represent a way to provide rats with analgesia while minimizing the amount of morphine injected. This study aimed to assess whether morphine injected intrathecally via direct lumbar puncture provides sufficient analgesia to rats exposed to acute surgical pain (caudal laparotomy).In an initial blinded, randomised study, pain-free rats received morphine subcutaneously (MSC, 3mg.kg-1, N = 6), intrathecally (MIT, 0.2mg.kg-1, N = 6); NaCl subcutaneously (NSC, N = 6) or intrathecally (NIT, N = 6). Previously validated pain behaviours, activity and Rat Grimace Scale (RGS) scores were recorded at baseline, 1, 2, 4 and 8h post-injection. Morphine-treated rats had similar behaviours to NaCl rats, but their RGS scores were significantly different over time and between treatments. In a second blinded study, rats (N = 28) were randomly allocated to one of the following four treatments (N = 7): MSC, 3mg.kg-1, surgery; MIT, 0.2mg.kg-1, surgery; NIT, surgery; NSC, sham surgery. Composite Pain Behaviours (CPB) and RGS were recorded as previously. CPB in MIT and MSC groups were not significantly different to NSC group. MSC and MIT rats displayed significantly lower RGS scores than NIT rats at 1 and 8h postoperatively. RGS scores for MIT and MSC rats were not significantly different at 1, 2, and 8h postoperatively. Intraclass correlation value amongst operators involved in RGS scoring (N = 9) was 0.913 for total RGS score. Intrathecal morphine was mostly indistinguishable from its subcutaneous counterpart, providing pain relief lasting up to 8 hours in a rat model of surgical pain. Further studies are warranted to clarify the relevance of the rat grimace scale for assessing pain in rats that have received opioid analgesics. Introduction Rodents remain the most commonly used animals for fundamental science and translational medicine [1]. Public perception and acceptance of animal models for biomedical research relies on the respect of fundamental ethical rules, such as the systematic implementation of the 3Rs: replacement, reduction and refinement [2][3][4][5]. In particular, refinement, which aims to reduce to a minimum any pain or distress caused by research procedures [6] can be applied to any surgical procedures by providing effective analgesia. The need to adopt this approach is a requirement of the European Directive EU 2010/63/EU (Article 14) [4]. Whilst the vast majority of rodent surgical research procedures are performed under anaesthesia, surveys indicate that the use of perioperative analgesics remains low; e.g. in less than 25% of rats undergoing surgery [7][8][9]. The main reported reasons for this are concerns over interactions with results of studies or potential negative side-effects from the analgesic themselves, or simply that there was no perceived need for using pain relief as a consequence of an inability to effectively recognise pain [7,8,10]. Morphine, widely used in humans since the early years of the 19 th century, is a full agonist of the mu-opioid receptor that is well absorbed under clinically used routes of administration [11][12][13]. 
It is commonly used for the prevention and treatment of severe acute and chronic pain in both humans and animals [14,15]. Clinically relevant side effects of morphine in painful rodents are negligible when administered at appropriate doses for a short duration [14]. However, pica has been cited as a reason for withholding morphine and other opioids such as buprenorphine in rats [16,17]. The incidence of pica-type behaviour depends on several factors such as dose, frequency of opioid administration, strain of rat and type of bedding [17,18]. In rodents, respiratory depression, another commonly quoted side effect following opioid use appears to have little clinical significance in rodents [19,20]. Concerns related to the effects of opioids on various physiological systems may be relevant in some specific research areas; for example the immunomodulatory effects of morphine. Morphine use could trigger a series of effects, such as pro-inflammatory variation in the nervous system [21] and altered tumour growth profiles [22,23]. Many of these effects can be minimised by reduction of the dose of morphine administered [23]. For example, following intravenous administration of morphine in humans, only about 0.1% of the total drug administered is present in the central nervous system (CNS) at the time of peak plasma concentration [24]. Reasons for the relative poor penetration of morphine into the CNS include its relatively poor lipid solubility compared to other opioids and rapid conjugation (metabolism) with glucuronic acid. If the same is true in animals, then use of alternative routes of administration could enable effective analgesia to be provided with lower total doses of morphine. Morphine is commonly administered neuraxially (epidural or intrathecal routes) in non-rodent species [25,26]. In humans, the ratio of potencies between intrathecal (IT) and intravenous (IV) routes of administration is 1/200 [27]; i.e. equipotent analgesic effects can be achieved using a dose that is 200 times less if the IT route is chosen. Techniques for intrathecal injection in rats have been described and although technically more challenging than parenteral routes of injection, could provide a practical means of establishing effective analgesia with a much reduced dose of morphine [28][29][30][31]. Intrathecal morphine was shown to alleviate pain in rats using nociceptive assays in the Brennan model of post-operative pain [32], and to restore exploratory behaviour post sub-costal laparotomy [33] and rearing as well as ambulation in a rat model of knee surgery [34]. Intrathecal injection in rats is also widely used to test therapeutic targets [35]. Refinement of research procedures involving animals partly relies on effective ways to prevent and alleviate pain; which requires effective and reliable methods to detect and quantify pain. Considerable progress has been made in developing such techniques in rats, providing assessment methods that are more relevant to the assessment of post-operative pain than basic nociceptive tests. Current widely used approaches measure a range of pain specific behaviours and use these to construct scales for assessing pain (e.g. the Composite Pain Scale) [36]. These scales can be used effectively, but can be influenced by non-specific behavioural effects of opioids such as morphine [37,38]. Facial expressions can also be used to assess pain [39]. 
Grimace scales have been developed for rodents [40][41][42], and have been used to refine experimental models [43,44] or assess the efficacy of common analgesic drugs [45,46], However, such scales have been suggested to be influenced by non-specific behavioural effects of opioids [41] and further validation must be carried out to determine if they are a suitable method of pain assessment following morphine administration. The aims of this study were to assess if intrathecal morphine could provide sufficient analgesia to prevent pain in rats undergoing caudal laparotomy, as assessed by analysis of behavioural changes and the rat grimace scale. We hypothesised that a lower dose of morphine, administered pre-emptively by the intrathecal route would have similar analgesic properties to a routine dose injected SC. We also tested the hypothesis that this lower dose of morphine would have fewer non-specific behavioural effects than morphine administered at higher doses by the SC route. Ethical statement All procedures were carried out under Home Office approved project and personal licenses (PPL 60/4431), compliant with the Animals (Scientific Procedures) Act 1986, EU directive (2010/63) and the Animal Welfare, Ethics Review Body at Newcastle University (AWERB). Animals and husbandry Fifty-two female Wistar rats (Charles River Laboratories, Kent, UK) were used in this study (270±14.7g; 61±3 days old). All animals were housed in randomly allocated groups of 3 to 5 rats per cage (RC2 cages, North Kent Plastics Company), provided with environmental enrichment (cardboard tubes, Datesand, Manchester, UK), sawdust substrate (Aspen 4HK, UK) and nesting material (Sizzle nest, LBS technology, Surrey, UK). Food (RM3, SDS Ltd, Witham, UK) and tap water were provided ad libitum. Room temperature was 21±2°C, humidity was 55 ±10%, with a 12h light cycle (7am-7pm). All rats were allowed to acclimatize for 7 days before starting the experiment, during which time the rats were habituated to the laboratory and researchers. The animals were free from any common pathogens in accordance with the FELASA health monitoring recommendations. Study Design This study comprised of two phases. The 1 st phase was designed to assess the effect of morphine, delivered subcutaneously (SC) or intrathecally (IT), on behaviour and facial expressions in pain free rats. The second phase was designed to assess similar effects on rats following abdominal surgery. While each phase was performed and analysed separately; the study design and the method of collecting data were identical in each phase. The study design is therefore only described once. Two main operators were involved in this study: the anaesthetist injecting the rats (AT-blinded to the nature of the injection) and the surgeon (PAF, 2 nd phase only, blinded to the nature and the site of injection). Three other operators assisted with the recording of behaviour and facial expression data. All 3 operators were blinded to the nature and site of the injection received by the rats (NaCl vs. morphine). Treatment groups Twenty-four and twenty-eight rats were randomly allocated to one of 4 treatment groups in phase 1 and 2 of the study, respectively. All treatment groups are described in Tables 1 and 2. Random allocated was carried out using an online random number generator (www.random. org). Treatment groups 1.1 to 1.4 (1 st phase) were control groups designed to assess the possible effect of morphine (SC and IT) on behaviour and facial expression of pain-free rats. 
Treatment groups 2.1 to 2.4 (2nd phase) were designed to assess the analgesic action of morphine (SC and IT) on acute surgical pain in rats. The morphine used (treatment groups 1.3, 1.4, 2.3, 2.4) was sterile and free of preservative (morphine sulphate, 1 mg.ml-1, South Devon Healthcare, Paignton, UK). The NaCl solution used (treatment groups 1.1, 1.2, 2.1, 2.2) was normal (0.9% w/v) and sterile (Braun, Melsungen, Germany). The dose of morphine selected for SC administration is commonly used for post-operative analgesia in rats [14,47,48]. The dose for intrathecal injection (0.2 mg.kg-1) was selected from previous pilot studies (unpublished data) and represents the highest dose and volume (on average 54 μg in 54 μl) of morphine that can be administered without causing significant side effects (i.e. respiratory depression).
Anaesthesia
All treatments were carried out under general anaesthesia, between 08.00 and 12.00. Anaesthesia was induced in a 7 L Plexiglas induction chamber with sevoflurane (inspired fraction of sevoflurane (FiSevo) = 8%, Abbott, Maidenhead, UK) delivered in 4 l.min-1 O2. After loss of consciousness (assessed by loss of the righting reflex), the rats were transferred from the chamber and anaesthesia was maintained using a rodent-size facemask (VetTech Solutions Ltd, Cheshire, UK) with sevoflurane (FiSevo = 2.4% delivered in 1.5 l.min-1 O2). Normothermia was maintained using a homeothermic heat pad (Harvard Apparatus, Kent, UK). Heart rate and haemoglobin oxygen saturation were monitored using a rodent pulse oximeter (PhysioSuite, Kent Scientific Corporation, Torrington, USA). At the end of the procedure (see below), sevoflurane was discontinued and the rats were allowed to recover for a few minutes on the heat pad (O2: 1.5 l.min-1) before transfer to an incubator (28±3°C) until full recovery of coordinated motor function (10-15 min). The duration of anaesthesia was approximately 5 minutes (no surgery) to 15 minutes (surgery, see below).
For intrathecal injections, rats were positioned in sternal recumbency with their pelvic limbs brought under the abdomen as far cranially as possible in order to arch the lumbosacral area. The intrathecal injection was performed using aseptic technique, with a 25G hypodermic needle and an insulin syringe (0.5 ml). The injection site was between the last lumbar vertebra and the first sacral vertebra (L6-S1) (Fig 1). The injection was considered successful if one of the two following signs was noted: presence of CSF in the needle hub prior to injection and/or a twitch of the tail during the injection. If neither of these signs was noted, or if blood was visible in the needle hub prior to injection, the needle was withdrawn and the procedure was repeated with a new sterile needle. All intrathecal injections were successful at the first or second attempt.
Surgery
Rats allocated to treatments 2.2, 2.3 and 2.4 underwent a caudal laparotomy with bladder wall injection immediately after the injections described above. The procedure is an adaptation of a previously described technique used to produce a clinically relevant model of abdominal surgery [36,49]. Briefly, a caudal, midline laparotomy incision (1 cm) was performed. The bladder was exposed and its wall injected with 0.1 ml of sterile 0.9% NaCl (Braun, Melsungen, Germany) using an insulin needle and syringe (Terumo, Inchinnan, UK). The abdominal muscles were sutured with an interrupted pattern (4.0 Vicryl, Ethicon, Wokingham, UK) and the skin with a subcuticular pattern of the same suture material.
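For reference, the intrathecal dose and volume quoted above follow from simple arithmetic; the short Python sketch below reproduces the reported averages (54 μg in 54 μl) from the mean body mass of the cohort and the 1 mg.ml-1 stock concentration. The body mass is the cohort mean given in the Animals section; individual doses would of course be scaled to each rat, and the comparison with the 1/200 human IT:IV potency ratio mentioned in the Introduction is illustrative only.

body_mass_kg = 0.270                  # mean body mass of the cohort (270 g)
it_dose_mg_per_kg = 0.2               # intrathecal dose used in the study
stock_conc_mg_per_ml = 1.0            # preservative-free morphine sulphate stock
dose_mg = it_dose_mg_per_kg * body_mass_kg          # 0.054 mg per rat on average
volume_ml = dose_mg / stock_conc_mg_per_ml          # 0.054 ml at 1 mg/ml
print(f"dose = {dose_mg * 1000:.0f} ug, volume = {volume_ml * 1000:.0f} uL")  # 54 ug, 54 uL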
Behaviour recording
A custom-made filming cage (40 x 26 x 28 cm) was used for behavioural recording. The cage did not contain any bedding, food or water. Three vertical sides, the floor and the ceiling were lined with a matte black film (Fablon) to minimise sensory distraction and reflections. A high-definition camera (Sony High Definition HandyCam model HDR-XR155, Sony, Japan) was positioned on a tripod facing the remaining clear Perspex side of the cage. The rats were placed individually in the cage, allowed to acclimatize for 5 minutes and filmed for 10 minutes without an observer being present in the filming area. The box was thoroughly cleaned after filming each animal. Each rat was filmed on the day prior to the procedure (baseline = T0) as well as at 1, 2, 4 and 8 h post-injection.
Phase 1 of the study. An operator blinded to the rats' treatment scored each 10-minute video clip using open-source software designed to score animal behaviour from video clips (CowLog) [50]. Behavioural scoring was performed using a validated rat ethogram [36,49,51], summarised in Table 3. The duration and frequency of each of the defined behaviours were recorded. A Composite Pain Behaviour (CPB) score was calculated by summing the frequencies of the pain behaviours for each individual rat at each time point. Pain behaviours included in the CPB score in this phase were back arching, twitch and stagger/fall.
Phase 2 of the study. One treatment-blinded operator (KH, blinded to time point and treatment) was responsible for all behavioural assessments. Briefly, the video was played in real time and the operator scored the incidence of specific behaviours. Pain behaviours included in the CPB score in this phase were back arching, twitch, stagger/fall and belly pressing. Five minutes of behaviour per rat, per time point, were scored.
Rat Grimace Scale
Following video recording for behavioural analysis, rats were placed in a photography box for a period of 5 minutes and photographed using a high-definition camera (Casio EX-ZR100, Casio Computer Co., Ltd., Japan) by a treatment-blinded observer. The photography box consisted of two matte black walls and two clear Perspex walls (27 cm x 19 cm x 17 cm). Rats were photographed on every occasion they directly faced the camera, apart from when grooming, in accordance with the method described by Sotocinal and colleagues [40]. The rats were then returned to their home cages. A treatment-blinded observer recorded any unexpected event or complication related to picture acquisition. The box was thoroughly cleaned and dried after photographing each animal. All out-of-focus or out-of-frame images were manually deleted. The remaining images were cropped, leaving only the face of the rat in view, to prevent bias due to body posture [41]. Using a random number generator, one image per rat, per time point was selected. Using the random number generator again, the selected images were re-ordered and inserted into a custom-designed Excel file for scoring. Nine trained observers, blinded to the experimental details, design and purpose of the study, scored each photograph for the four facial action units (FAUs) comprising the RGS, as described by Sotocinal and colleagues [40]. For each image, the four individual FAUs (orbital tightening, nose/cheek flattening, ear changes and whisker change) were scored on a 3-point scale (0 = not present, 1 = moderately present, 2 = obviously present).
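To make the scoring arithmetic explicit, the minimal Python sketch below illustrates the two scores as we read the methods above: the CPB score is the sum of the frequencies of the included pain behaviours, and a per-image RGS value is the mean of the four FAU scores, averaged across scorers. All counts and scores in the example are invented, and the exact aggregation used in the study (per Sotocinal et al.) is assumed rather than restated here.

def cpb_score(back_arch, twitch, stagger_fall, belly_press=0):
    # Composite Pain Behaviour score: sum of pain-behaviour frequencies
    return back_arch + twitch + stagger_fall + belly_press

def image_rgs(orbit, nose_cheek, ears, whiskers):
    # Per-image RGS: mean of the four facial action unit scores (each 0, 1 or 2)
    return (orbit + nose_cheek + ears + whiskers) / 4.0

example_cpb = cpb_score(back_arch=0, twitch=3, stagger_fall=1, belly_press=2)    # -> 6
scorer_values = [image_rgs(2, 1, 1, 0), image_rgs(1, 1, 0, 0), image_rgs(2, 2, 1, 1)]
average_au_rgs = sum(scorer_values) / len(scorer_values)                         # -> 1.0
print(example_cpb, average_au_rgs)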
The nine scorers consisted of five veterinarians (including two with experience of working with rodents) and four scientists with no experience of working with rodents. All rats were euthanized using a rising concentration of inhaled CO2 (filling rate: 20% of the chamber volume per minute) following recording of the last set of data, 8 h post-injection, in accordance with current guidelines and legislation [4,52].
Statistical analysis
All statistical analyses were conducted using SPSS (SPSS Inc., Chicago, USA). A sphericity test was performed to verify the parametric nature of the data, after which repeated-measures analyses were carried out.
Table 3. Ethogram used for behavioural observations in phase 1 (adapted from references [36,49,51]).
Pain behaviours:
- Back arching: vertical stretch, as in felines upon waking, including both partial and full arches, while inactive or walking.
- Twitch: transient involuntary muscular contraction of any body part; usually occurs while inactive.
- Stagger/fall: rapid transition to crouch from high or low rear; also falling during grooming while crouched.
- Belly pressing: rubbing the laparotomy site purposefully across the floor of the cage.
- Writhe: one or more contractions of the abdominal muscles on either side of the stationary or moving animal, lasting until the abdomen relaxes.
Ambulatory behaviours:
- High rear: bipedal stance, fully erect posture.
- Down to partial: downward movement from high rear to partial rear.
Power of the study
Retrospective assessment of the power of the study (2nd phase only) was performed using G*Power (Softnews NET SRL, Romania). For the behavioural analysis (CPB), the power exceeded 0.8 with respect to differences between time points and with respect to differences between the saline surgical group and both morphine groups. The power to detect differences between the morphine groups was approximately 0.4. Similar results were obtained for the RGS (within subjects, power >0.8; between the morphine groups, 0.33).
Results
Phase 1: treatments 1.1 to 1.4
Effect of morphine (SC and IT) on the behaviour of pain-free rats. There was no significant treatment effect on the behaviour of the rats: neither morphine administered SC or IT, nor NaCl, significantly affected the frequency or duration of the behaviours identified in the rat ethogram. Time had a significant effect on some of the behaviours across all treatments (1.1 to 1.4), as illustrated in Fig 2: the frequency of high rears decreased significantly (p<0.001), the duration of walking decreased (p<0.001), and the duration of climbing was significantly higher at baseline compared with other time points (p = 0.03). Lastly, the duration of inactivity increased over time, being significantly lower at baseline than at other time points (p<0.001). There was no significant effect of time on any of the other analysed behaviours. No pain-specific behaviours were detected.
Effect of morphine (SC and IT) on Rat Grimace Scale (RGS) score in pain-free rats. Average action unit (AU) RGS results are documented in Fig 3 and Table 4. Average AU RGS scores were significantly different across treatments at baseline, 1 and 4 h post-injection. At 1 h post-injection, rats receiving intrathecal NaCl had a significantly lower RGS score than rats that had received NaCl subcutaneously (p = 0.002).
At 4 h post-injection, total RGS was significantly lower in rats allocated to the NaCl IT group than in the NaCl SC group (p<0.001) and in both morphine groups (morphine SC: p = 0.006; morphine IT: p<0.001). Average AU RGS scores also increased significantly over time in pain-free rats for all treatments except intrathecal morphine (Fig 3 and Table 4).
Phase 2: treatments 2.1 to 2.4
Analgesic properties of intrathecal morphine in rats subjected to acute surgical pain. The surgical model used in the present study (caudal laparotomy with bladder wall injection) resulted in the presence of previously validated, specific pain behaviours such as writhing, fall/stagger, twitches and belly pressing. The incidence of back arching was very low and it was therefore removed from the analysis. The composite pain behaviour (CPB) score was therefore obtained using the mean frequencies of the following behaviours: writhing, fall/stagger, belly pressing and twitches (Fig 4). The CPB score was very low and not significantly different between treatment groups at baseline. Rats allocated to undergo surgery and receive an intrathecal injection of NaCl had a significantly higher CPB score than rats allocated to the sham group at all postoperative time points (Table 5). The peak CPB was observed at 2 h post-surgery. Morphine, administered subcutaneously or intrathecally, resulted in significantly lower CPB scores compared with control rats (NaCl IT, surgery) at all time points. CPB scores in morphine-treated rats (either SC or IT) were not significantly different from CPB scores in the sham surgery group. There was no significant difference in the CPB score of rats undergoing surgery and receiving morphine either subcutaneously or intrathecally, at any time point. See Table 5 for the full list of significant differences between treatments at all time points.
Average AU RGS scores for rats undergoing surgery and receiving NaCl IT changed significantly over time: average AU scores were significantly lower at baseline compared with 8 h post-laparotomy (p<0.01), but not compared with other time points. Similar results were obtained for rats undergoing sham surgery. Average AU scores for rats undergoing surgery and receiving morphine remained similar over time when the drug was administered subcutaneously, but were higher 8 h post-surgery when morphine was injected IT (p<0.001). Detailed results are displayed in Fig 5 and Table 6. Morphine-treated rats (SC and IT) undergoing surgery displayed significantly lower total RGS scores than control rats (NaCl IT) 1 h post-surgery (p<0.001 and p = 0.001, respectively). A similar pattern of significant differences was identified 8 h post-surgery (p = 0.001-0.002). RGS scores were not significantly different between the morphine groups at 1, 2 and 8 h postoperatively. RGS scores were not significantly different across treatments 2 h post-laparotomy. At 4 h post-laparotomy, RGS scores were not significantly different between rats allocated to the sham surgery groups and those allocated to surgery with intrathecal NaCl. At baseline, average AU RGS scores differed significantly between rats allocated to receive morphine SC and those allocated to receive NaCl SC (p = 0.01); no other significant differences were noted between groups at baseline.
Reliability of RGS scoring
The reliability of both the individual AUs and the overall RGS score across all nine scorers was high, with intraclass correlation (ICC) values for the individual AUs ranging from 0.69 (nose/cheek flattening) to 0.94 (orbital tightening), and a value of 0.91 for the overall RGS score (p<0.001 for all comparisons) (Table 7).
Discussion
This study demonstrates the effects of morphine injected intrathecally via direct lumbar puncture in pain-free rats and in a model of acute surgical pain. In addition, this study represents the first use of the Rat Grimace Scale to assess the analgesic properties of IT morphine.
Effects of morphine SC and IT on the behaviour and facial expression of pain-free rats
The 1st phase of the study was designed to determine whether morphine, administered SC or IT, influenced the behaviour or rat grimace scale (RGS) score of pain-free rats. At the doses chosen in this study, morphine injected subcutaneously or intrathecally did not affect the frequency or duration of the behaviours included in a validated rat ethogram [36]. All changes in the scope and pattern of behaviour in the second phase of the study, in particular the behaviours included in the CPB score, can therefore be attributed to the acute surgical pain caused by caudal laparotomy and bladder injection. This finding is contrary to the common belief that opioids alter the behaviour of pain-free rodents [38]. Morphine, as well as other opioids, does influence the behaviour of pain-free rats, but this has generally been reported when the doses used are much greater than those recommended for clinical use in rats [14,38], or when opioids are administered repeatedly, for example in models of opioid tolerance [53,54].
The activity of all rats decreased over time, regardless of the drug received (NaCl or morphine). This reflects either insufficient habituation to the recording box (5 min) prior to recording at the early time points (a novelty effect), or habituation of the rats and possible boredom after several recordings in quick succession (a total of five 10-minute behaviour recording sessions) [55,56]. The effect of morphine and the route of administration on pain-free rats was not easily interpreted using the RGS. Average AU scores increased over time for all pain-free treatment groups, with the exception of intrathecal morphine. Each of the behavioural recording sessions was followed by another 10-minute session in a smaller Plexiglas box for RGS scoring. Whilst activity was not recorded in the RGS box, it was observed that the rats also became more inactive over time during the RGS sessions. Several rats were seen to remain immobile for a large part of the RGS session and appeared uninterested in their surroundings. Such observations seem to correlate with the overall activity patterns (i.e. inactivity and walking). In the authors' experience, it would be reasonable to assume that inactive rats placed in a familiar environment could intermittently show signs of drowsiness. Given that orbital tightening is a natural consequence of drowsiness, an increase in RGS score over time within all treatment groups could have been expected. One might argue that the sedative properties of morphine [14,57,58] could have further accentuated this expected effect, as sedation has already been shown to increase grimace scale scores [59].
While pain-free rats that had received NaCl SC had a significantly lower RGS at baseline than at other time points, this pattern was not repeated in the NaCl IT group. Interestingly, RGS scores for pain-free rats receiving morphine IT did not change significantly over time, despite the same decrease in activity shown by control rats. Intrathecal morphine may therefore have an effect on the facial expression of pain-free rats. Typically, pain-free rats receiving IT morphine displayed seemingly wider and slightly more protuberant eyes than control rats. This facial appearance of pain-free rats receiving morphine was documented as exophthalmos by some of the treatment-blinded RGS scorers. Morphine is known to cause mydriasis and exophthalmos in pain-free rats [60-62], but this effect was not analysed for significance in our study. Exophthalmos, which is not taken into account by the RGS, might have counteracted any orbital tightening from somnolence at the later time points. Lastly, pain-free rats receiving morphine SC had significantly higher RGS scores 1, 2 and 8 h post-injection. This could be explained by the sedative properties of morphine [14,57,58] causing some additional degree of orbital tightening. However, if any sedation occurred, it was not reflected in behavioural changes 1 h post-injection.
Analgesic properties of intrathecal morphine for acute surgical pain
The pain caused by the model chosen for this study (caudal laparotomy with bladder wall injection) has been well documented using composite pain behaviour scoring [36,63]. This surgical procedure remains widely used in orthotopic models of bladder cancer in our institution and elsewhere [64,65]. The presence of detectable pain was confirmed by the significant increase in composite pain behaviours in the positive control group up to the last recorded time point, 8 h post-surgery. This is in line with previously reported pain-mediated behavioural alterations, where laparotomy was associated with an increase in specific pain behaviours for up to 6.5 h [51], as well as non-specific behavioural alterations for up to 24 h [66]. Morphine administered intrathecally via direct lumbosacral puncture significantly attenuated composite pain behaviours (1 and 4 h postoperatively) and rat grimace scores (1 and 8 h post-surgery), and therefore appears to alleviate acute postsurgical pain caused by caudal laparotomy. Morphine IT was mostly indistinguishable from subcutaneous administration on the basis of RGS scoring, but analysis of the composite pain behaviour score suggested that SC morphine might provide uninterrupted analgesia for up to 8 h. The duration of action of morphine administered subcutaneously (8 h) was longer than expected, since morphine SC is usually considered to provide effective post-operative analgesia for no more than 2-4 h [14,67,68], with a peak of analgesic and anti-hyperalgesic activity 45-60 min post-administration [67]. Morphine administered intrathecally is expected to provide long-lasting analgesia in people [69] and animals [70-72]. While neuraxial morphine has been shown to have a duration of action of 21 h in primates [70], and of up to 24 h in dogs and cats [71,72], intrathecal morphine has been associated with a markedly shorter duration of action in rats: most studies undertaken in rats reported a duration of action of approximately 120 min [32,33,73,74]. The results obtained in the present study suggest that intrathecal morphine in rats might have a longer duration of action.
Three key elements of our study design could explain this difference from previous studies. Firstly, the present study documents the analgesic properties of morphine in the context of acute surgical pain, using specific pain behaviours validated for post-laparotomy pain in rats; most other studies assessed the anti-nociceptive properties of morphine using various nociceptive tests (e.g. von Frey, Hargreaves). Secondly, in order to assess the practicality of intrathecal administration for post-operative pain management, morphine was administered in our study without prior surgical implantation of a spinal catheter. A spinal catheter is widely used in pharmacological or toxicological studies to facilitate multiple injections of a substance; however, the presence of the catheter may be associated with chronic inflammatory pain, alteration of the rats' behaviour, and intrinsic variation in the pharmacological properties of some molecules, including morphine [33,75]. Thirdly, the dose used in the present study (0.2 mg.kg-1 IT, i.e. on average 54 μg per rat) was higher than the doses used in most anti-nociceptive studies (e.g. 0.16 to 10 μg per rat) [28,29,32,33]. When higher morphine doses were selected, the anti-hyperalgesic properties of intrathecal morphine were anecdotally documented to last for up to 4 h [76]. The dose chosen was based on pilot studies conducted in our laboratory (unpublished data) and represented the highest dose that could be administered intrathecally without producing clinical and behavioural side effects. We ensured that the total amount remained below 150 μg per rat, the intrathecal dose reported to trigger hyperalgesia in rats [77]. A total dose of 0.2 mg.kg-1 in rats is also higher than the intrathecal doses commonly used in human analgesia [25,27,78,79], even after applying allometric scaling [80]. Further studies would be required to identify the lowest intrathecal dose required to inhibit composite pain behaviours in rats undergoing laparotomies and other surgical procedures involving the abdomen and the pelvic limbs. Lastly, the effects of a low morphine dose (54 μg or below) administered intrathecally on the immune system and on cancer models are unknown to date, but would be expected to be smaller than those of the systemic doses previously investigated [22,23].
Relevance of the rat grimace scale for assessing pain in opioid-medicated rats
A range of different approaches has been developed to assess the degree of pain experienced by rats undergoing surgical and other traumatic procedures, with a view to refining the procedures and/or improving the relevance of translational studies. Beyond non-specific and retrospective methods (such as weight loss and biochemical stress markers), two prospective pain assessment methods are currently most relied upon. Behaviour-based assessments of pain have been developed for both rats and mice following surgery and other traumatic procedures, and use either the appearance of abnormal behaviours or the change in frequency of normal behaviour patterns to score pain [11,49,51,81]. There remain a number of limitations to using behaviour to assess pain in animals, such as the possible confounding effect of opioid analgesics on the behaviour of pain-free animals [51], and the fact that the specific behavioural responses to painful stimuli vary markedly depending on the nature of the surgical procedure (abdominal or other) [11,49,51,81]. The use of facial expressions to assess pain [40] was suggested as a way to overcome some of these difficulties.
Variation in facial expression during painful events was codified as the Rat Grimace Scale (RGS), validated for the assessment of acute surgical and nociceptive pain [40,43,82], and used to assess the efficacy of commonly used analgesics in rats [45]. Among the proposed advantages supporting the use of the RGS was the lack of a confounding effect of opioids on the facial expression of pain-free animals. The present study suggests that opioids may have an effect on the facial expression of pain-free rats (an overall decrease of RGS scores over time), caused by a degree of opioid-mediated exophthalmos, but no behavioural effect over time. These findings would benefit from further investigation.
In rats subjected to acute surgical pain, unexpected findings weakened our interpretation of the RGS results. RGS scores in pain-free and opioid-free rats were expected to be lower than those observed in this study [40] and similar across treatment groups. Importantly, significant differences were detected among treatment groups at baseline during both phases of the study. This inconsistency existed despite the overall excellent inter-scorer reliability of the scoring method (ICC 0.913). Three factors, either in isolation or in combination, may have contributed to these findings. Firstly, in mice, baseline mouse grimace scale scores have been shown to differ depending on the strain, sex and method used for scoring the facial action units (retrospective vs. prospective scoring) [83]. While neither the variation of baseline scores in Wistar rats nor the impact of retrospective RGS scoring of still images has yet been documented in a dedicated study, significant differences in baseline RGS scores between treatment groups were unexpected: the rats (strain, age, sex), operators, time of day and methodology were similar in both phases of the study. However, this requires further study, as the above finding is based on a single study in six mice. A second factor that could have contributed to the differences in RGS scores at baseline between rats could, more simply, be a cohort effect. All rats were bred by the same supplier, were of similar age and strain, and received equivalent husbandry in our laboratory; in addition, group housing and random allocation of each rat to its treatment group would be expected to minimize the likelihood of such a cohort effect. Nonetheless, the statistical analysis used for this study compensated for any possible cohort effect, since a within-subjects design accounts for the variation between individuals when comparing across time points. Thirdly, a relatively short yet consistent habituation period (5 min) was used in these studies, and it is therefore possible that the novelty of the box contributed to the variability of the rats' facial expressions, possibly causing inadvertent false-positive scoring at baseline that diminished with repeated exposure. This effect has, however, not been noted in other studies using grimace scales with similar [59] or different [82] habituation times. While the overall power of our study was acceptable (>0.8), the lower power of the comparisons between the morphine groups should be taken into account when interpreting the results of this pilot study. These comparisons warrant further investigation in a future study; this pilot study will provide more accurate information for the sample size determination of future studies than was available when it was undertaken.
Only female rats were used in this study. Pain perception and the analgesic properties of opioids are influenced by sex in rats: briefly, female rodents are more sensitive than males to noxious stimuli and have lower levels of stress-induced analgesia, whereas male rodents generally have stronger analgesic responses to mu-opioid receptor agonists than females [84,85]. In spite of this, sex was not found to significantly influence specific pain behaviours in similar surgical models of pain in rats [49], and no sex differences were reported in the original RGS paper [40]. We chose to use single-sex female groups to minimise the number of rats involved in this study. Further studies would be required to characterise sex-based differences in the analgesic properties of intrathecal morphine.
Conclusion
Findings from the present study suggest that administration of intrathecal morphine by percutaneous injection may represent an effective way of providing long-lasting pain relief in rodents subjected to caudal laparotomy and bladder wall injection. Intrathecal morphine (0.2 mg.kg-1) may provide analgesia comparable to the subcutaneous route (3 mg.kg-1) using less than a tenth of the dose required for subcutaneous administration. As a result, the intrathecal route of administration may reduce concerns related to the non-specific effects of opioids when these agents are used to alleviate pain in rodents in a range of different areas of biomedical research. Further studies would be required to better characterize the effects of such a reduced morphine dose on individual experimental outcomes. While the RGS may have advantages for the assessment of pain in rats compared with scoring of composite pain behaviours, morphine may affect the facial expression of pain-free rats and influence the use of the RGS in opioid-medicated rats. The variation in detection of analgesic effects between the RGS and the CPB score supports the use of both techniques as complementary measures of the behavioural changes induced both by analgesics and by post-surgical pain.
Torsional chiral magnetic effect due to skyrmion textures in a Weyl superfluid $^3$He-A
We investigate the torsional chiral magnetic effect (TCME) induced by skyrmion-vortex textures in the A phase of superfluid $^3$He. In $^3$He-A, Bogoliubov quasiparticles around the point nodes behave as Weyl fermions, and the nodal direction, represented by the $\ell$-vector, may form a spatially modulated texture. $\ell$-textures generate a chiral gauge field and a torsion field acting directly on the chirality of Weyl-Bogoliubov quasiparticles. It was clarified by G. E. Volovik [Pis'ma Zh. Eksp. Teor. Fiz. {\bf 43}, 428 (1986)] that, if the $\ell$-vector is twisted, the chiral gauge field is responsible for the chiral anomaly, leading to an anomalous current along $\ell$. Here we show that, even for non-twisted $\ell$-vector fields, a torsion arising from $\ell$-textures contributes to the equilibrium currents of Weyl-Bogoliubov quasiparticles along curl $\ell$. This implies that, while the anomalous current appears only for the twisted (Bloch-type) skyrmion of the $\ell$-vector, the extra mass current due to the TCME always exists regardless of the skyrmion type. Solving the Bogoliubov-de Gennes equation, we demonstrate that both Bloch-type and Néel-type skyrmions induce chiral fermion states with spectral asymmetry and possess spatially inhomogeneous structures of the Weyl bands in real coordinate space. Furthermore, we discuss the contributions of Weyl-Bogoliubov quasiparticles and continuum states to the mass current density in the vicinity of the topological phase transition. In the weak coupling limit, the continuum states give rise to a backflow against the mass current generated by Weyl-Bogoliubov quasiparticles, which makes a non-negligible contribution to the orbital angular momentum. As the topological transition is approached, the mass current density is governed by the contribution of the continuum states.
I. INTRODUCTION
Weyl semimetals have been attracting much attention because of the realization of chiral anomaly in condensed matter systems, which is experimentally detectable in various exotic transport phenomena such as the anomalous Hall effect, chiral magnetic effect, and negative magnetoresistance. [1-14] Chiral anomaly is the violation of the conservation law of the axial current in the presence of electric and magnetic fields that are not orthogonal to each other. Its origin is attributed to the monopole charge carried by Weyl points in momentum space, which generate the Berry curvature and act as sources and drains of momentum generation. Recent experimental studies revealed the realization of chiral anomaly in Weyl semimetal materials via the observation of negative magnetoresistance. [15-17] The notion of Weyl semimetals is naturally generalized to superconducting states. 18 In superconductors with broken time-reversal symmetry, such as chiral pairing states and non-unitary odd-parity pairing states, nodal excitations from point nodes of the superconducting gap behave as Weyl fermions accompanied by a Berry curvature. There are several candidate systems for Weyl superconductors and superfluids, such as the A-phase of superfluid 3He, [19-22] URu2Si2, 23 the B-phase of UPt3, UCoGe, 24,25 and the B-phase of U1-xThxBe13. [26-28] Since the Bogoliubov quasiparticles are superpositions of electrons and holes, the usual coupling to electromagnetic fields does not directly lead to chiral anomaly.
However, it is still possible that in Weyl superconductors and Weyl superfluids, emergent electromagnetic fields arising from spatially inhomogeneous textures of the superconducting order parameter and its dynamics give rise to chiral anomaly phenomena. As a matter of fact, in 1997, more than ten years before the invention of the notion of Weyl semimetals, Bevan et al. observed momentum generation due to chiral anomaly in 3He-A with skyrmion textures of the ℓ-vector field, 29 which was motivated by pioneering theoretical works of Volovik and his collaborators. 19,30-33 In that experiment, 29 the chiral anomaly was detected via the measurement of an extra force on skyrmion-vortices.
In this paper, we consider another chiral anomaly effect, referred to as the torsional chiral magnetic effect (TCME). The TCME was originally proposed for magnetic Weyl semimetals with lattice dislocations. 34 Lattice dislocations give rise to torsion fields, which cause emergent magnetic fields acting on Weyl fermions and result in equilibrium currents flowing along the dislocation lines. The current induced by the torsion field is given by Eq. (1), where v_F is the Fermi velocity, Λ is the momentum cutoff, p_{L(R)a} (a = x, y, z) is the position of the Weyl point with left(right)-handed chirality in momentum space, and T^a_{νλ} is the torsion of Eq. (2), which can be realized in condensed matter systems by topological defects such as lattice dislocations or a skyrmion texture of a magnetic order. In the case of superconductors, torsional magnetic fields arise from vortex textures of the superconducting order parameter or from lattice strain, and hence lead to negative thermal magnetoresistivity, that is, an anomalous enhancement of the longitudinal thermal conductivity along the vortex line. 35
Here we show that the TCME of Weyl-Bogoliubov quasiparticles is realized as equilibrium currents induced by an order-parameter texture in Weyl superfluids. We focus on the A-phase of 3He hosting a skyrmion-vortex as a promising platform for the TCME. The order parameter tensor A_{μi}, which transforms as a vector with respect to the index μ = x, y, z (i = x, y, z) under spin (orbital) rotations, is given by the complex form of Eq. (3), 36,37 where d and (m, n) are unit vectors representing the spin and orbital degrees of freedom in the superfluid vacuum, respectively. This is the Cooper-pair state with a definite orbital angular momentum represented by ℓ ≡ m × n. The ℓ-vector also points along the nodal direction at which the Weyl-Bogoliubov quasiparticles reside. Owing to the spontaneously broken gauge-orbit symmetry, a rotation of the orbital part, m + i n, about ℓ is equivalent to a U(1) phase rotation φ. This implies that a superfluid current can be generated by a texture of the triad (m, n, ℓ) without U(1) phase singularities. For rotating 3He-A, therefore, the ℓ-vector field spontaneously forms a skyrmion-like texture as the ground state, 38-40 known as the Anderson-Toulouse vortex and the Mermin-Ho vortex (see Fig. 1). 22,41-43 ℓ-texture fields also appear in 3He confined in a narrow cylinder. 44,45
To clarify the TCME in 3He-A with skyrmion-vortices, we utilize two approaches: the semiclassical theory for Weyl-Bogoliubov quasiparticles and the Bogoliubov-de Gennes (BdG) equation. The former clarifies that a "torsion" field emerges from a nontrivial ℓ-texture as T^3 = curl ℓ and gives rise to the TCME as in Eq. (1). The BdG equation provides a fully quantum mechanical approach to the TCME and to the Weyl-Bogoliubov quasiparticles.
Under skyrmion textures, the Bogoliubov quasiparticle spectrum possesses chiral fermion branches with spectral asymmetry, and the chiral fermions carry a macroscopic equilibrium current along the torsional magnetic field. Furthermore, we demonstrate that the skyrmion textures of the ℓ-vector give rise to spatially inhomogeneous structures of the Weyl fermion band in addition to the torsional magnetic field 46 : the position of the Weyl points in momentum space exhibits an inhomogeneous texture in real space. From the viewpoint of the TCME, we also revisit the superfluid current generated by ℓ-textures. The expression for the current density in spatially inhomogeneous 3He-A involves a longstanding issue related to the angular momentum paradox. McClure and Takagi 47 showed that the ground state with an axially symmetric ℓ-texture possesses the total angular momentum L_z = Nħ/2, where N is the total number of particles. The mass current density at zero temperature was derived by Mermin and Muzikar 48 as Eq. (4), where ρ is the mass density of 3He atoms, M is the atomic mass, and C_0 ≈ ρ in the weak coupling approximation. The first term results from the superfluid velocity. The second term in Eq. (4) represents the current due to a variation of the orbital angular momentum of the Cooper pairs, ħℓ, which resembles the electric current induced by a variation of the magnetization density in materials. The third term is the anomalous term, referred to as j_an. The derivation of Eq. (4) is based on the configuration-space form of the BCS variational ground-state wavefunction. 48 A similar approach was also used by Ishikawa, Miyake, and Usui, but it came to the different conclusion that j_an is absent and j_IMU = j_MM - j_an. 49 The anomalous term violates the McClure-Takagi relation when ℓ · curl ℓ ≠ 0. The discrepancy between j_MM and j_IMU is referred to as the McClure-Takagi paradox. 48,50,51 The physical origin of j_an was addressed by Combescot and Dombre 52,53 and by Balatsky et al., 19,32,33 the latter unveiling that j_an is attributed to the chiral anomaly of Weyl-Bogoliubov quasiparticles, so as to violate the law of momentum conservation. In this paper, we show that, in addition to j_an, the second term in Eq. (4) is related to the TCME, i.e., a Weyl-Bogoliubov quasiparticle current generated by a torsion field. We further demonstrate that our numerical calculation for the twisted ℓ-texture coincides with j_MM including j_an in the weak coupling limit.
The organization of this paper is as follows. In Sec. II, we present a semiclassical analysis of the TCME in Weyl superconductors/superfluids. In Sec. III, we describe the numerical method for solving the Bogoliubov-de Gennes (BdG) equation and show the intrinsic features of Weyl-Bogoliubov quasiparticles in the presence of the ℓ-texture, such as spectral asymmetry and spatially inhomogeneous structures of the Weyl fermion band. In Sec. IV, we discuss the mass current density induced by the skyrmion texture from the viewpoint of the TCME. The final section is devoted to conclusion and discussion.
A. Semiclassical equation of motion for Weyl-Bogoliubov quasiparticles
We consider the Bogoliubov-de Gennes (BdG) Hamiltonian for the Bogoliubov quasiparticles in the A-phase of superfluid 3He, i.e., a 3D chiral p+ip superfluid, with spatially varying gap structures such as the skyrmion-like ℓ-vector textures of the Anderson-Toulouse vortex and the Mermin-Ho vortex.
We apply the path-integral formulation in a curved space with nonzero torsion, which is induced by a vortex structure. In the Lagrangian entering the Feynman kernel, the spatially varying gap function is expressed as A_{μi} p_i / p_F with the tensorial field A_{μi} of Eq. (3). Here m and n are unit vectors of a local orthogonal frame perpendicular to the direction of the point nodes at p = s p_0 ≡ s p_F ℓ, where s = ±1 is the chirality of the Weyl points, and ℓ = m × n is the ℓ-vector. Δ is the superconducting gap and p_F is the Fermi momentum. The effective Lagrangian for Bogoliubov quasiparticles around p = s p_0 is then given by Eq. (5), with the Weyl-Bogoliubov Hamiltonian of Eq. (6), where v is the Fermi velocity, τ_a are the Pauli matrices in particle-hole space, and e^μ_a is the vielbein. We use Greek indices μ = 1, 2, 3 for the laboratory frame and Roman indices a = 1, 2, 3 for the local orthogonal frame. The Weyl Hamiltonian must obey the particle-hole symmetry of Eq. (8), where C = Kτ_1 is the particle-hole conversion operator, with τ_μ the Pauli matrices in particle-hole space. The particle-hole symmetry guarantees that the Weyl points appear as a pair at p_0 and -p_0, and that the pairwise Weyl points carry opposite chirality, s = ±1.
The Berry connection and the Berry curvature in momentum space characterizing Weyl fermions appear when one projects the state onto one of the two energy bands of H_s(p, r). This approach is justified for Weyl semimetals, when the Fermi level crosses only one band and the other band is well separated from the Fermi level. In the case of Weyl superconductors and Weyl superfluids, however, the Fermi level crosses the Weyl point, at which the lower band touches the upper band, and thus the projection procedure is not justified. Nevertheless, we exploit this approach to see qualitatively how the Berry curvature governs the response to torsion fields. Following the method developed in Ref. 54, we obtain the effective Lagrangian for the upper band, Eq. (9), where the Berry connections are A^+_{pμ,s} = i <u_{s+}| ∂_{pμ} |u_{s+}> and A^+_{rμ,s} = i <u_{s+}| ∂_{rμ} |u_{s+}>, with H_s |u_{s+}> = E_s |u_{s+}> and E_s the single-particle energy of Weyl-Bogoliubov quasiparticles with chirality s.
In some cases with a spatially varying structure of the gap function, such as a vortex or the Mermin-Ho and Anderson-Toulouse textures of the ℓ-vector, a nonzero torsion appears. The torsion field is defined by Eq. (10), where e^a_μ is the inverse of e^μ_a. In the case with nonzero torsion, the Euler-Lagrange equation for r and dr/dt is modified, 55 with T^ν_{μλ} = e^ν_a T^a_{μλ}, while that for p and dp/dt is unchanged. We then obtain the equations of motion for the Weyl-Bogoliubov quasiparticles, Eqs. (12) and (13), in which the Berry curvatures Ω^+_{XY,s} (with X, Y = r, p) and (T_μ)_ν = (1/2) ε_ν^{λρ} T^μ_{λρ} enter. In Eqs. (12) and (13), all components of vectors are expressed in the laboratory frame. Similar equations are obtained for the lower band, whose Berry curvature satisfies Ω^-_{XY,s} = -Ω^+_{XY,s}. It is noted that, as seen from Eq. (13), the torsion generates an effective magnetic field acting on the quasiparticles, which we denote by B. In the following, we consider static inhomogeneity of the order parameter and neglect Ω^+_{tp,s} and Ω^+_{tr,s}. It then follows from Eqs. (12) and (13) that the equations of motion take the form of Eqs. (19) and (20), where v_{p,s} = ∂E_s/∂p and B̃ = Ω^+_{rr} + B. In Eq. (19), the third term on the right-hand side represents the chiral magnetic effect, and the contribution proportional to B corresponds to the torsional chiral magnetic effect found in Ref. 34.
The third term on the right-hand side of Eq. (20) is associated with chiral anomaly; i.e. momentum is generated or annihilated at the Weyl points when both an effective magnetic field and a bias potential are applied along the same direction. It has recently been found from Eqs. (19) and (20) that the torsional chiral magnetic effect arising from a U(1) phase vortex or from lattice strain leads to negative thermal magnetoresistivity, that is, an anomalous enhancement of the longitudinal thermal conductivity along the vortex line. 35
B. Torsional chiral magnetic effect due to ℓ-textures
The current density induced by the torsional field is written as Eq. (21), 54 where f(ε) = 1/(e^{ε/T} + 1) is the Fermi distribution function. Equation (21) reproduces the current expression based on the linear response of the effective action for Weyl-Bogoliubov quasiparticles with respect to the torsional magnetic field. 34 The effective Hamiltonian in Eq. (6) possesses an anisotropic dispersion of Weyl-Bogoliubov quasiparticles, and the corresponding Berry curvature for quasiparticles with chirality s involves a dimensionless factor b ~ 1. A nonzero torsional magnetic field leads to an equilibrium current, where we utilize the particle-hole symmetry in Eq. (8). As seen from Eqs. (10) and (18), in the A-phase of 3He a torsional magnetic field is generated by the rotation of the Weyl points, p_0 = p_F ℓ.
The real-space texture of the ℓ-vector field is thermodynamically stable in superfluid 3He-A under rotation. This is a consequence of the broken gauge-orbit symmetry in the superfluid vacuum: the U(1) phase rotation in Eq. (3) is equivalent to a rotation of the orbital part m + i n about ℓ. Therefore, a supercurrent can be generated by a variation of (m, n, ℓ) without a U(1) phase singularity. The ℓ-texture emerges spontaneously in 3He-A under rotation, and continuous skyrmion-like textures provide an elementary building block for a variety of coreless vortices with spatially uniform superfluid density. 38,39 Here we consider skyrmion-vortices with radial and circular ℓ-textures, as shown in Fig. 1.
Let us now clarify the torsional field induced by skyrmion-vortices. It is convenient to express the orbital part of the order parameter tensor in Eq. (3) in terms of Euler angles α, β, γ as Eq. (25), with ℓ = cos α sin β x̂ + sin α sin β ŷ + cos β ẑ [Eq. (26)]. The texture is assumed to be translationally invariant along the z axis and axially symmetric. An axisymmetric skyrmion texture requires the Euler angles to obey α ≡ θ + α_0(r), γ = -nθ (n an integer), and β ≡ β(r), where r is the distance from the vortex center and θ is the azimuthal angle. For the real-space ℓ-vector field with axial symmetry, the parametrization therefore reduces to ℓ(r) = sin β(r) cos α_0(r) r̂ + sin β(r) sin α_0(r) θ̂ + cos β(r) ẑ [Eq. (27)]. The bending angle is a monotonic function of r obeying β(r) = 0 at r = 0 and β(r) = π/2 at r = R, where R determines the size of the skyrmion texture. In Fig. 1 we present the texture of (m, n, ℓ) in the skyrmion-vortex with n = 1: (a) the radial skyrmion with α_0 = 0 and (b) the circular skyrmion with α_0 = π/2. From Eqs. (27) and (7), the nonzero torsion field of Eq. (10) generated by a skyrmion ℓ-texture of continuous vortices takes the form of Eq. (28), where α' ≡ ∂α/∂r and β' ≡ ∂β/∂r. As shown in Fig. 1, the radial skyrmion texture (α_0 = 0) generates a toroidal torsional magnetic field in the xy plane, while the circular skyrmion (α_0 = π/2) is accompanied by a nonzero torsion field along the z axis.
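As a sanity check on this geometry, the short numerical sketch below (Python) evaluates curl ℓ for the axisymmetric parametrization of Eq. (27) with a linear bending angle β(r) = (π/2) r/R, using finite differences. For an axisymmetric, z-independent texture the only nonzero cylindrical components are (curl ℓ)_θ = -∂ℓ_z/∂r and (curl ℓ)_z = (1/r) ∂(r ℓ_θ)/∂r. The grid, R, and the β(r) profile are illustrative choices, not the parameters used in the paper's numerics.

import numpy as np

R = 10.0
r = np.linspace(1e-3, R, 2000)
beta = 0.5 * np.pi * r / R            # bending angle beta(r) = (pi/2) r/R

def curl_l(alpha0):
    # cylindrical components of the l-vector for a constant alpha0
    l_theta = np.sin(beta) * np.sin(alpha0)
    l_z = np.cos(beta)
    curl_theta = -np.gradient(l_z, r)             # (curl l)_theta
    curl_z = np.gradient(r * l_theta, r) / r      # (curl l)_z
    return curl_theta, curl_z

ct_rad, cz_rad = curl_l(0.0)              # radial skyrmion (v-vortex)
ct_cir, cz_cir = curl_l(0.5 * np.pi)      # circular skyrmion (w-vortex)
print(np.max(np.abs(cz_rad)))             # ~0: no axial torsional field for the radial texture
print(cz_cir[0], cz_cir[-1])              # nonzero axial torsional field for the circular texture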
Using the particle density ρ = p_F^3 / 3π^2, the TCME current is recast into Eq. (29), where b' ~ 1 is a dimensionless quantity associated with the cutoff of the Weyl cone. The skyrmion texture of Eq. (27) therefore gives rise to an in-plane circulating current and an out-of-plane current, depending on α. We note that j_TCME in Eq. (29) corresponds to the Weyl-Bogoliubov quasiparticle contribution to the second term on the right-hand side of the equilibrium current density in Eq. (4). This is distinct from the chiral anomaly effect due to the emergent electromagnetic fields considered in previous studies. 19,30 In Sec. IV, we discuss the temperature dependence of the equilibrium current density based on the full quantum mechanical calculation.
III. CHIRAL FERMIONS IN SKYRMION-VORTEX
In this section, we show the emergence of chiral fermions in superfluid 3He-A with skyrmion ℓ-textures. Going beyond the semiclassical effective theory, we here utilize the BdG equation, which is the fully quantum-mechanical equation for Bogoliubov quasiparticles in superfluid 3He-A. We start with the Hamiltonian for the equal-spin-pairing state, A_{μi}(r) = Δ_i(r) d_μ, where ψ_a and ψ^†_a denote the fermionic field operators. As 3He-A is an equal-spin-pairing state, we omit the spin degrees of freedom. The single-particle Hamiltonian density represents fermions of mass M, ε(-i∇) = -∇^2/2M - μ. Repeated Roman and Greek indices imply summation over the spin degrees of freedom and over (x, y, z), respectively.
Now, let us introduce the Bogoliubov transformation of the fermion operator Ψ ≡ [ψ, ψ^†]^T, Eq. (31), where η_E and η^†_E stand for Bogoliubov quasiparticle operators with energy E that satisfy fermionic anticommutation relations. We note that Eq. (31) obeys the particle-hole symmetry C Ψ = Ψ, with C = τ_x K, where K is the complex conjugation operator. Substituting Eq. (31) into the Hamiltonian (30), one obtains the Bogoliubov-de Gennes (BdG) equations, Eq. (33). To make the Bogoliubov transformation canonical, the quasiparticle wavefunctions must satisfy the orthonormality conditions.
A. Symmetry of Weyl-Bogoliubov quasiparticles in skyrmion-vortices
To clarify the symmetry of Bogoliubov quasiparticles in the presence of a skyrmion ℓ-texture, we start with the symmetry group relevant to the classification of axisymmetric vortices in superfluid 3He, where D_{∞,h} contains the group of rotations about the vortex line (ẑ), rotations about axes perpendicular to ẑ, and space inversion, and t_z represents the translational symmetry of the skyrmion-vortex along ẑ. The time-reversal symmetry T transforms the order parameter tensor as A_{μi} → A*_{μi}. The generator of the continuous rotation symmetry about ẑ is expressed as e^{iQφ}, where Q ≡ L_z - n I (n an integer) is the combination of the orbital angular momentum operator L_z and the U(1) phase rotation operator. The order parameter for axisymmetric vortices satisfies Q Δ_j(r) = 0. For the skyrmion-vortex states in Fig. 1, the orbital components of the order parameter are given by Eq. (35) with Eqs. (25) and (26). In this paper, we focus on the case n = 1. It is convenient to transform Eq. (35) into the eigenstates of the orbital angular momentum l = +1, 0, -1, where Y_{1,l} is the spherical harmonic function of degree L = 1 obtained by replacing p̂ → -i∂/p_F. The phase factor e^{ilθ}, which appears in Y_{1,l}, is compensated by the winding of (m', n') in Δ_l(r), and the U(1) phase factor e^{iθ} is factorized from the off-diagonal component of the BdG Hamiltonian.
TABLE I. Classification of skyrmion-vortex textures in terms of the discrete symmetries. The "radial", "circular", and "twist" textures correspond to α = 0, α = π/2, and α(r), respectively; B is the torsional magnetic field due to the ℓ-texture, and "ZES" denotes the distribution of the zero-energy states.
Here we impose the periodic boundary condition along the axial direction, φ_E(x, y, z) = φ_E(x, y, z + Z), which implies k = 2π n_z / Z with n_z an integer. Using the factorization in Eq. (36), Eq. (33) reduces to a one-dimensional differential equation for [u_{nmk}(r), v_{nmk}(r)], where the set of quantum numbers is (n, m, k). The differential equation is solved by expanding the wavefunctions in an orthonormal basis constructed from the Bessel functions J_ν(x). The Bessel-function expansion reduces Eq. (33) to a 2N × 2N eigenvalue problem, 56-58 where N is the number of orthonormal basis functions.
Apart from the continuous symmetry, there are discrete symmetries that leave the axisymmetric vortex order parameter invariant. First, we note that the BdG Hamiltonian in Eq. (33) always satisfies the particle-hole symmetry, which results in the relation (8). This symmetry guarantees that the eigenstates of the BdG equation appear in pairs of positive- and negative-energy states: the positive-energy state with E_n(m, k) and φ_{n,m,k}(r) ≡ [u_{nmk}(r), v_{nmk}(r)]^T has the particle-hole symmetric partner with -E_n(-m+1, -k) and C φ_{n,-m+1,-k}(r). The other discrete symmetries relevant to the vortex classification are given by the three operators {P_1, P_2, P_3}. 22,38,59,60 P_1 is the space-inversion operator. The order-two antiunitary operator P_3 is the combination of time reversal and a π-rotation about any axis perpendicular to the vortex line. The P_2 symmetry is defined as P_2 = P_1 P_3, the combination of time reversal and mirror reflection in a plane that contains the vortex line. Axisymmetric continuous vortices with skyrmion-like ℓ-textures are classified in terms of these discrete symmetries into three categories: v-, w-, and uvw-vortices. 38 The order parameters of the v- and w-vortices are invariant under the P_2 and P_3 symmetry, respectively, while the uvw-vortex class spontaneously breaks all the discrete symmetries.
Let us define the operator M = iσ_y denoting the mirror reflection in the xz plane. This operator flips the spin, momentum, and spatial coordinates as σ → (-σ_x, σ_y, -σ_z), k → (k_x, -k_y, k_z), and r → (x, -y, z), respectively. The P_3 symmetry relates the triad at (r, θ, z) to (m_x, -m_y, m_z), (-n_x, n_y, -n_z), and (l_x, -l_y, l_z) at (r, π - θ, -z). The radial skyrmion-vortex (v-vortex) with α = 0 in Fig. 1(a) spontaneously breaks the P_1 symmetry but maintains the P_2 symmetry. The vortex state with the circular skyrmion texture (α = π/2) in Fig. 1(b) belongs to the w-vortex class with the P_3 symmetry. The uvw-vortex class can be realized by twisting the skyrmion ℓ-texture so as to satisfy the conditions α(0) = π/2 and α(R) = 0. In Table I, we summarize the classification of skyrmion-vortices in terms of the discrete symmetries. The P_2 symmetry imposes an important constraint on the energy spectrum of the Bogoliubov quasiparticles, prohibiting an equilibrium current along the axial direction.
Axisymmetric v-vortices must satisfy the relation of Eq. (38). The first equality results from the P_2 symmetry, while the second equality reflects the particle-hole symmetry. Hence, the Bogoliubov quasiparticle spectrum in the radial skyrmion-vortex with α = 0 is an even function of k, and current flow along ẑ is prohibited. In contrast, as the P_3 symmetry does not impose any constraints on the eigenvalues, the circular and twisted skyrmion ℓ-textures may generate an equilibrium current along the axial direction.
B. Chiral fermions and real-space texture of Weyl points
Let us consider superfluid 3He-A confined in a cylinder of radius R. The triad (m, n, ℓ), parameterized as in Eqs. (25), (26), and (27), varies slowly from ℓ = ẑ at r = 0 to ℓ = r̂ at r = R. Hence, the bending angle is given by β(r) = (π/2R) r. We set the angle α_0 = 0 for the radial skyrmion (v-vortex) and α_0 = (π/2)(1 - r/R) for the twisted skyrmion (uvw-vortex). The size of the half-skyrmion is set to be larger than the superfluid coherence length ξ, R ≳ 10ξ, where ξ ≡ v_F/Δ_A > k_F^{-1}.
In Fig. 2 we show the Bogoliubov quasiparticle spectra obtained by diagonalizing Eq. (33) with the radial skyrmion texture (v-vortex). The Bogoliubov spectrum is asymmetric with respect to the azimuthal quantum number m, and the lowest branch (n = 0) crosses zero energy. As mentioned in Sec. II, the Weyl-Bogoliubov quasiparticles around the point nodes experience the torsional magnetic field T^3 = ∇ × ℓ. For the radial ℓ-skyrmion with α_0 = 0, the toroidal torsional magnetic field, T^3 ∝ θ̂, leads to the emergence of Landau levels dispersing linearly from m = 0. The lowest-energy branch crossing the Fermi level in Fig. 2, given by Eq. (39), is identified as the chiral fermion branch due to the emergent toroidal field; its group velocity, -v < 0, is of order Δ_A/k_F. Figure 2(b) shows the dispersion of the Bogoliubov spectrum with respect to the axial momentum k at m = 0, which satisfies the P_2 symmetry constraint in Eq. (38). The almost flat dispersion of the lowest eigenstates indicates that the chiral branch of Eq. (39) exists within |k| < k_F. Hence, the spectral asymmetry in the radial skyrmion-vortex leads to an equilibrium current along the azimuthal direction, while the P_2 symmetry prohibits flow along the axial direction. To capture the spatial distribution of the chiral fermions, we show in Fig. 2(c) the k_z-resolved zero-energy local density of states, N(k_z, r, E), 61 where the primed sum over E > 0 denotes the sum over (n, m) satisfying E_n(m, k) > 0. The peak amplitude in the (k, r) plane shifts from r = R at k = 0 to r = 0 at k = ±k_F. This spectral evolution reflects the spatial profile of the radial skyrmion ℓ-texture, which smoothly tilts from the axial to the radial direction. Therefore, the asymmetric branch crossing E = 0 in Fig. 2 is attributed to the Weyl-Bogoliubov quasiparticles bound to the Weyl points, and the ℓ-vector texture leads to spatially inhomogeneous structures of the Weyl bands in real coordinate space. Figure 3 shows the Bogoliubov quasiparticle spectra for the twisted skyrmion ℓ-texture. The chiral fermion branch with spectral asymmetry now appears in the k direction as well as in the azimuthal quantum number m. In order to satisfy the rigid-wall boundary condition, ℓ = r̂ at r = R, we set the azimuthal angle to α(r) = (1 - r/R) π/2. The resulting ℓ-texture generates a torsional magnetic field along the axial direction in addition to the azimuthal direction.
This texture is categorized into the uvw-vortex class, which preserves neither the P 2 nor the P 3 symmetry. The symmetry relation in Eq. (38) can therefore be violated, and the resulting spectral asymmetry along k is responsible for an equilibrium current along the axial direction.

IV. TORSIONAL CHIRAL MAGNETIC EFFECT AND MASS CURRENT IN SKYRMION-VORTEX

In the previous section, we demonstrated that chiral fermion branches emerge in the Bogoliubov quasiparticle spectrum under skyrmion-like ℓ-textures. Using the full quantum-mechanical BdG equation, we show in this section that low-lying Weyl-Bogoliubov quasiparticles contribute dominantly to the current density in the weak-coupling regime, while the contributions from continuum states become significant as the quantum regime is approached. Here we introduce the dimensionless parameter p F ξ = 2E F /∆ A so as to quantify the quantum corrections to the quasiclassical limit (p F ξ ≫ 1). We define the mass current density j(r) as the linear response of the thermodynamic potential with respect to an infinitesimal flow v, j µ = (δH/δv µ )| v=0 , where the Hamiltonian under a homogeneous velocity field is given by the Galilean transformation −i∇ → −i∇ − Mv. The current density is then given by j µ (r) = −i⟨ψ†(r)∂ µ ψ(r) − ψ(r)∂ µ ψ†(r)⟩. In terms of the Bogoliubov quasiparticle wavefunctions ϕ E = [u E , v E ] T , this is rewritten as Eq. (41), where the factor "2" arises from the spin degeneracy in the equal-spin-pairing state and f (E) = 1/(e E/T + 1) is the Fermi distribution function at temperature T. Owing to the particle-hole symmetry, the azimuthal current density in Eq. (41), or the angular momentum density (r × j) z = r j θ (r), can be recast at T = 0 into a form in which n(r) = 2⟨ψ†ψ⟩ is the particle density and ∑ E>0 stands for the sum over (n, m, k) with E n (m, k) > 0. As shown in Fig. 2(a), the chiral fermion states with m ≥ 1 have negative energy and thus are occupied at T = 0. These chiral fermion states make a positive contribution to the mass current density along the azimuthal direction, j θ (r). In the radial skyrmion-vortex, therefore, they produce an azimuthal mass current in the same sense as the torsional field, T₃ = curl l̂. In Fig. 4(a), we plot the current density, j θ (r), in the radial skyrmion-vortex (v-vortex) at T = 0 for p F ξ = 20, 6.7, and 4.0. For comparison, we plot the current density obtained from the gradient expansion, j MM , with C 0 = ρ. As the radial skyrmion-vortex always satisfies l̂ ⊥ curl l̂, this configuration is free from the issue of the anomalous term j an . It is seen from Fig. 4(a) that the current density obtained from the BdG equation is in good agreement with j MM in the weak-coupling regime p F ξ = 20, while j θ (r) deviates from j MM as p F ξ decreases. To clarify the Weyl-Bogoliubov quasiparticle contributions, we introduce the E-resolved current density, Eq. (43), where E i ≡ E n (m, k). The current density is obtained by integrating j(r, E) over E as j µ (r) = ∫ dE j µ (r, E) f (E). For numerical calculations, the δ-function in Eq. (43) is replaced by a Lorentzian function with width 0.025∆ A . Figure 4(b) shows the E-resolved current density for the radial skyrmion-vortex at p F ξ = 20. The mass current density may be decomposed into two contributions, j = j Weyl + j cont . The contribution arising from Weyl-Bogoliubov quasiparticles, j Weyl , is defined as the sum over states with |E| < ∆ A , and j cont ≡ j − j Weyl is the current carried by the continuum states.
It is seen from Fig. 4(b) that the Weyl-Bogoliubov quasiparticle states, including the chiral branch, contribute dominantly to the mass current. However, the continuum states with |E| > ∆ A make non-vanishing contributions to the mass current. They satisfy j cont (r) ≈ −j Weyl /2 for p F ξ ≫ 1 and lead to a counterflow against the Weyl-Bogoliubov quasiparticle flow. The backflow of the continuum states has also been pointed out for the edge mass current with spatially polarized l̂-vectors 62,63 and the surface spin current in the B phase of superfluid 3 He. 64–66 The main contributions to the mass/spin currents originate from chiral/helical fermion states, which are the topologically protected Andreev bound states at the edge. As pointed out by Stone and Roy, 67 however, the bound states are not the only contribution. Another contribution results from the continuum states affected by the formation of the Andreev bound states. This contribution partially cancels that of the bound states, so the edge current arising from the bound states alone differs from the actual edge current by a factor of 2. The E-resolved current density in Fig. 4(b) resembles the behavior of the edge mass/spin current: the continuum states are affected by the existence of the nontrivial l̂-texture and weaken the flow arising from the chiral fermion states. To capture the contribution of the Weyl-Bogoliubov quasiparticles more systematically, we calculate the total angular momentum per particle at T = 0. The total angular momentum is defined as L z = ∫ (r × j) z dr, and the total particle number is given by N = ∫ n(r) dr. The angular momentum arising from the Weyl-Bogoliubov quasiparticles within |E| < ∆ A is given by L Weyl z = ∫ (r × j Weyl ) z dr, and the contribution of the continuum states is L cont z ≡ L z − L Weyl z . In Fig. 5, we plot the p F ξ-dependence of the total angular momentum. The total angular momentum in an isolated Mermin-Ho vortex or a radial skyrmion-vortex was also calculated by Nagai 68,69 using the quasiclassical theory, which reproduces the McClure-Takagi prediction, 47 L z = ℏN/2. Note that L Weyl z vanishes at p F ξ = 0, where the bulk excitation gap closes and a topological phase transition occurs. As we pointed out in (ii), the properties of the mass current and angular momentum in the quantum regime p F ξ ≲ 10 are essentially different from those in the quasiclassical limit. The mass current arising from Weyl-Bogoliubov quasiparticles alone weakens with decreasing p F ξ, while high-energy quasiparticle states with |E| > ∆ A make a dominant contribution to L z /N. In Fig. 6, we plot the T-dependence of L z /N for p F ξ = 10 and 2.0, where ∆ A (T) is obtained by solving the gap equation for a spatially uniform ABM state, Eq. (3). For comparison, we calculate L Cross z = ∫ (r × j Cross ) z dr, where j Cross is obtained from the gradient expansion, 70 and ρ s∥ and ρ s⊥ are the components of the superfluid mass density tensor (ρ s ) parallel and perpendicular to l̂, respectively, whose T-dependences are determined by the generalized Yosida function. 70 For the case of ∇ρ = 0, the gradient-expansion result is governed by an averaged superfluid density, ρ ave , and serves as a reference that does not include the contribution of the Weyl-Bogoliubov quasiparticles to the angular momentum. For p F ξ = 10, the T-dependence slightly deviates from that of ρ ave and is enhanced in the low-T regime. This can be understood as an extra contribution of the torsional chiral magnetic effect, L TCME z , as discussed for Fig. 5. This extra contribution vanishes as the topological phase transition (p F ξ = 0) is approached, and the mass current is then dominated by the contribution arising from the continuum states.
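As a rough numerical illustration of the E-resolved decomposition used above (Eq. (43) with the δ-function broadened into a Lorentzian of width 0.025∆ A ), the sketch below uses invented eigenenergies and current weights in place of the actual BdG solution; it is meant only to show how j(r, E) and the split into j Weyl (|E| < ∆ A ) and j cont (|E| > ∆ A ) can be organized numerically.

```python
import numpy as np

Delta_A = 1.0
rng = np.random.default_rng(1)
# Hypothetical eigenenergies E_i and the azimuthal current each state carries
# at one radius; these stand in for the actual BdG output.
E_sub = rng.uniform(-0.8, 0.8, 60) * Delta_A                     # sub-gap ("Weyl") states
E_cont = rng.uniform(1.0, 5.0, 240) * rng.choice([-1, 1], 240)   # continuum states
E_i = np.concatenate([E_sub, E_cont])
j_i = rng.normal(size=E_i.size)

def j_of_E(E_grid, E_i, j_i, width=0.025 * Delta_A):
    """Lorentzian-broadened E-resolved current density, cf. Eq. (43)."""
    lor = (width / np.pi) / ((E_grid[:, None] - E_i[None, :]) ** 2 + width ** 2)
    return lor @ j_i

E_grid = np.linspace(-6, 6, 1201) * Delta_A
jE = j_of_E(E_grid, E_i, j_i)

# T = 0: only occupied (E < 0) states contribute; split into the two parts
occ = E_i < 0
j_weyl = j_i[occ & (np.abs(E_i) < Delta_A)].sum()
j_cont = j_i[occ & (np.abs(E_i) >= Delta_A)].sum()
print(j_weyl, j_cont, j_weyl + j_cont)
```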
Lastly, we discuss the mass current induced by twisted-skyrmion textures in connection with the paradox of the anomalous current in Eq. (4). Using the particle-hole symmetry, one obtains the current along the axial direction, j z (r), from Eq. (41) at T = 0. As shown in Fig. 3(b), the twisted skyrmion ℓ-texture breaks the P 2 symmetry and induces chiral fermion branches along the axial momentum. As the branch has a negative group velocity with respect to k and the k > 0 region is occupied at T = 0, the asymmetric branch makes a positive contribution to j z (r). In Fig. 7, we plot j θ (r) and j z (r) for the twisted skyrmion-vortex l̂-texture. It is seen that in the weak-coupling regime, both current profiles are in good agreement with j MM including the anomalous term, rather than with j IMU . For p F ξ = 20, we find that the total angular momentum is L z (0)/N = 0.43ℏ, which is inconsistent with the McClure-Takagi prediction that L z (0)/N = ℏ/2 independently of the ℓ-texture. The depletion of L z (0) can be understood as the extra contribution arising from the anomalous term, which gives j an θ < 0 in the case of the twisted skyrmion ℓ-texture. Hence, our results show that, in contrast to the McClure-Takagi prediction, L z (0)/N is not fixed to ℏ/2 but is sensitive to the ℓ-texture even in the weak-coupling regime. As shown in Fig. 7, the current density profiles deviate from j MM as p F ξ decreases. To capture the change of the spatial profiles in j θ and j z , we plot in Figs. 7(c) and 7(d) the rescaled mass current densities, where n* is a fitting parameter that rescales j µ (r) so that it matches j MM µ (R) = j IMU µ (R). Figures 7(c) and 7(d) show that the rescaled profiles gradually shift from j MM to j IMU as p F ξ decreases, implying that j an becomes negligible around p F ξ ∼ 0. As shown in Fig. 5, the contribution arising from Weyl-Bogoliubov quasiparticles vanishes at p F ξ = 0. Hence, the shift of the current density profiles indicates that Weyl-Bogoliubov quasiparticles make significant contributions in the weak-coupling regime. Our results shown above are consistent with the prediction by Balatsky et al. 19,32,33 that the anomalous term originates from the chiral anomaly of Weyl-Bogoliubov quasiparticles induced by twisted skyrmion ℓ-textures.

V. SUMMARY

We have investigated chiral anomaly phenomena induced by skyrmion-like ℓ-textures in superfluid 3 He-A, a prototype of Weyl superfluids. Using the semiclassical theory, we find a torsional chiral magnetic effect: torsional fields induced by skyrmion-like ℓ-textures act on Weyl-Bogoliubov quasiparticles as emergent magnetic fields and result in an equilibrium mass current flowing along these emergent fields. Using the full quantum-mechanical BdG equation, we have shown that in skyrmion vortices a chiral fermion branch with spectral asymmetry appears in the low-lying quasiparticle spectrum; the chiral fermion states are responsible for the equilibrium mass flow. Our numerical results, however, show that the total mass current, or the total angular momentum, differs from that arising from the chiral fermions alone by a factor of 1/2. The discrepancy is compensated by the backflow arising from the continuum states, and for radial skyrmion vortices our numerical results in the quasiclassical limit coincide with the prediction by McClure and Takagi, 47 L z = ℏN/2. Furthermore, it has been demonstrated that the angular momentum associated with the Weyl-Bogoliubov quasiparticles increases as the quantum regime is approached.
This anomalous behavior can be understood as the extra contribution to the current due to the torsional chiral magnetic effect in Eq. (23). We have clarified the chiral anomaly aspect of the mass current density in Eq. (4): the curl l̂ term in Eq. (4) is associated with the TCME of Weyl-Bogoliubov quasiparticles induced by a skyrmion-vortex. The appearance of a chiral branch in 3 He-A was first pointed out by Combescot and Dombre, 52,53 who demonstrated that in the case of a twisted (non-skyrmionic) ℓ-texture (l̂ ∥ curl l̂) the BdG equation for low-lying quasiparticles reduces to a Dirac-type equation with a fictitious magnetic field generated by a variation of the ℓ-field. Balatsky et al. clarified that the chiral branch is topologically protected by the Atiyah-Singer index theorem and that the chiral fermions carry an uncompensated current at T = 0. 33 Although our result for the mass current qualitatively agrees with that in Ref. 33, it differs quantitatively because Ref. 33 considers only the chiral fermion contributions. As mentioned above, we find that in the quasiclassical limit the continuum states bring about a backflow against the quasiparticle flow, i.e., L Weyl z ≈ −2L cont z ≈ Nℏ. As the quantum regime is approached, the mass current carried by the continuum states changes its sign and makes a dominant contribution. In the vicinity of the topological phase transition (p F ξ = 0), indeed, the total angular momentum is governed by the continuum states and the contribution from chiral fermions becomes negligible. We have also shown that the contribution of j an is crucial in the weak-coupling regime, while it vanishes as p F ξ decreases. Although L z /N is composed of contributions from both Weyl quasiparticles and continuum states, so that it is difficult to extract the TCME contribution alone, our results may shed new light on the paradox of the mass current and the intrinsic angular momentum.
Comparative analyses of DNA repeats and identification of a novel Fesreba centromeric element in fescues and ryegrasses

Cultivated grasses are an important source of food for domestic animals worldwide. Increased knowledge of their genomes can speed up the development of new cultivars with better quality and greater resistance to biotic and abiotic stresses. The most widely grown grasses are tetraploid ryegrass species (Lolium) and diploid and hexaploid fescue species (Festuca). In this work, we characterized repetitive DNA sequences and their contribution to genome size in five fescue and two ryegrass species as well as one fescue and two ryegrass cultivars. Partial genome sequences produced by Illumina sequencing technology were used for genome-wide comparative analyses with the RepeatExplorer pipeline. Retrotransposons were the most abundant repeat type in all seven grass species. The Athila element of the Ty3/gypsy family showed the most striking differences in copy number between fescues and ryegrasses. The sequence data enabled the assembly of the long terminal repeat (LTR) element Fesreba, which is highly enriched in centromeric and (peri)centromeric regions in all species. A combination of fluorescence in situ hybridization (FISH) with a probe specific to the Fesreba element and immunostaining with centromeric histone H3 (CENH3) antibody showed their co-localization and indicated a possible role of Fesreba in centromere function. Comparative repeatome analyses in a set of fescues and ryegrasses provided new insights into their genome organization and divergence, including the assembly of the LTR element Fesreba. A new LTR element, Fesreba, was identified and found in abundance in centromeric regions of the fescues and ryegrasses. It may play a role in the function of their centromeres.

Background

Grasses (Poaceae) are an important source of food for domestic animals worldwide and perform important ecological and environmental functions. The tribe Poeae is the largest tribe in the family Poaceae, and species from its largest subtribe, Loliinae, grow in a range of habitats, including wetlands, dry areas, and regions with cold and temperate climates; some are even well adapted to the extreme conditions of mountain, arctic, and sub-Antarctic regions [1]. The subtribe Loliinae comprises a cosmopolitan genus Festuca and its satellite genera [2,3]. Festuca is the largest genus of the family Poaceae, containing more than 600 species, and Torrecilla and Catalán [4] distinguished its two main evolutionary lines: broad leaved and fine leaved (Fig. 1). Broad-leaved Festuca species (hereafter "fescues") include the subgenus Schedonorus, which gave rise to Lolium species (hereafter "ryegrasses"), a sister group of fescues (Fig. 1) [1]. The evolution of grasses, including Loliinae, has been accompanied by frequent polyploidization and hybridization events, and about 70% of grass species are polyploid [6]. The species of Loliinae have large genomes ranging from 2.6 Gbp/1C to 11.8 Gbp/1C [7,8]. This study focuses on species from the subgenus Schedonorus, a complex of species with various ploidy levels [7,9] that includes important species widely used for forage and turf. Although some Schedonorus species are diploid, such as Festuca pratensis Huds. (2n = 2x = 14) and Lolium multiflorum Lam. (2n = 2x = 14), the subgenus also contains polyploid representatives. Although fescues and ryegrasses have been intensively studied, their evolution and the origin of most polyploid representatives remain obscure [11,15,16].
Like other species with large genomes, the nuclear genomes of fescues and ryegrasses include a large number and variety of repetitive DNA sequences [17,18]. Their amplification in the genome, accompanied by interspecific hybridization and polyploidization, has expanded the genome size [19–24]. However, these processes have likely been counterbalanced by recombination-based mechanisms that have removed substantial parts of nuclear genomes [25–27]. Repetitive DNA elements may play different roles in a nuclear genome. Tandem organized ribosomal RNA genes and telomeric sequences are the key components of nucleolar organizing regions and chromosome termini, respectively.

Fig. 1 Phylogenetic tree of the Loliinae subtribe. Phylogeny of subtribe Loliinae with Brachypodium distachyon used as an outgroup. The tree was constructed from ITS sequence regions of Loliinae species and B. distachyon with PhyML implemented in SeaView [5]. A detailed phylogeny of subgenus Schedonorus is depicted and shows the relationships of fescue and ryegrass species in this lineage (highlighted in light yellow).

Centromeric regions in Arabidopsis, Brachypodium, rice, and maize are partly formed from specific satellite DNAs with ~130 bp long units [28–31], whereas in other plant species, including cereals, these regions are formed by large blocks of Ty3/gypsy retrotransposons containing a chromodomain [29, 32–34]. In F. pratensis, a putative long terminal repeat (LTR) element localizing preferentially to centromeric regions has been identified [35]. In addition to elucidating the molecular organization of chromosome domains, characterization of repetitive parts of nuclear genomes helps in the development of cytogenetic markers [21,35,36]. Repetitive DNA sequences are also used extensively to study genetic diversity and processes of genome evolution and speciation [37–40]. The main goal of the present work was to elucidate the repetitive landscape and its impact on genome size and genome divergence in closely related land grasses, including natural polyploid species. We characterized repetitive DNA sequences in the nuclear genomes of 10 representatives of fescues and ryegrasses. We performed global analyses of repetitive DNA sequences and characterized their abundance and variability after partial Illumina sequencing. Moreover, we characterized and assembled the DNA sequence of an LTR element that was highly enriched in centromeric and (peri)centromeric chromosome regions in all 10 genotypes. Colocalization of the centromere-specific histone H3 variant CENH3 with the LTR element indicated its role in centromere function.

Genome size estimation

Flow cytometric analysis of propidium iodide-stained nuclei was conducted to estimate nuclear DNA content (Fig. 2). Because of the large differences in genome size between the species analyzed, two internal reference standards were used: Pisum sativum cv. Ctirad (2C = 9.09 pg DNA) [41] and Secale cereale cv. Dankovske (2C = 16.19 pg DNA) [41]. All histograms of relative DNA content displayed two dominant peaks corresponding to G1 nuclei of the sample and the standard. The 2C nuclear DNA content thus determined ranged from 5.32 pg in L. multiflorum to 20.17 pg in F. gigantea. The monoploid genome (1Cx) ranged in size from 2.43 pg in F. mairei to 3.36 pg in F. gigantea (Table 1). The remaining representatives of fescues and ryegrasses had similar 1Cx sizes (~2.7 Gb).
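The arithmetic behind these estimates is simple and is spelled out in the Methods; the snippet below illustrates it with hypothetical G1 peak means (the actual peak positions are not reported here), using the conversion 1 pg DNA ≈ 0.978 × 10⁹ bp.

```python
# Hypothetical worked example of the flow-cytometry genome-size calculation;
# the peak means below are invented values, not measured data.
standard_2C_pg = 9.09                 # Pisum sativum cv. Ctirad internal standard
sample_peak_mean = 122.0              # hypothetical G1 peak mean of the sample
standard_peak_mean = 208.0            # hypothetical G1 peak mean of the standard

sample_2C_pg = standard_2C_pg * sample_peak_mean / standard_peak_mean
sample_2C_gbp = sample_2C_pg * 0.978  # 1 pg DNA corresponds to 0.978 x 10^9 bp
ploidy_level = 2                      # 2n = 2x diploid; use 4 for a tetraploid, 6 for a hexaploid
monoploid_1Cx_pg = sample_2C_pg / ploidy_level
print(round(sample_2C_pg, 2), round(sample_2C_gbp, 2), round(monoploid_1Cx_pg, 2))
```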
Repeat composition and comparative analyses of repetitive DNA sequences

Interspecific comparisons, reconstruction, and quantification of major repeat families were performed with the RepeatExplorer pipeline [42]. The process, which involved grouping orthologous repeat families from all analyzed species in the same cluster, facilitated the assembly, identification, and quantification of individual repeat elements. In all accessions, LTR retroelements were the most abundant component of the nuclear genome (Table 2, Fig. 3). Ty3/gypsy elements were more than 4 times more abundant than Ty1/copia retrotransposons (Table 2). The biggest difference in copy number between fescues and ryegrasses was for an LTR element from the Athila clade. Whereas the nuclear genomes of both Lolium species were enriched for the element, which accounted for ~25-30% of their genomes, the orthologous Athila element accounted for only ~5-7% of the nuclear genomes of fescues (Table 2). A relatively large part of the genomes was represented by unclassified LTR sequences, which indicates a high frequency of unique LTR sequences. DNA transposons and long interspersed nuclear element (LINE) elements were found in low copy numbers, and tandem repeats accounted for 1.5% to more than 8% of the genome sequences (Table 2, Fig. 3). Comparative analyses with RepeatExplorer showed that most clusters of orthologous repeat families contained reads from all accessions and that a large number of similar sequences were present in fescues and ryegrasses. Among the fescues, F. mairei and F. glaucescens showed the lowest similarity in DNA repeats. The composition as well as the abundance of DNA repeats in ryegrasses were highly conserved. Tandem organized repeats were the most diverged elements among the fescues and ryegrasses studied, and some of the repeats were species specific (Fig. 4, Additional file 1: Table S1). In addition to tandem repeats, some small sequence clusters contained reads from only a few species. Species-specific variants of the majority of repetitive elements within and between fescues and ryegrasses were identified only after detailed analyses of individual repeat clusters with SeqGrapheR (Fig. 5a-c). Detailed analyses revealed the presence of species-specific DNA contigs, which may be used to develop molecular and cytogenetic markers. To confirm the differences determined in silico, we analyzed selected repetitive DNA elements using Southern hybridization. We designed specific probes for those DNA repeats that seemed to have species-specific variants. A probe for the Ty3/gypsy Athila element that was reconstructed in cluster CL1 and showed the largest copy number variation between fescues and ryegrasses (Table 2) gave strong hybridization signals on genomic DNA from ryegrasses but no or weak signals on DNA from fescues (Fig. 5d). Similarly, a probe for the Ty3/gypsy Athila element that was reconstructed in cluster CL38 and contained mostly Festuca sequence reads (Fig. 5b) provided strong visible signals only with fescue genomic DNA (Fig. 5e). Finally, Southern hybridization was performed with a probe for the Ty3/gypsy Ogre-Tat retrotransposon, identified in cluster CL20. The probe, which was designed from contigs representing fescues (Fig. 5c), provided strong hybridization signals on all fescues analyzed and low-intensity signals on ryegrasses (Fig. 5f). In general, the signal intensities obtained after Southern hybridization corresponded to the copy numbers identified in silico.
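For orientation, the way cluster sizes translate into genome proportions can be sketched as below; the read counts are purely illustrative, and the 1Cx size is only an approximate figure taken from the genome-size estimates above.

```python
# Illustrative conversion of RepeatExplorer cluster read counts into genome
# proportions and approximate amounts of DNA; all read counts are invented.
total_analyzed_reads = 5_000_000
cluster_reads = {"Athila-like (CL1)": 1_350_000,   # hypothetical, Lolium-like value
                 "Ogre-Tat (CL20)": 240_000}       # hypothetical
genome_1Cx_gbp = 2.7                               # approximate monoploid genome size

for name, reads in cluster_reads.items():
    proportion = reads / total_analyzed_reads
    mbp = proportion * genome_1Cx_gbp * 1000
    print(f"{name}: {proportion:.1%} of the genome, ~{mbp:.0f} Mbp per 1Cx")
```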
Centromere composition

Partial genome sequence data obtained using Illumina sequencing technology made it possible to reconstruct nearly complete centromeric LTR elements in all 10 accessions of fescues and ryegrasses. Detailed characterization of the element, called Fesreba, confirmed that it belongs to the Ty3/gypsy Chromoviridae lineage. Phylogenetic analyses of its reverse transcriptase (RT) domain showed a close relationship with the Cereba element (Fig. 6), which was identified earlier in barley (Hordeum vulgare) [43]. Southern hybridization with a probe for the RT domain of Fesreba and a probe for its LTR region [35] showed their presence in all fescues and ryegrasses included in this work (Additional file 2: Fig. S1). Similar hybridization patterns indicated sequence conservation between Fesreba repetitive DNA elements in these species. The results were supported by in silico data, which showed high similarity at the DNA sequence level (most abundant copies of Fesreba shared at least 92% similarity at the DNA level within and between fescues and ryegrasses) but lower abundance in ryegrasses. To confirm the differences in Fesreba copy number, we performed quantification for the RT domain and LTR sequence using droplet digital polymerase chain reaction (ddPCR). The results confirmed a two-fold higher copy number of Fesreba in fescues compared to ryegrasses (Additional file 3: Table S2). The assay also showed that the majority of genotypes analyzed contained 5 to 50 times more copies of the LTR region of Fesreba than its coding region (Additional file 3: Table S2). To confirm preferential localization of Fesreba to centromeric chromosome regions, we conducted fluorescence in situ hybridization (FISH) on mitotic metaphase plates with probes derived from its RT domain and LTR region. In all fescues and ryegrasses, both probes localized preferentially to centromeric regions of all chromosomes (Fig. 7). Whereas the hybridization signals of the RT domain were observed almost exclusively in centromeric regions, a probe derived from the noncoding LTR region resulted in stronger signals in centromeric and/or pericentromeric regions and weak signals along the chromosomal arms, as shown previously in F. pratensis [35]. Weak signals of the LTR part of Fesreba in distal parts of chromosomes indicate the presence of unique LTRs spread over the genome and correspond to a higher copy number of the LTR non-coding part of Fesreba compared to its coding sequence. In addition to the fescues and ryegrasses included in this study, FISH was performed with the same probes on mitotic metaphase plates from related grass species: oat, barley, rye, bread wheat, and Aegilops tauschii. High homology of the RT coding domain resulted in successful in situ localization in all species. However, the probe specific to the LTR region of Fesreba provided visible signals only in A. sativa (Additional file 4: Fig. S2). Finally, immunostaining with the centromere-specific histone H3 variant CENH3 [44] in combination with FISH with probes for the RT domain and LTR region of Fesreba resulted in overlapping signals in all fescues and ryegrasses studied (Fig. 8, Additional file 5: Fig. S3).

Discussion

Because of genome shock, the 1Cx size of polyploid species is often, but not always, lower than that of their progenitors [25,45].
In this study, we performed comparative analyses of repeatomes and analyzed the impact of DNA repeats on genome size in a set of Festuca and Lolium species differing in ploidy. The set comprised hexaploids F. arundinacea subsp. arundinacea and F. gigantea; tetraploids F. glaucescens and F. mairei; and artificial autotetraploids F. pratensis cv. Westa, L. multiflorum cv. Mitos, and L. perenne cv. Neptun developed in breeding programs. We estimated nuclear DNA amounts using flow cytometry, and a test of normality confirmed that the data set had a normal distribution. Our results suggest possible genome changes in hexaploid F. arundinacea and tetraploid ryegrasses compared to their probable progenitors. Although the differences in the 1Cx size of natural polyploid F. arundinacea and its probable parents (F. pratensis and F. glaucescens) are small, they are statistically significant (P < 0.01). The same is true for tetraploid ryegrass cultivars obtained after polyploidization. Genome downsizing was detected in the case of F. arundinacea (~2% difference between expected and estimated values) and tetraploid L. perenne (~1% decrease). In the tetraploid cultivar of L. multiflorum, a slight increase in genome size (~4%) was detected, in agreement with Kopecký et al. [8]. In the case of tetraploid fescue cultivars obtained after polyploidization, no statistically significant difference in 1Cx value was found (P > 0.01). DNA retrotransposons are major contributors to the variation in nuclear genomes in plants [24,46,47]. Various approaches and tools have been developed to study these important parts of nuclear genomes, one of them being RepeatExplorer, which facilitates de novo repeat identification and characterization [42,48]. The pipeline uses graph-based clustering and analyzes next-generation sequencing data to reconstruct and characterize DNA repeats in a particular species or to compare DNA repeat composition in different genotypes [23, 24, 49–51]. The pipeline has been frequently used to reconstruct DNA repeats in diversity studies, to create repeat databases for repeat masking [19,46,48], and to identify tandem organized repeats suitable as probes for molecular cytogenetics [35, 51–53]. Our work revealed that Ty3/gypsy elements had the highest impact on genome size in fescues and ryegrasses. Ty3/gypsy elements are also abundant in other Poaceae species, including wheat, rice, maize, and barley [8, 54–56]. In barley, about 50% of the genome is made up of 15 high-copy transposable element (TE) families, with elements of the Angela lineage (Ty1/copia family) being the most abundant and representing almost 14% of the genome [56]. The Ty3/gypsy superfamily is 1.5-fold more abundant than the Ty1/copia superfamily [56]. The Festuca and Lolium genera comprise closely related complexes of species, and thus a high homology of DNA repeats was observed in this study. The main difference was the copy number. In Lolium species the Ty3/gypsy Athila LTR retroelement accounted for ~25% of the nuclear genomes, whereas in fescues it accounted for 0.7% in the tetraploids F. glaucescens and F. mairei and for 6% in the other fescues analyzed. This indicates a burst of the Athila LTR element linked to Lolium speciation.

Fig. 6 Phylogenetic tree of Chromoviridae elements. The tree was constructed from a Jukes-Cantor distance matrix of the reverse transcriptase domains of Ty3/gypsy Chromoviridae elements described in Neumann et al.
[34] and Fesreba elements identified in the present work, with BioNJ implemented in SeaView [5]. The tree was rooted on the Ty3/gypsy/Tat element. The subclade of the Cereba element (in red) and other closely related elements identified in different plant species are labeled in blue. Fesreba elements identified in fescues and ryegrasses are also marked in blue.

Activation and integration of TEs (e.g., as a result of environmental change) may lead to a rapid burst of the Athila element in a species-specific manner [46,47,57] and impact evolution and speciation [46,58]. In some species, a rapid increase in the number of lineage-specific retroelements can also result in significant genome upsizing [24, 58–60], which was not observed in the fescues and ryegrasses included in our study. Species-specific DNA elements identified in this work were represented by tandem organized repeats (Additional file 1: Table S1). Unique tandem repeats are also found in other plant species, and thanks to their genus or species specificity they have been widely used in molecular cytogenetics (e.g., to identify chromosomes using FISH) [61–64]. Tandem repeats originally identified in F. pratensis chromosome 4F are useful as probes for FISH to identify individual chromosomes of the species [18,35] and in comparative karyotype analyses of its cultivars. The present work resulted in the identification of other putative tandem organized repeats, either genus or species specific (Additional file 1: Table S1). These observations expand the number of potential cytogenetic markers for comparative karyotyping and identification of chromosomes in other fescue and ryegrass species. Although the sequencing of F. pratensis chromosome 4F revealed a relatively high number of tandem repeats, none of them localized to chromosome centromeric regions [18,35].

Fig. 7 Localization of the centromeric LTR retrotransposon Fesreba on mitotic metaphase chromosomes. Localization was performed in Festuca and Lolium species with fluorescence in situ hybridization (yellow-green or violet signals) with a probe for the reverse transcriptase domain of the Fesreba element. a F. arundinacea subsp. arundinacea (2n = 6x = 42). b F. gigantea (2n = 6x = 42). c F. mairei (2n = 4x = 28). d F. pratensis cv. Westa (2n = 4x = 28). e L. perenne GR 3320 (2n = 2x = 14). f L. multiflorum cv. Mitos (2n = 4x = 28). Chromosomes were counterstained with DAPI (blue). The bar corresponds to 10 μm.

However, the mapping of other types of DNA repeats on mitotic metaphase chromosomes showed preferential localization of one uncharacterized DNA element, CL38, to centromeric regions of F. pratensis chromosomes [35]. In this work, the entire DNA element homologous to the CL38 repeat was reconstructed and its nature was clarified. Phylogenetic analyses of its coding domains (Fig. 6) confirmed close relationships with other plant centromeric elements of the Ty3/gypsy Chromoviridae lineage, such as Cereba-like elements [43]. Preferential localization of the Cereba element to centromeric regions of barley chromosomes was shown by Hudakova et al. [33], and a more comprehensive study of centromere-specific elements belonging to the lineage of centromeric retrotransposons of maize (CRM) of the Ty3/gypsy family in a larger set of plant species followed [20,34]. These studies imply a role for TEs at the structural level and their impact on centromere structure.
Li et al. [65] showed that the Cereba element was strongly associated with the histone H3 variant CENH3, which plays a role in centromere function. Colocalization of the centromere-specific element Fesreba, reconstructed in this work, with histone CENH3 (Fig. 8, Additional file 5: Fig. S3) indicates a role for this element in the function of fescue and ryegrass centromeres as well.

Conclusions

Partial sequencing of genomes of 10 fescues and ryegrasses revealed various types of retrotransposons as the most abundant repeats. These comparative repeatome analyses increase knowledge of genome organization in fescues and ryegrasses and confirm close relationships between Festuca and Lolium. The most striking difference was observed for the Athila element, which was ~5 times more abundant in Lolium than in Festuca. Highly diverged DNA repeats were represented by tandem organized repeats, which are candidates for species-specific cytogenetic markers. In addition to tandem repeats, other species-specific variants of the majority of repetitive DNA sequences within and between fescues and ryegrasses were identified. A nearly complete LTR element Fesreba was assembled and found to be highly enriched in centromeric and (peri)centromeric chromosome regions in all species. A combination of FISH with a probe for Fesreba and immunostaining with CENH3 antibody showed their co-localization.

Estimation of nuclear genome size

Nuclear DNA amounts were determined according to Doležel et al. [66] following the two-step procedure of Otto [67] with modifications. Samples of isolated nuclei stained with propidium iodide were analyzed with a Sysmex CyFlow Space flow cytometer (Sysmex Partec, Münster, Germany) equipped with a 532 nm laser. Two reference standards were used to estimate DNA amounts in absolute units. Pea (Pisum sativum cv. Ctirad; 2C = 9.09 pg DNA) [41] served as an internal standard for estimating DNA content in all accessions except F. mairei, for which rye (Secale cereale cv. Dankovske; 2C = 16.19 pg DNA) [41] was used. Three plants were measured per accession, and each plant was analyzed three times on three different days. At least 5000 nuclei per sample were analyzed. Nuclear DNA amounts were calculated from measurements of individual samples as follows: 2C nuclear DNA content (pg) = 2C nuclear DNA content of reference standard × sample G1 peak mean / standard G1 peak mean. Mean nuclear DNA content (2C) was estimated for each plant, with 1 pg DNA equal to 0.978 × 10⁹ bp [68]. The statistical significance of the differences between 1Cx sizes was determined with one-way ANOVA. Analyses were conducted with NCSS 97 (Statistical Solutions, Cork, Ireland). The significance level α = 0.01 was used.

Illumina sequencing and data analyses

Genomic DNA was isolated with the NucleoSpin PlantII kit (Macherey-Nagel, Düren, Germany) according to the manufacturer's recommendations and used to prepare Illumina libraries with a Nextera® DNA Sample Preparation Kit (Illumina, San Diego, CA, USA). Briefly, 50 ng DNA was fragmented, purified, and amplified according to the protocol. The DNA concentration in individual libraries was measured with a Qubit fluorometer, adjusted to an equal molar concentration, and pooled prior to sequencing. DNA sequencing was done with an Illumina MiSeq with either single- or paired-end sequencing to produce up to 500 bp reads.
Sequence reads were deposited in the Sequence Read Archive (BioProject ID: PRJNA601325, accessions SAMN13866227, SAMN13866228, SAMN13866229, SAMN13866230, SAMN13866231, SAMN13866232, SAMN13866233, SAMN13866234, SAMN13866235, SAMN13866236). Illumina reads were trimmed for adapters and for quality with the FASTX-Toolkit (-q 20 -p 90; http://hannonlab.cshl.edu/fastx_toolkit/index.html). Detailed characterization of repeat families was performed with a stand-alone version of the RepeatExplorer pipeline [37] running on an IBM server with 16 processors, 100 GB RAM, and 17 TB disk space. In the first step, comparative analyses of repetitive parts of the genomes were performed with the RepeatExplorer pipeline according to Novák et al. [49]. Random data sets representing the same amount of reads (0.5× coverage of individual accessions) were used to reconstruct repetitive elements using a graph-based method according to Novák et al. [48]. The RepeatExplorer pipeline led to the characterization of assembled sequences using different tools (e.g., BLASTN and BLASTX, phylogenetic analysis) [37,48]. Tandem organized repeats were identified with Dotter [72]. In the second step, the RepeatExplorer pipeline was applied to a merged data set containing all species marked by specific prefixes to perform comparative analyses [49]. The results of the clustering were then used to create databases of repetitive sequences. Databases of Illumina reads were deposited in the Sequence Read Archive (accessions: SRX7566047-SRX7566056). Assembled contigs from different types of repetitive DNA elements are publicly available online (https://olomouc.ueb.cas.cz/en/content/dna-repeats) and in the Dryad digital repository (doi:https://doi.org/10.5061/dryad.xksn02vch).

Southern hybridization

Genomic DNA corresponding to 3 × 10⁶ copies of a 1Cx nuclear genome was digested with the HaeIII enzyme (New England Biolabs, Ipswich, MA, USA). DNA fragments were size-fractionated by electrophoresis in 1.2% agarose gel and then transferred onto Hybond™ N+ nylon membranes (GE Healthcare, Chicago, IL, USA). Probes were prepared by polymerase chain reaction (PCR) using F. pratensis genomic DNA as a template, biotin-labeled dUTP (Roche, Mannheim, Germany), and specific primers (Additional file 6: Table S3, Additional file 7: Fig. S4). Southern hybridization was performed at 68°C overnight, and hybridization signals were detected with a Chemiluminescent Nucleic Acid Module (Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's recommendations with 90% stringency. Hybridization signals were visualized with chemiluminescent substrate on Medical X-Ray Film Blue (Agfa Healthcare, Mortsel, Belgium).

ddPCR

Based on the assembled DNA contigs from the Fesreba retrotransposon, two restriction endonucleases with unique restriction sites in the retrotransposon (HpaI and HpaII) were identified and used for further analyses. Briefly, 3 μg genomic DNA was digested according to the manufacturer's recommendations (Bio-Rad Laboratories, Hercules, CA, USA) and then diluted 1000-fold to reach a starting concentration of 0.06 ng/μl. ddPCR was performed on a QX200 Droplet Digital PCR machine (Bio-Rad Laboratories) following the manufacturer's recommendations with EvaGreen Supermix (Bio-Rad Laboratories), template DNA, and specific primers for Fesreba (Additional file 6: Table S3). Three independent replicates were performed for every accession analyzed.
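A back-of-the-envelope version of a per-genome copy-number estimate from such an assay is sketched below; the ddPCR concentration is a hypothetical reading, and the calculation (target copies divided by the number of genome equivalents in the template) is a common way of expressing ddPCR output, not necessarily the exact procedure used here.

```python
# Hedged sketch: copies of a target per monoploid genome from a ddPCR reading.
target_copies_per_ul = 450.0     # hypothetical ddPCR concentration for the RT amplicon
template_ng_per_ul = 0.06        # template concentration used in the assay
genome_1Cx_pg = 2.7              # approximate monoploid genome size

genome_equivalents_per_ul = template_ng_per_ul * 1000.0 / genome_1Cx_pg  # pg DNA / pg per 1Cx
copies_per_1Cx = target_copies_per_ul / genome_equivalents_per_ul
print(round(genome_equivalents_per_ul, 1), round(copies_per_1Cx, 1))
```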
Cytogenetic mapping and immunostaining

Cytogenetic mapping of selected repeats was done by FISH on mitotic metaphase plates. Chromosome spreads were prepared according to Křivánková et al. [35], and immunostaining was performed according to Neumann et al. [73]. Root tips were collected in ice water for 28 h; washed in LB01 buffer [74]; fixed in 3.7% formaldehyde for 25 min; and digested using 2% cellulase, 2% pectinase, and 2% cytohelicase in 1× phosphate-buffered saline (PBS) for 90 min at 37°C. After the coverslip was removed, the preparations were washed in 1× PBS, then in PBS-Triton buffer (1× PBS, 0.5% Triton X-100, pH 7.4) for 25 min, and then again in 1× PBS. For incubation with the anti-grass CENH3 primary antibody [75], the slides were washed in PBS-Tween buffer (1× PBS, 0.1% Tween 20, pH 7.4) for 25 min and then incubated with the anti-grass CENH3 primary antibody (diluted 1:200 in PBS-Tween) overnight at 4°C. The next day, the slides were washed in 1× PBS, the CENH3 antibody was detected using an anti-rabbit Alexa Fluor 546 secondary antibody (Thermo Fisher Scientific/Invitrogen) diluted 1:250 in PBS-Tween buffer for 1 h at room temperature, and the slides were washed again in 1× PBS. Before FISH, immunofluorescent signals were stabilized with ethanol:acetic acid (3:1) fixative and 3.7% formaldehyde for 10 min at room temperature. FISH was performed after three washes in 1× PBS. Probes for FISH, derived from the RT and LTR regions of the Fesreba element, were labeled with digoxigenin-11-dUTP or biotin-16-dUTP (Roche Applied Science) using PCR with specific primers (Additional file 6: Table S3). FISH and detection of hybridization sites were performed according to Křivánková et al. [35]. The chromosomes were counterstained with 4′,6-diamidino-2-phenylindole (DAPI) and mounted in Vectashield (Vector Laboratories). The slides were examined with an Axio Imager.Z2 microscope (Carl Zeiss, Oberkochen, Germany) equipped with a Cool Cube 1 camera (Metasystems, Altlussheim, Germany), and images were prepared with ISIS 5.4.7 (Metasystems). Final adjustments were made to figures in Adobe Photoshop 12.0.

Additional file 1: Table S1. List of clusters containing putative tandem repeats identified in Festuca and Lolium.
Effect of the supramolecular interactions on the nanostructure of halloysite/biopolymer hybrids: a comprehensive study by SANS, fluorescence correlation spectroscopy and electric birefringence.

The structural properties of halloysite/biopolymer aqueous mixtures were investigated for the first time by combining different techniques, including small-angle neutron scattering (SANS), electric birefringence (EBR), and fluorescence correlation spectroscopy (FCS). Among the biopolymers, non-ionic hydroxypropylcellulose and polyelectrolytes (anionic alginate and cationic chitosan) were selected. On this basis, the specific supramolecular interactions were correlated with the structural behavior of the halloysite/biopolymer mixtures. SANS data were analyzed in order to investigate the influence of the biopolymer adsorption on the halloysite gyration radius. In addition, a morphological description of the biopolymer-coated halloysite nanotubes (HNTs) was obtained by the simulation of SANS curves. EBR experiments evidenced that the orientation dynamics of the nanotubes in the electric field is influenced by the specific interactions with the polymers. Namely, both variations of the polymer charge and/or wrapping mechanisms strongly affected the HNT alignment process and, consequently, the rotational mobility of the nanotubes. FCS measurements with fluorescently labeled biopolymers allowed us to study the aqueous dynamic behavior of ionic biopolymers after their adsorption onto the HNT surfaces. The combination of EBR and FCS results revealed that the adsorption process reduces the mobility in water of both components. These effects are strongly enhanced by HNT/polyelectrolyte electrostatic interactions and wrapping processes occurring in the halloysite/chitosan mixture. The attained findings can be useful for designing halloysite/polymer hybrids with controlled structural properties.

Introduction

In the last decades, supramolecular hybrids based on inorganic nanoparticles and organic macromolecules have attracted great interest as a consequence of their potential applications in several technological fields, including packaging,1,2 catalysis,3–8 pharmaceutics,8–14 and remediation.15–19 As evidenced in a recent review,20 the adsorption of sustainable polymers can confer functional properties to the nanoparticles depending on their peculiar nanoarchitecture. The modification of carbon21 and boron22 nanostructures has been reported, as has the supramolecular functionalization of halloysite with oppositely charged PNIPAAm polymers, such as amine-terminated PNIPAAm and a PNIPAAm/methacrylic acid copolymer.23 Electrostatic interactions were successfully exploited to fabricate pH-sensitive drug delivery hybrids based on LAPONITE® nanodisks and a poly(ethylene glycol)-poly(lactic acid) diblock copolymer.24 Efficient nanocarriers for drugs were prepared by the adsorption of alginate onto calcium carbonate25 and silica nanoparticles.26 Among inorganic nanoparticles, halloysite clay nanotubes (HNTs) are suitable for the preparation of smart nanohybrids because of their peculiar surface properties in terms of chemical composition and electrical charge.27 In particular, the chemical composition of the HNT internal surface is based on alumina, while the external shell is formed by silica.
As a consequence of the different acid-base equilibria of alumina and silica groups, the HNT inner and external surfaces are positively and negatively charged, respectively, within an extended pH interval; in addition, the charge conditions can be tuned by pH.28 Rheological measurements evidenced that HNT aqueous suspensions can form a lyotropic liquid crystalline phase depending on the pH conditions.29 The HNT dispersions exhibited stronger shear-thinning behavior upon the addition of microcrystalline cellulose.30 The HNT surfaces can be selectively modified by ionic molecules through electrostatic interactions.20 The adsorption of cationic alkyltrimethylammonium bromides onto the halloysite outer surface allows the fabrication of inorganic reverse micelles,31 which were used to synthesize alginate-based nanohydrogels within the HNT lumen.32 The attractions between anionic surfactants and the positively charged internal surface generated functionalized nanotubes with a hydrophobic cavity.33,34 As evidenced by SANS studies,33 the structural organization of the adsorbed surfactants affects the degree of hydrophobization of the modified halloysite lumen. The immobilization of several enzymes within the HNT lumen was controlled by pH conditions, which influence the electrostatic forces occurring between proteins and halloysite surfaces.35 The literature36 reports that the addition of ionic and non-ionic macromolecules represents an efficient tool to control the HNT colloidal stability in aqueous solvent. Halloysite aqueous dispersions were stabilized by the adsorption of non-ionic biopolymers (amylose37 and cellulose ethers36) because they wrap around the nanotubes, generating a steric barrier against HNT aggregation in water. The presence of anionic polymers (pectin36 and poly(styrene)sulfonate38,39) induced an increase of the HNT aqueous colloidal stability as a consequence of electrostatic interactions, which alter the halloysite surface charge. Specifically, the selective adsorption of anionic polymers induced a neutralization of the positively charged inner surface, generating an increase of the HNT negative charge.36,39 Specific interactions between biopolymers and halloysite surfaces affect the HNT efficacy as nanocontainers for drugs.36 In this regard, nanotubes with enhanced adsorption capacity and sustained-release performance towards ibuprofen were obtained by the self-assembly of chitosan and alginate onto halloysite surfaces.40 Biopolymer/HNT layered tablets for diclofenac were successfully prepared by exploiting the electrostatic interactions between polyelectrolytes and halloysite.41 However, despite the number of existing studies on HNT/polymer hybrid systems, many questions in this area remain open, especially with respect to their dispersion stability in aqueous solution, which is an important aspect for most of their potential applications. Accordingly, studies on the structure and dynamics of polymer/HNT hybrids in water could be crucial to understand the stabilization mechanism controlling the aqueous colloidal stability of the nanotubes. In this work, we investigated the structural behavior of aqueous mixtures based on halloysite and biopolymers with different charges, including non-ionic hydroxypropylcellulose and biopolyelectrolytes (anionic alginate and cationic chitosan), in order to explore the influence of the specific supramolecular interactions on the nanoarchitecture of the HNT/biopolymer hybrids.
These investigations were conducted by using a comprehensive approach based on combining different methods (small-angle neutron scattering, fluorescence correlation spectroscopy, and electric birefringence) not previously employed together for the characterization of polymer/HNT hybrids, with the aim of gaining an improved understanding of their interactions and structures and of the resulting stability in aqueous solutions.

Chemicals

Hydroxypropyl cellulose (HPC; average molecular weight = 80 kg mol⁻¹), sodium alginate (average molecular weight = 90 kg mol⁻¹), chitosan (deacetylation degree = 75-85%, average molecular weight = 120 kg mol⁻¹), fluorescein isothiocyanate (FITC), 5-[(4,6-dichlorotriazin-2-yl)amino]fluorescein (DTAF), phosphate-buffered saline (PBS), sodium bicarbonate (NaHCO₃), sodium hydroxide (NaOH), and glacial acetic acid are Sigma products. Halloysite (HNT, purity ≥99.5%) from Matauri Bay was provided by Imerys. SEM images of HNTs are presented in the ESI.† All the chemicals were used without further purification. Water was of Millipore grade. D₂O was purchased from Eurisotop at 99.9% isotopic purity.

Preparation of HNT/biopolymer dispersions

HNT/biopolymer dispersions in aqueous solvents were prepared as reported elsewhere.36 Firstly, stable aqueous solutions of each polymer were obtained by magnetic stirring at 25 °C for 3 h. Chitosan was solubilized in acidic solvent (pH = 4.5) because of its low solubility under neutral conditions. The pH of the aqueous solvent was adjusted by adding glacial acetic acid. The final concentration of acetic acid was set at 5 g dm⁻³ (0.083 mol dm⁻³). Then, HNT/biopolymer dispersions with various compositions were obtained by direct addition of appropriate amounts of halloysite powder into the polymer solutions. The HNT/biopolymer mixtures were homogenized by ultrasonication for 10 min and subsequent magnetic stirring at 25 °C for 24 h.

Small-angle neutron scattering (SANS)

SANS measurements were carried out at Institut Laue-Langevin (ILL), Grenoble (France), on the instrument D11.42 The experiments were conducted at four different configurations with sample-to-detector (and collimation in parentheses) distances of 1.5 m (8 m), 8 m (8 m), and 34 m (34 m), using a wavelength λ of 6.0 Å (fwhm of 10%), and 39 m (40.5 m) with λ = 13.0 Å and fwhm of 10%. Based on these experimental conditions, the investigated scattering vector (q = 4π sin(θ/2)/λ, θ being the scattering angle) ranged between 0.007 and 4.20 nm⁻¹. The two-dimensional patterns were corrected for the detector efficiency using the scattering of a 1 mm H₂O sample and for the dark-current signal; the contribution from the empty cell was subtracted; and finally, the patterns were radially averaged, as all samples scattered isotropically. Data reduction was performed with LAMP,43 while SASfit 0.94.8 software44 was used for the analysis of SANS curves on an absolute scale. The experiments were performed in full-contrast conditions (D₂O as solvent) on halloysite/biopolymer dispersions. Similar to our previous work on pristine halloysites,45 the HNT concentration was set at 80 g dm⁻³. Raw and reduced SANS data are available free of charge at doi:10.5291/ILL-DATA.9-12-473.42

Electric birefringence (EBR)

EBR experiments were conducted using rectangular pulses of the electric field (1.25 × 10⁵ V m⁻¹) generated by a Cober high-power pulse generator (Model 606). The pulse length was fixed at 2.5 ms.
The quartz cuvettes were illuminated with a He/Ne laser (633 nm), and the signal was detected by a photomultiplier and recorded on a Datalab transient recorder, DL 920. EBR measurements were performed on halloysite/biopolymer aqueous dispersions with various compositions. The HNT concentration was fixed at 0.15 g dm⁻³, while the halloysite/biopolymer mass ratio was systematically changed from 0 to 0.95. It should be noted that EBR experiments for halloysite/chitosan mixtures were conducted in acidic solvent (pH = 4.5, concentration of acetic acid of 5 g dm⁻³). All mixtures were thermostated at 25 °C.

Fluorescence correlation spectroscopy (FCS)

FCS experiments were conducted by using a Leica TCS SMD FCS system with hardware and software for FCS from PicoQuant (Berlin, Germany) integrated into a high-end confocal system, a Leica TCS SP5 II instrument. The calibration of the confocal volume (0.135 fL) was performed by measuring the characteristic time of Rhodamine 6G (5 × 10⁻⁹ mol dm⁻³) in water, with a known diffusion coefficient of 4.0 × 10⁻¹⁰ m² s⁻¹.46 An argon laser (λ = 488 nm) was employed for the excitation of the fluorescent probes. Alginate was fluorescently labeled by using 5-[(4,6-dichlorotriazin-2-yl)amino]fluorescein (DTAF),47,48 while fluorescein isothiocyanate (FITC) was selected as the fluorescent probe for chitosan.49 Details on the preparation of the fluorescently labeled polymers (DTAF/alginate and FITC/chitosan) are reported in the ESI.† FCS measurements were conducted on the HNT/labeled biopolymer aqueous mixtures and on the aqueous solutions of the pure labeled biopolymers. The concentrations of the labeled polymers were set at 0.05 and 0.1 mg dm⁻³ for FITC/chitosan and DTAF/alginate, respectively. The overall concentrations of the polymers were fixed at 5 and 10 g dm⁻³ for chitosan and alginate, respectively. We selected 50 and 100 g dm⁻³ as halloysite concentrations for chitosan- and alginate-based mixtures, respectively. On this basis, the mass ratio between HNT and the labeled biopolymer was fixed at 1 : 10⁻⁴. On the other hand, the HNT/biopolymer mass ratio was 10 : 1.

Scanning electron microscopy (SEM)

The surface morphology of halloysite nanotubes was investigated using an ESEM FEI QUANTA 200F microscope. Before each experiment, the surface of the sample was coated with gold in argon by means of an Edwards Sputter Coater S150A to avoid charging under the electron beam. The measurements were carried out in high-vacuum mode (<6 × 10⁻⁴ Pa) with secondary-electron detection; the energy of the beam was 25 kV and the working distance was 10 mm.

Results and discussion

SANS data analysis: structural characterization of HNT/biopolymer hybrids

Fig. 1 shows the scattering curves in full contrast for HNT and HNT/biopolymer dispersions with a biopolymer/HNT mass ratio of 0.1. As observed for pure halloysite from different sources,45 SANS curves of the HNT/biopolymer mixtures did not evidence any oscillations, in agreement with the large polydispersity of the HNT radii. Similar observations were detected for HNT/surfactant systems.33 According to the literature,50,51 two different Guinier regions can be considered for elongated particles, such as rigid rods.
Within the low-q Guinier interval, the scattered intensity of elongated objects varies as I(q) ≅ I(0) exp(−q²R g ²/3) (1), where I(0) is the scattering intensity in the limit q → 0, while R g is the gyration radius of the whole particle, which is related to the length (L) and the radius (R) of the scattering rod by the relation R g ² = L²/12 + R²/2 (2). On this basis, we calculated that the theoretical R g of halloysite nanotubes is 294 nm by taking into account their average sizes in terms of length (1000 nm) and external radius (80 nm).45 According to eqn (1), ln(I(q)) vs. q² plots describe linear trends, allowing us to determine R g and I(0) from their slopes and intercepts, respectively. SANS data at small angles (0.007 nm⁻¹ ≤ q ≤ 0.009 nm⁻¹) were analyzed by eqn (1). As displayed in Fig. 2, ln(I(q)) was linearly dependent on q² for all of the investigated dispersions. Table 1 collects the R g and I(0) data obtained from the Guinier analysis of the SANS curves. In addition, the Guinier analysis for elongated particles can be conducted in the intermediate q range, where the dependence of the scattered intensity on the cross-sectional gyration radius is expressed as q·I(q) ≅ I*(0) exp(−q²R g ²/2) (3), where I*(0) is a prefactor; here R g ² = R²/2, and thus we can state that the R g determined from the intermediate q region is related only to the cross section of the rod.52 Accordingly, we calculated R g = 53 nm for cylinders with R = 80 nm, which is the average outer radius of halloysite nanotubes.45 By taking into account both the outer and the inner radii of halloysite (80 and 15 nm, respectively),45 we estimated R g = 57 nm for the cross section of the hollow nanotube. It should be noted that eqn (3) is valid for q·R g ≈ 1, which indicates that q values between 0.015 and 0.02 nm⁻¹ represent the proper intermediate q range for scattering from halloysite. As expected from eqn (3), ln(q·I(q)) vs. q² plots within the intermediate q range (Fig. 2) were successfully fitted by linear equations that provided the corresponding R g and I*(0) values (Table 1). From the Guinier linear analysis for elongated objects, we find that the halloysite gyration radii are hardly affected by the adsorption of both anionic alginate and cationic chitosan (Table 1). Regarding I(0), the HNT/biopolymer composites showed slightly larger values with respect to that of pure HNT (Table 1) due to the polymer adsorption onto the halloysite surfaces. As reported for silica/polyelectrolyte systems,53 the actual amount of biopolymer adsorbed onto the nanotubes can be calculated from the I(0) values, which are related to the number density (1N) and the volume (V) of the nanoparticles by eqn (4), where S(0) is the structure factor at q → 0, whereas ΔSLD is the difference between the scattering length density of the nanoparticle (SLD nanoparticle) and that of the solvent (SLD solvent). 1N is the number density of HNT and was determined from the concentration and the HNT geometry, considering the corresponding polydispersity. Based on eqn (4), we calculated I(0) = 27.6 × 10⁵ cm⁻¹ for the pure HNT solution, which is close to the experimental result (Table 1). Details on the calculation of the number density (1N) and the volume (V) of HNT are presented in the ESI.†
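A compact numerical check of the Guinier estimates discussed above can be written as follows; the data points are synthetic (generated from the Guinier law itself with the quoted I(0) and R g), so the fit merely illustrates the slope-to-R g conversion of eqn (1) and the rod relation of eqn (2).

```python
import numpy as np

# Synthetic low-q "data" built from the Guinier law, eqn (1): I(q) = I(0) exp(-q^2 Rg^2 / 3)
q = np.linspace(0.007, 0.009, 5)                 # nm^-1, low-q Guinier window
I = 27.6e5 * np.exp(-(q ** 2) * 294.0 ** 2 / 3)  # cm^-1

slope, intercept = np.polyfit(q ** 2, np.log(I), 1)
Rg_fit = np.sqrt(-3.0 * slope)                   # gyration radius from the slope
I0_fit = np.exp(intercept)
print(Rg_fit, I0_fit)                            # ~294 nm, ~2.76e6 cm^-1

# Geometric cross-check for a rigid rod, eqn (2): Rg^2 = L^2/12 + R^2/2
L, R = 1000.0, 80.0                              # nm, average HNT length and outer radius
print(np.sqrt(L ** 2 / 12 + R ** 2 / 2))         # ~294 nm
```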
† As concerns the biopolymer coated nanotubes, the volume (V HNT/Biop ) and the scattering length density (SLD HNT/Biop ) of the hybrid nanoparticles can be expressed as As evidenced in Table 2, we observed larger Z values for the ionic biopolymers compared to that of HPC indicating that the electrostatic interactions play a crucial role in the formation of complex systems based on halloysite. For chitosan and alginate, very similar values for the surface coverage are observed, being slightly higher for cationic chitosan, which preferentially interacts with the HNT outer surface, wrapping the nanotubes. On the other hand, anionic alginate should mostly be confined within the positively charged HNT cavity. It might also be noted that the obtained Z values for the ionic biopolymers correspond to a ''dry'' (neglecting hydration water, which, of course, will be there at the real surface layer) polymer layer thickness of B1.6-1.8 nm, which is a realistic value for dense polymer coverage. Similar to our previous work on HNT/surfactant hybrids, 33 SANS curves of HNT/biopolymer mixtures were simulated by using a hollow cylinder with a uniform SLD profile (Fig. 3). According to this model, the scattering intensity can be expressed as I(q) = 1 N Â P(q, R i , DR, s, L, SLD HNT/Biop , SLD s ) + I bck (8) where the form factor is P(q, R i , DR, s, L, SLD HNT/Biop , SLD s ), and R i , DR and L are the geometrical parameters (internal radius, shell thickness and length, respectively), I bck is the incoherent scattered background (evaluated by Porod analysis applied to the larger q range), and SLD HNT/Biop and SLD s are the scattering length densities of the nanotube shell and solvent, respectively. Namely, we assumed that the biopolymer is included in the shell of the hollow cylinder, while the core is based on pure D 2 O. The same assumption was considered for the SANS data analysis of halloysite nanotubes modified with anionic surfactants, which are selectively adsorbed within their cavity. 33 SLD HNT/Biop was estimated on the basis of the composition of the biopolymer coated HNT determined from the Guinier analysis of SANS data in the low q range ( Table 2). The SLD HNT/Biop values used for the simulation of the SANS curves of the HNT/biopolymer mixtures are presented in the ESI. † Based on the Schulz-Zimm distribution, 54 a polydispersity (s) defined as (hDR 2 i/hDRi 2 ) À 1 was considered for the shell thickness. As reported for the simulation of pure HNT from the Matauri Bay deposit, 45 R i and L were fixed at 15 and 1000 nm, respectively. Electric birefringence (EBR): the effect of biopolymer adsorption on the HNT rotational mobility The analysis of EBR results allowed us to investigate the influence of the biopolymer adsorption on the rotational mobility of halloysite nanotubes. As an example, Fig. 4a displays the relaxation of the birefringence signal for the HNT/HPC aqueous mixture (mass ratio = 0.1). It can be noted that the electric field pulse was chosen to be so short that no saturation of the signal took place, but only an initial orientation was imposed. As observed for the pure HNT 45 and for HNT/surfactant hybrids, 33 a transient birefringence was induced by the electric field applied in the form of a rectangular pulse. This phenomenon is related to the HNT polarizability that causes a partial alignment of the nanotubes. 
Once the voltage pulse is terminated, the nanotubes are free to reorient and, consequently, the magnitude of the birefringence (Dn) exponentially decreases with time following the equation Dn = Dn 0 exp(Àt/t) (9) where t is the characteristic relaxation time of the nanotubes, while Dn 0 represents the maximum of the birefringence signal. As an example, the fitting analysis of Dn vs. t decay for the HNT/ HPC aqueous mixture (mass ratio = 0.1) is reported in the ESI †. Fig. 5 shows the dependence of t on the biopolymer/HNT mass ratio (R w(Biop/HNT) ) of the mixtures. The corresponding rotational diffusion coefficient (D rot ) reported in the ESI † was calculated as D rot = (6t) À1 . As a general result, the presence of the biopolymer induced a t enhancement indicating that the rotational mobility of the nanotubes was reduced. This effect can be attributed to the biopolymer adsorption onto the halloysite surfaces. In particular, t vs. R w(Biop/HNT) plots exhibited increasing trends for R w(Biop/HNT) r 0.3, while further addition of the biopolymer did not affect the relaxation time of the nanotubes. This indicates that here the attachment of the biopolymer is effectively saturated. In this regard, it should be noted that the reduction of the HNT rotational mobility cannot be simply attributed to the increase of mass upon adsorption because the amount of the biopolymer bound to the halloysite surfaces is less than 2% for all the investigated systems (Table 2). It should be noted that the presence of biopolymer causes a slow enhancement of the viscosity, which could contribute to the reduction of the HNT rotational mobility. However, this effect is negligible because the biopolymer concentration range is within a dilute regime (up to ca. 0.12 wt%). As an example, a relative D rot variation of only 7% is expected for the aqueous HPC/HNT mixture (with concentrations of 1 and 0.1 wt% for halloysite and polymer, respectively), because its intrinsic viscosity is 1.078, as reported in our previous paper. 36 In addition, the influence of the polymer concentration on D rot would be more important if the solvent viscosity significantly affected the HNT rotational mobility. Accordingly, overlapping and/or bridging of the polymer-coated nanotubes could be expected for the investigated mixtures. Compared to non-ionic HPC, ionic biopolymers induced stronger effects on the HNT rotational mobility due to the electrostatic interactions. The selective Fig. 3 (a) SANS intensity as a function of q, the magnitude of the scattering vector, after background subtraction for HNT/alginate, HNT/chitosan and HNT/HPC dispersions in D 2 O. The best simulation results (red points) were obtained according to a hollow cylinder as model (eqn (8)) with fixed length (1000 nm) and inner radius (15 nm). (b) Schulz-Zimm distribution for the external radius was centered at 65 nm. The incoherent scattered background was evaluated by Porod analysis applied to the larger q range. Fig. 4 Transient electric-field-induced birefringence signal for aqueous dispersion with a HPC/HNT mass ratio of 0.1. The HNT concentration was 0.15 g dm À3 . The experiments were conducted using rectangular pulses (the applied electric field and the pulse length were fixed at 1.25 Â 10 5 V m À1 and 2.5 ms, respectively). The dashed line represents the time point where the electric field was switched off. interactions of the polyelectrolytes with the charged HNT surfaces affected their influence on the rotational diffusion of the nanotubes. 
The effect of chitosan on the HNT rotational mobility is stronger with respect to that of alginate, which is mostly confined to the positively charged clay lumen. These results could indicate that chitosan is bound at the outside surface interconnecting different HNTs, thus substantially slowing down their rotation. In addition, D rot values allowed us to determine the length of the nanotubes in the biopolymer/halloysite mixtures by using Broersma theory, which is valid for rigid rods with length/ diameter ratios between 2 and 30. This approach was successfully employed for halloysite nanotubes modified with anionic surfactants. 33 Details on the calculation of the HNT length using Broersma theory are presented in the ESI. † We estimated that the HNT length in water is 809 AE 7 nm. As a general result, the presence of biopolymers induced an increase of the halloysite length (see ESI †). In particular, the chitosan adsorption generated the strongest effect on the HNT length, which reached the largest value of 1228 AE 11 nm for the biopolymer/ HNT mass ratio of 0.67. According to the EBR results, we estimated the overlap concentration for HNTs by assuming a simple cubic model where the contact distance corresponds to the length of the nanotubes. Specifically, the critical volume fraction (f*) for the HNT overlapping was calculated as pR 2 /L 2 . As concerns pure HNTs, we determined that the overlap concentration is ca. 500 times higher with respect to the concentration of the investigated dispersion, confirming that the EBR results reflect the free rotation of the nanotubes. We observed that the overlap concentrations for biopolymer-coated nanotubes are ca. 2 orders higher with respect to the concentration of the investigated mixtures. Therefore, the influence of the biopolymer adsorption on the HNT rotational mobility can be explored by the EBR data. Fluorescence correlation spectroscopy (FCS): the influence of the adsorption onto the HNT surfaces on biopolymer dynamic behavior in aqueous medium The influence of the HNT/biopolymer electrostatic interactions on the dynamic behaviour in water of the biopolyelectrolytes was explored by FCS studies. Biopolymers were fluorescently labelled with proper probes, such as FITC and DTAF for chitosan and alginate, respectively. Fig. 6 shows the effect of the HNT addition on the correlation function of the FITC/chitosan aqueous solution. According to the adsorption process, a significant reduction of the biopolymer mobility was detected in the presence of HNT. As evidenced by Fig. 6, the correlation functions of both FITC/chitosan and HNT/FITC/chitosan systems were successfully described by the stretched model expressed by the following equation 55 where T is the fraction of the molecules in the triplet state, a is the stretched parameter and t T is their relaxation time. S is given by the anisotropy (ratio of the vertical and lateral extensions), while t c and G(0) are the intercept and the decay time, respectively. As reported in the literature, 56 the contribution of the triplet state cannot be neglected for DTAF molecules, while we did so for FITC. It should be noted that the pure diffusion model was successfully employed for the FCS data analysis of HNT/surfactant hybrids containing Nile red. 33 Table 3 reports the fitting parameters (G(0) and t c ) for FITC/chitosan and HNT/FITC/chitosan (a values were 0.770 AE 0.011 and 0.904 AE 0.015, respectively). 
In addition, we calculated the diffusion coefficient (D) as where o 0 is the lateral extension (590 nm) of the confocal volume. The decay of FITC/chitosan reflects the dynamic behavior of the polymer, confirming that the fluorescent probe was successfully attached to the chitosan molecule. It is noteworthy that the diffusion coefficient of the HNT/FITC/chitosan mixture is much 6 Normalized FCS decay curves for FITC/chitosan and HNT/FITC/ chitosan mixtures in water. The concentration of the labeled chitosan (FITC/chitosan) was set at 0.05 mg dm À3 , while the overall concentration of chitosan was fixed at 5 g dm À3 . Accordingly, the labelled chitosan/HNT mass ratio was 10 À4 and the chitosan/HNT mass ratio was 0.1. Red lines represent the fitting according to the pure diffusion model (eqn (10)). The FCS curve for HNT/FITC/chitosan was arbitrarily shifted along the y-axis by adding a constant 0.2 to the experimental data. larger (ca. 50 times) with respect to that of the pure labeled polymer, which proves the strong binding of chitosan. Table 3 compares the corresponding hydrodynamic radius (R h ) calculated by using the Stoke-Einstein equation. In contrast, the presence of HNT led only to a small reduction of mobility of DTAF/alginate (Fig. 7), which indicates here only a rather weak extent of binding. The ESI † reports the fitting parameters obtained by the fitting through the triplet state model 56 of FCS curves for DTAF/ alginate and HNT/DTAF/alginate suspensions. The presence of HNT induced a reduction by a factor of 2 for the diffusion coefficient of the DTAF/alginate. Based on the FCS results, we can assert that dynamic behavior in water of both biopolyelectrolytes decreases as a consequence of their adsorption onto HNT surfaces. This effect is much stronger for cationic chitosan, which is wrapped onto the nanotubes. We can assume that the D values for HNT/labeled polymers are given by two contributions: (1) the fast diffusion process (D fast ), which is related to the unbound polymer; (2) the slow diffusion (D slow) due to the diffusion of the polymer adsorbed onto HNT surfaces. Based on the SANS data analysis (Table 2), only ca. 2 wt% of both chitosan and alginate are bound onto HNT. Therefore, the fast process should be predominant in the experimental diffusion coefficient. This consideration is valid for HNT/DTAF/alginate, while the much slower diffusion of HNT/FITC/chitosan could indicate that the biopolymer becomes immobilized by bridging different HNTs. In conclusion, FCS findings agree with the EBR data, which evidenced that chitosan adsorption induced the most significant reduction of the HNT rotational diffusion coefficient. Based on these results, we can argue that bridging between chitosan-coated nanotubes can be hypothesized. Conclusions We investigated the structural behavior of aqueous mixtures composed of halloysite nanotubes (HNTs) and differently charged biopolymers, such as cationic chitosan, anionic alginate and non-ionic hydroxypropylcellulose. The simulation of SANS curves by a hollow cylinder model evidenced that the biopolymer coated nanotubes possesses similar geometrical features (in terms of sizes and polydispersity) as those previously observed for pure HNT. SANS data at low and intermediate q were successfully analyzed by the Guinier approach for rod-like objects. In addition, the SANS analysis showed that charged biopolymers exhibit larger adsorption efficiencies that can be attributed to the stronger electrostatic interactions. 
In agreement, EBR results showed that the decrease of the HNT rotational mobility is more affected and reduced for halloysite/ionic biopolymer mixtures. In this respect, chitosan caused a somewhat stronger alteration of the halloysite rotational mobility because of the HNT wrapping driven by the attractive forces between the positively charged biopolymer and the halloysite external surface, which is positively charged. The analysis of FCS curves evidenced that the adsorption process decreases the aqueous diffusion coefficients of both polyelectrolytes. The stronger effect is observed for the HNT/chitosan mixture, which showed a reduction by ca. 50% compared to that of the pure biopolymer as a consequence of the wrapping process. In conclusion, a systematic correlation between the structure of the HNT/biopolymer hybrid and the structure of the biopolymer was demonstrated by the investigation of aqueous mixtures by applying a comprehensive set of characterisation techniques (SANS, EBR, and FCS) to these composites. Conflicts of interest The authors have no conflicts of interest to declare. 7 Normalized FCS decay curves for DTAF/alginate and HNT/DTAF/ alginate mixtures in water. The concentration of DTAF/alginate was set at 0.1 mg dm À3 , while the overall concentration of alginate was fixed at 10 g dm À3 . Accordingly, the labelled alginate/HNT mass ratio was 10 À4 and the mass ratio between the overall alginate and HNT was 0.1. Red lines represent the fitting according to the triplet state model. 56
2020-03-19T10:39:46.715Z
2020-04-06T00:00:00.000
{ "year": 2020, "sha1": "778b1a692f2fa1729da6b05a72ff42229b8ef6ff", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1039/d0cp01076f", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "564b4a2ba52a83e24bc3a35318716dabb5d5ef52", "s2fieldsofstudy": [ "Materials Science", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
119624526
pes2o/s2orc
v3-fos-license
Variance of the Number of Zeroes of Shift-Invariant Gaussian Analytic Functions Following Wiener, we consider the zeroes of Gaussian analytic functions in a strip in the complex plane, with translation-invariant distribution. We show that the variance of the number of zeroes in a long horizontal rectangle $[0,T]\times [a,b]$ is asymptotically between $cT$ and $CT^2$, with positive constants $c$ and $C$. We also supply with conditions (in terms of the spectral measure) under which the variance asymptotically grows linearly, as a quadratic function of $T$, or has intermediate growth. Introduction Following Wiener, we consider Gaussian Analytic Functions in a strip with shift-invariant distribution. In a previous work [5] we gave a law of large numbers for the zeroes in a long horizontal rectangle [0, T ] × [a, b] (Theorem A below), which extends a result of Wiener [11, chapter X]. Here we go further to study the variance of the number of zeroes in such a rectangle. In Theorems 1 and 2 we show that this number is asymptotically between cT and CT 2 with positive constants c and C, and give conditions (in terms of the spectral measure) for the asymptotics to be exactly linear or quadratic in T . In theorem 3 we give some conditions for intermediate variance. We begin with basic definitions. 1.1. Definitions. A Gaussian Analytic Function (GAF) in the strip D = D ∆ = {z : |Imz| < ∆} is a random variable taking values in the space of analytic functions on D, so that for every n ∈ N and every z 1 , . . . , z n ∈ D the vector (f (z 1 ), . . . , f (z n )) has a mean zero complex Gaussian distribution. A GAF in D is called stationary, if it is distribution-invariant with respect to all horizontal shifts, i.e., for any t ∈ R, any n ∈ N, and any z 1 , . . . , z n ∈ D, the random n-tuples f (z 1 ), . . . , f (z n ) and f (z 1 + t), . . . , f (z n + t) have the same distribution. For a stationary GAF, the covariance kernel K(z, w) = E{f (z)f (w)} may be written as For t ∈ R, the function r(t) is positive-definite and continuous, and so it is the Fourier transform of some positive measure ρ on the real line: Moreover, since r(t) has an analytic continuation to the strip D 2∆ , ρ must have a finite exponential moment: The measure ρ is called the spectral measure of f . A stationary GAF is degenerate if its spectral measure consists of exactly one atom. For a holomorphic function f in a domain D, we denote by Z f the zero-set of f (counted with multiplicities), and by n f the zero-counting measure, i.e., 1.2. Results. First, we present a previous result which will serve as our starting point. This result can be viewed as a "law of large numbers" for the zeroes of stationary functions. Then: (i) Almost surely, the measures ν f,T converge weakly and on every interval to a measure ν f when T → ∞. (ii) The measure ν f is not random (i.e. var ν f = 0) if and only if the spectral measure ρ f has no atoms. In the above and in what follows, the term "density" means the Radon-Nikodym derivative w.r.t. the Lebegue measure on R. A natural question is, how big are the fluctuations of the number of zeroes in a long rectangle? More rigorously, define where for a random variable X the variance is defined by We are interested at the asymptotic behavior of V a,b f (T ) as T approaches infinity. The next two theorems show that V a,b f (T ) is asymptotically bounded between cT and CT 2 for some c, C > 0, and give conditions under which each of the bounds is achieved. 
We begin by stating the upper bound result, a relatively easy consequence of Theorem A. Theorem 1. Let f be a non-degenerate stationary GAF in a strip D ∆ . Then for all −∆ < a < b < ∆ the limit exists. This limit is positive if and only if the spectral measure of f has a non-zero discrete component. The lower bound result, which is our main result, is stated in the following theorem. The next theorem deals with conditions under which L 1 (a, b) is infinite. Theorem 3. Let f be a non-degenerate stationary GAF in a strip D ∆ . is a closed interval such that for every y ∈ J, the function λ → (1 + λ 2 )e 2π·2yλ p(λ) does not belong to L 2 (R). Then for every α ∈ J the set {β ∈ J : L 1 (α, β) < ∞} is at most finite. (ii) The limit L 1 (a, b) is infinite for particular a, b if either ρ does not have density, or, if it has density p and for any two points λ 1 , λ 2 ∈ R there exists intervals I 1 , I 2 such that I j contains λ j (j = 1, 2) and for at least one of the values y = a or y = b. There is a gap between the conditions given for linear variance (in Theorem 2) and those for super-linear variance (in Theorem 3). For instance, the theorems do not decide about all the suitable pairs (a, b) in case the spectral measure has density 1 √ |λ| 1I [−1,1] (λ). On the other hand, we are ensured to have super-linear variance in case ρ has a singular part. If ρ has density p ∈ L 1 (R) which is bounded on any compact set, then (1 + λ 2 )p(λ) ∈ L 2 (R) implies asymptotically linear variance, and (1 + λ)p(λ) ∈ L 2 (R) implies asymptotically super-linear variance. Remark 1.3. Minor changes to the developments in this paper may be made in order to prove analogous results regarding the increment of the argument of a stationary GAF f along a horizontal line. Namely, let V a,a (T ) denote the variance of the increment of the argument of f along the line [0, T ] × {a} (for some −∆ < a < ∆). Then: exists, belongs to [0, ∞), and is positive if and only if the spectral measure contains an atom. In fact, the first item is essentially proved in this paper (Claim 9 below). The rest of the paper is organized as follows: Theorem 1 concerning quadratic growth of variance is proved in Section 2, and is mainly a consequence of Theorem A. For theorems 2 and 3 we develop in Section 3 an asymptotic formula for V a,b f (T )/T (Proposition 3.1 below). Then we prove Theorem 2 by analyzing this formula and using tools from harmonic analysis. We end by proving Theorem 3 in Section 5. 1.3. Discussion. We mention here some related results in the literature (though they do not seem to apply directly to our case). The question for real processes (not necessarily real-analytic) was treated by many authors. An asymptotic formula for the variance was given in Cramer and Leadbetter [3], but the rate of growth is not apparent from it. Cuzick [4] proved a Central Limit Theorem (CLT) for the number of zeroes, whose main condition is linear growth of the variance. Later, Slud [13], using stochastic integration methods he developed earlier with Chambers [2], proved that in case the spectral measure has density which is in L 2 (R), this condition is satisfied. It is interesting to note that the condition for linear variance in the present theorem (condition (2)) is the main assumption in the work by Slud for real (non-analytic) processes. More recently, Granville and Wigman [7] studied the number of zeroes of a Gaussian trigonometric polynomial of large degree N in the interval [0, 2π], and showed the variance of this number is linear in N . 
This work was extended to other level-lines by Azaïs and León [1]. Sodin and Tsirelson [14] and Nazarov and Sodin [10] studied fluctuations of the number of zeroes of a planar GAF (a special model which is invariant to plane isometries), proving linear growth of variance and a CLT for the zeroes in large balls (as the radius approaches infinity). 1.4. Acknowledgements. I thank Mikhail Sodin for his advice and encouragement during all stages of this work. I also thank Boris Tsirelson for his interest and suggestions which stimulated the research, and Boris Hanin for a useful discussion. I am grateful to Alon Nishry and Igor Wigman for reading the original draft carefully and pointing out some errors. Theorem 1: Quadratic Variance Recall the notation where Z is some random variable and the limit is in the almost sure sense. Moreover, var Z > 0 if and only if the spectral measure of f contains an atom. Clearly var lim Theorem 1 would be proved if we could change the limit with the variance on the left-hand side. By dominant convergence, it is enough to find an integrable majorant for the tails of To this end we refer to an Offord-type estimate (Proposition 5.1 in the paper [5]), which provides exponential bounds on tails of X T : Proposition 2.1. Let f be a stationary GAF in some horizontal strip, then using the notation above we have We comment that the statement and proof provided in [5] are for a slightly different family of random functions, so-called symmetric GAFs; nonetheless the result for GAFs requires but mild modifications, and is generally easier. We may then conclude that Since both h(s) and h( √ s) are integrable on R, we have the desired majorants. Exchanging limit and variance then yields: and the result is proved. An Asymptotic Formula for the Variance This section is devoted to the derivation of a formula for the variance where T is large. We prove the following: Proposition 3.1. Let f be a stationary GAF in D ∆ with spectral measure ρ. Suppose ρ has no discrete component. Then for any −∆ < a < b < ∆, and any T ∈ R, the series v a,b (T ) = 1 4π 2 converges, and Here ρ * k is the k-fold convolution of ρ, sinc(x) = sin x x , and where l y k (λ) = ∂ ∂y where △ T i arg f is the increment of the argument of f along the segment I i (a.s. f has no zeroes on the boundary of the rectangle R T 1 ). Then, by the argument principle, Our first claim is that asymptotically when T is large, the terms involving the (short) vertical segments are negligible in this sum. 1 To see this, first notice that the distribution of n f (Ij) for j = 2, 4 (the number of zeroes in a "short" vertical segments) does not depend on T . If it were not a.s. zero, then For j = 1, 3, recall that since there are no atoms in the spectral measure, f is ergodic with respect to horizontal shifts (this is Fomin-Grenander-Maruyama Theorem, see explanation and references within [5]). This implies that each horizontal line (such as La = R × {a}) either a.s. contains a zero or a.s. contains no zeroes. If the former holds, then also En f ([0, 1]×{a}) > 0, and the measure ν f from Theorem A has an atom at a -contradiction to part (iii) of that Theorem. Proof. We demonstrate how to bound one of the terms in (4) involving a "short" vertical segment (corresponding, say, to i = 2). We have by stationarity: We now give an alternative formulation of Claim 1. 
Using Cauchy-Riemann equations we write: and so we arrive at: Later on we shall prove that lim T →∞ √ C a,a (T ) T = 0 if no atom is present in the spectral measure (Claim 9 below). This may be viewed as a onedimensional counterpart of Theorem 1 (though the methods of proof are different). In the mean time, we turn to find an expression for C a,b (T ), which will be refined through most of the section. 3.2. Passing to covariance of logarithms. Our first step is a technical change of order of operations. We comment that the right-hand-side (RHS) of the equation contains a mixed partial derivative, so for C a,a (T ) the computation is as follows: take the prescribed mixed derivative (as if a = b) and then substitute b = a. Proof. Following the definition of C a,b (T ), we should check that By the dominated convergence theorem and Fubini's theorem, we will be done if we show the following two statements: Let us first explain item (I): Let z = t + ia be fixed. The vector (f (z), f ′ (z)) is jointly Gaussian, in fact we may write where ρ is a number and Y (z) is a Gaussian random variable independent of f ′ (z). Therefore, that is, f (z) conditioned on the value of f ′ (z) is Gaussian, with mean depending on f ′ (z) and variance not depending on it (equal to σ 2 = var (Y (z))). The following is a straightforward computation. Using this lemma, we have where the notation EE(X|Y ) for random variables X, Y means first taking the conditional expectation of X given Y (which results in a function of Y ), then taking expectation of this function. Now (I) follows easily. Similarly, for (II), notice that for any points z, w ∈ D ∆ , where Σ is a given covariance matrix, not depending on the values of f ′ (z), f ′ (w). If z = w, then rank(Σ) = 2. Then there exists C > 0 (depending continuously on the entries of Σ) such that for any (µ 1 , µ 2 ) ∈ C 2 , The lemma concludes the proof of (II), since whenever z = w (which is of full measure in the integration domain), we have: where C(z, w) is the constant derived from the lemma. It is also asserted that C(z, w) is continuous in z, w, therefore the integral in (II) is finite. We include a proof of the last lemma for completion. Denote by g(u, v) (u, v ∈ C) the density function of N (0, Σ). We have 3.3. Expansion in terms of the original covariance function. The covariance between logarithms of two Gaussians can be expressed as a power series, using the following claim. Claim 3. Let ξ * , η * ∼ N C (0, 1) be standard complex Gaussian random variables. Then A proof is included in the book [8, Lemma 3.5.2], or in a slightly different language in the paper by Nazarov and Sodin [10,Lemma 2.2]. For any centered complex Gaussian random variable ξ ∼ N C (0, σ 2 ) we may write ξ = σξ * where ξ * ∼ N C (0, 1), and thus get Therefore Claim 3 implies that for any centered complex Gaussians ξ and η we have: We now apply this formula for ξ = f (t + ia) and η = f (s + ib): By stationarity and our notation, we have so that Claim 2 gives: 3.4. Some properties of q. We digress shortly to summarize some properties of q, which we will use later in our proofs. In the following, when we do not specify the variables we mean the statements holds on all the domain of definition. We use the subscript notation for partial derivatives (such as q a for ∂ ∂a q). Proof. Since r(2iy) > 0 for all y ∈ R, the function q is indeed well-defined; differentiability follows from that of r(z). For item 1, notice that and so, by Cauchy-Schwartz, is in [0, 1]. 
Equality q(x, a, b) = 1 holds only if the function λ → e 2π·aλ e −2πixλ is a constant times the function λ → e 2π·bλ , ρ-a.e., but, if ρ is non-atomic, this is impossible unless x = 0 and a = b. Item 2 will be clear once we prove item 3 and recall the continuity of q. For item 3, notice any one of the functions q, q a , q b , q ab is the sum of summands of the form where 0 ≤ j, m ≤ 2 are integers. It is enough therefore to explain why r (j) (x + ia + ib) is bounded and approaches zero as x → ±∞, for any integer 0 ≤ j ≤ 2. Recall that where c j is some constant. As a function of x, this is a Fourier transform of a non-atomic measure, therefore has the desired properties. If condition (2) holds, then dρ(λ) = p(λ)dλ, and the function λ → λ j e 2π(y 1 +y 2 )λ p(λ) is in L 2 (R). Then, its Fourier transform r (j) (x+iy 1 +iy 2 ) is also in L 2 (R), and each summand of the form (7) is in L 1 (R), as anticipated. For item 4, notice that for all x ∈ R and all a, b ∈ (−∆, ∆) we have the symmetry q(x, a, b) = q(x, b, a), and therefore for all t ∈ R: q a (x, t, t) = q b (x, t, t). On the other hand, for all t ∈ (−∆, ∆) it holds that q(0, t, t) = 1, so taking derivative by t we get q a (0, t, t) · 1 + q b (0, t, t) · 1 = 0. This proves the result. 3.5. From double to single integral. Next, we pass to a one-dimensional integral using a simple change of variables: a, b) converges uniformly to a continuous function. We may apply Claim 5 to equation (5) and get: Once again by uniform convergence of the series: where sinc(ξ) = sin ξ ξ and F[γ] is the Fourier transform of γ. In order to apply this claim to simplify equation (8), we must find a finite measure γ a,b k such that F[γ a,b k ](x) = q(x, a, b) k . This is done in the next step. 3.7. The search for an inverse Fourier transform. For now, we keep a, b and k fixed. Our goal is to find a measure whose Fourier transform results in q k (x, a, b) (or, instead, in |r(x + ia + ib)| 2k ). This measure is given in Claim 7 in the end of this subsection. In order to present it we must first discuss some definitions and relations between operations on measures. Denote by M(R) the space of all finite measures on R, similarly M + (R) denotes all finite non-negative measures on R. For two measure µ, ν ∈ M(R) the convolution µ * ν ∈ M(R) is a measure defined by: When both measures have density, this definition agrees with the standard convolution of functions. We write µ * k for the iterated convolution of µ with itself k times. Next recall that By properties of Fourier transform, or, writing z = x + iy we have This gives rise to the following notation: for a measure µ ∈ M + (R) having exponential moments up to 2∆ (i.e., obeying condition (1)), and a number y ∈ (−2∆, 2∆), we define the exponentially rescaled measure µ y ∈ M + (R) by ∀ϕ ∈ C 0 (R) : µ y (ϕ) = µ(e 2πyλ ϕ(λ)) = R e 2πyλ ϕ(λ)dµ(λ) Observation. For any µ, ν ∈ M(R) and any |y| < 2∆, (µ * ν) y = µ y * ν y . Proof. for any test function ϕ ∈ C 0 (R) we have: As a corollary, we get that for any |y| < 2∆ and any k ∈ N, Therefore there will be no ambiguity in the notation ρ * k y . Next An alternative definition, via actions on test-functions, would be: Notice that the cross-correlation operator is bi-linear, but not commutative. Now relation (9) easily implies: , which leads at last to the end of our investigation: Claim 7. For any x ∈ R, |y| < 2∆ and k ∈ N, we have: This measure acts on a test-function ϕ ∈ C 0 (R) in the following way: (ρ * k y ⋆ ρ * k y )(ϕ) = ϕ(λ − τ )e 2πy(λ+τ ) dρ * k (λ)dρ * k (τ ). 3.8. 
Taking the double derivative. Using Claims 6 and 7, we rewrite equation (8): Let us now look at the same expression, with the only change being that the derivative ∂ 2 ∂a ∂b has passed through the sum and through the integrals. The result is: where l a k (λ), l b k (λ) are linear functions in λ, given by Recalling Claim 1a, we calculate the expression which, we hope, is asymp- where This is the expression which appears in Proposition 3.1. Now we justify the exchange of operations in two steps. In both, we regard ∂ 2 ∂a ∂b as a limit, and apply classical theorems in order to exchange it with the sum and the integral. We begin from the RHS of equation (10). 1. exchange of R R and ∂ 2 ∂a ∂b (for a fixed k ∈ N): we use the dominated convergence theorem. For given k ∈ N, there is a ∆ 1 ∈ (0, ∆) such that: so that |ψ k (λ, τ )| ≤ C k e 2π·2∆ 1 |λ+τ | . By condition (1) this is an integrable majorant with respect to dρ * k (λ)dρ * k (τ ). 2. exchange of k≥1 and ∂ 2 ∂a ∂b : by the monotone convergence theorem. After passing in the derivative, each term in the RHS of (10) is nonnegative, therefore the exchange is justified. We summarize the result in the following claim. where h a,b k is given by (11). One more step is required in order to establish Proposition 3.1. 3.9. The error term. At last, we show that the error term in Claim 1a approaches zero as T tends to infinity. Claim 9. If ρ contains no atoms, then for any a ∈ (−∆, ∆): Proof. Starting from equation (5) and applying Claim 5, we have: and q(x, a, b) is given by (6). (Note that by Q(x, a, a) we mean the evaluation of the same mixed partial derivative at the point (x, a, a).) It is thus enough to show that We would like to take derivative term-by-term in (12), at least in the region |x| ≥ 1. For shortness, we do not write the variables (x, a, b), and use again the subscript notation for partial derivatives. We compute: We see that the derivative of the k-th term in the sum (12), i.e., S k k 2 , is bounded in absolute value by q k−2 |q a q b | + 1 k q k−1 |q ab |. In the region |x| ≥ 1 we have q(x, a, a) ≤ α < 1 (part 2 of Claim 4), and since q a , q b and q ab are bounded on the line {(x, a, a) : x ∈ R} (part 3 of that claim), we see that the change of derivative ∂ 2 ∂a ∂b with the sum in k in (12) is legal (by dominant convergence). Now, By part 3 of Claim 4 the limit on the RHS is zero, so (13) holds as anticipated. Theorem 2: Linear and Intermediate Variance The proof is divided into two parts. First we prove the existence of the limit L 1 and its positivity, and later we prove that it is finite under condition (2). 4.1. Existence and Positivity. Using the formula for the variance obtained in Proposition 3.1, and recalling the functions h a,b k are non-negative, we see that the limit L 1 exists and is in [0, ∞]. More effort is needed in order to establish that L 1 > 0. We begin with a simple bound arising from Proposition 3.1: where C 0 > 0 is an absolute constant. The last step follows from ignoring the integration outside Diag ε = 1I{(λ, τ ) : |λ − τ | < ε}, for ε < 1 4T . Next we turn to investigate h a,b 1 . Recall its form is given in Proposition 3.1 or more recently in (11). where C > 0 is a positive constant and ψ(y) = 1 2π d dy [log r(2iy)]. Since y → log r(2iy) is a convex function, for a < b we have ψ(a) < ψ(b). Therefore, is a strictly decreasing function, with a pole at ψ(b) and with the same positive limit at ±∞. Thus, it crosses exactly twice the increasing exponential function e 2π(b−a)λ . 
The next claim will enable us to bound h a,b 1 from below, on most of the real line. Denote by z 1 , z 2 ∈ R (z 1 < z 2 ) the two real zeroes of h a,b 1 whose existence is guaranteed by Claim 10. We also use the notation B(x, δ) for the interval of radius δ > 0 around x ∈ R. The next claim is a slight modification of the previous one, in order to fit our specific need. Fix a parameter δ > 0, and fix F = F δ to be the set provided by Claim 12. Continuing from equation (14), we have where µ is the restriction of ρ 2a to F , i.e. µ(ϕ) = ρ 2a (1I F · ϕ) for any testfunction ϕ. Notice that by the choice of F , µ(R) = ρ 2a (F ) > 0. The next lemma characterizes the limit we are investigating. Lemma 4.1. Let µ ∈ M + (R) (µ ≡ 0). Then the following limit exists (finite or infinite): Positivity of the lower bound which we gave for the limit L 1 is now clear. Using the last observation and the fact that ϕ ε * ϕ ε ≤ 2ϕ 2ε we get: On the other hand, for any compact set K ⊂ R. Since the limit lim ε→0+ sinc(2πεx) = 1 is uniform in x ∈ K, the last expression approaches K |F[µ]| 2 as ε → 0+. Thus, by choosing K and then ε > 0 properly, the lower bound may be made arbitrarily close to R |F[µ]| 2 . This concludes the proof. 4.2. Linear Variance. We recall that by the development in Section 3, we had (Claims 1a and 9): where (relation (8)) The function q(x, a, b) was a normalized covariance (recall (6)), i.e.: By a change of operations, we anticipate that: where S a,b k (x) := and S a,a k (x) denotes the evaluation of the same mixed partial derivative at the point (x, a, a). Indeed, by the justifications in Section 3.8, we may exchange the derivative ∂ 2 ∂a ∂b with the sum k≥1 when writing the total expression for the variance (by positivity of each term, which is apparent from there). Then we may exchange ∂ 2 ∂a ∂b with the integral 2T −2T as the resulting function S a,b k (x) is continuous, hence in L 1 ([−2T, 2T ]) with respect to the variable x. In fact, the following claim is much stronger then the last argument, and is the main tool to what follows. Let us first see how to finish the proof of linear variance using this claim. Again, as we saw in section 3.8, each term of the series in the RHS of (15) is non-negative. Therefore, by the monotone convergence theorem: The limit in each term can be computed using the following: Proof. First, notice that the claim holds for |Q|, since and both ends of the inequality approach the limit R |Q|. Now, We conclude that which is finite by Claim 13. Lastly, once we know the limit is finite we may obtain another formula of it using Proposition 3.1. We may take term-by-term limit of T → ∞, again by monotone convergence, and get the form presented in Remark ??: All that remains now is to prove Claim 13. Proof of Claim 13. We recall that S a,b k was computed in the proof of Claim 9, to be: Step 1: Let g be one of the functions q, q a , q b or q ab . Then g(x, a, a), g(x, a, b) and g( This is, in fact, part 3 of Claim 4. This step ensures that S a,a k , S a,b k and S b,b k are in L 1 (R) with respect to x. We turn now to prove the "moreover" part of the claim. We use (16) in order to rewrite the desired series: Once again, all functions are evaluated at (x, a, a), (x, a, b) or (x, b, b) and what follows holds for each of the three options. By step 1, For the middle sum in (17), it is therefore enough to show that: Step 2: The sum m≥1 R q m q a q b dx converges. Proof. We will show, in fact, that the positive series m≥1 R q m |q a q b | dx converges. 
First, in case we are evaluating at (x, a, b) (a < b), our series converges due to (18) and the bound in part 2 of Claim 4. Now assume we are evaluating at (x, t, t) (where t ∈ {a, b}). As we deal with a positive series, it is enough to show that both Denote by C = sup x∈R |q a q b (x, t, t)| ∈ (0, ∞). The sum in (II) is bounded by where C ′ ∈ (0, ∞) is another constant. C, C ′ and R q(x, t, t)dx are all finite by Claim 4. We turn to show (I). By parts 1 and 4 of Claim 4, the sum m≥1 q m |q a q b | dx = |qaq b | 1−q is well-defined for all x (including x = 0). By the monotone convergence theorem, item (I) is then equivalent to which is indeed finite as an integral of a continuous function on [−1, 1]. At last, only the right-most sum in (17) remains. Using the boundedness and integrability guaranteed in Step 1, it is enough to show: Step 3: The sum m≥1 1 m+2 R q m dx converges. Proof. We use a fact which is the basis for on of the standard proofs of the Central Limit Theorem (CLT). For completeness, we include a proof in the end of this subsection. Lemma 4.2. Let g ∈ L 1 (R) be a probability density, i.e., g ≥ 0 and R g = 1. Then there exists C > 0 such that for all m ≥ ν, We would like to apply the lemma to Notice that indeed this is probability measure, as by equation (9) with k = 1: and in particular F[g a,b ](0) = R g a,b = 1. This choice also obeys the extra integrability conditions in the lemma (as condition (1) implies (a) and (2) implies (b) with ν = 2). We see now that the last inequality following from the log-convexity of y → r(iy). Similarly we define g a,a and have q(x, a, a) = |F[g a,a ](x)| 2 . Thus in all three cases of evaluation, using the lemma with the appropriate function g yields: as required. Combining all three steps with (17), we end the proof of Claim 13. Our last debt now is to prove Lemma 4.2. The proof is a minor variation of the proof for CLT appearing in Feller [6,Chapter XV.5]. To prove the lemma, it is enough to show that lim m→∞ √ m R |G(x)| m dx exists and is finite. for some value of α > 0, which in fact is α := G ′′ (0). We shall achieve (19) by splitting the integral into three parts, and showing each could be made less than a given ε > 0 if m ≥ ν is chosen large enough. Fix R > 0 (to be determined later). By Taylor expansion, , and so the integrand is less than 2e − αx 2 4 . Choosing R so that 4 ∞ R e − αx 2 4 < ε will satisfy our needs. Theorem 3: Super-linear variance In this section we prove the two items of Theorem 3, in reverse order. Item (ii): Super-linear variance for particular a, b. Assume condition (3) holds for the particular a and b at hand. Fix a parameter δ > 0, and let F = F δ be the set provided by Claim 12. The premise ensures that, if δ is small enough, at least one of the measures (1+λ)ρ 2a | F δ and (1+λ)ρ 2b | F δ does not have L 2 -density. WLOG assume it is the former. At first, assume also ρ 2a | F is not in L 2 . Repeating the arguments of the Subsection 4.1 we get the lower bound where µ = ρ 2a | F and c δ > 0. The LHS is therefore infinite, and so L 1 = ∞. We are left with the case that λρ 2a | F δ does not have L 2 -density, but ρ 2a | F δ does (denote it by p 2a ). The argument is similar. Continuing from (14) and employing Claim 12, we get where K ⊂ F is compact. But, by our assumption, by choosing K properly the last bound can be made arbitrarily large, so that lim T →∞ If E = ∅, once again L 1 (a, b) = ∞ for all a, b ∈ J with no exceptions. Assume then there is some (a 0 , b 0 ) ∈ E. 
This means there exists λ 1 , λ 2 such that for any pair of intervals I 1 , I 2 such that λ j ∈ I j (j = 1, 2), both the functions (1+λ 2 )e 2π·2a 0 λ p(λ) and (1+λ 2 )e 2π·2b 0 λ p(λ) are in L 2 (R\(I 1 ∪I 2 )), but at least one of them (WLOG, the former) is not in L 2 (R). Observe that the existence of such λ 1 , λ 2 depends solely on p(λ), and may therefore be regarded as independent of the point (a 0 , b 0 ) ∈ E. Moreover, at least one among λ 1 and λ 2 (say, λ 1 ) is such that for any neighborhood I containing it, p ∈ L 2 (I). Suppose now a, b ∈ E are such that where h a,b 1 (λ) = l a 1 (λ)e 2πaλ − l b 1 (λ)e 2πbλ 2 is the function appearing in the the first term of our asymptotic formula, and in the lower bound in inequality (14). Recall h a,b 1 is non-negative and has only two zeroes by Claim 10. We may choose δ > 0 smaller than the minimal distance between λ 1 and a zero of h a,b 1 , and then construct F = F δ as in Claim 12. Certainly λ 1 ∈ F δ , and so the measure µ = ρ 2a | F δ is not in L 2 (R) (it is even not in L 2 (I) for any neighborhood I of λ 1 ). Just as in subsection 4.1 we shall get We end by showing that for a given point λ 1 ∈ R and a given a ∈ J, the set of b ∈ J which do not obey (21) is finite. Indeed, this is the set {b ∈ J : h a,b (λ 1 ) = 0} = {b ∈ J : ϕ(a) = ϕ(b)} where ϕ(y) = e 2πyλ 1 l y 1 (λ 1 ) = ∂ ∂y e 2πλ 1 y r(2iy) . Suppose the desired set is not finite. Since ϕ is real-analytic, it must be constant on J. But then r(2iy) = e 2πλ 1 y cy+d for some c, d ∈ R, and the corresponding spectral density would satisfy condition (2) for all relevant a, b. This contradiction ends the proof.
2016-01-17T18:12:20.000Z
2013-09-09T00:00:00.000
{ "year": 2018, "sha1": "65e8315028603d066cf13cd7304f060b50d5555c", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1309.2111", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "65e8315028603d066cf13cd7304f060b50d5555c", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
58642751
pes2o/s2orc
v3-fos-license
Postural control and balance in a cohort of healthy people living in Europe Abstract In the past 20 years, posturography has been widely used in the medical field. This observational study aimed to report the values derived from posturography of a wide set of healthy subjects from various European countries using a plantar pressure platform and a standardized method of measurement. A random cluster sampling of 914 healthy subjects aged between 7.0 and 85.99 years, stratified by age, was carried out. To provide percentile values of our cohort, data were processed to obtain 3 curves corresponding to the following percentiles: 25th, 50th, 75th, and the interquartile range. Distance-weighted least squares method was used to represent the percentile on appropriate graphs. In our sample, the balance to improve with age, up to approximately 45 years, but the trend to reverse with older age. The data show that the oscillations on the sagittal plane (y-mean) change with advancing age. Young people had more retro-podalic support than older people; the balance shifted forward in elderly people. As the study included a relatively large quantity of data collected using a standardized protocol, these results could be used as normative values of posturography for similar populations. On the basis of this data, correct diagnostic clues will be available to clinicians and professionals in the field. However, further studies are needed to confirm our findings. Introduction The control of standing is a complicated task, and many factors contribute to an adequate postural control. The postural control system is influenced by peripheral sensory systems and their correct functioning. [1,2] The sensory system allows us to perceive the environment and integrate vestibular, visual, and proprioceptive inputs with the central nervous system. [3,4] The literature demonstrated that vision is involved in programming our locomotion, [5] but that subjects with severe sight impairment show an increased somatosensory contribution to balance control. [6] In humans, the sense of balance is governed by postural receptors located in the vestibular, visual, and proprioceptive systems that provide afferent and efferent information to the kinetic muscle chains. [5,7,8] The postural control variables may be reduced when sensory systems are altered. [9,10] A common method of studying standing balance is to record body segment motion equilibrium, which is unstable, and small fluctuations are seen in balance measurements that reflect continuous and intermittent muscle activity. [11] Balance is defined as the maintenance of the vertical projection of the body's center of mass (COM) onto the support area formed by the feet. [12,13] The center of gravity (CoG) is defined as the vertical projection of the COM onto the ground. [14,15] The center of pressure (CoP) is the point of application of the resultant ground reaction force. Winter defined it as the weighted average of all the pressures over the surface of the area in contact with the ground. It is entirely independent of COM. [5] Posturography is aimed at quantifying the body sway of subjects in a standing position. [16] This test records variations of CoP as evidenced on a supporting platform. [16,17] The literature shows that CoP is the primary stabilized reference for posture and movement coordination. [18] CoP can be visualized as 2 shapes: a stabilogram and a statokinesigram. 
The stabilogram is a representation of CoP displacement in one direction, either anterior-posterior or medial-lateral, presented as a function of time, whereas the statokinesigram is presented in the horizontal plane. [19,20] Over the past 20 years, the posturography has been widely used in many disciplines of medicine . [21][22][23][24][25] In 2016, Kalron et al [21] showed a good correlation between posturography parameters and the Expanded Disability Status Scale parameters. The Expanded Disability Status Scale is an accepted method of quantifying disability in multiple sclerosis and consists of an 8-function system scale monitoring motor, sensory, cerebellar, brain stem, visual, bowel and bladder, pyramidal, and other functions. [26] However, Samson and Crowe [27] established that repeated measurements of the same subjects may show wide ranging values, reflecting high variability. In this context, de Oliveira et al showed that fatigue can interfere with the CoP signal, and this aspect needs to be standardized in the experimental design. On this line, Liu et al [28] described how the feet position can influence the test results, and therefore, all subjects must assume an identical foot position on the platform when evaluated using posturography. In recent years, there has been a surge of interest in low-cost applications to assess balance. Although there is a clear tendency to adopt posturography in daily life to better plan interventions and predict functional disabilities, a major concern at this stage is the high variability associated with the force platform method. Additionally, normative data derived from a large population are missing. Therefore, we assessed postural control and balance in a cohort of healthy people living in Europe to provide normative data derived from posturography, performed with a standardized method. The main aim of this study was to identify values of normality threshold common to all subjects and independent of anthropometric parameters and sex in the sway patterns of healthy subjects during quiet standing. Methods This was an observational study. The study has been retrospectively registered (ISRCTN14957074). The STROBE statement for observational studies was adopted. [29][30][31] The study design was approved by the Departmental Research Committee (Consiglio di Dipartimento SPPF Prot. n. 290/2014; punto all'ordine del giorno numero 10; approval number: 290-2014/ MEDF-02/11), and the subjects were selected according to the criteria approved by the Ethics Committee of the University of Palermo. All the members of the research team were experts in the field (posturologists, physiotherapists, sports science experts). All members coordinated through skype meetings. The sample recruitment was in accordance with both the Italian and Spanish recruitment guidelines. Personal data of participants will be kept confidential. Anthropometric measurements of participants will be provided anonymously. Anthropometric indices All measurements were performed twice, and the arithmetic mean was recorded for evaluation. The weight was measured with approximation to 100 g (Wunder 960 classic). Height was measured with a portable Seca stadiometer sensitive to changes up to 1 cm (Seca 220, Hamburg, Germany). Measurements were done with subjects barefoot, the heels, hips, and shoulders touching the stadiometer, and the head in neutral position with eyes gazing forward. 
[32,33] Data were available after completion of analysis, were stored in the database of our department, and it will be disposed of in 5 years as per university policy. The data were shared anonymously upon request from the researchers with journals and the working research groups. All the data will be linked anonymized. All participants provided informed consent before enrolment. Data were collected from 2014 to 2016. Detailed descriptions of the study sampling and recruitment approaches, standardization, data collection, analysis strategies, quality control activities, and inclusion criteria were approved by all operators who participated in the research. A random cluster sampling of 914 healthy subjects for the observational study, aged between 7.0 and 85 years, stratified by age, was carried out. The inclusion and exclusion criteria were as follows: not having a positive diagnosis for any disease which influences the balance (benign paroxysmal positional vertigo [BPPV], labyrinthitis, Ménière disease, tinnitus, vestibular neuronitis, etc); not ex-professional athletes [34,35] ; no fracture in the previous 6 months; no falls in the previous 6 months. [36,37] The sample size was calculated with a confidence level of 95% for the ellipse area posturography parameter. Ellipse area/surface quantifies 95% of the total area covered in the medial/lateral and anterior/posterior direction using an ellipse to fit the data. [17] A standardized methodology with standard operating procedures (SOP) has been developed for the data collection [38][39][40] ; the standardized methodology was used by all team members. Posturography was performed twice, and scores obtained the second time were used for analysis. For posturography assessment, each participant performed the Romberg test with standardized positioning: feet placed side by side, forming an angle of 30°with both heels separated by 4 cm. Posturography values were measured using the FreeMed posturography system, including the FreeMed baropodometric platform and FreeStep v.1.0.3 software. The sensors, coated with 24 K gold, guaranteed repeatability and reliability of the instrument (Sensor Medica, Guidonia Montecelio, Roma, Italy). After test familiarization, participants were asked to take the standardized Romberg test position on the baropodometric platform. The subjects were barefoot and looking at a specific point with a standardized distance. Data from the platform were converted in accordance with instructions provided by the manufacturer and transformed into coordinates of CoP. The following parameters of the statokinesigram were considered in open eyes conditions: length of sway path of the CoP (SP); ellipse surface area (ES); coordinates of the CoP along the frontal (X; right-left; x-mean), and sagittal (Y; forward-backward; y-mean) planes. [41] The ES and the coordinates along the frontal and sagittal parameters were used and cannot be modified significantly by the sampling rate, according to the 1981 Kyoto conventions. [16,42] Statistical analysis Analyses were performed using STATISTICA 8.0 for Windows (Statsoft Inc., Tulsa, OK). Statistical significance was set at P < .05 for all analyses. We analyzed the normality of variables using the Shapiro-Wilk normality test. Mean and standard deviation (SD) of the measures were calculated, and the difference between sexes was assessed using the Mann-Whitney test. 
To provide percentile values, sample data were analyzed using maximum penalized likelihood with the LMS statistical method, [43] obtaining 3 curves corresponding to the 25th, 50th, and 75th percentiles, and the interquartile range (IQR). Distance-weighted least squares method was used to represent the percentile on graphs. Results The Shapiro-Wilk normality test showed that all variables do not assume Gaussian distributions (P < .05). Table 1 shows the mean and SD of anthropometric and posturography measures of our sample and statistical analysis to show significant differences using the Mann-Whitney test. Posturography measures did not differ significantly between sexes. Tables 2-5 show the cut-off values of the 25th, 50th, and 75th percentile, and IQR, for each posturography component by age. In our sample, the ES (Table 3 and Fig. 1) improved with age, up to approximately 45 years, but the trend reversed with older age. In addition, the SP analysis was fairly linear, with no clear trend ( Table 2 and Fig. 2). Interestingly, the analyses of the y-mean showed an adaptation change with age (Table 5 and Fig. 3). Young people had more retro-podalic support than older people. Although the support remained retro-podalic for all ages, the COP became close to 0 with advancing age. Therefore, the balance shifted forward in elderly people. Similarly, in the analyses of the x-mean, the balance shifted slightly to the right at a young age compared with that at older age (Table 4 and Fig. 4). Discussion The baropodometric platform is an effective method for measuring postural stability. The main aim of this study was to identify, in healthy subjects and during quiet standing, posturography values of normality threshold, common to all subjects and independent of anthropometric parameters and sex. Some studies enrolled mixed-sex groups that has been shown that COP measures differ between age groups, but the reliability of these measures is not influenced by sexes. [44,45] We have tried to describe in detail the variations of posturography parameters; considering that, especially at a younger or an older age, posturography parameters can vary significantly over 5 years. The 5-year analysis interval was decided after analyzing the literature; the similar study in the literature used the same range. [46] While looking at the optimal age range windows for the observations, a 5-year age range or smaller was already adopted in other investigations. [47,48] To our knowledge, a study with such a large sample and with this type of instrument, on posturography parameters, is the second present in the Table 1 Means and standard deviations of anthropometric and posturography test measures collected in the study sample, by sex. Table 3 Percentiles and interquartile range (IQR) of the ellipse surface area (mm 2 ). Ellipse surface area Table 4 Percentiles and interquartile range (IQR) of the x-mean (mm). Table 5 Percentiles and interquartile range (IQR) of the y-mean (mm). literature. [46] Similarly, Goble and Baweja [46] reported a relatively high variability of these parameters in the youngest age group (ie, 5-9 years) and in the oldest ones, respectively, but showed a significant improvement on 10 to 14-year-old range and 15 to 19-year-old range, respectively. Subsequently, according to the results of Goble and Baweja, [46] the human balance seems to remain stable until about 50 years then to worsen until the end of life. This conclusion is in line with our findings and confirm our hypothesis. 
Interestingly, we identified worsening balance in elderly people, and these results are in line with those published in the literature. [49][50][51][52] However, we also recorded altered balance at a young age, probably due to the lack of muscle strength (dynapenia) present at this age. [53] Authors have reported children's lack of key motor skills (strength, power, coordination), which are necessary components of balance capacity [53][54][55][56]; this provides a possible explanation for altered balance at an early age.
Figure 4. The x-mean parameters.
The underdeveloped visual-sensory system is another factor that contributes to poorer balance in children. [57][58][59][60] Moreover, children have a relatively higher center of gravity (CoG) than adults, which consequently alters their balance. [61] These outcomes are in line with the results of Demura et al, [45] who found that body sway is lower for young adults than for preschool children, but higher for elderly people. Furthermore, in 2014, Barozzi et al showed a similar trend of altered balance at a young age, and also in this case postural stability improved toward adult age. A high intersubject variability of stabilometric parameters has already been recorded in younger subjects in comparison with older subjects and adults. [62,63] In a cohort of young 23-year-old subjects, the study by Clark et al placed the average sway path, measured using a force plate, between the 25th and 75th percentiles of our study, very close to the 50th percentile for the corresponding age (410 mm [Clark et al's study] vs 556 mm [current study]). [64] Regarding old age, our results indicated a linear decline of balance from 70 to 80 years of age. We believe that the linear decline in cognitive function and muscle strength/mass is strongly related to the postural parameters. As expected, these data confirm the main findings in the field of geriatric science. [65][66][67][68][69][70][71][72][73][74][75][76] In 2017, Blomkvist et al analyzed reaction time (RT) in a large sample of subjects; the study indicated that RT worsens with age. [77] Consequently, the assessment and modification of risk are the mainstay of fall prevention in the elderly. [78] In this context, Bianco et al [79] showed that particular physical activities (dance and ballroom dancing) can influence RT and, ultimately, decrease the risk of falls. Along these lines, our results could help prevent these accidental events. Conversely, at the pediatric ages of 5 to 10 years, we again observed the same linear trend, probably attributable to not yet consolidated cognition and muscle strength/mass, as mentioned before. [53,54,[80][81][82][83]] The limitations of this study include the use of a single type of stabilometric platform. Although this allowed a homogeneous comparison of all data, further studies with other tools must be carried out to confirm our findings before they can be generalized. Conclusions This study included a relatively large quantity of data collected using a standardized protocol. Therefore, these results could be used as normative values for posturography assessments in similar populations. Because the plantar pressure platform method is in itself subject to bias and may interfere with a correct diagnosis of good or bad posture, we presented percentile values that should be more helpful to professionals in interpreting posturography recordings.
The study is still ongoing, and we aim to recruit a larger population to update these values within the next 5 years. Author contributions: Antonino Patti, Pierre Marie Gagey and Antonio Palma designed the study, discussed the results, and drafted the paper; Neşe Şahin and Giuseppe Messina performed the testing and participated in drafting the paper; Antonio Paoli and Angelo Iovane helped with the discussion of the results and reviewed previous research; Damir Sekulic and Antonino Bianco performed the statistical analyses and drafted the paper. Conceptualization: Antonio Paoli, Antonio Palma.
Discussing alcohol use with the GP: a qualitative study Background Despite most GPs recognising their role in the early diagnosis of alcohol use disorder (AUD), only 23% of GPs routinely screen for alcohol use. One reason GPs report for not screening is their relationship with patients; questions regarding alcohol use are considered a disturbance of a relationship built on mutual trust. Aim To analyse the feelings and experiences of patients with AUD concerning early screening for alcohol use by GPs. Design & setting A qualitative study of patients (n = 12) with AUD in remission or treatment, recruited from various medical settings. Method Semi-structured interviews were conducted, audiorecorded, and transcribed verbatim. The authors conducted an inductive analysis based on grounded theory. The analysis was performed until theoretical data saturation was reached. Results The participants experienced AUD as a chronic, destructive, and shameful disease. The participants expected their GPs to play a primary role in addressing AUD by kind listening, and providing information and support. If the GPs expressed a non-judgmental attitude, the participants could confide in them; this moment was identified as a key milestone in their trajectory, allowing relief and a move toward treatment. The participants thought that any consultation could be an opportunity to discuss alcohol use and noted that such discussions required a psychological and benevolent approach. Conclusion The participants felt fear or denial from the GPs, even though they felt that discussing alcohol use is part of the GP’s job. The participants requested that GPs adopt non-judgmental attitudes and kindness when approaching the subject of alcohol use. Introduction Worldwide, 3 million deaths occur annually due to the harmful use of alcohol, accounting for 5.3% of all deaths. 1 Overall, 5.1% of the global burden of disease and injury, as measured in disability-adjusted life years (DALYs), is attributable to alcohol. 1 In addition to these risks, alcohol use can lead to AUD, which is a chronic, relapsing brain disorder characterised by compulsive drug-seeking and use despite harmful consequences. 2 France is among the countries with the highest rate of alcohol consumption worldwide. 1 Among its 70 million inhabitants, around 5 million are daily users of alcohol, and 4.8% of year olds display harmful patterns of alcohol use. 3 Overall, 16% of patients who consult GPs could be experiencing an excessive use of alcohol, or AUD. 4 In primary care populations, brief interventions and motivational interviews can reduce alcohol consumption among those with hazardous and harmful use compared with minimal or no intervention in the short and long term. 5,6 No evidence of differences in prognosis has been shown among patients who detoxify in primary care. 7 In most countries, GPs manage patients with substance use disorder in primary care settings. 8 In France, patients with AUD can be managed by GPs by consulting with a specialised centre for addiction medicine as needed. 9 In a 2008/2009 study, 52% of a sample of French GPs reported seeing patients for alcohol cessation over the prior 7 days. 10 Three of four GPs manage their patients' AUD, with or without consulting a specialised centre. 10 GPs face difficulties incorporating screening and brief interventions into their routine practice. 
4,11,12 In France, only 23% of GPs routinely screen for excessive alcohol use, 10 although most GPs recognise that the early diagnosis of AUD is part of their role. 9 In the UK in 2009, 40% of GPs reported that they enquired about alcohol use most or all of the time. 13 The screening and advice-giving rates seem to be approximately 30%, according to a meta-analysis of 12 trials in European and North American countries. 14 A previous systematic review identified that the barriers to the implementation of screening in primary care reported in surveys mainly include lack of time and lack of training. 15 Previous qualitative research has shown that whether GPs discussed issues related to alcohol use was determined by their personal relationship with alcohol and their personal qualities. 16 In addition, one reason reported by GPs for not screening was concerns about their relationship with the patient, particularly because alcohol-related questions were considered a disturbance in a relationship built on mutual trust. [17][18][19] In a recent cross-sectional survey, most adults in England agreed that healthcare providers should routinely ask about patients' alcohol consumption. 20 Few studies have explored the experience of the general population in qualitative ways. A recent study concerning Australian patients with or without AUD hypothesised that views of patients with AUD should differ. 21 A better understanding of these experiences could help to ease the barriers encountered by GPs in preventing screening for AUD. To the authors' knowledge, no qualitative study has explored the experience of patients with AUD undergoing screening for AUD. The aim of the present study was to analyse the feelings and experiences of patients with AUD concerning early screening by GPs. Method Study design This qualitative study adopted an inductive analysis based on grounded theory. 22 Individual interviews seemed most appropriate for discussing AUD, which is a sensitive subject because of the stigma expressed toward it by the general population. 23 Semi-structured interviews allowed the authors to ask open questions while offering a flexible structure. The first draft of the interview guide was created by two researchers (AC and XA) based on the literature and the researchers' experience. The interview guide was reviewed by two experts (JD and SC) until a consensus was reached. This guide evolved throughout the study. The first and final French interview guides are available on request. The first part of the guide collected the patients' demographic information, and the second part addressed the following: • perceptions of the excessive consumption of alcohol and AUD; • feelings regarding the care pathway for AUD; • experience with screening or diagnosis by GPs; and, • expectations regarding early screening for AUD. The present study was written using the COREQ reporting guideline. 24 Participant selection The participants were required to have or previously have AUD, as defined by a DSM-5 25 score ≥2, at the time of diagnosis. The participants could have been in remission or treatment. The participants were aged >18 years, volunteered to participate, and did not have another substance use disorder (expect for tobacco) or a psychotic disorder. The authors aimed to obtain a varied sample. GPs specialising in addiction medicine, gastroenterologists, and psychiatrists practicing in areas around the city of Muret (Haute-Garonne, Midi-Pyrénées, France) were contacted either by phone or e-mail. 
Then, an in-person meeting was organised to explain the study and present an abridged protocol. The physicians provided the contact details of patients who were interested in participating in the study. The authors did not ask the physicians to collect the reasons for refusal from the patients who chose not to participate. If the patient agreed, the researchers called them to schedule a single appointment at a place chosen by the participant. The authors proposed to conduct the interviews at the participants' preferred place: the participants' home or work, a public place, the medical office of the GP, or another health structure. Then, the authors collected the reasons for refusal to participate. At the beginning of the interview, each participant signed a written consent form. The participants were informed that they could stop the interview at any time without providing a reason. The participants were also informed of the anonymisation of the data. An audio-recording of all interviews allowed for faithful transcription using text processing software (Microsoft Word 2016). The transcripts were not returned to the participants. Non-verbal information was transcribed. Some field notes were taken during and immediately after the interviews to record the feelings of the researcher, the context, and the participants' attitude. Research team and reflexivity Two authors, who are residents in general practice (AC and XA), conducted the interviews independently. The interviewers were supervised by two experts; one expert in qualitative studies (SC) and one expert in addiction medicine (JD). The interviewers received specific teaching on qualitative studies by the faculty. The interviewers did not know any of the participants prior to the study's commencement. The interviewers were introduced as researchers and not as GPs. The participants only knew that the topic of the interview was alcohol use and were blind to the aim of the study. The interviewers were both vocationally trained as GPs but attempted to overlook their profession before, during, and after the interviews. The interviewers conducted reflexivity work throughout the study. Analysis From the verbatim transcripts, meaning units were identified. The meaning units were clustered into code groups that led to themes. The themes were derived from the data. Quotes, meaning units, and themes were reported on Microsoft Excel 2016. Without data from the literature, the authors could not have chosen predefined themes. The two researchers conducted a blind data analysis. Then, the researchers conferred and chose a common analysis. Each code was discussed with a supervisor (SC) for the first four interviews; subsequently, only problematic codes were discussed. After a prior thematic analysis, the authors attempted to elaborate a theory. All interviews were analysed, including those conducted using the first interview guide. The authors stopped interviews when they had reached theoretical data saturation. Theoretical data saturation was discussed by the researchers with the supervisor (SC) present. 
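The coding workflow described above (meaning units clustered into codes, with interviews stopped at theoretical saturation) can be tracked with a simple tally of new codes per interview. The sketch below is purely illustrative, uses hypothetical interview identifiers and codes, and is not the software or the saturation criterion actually used by the authors.

```python
def saturation_report(coded_interviews, stop_after=2):
    """coded_interviews: list of (interview_id, set_of_codes) in interview order.
    Reports how many *new* codes each interview adds; saturation is suggested
    when `stop_after` consecutive interviews add no new codes."""
    seen, report, run = set(), [], 0
    for interview_id, codes in coded_interviews:
        new = codes - seen
        seen |= codes
        run = run + 1 if not new else 0
        report.append((interview_id, len(new), sorted(new)))
        if run >= stop_after:
            report.append((f"saturation suggested at {interview_id}", 0, []))
            break
    return report

# Hypothetical example: codes extracted from the first five interviews.
interviews = [
    ("P1", {"shame", "taboo", "GP as confidant"}),
    ("P2", {"shame", "trigger event"}),
    ("P3", {"non-judgmental attitude", "taboo"}),
    ("P4", {"shame", "trigger event"}),
    ("P5", {"non-judgmental attitude"}),
]
for row in saturation_report(interviews):
    print(row)
```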
The authors elaborated on the theory, respecting the three successive tenets of symbolic interactionism, 26 which could be synthesised as follows: 1) humans act on things according to the meaning they attribute to them; 2) this meaning is derived or arises from the social interaction a person has with others; and 3) these senses are manipulated in, and modified through, an interpretive process used by the person to interact with the things they encounter. Participants The authors invited 13 patients to participate. One participant refused to participate because of time constraints. Twelve participants were interviewed between March 2017 and September 2017. The interviews lasted a mean of 25 minutes (range 15-40 minutes). Data saturation was reached after 10 interviews and confirmed by two more interviews. The characteristics of the interviewed patients are shown in Table 1. Feelings of shame Hiding AUD is the result of the shame associated with the disease and attributed by society: Patients experienced their disease as a cause for shame. The confrontation between their selfrepresentation and what they attribute to the representations by society of AUD reinforce this feeling: 'Today, this disease is still a disease that is considered a shameful disease.' (P3) Even with health professionals, the participants felt guilty and stigmatised; being 'an alcoholic' associated them with a social group viewed poorly by wider society. This taboo is also experienced because alcohol consumption is not addressed by GPs during routine consultations. The participants felt that those around them misunderstood their illness. Feeling misunderstood and ashamed makes AUD a taboo subject that is difficult to discuss, creating loneliness. The feeling of exclusion was equally strong in relation to wider society -which, they felt, viewed them as just 'an alcoholic', without knowledge of them as a whole person -as it was in relation to their relatives. It appears that participants could have an implicit definition of themselves with a predominance of their illness. As participants viewed and experienced their disease as shameful, they could not discuss it spontaneously. Thus, alcohol use could only be discussed following a triggering event, such as hospitalisation or anomalies on the patient's biological balance sheet, 'I started talking to him about it [...] when my blood test was … ' (P4), and by giving limited information, 'I had talked to my attending physician but not very much, not the quantity, I said what I drank' (P3). Step by step, the role participants gave to society and relatives to define themselves encouraged the feeling of isolation, even if not explicitly specified verbatim. Concerning the systematic screening for alcohol use during a consultation, participants felt they might have been surprised by the subject being raised, and such a screening would not necessarily have been well accepted. Participants doubted the feasibility: 'I don't see an attending physician ask all his patients whether they consume alcohol in a way that is excessive' (P1) It is unlikely that talking about AUD in a systematic screening could destabilise the precarious balance of the relationship between the patient and the GP. 
The importance of the GP relationship Kind listening and a trust-based relationship is expected from the GP, whose importance could be fundamental: 'Good initial care, good detection by the attending physicians of potential alcoholics; it is true that if, from the beginning, there could be a first intervention of the attending physician in relation to the disease, it could perhaps avoid a lot of deaths' (P7) The role of the GP was perceived as important but complex, consisting first of the early identification of people with AUD. Nevertheless, participants felt that GPs feared managing AUD. In addition, GPs' denial of a patient's alcohol use could limit assessments: 'However, for him [the GP], my consumption was not important. I was not drinking. I was not addicted.' (P4) One hypothesis could be that participants did not perceive the 'goodwill role' of GPs as beneficial to the consultation; instead they perceived a potential goodwill approach as a fear of management. The early detection of AUD by GPs was perceived as needing serious improvement. The participants highlighted the lack of training among GPs. The participants expected their GPs to be a confidant and a companion in addressing alcohol use: The importance of the GP in accepting care could be fundamental: '... the role of the physician is to help him or her understand that he or she is actually sick and needs to be treated' (P3) When participants perceived that they needed help, they shared the truth with their GP. There was a real need to carry out a first approach, like a de-escalation of the subject: Sharing information relating to AUD seemed to be difficult for the participant. Even if GPs were spontaneously cited as a potential interlocutor, the authors could use their words to recreate their perceptions of the GP's aim: kind listener, knowledge, and first health professional that could inaugurate the care. A parallel could be made with the role of the GP as a referent. It was the health professional who started the awareness and performed the first step in care. The GP's attitude and the initiation of care With an adapted attitude, the patient could confide in the GP and begin receiving care: 'If you feel his kindness, suddenly, it changes many things.' (P5) When participants encountered kind listening and lack of judgment, their defensive attitudes could be overcome. The participants could, at this time, consult with their GP for reasons specifically related to alcohol use and could take the initiative to address the subject in consultation without any restraint. If the GP discussed alcohol use in an appropriate way, there could be an opportunity to enhance patient awareness: '... when he made me aware of it, it seemed obvious to me.' (P5) The support participants found in discussing AUD was an important step in their disease trajectory. When participants felt no shame in speaking with their families, they could feel some support. Discussing alcohol use with the GP was experienced as a relief, allowing the participants to talk about the subject more easily with others: 'but now I can talk about it with them, with the children.' (P9) Participants could then establish future objectives. When asked how GPs should address the subject of alcohol use, the participants thought that any consultation could be an opportunity. The way the subject was approached in consultation -for example, in a direct way, '... to take no chances. You have to be frank!' (P8), or indirectly, '... not to bring it up brutally like that ... 
' (P6) -conditioned the individual's response. To begin with, knowing the patient well was felt to be an asset. Once the climate of trust is established, an attempt should be made to bring up the subject through insignificant questions. The choice of words used seemed important, while there were no specific terms to use: 'I think he wore kid gloves because he thought it wasn't, not necessarily something you could hear easily.' (P5) The participants needed to feel cared for: 'If you feel his kindness, suddenly, it changes many things.' (P5) The GP must demonstrate kind listening. The subject of alcohol must be addressed with a psychological approach. When the patient reported some alcohol consumption, hesitating to explore the subject in detail was not appropriate: '... ask him for his consumption, and how he consumes' (P3) To ensure that the subject in consultation is approached in the best possible way, the participants strongly advised that the GP not be clumsy: 'There are doctors, not necessarily generalists, but even addiction specialists, who are clumsily ineffective with patients.' (P5) It was found that there were two levels in the relationship with the GP. The first step is the establishment of an appropriate contact with their GP, opening a door that can lead to further care. Several key factors could be proposed for establishing appropriate contact: availability, trust, confidence, knowledge of the participant's situation, and skills for AUD management. The analysis demonstrated that this first step was necessary in order to go further in establishing care and support for a patient with AUD, even if not specified by the participants. The second step was the inauguration of a link between participants and GPs. The authors found that the link could be represented as a fine marine anchor. Once the anchorage was effective, each of the two extremities had to slowly make the effort to try to be closer in the relationship and start a care plan. The anchor could be thrown by the GP to the patient to start a link. Once the link was initiated, the GP's attitude and skills could permit an anchorage. It appeared that if the GP was hesitant or too directive, the anchorage failed. During the care, the more the GP was able to develop a trusting environment based on their communication skills, the more the anchorage could be effective in permitting a care plan. The anchor could also be thrown by the patient who identified the GP as a potential source of help. This was possible when the GP appeared competent, available, nonjudgmental, and aware of the patient's social situation. Finally, the authors observed after analysis of the transcripts that communication skills were central for AUD care for the participants. Discussion Summary In this qualitative study, the participants expressed shame about having AUD attributed to them by society and sometimes their family. They regretted that GPs did not always perform screening and highlighted denial and fear of discussing alcohol use among GPs. Discussing alcohol use with the GP is a crucial moment in the disease and care trajectory, and is experienced as a relief. Patients with AUD felt that any consultation should be an opportunity to discuss alcohol use. While the patients' attitudes facing their problem differed, they answered their GP's questions truthfully. The participants requested that GPs approach the topic in a way adapted to the patient's personality, after establishing a climate of trust and demonstrating kind listening. 
Strengths and limitations Each patient who volunteered was interviewed, except for one patient, who declined for reasons of availability. The authors did not seek any special patients, and selected patients who were managed in various medical settings referred by their GP. This study was interested in the role of the GP when discussing alcohol use with patients. One limitation is that it is possible that patients without a referring physician, and consequently those who were less satisfied with their GPs, would have expressed different opinions. Additionally, recruitment from medical settings did not permit the authors to include patients who were not receiving medical care, and patients engaged in medical care are unlikely to be at the same stage of their disease as patients out of care. These different stages of the disorder could be related to different perceptions of alcohol use, AUD, and relationships with health professionals. The authors regret that some interviews did not last a long time, as some patients found it difficult to talk about the subject even though the authors tried to establish a climate of trust and let the participant choose where the interview would be conducted. Additionally, the subject was intimate and patients expressed important ideas in few words, permitting the authors to establish their results This study is novel in its investigation of patients with AUD. The authors explored participant views that could help GPs improve early screening for AUD. Recording the interviews, transcribing the interviews faithfully and rapidly after each interview, and incorporating a double-blind analysis and supervision ensured the validity of the results. The researchers engaged in reflection throughout the study to avoid projecting their representations onto the analyses. Comparison with existing literature The participants' feelings of devaluation were consistent with the paradoxical attitude towards alcohol in French society. The consumption of alcohol, particularly wine, is commonplace; however, as in the present study, people with AUD may feel condemned by society, devalued, and considered offenders. 3 Stigma against people with AUD is present in society. 23 Asking questions about alcohol consumption is associated with a feeling of stigmatising the patient according to 16.5% of Spanish GPs. 27 In France, alcohol remains a taboo subject. 28 In the present study, the participants indicated that they felt fear or denial on the part of their GP. The patients reported a lack of training among GPs, which is consistent with published data. 12 Previous studies exploring the views of patients with methadone maintenance disorders revealed that GPs need to be more proactive about alcohol screening. 29 In 23 semi-structured interviews, Australian patients (with or without AUD) held positive views of the role of GPs in health promotion, but had reservations about engaging in discussions concerning alcohol use. 21 Nevertheless, the participants of the present study felt that discussing alcohol use is part of the GP's role, which is consistent with the views of GPs. 30 The attitude of the GP is essential in permitting the patient to speak about the disease and find help. The participants suggested that GPs adopt non-judgmental attitudes and demonstrate kindness. These views were also reported by Australian participants interviewed in the study by Tam et al, who asked to be met with a friendly tone and a relaxed atmosphere. 
21 This attitude on the part of the GP prompts a change in the patient's communication about the disease. When these conditions are present, the participants asked for screening at any consultation, regardless of the purpose of the visit. Indeed, systematically screening for AUD has been shown to be more effective than screening based on clinical signs in a prospective study. 31 Finally, the present analysis revealed that communication skills were central in AUD care for the participants. A recent review 32 synthesised studies focusing on communication skills for delivering health behaviour change conversations; only one addressed alcohol use. 33 In this review, the authors suggested that health promotion should be considered as a special conversational task. Furthermore, alcohol screening questionnaires are reported by GPs to result in negative reactions from patients. 18 From the present study results, the authors thus recommend patient-centred screening approaches. Implications for research and practice Each consultation could be an opportunity to screen for AUD if GPs adopt non-judgmental attitudes and goodwill during discussions concerning alcohol use. GPs should be comfortable discussing alcohol use with their patients, favouring patient-centred screening approaches. Participants in the present study showed a willingness to respond honestly to their GP's questions and believed that such discussions were appropriate given the GPs' role in screening for AUD. Such occasions could provide great relief for patients with AUD and mark the beginning of change. Funding No specific funding was obtained for this study. The work was conducted in the context of the medicine thesis of Aurélie Comes and Xaviel Abdelnour. Ethical approval The ethics committee of the National College of Teaching General Practitioners (Collège National des Généralistes Enseignants) approved this study (reference number: N° 04041716). Provenance Freely submitted; externally peer reviewed.
The missing links of neutron star evolution in the eROSITA all-sky X-ray survey The observational manifestation of a neutron star is strongly connected with the properties of its magnetic field. During the star's lifetime, the field strength and its changes dominate the thermo-rotational evolution and the source phenomenology across the electromagnetic spectrum. Signatures of magnetic field evolution are best traced among elusive groups of X-ray emitting isolated neutron stars (INSs), which are mostly quiet in the radio and γ-ray wavelengths. It is thus important to investigate and survey INSs in X-rays in the hope of discovering peculiar sources and the long-sought missing links that will help us to advance our understanding of neutron star evolution. The Extended Röntgen Survey with an Imaging Telescope Array (eROSITA), the primary instrument on the forthcoming Spectrum-RG mission, will scan the X-ray sky with unprecedented sensitivity and resolution. The survey thus has the unique potential to unveil the X-ray faint end of the neutron star population and probe sources that cannot be assessed by standard pulsar surveys. Introduction An isolated neutron star (INS) radiates at the expense of its rotational, thermal, and magnetic energy. The emission is produced through various mechanisms, with photon energies covering the entire electromagnetic spectrum. Fundamental properties inherited at birth (mass, composition, spin rate, magnetic field, etc), as well as their subsequent temporal evolution, determine the dominant emission mechanism at a given age. These, however, are subject to many theoretical uncertainties and remain poorly constrained by observations. The population of over 2,600 neutron stars observed in our Galaxy is dominated by radio pulsars [14]. Of those, about 50 peculiar X-ray emitting sources are not detected by radio and γ-ray pulsar surveys: they are the young and energetic magnetars, usually identified by their remarkable spectral properties and bursts of high energy emission [11]; the local group of thermally emitting middle-aged INSs discovered by ROSAT and dubbed the magnificent seven (M7) [22]; and the young and seemingly weakly-magnetised central compact objects (CCOs), which are thermal X-ray sources located near the centre of supernova remnants [3]. These groups are important as they provide a privileged view of a variety of emission processes and evolutionary channels that cannot be probed through the bulk of the pulsar population. On the one hand, this is realised by the correlation of a neutron star's thermal luminosity and its inferred magnetic field [17], which holds true for sources with B ≳ 10^13 G (Figure 1). The standard cooling theory [24] falls short of describing the temperature of these INSs (Figure 2). Whereas for most neutron stars magneto-rotational and thermal evolution proceed almost independently, the cooling of strongly magnetised sources is affected by field decay and magneto-rotational energy dissipation [1,18]. Evolutionary models that take these effects into account [23] show that strong fields at birth can brake the INS spin to an asymptotic value over a relatively short timescale; moreover, field dissipation keeps the stellar crust hot for a longer time than expected from the standard cooling theory. While accounting for the spectral and timing properties of individual sources, the model implies an evolutionary link between magnetars and the M7.
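The field strengths quoted in this kind of discussion are usually characteristic dipole values inferred from the spin period P and its derivative Ṗ via the standard vacuum-dipole braking estimate, together with the characteristic age P/(2Ṗ). These are textbook relations rather than formulas taken from this paper, and the example values below are illustrative only.

```python
import math

def dipole_field_gauss(p, p_dot):
    """Characteristic surface dipole field from the vacuum-dipole braking
    formula, B ~ 3.2e19 * sqrt(P * Pdot) G (standard pulsar estimate)."""
    return 3.2e19 * math.sqrt(p * p_dot)

def characteristic_age_yr(p, p_dot):
    """Spin-down (characteristic) age tau = P / (2 Pdot), converted to years."""
    return p / (2.0 * p_dot) / 3.156e7

# Illustrative values only (not measurements from the paper):
# a magnetar-like rotator and a CCO-like 'anti-magnetar'.
for label, p, p_dot in [("magnetar-like", 8.0, 1e-11),
                        ("anti-magnetar-like", 0.1, 1e-17)]:
    print(f"{label}: B ~ {dipole_field_gauss(p, p_dot):.1e} G, "
          f"tau_c ~ {characteristic_age_yr(p, p_dot):.1e} yr")
```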
At the opposite end of the magnetic field distribution, CCOs challenge the stereotype of a young INS. The very low measured spin-down values of three CCOs (dubbed anti-magnetars) imply very low magnetic field strengths and characteristic ages much older than expected from their association with a supernova remnant. Moreover, the lack of detected pulsations in most members indicates that the atmosphere is weakly magnetised [7]. This contradicts the evidence that some sources conceal strong magnetic fields [21]. The fact that no old (> 10 kyr) antimagnetar with similar timing properties is recognised in either radio or X-ray surveys [4,13] implies that their magnetic field is not constant at secular timescales. CCOs might represent a different outcome of neutron star evolution, where they are recovering from an early episode of field burial by hypercritical accretion [8]. In this scenario, if a large amount of debris falls back onto the neutron star after the supernova explosion, accretion is likely to occur at such high rates that any field can be hidden [2]. This may delay the onset of radio pulsar activity for several 1 − 100 kyr, until the field can diffuse back to the surface [15]. Evolutionary scenarios: issues and open questions The role played by the decay of the magnetic field in heating the neutron star crust cannot be overlooked, as it partially explains the observed neutron star diversity. However, even state-of-the-art models of field decay are built over uncertain assumptions, concerning, for instance, the initial field configuration and the level of impurity of the crust [23]. At present, the observed radio and X-ray pulsar population is not sufficient to constrain different hypotheses [6]. While the phenomenology of CCOs has triggered a lively debate over their formation and fate, to our knowledge no attempt has been made in population synthesis to include anti-magnetars as a possible outcome of neutron star evolution. The paucity of these INS groups has hindered their use in population syntheses on the Galactic scale. While particular aspects of neutron star evolution and emissivity have been investigated through an extensive number of population syntheses of radio pulsars 1 , overall progress can only be hoped for when the various groups are unified in a 'grand scheme' of neutron star evolution [10]. There are the works which attempt to join the standard radio pulsar population with those of magnetars and the M7, through models of field decay [6,19]. A crucial result is that, to reproduce the number of X-ray thermally emitting INSs we observe to date, the mean strength of the magnetic field distribution at birth has to be significantly higher (and the distribution wider) than one would expect from radio pulsar studies alone. Moreover, the models usually predict a large number of very long spin period neutron stars (P > 20 s), which are not observed. Also important is to assess the relative contributions of the various INS groups to the total number of neutron stars populating the Milky Way. If each INS group is treated independently, a likely consequence is that the Galactic core-collapse supernova rate cannot account for the entire estimated population of neutron stars [12]. The neutron star birthrate derived from population synthesis is even more discrepant from the expected value when field decay is taken into account [5,6]. 
Moreover, while active magnetars are not expected to contribute significantly to the absolute number of observable neutron stars in the Galaxy, the existence of transient sources, which are believed to possess low quiescent X-ray luminosities, also has implications both for the total magnetar birthrate and for the ensemble of the population. The birthrate of CCOs is similarly unclear, either as anti-magnetars (that is, as neutron stars born with intrinsically weak magnetic fields), or within the 'hidden magnetic field' scenario. The fraction of neutron stars that undergo a phase of fallback accretion and field submergence is unknown, although these events could commonly take place in the early life of a neutron star (see e.g. [2] for a discussion). A forecast for the eROSITA survey As long as only the X-ray bright end of the radio-quiet INS population is known, the observational and theoretical advances seen in recent years may reach a halt. The Extended Röntgen Survey with an Imaging Telescope Array (eROSITA) [20] is the primary instrument on the forthcoming Spectrum-RG mission. The four-year eROSITA all-sky survey (eRASS) is expected to detect a large sample of clusters of galaxies, active galactic nuclei, transient phenomena, and Galactic compact objects, among other interesting case studies. eRASS offers a timely opportunity for a better sampling of a considerable number of neutron stars that cannot be assessed by radio and γ-ray surveys. Eventually, the identification and investigation of eROSITA sources at faint X-ray fluxes will help us to test alternative neutron star evolutionary scenarios and constrain the rate of core-collapse supernovae in our Galaxy. To estimate the number of INSs to be detected by eRASS through their thermal X-ray emission, we performed Monte Carlo simulations of a population synthesis model [16]. Briefly, neutron stars are created from a progenitor population of massive stars distributed in the spiral arms of the Galactic disk; after the supernova explosion and the imparted 'kick' velocity, their evolution in the Galactic potential is followed while they cool down, isotropically emitting soft X-rays. The expected source count rates and total flux are then computed in the eROSITA detectors, taking into account the absorbing material in the line-of-sight, the detector and survey properties, and the celestial exposure after four years of all-sky observation. We apply an analytical description of the interstellar medium, based on layers of hydrogen in both atomic and molecular form, and commonly adopted cross-sections. All neutron stars in our simulations cool down at the same fiducial rate. Our study indicates an expected number of up to ∼ 85 − 95 thermally emitting INSs to be detected in the eRASS with more than 30 counts (0.2−2 keV).
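The population synthesis of [16] involves a detailed Galactic model, cooling curves, and the eROSITA instrument response; only the final detection-counting step is sketched below, and every number is a placeholder assumption rather than a value from the actual model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder assumptions (not the values of the actual population model):
N_STARS = 200_000                               # simulated thermally emitting INSs
L_X = 10**rng.normal(31.0, 0.5, N_STARS)        # soft X-ray luminosity [erg/s]
d_pc = rng.uniform(100, 8000, N_STARS)          # distance [pc]
nh = 10**rng.normal(21.0, 0.5, N_STARS)         # hydrogen column density [cm^-2]

# Unabsorbed flux, then a crude exponential attenuation standing in for
# photoelectric absorption folded through the soft band.
cm_per_pc = 3.086e18
flux = L_X / (4 * np.pi * (d_pc * cm_per_pc) ** 2)
flux_abs = flux * np.exp(-nh / 1e21)

# Survey detection criterion: convert flux to counts with an assumed
# energy-conversion factor and exposure, and require > 30 counts (0.2-2 keV).
ECF = 1e12                                      # counts cm^2 / erg, placeholder
exposure_s = rng.uniform(1600, 8000, N_STARS)   # eRASS exposure range ~1.6-8 ks
counts = flux_abs * ECF * exposure_s
n_detected = int(np.sum(counts > 30))
print(f"Detected INSs in the mock survey: {n_detected}")
```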
Figure 3. Histogram of the optical V magnitudes required for ruling out other classes of X-ray emitter among INS candidates, based on X-ray-to-optical flux ratios. We considered in the simulations two possible configurations of optical blocking filters and the resulting total effective area of the seven telescopes, averaged over the field of view [16]. A minimum X-ray-to-optical flux ratio of 10^3.5 is assumed for the eROSITA-detected sample of neutron stars. Distances to the detected sources are between 400 pc and 8 kpc, and the accumulated survey exposure ranges from 1.6 to 8 ks, with a median around 1.9 ks. The limiting flux is around 10^-14 erg s^-1 cm^-2, which is 50 times deeper than that of the ROSAT Bright Star Catalogue.
In Figure 3 we show the histogram of V magnitudes required to select promising INS candidates for deep follow-up investigations (assuming a logarithmic X-ray-to-optical flux ratio of log(F_X/F_V) = 3.5, which is sufficient to exclude other classes of X-ray emitters). Although optical follow-up will require very deep observations (in particular, the identification of the faintest candidates will have to wait for the next generation of extremely large telescopes), sources at intermediate fluxes can be selected for follow-up investigations using current observing facilities. We anticipate ∼ 25 discoveries in the first years after the completion of the all-sky survey (see [16] for details). Summary and conclusions The observed neutron star phenomenology is arguably governed by the properties of the magnetic field: specifically, by its magnitude at birth, whether it decays or grows during the star's lifetime, and how these characteristics affect the rotational and thermal history of the neutron star. Overall, crucial aspects of neutron star evolution and emissivity cannot be probed through the radio and γ-ray pulsar population alone, and are not yet described by theory. It is thus important to investigate and survey INSs in X-rays in the hope of discovering missing links in their evolution, and to gain insight into neutron star physics and phenomenology. The forthcoming all-sky survey of eROSITA, expected for launch in September 2018, is a unique opportunity to unveil the X-ray faint end of the neutron star population. Both coordination with photometric and spectroscopic surveys at other wavelengths and catalogue cross-correlation procedures yielding actual probabilities of source identification are required to pinpoint promising INS candidates among the millions of eROSITA-detected sources. Dedicated follow-up investigations will tackle the evolutionary state of individual targets to evaluate the impact of alternative scenarios on the evolution and observability of the neutron star population in the Milky Way.
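The flux-ratio selection described above can be turned into a limiting optical magnitude with the commonly used relation log(f_X/f_V) = log10(f_X) + m_V/2.5 + 5.37; that zero-point (from Maccacaro-type calibrations) is an assumption of this sketch, not a number quoted in the text.

```python
import math

def limiting_v_magnitude(fx_cgs, log_fx_fv=3.5):
    """V magnitude at which a source of X-ray flux fx (erg s^-1 cm^-2) reaches
    the requested X-ray-to-optical flux ratio, using the commonly adopted
    relation log(fx/fv) = log10(fx) + mV/2.5 + 5.37 (assumed zero-point)."""
    return 2.5 * (log_fx_fv - math.log10(fx_cgs) - 5.37)

# A source at roughly the eRASS limiting flux versus a brighter candidate.
for fx in (1e-14, 1e-12):
    print(f"fx = {fx:.0e}: ruling out other identifications requires "
          f"V > {limiting_v_magnitude(fx):.1f}")
```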
Positive Scalar Curvature and Minimal Hypersurface Singularities In this paper we develop methods to extend the minimal hypersurface approach to positive scalar curvature problems to all dimensions. This includes a proof of the positive mass theorem in all dimensions without a spin assumption. It also includes statements about the structure of compact manifolds of positive scalar curvature extending the work of [SY1] to all dimensions. The technical work in this paper is to construct minimal slicings and associated weight functions in the presence of small singular sets and to show that the singular sets do not become too large in the lower dimensional slices. It is shown that the singular set in any slice is a closed set with Hausdorff codimension at least three. In particular for arguments which involve slicing down to dimension 1 or 2 the method is successful. The arguments can be viewed as an extension of the minimal hypersurface regularity theory to this setting of minimal slicings. Introduction The study of manifolds of positive scalar curvature has a long history in both differential geometry and general relativity. The theorems involved include the positive mass theorem, the topological classification of manifolds of positive scalar curvature, and the local geometric study of metrics of positive scalar curvature. There are two methods which have been successful in this study in general situations: the Dirac operator method and the minimal hypersurface method. Both of these methods have restrictions on their applicability: the Dirac operator method requires the topological assumption that the manifold be spin, and the minimal hypersurface method has been restricted to the case of manifolds with dimension at most 8 because of the possibility of singularities which might occur in the hypersurfaces. The purpose of this paper is to extend the minimal hypersurface method to all dimensions. Date: April 20, 2017. The first author was partially supported by NSF grant DMS-1404966. The second author was partially supported by NSF grants DMS-1308244 and PHY-1306313. The Dirac operator method was pioneered by A. Lichnerowicz [Li] and M. Atiyah, I. Singer [AS] in the early 1960s. It was extended by N. Hitchin [H] and then systematically developed by M. Gromov and H. B. Lawson in [GL1], [GL2], and [GL3]. Surgery methods for manifolds of positive scalar curvature were developed in [SY1] and [GL2]. For simply connected manifolds M n with n ≥ 5 Gromov and Lawson conjectured necessary and sufficient conditions for M to have a metric of positive scalar curvature (related to the index of the Dirac operator in the spin case). The conjecture was solved in the affirmative by S. Stolz [St]. The Dirac operator method was used by E. Witten [W] to prove the positive mass theorem for spin manifolds (see also [PT]). The minimal hypersurface method originated in [SY4] for the three dimensional case and was extended to higher dimensions in [SY1]. The extension to the positive mass theorem was initiated in [SY2] and in higher dimensions in [SY5] and [Sc]. In this paper we extend the minimal hypersurface argument to all dimensions, at least as regards the applications to the positive mass theorem and results which can be proven by slicing down to dimension two. The basic objects of study in this paper are called minimal k-slicings and we now describe them. We start with a compact oriented Riemannian manifold M which will be our top dimensional slice Σ n . We choose an oriented volume minimizing hypersurface Σ n−1 .
Since Σ n−1 is stable, the second variation form S n−1 (ϕ, ϕ) has first eigenvalue which is non-negative. We choose a positive first eigenfunction u n−1 and we use it as a weight ρ n−1 for the volume functional on n − 2 cycles which are contained in Σ n−1 . We assume we have a Σ n−2 ⊂ Σ n−1 which minimizes the weighted volume V ρ n−1 (·). The second variation S n−2 (ϕ, ϕ) for the weighted volume on Σ n−2 then has non-negative first eigenvalue and we let u n−2 be a positive first eigenfunction. We then define ρ n−2 = u n−2 ρ n−1 and we continue this process. That is if we have Σ j+1 ⊂ Σ j+2 ⊂ . . . ⊂ Σ n which have been constructed, we choose Σ j to be a minimizer of the weighted volume V ρ j+1 (·). Such a nested family Σ k ⊂ Σ k+1 ⊂ . . . ⊂ Σ n is called a minimal k-slicing. The basic geometric theorem about minimal k-slicings which is generalized in Section 2 is the statement that if Σ n has positive scalar curvature then for any minimal k-slicing we have that Σ k is Yamabe positive and so admits a metric of positive scalar curvature. In particular if k = 2 then Σ 2 must be diffeomorphic to S 2 and there can be no minimal 1-slicing. If we start with Σ n with n ≥ 8, there might be a closed singular set S n−1 of Hausdorff dimension at most n − 8 in Σ n−1 . In this paper we develop methods to carry out the construction of minimal k-slicings allowing for the possibility that the Σ j may have nonempty singular sets S j . In order to do this it is necessary to extend the existence and regularity theory for minimal hypersurfaces to this setting. To do this requires maintaining some integral control of the geometry of the Σ j in the ambient manifold Σ n , and also of constructing the eigenfunctions u j which are bounded in appropriate weighted Sobolev spaces. This control is gotten by carefully exploiting the terms which are left over in the geometry of the second variation at each stage of the slicing. This is done by modifying the second variation form S j to a larger form Q j . The form Q j is more coercive and can be diagonalized with respect to the weighted L 2 norm even in the presence of small singular sets. We can then construct the next slice using the first eigenfunction for the form Q j to modify the weight. This procedure only works if the singular sets S j do not become too large. We prove that for a minimal k-slicing the Hausdorff dimension of the singular set S k is at most k −3. The regularity theorem is proven by establishing appropriate compactness theorems for minimal k-slicings and showing that at a singular point there is a homogeneous minimal k-slicing gotten by rescaling and using appropriate monotonicity theorems (volume monotonicity and monotonicity of an appropriate frequency function). A homogeneous minimal k-slicing is one in R n for which all of the Σ j are cones and all of the u j are homogeneous of some degree. It is then possible to show that if we had a Σ k+1 with singular set of codimension at least 3, but Σ k had a singular set of Hausdorff dimension larger then k − 3 then there would exist a nontrivial homogeneous 2-slicing with Σ 2 having an isolated singularity at the origin. We show that no such homogeneous slicings exist to conclude that if S k+1 has codimension at least 3 in Σ k+1 , then S k has codimension at least 3 in Σ k . In particular if k = 2 then Σ 2 is regular. We now state the main theorems of the paper beginning with the positive mass theorem. 
A manifold M n is called asymptotically flat if there is a compact set K ⊂ M such that M \ K is diffeomorphic to the exterior of a ball in R n and there are coordinates near infinity x 1 , . . . , x n so that the metric components g ij satisfy the decay conditions |g_ij − δ_ij| + |x| |∂g_ij| + |x|^2 |∂^2 g_ij| ≤ C |x|^{-p} for some p > (n−2)/2. We also require the scalar curvature R to satisfy |R| ≤ C |x|^{-q} for some q > n. Under these assumptions the ADM mass is well defined by the formula (see [Sc] for the n dimensional case) m = \frac{1}{4(n-1)\omega_{n-1}} \lim_{\sigma\to\infty} \int_{S_\sigma} \sum_{i,j} \left(g_{ij,i} - g_{ii,j}\right) \nu^j \, d\mu, where S σ is the euclidean sphere in the x coordinates, ω n−1 = Vol(S n−1 (1)), and the unit normal and volume integral are with respect to the euclidean metric. The positive mass theorem is as follows. Theorem 1.1. Assume that M is an asymptotically flat manifold with R ≥ 0. We then have that the ADM mass is nonnegative. Furthermore, if the mass is zero, then M is isometric to R n . This is proven in Section 5, using results of [SY3] to simplify the asymptotic behavior and an observation of J. Lohkamp which allows us to compactify the manifold while keeping the scalar curvature positive. The result which is needed for compact manifolds is the following. Theorem 1.2. If M 1 is any closed manifold of dimension n, then M 1 #T n does not have a metric of positive scalar curvature. Both of these theorems were previously known either for n ≤ 8 or, for any n, under the assumption that the manifold is spin. Actually for n = 8 there may be isolated singularities, but in this dimension a result of N. Smale [Sm] shows that there is a dense set of ambient metrics for which the singularities do not occur. Using this result the eight dimensional case can also be done without dealing with singularities. In this paper we remove the dimensional and spin assumptions. Finally we prove the following more precise theorem about compact manifolds with positive scalar curvature. Theorem 1.3. Assume that M is a compact oriented n-manifold with a metric of positive scalar curvature. If α 1 , . . . , α n−2 are classes in H 1 (M, Z) with the property that the class σ 2 given by σ 2 = α n−2 ∩ α n−3 ∩ . . . ∩ α 1 ∩ [M] ∈ H 2 (M, Z) is nonzero, then the class σ 2 can be represented by a sum of smooth two spheres. If α n−1 is any class in H 1 (M, Z), then we must have α n−1 ∩ σ 2 = 0. In particular, if M has classes α 1 , . . . , α n−1 with α n−1 ∩ . . . ∩ α 1 ∩ [M] = 0, then M cannot carry a metric of positive scalar curvature. We also point out the recent series of papers by J. Lohkamp [Lo1], [Lo2], [Lo3], and [Lo4]. These papers also present an approach to the high dimensional positive mass theorem by extending the minimal hypersurface approach to all dimensions. Our approach seems quite different both conceptually and technically, and is more in the classical spirit of the calculus of variations. In any case we feel that, for such a fundamental result, it is of value to have multiple approaches. Terminology and statements of main theorems We begin by introducing the notation involved in the construction of a minimal k-slicing; that is, a nested family of hypersurfaces beginning with a smooth manifold Σ n of dimension n and going down to Σ k of dimension k ≤ n − 1. This consists of Σ k ⊂ Σ k+1 ⊂ . . . ⊂ Σ n where each Σ j will be constructed as a volume minimizer of a certain weighted volume in Σ j+1 . Let Σ n be a properly embedded n-dimensional submanifold in an open set Ω contained in R N . We will consider a minimal slicing of Σ n defined in an inductive manner. First, let u n = 1, and let Σ n−1 be a volume minimizing hypersurface in Σ n .
Of course, it may happen that Σ n−1 has a singular set S n−1 which is a closed subset of Hausdorff dimension at most n − 8. On Σ n−1 we will construct a positive definite quadratic form Q n−1 on functions by suitably modifying the index form associated to the second variation of volume. We will then construct a positive function u n−1 on Σ n−1 which is a least eigenfunction of Q n−1 . We then define ρ n−1 = u n−1 u n , and we let Σ n−2 be a hypersurface in Σ n−1 which is a minimizer of the ρ n−1 -weighted volume V ρ n−1 (Σ) = Σ ρ n−1 dµ n−2 for an n − 2 dimensional submanifold of Σ n−1 and we denote µ j to be the Hausdorff j-dimensional measure. Inductively, assume that we have constructed a slicing down to dimension k + 1; that is, we have a nested family of hypersurfaces, quadratic forms, and positive functions (Σ j , Q j , u j ) for j = k + 1, . . . , n such that Σ j minimizes the ρ j+1 -weighted volume where ρ j+1 = u j+1 u j+2 . . . u n , Q j is a positive definite quadratic form related to the second variation of the ρ j+1 -weighted volume (see (2.1) below), and u j is a lowest eigenfunction of Q j with eigenvalue λ j ≥ 0. We will always take λ j to be the lowest Dirichlet eigenvalue (if ∂Σ j = 0) of Q j with respect to the weighted L 2 norm and we take u j to be a corresponding eigenfunction. We will show in Section 3 that such λ j and u j exist. We then inductively construct (Σ k , Q k , u k ) by letting Σ k be a minimizer of the ρ k+1 weighted volume where ρ k+1 = u k+1 u k+2 . . . u n , Q k a positive definite quadratic form described below, and u k a positive eigenfunction of Q k . Note that if Σ j is a leaf in a minimal k-slicing, then choosing a unit normal vector ν j to Σ j in Σ j+1 gives us an orthonormal basis ν k , ν k+1 , . . . , ν n−1 for the normal bundle of Σ k defined on the regular set R k . Thus the second fundamental form of Σ k in Σ n consists of the scalar forms A ν j k = A k , ν j for j = k, . . . , n − 1 and we have |A k | 2 = n−1 j=k |A ν j k | 2 . Now if we have a minimal k-slicing, we let g k denote the metric induced on Σ k from Σ n , and we letĝ k denote the metricĝ k = g k + n−1 p=k u 2 p dt 2 p on Σ k × (S 1 ) n−k where we use S 1 to denote a circle of length 1, and we denote by t p a coordinate on the pth factor of S 1 . We then note that the volume measure of the metricĝ k is given by ρ k dµ k where we have suppressed the t p variables since we will consider only objects which do not depend on them; for example, the ρ k -weighted volume of Σ k is the volume of the n-dimensional manifold Σ k × T n−k . We will need to introduce another metricg k on Σ k × (S 1 ) n−k−1 . This is defined byg k = g k + n−1 p=k+1 u 2 p dt 2 p . Note thatg k is the metric induced on Σ k × (S 1 ) n−k−1 byĝ k+1 . We also letà k denote the second fundamental form of Σ k × (S 1 ) n−k−1 in (Σ k+1 × (S 1 ) n−k−1 ,ĝ k+1 ). The following lemma computes this second fundamental form. Lemma 2.1. We haveà k = A ν k k − n−1 p=k+1 u p ν k (u p )dt 2 p , and the square length with respect tog k is given by Proof. If we consider a hypersurface Σ in a Reimannian manifold with unit normal ν, then we can consider the parallel hypersurfaces parametrized on Σ by F ε (x) = exp(εν(x)) for small ε and x ∈ Σ. We then have a family of induced metrics g ε from F ε on Σ, and the second fundamental form is given by A = − 1 2ġ whereġ denotes the ε derivative of g ε at ε = 0. 
If we let exp denote the exponential map of Σ k in Σ k+1 , then since Σ k+1 is totally geodesic in Σ k+1 × T n−k−1 , we have for (x, t) ∈ Σ k × T n−k−1 , and the induced family of metrics is given bỹ Thus we haveġ It follows thatà k = A ν k k − n−1 p=k+1 u p ν k (u p )dt 2 p , and taking the square norm with respect to the metricg k then gives the desired formula for |à k | 2 . We now describe the choice we will make for Q j . Let S j be the second variation form for the weighted volume V ρ j+1 at Σ j , and define where, for now, ϕ is a function supported in the regular set R j and we defineà n = 0, u n = 1. We will discuss an extended domain for Q j in the Section 3. Up to this point our discussion is formal because we have not discussed issues related to the singularities of the Σ j in a minimal slicing. We first define the regular set, R j of Σ j to be the set of points x for which there is a neighborhood of x in R N in which all of Σ j , Σ j+1 , . . . Σ n are smooth embedded submanifolds of R N . The singular set, S j is then defined to be the complement of R j in Σ j . Thus S j is a closed set by definition. The following result follows from the standard minimizing hypersurface regularity theory. In this paper dim(A) always refers to the Hausdorff dimension of a subset A ⊂ R N . In light of this result, we see that our main task in controlling singularities is to control the size of the set S j ∩ S j+1 . We will do this by extending the minimal hypersurface regularity theory to this slicing setting. In order to do this we need to establish the relevant compactness and tangent cone properties and this requires establishing suitable bounds on the slicings. To begin this process we make the following definition. Definition 2.1. For a constant Λ > 0, a Λ-bounded minimal kslicing is a minimal k-slicing satisfying the following bounds for j = k, k + 1, . . . n − 1, where µ j is Hausdorff measure, ∇ j is taken on (the regular set of) Σ j , and A j is the second fundamental form of Σ j in R N . The minimal k-slicings we will consider in this paper will always be Λ-bounded for some Λ. We have the following regularity theorem. Theorem 2.3. Given any Λ-bounded minimal k-slicing, we have for each j = k, k + 1, . . . , n − 1 the bound on the singular set dim(S j ) ≤ j − 3. We now formulate an existence theorem for minimal k-slicings in Σ n . We consider the case in which Σ n is a closed oriented manifold. We assume that there is closed oriented k-dimensional manifold X k and a smooth map F : Σ n → X ×T n−k of non-zero degree s. We let Ω denote a k-form of X with X Ω = 1, and we denote by dt k+1 , . . . dt n the basic one forms on T n−k where we assume the periods are equal to one. We introduce the notation Θ = F * Ω and ω p = F * (dt p ) for p = k +1, . . . , n. We can now state our first existence theorem. A more refined existence theorem is given by Theorem 4.6 which we will not state here. Theorem 2.4. For a manifold M = Σ n as described above, there is a Λ-bounded, partially regular, minimal k-slicing Moreover, if k ≤ j ≤ n − 1 and Σ j is regular, The proofs of Theorems 2.3 and 2.4 will be given in Sections 3 and 4. In the remainder of this section we discuss the quadratic forms Q j in more detail and derive important geometric consequences for minimal 1-slicings and 2-slicings under the assumption that Σ n has positive scalar curvature. Consequences of these results, which are the main geometric theorems of the paper, will be given in Section 5. 
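For use in the curvature computations of Lemma 2.5 and Theorem 2.6 below, it may help to have the two auxiliary metrics and the conclusion of Lemma 2.1 collected in display form. The following is a reconstruction from the definitions given above rather than a quotation of the original displays; in particular, the expression for the squared length is obtained by measuring the dt_p^2 components of the second fundamental form in the metric g̃_k, so the normalization should be checked against the source.

\[
\hat g_k = g_k + \sum_{p=k}^{n-1} u_p^2\, dt_p^2 \ \ \text{on}\ \Sigma_k\times (S^1)^{n-k},
\qquad
\tilde g_k = g_k + \sum_{p=k+1}^{n-1} u_p^2\, dt_p^2 \ \ \text{on}\ \Sigma_k\times (S^1)^{n-k-1},
\]
\[
\tilde A_k = A^{\nu_k}_k - \sum_{p=k+1}^{n-1} u_p\, \nu_k(u_p)\, dt_p^2,
\qquad
|\tilde A_k|^2_{\tilde g_k} = |A^{\nu_k}_k|^2 + \sum_{p=k+1}^{n-1} \bigl(\nu_k(\log u_p)\bigr)^2 .
\]

The single-factor identity that drives the finite induction in the proof of Lemma 2.5 is the standard warped product formula

\[
\tilde R = R - 2u^{-1}\Delta u \qquad \text{for } \tilde g = g + u^2\, dt^2 ,
\]

which is also the form in which the case k = n - 1 is handled in the proof of Theorem 2.6.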
Recall that in general if Σ is a stable two-sided (trivial normal bundle) minimal hypersurface in a Riemannian manifold M, then we may choose a globally defined unit normal vector ν, and we may parametrize normal deformations by functions ϕ · ν. The second variation of volume then becomes the quadratic form where R M and R Σ are the scalar curvature functions of M and Σ and A denotes the second fundamental form of Σ in M. We have the following result which computes the scalar curvaturẽ R k ofg k . Lemma 2.5. The scalar curvature of the metricg k is given bỹ where ∆ k and ∇ k denote the Laplace and gradient operators with respect to g k . Proof. The calculation is a finite induction using the formulã for the scalar curvature of the metricg = g + u 2 dt 2 . For j = k, . . . , n − 1 Letḡ j = g k + n−1 p=j u 2 p dt 2 p . Note thatḡ k =ĝ k andḡ k+1 =g k . We prove the formulā by a finite reverse induction on j. First note that for j = n − 1 the formula follows from the one above. Now assume the formula is correct forḡ j+1 We then apply the formula above to obtain where as above ρ j = u j+1 · · · u n−1 . The statement now follows from the inductive assumption. Sinceḡ k+1 =g k , we have proven the required statement. We now consider consequences of having a minimal k-slicing of a manifold of positive scalar curvature. Theorem 2.6. Assume that the scalar curvature of Σ n is bounded below by a constant κ. If Σ k is a leaf in a minimal k-slicing, then we have the following scalar curvature formula and eigenvalue estimatê where ϕ is any smooth function with compact support in R k . Proof. First note that from (2.1) and (2.2) we have (|∇ j log u p | 2 + |à p | 2 ))ϕ 2 ]ρ j+1 dµ j , and therefore u j satisfies the equation L j u j = −λ j u j where We derive the scalar curvature formula by a finite downward induction beginning with k = n − 1. In this case the eigenvalue estimates follow from the standard stability inequality (2.2) since ρ n = u n = 1 andR n−1 = R n−1 . We also have from Lemma 2.5 thatR n−1 = R n−1 − 2u −1 n−1 ∆ n−1 u n−1 . The equation satisfied by u n−1 is and so we haveR n−1 = R n + 2λ n−1 + 1 4 |à n−1 | 2 . This proves the result for k = n − 1. Now we assume the conclusions are true for integers k and larger, and we will derive them for k − 1. We first observe thatĝ On the other hand from (2.3) applied with j = k −1 we see that u k−1 satisfies the equatioñ Substituting this above we havê so we havê Using the inductive hypothesis we get the desired formulâ (|∇ p log u q | 2 + |à q | 2 )). Now observe that This formula implies that for each k we havê and so the following eigenvalue estimate follows from (2.2) The remainder of the proof derives the eigenvalue estimate from this one. Since ϕ is arbitrary we may replace ϕ by ϕ(ρ k+1 ) 1/2 to obtain where we used the inequality 2 ≤ 4. After expanding, the term on the right becomes Rewriting the middle term in terms of ∇ k (ϕ) 2 and integrating by parts the term becomes Now recall from Lemma 2.5 that Thus we see that the terms involving ∆ k u p cancel out, and note also that so the second term also cancels. Thus we are left with This gives the desired eigenvalue estimate. This theorem will be central to the regularity proof in the next section and it also has an important geometric consequence which is the main tool in the applications of Section 5. is a Yamabe positive conformal manifold. If Σ 2 lies in a minimal 2slicing, Σ 2 is regular, and ∂Σ 2 = 0, then each connected component of Σ 2 is homeomorphic to the two sphere. 
If Σ 1 lies in a minimal 1-slicing and Σ 1 is regular, then each component of Σ 1 is an arc of length at most 2π/ √ κ. Proof. Recall that the condition that g k be Yamabe positive is that the lowest eigenvalue of the conformal Laplacian −∆ k + c(k)R k be positive where c(k) = k−2 4(k−1) . In variational form this condition says for all nonzero functions ϕ which vanish on ∂Σ k (if Σ k has a boundary). Since 4 < c(k) −1 we see that this follows from the eigenvalue estimate of Theorem 2.6. Now consider Σ 2 , and apply the eigenvalue estimate of Theorem 2.6 with ϕ = 1 to a component S of Σ 2 to see that S R 2 dµ 2 > 0. It then follows from the Gauss-Bonnet Theorem that S is homeomorphic to the two sphere (note that S is orientable). Finally, it γ is a connected component of Σ 1 of length l, then the eigenvalue estimate of Theorem 2.6 implies that the lowest Dirichlet eigenvalue of γ is at least κ/4. Thus κ/4 ≤ π 2 /l 2 and l ≤ 2π/ √ κ as claimed. Compactness and regularity of minimal k-slicings The main goal of this section is to prove Theorem 2.3. In order to do this we first must clarify some analytic issues concerning the domain of the quadratic form Q j . We let L 2 (Σ j ) denote the space of square integrable functions on Σ j with respect to the measure ρ j+1 µ j . We let denote the square norm on L 2 Σ j . We introduce some notation, defining P j to be the function defined on Σ j We will say that a minimal k-slicing in an open set Ω is partially regular if dim(S j ) ≤ j − 3 for j = k, . . . , n − 1. It follows from Proposition 2.2 that if the (k + 1)-slicing associated to a minimal k-slicing is partially regular, then dim( For functions ϕ which are Lipschitz (with respect to ambient distance) on Σ j with compact support in R j ∩Ω, we define a square norm by We let H j denote the Hilbert space which is the completion with respect to this norm. Note that functions in H j are clearly locally in W 1,2 on R j . We will assume from now on that u j ∈ H j for j ≥ k; in fact, we take this as part of the definition of a bounded minimal k-slicing. We define H j,0 to be the closed subspace of H j consisting of the completion of the Lipschitz functions with compact support in R j ∩Ω. In order to handle boundary effects we also assume that there is a larger domain Ω 1 which containsΩ as a compact subset and that the k-slicing is defined and boundaryless in Ω 1 . Note that this is automatic if ∂Σ j = φ. Thus H j,0 consists of those functions in H j with 0 boundary data on Σ j ∩ ∂Ω. The existence of eigenfunctions u j in this space will be discussed in the next section. The following estimate of the L 2 (Σ k ) norm near the singular set will be used both in this section and the next. The result may be thought of as a non-concentration result for the weighted L 2 norm near the singular set in case the H k norm is bounded. Proposition 3.1. Let S be a closed subset of Ω 1 with zero (k − 1)dimensional Hausdorff measure. Let Σ k be a member of a bounded minimal k-slicing such that Σ k+1 is partially regular in Ω 1 . For any η > 0 there exists an open set V ⊂ Ω 1 containing S ∩Ω such that whenever S k ∩Ω ⊂ V we have the following estimate Proof. Let ε > 0, δ > 0 be given. We may choose a finite covering of the compact set S ∩Ω by balls We let V denote the union of the balls, V = ∪ α B rα (x α ). Assume that S k ∩Ω ⊂ V and let ϕ ∈ H k,0 . We may extend ϕ to Σ k ∩ Ω 1 be taking ϕ = 0 in Ω 1 ∼ Ω. 
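Two inequalities carry the proof of Theorem 2.7 above, and it may be useful to have them in their standard forms. These are the usual statements (the second variation rearranged via the Gauss equation, and the variational form of Yamabe positivity) rather than quotations of the original displays, so the constants are the conventional ones.

\[
\int_\Sigma \Bigl( |\nabla\varphi|^2 - \tfrac12\bigl(R_M - R_\Sigma + |A|^2\bigr)\varphi^2 \Bigr)\, d\mu \;\ge\; 0
\]

for every compactly supported function ϕ on a stable two-sided minimal hypersurface Σ of a manifold M, which is the form in which (2.2) is used; and

\[
\int_{\Sigma_k} \bigl( |\nabla_k \varphi|^2 + c(k)\, R_k\, \varphi^2 \bigr)\, d\mu_k \;>\; 0,
\qquad c(k) = \frac{k-2}{4(k-1)},
\]

for all nonzero ϕ vanishing on ∂Σ_k, which is the variational form of Yamabe positivity. Since c(k) < 1/4, the latter follows from the eigenvalue estimate of Theorem 2.6, as asserted in the proof.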
By a standard first variation argument for submanifolds of R N , for a nonnegative function we have The above inequality then implies Now for any α and a small constant ε 0 we consider two cases: (1) There exists r with r α ≤ r ≤ δ/5 such that the inequality We denote such a choice of r by r ′ α . Secondly, we have case (2) For all r with r α ≤ r ≤ δ/5 we have The collection of α for which the first case holds will be labeled A 1 , and that for which the second holds A 2 . We will handle the two cases separately. For the collection of balls with radius r ′ α indexed by A 1 we may apply the five times covering lemma to extract a subset A ′ 1 ⊆ A 1 for which the balls in A ′ 1 are disjoint and such that From the inequality of case (1) above applied for α ∈ A ′ 2 we have . Summing over α ∈ A 1 and using disjointness of the balls we have for r α ≤ r ≤ δ/5. For j = 0, 1, 2, . . . define σ j = 5 j r α and let p be the positive integer such that Integrating we find Thus if we choose ε 0 = 5 −3k+3 we find R j+1 ≥ 5 2(k−1) and hence it follows that R j R j+1 ≥ 5 2(k−1) . Thus we have shown that for any j = 0, 1, . . . , p − 1 we either have Summing this over these α and using the choice of the covering we have Combining this with (3.1) we finally obtain since we have now fixed ε 0 . We can estimate the second term on the right using This implies the bound The desired conclusion now follows by choosing δ so that cδ = η/2 and then choosing ε so that cεδ 1−k = η. This completes the proof. The following coercivity bound will be useful both in this section and in the next. We assume here that we have a partially regular minimal k-slicing. Proposition 3.2. Assume that our k-slicing is bounded. There is a constant c such that for ϕ ∈ H k,0 we have Moreover we have the bound Proof. We can see from (2.1) that Using the stability of Σ k we have Finally we use Lemma 2.1 to conclude that (note thatà n = 0) and thus we have whereR k+1 is given in Theorem 2.6 andR k is given in Lemma 2.5. We will need an upper bound on q k , so we first see from Theorem 2.6 with k replace by k + 1 where the constant bounds the curvature of Σ n and the eigenvalues. Now from Lemma 2.5 we can obtain the bound where X k = n−1 p=k+1 ∇ k log u p . We observe that the Gauss equation implies that |R k | ≤ c(1 + |A k | 2 ), and so we have We want to bound the second term on the right by a constant times the first plus up to the square of the L 2 norm of ϕ, so we use the bound for q k to obtain Now since ϕ has compact support we have Easy estimates then imply the bound We may now absorb the first term back to the left and use (3.2) to obtain the bound To bound the term involving |∇ k log u k | 2 we recall that on the regular set we have∆ (Note that∇ k = ∇ k on functions which do not depend on the extra variables t p .) Now if ϕ has compact support in R k , we multiply by ϕ 2 , integrate by parts to obtain By the arithmetic-geometric mean inequality The first inequality then follows from this and our previous estimate. The second conclusion follows since |∇ k log ρ k+1 | 2 ≤ cP k , and so the integrand on the left Recall that an important analytic step in the minimal hypersurface regularity theory is the local reduction to the case in which the hypersurface is the boundary of a set. This makes comparisons particularly simple and reduces consideration to a multiplicity one setting. We will need an analogous reduction in our situation. 
Since the leaves of a k-slicing can be singular, we must consider the possibility that local topology comes into play and prohibits such a reduction to boundaries of sets. What saves us here is the fact that k-slicings come with a natural trivialization of the normal bundle (on the regular set). We have the following result. Proposition 3.3. Assume that U is compactly contained in Ω, and that U ∩ Σ n is diffeomorphic to a ball. Assume that we have a minimal k-slicing in Ω such that the associated (k+1)-slicing is partially regular. LetΣ k denote the closure of any connected component of Σ k ∩U ∩R k+1 . Then it follows thatΣ k divides the corresponding connected component (denotedΣ k+1 ) of Σ k+1 into a union of two relatively open subsets, and choosing the one, denoted U k+1 , for which the unit normal ofΣ k points outward, we haveΣ k = ∂U k+1 as a point set boundary inΣ k+1 , and as an oriented boundary in R k+1 . Proof. SinceΣ k ∩ R k+1 andΣ k+1 ∩ R k+1 are connected, it follows that the complement ofΣ k ∩R k+1 inΣ k+1 ∩R k+1 has either 1 or 2 connected components. These consist of the connected components of points lying nearΣ k on either side. Locally these are separate components, but they may reduce globally to a single connected component. If this were to happen, then since dim(S k+1 ) ≤ k − 2, we could find a smooth embedded closed curve γ(t) parametrized by a periodic variable t ∈ [0, 1] with γ(0) ∈Σ k ∩ R k+1 and γ(t) ∈ R k+1 ∼Σ k for t = 0. We may also assume that γ ′ (0) is transverse toΣ k . We choose local coordinates x 1 , . . . , x k forΣ k in a neighborhood V of γ(0) and we may find an where ζ is a nonnegative and nonzero function with compact support in V , is a closed form which has positive integral overΣ k . Since the image V 1 = F (V ×S 1 ) is compactly contained in R k+1 and the normal bundle ofΣ k+1 is trivial, we may choose coordinates x k+2 , . . . , x n for a normal disk, and the coordinates x 1 , . . . , x k , t, x k+2 , . . . , x n are then coordinates on a neighborhood of V 1 in Σ n . We may then extend ω to an (n − 1)-form on this neighborhood by setting where ζ 1 is a nonzero, nonnegative function with compact support in the domain of x k+1 , . . . , x n . Thus ω 1 is a closed (n − 1)-form with compact support in U ∩ Σ n which has positive integral onΣ n−1 , the connected component of Σ n−1 containing γ(0). This contradicts the condition that each connected component of Σ n−1 must divide the ball U ∩ Σ n into 2 connected components and is the oriented boundary of one of them, sayΣ n−1 = ∂U n , since Stokes theorem would imply that Σ n−1 ω 1 = Un dω 1 = 0 (note that ω 1 has compact support in U ∩ Σ n ). We will prove a boundedness theorem which will be needed in the proof of the compactness theorem. Note that we will obtain the partial regularity theorem by finite induction down from dimension n − 1, so we may assume in the following theorems that we have already established partial regularity for (k + 1)-slicings. In the following result we will consider the restriction of a k-slicing to a small ball B σ (x) where x ∈ R N . We consider the rescaled k-slicing of the unit ball given by Σ j,σ = σ −1 (Σ j − x) with u j,σ (y) = a j u j (x + σy) with a j chosen so that Σ j,σ (u j,σ ) 2 ρ j+1,σ dµ j = 1. We note that by Proposition 3.3 we may assume that each Σ j in B σ (x) is the oriented boundary of a relatively open set O j+1 ⊆ Σ j+1 . We take O j+1,σ to be the rescaled open set. 
The following result implies that the rescaled k-slicing remains Λ-bounded for a suitably chosen Λ. Finally we prove the bound by the use of stability as we did above for the case k = n − 1. We will now formulate and prove a compactness theorem for minimal k-slicings under the assumption that the associated (k + 1)-slicings for the sequence are partially regular. We will say that a Λ-bounded sequence of k-slicings (Σ j converges in C 2 norm to Σ j in U locally on the complement of the singular set (of the limit) S j , and such that for j = k, . . . , n − 1 To make precise the meaning of convergence on compact subsets for this problem involves some subtlety since changing the u p , p ≥ j + 1 by multiplication by a positive constant has no effect on the Σ j , so in order to get nontrivial limits for the u p we must normalize them appropriately. In case Σ j ∩ U has multiple components this normalization must be done on each component. If (Σ j , u j ) is a minimal k-slicing with Σ j being partially regular for j ≥ k +1, then we call a compact subdomain U of Ω admissible for (Σ j , u j ) if U is a smooth domain which meets ∂Σ j transversally and dim(∂U ∩ S j ) ≤ j − 3. It follows from the coarea formula that any smooth domain can be perturbed to be admissble. We make the following definition. Definition 3.1. We say that a sequence of k-slicings (Σ (i) j , u (i) j ) converges on compact subsets to a k-slicing (Σ j , u j ) if for any compact subdomain U of Ω which is admissible for (Σ j , u j ) and for any admissible domains U i for (Σ j+1 ∩ U i in the sense of (3.3) and (3.4) with u j appropriately normalized on each connected component. Remark 3.1. Because of the connectedness of the regular set and the Harnack inequality, we may normalize the u j to be equal to 1 at a point of x 0 ∈ R k about which we have a uniform ball on which the Σ j have bounded curvature, and this normalization suffices for the connected component of Σ k ∩ U for any compact admissible domain for (Σ j , u j ). A consequence of the compactness theorem below implies that this normalization suffices. The following compactness and regularity theorem includes Theorem 2.3 as a special case. Theorem 3.5. Assume that all bounded minimal (k + 1)-slicings are partially regular. Given a Λ-bounded sequence of k-slicings , there is a subsequence which converges to a Λ-bounded k-slicing on compact open subsets of Ω. Furthermore Σ k is partially regular. Proof. We will proceed as usual by downward induction beginning with k = n − 1. We will break the proof into two separate steps, the first establishing the first statement of (3.3) for convergence of the Σ k and the second showing the other two statements (3.4) involving convergence of the u k . For k = n − 1 the first step follows from the usual compactness theorem for volume minimizing hypersurfaces (see [Si]). To complete the proof we will need to develop some monotonicity ideas both for the Σ j and for the u j . We digress on this topic and return to the proof below. We now prove a version of the monotonicity of the frequency-type function. This idea is due to F. Almgren [A], and it gives a method to prove that solutions of variationally defined elliptic equations are approximately homogeneous on a small scale. The importance of this method for us is that it works in the presence of singularites provided certain integrals are defined. We will apply this to show that the u k become homogeneous upon rescaling at a given singular point. 
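Concretely, the frequency function about to be introduced can be written as follows. This is a reconstruction in the spirit of Almgren's definition, adapted to the weighted form Q; the precise normalization used in the original displays may differ.

\[
Q_\sigma(u) = \int_{C\cap B_\sigma(0)} \bigl( |\nabla u|^2 - q\, u^2 \bigr)\, \rho\, d\mu,
\qquad
I_\sigma(u) = \int_{C\cap \partial B_\sigma(0)} u^2\, \rho\, d\mu_{k-1},
\qquad
N(\sigma) = \frac{\sigma\, Q_\sigma(u)}{I_\sigma(u)} .
\]

With this normalization a function which is homogeneous of degree d and critical for Q has N(σ) ≡ d, consistent with the characterization of the constant case in Theorem 3.6 below.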
Assume that C is a k dimensional cone in R n which is regular except for a set S with dim(S) ≤ k − 3. Assume that Q is a quadratic form on C of the form where ρ is a homogeneous weight function on C of degree p; i.e. assume that ρ(λx) = λ p ρ(x) for x ∈ C and λ > 0. Assume also that ρ is smooth and positive on the regular set R of C and that ρ is locally L 1 on C. Assume also that q is smooth on R and is homogeneous of degree −2; i.e. assume that q(λx) = λ −2 q(x) for x ∈ C and λ > 0. Finally assume that u is a minimizer for Q in a neighorhood of 0 and in particluar that u is smooth and positive on R. Assume also that q = div(X ) +q where |X | 2 + |q| ≤ P for some positive function P and that the following integral bound holds C [|∇u| 2 + (1 + |∇log ρ| 2 + P )u 2 ]ρ dµ < ∞. Under these conditions we may define the frequency function N(σ) which is a function of a radius σ > 0 such that B σ (0) is contained in the domain of definition of u. It is defined by where Q σ (u) and I σ (u) are defined by where the last integral is taken with respect to k − 1 dimensional Hausdorff measure. We may now prove the following monotonicity result for N(σ). Theorem 3.6. Assume that u is a critical point of Q which is integrable as above. The function N(σ) is monotone increasing in σ, and for almost all σ we have where u r denotes the radial derivative of u and ·, · σ denotes the ρweighted L 2 inner product taken on C ∩ ∂B σ (0). The limit of N(σ) as σ goes to 0 exists and is finite. The function N(σ) is equal to a constant N(0) if and only if u is homogeneous of degree N(0). Proof. The argument can be done variationally and combines two distinct deformations of the function u. The first involves a radial deformation of C; precisely, let ζ(r) be a function which is nonnegative, decreasing, and has support in B σ (0). Let X denote the vector field on R n given by X = ζ(r)x where x denotes the position vector. The flow F t of X then preserves C, and we may write where we have used a change of variable and ∇ t and µ t denotes the gradient operator and volume measure with respect to F * t (g) where g is the induced metric on C from R n . Differentiating with respect to t and setting t = 0 we obtain 0 = C {( −L X g, du⊗du −X(q)u 2 )ρ+(|∇u| 2 −qu 2 )(X(ρ)+ρ div(X))} dµ where L denotes the Lie derivative. By direct calculation we have X(q) = −2ζq, X(ρ) = pζρ, div(X) = rζ ′ (r)+kζ, and L X g = 2rζ ′ (r)(dr⊗ dr)+2ζg. Substituting in this information and collecting terms we have Letting ζ approach the characteristic function of B σ (0) this implies The second ingredient we need comes from the deformation u t = (1 + tζ(r))u where ζ is as above. Sinceu = ζu this deformation implies 0 = C ( ∇u, ∇(ζu) − qζu 2 )ρ dµ. Expanding this and letting ζ approach the characteristic function of The proof will now follow by combining these. First we have Substituting in for the terms involving derivatives this implies Since the first term on the right is 0, we may write this as To see that N(σ) is bounded from below as σ goes to 0 we can observe that , and the monotonicity expresses the condition that the function logĪ σ (u) is a convex function of t = log σ. Since this function is defined for all t ≤ 0, and by the coarea formula for any σ 1 > 0, there is a σ ∈ [σ 1 , 2σ 1 ] so that I σ (u) ≤ cσ −1 it follows that there is a sequence t i = log σ i tending to −∞ such thatĪ σ i (u) ≤ cσ −K i for some K > 0. Thus we have the function logĪ σ i (u) ≤ −ct i . 
It follows that the slope (that is N(σ)) of the convex function logĪ σ (u) is bounded from below as t tends to −∞. Now if N(σ) = N(0) is constant, we must have equality in the Schwartz inequality for each σ, and hence we would have u r = f (r)u for some function f (r). Now this implies that Q σ = f (σ)I σ and hence we have rf (r) = N(0). Therefore it follows that f (r) = r −1 N(0), and ru r = N(0)u so u is homogeneous of degree N(0) by Euler's formula. We will need to extend the usual monotonicity formula for the volume of minimal submanifolds to the setting in which the submanifold under consideration minimizes a weighted volume with a homogeneous weight function within a partially regular cone. Precisely, let C be a k + 1 dimensional cone in R n with a singular set S of Hausdorff dimension at most k − 2. Let ρ be a positive weight function which is homogeneous of degree p; i.e. we have ρ(λx) = λ p ρ(x) for x ∈ C and λ > 0. Assume that ρ is smooth and positive on the regular set of C, and that ρ is locally integrable with respect to Hausdorff measure on C. Theorem 3.7. Let Σ be a hypersurface in a k + 1 dimensional cone C which minimizes the weighted volume V ρ for a homogeneous weight function ρ. We then have the monotonicity formula d dσ where x ⊥ denotes the component of the position vector x perpendicular to Σ. Proof. We take a function ζ(r) which is decreasing, nonnegative, and equal to 0 for r > σ, and we consider the vector field X = ζx where x denotes the position vector. The first variation formula for the ρweighted volume then implies Since ρ is homogeneous we have X(ρ) = pζρ, and by direct calculation div Σ (X) = kζ + r −1 ζ ′ |x T | 2 where x T denotes the component of x tangential to Σ. Thus we have Taking ζ to approximate the characteristic function of B σ (0) we may write this where x ⊥ is the component of x normal to Σ in C. Note that r 2 = |x T | 2 + |x ⊥ | 2 because C is a cone and so x is tangential to C. This may be rewritten as the desired monotonicity formula and completes the proof. We now show that there can be no tangent minimal 2-slicing with C 2 having an isolated singularity at {0}. Theorem 3.8. If C 2 is a cone lying in a tangent minimal 2-slicing such that C 2 ∼ {0} ⊆ R 2 , then C 2 is a plane and R 2 = C 2 . Proof. From the eigenvalue estimate of Theorem 2.6 we have for test functions ϕ with compact support in C 2 ∼ {0}. Since C 2 is a two dimensional cone we have R 2 = 0 away from the origin, and hence we have Letting r denote the distance to the origin, we take ε and R so that 0 < ε << R and choose ϕ to be a function of r which is equal to 0 for r ≤ ε 2 , equal to 1 for ε ≤ r ≤ R, and equal to 0 for r ≥ R 2 . In the range ε 2 ≤ r ≤ ε we choose ϕ(r) = log(ε −2 r) log(ε −1 ) and for R ≤ r ≤ R 2 ϕ(r) = log(R 2 r −1 ) log R . Thus for ε 2 ≤ r ≤ ε we have |∇ 2 ϕ| 2 = (r|log ε|) −2 and for R ≤ r ≤ R 2 we have |∇ 2 ϕ| 2 = (r log R) −2 . It thus follows that Thus we may let ε tend to 0 and R tend to ∞ to conclude that the functions u 3 , . . . , u n are constant on C 2 . This implies that C 2 has zero mean curvature and hence is a plane. If all of the cones C 3 , . . . C n−1 are regular near the origin, then it follows that 0 ∈ R 2 , and we have completed the proof. Otherwise there is a C m for m ≥ 3 which denotes the largest dimensional cone in the minimal 2-slicing for which the origin is a singular point. 
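Two of the computations above can be made explicit. Both displays are reconstructions obtained by following the steps described in the proofs (the first by inserting the vector field X = ζ(r)x into the first variation of the weighted volume, the second by integrating the stated cutoff over a two dimensional cone whose link has total length L, a symbol introduced here only for illustration); they are not quotations of the original formulas. In the setting of Theorem 3.7, with Σ of dimension k minimizing V_ρ inside the (k+1)-dimensional cone C and ρ homogeneous of degree p,

\[
\frac{d}{d\sigma}\Bigl( \sigma^{-(k+p)}\, V_\rho\bigl(\Sigma\cap B_\sigma(0)\bigr) \Bigr)
= \frac{d}{d\sigma} \int_{\Sigma\cap B_\sigma(0)} \frac{|x^{\perp}|^2}{r^{\,k+p+2}}\, \rho\, d\mu ,
\]

which is nonnegative, so the weighted density ratio is monotone. In the proof of Theorem 3.8, since |C_2 ∩ ∂B_r| = L r for a two dimensional cone,

\[
\int_{C_2} |\nabla_2 \varphi|^2\, d\mu_2
\;\le\; \int_{\varepsilon^2}^{\varepsilon} \frac{L\, r\, dr}{(r\,|\log\varepsilon|)^2}
+ \int_{R}^{R^2} \frac{L\, r\, dr}{(r\,\log R)^2}
= \frac{L}{|\log\varepsilon|} + \frac{L}{\log R} ,
\]

which tends to 0 as ε → 0 and R → ∞; this is what forces u_3, . . . , u_n to be constant on C_2.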
It follows that C m is a volume minimizing cone in R m+1 = C m+1 , and hence u m must be homogeneous of a negative degree (see Lemma 3.10 below) contradicting the fact that u m is constant along C 2 . This completes the proof. Completion of proof of Theorem 3.5: We first prove the compactness of the Σ k in the sense of (3.3) under the assumption that we have the partial regularity of bounded minimal (k +1)-slicings and the compactness (both (3.3) and (3.4)) for j ≥ k + 1. We need the following lemma. Lemma 3.9. Assume that both the compactness and partial regularity hold for (k + 1)-slicings. Given any x ∈ S k+1 , there are constants c and r 0 (depending on x and Σ k+1 ) so that for r ∈ (0, r 0 ] we have Proof. Since the left hand side of the inequality is continuous under convergence and the right hand side is lower semicontinuous (Fatou's theorem) it is enough to establish the inequality for r = 1 on a cone C k+1 . This we can do by a compactness argument since we can normalize and if we had a sequence of singular cones for which the right hand side tends to zero we would have a limiting cone C k+1 on which P k+1 = 0. It follows that u k+2 , . . . , u n−1 are constant on C k+1 . Note that the highest dimensional singular cone in the slicing C n 0 is minimal and hence u n 0 is homogeneous of a negative degree (see Lemma 3.10 below). Therefore if n 0 > k + 1 we have a contradiction. Therefore we conclude that C k+1 is minimal and C k+2 , . . . , C n−1 are planes. Thus it follows that A k+1 = A k+1 = 0 and hence C k+1 is also a plane. Thus the cones are regular sufficiently far out in the sequence; a contradiction. The second inequality follows easily by reduction to cones. This proves the bounds. Given a sequence (Σ j ) of Λ-bounded minimal k-slicings, we may apply the inductive assumption to obtain a subsequence (with the same notation) for which the corresponding sequence of (k + 1)-slicings converges in the sense of (3.3) and (3.4). By standard compactness theorems we may assume that Σ (i) k converges on compact subsets of Ω ∼ S k+1 to a limiting submanifold Σ k which minimizes V ol ρ k (and is therefore regular outside a closed set of dimension at most k − 7). To establish (3.3) we choose a neighborhood U of S k+1 such that We apply Lemma 3.9 and compactness to find a finite collection of points x α ∈ S k+1 and balls B rα (x α ) ⊂ U so that Now apply the Besicovitch covering lemma to extract a finite number of disjoint collections B α , α = 1, . . . , K of such balls whose union covers S k+1 . If V denotes the union of these balls, then V is a neighborhood of S k+1 , and hence for i sufficiently large we have S (i) k+1 ⊂ V . Because of convergence of the left sides and lower semicontinuity of the right side, we have for i sufficiently large By the coarea formula, for each such ball B r 0 (x) we may find s ∈ [r 0 , 2r 0 ] (s depending on i) so that Using the minimizing property of Σ (i) k and simple inequalities we find for any ε 1 > 0. Applying the inequalities above and summing over the balls (using disjointness and a bound on K) we find For i sufficiently large this implies so that we may fix ε 1 sufficiently small and then choose ε as small as we wish to make the right hand side smaller than any preassigned amount. Since we have Now assume that we have established the partial regularity of all bounded minimal (k + 1)-slicings and that we have proven the compactness for the Σ k in the sense of (3.3). 
We can then use the results we have obtained above together with dimension reduction to prove partial regularity for Σ k . Precisely, we have dim(S k ) ≤ k − 2, and if dim(S k ) > k − 3, then we can choose a number d with and go to a point x ∈ S k of density for the measure H d ∞ (since H d ∞ (S k ) > 0). Taking successive tangent cones in the standard way and using the upper-semicontinuity of H d ∞ (S k ) we would eventually produce a minimal 2-slicing by cones such that C 2 × R k−2 has singular set with Hausdorff dimension at most k − 2 (by partial regularity of (k + 1)slicings) and greater than k − 3. Therefore the cone C 2 must have an isolated singularity at the origin. This in turn contradicts Theorem 3.8. Therefore it follows that dim(S k ) ≤ k − 3 and Σ k is partially regular. The final step of the proof is to show that the compactness statement holds for the u k under the assumption that it holds for (Σ j , u j ) for j ≥ k + 1 and also for Σ k (as established above). Assume that we have a sequence of minimal k-slicings such that the associated (k+1)-slicings and Σ (i) k converge on compact subsets in the sense of (3.3) and (3.4). We choose a compact domain U which is admissble for (Σ j , U j ) and a nested sequence of domains U i admsisible for (Σ . We work with a connected component of Σ k ∩ U which by abuse of notation we call by the same name Σ k . We may assume that the u (i) k converge uniformly to u k on compact subsets of Ω ∼ S k (where we can write Σ (i) k locally as a normal graph over Σ k and compare corresponding values of u (i) k to u k ). In particular, if W is a compact subdomain of Ω∩R k we have convergence of weighted L 2 norms of u (i) k to the corresponding L 2 norm of u k on W . If U is any compact subdomain of Ω and η > 0, then by Proposition 3.1 applied with S = S k we can find an open neighborhood V of S ∩Ū so that for The same inequality holds for the limit, and by the boundedness of the sequence the integral on the right is uniformly bounded. Thus by choosing η small enough we can make the right hand side less than any prescribed ε > 0. On the other hand if we take W = U \V we then have convergence of the weighted L 2 norms on W , so we can make the difference as small as we wish on W . It follows that the difference of L 2 norms can be made arbitrarily small on U. This completes the proof that the weighted L 2 integrals converge. Completing the proof will require the construction of a proper locally Lipschitz function Ψ k on R k such that u k |∇ k Ψ k | is bounded in L 2 (Σ k ). We give the construction of such a function in Proposition 3.11 below. It also follows that we may construct a subsequence so that Ψ (i) k are uniformly close to Ψ k on compact subsets of R N ∼ S k for i large. We can now prove the second part of the convergence (3.4). Assume that U ⊂ U 1 ⊂ Ω are compact domains. We let ε > 0 we may choose a neighborhood V of S k so small that V ∩Ū 1 u 2 k ρ k+1 dµ k < ε. Because Ψ k is proper on R k , we may choose Λ sufficiently large that E k (Λ) ⊂ V where E k (Λ) is the subset of Σ k on which Ψ k > Λ. We now let γ(t) be a nondecreasing Lipschitz function such that γ(t) = 0 for t < Λ, γ(t) = 1 for t > Λ, and γ ′ (t) ≤ Λ −1 . We let ϕ be a spatial cutoff function which is 1 on U, 0 outside U 1 , and has bounded gradient. 
We then have the inequality by Proposition 3.2 Since we have convergence of the L 2 norms of u (i) k and boundedness of the L 2 norms of u If we let V 1 be a neighborhood of S k such that Σ k ∩ V 1 ⊂ E k (3Λ), then for i sufficiently large we will have Σ (i) Since this can be made arbitrarily small, we have shown (3.4) and completed the proof of Theorem 3.5. We will need the following lemma concerning minimal cones C m ⊂ R m+1 . Lemma 3.10. Assume that C m is a volume minimizing cone in R m+1 and that u m is a positive minimizer for Q j which is homogeneous of degree d on C. There is a positive constant c depending only on m so that d ≤ −c. Since v and |∇v| are in L 2 (Σ) we must have µ < 0 and this implies that d < 0. To prove the negative upper bound on d recall that the set of volume minimizing cones is a compact set, and we have proven the compactness theorem above for the L 2 norms, so if we had a sequence (C (i) m , u (i) m ) such that d (i) tends to 0 we could extract a convergent subsequence of the (Σ (i) , v (i) ) which converges to (Σ, v) where we could normalize Σ (i) (v (i) ) 2 dµ m−1 = 1 (hence Σ v 2 dµ m−1 = 1). Since we have smooth convergence on compact subsets of the complement of the singular set of Σ we would then have ∆v + 5/8|A m | 2 v = 0 and therefore we would have µ = 0 for the limiting cone, a contradiction. As the final topic of this section we construct the proper functions which were used in the proof of Theorem 3.5. This result will also be used in the next section. Proposition 3.11. Suppose we have a Λ-bounded minimal k-slicing in Ω. There exists a positive function Ψ k which is locally Lipschitz on R k and such that for any domain U compactly contained in Ω, the function Ψ k is proper on R k ∩Ū. Moreover, the function u k |∇ k Ψ k | is bounded in L 2 (Σ k ∩ U) for any domain U compactly contained in Ω. Proof. We define Ψ k = max{1, log u k , log u k+1 , . . . , log u n−1 } and we show that it has the properties claimed. First note that Ψ k is locally Lipschitz on R k since it is the maximum of a finite number of smooth functions on R k . The bound together with Proposition 3.2 implies the L 2 (Σ k ) bound claimed on Ψ k . (Note that we may replace ϕ by ϕu k in the first inequality of Proposition 3.2 where ϕ is a cutoff function which is equal to 1 on U.) It remains to prove that Ψ k is proper on R k ∩Ū . SinceŪ is compact it suffices to show that for any x 0 ∈ S k ∩Ū we have If we let m ≥ k be the largest integer such that Σ m is singular at x 0 , then there is an open neighborhood V of x 0 in which Σ m is a volume minimizing hypersurface in a smooth Riemannian manifold. We will show that u m tends to infinity at x 0 by first showing that this is true for any homogeneous approximation of u m at x 0 . In order to construct homogeneous approximations we need to have the compactness theorem for this top dimensional case, but our proof of compactness used the result we are trying to prove, so we must find another argument for establishing (3.4) since (3.3) is a standard result for volume minimizing hypersurfaces in smooth manifolds. Our proof of the first part of (3.4) did not require the function Ψ k , so we need only deal with the second part. First recall that dim(S m ) ≤ m − 7, so it follows from a standard result that given any ε, δ > 0 and a ∈ (0, 7) we can find a Lipschitz function ψ so that ψ = 1 in a neighborhood of S m , ψ(x) = 0 for points x with dist(x, S m ) ≥ δ, and Σm∩V |∇ m ψ| a dµ m < ε a . We show that Σm∩V |∇ m ψ| 2 u 2 m dµ m ≤ cε 2 . 
If we can establish this inequality, then we can complete the proof of compactness for k = m in the set V as in the proof of Theorem 3.5. To establish the inequality, we observe that the equation satisfied by u m is of the form ∆ m u m + 5/8|A m | 2 u m + qu m = 0 where q is a bounded function (since Σ m is volume minimizing in a smooth manifold). On the other hand the stability implies that Σm |A m | 2 ϕ 2 dµ m ≤ Σm (|∇ϕ| 2 + cϕ 2 ) dµ m . We may then replace ϕ by u 8/5 m ϕ and use the equation for u m to obtain We may then apply the Sobolev inequality for minimal submanifolds to conclude that u m satisfies We then apply the Hölder inequality to obtain . Setting a = 16m 3m+10 < 7 we have from above Σm∩V |∇ m ψ| 2 u 2 m dµ m ≤ cε 2 as desired. Thus we have the compactness theorem for (Σ m , u m ) in V and we can construct tangent cones to Σ m at x 0 and homogeneous approximations to u m at x 0 . By Lemma 3.10 any such homogeneous approximation v m has strictly negative degree d ≤ −c on its cone C m of definition. If we let R m (C) denote the regular set of C, then it follows that for any µ > 1, we have for a fixed constant α ∈ (0, 1) depending on µ, but independent of which cone and which homogeneous approximation we choose. Note that ∆ m u m ≤ cu m and ∆ m v m ≤ 0, so by the mean value inequality on volume minimizing hypersurfaces (see [BG]) we have for any r so that B r (x 0 ) is compactly contained in V . It follows that the essential infima of both u m and v m are positive on any compact subset. We now show that there exists α ∈ (0, 1) such that for σ sufficiently small. If we establish this, we have finished the proof that u m tends to infinity at x 0 and hence we will have the desired properness conclusion for Ψ k . To establish this inequality we observe that if (Σ (i) m , u (i) m ) is a sequence converging to (Σ m , u m ) in the sense of (3.3) and (3.4) and K is a compact set such that for a fixed constant c. The first and second inequalities are obvious, and to get the third we observe that for a small radius r and any x ∈ R m ∩ K we have from above and hence for i sufficiently large for a positive constant ε 0 . This establishes the third inequality. The proof can now be completed by using rescalings at x 0 which converge to (C m , v m ) for some cone and homogeneous function together with the corresponding result for the homogeneous case. Existence of minimal k-slicings The main purpose of this section is to prove Theorem 2.4. We begin with the construction of the eigenfunction u k assuming the Σ k has already been constructed and is partially regular in the sense that dim(S k ) ≤ k − 3. We define the Hilbert spaces H k and H k,0 as in the last section, namely, H k (respectively H k,0 ) is the completion in · 0,1 of the Lipschitz functions with compact support in R k ∩Ω (respectively R k ∩ Ω). In order to handle boundary effects we also assume that there is a larger domain Ω 1 which containsΩ as a compact subset and that the k-slicing is defined and boundaryless in Ω 1 . Note that this is automatic if ∂Σ j = φ. Thus H k,0 consists of those functions in H k with 0 boundary data on Σ k ∩ Ω. The quadratic form Q k is nonnegative definite on the Lipschitz functions with compact support in R k ∩ Ω, and so the standard Schwartz inequality holds for any pair of such functions ϕ, ψ Q k (ϕ, ψ) ≤ Q k (ϕ, ϕ) Q k (ψ, ψ). (4.1) We now have the following result. Theorem 4.1. 
The function Q k (ϕ, ψ) is continuous with respect to the norm · 0,1 in both variables and therefore extends as a continuous nonnegative definite bilinear form on H k,0 . The Schwartz inequality (4.1) holds for ϕ, ψ ∈ H k,0 . The function Q k (ϕ, ϕ) is strongly continuous and weakly lower semicontinuous on H k,0 . Proof. From Proposition 3.2 we have for ϕ 1 , ϕ 2 Lipschitz functions with compact support in R k ∩ Ω Q k (ϕ 1 − ϕ 2 , ϕ 1 − ϕ 2 ) ≤ c ϕ 1 − ϕ 2 2 1,k , so it follows from (4.1) that Combining these we see that Q k is continuous in the first slot, and since it is symmetric in both slots. Therefore Q k extends as a continuous nonnegative definite bilinear form on H k,0 and the Schwartz inequality holds on H k,0 by continuity. To complete the proof we must prove that Q k (ϕ, ϕ) is weakly lower semicontinuous on H k,0 . Note that the square norm ϕ 2 0,k + Q k (ϕ, ϕ) is equivalent to ϕ 2 1,k by Proposition 3.2. Therefore these have the same bounded linear functionals and hence determine the same weak topology on H k,0 . Assume we have a sequence ϕ ∈ H k,0 which converges weakly to ϕ ∈ H k,0 . We then have for any ψ ∈ H k,0 This implies that for i sufficiently large for any chosen ε > 0. It follows that which implies the desired weak lower semicontinuity. In order to construct a lowest eigenfunction u k we will need the following Rellich-type compactness theorem. Theorem 4.2. The inclusion of H k,0 into L 2 (Σ k ) is compact in the sense that any bounded sequence in H k,0 has a convergent subsequence in L 2 (Σ k ). Proof. This statement follows from Proposition 3.1 and the standard Rellich theorem. Assume that we have a bounded sequence ϕ i ∈ H k,0 ; that is, ϕ i 2 1,k ≤ c. We may extend the ϕ i to Ω 1 be taking ϕ i = 0 in Ω 1 ∼ Ω, and by the standard Rellich compactness theorem we may assume by extracting a subsequence that the ϕ i converge in L 2 norm on compact subsets ofΩ ∼ S k and weakly in H k,0 to a limit ϕ ∈ H k,0 . We show that ϕ i converges to ϕ in L 2 (Σ k ). Given any ε 1 > 0, we can choose ε > 0, δ > 0 in Proposition 3.1 so that for each i we have where V is an open neighborhood of S k ∩Ω. The Fatou theorem then implies ( Combining these bounds we find for i sufficiently large. This completes the proof. We are now ready to prove the existence, positivity, and uniqueness of u k on Σ k ∩ Ω. Theorem 4.3. The quadratic form Q k on H k,0 has discrete spectrum with respect to the L 2 (Σ k ) inner product and may be diagonalized in an orthonormal basis for L 2 (Σ k ). The eigenfunctions are smooth on R k ∩ Ω, and if we choose a first eigenfunction u k , then u k is nonzero on R k ∩ Ω and is therefore either strictly positive or strictly negative since R k ∩ Ω is connected. Furthermore any first eigenfunction is a multiple of u k which we may take to be positive. Proof. This follows from the standard minmax variational procedure for defining eigenvalues and constructing eigenfunctions. For example, to construct the lowest eigenvalue and eigenfunction we let By Theorem 4.2 and Theorem 4.1 we may achieve this infimum with a function u k ∈ H k,0 with u k 0,k = 1. The Euler-Lagrange equation for u k is then the eigenfunction equation with eigenvalue λ k . The higher eigenvalues and eigenfunctions can be constructed by imposing orthogonality constraints with respect the L 2 (Σ k ) inner product. We omit the standard details. The smoothness on R k ∩ Ω follows from elliptic regularity theory. 
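For clarity, two displays from this part of the argument, written in their standard form (the radicals and the normalization below are the conventional ones, consistent with how the statements are used in the surrounding text):

\[
Q_k(\varphi,\psi) \;\le\; \sqrt{Q_k(\varphi,\varphi)}\; \sqrt{Q_k(\psi,\psi)} ,
\]

the Schwartz inequality (4.1) for the nonnegative form Q_k, and

\[
\lambda_k \;=\; \inf\bigl\{\, Q_k(\varphi,\varphi) \;:\; \varphi\in H_{k,0},\ \|\varphi\|_{0,k} = 1 \,\bigr\} ,
\]

the variational characterization of the lowest eigenvalue. A minimizer u_k then satisfies Q_k(u_k, ψ) = λ_k ⟨u_k, ψ⟩_{L^2(Σ_k)} for all ψ ∈ H_{k,0}, which is the weak form of the eigenfunction equation referred to in the proof of Theorem 4.3.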
The fact that a lowest eigenfunction u is nonzero follows from the fact that if u ∈ H k,0 then |u| ∈ H k,0 and Q k (u, u) = Q k (|u|, |u|) a property which can be easily checked on the dense subspace of Lipschitz functions with compact support in R k ∩ Ω and then follows by continuity. The multiplicity one property of the lowest eigenspace follows from this property in the usual way. We omit the details. We now come to the existence results. We first discuss Theorem 2.4 and we then generalize the existence proof to a more precise form. Suppose X is a closed k-dimensional oriented manifold with k < n. We assume that Σ n is a closed oriented n-manifold and that there is a smooth map F : Σ n → X × T n−k of degree s = 0. We let Ω denote a (unit volume) volume form of X and let Θ = F * Ω so that Θ is a closed k-form on Σ n . We let t p for p = k + 1, . . . , n denote the coordinates on the circles and we assume they are periodic with period 1. For p = k + 1, . . . , n we let ω p be the closed 1-form ω p = F * (dt p ). The assumption on the degree of F implies that Σn Θ ∧ ω k+1 ∧ . . . ∧ ω n = s. We will need the following elementary lemma. Lemma 4.4. Suppose N m is a closed oriented Riemannian manifold and let Ω be its volume form. Given any open set U of N which is not dense in N, the form Ω is exact on U. Moreover, given an open set V compactly contained in U, we can find a closed m-form Ω 1 which agrees with Ω on M \ U and such that Ω 1 = 0 in V . Proof. Let f be a smooth function which is equal to 1 in U and such that N f dΩ = 0. Let u be a solution of ∆u = f and let θ be the (m − 1)-form θ = * du. We then have dθ = d * du = (∆u)Ω, so we have dθ = Ω on U. To prove the last statement, we let ζ be a smooth cutoff function which is equal to 1 in V and has compact support in U. We then define Ω 1 = Ω − d(ζ * du). We then have Ω 1 = 0 in V and Ω 1 differs from Ω by an exact form. We now restate the existence theorem. Proof. We begin with the 1-form ω n and we integrate to get a map u n : Σ n → S 1 so that ω n = du n . Let t be a regular value of u n and consider the hypersurface S n = u −1 n (t). Because the map F has degree s and we have normalized our forms in X × T n−k to have integral 1, we see that Sn Θ ∧ ω k+1 ∧ . . . ∧ ω n−1 = s. Let Σ n−1 be a least volume cycle in Σ n with the property that Σn Θ ∧ ω k+1 ∧ . . . ∧ ω n−1 = s. The existence follows from standard results of geometric measure theory. Now suppose for j ≥ k we have constructed a partially regular minimal j + 1 slicing with the property that there is a form Θ j+1 of compact support which is cohomologous to Θ ∧ ω k+1 ∧ . . . ∧ ω j+1 such that Σ j+1 Θ j+1 = s. Since the slicing is partially regular, we have that the Hausdorff dimension of S j+1 is at most j−2, so it follows that the image F j (S j+1 ) under the projection map F j : Σ n → X × T j−k is a compact set of Hausdorff dimension at most j − 2. It follows from Lemma 4.4 that the form Ω ∧ dt k+1 ∧ . . . ∧ dt j is exact in a neighborhood U of F j (S j+1 ), given a neighborhood V of F j (S j+1 ) which is compact in U we can find a form Ω j which is cohomologous to Ω ∧ dt k+1 ∧ . . . ∧ dt j and vanishes in V . Pulling back we see that Θ j = F * Ω j vanishes in a neighborhood of S j+1 and is cohomologous to Θ∧ω k+1 ∧. . .∧ω j . We let u j+1 be the map gotten by integrating ω j+1 and consider its restriction to Σ j+1 . Since u j+1 is in L 2 with respect to the weight ρ j+2 , we see that ρ j+1 = u j+1 ρ j+2 is integrable on Σ j+1 . 
It then follows from the coarea formula that we can find a regular value t of u j+1 in R j+1 so that the hypersurface S j ⊂ Σ j+1 given by S j = u −1 j+1 (t) has finite ρ j+1 -weighted volume and satisfies S j Θ j = s. We can then solve the minimization problem for the ρ j+1 -weighted volume among integer multiplicity rectifiable currents T with support in Σ j+1 , with no boundary in R j+1 , and with T (Θ j ) = s. A minimizer for this problem gives us Σ j and completes the inductive step for the existence. Remark 4.1. The existence proof above does not specify the homology class of the minimizers even if the minimizers are smooth since we are minimizing among cycles for which the integral of Θ j is fixed. In general there may be homology classes for which the integral of Θ j vanishes. We have chosen the class to do the minimization in order to avoid a precise discussion of the homology of the singular spaces in which we are working. In the following we give a more precise existence theorem which specifies the homology classes and allows them to be general integral homology classes, possibly torsion classes. We now formulate and prove a more general existence theorem for minimal k slicings. In the theorem we let [Σ n ] denote the fundamental homology class in H n (Σ n , Z) and, for a cohomology class α ∈ H p (Σ n , Z), we let α ∩ [Σ n ] denote its Poincaré dual in H n−p (M, Z). Proof. Assume that we are given a partially regular Λ-bounded minimal (k + 1)-slicing which represents α 1 , . . . , α n−k−1 . We thus have the weight function ρ k+1 defined on Σ k+1 which we use to produce Σ k . From the partial regularity the singular set S k+1 of Σ k+1 has Hausdorff dimension at most k − 2. We consider the class of integer multiplicity rectifiable currents which are relative cycles in H k (Σ n , S k+1 , Z); that is, for any k − 1 form θ of compact support in Σ k+1 \ S k+1 we have T (dθ) = 0. Because the set S k+1 has zero k−1 dimensional Hausdorff measure we have H k (Σ n , Z) = H k (Σ n , S k+1 , Z). This follows because a current which is a relative cycle T in Σ n \S k+1 is also a cycle in Σ n since ∂T is zero since it is unchanged by adding a set of k − 1 measure zero. We use ρ k+1 weighted volume to set up a minimization problem. We consider the class of relative cycles T with support contained in Σ k+1 which have finite weighted mass; that is, T = (S k , Θ, ξ) where S k is a countably k-rectifiable set, Θ a µ k -measurable integer valued function on S k , and ξ a µ k -measurable map from S k to ∧ k R N such that ξ(x) is a unit simple vector for µ k a.e. x ∈ S k . Such a k-current T k is ρ k+1 -finite if Since we have already constructed Σ k+1 so that it is Λ-bounded we have Σ k+1 ρ k+1 dµ k+1 ≤ Λ. Now we can find a smooth closed hypersurface H k which is Poincaré dual to α k , and we may perturb it and use the coarea formula in a standard way to arrange thatΣ k ≡ Σ k+1 ∩ H k is a smooth embedded submanifold away from S k+1 and Σ k ρ k+1 dµ k ≤ c. In particular the associated currentT k ≡ (Σ k , 1,ξ) (whereξ is the oriented unit tangent plane ofΣ k ) is ρ k+1 -finite and is a competitor in our variational problem. The standard theory of integral currents now allows us to construct a minimizer for our variational problem which gives us the next slice Σ k which could be disconnected and with integer multiplicity. Thus Σ k represents the homology class α n−k ∩ . . . ∩ α 1 ∩ [Σ n ]. This completes the proof of Theorem 4.6. 
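In the variational problem just described, the weighted mass of a current T = (S_k, Θ, ξ) is presumably the natural one, written here with a symbol M_{ρ_{k+1}} introduced only for convenience; this is a plausible reading of the phrase "finite weighted mass" above rather than the original display:

\[
\mathbf{M}_{\rho_{k+1}}(T) \;=\; \int_{S_k} |\Theta|\, \rho_{k+1}\, d\mu_k ,
\]

and T is ρ_{k+1}-finite when this quantity is finite.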
Application to scalar curvature problems

In this section we prove two theorems for manifolds with positive scalar curvature. The first of these is for compact manifolds and the second is the Positive Mass Theorem for asymptotically flat manifolds. Our first theorem, which we will need to prove the Positive Mass Theorem, is the following.

Theorem 5.1. Let M_1 be any closed oriented n-manifold. The manifold M = M_1 # T^n does not have a metric of positive scalar curvature.

Proof. Such a manifold M admits a map F : M → T^n of degree 1, and so by Theorem 2.4 there exists a closed minimal 1-slicing of M, in contradiction to Theorem 2.7.

We also prove the following more general theorem (Theorem 5.2).

Proof. By the existence and regularity results of Sections 3 and 4, there is a minimal 2-slicing so that Σ_2 ∈ σ_2 is regular and satisfies the eigenvalue bound of Theorem 2.6. Choosing ϕ = 1 on any given component of Σ_2 and applying the Gauss-Bonnet theorem we see that each component must be topologically S^2. In particular it follows that for any other α_{n-1} ∈ H^1(M, Z) we have that α_{n-1} ∩ σ_2 is a class in H_1(Σ_2, Z), and therefore is zero.

We now prove a Riemannian version of the positive mass theorem. Assume that M is a complete manifold with the property that there is a compact subset K ⊂ M such that M ∼ K is a union of a finite number of connected components each of which is an asymptotically flat end. This means that each of the components is diffeomorphic to the exterior of a compact set in R^n and admits asymptotically flat coordinates x_1, . . . , x_n in which the metric g_{ij} satisfies

g_{ij} = δ_{ij} + O(|x|^{-p}), |x| |∂g_{ij}| + |x|^2 |∂^2 g_{ij}| = O(|x|^{-p}), |R| = O(|x|^{-q})   (5.1)

where p > (n - 2)/2 and q > n. Under these assumptions the ADM mass is well defined by the formula (see [Sc] for the n dimensional case)

m = (1/(4(n-1) ω_{n-1})) lim_{σ→∞} ∫_{S_σ} Σ_{i,j} (g_{ij,i} - g_{ii,j}) ν_j dξ(σ)

where S_σ is the euclidean sphere in the x coordinates, ω_{n-1} = Vol(S^{n-1}(1)), and the unit normal and volume integral are with respect to the euclidean metric. We may now state the Positive Mass Theorem.

Theorem 5.3. Assume that M is an asymptotically flat manifold with R ≥ 0. For each end it is true that the ADM mass is nonnegative. Furthermore, if any of the masses is zero, then M is isometric to R^n.

Proof. The theorem can be reduced to the case when there is a single end by capping off the other ends keeping the scalar curvature nonnegative. We will show only that m ≥ 0, and the equality statement can be derived from this (see [SY2]). We will reduce the proof to the compact case using results of [SY3] and an observation of J. Lohkamp.

Proposition 5.4. If the mass of M is negative, there is a metric of nonnegative scalar curvature on M which is euclidean outside a compact set. This produces a metric of positive scalar curvature on a manifold M̂ which is gotten by replacing a ball in T^n by the interior of a large ball in M.

Proof. Results of [SY3] and [Sc] imply that if m < 0 we can construct a new metric on M with nonnegative scalar curvature, negative mass, and which is conformally flat and scalar flat near infinity. In particular, we have g = u^{4/(n-2)} δ near infinity where u is a euclidean harmonic function which is asymptotic to 1. Thus u has the expansion u(x) = 1 + m |x|^{2-n} + O(|x|^{1-n}) where m is the mass. Now we use an observation of Lohkamp [?]. Since m < 0, we can choose 0 < ε_2 < ε_1 and σ sufficiently large so that we have u(x) < 1 - ε_1 for |x| = σ and u(x) > 1 - ε_2 for |x| ≥ 2σ.
If we define v(x) = u(x) for |x| ≤ σ and v(x) = min{1 - ε_2, u(x)} for |x| > σ, then we see that v(x) is weakly superharmonic for |x| ≥ σ, so it may be approximated by a smooth superharmonic function with v(x) = u(x) for |x| ≤ σ and v(x) = 1 - ε_2 for |x| sufficiently large. The metric which agrees with the original inside S_σ and is given by v^{4/(n-2)} δ outside then has nonnegative scalar curvature and is euclidean near infinity. By extending this metric periodically we then produce a metric on M̂ with nonnegative scalar curvature which is not Ricci flat. Therefore the metric can be perturbed to have positive scalar curvature. Using this result the theorem follows from Theorem 5.2 since the standard 1-forms on T^n can be pulled back to M̂ to produce the α_1, . . . , α_{n-1} of that theorem. This completes the proof of Theorem 5.3.
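The last step, in which superharmonicity of v yields nonnegative scalar curvature, rests on the standard conformal formula for a metric of the form u^{4/(n-2)} δ with δ the euclidean metric. It is recorded here for convenience; it is a standard identity rather than a formula quoted from the original.

\[
R\bigl( u^{4/(n-2)}\, \delta \bigr) \;=\; -\,\frac{4(n-1)}{n-2}\; u^{-\frac{n+2}{n-2}}\, \Delta u ,
\]

so Δv ≤ 0 gives nonnegative scalar curvature, and v ≡ 1 - ε_2 near infinity makes the metric there a constant multiple of the euclidean metric, hence euclidean after a dilation of the coordinates.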
2017-04-18T18:33:21.000Z
2017-04-18T00:00:00.000
{ "year": 2019, "sha1": "3c4c1d3274453282f3a0aa1e258ca14a0a041f28", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1704.05490", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "3c4c1d3274453282f3a0aa1e258ca14a0a041f28", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
271011663
pes2o/s2orc
v3-fos-license
Analysis of high‑risk factors for brain metastasis and prognosis after prophylactic cranial irradiation in limited‑stage small cell lung cancer Small cell lung cancer (SCLC) is an aggressive malignancy with a high propensity for brain metastases (BM). Limited-stage SCLC (LS-SCLC) can be effectively treated with chemoradiotherapy and prophylactic cranial irradiation (PCI) to enhance patient outcomes. The aim of the present study was to assess the risk factors and prognostic significance of brain metastases (BM) in patients with limited-stage small cell lung cancer (LS-SCLC) who attained complete remission (CR) or partial remission (PR) following combined chemoradiotherapy and subsequent prophylactic cranial irradiation (PCI). Data for 290 patients diagnosed with LS-SCLC and treated at Chengde Central Hospital and Hebei Cangzhou Hospital of Integrated Traditional Chinese and Western Medicine (Chengde, China), who achieved CR or PR and underwent PCI between 2015 and 2023, were retrospectively analyzed. BM rates and overall survival (OS) were estimated using the Kaplan-Meier method, whilst differences were assessed using the log-rank test. Risk factors affecting BM and OS were assessed using univariate and multivariate Cox regression analyses. The overall incidence of BM after PCI was 16.6% (48/290), with annual rates of 1.4, 6.6 and 12.8% at 1, 2 and 3 years, respectively. Multivariate Cox regression analysis identified an initial tumor size of >5 cm [hazard ratio (HR)=15.031; 95% confidence interval (CI): 5.610–40.270; P<0.001] as a significant independent risk factor for BM following PCI. The median OS was 28.8 months and the 5-year OS rate was 27.9%. The median OS for patients with and without BM at 27.55 and 32.5 months, respectively, and the corresponding 5-year OS rates were 8.3 and 31.8%, respectively (P=0.001). Median OS rates for stages I, II and III were 61.15, 48.5 and 28.4 months, respectively, with 5-year OS rates of 62.5, 47.1 and 21.6%, respectively (P<0.001). Further multivariate Cox regression analysis indicated that BM (HR=1.934; 95% CI: 1.358–2.764; P<0.001) and clinical stage (HR=1.741; 95% CI: 1.102–2.750; P=0.018; P=0.022) were significant independent risk factors associated with patient OS. In conclusion, a tumor size of >5 cm is a significant risk factor for BM following PCI in patients with LS-SCLS achieving CR or PR through radiotherapy and chemotherapy. Furthermore, BM and clinical staging independently influence OS. Introduction Lung cancer remains a leading cause of cancer-related mortality globally, with small cell lung cancer (SCLC) representing ~13% of all cases (1).The United States Department of Veterans Affairs categorizes SCLC into two stages: Limited-stage (LS)-SCLC and extensive-stage (ES)-SCLC (2).In China, SCLC accounts for 13-15% of all lung cancer cases, with ~180,000 new cases reported annually (3). SCLC is characterized by rapid proliferation and early onset of distant metastasis, with ~70% of patients diagnosed at the extensive stage (4).The brain is frequently affected by distant metastasis in SCLC, with 10-24% of patients exhibiting brain metastases (BM) at diagnosis and >50% developing them during the disease (5). 
Recent advancements in comprehensive treatment have incrementally improved SCLC survival rates, subsequently increasing the incidence of BM.Within 2 years of achieving complete or partial remission, 67% of patients with LS-SCLC experience BM, with survival extending >2 years in 50-80% of cases (6).Prophylactic cranial irradiation (PCI) significantly reduces the risk of BM and enhances overall survival (OS), thus becoming the standard post-radiotherapy and chemotherapy treatment for LS-SCLC (7,8).Nevertheless, certain patients still develop BM post-PCI, underscoring the need for further refinement in selecting candidates for this intervention. Advancements in therapeutic strategies, including enhanced chemoradiotherapy protocols and precise radiation techniques, have significantly improved the management of LS-SCLC (9,10).Nevertheless, the aggressive nature of SCLC, characterized by rapid cell division and early metastasis, remains challenging.Brain metastases are especially problematic due to the blood-brain barrier, which limits the effectiveness of many systemic therapies, thus necessitating the use of PCI as a preventive measure (11). Identifying patients at higher risk for BM is crucial for optimizing treatment protocols and improving outcomes.Previous studies have emphasized the significance of factors such as tumor size and treatment response in predicting BM (12).Larger tumors and partial responses to treatment are associated with an increased risk of BM.Research into molecular and genetic markers, such as circulating tumor cells (CTCs) and specific gene mutations, holds promise for more accurately predicting BM risk (13).Integrating these biomarkers into clinical practice could lead to more personalized treatment approaches, thereby improving survival rates and quality of life for LS-SCLC patients (14). In the present study, a retrospective analysis was performed of clinical data from 290 patients with LS-SCLC who achieved complete remission (CR)/partial remission (PR) following PCI at Chengde Central Hospital (Chengde, China) and Hebei Cangzhou Hospital of Integrated Traditional Chinese and Western Medicine (Cangzhou, China).The aim was to elucidate the clinical characteristics that influence the risk of developing BM and prognosis after PCI. 
Patients and methods Clinical data. The present study gathered clinical data from 290 patients diagnosed with LS-SCLC who received PCI after achieving CR or PR. The data collection spanned from January 2015 to December 2023 at Chengde Central Hospital and Hebei Cangzhou Hospital of Integrated Traditional Chinese and Western Medicine, with the same collection period applied at both hospitals. The present study is based entirely on previously recorded patient data. All patients had a confirmed diagnosis of SCLC, either pathologically or cytologically, and were free of secondary primary malignancies. Restaging was performed using the American Joint Committee on Cancer (AJCC) Lung Cancer 8th Edition tumor-node-metastasis (TNM) clinical staging criteria (15) and the Department of Veterans Affairs two-stage system (2). The factors analyzed in the present study included age, sex, performance status (PS) score, initial tumor maximum diameter and treatment modalities. The inclusion criteria were as follows: i) Histological or cytological confirmation of SCLC; ii) initial diagnosis of LS-SCLC staged according to the 8th edition of the AJCC Cancer Staging Manual and the Veterans Administration Lung Study Group two-tier system (2); and iii) initial treatment with curative intent chemoradiotherapy (CRT; concurrent or sequential), followed by PCI after achieving CR or PR. The exclusion criteria were as follows: i) Presence of a second primary malignancy or other histological types of cancer; ii) diagnosis of ES-SCLC; iii) loss to follow-up or incomplete clinical data; iv) absence of brain magnetic resonance imaging (MRI) data prior to PCI to exclude BM; and v) receipt of surgical intervention.

In the present study, levels of carcinoembryonic antigen (CEA) and neuron-specific enolase (NSE) were measured from blood samples collected from patients at the two medical centers. Assessments were performed using the Cobas® E 601 module analyzer (Roche Diagnostics) with the electrochemiluminescence method, using Elecsys® CEA and NSE assay kits (Roche Diagnostics; Elecsys® CEA Assay Kit, cat. no. 11731629322; Elecsys® NSE Assay Kit, cat. no. 04827021190). To maintain data integrity and accuracy, a dedicated Laboratory Data Collection Team was formed, which was responsible for the collection and verification of laboratory data from both centers, ensuring uniformity in reference ranges. Established reference ranges for CEA and NSE were set at 0-5 and 0-16 ng/ml, respectively.

Treatment. All patients underwent standard chemotherapy and PCI according to the Chinese Society of Clinical Oncology guidelines (12,16). The preferred modality was concurrent CRT, with sequential CRT used when the former was not tolerable; 54.5% of patients received concurrent treatment. Patients who had surgical interventions were excluded from the analysis. Chemotherapy comprised 4-6 cycles of etoposide combined with cisplatin or carboplatin, with concurrent and induction chemotherapy involving 2-3 cycles and 1-3 cycles, respectively (8).
Chemotherapy regimen.All patients underwent standard chemotherapy consisting of etoposide and platinum-based drugs (cisplatin or carboplatin).Etoposide was administered intravenously at a dose of 100 mg/m² on days 1 to 3 of each cycle.Cisplatin was administered intravenously at a dose of 75 mg/m² on day 1, or carboplatin was administered intravenously at an area under the curve (AUC) of 5 on day 1.Each chemotherapy cycle lasted 21 days, and patients typically received 4 to 6 cycles of chemotherapy.Thoracic radiotherapy was administered either as 45 Gy in 30 fractions twice daily or as 54-70 Gy in 28-30 fractions once daily.Response to CRT was evaluated using the Response Evaluation Criteria in Solid Tumours 1.1 criteria (17).Patients achieving a CR or PR proceeded with PCI.Brain MRI was performed prior to PCI in all cases to rule out metastases.PCI typically commenced 4-6 weeks post-CRT, delivered as 25 Gy in 5 weekly fractions over 2 weeks (18).Hippocampal delineation adhered to the RTOG0933 principles (19), ensuring a maximum dose to the hippocampus of <17 Gy and an average dose of <10 Gy (20).Dose constraints for high-risk organs were set as follows: Brainstem, ≤54 Gy; spinal cord, ≤45 Gy; temporal lobe, ≤65 Gy; optic chiasm and nerve, ≤54 Gy; pituitary, mean dose ≤45 Gy; eye, ≤50 Gy or mean dose, ≤35 Gy; lens, ≤9 Gy; mandible and temporomandibular joint, ≤70 Gy; parotid gland mean dose, ≤26 Gy, and V30, ≤50% (at least unilaterally) or D20cc, ≤20 Gy (bilaterally), with average doses kept at <10 Gy and maximum doses of <17 Gy. Follow-up and efficacy evaluation.Efficacy evaluation was performed 1 month following the completion of CRT.Patients underwent follow-up assessments every 3 months for the first 2 years post-treatment, every 6 months until the fifth year, and annually thereafter.These assessments included chest and abdominal CT scans.In instances of headaches or neurological symptoms, an immediate brain MRI was administered.Follow-up methods comprised patient revisits, telephone consultations and reviews of registration data.Survival metrics, such as OS, were calculated from the onset of treatment to death or the last follow-up.The time to BM was measured from initiation of treatment to confirmation via imaging.As of January 2024, follow-up data was up-to-date, with a median duration of 55 months, ranging from 11-102 months. Statistical analysis.Statistical analyses were performed using SPSS software, version 27.0 (IBM Corp.).Survival data were analyzed using the Kaplan-Meier method coupled with the log-rank test.Single-factor and multifactorial risk factors impacting BM and OS were assessed using Cox regression analysis.All tests performed were two-tailed and P<0.05 was considered to indicate a statistically significant difference. Results Analysis of clinical characteristics.At the time of follow-up, basic clinical data were collected from 290 patients involved in this study.The median age was 58 years, ranging from 42-74 years.A total of 44.5% of these patients (129 cases) presented with an initial tumor diameter of >5 cm at the onset of treatment.The clinical characteristic of the patients are presented in Table I.Representative brain MRI images are shown in Fig. 1.The MRI images presented in this study are representative images taken when brain metastases were first detected in the patients.These images provide a visual representation to help readers better understand the typical appearance and progression of brain metastases in patients with LS-SCLC. 
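As a concrete illustration of the survival methodology described in the Statistical analysis subsection above (Kaplan-Meier estimation with log-rank comparison), the sketch below shows an equivalent workflow in Python. The original analyses were performed in SPSS 27.0; the `lifelines` package, the column names and the toy follow-up records here are illustrative assumptions, not the authors' data or code.

```python
# Illustrative re-implementation of the Kaplan-Meier / log-rank analysis in Python.
# The study itself used SPSS 27.0; `lifelines` and the toy data below are assumptions.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up records: months of OS, death flag, and BM status.
df = pd.DataFrame({
    "os_months": [12.0, 27.5, 33.0, 55.0, 61.0, 18.5, 40.0, 72.0, 24.0, 66.0],
    "died":      [1,    1,    0,    1,    0,    1,    1,    0,    1,    0],
    "bm":        [1,    1,    1,    0,    0,    1,    0,    0,    1,    0],
})

km = KaplanMeierFitter()
for has_bm, grp in df.groupby("bm"):
    km.fit(grp["os_months"], event_observed=grp["died"],
           label="BM" if has_bm else "no BM")
    print("BM" if has_bm else "no BM", km.median_survival_time_)  # median OS per group

# Log-rank test for the OS difference between patients with and without BM.
bm, no_bm = df[df["bm"] == 1], df[df["bm"] == 0]
res = logrank_test(bm["os_months"], no_bm["os_months"],
                   event_observed_A=bm["died"], event_observed_B=no_bm["died"])
print(f"log-rank P = {res.p_value:.3f}")
```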
Factors associated with BM.The overall BM rate was demonstrated to be 16.6% (48/290).Annual rates of BM at 1, 2 and 3 years post-diagnosis were 1.4, 6.6 and 12.8%, respectively (Fig. 2).This study established a tumor size of 5 cm as the initial standard, grounded in the AJCC staging criteria for lung cancer, where T3 is defined as a tumor greater than 5 cm.To validate this standard, statistical analyses were conducted using various tumor sizes as classification criteria, and the results were compared.The analyses revealed no statistically significant differences in BM and OS when 3,4,6 and 7 cm were used as classification criteria (P>0.05)(Table II).This finding further supports the statistical and clinical significance of using 5 cm as the grouping standard.A detailed analysis of factors influencing BM highlighted significant associations in univariate analysis: Notably, the maximum diameter of the initial tumor [hazard ratio (HR)=13.276;95% confidence interval (CI): 5.248-33.586;P<0.001], type of treatment administered (HR=2.149;95% CI: 1.199-3.851;P=0.010) and treatment response (HR=2.981;95% CI: 1.231-7.223;P=0.016) were significantly associated with an increased risk of BM following Prophylactic cranial irradiation (PCI) (Table III).Multivariable Cox regression analysis identified that an initial tumor maximum diameter of >5 cm was an independent risk factor for BM post-PCI (HR=15.031;95% CI: 5.610-40.270;P<0.001; Table III).Patients with tumors >5 cm in diameter experienced a BM rate of 32.6% (42/129), which was significantly higher than the 3.7% (6/161) observed in patients with tumors of ≤5 cm in diameter (P<0.001;Fig. 3). Factors associated with OS.The median OS for the cohort of 290 patients was recorded at 28.8 months, accompanied by a 5-year OS rate of 27.9% (Fig. 4).A comparative analysis between patients with BM and those without revealed median OS values of 27.55 months and 32.5 months, respectively, with corresponding 5-year OS rates of 8.3 and 31.8%,respectively (P=0.001;Fig. 5).The median OS for stage I, II and III patients was 61.15, 48.5 and 28.4 months, respectively, with 5-year OS rates of 62.5, 47.1 and 21.6%, respectively (P<0.001;Fig. 
6). Univariate analysis revealed several factors significantly associated with OS, including initial tumor maximum diameter (P=0.003), N staging (P<0.001) and clinical staging (P<0.001).

Discussion For patients with LS-SCLC who exhibit a favorable initial response to treatment, PCI is recommended as a class I intervention according to the National Comprehensive Cancer Network guidelines (21). In the modern MRI era, studies have reported that patients with LS-SCLC who did not receive PCI experienced 1- and 3-year BM rates of 23.8 and 41.3%, respectively (22,23). Conversely, the 3-year BM rate among patients who underwent PCI was reported to be notably lower at 11.2% (24), and the 5-year progression-free rate for BM was 69% (25). The results of the present study revealed a 3-year BM rate of 12.8% post-PCI in patients with LS-SCLC, aligning closely with the outcomes observed in the aforementioned research. The findings of the present study demonstrate that patients with an initial tumor maximum diameter of >5 cm at the time of initial diagnosis exhibit a substantially elevated risk of BM following PCI. This observation is consistent with prior research indicating that higher clinical stages, which consider local spread and tumor size, are associated with an increased risk of BM development. Levy et al (26) reported an association between the volume of the primary tumor in the thorax and the subsequent risk of BM in patients with LS-SCLC. Similarly, Chen et al (27) performed a retrospective analysis of 550 patients with LS-SCLC and reported that an initial tumor maximum diameter of >5 cm was a notable risk factor for BM. This increased risk may be attributed to larger tumors dispersing more malignant cells into the circulatory system, which then potentially seed metastases in distant organs (28).

In the present study, further analysis was performed on the factors affecting the prognosis of patients with SCLC following PCI. It was observed that patients developing BM post-PCI exhibited a significantly lower OS compared with those without BM, with a median OS of 27.55 vs. 32.5 months and 5-year OS rates of 8.3 and 31.8%, respectively (P=0.001). Cen et al (29) reported that BM serve as an independent risk factor for the prognosis of patients with SCLC post-PCI. Moreover, the present study identified clinical staging as an independent risk factor influencing the OS of patients with LS-SCLC after PCI. Kim et al (30) noted that in patients aged ≥65 years with stage II-III disease, PCI did not confer marked survival advantages. Similarly, Farooqi et al (31) reported no improvement in OS for individuals aged ≥70 years with tumor diameters of ≥5 cm following PCI. Furthermore, the size of the tumor at initial diagnosis in the present study was not significantly associated with a worse OS. However, previous research suggests that larger tumor size may indicate a more aggressive phenotype, elevating the risk of metastasis, especially BM. Patients in advanced stages may exhibit higher rates of extracranial disease progression, potentially masking the survival benefits of PCI (27,31). This highlights the crucial role of utilizing TNM clinical staging more extensively for guiding clinical decisions (32).
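The univariate and multivariate Cox proportional-hazards analyses used above to derive hazard ratios and 95% CIs for BM and OS can be sketched in Python as follows. Again, `lifelines` stands in for the SPSS procedures actually used, and the toy covariates (tumor diameter >5 cm, concurrent vs. sequential CRT) and data values are hypothetical.

```python
# Illustrative Cox proportional-hazards sketch for the risk-factor analyses above.
# The original models were fitted in SPSS; `lifelines` and the toy data are assumptions.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months_to_bm": [8, 14, 20, 36, 48, 60, 11, 30, 25, 55],
    "bm_event":     [1,  1,  0,  1,  0,  0,  1,  1,  0,  0],
    "tumor_gt5cm":  [1,  1,  1,  0,  0,  0,  1,  1,  1,  0],  # initial diameter > 5 cm
    "concurrent":   [0,  1,  1,  0,  1,  1,  0,  1,  0,  1],  # concurrent vs. sequential CRT
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_bm", event_col="bm_event")
# exp(coef) is the hazard ratio; the summary also includes 95% CIs and P-values.
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%", "p"]])
```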
The preventive role of PCI in reducing BM risk for patients with LS-SCLC achieving CR after CRT is well established. Despite this, 16.6% of patients still developed BM post-PCI in the present study, suggesting that a fraction of patients with LS-SCLC who receive curative CRT may not benefit from PCI. Further research is necessary to delineate the characteristics of these patients. Given the limitations of traditional imaging methods such as CT and MRI in assessing early therapeutic effectiveness and prognostic outcomes, the exploration of molecular biomarkers for the early prediction of BM and evaluation of PCI efficacy represents a vital research direction. Slotman et al (33) examined the effectiveness of PCI in ES-SCLC, categorizing patients into a brain radiotherapy group and a control group, each consisting of 143 patients. The brain radiotherapy group received different dosages: 20 Gy in 5 fractions (n=89), 30 Gy in 10 fractions (n=23), 30 Gy in 12 fractions (n=9) and 25 Gy in 10 fractions (n=7). The results revealed symptomatic BM in 16.8% (n=24) of the radiotherapy group compared with 41.3% (n=59) in the control group (P<0.001). The cumulative risk of BM at 6 and 12 months for the radiotherapy group was 4.4 and 14.6%, respectively, compared with 32.0 and 40.4%, respectively, for the control group. Median disease-free survival was 14.7 weeks in the radiotherapy group and 12.0 weeks in the control group (P=0.02), with median OS at 6.7 and 5.4 months, respectively (P=0.003). The 1-year OS rate was 27.1% in the radiotherapy group and 13.3% in the control group. These findings underscore that PCI can enhance survival and reduce the incidence of subsequent BM in patients with ES-SCLC who respond well to systemic chemotherapy and thoracic radiotherapy.

Table III. Analysis of factors affecting brain metastasis in 290 patients with limited stage-small cell lung cancer.

Moreover, a Phase III randomized controlled trial performed in Japan by Takahashi et al (34) provided further data on this question. The findings revealed that the median OS was 11.6 months for the PCI group compared with 13.7 months for the observation group (P=0.094). The 1- and 2-year OS rates were 48.4 and 15.0% for the PCI group, respectively, compared with 53.6 and 18.8% for the observation group, respectively. Furthermore, the cumulative incidence of BM at 6, 12 and 18 months was markedly lower in the PCI group (15.0, 32.9 and 40.1%, respectively) compared with the observation group (46.2, 59.0 and 63.8%, respectively). Despite a significant reduction in the incidence of intracranial metastases (48 vs.
69%; P<0.0001), PCI did not provide a survival advantage. PCI is implicated in the onset of delayed neurotoxicity, particularly when administered at doses of >3 Gy per fraction and/or in conjunction with CRT (31). Consequently, PCI is contraindicated for patients exhibiting a poor PS of 3-4 or compromised neurocognitive function (35). Additionally, a higher incidence of chronic neurotoxicity is observed in individuals aged >60 years (36). The conflicting data from several clinical trials and the growing concerns regarding the use of PCI (33,37,38) led to the initiation of the SWOG S1827/MAVERICK trial in the United States (39). This randomized study evaluated the efficacy of exclusive brain MRI monitoring against the combination of brain MRI and PCI in managing both advanced and early-stage SCLC. Participants were randomly allocated to either the MRI-only group or the combined MRI and PCI group. The primary outcome measure was OS, with secondary outcomes including survival free from cognitive decline, survival free from BM, and rates of adverse events. Although the results are pending, this trial is expected to yield significant insight and data for the future management of SCLC (39). Moreover, the study by Chen et al (27) assessed this subject; however, the present study differs in several key aspects: The present study is based on data from a dual-center collaboration between the Hebei Province Cangzhou Hospital of Integrated Traditional and Western Medicine and the Chengde City Central Hospital, which offers more accurate and reliable statistical outcomes than single-center studies; and the enrolled patients were re-staged using the 8th edition of the AJCC Cancer Staging Manual and the Veterans Administration Lung Study Group two-tier system, unlike the study by Chen et al (27), which used the 7th edition, potentially affecting the comparability of stage-related outcomes. The adoption of the widely applied 8th edition staging system enhances the credibility of the present findings.

Table IV. Cox proportional risk model analysis affecting overall survival in patients with limited stage-small cell lung cancer.

In conclusion, retrospective analyses of patients with LS-SCLC indicate that an initial maximum tumor diameter of >5 cm serves as an independent risk factor for BM following PCI. Furthermore, both BM and clinical staging independently influence OS in these patients post-PCI. Presently, research into the risk factors for BM post-PCI remains sparse and predominantly retrospective. According to the Chinese Society of Clinical Oncology guidelines, concurrent CRT is the standard treatment for patients with LS-SCLC of stage >T1-2N0 (40). If patients cannot tolerate this regimen, sequential CRT is also an option (41). In the present study of 290 patients, 80% were Stage III (n=232), with 50.7% at N3 (n=147). Treatment plans were tailored for each patient using a multidisciplinary team approach, taking into account functional status, laboratory findings and imaging data. Considering the significant adverse reactions from concurrent CRT in Stage III (N3) patients, which many find intolerable, a portion opted for sequential CRT, resulting in a lower proportion of patients undergoing concurrent treatment (54.5%). Therefore, there is a compelling need for more prospective studies to further assess these associations.
Figure 1. Representative magnetic resonance images of the brain. (A) The left image shows significant heterogeneous enhancement following contrast administration. The right image shows T2WI revealing a round, mixed-signal lesion at the gray-white matter junction of the left frontal lobe, with surrounding brain tissue edema. (B) The left image shows prominent ring enhancement post-contrast. The right image shows T1WI revealing round, low-signal lesions in the subcortical areas of both frontal lobes, with well-defined boundaries. (C) The left image shows significant heterogeneous enhancement following contrast administration. The right image shows T2WI of a round, mixed-signal lesion in the right occipital lobe with surrounding brain tissue edema. T2WI, T2-weighted imaging.
Figure 2. BM rate in 290 patients with limited stage-small cell lung cancer. BM, brain metastasis.
Figure 3. Comparison of BM rates in 290 patients with limited stage-small cell lung cancer with different initial tumor maximum diameters. BM, brain metastasis.
Figure 4. OS rate of 290 patients with limited stage-small cell lung cancer. OS, overall survival.
Figure 5. Comparison of OS rates between 290 patients with limited stage-small cell lung cancer with BM and those without BM. OS, overall survival; BM, brain metastasis.
Figure 6. Comparison of OS rates in 290 patients with limited stage-small cell lung cancer with different clinical stages. OS, overall survival.
Table I. Clinical characteristics of 290 patients with limited stage-small cell lung cancer.
Table II. Effect of initial tumor size in relation to brain metastases and overall survival.
The Role of Strong Magnetic Fields in Stabilizing Highly Luminous, Thin Disks We present a set of three-dimensional, global, general relativistic radiation magnetohydrodynamic simulations of thin, radiation-pressure-dominated accretion disks surrounding a non-rotating, stellar-mass black hole. The simulations are initialized using the Shakura-Sunyaev model with a mass accretion rate of $\dot{M} = 3 L_\mathrm{Edd}/c^2$ (corresponding to $L=0.17 L_\mathrm{Edd}$). Our previous work demonstrated that such disks are thermally unstable when accretion is driven by an $\alpha$-viscosity. In the present work, we test the hypothesis that strong magnetic fields can both drive accretion through the magneto-rotational instability and restore stability to such disks. We test four initial magnetic field configurations: 1) a zero-net-flux case with a single, radially extended set of magnetic field loops (dipole); 2) a zero-net-flux case with two radially extended sets of magnetic field loops of opposite polarity stacked vertically (quadrupole); 3) a zero-net-flux case with multiple radially concentric rings of alternating polarity (multi-loop); and 4) a net-flux, vertical magnetic field configuration (vertical). In all cases, the fields are initially weak, with the gas-to-magnetic pressure ratio $\gtrsim 100$. Based on the results of these simulations, we find that the dipole and multi-loop configurations remain thermally unstable like their $\alpha$-viscosity counterpart, in our case collapsing vertically on the local thermal timescale and never fully recovering. The vertical case, on the other hand, stabilizes and remains so for the duration of our tests (many thermal timescales). The quadrupole case is intermediate, showing signs of both stability and instability. The key stabilizing criterion is $P_\mathrm{mag} \gtrsim 0.5P_\mathrm{tot}$, with strong toroidal fields near the disk midplane. We also report a comparison of our models to the standard Shakura-Sunyaev disk.

INTRODUCTION From the earliest work on thin ($H/R \ll 1$) accretion disks based on the α-viscosity prescription (Shakura & Sunyaev 1973), there have been notable problems in region "A," where the vertical pressure support comes from radiation and the opacity is dominated by electron scattering. Region A covers radii $R/r_g \lesssim 600 (L/L_\mathrm{Edd})^{16/21}$, where $r_g = GM/c^2$ is the gravitational radius and $L_\mathrm{Edd} = 1.2 \times 10^{38} (M/M_\odot)$ erg s$^{-1}$ is the Eddington luminosity of a black hole of mass M. In this region, the disk is predicted to be both thermally (Shakura & Sunyaev 1976) and viscously (Lightman & Eardley 1974) unstable. The thermal instability arises because the disk heating rate per unit area, $Q^+$, and cooling rate per unit area, $Q^-$, depend on different powers of the mid-plane pressure for a fixed surface density, Σ, as $(d \ln Q^+/d \ln P_\mathrm{rad,0})_\Sigma = 2$ (1) and $(d \ln Q^-/d \ln P_\mathrm{rad,0})_\Sigma = 1$ (2), such that small deviations in $P_\mathrm{rad,0}$ may lead to runaway heating or cooling. The viscous instability, which typically acts on a longer timescale than the thermal instability, arises due to an inverse correlation between the vertically integrated stress, $W_{R\phi}$, and the surface density, Σ, which can cause the disk to break up into rings of high and low surface density (Lightman & Eardley 1974; Mishra et al. 2016). In region "B" (gas-pressure-supported, but still scattering dominated), by contrast, which occurs at larger radii or sufficiently low luminosity, the Shakura-Sunyaev solution is predicted to be stable. Previous shearing box (Jiang et al. 2013; Ross et al.
2017) and global simulations (Teresi et al. 2004; Mishra et al. 2016; Fragile et al. 2018) have largely confirmed the thermal instability of these disks. In Fragile et al. (2018), multiple α-viscosity simulations that started on the radiation-pressure-dominated (region A) branch underwent runaway cooling until they collapsed down to the gas-pressure-dominated (region B) branch of the thermal equilibrium (S-) curve. Only simulations starting on that lower branch remained stable for more than a thermal timescale. Curiously, we did not see any examples of runaway heating, even in one case where the initial gas temperature was perturbed upward by 50%. A similar preference toward cooling and collapse was noted in Teresi et al. (2004). All of this is particularly puzzling in light of observations of black hole X-ray binaries (BHXRBs), for which the spectra look most disk-like (soft, with a prominent thermal bump around 1 keV) and stable (rms variability ≲ 3%) whenever $L/L_\mathrm{Edd}$ = 0.1-0.2 (e.g. van der Klis 2004; Done et al. 2007). In other words, there is no sign of the thermal or viscous instabilities previously mentioned, precisely in the luminosity range when such disks are predicted to be unstable. Two possible exceptions are GRS 1915+105 (Belloni et al. 1997; Neilsen et al. 2011) and IGR J17091-3624 (Altamirano et al. 2011; Zhang et al. 2014), which show evidence for limit cycle behavior that may be consistent with a thermal instability (Honma et al. 1991; Szuszkiewicz & Miller 1998), although this seems to be limited to when those sources are at their highest (possibly super-Eddington) luminosity (Done et al. 2004). One proposed solution to the dilemma of thermal instability has been to invoke strong magnetic fields to provide the additional support needed to stabilize the disk (Begelman & Pringle 2007; Oda et al. 2009; Sądowski 2016a). This is because, while the cooling rate is insensitive to the magnetic pressure and still follows eq. (2), the heating rate is not. Instead, the heating rate in a strongly magnetized disk scales as (Sądowski 2016a) $(d \ln Q^+/d \ln P_\mathrm{rad,0})_{\Sigma, P_\mathrm{mag,0}} = 2(1 - \beta_r^{-1})$, where $\beta_r = P_\mathrm{tot,0}/P_\mathrm{mag,0}$ and we have ignored gas pressure. This allows us to quantify how strong the magnetic field must be, as $\beta_r^{-1} > 0.5$ leads to $(d \ln Q^-/d \ln P_\mathrm{rad,0})_\Sigma > (d \ln Q^+/d \ln P_\mathrm{rad,0})_{\Sigma, P_\mathrm{mag,0}}$, which restores stability. In this work, we set out to test the idea of magnetic stabilization through a set of numerical experiments. Each of our simulations starts from the Shakura & Sunyaev (1973) disk solution with $\dot{M} = 3L_\mathrm{Edd}/c^2$, i.e. one on the unstable branch and in the prescribed luminosity range ($L = \eta \dot{M} c^2 \approx 0.057 \dot{M} c^2 = 0.17 L_\mathrm{Edd}$). The question we want to ask is: Can an initially weak magnetic field be amplified self-consistently to the required strength to stabilize these disks in this luminosity range? We consider four different seed magnetic field configurations, all with $\beta_\mathrm{mid,0} = P_\mathrm{gas,0}/P_\mathrm{mag,0} \gtrsim 100$ and, hence, $\beta_r^{-1} \ll 0.5$. In one, we consider a single poloidal field anti-node centered far from the black hole, giving a very radially extended dipole field configuration. In a second case, we consider two radially extended poloidal loops of opposite polarity stacked vertically, one above the midplane, one below. Both of these field configurations start out weak, though they will be subject to strong shear amplification from the orbital motion of the disk.
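The two scalings quoted in this introduction, the extent of region A and the magnetic stabilization condition of eqs. (1)-(4), can be evaluated with a few lines of Python. The block below is only a numerical restatement of those formulas; the example inputs are chosen to match the simulated system described above.

```python
# Region-A extent and the magnetic thermal-stability criterion, as quoted above.
def eddington_luminosity(m_bh_msun):
    """L_Edd = 1.2e38 (M/M_sun) erg/s."""
    return 1.2e38 * m_bh_msun

def region_a_outer_radius(l_over_ledd):
    """Outer edge of the radiation-pressure-dominated region, in gravitational radii."""
    return 600.0 * l_over_ledd ** (16.0 / 21.0)

def heating_index(beta_r):
    """d ln Q+ / d ln P_rad at fixed Sigma and P_mag, with beta_r = P_tot/P_mag."""
    return 2.0 * (1.0 - 1.0 / beta_r)

def thermally_stable(beta_r):
    """Stable when the heating index falls below the cooling index of 1."""
    return heating_index(beta_r) < 1.0

print(eddington_luminosity(6.62))        # ~7.9e38 erg/s for the simulated black hole
print(region_a_outer_radius(0.17))       # region A reaches ~155 r_g at L = 0.17 L_Edd
print(thermally_stable(100.0), thermally_stable(1.5))  # weak field unstable; beta_r < 2 stable
```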
Similar radially extended fields were reported in Sądowski (2016a), with the dipole field deemed unstable and the quadrupole deemed stable. However, it was not entirely clear from that study why the one configuration was stable while the other was not. Hence, our decision to revisit both. In a third case, the field consists of numerous small poloidal loops of alternating polarity, with length scales comparable to the local disk height, arranged in concentric radial rings. For such a configuration, the field is unlikely to amplify sufficiently to offer significant pressure support, and therefore we expect to see a thermal runaway analogous to our previous simulations (Mishra et al. 2016). In the final case, we consider a vertical magnetic field threading through the disk. This field configuration is also subject to shear amplification and can reach strengths sufficient to provide magnetic pressure support, as shown in shearing box (Salvesen et al. 2016) and global (Mishra et al. 2020) studies. However, ours are the first three-dimensional, global, general relativistic, radiation MHD simulations to consider this configuration. We elected not to test a purely toroidal field configuration, as we have already shown in Fragile & Sądowski (2017) that even a strong toroidal field, on its own, is not enough to stabilize an initially radiation-pressure-dominated disk. This is because strong toroidal fields decay on roughly the local orbital timescale due to magnetic buoyancy. However, in cases where a radial or vertical seed field is present, such as in our dipole, quadrupole, and vertical field cases, the toroidal magnetic field can be continually replenished by the Ω-dynamo. As long as this replenishment happens fast enough to keep the field strong, this may be enough to stabilize the disk. Alternatively, if a purely toroidal field is able to generate local net poloidal flux through an α-Ω dynamo, as in the thick disk simulations of Liska et al. (2020), then perhaps such a field configuration could yield a stable disk. We do not explore this possibility. The remainder of our paper is organized as follows: In Section 2, we describe the numerical procedures used in our simulations; in Section 3, we present evidence regarding the stability of each simulation; in Section 4, we describe the vertical profiles of the disk; in Section 5, we compare the properties of our stable simulations to the predictions of the Shakura-Sunyaev model; in Section 6, we discuss how our results fit in with previous, comparable simulations; and finally, in Section 7, we present our conclusions. All of our simulations assume a non-spinning black hole of mass M = 6.62 M_⊙; therefore, our distance unit, GM/c², is equal to 9.8 km, and our time unit, GM/c³, is equal to 3.3 × 10⁻⁵ s.

NUMERICAL SETUP As stated previously, the simulations presented here start from a Shakura-Sunyaev disk with Ṁ = 3L_Edd/c². For the hydrodynamic and radiation variables, we follow the initialization steps described in Fragile et al. (2018). In order to initialize the Shakura-Sunyaev solution, we assume the viscosity parameter, α, to be 0.02, based on previous similar simulations (e.g., Mishra et al. 2016; Sądowski 2016a) and an adiabatic equation of state with γ = 5/3. However, we emphasize that the current simulations do not employ any form of explicit viscosity. On top of this disk we impose various initially weak (β_mid,0 ≳ 100) seed magnetic field configurations.
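The geometrized code units quoted above for M = 6.62 M_⊙ follow directly from GM/c² and GM/c³. The short check below uses standard CGS constants and is purely a numerical restatement of those definitions.

```python
# Check of the geometrized code units quoted above for M = 6.62 M_sun, using
# standard CGS constants; purely a numerical restatement of GM/c^2 and GM/c^3.
G = 6.674e-8       # cm^3 g^-1 s^-2
C = 2.998e10       # cm s^-1
M_SUN = 1.989e33   # g

def length_unit_km(m_bh_msun):
    """Distance unit GM/c^2, converted to kilometres."""
    return G * m_bh_msun * M_SUN / C**2 / 1.0e5

def time_unit_s(m_bh_msun):
    """Time unit GM/c^3, in seconds."""
    return G * m_bh_msun * M_SUN / C**3

print(length_unit_km(6.62))   # ~9.8 km
print(time_unit_s(6.62))      # ~3.3e-5 s
```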
We choose to start with weak magnetic fields to test whether the accretion process itself can amplify them to the required strengths. All simulations are carried out using the general relativistic, radiation, magnetohydrodynamics (GRRMHD) code, Cosmos++ (Anninos et al. 2005). We use the high-resolution shock-capturing (HRSC) scheme described in Fragile et al. (2012) to solve for the flux and gravitational source term of the gas and radiation. Rather than evolving the magnetic fields directly, we instead evolve the vector potential and recover the fields from it as needed, as described in Fragile et al. (2019). For the radiation, we use the M1 closure scheme described in Fragile et al. (2014), which retains the first two moments of the radiation intensity and (average) radiative flux. We use grey (frequency-independent) opacities, which are captured in the radiation four-force density (coupling) term (eq. 5), where κ^a_P = 2.8 × 10²³ T_K^(−7/2) ρ_cgs cm² g⁻¹ and κ^a_R = 7.6 × 10²¹ T_K^(−7/2) ρ_cgs cm² g⁻¹ are the Planck and Rosseland mean opacities for free-free absorption, respectively, κ^s = 0.34 cm² g⁻¹ is the opacity due to electron scattering, R^μν is the radiation stress tensor, u^μ is the fluid four-velocity, T_K is the ideal gas temperature of the fluid in Kelvin, and ρ_cgs is the density in g cm⁻³. Thus, we are assuming Kramers-type opacity laws, with the Rosseland mean also used for the flux mean, κ^a_F = κ^a_R, and the Planck mean used for the J-mean, κ^a_J = κ^a_P. Since the simulations capture turbulence, reconnection, and shock heating as well as radiative cooling directly, we do not need to include any artificially imposed heating or cooling terms. To invert the conserved fields to the corresponding primitives, we use the Cosmos++ 9D primitive solver, named for the 9 variables that make up the solve. This step uses a Newton-Raphson iterative technique and a linear matrix solver to numerically invert the Jacobian matrix composed of derivatives of the conserved fields with respect to the primitive ones. It also simultaneously accomplishes the implicit forward integration of the radiation source term (eq. 5) and produces a fully updated set of primitive fields. Details of this procedure are provided in Fragile et al. (2014). In cases where the primitive solver fails to converge or settles on a nonphysical solution, the primitive values of the surrounding zones from the previous timestep are averaged and used to replace the failed zone values. We also impose numerical floors and ceilings on the primitive fields, such that the density, ρ, and internal energy density, e, are not allowed to drop below 90% of their initial background values, and the Lorentz factor is not allowed to exceed 20. We also impose relative restrictions between the gas and magnetic field properties, such that ρ ≥ B²/100 and P_gas ≥ P_mag/25; these limits help with the stability of the code in the relatively evacuated background region. Whenever mass or energy is added to a cell because of these magnetization limits, this is done in the drift frame according to Ressler et al. (2017).

Simulation Setup We only simulate the inner region of the Shakura-Sunyaev disk model, from r = 4 GM/c² to r = 160 GM/c², 0 to π in the θ direction, and 0 to π/2 in the φ direction, making our simulation domain a wedge shape.
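The grey opacities listed above are simple functions of gas temperature and density. The sketch below writes them out; the coefficients are those quoted in the text, while the example temperature and density are arbitrary illustrative values.

```python
# The grey, Kramers-type opacities quoted above, as functions of the gas temperature
# T_K (in Kelvin) and density rho_cgs (in g/cm^3). Coefficients are from the text;
# the example inputs are arbitrary illustrative values.
def kappa_planck(T_K, rho_cgs):
    """Planck-mean free-free absorption opacity [cm^2/g]."""
    return 2.8e23 * T_K**-3.5 * rho_cgs

def kappa_rosseland(T_K, rho_cgs):
    """Rosseland-mean free-free absorption opacity [cm^2/g]."""
    return 7.6e21 * T_K**-3.5 * rho_cgs

KAPPA_ES = 0.34  # electron-scattering opacity [cm^2/g]

T, rho = 1.0e7, 1.0e-2   # illustrative inner-disk-like conditions
print(kappa_planck(T, rho), kappa_rosseland(T, rho), KAPPA_ES)
```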
As these are very thin disks (H/R ≲ 0.03), we use a variety of techniques to concentrate resolution as much as possible toward their inner regions, including a logarithmic radial coordinate, x₁ = 1 + ln(r/r_BH), a concentrated latitude coordinate, and static mesh refinement. We start with a base mesh of 48 × 48 × 12 and add two or three levels of refinement focused around the inner disk, for an equivalent resolution of 384 × 384 × 96 for the highest resolution, four-level simulations. Even so, we only have approximately 6 (12) zones per scale height of the disk initially for our three- (four-)level simulations. The three-level grid, used for most of the simulations, is shown in the top panel of Figure 1. We apply outflow boundary conditions at both radial grid boundaries, reflecting boundary conditions at the poles, and periodic boundaries in the azimuthal direction. One change from how we set the disk up in Fragile et al. (2018) is that here we ignore the relativistic correction terms A, B, C, D, E, and Q that appear in the Novikov & Thorne (1973) form of the thin-disk solution (see eq. (99) of Abramowicz & Fragile 2013). Otherwise, the procedure is the same.

Magnetic Field Setup The simulations are each seeded with one of four relatively simple field geometries; the key is that each geometry is qualitatively different in some way. In all four cases, the fields are initially weak relative to the gas and radiation pressure within the body of the disk. Finally, the fields are constructed such that β_mid,0 is approximately constant with radius. The first geometry we consider is a zero-net-flux, single-poloidal-loop case (second panel of Figure 1). This is the standard dipole field configuration that has been used to initialize many global MHD disk simulations; the one difference is that our field is very elongated in the radial direction, extending from near the inner radius of the disk all the way to the outer boundary of our simulation domain. To initialize this field, we first specify the azimuthal component of the vector potential in terms of R = r sin θ, the cylindrical radius; H, the local height of the disk; r_max = 30^(1.5) GM/c², the maximum radius of the grid; a width parameter ∆ = 10; z = r cos θ; and R_t = max(R_ISCO, R), where R_ISCO = 6 GM/c² is the usual ISCO (innermost stable circular orbit) radius. We then set the poloidal components of the magnetic field as B^r = −∂_θ A_φ and B^θ = ∂_r A_φ. These choices keep the initial magnetic field confined within our very thin initial disk. This field configuration is subject to a strong radial shear amplification (leading to a growth of the B^φ component) due to the orbital motion of the disk (the so-called Ω-dynamo), along with MRI-driven amplification. However, a common feature of all such dipole field configurations is that they have a current sheet exactly at the midplane of the disk. This turns out to be an important factor in determining the subsequent evolution of this case. Our second magnetic field configuration consists of two poloidal field loops of opposite polarity stacked vertically, one on top of the other, about the midplane of the disk (third panel of Figure 1). To achieve this, we use the same vector potential as the dipole field case (Eq. 7), except multiplied by an extra factor of z to introduce the asymmetry across the midplane. Again, we expect significant field amplification from the orbital motion of the disk. Although this configuration introduces a second current sheet, neither is located in the midplane of the disk, unlike the dipole case.
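The logarithmic radial coordinate introduced at the start of this setup section, x₁ = 1 + ln(r/r_BH), is straightforward to construct for the r = 4-160 GM/c² domain. In the sketch below, taking r_BH to be the Schwarzschild horizon radius (2 GM/c²) is an assumption, and the concentrated latitude coordinate and static mesh refinement are not reproduced.

```python
# Minimal sketch of a grid uniform in x1 = 1 + ln(r/r_BH), as used above.
# r_BH = 2 GM/c^2 (Schwarzschild horizon) is an assumption for illustration.
import numpy as np

def radial_grid(r_in=4.0, r_out=160.0, n_r=384, r_bh=2.0):
    """Cell-centred radii (in GM/c^2) for a grid uniform in x1 = 1 + ln(r/r_bh)."""
    x1 = np.linspace(1.0 + np.log(r_in / r_bh), 1.0 + np.log(r_out / r_bh), n_r)
    return r_bh * np.exp(x1 - 1.0)

r = radial_grid()
print(r[0], r[-1])                      # spans the 4-160 GM/c^2 domain
print((np.diff(r) / r[:-1]).mean())     # roughly constant fractional resolution, dr/r ~ 1%
```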
The third field configuration consists of multiple small poloidal loops of alternating polarity distributed in concentric rings moving outward through the disk midplane (fourth panel of Figure 1). Each ring has a width comparable to the local disk height. To achieve this, we use a vector potential composed of concentric rings of alternating polarity. For such a configuration, we do not expect the field to amplify sufficiently to offer significant pressure support. This is because the narrow radial range of each magnetic cell prevents significant radial shear. Also, this configuration lacks any sort of underlying guide field that can replenish field lost to reconnection. Ultimately, any amplification of this field is limited to the action of the magneto-rotational instability (MRI), which typically saturates at β_r ∼ 10 for zero-net-flux configurations (Turner 2004; Hirose et al. 2009), and therefore, we expect to see thermal runaway (either collapse or expansion) analogous to our earlier simulations with a similar field configuration (Mishra et al. 2016). Nevertheless, we run this simulation as a control case. For the final configuration, we consider a net-flux, vertical field threading through the disk (bottom panel of Figure 1), such that, again, β is reasonably uniform throughout the midplane. Here the vector potential has only an azimuthal component, chosen as a function of cylindrical radius so that the resulting field is purely vertical. Such a field configuration is subject to amplification due to both the shearing at the interface between the disk and background medium and the MRI inside the disk. In non-radiative, shearing box simulations, such a configuration can reach saturation field strengths of β_mid = P_gas,0/P_mag,0 ≈ 0.25 even for initially weak fields (Bai & Stone 2013; Salvesen et al. 2016). If similar saturation strengths could be reached in radiation-pressure-dominated cases, then this may be enough to stabilize the disk (Sądowski 2016b). At the start of all our simulations, the magnetic fields are normalized to match a specific target value for β_mid. In the dipole, quadrupole, and multi-loop cases, β_mid ≈ 100, while for the vertical field, β_mid ≈ 1000. Since all of our simulations start from the same base disk configuration as the S3E simulation in Fragile et al. (2018), we follow the same naming convention in this paper. To distinguish the different field configurations, we append the base simulation name with a "d" for the dipole case, a "q" for the quadrupole case, an "m" for the multi-loop case, and a "v" for the vertical field case. All of the simulations presented in this work are summarized in Table 1. The first column shows the model name (S3Ed: dipole, S3Eq: quadrupole, S3Em: multi-loop, S3Ev: net-vertical field, and 4L for four levels of refinement). The second column shows the duration of each simulation. In the third and fourth columns we report the initial and late-time (t = 20000 GM/c³) magnetic flux threading the disk. The initial magnetic flux is computed through a sphere of radius r = 15 GM/c² as $\Phi_0 = \int |B^r| r^2 \sin\theta \, d\theta \, d\phi$. Since the dominant field component at late times is the toroidal one, we calculate the late-time flux by integrating |B^φ| over a poloidal fan covering 4 ≤ r/(GM/c²) ≤ 15 and 0 ≤ θ ≤ π. The flux required to stabilize thin accretion disks such as ours was estimated in Sądowski (2016b) to be roughly 2 × 10²³ G cm². Our initial fluxes are about 4-6 orders of magnitude below this level, while at late times, models S3Eq and S3Ev reach fluxes within 1-2 orders of magnitude of this level.
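The initial-flux diagnostic Φ₀ defined above is a simple quadrature over a spherical shell. The sketch below shows one way to evaluate it; the grid arrays and toy field are placeholders standing in for actual simulation output, and only the quadrature itself follows the text.

```python
# Sketch of the initial-flux diagnostic: Phi_0 = ∮ |B^r| r^2 sin(theta) dtheta dphi,
# evaluated on a shell at fixed radius. Grid arrays and the toy field are placeholders.
import numpy as np

def radial_flux_through_shell(Br, theta, phi, r_shell):
    """Integrate |B^r| r^2 sin(theta) over a (theta, phi) shell.

    Br    : 2-D array of the radial field on the shell, shape (n_theta, n_phi)
    theta : 1-D array of cell-centred polar angles
    phi   : 1-D array of cell-centred azimuthal angles
    """
    dtheta = np.gradient(theta)
    dphi = np.gradient(phi)
    dA = r_shell**2 * np.sin(theta)[:, None] * dtheta[:, None] * dphi[None, :]
    return np.sum(np.abs(Br) * dA)

# Toy example: a uniform radial field through a shell at r = 15 (code units),
# covering the pi/2 azimuthal wedge used in these simulations.
theta = np.linspace(0.01, np.pi - 0.01, 96)
phi = np.linspace(0.0, np.pi / 2, 24)
Br = np.ones((96, 24))
print(radial_flux_through_shell(Br, theta, phi, 15.0))
```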
Interestingly, some of our fluxes approach those required for a MAD disk (∼10²² G cm²; Sądowski 2016b), though most of that flux is in the toroidal component. The poloidal flux threading our inner boundary remains well below the MAD limit in all cases. The last column in Table 1 shows the eventual fate of each model.

STABILITY RESULTS In order to fully evaluate the stability of each field configuration, we run each simulation until either the disk clearly collapses (or expands) or until many thermal timescales have passed. In this study, we take the thermal timescale to be t_th = 0.1(αΩ)⁻¹, where Ω is the local orbital frequency. To standardize our plots, we take α = 0.02, which is based on our expectations for the dipole and multi-loop cases. We expect α to be higher, and the true t_th to be correspondingly shorter, in the quadrupole and vertical cases. This estimate of t_th is a factor of 10 shorter than the estimate we used in Fragile et al. (2018). The reason is that, without an explicit viscosity in these simulations, there is initially very little heating in the disk. Thus, the thermal energy content is smaller at early times than in the simulations of Fragile et al. (2018). This reduced thermal energy content reduces the time needed for heat to diffuse out (the thermal timescale). For simulations that are able to stabilize themselves, with heating and cooling balancing out and a higher thermal energy content present in the disk, the thermal timescale probably returns to something closer to (αΩ)⁻¹. For all purported stable simulations, we run them for up to 30,000 GM/c³, which is over 300 orbits and dozens of (initial) thermal timescales at the ISCO. A big caveat, though, is that this only covers about one viscous timescale, t_vis = r²/ν = r²/(α c_s H), at that same radius. Another caution is that previous work has shown that the onset of the thermal instability can sometimes be delayed for periods of up to hundreds of orbits (Jiang et al. 2013; Ross et al. 2017), so it is possible that one or more of our stable configurations could turn out to be unstable if they were allowed to run indefinitely.

3.1. Zero-net-flux, dipole field case (S3Ed) It should be of little surprise that all of our simulated disks undergo an initial period of collapse (over roughly a thermal timescale). Remember, they all start off supported by radiation pressure, and since it takes the MRI a few orbital periods to reach saturation, there is a stretch at the beginning of each simulation when there is minimal turbulence, hence little energy dissipation and heating, and magnetic pressures are low. As radiation leaks out of the disk and is not replaced during this period (roughly the first thermal timescale), the vertical support declines and the disk height shrinks. In our analysis, the disk height is calculated as a density-squared-weighted average of the height above the midplane, where the integrals are carried out over each radial shell and dV is the proper volume of a computational cell. Space-time plots of the scale height (H/R) are provided in Figure 2 for the dipole, quadrupole, and vertical cases. The difference between our simulations is the degree to which the disk recovers from this initial collapse. In the case of the dipole configuration, shown in the left panel of Figure 2, the disk marginally recovers.
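For reference, the thermal and viscous timescales used above can be evaluated as follows, taking Ω to be the Keplerian orbital frequency in geometrized units. The thin-disk relation c_s ≈ HΩ used in the example is an assumption for illustration, as are the specific values of α, H and c_s.

```python
# The thermal and viscous timescales used above, in geometrized units (G = c = M = 1),
# with Omega approximated as Keplerian. c_s ~ H*Omega below is an illustrative assumption.
def omega_keplerian(r):
    """Keplerian orbital frequency at radius r (r in GM/c^2), in units of c^3/GM."""
    return r ** -1.5

def t_thermal(r, alpha=0.02):
    """t_th = 0.1 (alpha * Omega)^-1, in GM/c^3."""
    return 0.1 / (alpha * omega_keplerian(r))

def t_viscous(r, H, c_s, alpha=0.02):
    """t_vis = r^2 / (alpha * c_s * H), in GM/c^3."""
    return r**2 / (alpha * c_s * H)

r_isco = 6.0
H = 0.03 * r_isco                      # H/R ~ 0.03 thin disk
c_s = H * omega_keplerian(r_isco)      # hydrostatic estimate of the sound speed
print(t_thermal(r_isco))               # ~70 GM/c^3
print(t_viscous(r_isco, H, c_s))       # ~8e5 GM/c^3, i.e. far longer than t_th
```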
While the MRI attempts to heat the disk, this heating rate is not quite sufficient to balance the cooling rate, especially at small radii. Figure 3 shows the ratio of heating, Q⁺, to cooling, Q⁻, demonstrating the slight dominance of the latter for the dipole case (left panel). The net heating rate per unit surface area is computed from the r-φ stress acting on the local orbital shear, where the integration is carried out within the limits of the effective photosphere and the integrand is azimuthally averaged, with V^φ ≈ Ω the azimuthal component of the fluid three-velocity and W_r̂φ̂ the covariant r-φ component of the MHD stress tensor in the co-moving frame. The radiative cooling is computed by tracking the radiative flux through the photosphere at each radius, where F^θ_photo±(R) = −(4/3) E_R u^θ_R (u_R)_t is the flux escaping through the top or bottom photosphere. As advective cooling is not important in these simulations, we ignore its contribution to Q⁻. Also neglected is the contribution of the radial component of the radiative flux to cooling, which is appreciable only close to the ISCO.

Figure 3. Ratio of heating to cooling for the S3Ed (left), S3Eq (middle), and S3Ev (right) simulations, showing that cooling dominates, especially at smaller radii, in S3Ed; heating and cooling balance fairly well in S3Eq; and heating seems to mostly dominate, especially in the outer regions, in S3Ev. Inside of ≈ 10 GM/c², the plots are white because the disk is effectively optically thin in those regions, and our heating and cooling formulae do not apply. Beyond that radius, the occasional white patches correspond to locations where the cooling rate takes on negative values, which happens when the flux measured at the photosphere points toward the disk midplane, rather than away. The solid white curves show the estimated thermal timescale.

Additionally, the magnetic pressure in the S3Ed model fails to reach the stability threshold, P_mag > 0.5 P_tot, except maybe very close to the disk midplane, as shown in Figure 4 (left panel). The S3Ed configuration takes longer to amplify the magnetic field compared to other runs (particularly S3Eq and S3Ev) and ultimately is unable to fully compensate for the lost thermal and radiative support. Instead, the S3Ed disk is effectively "frozen" and settles down to a new solution at a lower mass accretion rate and luminosity (Figure 5). However, if the outer parts of the disk are still supplying material at the original, higher rate, as they would be in a real disk or in a much larger and longer simulation, then matter must begin to pool somewhere in the disk. Eventually, this excess material must be accreted, likely in a rapid burst, after which the cycle would likely repeat (Cannizzo et al. 1995). As this type of limit-cycle behavior is not seen in most BHXRBs, this argues against such disks having a predominantly dipole field configuration. Finally, we find an interesting anti-correlation between the scale height of the disk (Figure 2) and the effective viscosity, defined here as the density-weighted, height-averaged ratio of the covariant r-φ component of the stress tensor to the total pressure, i.e., α ≡ ⟨W_r̂φ̂/P_tot⟩_ρ. A space-time diagram of this quantity is shown in Figure 6. It appears there may even be a threshold value of α > 0.01 associated with stability.

3.2. Zero-net-flux, quadrupole field case (S3Eq) Unlike the dipole configuration, which seems to quickly collapse to a different configuration, the quadrupole simulation recovers from its initial collapse to re-inflate back to a height comparable to its original profile, as shown in Figure 2 (middle panel).
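The effective viscosity defined at the end of the dipole discussion above, α ≡ ⟨W_r̂φ̂/P_tot⟩_ρ, is a density-weighted vertical average of the stress-to-pressure ratio. A sketch of that diagnostic is given below; the array names and toy profiles are placeholders, not output from these simulations.

```python
# Sketch of the effective-viscosity diagnostic: the density-weighted, height-averaged
# ratio of the r-phi stress to the total pressure. Arrays are placeholders.
import numpy as np

def alpha_effective(W_rphi, P_tot, rho):
    """Density-weighted average of W_rphi/P_tot over a vertical column (1-D arrays)."""
    return np.sum(rho * W_rphi / P_tot) / np.sum(rho)

# Toy column: stress ~1% of pressure near the midplane, rising towards the surface.
z = np.linspace(-1.0, 1.0, 41)
rho = np.exp(-0.5 * (z / 0.3) ** 2)          # Gaussian density profile
P_tot = rho                                   # isothermal-like toy pressure
W_rphi = 0.01 * P_tot * (1.0 + np.abs(z))     # stress-to-pressure ratio grows with |z|
print(alpha_effective(W_rphi, P_tot, rho))    # ~0.012 for this toy profile
```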
Later (t > 12,000 GM/c³), the disk even "bounces" a few times (shown by alternating increases and decreases in height at a given radius). The notably different behavior of simulation S3Eq compared to S3Ed can best be understood by comparing the left and middle panels of Figure 4. As mentioned previously, the dipole simulation is plagued by a midplane current sheet at early times that prevents the magnetic pressure from building up sufficiently over the bulk of the disk. The quadrupole simulation, by contrast, does not have a midplane current sheet. In fact, during the initial period of collapse, the midplane value of β_r⁻¹ actually increases because more magnetic field is being squeezed into a given volume. Without a current sheet to dissipate this energy, the disk is able to store it, and even amplify it, in order to use it later to restore the disk back to something close to its original configuration. More quantitatively, we can see from Figure 4 that simulation S3Eq achieves and sustains the required β_r⁻¹ ≳ 0.5 (−0.3 on the logarithmic color bar of Figure 4), at least until t ≈ 14,000 GM/c³. However, we see evidence in Figure 7 that, beginning around 6,000 GM/c³ for case S3Ed (left panel) and 12,000 GM/c³ for case S3Eq (middle panel), the toroidal magnetic field starts buoyantly rising out of the disk. At the same time, the coherent, extended, radial field component, which is crucial for replenishing the toroidal field, begins to break up due to turbulent motions of the MRI. Therefore, the toroidal field is no longer being replaced as fast as it is being lost, and β_r⁻¹ drops. In our higher resolution studies, we found that these same changes occurred, but about 50% later in time, so the timing of this transition from stability to instability is apparently not fully resolved yet. Interestingly, there remain periods during the evolution of S3Eq, and to some extent S3Ed, when a strong radial field component is able to reestablish itself near the midplane, although not always of the same polarity as the initial field. This revived radial field is able to generate sufficiently strong toroidal fields to briefly allow β_r⁻¹ to again approach the stability limit, but these periods are relatively short and the fields localized; thus, they are not enough to truly restore stability. As a result, S3Eq appears to undergo multiple instability cycles, with the disk expanding and contracting vertically on roughly the local thermal timescale. Similar oscillations between stability and instability are suggested in the figures of Sądowski (2016a), though the author makes no specific mention of this.

Figure 5. Mass accretion rate measured near the ISCO, scaled to the Eddington luminosity, i.e., ṁ = Ṁc²/L_Edd (upper panel); luminosity through a radial shell at r ≈ 15 GM/c² (middle left panel) and r ≈ 20 GM/c² (lower left panel), scaled to the Eddington luminosity; and the radiative efficiency, η = L/(Ṁc²) = (L/L_Edd)/ṁ, at each radius (rightmost panels). The horizontal thin, dashed black lines show the target values of ṁ = 3, L/L_Edd = 0.17, and η = 0.057, respectively. Because the unstable simulations (S3Ed and S3Em) have collapsed to a different disk solution, they settle to a lower ṁ and L. Note that for these plots we employ moving window averages, with window widths equal to three consecutive data points, to smooth them.
It is interesting that it is only after simulation S3Eq loses its magnetic pressure support and begins to oscillate in height that it reaches a fairly steady state in terms of mass accretion rate and luminosity (Figure 5), with values very close to the targets for this study. During this same period, simulation S3Eq reestablishes a rough thermal equilibrium (Q⁺ ≈ Q⁻), as shown in Figure 3 (middle panel). An initial period of cooling domination (t ≤ 2,500 GM/c³) and a later period of heating domination (10,000 ≤ t ≤ 15,000 GM/c³) are also noticeable. We will make a more detailed comparison of this simulation with the Shakura-Sunyaev model in Section 5.

3.3. Zero-net-flux, multi-loop case (S3Em) The zero-net-flux, multi-loop simulation is qualitatively very similar to the simulations we reported in Mishra et al. (2016). It is also the case where the disk is least likely to stabilize, as zero-net-flux MRI turbulence saturates at a field strength of β_r ∼ 10 (Turner 2004; Hirose et al. 2009) and there is no global field component for the Ω-dynamo to amplify. Not surprisingly, we see this disk collapse on the local thermal timescale until it becomes too poorly resolved to sustain MRI turbulence. Because of this, we choose not to present any figures specifically for this simulation, although it is included in Figure 5. The solid black curve in Figure 5 shows that this model remains under-luminous and also maintains a low mass accretion rate. Our results, plus Mishra et al. (2016), support our conclusion that this field configuration is unstable to thermal collapse in this mass accretion range.

3.4. Net-flux, vertical field case (S3Ev) The net-flux, vertical magnetic field configuration leads to our only fully stable disk configuration, in this case by producing the strongest magnetic pressure support among all the models we simulated. It quickly satisfies β_r⁻¹ ≳ 0.5 (right panel of Figure 4) and has a heating rate that matches, or even exceeds, its cooling rate (right panel of Figure 3) and hence causes the disk to elevate (right panel of Figure 2; see also Figures 4, 7 and 9). The strong magnetic pressure support in this case happens despite the presence of a midplane current sheet (seen in the right panel of Figure 7). This is owing to the very strong magnetic pressure gradients found just above and below the disk midplane, as we will show in the next section. Another interesting feature of this particular configuration is that it hits the target mass accretion rate and luminosity pretty much right from the beginning (green, dot-dashed curves in Figure 5). This is in contrast to the S3Eq case, which also achieves the target values, but only after β_r⁻¹ drops below 0.5. This suggests that a net-flux, vertical field configuration does not need to wait for the MRI to develop to begin driving accretion. In fact, the subsequent development of the MRI does not appear to appreciably affect the luminosity. This finding is consistent with previous Newtonian global MHD models reported in Mishra et al. (2020), where the dominant accretion on the surface was driven by the coherent, rather than turbulent, component of the Maxwell stress.

DISK PROFILES We now carefully compare the spatial profiles of the dipole (S3Ed), quadrupole (S3Eq), and vertical (S3Ev) field simulations, focusing mostly on the quadrupole and vertical field cases, since those are the ones we found to be stabilized by strong magnetic pressure support.
Figure 8 shows azimuthally averaged profiles of mass density (top panels), mass flux (second row), and the three magnetic field components (bottom three rows) for each of the three field configurations at t = 20,000 GM/c^3. Note that we do not show time-averaged profiles due to the previously mentioned field reversals observed for case S3Eq (middle panel of Figure 7). Time averaging, especially of the toroidal field, would lead to an incorrect conclusion about its strength. The density profiles (top panels) for both S3Eq and S3Ev show a puffy structure with a reduced disk midplane density compared to its initial value, a trait that is more obvious at larger radii (R > 15 GM/c^2). Although our models cannot scale to a supermassive black hole, such a decrease in the disk midplane density (purely an effect of magnetic pressure increase) could play a role in stabilizing active galactic nuclei disks against gravitational instability (Shlosman & Begelman 1987; Riols & Latter 2018). In each model reported here, the scattering photosphere (dashed, white curves) is quite thick (only visible in the upper and lower left corners of the top panels). The effective photosphere (dot-dashed, white curves) lies inside the disk height (solid, blue curves) at small radii (R ≲ 12 GM/c^2), but flares out beyond that radius. Since these simulations have not run long enough to reach an equilibrium state much beyond that radius, the flaring could just be a transient feature. Finally, the absorption photosphere (solid, green curves) lies inside the disk height at all radii. The profiles of mass flux, ⟨ρ⟩_φ⟨V^r⟩_φ (second row), show that, within R ≤ 15 GM/c^2, all of the simulations accrete primarily through the disk midplane, though S3Ed and S3Ev also exhibit significant accretion along the disk surface. The surface accretion is associated with regions of extended radial field coherence seen in the ⟨B^r⟩_φ plot (third row). For S3Ev, these are regions where the magnetic field doubles back on itself (if you were to follow a single "vertical" magnetic field line from the disk midplane to larger heights, you would see it first bend radially inward and then outward), as noted in previous Newtonian MHD simulations (Zhu & Stone 2018; Mishra et al. 2020). Despite such an irregular profile of ⟨B^r⟩_φ, ⟨B^φ⟩_φ maintains a strong (two orders of magnitude larger in amplitude compared to ⟨B^r⟩_φ), coherent field structure (fifth row, right panel). The S3Eq model also has a strong toroidal field, but with a polarity switch at R ≈ 12 GM/c^2. This is interesting because the initial field configuration has its radial component pointing outwards near the midplane, which should lead to a negative component of toroidal magnetic field (as it does at large radii). However, in the inner region (R ≲ 12 GM/c^2), the disk rearranges the magnetic field, leading to a positive toroidal field. This can be better understood by recalling the toroidal field reversals seen in the middle panel of Figure 7. The S3Ev case, on the other hand, has its dominant polarity change roughly in the disk midplane, with stronger ⟨B^φ⟩_φ overall compared to model S3Eq. The vertical field component, ⟨B^θ⟩_φ (fourth row), has a more complicated, turbulent structure compared to the radial and toroidal field components.
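The azimuthally averaged quantities plotted in Figure 8 are straightforward reductions of the 3D snapshot data. A minimal sketch follows; the array layout and variable names are hypothetical, and we simply assume fields are stored on an (r, θ, φ) grid with φ as the last axis.

```python
import numpy as np

def azimuthal_average(q):
    """<q>_phi: average a field q(r, theta, phi) over the last (phi) axis."""
    return q.mean(axis=-1)

# Illustrative snapshot with random data standing in for rho, V^r, B^r, B^phi
nr, nth, nphi = 64, 32, 16
rho, v_r, B_r, B_phi = (np.random.rand(nr, nth, nphi) for _ in range(4))

mass_flux = azimuthal_average(rho) * azimuthal_average(v_r)  # <rho>_phi <V^r>_phi
Br_av, Bphi_av = azimuthal_average(B_r), azimuthal_average(B_phi)
```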
Continuing our analysis, in Figure 9 we present azimuthally averaged vertical profiles of a number of key disk parameters, all measured at a radius of R = 15 GM/c^2 and at time t = 20,000 GM/c^3 for the same three field configurations: dipole, quadrupole, and vertical. In panel (1), the density profile shows that model S3Ed exhibits a much thinner, denser slab profile, whereas the stable configurations, S3Eq and S3Ev, show an enhanced density at higher altitudes, with a corresponding decrease in their midplane values. In panels (2) and (3), we show radial velocity and mass accretion rate plots. The azimuthally averaged magnetic pressure (panel 4) shows one of the key differences between models S3Eq and S3Ev. In the case of S3Eq, the magnetic pressure is highest close to the disk midplane and tapers off from there, whereas the midplane magnetic pressure is relatively low in the case of S3Ev because of the current sheet there. But on either side of the midplane, the magnetic pressure is quite high. In fact, it is perhaps a bit surprising that such different magnetic pressure profiles could yield relatively similar density profiles (panel 1). This is explained by the fact that the radiation pressure is still a major contributor, and has a roughly similar profile in both cases (not plotted). In panels (5) and (6) we show the radial and azimuthal magnetic field components. The radial magnetic field shows a complicated structure with multiple reversals in the S3Eq and S3Ev cases. The azimuthal magnetic field is two orders of magnitude larger in amplitude and maintains a coherent structure, showing no field reversals over this height in the S3Eq case and only one field reversal, at the disk midplane, in the S3Ev case. In panel (7), we show T^M_{rφ} = −⟨B^r⟩_φ⟨B^φ⟩_φ, which is the coherent component of the Maxwell stress tensor. Near the disk midplane, all the models show very little coherent field. This suggests that the disk region is turbulent, which hinders the development of coherence. The S3Ev model, though, develops coherent magnetic field at higher altitudes, though the sign of the stress reverses multiple times. No such behavior is seen in the other two models. In panel (8) we show the vertical component of the coherent stress, which is again very small for cases S3Ed and S3Eq, whereas S3Ev again exhibits large fluctuations at higher altitudes. In panel (9), we show the vertical profile of the heating rate. Interestingly, the S3Eq and S3Ev cases have nearly identical heating rates in the disk midplane. At higher altitudes (around z ≈ ±2.5 GM/c^2), the S3Ev case has about an order of magnitude larger heating compared to S3Eq and S3Ed. These results inform the longstanding question of where the maximum dissipation occurs within a disk. Standard accretion disk theory assumes that disk heating is primarily confined to the disk midplane, as exhibited by our S3Eq case. This is consistent with its magnetic pressure profile. Contrarily, the S3Ev model shows the least heating in the disk midplane (yet still equal to the S3Eq case), but greatly enhanced heating in the region 1 ≲ |z|/(GM/c^2) ≲ 4. If we compare panels (6) and (9), we see that the enhanced heating rate correlates roughly with the amplitude of the toroidal magnetic field.
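Panel (7) of Figure 9 is built directly from the azimuthally averaged field components. As a short illustration, the coherent stress defined above, together with the residual turbulent part (which is not shown in Figure 9 and is included here only for comparison), could be evaluated as follows; the variable names are again hypothetical.

```python
import numpy as np

def maxwell_stress_rphi(B_r, B_phi):
    """Split the r-phi Maxwell stress into coherent and turbulent parts.

    coherent:  -<B^r>_phi <B^phi>_phi   (the quantity shown in panel 7)
    turbulent: <-B^r B^phi>_phi - coherent
    """
    coherent = -B_r.mean(axis=-1) * B_phi.mean(axis=-1)
    total = -(B_r * B_phi).mean(axis=-1)
    return coherent, total - coherent
```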
In the S3Eq case, the strongest toroidal magnetic field is confined within the disk region with a maximum at the disk midplane, whereas in the S3Ev case, the toroidal magnetic field has a local minimum at the disk midplane (due to the current sheet there), while it is larger at higher altitudes. Although these are ideal GRMHD models, the heating due to magnetic energy dissipation still scales with the available magnetic energy in a given region. The enhanced magnetic energy in the S3Ev case means there is more available energy to cascade down; hence the enhanced heating in this case. This enhanced heating may be compensated by extra cooling due to outflows (some of which are seen in the second row of Figure 8) that help maintain thermal stability (Li & Begelman 2014).

5. COMPARING OUR STABLE SOLUTIONS TO SHAKURA-SUNYAEV

We find that, despite their different magnetic field setups and evolution, both the S3Eq and S3Ev configurations can lead to stable disks supported primarily by magnetic pressure, though the S3Eq model seems to fluctuate between stability and instability. Both cases achieve mass accretion rates and luminosities close to what would be expected based on our starting Shakura-Sunyaev model. We now compare how well our simulated disks match other predicted properties of the Shakura-Sunyaev model. Since the Shakura-Sunyaev model is a one-dimensional, vertically integrated model without explicit turbulence, we consider time-averaged, radial profiles of our simulations to be the closest proxy. Three key properties to focus on are the disk height, temperature, and surface density. In Figure 10, we provide azimuthally and time-averaged radial profiles of the mass accretion rate, ṁ; the disk height, H, defined as in eq. (13); the disk gas temperature, T_gas; and the disk surface density, Σ. From the plot of ṁ, we can see that each simulation has achieved a steady state out to about R ≈ 15 GM/c^2, with S3Eq and S3Ev closely straddling the target value. The S3Ed and S3Em models show lower accretion rates, consistent with their collapse. The upper right panel compares the disk scale height with the standard disk model (black dashed curve). We can see that both of the stable disk configurations (S3Eq and S3Ev) achieve heights much larger than the standard disk model, with a disk scale height of H/R ≈ 0.03 at R = 15 GM/c^2, which is nearly a factor of two larger than predicted by the Shakura-Sunyaev model (although the higher resolution, 4-level model, S3Eq4L, is somewhat thinner). This thickened structure is in agreement with previous GRRMHD simulations of black hole accretion disks at comparable accretion rates (e.g., Sądowski 2016a; Lančová et al. 2019; Wielgus et al. 2022). S3Ed (and even more so S3Em), on the other hand, has collapsed below the expected disk scale height. The lower left and right panels show profiles of T_gas and Σ. We notice that all our disk configurations have temperature profiles nearly the same as the Shakura-Sunyaev model. The profiles of Σ, by contrast, show elevated surface densities (factors of 3-10) compared to the standard model.

Figure 10. Our stable disk models, S3Eq and S3Ev, show a height profile somewhat thicker than the model prediction. The temperature remains nearly the same as what we started with, while the surface density finishes slightly higher than its initial configuration. Note that the dashed black curves are computed using the initial disk profiles from the numerical model.

6. COMPARISON TO PREVIOUS SIMULATIONS

We already mentioned that our zero-net-flux, multi-loop simulation, S3Em, is very similar to the simulations we presented in Mishra et al. (2016). The biggest difference is in the initial setup of the disk. In Mishra et al. (2016), we initialized the disk as a constant-height slab of gas, orbiting everywhere at the Keplerian frequency.
As such, the simulation was in some ways more akin to a radially extended shearing box simulation than a traditional global one. A negative consequence of that choice was that the disk did not start on the thermal equilibrium curve for a standard disk. This made it more difficult to assess the true nature of the instabilities we witnessed, though a thermal collapse was clearly evident. This is fixed in the current paper by starting from the Shakura-Sunyaev solution, which, by construction, begins on the thermal equilibrium curve. Despite the differences, both sets of simulations reach the same conclusion: that a zero-net-flux, multi-loop magnetic field configuration is unable to stabilize itself in the radiation-pressure-dominated regime. A couple of other simulations we performed share some basic properties with simulations presented in Sądowski (2016a). Specifically, our zero-net-flux, dipole case, S3Ed, is very similar to simulation "D" of that paper, and our zero-net-flux, quadrupole case, S3Eq, is very similar to their simulation "Q." The biggest difference is that Sądowski (2016a) started each simulation from a torus of gas initially located far (R ≳ 40 GM/c^2) from the black hole. With the build up of MRI turbulence, that torus spreads out into a flatter, wider disk. Although this is a popular starting configuration, it does have its shortcomings when it comes to the goals of studying thermal stability. One is that it is nearly impossible, a priori, to know what mass accretion rate one will get from such simulations; it is a matter of trial and error. Another is that the mass accretion rate usually decays over longer timescales, as the torus eventually depletes of mass. Finally, this configuration can only achieve inflow equilibrium inside of roughly the initial mid-radius of the torus. To avoid these issues, and to more directly compare with theoretical predictions, we chose instead to initialize our simulations from the Shakura-Sunyaev disk solution (Shakura & Sunyaev 1973), with weak magnetic fields added. This gets us around the shortcomings of the torus configuration in that we set the mass accretion rate as one of our input parameters, matter can be continually fed in from the outer boundary of the simulation domain, and an inflow equilibrium can, in principle, be reached over most of the grid. It is noteworthy, then, that despite these differences in the setup, we reach essentially the same conclusions as Sądowski (2016a), namely that the dipole configuration is unstable, while the quadrupole one is marginally stable. Sądowski (2016a) concluded that the dipole configuration was unstable based upon: 1) a very thin profile (low H) of the disk; 2) a dominance of cooling over heating; 3) a lack of magnetic pressure support; and 4) a drop in ṁ, the same points we made in Section 3.1. Conversely, Sądowski (2016a) concluded that the quadrupole configuration was stable based upon: 1) a balance of heating and cooling; 2) magnetic pressure domination, β_r^{-1} ≳ 0.5; and 3) a steady ṁ, the same points we made in Section 3.2.
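For concreteness, these stability criteria can be phrased as a simple diagnostic applied to time series extracted from a run. The sketch below is only illustrative; the thresholds and the steadiness tolerance are our own choices for this example, not values taken from Sądowski (2016a).

```python
import numpy as np

def is_thermally_stable(q_plus, q_minus, inv_beta_r, mdot,
                        balance_tol=0.2, beta_floor=0.5, mdot_tol=0.2):
    """Rough stability check combining the criteria listed above:
    1) heating balances cooling, 2) beta_r^{-1} >~ 0.5, 3) a steady mdot.
    All inputs are 1D time series sampled over the interval of interest."""
    heating_ok = abs(np.mean(q_plus) / np.mean(q_minus) - 1.0) < balance_tol
    pressure_ok = np.median(inv_beta_r) >= beta_floor
    mdot_ok = np.std(mdot) / np.mean(mdot) < mdot_tol
    return heating_ok and pressure_ok and mdot_ok
```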
These same two simulations, S3Ed and S3Eq, also bear some similarities to the two simulations presented in Jiang et al. (2019), in that one of the Jiang et al. (2019) simulations, AGN0.2, started with a dipole field configuration, while the other, AGN0.07, used a quadrupole. Both were initialized from a torus configuration similar to Sądowski (2016a), although the Jiang et al. (2019) simulations were tuned to active galactic nuclei (AGN) parameters (e.g., M = 5 × 10^8 M_⊙). Nevertheless, they cover similar ranges of mass accretion rate and luminosity as our simulations, and are, thus, relevant to a discussion of thin-disk stability. Although Jiang et al. (2019) claim that both of their simulations are thermally stable, there are hints in their Figures 1 and 2 that the AGN0.2 (dipole) model undergoes a transition around 40,000 GM/c^3, where Ṁ and L increase (by a factor of 2 in the case of L), the midplane density jumps up (by an order of magnitude), and the gas temperature inside the disk drops (also by about an order of magnitude). All of this happens around the same time B_φ undergoes a major field reversal. Most of this is reminiscent of the transition we see in our S3Eq simulation around 12,000 GM/c^3, which we associate with a transition from stability to instability. Interestingly, the quadrupole simulation in Jiang et al. (2019) appears to remain stable until at least 45,000 GM/c^3, nearly four times longer than our S3Eq simulation and about 2.5 times longer than S3Eq4L. Since the Jiang et al. (2019) simulations were done at a higher effective resolution than ours, this could just be an extension of the effect we noticed that higher resolution simulations remain stable longer. Or it could be a product of other differences, such as how the radiation is handled in each case. In addition to S3Ed and S3Eq, we also modeled an initial magnetic field configuration with net vertical magnetic flux (S3Ev). Such net-flux configurations have been gaining interest in recent years because they produce higher effective viscosities (Hawley et al. 1995; Bai & Stone 2013; Salvesen et al. 2016) and feed dynamically important magnetic fields toward the central object (Igumenshchev 2008; Beckwith et al. 2009; Cao 2011). Due to the numerical challenges in simulating such a magnetic field configuration, there are very few reported global simulations of them, especially involving thin disks; a couple of examples are Zhu & Stone (2018) and Mishra et al. (2020). Our S3Ev model shows rapid magnetic field amplification and surface accretion with weak outflows along the disk midplane, similar to those reported in Mishra et al. (2020).

7. DISCUSSION AND CONCLUSIONS

We performed four simulations of Shakura-Sunyaev thin disks threaded with different magnetic field configurations to evaluate their thermal stability and confront these models with observations of stable disks. Our zero-net-flux quadrupole and net-flux vertical field configurations seem to have achieved the desired stability (at least for some period). Both evolve to a late-time accretion rate of ≈ 3 L_Edd/c^2 and luminosity of ≈ 0.17 L_Edd. The properties of these simulated disks are broadly consistent with the Shakura-Sunyaev model, with turbulent heating largely matching radiative cooling inside a magnetic-pressure-supported thin disk. As mentioned previously, it could be that the thermal instability would manifest itself on longer timescales (hundreds of orbital periods and many thermal timescales).
However, we have confirmed that our stable configurations can reach β_r^{-1} ≳ 0.5, which should remove the instability. One limitation of our simulations is that they only cover a quarter of the azimuthal domain. This prevents us from capturing low-order azimuthal structures such as the spiral features seen in Mishra et al. (2020). These could substantially alter the density profile of the disk and hence its radiative properties. Another issue is that all of our simulations undergo an initial thermal collapse before some of them recover. In order to prevent this initial transient feature, we could have started our simulations from an already turbulent disk to overcome the initial imbalance between heating and cooling. However, it is unlikely, in our opinion, that this would have changed the outcomes of any of the models. The key to stabilizing these disks is their ability to rapidly build up and sustain a large magnetic pressure. Simulations without a substantial radial or vertical magnetic flux are unlikely to ever achieve a dynamically significant magnetic pressure, whether via the MRI or the Ω-dynamo. This statement appears to have recently gained additional support from the reported thermal collapse of simulated thin disks threaded only by toroidal magnetic fields (Liska et al. 2022). There are many plausible magnetic field topologies for accretion disks; here we have only considered four very simplified ones. It could be that in real astrophysical systems there are a few preferred topologies. Recent EHT polarization measurements of M87 suggested an organized poloidal field component in the near-horizon region (Event Horizon Telescope Collaboration et al. 2021a,b). Sądowski (2016b) gave an estimate of how strong such fields would need to be to stabilize the disks in particular X-ray binaries. Although he argued that it is reasonable for such field strengths to be provided by the companion star, he left open the question of how the fields might reach the inner accretion disk regions where thermal stability is a question. We, too, have dodged this important question for now. One thing our work adds to this debate is that a net magnetic flux may not be necessary for thermal stability. In most accretion flows, the toroidal magnetic component will dominate due to the Ω-dynamo, even in so-called MAD disks (Begelman et al. 2022). This component can provide quite high magnetic pressure, even while supporting the MRI (Wielgus et al. 2015). The question is really how this toroidal component sustains itself. This is where either a background vertical or extended radial field becomes crucial, as it will allow the toroidal component to be continually regenerated. One possible new requirement from our current work is that, in order to achieve the strengths required to stabilize radiation-pressure-dominated disks, a radial field configuration must not be strongly affected by a midplane current sheet.