Intramuscular Delivery of Gene Therapy for Targeting the Nervous System

Virus-mediated gene therapy has the potential to deliver exogenous genetic material into specific cell types to promote survival and counteract disease. This is particularly enticing for neuronal conditions, as the nervous system is renowned for its intransigence to therapeutic targeting. Administration of gene therapy viruses into skeletal muscle, where distal terminals of motor and sensory neurons reside, has been shown to result in extensive transduction of cells within the spinal cord, brainstem, and sensory ganglia. This route is minimally invasive and therefore clinically relevant for gene therapy targeting to peripheral nerve soma. For successful transgene expression, viruses administered into muscle must undergo a series of processes, including host cell interaction and internalization, intracellular sorting, long-range retrograde axonal transport, endosomal liberation, and nuclear import. In this review article, we outline key characteristics of major gene therapy viruses—adenovirus, adeno-associated virus (AAV), and lentivirus—and summarize the mechanisms regulating important steps in the virus journey from binding at peripheral nerve terminals to nuclear delivery. Additionally, we describe how neuropathology can negatively influence these pathways, and conclude by discussing opportunities to optimize the intramuscular administration route to maximize gene delivery and thus therapeutic potential.

INTRODUCTION

With thousands of clinical trials to date, gene therapy is a flourishing strategy with great promise for the treatment of diseases impacting the nervous system. Indeed, virus-mediated gene therapies have now been approved by the FDA in the US for RPE65-associated retinal dystrophy (voretigene neparvovec, marketed as Luxturna) and SMN1-linked spinal muscular atrophy (SMA; onasemnogene abeparvovec, marketed as Zolgensma), as well as non-neuronal conditions (High and Roncarolo, 2019). Gene therapy viruses are non-replicating, but still hijack host cell machinery to express transgenes of interest in the nucleus. Crucially, some viral vectors (i.e., viruses specifically used to deliver genetic material into cells) have the potential to circumvent the blood-brain (BBB) and blood-spinal cord (BSCB) barriers when intravenously injected. Similarly, direct injection of viruses into the cerebrospinal fluid (e.g., via lumbar puncture in humans) also permits targeting of the peripheral (PNS) and central nervous systems (CNS). These two administration routes for neuronal delivery have been extensively covered in recent reviews (Hocquemiller et al., 2016; Deverman et al., 2018; Hudry and Vandenberghe, 2019). A complementary, and perhaps sometimes superior (Benkhelifa-Ziyyat et al., 2013), method to introduce genetic material into select neuronal populations is virus administration into muscle, which is the focus of this review. Muscles contain the synaptic connection between lower motor neurons and muscle fibers, i.e., the neuromuscular junction (NMJ), as well as specialized sensory nerve endings (e.g., muscle spindles). Viruses can be internalized into peripheral nerve terminals and subsequently retrogradely transported along axons to deliver viral payloads into corresponding motor and sensory neurons, with scope for widespread transfer to additional cells throughout the spinal cord and brain (Benkhelifa-Ziyyat et al., 2013; Chen et al., 2020).
The NMJ is a tripartite synapse comprised of a pre-synaptic motor nerve terminal, a post-synaptic muscle fiber, and several terminal Schwann cells (Li et al., 2018a). Moreover, the synaptic cleft consists of a complex and dynamic extracellular matrix (ECM) that contributes to receptor translocation and internalization of a variety of molecules (Heikkinen et al., 2020). Targeting muscles with viruses can transduce all three cellular constituents of the NMJ (Mazarakis et al., 2001; Homs et al., 2011)—by "transduction," we mean the introduction of genetic material into target cells. Furthermore, uptake at sensory nerve terminals can lead to transgene expression in dorsal root ganglia (DRG), trigeminal ganglia, and dorsal horn nerve fibers (Watson et al., 2016; Chen et al., 2020). When injected into a muscle, viruses are close to nerve endings for longer periods and at higher concentrations than when systemically injected. Moreover, limiting widespread virus distribution is likely to decrease safety risks due to immunogenicity or toxicity, while possible negative effects caused by central injections will be avoided. Hence, targeting muscle may prove to be a useful method to introduce viral vectors to certain central and peripheral neurons and/or glia. For this strategy to be exploited, viruses must undergo several major processes, including host cell binding, internalization, intracellular sorting, and retrograde axonal trafficking to neuronal soma before nuclear entry. In this review article, we outline these mechanisms for major gene therapy viruses—adenovirus (AdV), adeno-associated virus (AAV) and lentivirus (LV; Table 1)—with a focus on peripheral neurons. We also comment on the impact of neuropathology on using intramuscular virus injection as an administration route. To conclude, we discuss opportunities to optimize gene therapy delivery to muscle for nervous system targeting.

Adenovirus

AdVs are non-enveloped, double-stranded DNA viruses; the many human AdV (HAdV) types are classified by serology or sequencing. HAdVs primarily cause ocular, gastrointestinal, or respiratory infections (Ghebremedhin, 2014). It is estimated that more than 80% of the human population has been exposed to HAdV and develops type-specific humoral and cross-reactive cellular immunity (Ahi et al., 2011); hence, for utilization as a gene therapy vector, strategies to circumvent the host immune response have been examined (Duffy et al., 2012). In the 1990s, AdV became the first gene therapy virus to be tested in human clinical trials and currently remains the most investigated (Lee et al., 2017). The more common human serotypes 2 and 5, belonging to species C, have been the focus of gene therapy development. E1/E3-deleted AdVs have a relatively large packaging capacity of ≈8 kb, can transduce many different cell types, and form episomes rather than integrating into the host genome. Moreover, AdVs can be efficiently produced in large, concentrated quantities. In some hosts and some organs, transgene expression using AdV can be transient, likely due to host-specific responses, while in other cases, transgene expression remains robust for months (Li et al., 2016). In this regard, transient expression can be advantageous for scenarios requiring short-term upregulation of therapeutic genes and for limiting deleterious consequences that may arise from long-term expression (discussed in Tosolini and Morris, 2016b).
However, the transgene capacity of AdV can be increased up to ≈36 kb by removing essential elements and exogenously providing them for in vitro packaging; with this approach, the vectors lack the elements that usually activate host immunity, which can thereby facilitate prolonged expression (Ricobaraza et al., 2020). Permitting much broader options for transgene incorporation, this expansive packaging capacity is one major advantage of AdV over other viral vectors. AdVs display broad cell and tissue tropisms mediated by the interaction between their capsid and specific cellular receptors (Arnberg, 2012). Capsid modification, for instance by altering the virus genome or adding ligands, can widen or narrow tissue specificity depending on the required strategy (Worgall and Crystal, 2014). Direct intracranial injection of HAdV has been shown to result in the transduction of several different neuronal and non-neuronal cell types in the rodent CNS (Akli et al., 1993; Davidson et al., 1993; Le Gal La Salle et al., 1993). Furthermore, intramuscular administration of AdVs can result in their uptake at rodent NMJs and sensory terminals before retrograde transport to cell bodies (Finiels et al., 1995; Ghadge et al., 1995; Tosolini and Morris, 2016a), which is a viable strategy to counteract neuromuscular disease (Haase et al., 1998; Acsadi et al., 2002) and peripheral nerve injury (Giménez y Ribotta et al., 1997; Baumgartner and Shine, 1998). Of note, the canine adenovirus serotype 2 (CAV-2; also known as CAdV-2), which can cause mild respiratory infections in Canidae, has become the AdV of choice for neuronal transduction (Del Rio et al., 2019). Due to possessing greater specificity in host cell receptor binding than HAdVs, CAV-2 preferentially targets neurons (Soudais et al., 2001). Furthermore, it is efficiently retrogradely transported along axons (Salinas et al., 2009), while a helper-dependent CAV-2 has been shown to drive transgene expression in the rodent CNS for over a year (Soudais et al., 2004). CAV-2 injection into craniofacial muscles of rhesus monkeys caused robust motor neuron transduction (Bohlen et al., 2019), while intramuscular administration in rats results in superior motor neuron uptake and transport compared to AdV serotype 5 (Soudais et al., 2001), which together highlight the potential of CAV-2 for motor neuron targeting via skeletal muscle.

Adeno-Associated Virus

Belonging to the Dependoparvovirus genus and thus needing factors from helper viruses (e.g., AdV) to replicate, AAVs are non-enveloped, single-stranded DNA viruses discovered as contaminants of AdV preparations (Zinn and Vandenberghe, 2014). More than 100 natural AAV variants, including 13 serotypes from primates, have been identified, each with differing tissue tropisms, transduction efficiencies, and antigenicities, all resulting from their distinct protein capsids (Zincarelli et al., 2008; Srivastava, 2016). Additional synthetic AAV subtypes have been derived/engineered in the laboratory to optimize these features for gene transfer (Kotterman and Schaffer, 2014). Impinging considerably upon its tractability, the packaging capacity of AAV is limited to ≈4.7 kb, which is halved in the more rapidly expressing self-complementary AAV (for simplicity, we refer to single-stranded and self-complementary AAV as one), although DNA delivery across separate AAV particles is possible (Patel et al., 2019).
In most cases, AAV vectors induce limited immunogenicity in naïve hosts (Ronzitti et al., 2020) and have a good safety record, although there may be toxicity issues when administered at high doses (Hinderer et al., 2018). However, the effect of AAV vectors on brain homeostasis has not been completely addressed and is an important consideration. Forming stable, non-replicating episomes for sustained transgene expression, AAV is largely non-integrating (Schnepp et al., 2005), although insertional mutagenesis has been reported (Chandler et al., 2017). These combined features have led to AAV becoming the premier clinical gene therapy vector and to its recent regulatory approval for the treatment of several conditions (High and Roncarolo, 2019). However, AAV gene therapy is not infallible, as wild-type AAV infections have been linked with human disease (Nault et al., 2016); potential solutions to overcome these and other concerns and drive human AAV gene therapy forward continue to be developed (Colella et al., 2018). Nonetheless, many more clinical trials of AAV-mediated gene therapy are ongoing or planned, including several involving intramuscular administration (although not necessarily for neuronal transduction). AAVs have been used for many years in the laboratory to drive transgene expression in the nervous system (Hudry and Vandenberghe, 2019). Due to its ability to cross the BBB, AAV serotype 9 (AAV9) has become the principal serotype for CNS targeting upon systemic administration (Foust et al., 2009; Bevan et al., 2011; Samaranch et al., 2012), although superior serotypes, such as AAVrh10, have also emerged (Tanguy et al., 2015). However, cell binding and transduction can change with age (Chakrabarty et al., 2013), thus engineered serotypes with greater neuronal tropism, at least in mice, are being developed (Choudhury et al., 2016; Deverman et al., 2016). Nervous system delivery has also been achieved by AAV injection into muscle; intramuscular administration of several AAV serotypes (e.g., AAV2, AAV9) results in AAV uptake into motor and sensory neurons in rodents (Hollis Ii et al., 2008; Zheng et al., 2010; Benkhelifa-Ziyyat et al., 2013; Jan et al., 2019; Chen et al., 2020) and motor neurons in non-human primates (Towne et al., 2010). Consequently, this method of gene delivery has proven beneficial in mouse models of the motor neuron diseases amyotrophic lateral sclerosis (ALS) and SMA (Tosolini and Sleigh, 2017). Increasing the possible clinical applicability of AAV, single intramuscular injections of rAAV2-retro, a newly evolved variant with robust retrograde transport capacity (Tervo et al., 2016), were recently shown to result in broad transgene expression across ipsilateral and contralateral motor neurons along the length of the spinal cord, as well as brainstem motor nuclei, DRG, trigeminal ganglia and dorsal horn nerve fibers (Chen et al., 2020). Importantly, AAV targeting of peripheral neurons is therefore not limited to those cells innervating the injected muscle.

Lentivirus

Belonging to the Retroviridae family, LV possesses a single-stranded RNA genome and can infect both dividing and non-dividing cells (Parr-Brownlie et al., 2015). LV is an enveloped virus with a packaging capacity of ≈8 kb, and it relies on reverse transcription of its single-stranded RNA genome to generate corresponding double-stranded DNA for integration into the host genome (Mátrai et al., 2010).
This provides the benefits of long-term transgene expression and inheritance of genetic material in dividing cells; however, integration also has the major disadvantage that it can disrupt host gene function through insertional mutagenesis, which poses a safety risk. Incorporation into the host genome is not random, as there are preferential sites and conditions for integration (e.g., highly expressed and intron-rich genes), but it is unpredictable (Lesbats et al., 2016). Nonetheless, this has not prevented several LV-mediated gene therapies from being approved for human use, albeit for ex vivo modification of autologous immune cells (High and Roncarolo, 2019). For gene delivery, essential viral coding regions (e.g., gag, pol, and env) are removed from the LV genome and instead provided by separate expression plasmids for in vitro packaging (Milone and O'Doherty, 2018). This removal of viral genes ensures that the immunogenicity of LV is relatively low, although not absent (Annoni et al., 2019). LVs are typically derived from primate or non-primate immunodeficiency viruses [e.g., human immunodeficiency virus type 1 (HIV-1) or equine infectious anemia virus (EIAV)]. LV tropism is mediated by the viral envelope, which is engineered to include glycoproteins from other enveloped viruses in a process called pseudotyping (Cronin et al., 2005). The most common virus used to pseudotype LV is the vesicular stomatitis virus (VSV), but heterologous envelope proteins from many other viruses have been used to target LV to particular cells and tissues, e.g., measles virus, murine leukemia virus and influenza viruses (Joglekar and Sandoval, 2017). The VSV glycoprotein (VSV-G) binds to a widely expressed receptor, leading to broad tropism when integrated into the LV envelope. In contrast, LVs pseudotyped with rabies virus (RV) glycoprotein display greater neuronal selectivity and have been shown to aid efficient transduction of neurons both in vitro and in vivo. Compared to LV-VSV, LV pseudotyped with RV glycoprotein (LV-RV) shows superior neuronal transduction and transport when injected into the rat striatum and spinal cord (Mazarakis et al., 2001). A similarly high efficiency has been reported when injected into the primate brain (Kato et al., 2007), while distal uptake and efficient retrograde trafficking occur in rodent primary motor neurons (Hislop et al., 2014). Moreover, LV-RV administration into the gastrocnemius muscle results in effective transgene expression in spinal cord motor neurons, while LV-VSV remains restricted to the muscle injection site (Mazarakis et al., 2001), a pattern confirmed with additional RV strains (Wong et al., 2004; Mentis et al., 2006). Pseudotyping with several different hybrid glycoproteins has since shown improved targeting of motor neurons when delivered to muscle, which can be further enhanced by coupling antibodies against NMJ receptors to the virus surface (Hirano et al., 2013; Eleftheriadou et al., 2014). As a consequence, numerous different LV-mediated therapeutic strategies that target motor neurons via muscle have proven successful in mouse models of ALS and SMA (Azzouz et al., 2004a,b; Ralph et al., 2005; Raoul et al., 2005; Benkler et al., 2016; Eleftheriadou et al., 2016).
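The packaging capacities quoted across this overview lend themselves to a quick side-by-side comparison when choosing a vector for a given expression cassette. The following is a minimal sketch, not from the review itself: the capacity table simply restates the approximate figures above (E1/E3-deleted AdV ≈8 kb, helper-dependent AdV ≈36 kb, single-stranded AAV ≈4.7 kb, self-complementary AAV roughly half that, LV ≈8 kb), and the helper function is a hypothetical illustration rather than an established tool.

```python
# Approximate transgene packaging capacities (kb) as quoted in this review.
# Indicative only: usable space also depends on promoter and regulatory elements.
CAPACITY_KB = {
    "AdV (E1/E3-deleted)": 8.0,
    "AdV (helper-dependent)": 36.0,
    "AAV (single-stranded)": 4.7,
    "AAV (self-complementary)": 2.35,  # roughly half of single-stranded AAV
    "LV": 8.0,
}

def vectors_that_fit(cassette_kb: float) -> list[str]:
    """Return the vector classes whose quoted capacity can hold the cassette."""
    return [name for name, cap in CAPACITY_KB.items() if cassette_kb <= cap]

# Example: a 6 kb cassette rules out standard AAV but not AdV or LV.
print(vectors_that_fit(6.0))
```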
FROM VIRUS BINDING TO NUCLEAR ENTRY

For viruses injected into a muscle to express transgenes in neurons, they must undergo a series of events: host cell binding and internalization, intracellular sorting, retrograde axonal transport, liberation from the transporting structure/organelle, and nuclear entry (Figure 1). AdV, AAV, and LV rely on the same or similar mechanisms for several parts of this journey, which are also shared by botulinum and tetanus neurotoxins (Surana et al., 2018). For instance, they all hijack retrograde axonal transport (Merino-Gracia et al., 2011), which is dependent on active, processive movement along microtubules by the motor protein complex cytoplasmic dynein-dynactin (Schiavo et al., 2013). By trafficking towards the stable minus ends of the microtubule, which are located at the cell body end of an axon, cytoplasmic dynein enables long-range retrograde delivery of cargoes, such as autophagosomes and neurotrophin-containing signaling endosomes. Additionally, the Rab (Ras-related proteins in brain) GTPase protein family is specifically required for signaling endosome trafficking (Villarroel-Campos et al., 2018). Target tissue-derived (e.g., muscle) neurotrophins transition from early Rab5-positive endosomes into retrogradely transported Rab7-positive signaling endosomes (Deinhardt et al., 2006). Unlike in the canonical endolysosomal pathway, retrograde Rab7-endosomes within axons display a tightly regulated neutral pH that is maintained during transport (Bohnert and Schiavo, 2005). All three gene therapy viruses have been shown to localize to these axonal Rab7-endosomes, indicating that they share a common compartment when voyaging to the nucleus. Retrograde trafficking is a rapid and constitutive process that delivers large quantities of endosomes to motor and sensory soma; it is thus unlikely to be a rate-limiting step in virus transgene expression. Rather, idiosyncratic aspects of the journey of each virus, e.g., binding to specific receptors or endosomal liberation at the cell body, probably have a greater impact on overall transduction efficiency. Highlighting similarities and differences, we now describe the individual journeys that each virus must take to migrate from muscle to peripheral nerve soma for transgene expression.

Adenovirus

Similar to most viruses, AdV is typically internalized in a two-step, receptor-mediated fashion that is dependent on the viral capsid, although non-specific, large-scale internalization has also been reported (Meier et al., 2002). Primary receptors that mediate AdV attachment to cells include heparan sulfate proteoglycans, CD46, and sialic acid, which selectively interact with different serotypes (Arnberg, 2012); however, the coxsackievirus and adenovirus receptor (CAR) appears to be the major initial binding partner for AdVs (Bergelson et al., 1997; Arnberg, 2012). CAR is a widely expressed cell adhesion protein critical for heart development (Dorner et al., 2005), and is involved in neurogenesis through its synaptic expression throughout the mature brain (Zussy et al., 2016). CAR serves as the primary receptor for several different HAdV species (i.e., A, C-F) and serotypes, including 2 and 5, as well as CAV-2 (Arnberg, 2012; Loustalot et al., 2016). The second step of AdV internalization (i.e., entry) is facilitated by penton capsomere binding to members of the integrin receptor family, e.g., αVβ3 and αVβ5 (Wickham et al., 1993).
Facilitating cell-to-cell and cell-to-ECM interactions, integrins are expressed in a tissue-specific fashion and can in some instances mediate AdV attachment in the absence of CAR (Huang et al., 1996). Despite extensive knowledge of AdV receptors, relatively little is known about the specific entry of AdV at the NMJ or sensory nerve terminals. Intramuscular injections of AdV result in the targeting of both muscle fibers and innervating motor neurons in juvenile and adult mice (Tosolini and Morris, 2016a), which is consistent with the reported expression of CAR in muscle fibers (Nalbantoglu et al., 1999) and at both mouse and human NMJs (Shaw et al., 2004; Sinnreich et al., 2005). However, one of the major issues with AdV-mediated gene therapy is the relatively poor transduction of neurons in adult compared to young mice, including upon intramuscular injection (Acsadi et al., 1994; Huard et al., 1995; Tosolini and Morris, 2016a). This is somewhat unsurprising, as CAR is downregulated post-natally in several neuronal subtypes (Hotta et al., 2003) and in muscle (Nalbantoglu et al., 1999). Indeed, CAR is highly expressed in immature skeletal muscle fibers but is drastically downregulated after birth (Nalbantoglu et al., 1999), becoming restricted to the NMJ (Shaw et al., 2004; Sinnreich et al., 2005). Nevertheless, to better understand the limited uptake of AdV into adult motor neurons, further investigation is required to provide a thorough longitudinal assessment of CAR levels at post-natal neuromuscular synapses. Upon muscle damage caused by Duchenne muscular dystrophy or polymyositis, CAR expression increases within muscle fibers and co-localizes with markers of regeneration (Sinnreich et al., 2005); given the parallels between mechanisms of muscle development and regeneration, this suggests that CAR may indeed be developmentally regulated at the NMJ and serve in the synaptic response to regeneration (Sinnreich et al., 2005).

FIGURE 1 | The journey of gene therapy viruses from peripheral nerve terminals to the nucleus. Viruses used to deliver gene therapy must access cell nuclei to express their packaged genetic material. When administered into muscles for targeting of peripheral nerve somas, viruses such as adenovirus, adeno-associated virus (AAV) and lentivirus undergo a series of processes that aid their transfer from the periphery to the CNS (depicted here using AAV as an example). (A) First, the virus interacts with specific host cell surfaces. This entails primary receptor binding (e.g., glycans) followed by internalization, which is often mediated, at least in part, by a secondary receptor (e.g., AAV receptor, AAVR, or fibroblast growth factor receptor, FGFR). Internalization at nerve terminals is regulated by a variety of endocytic pathways. Post-internalization, viruses hijack the Rab GTPase-mediated endosomal sorting system, transitioning through Rab5-positive early endosomes to non-acidic Rab7-positive late endosomes. (B) Virus-containing Rab7-positive signaling endosomes are actively transported along microtubules by cytoplasmic dynein-dynactin complexes towards nerve cell bodies (i.e., retrogradely). (C) At the neuronal soma, viruses escape endosomes and are processed, sometimes through the Golgi apparatus, before entry into the nucleus (e.g., via the nuclear pore complex), where the virus can begin to drive transgene expression.

After binding to CAR, AdVs are internalized and processed in a cell type-dependent manner.
Experiments in immortalized non-neuronal cells describe AdV internalization into endosomes via clathrin-coated pits (Meier et al., 2002) and subsequent endosomal liberation via acidification (Leopold et al., 1998). The intracellular domain of CAR plays a critical role in this by recruiting the endocytic machinery and influencing subsequent intracellular AdV trafficking (Loustalot et al., 2015). AdVs are then transported towards the nucleus by cytoplasmic dynein-mediated trafficking along the microtubule network (Kelkar et al., 2004), impairments in which drastically disrupt this nuclear targeting (Suomalainen et al., 1999; Leopold et al., 2000). The AdV capsid directly interacts with cytoplasmic dynein via hexon capsomeres (Bremner et al., 2009), suggesting that in non-neuronal cells AdVs are transported as "naked particles" rather than in membrane-bound organelles (e.g., endosomes; Scherer et al., 2020). Moreover, this interaction appears to be dependent on exposure to low pH, suggesting that AdV binding to the motor protein is primed by transition through the early endosomal system (Bremner et al., 2009). AdV serotype 5 has also been shown to interact with the Kif5B subunit of kinesin-1, a motor protein that drives transport in the opposite direction to cytoplasmic dynein (i.e., towards dynamic plus ends), possibly as an evolutionary strategy for increased cellular exploration (Zhou et al., 2018). In primary neurons, AdVs are also internalized in a CAR-dependent manner, facilitated by CAR enrichment in actin domains of neuronal growth cones as well as lipid rafts (Huang et al., 2007). Internalization occurs in a lipid microdomain-, actin- and dynamin-dependent manner before the receptors are eventually targeted for lysosomal degradation (Salinas et al., 2014). The major difference between neuronal and non-neuronal AdV trafficking is that in neurons, CAR does not undergo lysis during intracellular sorting, and is instead transported to the neuronal soma as part of non-acidic, Rab7-positive endosomes, thus preventing pH-induced conformational changes to the AdV capsid and restricting endosomal liberation (Salinas et al., 2009). CAR-positive organelles favor the retrograde direction but can also be anterogradely transported by kinesin motor proteins (Salinas et al., 2009). Again confirming the essential nature of transport to AdV migration, in vivo pharmacological blockade of microtubule dynamics inhibits the delivery of AdV to the neuron (Boulis et al., 2003). Once in the soma, AdV accesses the nucleus at the nuclear pore complex via histone H1 (Trotman et al., 2001) or nucleoporin receptors (Trotman et al., 2001; Cassany et al., 2015), with the route also appearing to be cell type-dependent (Kremer and Nemerow, 2015).

Adeno-Associated Virus

AAV also gains cellular access via a two-step process involving primary cell surface receptors, with a secondary receptor mediating entry. Negatively charged glycans or glycoconjugates serve as primary attractants with which AAVs initially interact, allowing extracellular viral accumulation and co-receptor access. These include heparan sulfate proteoglycans for AAV2, AAV3, AAV6 and AAV13, N-terminal galactose for AAV9, and specific N- and O-linked sialic acid moieties for AAV1, AAV4, AAV5 and AAV6 (Huang et al., 2014).
The wide expression of surface glycans, including in neuronal extracellular matrices (Broadie et al., 2011; Singhal and Martin, 2011), explains the broad infectivity of AAV, while glycan diversity and relative density likely dictate the selectivity of AAV serotype tropism. Several serotype-specific co-receptors have also been identified that, after glycan binding, facilitate AAV uptake. These co-receptors include fibroblast growth factor receptor (FGFR) and hepatocyte growth factor receptor (HGFR) for both AAV2 and AAV3, platelet-derived growth factor receptor (PDGFR) for AAV5, and epidermal growth factor receptor (EGFR) for AAV6 (Madigan and Asokan, 2016). Signaling through each of these receptors has been linked to NMJ formation/function (Zhao et al., 1999; Li et al., 2012; Taetzsch et al., 2018), consistent with their synaptic availability. Additional receptors have been identified for engineered serotypes, contributing to distinct tropisms (Hordeaux et al., 2019; Huang et al., 2019). However, a common receptor required for endocytosis of most natural primate AAV serotypes was recently identified (Pillay et al., 2016). Originally called KIAA0319L and linked with dyslexia and with functions in neuronal migration and axon guidance (Poon et al., 2011), the AAV receptor (AAVR) possesses an N-terminal MANSC domain, several immunoglobulin-like PKD domains, a C6 domain, and a transmembrane region before a short C-terminal tail (Poon et al., 2011). As expected given the broad cellular and tissue infectivity of AAV, AAVR is expressed across many human tissues, including muscle and nerve, and can be found as several spliced variants and post-translationally modified isoforms (Poon et al., 2011; Gostic et al., 2019). AAVR knockout rendered HeLa cells highly resistant to infection with AAV serotypes 1, 2, 3b, 5, 6, 8, and 9, with a similar finding in AAV9-injected AAVR knockout mice in vivo (Pillay et al., 2016). The removal of AAVR resulted in no obvious phenotype, suggesting that AAVR is non-essential or that there is genetic compensation. In subsequent work from the same group and others, AAV serotypes have been shown to differentially interact with AAVR PKD domains (Zhang et al., 2019), while AAV4 gains full cellular access in the absence of the receptor, suggesting that some serotypes can utilize non-AAVR internalization pathways (Dudek et al., 2018). In immortalized cells, AAVR localizes to the cytoplasm and perinuclear region, where it associates with the Golgi network (Poon et al., 2011; Pillay et al., 2016). Several hypotheses as to where exactly AAV interacts with AAVR have been put forward, including on the cell surface, in the endolysosomal system and at the Golgi apparatus; however, this requires further clarification (Summerford et al., 2016; Pillay and Carette, 2017). Data support several distinct AAV internalization mechanisms, including clathrin-dependent endocytosis (Uhrig et al., 2012), caveolar endocytosis (Sanlioglu et al., 2000), and the clathrin-independent carriers and GPI-enriched endocytic compartments (CLIC/GEEC) pathway (Nonnenmacher and Weber, 2011). However, not all routes result in efficient delivery to the nucleus; rather, they traffic AAV through unproductive paths leading to a viral cul-de-sac (Nonnenmacher and Weber, 2012; Pillay and Carette, 2017); only ≈30% of internalized AAV is estimated to enter the nucleus (Zhong et al., 2008).
Nonetheless, there are distinctions in AAV uptake depending on cell type and serotype (Weinberg et al., 2014), thus future work identifying neuron-specific internalization mechanisms is required. Upon cellular entry, AAVs have been reported to be retrogradely transported from the cell surface to the Golgi in a syntaxin 5-dependent mechanism (Nonnenmacher et al., 2015), before escaping into the cytoplasm and entering the nucleus via the nuclear pore complex. However, before reaching the Golgi, AAV must transit through various acidic endosomal compartments to drive pH- and cathepsin-mediated conformational changes in the capsid (Akache et al., 2007; Salganik et al., 2012). Indeed, the passage of AAV through the endosome-to-Golgi system appears to be necessary for transgene expression, as AAV directly injected into the cytosol does not migrate to the nucleus (Sonntag et al., 2006). AAV has been reported to localize to Rab5-, Rab7-, and Rab11-positive (recycling) endosomes (Berry and Asokan, 2016), and, as expected, requires a functioning microtubule network for transport. Nevertheless, its exact route through the cell requires further elucidation, especially its transit through long and highly polarized peripheral nerves, as little data have been generated in neurons. That being said, there is ample indirect evidence that AAVs are transported in axons in vivo in both peripheral (Hollis Ii et al., 2008; Towne et al., 2010; Zheng et al., 2010; Benkhelifa-Ziyyat et al., 2013; Jan et al., 2019) and CNS (Salegio et al., 2013; Castle et al., 2014a,b) neurons, suggesting the availability of AAV receptors and uptake mechanisms; however, observations of AAV being actively trafficked are limited. Nevertheless, peripherally administered AAV likely hijacks Rab-positive endosomes in peripheral nerves to reach the CNS, like AdV. Indeed, in primary cortical neurons grown in microfluidic chambers to separate axons and soma, AAV9 was shown to localize in a time-dependent fashion to several different endosomes/vesicles (e.g., Rab5-, Rab7-, Rab11-positive; Castle et al., 2014b). AAV9 internalized at axon tips was retrogradely transported in cytoplasmic dynein-dynactin-driven Rab7-positive endosomes and was subsequently capable of inducing transgene expression post-transition through the Golgi (Castle et al., 2014b). Moreover, in a companion study, it was shown that AAV1, AAV8, and AAV9 share the same intra-axonal compartment when being transported in primary cortical neurons, indicating that once they have gained access to the endosomal sorting system, AAV serotypes harness common axonal transport mechanisms (Castle et al., 2014a). However, direct evidence from motor and sensory neurons remains unavailable.

Lentivirus

LV tropism is dictated by the envelope glycoproteins with which it has been pseudotyped (Cronin et al., 2005). VSV-G interacts with the low-density lipoprotein receptor (LDLR; Finkelshtein et al., 2013). LDLR mediates uptake of cholesterol-rich LDL and is broadly expressed, thus LV-VSV is pan-tropic. A measure of cell-type selectivity can be achieved with cell/tissue-specific promoters, which is a strategy used with all three gene therapy viruses. For example, LV-VSV combined with an hGFAP promoter induces astrocytic expression, whereas LV-VSV with an rNSE promoter selectively expresses in neurons (Jakobsson et al., 2003).
Alternatively, envelope modification coupled with surface antibody-mediated targeting can confer tissue specificity and improve virus uptake (Yang et al., 2006; Eleftheriadou et al., 2014). In contrast, LV-RV interacts with receptors that are predominantly expressed by neurons, including the pan-neurotrophin receptor p75NTR (Tuffereau et al., 1998), neuronal cell adhesion molecule (NCAM; Thoulouze et al., 1998) and the nicotinic acetylcholine receptor (nAChR; Hanham et al., 1993). p75NTR non-selectively binds all neurotrophins (i.e., BDNF, NGF, NT-3, and NT-4/5) and, depending on the active co-receptor, can activate both pro-survival and pro-death signaling (Gentry et al., 2004). NCAM is an immunoglobulin-like glycoprotein that mediates cell-to-cell contact and functions in adhesion, guidance, and differentiation during neuronal growth (Weledji and Assob, 2014). nAChRs bind the excitatory neurotransmitter acetylcholine secreted into the synaptic cleft to facilitate depolarization of the post-synaptic cell. All three LV-RV receptors are integral constituents of the NMJ (although nAChRs are post-synaptic), explaining the efficient in vivo uptake of these RV-pseudotyped viruses into motor neurons when injected into a muscle (Mazarakis et al., 2001; Azzouz et al., 2004a,b; Wong et al., 2004). After receptor-mediated internalization, most likely in clathrin-coated pits as dictated by their neuronal receptors (i.e., p75NTR; Bronfman et al., 2003), LV-RVs migrate through the endolysosomal system, transitioning from Rab5-positive early endosomes to the non-acidic Rab7-positive compartment (Hislop et al., 2014). In non-neuronal cells, endosome acidification causes a conformational change in LV glycoproteins, which initiates membrane fusion between the viral envelope and endosome membrane to permit the escape of the virus into the cytoplasm (Gaudin et al., 1993; Gaudin, 2000). However, in neurons, LVs are retrogradely transported within neutral Rab7-positive signaling endosomes towards peripheral nerve cell bodies through the same motor protein-driven process as AdV and AAV. In rat primary motor neuron cultures, LV-RV was shown to co-localize in axons with all three receptors (i.e., p75NTR, NCAM, and nAChR), with co-migration confirmed for p75NTR (Hislop et al., 2014). However, despite transport being rapid and effective, neuronal transduction was comparatively inefficient, suggesting that post-trafficking processes are suboptimal in neurons (Hislop et al., 2014). Upon arrival at the cell body, LV must undergo a process known as uncoating, in which several viral proteins (e.g., Gag structural proteins) are removed to permit reverse transcription of the viral RNA (Matreyek and Engelman, 2013). The resulting double-stranded DNA then complexes with virus proteins for entry into the nucleus via the nuclear pore complex, before integration into the DNA of the host neuron. Improving understanding of these processes in motor and sensory neurons will be key to optimizing the effectiveness of intramuscular virus delivery.

INFLUENCE OF PATHOLOGY

Neuropathology will impact most, if not all, major steps in the journey of viruses from the nerve terminal to the nucleus (Figure 2). Neurodegeneration of peripheral nerves results in the loss of axon terminals within muscles (Figure 2A).
Motor neuron retraction from the NMJ, i.e., denervation, is an early feature of motor neuron diseases [e.g., ALS, SMA, and Charcot-Marie-Tooth disease (CMT); Goulet et al., 2013; Moloney et al., 2014; Sleigh et al., 2014; Spaulding et al., 2016], and will limit neuron-virus interactions within muscles. Sensory degeneration observed in conditions like CMT will have a similar restrictive effect. Nonetheless, motor neurons branch frequently within muscles, resulting in multiple contacts across the entire muscle; thus, if one or several NMJs become denervated, there is likely to be a window of time in which at least some neuromuscular contacts of a pathological neuron remain viable. In ALS mice, for instance, rather than all neuromuscular contacts of a single motor neuron denervating simultaneously, healthy synapses close to degenerating NMJs are more likely to denervate than those located further away, suggestive of localized pathological transfer (Martineau et al., 2018). It is therefore conceivable that functional synapses may facilitate virus uptake and nuclear delivery to preserve the integrity of the NMJs that remain. Moreover, once delivered, viral vectors encoding secretable proteins (e.g., neurotrophins) can influence central networks through both autocrine and paracrine mechanisms (Baumgartner and Shine, 1997; Benkhelifa-Ziyyat et al., 2013). NMJs resident in different muscles, and even within a single muscle, can show large differences in both pre- and post-synaptic structures (Mech et al., 2020) as well as in levels of key synaptic proteins (Allodi et al., 2016), thus virus binding and uptake are likely to differ across motor nerve terminals. Significantly, intramuscular injections of gene therapy viruses can result in efficient and extensive transgene expression within the neonatal and adult mouse spinal cord, brainstem, and sensory ganglia, likely via the cerebrospinal fluid (Benkhelifa-Ziyyat et al., 2013; Chen et al., 2020). This finding is particularly important, as it suggests that injecting one muscle can result in viral transduction of an array of central neurons (Chen et al., 2020), meaning that not all muscles require injection for potential widespread motor and sensory neuron transduction, although injecting more muscles can provide greater therapeutic benefit (Benkhelifa-Ziyyat et al., 2013). Furthermore, muscle transduction can be used to promote synaptogenesis and/or reinnervation after neuromuscular pathology (Darabid et al., 2014). In this regard, collateral sprouting and dynamic remodeling of the NMJ, as observed in ALS mice (Martineau et al., 2018), may also be therapeutically targeted. In addition to the loss of peripheral nerve endings in muscle, deficiencies in endocytosis (e.g., in SMA; Dimitriadi et al., 2016), endolysosomal sorting (observed in many conditions; Neefjes and van der Kant, 2014), Golgi processing (e.g., in ALS; van Dis et al., 2014), and nuclear import (e.g., in ALS; Dormann and Haass, 2011) would all likely reduce the efficiency of viral transgene expression (Figure 2B). As would pathology-associated restrictions in axonal transport (Figure 2C), which have been reported in many neurodevelopmental and neurodegenerative conditions (Sleigh et al., 2019), such as the signaling endosome transport deficits observed in ALS mice (Bilsland et al., 2010; Sleigh et al., 2020a).
Nevertheless, Rab7-positive endosomes containing AAV have been shown in primary cortical neurons in vitro to display increased retrograde transport speeds compared to non-AAV-containing Rab7 organelles (Castle et al., 2014b), which could perhaps counteract transport dysfunction. Only a few studies have investigated the impact of disease on virus transduction after intramuscular delivery. Despite downregulation during development, CAR expression is upregulated in regenerating adult skeletal muscle in response to disease (Nalbantoglu et al., 1999; Shaw et al., 2004; Sinnreich et al., 2005), which will likely positively impact AdV uptake. Increased levels of sialic acid, a known AAV9 inhibitor, in the CNS of a mouse model of lysosomal storage disorder have been shown to severely limit the effectiveness of AAV9-mediated gene therapy (Chen et al., 2012). Nonetheless, the opposite may be true for particular AdV and other AAV serotypes, which use sialic acid as a primary attachment factor. Involved in pro-apoptotic signaling during development, but downregulated in the mature nervous system, the p75NTR receptor is also re-expressed in neurons after disease or trauma (Dechant and Barde, 2002), possibly impacting LV efficacy. For example, p75NTR expression is increased in motor neurons of SOD1G93A mice and in human ALS tissue (Lowry et al., 2001), and plays a key role in organizing and maintaining NMJ connectivity (Pérez et al., 2019). Moreover, NCAM expression is a major regulator of synaptic remodeling in pre-synaptic NMJ terminals (Chipman et al., 2014) and its levels are dysregulated in ALS (Jensen et al., 2016), which could also affect LV binding. Also, the background of the experimental animal can influence the transduction efficiency of some vectors and must be carefully considered. Overall, these studies warn against the assumption of similar virus binding and uptake profiles between healthy and disease states, and indicate that further studies in disease models at symptomatic stages are required. Despite these hurdles, intramuscular injections of gene therapies have proved successful at symptomatic stages in ALS mice (Tosolini and Sleigh, 2017), hence the above-discussed effects of pathology do not abolish virus transduction. Furthermore, symptomatic SMA patients treated with onasemnogene abeparvovec to augment SMN protein levels respond positively to treatment (Mendell et al., 2017), albeit with AAV administered intravenously. Nevertheless, while it remains unclear precisely how and to what extent specific diseases and associated pathologies will impact the transduction of peripheral neurons, the described viral vectors have undisputed potential for the treatment of neuromuscular disorders when delivered to skeletal muscle.

OPTIMIZING INTRAMUSCULAR GENE THERAPY

One of the biggest challenges facing gene therapy is achieving sufficient delivery to target cells/tissues to combat disease. This is particularly difficult for peripheral nerve disorders, in which pathological cells are located deep within the spinal cord and behind the BBB and BSCB. Several investigator-independent factors, such as nervous system maturity (Foust et al., 2009; Tosolini and Morris, 2016a) and pathology, influence viral transduction and transgene expression, but these cannot be modified in a clinical setting. However, varied investigator-driven factors also impact effectiveness and should be carefully considered when designing gene therapy for intramuscular administration.
Differences in tropism, infectivity, and transport between viruses and their serotypes will impact the success of this delivery method; for example, in a side-by-side comparison, muscle injection of rAAV2-retro was shown to have a superior capacity to transduce peripheral neurons compared to AAV serotypes 1, 2, and 5-9 (Chen et al., 2020). Similarly, superior LV pseudotypes based on hybrid glycoproteins have also been identified (Hirano et al., 2013; Eleftheriadou et al., 2016). Moreover, vector purity and concentration will impact transduction levels (Hollis Ii et al., 2008; Klein et al., 2008), as will the efficiency and specificity of the promoter (von Jonquieres et al., 2013; Borel et al., 2016).

FIGURE 2 | Neuropathological events impair the viral transduction of peripheral neurons. Several general and virus-specific pathological events caused by neurological disease diminish the effectiveness of gene therapy delivery to the nervous system via muscle. (A) Loss of motor and sensory nerve endings due to neurodegeneration will restrict nerve-muscle connections and the frequency of virus-nerve interaction. (B) Alterations in the expression or availability of certain primary or secondary receptors will affect virus attraction and binding. Deficits in endocytosis, as seen in spinal muscular atrophy (SMA), or impaired endosomal sorting, as identified in amyotrophic lateral sclerosis (ALS) and some forms of Charcot-Marie-Tooth disease (CMT), could reduce virus uptake into peripheral nerve terminals. Defects in Golgi processing and nuclear import may also decrease viral transduction (not depicted). (C) A variety of impairments affecting axonal transport machinery (e.g., microtubule dysfunction) are known to cause defects in cargo trafficking (e.g., slowed transport or reduced quantity/flux), which will limit viral delivery.

Several different methods have been pioneered that can enhance peripheral neuron transduction upon intramuscular virus administration. As may be expected, these techniques focus on enhancing virus uptake rather than other processes essential to transduction. For instance, a complementary viral strategy can be used to boost the expression of the virus receptor(s) at peripheral nerve terminals, which can then be therapeutically targeted with a different virus, as has been demonstrated with AAV-mediated CAR expression for increased AdV binding and uptake (Larochelle et al., 2010; Li et al., 2018b). Receptor expression may also be selectively increased by genetic overexpression (Nalbantoglu et al., 2001) or administration of drugs that enhance transcription, albeit non-specifically (e.g., histone deacetylase inhibitors; Larochelle et al., 2010). Similarly, genetic screens are beginning to identify a variety of viral restriction factors (i.e., proteins that constrain uptake and transduction), which could also be genetically or chemically manipulated, perhaps in a tissue-specific fashion, to aid uptake (Mano et al., 2015; Madigan et al., 2019). Alternatively, approaches are being developed in which recombinant viral receptor proteins are conjugated to biomaterials and pre-loaded with gene therapy viruses before injection. Indeed, intramuscular administration of recombinant cysteine-tagged AAVR chemically linked to polyester microspheres and pre-incubated with AAV resulted in local and prolonged gene delivery with reduced spread compared to AAV alone (Kim et al., 2019).
However, it remains to be seen whether this system can be adapted to increase uptake into peripheral nerve terminals, which would require the release of AAV from the receptor microspheres. Similarly, viral capsids can be chemically modified with a variety of different substances that may aid peripheral nerve binding, e.g., conjugation with neuron-specific homing peptides (Terashima et al., 2009) or antibodies against key neuronal receptor proteins (e.g., p75NTR and CAR; Hedley et al., 2006; Eleftheriadou et al., 2014). Furthermore, motor neuron transduction efficiency upon intramuscular administration of AdV was shown to be enhanced by pre-treatment with flaccid paralysis-causing botulinum toxin type A (BoNT/A; Millecamps et al., 2002). Likely mediated by enhanced motor terminal sprouting, this enhancement was even greater in the SOD1G93A ALS mouse (Millecamps et al., 2001, 2002). Unfortunately, many of these strategies are not currently clinically viable. Nonetheless, their implementation in the laboratory to deliver genes within the therapeutic range, along with the development of novel and improved tools to assess virus transduction and treatment efficacy (Han et al., 2019; Chen et al., 2020; Sleigh et al., 2020b; Surana et al., 2020; Ueda et al., 2020), will undoubtedly lead to improved understanding of disease mechanisms and assessment of potential gene therapy strategies.

CONCLUSION

Gene therapy injected into skeletal muscle for delivery to neurons holds therapeutic promise for peripheral nerve disorders. Motor and sensory nerve terminals located within muscles can act as therapeutic conduits not only for the innervating neurons (Figure 1) but also for neighboring nerve and glial cells via paracrine mechanisms. Moreover, some viruses can escape from the initially transduced neurons, resulting in widespread gene delivery throughout the spinal cord, brainstem, and sensory ganglia. Importantly, this indicates that not all muscles need to be injected to obtain broad cellular dosing. Unfortunately, neuropathology is likely to hinder the effectiveness of intramuscular gene therapy delivery (Figure 2), but innovative pre-clinical methods are being developed that will enhance peripheral neuron transduction via this method. Intramuscular administration could also be combined with, for example, intrathecal delivery to further enhance CNS uptake. However, due to the immune response, repeated successful dosing is unlikely; hence, such treatments need to be given within a short time frame to circumvent this impediment. Nevertheless, by factoring in a detailed understanding of the dynamics of viruses and host cell receptors, especially in the context of peripheral nerve biology and neuromuscular pathology, perhaps this minimally invasive delivery method can contribute to successful gene therapy in the future.

AUTHOR CONTRIBUTIONS

AT and JS wrote the manuscript and have approved the submission of this work.
Interpreting employment in a recession using an epidemiological model

Abstract: This note suggests that the employment characteristics in a typical recession are somewhat similar to the disease characteristics found in an epidemic. Preliminary numerical results indicate that the model may have some validity.

PUBLIC INTEREST STATEMENT: This short note suggests that a model that was first introduced by Kermack and McKendrick in epidemiology to describe the temporal evolution of sickness in the healthy population and their subsequent recovery in an epidemic can also be interpreted in terms of the employment characteristics in a recession.

This note interprets the employment characteristics found in an individual economic recession using a model that has been used in the past to describe the characteristics of epidemics of various diseases. It appears that both recessions and epidemics have some similar characteristics, in that they both seem to recur at various times with little remembrance of the previous occurrence, and each episode usually reoccurs on a rather slow timescale, which we will indicate with the independent variable τ. In the past, according to records compiled by Wells Fargo Bank, this separation time τ for recessions is of the order of a decade or so. Within an individual recession, populations of employed people may become unemployed, and these unemployed may in turn somewhat discourage those who are still employed or, if unemployed, hopefully become reemployed. The variable of time within a particular recession will be t; this time, which is typically measured in weeks or months, is significantly less than the interval between recessions, and one can usually assume that t ≪ τ. The purpose of this note is to suggest that a model that has been used in epidemiology to describe the disease characteristics in an epidemic can possibly be reinterpreted to describe the employment characteristics in a recession. In a disease, a certain proportion of the total population S is susceptible to becoming ill and thereby becoming an infected population I, which could possibly in turn infect members of the healthy community. Eventually, the infected population hopefully recovers and becomes a recovered population R. The description of the three populations during the epidemic has been given by Kermack and McKendrick (1927) and is now known as the SIR model in epidemiology (Kermack & McKendrick, 1927). As an example of the separation of timescales in epidemiology, a typical measles epidemic could have a temporal duration of weeks or months, and the interval before the next outbreak would have a temporal separation of four or five years. The employment characteristics of the temporal evolution of the three populations that occur in the nth individual recession at the time τ = τ_n can be interpreted in terms of the proposed employment model with the three first-order ordinary differential equations

$$\frac{dE}{dt} = -\beta U E, \qquad \frac{dU}{dt} = \beta U E - \nu U, \qquad \frac{dR}{dt} = \nu U, \tag{1}$$

where the dependent variable E represents the population of fully employed members, U represents the population of unemployed members, and R represents the population of members who have been reemployed in the recession. The term βU reflects the proportion of the unemployed population who may subsequently discourage members of the employed community, which in turn will cause a reduction in the employed community in that particular recession. The constants β and ν, respectively, represent the rates of being laid off and of being rehired in the recession.
The coefficient β could also have a negative value if one interprets rumors of impending layoffs in the company as causing the remaining employees to work harder in the hope of convincing the supervisor that their particular employment is required. The proposed simplified model is labeled the EUR model, since it is very similar to the SIR model in epidemiology (Kermack & McKendrick, 1927). The simplification that is made is assuming that the constants β and ν have numerical values rather than being functions that depend nonlinearly upon the dependent variables. Adding the three equations in Equation 1, we find that

$$\frac{d}{dt}\left(E + U + R\right) = 0, \tag{2}$$

which has the solution

$$E + U + R = N(\tau_n). \tag{3}$$

This states that the total population of employed E, unemployed U, and reemployed R members in a particular recession is equal to a constant N(τ_n). This result supports the validity of the model in Equation 1. As time goes on to the next recession at the time τ = τ_{n+1}, this number may either increase, remain the same, or decrease. This model assumes that the details of an individual recession are independent of the previous or subsequent recession. Arbitrarily selecting the values β = 0.5 and ν = 0.3 for the numerical constants, we numerically solve the three equations, choosing the initial conditions at the nth recession to have the normalized values of 99% employed and 1% unemployed (the normalization parameter is given in Equation 3), i.e., E(τ = τ_n) = 0.99 and U(τ = τ_n) = 0.01. We obtain the behavior for the three normalized populations shown in Figure 1. As the fully employed population E becomes unemployed, there will be a slow increase in the unemployed population U until employment recurs and the reemployed population R increases as the recession dissipates. Just as epidemics of various diseases such as measles or malaria appear and disappear in the temporal evolution of mankind, we speculate that economic recessions and recoveries will follow a somewhat similar pattern in time. The simplified model that has been proposed here can be used to describe such an evolution. A more detailed analysis would include more accurate numerical values for the constants and would also introduce nonlinear functions to replace the assumed constant values for β and ν that appear in our preliminary numerical analysis. There does not appear to be an a priori reason to assume that there will be periodicity in the long-time behavior of the occurrence of recessions.
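To make the preliminary numerical analysis concrete in outline, here is a minimal sketch, not part of the original note, that integrates the EUR equations as reconstructed in Equation 1 with the quoted values β = 0.5, ν = 0.3, and the initial conditions E = 0.99, U = 0.01 at the onset of the recession. The integration span and the output times are arbitrary choices for illustration, and scipy's solve_ivp stands in for whatever solver the authors used.

```python
import numpy as np
from scipy.integrate import solve_ivp

BETA, NU = 0.5, 0.3  # lay-off and rehire rate constants from the note

def eur_rhs(t, y):
    """Right-hand side of the EUR model (SIR form, as in Equation 1)."""
    E, U, R = y
    dE = -BETA * U * E          # employed lost at rate beta*U*E
    dU = BETA * U * E - NU * U  # unemployed gain from E, lose to reemployment
    dR = NU * U                 # reemployed population grows at rate nu*U
    return [dE, dU, dR]

# Normalized initial conditions at the onset of the nth recession.
y0 = [0.99, 0.01, 0.0]
sol = solve_ivp(eur_rhs, (0.0, 50.0), y0, dense_output=True)

t = np.linspace(0.0, 50.0, 6)
E, U, R = sol.sol(t)
for ti, e, u, r in zip(t, E, U, R):
    # E + U + R stays constant (= 1), mirroring Equations 2 and 3.
    print(f"t={ti:5.1f}  E={e:.3f}  U={u:.3f}  R={r:.3f}  sum={e + u + r:.3f}")
```

Running the sketch shows the qualitative behavior described in the note: E falls, U rises and then dissipates, and R grows toward the total, with the sum conserved throughout.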
Multiple Primary Malignancies: A Clinicopathological Profile of Patients at a Tertiary Center of North India — A Retrospective Hospital-Based Observational Study

Introduction
The incidence, prevalence, and survival of cancer patients are increasing day by day owing to screening and improved diagnostic modalities. At the same time, the development of multiple primary malignancies (MPMs) in cancer survivors is not uncommon in recent years, because of an improved understanding of tumor biology and effective management of cancer in the form of local (surgery/radiotherapy) and systemic (chemotherapy/targeted therapy) treatment, leading to improved survival and the subsequent development of further malignancies. The study was conducted to describe the clinicopathological profile of patients diagnosed with MPMs.

Objective
To study the clinicopathological profile of MPMs and to examine the treatment patterns of these patients.

Materials and Methods
This was a retrospective hospital-based observational study. Medical records of 73 patients with MPMs, registered in the department of medical and surgical oncology between January 2016 and December 2018, were enrolled in the study. Statistical analysis was done using IBM SPSS Statistics for Windows from IBM Corp. Categorical data were expressed in the form of frequencies and percentages.

Conclusions
The phenomenon of MPMs is not an uncommon presentation, owing to longer survival and the side effects of treatment (radiotherapy/chemotherapy). It should always be kept in consideration in any cancer survivor during surveillance, in order to detect and treat it at the earliest.

Background
Multiple primary malignancies (MPMs) in cancer patients are not very rare because of prolonged survival due to advances made in the treatment of cancer patients. The possibility that a lesion is a recurrence of, or a metastasis from, the initial malignancy may delay treatment and impact the overall prognosis and survival, making the diagnosis of MPMs complicated. The most common presentation of MPMs is double malignancy. 1,2 MPMs were first described by Billroth 3 in 1889 and reported in a detailed study by Warren and Gates 4 in 1932. The criteria for diagnosis of MPMs, as proposed by Warren and Gates, are: (1) histological confirmation of malignancy in both the index and second primary tumors; (2) at least 2 cm of normal mucosa between the tumors, or, if the tumors are in the same location, separation in time by at least 5 years; and (3) exclusion of the possibility that one is a metastasis of the other. Double primary malignancies can be divided into two categories depending on the interval between tumor diagnoses: synchronous malignancies, in which the second tumor develops simultaneously or within 6 months of the diagnosis of the first malignancy, and metachronous malignancies, in which the second tumor develops 6 months or more after the diagnosis of the first. 5 The aim of our study was to assess the clinical and pathological profile of patients diagnosed with MPMs in our region.

Material and Methods
This was a retrospective hospital-based observational study. Medical records of 73 patients with MPMs registered in the department of medical and surgical oncology between January 2016 and December 2018 were enrolled in the study. Patient details were entered in a set proforma, including age, sex, family history, smoking and drinking history, histology of synchronous and metachronous lesions, and treatment received.
Inclusion Criteria
Patients with two or more lesions at different sites with different histology, or with two lesions at different sites with similar histology but different immunohistochemistry markers, were included in the study. The tumors were divided into synchronous and metachronous lesions depending on the time interval between the occurrence of the two lesions: synchronous tumors developed simultaneously or within 6 months of each other, whereas metachronous lesions occurred more than 6 months apart.

Exclusion Criteria
Patients with malignancies at different sites but with the same immunohistochemistry, or with disease at the same site within 5 years of the first malignancy, were excluded from the study.

Sample Size
The sample size was calculated using Cochran's formula n = (1.96)² p(1 − p)/d², with p = 0.73 and d = 0.10, giving a calculated sample size of 76. Case records of three patients were incomplete and were excluded from the study, so the final sample size was 73 patients, giving the study 85% power.

Primary Outcome
To study the clinicopathological profile of MPMs.

Secondary Outcome
To study the treatment patterns of MPM patients.

Statistical Analysis
The data were initially entered into a Microsoft Excel spreadsheet and checked for mistakes. IBM SPSS Statistics for Windows (Version 27.0, released 2020; IBM Corp., Armonk, New York, USA) was used for the statistical analysis. Categorical variables were displayed as frequencies and percentages.

Results
Out of 13,852 newly diagnosed cancer cases, 73 patients were diagnosed with MPMs, comprising 0.51% of the total cases enrolled during the study. Two patients had triple malignancies, whereas 71 patients had double malignancies. Of the two patients with triple malignancy, one had non-Hodgkin's lymphoma and developed metachronous squamous cell carcinoma of the esophagus and adenocarcinoma of the sigmoid colon; the other, with an index squamous cell carcinoma of the skin (thigh), developed two metachronous malignancies, renal cell carcinoma and adenocarcinoma of the stomach. For the rest of the discussion, we exclude the triple malignancies. Of the 71 cases of double malignancy, 39 (54.92%) were men and 32 (45.07%) were women, a male to female ratio of 1.21:1. The median age of our patients was 55 years (range 30-80), with a median time to diagnosis of the second cancer of 36 months (range 12-228). Adenocarcinoma was the most common histology seen in the second primary malignancy.

Discussion
Recently, there has been a sharp increase in the prevalence of MPMs, ranging from 0.7 to 11.7% among various populations. 6 This can be due to a multitude of reasons, including improved survival of cancer patients due to better treatment modalities, better diagnostic modalities, and more stringent surveillance of cancer survivors. 7 The prevalence of MPMs in the studied group was 0.51%.

MPMs are a special phenomenon in tumorigenesis. A number of studies have been conducted worldwide, leading to a better understanding of this phenomenon. The etiopathogenesis of MPMs can be attributed to genetic events or to common environmental risk factors. 8
Various other mechanisms, such as aging, an unhealthy lifestyle, cancer treatments, or interactions among any of these factors, are also believed to contribute to the development of MPMs. 9

The increased risk of MPMs can be attributed to field carcinogenesis due to exposure to tobacco, smoking, and alcohol consumption. 10 In our study population, 32% of patients had a smoking history, and none had a history of alcohol consumption.

Treatment of the primary malignancy by chemotherapy and/or radiotherapy may contribute to an increased risk of a second malignancy, as both ionizing radiation and cytotoxic agents (etoposide, cyclophosphamide, Adriamycin, etc.) can cause DNA damage leading to carcinogenesis. The harmful effects of these treatments, as well as of the tumor microenvironment, on the patient's immune system may be an important contributing factor allowing renegade mutant cancer cells to escape the body's defense mechanisms. Children and young adults may be especially prone to such iatrogenically induced cancers. 11 This was also seen in our patient population, as around 40% of patients overall, and 51.71% of patients in the metachronous group, had received chemotherapy, radiotherapy, or both as treatment for their primary cancers.

In the present study, the incidence of multiple primaries was more common in men than in women, with a male to female ratio of 1.21:1. 13-16 The median age in our study was 55 years (range 30-80 years). Etiz et al, in their study, had a male to female ratio of 1.19:1 with a median age of 59 years (range 29-80 years), 17 consistent with our study. The interval between the index primary and the second primary in our study was 12 to 228 months (median 36 months), which is consistent with other studies 9,18 (a comparison between different studies is given in Table 4). 20-22 In a study by Aydiner et al, 14 synchronous malignancies constituted 34% and metachronous malignancies 66%, consistent with our study, in which synchronous and metachronous malignancies constituted 27 and 73%, respectively. The system most commonly involved in both the first and the second primary malignancy in our study was the gastrointestinal (GI) tract (30.99 and 39.44%), with lung the most common second primary site after the GI tract (16.9%). In a retrospective study, Zhai et al 9 found that the most common pairs were digestive-digestive (25.75%) followed by digestive-lung (19.16%), which coincides with our findings. In another study, by Etiz et al, the most common second primary malignancies were GI (22%) and lung (19%), similar to the present study. 17 There is a high prevalence of GI malignancies in this region of the country, presumed to be due to geographic, dietary, and cultural reasons. In a study from the region by Khan et al, 23 which included 22,180 patients, cancers of the esophagus, stomach, and colon were the second, third, and sixth most common causes of cancer incidence. This could explain why the GI tract was the most common site in both the synchronous and metachronous groups.

The possibility of MPMs must always be considered during pretreatment evaluation. There is some evidence that screening will improve outcomes among patients who may develop second malignancies, although the data are limited. The optimal screening modalities and strategies to reduce mortality from second malignancies remain to be defined for most tumor sites. 21 With careful monitoring, second primary tumors can be detected early and, with appropriate intervention, might be better managed without compromising survival.
A sizable prospective study needs to be conducted to better understand the profile and outcomes of MPMs, in order to develop strategies for screening and early identification of second primary malignancies and to enhance outcomes.

Limitations
The small sample size and the retrospective nature of our study are its primary limitations.

Conclusion
Second primary malignancies are not rare. They can be synchronous or metachronous. Improvements in diagnostic and staging modalities, and improved survival after management of primary cancers, have increased the detection of second primary malignancies. Strong clinical suspicion and thorough evaluation are beneficial in the management of these tumors. Regular follow-up of a patient diagnosed with and treated for a primary malignancy helps not only to detect recurrence but also to detect most metachronous second primary malignancies at an early stage.

Table 1. System-wise invasion sites. (Abbreviation: CI, confidence interval.)
Table 3. Summary of metachronous double malignancies.
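As an aside for readers reproducing the sample-size calculation described in the Methods, the short sketch below evaluates Cochran's formula with the values quoted there. It is an illustrative addition, not part of the original study.

```python
# Illustrative evaluation of Cochran's sample-size formula,
# n = z^2 * p * (1 - p) / d^2, with the values quoted in the Methods
# (p = 0.73, d = 0.10, z = 1.96 for a 95% confidence level).
from math import ceil

def cochran_n(p: float, d: float, z: float = 1.96) -> int:
    """Return the minimum sample size, rounded up to the next integer."""
    return ceil(z ** 2 * p * (1.0 - p) / d ** 2)

print(cochran_n(p=0.73, d=0.10))  # -> 76, matching the reported sample size
```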
Plant and mammalian-derived extracellular vesicles: a new therapeutic approach for the future

Background: In recent years, extracellular vesicles have been recognized as important mediators of intercellular communication through the transfer of active biomolecules (proteins, lipids, and nucleic acids) across the plant and animal kingdoms. They play considerable roles in several physiological and pathological mechanisms and show great promise as new therapeutic strategies for a variety of pathologies.

Methods: In this study, we carefully reviewed the numerous articles published over the last few decades on the general knowledge of extracellular vesicles, their application in the therapy of various pathologies, and their prospects for the future.

Results: The recent discovery and characterization of extracellular vesicles (EVs) of diverse origins and biogenesis have altered the current paradigm of intercellular communication, opening up new diagnostic and therapeutic perspectives. Research into EVs released by plant and mammalian cells has revealed their involvement in a number of physiological and pathological mechanisms, such as embryonic development, immune response, tissue regeneration, and cancer. They are also being studied as potential biomarkers for disease diagnosis and as vectors for drug delivery.

Conclusion: Nanovesicles are powerful tools for intercellular communication and the transfer of bioactive molecules. Their molecular composition and functions vary according to their origin (plant or mammalian), so their formation, composition, and biological roles open the way to therapeutic applications in a variety of pathologies, which is arousing growing interest in the scientific community.

Clinical Trial Registration: ClinicalTrials.gov identifier: NCT03608631

Introduction
Extracellular vesicles (EVs), also called microparticles, microvesicles, or exosomes, are a heterogeneous group of phospholipid membrane-bound, micro- to nano-sized biovesicles derived from eukaryotic and prokaryotic cells. EVs are bounded by complex lipid-bilayer membranes containing mainly proteins and surface receptors (Table 1), protecting the EV content (soluble and genetic material, including miRNAs) from proteases and nucleases (Abels and Breakefield, 2016; Linares et al., 2017; Suharta et al., 2021). These membrane-demarcated particles are produced and released by the cells of all three domains of life and are diverse in their morphology, biogenesis, composition, and biological role (Gill et al., 2019).
The history of their discovery dates from 1868, when Charles Darwin first presented the concept of EVs. His theory was based on the idea that each cell type in the body generates small germs or gemmules (particles), permitting communication and the transfer of hereditary information to other cell types. He also suggested that the composition of the granules could be modified by the environment, reflecting the exposures of the organism. Extracellular vesicles have been of increasing interest to researchers for over 50 years. EVs were first observed and described by transmission electron microscopy in plants (cotton synergids) in 1965 by William Jensen; he described them as "single-membraned spheres" associated morphologically with the multivesicular bodies, structures believed at that point to be derived from the terminal portion of the endoplasmic reticulum (Jensen, 1965). Two years later, the extracellular vesicles associated with the multivesicular bodies were described by Halperin and Jensen in carrot cell cultures (Halperin and Jensen, 1967; Potestà et al., 2020; Pinedo et al., 2021), and they were further classified by Marchant et al. in 1968 as paramural bodies (lomasomes and plasmalemmasomes). Similar small, spherical "particles" were ultrastructurally identified during the same period in Gram-negative bacteria, where Knox et al. (1966) described them as "extracellular globules" (Knox et al., 1966). In mammalian cells, the discovery of extracellular vesicles is linked with the work of Chargaff and West (1945) on blood clotting, but the first morphological characterization is attributed to Wolf (1967), who described the extracellular vesicles derived from thrombocytes as bud-like expansions and "platelet dust" (Wolf, 1967; Couch et al., 2021).

EVs are studied in many fields as participants in pathophysiological processes, including processes that favor pathology, such as atherosclerosis (Gomez et al., 2020) and cancer-associated thrombosis (Lacroix et al., 2019), and as biomarkers reflecting biological events. EVs carry a large amount of information in their lipid, protein, and nucleic acid contents and were quickly considered potential new biomarkers. They are present in numerous biological fluids, such as blood, pleural fluid, and urine, which is a further advantage for their use as biomarkers. Some studies also consider them as therapeutic targets. EVs derived from plants have been examined for their therapeutic activities, and their structure and cargo are similar to those of EVs isolated from mammalian cells. In addition, plant-derived EVs include bioactive lipids, proteins, and mRNAs and can deliver this important cargo to other cells just like mammalian EVs (Ali et al., 2022). More importantly, they combine several advantageous properties, such as antioxidant (Dinicola et al., 2014), antibacterial (Hosseini-Giv et al., 2022), anti-inflammatory (Zhang et al., 2016a), and anticancer (Chen et al., 2022) activity and regenerative potential for various diseases (Zhu and He, 2023).
Microvesicles (MVs), also called microparticles (MPs), are vesicles that bud directly from the plasma membrane and have sizes of 50-1,000 nm or even more (Figure 1) (Gould and Raposo, 2013). MVs carry characteristic protein markers, including integrins, selectins, and CD40, and their membranes contain cholesterol, diacylglycerol, and phosphatidylserine in greater quantities than exosomes (He et al., 2021). Their formation principally involves a rearrangement of membrane phospholipids and modification of the cell's cytoskeleton, and the resulting MVs enclose part of the cell's cytoplasm. They also reflect the activation state of their cell of origin, or its apoptosis (Boireau and Elie-Caille, 2021).

Apoptotic bodies, also called apoptosomes, are vesicles 1-5 µm in size released by cells undergoing apoptosis (Figure 1) (Zitvogel et al., 1998; Poon et al., 2014). Apoptotic bodies are cleared to ensure a "clean" elimination of the cellular content following apoptosis. This content has a strong immunogenic potential, and its release into the extracellular milieu induces local inflammatory reactions. By embedding the contents in vesicles, the cell ensures they are eliminated by macrophages that recognize the phosphatidylserine (PS) exposed on the apoptotic bodies, thus avoiding inflammatory responses (Zitvogel et al., 1998). The function of apoptotic bodies is not limited to the clearance of cellular components, since they also participate in intercellular communication.

Extracellular vesicles derived from plant cells
Plant-derived EVs are a heterogeneous group of vesicles with different functions, deriving mainly from multivesicular bodies (MVBs), autophagosomes, vacuoles, and exocyst-positive organelles (EXPOs). The size of plant-derived nanovesicles is generally between 50 and 1,000 nm and varies according to plant origin and isolation technique (see Table 2). Plant-derived extracellular vesicles carry heterogeneous cargoes containing different biomolecules (proteins, small RNAs, lipids, and nucleic acids) and are primarily composed of phosphatidic acid, phosphatidylcholine, digalactosyldiacylglycerol, monogalactosyldiacylglycerol, and phytosterols (Liu et al., 2020a; Kocholata et al., 2022). The phospholipids serve several functions, including stability, vesicle release, and intercellular communication, and participate in the mechanism of membrane fusion. In addition, phosphatidylcholine and phosphatidylethanolamine are known for their important roles in strengthening therapeutic activities (antioxidant, antilipolytic, and anti-inflammatory) (Wang et al., 2014). The lipid composition of the vesicle membrane plays an important role in intercellular interactions and in maintaining vesicle stability under physiological and pathological conditions (Xu et al., 2023). Plant-derived nanovesicles also contain small RNAs capable of regulating biological functions, notably interkingdom communication between species, as indicated for Arabidopsis EVs (Zhang et al., 2016b), which carry tiny RNAs (tyRNAs) of 10-17 nucleotides, long non-coding RNAs (lncRNAs), circular RNAs (circRNAs), and small RNAs (sRNAs) (Cai et al., 2018). RNAs can also be identified on the exterior of EVs, where they are protected by RNA-binding proteins against enzymatic degradation.
In addition, PENETRATION1 (PEN1), which traffics between the Golgi complex and the plasma membrane, was established as a biomarker of plant-derived nanovesicles (PDNVs) (Wang et al., 2010). Syntaxins, membrane proteins of the SNARE family, were also established as integral proteins implicated in the transport of vesicles inside cells (Rutter and Innes, 2017). It has been demonstrated that PEN1 does not colocalize with ARA6, indicating that the biogenic pathway of TET8-positive EVs is not the same as that of PEN1-positive EVs. Certain plant tetraspanins, notably Arabidopsis thaliana TETRASPANIN 8 and TETRASPANIN 9 (AtTET8 and AtTET9), are specifically identified upon infection by the fungal pathogen Botrytis cinerea and colocalize with the Arabidopsis MVB marker, the Rab5-type GTPase ARA6, inside the cell and in EVs at fungal infection sites (Ding et al., 2014). In addition, the exocyst-positive organelle (EXPO) was recently found, by live-cell imaging and immunostaining in plants, to fuse with the plasma membrane and release Exo70E2-positive vesicles into the intercellular spaces. The Exo70E2 secretion pathway is independent of MVB pathways, and EXPO is unaffected by inhibitors of secretion and endocytosis in protoplasts.

Biogenesis of extracellular vesicles

Biogenesis of plant-derived extracellular vesicles
Plant-derived nanovesicles generally present a spherical structure when isolated, which could promote passage through the cell wall. Their biological compounds, which include proteins, small RNAs, and metabolites, are unique in each case and depend on the cell of origin (Pegtel and Gould, 2019). The two EV biomarkers PENETRATION1 and TETRASPANIN 8, which are not located in the same places, define two categories of plant EVs. Tetraspanins play a very important role in biogenesis, especially cargo selection, membrane fusion, and exosome absorption (Colombo et al., 2014). The biogenesis of plant-derived nanovesicles resembles the exosome biogenesis pathway, in that plant nanovesicles associated with TETRASPANIN 8/TETRASPANIN 9 (TET8/TET9) are produced by multivesicular bodies (MVBs). PENETRATION1 is a plant-derived nanovesicle biomarker found at the plasma membrane (Rutter and Innes, 2017), and it has been established that trafficking between the Golgi and the plasma membrane is mediated by PEN1. The lack of colocalization between PEN1 and ARA6 shows that the biogenic pathway of TET8 EVs is not the same as that of PEN1 EVs (He et al., 2021). Another pathway of EV biogenesis has been observed for EXPOs (exocyst-positive organelles), which generally present a spherical double-membrane structure comparable to the autophagosome and have been identified in Arabidopsis. Although they resemble autophagosomes, these organelles fuse with the plasma membrane and liberate membrane vesicles into the cell wall space, and these vesicles are considered exosomes secreted by EXPO (Wang et al., 2010). Indeed, EXPO can fuse with the plasma membrane via Exo70E2 and thus release plant-derived nanovesicles (Figure 2). Some RNAs of plant-derived nanovesicles are encapsulated within EVs, but others lie outside the nanovesicles, where they can be protected from enzymatic degradation by RNA-binding proteins. As has been shown, tyRNAs of Arabidopsis thaliana are abundant in EVs relative to cellular RNAs (Baldrich et al., 2019), and only the siRNAs derived from the same RNA precursors can be found in EVs. RNA-binding proteins (RBPs) appear to play an important role in loading RNA into EV precursors and in the stability of exosomal RNAs. Membrane-bound polysomes could be a
site for RNA loading into vesicles. Plant-derived nanovesicles also derive from the endocytic pathway, when membranes invaginate during late endocytosis to form intraluminal vesicles (ILVs) inside multivesicular bodies (MVBs) (Zhang et al., 2017). ILVs are generated by inward budding of the limiting membrane as early endosomes mature; the resulting plant-derived nanovesicles are held within MVBs, which fuse with the plasma membrane to release exosomes. Although the biogenesis of plant-derived nanovesicles has not been well studied, there is evidence for at least three distinct pathways (Figure 2) (Cai et al., 2019; Nemati et al., 2022).

Biogenesis of mammalian-derived extracellular vesicles
The EV types, namely MVs, exosomes (EXOs), and apoptotic bodies, have different modes of biogenesis. MVs bud from the plasma membrane, whereas EXOs originate in the endosomal system as intraluminal vesicles (ILVs) that are released when MVBs fuse with the plasma membrane; apoptotic bodies are liberated by budding from cells undergoing apoptosis (Poon et al., 2014) (Figure 3). However, even though MVs and EXOs form at different cellular locations, common intracellular mechanisms may be implicated in the formation of both, depending on the cell type (van der Pol et al., 2014). In that study, the results indicated that T lymphocytes are capable of forming EVs at the plasma membrane with the characteristics of EXOs. Intracellular mechanisms involved in both processes may therefore operate simultaneously, giving rise to different subpopulations of EVs (Colombo et al., 2014).

Biogenesis of exosomes
The endosomal origin of exosomes was confirmed by the analysis of vesicles liberated by immune cells, such as B lymphocytes and dendritic cells (Zitvogel et al., 1998). Exosomes are generated within late endosomes as they become multivesicular bodies (MVBs) and are then released into the extracellular milieu by exocytosis. The formation of MVBs involves the machinery of the Endosomal Sorting Complex Required for Transport (ESCRT). This machinery is composed of many proteins assembled in four complexes (ESCRT-0, -I, -II, and -III), acting with associated proteins such as the ATPase VPS4, VTA1, and TSG101 (Hanson and Cashikar, 2012). In brief, ESCRT-0 acts first in the cascade, committing the cargo destined for the lysosome. It engages ESCRT-I, which interacts with ESCRT-II to form the lipid vesicle in which the cargoes are sequestered. Finally, ESCRT-II engages ESCRT-III, which releases the vesicle into the MVB, acting together with the fifth component of the machinery, VPS4/VTA1. Formation of EXOs independently of the ESCRT complex has also been described, involving the synthesis of ceramides: neutral sphingomyelinase hydrolyzes sphingomyelin to ceramide, which generates membrane sub-domains that impose a spontaneous negative curvature on the membrane. It has been demonstrated that inhibition of neutral sphingomyelinase 2 (nSMase-2) prevents ILV budding in MVBs and EXO release by a mechanism independent of the ESCRT machinery (Trajkovic et al., 2008). Exosomes can be detected by different markers, some of which are specific to this type of EV, notably tetraspanin membrane proteins (CD9, CD63, CD81, and CD82), cytosolic proteins involved in endo-lysosomal traffic, including Alix, Tsg101, and 14-3-3, major histocompatibility
complex (MHC) proteins, heat shock protein (HSP) chaperones, and the ESCRT-III complex, which binds the Alix protein (Verderio et al., 2018).

Biogenesis of microvesicles
Knowledge of the biogenesis of MVs produced by healthy cells is more recent (Minciacchi et al., 2015). MV biogenesis results from a remodeling of the cytoskeleton induced by an increase in intracytosolic Ca2+ concentration (Al-Nedawi et al., 2008) (Figure 3A). The increase in Ca2+ stimulates the enzymatic machinery governing membrane phospholipid asymmetry, including the aminophospholipid translocases (flippases and floppases), scramblases, and calpain. Activation of this enzymatic machinery leads to the externalization of phosphatidylserine (PS) from the inner to the outer leaflet of the plasma membrane. The loss of membrane asymmetry induces an excess of negative charge at the surface of the plasma membrane (Piccin et al., 2007). This excess charge is responsible for membrane curvature and cytoskeletal reorganization favoring MV release (Jimenez et al., 2003; Del Conde et al., 2005). Inhibition of scramblase has been shown to suppress PS externalization in platelets and the formation of procoagulant MVs (Jimenez et al., 2003). However, MVs can also be formed when membrane phospholipid asymmetry is maintained (Del Conde et al., 2005). These observations indicate that other lipids, and the domains they structure, contribute to the biogenesis of MVs. An important membrane lipid component is cholesterol, which is present in MVs, and depletion of membrane cholesterol has been shown to decrease MV production by THP-1 monocytes (Li et al., 2012). In addition to membrane remodeling and cytoskeletal rearrangement, other regulators are necessary for MV biogenesis. The activity of small GTPases of the RHO family and of ROCK (RHO-associated protein kinase), important regulators of actin dynamics, induces MV formation in different tumor cells (McConnell et al., 2009).

Release of MVs requires scission from the plasma membrane. This mechanism depends on the interaction of actin and myosin, with ATP-dependent contraction (Muralidharan-Chari et al., 2009). In cancer cells, it has been shown that activation of ARF6 (ADP-ribosylation factor 6), ARF1 (ADP-ribosylation factor 1), and small GTP-binding proteins leads to phosphorylation of the myosin light chain and contraction of actomyosin, permitting MVs to detach from the membrane (Nabhan et al., 2012). Another regulator of actin dynamics, Cdc42 (cell division control protein 42 homolog), is involved in MV release in HeLa cells, although the mechanism is unknown (McConnell et al., 2009). In another study, TSG101 (tumor susceptibility gene 101 protein) and the ATPase VPS4, which are mainly involved in EXO formation via the ESCRT machinery, were reported to be involved in the fission and release of MVs (Wehman et al., 2011). The involvement of the ESCRT machinery in MV release has also been demonstrated in C. elegans embryos (Bianco et al., 2005). Another possible pathway for MV release is the activation of the ATP-dependent purinoceptor P2X7; activation of this receptor results in membrane rearrangements that influence MV release (Bianco et al., 2009). This process is associated with the translocation of acid sphingomyelinase to the plasma membrane, which produces ceramide, favoring membrane budding and MV liberation (Reátegui et al., 2018).
Biogenesis of apoptotic bodies
Apoptotic bodies are larger than other vesicles and carry particular markers (thrombospondin and complement components C3 and C3b) (Akers et al., 2013; Duijvesz et al., 2015). Apoptotic bodies are produced during apoptosis-induced cell death (Suárez et al., 2017). They are fragments of the cell formed by blebbing. This phenomenon occurs when ruptures form in the actin cortex as a result of actomyosin contractions, creating weak points in the cortex. These weak points inflate under the hydrostatic pressure of the cell, producing protrusions called blebs. The actin is then rearranged within these blebs, leading to membrane scission (Suárez et al., 2017), and the vesicles are liberated into the extracellular milieu (Akers et al., 2013; Suárez et al., 2017). Some smaller vesicles can also be produced during apoptosis (apoptotic vesicles), but the mechanisms that control their production and release are not yet clearly defined (Suárez et al., 2017).

Different methods of isolating extracellular vesicles
Extracellular vesicles are generally isolated from different biofluids. Given their heterogeneity in size and concentration, which depends on their mode of biogenesis and cellular origin, these extracellular vesicles are separated according to their sub-types and sub-populations using different isolation techniques (Boireau and Elie-Caille, 2021).

Nowadays, several methods of EV purification exist, but not all of them achieve the same degree of purification or the same sample concentration. For this reason, some techniques are called "concentration" techniques, such as differential centrifugation, which allows only a slight purification of the sample even when several rounds of washing are carried out, whereas others are called "purification" techniques because they achieve a real separation between the EVs and the soluble proteins of the surrounding medium. At present, many teams propose combining several methods for purifying EVs. This section examines the different reference techniques used to isolate EVs, as indicated in Figure 4.

Ultracentrifugation
Ultracentrifugation is the most frequently used method for isolating EVs and has been used to study EVs derived from cell culture supernatants and biological fluids (Campoy et al., 2016; Coughlan et al., 2020). Among its advantages, ultracentrifugation can isolate EVs from large volumes of biological fluid, requires a relatively restricted set of reagents and consumables, and has no impact on the EVs apart from gravitational force and pipetting (no chemicals likely to interfere with downstream EV analysis are used). To reduce co-sedimenting debris and contamination of the preparation by cell lysis products, the procedure includes several sub-steps: first, centrifugation at 300-400 × g for 10 min; second, sedimentation of a large proportion of the cells at 2,000 × g to remove cell debris; third, centrifugation at 10,000 × g to eliminate biopolymer aggregates and apoptotic bodies; and finally, ultracentrifugation of the EV-containing supernatant at 100,000-200,000 × g for about 2 h to pellet the EVs.
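To make the workflow above easier to follow, the short sketch below encodes the quoted spin steps as structured data. The g-forces and durations are those given in the text; the step annotations, the "unspecified" placeholders, and the printing logic are illustrative additions, not a validated laboratory protocol.

```python
# Schematic summary of the differential-centrifugation workflow described
# above. Where the text leaves a purpose or duration unstated, the entry is
# marked "unspecified" rather than guessed.
from typing import NamedTuple

class SpinStep(NamedTuple):
    g_force: str   # relative centrifugal force
    duration: str
    removes: str   # what is pelleted/discarded at this step

EV_ISOLATION = [
    SpinStep("300-400 x g", "10 min", "initial clearing (purpose unspecified)"),
    SpinStep("2,000 x g", "unspecified", "cells and cell debris"),
    SpinStep("10,000 x g", "unspecified", "biopolymer aggregates, apoptotic bodies"),
    SpinStep("100,000-200,000 x g", "~2 h", "pellets the EVs from the supernatant"),
]

for i, step in enumerate(EV_ISOLATION, start=1):
    print(f"Step {i}: {step.g_force} for {step.duration} -> {step.removes}")
```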
Differential ultracentrifugation
Differential ultracentrifugation (UC) is a well-known method for isolating EVs, favored for its ease of use and its accessibility (Witwer et al., 2013; Pérez-Bermúdez et al., 2017). It separates large EVs (by centrifugation at around 10,000 × g) from small EVs (ultracentrifugation at 100,000 × g): large extracellular vesicles pellet at the lower centrifugation speed and small ones at the high speed. Macromolecules and/or lipoproteins can be co-precipitated with the extracellular vesicles (Blandin and Le Lay, 2020).

Density gradient ultracentrifugation
Density gradient ultracentrifugation is a classic technique for isolating groups of vesicles based on their density and flotation behavior (Cai et al., 2018), improving the purification of extracellular vesicles. Flotation of the isolated EVs on a density gradient permits the separation of macromolecular complexes and/or lipoproteins co-precipitated with the EVs. This method gives excellent results in terms of EV fraction purity and the amount of EV protein and RNA compared with classical ultracentrifugation and commercial kits (Van Deun et al., 2014). Moreover, EV preparations isolated in this way are free of microvesicles larger than 200 nm, in contrast to EVs obtained by other methods (Lobb et al., 2015). At present, density gradient ultracentrifugation is frequently used to isolate microvesicles. However, the method reduces EV yield; it is complex, tedious, and time-consuming (up to 2 days) and requires expensive materials (Lobb et al., 2015; Zeringer et al., 2015).

Filtration techniques
Diverse techniques have been developed to filter EVs, including cross-flow (tangential flow) filtration (TFF) and centrifugal ultrafiltration (UF). In the final filtration stage, pressure drives the smaller particles through the membrane while preventing them from being blocked by clumps. UF membranes provide a size-based cut-off, which permits the removal of contaminants from extracellular vesicle preparations obtained by TFF. Sequential UF, consisting of a succession of filtration steps with a gradually reduced cut-off, can permit EV isolation and enrichment from complex biological fluids (Boireau and Elie-Caille, 2021).
Gel filtration (size exclusion chromatography)
Size exclusion chromatography is a classical technique for fractionating biological entities according to their size or hydrodynamic volume (Boireau and Elie-Caille, 2021). It uses a porous matrix in which particles smaller than the pore size are retained longer, so that large EVs elute ahead of smaller components (Blandin and Le Lay, 2020). It is widely applied to the preparation of biopolymers (proteins, polysaccharides, proteoglycans, etc.). As demonstrated, this technique can also separate EVs from the protein and lipoprotein complexes of blood plasma and urine (Böing et al., 2014; Muller et al., 2014; Lozano-Ramos et al., 2015; Gámez-Valero et al., 2016), a difficult task at which many other methods have failed (Mathivanan et al., 2012). Gel chromatography is an efficient and rapid technique permitting EV isolation with high reproducibility and little loss. In particular, exosomes have a very large hydrodynamic radius in comparison with proteins, lipoproteins, and protein complexes, and can therefore be separated very well from these components. However, some chylomicrons are similar in size to the isolated vesicles; EV preparations made in this manner thus contain lipoproteins, although at a much lower level than with other techniques applied to EV isolation (Yamamoto et al., 1970).

Immunocapture (magnetic sorting)
Immunocapture, or magnetic sorting, is a technique that retains EVs expressing a surface antigen (e.g., CD9, CD63, or CD81) using monoclonal antibodies directed against this antigen, coupled to magnetic beads held on a column by magnets. In the absence of the magnetic field, the antigen-positive EVs are then eluted. Sub-populations of EVs can also be isolated by immunocapture using antibodies directed specifically against antigens carried by EVs, such as tetraspanins (Blandin and Le Lay, 2020). The technique isolates a particular sub-category of EVs on beads coated with an antibody that recognizes a specific protein marker exposed on EV membranes, and it can also prevent contamination of the isolated EVs by cytoplasmic proteins or RNA (Van Deun et al., 2014).

Precipitation with PEG
This method uses polymer solutions of dextran or polyethylene glycol (PEG) to induce phase separation and precipitate EVs at low centrifugation speed, while the polymer retains macromolecules and other molecular components in solution. Precipitation of EVs by polymer mixtures at low centrifugation speed is the basic principle of many commercial kits (e.g., ExoQuick-TC™). These kits have the advantage of being rapid and easy to use but often lead to the co-precipitation of macromolecular protein complexes, lipoproteins, or immunoglobulins (Blandin and Le Lay, 2020). PEGs of diverse molecular weights have been used for many years for the precipitation of proteins, nucleic acids, viruses, and other small particles (Yamamoto et al., 1970). The technique exploits the reduced solubility of such particles in highly hydrophilic polymer (PEG) solutions. The procedure simply involves combining the polymer solution with the sample, incubating, and pelleting the EVs by low-speed centrifugation (1,500 × g).
In conclusion, other methods for isolating extracellular vesicles also exist, such as microfluidic systems and immunological separation, and their number continues to grow in parallel with technological advances.

Isolation of mammalian-derived extracellular vesicles
At present, various methods have been used to isolate mammalian-derived EVs, primarily ultracentrifugation, differential centrifugation, density gradients, and size exclusion chromatography. However, not all of these methods are suited to scaling up extracellular vesicle production; they require several complicated and repetitive centrifugation steps to remove all debris, and these steps are not yet well detailed in the literature. This observation has led some researchers to develop new, simpler purification procedures, such as those described in the following studies. For isolation of EVs derived from human milk by ultracentrifugation and filtration, milk samples were first cleared of debris by successive centrifugations (6,500 × g at 4 °C for 30 min and 12,000 × g at 4 °C for 1 h). The skimmed milk was then passed through 0.45 and 0.22 μm filters to remove any remaining debris, and the filtered supernatant was centrifuged at 135,000 × g for 90 min at 4 °C to pellet the exosomes (Reif et al., 2020; You et al., 2021). In another study, milk-derived EVs were isolated by differential centrifugation and a density gradient: the milk was centrifuged twice at 3,000 × g, and the milk supernatant was then subjected to differential centrifugation at 5,000 × g and 10,000 × g in new, sterilized SW40 tubes. The 10,000 × g supernatant was loaded onto a sucrose gradient (ranging from 2.0 to 0.4 M sucrose) and centrifuged at 192,000 × g for 15-18 h. In the final stage, the samples were collected, combined, and centrifuged at 100,000 × g for 65 min; the supernatant was removed, and the EV pellets were aliquoted and stored at −80 °C (Van Herwijnen et al., 2018a). Bovine milk EVs have been isolated by the acetic acid/ultracentrifugation (AA/UC) method: skimmed milk was first heated for 10 min at 37 °C, then mixed with acetic acid and centrifuged at 10,000 × g for 10 min at 4 °C. The supernatant was filtered through a 0.22 μm membrane and designated lacto-serum, which was ultracentrifuged at 210,000 × g for 70 min at 4 °C. The EV pellet was resuspended in phosphate-buffered saline (PBS) for clean-up, and the remaining precipitates were removed by centrifugation at 10,000 × g for 5 min at 4 °C (András and Toborek, 2015). In Table 3 we summarize several studies on the isolation of mammalian-derived extracellular vesicles.
Isolation of plant-derived extracellular vesicles
There are several methods for isolating plant-derived nanovesicles (PDNVs), for example, differential ultracentrifugation (UC) combined with sucrose density gradient centrifugation, PEG precipitation, size exclusion chromatography (SEC), and ultrafiltration membrane separation. Nevertheless, in practical terms (extraction rate, extraction time, and purity), none of these methods is excellent on its own, and a combination of several methods seems to be the best solution; the advantages and disadvantages of each technique are established. Differential ultracentrifugation of a plant extract is the most widely used approach for isolating nanovesicles of plant origin, after which they are purified by density centrifugation on a sucrose gradient; different types of nanovesicles can be selected based on their density within the sucrose gradient (Yang et al., 2018a). The main UC steps are preparation of the sample (mixing, pressing, or grinding) followed by isolation and purification of the plant-derived nanovesicles by differential ultracentrifugation and sucrose gradient ultracentrifugation. The supernatant is initially centrifuged at low speed to remove fibrous debris from the plant tissue; the centrifugation speed is then gradually increased to remove finer particles while preserving plant-derived EVs in the supernatant. As the number of centrifugation rounds increases, both the speed and the duration of centrifugation increase. After three centrifugations at relatively low speed, the supernatant is retained, and a centrifugal force of between 100,000 and 150,000 × g is selected for the final centrifugation (Li et al., 2023a) (see Figure 5). In Table 4 we summarize several studies on the isolation of plant-derived extracellular vesicles. The biological activities of exosome-type nanovesicles, isolated simply from plant EVs with their bioactive cargoes structurally intact, can diminish pathological conditions in species of other kingdoms and provide multiple therapeutic alternatives (Mu et al., 2014; Dad et al., 2021).

To give an overview of the potential therapeutic applications of plant-derived extracellular vesicles (P-EVs) by plant type: edible plants have in recent years been the focus of numerous studies, which have revealed encouraging properties, including availability, biocompatibility, and biodegradability. These EVs could well prove useful for the therapy of many diseases (Kameli et al., 2021).

More importantly, they possess several advantageous properties; ginger, for example, has various beneficial activities, including antioxidant (Hung et al., 2017), antibacterial (Dinicola et al., 2014), anti-inflammatory (Rome, 2019), and anticancer (Vislocky and Fernandez, 2010) effects and regenerative potential for various diseases (Brahmbhatt et al., 2013). In addition, plant-derived nanovesicles are carriers of biomolecules, including proteins, drugs, DNA vectors, and siRNAs, which they deliver to target tissues (Nemati et al., 2022). For this reason, plant biomolecules have received much attention from researchers for their potential to improve health and protect against diverse infections.
In recent years, in the field of nanotechnology, nanovesicles derived from edible plants have attracted the attention of scientists for their drug delivery potential, as these particles can carry hydrophobic and hydrophilic therapeutic agents to disease target sites. Below we describe studies that have demonstrated the therapeutic efficacy of nanovesicles derived from edible plants (grape, grapefruit, ginger, lemon, and carrot).

Ginger contains intrinsic chemical constituents, such as shogaol and gingerol, that possess several health benefits, and multiple studies have indicated therapeutic effects of ginger, including studies of its effects in regeneration (Zhang et al., 2016b; Zhu and He, 2023). Ginger-derived nanovesicles have been shown to reduce the expression of a secreted hemotoxic protein and to influence the expression of other mitochondrial and cytoplasmic proteins, such as heat shock protein, axin, and kinesin, in intestinal wound recovery (Kim et al., 2022). Further studies have examined the antitumor activities of ginger; this research found that plant-derived miRNAs are capable of crossing species boundaries and performing a genuine regulatory role in the human organism (Liu et al., 2017). Furthermore, ginger-derived nanovesicles have been demonstrated to reduce cyclin D1 RNA levels in mice with colorectal cancer (Zhang et al., 2016b). Studies also indicate that ginger has hepatoprotective properties against ethanol-, acetaminophen-, and carbon tetrachloride-induced hepatotoxicity (Zhuang et al., 2015).

Grapefruit-derived nanovectors (GNVs) can carry a variety of therapeutic agents, including chemotherapeutics, DNA expression vectors, siRNAs, and proteins such as antibodies. Particles purified from grapefruit have been reported to show high stability and can carry agents such as curcumin and zymosan in functionally active form. Grapefruit-derived nanovesicles are similar in size and structure to mammalian-derived exosomes; they contain proteins, lipids, and miRNAs and are taken up by intestinal macrophages and stem cells. In another study, Wang et al. coated grapefruit-derived nanovectors (GDNVs) with membranes from activated leukocytes rich in inflammation-related receptors (IGNVs). Using various animal models of inflammation-induced disease, they demonstrated that IGNVs target inflamed tumor tissue better than GDNVs; moreover, this targeting of inflamed tissue was significantly inhibited by blocking LFA-1 or CXCR1/CXCR2 on the IGNV membranes. Wang et al. also reported that grapefruit-derived nanovesicles could deliver chemotherapeutic agents, siRNAs, DNA expression vectors, and proteins to different cell types. They further co-administered grapefruit-derived nanovesicles with folic acid and reported that this strategy significantly increased the efficiency of targeting cells expressing folic acid receptors. They then demonstrated that these nanovesicles improved chemotherapy-induced inhibition of tumor growth in CT26 and SW620 cell-derived tumors in mice (Wang et al., 2015).
Nanovesicles derived from Citrus limon (lemon) have many benefits, including antibacterial, antifungal, anti-inflammatory, anticancer, hepato-regenerative, and cardioprotective activities (Bhavsar et al., 2007; Kim et al., 2012; Riaz et al., 2014; Parhiz et al., 2015; Otang and Afolayan, 2016). The pharmacological potential of citrus lemon is determined by its rich chemical composition, which includes phenolic acids, coumarins, carboxylic acids, amino acids, and vitamins. Some studies have demonstrated that citrus lemon nanovesicles inhibited the growth of chronic myeloid leukemia (CML) tumors in vivo by specifically reaching tumor sites and activating TRAIL-mediated apoptosis; they curbed in vivo tumor development in CML by targeting tumors, reducing oxidation, and reducing cancer risk (Raimondo et al., 2015). In another study, Citrus limon-derived nanovesicles were shown to have cell growth inhibitory effects, primarily in p53-inactivated colorectal cancer (CRC) cell lines, via the macropinocytosis pathway; the results indicated that p53 inactivation increased macropinocytosis activity and that the nanovesicles exerted their growth-inhibitory effect through this pathway (Takakura et al., 2022).

Extracellular vesicles derived from carrot contain phytochemicals, namely phenolics, carotenoids, polyacetylenes, and ascorbic acid. These compounds help reduce the risk of cancer and cardiovascular disease through their antioxidant, anti-inflammatory, plasma lipid-modifying, and antitumor properties. The role of polyphenols in the prevention of degenerative diseases, such as cancer, cardiovascular diseases, and neurodegenerative diseases, has been reported, and carrot (Daucus carota) is also used in research for its medical effects.

Carrot juice also contains glutathione, an antioxidant that protects against free radicals, and presents potent anti-inflammatory properties that can relieve rheumatic and arthritic symptoms (Metzger et al., 2008). A study establishing that carrot-derived nanovesicles have anti-inflammatory and antioxidant effects capable of restoring glucose tolerance and cardiovascular and liver function has been carried out in an in vivo model (Poudyal et al., 2010). In another study, EVs derived from carrots (Carex) were investigated as a novel biomaterial with antioxidant functions in cardiomyoblast and neuroblastoma cells. The results indicated properties similar to those of other EVs, and the antioxidant and anti-apoptotic effects of Carex in cardiomyoblasts and neuroblastoma cells were studied further. Carex significantly inhibited ROS production and the induction of apoptosis; the antioxidant effect of Carex may therefore be most effective in the early phase of disease. Carex presented low cytotoxicity in H9C2 cardiomyoblasts and SH-SY5Y neuroblastoma cells even when delivered to the cells at high levels. In addition, Carex prevented the reduced expression of antioxidant molecules, including Nrf-2, HO-1, and NQO-1, in both models (Kim and Rhee, 2021).
Grape (Vitis vinifera) nanovesicles and their bioactive compounds have several pharmacological activities, such as antioxidation and risk reduction, and contain several active components, including flavonoids, polyphenols, anthocyanins, proanthocyanidins, procyanidins, and resveratrol, a stilbene derivative. Grape has a wide range of pharmacological and therapeutic effects, such as antioxidant, anti-inflammatory, and antimicrobial activities, as well as cardioprotective, hepatoprotective, and neuroprotective effects. Wang et al. (2013) and Zhuang et al. (2016) reported that grape-derived nanovesicles contain microRNAs, proteins, and lipids. Although these nanovesicles do not arise in the same way as mammalian cell-derived exosomes, their structure and composition are similar (Ju et al., 2013). Grape-derived nanovesicles contain proteins such as aquaporins and HSP70 and are enriched in phosphatidylethanolamine. Studies have indicated that grape-derived nanovesicles present unique transport characteristics and biological functions (Ju et al., 2013; Yu et al., 2020): they can traverse the intestinal mucosal barrier and be taken up by mouse intestinal stem cells, significantly stimulating intestinal stem cells through the Wnt/β-catenin pathway (Yu et al., 2020). In addition, grape-derived nanovesicles can reduce many of the risk factors associated with cancer, cardiovascular disease, age-related cognitive decline, and neurodegenerative diseases. These effects are generally attributed to the flavonoids in grapes, their antioxidant activity, and increased nitric oxide production (Vislocky and Fernandez, 2010).

In terms of therapeutic applications, the extracellular vesicles released by plant cells can carry several therapeutic agents, including drugs, proteins, DNA vectors, and siRNAs, and deliver them to target tissues, making them promising natural resources for modern drug discovery. They also reduce the risk of various pathologies, as mentioned above, including cancer and chronic inflammatory diseases. Because these particles can deliver both hydrophobic and hydrophilic therapeutic agents to targeted disease sites, they have attracted considerable attention from scientists, positioning extracellular vesicles as ideal candidates for therapeutic applications. Notably, plant-derived nanovesicles can home intrinsically to target tissues, one of the most important characteristics of a targeted delivery system. Not all aspects of plant-derived nanovesicles have yet been fully identified and described, as they represent a new concept in the field of nanomedicine. Nevertheless, the therapeutic potential of plant-derived nanovesicles has recently been demonstrated in several disease models and widely applied in the development of new drugs to treat particular diseases or maintain healthy body functions. A variety of plants have been used for the isolation of therapeutically effective exosomes with diverse functionalities. Table 5 lists important plants that have been used to extract plant-derived nanovesicles and their therapeutic applications.
In conclusion, to advance therapeutically, drug nanocarriers require a thorough evaluation of their physicochemical characteristics and their interactions with various biological environments (Herrmann et al., 2021). While liposomes have been evaluated extensively, EVs have shown properties that make them superior as drug delivery systems (Kooijmans et al., 2016). Plant-derived nanovesicles can be used as drug carriers, as they present advantageous properties including low immunogenicity, tissue-specific targeting, safety, large-scale production, favorable negative zeta potential values, and the ability to load many biomolecules. In Table 5, we present a summary of other plant-derived nanovesicles for therapeutic use.

Therapeutic effects and applications of mammalian-derived extracellular vesicles
We now discuss studies that attribute various therapeutic functions to mammalian-derived extracellular vesicles. For example, extracellular vesicles of endosomal origin released by different cell types, such as neurons, serve to remove unwanted proteins, acting as a drainage system, and also transport their cargo, a specific set of proteins, RNAs, and lipids, between cells. Recently, extracellular vesicles derived from mouse neuroblastoma cells were shown to capture Aβ, resulting in reduced amounts of Aβ, decreased amyloid deposition, and reduced Aβ-induced synaptotoxicity in the hippocampus, demonstrating a role for neuroblastoma cell extracellular vesicles in Aβ clearance (Yuyama et al., 2014). Similar results were described for neuron-derived EVs (Yuyama et al., 2015). Endothelial extracellular vesicles secreted by activated or apoptotic endothelial cells may participate in both harmful and beneficial aspects of the vascular endothelial response, including anticoagulation, anti-inflammatory effects, angiogenesis, endothelial survival, and endothelial regeneration (Dignat-George and Boulanger, 2011). In another study, miRNAs present in human and porcine milk EVs targeted a variety of genes involved in the regulation of the epithelial barrier and neonatal defense; these breast milk EV-associated miRNAs contributed to guiding the subsequent development of the newborn (Van Herwijnen et al., 2018b). Another study indicated that extracellular vesicles in pig milk attenuated deoxynivalenol (DON)-induced changes in body weight and intestinal epithelial growth in mice. DON inhibited cell proliferation and the formation of tight junction proteins, while the EVs reduced DON-induced apoptosis. The EVs also increased the expression of miR-181a, miR-30c, miR-365-5p, and miR-769-3p in IPEC-J2 cells, thereby reducing the expression of their target genes in the p53 pathway, and ultimately attenuated DON-induced lesions by promoting cell proliferation and tight junctions and inhibiting apoptosis (Xie et al., 2020). In addition, a study using the C-26 colorectal tumor model examined cow's milk-derived extracellular vesicles, which reduced primary tumor growth and attenuated the progression of cancer-related body weight loss. These findings highlight the role of milk-derived extracellular vesicles in interspecies communication and their considerable, context-dependent role in regulating cancer progression and metastasis. The results also suggested that milk-derived extracellular vesicles possess antiproliferative properties against cancer cells in vitro (Zhuang et al., 2016), as
demonstrated in similar studies (Munagala et al., 2016). It has been indicated that peripheral endothelial cell-derived EVs may be involved in the modulation of innate immunity (Palette et al., 2013). In addition, human dermal microvascular endothelial cells have been shown to selectively sequester cytoplasmic RNA degradation machinery in exosomes, which could also be involved in gene regulation. It has been established that dendritic cell-derived EVs can be targeted to deliver siRNAs to neurons, microglia, and oligodendrocytes in the mouse brain (András and Toborek, 2015; Somiya et al., 2018). In another study, EVs derived from multipotent umbilical cord mesenchymal stromal cells presented anti-apoptotic, proangiogenic, antifibrotic, and immunomodulatory activities, similar to those of their source cells (Rohde et al., 2019).

In terms of therapeutic application, developing a therapeutic approach entails confronting a series of challenges, including toxicity, safety, target specificity, and large-scale production; this has led to several targeting strategies for drug delivery in preclinical and clinical settings. Mammalian-derived nanovesicles are promising for clinical applications, both as biomarkers and as therapeutic vectors. As has been demonstrated, EVs can be used as therapeutic agents or as drug cargo carriers for reaching a specific target thanks to their membrane proteins (Yang et al., 2018b). When a drug is encapsulated in extracellular vesicles, the EVs act as a carrier, protecting the drug and delivering it safely to the target site.

Numerous studies have indicated that vesicles derived from mesenchymal stem cells (MSCs) appear especially useful for enhancing recovery after various injuries. As has been shown in mice, injection of MSC-derived EVs suppressed hypoxia-induced inflammation and hypertension (Lee et al., 2012). MSC EVs can exert a neuroprotective effect after a brain lesion (Xin et al., 2012). Similarly, MSC EVs deliver miR-16 and other molecules to mouse breast cancer cells, decreasing vascular endothelial growth factor expression and reducing tumor growth (Lee et al., 2013).
More recently, a study demonstrated that EVs derived from macrophages or liver sinusoidal cells treated with interferon-α deliver antiviral RNA and proteins to hepatocytes, thereby reducing hepatitis B virus replication (Li et al., 2013). How different natural EVs promote these diverse responses remains to be elucidated. Drug loading into EVs can be achieved by two strategies: either by directly loading the drug into exosomes, or by loading the drug into the parent cell so that it is incorporated during exosome biogenesis (Mittelbrunn and Sanchez-Madrid, 2010). Moreover, in the case of lipophilic drugs, the loading mechanism is relatively simple because the drug interacts with the lipid bilayer of the EVs via hydrophobic interactions (Shtam et al., 2013). Methods such as electroporation, incubation, sonication, and freeze-thawing have been used for the exogenous loading of EVs with drugs (Jo et al., 2014). The use of EVs as drug carriers presents several advantages but must conform to good manufacturing practice (GMP) (Chen et al., 2020). There are several important aspects of nanovesicle GMP. The low large-scale productivity of nanovesicles is one of the major problems encountered in the implementation of nanovesicle-based therapeutics; the main issues concerning the various isolation methods are the physicochemical and purity properties of the exosomes, the need for a high-quality, uniformly shaped nanovesicle population, and the standardization of storage requirements. Enriching nanovesicles with therapeutic biomolecules is a straightforward way of improving their therapeutic potential (Abou-El-Enein et al., 2013). To use nanovesicles as reliable therapeutic agents, scalable manufacturing processes are needed to produce exosomes rapidly, cost-effectively, and reproducibly. Table 6 summarizes further mammalian-derived nanovesicles for therapeutic use.

Therapeutic efficacy of plant and mammalian EVs in vitro and in vivo

In this section, we present the different studies that have been performed on plant- and mammalian-derived extracellular vesicles in vivo and in vitro.

Studies that have evaluated plant-derived extracellular vesicles in vivo and in vitro: in in vitro studies, researchers observed that grapefruit-derived nano-vectors have good biocompatibility, with little toxicity to, or apoptosis of, macrophages and colon-26 cells, compared to a commercial DC-Chol/DOPE liposome preparation (Nemati et al., 2022). Further in vivo and in vitro tests confirmed that grapefruit-derived nano-vectors had the potential to regenerate mucosal tissue in mice with colitis (Zhang et al., 2016b). Other studies have examined in detail the function of four plant-derived EVs (from carrots, grapes, grapefruit, and ginger) in intercellular communication. This work was conducted in vitro and in vivo, and the results suggested that these extracellular vesicles can regulate intestinal intercellular communication (Mu et al., 2014).

It has been reported that grapefruit-derived nanovesicles can be labeled with a lipophilic carbocyanine dye that permits their in vivo tracking by fluorescence (Zhuang et al., 2016). An in vivo study using diverse inflammation models in mice indicated that leukocyte-coated plant-derived extracellular vesicles (P-EVs) improved the efficiency of doxorubicin (Dox) release at inflammatory sites (Li et al., 2018).
Studies that evaluated mammalian-derived extracellular vesicles in vivo and in vitro: in vitro and in vivo studies used to identify TGFβ receptors and mechanisms of action have revealed pleiotropic roles for TGFβ in the control of pathophysiological processes. In addition, numerous preclinical results from in vitro cell models and in vivo animal models demonstrate the great potential of antitumor therapies using TGFβ-neutralizing antibodies and ligand traps that block the interaction of TGFβ with its receptors, or small-molecule selective inhibitors of the TGFβ receptor kinases (Liu et al., 2021). In another study, the effect of siponimod on ocular neovascularization in vivo was evaluated using suture-induced corneal neovascularization in albino rabbits. The results suggest that siponimod does not affect endothelial cell proliferation or metabolic activity but significantly inhibits endothelial cell migration, increases human microvascular endothelial cell (HMEC) barrier integrity, and reduces TNF-α-induced barrier disruption (Palette et al., 2013). In another in vivo study, differences in protein cargo and non-coding RNA were identified that distinguished cardiosphere-derived cell EVs from mesenchymal stem cell EVs and reflected differences in the effects of in vivo treatment (Walravens et al., 2021).

Toxicity and immunogenicity of mammal- and plant-derived nanovesicles

The study of the immunogenicity and toxicity of extracellular vesicles in therapy is fundamental to their preclinical development and the development of their therapeutic properties. Evaluation of the toxicity and safety of EVs in vitro and in vivo will permit the establishment of doses for future clinical use and recommendations for safe human application. An ideal drug delivery system should guarantee non-toxicity and non-immunogenicity, without side effects, both in vitro and in vivo (Zhuang et al., 2015). However, there are still only a few tentative findings regarding their cytotoxic effects in living subjects. So far, plant-derived nanovesicles have shown remarkable biocompatibility owing to their natural origin. To evaluate the toxicity of plant-derived nanovesicles, the authors used tumor-targeted grapefruit-derived nano-vectors (GDNV) and examined whether significant tissue damage occurred in the organs. Because of the effective targeting of GDNV to tumors, GDNV accumulate less in the spleen and liver; this permits the minimization of systemic drug toxicity to normal tissues while improving the blood circulation of the delivered drugs. Histological analysis of the heart, liver, spleen, lungs, and kidneys did not demonstrate significant damage compared to the control group, indicating that plant-derived nanovesicles can be applied as nanodrug delivery platforms, improving drug efficacy and reducing potential toxicity (Kim et al., 2022). In a recent study, markers such as pro-inflammatory cytokines and serum levels of liver enzymes, including alanine aminotransferase (ALT) and aspartate aminotransferase (AST), were evaluated to determine the potential cytotoxic effects of grape-derived nanovesicles in mice. Mice were pretreated with grapefruit-derived nanovesicles or with commercially available DOTAP (1,2-dioleoyl-3-trimethylammonium-propane)-DOPE (dioleoylphosphatidylethanolamine) liposomes.
Pro-inflammatory cytokines were significantly elevated in the liposome-treated mice, whereas no such increase was recorded in the group of mice treated with plant exosome-like nanovesicles (PELNV). Furthermore, no pathological alterations were observed in histological samples of the liver, kidney, spleen, and lung from mice treated with plant exosome-like nanovesicles (Ding et al., 2014).

In addition, in one study researchers evaluated the toxicity of milk-derived EVs, and the results indicated that no systemic toxicity or immunogenicity was observed. In the experiment, animals were given an intravenous injection containing milk-derived EVs. The results of the blood tests confirmed no damage from these EVs, and no markers of harm or toxicity to the kidneys or liver were found (András and Toborek, 2015). Another study conducted in vitro to evaluate the toxicity of EVs derived from mesenchymal cells and bovine milk indicated that there was no genotoxic response to either type of EV, but collagen-induced platelet aggregation was recorded in a dose-dependent manner. A further study evaluated the safety of HEK293T cell-derived EVs in vivo: mice were injected intravenously with HEK293T cell-derived EVs, and the results indicated no toxic effects, no immune changes, and no other EV-related alterations (Rohde et al., 2019).

In effect, plant- and mammal-derived nanovesicles present numerous advantages in terms of biocompatibility, stability, biodistribution, and cellular internalization. However, some challenges related to biosafety and toxicity may also be encountered because of unknown bioactive components of the source plants.

Plant-derived EVs in clinical trials

Based on the results of the effects of plant-derived nanovesicles, many clinical trials have been initiated to extend these findings (Table 7). In one clinical trial (NCT01668849), grape-derived nanovesicles were administered as an anti-inflammatory agent to diminish oral mucositis in head and neck cancer patients undergoing chemotherapy (Xie et al., 2020; Nemati et al., 2022); outcomes were assessed after 6-7 weeks of treatment. In addition, another clinical trial (NCT01294072), carried out in 2011, used turmeric-derived nanovesicles for more effective delivery of curcumin to the gut (Xie et al., 2020). The effect of turmeric-derived nanovesicles on malignant and normal colonic cells, and their effect on the immune system, was studied in colon cancer patients. This study, which is in the recruitment stage, was designed to investigate the effect of administering turmeric nanovesicles as oral tablets (Xie et al., 2020).

Mammalian-derived EVs in clinical trials

There are numerous clinical trials of mammalian-derived extracellular vesicles for therapeutic purposes; the application, dose, number of patients, identification number, and follow-up are indicated in Table 8. These studies are currently in clinical trials (listed at www.clinicaltrials.gov).
In one study, mesenchymal stem cell (MSC)-derived exosomes loaded with siRNA targeting KrasG12D are being applied in a phase I clinical trial for the treatment of pancreatic cancer patients carrying the KrasG12D mutation (ClinicalTrials.gov identifier: NCT03608631). Despite these encouraging results for the application of MSC-derived exosomes as drug vehicles for cancer treatment in the clinic, many challenges remain (Mendt et al., 2018). A combination of ascites-derived exosomes with granulocyte-macrophage colony-stimulating factor (GM-CSF) was tested in a phase I clinical trial for the treatment of advanced colorectal cancer and was found to be feasible and safe, as well as capable of eliciting more CTL infiltration in tumor regions (Dai et al., 2008). A study was performed at Sahel University Hospital, Cairo University, to evaluate the effect of consecutive doses of MSC-EVs in 20 patients with type 1 diabetes, with a follow-up of 3 months; the results are not yet available (Mendt et al., 2018). In the same hospital, another study recruited 20 patients with chronic renal failure who were administered two doses of umbilical cord MSC-EVs, with follow-up for 1 year; the results are already available (Nassar et al., 2016). Finally, a clinical trial involving the injection of MSC-EVs engineered with miR-124 for the treatment of patients after acute ischemic stroke was approved in Iran (Grange et al., 2019) (see Table 8).

New therapeutic approaches to extracellular vesicles

Therapies based on plant- and mammalian-derived extracellular vesicles (EVs) are attracting growing interest as a promising therapeutic approach in various medical fields for the treatment of diverse diseases. The benefits of these vesicles include their ability to cross biological barriers, enhance drug pharmacokinetics and therapeutic efficacy, and diminish the toxic side effects commonly associated with conventional synthetic nanovesicles. In addition, EVs can be chemically modified to incorporate additional ligands for targeted drug delivery. Beyond their ability to transport biomolecules, namely proteins, lipids, and nucleic acids, to other cells, they are a promising tool for clinical diagnostic and disease prognostic assessments. However, many technological, functional, and safety aspects still need to be taken into account (Li et al., 2023b; Sadeghi et al., 2023).

Targeted drug therapy: EVs can be modified to express specific surface ligands, giving them the ability to selectively target particular cells or tissues. This offers exciting opportunities for targeted drug delivery while minimizing adverse effects on healthy tissues. However, an obstacle to EV therapy that needs to be overcome to achieve a therapeutic result is the regulation of EV uptake. Many steps have been taken to improve the factors influencing EV uptake, including cell source selection, cell growth procedures, extraction and purification methods, storage, and routes of administration. The rapid clearance of EVs, achieving targeted delivery, and avoiding off-target uptake are current challenges that need to be addressed (Claridge et al., 2021; Esmaeili et al., 2022).
Gene therapy: Gene therapy can be applied to treat genetic diseases, control gene expression in particular cells, or trigger specific healing processes. There is an urgent need for drug delivery vectors capable of efficiently transferring therapeutic cargo to recipient cells while bypassing cellular barriers, a need highlighted by the development of new genotoxic anti-cancer therapies. Drug delivery methods have been proposed to overcome these restrictions, but their successful clinical application has been thwarted by the occurrence of unanticipated adverse effects and related toxicities (Duan et al., 2021; Jayasinghe et al., 2021).

Immunotherapy: The use of extracellular vesicles in cancer immunotherapy has made remarkable progress, becoming a real tool in the fight against cancer. Extracellular vesicles are seen as vectors for molecules capable of activating an immune response and destroying cancer cells. Based on this observation, a new approach is to use extracellular vesicles as a novel means against cancer. The immunotherapeutic approach based on extracellular vesicles has been demonstrated in the treatment of cancer patients, even in cases of advanced cancer. The use of extracellular vesicles in immunotherapy does not raise any particular qualitative or ethical difficulties. Progress in this field should therefore lead to practical results and a new, innovative approach to the fight against cancer (Giacobino et al., 2021; Marar et al., 2021).

Tissue regeneration: A promising approach in regenerative medicine is the therapeutic use of extracellular vesicles for tissue regeneration. Extracellular vesicles can contain growth factors and proteins essential to the tissue regeneration process. Growth factors can be selectively delivered to target cells via these vesicles, promoting their proliferation and differentiation. This approach is still at the development stage, so it will be some time before it becomes established as a clinical therapy for tissue regeneration. It could one day provide new therapeutic alternatives for the treatment of various diseases and conditions associated with tissue damage (Ju et al., 2022; Zheng et al., 2022).

Conclusion and prospects

In recent decades, EVs have been shown to mediate cell-to-cell communication in prokaryotes and eukaryotes through their ability to transfer active biomolecules such as proteins, lipids, nucleic acids, and other biologically active substances, and they play considerable roles in several physiological and pathological mechanisms. In recent years, extracellular vesicles have presented numerous benefits in terms of biocompatibility, therapeutic capacity, targeting ability, and cellular uptake. Their low toxicity and immunogenicity make extracellular vesicles an emerging, versatile, and promising biotherapy for a very wide variety of diseases.
In addition, research has indicated that EVs are essential for communication between plants, mammals, and pathogens and that they perform significant roles in a variety of pathologies. However, despite constant progress, complementary studies on extracellular vesicles are still needed in several respects, especially their cellular and molecular biogenesis, functions, and uptake, which are not yet well understood. In the context of isolation, we note that, particularly for plant-derived EVs, the results obtained in the various studies often vary according to the plant source, the isolation technique, and the physiological state of the plant, as does the content of the isolated nanoparticles, owing to the lack of a standardized isolation protocol. Improvements in EV isolation will also permit the preparation of high-purity EVs for biomarker discovery. As new markers of EV subcategories continue to emerge, the capacity to use high-resolution fluorescence microscopy should improve the understanding of EV subcategory biogenesis. In addition, there is a lack of information on plant-derived EV biogenesis, as there are few specific protein markers for these EVs, and the biological characterization of exosomes needs further investigation, as the surface markers and other characteristic elements of plant exosomes remain uncertain. Realizing the therapeutic potential of extracellular vesicles requires further clinical trials to obtain more precise results, as well as the development of effective isolation and scale-up methodologies and better knowledge of the stability and properties of both plant- and mammalian-derived extracellular vesicles. Although clinical trials on some extracellular vesicles are underway, the regulatory aspects of their use as therapeutic agents are not yet settled. Despite the difficulties and obstacles encountered by researchers in this field, it has been shown that extracellular vesicles offer natural therapeutic advantages without toxicity or side effects. Therefore, when targeted and developed with multidisciplinary expertise, EVs can be transformed into usable therapeutic agents to fight various pathologies.

The advantage of using nanovesicles is that the efficacy of natural products (from plants or mammals) in therapeutic applications, as indicated in these studies, can be improved by increasing their bioavailability. Nano-delivery systems can also be used to overcome the limits of therapeutic applications of natural products for several reasons: the capacity to target nanovesicles to specific organs improves selectivity, therapeutic applicability, efficacy, and safety. Nanovesicles passively target pathological sites of action without the addition of specific ligand fragments. Owing to these properties, the therapeutic use of nanovesicles can reduce side effects. Nanovesicles increase the solubility of natural compounds, and they can disperse rapidly in the blood, so they appear well suited to delivering small drugs. In addition, it is well established in these studies that nanovesicles applied in clinical trials are ideal candidates for the treatment of many diseases. The authors have indicated that several preclinical experiments have confirmed the advantages of nanovesicles for the treatment of numerous pathologies, ranging from regenerative medicine to cancers.
Ultimately, we expect that, with further research on all the aspects mentioned above, the coming years will probably see an increase in the use of extracellular vesicles, both for the diagnosis of widespread pathologies and as a starting point for the development of new therapies.

FIGURE 1 Schematic illustration of the formation of the different EV sub-types. Schematic representation of the release of the different types of EVs: directly by budding from the plasma membrane for microparticles (MPs), by fusion of internal multivesicular bodies (MVBs) with the plasma membrane for exosomes, and by budding from a cell in the process of apoptosis for apoptotic bodies. Adapted from Gurunathan et al. (2019).

FIGURE 5 (A) Isolation process of nanovesicles derived from an edible plant (fruit or vegetable), with sample preparation (mixing, pressing, or grinding) as the first step. The second step is the isolation and purification of the plant-derived nanovesicles (e.g., by differential ultracentrifugation and sucrose gradient ultracentrifugation). (B) The process of isolating EVs from plant leaves. The initial steps isolate the apoplastic wash fluid from detached plant leaves. The leaves are cut using scissors, deposited on a piece of transparent adhesive tape, rolled, and loaded into a syringe. The taped leaves are then placed in a 50 mL conical tube and the fluid is collected by centrifugation. Adapted from Li et al. (2023a).

TABLE 1 Representation of the release of the different types of mammalian-derived EVs.

TABLE 3 Summary of isolation methods of mammalian-derived extracellular vesicles.

TABLE 4 Summary of isolation methods of plant-derived extracellular vesicles.

TABLE 5 Therapeutic application of plant-derived nanovesicles.

TABLE 6 Therapeutic application of mammal-derived nanovesicles.

TABLE 7 Plant-derived EVs in clinical trials.

TABLE 8 Mammalian-derived EVs in clinical trials.
2023-09-15T15:17:19.841Z
2023-09-13T00:00:00.000
{ "year": 2023, "sha1": "f519bcb95062046915b313a338dbe462c9a204b3", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fbioe.2023.1215650/pdf?isPublishedV2=False", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9e4cc4da566cf464e0f6d0c83dfaf7b208d8fc47", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
56158510
pes2o/s2orc
v3-fos-license
Total Antioxidant Capacity (TAC) in Hypertensive Patients

Hypertension is an age-old ailment that has affected the entire population of the world with its many complications. This study investigated the relationship between hypertension and TAC in different age groups. Samples were collected at Bomadi General Hospital in Bomadi Local Government Area of Delta State, Nigeria, where the study was also conducted. A total of 40 consenting subjects between the ages of 45 and 65 years were randomly selected for the study, of which 20 were control subjects (non-hypertensive individuals). The age groups 40-49 years in hypertensive patients and 60-69 years in normotensive individuals showed statistical significance when compared to their corresponding age groups (0.9±0.25 mmol/L and 1.68±0.32 mmol/L, respectively). In hypertensive subjects, males showed a statistically insignificant (p≤0.05) higher mean value than females (1.14±0.18 mmol/L and 1.01±0.21 mmol/L, respectively), while in normotensive subjects, females showed a statistically significant (p≤0.05) lower mean value than males (1.66±0.35 mmol/L and 1.77±0.06 mmol/L, respectively). Total antioxidant capacity is lower in hypertensive patients, and as a result these individuals are predisposed to the ills associated with reduced TAC, including cell damage by free radicals.

INTRODUCTION

The term antioxidant was originally used to refer specifically to a chemical that prevented the consumption of oxygen; extensive study was devoted to the uses of antioxidants in important industrial processes, such as the prevention of metal corrosion, the vulcanization of rubber, and the polymerization of fuels in the fouling of internal combustion engines (Kelly, 1998). However, it was the identification of vitamins A, C and E as antioxidants that revolutionized the field and led to the realization of the importance of antioxidants in the biochemistry of living organisms. Although oxidation reactions are crucial for life, they can be damaging; hence plants and animals maintain complex systems of multiple types of antioxidants, such as glutathione, vitamin C, vitamin A and vitamin E, as well as enzymes such as catalase and superoxide dismutase (SOD) (Burton and Ingold, 1981). Antioxidants may be synthesized in the body or obtained from the diet. The different antioxidants are present at a wide range of concentrations in body fluids and tissues, with some, such as glutathione or ubiquinone, mostly present within cells, while others, such as uric acid, are more evenly distributed. Some antioxidants are only found in a few organisms, and those compounds can be important in pathogens, where they can act as virulence factors (Rice-Evans and Miller, 1994).

The action of one antioxidant may therefore depend on the proper function of other members of the antioxidant system.

Hypertension, or high blood pressure, is a chronic cardiovascular condition in which the systemic arterial blood pressure is elevated. It is the opposite of hypotension. Hypertension is classified as either primary (essential) or secondary. About 90-95% of cases are termed "primary hypertension", which refers to high blood pressure for which no medical cause can be found.
The remaining 5-10% of cases (secondary hypertension) are caused by other conditions that affect the kidneys, arteries, heart, or endocrine system. Persistent hypertension is one of the risk factors for stroke, myocardial infarction, heart failure and arterial aneurysm, and is a leading cause of chronic kidney failure; even moderate elevation of arterial blood pressure leads to shortened life expectancy. Dietary and lifestyle changes can improve blood pressure control and decrease the risk of associated health complications, although drug treatment may prove necessary in patients for whom lifestyle changes prove ineffective or insufficient (Carretero and Oparil, 2000).

Antioxidants protect the body against the destructive effects of free radicals. Antioxidants neutralize free radicals by donating one of their own electrons, thereby ending the electron-stealing chain reaction. Antioxidants do not themselves become free radicals by donating an electron because they are stable in either form.

Antioxidants help to maintain the concentration of free radicals at an optimum level, thereby preventing oxidative stress. In addition, antioxidants play a key role in these defense mechanisms (Sies, 1993).

Blood sample collection: 2 mL of blood was collected from each consenting subject into an EDTA anticoagulant container using the venipuncture technique. The whole blood was centrifuged at 1200 rpm for 5 min at room temperature (29°C-31°C) to separate the plasma, which was decanted into a bijou bottle for analysis. The total antioxidant capacity was estimated using the Trolox equivalent total antioxidant assay kit supplied by Cayman Chemicals, USA.

Statistics: The unpaired Student's t-test was used to analyze the data obtained, and the level of significance was set at p = 5% (α = 0.05).

DETERMINATION OF TOTAL ANTIOXIDANT CAPACITY (CAYMAN METHOD)

Principle of reaction: The Cayman antioxidant assay is used to measure the total antioxidant capacity of the serum. The oxidation of 2,2′-azino-di-(3-ethylbenzthiazoline sulphonate) (ABTS) to ABTS•+ by metmyoglobin is inhibited by the presence of antioxidants in the serum. The amount of oxidized ABTS•+ is measured at 750 nm and compared to Trolox, a hydrophilic tocopherol analogue, and is proportional to the concentration of total antioxidants (mM) present in the sample (Rice-Evans and Miller, 1994).

Procedure: Trolox standards were prepared in seven test tubes, A-G, as shown in Table 1, and sample serum was diluted with assay buffer at 1:20. 10 µL of each final Trolox standard concentration (A-G) was pipetted into correspondingly labeled wells A-G, and 10 µL of each serum sample was pipetted into wells labeled S1-S40. Then 10 µL of metmyoglobin, 150 µL of chromogen and 40 µL of hydrogen peroxide were added to each well (A-G and S1-S40). The plate was covered and mixed thoroughly by shaking for 5 min. The cover was removed and the absorbance of each mixture (A-G and S1-S40) was read at 750 nm using a plate reader and recorded for calculation.
Calculation: A standard curve was plotted using the absorbances of the Trolox standards A-G against their final concentrations from the table above. The y-intercept and the slope of the curve were determined and used to calculate the concentrations of the samples labeled S1-S40:

TAC (mmol/L) = (Absorbance of sample − y-intercept) / slope

RESULTS

The total antioxidant capacity of the subjects was measured and the results obtained are presented in Tables 2 and 3. Values are expressed as mean ± SD for "n" subjects.

DISCUSSION

Hypertension has recently been seen as a major health problem affecting developed, developing and under-developed countries alike (Krouf et al., 2003). It has been hypothesized that high blood pressure, which is the clinical manifestation of hypertension, is associated with a loss of balance between peroxidation by reactive oxygen species and various antioxidant factors (Krouf et al., 2003).

In the present study, age was shown to influence the total antioxidant capacity (TAC): TAC increased as age increased, but at a much older age TAC decreased (Table 2), although hypertensive subjects showed lower mean TAC values when compared with the normotensive subjects (p<0.05), which is statistically significant.

The results also showed that males had a higher mean TAC, with only the normotensive male subjects being significantly different from the female subjects. Overall, male and female hypertensive subjects showed a statistically significant lower mean TAC when compared to normotensive subjects, as seen in Table 3.

A brief explanation for the obtained results may be the reduction of overall body function and hormonal depletion as humans grow older (Guyton and Hall, 2006). This may account for the lower TAC at age 60-69 years in both normotensive and hypertensive patients. The increase in TAC at 50-59 years may be related to the increase in mental and other emotional demands as one grows older, leading to increased impulses and the generation of free radicals (reactive oxygen species), which contribute to hypertension (Evans, 2008; Knight, 1998). It has been established that males lead more active lives than females, and this may lead to increased muscular activity and lipid peroxidation, which will in turn lead to increased TAC production (Lenaz, 2001). From the foregoing, it has been established that free radical production stimulates the production of antioxidants (Nelson and Cox, 2005).

Hypertension results from excessive generation of free radicals (Subash et al., 2010), and this depletes the available antioxidants (Alsaif, 2009). This may therefore account for the statistically significant lower mean TAC observed in hypertensive subjects.

Table 1: Results of Trolox standard prepared in seven test tubes A-G

Table 2: Age influence on the TAC of subjects
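As an illustrative aside, the standard-curve arithmetic above and the group comparison from the Statistics section can be reproduced in a few lines of Python. All Trolox concentrations and absorbance readings below are hypothetical placeholders, not the study's data, and the formula is applied exactly as written above (any correction for the 1:20 sample dilution would be added per the kit instructions).

```python
import numpy as np
from scipy import stats

# Hypothetical Trolox standard curve: concentrations (mM) for tubes A-G and
# their absorbances at 750 nm. In this assay absorbance falls as antioxidant
# concentration rises, so the fitted slope is negative.
trolox_mM = np.array([0.000, 0.045, 0.090, 0.135, 0.180, 0.225, 0.330])
abs_std = np.array([1.10, 0.98, 0.86, 0.74, 0.62, 0.50, 0.24])

slope, intercept = np.polyfit(trolox_mM, abs_std, 1)  # least-squares line

def tac_mmol_per_l(sample_abs):
    """Paper's formula: TAC = (absorbance of sample - y-intercept) / slope."""
    return (np.asarray(sample_abs) - intercept) / slope

# Hypothetical sample absorbances for the two groups (subsets of S1-S40).
hypertensive = tac_mmol_per_l([0.84, 0.86, 0.82, 0.85, 0.83])
normotensive = tac_mmol_per_l([0.68, 0.66, 0.70, 0.67, 0.69])

# Unpaired Student's t-test at alpha = 0.05, as in the Statistics section.
t, p = stats.ttest_ind(hypertensive, normotensive)
print(f"hypertensive TAC: {hypertensive.mean():.2f} +/- {hypertensive.std(ddof=1):.2f} mmol/L")
print(f"normotensive TAC: {normotensive.mean():.2f} +/- {normotensive.std(ddof=1):.2f} mmol/L")
print(f"t = {t:.2f}, p = {p:.4f}, significant at alpha = 0.05: {p <= 0.05}")
```

Note that with a descending standard curve a higher sample absorbance yields a lower TAC, which is consistent with the lower values reported for the hypertensive group.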
2018-12-07T20:18:00.891Z
2013-01-01T00:00:00.000
{ "year": 2013, "sha1": "58fcb319103090ccd6a5930d8ab48f66da5b213a", "oa_license": "CCBY", "oa_url": "https://doi.org/10.19026/ajms.5.5355", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "58fcb319103090ccd6a5930d8ab48f66da5b213a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
152056604
pes2o/s2orc
v3-fos-license
INTENTION TO PURCHASE ALCOHOL BY ADULTS IN THE COUNTRY IN TRANSITION: THE EFFECTS OF HEALTH CONSCIOUSNESS, SELF-EFFICACY AND RELIGION IMPORTANCE

The major trend in modern societies is towards encouragement of the reduction of alcohol use; however, this is not always in line with the various contexts and occasions. Individual factors may present rather non-homogeneous groups that often exert totally opposite influences on the intention to purchase alcohol. This research aims to examine the phenomenon of adult intention to purchase alcohol in Lithuania, a country in transition, as influenced by an individual's health consciousness, self-efficacy and religion importance. The nature of these factors is very different; their essence may lie in a rather individualistic concern about personal health, or can be linked with rather distant, but strong, personal beliefs, priorities or lifestyles. Therefore, this research aimed to explore these effects. A total of 487 completed questionnaires were collected for the research. The findings reveal that health consciousness and religion importance have a significant influence on alcohol purchase intention among adults. However, self-efficacy proved to be of low influence.

Introduction

Eastern European countries such as the Czech Republic, Estonia, Hungary, Latvia, Lithuania, Poland and the Slovak Republic are cited as examples of emerging European economies. Although there are a few explanations of post-Soviet transition, Moskalewicz (2000) emphasizes two main features: the collapse of the planned economy model, and shifts in the political system eliminating single-party or single-leader domination and introducing democracy. The transition is followed by changes in the legal system and a significant transformation of existing institutions. Under such circumstances, citizens of countries in transition often face instability of living standards, future uncertainty, and increased rates of unemployment and poverty. Lithuania and the other states mentioned above are former transition countries of the Soviet bloc. During the very first transition years these states gradually moved from a planned economy to a market economy. Government interference and various restrictions were lowered in order to increase economic competitiveness in the global market. The privatization process proceeded rapidly: approximately 9 thousand Lithuanian companies changed ownership from public to private at the time of economic transition (Amdam et al., 2007). The collapse of the previous political and economic systems brought about new challenges and social issues in the countries in transition. Since Lithuania restored its independence in 1990, health problems, particularly those related to alcohol and tobacco use, have become of increased importance. A lack of legal regulation (Popova et al., 2007) and the privatization of the alcohol production sector had a significant effect on the production, supply and distribution of alcohol. As a result, alcohol consumption increased rapidly over the first years of Lithuania's independence, according to Statistics Lithuania. There have been constant attempts to reduce alcohol consumption in Lithuania by setting alcohol selling time regulations, restricting and banning alcohol advertisement, and changing taxation. However, according to Statistics Lithuania, despite all efforts, no essential changes in alcohol consumption were reported. On the contrary, the consumption of alcohol per capita steadily increased (Figure 1).
During the period from 1990 to 2010 the amount of alcohol consumed in Lithuania per capita doubled (Klumbiene et al., 2012). It should also be noted that the number of deaths resulting from alcohol abuse is considerably higher in Eastern European countries, such as Lithuania, Latvia and Poland, in comparison to the Western European region. While analysing the prevalence of alcohol poisoning in the former Soviet Union countries, Stickley et al. (2007) concluded that deaths due to harmful alcohol consumption had at times reached extremely high rates. The overall drinking trend, according to Statistics Lithuania, has been upward during the last sixteen years, from 7.8 litres per capita in 2000 to a preliminary 12 litres per capita in 2015. It is also notable that over the period of observation drinking became more popular among women and adolescents.

Existing insights into alcohol consumption behaviour rely on survey findings, which serve to inform statistics organizations, and often attempt to profile consumers along socio-demographic dimensions. Despite the attention given to the subject of alcohol purchase intention and related topics, current scientific knowledge does not provide a clear understanding of how consumers manage their personal characteristics, which ultimately impacts their purchase behaviour. The nature of individual factors is very different; their essence may lie in a rather individualistic concern about personal health, or can be linked with rather distant, but strong, personal beliefs, priorities or lifestyles. For instance, it has been observed that young adults' health intentions are guided by a dissociation from the drunk prototype (negative) and an association with the abstainer prototype (positive). Falling in between, i.e. being a moderate drinker and becoming more sociable, is also a preferred intention. The moderate-drinker prototype has in general been associated with positive attributes, like being spontaneous and sociable, while heavy drinkers are perceived as annoying, volatile and uncontrolled (Lettow et al., 2013). Individual factors may facilitate or impede the alcohol purchase intention, and different personal factors may present rather non-homogeneous groups that often exert totally opposite influences on the intention to purchase alcohol. This imposes a need for gathering more insights and reflections that might explain how different individual factors are related to adult alcohol purchase intention. Research that aims to analyze adult alcohol purchase intention in relation to health consciousness, self-efficacy and religion importance is scarce. This research investigates the consumer purchase decision as a phenomenon not related to cases of addiction to alcohol, a chronic habit, or alcoholism as a disease.

The paper is structured as follows. First, a brief review presents the theoretical framework and proposes the research hypotheses. Then the methodological approach is introduced and the data sample is discussed. Third, an empirical analysis of quantitative data from 487 adult respondents is performed. Finally, the research results, discussion and conclusions are presented. The intended contribution is twofold. First, the research seeks to provide evidence that consumers' individual determinants, namely health consciousness, self-efficacy and religion importance, may interact with behavioural intentions and serve as discouragement factors in alcohol purchase.
Second, whereas the vast majority of research concentrates only on adolescence and the stimuli to consume alcohol, the current research employs a different perspective by investigating the adult population. Consumer behaviour is prone to change over time, and adults might demonstrate different personal and social motives that cause stronger versus weaker intentions to use alcohol. Thus, the study sheds additional light on consumer behaviour related to alcohol.

Theoretical framework

A number of theories are involved in explaining alcohol purchase and consumption behaviour. According to Lee et al. (2011), alcohol consumption is described in terms of a "consistent process of acquisition, use and disposal". This research focuses only on the pre-phase of acquisition, limiting itself to the intention to acquire. However, the question remains: what are the factors that stimulate, and what are the motives that inhibit, alcohol purchase intention in a country in transition?

Consumer behaviour theorists have long argued that people use products as a form of self-expression, highlighting the relationship of one's identity to a particular behaviour. Behaviour pertinent to alcohol consumers is varied, but a prominent factor is the inclusion of personal considerations within the consumer's decision-making process. Aertsens et al. (2009) state that personal motivations are able to shape one's behaviour in a characteristic-congruent direction insofar as they are activated during the pre-decisional process.

While analyzing and evaluating different research perspectives, it is important to keep in mind that any type of behaviour, including alcohol purchasing, involves social, cultural, economic and traditional bonds. Some research focuses on a single factor, or a particular set of factors, that might lead to alcohol purchase. However, it is wrong to assume that there is just one single cause that comprehensively explains the intentions behind alcohol purchasing. There is hardly any single factor that would explain why an individual has a higher versus lower temptation to purchase alcohol. The possible impact of one determinant on another is palpable. It has been shown that health consciousness (an individual's overall interest in issues related to general health and health-related consumption) could be negatively associated with alcohol purchase (Bui et al., 2011; Walton & Roberts, 2004). Religion importance, as one's attachment to religion, is yet another determinant that is thought to lead to weaker intention to purchase alcohol, since the purchase and abuse of alcoholic beverages might contradict religious dogmas and be considered unacceptable behaviour within a particular religious community. Both religion importance and health consciousness are linked to certain personality traits. Results of longitudinal studies on adolescents' alcohol purchase behaviour have also shown that not all interventions designed to increase self-efficacy and to change addictive behaviours have led to the expected changes in target health behaviours or cognitions. However, it is expected that high self-efficacy could enable individuals to resist the intention of purchasing alcohol. The alcohol purchase phenomenon is multifaceted in nature, and the decision to purchase alcohol may vary.
Therefore, this research will focus on three specific determinants broadly classified as personal factors: self-efficacy (a conscious effort to say no to alcohol purchase), health consciousness, and the importance of religion, which, depending on the context, can be considered a personal belief. Previous studies of these three determinants have shown either a negative or a positive perceived influence on intention. For instance, self-efficacy and personal beliefs demonstrated a strong negative effect. These personal variables might interact to either further encourage or inhibit intention towards alcohol purchase. Such interactions and inhibitive influences among the determinants have not yet been largely explored.

Alcohol purchase intention and health consciousness

Lithuania's rapid transition from a planned to a market economy, the improved economic situation and the increased household income over the past decade have ensured that more people can afford not only necessities but are also able to spend more on products related to their health and to cultivate healthy lifestyles. Increased health consciousness could change consumers' views of products containing alcohol and might be a significant predictor of alcohol purchase intention. Mai & Hoffmann (2012) have discovered that health consciousness determines individual priorities over choice characteristics. The authors conclude that health-conscious individuals take their health into serious consideration when performing certain actions. The perception towards alcohol purchase may also affect health perceptions, i.e. the idea that alcohol is not healthy, which in turn would affect the intention to purchase alcohol. Such claims are supported by Bui et al. (2011, p. 186), who state that "health consciousness is an indicator of individual overall interest in issues related to general health". The authors claim that health consciousness is stimulated by the intent to protect oneself from harmful products and a wish for social acceptance. Health-conscious individuals are more attentive to their personal health, and there exists a postulate that health-conscious consumers make healthier choices. For instance, Michaelidou & Hassan (2008) state that health-conscious individuals are well aware of their health status and seek to maintain or improve their health. According to Hong (2009), health-conscious consumers seek to maintain or improve their health by undertaking particular actions (e.g. engaging in healthy life activities, consuming organic food, maintaining physical health through sports). Chen (2009) researched individuals' health consciousness in relation to attitudes towards organic food and found a positive relation; the author concludes that health-conscious individuals take their health into serious consideration. Dong (2010) investigated regular drinkers and found that health-conscious individuals purchased and used fewer alcoholic beverages. Gould (1988) hypothesized, and found support for the idea, that health-conscious individuals are better aware of health-related information and that health consciousness serves as a preventive mechanism. Conversely, Yoon et al. (2008) claim evidence that health-conscious consumers do not necessarily demonstrate healthier lifestyles and better overall health. A healthy lifestyle and alcohol drinking are hardly compatible. Therefore, in this research we hypothesize that:

Hypothesis 1.
There is a negative relationship between the consumer's health consciousness and alcohol purchase intention.

A theoretical analysis of the relationship between health consciousness and alcohol purchase intention has provided some valuable insights, suggesting that individuals concerned about their health and quality of life tend to use less alcohol (Lee & Thomas, 1997; Dong, 2010; Nichols et al., 2012). It has also been shown that health-conscious individuals tend to choose healthier, organic, green products (Yoon et al., 2008; Chen, 2009; Mai & Hoffmann, 2012). Therefore, it is hypothesized that health consciousness is one of the personal determinants when it comes to the decision to purchase alcohol.

Alcohol purchase intention and self-efficacy

According to Bandura (1995, p. 2), self-efficacy is "beliefs in one's capabilities to organize and execute the courses of action required to manage prospective situations". Bandura (2006) identifies self-efficacy as a foundation of human agency and probably a higher cognitive mechanism associated with behavioural choice. Self-efficacy is a phenomenon concerned with individuals' beliefs in their capabilities to achieve certain goals or execute particular performances. In contrast, a collectivist approach dominated Lithuanian society for a long time; that was one of the core ideas of socialism at the time of the Soviet Union. Individualism, the promotion of personal qualities and strong beliefs in one's capabilities were not appreciated. Over the years in transition these values were prone to change. Individuals seek to be the architects of their own lives. The ability to exert control provides security against undesired outcomes and enables the search for valuable, desired ones. Self-efficacy reflects the number of challenges an individual can overcome. Certainly, challenges may vary widely. It is also argued that the strength of self-efficacy may vary from person to person, being a personal rather than a uniform characteristic. Luszczynska et al. (2005) claim that self-efficacy is one's belief in the ability to cope with a wide range of challenges. Alcohol is referred to as a health-harmful product, and it is presumed that consumers are well aware of the harm alcohol might cause. Consequently, individuals who demonstrate high levels of self-efficacy are capable of resisting the temptation to acquire and use many harmful products. Self-efficacy is directly related to health behaviour, but it also affects health behaviours indirectly through its impact on goals. Self-efficacy influences the challenges that people take on as well as how high they set their goals (e.g. "I intend to reduce my smoking" or "I intend to quit smoking altogether"). Individuals with strong self-efficacy select more challenging and ambitious goals; they focus on opportunities, not on obstacles. Bui et al. (2011) claim that individuals with high levels of self-efficacy are capable of resisting hunger, thirst and particular products. The same authors concluded that self-efficacy has a significant impact on greater health consciousness and that both determinants play a significant role in fighting obesity. Kinard & Webster (2010) state that high self-efficacy individuals are able to resist engaging in behaviour that might be considered harmful to their health. Jang et al. (2013) add that self-efficacy is one of the essential variables for sustaining a healthy lifestyle. Self-efficacy could also manifest itself as risk-related behaviour, such as avoiding alcohol purchase.
Although a wide range of research has established the relation between one's self-efficacy and ambitions for better health, it is interesting to investigate whether self-efficacy could impact alcohol purchase intention. This research hypothesizes that self-efficacy is a significant negative predictor of alcohol purchase intention.

Hypothesis 2. There is a negative relationship between the consumer's self-efficacy and the alcohol purchase intention.

Alcohol purchase intention and religion importance

It has been widely discussed and demonstrated in the literature that alcohol use and religion are negatively associated (Kendler et al., 1997). Religious teachings generally promote a healthier lifestyle with respect to known risk factors and also classify alcohol or drug use as sins, since they could possibly harm the body, which is believed to be the temple of the Holy Ghost (Idler et al., 2013). Bjarnason et al. (2005) state that religion is a communal ritual that promotes a common understanding of the surrounding world. Desmond et al. (2011) stress the importance of distinguishing between different dimensions of religiosity, such as church attendance and overall religion importance. Lorencova (2011, p. 181) suggests the explanation that "religiosity is participation in collective ceremonies, beliefs and activities of organized traditional religions". The scholar emphasized that religion is an integral part of certain doctrines: Buddhism, Christianity, Islam, etc. According to Martin et al. (2003), there are studies, particularly in the field of psychology, that claim evidence of religion's influence on one's mental health. Several studies have concluded that individuals' religiosity and engagement in religious rituals lead to a reduction of health-risk-related behaviours such as smoking and consuming alcohol (Preston, 1969; Idler, 1987; Benda et al., 2006). For instance, Strawbridge et al. (1997) provide evidence that church attendees are non-smokers or tend to smoke less, consume less alcohol, enjoy an active social life, and are more likely to engage in sports and various physical activities. All the above cases are directly related to active participation in religious ceremonies and rituals together with a strong religious confession. Benda et al. (2006) have concluded that religiousness is significantly related to alcohol consumption, use of drugs and delinquency. Similarly, Preston (1969) concluded that non-drinkers tend to be more religion-oriented than drinkers. Research by Kolstad & Pedersen (2000) also concludes that individuals who abstain from the purchase and use of alcohol tend to be more religious. This research suggests that religion importance might have a significant impact on consumers' decisions. Over 70 per cent of the total population in Lithuania are Roman Catholics. Although the church's authority and influence diminished over the years of transition, religion importance might still play a significant role in shaping one's choices and preferences. This research hypothesizes that individuals prone to religion tend to resist the intention to purchase alcohol.

Hypothesis 3. There is a negative relationship between the consumer's emphasis on religion importance and the alcohol purchase intention.

Sample

There are certain scales that help with the preliminary identification of individuals having problems with the use of alcohol.
Since individuals who have alcohol problems are not the object of this research, 47 such individuals were excluded in accordance with the Sorocco & Ferrell (2006) CAGE survey results. CAGE is a four-item questionnaire and serves as a basic alcohol problem indication tool. The four items are: Have you ever felt you should cut down on your drinking? Have people annoyed you by criticizing your drinking? Have you ever felt bad or guilty about your drinking? Have you ever had a drink first thing in the morning to steady your nerves or to get rid of a hangover (an eye opener)? Individuals are instructed to answer yes or no to each of the questions. A positive answer scores 1, while a negative answer scores 0. If the total number of points scored equals 2 or more, it is an indication that a respondent might have potential alcohol problems.

The data (Table 1) for this research were collected on 2-16 December 2014 under a contract with the international market research, analysis and consulting company TNS. The questionnaire items were translated from English to Lithuanian by the author of this research. A pre-test of the survey was executed with a group of 12 colleagues to ensure that all statements were understandable. Further, a professional English-Lithuanian translator was used to perform a back translation so that it could be compared with the original. After these procedures were finished, the questionnaire was approved and forwarded to the analysis and consulting company. A total of 487 individuals responded to a self-reported omnibus-type survey. The research included the following socio-demographic determinants: gender, age, level of education and level of monthly income. Many authors note a tendency for respondents to underreport their drinking amounts and drinking patterns. However, this survey did not require any specification of the amounts, brands or types of alcoholic beverages used; therefore it is believed that respondents indicated their true intentions to purchase alcohol in the near future.

Independent variables

In order to measure the suggested phenomena, the following measurement scales were used. Consumers' health consciousness was measured using the 4-item scale by Gould (1988) (example item: I reflect about my health a lot). Consumers' religion importance was measured using the 6-item scale by Burroughs & Rindfleisch (2002) (example item: My religion is one of the most important parts of my philosophy of life). The respondents were asked to evaluate the given statements of these two phenomena (health consciousness and religion importance) using a 7-point scale, where 1 meant Absolutely disagree and 7 meant Absolutely agree. Consumers' self-efficacy was measured using the 10-item scale by Schwarzer & Jerusalem (1995) (example item: It is easy for me to stick to my aims and accomplish my goals). The respondents were asked to evaluate the given statements using a 5-point scale, where 1 meant Absolutely disagree and 5 meant Absolutely agree.

Dependent variable

The consumers' alcohol purchase intention was measured using the 3-item scale by Spijkerman et al. (2004) (example item: To what extent do you think you will drink weekly in the future?). The respondents were asked to evaluate the given statements using a 5-point scale, where 1 meant Not likely and 5 meant Very likely.

Results

To test the hypotheses, a structural equation model was estimated with LISREL 9.1. It produced a good fit (χ² = 494.1, df = 224, RMSEA = .050, CFI = .977, SRMR = .039).
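As an illustrative aside before the model-fit interpretation continues, the measurement-model statistics reported in these Results (Cronbach's alpha, composite reliability and average variance extracted) follow standard formulas that are easy to reproduce. The minimal Python sketch below uses made-up loadings and responses, not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    items: respondents x items matrix for one construct."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = loadings.sum()
    return s**2 / (s**2 + (1 - loadings**2).sum())

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: mean of the squared standardized loadings.
    Fornell-Larcker criterion: each construct's AVE should exceed its squared
    correlations with every other construct."""
    return (loadings**2).mean()

# Made-up standardized loadings for a four-item construct.
lam = np.array([0.78, 0.74, 0.80, 0.76])
print(f"CR  = {composite_reliability(lam):.2f}")  # 0.85 with these loadings
print(f"AVE = {ave(lam):.2f}")                    # 0.59 with these loadings

# Made-up Likert-style responses sharing a common factor.
rng = np.random.default_rng(0)
common = rng.normal(size=(200, 1))
items = common + rng.normal(scale=0.8, size=(200, 4))
print(f"alpha = {cronbach_alpha(items):.2f}")
```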
According to Vieira (2011), an RMSEA value < 0.05 indicates a good model fit. Further, in compliance with Bagozzi's (1981) recommendations, convergent validity was examined by inspecting the t-values of the Lambda-X matrix. All t-values exceeded the 2.00 level, ranging from 11.87 to 47.29. As far as reliability is concerned, all Cronbach's alphas ranged from .841 to .968 and are greater than the recommended 0.7 threshold.

[Figure 2. Structural model of alcohol purchase intention; standardized estimates shown (t-values in brackets).]

Composite reliabilities of the measurement models ranged from .84 to .91, while average variance extracted (AVE) values ranged from .50 to .64. All AVEs exceeded the squared correlation between each construct and all other constructs (Fornell & Larcker, 1981). The relevant standardized parameter estimates and associated t-values are shown in Figure 2. The findings support hypotheses 1 and 3: health consciousness is negatively related to alcohol purchase intention (β = -0.35, p < .01), and religion importance is also negatively related to alcohol purchase intention (β = -0.14, p < .01). However, no such effect is observed for self-efficacy. Surprisingly, and in contrast to the effects found for hypotheses 1 and 3, consumer self-efficacy reveals only a weak impact on alcohol purchase intention (β = 0.08, p < .10), and hypothesis 2 is rejected.

Discussion

The collapse of the previous political and economic systems brought about new challenges and social issues in Lithuania. Since the country restored its independence in 1990, health problems particularly related to alcohol and tobacco use have become increasingly important. Although efforts were made to reduce alcohol purchase and use, little success was achieved. This research therefore extends prior work by demonstrating how personal determinants such as health consciousness, self-efficacy and religion importance interact with one's intention to purchase alcohol.

Health consciousness has a strong negative effect on alcohol purchase intention

This research has revealed that adult consumers who are health conscious tend to demonstrate a lower level of alcohol purchase intention. The results are consistent with previous findings in the field. At first sight this finding could be claimed to be obvious, but not necessarily. There are scientific claims that moderate alcohol drinking contributes to a person's total well-being (both physical and mental). For instance, there are claims that moderate drinking of some alcoholic beverages, such as beer and spirits, is linked with lower risks of developing or suffering from coronary illnesses. However, our research suggests that health-conscious individuals do not intend to purchase and drink alcohol for its supposed health benefits. This may be linked to the fact that alcohol is seen as a health risk rather than a benefit. Health-conscious individuals are fully aware of and very much concerned about their health and quality of life; they actively engage in healthy behaviors and are self-conscious regarding health. This knowledge could be applied in the context of the alcohol user as a reminder of the consequences of the behavior. Another way of looking at it is through the concept of social norms and deviance. Social norms, or the prevailing accepted behaviors, depend on the context and on the point of view from which a person is observing.
The transition from college to adulthood is marked by the formation of a new identity, the establishment of more mature interpersonal and intimate relationships, and the move into new adult-type roles (White & Jackson, 2004). There is also a significant transition from less to more responsible social roles. This suggests that adult-like responsibilities become the new priorities and that adults conform to the new image expected at their age. Age also enhances awareness and consciousness of alcohol purchase and of the resulting health risks if alcohol is consumed. This finding suggests that, in order to prevent or discourage people from purchasing alcohol, it is possible to put emphasis on health education and the promotion of healthy lifestyles. It is not necessary to focus on the harm that alcohol may cause; instead, it is reasonable to stress behaviors that are beneficial for health.

Self-efficacy is a weak determinant of intention to purchase alcohol in the adult population

Another interesting observation in this research is the seemingly low significance of the influence that self-efficacy has on deciding whether to purchase alcohol. A variety of studies have shown that self-efficacy strongly and negatively predicted intention towards alcohol purchase. There were claims that self-efficacy could work against alcohol purchase and use, and it was noted that the phenomenon is directly related to healthy behaviour. However, this research provided no support for those previous findings. It could be argued that self-efficacy is related to an adult's maturity, since there are more important things to be done than purchasing alcohol and drinking. An adult who is a rational and responsible individual would think of their work and commitments first before engaging in alcohol-related matters. Another explanation could be an improperly selected measurement scale. Self-efficacy is a well-known phenomenon with a range of validated scales: general self-efficacy, perceived self-efficacy, and various modified self-efficacy scales for particular purposes. In this research, the general self-efficacy scale by Schwarzer & Jerusalem (1995) was used. Oei & Jardim (2007) suggested that the general self-efficacy instrument is a poor predictor of alcohol purchase intention and use in comparison with the more robust and specific drinking-refusal self-efficacy scale. Therefore, additional research using the specific scale could clarify the proposed influence of self-efficacy on purchase intention.

Religion importance has a strong negative effect on alcohol purchase intention

The measured influence of religion importance on alcohol purchase intention is consistent with previous research. According to several previous studies, an individual's religiosity and engagement in religious rituals lead to the reduction of behaviors that carry a high risk to one's health, such as smoking and consuming alcohol. The dominant religion in Lithuania is Christianity (77.2 per cent of residents indicated being Roman Catholic). Although wine is consumed in ritual ceremonies remembering the Last Supper, alcohol use leading to drunkenness is considered sinful and inappropriate for religious individuals. Therefore, alcohol purchase and abuse might be considered unacceptable behavior for Christians. Although the individuals surveyed do not necessarily align themselves with a strong religious faith, most of them take religion into consideration.
Religion importance transpires through personal perceptions and leads to weakened alcohol purchase intentions.

Theoretical and practical implications

Efforts have been made to reduce alcohol purchase, but with only little success. This research therefore aimed to expand upon prior research by demonstrating that the determinants of health consciousness, consumer self-efficacy and importance of religion interact with consumers' behavioural intention and can serve as measures to discourage alcohol purchase. Consumers are often described as rational decision-makers; however, they differ in how carefully they evaluate what to buy. Shaw et al. (2005) conclude that personal characteristics are an important area of academic interest contributing to a better understanding of consumer behaviour. Many factors may facilitate a consumer's behaviour. Factors that discourage alcohol purchase should be investigated and later used in the creation of policy aimed at reducing alcohol purchase. Certainly, consumers as potential alcohol buyers are different individuals and may have motives for purchasing alcohol other than the suggested determinants, which brings additional light to a better understanding of the factors at play in a country in transition. Adult consumers are often well aware of the harm alcohol causes and therefore, as expected, health-conscious consumers demonstrate a significantly lower intention to purchase alcohol. This finding suggests that, in order to prevent or discourage the use of alcohol, it is possible to put an emphasis on health education and the promotion of healthy lifestyles. It is therefore recommended to stress behaviours that are beneficial for health, rather than to focus on the harm that alcohol may cause. There were claims that self-efficacy could work against alcohol purchase, and it was noted that the phenomenon is directly related to health-related behaviour. Kinard & Webster (2010), Jang et al. (2013) and others have found direct evidence that self-efficacy is one of the essential variables for avoiding harmful consumption practices. However, this research found only a weak relationship between the variables among Lithuanian adults. There may be at least two explanations for this: an error in research methodology when selecting the measurement scales, or specific characteristics of the researched group. Self-efficacy is a well-known phenomenon with a range of validated scales: general self-efficacy, perceived self-efficacy, and various modified self-efficacy scales for particular purposes. In this research, the general self-efficacy scale was used, and it was not specifically modified to investigate alcohol issues. Therefore, for the sake of scientific certainty, another study could be conducted to design and use an alcohol-specific scale or a modified self-efficacy scale. Another explanation could be that current adults demonstrate specific characteristics formed under the previous social and economic system at the time of the Soviet Union, which in turn are related to differences in personal values and social identity. Although the transition in economic and social life took place, personal values and behavior did not adapt as quickly, and the Lithuanian adult population demonstrates low rates of self-efficacy.

Limitations and future research

The research has several limitations. Firstly, survey data collection could have been conducted multiple times over a set period.
This would have allowed the strengthening of the results and conclusions obtained and the assessment of any possible inconsistencies. Furthermore, the age of the respondents covers a wide range, from 18 to 72 years. Separating respondents into two or three age groups would probably yield slightly different results between the groups. The research was performed in a selected country in transition, Lithuania, which possibly limits the generalizability of the findings. Given that only a limited amount of research exists in the field of adult alcohol purchase and consumption, this investigation provides an important insight and contribution to the scientific literature concerning the explanation of alcohol purchase intention. A relatively large data source, pre-screened to filter out alcohol-dependent respondents and with an almost equal representation of men and women, also allowed for a less biased survey. However, despite efforts to lessen biases, this study still has a number of caveats and limitations that need to be recognized. Here, these limitations are categorized as technical. One technical limitation is that the sample population consists only of employed adult respondents. Although this sample can give insight into the dynamics of adults' alcohol-related behavior, it is not representative of the total national population, thus limiting the generalizability of the research. In addition, the study is limited to the employed population without dependence or alcohol-related problems. Moreover, since the survey was of the self-reported omnibus type, there are no means to detect possible personal biases (i.e. the conscious effort to understate or even exaggerate answers related to social stigmas), which could have implications for the interpretation of the results and conclusions of the research. Another technical limitation of the study was its use of and reliance on self-reported data. Self-reporting has been heavily criticized in the past and has been associated with a number of inaccuracies, despite the recognition that such self-assessments are the core of survey studies. Although one advantage of self-reporting is anonymity and less pressure compared with face-to-face interviews, respondents are still prone to self-enhancement and self-presentation (Paulhus & Vazire, n.d.). In such research, the assumption of the credibility of the information rests on the honesty of the respondent. A large sample coupled with sound analytical approaches may, however, lessen these biases. The 487 respondents in this study may not necessarily represent a larger part of the population, but the insights gained can provide foundations for future research. Another important caveat in this study is the use of determinants that belong to the broad area of social cognition, which has been shown to admit different explanations for certain constructs. In examining these determinants, it should be carefully considered whether what is being measured is a misperception. It can also be noted that, while the sample population surveyed in this study belonged to a certain economic group (i.e. the employed), this variable was neither explored nor included as one of the possible determinants.

Conclusions

This research explored the phenomenon of adult alcohol purchase intention in relation to health consciousness, consumers' self-efficacy and the importance of religion in one's life.
Most studies that have attempted to understand alcohol purchase and drinking were conducted on younger populations. This research, however, used data from consumers across a wide range of ages, bringing additional light to the complexity of consumer behavior related to alcohol. The phenomenon of alcohol purchase intention was analyzed from a consumer perspective, investigating how the above-mentioned factors interact to either encourage or discourage intentions to purchase. The research can be summarized with the following conclusions. This research has revealed that adult consumers who are health conscious tend to demonstrate a lower level of alcohol purchase intention. At first sight this finding could be claimed to be obvious, but not necessarily. There are research-based claims that moderate alcohol drinking contributes to a person's total well-being (both physical and mental). For instance, epidemiological studies have shown that individuals with a habit of daily moderate wine consumption had lower cardiovascular mortality compared with those who abstain from it altogether (German & Walzem, 2000). A glass of red wine is suggested to limit the initiation and progression of atherosclerosis (Szmitko et al., 2005). There are also claims that moderate drinking of other alcoholic beverages such as beer and spirits is linked to lower risks of developing or suffering from coronary illnesses (Rimm et al., 1996). However, our research suggests that health-conscious individuals do not intend to purchase and drink alcohol to benefit their health. Another interesting observation in this study is the seemingly low influence that self-efficacy has on deciding whether or not to purchase alcohol. There were claims that self-efficacy could work against alcohol purchase, and it was noted that the phenomenon is directly related to health behaviour. However, no significant evidence was found to claim the importance of self-efficacy to alcohol purchase intention among Lithuanian adults. The self-efficacy measured in this study could have been inhibited or overpowered by other variables more pronounced in the context of alcohol purchase intention, such as health consciousness. Finally, it was revealed that religion importance serves as a significant stimulus for adults' resistance to alcohol purchase. Historically, alcoholic drinks have played a number of roles in religion, and most religions condemn or restrict alcohol use in their doctrines. In Christian societies religion serves as a measure that discourages alcohol purchase. These findings are similar to several studies that have shown a negative correlation between religiosity and alcohol drinking (Francis et al., 2005). Taking an aging society into account, ongoing changes in demographics require increased attention to the alcohol purchase behavior of adults. Although the research was restricted to the investigation of only moderate alcohol consumers, it still provided interesting insights into how individuals are influenced by their personal factors and which of those demonstrated an effect on a consumer's alcohol purchase intention. The research not only yields insights into the psychology and social understanding of alcohol purchase behavior but also provides opportunities for institutions and concerned individuals to engage the right agencies to prevent, control or mitigate the use of alcohol.
Enhancing Autoignition Characteristics: A Framework to Discover Fuel Additives and Making Predictions Using Machine Learning

The combustion process can become more energy efficient and environmentally friendly when used with an appropriate fuel additive. The discovery of fuel additives can be accelerated by a hybrid approach combining chemical kinetics and machine learning (ML). In this work, we present a framework that combines the robustness of machine learning with the accuracy of chemical kinetics to predict the effect of a fuel additive on the autoignition process. We present a case of making predictions for the ignition delay time (IDT) of the biofuel n-butanol ($C_4H_9OH$) with several fuel additives. The proposed framework was able to predict the IDT of autoignition with high accuracy when used with unseen additives. This framework highlights the potential of ML to exploit chemical mechanisms in exploring and developing fuel additives to obtain desirable autoignition characteristics.

Introduction

In the wake of a fast-changing global climate scenario, much emphasis has been laid on reducing hazardous emissions and using renewable energy solutions. Especially during the last few years, climate change has become a global challenge, and many regions have already started experiencing its impact. Fortunately, during the same time period, ML algorithms have come of age and are helping researchers tackle a variety of challenges. For climate change as well, ML is proving to be very helpful in suggesting ways to reduce emissions Rolnick et al. [2019]. For conventional fuels, which have been found to contribute significantly to increased emissions, ML has high potential to find ways to reduce emissions. For example, Li et al. used an ML approach to explore organic waste to find renewable energy sources equivalent to fossil fuels Li et al. [2020]. Badra et al. used a combined approach of computational fluid dynamics and ML to optimize the combustion process Badra et al. [2020]. Despite all such applications of ML to minimize the effects of climate change and make the combustion process more environmentally friendly, not much attention has been devoted to finding fuel additives that can provide desirable emissions-related characteristics of burning fuels. In this work, we present a framework that uses an ML algorithm and chemical kinetics to discover fuel additives. First we present the methodology that was employed to obtain data and train the model, and then we present results for fuel additives obtained using the framework.

Methodology

As an application of the framework, we present a case of finding the IDT for autoignition of n-butanol. The approach of exploring new additives using this framework consists of three main steps. The first step consists of obtaining IDT results for n-butanol using an experimentally validated chemical kinetics mechanism Black et al. [2010], which consists of 6 elements (C, H, N, O, Ar and He), 243 species and 2892 unidirectional reactions. Adiabatic autoignition of butanol at constant volume was considered for the simulations. Apart from n-butanol, a total of 50 stable species were found in the mechanism, which were used as additives in volumetric ratios of 0.0 (pure n-butanol), 0.01, 0.1, 0.2, 0.4, 0.6 and 0.8. IDTs were obtained by running separate simulations for all these additives. For each additive, the above-mentioned volumetric ratios were considered in combination with different input conditions of temperature, pressure and stoichiometric ratio; a minimal sketch of one such simulation is given below.
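The sketch below shows a hedged version of a single constant-volume, adiabatic autoignition calculation in Cantera, with the IDT taken at the point of maximum temperature gradient. Note the assumptions: the study used the Black et al. (2010) n-butanol mechanism, which is not bundled with Cantera, so the bundled GRI-3.0 methane mechanism and a methane/ethane blend are substituted purely for illustration; the conditions and blend ratio are placeholders.

```python
import cantera as ct
import numpy as np

# GRI-3.0 stands in for the Black et al. (2010) n-butanol mechanism,
# which would be loaded here if available as a Cantera input file.
gas = ct.Solution('gri30.yaml')
gas.TP = 1400.0, 20.0 * ct.one_atm
gas.set_equivalence_ratio(1.0, {'CH4': 0.9, 'C2H6': 0.1},
                          {'O2': 1.0, 'N2': 3.76})

reactor = ct.IdealGasReactor(gas)      # adiabatic, constant volume
net = ct.ReactorNet([reactor])

t, T = [0.0], [reactor.T]
while net.time < 0.05:                 # integrate up to an ignition cutoff
    net.step()
    t.append(net.time)
    T.append(reactor.T)

# IDT defined by the maximum rate of temperature rise
idt = t[int(np.argmax(np.gradient(np.array(T), np.array(t))))]
print(f'IDT = {idt:.3e} s')
```

Looping such a calculation over additives, blend ratios, temperatures, pressures, and equivalence ratios would generate a dataset of the kind described in the text.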
Details of these input parameters are given in Table 1. It should be noted that not all of these combinations of initial conditions lead to ignition; simulations that did not result in ignition were therefore omitted from the data, so that the total number of IDT data points obtained in this work amounts to 11,732. The second step is to compile a set of 46 features describing each additive; these span several categories, including thermodynamic properties (coefficients of the polynomials used to represent thermodynamic data in the NASA chemical equilibrium code Gordon and McBride [1994], etc.). Figure 1 shows the additives plotted using multi-dimensional scaling (MDS) of all features, such that similar additives cluster together. For example, butane and iso-butane have many features in common and hence lie in close proximity. Figure 2 shows the distribution of IDT obtained in step 1 plotted against features obtained in step 2. Although IDT is plotted against only six features in this figure, it can be seen that the features of the fuel additives relate to IDT with clear patterns.

[Figure 2: Scatter plots of IDT against selected features of the fuel additives.]

The third step is to exploit such patterns between additive features and IDT using ML. A deep neural network (DNN) was employed in this work to fit the IDT to the additive features and initial conditions. First, a DNN model was generated with the full data of all 50 additives, such that 80% of the data was used to train the model while 20% was used for testing. This model was tested on its ability to predict the IDT under the different initial conditions of the additives for which the model was trained. Once it was established that the model can predict the IDT for additives that were part of the DNN training, the study was extended to predict the IDT for additives that were not included in the training, so that the capability of this framework to predict the effect of new additives could be assessed. To achieve this objective, another DNN model was trained with only 48 additives, and the results were tested on two additives that were not part of the newly trained model. In the next sections, first the results of the model trained on 50 species are presented, followed by the results of the DNN model trained on 48 additives to predict IDT for two unseen additives. Here, "unseen" refers to additives that were not part of the DNN trained on 48 additives.

Results

This section is divided into two parts. In the first part, results of the multi-layer DNN are presented, where the model was trained and tested on 50 additives. The second part relates to the DNN model that was trained on 48 additives and tested on two unseen additives. Figure 3 shows the comparison between the true values of IDT, obtained from the autoignition simulations, and the IDT obtained from the DNN model trained on 50 species. This figure shows the predictions against randomly selected test data points, which constitute 20% of all the available data for the 50 species. The overall R² score for the test data was 0.99. It can be seen that most of the IDT values are below 0.10 s, and fewer than 10 values, which have relatively high error, lie above 0.10 s. This distribution of error can also be seen in Figure 4. The distribution is reasonable because most of the data used for training has IDTs below 0.10 s, which leads to high accuracy in the region below 0.10 s.

Evaluations for unseen additives

To test the framework on unseen additives, the data points for two additives were completely omitted from the training data, and a new DNN was trained on the remaining 48 additives.
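As a sketch of the hold-out protocol just described, the snippet below trains a feedforward regressor on all additives except two and scores it on the held-out ones. It is an assumption-laden illustration: scikit-learn's MLPRegressor stands in for the paper's DNN, whose architecture is not specified here, and the feature matrix, log-scaled IDT targets, and additive labels are random placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score

# Placeholder dataset: 46 additive features + 3 initial conditions per row.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 49))
y = rng.normal(size=2000)                       # stand-in for log10(IDT)
labels = rng.choice(["C2H6", "C2H3COCH3", "CH4", "C4H8"], size=2000)

train = ~np.isin(labels, ["C2H6", "C2H3COCH3"])  # hold out two additives
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(128, 128, 64),
                                   max_iter=2000, random_state=0))
model.fit(X[train], y[train])
print("R2 on unseen additives:",
      r2_score(y[~train], model.predict(X[~train])))
```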
The two unseen additives were ethane (C2H6) and methyl vinyl ketone (C2H3COCH3). It can be seen from Figure 1 that C2H3COCH3 has features similar to those of several neighboring fuel additives, such as ethyl ketene and 2-butenal. Thus, although C2H3COCH3 is not included in the training data, the training data contain fuel additives with similar features. Unlike C2H3COCH3, C2H6 does not lie in close proximity to other fuel additives, indicating that the training data do not contain fuel additives with features as similar to C2H6 as they are to C2H3COCH3. This selection of additives makes it possible to assess the capability of the model to predict IDT for unseen additives irrespective of their similarity in features to the training data. Figure 5 shows the IDT predictions against true values for C2H6 and C2H3COCH3. It can be seen that the predicted values are close to the true IDT values. As in Figure 3, most of the data points lie below 0.004 s. Although the points are more dispersed than for the DNN trained on 50 species, the R² score is still 0.97, which indicates high accuracy. The model trained on 48 additives was thus able to successfully predict IDT for the unseen additives.

Conclusions

In this work, a framework to predict autoignition characteristics, for both seen and unseen additives, is presented. The framework combines the accuracy of an experimentally validated chemical mechanism with the robustness of ML to predict autoignition characteristics. An example using the renewable fuel n-butanol is presented to predict IDT for trained and untrained fuel additives. It was shown that the framework was able to capture the chemical kinetics to predict IDT for the additives included in the chemical mechanism. Moreover, the framework was also able to predict IDT for species that were not part of the trained DNN model. As shown by the case of unseen additives, this work also highlights the applicability of this framework to studying fuel additives that are not part of the chemical kinetic mechanism, thus opening a whole new domain for exploring and discovering new fuel additives to achieve desired autoignition characteristics. In summary, the framework can be used to:

• Study the effect of new fuel additives, irrespective of their presence in the chemical kinetics mechanism.
• Predict the effect of new additives on emissions such as NOx, CO, CO2, CH2O, etc.
• Predict the maximum heat generation during the autoignition process.
• Predict the adiabatic flame temperature during the autoignition process.
• Predict the flame type with a new additive.
Comprehensive detection of analytes in large chromatographic datasets by coupling factor analysis with a decision tree

Abstract. Environmental samples typically contain hundreds or thousands of unique organic compounds, and even minor components may provide valuable insight into their sources and transformations. To understand atmospheric processes, individual components are frequently identified and quantified using gas chromatography-mass spectrometry. However, due to the complexity and frequently variable nature of such data, data reduction is a significant bottleneck in analysis. Consequently, only a subset of known analytes is often reported for a dataset, and large amounts of potentially useful data are discarded. We present an automated approach of cataloging and potentially identifying all analytes in a large chromatographic dataset and demonstrate the utility of our approach in an analysis of ambient aerosols. We use a coupled factor analysis-decision tree approach to deconvolute peaks and comprehensively catalog nearly all analytes in a dataset. Positive matrix factorization (PMF) of small subsections of multiple chromatograms is applied to extract factors that represent chromatographic profiles and mass spectra of potential analytes, in which peaks are detected. A decision tree based on peak parameters (e.g., location, width, and height), relative ratios of those parameters, peak shape, noise, retention time, and mass spectrum is applied to discard erroneous peaks and combine peaks determined to represent the same analyte.
With our approach, all analytes within the small section of the chromatogram are cataloged, and the process is repeated for overlapping sections across the chromatogram, generating a complete list of the retention times and estimated mass spectra of all peaks in a dataset. We validate this approach using samples of known compounds and demonstrate the separation of poorly resolved peaks with similar mass spectra and the resolution of peaks that appear in only a fraction of chromatograms. As a case study, this method is applied to a complex real-world dataset of the composition of atmospheric particles, in which more than 1100 unique chromatographic peaks are resolved, and the corresponding peak information along with mass spectra are cataloged.

Introduction

Atmospheric samples are highly complex and often contain multiple thousands of compounds (Goldstein and Galbally, 2007) with a potentially wide range of physicochemical properties and multiple isomers. Valuable information relating to the sources and chemistry of atmospheric components can be extracted from these compounds; however, the complexity of the samples requires analytical techniques to effectively separate those compounds. Gas chromatography (GC), when combined with mass spectrometry (MS) as a detection method, is one of the most widely used analytical methods in chemical analysis due to high sensitivity, low limits of detection, and high chemical resolution (Hübschmann, 2015). Though used frequently for analysis of atmospheric samples, the complex nature of atmospheric data yields substantial challenges, in particular co-elution of many chromatographic peaks. In some cases, co-elution can be so complex that resolution and integration of individual components cannot be readily achieved, and data are treated as an "unresolved complex mixture" (Zhang et al., 2014). The resolution of GC can be expanded by coupling multiple columns in series, and comprehensive two-dimensional gas chromatography (GC × GC) can provide greater sensitivity and resolution of complex mixtures (Bertsch, 1999; Phillips and Beens, 1999). This technique has yielded valuable insights into atmospheric composition (Hamilton, 2010), but the increased complexity of the instrumentation and more stringent requirements for the mass spectrometer (e.g., time resolution faster than ∼50 Hz; Worton et al., 2012) have limited adoption of GC × GC. Furthermore, despite the higher resolving power, co-elution of peaks still occurs when highly complex samples are analyzed (Potgieter et al., 2016), and challenges remain in the data analysis. It is consequently common for analyses of environmental data to focus on the resolution and quantification of only a subset of specific analytes of interest and to leave a large fraction of data unprocessed and unused. "Traditional" processing of chromatographic data has relied on manual inspection of data to locate analytes of interest, followed by integration of peaks using an algorithmically determined baseline. While software to perform these analyses is readily available, this approach may require substantial user interaction, which can be time- and resource-intensive. Furthermore, the algorithms implemented in these software programs show limited capability in handling separation of co-eluted peaks, which leads to suboptimal utilization of data (Johnsen et al., 2013) and makes it difficult to extract clean mass spectra of analytes for accurate identification.
These challenges often result in discarding potentially valuable information, particularly in large datasets that may contain hundreds or thousands of chromatograms that need to be processed. As fast chromatography has improved and field-deployable gas chromatography has advanced in fields like atmospheric chemistry (Zhao et al., 2013; Apel et al., 2003; Goldan et al., 2004; Hornbrook et al., 2011), the size and complexity of chromatographic datasets make manual processing approaches unfeasible. Field instruments are also more impacted by shifts in operating conditions that may affect data reproducibility and peak co-elution due to nonideal laboratory conditions (e.g., temperature fluctuations). Efforts to tackle the analytical challenge of integrating complex environmental datasets have focused on improved peak integration methods that use idealized mathematical peak shapes and defined mass spectra to resolve and integrate even poorly resolved chromatographic peaks (Blaško et al., 2009; Di Marco and Bombi, 2001; Isaacman-VanWertz et al., 2017; Jeansonne and Foley, 1991; Mydlová-Memersheimerová et al., 2009; Naish and Hartwell, 1988). However, these methods still require manual inspection of the data to identify and catalog peaks of interest. To facilitate peak identification in complex samples, matrix decomposition methods have been proposed to resolve complex co-eluting peaks. In a relatively simple form, the covariance of ions with chromatographic time can resolve representative mass spectra that rise and fall together as a chromatographic peak. This approach has, for instance, been implemented as the Automated Mass Spectral Deconvolution & Identification System (AMDIS) (Zhang et al., 2006) and is a useful tool for the identification of analytes within a single chromatogram (Meyer et al., 2010). However, large chromatographic datasets present an opportunity to include an additional dimension of resolution, as not all chromatograms necessarily contain all the same analytes. As an example, a peak that is unresolved from a neighboring peak in one chromatogram may not exist in another sample, which would allow identification of the neighbor; the neighbor could then be integrated more accurately in the first chromatogram now that its spectrum and retention time (and potentially peak shape) are defined. Several multidimensional covariance or factorization approaches have been developed to identify peaks across multiple chromatograms. One such approach is PARAFAC, a generalization of bilinear principal component analysis (PCA) to high-order arrays (Hubert et al., 2012), in which a data array consisting of multiple chromatograms is decomposed into loadings and scores representing chromatographic profiles and mass spectra that can be more efficiently integrated. With a relatively high signal-to-noise ratio and the proper number of components, a unique solution that consists of the true mass spectra of analytes can be found (Skov and Bro, 2008). PARAFAC2 was further developed to perform a similar decomposition but with more robust handling of potential retention time shifts, as it does not require all samples to have nearly identical time profiles, as PARAFAC does (Zhang et al., 2014). Similarly, positive matrix factorization (PMF), a matrix decomposition method based on a weighted least squares fit (Paatero and Tapper, 1994), has been applied to deconvolve chromatographic data (Zhang et al., 2014).
Unlike PCA, the resulting matrices of PMF, the scores and loadings, are constrained to be non-negative, which reflects the characteristics of environmental data more accurately (Paatero, 1997). Another major difference is that, in contrast to PCA, the factors obtained by PMF are not constrained to be orthogonal, are determined independently, and do not form a hierarchy in which each successive factor captures less variance. These advantages make PMF well-suited to describing environmental data, and PMF has become a preferred matrix decomposition technique, particularly in the field of atmospheric chemistry (Ulbrich et al., 2009). Prior applications to chromatographic data have instead focused on either the resolution and integration of major components (Amigo et al., 2010; Hoggard and Synovec, 2007) or the extraction of average mass spectra and chromatographic profiles that provide binned information on broad classes of compounds (Zhang et al., 2014). However, this approach has some limitations, as minor components may provide unique information regarding the sources or chemical transformations of a sample. This study presents an automated approach of cataloging and potentially identifying all analytes in a complex chromatographic dataset. By coupling PMF, an established factor analysis technique, with a decision tree, the approach described here deconvolutes complex chromatograms into a list of all or nearly all unique analytes and their associated mass spectra. This technique complements the new generation of tools described above developed to improve the efficiency and accuracy of integrating a list of known chromatographic peaks.

Methods

Our approach to cataloging all analytes in a set of chromatograms consists of two major processes. The first process is PMF analysis, and the second is a decision tree used to filter, sort, and catalog PMF outputs into a list of analytes. We first provide an overview of the PMF algorithm before describing the overall cataloging approach. In this work, the term "analyte" is used to refer to a chromatographic peak (e.g., a chromatographic "feature") with a unique mass spectrum and retention time, whether it has a known definitive identification or not, following the usage of this term in other studies (Amigo et al., 2010; Grace et al., 2019; Isaacman-VanWertz et al., 2017, 2021; Li et al., 2022).

Positive matrix factorization

Positive matrix factorization is a bilinear model that approximates an observed data matrix, X, by finding the weighted least squares solutions for a set of factors that can describe the dataset (Paatero and Tapper, 1994). In the case of chromatographic data, the data form a matrix in which rows represent the average mass spectra of each averaging time period (typically 1-5 Hz) and columns are the time series of each mass spectral mass-to-charge ratio (m/z). The model is represented as

X = GF + E,

where X is the data matrix, E is the residual matrix, and G and F are the score and loading matrices, respectively. The chromatographic data matrix is thus described by a set of factors, each of which has an average mass spectrum (the loading) and a time-dependent score that represents the chromatographic profile. The elements of both the G and F matrices are constrained to be non-negative and are therefore expected to more accurately represent real-world data than PCA or other PCA-based matrix decomposition methods.
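As a hedged illustration of this bilinear decomposition, the snippet below uses scikit-learn's non-negative matrix factorization (NMF) as a stand-in for PMF: it enforces the same non-negativity on G and F but, unlike the PMF2 algorithm used in this work, minimizes an unweighted rather than uncertainty-weighted least squares objective. The matrix dimensions and data are placeholders.

```python
import numpy as np
from sklearn.decomposition import NMF

# Placeholder slice matrix X: rows are mass-spectral scans, columns are
# m/z channels (in practice, scans concatenated from many chromatograms).
rng = np.random.default_rng(1)
X = rng.random((600, 250))

model = NMF(n_components=30, init="nndsvda", max_iter=500)
G = model.fit_transform(X)   # scores: factor chromatographic profiles
F = model.components_        # loadings: factor mass spectra
E = X - G @ F                # residual matrix
print("fraction of signal unexplained:", np.abs(E).sum() / X.sum())
```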
The number of factors is prescribed a priori by the user, which represents a major source of uncertainty and subjective interpretation in typical PMF applications. In contrast to other PMF applications, the primary goal in this work is not to optimally describe the complete dataset, but rather to increase the number of factors to a point at which even minor components are extracted as separate factors, even at the risk of overfitting the data (which will be rectified by a subsequent decision tree). With this approach, any existing analytes with a significant level of signal should be identified as separate analytes, regardless of whether such an analyte is a compound present in the sample or a contaminant. Assigning too many factors in the model (e.g., more than the number of compounds in the sample) can cause "factor splitting", a phenomenon in which a factor that might carry real-world meaning or interpretation is divided into multiple factors that cannot be readily interpreted (Hoggard and Synovec, 2007). In the context of this work, splitting would result in the separation of an analyte into multiple different chromatographic peaks that all represent the same analyte. We address this case using the decision tree presented below and therefore do not rely on existing metrics for evaluating the optimality of the factor solution (e.g., the ratio of error in the solution to expected error, Q/Q_exp). In this study the PMF Evaluation Tool (PET, version 3.04; Ulbrich et al., 2009) software package in Igor Pro 8 (WaveMetrics, Inc.) was used to run the PMF2 algorithm (Paatero and Hopke, 2009) on the dataset and obtain PMF outputs. Chromatograms were analyzed using the freely available TERN software package within the same programming environment.

Batch process of positive matrix factorization

An overview of the complete process is shown in Fig. 1. Multiple chromatograms representing time-varying mass spectra are stacked to yield a three-dimensional data array (I × J × K), where I and J constitute a chromatogram (elution profile × mass spectra) and K is the number of samples (Amigo et al., 2010). Each chromatogram is first aligned to the same retention time basis by using a small number of known compounds or introduced standards in each sample to define known retention times. Strictly speaking, this preprocessing is not necessary for factor analysis. However, interpretation of the outcome of data reduction techniques such as PARAFAC(2) and PMF can be unreliable when chromatograms are used directly as input (Eilers, 2004; Van Nederkassel et al., 2006), as it may be difficult or impossible to determine whether unaligned peaks in each chromatogram represent the same analyte. Chromatogram alignment may occur through manual adjustment by users or may be automated using any of multiple solutions (Eilers, 2004; Kassidas et al., 1998; Nielsen et al., 1998) to align chromatograms in the preprocessing of data with relatively little user input. Implementation of some alignment (manual or automated) is necessary in preprocessing, but the cataloging approach described here is independent of the details of any such approach (a manual approach is used in this work), so details are not included. To achieve high chemical resolution and identify minor constituents, PMF is not performed on the full matrix, but rather on subsectioned "slices" that represent short periods of the chromatogram.
As samples of environmental data can be highly complex and heterogeneous, successively applying PMF to small portions of the elution time is significantly more effective than extracting factors from the full range of the elution time data (Zhang et al., 2014). Each slice comprises the same period of elution time from each sample chromatogram. Though the retention times of the chromatograms have been globally aligned, some small shifts in retention time may still exist at the timescale of a slice; consequently, a secondary "fine-scale" retention time correction is applied to each sample within the slice to yield a unified time basis (typically the retention time of one of the chromatograms). A form of correlation optimized warping (COW) (Nielsen et al., 1998) is used, in which the retention time offset is determined that maximizes the correlation of the maximum number of single ion chromatograms (SIC) within the slice. This approach is expected to work so long as a dominant fraction of peaks is present in all chromatograms, but it does not require all peaks to be present. This fine-scale retention time adjustment is not strictly necessary for the application of PMF; without it, the same analytes will generally be found in the chromatograms. However, minimizing retention time differences between chromatograms is very useful for the subsequent decision tree to determine whether peaks in different chromatograms represent the same analyte, as opposed to different analytes with highly similar retention times and mass spectra (e.g., isomers). For each slice of the three-dimensional data array, the dimensionality is reduced by concatenating samples such that the first row of the two-dimensional matrix of one chromatogram is positioned after the last row of the matrix of another chromatogram. The resulting two-dimensional slice matrix represents repeating periods of elution time (a minimal sketch of this construction is given below). PMF is then applied to this concatenated slice matrix, yielding a set of factors that represent the elution profiles of mass spectra that covary (i.e., chromatographic peaks of analytes). For the reasons discussed above, tens of factors are used in the PMF solution, which is more than used in most other, more common PMF applications (Ulbrich et al., 2009). This number of factors is on the order of the length of the slice divided by the typical peak width (i.e., one or two factors per resolvable peak), but a more detailed discussion of optimizing the number of factors is presented in the "Results and discussion" section. Each resulting PMF factor from the slice consists of the chromatographic profiles of a given mass spectrum in each chromatogram (an example is shown in Fig. S2 in the Supplement). Factor data are stored for subsequent processing, and PMF is performed iteratively through slices until the entire range of chromatographic time is covered. Each slice overlaps in part with the slices before and after to capture potential peaks that may be cut off at the edges of each slice; the overlap must equal or exceed the typical width of a chromatographic peak to ensure this outcome. The full set of PMF results for all slices is compiled and addressed through a decision tree, as shown broadly in Fig. 1 and addressed in more detail below. The optimal number or length of slices is expected to be data-dependent and is expected to control the number of factors needed to fully deconvolute the data.
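The following sketch shows one way the overlapping, concatenated slice matrices described above could be constructed; the array shapes, window lengths, and function names are illustrative assumptions rather than the authors' Igor Pro implementation.

```python
import numpy as np

def slice_matrix(chromatograms, times, t0, t1):
    """Stack the same aligned retention-time window from every chromatogram
    (each an n_scans x n_mz array on a shared time axis) into one 2-D
    matrix: the concatenated 'slice' passed to the factorization."""
    mask = (times >= t0) & (times < t1)
    return np.vstack([X[mask] for X in chromatograms])

def slice_windows(t_start, t_end, width=10.0, overlap=2.0):
    """Yield overlapping slice boundaries covering the chromatographic run;
    the overlap should equal or exceed a typical peak width."""
    t = t_start
    while t < t_end:
        yield t, min(t + width, t_end)
        t += width - overlap

# Example: three placeholder chromatograms sharing a 0-60 s time axis.
rng = np.random.default_rng(2)
times = np.linspace(0.0, 60.0, 600)
chroms = [rng.random((600, 250)) for _ in range(3)]
for t0, t1 in slice_windows(times[0], times[-1]):
    X_slice = slice_matrix(chroms, times, t0, t1)  # fed to PMF/NMF
```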
Shorter slices can likely be analyzed with fewer factors but require more slices to analyze the full chromatogram, making estimation of computational time somewhat complex. In this work we use slices of 5-15 s and PMF solutions with up to 35 factors, then discuss the trade-offs in the results. We demonstrate in this work that the decision tree described below addresses potential factor splitting caused by high-factor solutions, so there is little disadvantage to increasing the number of factors other than the additional required computational time. In contrast, decreasing the number of factors can result in the detection of fewer analytes. Consequently, it is necessarily a decision of the user of this method to balance a potential decrease in data extraction against computational resources.

Decision tree

A detailed description of the steps involved in the decision tree is presented in Fig. 2. The steps are described in detail below, but the overall approach proceeds as follows.

1. Peaks are detected in each PMF factor and cataloged by quantitative parameters describing their idealized mathematical form.
2. Spurious peaks are removed by several filters to eliminate noise.
3. Peaks in each factor are sorted into potential analytes based on retention times.
4. Potential analytes are sorted and combined into a catalog of unique analytes by comparing retention times and mass spectra.

Peak detection and fitting. Within a factor, peaks are detected by using first and second derivatives to find local minima and maxima. All found peaks are then simultaneously fit to mathematically idealized forms; we assume a Gaussian curve as the ideal chromatographic peak shape (Anderson et al., 1970), though experimental peaks are often perturbed from the ideal shape by instrumental factors, and a certain level of mixing of signals is introduced. Implementation of this approach was performed using built-in packages within the Igor Pro 8 programming environment (specifically, Multipeak fitting 2). A more complex approach could include modified peak shapes (e.g., convolution with an exponential; Isaacman-VanWertz et al., 2017), which would likely enable more accurate characterization of the parameters describing a peak. However, in this work the goal is to catalog all peaks by their approximate parameters as opposed to perfectly integrating them, so increasing the complexity of peak fitting by incorporating refined peak shapes has not been implemented. Implementation of an exponentially modified Gaussian (EMG) procedure as a peak fitting model was examined using the samples containing deuterated tetradecane presented in Fig. 4 and is discussed in Fig. S7. Optimal peak shapes could be used in subsequent processing for accurate integration of the data. The outcome of this peak fitting is a set of peaks present within each factor chromatographic profile (example shown in Fig. S3), including their known retention times within each chromatogram and their retention times relative to peaks in all the other chromatograms (i.e., retention times both uncorrected and corrected to a unified time basis). Not all factor chromatographic profiles necessarily contain any peaks at all; noise or background factors are frequently returned (as seen in the example shown in Fig. S2). The parameters that quantitatively describe the peak (location, width, and height) are stored alongside the corresponding time profiles and mass spectra from the PMF.
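A hedged Python sketch of this detection-and-fitting step is given below. The authors used Igor Pro's Multipeak fitting, so SciPy's find_peaks and curve_fit are substitutes here; the prominence threshold and window size are arbitrary assumptions, and each detected peak is fit locally rather than simultaneously as described above.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import find_peaks

def gaussian(t, height, center, sigma):
    """Idealized chromatographic peak shape."""
    return height * np.exp(-0.5 * ((t - center) / sigma) ** 2)

def fit_factor_peaks(t, profile, half_window=15):
    """Detect local maxima in a factor's chromatographic profile and fit
    each to a Gaussian, returning (height, center, sigma) along with the
    one-sigma uncertainties reported by the fit."""
    peaks, _ = find_peaks(profile, prominence=0.05 * profile.max())
    results = []
    for i in peaks:
        lo, hi = max(i - half_window, 0), min(i + half_window, len(t))
        p0 = [profile[i], t[i], 3.0 * (t[1] - t[0])]  # rough initial guess
        try:
            popt, pcov = curve_fit(gaussian, t[lo:hi], profile[lo:hi], p0=p0)
        except RuntimeError:
            continue  # discard peaks whose fit fails to converge
        results.append((popt, np.sqrt(np.diag(pcov))))
    return results
```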
The location of a peak (i.e., retention time) is the mean of the Gaussian curve, and the width of a peak is described using the standard deviation of the curve. The uncertainties in these three parameters determined by the fit are also stored. Alternate descriptors of width, such as the full (or half) width at half-maximum, FWHM (or HWHM), may be more appropriate if other peak shapes are considered, but all of these descriptors can be mathematically related for a Gaussian curve, so any descriptor is useful in this case. We discuss the implications of this choice in the discussion of analyte sorting. By performing peak detection and initial fitting on all factors within the slice, an index of all potential peaks is generated within the region of the chromatogram. These peaks are then validated and cataloged into analytes in subsequent steps by a decision tree.

Peak filtering. Once all potential peaks are indexed, spurious potential peaks are eliminated. The automated peak detection and fitting algorithm may find peaks in factors that do not qualitatively appear to contain any chromatographic peaks or may imply that the inclusion of a negative peak improves the fit (Anderson et al., 1970). Filtering thresholds are introduced to remove these peaks. Specifically, peaks with negative parameters and/or an estimated error greater than the corresponding parameter (e.g., width) are considered the result of a bad fit and are thereby rejected. In addition, peaks with a weak signal (peak height / baseline signal < 10) are removed. Furthermore, because true chromatographic peaks are expected to have peak widths that can be reasonably well-defined (i.e., the range of possible widths in a dataset may be variable but is generally not very broad), outliers in peak width are identified using Tukey's fences method (Tukey, 1977) with a conservative range for reasonable peak width. The range is defined as

[Q1 - k(Q3 - Q1), Q3 + k(Q3 - Q1)],

where Q1 and Q3 are the lower and upper quartiles of the sorted peak widths, respectively, and k = 3. Peaks with widths outside this range are rejected, i.e., those with widths outside the interquartile range by more than 3 times the magnitude of the interquartile range. Finally, a small number of peaks whose parameters are near both the upper boundary of the peak width and the lower boundary of the peak height-to-base ratio (i.e., low-abundance, broad peaks, those with a height-to-width ratio of, empirically, ∼10,000) are rejected, as they indicate either poor fitting of the peaks or a fitting of noise.

Peak sorting. Ideally, using the optimal number of factors will result in each factor representing one chemical compound, but in practice more than one analyte may be detected within a given factor. This may be due to the true presence of multiple analytes within the slice that have mass spectra too similar to deconvolve (e.g., branched alkane isomers), or it may be due to an error introduced during peak detection or fitting. Peaks in a given factor by definition share a mass spectrum, so those that are chromatographically separated by less than a critical retention time difference (i.e., have nearly the same retention time) are assumed to represent the same analyte. Selection of a critical retention time difference is somewhat dependent on the goals of the user but is inherently related to peak widths. A conservative estimate of a critical width is several times the standard deviation (e.g., FWHM = 2.355σ), which would ensure that only peaks that are truly chromatographically resolved are regarded as unique.
However, in many cases, isomers may not be well-resolved but nevertheless represent unique analytes, which may be apparent in small changes in ion ratios or signal intensities across chromatograms. In these cases, a more aggressive (i.e., smaller) critical retention time difference may be appropriate, which might include the HWHM (∼1.18σ) or, most aggressively, peaks that are separated by only one or two data points (i.e., a peak in a different time period of instrument acquisition). Setting this parameter more aggressively increases the possibility of positive errors, as discussed in Sect. 2.4. When peaks with the same mass spectrum (i.e., from the same factor) closer together than the critical peak width are found in the same chromatogram, they are assumed to be the product of factor and/or peak splitting and are combined. Peaks that meet these criteria across multiple chromatograms (i.e., found within one factor at the same relative retention time) are assumed to represent the same unique analyte across each chromatogram. Peaks within a factor that are separated by more than a peak width are considered unique analytes. This process sorts the peak catalog to yield a list of all potentially unique analytes across all factors, with some factors containing multiple analytes (all with the same mass spectrum) and some factors containing no analytes. Potential analytes each have an associated retention time and mass spectrum, as well as a known peak height and width in each chromatogram in which they were found. It is theoretically also possible that multiple analytes are present in a factor not because they have similar spectra, but because the number of factors is substantially lower than the number of analytes present, so that PMF yields some approximate convolution of analytes. In practice, however, this issue is largely avoided by using a large number of factors. Furthermore, the approach of combining peaks with similar retention times and spectra within a chromatogram may capture a small number of isomers that share a mass spectrum, are rarely resolved, and covary between samples (e.g., m- and p-xylene). This limitation is likely inherent to the approach, as the resolution of such isomers by an automated peak detection algorithm would require an assumption of peak shape, which would limit its application to complex data. Isomers such as these represent an example of the potential impact of a user-specified critical retention time difference, as an aggressive value (e.g., one or two data points) may separate these analytes if there is at least some separation by retention time and some variability in ratios between samples that may be detected by the PMF, while a more conservative approach (e.g., FWHM) is unlikely to separate poorly resolved isomers.

Analyte sorting. PMF followed by peak fitting, peak detection, and peak sorting is performed for all slices, generating a list of potential analytes across the full chromatographic range. Because these potential analytes were generated by examining peaks within each factor, this process does not account for the possibility that PMF factor splitting generates multiple factors containing the same analyte with slight variations in their mass spectra (due to instrument drifts, for example). To address the issue of factor splitting, all the potential analytes are intercompared to remove and combine possible repeats.
Mass spectra (i.e., the mass spectrum of the factor in which each was found) are compared by cosine similarity, cos θ = (M1 · M2)/(|M1| |M2|), where M1 and M2 are the normalized mass spectra of the two potential analytes being compared to determine whether they represent the same analyte. This is the preferred approach of commonly used mass spectral libraries and search programs (Stein, 2014). Two identical mass spectra will have cos θ = 1. Values of 0.8 and higher are generally considered to indicate two mass spectra that may represent the same analyte (Stein, 1994; Worton et al., 2017). Analytes with mass spectral cosine similarity values cos θ ≥ 0.8 are compared by their retention times. In cases in which the retention time difference is greater than the median width, in other words, if the peaks are considered sufficiently distant from each other, they are cataloged as two unique analytes. Analytes with matching mass spectra and retention time differences below the critical threshold are considered to be the same compound detected by two different factors, or two analytes that cannot be resolved by the instrument either chromatographically or by their mass spectra. When found within one slice, analytes are combined by summing peak heights and weighted averaging of their defining parameters (width, mass spectra, etc.). In overlapping sections between slices, any repeat analytes (i.e., found in both slices with matching retention times and spectra) are simply filtered out. Again, the selection of the critical retention time difference exerts some control on the opposing tendencies of this approach to either consider peaks unique (potentially leaving multiple peaks representing the same analyte) or combine peaks (potentially binning multiple analytes). In this step, any potential analytes being compared must exhibit at least some difference in mass spectrum and sample variability, since they were separated by the PMF, so a more aggressive critical retention time difference is likely warranted here. The outcome of this analyte sorting process is a catalog of unique analytes with associated retention times and mass spectra, including information about their widths and heights in each chromatogram used in the analysis. Examples of analytes found are provided in Figs. S4 and S5, which are discussed in more detail in the "Results and discussion" section below. This catalog of analytes is the end goal of the present work but could be used as a template for subsequent analyses or as a dataset to be matched against existing libraries or authentic standards for identification (Worton et al., 2017).

Sample datasets

The method developed here is tested using two GC-MS datasets: a laboratory-generated dataset of known standards and a dataset of ambient aerosol samples. Both datasets were collected using a semi-volatile thermal desorption aerosol gas chromatograph (SV-TAG). This instrument has been described elsewhere in detail (Isaacman et al., 2014; Williams et al., 2006; Zhao et al., 2013). In brief, a sample is collected on a passivated metal fiber filter housed in a temperature-controlled cell, either by introducing a liquid standard or by pulling sampled ambient air through it. The cell is then thermally desorbed with a programmed temperature ramp (25 to 315 °C over 8 min), and analytes are transferred to a GC column ramped from 50 to 310 °C. GC eluent is then analyzed by electron ionization mass spectrometry (Agilent Technologies). The two datasets differ in their column ramp rate and dimensions.
Laboratory data were collected with an MTX-5 column (15 m × 0.25 mm × 0.25 µm, Restek) at a ramp rate of 12.5 °C min−1. Ambient data were collected with an Rtx-5Sil MS column (20 m × 0.18 mm × 0.18 µm, Restek) at a ramp rate of 23.6 °C min−1. For analysis of known standards, liquid standards were injected into the sample collection cell through the automated liquid injection system of the TAG (Isaacman et al., 2011). Standards included 10 ng of n-alkanes (C8–C40, diluted from 500 µg mL−1, supplied by AccuStandard) and 15 ng of select perdeuterated n-alkanes: C14, C15, C16, C20, C24, and C26 (diluted from stock mixtures made of pure compounds, supplied by C/D/N Isotopes approximately 2 years prior to use). Collection of ambient air data took place near Manaus, Brazil, as part of the GoAmazon2014/5 campaign. Details of sampling and the SV-TAG instrument used to collect this dataset have been previously published (Isaacman-VanWertz et al., 2016). The data presented in this work were collected during the wet season in February and March 2014. Samples of atmospheric particles and semi-volatile gases were collected during the first 22 min of every hour. During desorption of the collection cell, analytes were derivatized by introducing N-methyl-N-(trimethylsilyl)trifluoroacetamide (MSTFA) into the desorption flow; this method silylates all hydroxyl groups, improving transfer through the GC column (Isaacman et al., 2014). Approximately 100 compounds have been previously resolved and cataloged in this dataset (Isaacman-VanWertz et al., 2016), only a small fraction of which were identified as compounds with known molecular structures and identities.

Method validation

The analyte cataloging method is investigated using real-world GC-MS data collected on known calibrants and under field conditions, as described below. Two major failure modes are examined: (1) negative errors in the form of uncataloged analytes due to underfitting and (2) positive errors in the form of false analytes identified due to factor splitting or overfitting. The former can theoretically be addressed in large part by increasing the number of factors, but this approach increases the potential for the latter. To examine this interplay and the ability of the decision tree to compensate for potential positive errors, we examine sections of chromatograms containing known n-alkanes and perdeuterated isotopologues. These samples are analyzed with a varying number of factors to understand the ability of the method to identify major and minor components, address factor splitting caused by high numbers of factors, and examine the potential impacts of the critical retention time difference. Slices of four chromatograms of a 5–15 s window containing known analytes are investigated under a range of method parameters. Application to complex field data provides an additional test for negative errors by challenging the method with data that have been previously cataloged by an expert operator, as well as providing insight into the power of the proposed method. To validate the method, below we discuss the results of three specific tests. In the first test (Sect. 3.1), we investigate the potential for positive errors by using high-factor PMF solutions to generate the catalog of peaks used by the decision tree. In the second test (Sect. 3.2), we investigate the potential for negative errors by examining the deconvolution of poorly resolved analytes with similar mass spectra.
In the third test (Sect. 3.3), we investigate the utility of the method in real-world data by applying it to a complex environmental sample and examining the potential for negative errors by comparing the analyte catalog to a previously published analysis (Isaacman-VanWertz et al., 2016).

Effects of increasing factors

The proposed cataloging method was applied to a 15 s chromatographic window that included the peak known to represent injected perdeuterated tetradecane (C14D30), with the number of PMF factors ranging from 1 to 20 (Fig. 3). In a 1-factor solution, one analyte was found, representing the known compound (Fig. 3a). The number of analytes found increased as more factors were used, with the injected compound always found and minor analytes found in higher-factor solutions, as discussed below. The critical retention time difference used in this analysis was relatively aggressive (median HWHM, which equals 0.7 s in these data) in order to examine the capability of the method to find unique peaks; the effects of this selection are discussed below. The relationship between the number of analytes found and the number of factors is nonlinear, approaching an apparent plateau. This plateauing behavior for analytes is in contrast to growth in the number of peaks found, which continues to increase linearly with the number of factors. An ever-growing number of analytes is physically improbable given the relative simplicity of the data, and these peaks likely represent factor splitting. The decision tree addresses this issue by rejecting and combining these found peaks, eventually yielding six unique and distinct analytes that remain relatively stable across solutions. This result agrees with the trend in the percent of the total signal that is not described by the found analytes, i.e., the percent residual, calculated as the sum over time of the absolute difference between the total ion chromatogram and the reconstructed signal curve, relative to the sum of the total ion signal. With increasing factors, the percent residual first drops from approximately 17 % with one factor down to less than 10 % with a few factors. Though identifying the main injected compound and describing 83 % of the measured data is independently quite compelling, the 9-factor solution (Fig. 3b) suggests that the measured data can be better described by increasing the number of factors. Though these analytes appear to represent splitting of the chromatographic peak, we demonstrate in the following section that these data represent real analytes that might be overlooked by a manual operator. The three to four major analytes are cataloged with only a few factors, and the same major analytes are cataloged in a 9-factor solution as in a 19-factor solution (Fig. 3c). Subsequent increases in the number of factors, from 4 to 20, yield little additional information to describe the measured data, detecting only a small number of low-abundance peaks. Overall, these results demonstrate that the approach can identify minor components, while increasing the number of factors beyond the minimum necessary neither provides additional information nor impedes the method (other than the additional computational resources used).

Deconvolution of poorly resolved analytes

Analysis of the perdeuterated tetradecane discussed above indicates that most of the signal can be described by three analytes; while a few additional analytes may be present, their inclusion does little to describe the overall signal.
These three analytes are those shown in Fig. 3b and c as poorly resolved peaks, and they are commonly identified in all factor solutions that found three or more analytes (i.e., 4-factor solutions and higher). The presence of three analytes under this peak is curious, as the known sample comprised only perdeuterated tetradecane (C14D30). However, we demonstrate here that these additional analytes can be described as two isotopologues, C14D29H and C14D28H2 (Fig. 4), which are not unexpected in isotopically labeled standards mixed into methanol, in particular those that were purchased several years prior, as is the case here. The total ion chromatographic peak appears to be normally distributed with minimal skew, but the analyte cataloging method we develop here finds three analytes with slight shifts in retention time and differences in mass spectra. One analyte is deconvolved using only a 2-factor solution (Fig. 4a), and the third is found in higher-factor solutions (Fig. 4c). Retention times are shifted later, as expected for replacement of a deuterium with a hydrogen, and projection of this trend forward (i.e., replacement of all deuterium with hydrogen) predicts a retention time roughly that of non-labeled tetradecane, as expected. Similarly, the fragmentation patterns are highly similar, but there are some significant differences in their intensities. This is clearest in the fragmentation patterns at their molecular weight, with C14D30, C14D29H, and C14D28H2 having a substantial signal at m/z 228, 227, and 226, respectively. At lower m/z, all compounds have a large signal at m/z 66 and at intervals of 16 (CD2), but the isotopologues also have a higher signal at masses shifted by 1 or 2 amu (e.g., higher m/z 65 for C14D29H). The separation of isotopologues presents one of the most difficult challenges for separation methods like chromatography (Valleix et al., 2006; Filer, 1999) due to the tendency of isotopologues with a relatively smaller signal to be completely embedded in the peak of their counterpart and due to their similar spectral signals (Amigo et al., 2010). Figure 4 demonstrates these issues and the ability of the method to overcome them. It is a clear possibility that the deconvolution of these three analytes is a case of positive error, i.e., that these found analytes are an error within the method as opposed to real analytes. To test for that possibility, we performed the same analysis on non-labeled tetradecane and found all signal reasonably described by a single analyte with no co-eluting analytes, as expected (Fig. S6). This result supports the conclusion that the additional peaks found for deuterated alkanes are not artifacts of the high-factor solutions but rather represent true co-eluting peaks, demonstrating the ability of the method to find difficult-to-resolve analytes. Separation of these isotopologues presents an opportunity to examine the impact of the critical retention time difference and of the assumed Gaussian peak shape on this separation. Though the isotopologues exhibit interpretable differences in their higher-molecular-weight ions, the heavy fragmentation of alkanes yields mass spectra that are not sufficiently different to be separated by the cosine similarity threshold (i.e., comparisons between all three isotopologues have cos θ ≥ 0.8), despite differences sufficient for the PMF to separate them into different factors. Consequently, resolution of these peaks relies on separation by retention time in the analyte sorting step.
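For intuition, overlap of this kind can be emulated by modeling the total ion signal as a sum of Gaussians and computing the percent residual of Sect. 3.1 against the reconstruction; a minimal sketch with illustrative parameters (0.75 s spacing, σ ≈ 0.7 s, and one spectrum per 0.3 s anticipate the values discussed next; these are not the fitted values of Fig. 4):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, mu, sigma):
    return a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

def three_gaussians(t, *p):
    """Sum of three Gaussian peaks; p = (a1, mu1, s1, ..., a3, mu3, s3)."""
    return sum(gaussian(t, *p[i:i + 3]) for i in range(0, 9, 3))

def percent_residual(tic, reconstructed):
    """Unexplained fraction of total ion signal, as defined in Sect. 3.1."""
    return 100.0 * np.abs(tic - reconstructed).sum() / tic.sum()

t = np.arange(0.0, 10.0, 0.3)  # one mass spectrum every 0.3 s
true_p = (10.0, 4.0, 0.7, 3.0, 4.75, 0.7, 1.0, 5.5, 0.7)  # peaks 0.75 s apart
tic = three_gaussians(t, *true_p) + np.random.default_rng(0).normal(0.0, 0.05, t.size)
fit_p, _ = curve_fit(three_gaussians, t, tic, p0=(8, 4, 1, 2, 5, 1, 1, 6, 1))
print(percent_residual(tic, three_gaussians(t, *fit_p)))
```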
Separation between each peak is roughly 0.75 s in retention time, while the median peak width in the dataset (σ) is 0.6 s, the widths of these analytes are roughly on the order of 0.7 s, and a mass spectrum is collected every 0.3 s. The peaks are consequently separated by more than two data points and by more than the median HWHM of the dataset (0.71 s), but not by more than the HWHM of these specific peaks (0.82 s) or the median FWHM of the dataset (1.4 s). In other words, only the more aggressive screening methods (i.e., using σ or the median HWHM as the critical retention time difference) would separate these isotopologues. This approach also increases the chance of chromatographic artifacts being cataloged as real analytes (positive error), but a more conservative approach increases the possibility of overlooking poorly resolved and similar analytes such as these (negative error). Ultimately, it is up to the user to decide the optimal critical retention time difference. The effect of a non-Gaussian peak shape was also examined. Because peak detection relies on derivatives to identify potential peaks based on inflection points in the data, the number of peaks found is agnostic toward peak shape; instead, peak shape primarily impacts peak widths. Using an exponentially modified Gaussian peak shape in the analysis of the isotopologues does not substantially change the result (Fig. S7). With this peak shape, the isotopologues remain separated using more aggressive critical retention time differences (median HWHM or more than two data points) but are combined by more conservative thresholds. This result is of course limited to the case shown, in which a Gaussian curve reasonably describes the observed data. Datasets containing highly non-Gaussian peak shapes may be more impacted and should be examined closely for the potential impact of peak tailing on positive errors.

Cataloging analytes in real-world data

To evaluate the proposed method in a real-world application, we apply it across the full chromatographic range for data representing the gas- and particle-phase composition of atmospheric samples. The goal of this analysis is to both provide an estimate of the number of analytes found in representative atmospheric samples and evaluate the ability of the cataloging approach to identify analytes known to exist in a complex, real-world dataset. Doing so requires user decisions on the optimal parameters (e.g., number of factors, slice length). Figure 3 demonstrates the tendency of the method to find increasing numbers of analytes with increasing factors until reaching a certain threshold. It is reasonable to expect that the maximum number of analytes found in each slice is also a function of slice size (i.e., the length of the chromatographic window). As slice length increases, the number of slices decreases (roughly linearly scaling computational time) and the number of necessary factors increases (roughly exponentially scaling computational time; Fig. S9). Because the decision tree is effective at mitigating positive errors, the results of the method (i.e., the catalog of analytes) are not strongly impacted by optimization decisions, which instead primarily impact efficiency (i.e., minimizing computational time). For the real-world data tested here, the maximum number of analytes observed in each slice roughly approaches a plateau when the number of factors used is 2–3 times the length of each slice in seconds (Fig. S8).
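Under that heuristic, configuring a run reduces to a few lines; a minimal sketch (the function name and the 2.5 factors-per-second multiplier are our choices, not fixed by the method):

```python
def plan_slices(t_start, t_end, slice_len=10.0, overlap=2.0, factors_per_s=2.5):
    """Plan overlapping chromatogram slices and a PMF factor count for each."""
    slices, start = [], t_start
    while start < t_end:
        end = min(start + slice_len, t_end)
        slices.append((start, end, round(factors_per_s * slice_len)))
        if end == t_end:
            break
        start = end - overlap
    return slices

# plan_slices(200, 650) yields 56 windows of 10 s with 2 s overlap and 25 factors
# each, consistent with the configuration reported for the field data below
```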
Due to the balance between factor number and slice length, it is generally somewhat more efficient to use lower-factor solutions on a larger number of shorter slices, but computational time is not substantially different across different sets of parameters that meet the necessary number of factors per slice length (Fig. S10). For these data, we use a 25-factor PMF on 10 s slices with 2 s overlap between slices (based on a typical peak width of 1–2 s). A sample of four chromatograms within the retention time window 200–650 s was analyzed by this approach (Fig. 5), constituting 56 slices. These chromatograms were selected from the complete dataset based on their similar sampling times, minimizing differences in instrument operating conditions (retention time, mass spectrometer tuning) over time. We recognize that the variance captured by a larger sample size may increase the amount of information extracted, but this would significantly increase the computational expense. In essence, the optimization of sample size is dependent on sample-to-sample variability and processing capability. In this work, we use a sample of four chromatograms to demonstrate the effectiveness of this approach; optimization of sample size is dataset-dependent and will be explored in future work. From this sample of chromatograms, a total of 1169 analytes were identified, with a computational time of 290 min. This analysis uses a moderately aggressive critical retention time difference (1.4σ), and the number of analytes found is only modestly reduced by more conservative approaches (e.g., 20 % lower using the much more conservative FWHM; Table S1). In contrast, a previously published analysis of this dataset focused on only ∼ 100 compounds cataloged by manual inspection, though additional compounds were observed to exist in the dataset that were not a focus of that previous analysis. We note that a major advantage of the proposed approach is not only the larger number of analytes cataloged (with significantly less manual interaction), but also that each of these analytes has a well-defined mass spectrum that can be used for identification or comparison to existing mass spectral libraries. We probe the present analysis for negative errors by comparing the analyte catalog against analytes identified in the previously published analysis. The analyte catalog in this work was found to include every peak in the dataset with a known identification, including introduced isotopically labeled internal standards, analytes identified by authentic standards, and tracer compounds of interest for known atmospheric processes, such as oxidation products of naturally emitted gases and emissions from biomass burning. For example, the identified peaks observed in the inset of Fig. 5 at 350 and 356 s are the known, highly studied oxidation products of isoprene, 2-methylthreitol and 2-methylerythritol (Claeys et al., 2004; Surratt et al., 2010; Wang et al., 2005), while the peak coeluting earlier at 350 s is the α-pinene oxidation product pinic acid. The mass spectra were also compared to the NIST library, and only 96 (∼ 8 %) of the cataloged analytes had mass spectral matches in the library that were in the "good" or "excellent" range (Stein, 2008) (Fig. S11). Previous work has shown that matches below these thresholds indicate that the found spectra do not represent the unknown analyte (Worton et al., 2017), suggesting that roughly 90 % of the analytes in these samples do not exist in the mass spectral library.
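For reference, the spectral match used in the analyte sorting step and, in weighted variants, by library search programs such as NIST reduces to a cosine similarity; a minimal sketch, assuming spectra are supplied as intensity arrays on a common m/z grid:

```python
import numpy as np

def cosine_similarity(m1, m2):
    """Cosine similarity of two mass spectra given as aligned intensity arrays."""
    m1 = np.asarray(m1, dtype=float)
    m2 = np.asarray(m2, dtype=float)
    return float(np.dot(m1, m2) / (np.linalg.norm(m1) * np.linalg.norm(m2)))

# Identical spectra score 1.0; scores >= 0.8 flag a possible analyte match
```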
These results demonstrate the utility of the proposed approach to detect and identify known analytes of interest and to catalog hundreds of unknown analytes by their retention time and mass spectra. Significant work remains to be done to identify the unknown compounds in the atmosphere. However, many tracers commonly used by the community started out as components with unknown structure or origin. For example, the C5 alkene triols that are commonly measured as isoprene oxidation tracers required significant dedicated effort to identify (Wang et al., 2005). Previous work has also used correlation with known tracers to identify the likely sources of unknown compounds (Isaacman-VanWertz et al., 2016), and in some cases this information was used to quantitatively attribute sources of aerosol (Zhang et al., 2018). Therefore, despite the lack of current identification, we believe it is useful to integrate and investigate all analytes and examine the data as a whole.

Conclusions

In this work, we describe and evaluate a method to catalog analytes in a set of chromatograms representing complex environmental data. Analysis of known standards demonstrates high skill at finding minor analytes even when poorly resolved, with no strong tendency to find spurious analytes that are not actually present. This approach will consequently be valuable for the automated processing of complex chromatographic data and will enable new information to be extracted that might otherwise be ignored or discarded by conventional approaches due to technical difficulties or limited resources. Analysis of real-world data cataloged more than 1000 analytes with little or no human interaction. Three major future developments would further enhance this approach: improved retention time correction without human interaction (e.g., by parametric time warping; Eilers, 2004), incorporation of modified peak shapes (e.g., the exponentially modified Gaussian) as a peak fitting model, and algorithmic optimization of decisions around the length of each slice, the number of factors, and the number of chromatograms. However, even without these advancements, the required level of operator interaction is limited, and the proposed method has the potential to substantially improve and expand data analyses of both new and previously collected data.

Code availability. The code used in this study is being implemented as an automated analyte detection module in TERN, the latest version of which is publicly available at https://doi.org/10.5281/zenodo.6940761 (Isaacman-VanWertz et al., 2022), including access to the source code. The specific implementation of PMF used in this work is the PMF Evaluation Tool (PET) 3.04, as described in Sect. 2.1, which is commercially available, though the module could be modified to incorporate other publicly available PMF tools (e.g., US EPA PMF 5.0).

Data availability. Calibrated time series for analytes with definitive identifications are publicly available through the Department of Energy Atmospheric Radiation Monitoring data archive, which can be found at https://iop.archive.arm.gov/arm-iop/2014/mao/goamazon/T3/goldstein-svtag/ (last access: 13 August 2022; DOE ARM, 2022). Access to the data requires an account with DOE, which is available to any user; sign-up can be done at the following link: https://adc.arm.gov/armuserreg/#/new (last access: 19 August 2022). The raw chromatograms of these data are available upon request by contacting the corresponding author.

Author contributions.
GIVW conceptualized and supervised the project; SK developed the software code, performed the analysis, and wrote the paper draft; BML, DTS, and GIVW contributed to the analysis and reviewed and edited the paper.

Competing interests. The authors have the following competing interests: BML and DTS are employed by Aerodyne Research, Inc., which commercializes GC and MS instrumentation.

Disclaimer. Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The Insulin-Like Growth Factor System

The insulin-like growth factor (IGF) system is ubiquitous and plays a role in every tissue of the body. It comprises ligands, receptors, and binding proteins, each with specific functions. While it plays an essential role in embryonic and post-natal development, the IGF system is also important in normal adult physiology. There are now numerous examples of diseases such as diabetes, cancer, and malnutrition in which the IGF system is a major player and, not surprisingly, there are attempts to affect these disorders by manipulating the system.

INTRODUCTION

The insulin-like growth factor (IGF) system includes three ligands (insulin, IGF-I, and IGF-II), three receptors (the insulin receptor [IR], the IGF-I receptor [IGF-IR], and the mannose-6-phosphate IGF-II receptor [M6P/IGF-IIR]), as well as six IGF-binding proteins (IGFBPs). This family of growth factors has been extensively studied because of its critical roles in both normal physiology and various disease states, such as cancer, diabetes, and nutritional status abnormalities. The various components of the IGF family are widely expressed, and other important functions of this system are rapidly being discovered. This article will present an overview of the IGF system, including the biological functions of its various components and how the expression levels of these components are controlled. The major emphasis will be on more recent findings, with particular focus on aberrations of the system, because many excellent reviews have previously covered the more familiar aspects of the IGF system (Baxter, 2000; Clemmons, 2001; Le Roith et al., 1995; Rajaram et al., 1997; Stewart and Rotwein, 1996). Where appropriate, aberrations of the system will be discussed in the context of diabetes and its complications.

IGF-I

Insulin-like growth factor-I (IGF-I) is expressed by most tissues of the body. It circulates as a 70-residue single-chain polypeptide with four domains, designated as B, C, A, and D. In comparison, proinsulin includes the B, C, and A domains, whereas mature insulin produced and secreted by the pancreas includes only the B and A domains. Circulating IGF-I is primarily derived from the liver, although other tissues, such as fat, may also contribute (Yakar et al., 1999). The major factors that regulate hepatic IGF-I biosynthesis are growth hormone (GH), insulin, and nutritional status. Although GH is the major factor that stimulates IGF-I expression and release, insulin and nutrients can also significantly affect this response (Bichell et al., 1992; Kaytor et al., 2001; Zhang et al., 1998). In extrahepatic tissues, IGF-I gene expression is regulated by several factors in addition to GH. For example, both prostaglandin E2 (PGE2) and parathyroid hormone (PTH) increase IGF-I mRNA levels in cultured osteoblasts, whereas GH has little effect on IGF-I expression in this system (Bichell et al., 1993). Estradiol also increases expression of IGF-I in osteoblasts (Ernst and Rodan, 1991). Other examples include angiotensin II regulation of IGF-I expression at the local level, by stimulating IGF-I production in the cardiovascular system (Brink et al., 1999), the induction of IGF-I mRNA during compensatory renal growth (Mulroney et al., 1992), and its role in the regulation of skeletal muscle growth and repair (reviewed by Florini et al., 1996). Thyroid-stimulating hormone (TSH) induces IGF-I mRNA expression in thyroid cells (Hofbauer et al., 1995).
Finally, estrogen may play an important role in the local regulation of IGF-I in ovarian and uterine tissue (Murphy and Friesen, 1988). The role of circulating "endocrine" IGF-I (primarily derived from the liver) versus local "paracrine/autocrine" IGF-I has recently been reevaluated through the use of tissue-specific gene-deletion (Cre-loxP) technology (Yakar et al., 1999). In this system, a construct including the IGF-I gene flanked by loxP sequences was introduced into mice using homologous recombination. In another transgenic mouse line, the recombinase enzyme (Cre) is expressed in the tissue of interest using a specific promoter-enhancer element, and the two mouse lines are crossed. Using this approach, IGF-I gene expression was ablated specifically in the liver of mice (Yakar et al., 1999). The resulting mice exhibited complete abrogation of IGF-I gene expression in the liver with normal production in all nonhepatic tissues. Although total circulating IGF-I levels were reduced by 75% to 80% in the "liver knockout" mice, their growth and development were apparently normal. However, total circulating IGF-I levels were further reduced when these mice were crossed with acid-labile subunit (ALS) gene-deleted mice. In the circulation, ALS forms a complex with IGFs and IGFBP-3. The loss of ALS results in very low binding capacity and protection for circulating IGFs. This double knockout animal exhibited growth retardation, suggesting that circulating IGF-I contributes to growth, but that local production of IGF is also important (Yakar et al., 2002). Interestingly, these studies revealed that IGF-I has dual effects on bone development; circulating IGF-I played a major role in bone density, whereas both circulating and local IGF-I were involved in linear bone growth.

IGF-II

Ablation of the IGF-II gene demonstrated that IGF-II plays a critical role in normal growth and development in mice. From embryonic day 11 onward, IGF-II knockout mice exhibited proportionate growth retardation. There were no further postnatal effects on growth in these mice, because the expression and circulating levels of IGF-II decrease dramatically after birth in rodents. In contrast, the major effects of IGF-I gene ablation were observed during the postnatal period, including severe growth retardation, infertility, and other effects (Liu et al., 1993; Liu and Le Roith, 1999; Wang et al., 1999). Expression of IGF-II in cultured cells is regulated by various agents, including follicle-stimulating hormone (FSH), chorionic gonadotrophin, and cyclic AMP (cAMP) in ovarian cells, adrenocorticotropic hormone (ACTH) and cAMP in fetal adrenal cells, glucocorticoids and thyroid hormone in hepatic cells, and glucose in a pancreatic beta-cell line. IGF-II is also increased in response to glucose in fetal hepatocytes, and it plays an important autocrine/paracrine role in skeletal muscle myoblast differentiation in vitro (Stewart and Rotwein, 1996). In the circulation, IGF-II is a 67-amino acid, single-chain polypeptide. However, patients with certain types of tumors occasionally release "big-IGF-II," a larger precursor form with a 21-amino acid extension designated as the E-peptide. Big-IGF-II may cause hypoglycemia by interfering with the normal effect of the IGFBPs in neutralizing circulating IGFs, thereby enabling big-IGF-II to interact with IRs (Daughaday et al., 1988).

RECEPTORS

IR and IGF-IR

The IR and IGF-IR are products of separate genes that span >100 kilobases and contain over 20 exons.
They belong to the membrane-spanning family of tyrosine kinase receptors. These receptors are organized into functional domains. The mature receptor is expressed in an α2β2 configuration, where the two α subunits are joined by disulfide bonds. The α subunit lies entirely within the extracellular region and contains a cysteine-rich domain that forms the primary binding site for IGFs in the IGF-IR, whereas insulin apparently binds regions flanking the cysteine-rich domain of the IR. The β subunit includes a 24-residue hydrophobic transmembrane domain, a short extracellular region, and a large cytoplasmic region that includes a tyrosine kinase domain. The tyrosine kinase region is highly conserved between the IGF-IR and the IR, sharing approximately 84% similarity at the amino acid level. The juxtamembrane region contains various motifs that bind to important intracellular substrates. The most divergent region between the two receptors is the cytoplasmic carboxyl-terminal domain (Ulrich et al., 1985, 1986). As a consequence of this high level of homology, hybrid receptors, comprised of an insulin αβ hemireceptor and an IGF-I αβ hemireceptor, can form in tissues and cultured cells expressing both the IR and the IGF-IR (Federici et al., 1997a, 1997b). Such hybrid receptors may play a role in the divergent actions of insulin and IGF-I. The biological response elicited by these hybrid receptors can vary, depending on the specific isoforms of the IR that are involved. The IR exists as two isoforms generated by alternative splicing of the IR gene that either lack (IR-A) or include (IR-B) 12 amino acid residues encoded by exon 11 at the carboxyl terminus of the IR α subunit. Hybrids comprised of an IR-A hemireceptor and an IGF-IR hemireceptor bound IGF-I, IGF-II, and insulin and exhibited cell proliferation and migration, even in response to insulin, presumably via activation of the IGF-IR. In contrast, hybrids comprised of an IR-B hemireceptor and an IGF-IR hemireceptor were responsive only to IGF-I but not to IGF-II or insulin (Pandini et al., 2002). The IR-A homoreceptor binds IGF-II with high affinity and may be important in the functional role of IGF-II in the fetus and in dedifferentiated (malignant) cells (Sciacca et al., 1999).

Receptor Functioning

Najjar and coworkers identified pp120, a plasma membrane glycoprotein, as a specific substrate for the IR but not the IGF-IR (Najjar et al., 1997; Soni et al., 2000). Phosphorylation of pp120 is required for its function in insulin endocytosis (Formisano et al., 1995), and also for its inhibitory effect on the mitogenic actions of insulin (Soni et al., 2000). Interestingly, when the carboxyl terminus of the IGF-IR is replaced by the equivalent region of the IR, the chimeric IGF-IR can then bind to and phosphorylate pp120, and the effect of IGF-I on cell growth is decreased (Soni et al., 2000). Mutation of Tyr1316 in the IR, which is not conserved in the IGF-IR, abrogates the insulin-induced tyrosine phosphorylation of pp120 and its ability to suppress insulin-induced mitogenesis. Like the IR, IGF-IRs are internalized following ligand binding and activation. Activation of the IGF-IR enhances the association of EHD1 with the IGF-IR and SNAP29. EHD1 belongs to a family of EH domain-containing proteins that form protein complexes promoting clathrin-coated vesicles and endocytosis.
Overexpression of EHD1 in NIH-3T3 fibroblasts inhibits IGF-I signaling and supports the hypothesis that endocytosis may be a mechanism whereby the IGF-IR signal is abrogated (Rotem-Yehudar et al., 2001). Internalization is also affected by Gαi and β-arrestin-1, which bind to the IGF-IR after activation and also enhance the activation of mitogen-activated protein (MAP) kinase by the IGF-IR (Dalle et al., 2001; Lin et al., 1998).

Common Signaling Pathways

In addition to the structural similarities of the IR and IGF-IR, many of the intracellular signaling events that result from ligand-induced receptor activation are remarkably similar (Cheatham and Kahn, 1995; Le Roith et al., 1995; White, 1994). The tyrosine kinase domains of the IR and IGF-IR catalyze the phosphorylation of specific substrates and are critical for IR- and IGF-IR-induced signaling (Kato et al., 1993). All conserved tyrosine residues that are phosphorylated in the IR in response to insulin are also phosphorylated in the IGF-IR in response to IGF-I. Thus, the IGF-IR and IR share many substrates, such as the members of the IR substrate (IRS) family (IRS-1 to IRS-4), Gab-1, and Shc (Fantin et al., 1998; Lavan and Lienhard, 1993; Patti et al., 1995; Pelicci et al., 1992; Winnay et al., 2000). IRS-1 is the best characterized of the IRS family members. IRS proteins and Shc contain an amino-terminal phosphotyrosine-binding (PTB) domain that enables them to bind to the juxtamembrane domain of the IR and IGF-IR via phosphotyrosines in NPEY motifs. IRS-1 and Shc are competitive substrates that can interact with both the IR and the IGF-IR (Sasaoka et al., 1996). Upon stimulation with insulin or IGF-I, tyrosine-phosphorylated IRS and Shc proteins engage in the formation of signaling complexes via phosphotyrosine-containing binding motifs (e.g., YXXM) that are recognized by the Src homology 2 (SH2) domains of molecules like GRB2 (growth factor receptor binding-2 protein) (Lowenstein et al., 1992; Skolnik et al., 1993) and the p85 regulatory subunit of phosphatidylinositol 3 kinase (PI3K) (Backer et al., 1992). The phosphotyrosine residues on IRS-1 form docking sites for additional signaling molecules, including Syp (SHPTP2) (Xiao et al., 1994), Fyn (Sun et al., 1996), Nck, and Crk (Beitner-Johnson et al., 1996). By binding to GRB2, IRS proteins couple GRB2 to the IR or IGF-IR. Shc also couples these receptors to GRB2, even more strongly than the IRS proteins do. Once associated with Shc and/or IRS proteins, GRB2 forms a complex with the Son of Sevenless (SOS) p21Ras guanine nucleotide GDP/GTP exchange factor. This causes translocation of SOS to the plasma membrane, activation of the Ras/MAP kinase pathway, and regulation of cell growth, differentiation, and proliferation in response to insulin and IGF-I (Blenis, 1993; Crews and Erikson, 1993). Activation of the IR and IGF-IR and their intracellular components in response to ligand binding is transient and is controlled by several mechanisms, including phosphorylation, dephosphorylation, and/or degradation of certain components. Syp (SHPTP2, PTP-1D, or SHP-2) (Lamothe et al., 1996; Maile and Clemmons, 2002) and PTP1B (Ravichandran et al., 2001) are two candidate phosphotyrosine phosphatases that interact with both the IGF-IR and the IR and dephosphorylate them in response to ligand binding. SHIP-2 and PTEN (Butler et al., 2002) are lipid phosphatases that play significant roles in the regulation of PI3K signaling in response to both IGF-I and insulin.
It has also been shown that phosphorylation of serine or threonine residues of IRS-1 (Rui et al., 2001) or degradation of IRS-1 (Sun et al., 1999) can counterregulate the insulin or IGF-I response. One of the major effects of IGF-I is to promote cell survival. Several molecular mechanisms underlying IGF-I-mediated cell survival have been described, involving PI3K/Akt, MAP kinase, and 14-3-3 proteins. All of these proteins are associated with increases in the phosphorylation state of the proapoptotic protein BAD (Bai et al., 1999). In neuronal cells, IGF-I induced phosphorylation of the forkhead transcription factor via a PI3K/Akt-dependent signaling pathway, thereby inhibiting apoptosis. Furthermore, IGF-I promotes transcription of the antiapoptotic bcl-2 gene by promoting phosphorylation of the cAMP response element-binding protein (CREB) transcription factor via both the p38 stress-activated protein kinase and PI3K/Akt pathways (Pugazhenthi et al., 1999). The antiapoptotic effects of IGF-I are mediated through PI3K and are also important in preventing mannitol-induced apoptosis. Mannitol induces dephosphorylation and degradation of FAK, apparently in response to activation of caspases. IGF-I counteracts this effect, leading to increases in cell adhesion and cell survival (Kim and Feldman, 2002). The IGF-IR can promote cell spreading and cell contact with the extracellular matrix (ECM) by interacting with RACK1, a Gβ homologue. This interaction delays progression of the cell cycle, but enhances FAK and paxillin phosphorylation, thereby altering integrin signaling (Hermanto et al., 2002). RACK1 may also modulate the antiapoptotic effects of the IGF-IR via Akt (Kiely et al., 2002).

Receptor Cross-Talk

The insulin and IGF-I receptors do not function in isolation; their signals affect and are affected by other receptor signaling cascades. The IGF-IR activates heterotrimeric G proteins in certain cell types. It has been reported that the βγ subunits of the Gi class of G proteins can mediate IGF-I-induced activation of MAP kinase (Luttrell et al., 1995). In addition, it has been demonstrated that the IGF-IR forms a complex with the Gα subunit of Gi proteins (Dalle et al., 2001). Interestingly, chronic treatment with insulin can cause heterologous desensitization of the IGF-I-induced activation of MAP kinase by down-regulation of β-arrestin-1, a molecule that plays an important role in IGF-IR internalization and activation of MAP kinase (Dalle et al., 2002). Other examples of this type of receptor cross-talk involve the estrogen receptor and the IGF-IR. Thus, in MCF-7 breast cancer cells and other cell lines, costimulation with estradiol and IGF-I can induce either additive or synergistic effects on downstream signaling pathways, such as PI3K, and on various cell cycle events (Dupont et al., 2000). The IGF-IR interacts with the cell-cell adhesion complex that includes E-cadherin, β-catenin, and p120 catenin. When IGF-IR expression is reduced in MCF-7 cells by the introduction of antisense mRNA, these cells exhibit a more malignant phenotype that is associated with a reduction in the cell-cell adhesion complex. This is thought to result from a p120 catenin-induced reduction in E-cadherin and activation of Rac and Cdc42 (Pennisi et al., 2002).

IGFBPs

To date, six IGFBPs that have high affinity for the IGFs have been described (Jones and Clemmons, 1995).
These proteins are characterized by their well-conserved amino- and carboxyl-terminal domains that contain several highly conserved cysteine residues. IGFBPs are found both in the circulation and at the local tissue level. In the circulation, IGFBPs act as "transport proteins" for the IGFs, but at the local level they act as modulators of IGF activity (Zapf, 1995). In the circulation, the major proportion of IGF is bound in a 150-kDa complex that includes IGFBP-3 and ALS, which protect the IGFs from proteases and prolong their circulating half-life (Rajaram et al., 1997). IGFBPs may also function as carrier proteins, because other IGFBPs may contribute to a 50-kDa circulating complex that facilitates the transfer of IGFs from the circulation to target cells. At the target cell level, the IGFBPs have multiple roles; some IGFBPs modulate the effects of the IGFs, and others act independently of the IGFs and the IGF-IR (Lalou et al., 1996). IGFBP-induced inhibition of IGF-I action occurs when IGFBPs prevent the interaction of IGFs with the IGF-IR (Baxter, 2000; Jones and Clemmons, 1995). However, the binding affinities of IGFBPs are altered by various modifications, including phosphorylation, partial proteolysis, and attachment to the cell surface or ECM. For example, dephosphorylation of IGFBP-1 lowers its affinity for IGFs. Attachment of IGFBP-3 to the cell surface or of IGFBP-5 to the ECM lowers their respective affinities for IGFs. All of these effects have been proposed to enhance the delivery of IGFs to the IGF-IR. The potential role of the IGFBP system that has been most extensively studied as it relates to complications of diabetes is its effect on nephropathy. GH and IGF-I were initially shown to affect the diabetic kidney by increasing glomerular filtration rates and renal plasma flow, resulting in the typical enlarged kidney seen in patients with recent onset of diabetes (Flyvbjerg, 2000). Similar changes were seen in experimental animal models of diabetes, particularly of the type 1 variety. This change was associated with increased IGF-I concentrations in the renal tissue, despite a significant reduction in IGF-I gene expression as measured by mRNA levels. This suggested that the IGF-I peptide was being trapped from the circulation. Indeed, subsequent studies demonstrated that both the IGF-IR and certain IGFBPs were expressed by the kidney at higher levels than in control animals and suggested that the IGFBPs play a critical role in the presentation of IGF-I to its receptor and thereby enhance its biological function (Landau et al., 1995; Werner et al., 1990). Interestingly, somatostatin analogs have been shown to inhibit IGFBP gene expression in diabetic animals and to prevent the early renal changes described above (Raz et al., 1998). IGFBP-1 has been reported to exhibit IGF-independent actions. When the RGD sequence was prevented from interacting with the integrin α5β1 receptor, IGFBP-1 was unable to stimulate cell migration (Jones et al., 1993). In breast cancer cells, the binding of IGFBP-1 to integrin at the cell surface resulted in dephosphorylation of FAK, detachment from the ECM, and cellular apoptosis (Perks et al., 1999). A proapoptotic action of IGFBP-3 has also been reported to be independent of IGF (Perks et al., 1999a, 1999b; Butt et al., 2000; Maile et al., 1999). A number of potential IGF-independent survival-promoting effects of IGFBP-4 and IGFBP-5 have been reported; however, the mechanisms underlying these effects are not known (Perks et al., 1999a).
Finally, IGFBP-3 and IGFBP-5 are translocated into the nucleus via the importin β subunit (Schedlich et al., 1998, 2000). The cellular consequences of these actions are unknown. Nevertheless, taken together, these findings suggest that various IGF-independent actions of IGFBPs can regulate cell growth and survival.

SUMMARY

The IGF system is ubiquitous and has multiple roles in normal physiology and pathological states. Although it regulates important functions in normal growth, development, and differentiation of most tissues, aberrations in the IGF system are clearly associated with various pathological conditions, including cancer, acromegaly, growth retardation, diabetes, and its associated complications, such as retinopathy, nephropathy, neuropathy, and insulin resistance. Understanding the control of expression of the various components, as well as the signal transduction pathways involved in IGF-I receptor function, will facilitate the development of specific therapeutic modalities for these and other disorders.
Expression of Angiopoietin 1, 2 and Their Common Receptor Tie2 in Human Gastric Carcinoma: Implication for Angiogenesis

Angiogenesis, the formation of new microvessels providing oxygen and nutrient supply, is essential for tumor growth. It is dependent on the production of angiogenic growth factors by tumor cells. Angiopoietin 1 (Ang-1) and 2 (Ang-2) and their common receptor, Tie2, are thought to be critical regulators of tumor angiogenesis. We examined expression of Ang-1, Ang-2, and their common receptor Tie2 mRNAs and proteins in gastric cancers using in situ hybridization and immunohistochemistry. We also investigated the relationship between their expression and differentiation of cancer cells, lymph node metastasis, tumor size, depth of cancer cell invasion, TNM staging, and microvessel density (MVD). The expression of Ang-1, Ang-2, and Tie2 mRNA in cancer cells significantly correlated with the MVD (p<0.001, <0.001, and =0.019, respectively). Ang-1 and Tie2 positivity correlated with advanced gastric cancers (p<0.05), and larger cancers had higher positive rates of Ang-1, Ang-2, and Tie2 mRNA expression (p<0.001, =0.010, and =0.039, respectively). Significant positive correlations were also found between mRNA expression of Tie2 and those of Ang-1 and Ang-2 (p<0.01 and <0.001, respectively). These findings indicate that the expression of Ang-1 and Ang-2 is important for tumor angiogenesis, and they suggest a possible autocrine/paracrine function of the angiopoietin/Tie2 system in gastric cancer progression.

INTRODUCTION

Angiogenesis, the generation of new microvessels from preexisting blood vessels, is essential for tumor growth and invasion (1). Cancer cells stimulate angiogenesis by secreting angiogenic growth factors and cytokines, such as vascular endothelial growth factor (VEGF), platelet derived growth factor (PDGF), and fibroblast growth factor (FGF), that act on the endothelial cells of adjacent vessels and microvessels (2). Angiopoietins, a new family of angiogenic growth factors that are mostly specific for the vascular endothelium, have been identified in recent years (3-5). Angiopoietins have been shown to function as ligands for the Tie2/Tek vascular endothelial-specific receptor (6,7). Angiopoietin 1 (Ang-1) functions to stabilize and maintain mature vessels by promoting interaction between endothelial cells and their supporting cells, such as pericytes and smooth muscle cells (8,9). Angiopoietin 2 (Ang-2) is expressed at sites of vascular remodeling and is thought to play a facilitating role there by disrupting the constitutive stabilizing action of Ang-1 (5). Gastric cancer is the second most common malignancy in the world. It has been demonstrated that certain cancer cells (e.g., breast, stomach) produce several angiogenic growth factors, including VEGF, PDGF, and transforming growth factor β1 (TGF-β1), and that the expression of these factors correlates with tumor angiogenesis, tumor progression, and poor prognosis (2,10-12). Among the known angiogenic factors, VEGF has emerged as the central regulator of the angiogenic process in cancer (2). Increased expression of VEGF in gastric cancer has been demonstrated, and it is correlated with tumor angiogenesis and poor prognosis (11-13). However, little is known about the expression of angiopoietins and Tie2 in gastric carcinoma and their relation to angiogenesis and clinicopathologic findings. Angiopoietin receptors, Tie2/Tek, were previously thought to be expressed exclusively by endothelial cells (6,7).
Recently, some studies have suggested that Tie2 can be expressed by hematopoietic precursors and cancer cells (14,15), including prostate carcinoma cells (16). However, the possibility that angiopoietins/Tie2 are expressed in gastric cancer and can function in an autocrine or paracrine manner has not been previously examined. In the present study we examined the expression and localization of Ang-1 and Ang-2 mRNAs and proteins in human gastric carcinomas and investigated the correlation between angiogenesis and differentiation of carcinomas, lymph node metastasis, tumor size, depth of invasion, and TNM staging.

Patients and specimens

The Human Ethics Committee of Chonbuk National University Medical School approved this study. We used gastric cancer specimens obtained from 51 patients (between 1998 and 1999) at Chonbuk National University Hospital who underwent curative gastrectomy without prior chemotherapy or radiation therapy. There were 35 male patients and 16 female patients, with ages ranging from 34 to 76 yr (mean, 60.2 yr). Clinicopathologic data obtained included histological type of gastric cancer, differentiation, lymph node metastasis, size of tumor, and post-operative TNM staging. The pathologic findings were determined according to guidelines established by the Japanese Society Committee on Histological Classification of Gastric Cancer (17). The TNM staging was determined based on criteria of the American Joint Committee on Cancer (AJCC) (18). The cases included 38 tubular adenocarcinomas and 13 signet ring cell carcinomas. Of the 38 cases diagnosed as tubular adenocarcinoma, 2 were well differentiated, 18 were moderately differentiated, and 18 were poorly differentiated adenocarcinomas.

In situ hybridization for angiopoietin 1, 2 and Tie2

Tissue detection of the mRNAs for human Ang-1, Ang-2, and Tie2 was performed using in situ hybridization. Paraffin-embedded sections and digoxigenin-labeled sense and anti-sense RNA probes were used. The human Ang-1, Ang-2, and Tie2 RNA probes were generated from linearized pBluescript II KS+ plasmid (Stratagene, La Jolla, CA, U.S.A.), which contains an Hind III-Eco RI fragment corresponding to nucleotides 396 through 809 of the human angiopoietin-1 cDNA, a Xho I-Bam HI fragment corresponding to nucleotides 638 through 920 of the human angiopoietin-2 cDNA, and an Hind III-Xba I fragment corresponding to nucleotides 511 through 668 of the human Tie2 cDNA, respectively. Digoxigenin-labeled RNAs were synthesized using a DIG RNA Labeling kit (Boehringer Mannheim, Indianapolis, IN, U.S.A.). Sections were preincubated in mRNA in situ hybridization solution (DAKO, Carpinteria, CA, U.S.A.). Hybridization was carried out overnight at 55℃ in a humidified chamber. The concentration of the hybridization mixture was 0.5 ng of RNA probe per 1 µL of the mRNA in situ hybridization solution used for the prehybridization step. Posthybridization washes were performed at 53℃ in 0.1× stringent wash solution (DAKO, Carpinteria, CA, U.S.A.). After blocking nonspecific protein binding with Protein Block Serum-Free (DAKO), the mRNA signals were detected using anti-digoxigenin/alkaline phosphatase (DIG/AP) antibody and bromochloroindolyl phosphate/nitroblue tetrazolium (BCIP/NBT) chromogen substrate (DAKO). Sense probes were used as negative controls, and staining evaluation was performed under the same conditions.
The extent of angiopoietin and Tie2 staining was recorded using a grading system based on the percentage of cancer cells positively stained: grade 0 = 0-10% of cells; grade 1 = 11-70% of cells; grade 2 = >70% of cells positively stained. When the positive cells were more than 10%, the specimen was regarded as positive.

Immunohistochemistry

For immunohistochemical staining, the immunoperoxidase method was used with the streptavidin-biotinylated horseradish peroxidase complex (DAKO). Four µm thick sections were cut from the formalin-fixed and paraffin-embedded tissue blocks. For angiopoietin 1 and angiopoietin 2 immunostaining, sections were treated with target retrieval solution (DAKO) for 20 min at 97℃ and then incubated in methanol containing 0.3% hydrogen peroxide at room temperature for 20 min to block endogenous peroxidase. Subsequently, sections were incubated with Protein Block Serum-Free (DAKO) at room temperature for 10 min and were then incubated for 2 hr at room temperature with anti-factor VIII-related antigen antibody (DAKO), which stains only endothelial cells, or overnight at 4℃ with anti-angiopoietin 1, 2 or Tie2 (Chemicon International, Temecula, CA, U.S.A.) primary antibodies. After washing, the sections were incubated with a biotin-conjugated secondary antibody at room temperature for 30 min and finally with peroxidase-conjugated streptavidin at room temperature for 30 min. Peroxidase activity was detected with the enzyme substrate 3-amino-9-ethylcarbazole. Sections treated the same way as described above, except that they were incubated with Tris-buffered saline instead of the primary antibody, served as the negative controls.

Determination of microvessel density

Sections stained for factor VIII-related antigen, which visualizes endothelial cells, were used for determination of microvessel density (MVD). Sections were screened under ×40 magnification to identify the areas with the highest vascular density within the tumor. Microvessels were counted in 4 areas under ×200 magnification. Any single stained cell or cluster of endothelial cells that was clearly separated from adjacent microvessels, tumor cells, and other connective tissue elements was considered a vessel.

Statistical analysis

The relationship between expression of Ang-1, Ang-2, and Tie2 mRNA and microvessel density was analyzed using Student's t-test. Associations between the expression of Ang-1, Ang-2, and Tie2 mRNA and clinicopathologic factors were tested by the chi-square test. The following clinicopathologic factors were correlated with angiogenic factor expression: age, sex, differentiation of cancer cells (differentiated: well and moderately differentiated carcinomas vs. undifferentiated: poorly differentiated and signet ring cell carcinomas), tumor depth (early gastric cancer [Tis, carcinoma in situ; T1, tumor invades lamina propria or submucosa] vs. advanced gastric cancer [T2, tumor invades the muscularis propria or the subserosa; T3, tumor penetrates the serosa without invading adjacent structures; T4, tumor invades adjacent structures]), lymph node metastasis, size of tumor (<2 cm vs. ≥2 cm) (19), and post-operative TNM staging (I+II vs. III+IV). A p-value of less than 0.05 was considered significant.

RESULTS

Angiopoietin-1, Angiopoietin-2 and Tie2 expression and localization

Ang-1 mRNA was expressed in 30 of 51 specimens (58%); 9 tumors were grade 1 (11-70% of cells positively stained), and 21 were grade 2 (>70% of cells positively stained).
Ang-2 mRNA was expressed in 25 of 51 specimens (49%); 8 tumors were grade 1, and 17 were grade 2. Tie2 mRNA was expressed in 10 of 51 specimens (19%); 7 were grade 1, and 3 were grade 2. All ten Tie2 mRNA-positive specimens expressed Ang-1 mRNA, Ang-2 mRNA, or both. Ang-1 and Ang-2 mRNAs were mainly expressed in cancer cells as strong cytoplasmic staining (Fig. 1A, B). No or minimal staining was observed in normal and metaplastic gastric mucosal cells (Fig. 1A). In addition to the strong staining present in carcinoma cells, smooth muscle cells of large vessels, occasional stromal cells, and endothelial cells demonstrated positive staining for Ang-1 mRNA (Fig. 1C). Occasionally, endothelial cells of blood vessels expressed Ang-2 mRNA. Tie2 mRNA was mainly expressed in infiltrating cancer cells of the undifferentiated group and in endothelial cells (Fig. 1D, E). Expression of Tie2 mRNA was predominantly confined to the T2-4 classification and to carcinomas of the undifferentiated group. No specific staining was present when the sense probes were used. We selected 10, 10, and 5 representative gastric carcinoma specimens with strong Ang-1, Ang-2, and Tie2 mRNA expression, respectively, and performed immunostaining for Ang-1, Ang-2, and Tie2 proteins in order to compare the localization of the proteins and their mRNAs. Immunostaining with polyclonal antibodies specific for human Ang-1, Ang-2, and Tie2 showed that the localization of these proteins and of the respective Ang-1, Ang-2, and Tie2 mRNAs was very similar in all specimens (Fig. 1F-H).

Statistical results

The microvessel counts in gastric cancer specimens ranged from 5 to 71, with a mean value of 27.7 (standard deviation, 16.3). Ang-1, Ang-2, and Tie2 mRNA expression significantly correlated with the MVD. Table 1 shows the correlation between MVD and Ang-1, Ang-2, and Tie2 mRNA expression. Large gastric cancers (≥2 cm) had a significantly higher positive rate of Ang-1, Ang-2, and Tie2 mRNA expression than small (<2 cm) ones (Table 2). The Ang-1 and Tie2 positive rates were also higher in T2-4 cancers than in Tis and T1 cancers. A strong correlation was found between Ang-1 and Ang-2, and Tie2 mRNA expression (p=0.002 and p<0.001, respectively). There was no close correlation between the expression of these angiogenic factors and sex, age, tumor stage, histologic type of cancer, or lymph node metastasis. Although there was a tendency toward higher Tie2 mRNA expression in carcinomas of the undifferentiated group compared with carcinomas of the differentiated group, this correlation was not statistically significant (p=0.165).

Our study showed that the proportion of gastric cancers expressing Ang-1, Ang-2, and Tie2 is significantly higher in advanced cancers and in larger tumors. Furthermore, expression of these angiogenic factors significantly correlated with tumor MVD. Our findings are consistent with the result of Etoh et al. (28), who reported that Ang-2 mRNA levels correlated with more advanced stages and more frequent vascular involvement in gastric cancer. In other types of cancer, for example, brain, liver, and lung cancer, high expression of Ang-1 and Ang-2 correlated positively with tumor angiogenesis and tumor growth (22, 24, 25, 27, 29). However, this issue remains somewhat controversial. Hayes et al. reported that in breast cancer, overexpression of Ang-1 did not enhance tumor growth (20). Another study showed a significant reduction in expression of angiopoietins in breast cancers compared with that of normal breast tissue (26).
Our present data support the contention that angiopoietins and their receptor Tie2 play an important role in gastric cancer angiogenesis and cancer growth. While the role of the Tie2 receptor and angiopoietins in developmental angiogenesis has been intensively studied, little is known about their expression and function in malignant cells. Although the precise function of Tie2 expressed in tumor cells remains unclear, our finding of Tie2 expression in gastric cancer cells is particularly interesting in light of recent observations that Tie2 can be expressed by certain types of tumor cells, including hematopoietic cells, prostate cancer cells, and giant cell tumors of tendon sheath (14-16, 30). Certain angiogenic factors, such as FGF-1 and VEGF, have receptors distributed not only in tumor cells but also in surrounding stromal cells and endothelial cells of vessels. This distribution suggests possible paracrine or autocrine regulation of tumor growth by angiogenic factors (10, 31). In our present study, some specimens showed that cancer cells, as well as endothelial cells, express Tie2. The expression of Tie2 was significantly associated with large tumors, advanced gastric cancer, and increased MVD. Moreover, a strong correlation was found between Ang-1, Ang-2, and Tie2 mRNA expression. Emerging evidence suggests that the autocrine activity of VEGF could be important for tumor cell survival and growth (10, 32, 33). Recently, Soker et al. (32) reported that the expression of VEGF receptors by prostate tumor cells correlates with progression to a more malignant phenotype and with increased chemotactic migration of FB2 prostate tumor cells, and suggested an autocrine signaling loop involving VEGF and its receptor. Our findings are consistent with the recent paper of Nakayama et al. (34), who reported that Tie receptors and angiopoietins were highly expressed in human gastric adenocarcinoma cells and that the Tie-Ang receptor-ligand complex is one of the factors involved in the progression of human gastric adenocarcinoma. In our study, we found that Tie2 expression was mainly confined to advanced, undifferentiated carcinomas. While a functional role for Tie2 receptor expression in tumor cells has not been reported, our observation suggests that the Tie2 receptor may be related to tumor progression or dedifferentiation of gastric cancer cells. Further analysis of Tie2 expression by cancer cells is required to determine its mechanism of action and whether Tie2 has an important role in gastric cancer progression. In summary, our findings indicate that expression of Ang-1 and Ang-2 is associated with, and most likely related to, increased angiogenesis and tumor growth in human gastric cancers. The expression of both angiopoietins and their Tie2 receptor in gastric cancer cells suggests possible autocrine/paracrine regulation of gastric cancer cell growth and may be involved in the emergence of an aggressive phenotype during gastric cancer progression.
The Role of Institutional Quality in Health Expenditure-Labor Force Participation Nexus in Africa

The study investigated the role of institutional quality in the relationship between health expenditure and labor force participation (LFP) in Africa, taking into consideration two forms of health expenditure (government health expenditure (GHE) and out-of-pocket health expenditure (OOPHE)) and the gender labor force participation dichotomy. We employed data from 39 African countries for the period between 2000 and 2018, using Panel Fixed Effects with Driscoll and Kraay standard errors and a two-step system Generalized Method of Moments (GMM). The results revealed that GHE yields an increasing effect on total, female, and male LFP. OOPHE, in most cases, leads to a decline in LFP. Institutional quality was found to be detrimental to LFP. The magnitude of the positive effect of GHE on LFP is reduced by the interaction of institutional quality with GHE. In conclusion, we advocate for the improvement of institutional apparatuses across African countries.

Introduction

In a bid to achieve Universal Health Coverage and to pursue the health goals of the Sustainable Development Goals (SDGs), many health policymakers in Africa have seen health expenditure as one of the germane components of health outcomes. At the 2001 Abuja Declaration, African leaders agreed to allocate at least 15% of their yearly budget to the health sector to improve, promote, and foster quality healthcare in their countries, but many of these countries have defaulted on this commitment. For instance, average total health spending ranged from 5% to 6% of Gross Domestic Product (GDP) between the years 2000 and 2015. However, per capita total health expenditure increased from $150 to $292 based on 2015 purchasing power parity (PPP), with variations across different countries in Africa. Available statistics show that, on average, per capita health expenditure in low-income African countries stood at $99, ranging from $23 in the Central African Republic to $256 in Sierra Leone. Middle-income African countries recorded a mean per capita health expenditure of $298, ranging from $147 in Djibouti to $774 in Tunisia. Upper-middle-income African countries had an average per capita health expenditure of $914, with a minimum of $481 for Gabon and a maximum of $1,100 for Mauritius (World Bank Development Indicators, 2020). Unlike the rest of the world, where out-of-pocket payments average 22% of total health expenditure, African countries are over-reliant on out-of-pocket health expenditure (hereafter, OOPHE). The average OOPHE in Africa stood at 36% of total health expenditure. This phenomenon is born out of the paucity of government health facilities, which has denied the majority of African citizens access to quality healthcare. The obvious consequence is lower health outcomes. This situation may deprive the majority of the ability to participate in the labour market, because it takes a physically healthy individual to participate energetically in the labour market. Grossman (1972) and Bloom and Canning (2000; 2003) asserted that being healthy will not only benefit the individual in non-labour market activities but will affect the entire economy through labour market activities.
Thus, the combined effect of low government spending on health and heavy dependence on out-of-pocket payments in Africa has played an important role in the health outcomes of citizens and affects the participation of labour in productive activities (Osundina, 2020). Increasing labour force participation can help to achieve some of the SDGs, especially in the areas of decent jobs, poverty, inequality, and even improved health conditions. It has been argued that an increase in labour force participation would generate income that would enable people to have better access to healthcare. In fact, Iregui-Bohórquez et al. (2016) observed that people who participate in the labour force usually report better or sound health. A cursory look at labour force participation data, particularly in sub-Saharan Africa, shows that labour force participation has remained high but steadily declined over time until recently. Besides this, there is uneven participation in the labour force across regions and gender. The average labour force participation rate between 1990 and 2019 stood at 69.9%, which is greater than the world average within the same period (68.5%). However, there is a persistent gender dichotomy in labour force participation in the region: while male labour force participation stood at 76.3%, that of their female counterparts stood at 63.6%. A similar trend is observed for the youth labour force participation rate. The total youth labour force participation rate stood at 50.8%, and the gender gap in youth labour force participation stood at 5.5%, as the male and female youth labour force participation rates were 53.6% and 48.1%, respectively. The high rate of labour force participation in SSA, especially among the 15-64 age bracket, has been attributed to many factors. One of these factors, according to the ILO, is the presence of a working-age population striving to survive on the limited opportunities offered by their economies (ILO, 2018). Many African countries are characterised by weak institutions, which manifest in the form of a high rate of corruption, lack of accountability and transparency, political and social violence, neglect of or disregard for the rule of law, and absence of government effectiveness. Of particular concern in most African countries is the daily occurrence of corruption, especially among government officials. The latest report by Transparency International ranked most African countries high in the Corruption Perceptions Index. Even Nigeria, the so-called biggest economy in Africa, ranked 146 out of 180 countries in 2020. In fact, most of the resource-endowed countries are characterised by a high rate of corruption. In light of the above, we examine the nexus between health expenditure and labour force participation, taking into consideration the types of health expenditure (government health expenditure (hereafter GHE) and OOPHE) and total and gender labour force participation for adults in the 15-64 age bracket. We also investigate the mediating role of institutional quality in the health expenditure and labour force participation nexus. Investigating the mediating role of institutional quality is crucial because the quality of institutions determines a lot of economic outcomes, including labour force participation (Acemoglu et al., 2005; Acemoglu and Robinson, 2008; Acemoglu, 2010; Agovino et al., 2019).
There are ample studies that have examined health expenditure and labour force participation (Farag et al., 2013; Novignon et al., 2015; Boachie and Ramu, 2016; Rauf et al., 2018). Also, some studies have examined the relationship between institutional quality and health expenditure on the one hand, and institutional quality and labour force participation on the other (Su et al., 2006; Cooray and Dzhumashev, 2018). However, to the best of our knowledge, we have not come across any study that has examined the role of institutional quality in the nexus between health expenditure and labour force participation, especially in Africa. This is the gap this study fills. We implement our study following a four-step estimation procedure. First, we estimate the relationship between health expenditure and labour force participation. The essence of this is to isolate the effect of health expenditure on labour force participation; this is important because if we include the other variables ab initio, the impact of health expenditure on labour force participation may be crowded out. Second, we introduce the institutional quality variable to ascertain the impact of institutional quality on labour force participation. Third, we add an interactive term, constructed as the product of health expenditure and institutional quality, to determine the mediating role of institutional quality in the health expenditure and labour force participation nexus. Fourth, we control for other variables that could serve as determinants of labour force participation. These variables were selected based on a priori expectations, and they include GDP per capita growth rate, life expectancy at birth, secondary school enrolment, infant mortality rate, total fertility rate, and trade openness. We deploy two estimation techniques: panel fixed effects with Driscoll and Kraay (1998) standard errors and the two-step system Generalised Method of Moments (GMM). Driscoll and Kraay standard errors in conjunction with panel fixed effects are used to address heterogeneity and autocorrelation problems, while the two-step system GMM is used to address the endogeneity problem in the nexus between health expenditure and labour force participation. A cursory look at our findings reveals that government health expenditure leads to an increment in labour force participation, while out-of-pocket health expenditure leads to a decline in labour force participation. Institutional quality appears to be detrimental to labour force participation. The interaction of institutional quality with government health expenditure reduces the magnitude of the positive effect of government health expenditure on labour force participation. Given this introduction, we proceed with the rest of the study as follows: section 2 reviews the existing studies; section 3 focuses on data sources and some stylised facts on labour force participation, health expenditure, and quality of institutions; section 4 presents the methodology; the results are presented in section 5, while section 6 concludes with policy implications.

Literature review

In the literature, health expenditure serves as one of the health inputs used to produce good health as an output, as theorized through human capital theory. The role of human capital development has been theorized at the macro and micro levels to show its importance for individuals, households, and the economy as a whole.
At the macro level, the major discussion is centred around the channels through which health affects economic growth, while at the micro level, the central highlight has been how health inputs affect an individual's or household's health (health outcomes) and participation in both market and non-market activities. The modified neoclassical growth theory of Romer (1990), which emphasizes the role of human capital in creating new ideas for improved growth, states that a higher level of human capital will boost new technological development and spur growth in the long run. It should be noted that education or health is usually used to proxy human capital, as a healthier individual has the opportunity to attend school and be ready to supply the labour inputs that yield improvements in growth. Hence, Romer (1990) and Barro (1991) emphasized that health is among the most important factors determining labour force participation. According to Becker (1962; 1964; 2009), human capital includes not only education, training, and skills but also health and other values embodied in individuals, which allow them to be more productive in an economy. The human capital theory developed by Grossman (1972) assumes that an individual has an initial level of health that depreciates/deteriorates with time (age) and can be improved through investment, especially in health inputs such as medical care (which incorporates health expenditure), education, exercise, et cetera, to produce healthy time. Healthy time allows individuals to participate in non-labour and labour activities over their lifetime. Based on this theory, a healthy individual may decide either to use his/her healthy time to participate in the labour market (supply labour units) and reinvest to gain a healthy life, or to use it as leisure time (non-labour activities) to derive utility as desired.

Methodology varies across related studies of this nature due to differences in data proxies, data availability, and cross-country versus single-country analysis. There is no unique methodology used by previous researchers on this topic, because much of the literature dwells on health expenditure and economic growth, labour force participation and economic growth, or labour force participation and health (see Piabuo and Tieguhong, 2017). Al-Jebory (2014), Ayanwu and Erhijakpor (2007), Farag et al. (2013), Piabuo and Tieguhong (2017), and Thu Ha (2018) used fixed, dynamic, and random effects panel ordinary least squares for cross-country analysis to account for measurement error and autocorrelation. In the same vein, the Generalized Method of Moments (GMM) was used by Isiaka (2020), Novignon et al. (2015), and Umoru and Yaqub (2015) to address the autocorrelation arising from the presence of the lagged dependent variable among the explanatory variables, the individual effects characterizing heterogeneity among individuals, and the bias of the ordinary least squares (OLS) estimator. Other methods used in related single-country and baseline analyses are OLS, nonlinear least squares, the Autoregressive Distributed Lag (ARDL) model, and the standard multinomial logit. In addition, two-stage least squares (2SLS), three-stage least squares (3SLS), and the Newey-West estimator have been employed in studies by Ayanwu and Erhijakpor (2007), Boachie and Ramu (2015), and Anochiwa et al. (2019) to control for endogeneity and reverse causality and to check the robustness of the estimator. On the empirical front, there is a paucity of research on health expenditure and labour force participation.
However, studies have drawn possible inferences on the relationship between them using other channels, such as the link between health expenditure and growth, labour force participation and economic growth, and health expenditure and health outcomes. Al-Jebory (2014) examined the effects of health expenditures on population age distribution and labour participation rates among 84 low- and high-income countries. He established that in high-income countries, health expenditure has a strong influence on labour force participation rates, while in low-income countries it has a weak influence. Similarly, Powell and Seabury (2013) confirmed that medical care spending can impact health and that health affects labour outcomes. A study by Mushtaq et al. (2013) found evidence that health expenditure has a positive and significant impact on the labour force participation rate in the short run, but this result disappears in the long run. In addition, a recent study by Rauf et al. (2018) affirmed that labour force participation is increased by increases in health expenditure, secondary school enrolment, and investment, both in the short and long run. Regarding the other channels through which labour force participation or health expenditure is affected by health or economic growth (or vice versa), a study from Australia by Laplagne et al. (2007) averred that better health and education can result in substantially greater labour force participation for those affected. Similarly, Novignon et al. (2015) showed that health status relates positively to labour force participation, and the relationship was significant for total and female labour force participation. Also, Isiaka (2020) confirmed that government spending increases the labour force participation rate in the West African Monetary Zone. On the health expenditure side, Ayanwu and Erhijakpor (2007) found that total health expenditures are certainly an important contributor to health outcomes. Farag et al. (2013) also confirmed that government health spending has a significant effect on improving health outcomes and that the size of the coefficient depends on the level of good governance achieved by the country. In addition, Umoru and Yaqub (2015) suggested from their results that health capital investment enhances the productivity of the labour force. Similarly, Boachie and Ramu (2015) found evidence that falling health outcomes in Ghana have been influenced by public health spending, among other factors. A more recent study by Anochiwa et al. (2019) also established that health expenditure is significant in determining health outcomes but has no significant relationship with economic growth. Contrary to this submission, Thu Ha's (2018) investigation suggested that countries with higher health expenditure and labour force participation rates are expected to have higher GDP. At the frontier of knowledge on the role of institutions in health expenditure and labour market participation, researchers have found the mediating role of institutional quality to be positive and highly significant. Novignon (2015) has shown that high corruption and poor public sector institutions reduce health expenditure efficiency. Also, Makuta and O'Hare (2015) found that the impact of public spending is mediated by the quality of governance, with a higher impact on health outcomes in countries with higher quality of governance and a lower impact in countries with lower quality of governance.
It is also the case in Bousmah et al. (2016) and Dhrifi (2020) that institutional quality plays an important and significant role in health expenditure. Massimiliano et al. (2019) identified a positive effect of institutional quality on local labour market participation for both men and women in Italy, although it does not affect the participation gap. Observing that most of the empirical literature revolves around the relationship between health expenditure and labour force participation to the exclusion of the quality of institutions, this study contributes to knowledge by directly showing the empirical relationship between health expenditure and labour force participation as well as emphasizing the imperative role of institutional quality.

Data Sources and Description

This study aims to investigate the role of institutional quality in the relationship between health expenditure and labour force participation. To achieve this aim, we utilise data from 39 African countries covering the period from 2000 to 2018 (Algeria, Angola, Benin, Botswana, Burkina Faso, Burundi, Cabo Verde, Cameroon, Central African Republic, Chad, Comoros, Congo Democratic Republic, Congo Republic, Egypt, Equatorial Guinea, Eritrea, Ethiopia, Gambia, Ghana, Guinea, Kenya, Lesotho, Libya, Madagascar, Malawi, Mali, Mauritius, Morocco, Mozambique, Niger, Nigeria, Rwanda, Senegal, South Africa, Sudan, Togo and Tunisia). We chose these countries due to the availability of relevant data. For the dependent variable, we use labour force participation for adults in the 15-64 age bracket, in total and across gender (female and male). For the independent variables, we use domestic government health expenditure as a percentage of current health expenditure for the main analysis; for the robustness analysis, we use out-of-pocket expenditure as a percentage of current health expenditure. We control for other variables based on standard models of labour force participation and health expenditure. These variables include GDP per capita growth rate, life expectancy at birth, female life expectancy at birth, male life expectancy at birth, secondary school enrolment, female secondary school enrolment, male secondary school enrolment, infant mortality rate, female infant mortality rate, male infant mortality rate, total fertility rate, and trade openness. All these variables, including the labour force participation and health expenditure data, are obtained from the World Development Indicators (WDI). In line with the aim of this study, we construct an institutional quality index from six governance indicators: control of corruption, government effectiveness, political stability, regulatory quality, rule of law, and voice and accountability. Principal Component Analysis is used to compute the institutional quality index. Following Aluko and Ibrahim (2020), Alagidede et al. (2020), Ibrahim and Vo (2020), and Raifu et al. (2021), the computed institutional quality index is winsorised to remove possible outliers, and the winsorised index is then normalised so that its value ranges between 0 and 1, where 0 means poor institutional quality and 1 means good institutional quality. The governance variables are selected from the Worldwide Governance Indicators of the World Bank.
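To make the index construction concrete, the following is a minimal Python sketch of the pipeline just described: standardize the six indicators, take the first principal component, winsorize, then min-max scale to [0, 1]. It is not the authors' code; the column names are hypothetical, and the 1%/99% winsorization cutoffs are an assumption, since the paper does not state them.

```python
import numpy as np
import pandas as pd
from scipy.stats.mstats import winsorize
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical column names for the six World Bank governance indicators
WGI_COLS = ["control_of_corruption", "government_effectiveness",
            "political_stability", "regulatory_quality",
            "rule_of_law", "voice_and_accountability"]

def institutional_quality_index(df: pd.DataFrame) -> pd.Series:
    z = StandardScaler().fit_transform(df[WGI_COLS])     # standardize each indicator
    pc1 = PCA(n_components=1).fit_transform(z).ravel()   # first principal component
    pc1 = winsorize(pc1, limits=(0.01, 0.01))            # trim outliers (assumed cutoffs)
    pc1 = np.asarray(pc1, dtype=float)
    # min-max scale so 0 = poorest and 1 = best institutional quality
    scaled = (pc1 - pc1.min()) / (pc1.max() - pc1.min())
    return pd.Series(scaled, index=df.index, name="inst_quality")
```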
The summary statistics of the variables are presented in Table 1. As shown in the Table, the average values of total, male, and female labour force participation are 66.1%, 75.1%, and 57.2%, respectively. This suggests that labour force participation has been relatively high and increasing over time in Africa, with its mean value above average over the period considered. Government health expenditure (GHE) ranged from 4.1% to 77.5% of current health expenditure, with an average of 33.5%, implying a relatively low contribution of government health expenditure. Out-of-pocket health expenditure (OOPHE) averaged 42.5%, with a lowest value of 2.99% and a highest value of 84.2%. The institutional quality index ranged from 0 to 1 with an average value of 0.49, implying that the level of institutional quality in Africa is still below average; there is, however, disparity in institutional quality across the sampled African countries, with some countries experiencing relatively low institutional quality while others recorded relatively high institutional quality. The average GDP per capita growth rate is 2.16%, with a range of -62.4% to 121.8%. The gap between the minimum and maximum values of the GDP growth rate, as well as its standard deviation of 7.01%, reveals the wide disparity in GDP growth rates across Africa. The average life expectancy at birth (LEAB) was 59.5 years; female LEAB averaged 61.3 years and male LEAB averaged 57.8 years. In addition, secondary school enrolment (SSE) averaged 48.7%, with female SSE averaging 46.4% and male SSE averaging 50.9%. The infant mortality rate (IMR) ranged from 10.3 to 121.2 per 1,000 live births, with an average of 56.3. Male infant mortality (MIMR) ranged from 11.3 to 130.8 with an average value of 61.3, while female infant mortality (FIMR) ranged from 9.1 to 111.1 with a mean value of 51. Judging by the measures of dispersion of LEAB, SSE, and IMR, there have been improvements in these indicators over time across Africa. The total fertility rate ranged from 1.36 to 7.68 births per woman, with an average value of 4.66. Trade openness has minimum and maximum values of 17.93% and 193.48%, respectively, with an average of 70.93%. Relative to the Abuja Declaration benchmark, many of the countries in the region allocated less than 15% to health. For OOPHE, which requires households to pay a healthcare provider directly, Comoros recorded the highest percentage (77.95%), followed by Equatorial Guinea, while Botswana recorded the lowest value (6.17%). The percentage shares of the two forms of health expenditure in total health expenditure used for the graph fluctuate in opposite directions: a country where the share of GHE exceeds 50% of total health expenditure is likely to lessen the burden of OOPHE payments on households. Figure 2 shows labour force participation in Africa between the ages of 15 and 64 years. As shown in the Figure, the highest average total labour participation rate between 2000 and 2018 was recorded in Madagascar (88.52%). This country also exhibits near gender parity in labour force participation, with an average gap between male and female participation rates of just 2.01 percentage points. Rwanda also recorded an infinitesimal margin between average female and male labour participation rates. In most African countries, the labour market is more biased towards males.
The gap is more obvious in countries where religion and culture assign males the role of financially fending for their households. For instance, in Algeria, the average labour participation rate for males is 75.05%, while females account for just 15.59%. Similarly, Egypt recorded an average female labour participation rate of 23.08%, while males account for over 70%.

Institutional Quality

One of the prominent indicators of the existence of good or bad institutions in a given country is the level of corruption. In a highly corrupt country, the probability that other institutional quality indicators will be at a low ebb is very high. A highly corrupt country is likely to be characterised by disregard for the rule of law, lack of transparency and accountability among officeholders, and constant socio-political crises. Many countries recorded institutional quality below the 0.5 threshold. These countries include, among others, Algeria, Burundi, Cameroon, Central African Republic, Chad, Comoros, Democratic Republic of Congo, Congo, Equatorial Guinea, Eritrea, Ethiopia, Guinea, Libya, Nigeria, Sudan and Togo. Figure 4 presents a scatter plot showing the relationship between the female labour participation rate (15-64) and domestic general GHE among the countries under consideration. The scatter plot indicates a negative linear relationship between the two variables. This, however, does not suggest that GHE does not encourage labour force participation; rather, it may mean that governments in Africa, like their counterparts in other developing countries, spend more on health to ensure that their citizens participate in the labour force (Devarajan, Swaroop and Zou, 1996). Moreover, a cursory look at the figure shows that a few countries cluster along the fitted line, indicating that female labour force participation and domestic general GHE are negatively and linearly correlated in countries such as Congo, Mali, South Africa, Libya, Burkina Faso, Chad, Equatorial Guinea, Botswana, and so on.

Methodology

Modelling the health expenditure (or health status) and labour force participation nexus has been controversial due to the issue of endogeneity bias (Cai and Kalb, 2006). One strand of argument posits that health expenditure, being a health input, affects labour force participation. The amount of expenditure, particularly by the government, determines the provision and availability of healthcare facilities, which in turn determines access to healthcare and improvement in the health status of citizens. When citizens' health status improves due to access to available healthcare facilities, they tend to participate more in the labour force and improve their productivity. Thus, health expenditure is assumed to have a positive effect on labour force participation, judging by human capital theory (Laplagne, Glover and Shomos, 2007). Conversely, people with poor health tend not to participate in the labour force. Another argument posits that labour force participation affects health status or expenditure (Waghorn and Lloyd, 2005). It has been argued that participating in some jobs may have a detrimental effect on the health of the workforce: some jobs lead to stress and create mental health problems for workers. On the other hand, ill health may motivate an individual to participate in the labour force in order to raise income to take care of his health condition (Stern, 1989; Laplagne et al., 2007).
In this study, we are particularly interested in addressing two issues in modelling the mediating role of institutional quality in the health expenditure-labour force participation nexus in Africa. The first is spatial or cross-sectional dependence among countries that share similar characteristics or belong to coalitions. Using conventional panel OLS as the estimation method would lead to inconsistent estimated standard errors (Driscoll and Kraay, 1998). Given this, Driscoll and Kraay (1998) developed a nonparametric covariance estimation method that yields robust standard errors; the method is also useful when the model is prone to heteroscedasticity and autocorrelation problems (see Hoechle, 2007). Thus, we adopt this method under the fixed effects estimation framework. The second issue addressed is the endogeneity between health expenditure and labour force participation, as argued above. To address this endogeneity, early studies used two-stage least squares estimation (see Stern, 1989; Cai and Kalb, 2006; Cai, 2010). In this study, we follow Novignon et al. (2015) by employing the dynamic system Generalised Method of Moments (GMM) developed by Arellano and Bover (1995) and Blundell and Bond (1998). Following the argument above, we proceed to the model specification. Assuming a simple regression in which labour force participation depends on health expenditure and other control variables, we specify the linear fixed effects regression as follows:

LP_it = α + β HE_it + γ′X_it + U_i + V_it (1)

where LP is the labour force participation rate, HE designates health expenditure, X is a set of control variables used as explanatory variables aside from health expenditure, α is a constant, U is a country-specific effect, and V is the error term, assumed to be normally distributed with zero mean and constant variance. We consider labour force participation for adults (15-64) and youths (14-25) across gender, male and female. We also use two forms of health expenditure: domestic GHE as a percentage of current health expenditure and OOPHE as a percentage of current health expenditure. Domestic GHE is used for the main analysis, while OOPHE is used for the robustness analysis. The other explanatory variables include GDP per capita growth rate, life expectancy at birth (total, female and male), gross secondary school enrolment (total, female and male), infant mortality rate (total, female and male), total fertility rate, and trade openness. These variables are included whether we are considering total, female, or male labour force participation for adults and youths. Considering the main aim of this study, we modify equation 1 to allow for the role of institutional quality as follows:

LP_it = α + β HE_it + δ INST_it + θ(INST_it × HE_it) + γ′X_it + U_i + V_it (2)

Other variables remain as previously defined. INST is institutional quality, and INST × HE is the interaction of institutional quality with health expenditure, which shows the indirect channel through which health expenditure affects labour force participation. Equations 1 and 2 are estimated using panel fixed effects with Driscoll and Kraay (1998) standard errors. However, to address the endogeneity problem, we use a dynamic two-step system GMM.
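As an illustration of the first estimator, the sketch below fits a specification in the spirit of equation (2) by panel fixed effects with Driscoll-Kraay standard errors, using the Python package linearmodels (a software choice of ours; the paper does not name its software). The variable names are hypothetical, and `df` is assumed to be a country-year panel with a (country, year) MultiIndex.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

def fit_fe_driscoll_kraay(df: pd.DataFrame):
    """Fixed effects estimate of LFP on GHE, institutional quality,
    their interaction, and controls, with Driscoll-Kraay errors."""
    df = df.copy()
    df["inst_x_he"] = df["inst"] * df["ghe"]  # interactive term INST x HE
    exog = df[["ghe", "inst", "inst_x_he",
               "gdppc_growth", "leab", "sse", "imr", "tfr", "topen"]]
    # entity_effects=True absorbs the country-specific effect U_i
    model = PanelOLS(df["lfp"], exog, entity_effects=True)
    # cov_type="kernel" requests the Driscoll-Kraay HAC covariance, robust to
    # heteroscedasticity, autocorrelation and cross-sectional dependence
    return model.fit(cov_type="kernel", kernel="bartlett")

# usage: res = fit_fe_driscoll_kraay(panel_df); print(res.summary)
```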
We begin the presentation of the two-step system GMM by incorporating the lag of the dependent variable into equation 2, with a slight modification, as follows:

LP_it = λ LP_i,t-1 + β HE_it + δ INST_it + θ(INST_it × HE_it) + γ′X_it + μ_i + τ_t + v_it (3)

Based on a priori expectations, λ should lie between 0 and 1. If λ is close to 0, labour force participation declines over time and does not persist into the future. However, if λ is close to 1, past and present labour force participation persists into the future, suggesting that countries with a high level of labour force participation continue to record increases in labour force participation in the future. Depending on the quality of institutions possessed by a country or group of countries, institutions could have a positive or negative effect on labour force participation. In a country characterised by poor institutional quality, especially a high rate of corrupt practices among government officials, money meant to finance the provision of healthcare facilities may be misappropriated or end up in the coffers of unscrupulous government officials. This could lead to insufficient healthcare provision and delivery, with a negative effect on the health status of the citizens and their labour force participation. The reverse is the case for a country or group of countries with high-quality institutions. θ is the coefficient of the interactive term of institutional quality and health expenditure (INST × HE). This coefficient shows whether institutional quality influences the relationship between health expenditure and labour force participation positively or negatively. Thus, when θ is positive and statistically significant, it implies that institutional quality and health expenditure are complementary in influencing labour force participation. Here, μ_i is an unobserved country-specific fixed effect, τ_t is the time effect, and v_it is the error term. According to Roodman (2009a, b), estimating equation 3 by OLS would suffer from two problems: identification and endogeneity. To overcome these problems, especially the problem of endogeneity, we use a two-step system GMM.
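To illustrate the instrumenting logic behind the difference/system GMM estimator (this is our sketch, not the authors' code; a production analysis would use a dedicated routine such as xtabond2 in Stata or pydynpd in Python), the snippet below builds Arellano-Bond-style instruments: lagged levels of LFP instrument the differenced lagged dependent variable, and the system estimator adds the levels equation instrumented by lagged first differences. Column names are hypothetical.

```python
import pandas as pd

def gmm_style_instruments(df: pd.DataFrame, max_lag: int = 3) -> pd.DataFrame:
    """Construct instruments for the first-differenced version of equation (3)."""
    df = df.sort_values(["country", "year"]).copy()
    by_country = df.groupby("country")["lfp"]
    df["d_lfp"] = by_country.diff()                            # Delta LFP_it
    df["d_lfp_l1"] = df.groupby("country")["d_lfp"].shift(1)   # Delta LFP_{i,t-1} (endogenous)
    for k in range(2, max_lag + 1):
        # LFP_{i,t-k} is uncorrelated with the differenced error Delta v_it,
        # so it is a valid instrument for Delta LFP_{i,t-1}
        df[f"lfp_lag{k}"] = by_country.shift(k)
    # The *system* GMM of Arellano-Bover/Blundell-Bond additionally keeps
    # equation (3) in levels, instrumented by lagged differences such as
    # Delta LFP_{i,t-1}; the two-step variant re-weights the moment conditions.
    return df
```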
Main Results: The Effect of Government Health Expenditure and Its Interaction with Institutional Quality on Labour Force Participation

The key findings of the study are presented and discussed in this section. The study estimated four different models to assess the impact of institutional quality and health expenditure on labour force participation (LFP) in Africa. The first model estimated the effect of health expenditure alone on LFP. The second model considered the effects of health expenditure and institutional quality on LFP. The third model included the interactive term (institutional quality multiplied by health expenditure) as one of the independent variables; hence, health expenditure, institutional quality, and the interactive term were the independent variables in this model. The fourth model added the control variables. As shown in Table 3, domestic GHE appears to have a positive effect on LFP (total, female, and male) when we regress labour force participation on domestic GHE alone, but the positive effect is not statistically significant. However, the introduction of institutional quality in the second model allows a positive and significant relationship to be observed between domestic GHE and LFP. Specifically, a percentage increase in GHE raises total and female LFP by 0.005% and 0.006%, respectively. This is similar to the findings of Al-Jebory (2014), Isiaka (2020), and Rauf et al. (2018), which revealed that health expenditure has a positive influence on the LFP rate. However, institutional quality has a negative influence on LFP, suggesting that poor institutional quality in Africa discourages LFP: an upward trend in poor institutional quality leads to a decline in total and female LFP by 0.006% and 0.012%, respectively. When institutional quality is interacted with GHE, the resulting finding reveals a positive and significant effect on total, female, and male LFP. This is in line with the findings of Dhrifi (2020) and Massimiliano et al. (2019). However, a closer look at the estimated coefficients across the models shows that the coefficients of the interaction are lower than the estimated coefficients of GHE alone. This means that poor institutional quality in Africa tends to reduce the positive effect of GHE on labour force participation. The impact of the control variables captured in the final model revealed that life expectancy at birth (LEAB) spurs total, female, and male LFP by 0.223%, 0.190%, and 0.192%, respectively. However, secondary school enrolment (SSE) tends to reduce total, female, and male LFP. The effect of the infant mortality rate (IMR) is not statistically significant for the total and female LFP rates; it is, however, positive and statistically significant for male LFP. The total fertility rate (TFR) is also found to positively influence female LFP, suggesting that an increase in the fertility rate could gear women up to participate in the labour force, as they have many children to feed. Trade openness is detrimental to female labour force participation. We now examine the consistency of the above results when we control for endogeneity in the relationship between GHE and LFP, as well as the other variables, using the two-step system GMM. As observed in Table 4, we obtain more consistent results from the GMM, which conform to economic expectations. However, the presentation of the results begins with the diagnostic tests. The number of groups is greater than the number of instruments used, supporting the validity of the instruments. Furthermore, the post-estimation findings revealed that the endogeneity problem had been properly addressed: the AR(2) statistics fail to reject the null hypothesis of no second-order autocorrelation, so our models do not suffer from second-order serial autocorrelation. Also, the Hansen/Sargan test of over-identification indicates that the instruments included are valid. The fourth model was found to be the most reliable. Another important result reported in Table 4 is that for the lag of the dependent variable (LFP): as shown in the Table, lagged LFP has a positive and significant effect on current LFP, indicating persistence in labour force participation.

Robustness Results: The Effect of Out-of-Pocket Health Expenditure and Its Interaction with Institutional Quality on Labour Force Participation

We further examine the robustness of our results by using out-of-pocket health expenditure (OPHE) as an alternative to domestic GHE. Table 5 presents the fixed effects regression results for the effects of OPHE and its interaction with institutional quality on LFP in Africa. When LFP is regressed against OPHE alone, OPHE is observed to have a positive and significant effect on female LFP only, suggesting that OPHE only encourages female LFP.
In fact, when institutional quality is introduced into the model, the effect of OPHE on total and male LFP becomes negative but not significant. However, when we control for other variables, the negative effect becomes statistically significant, suggesting that OPHE discourages total and male LFP. When institutional quality is introduced into model 1, institutional quality has a negative effect on all categories of LFP. However, the introduction of the interactive term changes the sign of the effect of institutional quality on LFP: the effect turns positive and, in most cases, statistically significant, especially for total and female LFP. The interactive term itself has a negative impact on LFP (total, female, and male). The effects of the control variables are also reported in Table 5. Evidence from the Table shows that a rise in life expectancy at birth increases total, female, and male LFP by 0.191%, 0.212%, and 0.132%, respectively. However, SSE leads to a decline in all categories of LFP. This is consistent with the a priori expectation that an increase in time spent in schooling reduces LFP, because citizens who are in school do not participate in the labour force; the longer these people stay in school, the greater the decline in LFP (Burk and Montes, 2018). The TFR is found to positively influence female LFP. Table 6 reports the system GMM results for the effect of OPHE and its interaction with institutional quality on LFP in Africa. The post-estimation tests show that the number of instruments does not exceed the number of groups, and the Sargan and Hansen tests of over-identification support the validity of the instruments. Also, the AR(2) result shows that the model does not suffer from second-order serial autocorrelation. The Wald test was also found to be significant. The one-period lag of the dependent variable is observed to have a positive and significant impact on current LFP. OPHE decreases total, female, and male LFP by 0.024%, 0.023%, and 0.022%, respectively. The negative effect remains unchanged when institutional quality and the interactive term (institutional quality and health expenditure) are introduced into the models. However, when we control for the other independent variables, OPHE increases female LFP by 0.013%. Institutional quality, when introduced, has a negative effect on all categories of LFP; however, when the control variables are included, the negative effect turns positive in all the models. The effect of the interactive term on LFP is negative and statistically significant, implying that poor institutional quality further worsens the nexus between OPHE and LFP in African countries. In the system GMM results, LEAB, GDP growth rate, and IMR reduce total and male LFP, while trade openness increases total and male LFP by 0.014% and 0.022%, respectively. More precisely, SSE has a positive and significant impact on female LFP, while the total fertility rate moves inversely with female LFP. Overall, OPHE does not have an improving impact on LFP. This underscores the importance of government health expenditure despite the increasing overdependence on OPHE in Africa.
Note: OPHE, INST, OPHE*INST, GDPPCGR, LEAB, SSE, IMR, TFR and TOPEN denote out-of-pocket health expenditure, institutional quality, the interaction of out-of-pocket health expenditure and institutional quality, GDP per capita growth rate, total, female and male life expectancy at birth, total, female and male secondary school enrolment, total, female and male infant mortality rate, total fertility rate and trade openness, respectively.

Conclusion and Policy Implications

In this study, we have examined the role of institutional quality in the relationship between health expenditure and labour force participation in 39 African countries over the period 2000 to 2018. We consider the impact of government health expenditure vis-à-vis out-of-pocket health expenditure on different categories of labour force participation, particularly total, female, and male labour force participation. Aside from this, we take into cognisance the role that the quality of institutions in Africa plays in the relationship between health expenditure and labour force participation. To implement our objectives, we employ two estimation techniques, namely: the Panel Fixed Effects estimation method with Driscoll and Kraay (1998) standard errors, and the two-step system GMM. While accounting for Driscoll and Kraay (1998) standard errors allows the Fixed Effects estimation to address heteroscedasticity and autocorrelation, the system GMM addresses the endogeneity characterising the modelling of the nexus between health expenditure and labour force participation. Our study yields interesting findings. The supremacy of government health expenditure over out-of-pocket health expenditure in spurring all categories of labour force participation is documented across all the estimation techniques employed. This underscores the indispensability of government investment in the health sector. Our findings can be explained from two perspectives. First, governments have the capability, in terms of resources, to invest in health infrastructure and equipment that would improve the health status of citizens and enable them to participate in the labour force; only healthy citizens can actively participate in the labour force. Second, the majority of citizens in developing countries are poor, even though out-of-pocket health expenditure is on the increase; hence, they only seek health services when a crucial need arises. In fact, many citizens in poor countries, such as many African countries, seek health services when it is almost too late. Consequently, their health expenditure may not spur their participation in the labour force as expected. We also document that the poor quality of institutions in Africa is detrimental to labour force participation, whether total, female, or male. This is not surprising considering the low level of institutional quality in many countries that make up the continent. Corruption is still rampant in many African countries, even among the strong and resource-endowed countries. The rule of law and the protection of human rights and property are still a mirage in many African countries despite many years of democratic government. The transition from one government to another is still marred by violence that results in political instability. The existence of corruption and political instability in any country has a high tendency to discourage labour force participation.
The appallingly low institutional quality manifests in the relationship between health expenditure and labour force participation. Specifically, we find that institutional quality moderates downwards the positive nexus between government health expenditure and all categories of labour force participation, suggesting that poor quality of institutions worsens the nexus between government health expenditure and labour force participation. Although the effects of the control variables (GDP per capita growth rate, life expectancy at birth (total, female and male), gross secondary school enrolment (total, female and male), infant mortality rate (total, female and male), total fertility rate, and trade openness) on labour force participation vary across models, we can affirm that the GDP per capita growth rate does not have any discernible impact on labour force participation. Life expectancy at birth positively influences labour force participation, whereas secondary school enrolment reduces it. The infant mortality rate positively affects labour force participation, and the total fertility rate has a positive effect on labour force participation. Trade openness, however, is detrimental only to female labour force participation. Two policy implications can be drawn from our findings. First, there is a need to increase health expenditure, particularly government health expenditure, so as to spur labour force participation on the African continent. Second, institutional quality on the continent needs to be improved to ensure that health expenditure is properly spent without being misappropriated by some groups of people.
HSPA12A unstabilizes CD147 to inhibit lactate export and migration in human renal cell carcinoma

Background: Metastasis accounts for 90% of cancer-associated mortality in patients with renal cell carcinoma (RCC). However, the clinical management of RCC metastasis is challenging. Lactate export is known to play an important role in cancer cell migration. This study investigated the role of heat shock protein A12A (HSPA12A) in RCC migration. Methods: HSPA12A expression was examined in 82 pairs of matched RCC tumors and corresponding normal kidney tissues from patients by immunoblotting and immunofluorescence analyses. The proliferation of RCC cells was analyzed using MTT and EdU incorporation assays. The migration of RCC cells was evaluated by wound healing and Transwell migration assays. Extracellular acidification was examined using Seahorse technology. Protein stability was determined following treatment with the protein synthesis inhibitor cycloheximide and the proteasome inhibitor MG132. Mass spectrometry, immunoprecipitation, and immunoblotting were employed to examine protein-protein interactions. Results: RCC tumors from patients showed downregulation of HSPA12A, which was associated with advanced tumor node metastasis stage. Intriguingly, overexpression of HSPA12A in RCC cells inhibited migration, whereas HSPA12A knockdown had the opposite effect. Lactate export, glycolysis rate, and CD147 protein abundance were also inhibited by HSPA12A overexpression but promoted by HSPA12A knockdown. An interaction of HSPA12A with the HRD1 ubiquitin E3 ligase was detected in RCC cells. Further studies demonstrated that CD147 ubiquitination and proteasomal degradation were promoted by HSPA12A overexpression but inhibited by HSPA12A knockdown. Notably, the HSPA12A overexpression-induced inhibition of lactate export and migration was abolished by CD147 overexpression. Conclusion: Human RCC shows downregulation of HSPA12A. Overexpression of HSPA12A in RCC cells destabilizes CD147 by increasing its ubiquitin-proteasome degradation, thereby inhibiting lactate export and glycolysis and ultimately suppressing RCC cell migration. Our results demonstrate that overexpression of HSPA12A might represent a viable strategy for managing RCC metastasis.

Introduction

Renal cell carcinoma (RCC) is one of the most frequently diagnosed cancers worldwide, and its incidence rates have been steadily increasing [1-3]. More than 10 histological and molecular subtypes of RCC have been identified, among which clear cell RCC (ccRCC) is the most common type, accounting for 80% of all cases [2, 4]. The prognosis of RCC is poor because approximately 25% of patients present with metastases at the time of diagnosis, and another 30-35% of patients who undergo resection for localized or locally invasive kidney cancer will develop potentially fatal metachronous distant metastases [5, 6]. Therefore, a comprehensive understanding of RCC metastasis is urgently needed to develop effective targeted therapies for reducing the risk of recurrence and death from metastatic disease. Cancer cell invasion and metastasis are complex processes that involve many genetic alterations and subsequent metabolic transitions [7]. Cluster of differentiation 147 (CD147) has been shown to play a critical role in metastasis [8, 9].
CD147, also known as basigin or extracellular matrix metalloproteinase inducer (EMMPRIN), is a multifunctional glycoprotein involved in various biological functions, including cell proliferation, survival, and invasion, and is expressed at high levels in a variety of human cancers [10-13]. The N-terminal domain of CD147 contains three glycosylation sites, so its molecular mass on immunoblots ranges from 27 kDa for the non-glycosylated form (NoG-CD147) to 32 kDa for the low-glycosylated form (LG-CD147) and 40-65 kDa for highly glycosylated CD147 (HG-CD147) [13,14]. HG-CD147 is the mature, active form that facilitates the translocation of the monocarboxylate transporters MCT1 and MCT4 to the membrane [9,13,15]. CD147 also promotes the expression of MCT1 and MCT4 [16,17]. MCT1 facilitates lactate uptake, whereas MCT4 favors lactate export, increasing the lactate content and acidification of the tumor microenvironment and thereby promoting cancer cell migration and invasion [9,18,19]. There is convincing evidence that CD147 increases the efficiency of lactate export through MCT4, thereby removing the feedback inhibition exerted by cellular lactate on glycolytic flux [9,20]. Thus, CD147 inhibition has been proposed as a potential therapeutic anti-cancer approach. However, the regulation of CD147 in cancer cells has not been fully elucidated.

Heat shock protein A12A (HSPA12A) is a novel but atypical member of the HSP70 family [21]. Hspa12a mRNA is expressed at high levels in the human and murine brain under normal conditions, whereas its expression is decreased in schizophrenia patients [21,22]. We recently showed that HSPA12A mediates a pro-survival pathway against cerebral ischemic injury and that it also promotes high-fat diet-induced non-alcoholic liver disease and obesity [23,24]. Besides its elevated expression in the brain, HSPA12A is highly expressed in the kidney [23,24], suggesting that it might play a role in the maintenance of renal homeostasis. However, the involvement of HSPA12A in renal disorders, including renal cancers, remains to be investigated.

In this study, we found that RCC tumors from patients showed downregulation of HSPA12A, which was associated with advanced tumor node metastasis (TNM) stage and Fuhrman grade. In loss- and gain-of-function experiments, HSPA12A overexpression inhibited RCC cell migration, whereas HSPA12A knockdown had the opposite effect. Molecular studies revealed that HSPA12A decreased CD147 protein stability by promoting its ubiquitin-proteasomal degradation, thereby inhibiting lactate export and glycolysis and ultimately suppressing RCC cell migration. These findings suggest that HSPA12A is a novel suppressor of RCC migration. Thus, HSPA12A overexpression might represent a viable strategy for preventing metastasis in human RCC.

Materials and Methods

Human samples

A total of 82 primary RCC tumor samples were collected from patients who had undergone nephrectomy at the First Affiliated Hospital of Nanjing Medical University (Nanjing, China). None of the patients had previously received systemic therapy. Tumor stage and grade were determined after nephrectomy according to the 2010 TNM classification system and the Fuhrman grading system [25,26]. The cohort included 72 clear cell RCCs, 3 papillary RCCs, 1 chromophobe RCC, 2 spindle cell carcinomas, and 4 carcinomas of other types. The Ethical Board of the First Affiliated Hospital of Nanjing Medical University approved these studies (#2019-SR-489).
Patients gave informed consent at the time of recruitment. All human studies were conducted according to the principles set out in the WMA Declaration of Helsinki and the Department of Health and Human Services Belmont Report.

Bioinformatics analysis

Using the TCGA database (https://www.cancer.gov/tcga), we obtained the standardized expression levels of HSPA12A mRNA in kidney renal clear cell carcinoma and their association with clinical features, including TNM stage, tumor grade, overall survival, and disease-free survival.

Cell cultures and treatments

Human clear cell carcinoma Caki-1 cells and human renal cell adenocarcinoma 786O cells were grown in modified McCoy's 5A medium and RPMI 1640 medium, respectively. Both media were supplemented with 10% fetal bovine serum, 100 units/ml penicillin, and 100 µg/ml streptomycin. All cell lines were free of mycoplasma contamination. Cells were plated in 60-mm dishes at a density of 3×10^5 cells/dish or in 24-well plates at a density of 1.5×10^4 cells/well. The cells were passaged when they reached 80% confluence. Overexpression of HSPA12A was established by infection with a Flag-tagged HSPA12A-expressing recombinant adenovirus (Ad-HSPA12A) or the normal control vector (Ad-NC). To overexpress CD147, cells were transfected with pTT3-CD147 plasmids or empty pTT3 control vectors using Lipofectamine 3000. Knockdown of HSPA12A was achieved by introducing HSPA12A-targeting siRNA (Si-HSPA12A) or the corresponding scrambled negative control (Si-NC) using siRNA-mate (Genepharma, China). The siRNA sequences are shown in Table S1. All measurements were performed 48 h after gene overexpression or knockdown unless indicated otherwise. In FAK inhibition experiments, cells were treated with SU6656 (2 µM) 1 h prior to HSPA12A knockdown.

Immunoblotting and immunoprecipitation-immunoblotting

Tissues or cells were subjected to cytosolic and nuclear protein preparation using lysis buffers A and B, respectively (Table S2). Equal amounts (30 µg) of protein were used for immunoblotting according to our previous methods [23,24]. To control for lane loading, the membranes were probed with anti-GAPDH antibodies for cytosolic proteins and anti-Lamin A/C antibodies for nuclear proteins. The developed bands were normalized to the NC control and expressed as relative levels. To analyze the interaction of HSPA12A with CD147 or HRD1 by immunoprecipitation-immunoblotting, Flag-tagged HSPA12A was overexpressed in Caki-1 or 786O cells. After HSPA12A overexpression for 48 h, cells were collected for protein extraction. Aliquots of equal protein content (0.7 mg) were precipitated with anti-Flag antibodies, followed by Western blotting with anti-CD147, anti-HRD1, and anti-HSPA12A antibodies, as described previously [23]. Antibodies used in the experiments are listed in Table S3.

Immunofluorescence staining

Immunofluorescence staining was performed on 4% PFA-fixed cells or frozen tissue sections according to our previous methods [23,24]. Briefly, after incubation with the indicated primary antibodies (1:100) overnight at 4 °C, Cy3- or FITC-conjugated secondary antibodies were applied to visualize the staining. Hoechst 33342 reagent was used to counterstain the nuclei. The staining was observed using a fluorescence microscope and quantified using cellSens Dimension 1.15 software (Olympus, Tokyo, Japan).

Quantitative real-time PCR

Quantitative real-time PCR was performed as described previously [23,24]. Briefly, total RNA was extracted, and 2 µg of total RNA was used for cDNA synthesis with the oligo(dT) primer. After cDNA synthesis, the expression of the indicated genes was estimated by real-time PCR using SYBR Green Master (Roche, Indianapolis, IN). Gapdh served as the internal control, and relative expression was calculated using the 2^-ΔΔCt method. The primers used for PCR are listed in Table S4.
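For readers unfamiliar with the 2^-ΔΔCt normalization used above, the following minimal Python sketch illustrates the calculation; the Ct values and the function name are hypothetical illustrations, not data or code from the study.

```python
# Minimal sketch of the Livak 2^-ddCt calculation (hypothetical Ct values).
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene vs. a control condition, normalized to a
    reference gene (e.g., Gapdh), by the 2^-ddCt method."""
    d_ct_sample = ct_target - ct_ref              # normalize sample to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalize control to reference gene
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Example: the target amplifies one cycle later (relative to Gapdh) than in
# the control condition -> ~0.5-fold expression.
print(relative_expression(24.0, 18.0, 23.0, 18.0))  # 0.5
```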
Examination of proliferative ability

MTT assay: after knockdown or overexpression of HSPA12A for the indicated times, cell viability was determined using an MTT assay as described previously [28]. EdU incorporation: following overexpression of HSPA12A for 46 h, Caki-1 cells were incubated with EdU for another 2 h. Cell proliferation was indicated by EdU incorporation, visualized with the assay kit according to the manufacturer's instructions.

Wound healing assay

Following overexpression or knockdown of HSPA12A for 48 h, cell monolayers grown in six-well plates were scratched with a 10 µl pipette tip to create a streak wound. Progression of migration was observed and photographed 24 h after wounding and expressed as the migratory distance using ImageJ software (National Institutes of Health, Bethesda, MD). Three fields per well were randomly examined at 100× magnification.

Transwell migration assay

Cells were plated in the upper chamber of 24-well plate Transwell inserts (1.5×10^4 cells/well) following overexpression or knockdown of HSPA12A for 48 h. The insert pore size was 8 µm. After culture for another 48 h, the migratory cells that had passed through the insert membrane to the lower chamber were fixed with methanol and stained with 1% crystal violet. The migrated cells were quantified in four randomly selected areas per sample at 100× magnification using cellSens Dimension 1.15 software (Olympus, Tokyo, Japan).

Lactate content analysis

Following HSPA12A knockdown or overexpression for 48 h, culture medium and cells were collected for lactate content analysis according to the manufacturer's instructions. In another set of experiments, CD147 was overexpressed in Ad-HSPA12A cells for 48 h, and the lactate contents in culture medium and cells were examined subsequently. Lactate values were expressed as contents relative to the respective NC controls. Lactate export was also indicated by medium pH values, reflecting acidification.

Measurement of extracellular acidification rate (ECAR)

ECAR was examined using a Seahorse XFe24 Extracellular Flux Analyzer (Seahorse Biosciences, USA) following the manufacturer's protocols, with the Seahorse XF Glycolysis Stress Test Kit. In brief, 8×10^3 cells per well were seeded into a Seahorse XFe24 cell culture microplate and incubated overnight. Glucose, the oxidative phosphorylation inhibitor oligomycin, and the glycolytic inhibitor 2-DG were sequentially injected into each well at the indicated time points following baseline measurements. ECAR is reported in mpH/min, and data were analyzed with Seahorse XFe24 Wave software.
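As background for the glycolysis stress test readouts, the standard metrics are conventionally derived as differences between plateau phases of the ECAR trace. The sketch below illustrates that logic with hypothetical values; it is not the vendor's Wave software algorithm, and the function name and numbers are our own.

```python
# Minimal sketch of glycolysis stress test metrics from an ECAR trace (mpH/min).
# Plateau values are hypothetical; real analyses use the vendor's Wave software.
def glycolysis_metrics(post_glucose, post_oligomycin, post_2dg):
    """Standard glycolysis stress test readouts, all in mpH/min."""
    non_glycolytic = post_2dg                    # acidification remaining after 2-DG
    glycolysis = post_glucose - non_glycolytic   # glucose-driven acidification
    capacity = post_oligomycin - non_glycolytic  # maximal glycolytic rate
    reserve = capacity - glycolysis              # spare glycolytic capacity
    return {"glycolysis": glycolysis,
            "glycolytic_capacity": capacity,
            "glycolytic_reserve": reserve}

print(glycolysis_metrics(post_glucose=35.0, post_oligomycin=55.0, post_2dg=8.0))
```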
Mass spectrometry

Human liver carcinoma HepG2 cells with or without HSPA12A overexpression were used for mass spectrometry analysis according to a previous study [23]. In brief, anti-HSPA12A immunoprecipitates were separated by SDS-PAGE, stained with Coomassie blue, digested in gel with trypsin, and analyzed by liquid chromatography-tandem mass spectrometry. Peptides were dissolved in solvent A (2% FA in 3% ACN) and loaded directly onto a reversed-phase trap column (Chrom XP C18-CL, 3 µm, 120 Å; Eksigent). Peptide separation was performed using a reversed-phase analytical column (3C18-CL-120, 3 µm, 120 Å; Eksigent). Eluting peptides were analyzed on an AB Sciex 5600+ TripleTOF system. MS/MS data were processed using ProteinPilot software 4.5 (AB Sciex). Tandem mass spectra were searched against the UniProt Homo sapiens database (160,566 sequences, released April 9, 2016) concatenated with a reverse decoy database. Trypsin/P was specified as the cleavage enzyme, allowing up to 3 missed cleavages, 4 modifications per peptide, and 5 charges.

Flow cytometry

Cells were collected and washed with PBS, then stained with APC-conjugated anti-human CD147 for 30 min on ice in the dark. Fluorescence intensity was analyzed on a FACSCalibur (Becton Dickinson). Samples without antibody incubation or with APC-conjugated anti-mouse IgG1 κ isotype control served as controls. All data were analyzed with FlowJo software (Becton Dickinson).

Statistical analysis

Data are presented as mean ± standard deviation (SD). Groups were compared using Student's two-tailed unpaired t-test, Student's two-tailed paired t-test, or one-way or two-way ANOVA followed by Bonferroni's post-hoc test. A P value of <0.05 was considered significant.

Results

HSPA12A expression is downregulated in human RCC

To address the role of HSPA12A in RCC pathogenesis, we first analyzed Hspa12a mRNA expression by mining The Cancer Genome Atlas (TCGA) database for kidney renal clear cell carcinoma (KIRC). Hspa12a mRNA expression was 33.8% lower in human RCC tissues than in normal controls (Figure 1A). Next, we collected 82 pairs of matched RCC tumors and corresponding normal kidney tissues from patients and measured HSPA12A protein expression, which was downregulated by 28.3% in cytosolic and 37.3% in pellet fractions of RCC tumors compared with their normal counterparts (Figure 1B). Immunofluorescence analysis confirmed the downregulation of HSPA12A in RCC tumors (Figure 1C).

HSPA12A downregulation is associated with unfavorable prognosis in RCC patients

To determine the clinical significance of HSPA12A downregulation in RCC progression, we analyzed the association between HSPA12A protein expression and various clinicopathological variables in our cohort of 82 RCC patients. Reduced HSPA12A expression was significantly associated with advanced TNM stage, high Fuhrman grade, and large tumor size (Figure 1D). Consistent with our results, analysis of TCGA cohorts from the UALCAN website (http://ualcan.path.uab.edu) showed an association between downregulated Hspa12a mRNA levels and advanced TNM stage and tumor grade in RCC patients [29] (Figure S1A-B). Furthermore, Kaplan-Meier analysis from the GEPIA database (http://gepia.cancer-pku.cn) indicated a significant correlation of Hspa12a mRNA downregulation with poor overall survival and disease-free survival in RCC patients [30] (Figure 1E). Taken together, these findings suggest that HSPA12A downregulation is associated with poor outcomes in RCC patients.

Overexpression of HSPA12A reduces migration but not proliferation of RCC cells

The association of HSPA12A downregulation with poor outcomes in RCC patients led us to investigate the potential effect of HSPA12A on RCC cell growth and migration.
To this end, HSPA12A was overexpressed in two human RCC cell lines (Caki-1 and 786O) by infection with the Flag-tagged HSPA12A-expressing recombinant adenovirus (Ad-HSPA12A) or the normal vector control (Ad-NC). Knockdown of HSPA12A was achieved by introducing HSPA12A-targeting siRNA (Si-HSPA12A) or the corresponding scrambled control (Si-NC). Neither HSPA12A overexpression nor its knockdown affected the viability of Caki-1 and 786O cells compared with the respective controls (Figure S2A). Likewise, cell proliferation was not changed by HSPA12A overexpression in Caki-1 cells, as indicated by EdU incorporation (Figure S2B). Consistent with these results, the mRNA and protein expression of genes associated with cell survival and proliferation, such as Bax, Bcl-2, C-Myc, and Cyclin D1, showed no changes in response to either HSPA12A overexpression or HSPA12A knockdown in Caki-1 cells (Figure S3A-B).

We then examined whether HSPA12A affects the migratory ability of RCC cells. Wound closure assays showed that knockdown of HSPA12A in both Caki-1 and 786O cells significantly increased cell migration into the wounds compared with Si-NC controls (Figure 2A). This was supported by the Transwell migration assay results, which demonstrated a greater number of HSPA12A-knockdown Caki-1 and 786O cells passing through the insert membranes than Si-NC cells (Figure 2B). In striking contrast, HSPA12A overexpression markedly suppressed cell migration in both the wound closure and Transwell assays in Caki-1 and 786O cells (Figure 2A-B).

Integrin/FAK/ERK signaling-mediated matrix metalloproteinase (MMP) expression has been shown to be critical for cancer metastasis [31-34]. Western blot analysis indicated that HSPA12A knockdown promoted, whereas HSPA12A overexpression inhibited, integrin-β1, MMP2, and MMP9 expression in Caki-1 cells compared with the respective controls (Figure 2C). FAK and ERK phosphorylation were likewise promoted by HSPA12A knockdown and inhibited by HSPA12A overexpression in Caki-1 cells. Similarly, in 786O cells, overexpression of HSPA12A reduced the expression of integrin-β1 and MMP2 and the phosphorylation of FAK and ERKs compared to the respective NC controls (Figure S4). Collectively, the data suggest that HSPA12A negatively regulates RCC cell migration.

To further determine the role of FAK-related signaling in the migratory regulation by HSPA12A, we treated Si-HSPA12A and Si-NC Caki-1 cells with SU6656, a widely used inhibitor that suppresses FAK phosphorylation [35,36]. Both wound healing and Transwell migration assays demonstrated that SU6656 abolished the HSPA12A knockdown-induced increase of migration (Figure S5A-C). Moreover, no difference in cell migration between the Si-NC and Si-HSPA12A groups was observed in the presence of SU6656. These data suggest that FAK signaling is involved in the HSPA12A knockdown-induced promotion of RCC cell migration.

Overexpression of HSPA12A inhibits lactate export and glycolysis in RCC cells

Lactate, once considered a waste product of glycolysis, has emerged as a critical regulator of cancer development, maintenance, and metastasis [37-40]. We found that the extracellular lactate content and medium acidification of Caki-1 cells were increased by HSPA12A knockdown compared with Si-NC controls (Figure 3A-B). By contrast, HSPA12A overexpression decreased extracellular lactate content and medium acidification compared with Ad-NC controls (Figure 3A-B).
To exclude the possibility that these changes in extracellular lactate were due to altered lactate generation rather than altered export, we measured the intracellular lactate content. Intracellular lactate was decreased by HSPA12A knockdown but increased by HSPA12A overexpression in Caki-1 cells compared with the controls (Figure 3C). Similar effects of HSPA12A on medium acidification, lactate export, and intracellular lactate content were observed in 786O cells (Figure 4A-C). These findings are in line with previous reports that intracellular lactate accumulation serves as a negative feedback signal for glycolysis [9,20].

To further analyze the effect of HSPA12A on RCC glycolysis, extracellular acidification rates (ECAR) were examined using Seahorse technology. A significant increase in glycolysis rate and glycolytic capacity was detected in HSPA12A-knockdown Caki-1 cells compared to controls (Figure 3D). Consistent with these observations, knockdown of HSPA12A in Caki-1 cells upregulated the expression of genes regulating glucose uptake (GLUT1 and GLUT4), glycolysis (HK2 and PFKFB3), and lactate export (MCT4) (Figure 3E). By contrast, overexpression of HSPA12A downregulated the expression of GLUT1, GLUT4, HK2, PFKFB3, and MCT4 in Caki-1 cells compared to Ad-NC controls. Similarly, the expression of GLUT1, GLUT4, HK2, PFKFB3, LDHA, and MCT4 was downregulated in 786O cells following HSPA12A overexpression (Figure 4D). These data suggest that HSPA12A negatively regulates lactate export and glycolysis in RCC cells.

Overexpression of HSPA12A decreases CD147 protein abundance, maturation, and membrane localization

The transmembrane glycoprotein CD147 has been shown to promote migration and glycolysis of cancer cells via MCT4-mediated lactate export [9,15,27,41-43]. Based on our previous unbiased mass spectrometry screen, which suggested an interaction between HSPA12A and CD147 in human hepatocellular carcinoma cells (Figure S6), we hypothesized that HSPA12A might play a role in the regulation of CD147 during RCC migration. Nonetheless, CD147 protein was not detected in the HSPA12A immunocomplexes of Caki-1 and 786O cells (Figure S7A-B), suggesting the lack of a direct interaction between HSPA12A and CD147 in RCC cells. However, both immunoblotting and immunostaining analyses revealed increased CD147 protein expression accompanied by reduced HSPA12A expression in human RCC tumors compared to their normal counterparts (Figure 5A-B), indicating that HSPA12A might negatively regulate CD147 expression. Indeed, CD147 protein levels were increased by HSPA12A knockdown but decreased by HSPA12A overexpression in Caki-1 cells, as indicated by both immunoblotting and immunostaining (Figure 5C-D). Reduced CD147 expression was also observed in HSPA12A-overexpressing 786O cells compared to NC controls (Figure S8). The effects of HSPA12A on CD147 expression were also confirmed by flow cytometry (Figure S9).

High glycosylation is an indicator of CD147 maturation for membrane localization [13]. Interestingly, HG-CD147 levels were increased in human RCC tumors, accompanied by reduced HSPA12A expression (Figure 5A). Moreover, HSPA12A knockdown increased, whereas HSPA12A overexpression decreased, HG-CD147 content in Caki-1 cells (Figure 5C-D).
HSPA12A knockdown also substantially increased the membrane localization of CD147 in Caki-1 cells, whereas HSPA12A overexpression markedly decreased it (Figure 5E-F). A decrease in HG-CD147 protein content was likewise observed in 786O cells following HSPA12A overexpression (Figure S8). These findings collectively indicate that HSPA12A negatively regulates CD147 protein abundance, maturation, and membrane localization in RCC cells.

Given that CD147 promotes lactate export by enhancing MCT4 stability, cell surface expression, and function [15,42,43], we analyzed the effect of HSPA12A on MCT4 expression. Human RCC tumors with lower HSPA12A expression showed significantly higher MCT4 protein expression than their corresponding non-tumor counterparts (Figure S10A-B). This observation was confirmed by in vitro experiments showing that HSPA12A knockdown increased, whereas HSPA12A overexpression decreased, MCT4 protein expression in both Caki-1 and 786O cells compared with control cells (Figures 3E and 4D). However, neither HSPA12A knockdown nor HSPA12A overexpression affected Mct4 mRNA levels in Caki-1 cells (Figure S11), indicating that HSPA12A may regulate MCT4 at the post-transcriptional level.

Overexpression of HSPA12A reduces CD147 to mediate anti-migration effects

To determine whether the anti-migratory activity of HSPA12A overexpression is mediated by the decrease in CD147, we overexpressed CD147 in HSPA12A-overexpressing Caki-1 cells (Ad-HSPA12A) by transfection with the pTT3-CD147 plasmid or the empty pTT3 vector control (Figure 6A-B). Notably, overexpression of CD147 reversed the HSPA12A-induced inhibition of cell migration in the Transwell migration assay (Figure 6C). Consistent with this observation, overexpression of CD147 in Ad-HSPA12A Caki-1 cells upregulated integrin-β1, MMP2, MMP7, and MMP9 protein expression and increased FAK phosphorylation compared to pTT3 controls (Figure 6D). Similar results were obtained in 786O cells, in which overexpression of CD147 reversed the HSPA12A-induced inhibition of cell migration in the Transwell migration assay (Figure S12) and also increased FAK phosphorylation compared to pTT3 controls (Figure S13). These data indicate that HSPA12A inhibits RCC cell migration by reducing CD147.

Overexpression of HSPA12A reduces CD147 to mediate anti-lactate export and anti-glycolysis effects

We next investigated whether the HSPA12A-induced inhibition of lactate export and glycolysis is mediated by the reduction of CD147. Overexpression of CD147 in Ad-HSPA12A Caki-1 and 786O cells increased medium acidification compared with control (pTT3) cells (Figure 7A). Consistently, following CD147 overexpression, lactate content was increased extracellularly (culture medium) and decreased intracellularly in Ad-HSPA12A Caki-1 and 786O cells (Figure 7B). Moreover, overexpression of CD147 in Ad-HSPA12A cells upregulated the protein expression of genes involved in glucose uptake (GLUT1 and GLUT4), glycolysis (HK2, PFKFB3, and LDHA), and lactate export (MCT4) compared to pTT3 controls (Figure 7C). These results indicate that HSPA12A inhibits lactate export and glycolysis by reducing CD147 protein expression.

HSPA12A reduces CD147 protein stability via ubiquitin-proteasome degradation

Next, we explored the potential mechanisms underlying the HSPA12A-mediated regulation of CD147 protein expression.
Unexpectedly, neither HSPA12A knockdown nor HSPA12A overexpression affected Cd147 mRNA levels in Caki-1 or 786O cells, which remained comparable to the respective NC controls (Figures 8A and S14). Analysis of the TCGA database for KIRC showed comparable Cd147 mRNA levels between RCC tumors and normal controls (Figure S15), supporting our observations and suggesting that HSPA12A may regulate CD147 protein stability. To test this hypothesis, we treated Si-HSPA12A Caki-1 cells with cycloheximide (CHX) to block protein translation for 48 and 72 h. Notably, the remaining HG-CD147 protein content, expressed as a percentage of the NC level at 0 h, was significantly higher in Si-HSPA12A Caki-1 cells than in Si-NC cells after treatment with CHX for up to 72 h (Figure 8B). Similar results were obtained in 786O cells following CHX treatment (Figure S16). These data indicate that knockdown of HSPA12A increases CD147 protein stability.

The proteasome is an important system for protein degradation [42]. To investigate whether HSPA12A regulates CD147 protein stability by modulating its proteasomal degradation, we examined the effects of the proteasome inhibitor MG132 on HSPA12A-induced changes in CD147 stability in both Caki-1 and 786O cells. The HSPA12A overexpression-induced reduction of CD147 was abrogated in the presence of MG132 (Figures 8C and S17A). Furthermore, CD147 protein levels were increased by MG132 in Si-NC control cells, whereas they were comparable between the MG132 and vehicle groups in HSPA12A-knockdown Caki-1 and 786O cells (Figures 8D and S17B). Flow cytometry confirmed that the HSPA12A-induced decrease of CD147 in Caki-1 cells was abolished in the presence of MG132 (Figure S9). Taken together, these results suggest that the HSPA12A-induced negative regulation of CD147 protein stability is proteasome-dependent.

Considering that ubiquitination is necessary for proteasomal protein degradation, we examined the effects of HSPA12A on CD147 ubiquitination. To this end, ubiquitin-conjugated proteins from HSPA12A-overexpressing (Ad-HSPA12A) Caki-1 cells and control (Ad-NC) cells were immunoprecipitated with an antibody against ubiquitin. The ubiquitin precipitates from Ad-HSPA12A cells showed a higher HG-CD147 protein content than the precipitates from Ad-NC cells, and a higher level of non-glycosylated CD147 (NoG-CD147), rather than LG-CD147, was recovered (Figure 9A). Conversely, HSPA12A deficiency significantly reduced the ubiquitination of both HG-CD147 and NoG-CD147 compared with control cells in immunoprecipitation-immunoblotting analyses (Figure S18).

HSPA12A interacts with HMG-CoA reductase degradation protein 1 (HRD1), an E3 ubiquitin ligase

HRD1 is a ubiquitin E3 ligase that ubiquitinates and degrades CD147 [44,45]. To examine whether HRD1 is involved in the HSPA12A-induced proteasomal degradation of CD147, we first evaluated the effects of HSPA12A on HRD1 expression. A significant increase in HRD1 protein expression was found in both Caki-1 and 786O cells following HSPA12A overexpression compared to Ad-NC controls (Figure 9B). Notably, immunoprecipitation-immunoblotting analysis revealed the presence of HRD1 in the Flag-tagged HSPA12A immunoprecipitates from Ad-HSPA12A Caki-1 or 786O cells, demonstrating an interaction of HSPA12A with HRD1 in RCC cells (Figure 9C).
Discussion

In this study, we identified HSPA12A as a novel negative regulator of renal cancer cell migration and metastasis. This function of HSPA12A as a potential migration and metastasis suppressor was mediated by modulation of CD147 stability and of CD147-mediated lactate export and glycolysis (Figure 9D). These findings indicate that promoting HSPA12A expression could be an effective strategy for the management of RCC metastasis.

In the present study, we detected downregulation of HSPA12A in human RCC tumors, which was associated with advanced TNM stage and Fuhrman grade, as well as larger tumor size. Kaplan-Meier analysis from the GEPIA database showed a correlation between low Hspa12a mRNA expression and poor survival in RCC patients. Loss- and gain-of-function experiments in human RCC cells showed that HSPA12A did not directly regulate RCC cell proliferation but negatively regulated RCC cell migration. We also found that HSPA12A negatively regulated the phosphorylation of FAK as well as the expression of integrin-β1 and MMP2/9. FAK is a multifunctional regulator of cell signaling within the tumor microenvironment and sits at the intersection of various signaling pathways that promote cancer metastasis. It can be activated by integrin-β to increase the expression or activation of MMPs, which, in turn, facilitate tumor cell invasion into the surrounding microenvironment [31-33]. Various other tumor-promoting signaling pathways also involve FAK. Thus, FAK inhibitors are emerging as promising chemotherapeutics against tumor metastasis in mouse models. In our study, inhibition of FAK diminished the HSPA12A knockdown-induced promotion of RCC cell migration, suggesting that FAK signaling is involved in the regulation of RCC migration by HSPA12A. Together, these findings suggest that the downregulation of HSPA12A expression is associated with poor prognosis in human RCC, which could be attributed to the negative regulatory effect of HSPA12A on RCC cell migration. The results provide insights into the role of heat shock proteins in cancer progression.

CD147 is a glycoprotein that was initially identified as a regulator of MMPs. Increasing evidence indicates that CD147 is overexpressed in cancer cells and promotes cancer cell metastasis through several mechanisms. One important mechanism underlying the effect of CD147 on metastasis is the metabolic modification of the tumor microenvironment through interactions with specific MCTs, such as MCT4, to facilitate lactate export and tumor glycolysis [37,50]. Moreover, CD147 induces MMP expression by activating FAK in an integrin-dependent manner [50]. However, little is known about the regulation of CD147 by heat shock proteins. Here, we found that human RCC tumors showed reduced HSPA12A accompanied by increased CD147 protein expression, suggesting that downregulation of HSPA12A might increase RCC cell migration by upregulating CD147. This hypothesis was supported by the findings that HSPA12A negatively regulated CD147 protein abundance and membrane expression in RCC cells. Moreover, HSPA12A negatively regulated lactate export, and this was not caused by changes in lactate generation. In view of previous reports showing that intracellular lactate accumulation serves as a negative feedback signal for glycolysis [9,20], the present data suggest that downregulation of HSPA12A might increase lactate export and thereby upregulate glycolysis.
Indeed, the glycolysis rate, glycolytic capacity, and expression of glycolysis-related genes were promoted by HSPA12A knockdown in RCC cells. More importantly, overexpression of CD147 reversed the HSPA12A overexpression-induced inhibition of RCC cell migration, lactate export, and glycolysis-related gene expression. These data suggest that the downregulation of HSPA12A in RCC cells increases migration and glycolysis in a CD147-dependent manner.

Previous studies have reported that CD147 can promote the proliferation of some cancer cells, such as hepatocellular carcinoma cells, through multiple pathways including PI3K/Akt signaling [51,52]. Although we found that HSPA12A negatively regulates CD147 protein abundance, the proliferation of both Caki-1 and 786O RCC cells was not affected by knockdown or overexpression of HSPA12A. Akt activation in RCC cells was also unaffected by HSPA12A (Figure S19). Taking into account our recent report of increased proliferation of hepatocellular carcinoma cells driven by HSPA12A [53], the current data suggest that the effects of CD147 and HSPA12A on proliferation are cancer cell type- and signaling activation-dependent.

Evidence has shown that CD147 is degraded through the ubiquitin-proteasome system, and HRD1 is an E3 ligase involved in CD147 ubiquitination [54]. Previous studies have demonstrated that HRD1 is an endoplasmic reticulum-associated ubiquitin ligase involved in CD147 degradation in human hepatocellular carcinoma cells. Moreover, HRD1 has been shown to form a signaling axis with HSP70 to regulate the onco-repressor potential of N-terminally misfolded Blimp-1s in lymphoma cells [55]. To understand how downregulated HSPA12A increases CD147 protein expression, we analyzed its effect on Cd147 mRNA levels and found that HSPA12A did not affect Cd147 mRNA expression in RCC cells. This result was supported by the TCGA database, which showed comparable Cd147 mRNA expression between RCC patients and normal controls. We therefore speculated that HSPA12A might regulate CD147 protein abundance by modulating its stability. The following findings supported this hypothesis: (1) the protein synthesis blocker CHX caused a gradual decrease in CD147 protein abundance in control RCC cells, and this CHX-induced reduction was attenuated by HSPA12A knockdown; (2) treatment with the proteasome inhibitor MG132 abolished the HSPA12A overexpression-induced decrease of CD147 protein; (3) CD147 was present at a higher level in ubiquitin immunoprecipitates from HSPA12A-overexpressing RCC cells, and the opposite pattern was observed in response to HSPA12A knockdown; and (4) HSPA12A interacted with the E3 ubiquitin ligase HRD1. These results suggest that HSPA12A negatively regulates CD147 protein stability by modulating CD147 ubiquitination for proteasomal degradation.

Conclusion

Our study demonstrated that loss of HSPA12A is associated with metastasis in human RCC, while overexpression of HSPA12A inhibits RCC cell migration. We further provided evidence that the anti-metastatic action of HSPA12A is mediated by negative regulation of CD147 stability and CD147-mediated lactate export (Figure 9D). These data suggest that increasing HSPA12A expression may provide an effective strategy for preventing the migration of human RCC cells.
New Insights on Genetic Diagnostics in Cardiomyopathy and Arrhythmia Patients Gained by Stepwise Exome Data Analysis

Inherited cardiomyopathies are characterized by clinical and genetic heterogeneity that challenges genetic diagnostics. In this study, we examined the diagnostic benefit of exome data compared to targeted gene panel analyses, and we propose new candidate genes. We performed exome sequencing in a cohort of 61 consecutive patients with a diagnosis of cardiomyopathy or primary arrhythmia, and we analyzed the data following a stepwise approach. Overall, a variant of interest (VOI) was detected in 64% of patients. The detection rate in the main sub-cohort, consisting of patients with dilated cardiomyopathy (DCM), was much higher than previously reported (25/36; 69%). The majority of VOIs were found in disease-specific panels, while further analysis of an extended panel and of the exome data led to an additional diagnostic yield of 13% and 5%, respectively. Exome data analysis also detected variants in candidate genes whose functional profiles suggested a probable pathogenetic role, the strongest candidate being a truncating variant in STK38. In conclusion, although the diagnostic yield of gene panels is acceptable for routine diagnostics, the genetic heterogeneity of cardiomyopathies and the presence of still-unknown causes favor exome sequencing, which enables the detection of interesting phenotype-genotype correlations as well as the identification of novel candidate genes.

Introduction

Inherited cardiomyopathies are characterized by extensive genetic and phenotypic heterogeneity. Primary diseases of the myocardium associated with mechanical and/or electrical dysfunction are included within this large group of diseases [1]. Various classification schemes have been proposed, with classification according to functional and morphologic features remaining the most useful for clinical practice [2]. According to this scheme, the following major forms of cardiomyopathies have been defined: dilated cardiomyopathy (DCM), hypertrophic cardiomyopathy (HCM), restrictive cardiomyopathy (RCM), left ventricular noncompaction cardiomyopathy (LVNC), arrhythmogenic right ventricular cardiomyopathy (ARVC), and primary arrhythmias including long QT syndrome (LQTS), Brugada syndrome, and catecholaminergic polymorphic ventricular tachycardia (CPVT) [1,3,4].

As the involved pathogenetic mechanisms have been unveiled over the years, attempts have been made to link the different phenotypic forms of cardiomyopathy with their underlying genetic causes [5]. For example, genes encoding sarcomere proteins are considered to play the primary role in the development of HCM, while mutations in genes encoding desmosomal proteins are found in most cases of ARVC. Nevertheless, the definition of such genotype-phenotype correlations is far from simple due to the phenotypic overlap between the different forms of cardiomyopathy, as well as the overlap between several disease gene groups [2,6]. Further factors that contribute to the complexity of the genetics of inherited cardiomyopathies are mutational heterogeneity, modifier genes, multiple variants or polygenic causes, genetic interplay with environmental influences, etc. [7]. These factors, along with the variable expressivity and incomplete penetrance shown by most cardiomyopathy phenotypes, make genetic testing and result interpretation particularly challenging.
Another limiting factor of current genetic testing in cardiomyopathies is that not all genetic causes have yet been discovered. This is reflected by the mutation detection rate of gene panels comprising known genes with an established association with cardiomyopathies, which is much lower than 100%. The current diagnostic yield of such gene panels is around 10-40% for DCM and does not exceed 60% for HCM or RCM [6]. Although secondary causes can account for part of the remaining cases, the presence of a positive family history in unsolved cases suggests that novel genetic causes of cardiomyopathies remain to be discovered [8].

As sequencing technologies have evolved over the years, extended genetic testing methods like whole exome sequencing (WES) or whole genome sequencing have been gaining ground in clinical diagnostics. These methods offer the possibility of analyzing more genes than the ones included in routine gene panels. However, there is an ongoing debate in the literature regarding the utility of whole exome/genome sequencing [9]. Among the expressed considerations are difficulties in the interpretation of numerous variants of uncertain significance, as well as secondary findings [10]. Furthermore, concerns have been raised that some genes have been wrongly implicated in the genetic etiology of cardiomyopathies, which may lead to false-positive results in clinical diagnostic settings [11].

In this study, we examine the diagnostic benefit of extended genetic analysis beyond the major genes included in routine panels and address some of the above-mentioned challenges in cardiogenetics. For this purpose, we performed comprehensive genetic analyses of 61 consecutive patients diagnosed with a major form of cardiomyopathy or a primary arrhythmia syndrome. Exome sequencing was performed, and data were analyzed by a stepwise approach starting from a disease-specific gene panel up to the analysis of all exome data. We report the diagnostic yield after each step and describe the occurrence and distribution of relevant genetic variants, as well as the spectrum of affected genes. We discuss the limitations of current variant classification schemes and suggest new categories for uncertain variants. In the era of increasing availability of next generation sequencing (NGS) in clinical settings, it is important to explore both the potential and the challenges of the complex field of cardiogenetics.

Materials and Methods

Clinical Evaluation

Unrelated patients were consecutively investigated in the "Center for Inherited Cardiovascular Diseases" at the University Hospital Würzburg between 2017 and 2019. The study protocol and procedures received positive votes from the ethics committee of the Medical Faculty of the University of Würzburg (vote #29/17), approved on 16 May 2017. All participants provided written informed consent prior to any study participation. The purpose of the presentation was the clinical evaluation of the index case, family history evaluation, and genetic diagnostics. The clinical evaluation included medical history, family history, 12-lead electrocardiogram (ECG), 24 h ambulatory Holter monitoring, transthoracic two-dimensional echocardiography, and, in some cases, cardiac magnetic resonance imaging (CMR), cardiac biopsy, cardiac catheter examination, or additional electrophysiological investigations.
Sixty-one patients with a primary diagnosis of HCM, DCM/LVNC, RCM, ARVC, LQTS, Brugada syndrome, or CPVT, or who had survived sudden cardiac death without structural heart disease, were included in the study. Classification and diagnosis were made according to the scientific statement of the American Heart Association [1]. For the sub-cohort of 36 DCM/LVNC patients, a secondary cause (in particular coronary artery disease, a primary/structural valvular defect, or acute myocarditis) was excluded.

Exome Sequencing

For genetic testing, we performed exome DNA sequencing using different library preparation kits by Illumina (San Diego, CA, USA) and IDT (Integrated DNA Technologies, Coralville, IA, USA) according to the latest updates in library preparation proposed by Illumina. The pooled libraries were sequenced paired-end (2 × 150 base pairs) on a NextSeq 500 sequencing system (Illumina, San Diego, CA, USA). A minimum read depth of 10× across the target regions was achieved in 97.9% of targets with the clinical exome TruSight One sequencing panel, 96.5% with the Illumina exome probes, and 99.3% with the IDT xGen Exome Research Panel v1.0 probes.

Bioinformatics and Workflow of Genetic Data Analysis

Data analysis was performed with GensearchNGS (PhenoSystems, Wallonia, Belgium). Genetic variants were called based on minor allele frequency (MAF), exon distance (±20 bp into the intron), and quality (variant present in >10% of the NGS reads, coverage >5×). Called variants were categorized as splice site, missense, nonsense, or small deletions/insertions. As the study focused on rare Mendelian variants with dominant inheritance, we used the MAF cutoff of <0.001 proposed in the literature [12]. Pathogenicity predictions were made using Alamut (Interactive Biosoftware, Rouen, France), which combines different prediction tools, information from mutation/polymorphism databases, and the Human Gene Mutation Database (HGMD), which collects literature information on all known (published) gene variants responsible for human inherited diseases. The Genome Aggregation Database (gnomAD v2.1) was used as the genetic reference database for unaffected individuals (http://gnomad.broadinstitute.org/) [13]. Variants were classified according to the American College of Medical Genetics and Genomics (ACMG) Standards and Guidelines [14]. According to this classification scheme, variants in genes that cause Mendelian disorders are categorized as "pathogenic," "likely pathogenic," "uncertain significance," "likely benign," or "benign" on the basis of various aspects of variant evidence (e.g., population data, computational data, functional data, and segregation data). VUS (variants of uncertain significance) in this study were further subclassified into "VUS favor pathogenic," those that remained inconclusive (VUS), and "VUS favor benign" (Supplementary Figure S1). "VUS favor pathogenic" were those variants that could not be confirmed as likely pathogenic according to the ACMG criteria but whose characteristics indicated pathogenic relevance (e.g., absence from controls or very low frequency, affected genes with a well-known association with the patient phenotype, and/or affected mutational hotspots). "VUS favor benign" were those variants whose characteristics indicated that they are more likely benign (relatively high MAF, affected gene not fitting the clinical phenotype, and weakly conserved amino acid for missense variants).
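To make the variant-calling thresholds concrete, the following minimal Python sketch applies the frequency, quality, and exon-distance filters described above; the dictionary fields and values are illustrative assumptions, not the actual GensearchNGS data model.

```python
# Minimal sketch of the rare-variant filter described above: MAF < 0.001,
# variant present in >10% of reads, coverage > 5x, within +/-20 bp of an exon.
# Field names are illustrative, not those of any specific pipeline.
MAF_CUTOFF = 0.001

def passes_filters(variant: dict) -> bool:
    rare = variant["gnomad_af"] < MAF_CUTOFF          # population frequency
    quality = variant["alt_fraction"] > 0.10 and variant["depth"] > 5
    near_exon = abs(variant["exon_distance"]) <= 20   # bp into the intron
    return rare and quality and near_exon

example = {"gnomad_af": 0.00002, "alt_fraction": 0.48,
           "depth": 85, "exon_distance": 0}
print(passes_filters(example))  # True
```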
The analysis of sequence data followed a three-step approach, as illustrated in the workflow diagram (Figure 1). In the first-tier analysis, variants were filtered for a gene panel according to the proposed clinical diagnosis (e.g., HCM; Supplementary Table S1). The gene selection for the panels was based on the recommendations of the American College of Medical Genetics and Genomics [6], the European Heart Rhythm Association [15], and the OMIM (Online Mendelian Inheritance in Man) phenotype database. In a second step, an extended set of 79 genes (Supplementary Table S2; defined in 2017) associated with hereditary heart diseases in the OMIM database was screened for variants of interest (VOIs). Pathogenic and likely pathogenic variants, as well as "VUS favor pathogenic," were considered VOIs. This second step was performed in the 40 patients in whom a likely pathogenic or pathogenic variant could not be detected in the first tier. In the 32 patients in whom no VOIs could be detected in the second step, an analysis of the exome data followed. Variants in all genes enriched through exome sequencing were considered in this third tier. The prioritization of single nucleotide variants (SNVs) in the third step was mainly based on mutation type, MAF, conservation data for missense variants, literature data, and the function and expression profiles of the affected genes (http://www.proteinatlas.org [16]).
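The three-tier workflow can be summarized schematically as in the sketch below; the gene sets and variant fields are abbreviated placeholders rather than the full panels of Supplementary Tables S1 and S2, and the function is our own illustration of the filtering logic, not the study's software.

```python
# Schematic sketch of the stepwise (three-tier) analysis described above.
# Gene sets are abbreviated placeholders, not the full study panels.
CORE_PANELS = {"HCM": {"MYH7", "MYBPC3", "TNNT2"},
               "DCM": {"TTN", "LMNA", "FLNC"}}
EXTENDED_PANEL = {"DSG2", "DSP", "FHL1", "TRIM63", "MIB1", "MYLK3"}  # 79 genes in the study

def stepwise_analysis(variants, diagnosis):
    """Return (tier, VOIs), where VOIs are pathogenic, likely pathogenic,
    or 'VUS favor pathogenic' variants."""
    def vois(gene_set):
        return [v for v in variants
                if v["gene"] in gene_set and v["class"] in
                ("pathogenic", "likely_pathogenic", "VUS_favor_pathogenic")]

    hits = vois(CORE_PANELS[diagnosis])              # tier 1: disease-specific panel
    if hits:
        return 1, hits
    hits = vois(EXTENDED_PANEL)                      # tier 2: extended cardio panel
    if hits:
        return 2, hits
    return 3, vois({v["gene"] for v in variants})    # tier 3: whole exome
```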
Results

Patient Characteristics

The overall cohort consisted of 61 unrelated patients who were seen in a specialized clinic for inherited cardiac diseases for the purpose of clinical and family evaluation and genetic diagnostics. Fifty-one patients were diagnosed with a primary cardiomyopathy, the majority (36 patients) with DCM or LVNC. Ten patients had a diagnosis of a primary arrhythmia syndrome, including three cases of survived sudden cardiac death without structural heart disease (Figure 2). The entire cohort had a mean age at first diagnosis of 37 ± 16 years, 64% were male, and about half (48%) had an obvious family history of cardiomyopathy or arrhythmia in first- or second-degree relatives (Table 1).

The largest sub-cohort of 36 cases consisted of patients with DCM/LVNC. Here, the mean age at diagnosis was 37 ± 16 years, 72% were male, and 20 (56%) cases had a positive family history. About half of this sub-cohort (47%) presented with chronic heart failure, while acute (decompensated) heart failure was the primary manifestation in about 44% (Table 1). Echocardiographic measurements revealed a mean left ventricular ejection fraction (LVEF, Simpson biplane) of 32 ± 12% and a mean left ventricular end-diastolic diameter (LVEDD) of 65 ± 11 mm, indicating that most patients presented at an advanced stage of the disease.

Detection Rates of Variants of Interest and Spectrum of Involved Genes

Overall, a VOI could be detected in 39 (64%) of the 61 patients of the entire cohort, including a pathogenic or likely pathogenic variant in 28 (46%) cases and a "VUS favor pathogenic" in 11 cases (Figure 3A). Twenty-three genetic variants appeared to be novel and had not previously been described in the literature or in mutation databases (Table 2). Novel variants were submitted to the ClinVar database. In the cohort of cardiomyopathy patients (DCM/LVNC, HCM, ARVC, and RCM), the detection rate of VOIs was 68% (Figure 3B).
The largest sub-cohort consisted of 36 DCM/LVNC patients. In this sub-cohort, a likely pathogenic/pathogenic variant was found in 16 (44%) patients, while a "VUS favor pathogenic" was detected in an additional nine patients (25%), giving an overall VOI rate of 25/36 (69%) (Figure 3C). The highest detection rate was observed in the subgroup of patients with HCM, where a VOI was detected in seven of nine cases. The lowest detection rate of VOIs (about 40%) was found in the primary arrhythmia patients (Figure 3D).

Regarding the involved genes, most detected variants in the DCM subgroup were truncating variants in the TTN gene. In accordance with previous reports [17], these truncating variants clustered in the A-band region of TTN and affected all major transcripts. One proband with a heterozygous truncating TTN variant had a clinical diagnosis of LVNC; an association of TTN variants with LVNC, although not established, was recently reported [18]. Genetic variants in further genes with an established association with DCM/LVNC accounted for the rest of the cases: variants in genes coding for sarcomere proteins (e.g., MYH7, MYBPC3, and TNNT2), nuclear envelope proteins (e.g., LMNA), or the cytoskeleton (FLNC) (Supplementary Figure S2). Causative variants in genes encoding desmosomal proteins (DSP and DSG2), which are primarily associated with ARVC, were detected in two probands with a primary diagnosis of DCM, supporting the previously reported association of those genes with DCM [19].

Most of the HCM-associated variants were identified in genes encoding sarcomere proteins (MYH7, MYBPC3, and TNNT2), consistent with the established view that HCM is mainly a disease of the sarcomere (Supplementary Figure S2) [2]. The following interesting genotype-phenotype correlations were detected in the HCM group: one male proband with early-onset HCM carried a hemizygous FHL1 variant (patient #25). He had mildly elevated serum creatine kinase levels (~400 U/L) but no obvious skeletal muscle involvement. Isolated hypertrophic cardiomyopathy due to pathogenic FHL1 mutations has been reported [20]. Further segregation studies revealed that his 50-year-old mother and 81-year-old grandmother were heterozygous carriers of the FHL1 variant, both with mild signs of cardiac hypertrophy and diastolic dysfunction. In another patient with HCM (#23), a pathogenic missense variant in MYH7 was detected in a compound heterozygous state with a truncating splice variant in the same gene. Subsequent segregation analysis showed that the truncating variant was not associated with a clinical phenotype in the heterozygous state but led to a severe cardiomyopathy phenotype with non-compaction features in the compound heterozygous state [21]. Multiple variants were also detected in five additional patients (patients #3, #7, #22, #28, and #30; Table 2). Unlike in patient #23, these multiple variants were found in genes other than the one harboring the primary causal variant.

Diagnostic Yield of Core Gene Panel, Extended Gene Panel, and Exome Analysis

The highest diagnostic yield of VOIs was achieved in the core gene panel fitting the clinical diagnosis (46%; 28/61). The panels contained all genes linked with the respective clinical phenotype in the OMIM database (Supplementary Table S1).
In those cases where no VOI was detected in the core gene panel, the analysis of an extended gene panel (Supplementary Table S2; defined in 2017) led to the detection of VOIs in an additional 13% of cases (8/61). This additional yield concerned cases with relevant variants in genes not primarily associated with the clinical diagnosis, thus unveiling interesting genotype-phenotype correlations. For example, a likely pathogenic variant in DSG2 was found in a proband with a clinical diagnosis of HCM (patient #27), suggesting a possible association with this phenotype.

Further analysis of the WES data led to the identification of VOIs in three cases, providing an additional diagnostic yield of about 5% (3/61). These cases involved variants in genes not well or only recently described in the literature (TRIM63, MIB1, and MYLK3) that were not included in routine panel diagnostics at the time of the initial genetic analysis or that were related to unusual inheritance patterns and phenotypes (Table 2 and Supplementary Table S3). In particular, analysis of the whole exome data revealed a homozygous truncating variant in TRIM63 (c.739C>T, p.Gln247*) in an 18-year-old index patient of a consanguineous family (patient #45; Supplementary Figure S3). He was diagnosed with HCM (LVWT 27/24 mm) at the age of 12 years and received an implanted defibrillator after a syncopal episode. His sibling, who was also homozygous for the variant, presented with a mild DCM phenotype at birth and also exhibited muscle pain and elevated serum creatine kinase levels. The index patient did not show an overt skeletal muscle phenotype but also had mildly elevated serum creatine kinase levels. Both heterozygous parents were healthy and did not show any signs of cardiomyopathy in echocardiographic studies. The detected TRIM63 variant has been described in two literature reports in association with HCM and skeletal myopathy [22,23], but no OMIM phenotype had been assigned to the gene at the time of genetic testing.

In another, female patient (#44) with DCM and an early disease onset at the age of two years, a truncating MIB1 variant (c.1111C>T, p.Arg371*) was detected. She is currently 29 years old and has no heart failure symptoms under standard therapy; her cardiac function, with an LVEF of 42%, has been stable for many years. Pathogenic MIB1 mutations have been described as rare causes of LVNC [24] but were not part of most routine gene panels at the time of testing. Another, 50-year-old patient with DCM (patient #15) carried a novel variant affecting a highly conserved amino acid residue in MYLK3 (c.2042C>T, p.Pro681Leu). Loss-of-function variants in MYLK3, which encodes myosin light chain kinase 3, have only recently been described in association with DCM [25]. The same patient had an additional VUS in the FLNC gene (Table 2). However, we regard the MYLK3 variant as more likely to be primarily causal, because it affects a highly conserved amino acid residue in the catalytic domain of the encoded protein.

Novel Variants in Candidate Genes

Next, in unsolved cases, we prioritized variants in the exome data according to the criteria illustrated in the workflow diagram (Figure 1). This allowed the detection of variants in less well-studied genes whose expression and function profiles suggested a possible etiological relationship (Supplementary Table S3).
The strongest candidate was a novel frameshift variant (c.222dup, p.Glu75Argfs*16) in STK38, encoding serine-threonine kinase 38, in a 30-year-old patient presenting with acute heart failure, a severely dilated left ventricle (LVEDD: 75 mm), and a severely reduced left ventricular ejection fraction (LVEF: 19%). His family history revealed that his mother had a sudden cardiac arrest at the age of 59 years, but no autopsy was performed. His maternal grandfather died at the age of about 40 years of unknown cause. Recently, functional studies have shown that STK38 interacts with RBM24 (RNA binding motif protein 24), an RNA-binding protein with an important role in sarcomere assembly and heart contractility [26]. Furthermore, knockdown of STK38 led to a reduction of sarcomere proteins and disarrangement of the sarcomere [27]. STK38 loss-of-function variants are infrequent and observed only in heterozygous form in population databases (<0.01%, https://gnomad.broadinstitute.org/). The gene shows a haploinsufficiency score of 2.67% (http://decipher.sanger.ac.uk) [28], suggesting intolerance of loss-of-function variation.

Discussion

The identification of genetic causes of inherited cardiac diseases, together with improvements in DNA sequencing technologies, has drastically changed cardiogenetics in recent years. Genetic diagnostics have thus entered clinical routine and allow for the extensive testing of cardiomyopathy and arrhythmia patients. In this study, we performed a comprehensive genetic analysis of sequencing data, starting from a targeted gene panel up to whole exome data, in a stepwise approach. We report the additional diagnostic yield of this extended genetic analysis and propose novel candidate genes, but we also address some of the challenges encountered through this extensive data analysis approach and discuss the limitations of the ACMG variant classification criteria. One of the challenges of extensive NGS diagnostics in inherited cardiac diseases is the classification of the numerous detected genetic variants and the interpretation of genetic findings in the clinical context. For this purpose, we used the ACMG criteria [14] in the first place. However, this classification scheme has some limitations, also reported in the literature [29], requiring revision and adaptation to the special features of different disease groups and genes. We observed in our study that three ACMG criteria in particular, i.e., PP1 and PP5 (criteria with supporting evidence of pathogenicity) and PM2 (a criterion with moderate evidence of pathogenicity), were sometimes difficult to apply. For example, a novel variant often had not been reported previously in mutation databases such as ClinVar or HGMD or in the literature (PP5 criterion), or no family members were available for segregation analysis (PP1 criterion). Another frequently missing criterion was that variants, although very rare, were not completely absent from population databases, as demanded for the PM2 ACMG criterion [14]. In our view, this is a strong limitation, as pathogenic variants associated with diseases that show variable penetrance, like cardiomyopathies, may appear at a low frequency in control databases (i.e., <0.01%). To overcome these limitations, we extended our classification of VUS with the subclass "VUS favor pathogenic" for variants missing one of these ACMG criteria (PP1, PP5, or PM2) (Supplementary Figure S1).
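As an illustration of the subclassification scheme proposed here, the following Python sketch flags a VUS as "VUS favor pathogenic" when it falls short of "likely pathogenic" solely because one of PP1, PP5, or PM2 is missing. The criteria sets are toy inputs; a real implementation would sit on top of a full ACMG rule engine, which is only assumed here.

```python
# Sketch of the extended VUS subclassification proposed in this study:
# a VUS missing exactly one of PP1, PP5, or PM2 (relative to the criteria
# that would have supported "likely pathogenic") is flagged for follow-up.

SUPPORTING_GAPS = {"PP1", "PP5", "PM2"}

def subclassify_vus(acmg_class, criteria_met, criteria_for_likely_pathogenic):
    """Return the extended classification used in this study's scheme."""
    if acmg_class != "VUS":
        return acmg_class
    missing = criteria_for_likely_pathogenic - criteria_met
    # Exactly one criterion is missing, and it is one of PP1, PP5, or PM2
    if len(missing) == 1 and missing <= SUPPORTING_GAPS:
        return "VUS favor pathogenic"
    return "VUS"

# Example: a novel variant absent from ClinVar/HGMD and the literature,
# i.e., missing only PP5.
print(subclassify_vus(
    "VUS",
    criteria_met={"PM2", "PP1", "PM1", "PP3"},
    criteria_for_likely_pathogenic={"PM2", "PP1", "PP5", "PM1", "PP3"},
))  # -> VUS favor pathogenic
```

Reclassification to likely pathogenic then follows once the missing evidence (e.g., segregation data or a published report) becomes available, as the examples below illustrate.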
A reclassification as "likely pathogenic" was possible in some cases after segregation studies or newly released literature/database reports. For example, a detected FLNC variant in patient #31 with RCM could be reclassified as likely pathogenic because the affected daughter was shown to be a carrier of the detected variant. Another example was a novel variant in the X-chromosomal FHL1 gene found in a patient with HCM, which was initially classified as "VUS favor pathogenic" because segregation data were missing and the variant had not been described in the literature (missing PP5). This variant could be reclassified as likely pathogenic after confirmatory segregation analysis. These examples emphasize the significance of literature/database reports, as well as segregation analysis, for the interpretation of sequence variants. Overall, 7 out of the 11 VOIs lacked the literature/database report that would have been required at the time of the study to classify them as likely pathogenic (Table 2 and Supplementary Table S3). Four variants could not be classified as likely pathogenic because they were not completely absent from population databases, although they were reported only at a very low frequency (<0.01%). For example, a TNNT2 variant in patient #11 (Table 2) could, in our view, be regarded as likely pathogenic, although it was not completely absent from population databases (MAF of ~0.001% in gnomAD). In order to limit the number of reported variants to the most relevant ones, we also subclassified VUS with characteristics implying no clinical relevance (e.g., a relatively high MAF, affected genes not fitting the phenotype, and/or insufficient evidence for a pathogenic role) as "VUS favor benign." These variants are not reported in the manuscript. Compared with previous studies [6,30], the variant detection rate in the whole cohort, as well as in the DCM/LVNC sub-cohort, was higher when taking into account pathogenic and likely pathogenic variants alone, and even more pronounced when adding the subclass "VUS favor pathogenic" (together grouped as VOIs; Figure 3 and Table 1). This higher detection rate reflects the benefit of an extensive analysis approach that included not only the core genes for the respective clinical phenotype but also genes associated with all known cardiomyopathy or arrhythmia phenotypes. This allowed for the detection of variants of interest in genes other than the ones commonly associated with the clinical phenotype. In addition, we extended variant screening of the exome data to genes without a p-OMIM number at the time of testing, based on the latest literature data and mutation databases (e.g., HGMD). Finally, a dedicated specialist for cardiogenetic conditions reviewed the variants in detail at the end of the process. Another factor explaining the increased detection rate could be that the examined patients were recruited in a specialized center for heart failure and cardiogenetics, although the recruitment criteria for genetic testing only excluded patients with known secondary causes of DCM such as ischemic, valvular, hypertensive, and acute inflammatory cardiomyopathy. We did not select for a known family history of cardiomyopathy, arrhythmia, or sudden death. Interestingly, family history also did not show a statistically significant correlation with the variant detection rate (Fisher's exact test: p = 0.857), indicating that family history alone may not be an indicator of genetic etiology.
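The family-history comparison corresponds to a standard Fisher's exact test on a 2×2 contingency table (family history yes/no versus VOI detected yes/no). A minimal sketch follows; the counts are placeholders, since the study's actual contingency table is not reproduced here.

```python
# Fisher's exact test on a hypothetical 2x2 table of family history vs.
# VOI detection; the counts below are placeholders, not the study's data.
from scipy.stats import fisher_exact

table = [
    [12, 8],   # family history positive: VOI detected / not detected
    [16, 12],  # family history negative: VOI detected / not detected
]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")
```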
In accordance with previous studies [9], the majority of VOIs were detected with the disease-specific core gene panels (28/39 VOIs; 71%). However, 21% of VOIs (8/39) were detected only after extending the analysis to genes not directly associated with the patient's phenotype. This mainly concerned patients with a diagnosis of DCM who harbored a VOI in a gene coding for a desmosomal or a Z-disc (filamin C, FLNC) protein, demonstrating that overlapping phenotypes are a common issue in these patients. The benefit of expanded testing for DCM, which is characterized by extensive genetic heterogeneity, is in accordance with previous studies [30]. In our cohort, though, a patient with a diagnosis of HCM carried a likely pathogenic variant in the desmosomal gene DSG2, an association that has thus far not been reported in the literature. It is important to emphasize here that the diagnostic benefit of the second step of our analysis mainly concerned genes with established pathogenic relevance that are primarily associated with a clinical cardiac phenotype different from the original clinical diagnosis. Though some studies have questioned the benefit of broad genetic diagnostics in inherited heart diseases [9] and have suggested limiting genetic testing to the core genes associated with a given phenotype [11], the extensive phenotypic variability of inherited cardiac diseases supports the use of at least an extended gene panel in routine diagnostics. The weakness of targeted gene panel analysis was demonstrated in our study by the cases with the TRIM63 and MYLK3 variants. These genes had not been assigned a p-OMIM number at the time of analysis, and the causal variants would probably have been missed if the analysis had been confined to a gene panel. As genetic causes of cardiomyopathies continue to be discovered, it is important that panel diagnostics are not confined to genes linked with a phenotype in the OMIM database but also include recently discovered genes, drawing on the latest literature reports and current mutation databases (e.g., HGMD). The need for extended genetic testing is also supported by the fact that multiple variants were detected in six patients, and these possibly modified the primary clinical phenotype. For example, in patient #28 with a primary clinical diagnosis of DCM, a pathogenic truncating variant in TTN was accompanied by a novel missense variant in RYR2. Interestingly, the patient showed an increased occurrence of ventricular arrhythmias. This exemplifies the possibility that multiple variants could act as clinically relevant modifiers that are only detectable through expanded genetic analysis. Since the additional diagnostic yield achieved by the analysis of the whole exome data was modest, its utility in routine diagnostics can be questioned. Though the consumable costs of exome sequencing are relatively low, the bioinformatic analysis and interpretation of the numerous variants are still time-consuming processes that require a dedicated specialist in the field of cardiogenetics. For as long as variant databases and bioinformatic tools remain imperfect and some genetic causes remain undiscovered, WES may not be practically applicable in daily clinical routine. However, as the identification of an underlying genetic cause of an inherited cardiac disease has implications for the surveillance of the patient and at-risk relatives, extensive genetic testing should be pursued and ideally combined with a research project.
Moreover, the analysis of whole exome data in research settings offers the possibility of identifying candidate genes and thus unveiling new genetic causes. In our study, we addressed this issue by expanding the analysis in unsolved cases and prioritizing variants according to tissue expression profiles, knockout mouse model databases, and literature research, which led to the identification of variants in candidate genes, most notably STK38. STK38 is considered a strong candidate gene because of the variant type, as well as the population and current experimental data. Of course, the disease association should be thoroughly examined on the basis of functional and segregation studies. However, the chance of identifying these candidate genes would have been missed if the analysis had been limited to a gene panel. Furthermore, the availability of exome data offers the possibility of future reevaluation of the findings in the light of new literature data. However, our study had several limitations. First, the patient cohort was relatively small and heterogeneous, with a wide spectrum of inherited cardiomyopathies and arrhythmias that did not allow for solid statistical conclusions. Furthermore, systematic family assessment was not performed to evaluate the segregation of VUS in all suspected cases, or the relevance of multiple variants to phenotypic severity. Apart from that, in those patients in whom a VOI was detected in the first tier, the analysis did not proceed to the second and third tiers, so some multiple variants could have been missed. Finally, functional studies to prove the disease association of the proposed candidate genes were not part of this study.

Conclusions

Though the diagnostic yield of targeted gene panels can be considered acceptable in a clinical setting, we favor extended genetic testing that makes use of the now more readily available WES, with subsequent thorough and stepwise analysis of the data. Using this approach, many of the challenges of genetic diagnostics in cardiogenetics, such as multiple variants, genetic heterogeneity, and phenotypic overlap, can be addressed. Of course, the challenge of classifying the numerous detected variants increases with the growing number of analyzed genes, highlighting the need to revise current classification schemes. However, as new causal genes for inherited cardiomyopathies are being described and some of the causes still remain undiscovered, it is important to extend the genetic analysis beyond targeted gene panels that contain a limited number of genes with established pathogenic relevance. The implementation of whole exome sequencing offers the possibility of identifying variants in candidate genes, as well as providing data for future analysis in a research setting. In this study, we clearly demonstrated the benefits of this approach in a cohort of 61 patients by describing new genotype-phenotype correlations, variants in poorly characterized genes that would have been missed by a gene panel approach, and variants in novel candidate genes.

Supplementary Materials: The following are available online at http://www.mdpi.com/2077-0383/9/7/2168/s1, Table S1: Disease-specific gene panels, Table S2: Extended gene panel for cardiomyopathies and primary arrhythmia syndromes, Table S3: Detected variants in genes without a p-OMIM number at the time of testing, Figure S1: Variant classification scheme proposed in this study.
Figure S2: Variants of interest according to molecular function in different patient sub-cohorts, Figure S3: Pedigree of the consanguineous family with the TRIM63 variant.
Aglycemic growth enhances carbohydrate metabolism and induces sensitivity to menadione in cultured tumor-derived cells

Hepatocellular carcinoma (HCC) is the most prevalent form of liver malignancy and carries a poor prognosis due to the late presentation of symptoms. Treatment of late-stage HCC relies heavily on chemotherapeutics, many of which target cellular energy metabolism. A key platform for testing candidate chemotherapeutic compounds is the intrahepatic orthotopic xenograft (IOX) model in rodents. Translational efficacy from the IOX model to clinical use is limited (in part) by variation in the metabolic phenotypes of the tumor-derived cells that can be induced by selective adaptation to subculture conditions. In this study, a detailed multilevel systems approach combining microscopy, respirometry, potentiometry, and extracellular flux analysis (EFA) was utilized to examine metabolic adaptations that occur under aglycemic growth media conditions in HCC-derived (HEPG2) cells. We hypothesized that aglycemic growth would result in adaptive "aerobic poise" characterized by enhanced capacity for oxidative phosphorylation over a range of physiological energetic demand states. Aglycemic growth did not invoke adaptive changes in mitochondrial content, network complexity, or intrinsic functional capacity/efficiency. In intact cells, aglycemic growth markedly enhanced fermentative glycolytic substrate-level phosphorylation during glucose refeeding and enhanced the responsiveness of both fermentation and oxidative phosphorylation to stimulated energy demand. Additionally, aglycemic growth induced sensitivity of HEPG2 cells to the provitamin menadione at a 25-fold lower dose compared to control cells. These findings indicate that growth media conditions have substantial effects on the energy metabolism of subcultured tumor-derived cells, which may have significant implications for chemotherapeutic sensitivity during incorporation in IOX testing panels. Additionally, the metabolic phenotyping approach used in this study provides a practical workflow that can be incorporated with IOX screening practices to aid in deciphering the metabolic underpinnings of chemotherapeutic drug sensitivity.

Background

Hepatocellular carcinoma (HCC) is the most common form of liver malignancy, comprising the majority of primary liver cancer incidence worldwide [1]. Because HCC is a comorbidity of fatty liver disease and diabetes, incidence is expected to rise as the population of patients with metabolic disease grows globally [1]. Current treatment methods for HCC rely heavily on liver resection and chemotherapeutic modalities [1]. However, the efficacy of these treatments depends on early detection of the disease, which often does not become symptomatic until late stages [2]. Though chemotherapeutic drugs often have multiple associated mechanisms of action, metabolic liabilities are commonly suggested to be important for their anti-cancer effects. For example, sorafenib, doxorubicin, and etoposide have all been shown to alter mitochondrial respiratory function in cancer cells [3-5]. Additionally, many new drugs (e.g., sorafenib derivatives), which are also likely to have metabolic implications, are being developed for use in late-stage HCC and other cancers [1,2,6,7]. A common platform for testing chemotherapeutic strategies for HCC treatment is the rodent intrahepatic orthotopic xenograft (IOX) model [8-10].
Unfortunately, promising candidate drugs identified in rodent models often fail in follow-up clinical trials [8]. Poor translation from preclinical testing to clinical efficacy is a complex phenomenon with many confounding variables. However, one potentially important factor is variation in the underlying metabolic phenotypes of the cultured tumor-derived cells that are used in IOX testing panels [8]. Tumor-derived cell lines have variable degrees of metabolic flexibility when challenged with toxicants (e.g., 2-deoxy-glucose, 3-bromo-pyruvic acid, dinitrophenol, metformin) [11-14]. To compensate for these effects, IOX studies often use several different cell lines, which has resulted in reports of variable drug efficacy that depends (in part) on the metabolic phenotype of the source cell line [8]. Though tumor-derived cell metabolic phenotype can be influenced by many biological variables, one particularly important factor is selective adaptation to growth conditions during subculture [15,16], the effects of which are readily mediated by a multitude of commercially available media formulations and supplements [16]. Selective growth conditions have been used to induce metabolic sensitivities [6,17,18] and to screen for inborn errors of metabolism [15]. However, metabolic adaptations that arise in response to growth conditions are rarely examined explicitly, thus limiting the quality of the IOX model and complicating the interpretation of studies designed to identify metabolic toxicants. In this study, a detailed multilevel systems approach was used to define specific bioenergetic adaptations to aglycemic growth conditions in HEPG2 cells. The HEPG2 tumor-derived cell line was selected because it is commonly included in IOX testing panels and is supported by an extensive body of comparative literature [13,19-28]. The approach encompasses parallel assessment at the organelle and whole-cell levels, mitochondrial structural and functional measurements, selective substrate refeeding to target specific modes of central carbon metabolism, and comparison of metabolic fluxes under different states of energetic demand. Based on the observations of previous studies [6,17,18], it was hypothesized that aglycemic growth conditions would facilitate compensatory enhancement of oxidative metabolism and repression of aerobic fermentative metabolism compared to HEPG2 cells subcultured in glycemic growth conditions.

HEPG2 cell subculture

Human hepatocellular carcinoma cells (HEPG2) were obtained from ATCC (Manassas, VA, USA). Cells were cultured for three passages (three population doubling periods) in DMEM-H-GLC. To induce adaptation to aglycemic conditions, cells were switched to DMEM-H-GAL and grown for an additional three passages. Control cells were grown in DMEM-H-GLC concomitantly. The two growth condition-adapted cell lines are denoted HEPG2-Glc (glucose) and HEPG2-Gal (galactose). Cells were then frozen in working stocks and stored in a liquid nitrogen dewar until thawed for experiments. All experiments were performed using cells at passage six from those initially obtained from ATCC.

Laser scanning confocal microscopy

Cells were seeded at 8 × 10^5 cells/well in #1.5 glass-bottom 6-well culture dishes (MatTek, Ashland, MA, USA) and were grown for 48 h in DMEM-GLC or DMEM-GAL as appropriate. One hour prior to imaging, cells were washed and incubated with DMEM-GLC or DMEM-GAL containing 2 μM Hoechst and 250 nM MitoTracker Green-FM (Molecular Probes, Eugene, OR).
Cells were then rinsed in 1X PBS and changed to DMEM-B-SF containing 50 nM tetramethyl rhodamine methyl ester (TMRM) and were incubated in a CO2-free incubator for 1 h. All imaging was performed using an Olympus FV1000 laser scanning confocal microscope (LSCM) with an on-stage incubator at 37°C. Acquisition software was Olympus FluoView FSW (V4.2). The objective was a ×60 oil immersion lens (NA = 1.35, Olympus Plan Apochromat UPLSAPO60X(F)). Images were 800 × 800 pixels with a 2 μs/pixel dwell time, sequential scan mode, and ×2.5 digital zoom. MitoTracker Green-FM was excited using the 488-nm line of a multiline argon laser; emission was filtered using a 560-nm dichroic mirror and a 505-540-nm barrier filter. TMRM was excited using a 559-nm laser diode; emission was filtered using a 575-675-nm barrier filter. Zero detector offset was used for all images, and gain at the detectors was kept the same for all imaging. The pinhole aperture diameter was set to 105 μm (1 Airy disc).

Mitochondrial volumetric analysis

TMRM channel images were selected for volumetric analysis because of the dye's higher signal-to-background ratio (compared with MitoTracker Green-FM) and were analyzed using ImageJ [29]. Sixteen-bit images were made into a composite. Circular ROIs were manually selected using the ROI manager plugin. Images were then decomposed into separate 16-bit image stacks, leaving the ROI positions intact. A Huang auto threshold was used for automated selection of signal for all three channels. Following threshold application, each signal was measured using the multi-measure feature. Only whole cells were analyzed (i.e., cells on the edges of the FOV were excluded). The relevant signal volumes were calculated as

$V = \sum_i A_i \times \frac{Z}{N}$

where $A$ is the signal-positive area selected using a Huang auto threshold (μm²), $Z$ is the optical section thickness (axial resolution; μm), and $N$ is the number of steps within each optical section (i.e., the axial resolution divided by the step size). The latter operation is necessary to correct for oversampling of the signal volumes.

Mitochondrial network complexity

Analysis was performed on images from the TMRM channel, using a method closely adapted from two previous studies [30,31]. This method was shown to be robust to variation in thresholding values during segmentation and was also sensitive to experimentally induced changes in mitochondrial network architecture [30,31]. TMRM channel images were selected for network complexity analysis because of the dye's higher signal-to-background ratio (compared with MitoTracker Green-FM). Briefly, spatial resolution was determined (and the point spread function measured) using subresolution fluorescent beads (PS Speck; ThermoFisher, Waltham, MA, USA). Curve fitting was performed using the MetroloJ plugin in ImageJ. Image stacks were trimmed to an ~2.5-μm-thick z-stack range from the middle of the cells. Image deconvolution was performed using the Richardson-Lucy algorithm (N = 100 iterations) in the DeconvolutionLab2 plugin in ImageJ [32]. Summed intensity Z-projections were made from the deconvolved image stacks, and a custom top-hat spatial filter [31] was applied to enhance the pixel intensities between background and signal. Network complexity parameters were determined using Matlab (Version R2020a).
Images were binarized (im2bw routine; standard threshold value = 0.75) and skeletonized (bwmorph routine), and the connectivity of clusters was determined using a nearest-neighbor search routine (bwconncomp; k = 8); clusters less than three pixels in size were removed to reduce noise (bwareaopen), and cluster mass in pixels was determined for all clusters in each cell (regionprops routine). The probability of observing a cluster of mass M was determined from the relative frequency distributions of cluster masses in each cell analyzed. Cluster entropy is defined as previously described [30]:

$S = -\sum_i P_i(M) \ln P_i(M)$

where $P_i(M)$ is the i-th relative frequency of a cluster of mass M in pixels. RandClust and SingleClust entropy values were determined by simulating a cell in which an equal frequency of cluster masses was observed among all possible values, or in which >99% of cluster masses fell within a single value, respectively.

Electrical potential across the inner mitochondrial membrane (ΔΨm) in live intact cells

ΔΨm was determined using non-quenching concentrations of tetramethylrhodamine methyl ester (TMRM) [33-35]. In non-quenching mode, low concentrations of TMRM equilibrate directly with variation in ΔΨm, as opposed to quench mode, in which accumulating TMRM forms aggregates that reduce signal intensity, which is positively correlated with ΔΨm [33]. As an additional control, carbonyl cyanide 4-(trifluoromethoxy)phenylhydrazone (FCCP; 1 μM) was used to fully depolarize the membrane potential. Because TMRM is used in the non-quenched mode, FCCP addition leads to a decrease in fluorescence such that it is similar to background (SFigure 5). Images were analyzed using ImageJ [29]. Out-of-field images from the axial dimension were trimmed from each stack. Image stacks were background-corrected using darkfield images obtained on each day of imaging. The median background intensity ($I_B$) was measured by inverting a Huang auto threshold for each image stack. The estimated electrical potential difference relative to the median value of the extramitochondrial signal was then mapped onto each pixel in each image using a math macro:

$\Delta\Psi = -\frac{RT}{zF} \ln\left(\frac{v}{I_B}\right)$

where $R$ is the universal gas constant (J/(mol·K)), $T$ is the absolute temperature (310.15 K), $z$ is the +1 charge of TMRM, $F$ is Faraday's constant (C/mol), and $v$ is the gray value of the designated pixel. A Huang auto threshold was then applied to each image in the stack, and median and maximum values for each threshold were obtained. This measurement method reduced biasing of the total measured signal toward low-intensity pixel values. The median and maximum values for the cells in each stack were averaged for data representation.

Mitochondrial isolation

Differential centrifugation was employed to prepare isolated mitochondria from cultured cells. The buffers used for all isolations were as follows: buffer A: MOPS (50 mM; pH = 7.1), KCl (100 mM), EGTA (1 mM), MgSO4 (5 mM); buffer B: buffer A supplemented with bovine serum albumin (BSA; 2 g/L). Cells were trypsinized at 37°C for ~5 min; growth media was then added to stop the trypsin reaction, and the cells were centrifuged at 300×g at room temperature. Growth media was then aspirated, and the pellet was resuspended in phosphate-buffered saline (PBS) and spun again at 300×g. The pellet was then resuspended in ice-cold buffer B, and the cells were homogenized with a Teflon pestle and borosilicate glass vessel for 40 passes. The homogenate was centrifuged at 800×g for 10 min at 4°C.
The supernatant was then pipetted into a separate tube, and the pellet was again resuspended in buffer B, homogenized, and centrifuged at 800×g. This process was repeated a total of 3 times, and the supernatants from each of the 3 rounds of homogenization were pooled together and centrifuged at 10,000×g for 10 min at 4°C. The mitochondrial pellet was then washed in buffer A, transferred to a microcentrifuge tube, and centrifuged again at 10,000×g for 10 min at 4°C. Buffer A was aspirated from each tube, and the final mitochondrial pellets were suspended in 100-150 μL of buffer A. Protein content was determined via the Pierce BCA protein assay. The following substrate conditions were tested for all functional assays except the substrate preference assay: pyruvate/malate (Pyr/M; 5/2 mM) and glutamate/malate (G/M; 10/2 mM). Several important quality assessment measures should be emphasized: (1) damage to the mitochondrial outer membrane was assessed by determining the cytochrome c flux control efficiency [36], and 10 μM cytochrome c was included in the assays to ensure that cytochrome c was not limiting to respiration; (2) citrate synthase activity was assessed to ensure that the preparations contained similarly pure mitochondrial fractions (described in more detail in the "Citrate synthase activity" section); (3) respiratory control ratios [36], as well as sensitivity to controlled changes in adenylate concentration, were determined using the creatine kinase free energy clamp (described in more detail in the "Force flow" section).

Citrate synthase activity

Citrate synthase (CS) activity was determined using a colorimetric plate-based assay in which CoA-SH, a byproduct formed by the CS-mediated reaction of oxaloacetate and acetyl-CoA, reacts with 5′,5′-dithiobis-2-nitrobenzoic acid (DTNB) to form TNB (OD: 412 nm). Assay buffer consisted of buffer C (105 mM potassium-MES, 30 mM KCl, 10 mM KH2PO4, 5 mM MgCl2, and 1 mM EGTA; pH = 7.2) supplemented with DTNB (0.2 mM) and acetyl-CoA (0.5 mM). A 96-well round-bottom plate was loaded with assay buffer (200 μL/well), the permeabilizing agent alamethicin (0.03 mg/mL) and isolated mitochondria (10 μg/well) were added, and the plate was then incubated at 37°C for 5 min to deplete endogenous substrates. The assay was initiated by the addition of oxaloacetate (1 mM) to sample wells, with absorbance at 412 nm recorded every 30 s for 20 min. The mitochondrial suspension was also added to one control well per sample to account for nonspecific activity, which was later subtracted from the sample rate. CS activity was determined using the Beer-Lambert law and the molar absorption coefficient of TNB (13.6 mM⁻¹ cm⁻¹).

Substrate preference assay

Differences in substrate preference between the two feeding conditions were assessed using the steady-state O2 consumption rates (JO2) following sequential additions of different carbon substrates and specific inhibitors, in the presence of a saturating steady-state ADP concentration of 300 μM. This experiment was conducted in buffer C supplemented with cytochrome c (CytC; 10 μM), HK (1 U/mL), and glucose (5 mM). Mitochondria (0.05 mg/mL) were added to the buffer, followed by ADP and pyruvate/malate (1/2 mM). The pyruvate carrier inhibitor UK5099 was then added to prevent pyruvate oxidation, followed by the addition of glutamate (10 mM). NADH-linked respiration was then inhibited using rotenone (0.05 μM), and respiration was stimulated using glycerol-3-phosphate (G3P; 10 mM).
Succinate (10 mM) was then added to stimulate complex II-linked respiration. Complex III was then inhibited by the addition of antimycin A (0.005 μM), and respiratory flux through complex IV was stimulated through the addition of the electron donor TMPD dissolved in 2 M ascorbate (0.5 μM).

Force flow

Steady-state JO2 was determined within individual force-flow experiments using a modified version of the creatine kinase energetic clamp technique [37-39]. This assay is based on the calculation of the free energy of adenosine triphosphate (ATP) hydrolysis (ΔG′ATP, written throughout the manuscript as ΔG_ATP) from known amounts of Cr, phosphocreatine (PCr), and ATP in the presence of excess amounts of creatine kinase (CK), using the equilibrium constant for the CK reaction (i.e., K_CK). ΔG_ATP was calculated according to the following formula:

$\Delta G_{ATP} = \Delta G'^{\circ}_{ATP} + RT \ln\!\left(\frac{[\mathrm{ADP}][\mathrm{P_i}]}{[\mathrm{ATP}]}\right)$

where ΔG′°ATP is the standard apparent transformed Gibbs energy (under a specified pH, ionic strength, free magnesium, and pressure), R is the gas constant (8.3145 J/(mol·K)), and T is the temperature in Kelvin (310.15). For complete details regarding the calculation of ΔG′ATP at each titration point, see [40]. To begin, isolated mitochondria (0.05 mg/mL) were added to buffer C supplemented with CytC (10 μM), followed by the addition of respiratory substrates to bring respiration to state 4. The CK clamp was then initiated by the addition of ATP (5 mM), PCr (1 mM), and CK (20 U/mL). Sequential additions of PCr to 6, 15, and 21 mM were then performed to gradually slow JO2 back toward baseline. After the final PCr addition, the uncoupler FCCP was titrated (0.5, 1, 2 μM) to stimulate respiration back up toward maximal JO2. Plotting the calculated ΔG_ATP against the corresponding JO2 depicts a force-flow relationship, the slope of which represents the conductance/sensitivity of the entire respiratory system under specified substrate constraints, as previously described [37,38,40].

Mitochondrial membrane potential (ΔΨ) and NAD(P)H/NAD(P)+ redox

Fluorescent determination of ΔΨ and NAD(P)H/NAD(P)+ throughout the force-flow experiment was carried out in parallel using the QM-400. Determination of ΔΨ via TMRM was done by recording the fluorescence ratio of the following excitation/emission parameters: Ex/Em, (576/590)/(552/590) [34]. A KCl standard curve was then used to convert the 576/552 ratio to millivolts. The KCl standard curve was performed in the presence of valinomycin, as described previously [40]. In this protocol, isolated mitochondria energized with succinate/rotenone (Succ/R; 10 mM/0.05 μM) were incubated in a potassium-free buffer in the presence of valinomycin, a potassium-specific ionophore. ΔΨ can be reasonably estimated by applying the Nernst equation to the buffer ion concentrations resulting from sequential additions of KCl, assuming a starting matrix potassium concentration of 120 mM [40]. NAD(P)H excitation/emission parameters were 350/450. The buffer for all assays was buffer C, supplemented with CytC (10 μM) and TMRM (0.2 μM). To begin, isolated mitochondria (0.05 mg/mL) were added to the assay buffer, followed by the addition of respiratory substrates, CK clamp components, and then sequential PCr additions to 6, 15, and 21 mM as in the respiratory experiment. Following the final PCr addition, cyanide (10 mM) was added to induce a state of 100% reduction within the NAD(P)H/NAD(P)+ couple.
The fluorescence (Ex/Em, 350/450) signal recorded in the presence of mitochondria alone, without respiratory substrates, was used as the 0% reduction state for the NAD(P)H/NAD(P)+ couple. NAD(P)H/NAD(P)+ during the entire experiment was expressed as a percentage of reduction according to the following formula:

$\%\ \mathrm{reduction} = \frac{F - F_{0\%}}{F_{100\%} - F_{0\%}} \times 100$

where $F$ is the recorded fluorescence signal and $F_{0\%}$ and $F_{100\%}$ are the signals at the 0% and 100% reduction states, respectively.

Measurement of ATP production rates (JATP) and ATP/O ratios

Parallel respiration and fluorometric experiments were carried out in order to generate an ATP/O ratio. Both sets of experiments were conducted in buffer C supplemented with CytC (10 μM), Ap5A (0.15 μM), hexokinase (HK; 1 U/mL), glucose (5 mM), glucose-6-phosphate dehydrogenase (2 U/mL), and NADP+ (4 mM). The rate of change in NAD(P)H fluorescence (Ex/Em, 350/450) in the QM-400 experiments was equated to the rate of ATP production by the mitochondria (JATP), as previously described [41]. Experiments began with the addition of mitochondria (0.05 mg/mL for both) to the modified buffer C, followed by the addition of respiratory substrates. ADP was then titrated in to 10 and 500 μM. Fluorometry experiments were then ended by the addition of oligomycin (oligo; 0.02 μM). Respiration experiments continued with an FCCP titration (0.5, 1, 2 μM) following the final ADP addition. The ATP/O ratio was then calculated as the ratio of JATP to the steady-state JO2 at each addition point, divided by 2.

Live cell extracellular flux analysis with substrate refeeding

Extracellular oxygen flux (JO2) and acidification rate (ECAR) were measured using a Seahorse XF24e flux analyzer (Agilent Technologies, Santa Clara, CA, USA). Cells were seeded at 2 × 10^5 cells/well 48 h prior to running assays. Seeded cells were rinsed with 1X PBS, and the media was changed to DMEM-U-SF (pH 7.4 at 37°C). Cells were then incubated for one hour in a CO2-free incubator. The flux analysis protocol was as follows: basal OCR and ECAR were measured in unbuffered DMEM, followed by carbon fuel substrate refeeding (25 mM glucose, 4 mM glutamine, or 1 mM pyruvate at standardized volumes, with a 3-min mixing step prior to measurement). Oligomycin A (5 μM) was then injected to inhibit ATP-coupled respiration, FCCP (1 μM) was injected to increase H+ conductance across the inner mitochondrial membrane, and rotenone and antimycin A (5 μM) were injected to inhibit electron transport system flux at complexes I and III, respectively. Immediately following the experimental protocol, plates were removed, and cells were lysed in radioimmunoprecipitation (RIPA) buffer at 4°C. Protein concentration in each well was determined using a bicinchoninic acid (BCA) protein assay (Thermo Fisher).

ECAR unit conversion and JATP_Glyc rate estimation

ECAR measurements are often interpreted as direct measurements of lactic acid production derived from the reduction of pyruvate [42]. However, these units are difficult to interpret in the context of cellular metabolism because they are not stoichiometrically linked with lactic acid production rates. Here, the method described by Mookerjee et al. was used to convert the ECAR (pH) measurements (mpH · min⁻¹ · μg protein⁻¹) to a proton efflux rate (JH+) [43]. The buffering capacity (C; mpH · pmol⁻¹ H+) of the DMEM-U-SF medium was measured at 37°C in bulk solution by titrating known amounts of hydrochloric acid (HCl) and was determined to be 0.094 mpH · pmol⁻¹ H+. The unit conversion was performed as follows:

$J\mathrm{H}^{+} = \frac{\mathrm{ECAR}}{C} \qquad (6)$

To estimate the ATP production rate (JATP_Glyc) due to glycolytic flux, the JH+ described in Eq. (6) was multiplied by the stoichiometric coefficient of 1 pmol ATP per pmol lactic acid-derived H+.
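A minimal sketch of this conversion, using the buffering capacity measured here (0.094 mpH · pmol⁻¹ H+) and the 1:1 ATP-to-lactate-derived-H+ stoichiometry, is shown below; the example input value is illustrative.

```python
# Sketch of the ECAR unit conversion (Eq. 6) and the glycolytic ATP rate
# estimate described above. The example input value is illustrative.

BUFFER_CAPACITY = 0.094  # mpH per pmol H+, measured for DMEM-U-SF at 37 °C

def proton_efflux_rate(ecar_mpH_per_min_per_ug):
    """Convert ECAR (mpH·min^-1·ug protein^-1) to JH+ (pmol H+·min^-1·ug^-1)."""
    return ecar_mpH_per_min_per_ug / BUFFER_CAPACITY

def jatp_glyc(ecar_mpH_per_min_per_ug):
    """Estimate glycolytic ATP production: 1 pmol ATP per pmol lactate-derived H+."""
    return 1.0 * proton_efflux_rate(ecar_mpH_per_min_per_ug)

print(jatp_glyc(1.2))  # ~12.8 pmol ATP·min^-1·ug protein^-1
```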
Oxygen consumption rate correction and JATP_OxPhos rate estimation

The mitochondria-specific oxygen consumption rate (JO2,Mito) was determined as

$J\mathrm{O}_{2,Mito} = J\mathrm{O}_{2,M} - J\mathrm{O}_{2,Rot/AmA}$

where JO2,M is the measured JO2 and JO2,Rot/AmA is the rate in the presence of 5 μM rotenone and 5 μM antimycin A. This corrects for non-mitochondrial OCR. The apparent coupling efficiency (Q) was determined using the fractional change between the basal mitochondria-specific JO2 and the JO2 in the presence of the FoF1 ATP synthase inhibitor oligomycin A (5 μM). To estimate ATP production rates due to mitochondrial respiration (JATP_OxPhos), JO2 values were multiplied by the measured coupling efficiency and the estimated ATP/O ratios provided in Table 1, similar to a previously described method [44]. Ketogenesis was assumed not to play a large role in HEPG2 cell carbon flux due to reports of low expression levels of ketogenic enzymes in HEPG2 cells, as well as the limited amounts of ketogenic amino acids and the absence of free fatty acids in the media formulation [45,46].

Stimulation of acute metabolic demand

Monensin is a polyether antibiotic and ionophore that can be used to partially depolarize the plasma membrane sodium potential, resulting in an energy-dissipating cycle between the freely diffusing ionophore and the Na+/K+ ATPase [47,48]. Extracellular oxygen flux (JO2) and acidification rate (ECAR) were measured as described above. The flux analysis protocol was as follows: basal OCR and ECAR were measured in unbuffered DMEM, followed by carbon fuel substrate refeeding (25 mM glucose, 4 mM glutamine, or 1 mM pyruvate at standardized volumes); 20 μM monensin A was then injected to impose an acute energetic demand, 0.1 mM ouabain was injected to inhibit the Na+/K+ ATPase, and rotenone and antimycin A (5 μM) were injected to inhibit electron transport system flux at complexes I and III, respectively. Immediately following the experimental protocol, plates were removed, and cells were lysed in radioimmunoprecipitation (RIPA) buffer at 4°C. Protein concentration in each well was determined using a bicinchoninic acid (BCA) protein assay (Thermo Fisher). JH+ and ATP production rates were determined as described above. Stimulation-specific ATP production rates were calculated as the change from the monensin-induced rate, corrected for the ouabain rate:

$\Delta J\mathrm{ATP}_{stim} = J\mathrm{ATP}_{Mon} - J\mathrm{ATP}_{Ouab}$

Cytotoxicity assays

Cells were seeded at 1.5 × 10^5 cells/well in plastic 96-well culture dishes (MatTek, Ashland, MA, USA) and were grown for 48 h in DMEM-GLC or DMEM-GAL as appropriate. Cells were switched to DMEM-L-GLC containing 5-fold serial dilutions of the following compounds: 2-deoxy-glucose, dimethyl biguanide (metformin), oligomycin, or menadione. Following a 24-h incubation period, cells were rinsed and switched to compound-free DMEM-L-GLC. Cell viability was determined by quantitating the rate of reduction of resazurin dye to its fluorescent product resorufin, which is proportional to diaphorase activity in live cells [49]. The absolute rate of dye reduction was determined by calibrating to a standard curve derived by reacting known concentrations of dye (0-80 μM) with saturating (4 mM) ascorbate. The 560/590-nm fluorescence was measured using a microtiter plate reader (Biotek, Winooski, VT). Fluorescence was measured over a 2-h period at 37°C. Dye reduction rates were linear for all samples.
All chemical compounds and relevant metabolites were tested with the dye to ensure that false-positive dye reduction would not occur under the measured conditions. Assays were repeated in duplicate on three separate occasions for each treatment/group (N = 6).

Statistics

Data were analyzed using GraphPad Prism (Version 8.4.2). Data are represented as mean ± SEM. For univariate designs, means were compared using a two-tailed Student's t test. For multivariate designs, means were compared using two-way ANOVA with Sidak's multiple comparison test. The assumption of equal variance was tested using a Brown-Forsythe test. P values < 0.05 were considered statistically significant.

Results

Aglycemic growth adaptation did not result in changes in mitochondrial volume or network complexity

Substitution of media glucose with galactose (the aglycemic condition) most likely exerts its effects by limiting glucose availability to central carbon metabolism [50,51]. Conversion of galactose to glucose through the Leloir pathway requires a uridine diphosphate glucose substrate pool for the galactose-1-phosphate uridyltransferase reaction, which is most likely the limiting reaction in this model due to the chronic absence of exogenous glucose (Fig. 1a). In the present study, we performed metabolic phenotyping in isolated mitochondria and live cells from high-galactose-adapted (HEPG2-Gal) cells and compared several metabolic outcome measures with high-glucose-adapted (HEPG2-Glc) control cells (Fig. 1b). Adaptation to aglycemic growth conditions may lead to variation in mitochondrial volume, which would likely confound comparative metabolic measurements. To address this possibility, mitochondrial volume (as well as reference nuclear volume) was measured in both HEPG2-Gal and HEPG2-Glc cells. Cells from both growth conditions had comparable nuclear and mitochondrial volumes, as well as mito-to-nuclear volume ratios (Fig. 1c, d). Additionally, citrate synthase enzyme activity did not differ between isolated mitochondrial fractions from the two growth conditions (SFigure 2A). Together, these findings indicate that mitochondrial content did not differ between the groups. Conversely, studies have indicated that environmental or genetically induced changes in ETS flux result in adaptive structural reorganization of the mitochondrial network [31,52]. To investigate the possibility that similar morphological effects are induced under aglycemic growth conditions, we quantitated mitochondrial network complexity in live HEPG2-Glc and HEPG2-Gal cells using laser scanning confocal microscopy. The size distribution of connected mitochondrial clusters within samples of individual cells from both growth conditions was characterized by a relatively high probability (P(M)) of small clusters and a relatively low probability of large clusters (Fig. 1e). Mean cluster number and mass were similar among cells from both growth conditions (SFigure 1A-D). As a way of more directly summarizing network complexity for comparison, cluster entropy was also determined using an approach similar to a previously reported method [30] (see the sketch below). A high entropy value would indicate that cluster masses are uniformly distributed among their possible values (maximal value indicated by RandClust) (Fig. 1f). A low entropy value would indicate that cluster masses are skewed toward a small number of possible values (minimal value indicated by SingleClust) (Fig. 1f).
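For reference, the cluster-entropy summary reduces to the Shannon form given in the Methods; a minimal numpy sketch with toy cluster masses is shown below.

```python
# Minimal sketch of the cluster-entropy calculation: S = -sum_i P_i(M) ln P_i(M),
# where P_i(M) is the relative frequency of clusters of mass M (in pixels)
# within one cell. The cluster masses below are toy values.
import numpy as np

def cluster_entropy(cluster_masses):
    """Shannon entropy of the cluster-mass distribution for one cell."""
    _, counts = np.unique(np.asarray(cluster_masses), return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

# Spread-out masses approach the RandClust limit; a single repeated mass
# gives the SingleClust limit of zero.
print(cluster_entropy([3, 5, 5, 8, 13, 21]))  # > 0
print(cluster_entropy([7, 7, 7, 7]))          # 0.0
```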
The observed cluster mass distributions were characterized by relatively high entropy values, indicating that cluster masses were spread out among their possible values, and no differences in cluster entropy were observed between cells from the two growth conditions, indicating a similar degree of network complexity (Fig. 1f).

Aglycemic growth conditions do not enhance intrinsic mitochondrial oxidative capacity or efficiency

HEPG2 cells grown in galactose have been previously described as "aerobically poised," characterized by an enhanced basal rate of oxygen consumption and a decreased extracellular acidification rate [6,17,18]. However, it is unclear whether galactose induces adaptations in intrinsic mitochondrial function or other, more complex whole-cell adaptations. To address this question, mitochondria were isolated from HEPG2-Glc and HEPG2-Gal cells and subjected to detailed in vitro phenotyping of mitochondrial function. Mean citrate synthase activity did not differ between mitochondrial preparations isolated from either growth condition, indicating that the preparations had similar purity (SFigure 2A). Maximal NADH-, FADH2-, or CytC-linked ADP-stimulated respiration did not differ in mitochondria isolated from the HEPG2 cells grown under the two different conditions (Fig. 2a). Respiration and electrical potential across the inner mitochondrial membrane (ΔΨm) were assessed in a manner that more closely models the oxidative phosphorylation (OxPhos) system in vivo by clamping extramitochondrial ATP/ADP ratios (i.e., the free energy of ATP hydrolysis; ΔG_ATP) over a broad range of respiratory demand states (Fig. 2b). JO2 responses were not different between HEPG2-Glc and HEPG2-Gal mitochondria during respiration supported by pyruvate/malate (Fig. 2c, d) or glutamate/malate (Fig. 2f, g). Interestingly, HEPG2-Glc mitochondria were slightly more polarized (i.e., higher ΔΨm values) than HEPG2-Gal mitochondria over the respiratory demand range under both fuel conditions (Fig. 2d, g). ATP production rates (JATP) were assessed using a hexokinase clamp system [41] at two different steady-state rates of ADP-stimulated respiration. JATP was not different between mitochondria from cells grown in glucose versus galactose under either pyruvate/malate-supported (Fig. 2e) or glutamate/malate-supported (Fig. 2h) conditions. ATP/O stoichiometric ratios were also not different under any of the tested conditions (Table 1). Finally, NAD(P)+/NAD(P)H autofluorescence, a parameter reflecting the redox state driven by matrix dehydrogenase enzymes, also did not differ between growth conditions over the same range of ΔG_ATP (SFigure 2B,C). Given the similarities in indices of fuel supply (NAD(P)+/NAD(P)H) and OxPhos efficiency (JATP and ATP/O) between growth conditions, the origin and biological significance of the slightly lower ΔΨm in mitochondria from cells grown in galactose is unclear.

Fig. 2 (legend): Intrinsic mitochondrial function was not affected by aglycemic growth. a Maximal respiratory capacities determined using a substrate/inhibitor titration protocol in isolated mitochondria from HEPG2-Glc and HEPG2-Gal cells. ADP concentration was clamped at 300 μM using a hexokinase-coupled system. b Schematic of the creatine kinase free energy (ΔG_ATP) clamp system used in c, d, f, and g. c Pyruvate/malate-supported respiration over the clamped ΔG_ATP range.
d Pyruvate/malate-supported electrical potential across the inner mitochondrial membrane (ΔΨm) measured using tetramethyl rhodamine methyl ester (TMRM) over the clamped ΔG_ATP range. e Pyruvate/malate-supported ATP production rate (JATP) determined fluorometrically using a coupled hexokinase/glucose-6-phosphate dehydrogenase system under clamped ADP concentrations. f-h Glutamate/malate-supported JO2, ΔΨm, and JATP. N = 7/treatment/group. Data are mean ± SEM. Means were compared using Student's t test (a) and a two-way ANOVA with Sidak's multiple comparison test (c-h). *p < 0.05.

Aglycemic growth conditions result in adaptive changes to glucose metabolism

To further explore basal metabolic adaptations to aglycemic growth, oxygen consumption and acidification rates were measured in intact cells. To account for the non-respiratory oxygen consumption rate (JO2), measured rates were corrected to those obtained in the presence of antimycin A (cytochrome bc1 complex inhibitor) plus rotenone (NADH oxidase inhibitor). Additionally, extracellular acidification rates (ECAR) were converted to more useful units (proton efflux rate; JH+) using empirically derived buffer capacity measurements. Finally, compartmentalized flux patterns through central carbon metabolism were determined by separately refeeding exogenous glucose, pyruvate, or glutamine. This strategy was combined with the use of inhibitors to limit mitochondria-specific respiration and induce compensatory flux through glycolysis. Glycolysis produces net ATP equivalents via substrate-level phosphorylation coupled with pyruvate fermentation to lactate and/or via oxidative phosphorylation coupled with the oxidation of pyruvate in the TCA cycle (Fig. 1a). JATP was estimated by combining empirical measurements with stoichiometric coefficients (Table 2), as previously described [44]. Notably, the pattern of estimated JATP partitioning between fermentation and OxPhos differed substantially between HEPG2-Glc and HEPG2-Gal cells following glucose refeeding (Fig. 3a). However, total JATP did not differ between the two growth conditions, indicating a similar rate of metabolic demand (SFigure 3A). Both HEPG2-Glc and HEPG2-Gal cells exhibited increased JH+ and decreased JO2 in response to glucose refeeding (SFigure 3A, B); however, this effect was more pronounced in HEPG2-Gal cells. HEPG2-Gal cells also exhibited a greater compensatory increase in JH+ in response to oligomycin (Fig. 3b, SFigure 3C). Interestingly, this effect was accompanied by a substantial reduction in maximal uncoupled respiration (relative to basal), indicating that mitochondrial respiration is repressed in response to glucose refeeding (Fig. 3c, SFigure 3D). Exogenous pyruvate can be reduced to lactate by cytosolic lactate dehydrogenase or oxidized in the TCA cycle; however, net ATP equivalents are only produced in the latter instance (Fig. 1a). The pattern of estimated JATP partitioning between fermentation and OxPhos did not differ substantially between HEPG2-Glc and HEPG2-Gal cells following pyruvate refeeding (Fig. 3d). Total JATP did not differ between the two growth conditions, again indicating a similar rate of metabolic demand (SFigure 3D). HEPG2-Gal cells exhibited no substantial changes in JH+ or JO2 in response to pyruvate refeeding (SFigure 3E). No compensatory increases in JH+ (from basal) in response to oligomycin were observed in either HEPG2-Glc or HEPG2-Gal cells (Fig. 3e, SFigure 3F), nor were any differences in maximal uncoupled respiration detected between the two growth conditions (Fig. 3f, SFigure 3F).
Exogenous glutamine contributes anaplerotic substrates to the TCA cycle through glutaminolysis. Under normal circumstances, glutamine-derived alpha-ketoglutarate maintains the oxaloacetate pool, but in the absence of exogenous sources of pyruvate, glutamine is able to support replenishment of oxaloacetate and acetyl-CoA through pyruvate (Fig. 1a) [53]. The pattern of estimated JATP partitioning between fermentation and OxPhos did not differ between HEPG2-Glc and HEPG2-Gal cells following glutamine refeeding (Fig. 3g). Total JATP also did not differ between growth conditions (SFigure 3G). HEPG2-Gal cells exhibited no differences in JH+ or JO2 in response to glutamine refeeding (SFigure 3H). No compensatory increases in JH+ (from basal) in response to oligomycin were observed in either HEPG2-Glc or HEPG2-Gal cells (Fig. 3h, SFigure 3H), nor were any differences in maximal uncoupled respiration detected between the two growth conditions (Fig. 3i, SFigure 3I). Interestingly, in the presence of the respiratory complex I and III inhibitors rotenone and antimycin A, JO2 was higher in HEPG2-Gal cells, indicating enhanced oxidase activity not associated with the electron transfer system (SFigure 4A). Oligomycin rates (corrected for rotenone/antimycin A) did not differ, indicating that proton "leak" rates were not altered by aglycemic growth adaptation (SFigure 4B). Finally, the apparent coupling coefficient (Q), an indicator of the fraction of respiration coupled to ATP synthesis, did differ between substrate refeeding conditions (highest with glutamine) but did not differ between cells from either growth condition (SFigure 4C). Mitochondrial membrane potential (ΔΨm) measurements in isolated preparations are limited by the necessity of adding substrates at saturating concentrations to maintain the respiratory steady state [54]. To investigate differences in ΔΨm in situ, live intact cells were stained with the potentiometric dye tetramethyl rhodamine methyl ester (TMRM) and the non-potentiometric carbocyanine dye MitoTracker Green-FM (as a total mitochondrial counterstain) and imaged under substrate/inhibitor conditions that matched the substrate refeeding experiments (Fig. 3). Heterogeneity in ΔΨm values within individual cells was observed in both growth conditions (Fig. 4a), so ΔΨm measurements are represented with both median and maximal values for each condition. Notably, median and maximum ΔΨm values among the groups were similar, which was interpreted to mean that the degree of heterogeneity was not influenced by growth condition. Median ΔΨm was significantly lower in HEPG2-Gal cells following the addition of either pyruvate or glutamine, but not glucose (Fig. 4b-d). Interestingly, refeeding with glucose resulted in a relatively large decrease in HEPG2-Gal maximal ΔΨm upon addition of oligomycin (~30% drop in HEPG2-Gal vs. ~10% drop in HEPG2-Glc) (Fig. 4e). This observation is consistent with the attenuated response in uncoupled (FCCP) respiration during the extracellular flux analysis (EFA) experiments (Fig. 3e). Maximal ΔΨm values were similar between the two cell lines for the pyruvate and glutamine refeeding conditions (Fig. 4f, g).
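For reference, the per-pixel conversion underlying these in situ ΔΨm maps follows the Nernst relation given in the Methods; a minimal sketch with illustrative gray values is shown below.

```python
# Sketch of the per-pixel Nernst mapping used for the in situ dPsi_m images:
# dPsi = -(RT/zF) ln(v / I_B), with v the pixel gray value and I_B the median
# background intensity. The input values are illustrative.
import numpy as np

R = 8.3145    # universal gas constant, J/(mol·K)
T = 310.15    # absolute temperature, K
F = 96485.0   # Faraday's constant, C/mol
z = 1         # charge of TMRM

def delta_psi_mv(pixel_values, background_median):
    """Map TMRM gray values to potential (mV) relative to the
    extramitochondrial signal."""
    v = np.asarray(pixel_values, dtype=float)
    return -(R * T) / (z * F) * np.log(v / background_median) * 1000.0

print(delta_psi_mv([1200, 4800], background_median=40.0))
# brighter pixels (greater TMRM accumulation) map to more negative potentials
```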
An additional image panel is available in the online supplement that confirms the expected decrease in intra-mitochondrial fluorescence intensity following depolarization of ΔΨm by addition of FCCP (SFigure 5).

Fig. 3 Adaptation to aglycemic growth conditions enhanced glucose-supported energy partitioning in HEPG2 cells. a Bivariate plot of estimated energy partitioning between oxidative phosphorylation (JATP OxPhos) and glycolysis (JATP Glyc) in response to glucose refeeding. b Glucose-supported proton efflux rate (JH+) fold change between the basal condition (prior to glucose refeeding) and a high glycolytic flux condition induced using 5 μM oligomycin. c Glucose-supported respiration rate (JO2) fold change between basal and a high respiratory flux condition (5 μM oligomycin + 1 μM FCCP). d Estimated JATP in response to pyruvate refeeding. e Pyruvate-supported JH+ under a high glycolytic flux condition (5 μM oligomycin). f Pyruvate-supported JO2 fold change between basal and a high respiratory flux condition (5 μM oligomycin + 1 μM FCCP). g Estimated JATP in response to glutamine refeeding. h Glutamine-supported JH+ under a high glycolytic flux condition (5 μM oligomycin). i Glutamine-supported JO2 fold change between basal and a high respiratory flux condition (5 μM oligomycin + 1 μM FCCP). N = 8/treatment/group. Data are mean ± SEM. Means were compared using Student's t test or two-way ANOVA (c). *p < 0.05. ns = not significant

Aglycemic growth conditions induce adaptation of demand-stimulated metabolic rates

To stimulate energetic demand and therefore metabolic flux, the sodium ionophore monensin [47] was added to partially depolarize the plasma membrane sodium potential, which results in an energy-depleting cycle that is dependent on the Na+/K+ ATPase (Fig. 5a, b). Stimulated JATP was quantitated using the monensin-induced JO2 and JH+ rate increases minus the rates measured following subsequent addition of ouabain to inhibit the Na+/K+ ATPase. HEPG2-Gal cells exhibited enhanced stimulated JATP through both OxPhos and, to a lesser extent, glycolysis (Fig. 5c-e). Greater total JATP supported by all three substrate conditions was also observed (Fig. 5f-h).

Fig. 5 Adaptation to aglycemic growth conditions enhanced the metabolic response to plasmalemmal sodium potential uncoupling. To assess the response to an increased energy demand, the cells were exposed to the sodium ionophore monensin, which induces increased ATP utilization through Na+/K+ ATPase pumps that is sensitive to the Na+/K+ ATPase inhibitor ouabain. a Schematic diagram of monensin action. b Example trace of respiration (JO2) and proton efflux (JH+) rates in response to monensin, followed by inhibition of the Na+/K+ ATPase by ouabain. c Glucose-supported stimulated energy demand (calculated by subtracting the basal rate from the ouabain-corrected monensin rate for both JATP OxPhos and JATP Glyc), representing the rate increase attributable to monensin. d Pyruvate-supported stimulated energy demand. e Glutamine-supported stimulated energy demand. f Glucose-supported total stimulated ATP production rates (sum of OxPhos and glycolysis). g Pyruvate-supported total stimulated ATP production rates. h Glutamine-supported total stimulated ATP production rates. N = 8/treatment/group. Data are mean ± SEM. Means were compared using Student's t test. *p < 0.05. ns = not significant

Aglycemic growth conditions result in selective sensitivity to the redox cycling agent menadione

To determine whether aglycemic growth adaptation causes specific sensitivities to cytotoxic agents or changes in nutrient availability (as would occur during IOX implantation), cells were switched to a medium designed to more closely approximate substrate concentrations found in serum [55] (DMEM-L-GLC) and were incubated for 24 h with five-fold serial dilutions of compounds that target metabolic features in distinct ways. 2-Deoxy-glucose (2-DOG) is a glucose antimetabolite, and its accumulation results in osmotic stress and glycolysis inhibition [11]. Metformin (dimethyl biguanide) is an organic cation that acts as an inhibitor of respiratory complex I at micromolar concentrations [56][57][58]. The macrolide antibiotic oligomycin is a potent inhibitor of the FoF1 ATP synthase [18]. Menadione is a vitamin K analog that participates in a redox cycle with redox-active enzymes, resulting in substantial production of reactive oxygen species [59]. Neither 2-DOG nor metformin had any discernible effect on cell viability (measured via resazurin dye reduction rate) in cells from either growth condition (Fig. 6a-d). Oligomycin reduced viability similarly in cells from both growth conditions across its entire dose range (Fig. 6e, f). Notably, HEPG2-Gal cells were more sensitive to high concentrations of menadione compared to HEPG2-Glc cells (Fig. 6g, h).

Fig. 6 Aglycemic growth adaptation induced sensitivity to the redox cycling agent menadione. a Resazurin dye reduction rates (as an indicator of cell viability) for cells exposed to 2-deoxy-glucose (2-DOG) for 24 h in DMEM-L-GLC media. b Complex I inhibitor (metformin, dimethyl biguanide)-treated cells. c FoF1 ATPase inhibitor (oligomycin)-treated cells. d Redox cycling agent (menadione)-treated cells. N = 6 wells/treatment/group. Data are mean ± SEM. Means were compared using one-way ANOVA with multiple comparisons against compound-free control cells

Galactose substitution as a model for aglycemic growth conditions

Substitution of media glucose for galactose as a model of aglycemic growth has been used for decades to investigate limitation of carbohydrate metabolism in cultured cells [15,18,50]. Cells grown in galactose have been previously described as "aerobically poised," characterized by an enhanced basal rate of oxygen consumption and a decreased extracellular acidification rate [6,17,18]. However, the exact metabolic adaptations, particularly with respect to partitioning of ATP production between substrate-level phosphorylation in glycolysis and mitochondrial oxidative phosphorylation, have not been well defined. Galactose growth conditions are often described as imposing a stoichiometric ATP constraint on glycolytic ATP production [17,18,60,61]. In this model description, galactose conversion to galactose-1-phosphate by galactokinase requires one ATP molecule. Glucose-1-phosphate is produced from galactose-1-phosphate by the galactose-1-uridyl transferase reaction at the expense of UDP-glucose, which is converted to UDP-galactose. The resulting glucose-1-phosphate is then converted to glucose-6-phosphate by phosphoglucomutase. UDP-glucose is synthesized from glucose-1-phosphate by UDP-glucose pyrophosphorylase. This reaction requires UTP, which is considered an ATP equivalent. Because this pathway costs two net equivalents of ATP, flux through glycolysis that is solely dependent on glucose-6-phosphate derived from galactolysis would yield no net ATP equivalents [60,61].
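As a worked tally of the accounting just described (a sketch only; the conventional net yield of 2 ATP per hexose passing through glycolysis is assumed here so that the stated zero-net-ATP conclusion balances):

```latex
\begin{aligned}
\text{galactose} + \text{ATP} &\rightarrow \text{galactose-1-P} + \text{ADP} && (-1\ \text{ATP})\\
\text{glucose-1-P} + \text{UTP} &\rightarrow \text{UDP-glucose} + \text{PP}_i && (-1\ \text{ATP equivalent})\\
\text{glucose-6-P} &\xrightarrow{\ \text{glycolysis}\ } 2\ \text{lactate} && (+2\ \text{ATP, assumed})\\[2pt]
&\text{Net: } -1 - 1 + 2 = 0\ \text{ATP equivalents per galactose}
\end{aligned}
```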
However, the model described above has not been definitively tested, and studies that used radio-isotope-labeled carbohydrates in cultured cells found that galactose carbons did not enter glycolysis at an appreciable rate [50,51,62]. More likely, the galactose-1-uridyl transferase reaction is limited by the size of the UDP-glucose pool, which is restricted because the cells are starved of exogenous glucose [63]. This suggests that galactose growth conditions more closely simulate a state of glucose deprivation rather than a stoichiometric negation of net glycolytic ATP production. The model is still advantageous, however, because it facilitates comparison of carbohydrate-restricted growth conditions under matched media osmotic pressures and provides a minimal carbohydrate source on which many cell lines can maintain growth rates comparable to glucose-supported conditions [64].

Cellular energetic adaptations to aglycemic growth conditions

HEPG2 cells are favored for tumor-derived cell metabolism studies due to their apparent metabolic flexibility and pseudo-differentiated phenotype [19][20][21][22]. When challenged with restricted nutrient conditions, HEPG2 cells are capable of rapidly switching between a balanced state of combined oxidative energy metabolism and aerobic fermentation and a state dominated by only one of these elementary modes [19,25,26,60,61]. When grown in aglycemic conditions, HEPG2 cells develop sensitivity to toxicants that affect mitochondrial function, which has made them a useful tool for identifying drugs with mitochondrial liabilities [6,17,18]. The interpretation of these observations has been that aglycemic growth conditions force an obligate shift from a primary reliance on glycolytic substrate-level phosphorylation to mitochondrial oxidative phosphorylation (OxPhos) for energy metabolism [18,25,60].

The findings in this study do not strictly support the phenomenon of adaptive "aerobic poise." Though HEPG2 cells did thrive under aglycemic conditions (likely by oxidizing exogenous glutamine and pyruvate among other trace media fuel sources), they did not enhance their intrinsic capacity or efficiency to utilize these fuel sources, as assessed over a range of physiologically relevant demand states in isolated mitochondria [37,40]. A slight increase in basal respiratory rate independent of mitochondrial content or network complexity was observed in live cells, which is consistent with other reports [6,18,64]. However, the most notable adaptive response to aglycemic growth was in the apportioning of energy metabolism between substrate-level phosphorylation (glycolytic fermentation) and oxidative phosphorylation following refeeding with glucose, supporting the conclusion that carbohydrate metabolism is the primary target of adaptation in this model. Together, these observations showcase the importance of combining isolated mitochondria and live cell metabolic flux measurements, and highlight the need to interpret either set of measurements cautiously if performed on its own. There are several possible underlying mechanisms by which the rate of carbohydrate metabolism could be enhanced.
A recent study that used metabolic flux analysis in combination with selective overexpression of glycolytic pathway enzymes in immortalized mouse kidney cells determined that control over glycolytic flux is largely attributable to only four steps: glucose transport, hexokinase, phosphofructokinase, and lactic acid efflux (through monocarboxylate transporters) [65]. Adaptive changes in the activity or expression of one or more of these enzymes may have contributed to the observations in the present study.

Interestingly, glucose refeeding rapidly repressed the rate of respiration in HEPG2-Gal cells while simultaneously increasing JH+, a phenomenon that resembles the Crabtree effect in yeast [66]. There are two likely explanations for these observations. The first is that the increase in the ADP phosphorylation rate by substrate-level phosphorylation inhibited OxPhos through respiratory control exerted via the free energy of the ATP hydrolysis reaction (similar to the kinetic effects demonstrated in the force-flow experiments) [38,40,67]. This is further supported by the similarity in the estimated total JATP between groups, indicating that glucose refeeding induced a redistribution of flux through energy-transducing pathways. Alternatively, the increased rate of pyruvate reduction to lactic acid may simply have outpaced the transport/oxidation of pyruvate in the mitochondria. However, exogenous pyruvate refeeding did not substantially increase JH+, suggesting that the first explanation is more likely.

Another interesting observation was that the rotenone/antimycin A-insensitive oxygen consumption rate was greater in HEPG2-Gal than in HEPG2-Glc cells. This was interpreted to represent enhanced expression or activity of oxidase enzymes that are not associated with the electron transfer system. Though the enzymes underlying the putative oxidase activity were not identified in this study, there are some plausible candidates that should be investigated in future studies. First, NAD(P)H oxidase (NOX) enzyme family members are expressed in HEPG2 cells and have been implicated in the regulation of central carbon metabolism [68]. However, it has also been previously reported that NOX family member expression/activity is enhanced under high- (rather than low-) glycemic conditions [60]. Alternatively, cytochrome P450 monooxygenases are also expressed in HEPG2 cells, although their expression is generally considered to be low [69]; increased expression/activity may occur in response to aglycemic conditions [70].

Though the free energy clamp used for the isolated mitochondria experiments facilitated titration of the energetic demand state over a physiologically relevant range [37,40], there is no precise method to do this in live cells. However, the sodium ionophore monensin can be used to impose an energetic demand by depolarizing the plasma membrane sodium potential [47]. Glucose supported the highest stimulated rate of ATP production (primarily through aerobic fermentation). This finding suggests that glycolytically derived ATP responds to fluctuations in peak cellular energetic demand (e.g., plasma membrane potential maintenance), while OxPhos-derived ATP supports basal energetic demands (e.g., macromolecular synthesis), which is congruent with a previous report that used a similar method [71]. It is unclear why HEPG2-Gal cells exhibited greater stimulated JATP in response to monensin treatment.
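Before turning to possible explanations, the bracketing arithmetic behind the stimulated JATP values is made concrete in the Python sketch below: the ouabain-sensitive portion of the monensin-stimulated rates is converted into ATP production rates. The conversion factors (P/O ratio, ATP per lactate-linked proton) and all input numbers are illustrative assumptions, not the authors' calibration.

```python
# Hedged sketch: converting flux rates (JO2, JH+) to ATP production rates
# and extracting the monensin-stimulated, ouabain-corrected component.
P_O = 2.5          # assumed ATP per O atom reduced (illustrative)
ATP_PER_H = 1.0    # assumed ATP per lactate-linked proton (illustrative)

def jatp(jo2_coupled, jh_glyc):
    """ATP production rates (pmol/min) from OxPhos and glycolysis."""
    return jo2_coupled * 2.0 * P_O, jh_glyc * ATP_PER_H  # 2 O atoms per O2

def stimulated(monensin_rate, ouabain_rate):
    """Monensin-stimulated demand: (monensin - basal) - (ouabain - basal),
    which simplifies to monensin - ouabain, the Na+/K+-ATPase-dependent
    rate increase."""
    return monensin_rate - ouabain_rate

# Example with placeholder rates (pmol/min):
ox_b, gl_b = jatp(90.0, 60.0)      # basal
ox_m, gl_m = jatp(140.0, 150.0)    # after monensin
ox_o, gl_o = jatp(100.0, 70.0)     # after ouabain
print("basal JATP (OxPhos, Glyc):", ox_b, gl_b)
print("stimulated JATP OxPhos:", stimulated(ox_m, ox_o))
print("stimulated JATP Glyc:  ", stimulated(gl_m, gl_o))
```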
Notably, it has been reported that HEPG2 cells are more sensitive to ouabain treatment under glucose-deprived conditions [72]. This suggests that Na+/K+ ATPase expression/activity is important for adaptation to aglycemia and may be upregulated under these conditions. Increased Na+/K+ ATPase expression/activity would be expected to increase the total ATP demand in response to monensin, which may account for the observed effects.

Relevance to the IOX model

Intrahepatic orthotopic xenograft (IOX) models, in which primary or tumor-derived cell lines are implanted into rodent livers, have been used to predict drug efficacy and provide information about metastatic patterns [8]. Due to cell line heterogeneity, use of multiple cell lines is recommended for drug screening in order to account for phenotypic variability [8]. This report provides a detailed description of adaptive variation in metabolic phenotype induced by modification of macronutrient sources in growth medium. This may have useful implications for the design of IOX testing models; in particular, the potential impact of growth media conditions should be carefully considered, especially when multiple cell lines are used [16,25,26]. The phenotyping methods described in this study (e.g., substrate refeeding) may be useful in identifying metabolic features associated with greater tumorigenicity, metastasis, or drug sensitivity, given that the observed adaptive effects were induced in cells from the same line grown in otherwise identical culture media (with the exception of the carbohydrate source).

Previous studies have indicated that HEPG2 cells incubated in galactose growth media are more sensitive to staurosporine-induced apoptosis [60,61], and several studies have shown specific sensitivities to many common drugs in cells cultured in galactose or other nutrient-limited media [6,17,18,57]. However, the common practice in nutrient sensitization studies is to perform the cytotoxicity assays directly in the aglycemic or nutrient-deprived media. To our knowledge, this is the only such report that has performed the cytotoxicity assays in normoglycemic media following adaptation to aglycemic growth conditions. Of the four metabolic toxins assessed, only menadione elicited specific sensitivity in HEPG2-Gal cells. Combined with the observation that background oxidase activity was elevated, this may allude to a redox stress liability that is uncovered by the adaptive growth response. Menadione is a redox cycling agent, and its toxicity is enhanced by the activity/expression of oxidoreductase enzymes; thus, increased oxidase expression/activity may sensitize the cells to menadione [59]. Notably, another recent study found that HEPG2 cells treated with competitive inhibitors of glycolytic flux exhibited increased redox stress when treated with doxorubicin, further supporting a link between aglycemic growth and redox stress liability in these cells [73]. The relevance of such observations to cancer chemotherapeutic sensitivity requires further investigation, but it is an attractive hypothesis given the tendency of other common chemotherapeutics, particularly the HCC therapeutic sorafenib, to induce oxidative stress [2,[74][75][76][77]].

Conclusions

This study provides a detailed, multilevel systems approach to define specific bioenergetic adaptations to aglycemic growth conditions in HEPG2 cells.
The approach involves parallel assessment at the organelle and whole-cell levels, structural and functional measurements, selective substrate refeeding to target specific modes of central carbon metabolism, and comparison of metabolic fluxes under different states of energetic demand. The hypothesis that aglycemic growth conditions would drive compensatory enhancement of oxidative metabolism and repression of aerobic fermentative metabolism did not strictly hold, as the results indicated that oxidative metabolism did not differ substantially between the two growth conditions. However, fermentative substrate-level phosphorylation was substantially enhanced, and a degree of selective sensitivity toward menadione toxicity was observed. These findings will support further hypothesis development, advance the understanding of the implicit regulation of metabolic adaptation in tumor-derived cells, and improve IOX testing panels by providing a practical workflow that reports detailed information on the effects of subculture conditions on adaptive energy partitioning in tumor-derived cells.
Road bike accidents involving cervical fractures presenting as cardiac arrest: a report of two cases

We present two cases in which elderly male recreational cyclists suffered cervical fractures and coinciding injuries of the spinal cord that subsequently led to cardiac arrest. Based on reports from eyewitnesses and the low-impact nature of the crashes, the two patients were initially considered as having had cardiac arrest before falling off their bikes. The spinal cord injuries triggering cardiac arrest were acknowledged with delay, as the primary eliciting cause was presumed to be cardiac disease in conjunction with all-out exercise. We suggest that increased focus should be placed on possible cervical injuries even following low-energy crashes in road cycling.

Background

Middle-aged or elderly men performing recreational cycling are over-represented in the group of patients presenting with exercise-related severe cardiovascular events [1]. Thus, in general, it is reasonable to assume that in this population of cyclists the eliciting cause is exercise-related when they are found with cardiac arrest on the road next to the bicycle. This is, however, not always the case. Falls and other accidents were previously primarily seen in off-road cycling (mountain biking), mostly resulting in minor injuries [2,3]. However, in recent years there has been an increase in injuries, as off-road cycling appears to be a high-risk sport with regard to severe spine injuries [4]. It has been reported that cycling-related spine injuries are more frequent in off-road cycling compared to road cycling [5]. Although cervical cord injury (CCI) in cycling accidents not involving vehicles is rare [6], cycling has been shown to be the primary cause of traumatic cervical fractures in a sports-related sub-population [7]. CCI is reported to occur in 10 to 50% of traumatic cervical fractures [8]. Following CCI, bradycardia is the most common cardiac arrhythmia. Cardiac arrest following CCI has been described both immediately after CCI and in the days following CCI [9]. We present two cases of middle-aged or elderly men with suspected exercise-induced cardiac arrest following seemingly innocent falls in recreational road cycling. Both men were later diagnosed with cervical fractures, with CCI identified as the eliciting cause of cardiac arrest.

Case presentation

A 68-year-old man was admitted to hospital following cardiac arrest during indoor track cycling. Bystanders described the events leading up to the cardiac arrest as the cyclist gradually losing speed, eventually falling sideways off the bike. This suggested to the attending prehospital anaesthesiologist that the cyclist became ill before he actually fell off the bike. No obvious signs of trauma were noted, and the helmet remained intact. Bystanders quickly recognised that the patient had cardiac arrest and initiated resuscitation efforts. An automated external defibrillator was attached just as the prehospital anaesthesiologist and the ambulance arrived. The initial rhythm analysis revealed pulseless electrical activity. Following three to four minutes of treatment for cardiac arrest, return of spontaneous circulation was achieved. However, spontaneous respiration did not return. The patient was intubated at the scene and escorted to the regional university hospital, as exercise-related cardiac arrest was suspected. At the hospital, a fellow bicyclist eventually revealed that the patient had in fact been hit by another bicycle rider immediately before the crash.
Therefore, the initial hypothesis that the cyclist became ill before the fall was discarded and trauma was suspected. The patient underwent computerised tomography (CT) scanning, which revealed an isolated type 2 fracture of the dens axis and contusion of the medulla oblongata at the affected level. A cardiac genesis was excluded based on results from echocardiography, electrocardiography, and blood samples including troponin I. The following day the patient showed signs of spinal shock and autonomic dysfunction, including bradycardia and asystole, prompting placement of a pacemaker. Repeated electroencephalograms revealed refractory myoclonic status epilepticus. As the patient did not regain consciousness, treatment was withdrawn 6 days after the accident, and the patient died shortly afterwards.

The second case was a 73-year-old man who was admitted to a cardiology department after being resuscitated from cardiac arrest during a road bike race. Bystanders described the man wobbling on the bike, eventually falling into a ditch. Cardiac arrest was quickly recognised and resuscitative efforts initiated. Approximately 5 minutes of CPR was given. At the arrival of the prehospital emergency personnel, the patient had regained spontaneous circulation and respiration. No medication or shock had been given. Prehospital electrocardiography revealed no signs of cardiac ischemia. The patient, now awake, had no complaints of chest, back or neck pain. The patient was referred to the cardiology department, as the primary diagnosis assigned to the patient was exercise-related cardiac arrest. Upon arrival at the department of cardiology, echocardiography revealed a preserved ejection fraction and no regional hypokinesia of the heart. After the cardiac examinations, the patient began to complain of neck pain. A CT scan was performed, revealing a dislocated fracture of the dens axis. The patient was immediately in-line stabilised and transferred to the trauma unit at the regional university hospital. He was admitted to the neurosurgical department, presenting no signs of autonomic dysfunction. Four days later he underwent a successful operation to stabilise the cervical column. The patient eventually achieved full recovery. During hospital admission the patient was seen by a cardiologist, who ruled out a primary cardiac event. The patient stated that there were no prodromal events prior to the accident; he had simply steered too close to the edge of the roadside ditch and subsequently crashed. Although the cause of cardiac arrest could not be established with complete certainty, no other plausible cause than acute CCI could be found.

Discussion and conclusion

Exercise-related severe cardiac events such as coronary occlusion and cardiac arrest are feared complications in middle-aged or elderly patients performing all-out exercise. Coronary heart disease is responsible for most of these events, and cycling is the most common sport associated with these cardiovascular events [1]. In the cases presented, the common denominator was the assumption that cardiac arrest preceded the crashes, and both patients were initially treated as patients with primary cardiac arrest. In both cases the cervical trauma and CCI were acknowledged with delay. CCI's effect on the cardiovascular system is due to the disruption of the connection between the autonomic centres in the brain and the sympathetic neurons in the spinal cord.
This results in a complete lack of sympathetic tone, causing vasodilatation and bradycardia; concurrently, an unopposed vagal tone activates the parasympathetic nervous system. This combined autonomic dysfunction can lead to severe bradycardia and asystole.

It has previously been reported that, regardless of whether road cycling or off-road cycling is performed, severely injured cyclists display similar patterns of injury and comparable outcomes following accidents. The exception is spinal injuries [5], and concern has been raised with regard to spinal injuries following crashes in off-road cycling [10]. It is a common conception that off-road cycling is riskier than road cycling, as crashes in general may be considered high-energy crashes, potentially involving cliffs, hillsides and trees. As such, appropriate diagnostic measures regarding CCI are often taken when caring for an acutely injured mountain bike cyclist following a crash. This conception, however, may be wrong. It has been reported that road biking is more often associated with head trauma, whereas off-road cycling causes a greater number of less severe injuries [11]. Crashes of seemingly low intensity may also be harmful. The incidence of traumatic cervical fracture in low-energy trauma has been investigated in a study from Norway, which concluded that 57% of traumatic cervical fractures were caused by falls from less than 1 m or fewer than 5 stair-steps [12]. Hence, it is probably not correct to assume that the degree of energy in the trauma is always proportional to the incidence of traumatic cervical fractures. This may have implications for road cyclists and indoor track cyclists. An increase in spinal injuries has been linked to the growing popularity of cycling, and in particular competitive cycling [13]. Both the emergency medical systems and the emergency departments should thus increase their focus on spinal cord injuries in all forms of recreational cycling. The two cases presented here were reported from two different prehospital units and two different hospitals in the Region of Southern Denmark. Both cases demonstrate that considerations pertaining to traumatic injury are not necessarily the first considerations made when cardiac arrest is encountered in conjunction with what may be perceived as low-energy trauma.

Conclusion

With the presentation of these two cases, we highlight that cardiac arrest in recreational road cyclists may be caused by CCI following crashes. CCI should thus be suspected following crashes regardless of the immediate reports from bystanders on the events leading up to the incident. We suggest that appropriate measures regarding spinal stabilisation be taken in patients with cardiac arrest even following seemingly minor accidents with the potential for spinal cord injury.
Analytical modeling of three-stage inactivation of viruses within droplets and solid porous particles

Various viruses can hide within fluid and solid structures and thus successfully cross different distances, causing the spread of viral infections. Analytical modeling of the triple treatment of viruses within a small liquid droplet and within a solid porous particle is the central subject of this paper. The three-stage treatment aims to maximize the efficacy of deactivating viruses indoors. To achieve this, viruses undergo treatment by infrared heating, ultraviolet deactivation and ionization-electrostatic deactivation by negative ions. When the droplets are treated with infrared heating, incomplete evaporation occurs, reducing their initial diameter by a factor of 10; the initial droplet diameters considered are 0.01 mm, 0.03 mm and 0.05 mm. Thermal inactivation of viruses inside the droplets is almost negligible, due to the short exposure time and a maximum temperature of 100 °C. On the other hand, when solid porous particles are heated to a much higher temperature at the same exposure time, significant thermal inactivation of viruses inside them occurs. Reducing the diameter of the droplet (due to evaporation) by a factor of 10 causes a severalfold increase in UV-C deactivation of viruses inside the droplets. The effect of UV-C radiation on viruses within solid porous particles is not included in this paper.

Introduction

The rapid spread of coronavirus (SARS-CoV-2) shows that the physical picture of virus transport indoors and outdoors is not yet fully clear. Coughing, sneezing and even speech are accompanied by the desorption of droplets of different dimensions that move at different speeds through the air. The distance traveled by droplets is usually up to 2 m, and smaller droplets travel further than larger droplets. Many analyses and studies, for example [1][2][3], have addressed the dispersion of liquid droplets during coughing and sneezing, which directly leads to the spread of the virus from infected persons. The reason for the different path lengths depends on the initial kinetic energy of the droplet, as well as on the aerodynamic drag forces between the droplet and the ambient air. Accordingly, viruses are transmitted by droplets and thus spread in the area around the infected person. Some authors have modeled droplet evaporation and sedimentation resulting from speaking and coughing, e.g., [4][5][6].

Today, there are several ways to prevent the spread of viruses, with different efficiencies depending on the type of virus and the parameters of the environment in which it is found. One way to disable various viruses is thermal inactivation. Many researchers have analyzed the thermal treatment of various viruses and microorganisms, such as [7][8][9][10][11]. Thermal inactivation of the coronavirus SARS-CoV has likewise been the subject of research by several groups, e.g., [12,13]. Another way to deactivate coronavirus is to use ultraviolet light (UV-C). For the deactivation of various viruses, UV germicidal lamps are most often used, emitting ultraviolet light at a wavelength of 254 nm. Selective inactivation of viruses by the action of UV-C radiation, i.e., Ultraviolet Germicidal Irradiation (UVGI) using a germicidal lamp, has been the subject of research by various groups [14][15][16][17][18].
On the other hand, the impact of negative ions on human health and the quality of the air that people breathe has been investigated by many authors, such as [19][20][21]. When it comes to the effect of negative ions on various viruses and the prevention of their transmission, there are also many studies and analyses [22][23][24].

In this paper, the focus is on the triple deactivation treatment of two types of viruses: those within liquid droplets and those within porous solid particles. The first type is contained in tiny droplets that are found in enclosed spaces and can arise in a variety of ways, forcibly or naturally. The entry of the free virus into the droplet can occur under different physical conditions, and this is not the subject of research in this paper; the analysis does not cover the way the virus entered the droplet or the porous particle. The efficiency of the proposed three-stage virus inactivation would be higher if the accumulation of viral particles on the droplet surface were taken into account, i.e., if the viruses had become hydrophobic for any reason. Reasons that may cause a hydrophilic virus to become partially or completely hydrophobic are not covered by this analysis. There are many studies about colloid retention at the air-water interface, which has also been applied to viral particles. Inactivation of viruses accumulated on the droplet surface is not covered by this paper. Some studies have analyzed in detail the accumulation of viral particles on the droplet surface due to viral particle hydrophobicity [25][26][27]. The very small dimensions of the virus, on the order of 0.1 μm, allow it to enter the pores of a solid porous particle. The porous solid particle in this way provides transport for the virus through the ambient air.

In order to ensure successful deactivation of viruses, an innovative three-stage deactivation system is established in this paper. The three-component system consists of an infrared heating chamber, a chamber with ultraviolet radiation and an ionization-separation chamber. The purpose of the infrared heating chamber is to evaporate the droplets and heat the porous solid particles by thermal radiation in the infrared region. The first virus is in the droplet, while the second virus is inside the porous solid particle; both enter the first chamber, with infrared thermal heating. Inside this chamber, the diameter of the droplet is reduced several times due to evaporation, while the solid porous particle is heated but its dimensions do not change. In the next chamber, ultraviolet inactivation of the virus contained inside the droplet occurs. The effect of ultraviolet radiation on the virus within the porous solid particle is not included in this paper. The third (last) chamber contains ionization-separation subchambers. In the first ionization subchamber, the virus within the porous particle is deactivated by the action of negative ions. In the separation subchamber, the deactivated virus together with the porous particle is separated and stopped at the electrodes.

Combined treatment of viruses

The initial diameters of the droplet and the porous particle are the same, equal to d_o, while their initial velocity is v_o. The term initial velocity means the velocity immediately before entering the three-component system for the treatment of viruses. The purpose of the infrared heating chamber is to evaporate the droplets and to heat the porous solid particles by thermal radiation in the infrared region.
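As a structural aid, the following minimal Python sketch models the treatment train just described as sequential operations on the surviving (still-infectious) virus fraction. The stage efficiencies here are illustrative placeholders; the paper derives the thermal, UV-C and electrostatic efficiencies later (cf. Eqs. (33), (46) and (54)).

```python
# Hedged sketch of the three-stage treatment train, modeled as sequential
# operations on the surviving virus fraction. All efficiencies are
# placeholder values, not results from the paper.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    efficiency: float  # fraction of viruses inactivated in this stage

def treat(carrier: str, stages: list) -> float:
    """Propagate the surviving fraction through the chambers in order."""
    surviving = 1.0
    for stage in stages:
        surviving *= (1.0 - stage.efficiency)
        print(f"{carrier}: after {stage.name} chamber, "
              f"surviving fraction = {surviving:.3f}")
    return surviving

# Droplet-borne viruses pass the infrared and UV-C chambers; particle-borne
# viruses pass the infrared and ionization-separation chambers.
treat("droplet", [Stage("infrared", 0.01), Stage("UV-C", 0.95)])
treat("particle", [Stage("infrared", 0.18), Stage("ionization-separation", 0.97)])
```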
When the droplet hiding the virus is heated, the diameter of the droplet decreases due to the evaporation of the droplet liquid. With this treatment, the virus is no longer hidden and there is a possibility that it can be effectively treated in the next chamber. On the other hand, infrared heating of a porous solid particle causes an increase in its temperature. In the next stage, the virus and the particle enter a chamber with ultraviolet radiation, within which UV-C radiation acts on the free virus and inactivates it by acting on the thymine inside the DNA molecule. The effect of UV-C radiation on the virus within the porous solid particle is not addressed in this paper. An inactivated free virus and a porous particle in which the virus is hidden come out of the ultraviolet chamber. The ionization-separation chamber is the space for the final treatment of viruses, and it consists of two subprocesses. The first subprocess involves electrostatic isolation of the porous particle and the deactivated free virus. In this way, negative ions come into direct contact with the surface of the virus and inactivate it. By the action of the electric field force, the porous particle and the deactivated free virus begin to move radially toward the positive separation electrodes, where they accumulate in the form of micro-dust. Removal of the harmless micro-dust can be achieved in several mechanical ways or by blowing, which goes beyond the scope of the methodology in this paper. The described procedure treats two viruses, one hidden in a liquid droplet and one in a porous solid particle, but it is clear that the whole process also applies to a larger number of viruses.

Thermal analysis of liquid droplets and solid porous particles

The effect of high temperature on viruses is generally unfavorable, while the sensitivity of viruses to high temperatures depends on:

- the temperature level;
- the duration of the virus's exposure to high temperature;
- the type of medium in which the virus is located;
- protein sensitivity;
- RNA sensitivity.

When heated, most viruses lose their ability to reproduce, along with their infectious power, antigenicity and immunogenicity. The thermal analysis covered by this research is focused in two directions:

- reducing the volume of the droplet in which the virus is hidden by infrared heating;
- infrared heating of the solid porous particle inside which the virus is located.

3.1 Thermal analysis: reduction of droplet volume

During coughing, sneezing or talking, drops of different diameters and speeds come out of people's mouths and noses. If a person is infected, the droplets carry the virus into the ambient air. Smaller droplets are lighter, and their path through the air is longer. The larger the droplets, the greater their weight and the sooner they fall. Droplets of different sizes fall on different surfaces, where they can remain for a long time.

The droplet enters the annular channel with the central infrared heat source installed, as shown in Fig. 1. The droplet is approximated by a sphere of initial diameter d_o and density ρ, while its initial velocity is v_o. It is assumed that the heat flux from the cylindrical infrared heat source is a constant amount q. The heat flux enters the droplet through its outer surface of initial radius r_o and is completely absorbed in it. Due to the absorption of heat, the liquid part of the droplet evaporates, and thus the mass of the droplet changes during time τ according to the energy balance

$$q \, 2\pi r^{2} \, \mathrm{d}\tau = -\,i \, \mathrm{d}m \qquad (1)$$

where i is the specific latent heat of evaporation of the liquid.
Due to the heating, the radius of the spherical droplet decreases, so that at some point in time it is r = r_o − dr; by reorganizing Eq. (1) we get

$$q \, (r + \mathrm{d}r)^{2} \, \mathrm{d}\tau = 2 i \rho r^{2} \, \mathrm{d}r \qquad (2)$$

that is, after neglecting the square term dr²,

$$q \left( r^{2} + 2r\,\mathrm{d}r + \mathrm{d}r^{2} \right) \mathrm{d}\tau \approx q \left( r^{2} + 2r\,\mathrm{d}r \right) \mathrm{d}\tau = 2 i \rho r^{2} \, \mathrm{d}r \qquad (3)$$

By separating the variables and integrating, Eq. (4), the time of thermal exposure τ of the droplet to infrared radiation is found explicitly,

$$\tau = \frac{2 i \rho}{q}\left( r_{o} - r \right) \qquad (5)$$

The previous expression can also be represented in the form

$$\tau = \frac{2 i \rho \, r_{o}}{q}\left( 1 - \frac{1}{\gamma} \right) \qquad (6)$$

where the geometrical ratio γ is

$$\gamma = \frac{r_{o}}{r} = \frac{d_{o}}{d} \qquad (7)$$

A spherical droplet moves inside the thermal field of the infrared cylindrical heater, whereby both its mass and its velocity change. The movement of the droplet is opposed by the drag force, so the balance of all forces is represented by Eq. (8). The left-hand term of Eq. (8) represents the differential change of momentum per unit time, accounting for the changes in both speed and mass; C_D is the drag coefficient, Re is the Reynolds number, and μ is the dynamic viscosity of the air. The drag coefficient is C_D = 0.44 for Reynolds numbers Re > 1000. After arranging Eq. (8), Eq. (9) follows, in which β is introduced as the ratio of the velocity of the spherical droplet at the initial moment, v_o, to the velocity v of the droplet after the time τ,

$$\beta = \frac{v_{o}}{v} \qquad (10)$$

The terms on the left side of Eq. (9) can be regarded as the relative changes of the droplet velocity and of its mass, respectively, Eqs. (11) and (12). Combining Eqs. (11) and (12) with Eq. (9) gives Eq. (13), which after arranging reduces to a quadratic equation in β, Eq. (14). Solving Eq. (14) yields the final expression, Eq. (15), for the ratio of the change of velocity of the liquid droplet during the time τ_d as a function of the ratio γ; after combining Eq. (15) with Eq. (10), the expression for the change of the droplet velocity as a function of the ratio γ follows, Eq. (16), and from it the expression for the length of the infrared heater L_IC, Eq. (17).

Mass balance of the droplet

The mass flow rate of water vapor from the droplet represents the mass flux from the surface of the droplet into the ambient air, as shown in Fig. 2; according to Fick's law it has the form

$$\dot{m} = -D_{w} \, 4\pi r^{2} \, \frac{\mathrm{d}c_{w}}{\mathrm{d}r} \qquad (18)$$

where c_w is the mass concentration of water vapor, r is the radial coordinate measured from the droplet center, and D_w is the diffusion coefficient of water vapor in air. After separating the variables and integrating, the mass flow rate of water vapor from the droplet is obtained,

$$\dot{m} = \frac{4\pi D_{w} \left( c_{rd} - c_{am} \right)}{\dfrac{1}{r_{d}} - \dfrac{1}{r_{am}}} \qquad (19)$$

where r_d is the droplet radius, r_am is some distance away from the droplet surface, c_rd is the mass concentration of water vapor at the droplet surface, and c_am is the mass concentration of water vapor at some distance away from the droplet surface. If the approximation is introduced that water vapor has the properties of an ideal gas, then application of the equation of state gives

$$\dot{m} = \frac{4\pi D_{w} M_{w}}{R_{u}\left( \dfrac{1}{r_{d}} - \dfrac{1}{r_{am}} \right)}\left( \frac{p_{d}}{T_{d}} - \frac{p_{am}}{T_{am}} \right) \qquad (20)$$

where p_d and p_am are the partial pressures of water vapor at the curved droplet surface and in the surrounding air, respectively. The temperature of the droplet and of the humid air at the surface of the droplet is T_d, while T_am is the temperature of the surrounding air. On the other hand, the mass production of water vapor represents a decrease in the mass of the water droplet over time; that is,

$$\dot{m} = -\frac{\mathrm{d}m_{d}}{\mathrm{d}\tau} = -\rho_{d} \, 4\pi r_{d}^{2} \, \frac{\mathrm{d}r_{d}}{\mathrm{d}\tau} \qquad (21)$$

After arranging the previous equations, the expression for the change of the droplet radius as a function of time is obtained, Eqs. (23) and (24), and after integration it is finally obtained as Eq. (25), where R_u = 8314.47 J kmol⁻¹ K⁻¹ is the universal gas constant, M_w = 18 kg kmol⁻¹ is the relative molecular mass of water, and ρ_d is the density of the water droplet. Combining Eq. (24) with expression (7) gives Eq. (26), whence the time required to reduce the diameter of the droplet, under the given conditions, γ times is obtained; here D_w is the diffusion coefficient of water vapor in air [28], p is the total pressure, T is the temperature, and σ is the Stefan-Boltzmann constant. Within expression (26), it is considered that T_am ≈ T_IC, where T_IC is the temperature of the infrared source, and that the partial pressure of water vapor in the surrounding air is approximately equal to the pressure near the infrared heater, p_am ≈ p_am.IC. The droplet temperature T_d changes during heating, due both to continuous heating and to changes in the droplet dimensions. In this analytical modeling, the heat flux q from the infrared source to the droplet is kept constant regardless of droplet temperature changes. The partial pressure of water vapor at the curved droplet surface increases according to the Kelvin effect, which is used in expression (26); within that expression, p_sat is the saturated vapor pressure above a flat surface, while σ_d is the liquid/vapor surface tension.

3.3 Thermal analysis: heating of a porous solid particle

For infrared heating of solid porous particles inside the annular channel, as in the previous case with droplets, a heat balance is established, Eq. (27). The solid particle is approximated by a sphere of radius r_o and mass m_p, which is exposed to thermal radiation from an infrared heat source of temperature T_IC. Due to the thermal radiation, the particle is heated to temperature T(τ) and itself becomes a source of thermal radiation. The initial temperature of the particle is equal to the ambient temperature T_amb. After integration, the expression for the exposure time (τ) of the virus, i.e., the heating time of the porous particle, is obtained, Eq. (29); after sorting, the final expression follows, Eq. (30).

Thermal inactivation of viruses within droplets and solid porous particles

The change in the concentration of thermally inactivated viruses within droplets and solid porous particles after an exposure time τ can be expressed in differential form,

$$\frac{\mathrm{d}C}{\mathrm{d}\tau} = -\,k(\tau)\,C \qquad (31)$$

where k is a time-dependent parameter, owing to the variation of temperature. After integration, and using the Arrhenius equation for k(τ), it is obtained that

$$\ln\frac{C_{o}}{C} = \int_{0}^{\tau} P \exp\!\left( -\frac{E_{a}}{R\,T(\tau')} \right) \mathrm{d}\tau' \qquad (32)$$

where C is the virus concentration at time τ, C_o is the virus concentration in the initial state, E_a is the activation energy, P is the collision number or frequency factor, R is the universal gas constant, T is the absolute temperature, and τ is time. The efficiency of thermal inactivation of the virus, η_IC, due to infrared heating of the droplet and of the porous solid particle, after combining with Eq. (32), is given by Eq. (33),

$$\eta_{IC} = 1 - \frac{C}{C_{o}} = 1 - \exp\!\left[ -\int_{0}^{\tau} P \exp\!\left( -\frac{E_{a}}{R\,T(\tau')} \right) \mathrm{d}\tau' \right] \qquad (33)$$
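A minimal numerical sketch of Eqs. (31)-(33) follows: the survival of viruses under a time-varying temperature T(τ) is obtained by integrating the Arrhenius rate k(T) = P·exp(−E_a/(R·T)). The heating profile, E_a and P below are placeholder assumptions chosen only to illustrate the calculation, not values fitted by the authors.

```python
# Hedged sketch: thermal inactivation efficiency eta_IC via numerical
# integration of the Arrhenius rate over an assumed heating profile.
import math

R = 8.314     # universal gas constant, J/(mol K)
Ea = 2.0e5    # activation energy, J/mol (assumed placeholder)
P = 1.0e22    # frequency factor, 1/s (assumed placeholder)

def temperature(tau, T0=300.0, Tmax=460.0, t_heat=3.0):
    """Illustrative saturating heating profile (stands in for Eq. (28))."""
    return Tmax - (Tmax - T0) * math.exp(-3.0 * tau / t_heat)

def eta_ic(t_exp, n=2000):
    """Thermal inactivation efficiency, Eq. (33), by midpoint integration."""
    dt = t_exp / n
    integral = sum(
        P * math.exp(-Ea / (R * temperature((j + 0.5) * dt))) * dt
        for j in range(n)
    )
    return 1.0 - math.exp(-integral)

for t in (1.0, 3.0):
    print(f"exposure {t} s: eta_IC = {eta_ic(t):.3f}")
```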
In order for thermal inactivation of viruses within a droplet and within a porous solid particle to occur, the temperature of the droplet and of the viruses must be higher than the critical temperature, which follows from Eq. (34). The second condition for thermal inactivation of viruses is that the exposure time must be longer than the critical time, which follows from Eqs. (35) and (36).

Ultraviolet treatment of viruses

Ultraviolet radiation (UV) is classified into three categories, commonly referred to as UV-A, UV-B and UV-C. UV-A (from 320 to 390 nm) has the lowest energy level, while UV-C (from 200 to 280 nm) has the highest energy level. Ultraviolet radiation in the UV-C wave region is also called Ultraviolet Germicidal Irradiation (UVGI). Most often, a low-pressure mercury-vapor lamp is used to generate UV germicidal light, while recently xenon lamps emitting a wide range of wavelengths have also been used. UVGI, or UV-C, is often used to disinfect or destroy various microorganisms and viruses by breaking down their cell membranes and damaging their RNA or DNA. The damaged RNA structure stops the virus particles from multiplying, making them harmless, as shown in Fig. 3. In order to neutralize, for example, the influenza virus, the energy dose of ultraviolet radiation at a wavelength of 253.7 nm that enables inactivation of the virus should be 3400 μW s cm⁻² for 90% efficiency, and 6600 μW s cm⁻² for 99% efficiency. Furthermore, it is necessary to maximize the probability of hitting the virus with ultraviolet beams. If the virus is not physically shielded, the efficiency of UV-C is very high. If the virus is inside a droplet, however, part of the UV-C radiation is reflected from the outer surface of the droplet, which reduces its intensity; as the intensity of the UV-C waves decreases, the efficiency of virus inactivation within the droplet decreases. The effect of UV-C on virus inactivation within a solid porous particle is not included in this paper.

The liquid droplet is approximated by a sphere of diameter d_o, which moves at the velocity v_o; here ϕ_a, ϕ_d and ϕ_p are the mass ratios of air, droplets and solid porous particles, respectively. The average value of the UV-C intensity is established in the annular channel between the UVGI lamp and the outer channel wall, and from it the total UV-C dose distributed in the annular channel follows from the exposure time τ. Within the annular channel there are air, droplets and solid particles in the mass ratios ϕ_a, ϕ_d and ϕ_p, respectively, so the total UV-C dose D_UV is distributed in the amounts ϕ_a·D_UV, ϕ_d·D_UV and ϕ_p·D_UV; the portion of the UV-C radiation acting on viruses within droplets is thus ϕ_d·D_UV. In this case, it is considered that the velocity of the droplets of diameter d_o is constant and equal to v_o, and that during the exposure time τ the droplets pass the length of the UVGI lamp, L_L.

When UV-C waves arrive at the surface of the droplet, as shown in Fig. 3, one part of the radiation is reflected, while the other part enters the droplet. The purpose of the reflector is to allow the UV-C waves to strike the droplet over its entire outer surface; the reflection coefficient of the reflector was adopted as 1. The intensity of the UV-C radiation entering the droplet is reduced by the albedo value of the droplet (a).
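A quick consistency check of the quoted influenza doses (our arithmetic, anticipating the single-stage decay law S = e^(−kD) introduced in the next section):

```latex
k = \frac{-\ln(0.10)}{3400\ \mu\mathrm{W\,s\,cm^{-2}}}
  \approx 6.8\times10^{-4}\ \mathrm{cm^{2}\,\mu J^{-1}},
\qquad
D_{99\%} = \frac{-\ln(0.01)}{k} \approx 6800\ \mu\mathrm{W\,s\,cm^{-2}}
```

which is close to the quoted 6600 μW s cm⁻² (note 1 μW s = 1 μJ); the two quoted doses are therefore mutually consistent under first-order decay.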
The albedo of the droplet represents the ratio of the reflected UV-C radiation to the radiation that reached its surface and can be calculated from the expression

$$a = \frac{\left( n_{water} - n_{air} \right)^{2}}{\left( n_{water} + n_{air} \right)^{2}}$$

where n_water and n_air are the refractive indices of water and air, respectively. According to the energy losses of UV-C radiation, it is necessary to provide the required dose of UV-C radiation, D_UV, from the UVGI lamp, Eq. (44), where α is the Napierian absorption coefficient and I_o is the irradiance of the UV-C light. The single-stage exponential decay equation for viruses exposed to UV-C light can be presented as

$$S = e^{-k D_{UV}} \qquad (45)$$

where S is the surviving fraction of the initial viruses and k is the standard UV-C inactivation rate constant. According to Eq. (45), the efficiency of UV-C inactivation of viruses hidden inside the droplets is

$$\eta_{uv} = \frac{N}{N_{o}} = 1 - S \qquad (46)$$

where N is the number of inactivated viruses hidden inside the droplets and N_o is the total number of droplets at the entrance to the UV-C chamber (the total number of viruses is considered equal to the total number of droplets, i.e., one virus per droplet). Within the droplet, multiple reflections of UV-C waves can occur from the inner surfaces of the droplet sphere. The Ultraviolet Germicidal Dose D_UV for SARS-CoV was used at a value of 20 mJ cm⁻² at a wavelength of 254 nm [17]. For example, the rotavirus requires about 25 mJ cm⁻² of 254 nm UV-C radiation, but for adenovirus (Type 40) the value is approximately six times higher (140 mJ cm⁻²) [17,29]. The Napierian absorption coefficient (α) for water at a wavelength of 254 nm amounts to 240 m⁻¹. The value of the albedo (a) for the reflection of ultraviolet (UV-C) waves from the surface of a water droplet was adopted in the amount of 0.025, i.e., 2.5%. In this analysis, it was adopted that the UV-C wave passes through the inside of the droplet (diameter d_o) and, after reflection from the inner surface, hits the virus. The weakening of the UV-C wave intensity after reflection from the inner surface of the droplet is not included in this paper. The UV-C wave just before entering the surface of the droplet has an intensity of I_o. Due to the different refractive indices at the air-liquid interface, one part of the intensity of the incident wave is reflected, I_r, while the other part enters the droplet, I_τ. The intensity of the transmitted wave I_τ can be so small that even a direct collision with the virus will not inactivate it. On the other hand, the transmitted UV-C wave of intensity I_τ can be reflected many times from the inner surface of the droplet, further reducing its intensity.

Electrostatic treatment of viruses

If the virus is inside a porous solid particle, ultraviolet radiation has no possibility of destroying it, and the virus is transported further with the particle to the next chamber. If the particle is charged and exposed to negative ions, its transport can be controlled within an electric field. In order to obtain a large number of negative ions, an electrical system was established as shown in Fig. 4. The voltage between the corona electrode and the anode establishes a so-called corona discharge between these two electrodes, which creates a large number of free electrons.
The generated electrons move away from the negative electrode (cathode) at high speed and collide with solid porous particles. Although free electrons also ionize the surrounding molecules, turning them into negative ions, the research in this paper is directed toward solid porous particles. Moving away from the corona electrode, the velocity of the electrons decreases and their kinetic energy weakens, and thus their ability to ionize air molecules weakens. The charging of solid particles also occurs in random collisions with negative molecules, which previously became negative in collisions with electrons of low kinetic energy. In this way, solid porous particles become electrically conductive, i.e., negatively charged. The electronic filling of the porous surface of a particle lasts until its surface is saturated with negative charge.

Fig. 4 The hidden and deactivated virus enters the ionization-separation chamber

The mass balance of solid porous particles passing through the ionization-separation chamber, over an elementary length of the annular channel of inner radius r_i and outer radius r_o, can be represented by Eq. (47), where v_r is the radial component of the particle velocity, and C_x and C_{x+dx} are the concentrations of porous particles at the positions x and x + dx, respectively. After separating the variables, Eq. (48), the final expression for the change in the concentration of isolated particles when passing through the ionization-separation chamber is obtained, Eq. (49). The radial component of the velocity of a solid porous particle caused by the action of the electric field between the positive and negative cylindrical electrodes is given by Eq. (50), where a particle of mass m_p and diameter d_p carrying an accumulated charge q is subjected to an electric field force, Eq. (51); combining with Eq. (50) gives the final expression for the radial velocity of the particle v_r, Eq. (52), where ε is the dielectric constant of the particle, ε_o is the dielectric constant of vacuum, and μ is the dynamic viscosity of the gas (air). In the previous equations it was adopted that E_ip ≈ E_p, while the value of Cunningham's correction factor is determined from Eq. (53) for particle diameters d_p [m]. After combining Eqs. (49) and (50), the final expression, Eq. (54), is obtained for the reduction of the concentration of isolated porous particles along the coordinate x when passing through the ionization-separation chamber.

The total efficiency of the three-component system for the treatment of viruses

The overall efficiency (η_tot) of the analyzed system for the inactivation and isolation of viruses within droplets and porous solid particles is determined by the efficiency of each individual chamber. The overall efficiency (η_tot)_d of inactivation of viruses within liquid droplets is represented by Eq. (55); it is affected by the efficiencies of the chamber with infrared heating and of the chamber with ultraviolet radiation. Inside the infrared chamber, thermal inactivation of the virus within the droplet is quantified by η_IC.d. After the arrival of a droplet of reduced diameter (due to infrared heating and evaporation) in the chamber with ultraviolet radiation, the viruses are treated by the ultraviolet germicidal lamp with efficiency η_uv.d.
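A natural reading of Eqs. (55) and (56), offered here as a sketch under our assumption that each chamber acts independently on the fraction of viruses surviving the previous one, is the standard series combination:

```latex
\left(\eta_{tot}\right)_{d} = 1 - \left(1 - \eta_{IC.d}\right)\left(1 - \eta_{uv.d}\right),
\qquad
\left(\eta_{tot}\right)_{p} = 1 - \left(1 - \eta_{IC.p}\right)\left(1 - \eta_{el.p}\right)
```

Here η_el.p, introduced just below, is the efficiency of the ionization-separation (electrostatic) stage for particle-borne viruses.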
On the other hand, the total efficiency (η_tot)_p of the system for inactivation of viruses within solid porous particles depends on the thermal inactivation during infrared heating, η_IC.p, and on the efficiency of the ionization-separation chamber, η_el.p, Eq. (56). The influence of ultraviolet radiation on the inactivation of viruses within a solid porous particle is not included, and therefore the efficiency η_uv.p is omitted.

Evaporation of droplets and infrared heating of solid porous particles in which the virus is hidden

Immediately before entering the infrared chamber, the droplet diameters (d_o) are 0.01 mm, 0.03 mm and 0.05 mm, while the mean velocity of the fluid inside the annular cylindrical channel is 0.1 m s⁻¹. The droplet temperature is equal to the ambient temperature of 293 K, while the surface temperature of the infrared heat source varies from 650 to 900 K. When moving through the chamber with the infrared heater, the droplets heat up and evaporate, so that their initial diameter decreases. Figure 1 shows the length of the path that the droplets travel in a straight line inside the chamber with the infrared heater while their diameter is reduced by a factor of 10 (d_o/d = 10). Droplets of the largest diameter (0.05 mm) require a larger heating distance in order to reduce their diameter to a value of 5 μm. It is assumed that the virus does not lose its properties during heating and is located in the center of the droplet. For the smallest droplet diameter (0.01 mm), during heating, when the droplet diameter decreases to 1 μm, the influence of the surface temperature of the infrared heater is minimal. On the other hand, at a constant inlet droplet diameter d_o of 0.03 mm, the velocities were varied in the amounts of 0.1 m s⁻¹, 0.2 m s⁻¹ and 0.3 m s⁻¹, as shown in Fig. 5. Heating is performed until the droplet diameter is reduced by a factor of 10 (d_o/d = 10), i.e., to a value of 3 μm. The longest heating length is required at the highest droplet velocity; this is due to the shortened heating time of the droplet inside the annular chamber with the infrared heat source.

By heating the solid particle, the virus is also heated; that is, its temperature approaches the value of the thermal inactivation point, which can cause the virus to lose its infectivity. For three different diameters of porous solid particles (0.01 mm, 0.02 mm and 0.03 mm), the particle velocity has a constant value of 0.1 m s⁻¹. The porous particle with a diameter of 0.01 mm has the smallest surface area and volume and thus heats up to a value of 500 K once it passes 0.098 m. The largest particle diameter of 0.05 mm passes the greatest distance inside the chamber with the infrared heater, so the rate of temperature rise of this particle is the smallest. Porous particles with a diameter of 0.03 mm and different velocities of 0.1 m s⁻¹, 0.2 m s⁻¹ and 0.3 m s⁻¹ travel different distances for their heating from the initial temperature of 300 K to 500 K. At the minimum velocity of 0.1 m s⁻¹, particles with a diameter of 0.03 mm need about 0.028 m for their temperature to rise to 500 K.

Thermal inactivation of viruses in droplets and porous solid particles

Thermal inactivation of the virus depends on the exposure time and on the temperature of the medium within which the virus is located. Since the temperatures of the droplets and particles are different, the temperatures of the viruses they carry are also different.
On the one hand, the temperature of the liquid droplets does not exceed 373 K, since the liquid evaporates and the droplets reduce their volume. On the other hand, the temperature of the porous solid particles rises above 373 K. The initial temperature of the droplet and of the solid porous particle is 300 K. Since the droplet and the porous solid particle move through the chamber with infrared radiation, the exposure time depends on their velocity. The initial velocities were 0.1 m s⁻¹, 0.2 m s⁻¹ and 0.3 m s⁻¹ (Fig. 6). The average thermal inactivation in the intervals of 0 to 3 s and 0 to 1 s is 0.85% and 0.26%, respectively. At the set values, the thermal inactivation of the virus, as shown in Fig. 6, at a temperature of 460 K within the solid porous particle has a mean value of 18% at the maximum exposure time of 3 s (0.1 m s⁻¹, d_o = 0.03 mm). For comparison, overall thermal inactivation at 65 °C and an exposure time of 15 min was reported to strongly reduce the infectivity of the coronavirus SARS-CoV Strain Urbani, by at least 4 log10 [30]. At the minimum exposure time of 1 s, thermal inactivation of the virus has a maximum value of 97%. The second case is based on maintaining a constant flow velocity of 0.1 m s⁻¹ and varying the particle and droplet diameters in the amounts of 0.01 mm, 0.03 mm and 0.05 mm. The largest diameter of 0.05 mm absorbs the largest amount of heat, so thermal inactivation also has its maximum value there. The effect of UV-C radiation on viruses within solid porous particles is not included.

The UV-C analysis was performed on liquid droplets with diameters of 0.01 mm, 0.03 mm and 0.05 mm at a velocity of 0.1 m s⁻¹, as shown in Fig. 7. The outer radius of the cylindrical annular channel is 40 mm, while the radius of the UVGI lamp is 12.7 mm. The irradiance (I_o) delivered by the UVGI lamp is 842 μW cm⁻², while the standard rate constant (k) is 0.001187 cm² μJ⁻¹. A Napierian absorption coefficient (α) of 1.5 was adopted for the droplet. At the smallest droplet diameter (0.01 mm), the shortest length (about 0.01 m) of the chamber with UV-C radiation is required for the survival fraction to reach the value 0, i.e., for the efficiency of UV-C inactivation of viruses to reach the value 1, as shown in Fig. 7. For UV-C inactivation of viruses in droplets of larger diameter, a much greater chamber length with UV-C radiation must be provided. At the largest droplet diameter of 0.05 mm, a UV-C chamber length of 0.01 m is required for the survival fraction to be below 47%.

The voltage between the inner (r_i) and outer (r_o) electrodes is 5 kV, while the dielectric constant of the particle (ε_r) is 12. For the given values, a functional relationship was established between the electrostatic separator efficiency and the length of the negative electrode. The electrostatic separator efficiency reaches its maximum (97.5%) for the separation of the largest particles (0.05 mm), for which the required length of the electrostatic separator is about 0.3 m, as shown in Fig. 8. As the particle diameter decreases, the radial velocity of the particle also decreases, and a greater length of the electrostatic separator is necessary in order for the particle to be deposited on the cylindrical separation electrode. On the other hand, varying the particle velocity (0.1 m s⁻¹, 0.2 m s⁻¹ and 0.3 m s⁻¹) at a constant diameter of 0.03 mm does not significantly affect the value of the electrostatic separator efficiency.
Total efficiency of treatment of viruses

The total efficiency of the three-component system for the treatment of viruses within droplets and solid porous particles is calculated according to Eqs. (55) and (56), respectively. For two characteristic cases, in which the droplet (or particle) diameter and velocity are varied, the total efficiency is presented in Fig. 9. At a constant inlet velocity v_o of 0.1 m s−1, as the droplet diameter d_o increases to the maximum value of 0.05 mm, the total efficiency decreases to 0.90. In this case, the change in droplet diameter caused by evaporation of the liquid is absent, as shown in Fig. 9, because the chamber with the infrared heater is not in operation. If the droplet diameter is instead reduced by 10 times, the total efficiency of the system for treating viruses within the droplet is approximately constant at about 0.98. This increase in total efficiency is due not to thermal inactivation but to the increased UV-C inactivation of the virus in the shrunken droplets. Under the same conditions, the total efficiency of the three-component system in treating viruses hidden within solid porous particles increases linearly with diameter up to 0.05 mm, reaching a value of 0.98. At a diameter of 0.044 mm (point A), the total efficiency is the same (about 0.93) for a porous particle and for a droplet that does not change its diameter (no evaporation). On the other hand, with increasing liquid droplet velocity at a constant diameter of 0.03 mm, the total efficiency decreases linearly. The reason for this linear decline in the efficiency of the three-component system for the treatment of viruses within droplets (up to 0.03 mm) is twofold:

- reduction of the exposure time inside the UV-C chamber as the droplet velocity increases;
- decrease of UV-C efficiency due to the reduced intensity of UV-C radiation entering the droplet.
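Equations (55) and (56) themselves are not reproduced in this excerpt. A common and plausible way to combine independent stages in series is η_tot = 1 − Π(1 − η_i), i.e., a virus survives the system only if it survives every stage; the sketch below uses that assumption, with the UV-C term dropped for porous particles as stated above. The stage efficiencies fed in are illustrative placeholders, not values from the paper.

```python
from math import prod

def total_efficiency(stage_efficiencies):
    """Overall inactivation efficiency of independent serial stages:
    the virus survives only if it survives every individual stage."""
    return 1.0 - prod(1.0 - eta for eta in stage_efficiencies)

# Droplet, Eq. (55)-style: UV-C and thermal (infrared) stages contribute.
eta_droplet = total_efficiency([0.95, 0.05])    # illustrative values only
# Porous particle, Eq. (56)-style: thermal and ionization-separation stages;
# the UV-C term is omitted, as in the text.
eta_particle = total_efficiency([0.18, 0.90])   # illustrative values only

print(f"droplet:  eta_tot ~ {eta_droplet:.3f}")
print(f"particle: eta_tot ~ {eta_particle:.3f}")
```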
Conclusions

The research carried out in this paper aimed to form a three-stage system that enables the proper treatment of viruses within droplets and solid porous particles. Viruses were considered in two forms: viruses contained within droplets and viruses contained within porous solid particles. The surface of a liquid droplet significantly reduces the effectiveness of UV-C treatment of the virus located inside it: the intensity of UV-C radiation decreases due to reflection at the outer surface of the droplet and can be so weakened that it is unable to inactivate the virus inside. In the case of a virus within a solid porous particle, the effect of UV-C radiation on its inactivation is excluded and was not analyzed; the possibility that a UV-C wave directly hits the virus within the solid porous particle is ruled out, so UV-C inactivation of viruses within porous particles is negligible. In order to increase the efficiency of UV-C inactivation of viruses inside the droplets, a chamber with infrared heating was used for evaporation of the droplets and for heating the solid porous particles.

When droplets and porous solid particles are heated by infrared radiation, thermal inactivation of viruses can occur, especially of those inside solid porous particles. The ionization-separation chamber has the role of electrifying and separating the porous solid particles, i.e., of inactivating the viruses inside these particles with negative ions. The influence of this ionization-separation chamber on the ionization of droplets is not included in this analysis. Accordingly, the following results can be highlighted:

- Infrared heating and evaporation of droplets increase the efficiency of UV-C inactivation of the viruses inside them.
- Infrared radiation is selectively directed toward heating liquid droplets and porous solid particles, while the increase in air temperature is negligible.
- Solid porous particles are significantly heated by the infrared radiation, which causes appreciable thermal inactivation of the viruses inside them.
- Due to the short exposure time and low temperature, thermal inactivation of viruses inside liquid droplets is relatively small.
- Evaporation of droplets reduces their diameter, which decreases the losses of UV-C radiation at the outer surface of the droplet.
- Negative ions fill the cavities inside the porous particle and thus block viruses from leaving the particle, inactivating them.
- The porous particle in which the virus is inactivated is stopped and separated in the ionization-separation chamber.

Further research, based on the results achieved, will focus on various geometric and process optimizations of the three-component system for the treatment of viruses, in order to maximize the overall efficiency of inactivation.
The H2BG53D oncohistone directly upregulates ANXA3 transcription and enhances cell migration in pancreatic ductal adenocarcinoma

Supplementary data, figures and tables

Yi Ching Esther Wan, Jiaxian Liu, Lina Zhu, Tze Zhen Evangeline Kang, Xiaoxuan Zhu, John Lis, Toyotaka Ishibashi, Charles G. Danko, Xin Wang, Kui Ming Chan

Department of Biomedical Sciences, City University of Hong Kong, Hong Kong, China; Key Laboratory of Biochip Technology, Biotech and Health Centre, Shenzhen Research Institute of City University of Hong Kong, Shenzhen, China; Department of Molecular Biology and Genetics, Cornell University, NY, USA; Division of Life Science, Hong Kong University of Science and Technology, Hong Kong, China; James A. Baker Institute for Animal Health, Cornell University, NY, USA

Supplementary Fig. 3 | H2BG53D alters transcription of ANXA3 in vivo. (a) Genome viewer showing the normalized RNA-seq reads (upper) and PRO-seq reads (lower) of ANXA3 in two wildtype and two H2BG53D cell lines. Validation of the elevated expression of ANXA3 by (b) RT-qPCR (*p < 0.05; LSD post hoc one-way ANOVA test) and (c) Western blotting. (d) Levels of transcription of the indicated genes were detected by RT-qPCR using primers at the indicated intron-exon boundaries (*p < 0.05). Schematic diagram showing exon (Ex)-intron (In) junctions along the gene bodies of ANXA3 and SNAP47. Cells were first incubated with 300 μM DRB for 3.5 hours, then washed with PBS and further incubated in fresh medium for the indicated times. Levels of pre-mRNA of the indicated regions were measured by RT-qPCR. Pre-mRNA values are normalized to the values of the DMSO-treated control, which was set to 1. Results are shown as means ± standard deviation (SD) from three independent experiments (*p < 0.05 vs. WT, unpaired t-test).

Primer sequences (table fragment): GCGTTCGATTGGATGGCTAT; ACCTGAGCAGTTTCTACCCC

PRO-seq

Nuclei were incubated in run-on reaction mix (including 1% Sarkosyl (Sigma-Aldrich, #L5125)) at 37°C for 5 min. The run-on reaction was terminated by adding Trizol LS (Invitrogen, 10296010) and the RNA was pelleted by ethanol precipitation. RNA pellets were re-dissolved in nuclease-free water, briefly denatured at 65°C, and then base-hydrolyzed with NaOH to produce 100-150 nt fragments. The biotinylated nascent transcripts were purified three times using Dynabeads M-280 Streptavidin (Invitrogen, 11206D), each round followed by Trizol (Invitrogen, 15596026) extraction and ethanol precipitation. The 5' caps of transcripts were removed with RNA 5' Pyrophosphohydrolase (NEB, M0356S) and the 5' hydroxyl groups were repaired with T4 polynucleotide kinase (NEB, M0201). The libraries were then generated using TruSeq small RNA adapters and size-selected to a range of 140-350 bp with Solid Phase Reversible Immobilisation beads (Beckman Coulter AMPure XP, A63881) before being sequenced on an Illumina NextSeq 500 with 75 bp paired-end reads.

ATAC-seq

ATAC-seq libraries were generated as described in [3]. 50,000 cells were harvested, washed once with 50 µl of cold 1X PBS, and then resuspended in 50 µl of lysis buffer (10 mM Tris-HCl, pH 7.5; 10 mM NaCl; 3 mM MgCl2; 0.1% NP-40). Cells were then centrifuged for 10 min at 500 g. The supernatant, which contains the cytoplasmic components, was discarded and the pellet was collected. Transposition was initiated by adding 2X TD buffer with 2.5 µl of Tn5 transposase (Illumina, FC121-1030) in a 50 µl total volume.
Transposition was allowed to proceed for 30 min at 37°C in a thermomixer shaking at 500 rpm. Transposition reactions were cleaned up with the Qiagen MinElute Kit. Libraries were generated using the custom Nextera PCR primers [3] and were amplified for 10-12 cycles. Libraries were purified with AMPure beads to remove primer dimers and >1,000 bp DNA. Library quality was assessed using the Agilent Bioanalyzer High-Sensitivity DNA kit and quantified using the NEBNext Library Quant Kit. Libraries were sequenced on an Illumina NextSeq 500 with 50 bp paired-end reads.

ChIP-qPCR

Cells were cross-linked with 1% PFA at room temperature for 5 min, and the formaldehyde was then quenched with 125 mM glycine at room temperature for 5 min. Cells were washed twice with 1X TBS, harvested by scraping in 1 ml extraction buffer (10 mM Tris-HCl, pH 7.5; 10 mM NaCl; 0.5% NP-40; proteinase inhibitor cocktail), and incubated on ice for 30 minutes. Nuclei were washed once with MNase digestion buffer (20 mM Tris-HCl, pH 7.5; 15 mM NaCl; 60 mM KCl; 2 mM CaCl2). Digestion was started by adding 5 µl MNase (NEB M0247S, diluted 1:10) to the nuclei suspension. The reaction was then incubated at 37°C with 500 rpm shaking for 5 min. Digestion was quenched by adding 2X STOP buffer (100 mM Tris-HCl, pH 8.0; 20 mM EDTA; 200 mM NaCl; 2% Triton X-100; 0.2% sodium deoxycholate). Soluble chromatin was collected after two sequential high-speed centrifugations of the sonicated lysate (10,000 g for 5 min and 15 min at 4°C). 5% of the lysate was taken as input and the remaining lysate was incubated with specific antibodies at 4°C overnight. 30 µl of pre-washed Protein G Sepharose (GE Healthcare, 17061802) were added to each sample and incubated at 4°C for 1-2 hours. The beads were washed with different buffers: once with ChIP lysis buffer, once with lysis buffer with 0.5 M NaCl, once with Tris/LiCl buffer (10 mM Tris, pH 8.0; 0.25 M LiCl; 0.5% NP-40; 0.5% Na-deoxycholate; 1 mM EDTA), and twice with Tris/EDTA buffer (50 mM Tris, pH 8.0; 10 mM EDTA). After washing, 100 µl of 10% Chelex (Bio-Rad, cat. no. 142-1253) were added to the washed Protein G beads and boiled at 95°C for 10 min; then 5 µl of 20 mg/ml Proteinase K (NEB, P8107S) were added and incubated at 37°C for 30 min. Samples were boiled again for 10 min to inactivate Proteinase K and centrifuged to collect the supernatant. 100 µl of 20 mM Tris, pH 8.0 were added to the pellet and centrifuged again to collect the supernatant. The supernatants were combined and used as template for the qPCR reaction. qPCR was performed using the Applied Biosystems QuantStudio 3 Real-Time PCR System.

CUT&RUN sequencing data analysis

Reads were aligned to the human reference genome hg19 and the yeast reference genome sacCer3 separately by Bowtie 2 [4]. Human reads were normalized by spike-in yeast reads using deepTools [5]. MACS2 [6] was used to call peaks in paired-end mode, using the parental line as control, under p < 0.001. Mutant-enriched peaks were identified by 'DESeq2' [7], taking yeast spike-in reads as the scale factor, after counting reads within peaks using BEDTools (p < 0.05 and log2 fold change > 0.5) [8]. Peak annotation to different genomic regions, including gene body, promoter (transcription start site (TSS) ± 1 kb), downstream (3 kb downstream of the transcription end site (TES)), and distal intergenic regions, was performed by the R package 'ChIPseeker' [9].
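Before the randomization control described next, here is a minimal sketch of the yeast spike-in scaling idea used above: each sample's human read counts are multiplied by a factor inversely proportional to its yeast spike-in read count, so libraries that captured more spike-in material are scaled down proportionally. The sample names and counts are made-up placeholders, not data from this study.

```python
# Spike-in normalization sketch: derive per-sample scale factors from
# yeast spike-in read counts and apply them to human read counts.
# All numbers below are illustrative placeholders.

yeast_spikein_reads = {
    "WT_rep1": 120_000,
    "WT_rep2": 95_000,
    "G53D_rep1": 150_000,
    "G53D_rep2": 110_000,
}

# Anchor the scale to the smallest spike-in library so factors are <= 1.
reference = min(yeast_spikein_reads.values())
scale_factors = {s: reference / n for s, n in yeast_spikein_reads.items()}

def normalize(human_counts, sample):
    """Apply the sample's spike-in scale factor to raw human read counts."""
    factor = scale_factors[sample]
    return [count * factor for count in human_counts]

print(scale_factors)
print(normalize([50, 200, 10], "G53D_rep1"))
```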
As a random control, 'shuffled peaks' matching the mutant-enriched peaks in number and length were randomly generated for each chromosome 1,000 times using BEDTools [8]. A Chi-square test was then performed to assess the statistical significance of the difference in genomic distribution between mutant-enriched peaks and shuffled peaks. For each gene, the occupancy of FLAG was quantified by the number of reads located in the gene body and 1 kb upstream of the TSS, counted by BEDTools [8] and subsequently normalized by the yeast spike-in factor. To identify genes showing differential occupancy of FLAG between mutants and WT, differential occupancy analysis was further performed by 'DESeq2' (p < 0.01 and log2 fold change > 0.5) [7].

RNA-seq data analysis

Reads were mapped to the human reference genome hg19 and counted by STAR 2.6.1a [10] with default parameters. The R package 'DESeq2' [7] was used to perform differential expression analysis (p < 0.05 and |log2 fold change| > 0.25). 'bamCoverage' in deepTools [5] was used to generate bigwig files for IGV visualization using the traditional normalization method, Reads Per Kilobase per Million mapped reads (RPKM). Gene set overrepresentation analysis based on differentially expressed genes was performed by the R package 'HTSanalyzeR2' [11] using hypergeometric tests, and significant gene sets were defined by Benjamini-Hochberg adjusted p < 0.05.

PRO-seq data analysis

Adapter trimming, read alignment, and coverage file generation were based on the pipeline illustrated by Dig et al. [12], using human genome hg19 as the reference. Count data were obtained from the bigwig files using the R package 'bigWig' [13]. The R package 'DESeq2' [7] was used to perform differential expression analysis, and significantly differentially expressed genes were defined by p < 0.05 and |log2 fold change| > 0.25.

ATAC-seq data analysis

Reads were aligned to the human reference genome hg19 by BWA [14] with default parameters. Mitochondrial reads were removed and duplicates were discarded. Only properly paired reads were used for further analysis. MACS2 was used to call peaks in paired-end mode (q < 0.05) [6]. Coverage files for IGV visualization were generated from bam files using deepTools [5].

DRB treatment

5,6-Dichlorobenzimidazole 1-β-D-ribofuranoside (DRB) (Sigma, D1916) was dissolved in DMSO as a 75 mM stock solution and stored at -20°C. S2VP10 wild-type and G53D mutant cells were grown overnight on 35 mm plates to 60%-70% confluency and then treated with 300 μM DRB for 3.5 hours. Cells were washed with PBS to remove the DRB and then incubated in fresh medium for various time periods. Following the incubation period, cells were washed with PBS and subjected to total RNA isolation using a universal RNA extraction kit (Takara, 9767). 500 ng of total RNA were used for the reverse transcriptase reaction with PrimeScript RT Master Mix (TaKaRa, RR036A). The levels of pre-mRNA at various positions along the ANXA3 gene were determined by real-time PCR. Values obtained were normalized to the average level of 5S and GAPDH. Results are expressed relative to the pre-mRNA value of cells treated with DMSO.
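To close the methods, a worked illustration of the hypergeometric overrepresentation test mentioned in the RNA-seq analysis: given M background genes, of which n belong to a gene set, the probability of seeing at least k set members among N differentially expressed genes follows the hypergeometric distribution. The gene counts below are illustrative placeholders, not values from this study.

```python
from scipy.stats import hypergeom

M = 20_000  # background genes
n = 150     # genes annotated to the gene set
N = 800     # differentially expressed genes
k = 18      # gene-set members among the differentially expressed genes

# sf(k - 1) gives P(X >= k) for the discrete hypergeometric distribution;
# in practice such p-values are then Benjamini-Hochberg adjusted across sets.
p_value = hypergeom.sf(k - 1, M, n, N)
print(f"overrepresentation p-value: {p_value:.3e}")
```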
Enseki Far-Infrared Sandbath: Its Basic and Future Therapeutic Possibilities

It is well known that heat therapies, including the Finnish sauna, have various health advantages and have long been performed worldwide since ancient times. Recently, we focused on the physical nature of ceramics, which emit far-infrared rays upon heating. Accordingly, we developed ceramic beads whose electromagnetic wave profiles are almost identical to those of a black body, a perfect far-infrared radiator. A bathtub equipped with a computerized hot-water circulation system was first filled with the ceramic beads. The beads in the bathtub were heated by circulating hot water at 50 ℃, which was drained out thereafter; a bather was then laid in a supine position, covered with the heated ceramic beads to a depth of 5-10 cm except for the facial-cranial region, and sandbathed for 15 minutes. In this minireview, we briefly overview the physiological responses to far-infrared sandbathing and its immune impacts. Finally, we discuss the logical and mechanistic basis and future therapeutic possibilities.

Introduction

It is widely recognized that sauna bathing is generally used to maintain mental as well as physical health [1]. Sauna bathing has long been used as a therapeutic modality for hundreds of years, not only in the Scandinavian regions and northern Europe but also in far-east countries, e.g., the oriental-style sauna of Korea and the Ibusuki sandbath of Kagoshima, Japan. The Finnish steam sauna is representative in this context; however, there are several other forms of sauna, such as the dry-heat sauna, infrared sauna, and far-infrared (FIR) sauna [2]. With developing energy sources, emitters, and other photo-technologies, various forms of FIR sauna have been upgraded over time [3]. The classification of the International Commission on Illumination has three subdivisions for infrared (IR) radiation: near-IR (IR-A, wavelengths of 700 nm-1400 nm), mid-IR (IR-B, wavelengths of 1400 nm-3000 nm), and far-IR (IR-C, wavelengths of 3000 nm-1 mm). In terms of tissue penetration upon exposure of human skin, the deepest penetration is obtained by near-IR (approximately 5 mm) [4]. FIR exposure results only in superficial penetration [5], which, however, has various clinical advantages [6,7]. In general, FIR saunas heat to 40-60 °C and utilize 120-V infrared elements which irradiate electromagnetic waves with a wavelength of around 10 µm [8]. There are two types of FIR irradiators applicable for patients' use. In the first, an FIR emitter consisting of electrified ceramic plates is located approximately 20 cm above the bather and steadily increases skin temperature [9]. The second is an FIR dry sauna in which light is used to create heat and directly irradiate the target (indeed, the skin) [10]. In the 'Enseki' sandbath, a novel FIR sauna system our team reported recently, ceramic beads are placed in a bathtub equipped with a computerized hot-water circulating system and heated at up to 50 ℃ [11]. Immediately after the temperature of the ceramic beads reached a sufficient level, the hot water was drained out, and a bather was laid in a supine position and completely covered to a depth of 5-10 cm (except for the head and face) by ceramic beads, as in sandbathing. Based on the physical nature of the ceramic beads, which emit FIR rays when heated to around 50 ℃, the Enseki method was conducted as FIR bathing.
By repetitively conducting microbiological analyses, it was found that no pathogens grew at any step of the procedure or on any part of the circulation system [11]. To analyze its safety for human use, various physiological parameters were checked in bathers before, during, and after the sandbathing, including blood pressure, heart rate, oral temperature, body weight, and blood viscosity, all of which were affected only to a similar extent as with other types of sauna. Consequently, it appeared that the Enseki method had, at least, no negative impact on human health. Biochemical analyses were also performed, including blood glucose, HbA1c, uric acid, lactate, fatty acids, and others, demonstrating no abnormal scores [11]. Finally, the results of questionnaires demonstrated that 90% of the participants reported comfort and wished to repeat the bathing [11].

Discussion

When electromagnetic waves transfer heat to the human body, three possible routes exist: 1) direct heat transfer, 2) convection, and 3) radiation. Since FIR does not require a transfer medium for energy transfer, convection is likely involved only as a minor route. Moreover, the majority of the ceramic beads do not contact the skin surface, except for those located immediately against the skin, and thus are not entirely capable of direct energy transfer. Hence, radiation of FIR from the heated ceramic beads is likely the major route, assisted by direct energy transfer.
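A rough number can be put on the radiative route using the Stefan-Boltzmann law, q = ε·σ·(T_beads⁴ − T_skin⁴). The emissivity and temperatures below are assumed illustrative values (a near-black-body ceramic justifies a high ε), not measurements from the study; the estimate merely shows that bead-to-skin radiation at these temperatures delivers a physiologically meaningful heat flux.

```python
# Rough Stefan-Boltzmann estimate of net radiative heat flux from heated
# ceramic beads to the skin. All inputs are assumed, illustrative values.

SIGMA = 5.670e-8       # W m^-2 K^-4, Stefan-Boltzmann constant
EMISSIVITY = 0.95      # assumed; the beads are close to a black-body radiator
T_BEADS = 50 + 273.15  # K, beads heated to about 50 °C
T_SKIN = 34 + 273.15   # K, assumed skin surface temperature

# Treat beads and skin as a simple gray-body pair exchanging radiation.
q = EMISSIVITY * SIGMA * (T_BEADS**4 - T_SKIN**4)
print(f"net radiative flux ~ {q:.0f} W/m^2")
```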
Enseki sandbathing may influence immune cascades, as evidenced by the following. First, the number of peripheral leukocytes increased after the sandbathing [12], although the proportions of the leukocyte subsets (granulocytes, lymphocytes, eosinophils, and basophils) were not altered. Similarly, it was previously reported that sauna bathing led to an increase in the number of white blood cells among athletes but not in non-athletes [13]. In that study, the authors concluded that 'sauna bathing stimulated the immune system to a higher degree in the group of athletes compared to the untrained subjects'. As we did not enroll athletes in our studies, Enseki sandbathing might lead more effectively to an increase in peripheral leukocyte numbers. Second, the ratio of CD4+/CD8+ T cells was significantly increased [12,13]. A similar finding was previously reported: thermal bathing as well as radon hot-spring bathing raised the CD4/CD8 ratio in bathers [14]. Thus this might be a rather common phenomenon among thermal therapies. It has been reported that an inverted CD4/CD8 ratio is associated with aging and thus may reflect immune senescence [15]. In terms of senescence, aging is accompanied by a decline in acquired cell-mediated immunity, leading to a decrease in acquired humoral immunity [16]. Since aging is a continuous and progressive process, the sandbathing-induced increment of the CD4/CD8 ratio might be interpreted as forced immune rejuvenation; the biological significance of this event should be explored further. Third, Enseki sandbathing led to the production of interleukin (IL)-6 and tumor necrosis factor (TNF)-α in vivo in the sera of human subjects [12]. Although the biological significance is not yet known, one can interpret that Enseki-sandbathing-related events may, at least, activate immune cascades. Fourth, Enseki sandbathing may lead to PHA-stimulated proliferation of T cells [12].

Collectively, these findings indicate that sandbathing induced 1) release of the cytokines IL-6 and TNF-α, and 2) enhanced proliferation of T cells in PHA-stimulated PBMC. One can speculate that Enseki sandbathing may activate immune cascades, including T cell proliferation, which might possibly be of therapeutic significance. To date, the molecular mechanisms underlying the biological effects of FIR rays remain unclear. As previously reported, FIR mediates its biological effects on target cells via thermal and non-thermal routes. In terms of thermal effects, FIR signals may first activate thermoregulators on skin cells. For instance, it has been reported that transient receptor potential (TRP) channels in the skin are crucial for maintaining internal temperature balance and thermal homeostasis [17]. Interestingly, different TRPs may be activated depending upon the degree of temperature stimulation. This implies that, compared with the traditional Finnish sauna, the relatively lower temperature setting of Enseki sandbathing might activate different TRP channels, leading to different clinical effects. Furthermore, it has also been nicely shown, using snake model systems, that TRP channels may work as sensors for infrared detection [18]. In our system, radiated FIR rays might activate TRP channels other than those affected by the high-temperature Finnish sauna. This might underlie physiologic effects that are characteristic of Enseki sandbathing. By contrast, one possible candidate representing non-thermal effects is FIR-induced nitric oxide (NO) in endothelial cells [19]. Extending that speculation, Hsu et al. reported that FIR may induce nuclear translocation of the promyelocytic leukemia zinc finger protein (PLZF) in endothelial cells of the human umbilical vein, independently of a thermal effect [20]. In that paper, they demonstrated that FIR exposure induced the nuclear translocation of PLZF, which up-regulated phosphatidylinositol-3 kinase to activate Akt, which in turn activated endothelial NO synthase (eNOS) to induce NO generation. Finally, they concluded that, through a PLZF-mediated pathway, FIR may be a potential therapeutic modality to maintain vascular endothelial health and function [20]. Based on their observations, it is possible to speculate that Enseki sandbathing also activates such signal transduction pathways via thermal as well as non-thermal routes. Beyond non-thermal pathways, thermal effects are also known to be involved in the activation of eNOS [21]. Heat shock proteins (HSPs) are known to be capable of inducing immune responses, such as T cell proliferation and production of cytokines such as IL-6 and TNF-α, and thus play a proinflammatory role when released in soluble form. Wang et al. reported that soluble HSPs activated antigen-presenting DCs via induction of surface CD40L and IL-15R, which further activated T cells via the CD40/CD40L and IL-15/IL-15R systems [22]. Interestingly, soluble HSPs may induce effective anti-tumor immunity, possibly via induction of CD4+ cells producing granzyme B independently of target cells, while the induced anti-tumor ability of CD8+ cells required the presence of target tumor cells. Furthermore, it was reported that balneotherapy with hot water at 38-40 ℃ induced production of serum HSP70 [23]. It is thus possible to speculate that Enseki sandbathing, via inducing soluble HSPs, may possess immune-modulatory capability.
This remains to be elucidated and is a target of future research. Enseki sandbathing led to a decrease in blood triglycerides and blood fatty acid peroxides, while an increase was observed in blood free fatty acids [11]. This phenomenon might be induced by the facilitated degradation of triglycerides to free fatty acids via activation of lipoprotein lipase. Hence, through the activation of metabolism, Enseki sandbathing induces triglyceride metabolism via activated lipoprotein lipase. When combined with adequate aerobic exercise, the Enseki procedure might thus be useful for weight control. Using Enseki sandbathing, we performed a laser Doppler study demonstrating dilatation of the radial artery by 24%, an increase in the circulating blood volume of the radial artery by 2.43 times, and acceleration of the flow speed by approximately 1.7 times [11]. A similar tendency was also observed in the popliteal artery. These data indicate that the Enseki procedure temporarily dilated the radial as well as popliteal arteries, leading to an increase in peripheral circulating blood volume. Together with previous observations emphasizing the usefulness of heat therapy in cardiovascular diseases [24][25][26][27], these findings suggest that Enseki bathing might be a useful therapeutic modality for cardiovascular disorders. One self-explanatory effect of heat therapy is the increase in body temperature. Indeed, thermographic analysis demonstrated a relatively rapid increase in body surface temperature. Enseki sandbathing may also flexibly control skin circulation, so it might be an effective tool to activate the function and proliferation of skin constituents, such as dermal fibroblasts capable of collagen production. Repetitive Enseki sandbathing might therefore be a good tool for skin rejuvenation.

Conclusion

Although more detailed validation of the effects at the molecular level is necessary, we conclude that this is a worthwhile new FIR therapeutic modality, by which Enseki sandbathing might contribute to our lives in physical, immunological, mental, as well as rejuvenation contexts.
Focusing on the Role of Natural Products in Overcoming Cancer Drug Resistance: An Autophagy-Based Perspective

Autophagy is a critical cellular adaptive response in tumor formation. Nutritional deficiency and hypoxia exacerbate autophagic flux in established malignancies, promoting tumor cell proliferation, migration, metastasis, and resistance to therapeutic interventions. Inhibition of pro-survival autophagy may therefore be a promising treatment option for advanced cancer. Conversely, excessive or persistent autophagy is cytotoxic, resulting in tumor cell death, and targeted autophagy activation has also shown significant promise in the fight against tumor drug resistance. Several research groups have examined the ability of natural products (NPs) such as alkaloids, terpenoids, polyphenols, and anthraquinones to serve as autophagy inhibitors or activators. The data support employing NPs that promote lethal autophagy, or that inhibit pro-survival autophagy, against tumor drug resistance. This paper discusses the potential applications of NPs that regulate autophagy in the fight against tumor drug resistance, some limitations of the current studies, and future research needs and priorities.

Introduction

Malignant tumors, one of the most lethal threats to human life and health, are increasing in both morbidity and mortality. In 2020, there were approximately 19.3 million new cancer diagnoses and 9.9 million deaths worldwide [1]. Chemotherapy, radiation, immunotherapy, and targeted therapy are the primary treatment modalities for tumors and have demonstrated success; however, the development of drug resistance significantly impacts therapeutic outcomes. The pathways of resistance to cancer therapy are intricate: membrane transport proteins, the tumor microenvironment, tumor stem cells, programmed cell death (PCD), DNA damage, and epigenetics are factors known to play key roles in the development of resistance to cancer therapy [2]. In recent years, there has been a surge of interest in using autophagy to combat tumor drug resistance.

Autophagy, a critical mechanism for removing defective proteins and organelles from cells, is also considered a type of PCD, alongside apoptosis and necrosis. Autophagy can be triggered by various stresses, including nutritional deprivation, hypoxia, oxidative damage, and DNA damage [3]. In the early stages of cancer progression, autophagy suppresses tumor development, whereas in the later stages it promotes tumor growth, shields cancer cells during therapy, and induces drug resistance [4]. Modulating autophagy has the ability to reverse tumor drug resistance [5][6][7]. Some preclinical and clinical investigations indicate that chloroquine (CQ) and hydroxychloroquine (HCQ), two autophagy inhibitors, can increase sensitivity to chemotherapy and synthetic medicines [8]: they prevent the increased autophagic flux caused by tumor therapy, enhancing therapeutic efficacy. However, the use of CQ and HCQ is constrained by factors such as side effects, dose limits, and non-specificity. There is an immediate need to identify specific and safe autophagy modulators against tumor drug resistance.

Natural products (NPs) have demonstrated the advantages of high efficacy and low toxicity in treating various diseases and are progressively attracting significant attention in cancer therapy. In particular, the "French", "Italian", and "Japanese" paradoxes have all been positively linked to NPs.
The so-called paradoxes refer to the fact that people in these nations consume a lot of fat yet exhibit relatively low rates of cardiovascular disease and cancer [9]. According to a growing body of scientific research, the traditional diets of the Mediterranean and Japanese regions, as part of a healthy lifestyle, are linked to a lower risk of chronic illnesses and cancer [10,11]. In particular, resveratrol, an NP with anticancer effects, is found in the wine drunk in considerable amounts by the French; for this reason, the French paradox might be partially explained by the consumption of wine [10]. Tomato sauce, high in the antioxidant lycopene, is a pizza essential in Italy, and lycopene, like resveratrol, may fight inflammation and cancer [10]. The prevalence of chronic illnesses and cancer in Japan may be lowered by consuming fish rich in omega-3 fatty acids [11]. Dietary and plant-derived NPs are thus being evaluated for possible roles in cancer-prevention and therapy strategies.

Our review focuses on several NPs that have not been covered thoroughly in earlier publications. Several previous reviews have described the modulatory effects of one NP, or one class of NPs, on autophagy or their potential to combat tumor drug resistance [12][13][14]. To the best of our knowledge, this is the first comprehensive review of the topic that incorporates alkaloids, terpenoids, polyphenols, and anthraquinones. Overall, we summarize the majority of NPs that can regulate autophagy, with the aim of achieving precise autophagy regulation and fighting tumor drug resistance.

Methods

We searched the PubMed and Google Scholar databases for publications written in English and published between 2010 and the present. The search terms included "autophagy", "tumor", and "drug resistance" as well as "alkaloids", "terpenoids", "polyphenols", or "anthraquinones". All original research was considered, including animal trials and/or in vitro experiments. First, detection of autophagic flux should include observation of the cellular ultrastructure and identification of molecular markers. Regarding NP screening, we did not consider any NPs that did not overcome drug resistance by modifying autophagy. Moreover, autophagic cell death can only be identified if three requirements are met: (1) increased autophagic flux; (2) cell death that occurs independently of apoptosis; and (3) the ability to reverse cell death by inhibiting autophagy. Finally, the experimental design and reliability of each study were crucial criteria for evaluating the quality of the literature. The experimental design of included studies comprised at least two groups: combination treatment with NPs and anti-tumor drugs (research group) and anti-tumor drugs alone (control group). In addition, this study builds on published research and did not involve any human or animal experiments; therefore, ethical approval from an institutional review board was not necessary.

Mechanisms of Autophagy

Autophagy is a cellular self-protection system that preserves cell survival under varied stress circumstances by degrading and reusing cellular proteins and peptides. Autophagy may be classified into three types based on the biodegradation process: macroautophagy, microautophagy, and chaperone-mediated autophagy [15].
In recent years, accumulating studies have highlighted autophagy as a critical mechanism in tumor biology, whose activation or inhibition plays a "double-edged sword" role in tumor progression and drug resistance, as discussed in a review by Chang et al. in 2020 [16]. They propose that manipulating autophagy-mediated resistance could represent a vast arena of study for tumor therapy. Macroautophagy is the most common and sophisticated type of autophagy, and the studies discussed in this article focus on macroautophagy [17]. Five steps of autophagy have been proposed: induction, nucleation, elongation, fusion, and degradation [18]. Briefly, the UNC-51-like kinase 1 and 2 (ULK1/2), autophagy-related gene 13 (ATG13), and FIP200 proteins comprise a multiprotein complex that is vital in the early stages of autophagic vesicle formation. The class III phosphatidylinositol 3-kinase (PI3K-III) complex, comprising Beclin1, ATG14, VPS34, and p150, contributes to membrane formation and nucleation. Finally, the ULK1/2 complex forms a pre-initiation complex with the PI3K-III complex, which binds to the complex composed of ATG12, ATG5, and ATG16L to promote the conversion of microtubule-associated protein light chain 3 (LC3)-I to LC3-II. LC3-II marks the formation of intact autophagic vesicles. The selective uptake and degradation of autophagosome cargo is made possible by the interaction of LC3-II with the autophagy substrate p62. Autophagosomes also include LC3-II in their outer membranes, which aids the elongation and closing of the membrane. As a result, autophagy markers such as LC3-II and p62 are often used to monitor autophagic flux [19].

Autophagy is strictly regulated by ATGs and various signaling pathways [20]. For example, the mammalian target of rapamycin (mTOR) and AMP-activated protein kinase (AMPK) maintain low levels of cellular autophagy under normal conditions by suppressing the action of the ULK1/2 complex. The phosphoinositide 3-kinase (PI3K)/protein kinase B (AKT) signaling pathway is critical for autophagy because it can effectively activate mTOR and thereby inhibit autophagy. AMPK functions as an energy sensor and frequently works as a negative regulator of mTOR, inducing autophagy. In contrast to AMPK, p38MAPK suppresses LC3-I to LC3-II conversion by phosphorylating the ATG5 protein and may act as a negative autophagy regulator; furthermore, p38MAPK can suppress ERK activation, thus impairing cellular autophagy. The potential function of reactive oxygen species (ROS), aerobic metabolites that are toxic to cells, as upstream signaling molecules of autophagy has also been highlighted [21]. p53 is likewise a key molecule in the autophagic cascade, playing a dual role in autophagy by controlling different signaling pathways. Specifically, nuclear p53 induces ATG expression through its transcriptional activity to activate autophagy, whereas cytoplasmic p53 inhibits autophagy by suppressing AMPK, activating mTOR, and inhibiting ROS formation [22]. Post-translational modifications alter the function of ATGs and are also crucial for controlling autophagy. Enzymes such as protein kinases and phosphatases, as well as acetylases and ubiquitin ligases, may modify autophagy-related proteins in response to stressors. The best-known class IIb deacetylase, histone deacetylase 6 (HDAC6), has been demonstrated to impede autophagy by deacetylating TFEB and FOXO1, two key players in the process [23].
Additionally, non-coding RNAs, including circRNAs, lncRNAs, and microRNAs, play a role in autophagy initiation and inhibition [24,25]. Moreover, autophagy and apoptotic signaling are interdependent and interact with each other: in cancer cells, autophagy and apoptosis may oppose or encourage one another. Regulatory factors shared by autophagy and apoptosis include proteins such as p53, ATGs, Beclin1, and the Bcl-2 family. For instance, autophagy as a pro-death process may induce apoptosis when it is excessive or persistent. How apoptosis and autophagy are regulated has significant implications for cancer research and therapy [26].

The Dual Role of Autophagy in Tumors

At different stages of tumor growth, autophagy may exert opposing functions. Autophagy may prevent tumorigenesis by controlling the cell cycle, boosting the immune response, decreasing DNA damage, and maintaining genomic stability. However, autophagy can also support malignant cell growth. The processes may include providing energy to tumor cells, increasing the stemness profile of cancer stem cells (CSCs), regulating unfolded protein responses, promoting epithelial-mesenchymal transition, and facilitating cancer cell adaptation to hypoxia, oxidative stress, and DNA damage [27][28][29][30]. In other words, autophagy is associated with practically all cancer-related signaling pathways. Furthermore, certain ambiguous and complex mechanisms deserve highlighting. A review by Wang et al. emphasized the dominant role of autophagy in the interactions of pathogenic microbes with human cancer and suggested that targeting autophagy or the microbiota could be a potential anti-cancer strategy [28]. The function of autophagy in the interaction between tumor cells and the tumor immune microenvironment is also compelling: autophagy has the potential to reshape the relationship between glioblastoma (GBM) cells and the immune microenvironment [31]. Dormancy is a unique state in which tumor cells can exist; dormant cells retain malignant biological behavior and can reactivate, which is linked to tumor recurrence and drug resistance. The data confirm that autophagy is a significant factor in tumor cell dormancy [32]. In summary, the role of autophagy in cancer is dynamic and contentious, and partially depends on timing and context.

Autophagy and Tumor Drug Resistance

The most exciting feature of autophagy regulation is its ability to modulate drug resistance in tumors [33]. Chemotherapy, targeted therapy, immunotherapy, and radiotherapy are currently the most frequently used types of tumor therapy. However, doctors and researchers continue to face difficulties with the resistance of the majority of patients to chemotherapeutic drugs, immune checkpoint inhibitors, and targeted medicines [34]. Resistance mechanisms include the aberrant expression of membrane transport proteins such as ATP-binding cassette transporters and P-glycoprotein, altered drug metabolic pathways, DNA repair, autophagy, hypoxic tumor microenvironments, enhanced stemness of tumor stem cells, mutations in epidermal growth factor receptor genes, and T cell exhaustion [35,36]. Tumor cells frequently initiate pro-survival autophagy in response to the cytotoxicity of chemotherapeutic drugs. Despite the absence of large-scale clinical trials, autophagy may be a critical target for overcoming drug resistance in human cancers.
CQ and HCQ are the most commonly used autophagy inhibitors and have been demonstrated to counteract drug resistance in multiple basic studies [37,38]. Aga et al. observed that combination therapy with cisplatin and CQ decreased nasopharyngeal carcinoma cell viability and increased apoptosis; mechanistically, chemotherapy increased Beclin-1 expression, whereas CQ had the reverse effect [39]. In addition, Wang et al. elucidated that CQ could enhance the sensitivity of gallbladder cancer to gemcitabine (GEM), depending on the modulation of autophagy [40]. Imatinib (IM) is a tyrosine kinase inhibitor (TKI) that has shown clinical efficacy and a favorable safety profile in treating recurrent or metastatic gastrointestinal stromal tumors. However, 20-30% of patients develop autophagy-mediated resistance and do not respond well to IM therapy; CQ improves IM sensitivity by inhibiting autophagy through the mitogen-activated protein kinase (MAPK)/extracellular signal-regulated kinase (ERK) pathway [41]. Moreover, a vast number of anti-tumor drugs, including docetaxel [42], doxorubicin (DOX) [43], mitogen-activated protein kinase kinase 1/2 (MEK1/2) inhibitors [44], and panobinostat [45], have demonstrated anti-cancer activity in human cancer in combination with CQ.

However, the clinical efficacy of targeting autophagy is mixed. A phase I/II trial conducted in 2014 demonstrated that the combination of HCQ, radiation therapy, and temozolomide (TMZ) had no effect on survival in patients with GBM [46]. In contrast, a clinical trial was conducted to determine the anti-cancer effect of CQ in combination with taxane or taxane-like chemotherapy in patients with breast cancer: the objective response rate (ORR) of 45.16% was greater than planned, and patients experienced increases in progression-free and overall survival [47]. Clinical trials in pancreatic cancer (PC) have demonstrated that CQ enhances the clinical response to GEM [48]. Beyond clinical efficacy, adverse reactions to CQ and HCQ are a key rationale for limiting their use. According to one clinical study, the daily administration of 500 mg CQ had no inhibitory effect on breast cancer cell proliferation, and nearly 15% of patients terminated therapy due to CQ-related adverse effects [49]. Adverse effects of CQ and HCQ, such as gastrointestinal reactions, skin hypersensitivity, and retinal toxicity, remain important limitations. It is also worth mentioning that neither CQ nor HCQ is a specific autophagy inhibitor: they accumulate in acidic cell compartments and impair lysosomal activity, thereby interfering with autophagy [50]. In conclusion, although targeting autophagy has demonstrated significant promise in the fight against tumor drug resistance, various issues currently preclude its clinical application, and an urgent search for precise and safe autophagy modulators is critical.

Natural Products Overcome Autophagy-Mediated Tumor Drug Resistance

Autophagy is an adaptive response upon which tumor cells depend for survival, particularly during chemotherapeutic or targeted drug therapy. Protective autophagy, activated by numerous signaling pathways, supplies nutrition to cancer cells to maintain their growth and migration while also making them resistant to therapy [51,52]. As a result, autophagy suppression is often exploited as a target to increase cancer cell susceptibility to medicines.
Autophagy inhibitors such as CQ, HCQ, and 3-methyladenine increase tumor cell susceptibility to treatment by reducing autophagosome formation, preventing autophagosome fusion with lysosomes, decreasing lysosomal degradation function, and suppressing ATG expression. Lethal autophagy, on the other hand, may serve as an alternative cell death mechanism for apoptosis-deficient cancer cells, and some autophagy inducers such as rapamycin have improved drug sensitivity by triggering autophagic cell death [53]. Recently, plant-derived NPs have shown unique advantages in the management of cancer, particularly in the battle against drug resistance [54,55]. Reviewing the literature revealed that one subset of NPs causes apoptosis in tumor cells by suppressing pro-survival autophagy, while another subset promotes toxic autophagy, which causes autophagic cell death. More intriguingly, certain NPs may both trigger toxic autophagy and suppress pro-survival autophagy. We are thus interested in the mechanisms by which NPs act against drug resistance in tumors through the modulation of autophagy.

Natural Products as Inhibitors of Protective Autophagy

The inhibition of protective autophagy by NPs has significant advantages in inducing apoptosis in tumor cells, inhibiting their proliferation and reducing drug resistance. In advanced tumor stages, NPs inhibit the initiation and degradation steps of autophagy by suppressing autophagic vesicle formation or autolysosome formation, which in turn reverses drug resistance via the endogenous apoptotic pathway. Several NPs that can be utilized to inhibit autophagy and promote apoptosis against drug resistance are listed in Table 1, and Figure 1 depicts the precise process of NP-controlled autophagy inhibition.

Plant-derived terpenoids are a possible novel anti-cancer drug class. They regulate cancer cell proliferation, migration, angiogenesis, and drug resistance [56]. Several terpenoids have been demonstrated to operate as autophagy modulators against tumor drug resistance and to enhance sensitivity to chemotherapeutic and synthetic drugs [57]. Andrographolide (AG), a diterpene lactone from Andrographis paniculata, exerts anti-inflammatory, antiviral, and neuroprotective properties [58]. The data suggest that AG can be an essential anti-cancer agent that could be introduced into oncology therapy to enhance chemotherapeutic sensitivity. According to one study, AG in conjunction with cisplatin increases the susceptibility of NSCLC cells to cisplatin: AG reduces autophagic flux and limits the growth of drug-resistant cells by targeting the PTEN/AKT/mTOR pathway [59]. In lung cancer, AG promotes the conversion of LC3B-I to LC3B-II and reduces ATG5 protein expression, impeding autophagy; combined with cisplatin, AG inhibits tumor cell growth and reduces the incidence of lung metastases [60]. Notably, autophagy is not required for AG to improve vincristine (VCR) sensitivity; essentially, AG regulates the PI3K/AKT/p53 signaling pathway, which promotes apoptosis rather than autophagy [61]. Consistent with this evidence, terpenoids such as α-hederin [62], jolkinolide B [63], PC3-15 [64], pristimerin [65], celastrol [66], and hemistepsin A (HsA) have been demonstrated to act against drug resistance by inhibiting autophagic flux and promoting apoptosis.
Polyphenols are a class of secondary plant metabolites that exhibit various pharmacological activities [75]. Notably, the impact of polyphenols in preventing drug resistance appears encouraging in oncology therapy [14]. Icariin, one of the main active ingredients of the Chinese botanical drug Epimedium, is a well-researched anti-cancer agent [76]. Icariin induces apoptosis in TMX-resistant breast cancer cells and inhibits autophagy; thus, it is an ideal sensitizing agent. Combined treatment with autophagy inhibitors and icariin also induced potent anti-tumor effects, while tumor cells engineered to overexpress ATG5 were found to resist the toxic effects of icariin [67]. Indeed, ATG5 is also an essential player in the enhancement of cisplatin sensitivity by icariin: icariin acts on the AKT/mTOR/ATG5 signaling pathway in cisplatin-resistant OC cells, inhibiting autophagy and causing cell death [68].

Hypoxia has been implicated in the efficacy of anti-cancer medications as a critical factor that activates many drug-resistance pathways in tumor cells [77], and numerous correlations between drug resistance and hypoxia have been discovered. Hypoxia-inducible factor-1 (HIF-1) is an essential regulator of cellular adaptation to hypoxia; it is frequently overexpressed in cancer cells and has been related to drug resistance [78,79]. Apigenin (APG), a flavonoid, is a natural active ingredient that exhibits anti-cancer action as well as a favorable safety profile. APG can perturb the tumor cell microenvironment, induce ERS, and trigger autophagic cell death [80]. Among the targets of APG, HIF-1α and Ezh2 have been identified. APG treatment may activate ERS signaling, increase the expression of autophagy-related proteins, and downregulate the expression of p-mTOR and p62. Notably, this effect was observed under both normal and hypoxic conditions, indicating that APG does not exhibit selective behavior [81]. Drug resistance has also been hypothesized to be targeted via non-coding-RNA-mediated autophagy: DOX-resistant HCC cells showed an ATG7-dependent autophagy pathway regulated by miR-520b.
In other words, protective autophagy promoted drug resistance in tumor cells, whereas miR-520b mimics inhibited ATG7-dependent autophagy, counteracting DOX resistance; in conjunction with DOX, APG stimulated miR-520b expression [69]. In conclusion, APG causes autophagic cell death in tumor cells and, by suppressing protective autophagy, may increase chemotherapeutic drug susceptibility. Furthermore, several polyphenols that reduce off-target effects and enhance drug sensitivity, such as tea polyphenol [70], genistein [71], the apple dihydrochalcone phloretin [72], formononetin (FMNT) [73], and rutin [74], are currently under investigation.

Natural Products as Promoters of Lethal Autophagy

In vivo and in vitro studies have shown that NPs that promote lethal autophagy are useful against resistance to tumor treatment. Such NPs activate cancer cell autophagy, improve chemotherapeutic drug sensitivity by increasing lysosomal membrane permeability, and induce tumor cell apoptosis by activating intact autophagic flux through several pathways, including PI3K/AKT/mTOR, AMPK, and ROS [82,83]. Several NPs that can be utilized to induce lethal autophagy against drug resistance are listed in Table 2, and Figure 2 depicts the precise process of NP-controlled autophagy activation.

Alkaloids are NPs of importance with potential value in cancer therapy; they have been demonstrated to impact malignant cancer cell proliferation and to work in tandem with chemotherapeutic drugs. Berberine (BBR), as observed by Zhang and his group, downregulates the c-Myc signaling pathway and increases ROS production, which increases sensitivity to lapatinib [84]. BBR is an alkaloid capable of upregulating basal autophagy and inducing hyperautophagy; these properties make BBR important for the study of malignant proliferation and drug resistance in cancer research [85]. BBR may modulate autophagic flux and apoptosis in drug-resistant cells: it downregulates the ERK1/2 signaling pathway, inducing intracellular ROS accumulation and EGFR degradation, all of which have been linked to chemotherapy and targeted-therapy sensitivity [86,87]. Matrine is an active component of the botanical drug Sophora flavescens that exhibits potent anti-tumor properties. Matrine targets ATGs and affects the cell cycle of tumor cells, enhancing the sensitivity of drug-resistant cells to VCR and adriamycin (ADM) [88].
Ursolic acid (UA) is a pentacyclic triterpenoid with a broad spectrum of anti-cancer properties. Lin et al. revealed that UA treatment of GEM-resistant cells resulted in the activation of ERS and the suppression of the receptor for advanced glycation end products (RAGE), concomitant with enhanced apoptosis and autophagy [89]. Toxic autophagy and ferroptosis were also significantly elevated in osteosarcoma cells treated with cisplatin and UA [104]. On the basis of these findings, UA may be an ideal adjuvant for chemotherapy drugs. In addition, one study found that betulinic acid (BA) could act against acquired EGFR-TKI resistance [90]: compared with EGFR-TKI alone, the combination of BA and EGFR-TKI demonstrated promising efficacy against EGFR-TKI-resistant lung cancer cells, associated with the induction of cytotoxic autophagy. Additionally, BA blocked the MEK/ERK signaling pathway, promoted toxic autophagy, and enhanced autophagic cell death in SGC-7901 cells (a GC line) [105]. However, ERK inhibitors (ERKi) do not exploit the promotion of autophagy by BA for NSCLC therapy: even in the presence of ERKi, NSCLC cells remain susceptible to BA, whereas dual treatment with BA and ERKi increased protective autophagy, which significantly reduced effectiveness. Notably, the addition of HCQ to dual treatment with BA and ERKi was more effective than either single or dual therapy [106]. In summary, this evidence suggests that combination therapy, as a potential treatment option, may provide clinical benefits for oncology patients, possibly reducing drug resistance.

Similarly, aloe emodin (AE), isolated from Rheum palmatum L., is a well-known anthraquinone compound that exerts significant anti-tumor effects [107]. Recent research has found that AE and targeted drug-delivery systems significantly overcome drug resistance in various human cancers [108]. Interestingly, Cheng et al. found that AE also enhanced protective autophagy in the course of reversing ADM-induced drug resistance; by supplementing AE with autophagy inhibitors, anti-tumor activity and sensitivity may be enhanced (in vitro: MCF-7/ADR cell line, 20 µM, 48 h) [109].

Triptolide (TPL), a naturally occurring compound that acts via a distinct molecular mechanism, has shown growth-inhibitory efficacy in preclinical studies of various solid tumors [110]. Owing to its strong sensitizing effect and minimal toxicity, TPL may be an alternative for treatment-resistant human malignancies [111]. Tumor-necrosis-factor-related apoptosis-inducing ligand (TRAIL) can form a homotrimer that engages its receptor to initiate and target apoptosis in tumor cells, and numerous recent studies have examined TRAIL [112]. For example, Feng et al. used a targeted drug-delivery system with promising results, showing that TRAIL enhanced the targeting ability of nanoparticles and acted synergistically with DOX in inhibiting tumor cells [113]. Pumilio-1 (PUM1) is an RNA-binding Pumilio-family protein whose expression is dramatically increased in various tumor tissues. PUM1 stimulates tumor cell proliferation, motility, and colony formation in colon cancer and is required for the regulation of tumor spherification [114]. TPL therapy decreased PUM1 expression in PC, activating autophagy to enhance TRAIL sensitivity in tumor cells.
The data revealed potential molecular insights into TPL's effect on drug resistance [91]. Similarly, Zhong et al. evaluated TPL's anti-tumor efficacy against SKOV3/DDP OC cells and discovered that TPL, which induces ROS production, strongly suppressed the JAK2/STAT3 signaling pathway and promoted toxic autophagy [92]. Consistent with this evidence, terpenoids such as oleanolic acid (OA) [93], AGE [94], and demethylzeylasteral (ZST93) [95] have been demonstrated to overcome drug resistance by inducing lethal autophagy and cell apoptosis.

Resveratrol (RV) is a polyphenol compound possessing anti-tumor, anti-bacterial, anti-inflammatory, and anti-aging activities. Recently, it was discovered that RV acts as a sensitizer to chemotherapeutic agents and aids in the fight against drug resistance in human cancers [115]. RV has been implicated in the apoptosis and autophagy of tumor cells. RV activates the PI3K/AMPK signaling pathway in drug-resistant oral cancer cells, enhances the expression of autophagy-related genes, and promotes autophagic death; notably, RV is relatively non-toxic to normal oral cells [96]. Further transcriptomic analysis revealed that RV could influence the migration of OC cells by targeting the hedgehog (Hh) pathway and epithelial-mesenchymal transition, subsequently acting against the activity of lysophosphatidic acid (LPA, a lipid growth factor that promotes drug resistance). BMI-1 plays a critical role in the hedgehog pathway. Ferraresi et al. demonstrated that treatment with RV downregulated BMI-1 expression in response to LPA treatment and restored toxic autophagy, sensitizing tumor cells to platinum-based therapy [97]. In addition to RV, structural analogs of RV have been investigated for their ability to counteract drug resistance by targeting autophagy [98]. High levels of RAGE and MDR1 protein expression have been seen in GEM-resistant pancreatic ductal adenocarcinoma (PDAC) cells. RAGE activates the PI3K/AKT signaling pathway, resulting in MDR1 overexpression; the RAGE/PI3K/AKT/MDR1 axis is critical for the GEM resistance process. Piceatannol (PTE) downregulates RAGE expression, decreases the expression of p-PI3K, p-AKT, and MDR1, and enhances autophagic death, indicating a reversal of GEM resistance [98]. Furthermore, several polyphenols such as quercetin [99], hyperoside [100], scutellarin [101], and chrysin [102], which reduce off-target effects and enhance drug sensitivity, are currently under investigation.

Anthraquinones are NPs that are frequently used to treat constipation; they exert anti-inflammatory, anti-oxidative, and anti-tumor properties, and have recently shown unique chemotherapeutic efficacy [116]. The development of nano-drug delivery technologies, in particular, has increased the bioavailability and targeting ability of anthraquinones [117,118]. Tanshinone IIA (Tan IIA), a diterpenoid quinone isolated from the botanical drug Salvia miltiorrhiza, modulates autophagy through regulation of the Beclin1/LAMP1 and PI3K/Akt/mTOR signaling pathways. Tan IIA promotes autophagy and reduces ADM-induced cardiotoxicity and oxaliplatin-induced peripheral neurotoxicity [119,120]. Alternatively, Tan IIA has been utilized to address resistance to drugs such as DOX. Tan IIA alone was ineffective at inhibiting the proliferation of drug-resistant GC cells.
However, a Tan IIA and ADM combination showed promising toxic autophagy against drug-resistant GC cells in vitro (SNU-216, SNU-601, SNU-620, SNU-638, SNU-668, and SNU-719 cell lines, 5 µM, 24-72 h). These findings provide a theoretical basis for clinical trials combining Tan IIA and ADM to treat GC [121]. Numerous studies have demonstrated that thymoquinone (TQ) has beneficial anti-cancer properties [122]. Mifepristone is a progesterone receptor antagonist with anti-proliferative effects on endothelial cells, and it may also act as a potential chemotherapy drug sensitizer [123]. One intriguing study demonstrated that, when used to treat polycystic ovarian syndrome, mifepristone may generate chronic inflammation mediated by p65 by increasing autophagy; in contrast, TQ inhibits the side effects of mifepristone by upregulating aromatase, decreasing androgen receptor expression, and reducing autophagic flux [124]. It is worth acknowledging that TQ exhibits a direct inhibitory effect on the GEM resistance of BC. Bashmail et al. showed that TQ enhanced the chemo-modulatory effect of GEM by inducing autophagic cell death [103].

Natural Products with a Dual Role in Autophagy Regulation

Interestingly, the same NPs can have various effects at different stages of tumor development. Some NPs can both inhibit autophagy to reverse drug resistance and activate autophagy to promote tumor cell apoptosis. This might be associated with the cancer type, the cell line, and the duration and concentration of drug action. Table 3 summarizes the NPs exerting a dual role in autophagy regulation. Figure 3 depicts the precise process of how NPs control autophagy. Tetrandrine (TET) has been shown to exert anti-cancer properties via targeting autophagy, according to a recent review [125]. TET induces autophagy in tumor cells by increasing the accumulation of ROS. Specifically, ROS activates ERK/MAPK and promotes the transcription of ATG7, thus promoting autophagy [126,127]. Furthermore, autophagy and apoptosis induced by TET-stimulated ROS accumulation may be controlled by caspase-3, which mediates apoptosis via interaction with p21, but not by AKT activity [128]. TET, on the other hand, can inhibit mTOR and induce autophagy by acting as a protein kinase C (PKC) inhibitor; this signaling pathway is not dependent on ROS [129]. TET promotes autophagy, which improves chemotherapeutic drug susceptibility. In NSCLC, TET and cisplatin decrease PI3K/AKT activity and Bcl-2 expression, which are important regulators of cellular autophagy inhibition [130]. In addition to the PI3K/AKT pathway, TET enhanced autophagy by downregulating the protein expression of survivin (one of the most potent inhibitors of apoptosis). In turn, it reverses GEM resistance and promotes apoptosis in drug-resistant PANC-1 cells [131]. Tamoxifen (TMX) is an anti-estrogenic medication used to treat some types of breast and endometrial cancer. Wang et al. demonstrated that TET increases TMX sensitivity; in vitro experiments revealed that autophagy inhibition could be a mechanism by which TET overcomes tamoxifen resistance [132]. Epidermal growth factor receptor tyrosine kinase inhibitors (EGFR-TKIs) such as gefitinib, erlotinib, and afatinib are routinely used to treat patients with lung adenocarcinoma; however, the development of acquired drug resistance impacts the overall therapy outcome. By blocking lysosomes, TET improves the susceptibility of human lung cancer cells to gefitinib [133].
As previously stated, TET's regulation of autophagy may be opposite in different malignancies and drug-resistance states.

Lycorine is an active alkaloid isolated from plants of the Amaryllidaceae family that has received increasing attention from researchers because of its anti-cancer properties [146]. Recently, Hu et al. devised and constructed a novel nanocomposite comprising lycorine and folic-acid-modified mesoporous silica-coated gold nanostars [147]. The mass release of lycorine once the nanocomposite is internalized into cancer cells causes mitochondrial dysfunction and ERS in tumor cells; this causes an increase in ROS, which leads to apoptosis and, notably, a stronger inhibitory effect and selectivity against drug resistance. Tongue-cancer-resistant protein 1 (TCRP1) may be a candidate oncoprotein that promotes chemotherapy resistance in tumors and is highly linked to TMX, cisplatin, and radiation resistance. Previous studies have revealed that TCRP1 is a target gene of c-Myc [148], a protein that regulates the phosphoinositide-dependent kinase 1 (PDK1)/serum- and glucocorticoid-inducible kinase 1 (SGK1) [149] and PI3K/AKT/NF-κB [150] signaling pathways, promoting TCRP1 transcription to render tumor cells resistant to chemotherapeutic agents. Lycorine has recently been shown to be a possible TCRP1 inhibitor [151]. Activation of the TCRP1 protein degradation pathway in hepatocellular carcinoma (HCC) cells results in the suppression of the AKT/mTOR signaling pathway and an increase in autophagic flux.
Lycorine-induced c-Myc inhibition enhances TCRP1 protein degradation [152]. Proteasome inhibitors such as bortezomib, carfilzomib, and ixazomib are effective in the treatment of multiple myeloma (MM); however, some patients experience various degrees of side effects and drug resistance. HMGB1 is a critical signaling molecule in autophagy. According to Roy et al., bortezomib-resistant myeloma cells become more responsive to bortezomib because lycorine blocks HMGB1-mediated autophagy [134]. The sterol regulatory element-binding protein (SREBP) cleavage-activating protein (SCAP) is a sterol-sensitive protein that regulates triglyceride and cholesterol levels. In a recent study, SCAP was shown to be substantially expressed in HCC tissues and associated with sorafenib resistance. Lycorine, a specific SCAP inhibitor, triggered autophagy by elevating AMPK activity, increasing the sensitivity of HCC cells to sorafenib [135].

Carnosic acid (CA) is a polyphenolic diterpene that has recently been found to be helpful against tumor drug resistance. CA overcomes TMZ resistance in glioma cells through the induction of autophagy: CA directly inhibits the PI3K/AKT signaling pathway and induces autophagy by interacting with p62 [136]. In addition, CA has a direct synergistic effect with the vitamin D2 analog doxercalciferol (D2), enhancing sorafenib-mediated tumor cell death; this increased efficacy is associated with enhanced cytotoxic autophagy. After treatment of HCC cells with a combination of CA and the vitamin D2 analog, an increase in cytoplasmic vacuolization was seen, along with increased protein levels of Beclin1, ATG3, and LC3 [137]. Trastuzumab (Tz), a monoclonal antibody directed against the human epidermal growth factor receptor 2 (HER-2), dramatically improves the prognosis of BC patients who are HER-2-positive. CA partially restores the susceptibility of BC cells to Tz by inhibiting autophagy [138]. β-Elemene is another terpene that enhances the sensitivity of cancer cells to particular medicines. Numerous lines of evidence support the notion that, in terms of autophagy, β-elemene increases p53-induced toxic autophagy and cyclin D3-dependent cycle arrest to reverse 5-FU resistance [139]. β-Elemene has also been shown to suppress autophagic flux in gefitinib-resistant cells while decreasing the expression of methyltransferase-like 3 (METTL3); METTL3 promotes autophagy by increasing the expression of ATG5 and ATG7 [140].

Curcumin (CUR) is critical for overcoming drug resistance in tumors via various mechanisms, including decreased drug uptake, drug efflux, PCD, epigenetics, and DNA damage responses [14]. CUR may also be a potential autophagy modulator [153]. As small non-coding RNAs with regulatory functions, some miRNAs govern cellular autophagy and are involved in tumor cell proliferation, migration, invasion, and drug resistance. miR-142-5p is the highlight of this series of studies, demonstrating suppression of chemotherapy drug resistance [154]. CUR modulates the miR-142-5p/ULK1 axis of autophagy, reducing crizotinib resistance in NSCLC cells [141]. CUR also has a synergistic effect with gefitinib: by disrupting the interaction of Sp1 and HDAC1, CUR and gefitinib induce autophagic cell death by downregulating EGFR activity and inhibiting the ERK/MEK and AKT/S6K pathways. CUR's sensitizing impact on gefitinib was abolished by autophagy inhibitors or by the inhibition of Beclin-1 or ATG7.
In xenograft experiments, CUR and gefitinib effectively suppressed tumor growth [142]. Luteolin exhibits multiple biological activities and attacks tumor cells through diverse mechanisms. First, luteolin inhibits HIF-1 activity and modulates autophagy, which aids in the fight against hypoxic tumors [155]. In addition, luteolin enhances TRAIL-induced apoptosis by increasing autophagic flux: toxic autophagy of tumor cells was induced by the upregulation of DR5 expression, and c-Jun N-terminal kinase (JNK) inhibitors suppress DR5, nullifying the toxic effects of luteolin [143]. In contrast, luteolin inhibits autophagy in OC cells and enhances cisplatin sensitivity by suppressing PARP1 expression [156]. In conclusion, through modulating autophagy, luteolin exerts a durable synergistic effect against drug resistance. Epigallocatechin gallate (EGCG) is a key component of catechins with a wide range of biological properties. Previous studies have shown that EGCG has a considerably detrimental effect on cisplatin-resistant oral cancer CAR cells via a mechanism that may be related to autophagy and apoptosis mediated by the AKT/STAT3 signaling pathway [144]. Later, Jiao et al. found a link between EGCG and autophagy in tumor drug resistance: EGCG suppresses tumor cell autophagy by targeting the ERK signaling pathway, eliminating gefitinib resistance in NSCLC cells [145].

Limitations

However, issues in the research process continue to be a stumbling block for NPs sought for application in the clinic. NPs including alkaloids, terpenoids, polyphenols, and anthraquinones regulate autophagy by influencing numerous signaling pathways, including PI3K/AKT/mTOR, ERK, JNK, and AMPK [157-160]. On the one hand, most studies on the mechanisms by which NPs regulate autophagy are still at an early stage, and the mode of interaction between NPs and autophagy-related targets remains to be elucidated. On the other hand, the above-mentioned signals do not exist in isolation but overlap and constitute a regulatory network affecting tumor drug resistance. Most current research focuses on a few targets or single pathways, while less research has been conducted on the multi-target, multi-pathway, integrated control mechanisms of NPs. Furthermore, throughout the therapeutic process, certain NPs may either stimulate or inhibit autophagy. This is a distinct benefit of NPs, because fighting drug resistance requires bidirectional modulation of autophagy. However, the studies summarized in this paper are unidirectional experiments in which NPs affect autophagy, and the kind of cancer, cell line, dosage, and administration time vary. As a result, the "bidirectional control" of NPs in tumor cells remains controversial.

Conclusions

Cancer is a global medical challenge that presents grave risks to human health. Both the incidence and death rates of cancer are on the rise: there were approximately 19.3 million new cancer diagnoses and 9.9 million cancer-related deaths worldwide in 2020. Even though chemotherapy and targeted treatments provide some patients with a glimmer of hope, drug resistance has become the most challenging obstacle in cancer treatment. Numerous studies have demonstrated that combining NPs such as alkaloids, terpenoids, polyphenols, and anthraquinones with anti-cancer drugs can sensitize tumor cells to anti-tumor drugs, increasing their effectiveness. To be more precise, both inducers and inhibitors of autophagy can be utilized to overcome drug resistance.
Anti-tumor drugs can benefit from NPs that block protective autophagy or induce autophagic cell death. Compared with CQ or HCQ, several NPs have also been shown to dramatically reduce the hazardous side effects of chemotherapy medications. There are also a few obstacles worth mentioning. Autophagic flux and indicators of autophagy have not been consistently measured or identified across investigations; the autophagy cascade, in particular, is an ever-changing mechanism. It is also problematic that many studies have only explored the influence of NPs on autophagy at a specific moment in time, which might introduce bias. The experimental outcomes of numerous studies differ, which might cause problems when comparing and assessing therapeutic effectiveness. Finally, the signaling molecules implicated in botanical-drug-regulated autophagy against drug resistance interact and intersect; future research should focus on multiple signaling pathways rather than just one. In summary, resistance to chemotherapeutic and synthetic drugs is a complex process. Despite the significant synergistic benefits of NPs, many preclinical and clinical studies are required to establish their value.

Conflicts of Interest: The authors declare no conflict of interest.
Evolutionary Multi Objective Optimization Algorithm for Community Detection in Complex Social Networks

Most optimization-based community detection approaches formulate the problem in a single- or bi-objective framework. In this paper, we propose two variants of a three-objective formulation using a customized non-dominated sorting genetic algorithm III (NSGA-III) to find community structures in a network. In the first variant, named NSGA-III-KRM, we consider Kernel k-means, Ratio cut, and Modularity as the three objectives, whereas the second variant, named NSGA-III-CCM, considers Community score, Community fitness, and Modularity as the three objective functions. Experiments are conducted on four benchmark network datasets. Comparison with state-of-the-art approaches, along with decomposition-based multi-objective evolutionary algorithm variants (MOEA/D-KRM and MOEA/D-CCM), indicates that the proposed variants yield comparable or better results. This is particularly significant because the addition of the third objective does not worsen the results of the other two objectives. We also propose a simple method to rank the Pareto solutions so obtained, using a new measure, namely the ratio of the hypervolume and the inverted generational distance (IGD); the higher the ratio, the better the Pareto set. This strategy is particularly useful in the absence of the empirical attainment function in multi-objective frameworks where the number of objectives is more than two.

Introduction

A complex network can be considered as a graph having a set of nodes and edges between them. Examples of such networks are the World Wide Web, collaboration networks, online social networks, food webs, biological networks, etc. Analysis of these complex networks provides better insights into the quality of the interconnections among the nodes, such as the identification of important nodes and the structure of the underlying communities. Community detection is paramount, having numerous applications in e-commerce, communication networks, social networks, biological systems, health care, economics, academia, fraud detection, etc. [1]. The task of detecting communities is to find sets of nodes such that each set has nodes that are densely connected with one another and loosely connected with the nodes present in the remaining sets. This problem is NP-hard [1]. In the last decade, numerous approaches have been propounded to find communities in networks; some of the techniques are hierarchical clustering algorithms, graph partitioning methods, and evolutionary algorithms. In this paper, community detection in a given undirected and unweighted network is formulated as a multi-objective optimization problem with three objectives and is solved using NSGA-III [2]. Throughout this paper, the words community and cluster are used interchangeably. In what follows, section 2 presents the related work, section 3 presents the motivation, section 4 describes the contribution of the present study, section 5 presents basic definitions, section 6 presents the proposed methodology, section 7 describes the datasets analyzed, section 8 presents the results obtained and a discussion thereof, and finally, section 9 concludes the paper.

Literature Survey

In the last decade, several metaheuristic algorithms have been suggested to solve the community detection problem in complex networks. In 2003, Newman introduced a classical algorithm [3] which optimizes Modularity in a greedy manner.
It uses an agglomerative hierarchical clustering method to iteratively maximize Modularity. Later, in 2008, Blondel et al. designed another classical two-phase algorithm [4], which also optimizes Modularity: in the first phase, nodes are iteratively shifted from one community to another, one at a time, if Modularity increases, and in the second phase communities are merged to form larger communities. In the same year, Pizzuti proposed GA-NET [5], which uses a locus-based representation to represent a community structure and optimizes Community score to identify communities in a network. Thereafter, in 2011, Gong et al. developed MEME-NET [6]. Since Modularity is observed to suffer from the resolution limit problem [7], they optimized Modularity Density instead of Modularity using a genetic algorithm (GA), with hill climbing for local search, to find communities in a network. Later, in 2012, Shang et al. proposed MIGA [8], which also optimizes Modularity using a GA and includes simulated annealing to perform local search. Then, Pizzuti introduced MOGA-NET [9], which optimizes two objective functions, viz., Community score and Community fitness, using a GA to detect communities in a network. In 2014, Gong et al. developed MODPSO [10], which optimizes two objective functions, viz., Kernel k-means and Ratio cut, using a discrete particle swarm optimization algorithm; this approach can be used for both signed and unsigned networks. Later, in 2017, Abdollahpouri et al. proposed MOPSO-Net [11], a customized version of particle swarm optimization obtained by altering the moving technique of the particles. While moving from one iteration to another, this method uses normalized mutual information (NMI). NMI needs the ground truth cluster structure of the graph as input; hence, this method is not helpful if we do not know the ground truth community structure of the network in advance. In 2018, Yuanyuan et al. proposed two quantum-inspired evolutionary algorithms, viz., QIEA-net and iQIEA-net [12], to find community structures. QIEA-net detects communities by optimizing Modularity, while iQIEA-net takes the help of a classical partitioning algorithm. Most recently, in 2019, Tahmasebi et al. [13] proposed a many-objective community detection algorithm which takes five objectives. Out of the five, two objectives cannot be calculated if the ground truth community structure is unknown, which is indeed the case in real-life problems. In such cases, those methods cannot be used, because the very task there is to find communities in the conspicuous absence of ground truth. To sum up, single-objective community detection algorithms lead to some difficulties, such as being limited to particular community structure properties. Bi-objective formulations, in turn, did indeed leave out some important measures which could potentially be used as objective functions. We noticed that some of these measures are conceptually non-overlapping: they describe different aspects of a community. Hence, a different approach is proposed in the current paper, namely a multi-objective (three-objective) optimization framework in two variants to search for communities in complex social networks. This is a clear departure from all the works that have appeared in the literature so far.
Motivation

To the best of our knowledge, except for one of the latest papers, all the works in the literature formulated community detection in networks as an optimization problem with either a single objective or two objectives. Frameworks that considered a single objective mostly used Modularity as the objective function, while those with two objectives considered pairs such as: Kernel k-means and Ratio cut; Community fitness and Community score; Ratio cut and Ratio association; or Modularity (divided into two parts, with each part considered as one objective). In bi-objective optimization frameworks, one objective maximizes the density of communities and the other minimizes the fraction of interlinks present between communities in the network (for instance, Kernel k-means tries to find the solution with maximum community density, and Ratio cut tries to find the solutions with the minimum fraction of interlinks between communities). For evaluating effectiveness, these works employed Modularity and NMI (for networks with known ground truth communities) as external measures outside the optimization process. If we consider only two objectives, we may get solutions having high community density and few interlinks between communities; however, these solutions may or may not have good community structure. For example, in a network N, if we consider a solution with only one community consisting of all the nodes in the network, that solution has maximum intra-links and zero interlinks, but it may not be the best structure, because the Modularity value becomes zero for that solution and it does not satisfy the goal of the problem, namely to find distinct, non-overlapping communities. Most recently, Tahmasebi et al. [13] proposed a many-objective community detection algorithm which takes five objectives. Out of the five, two objectives cannot be calculated if the ground truth community structure of the given network is unknown; thus, in effect, it reduces to a three-objective formulation. Further, they used another objective function, Coverage, defined as the proportion of edges inside a community to the total edges in the network; thus, it refers to the density of a given cluster. In this paper, we propose a multi-objective optimization framework using three objectives, which tries to find solutions with good community densities, a small fraction of interlinks, and good community structures as well. Our approach is more generic, as it does not need to know the ground truth community structure in advance. Toward this end, we employed a customized NSGA-III as the optimizer.

Contributions

• Some studies [11] performed the selection of solutions after every generation based on NMI. However, the computation of NMI requires the ground truth community structure, so such methods are not helpful if we do not know the ground truth community structure of the network in advance. Therefore, we developed a framework which is generic and applicable to all networks where the ground truth is not necessarily known. In essence, we neither included NMI as an objective function nor took its help in progressing from one generation to the next. This is a radical and well-thought-out departure from the state of the art, making our approach applicable in real-life situations.

• We formulated the community detection problem as a multi-objective optimization problem with three objectives.
• We used the locus-based representation of community structure to represent a solution. In this representation, an array of size equal to the number of vertices present in the network is used to represent a community structure. It is noteworthy that a single solution can be represented in its various permutations; however, technically all of them are one and the same. Hence, we customized NSGA-III to solve this problem by adding a filter which checks for the presence of duplicate (permutation) solutions in the generated population at the end of each iteration and, if present, replaces them with randomly generated solutions.

Community Definition

A community in a network can be described as a subset of nodes that are densely connected with one another and loosely connected with the remaining nodes present in that network. The intra-links of a given community are the set of edges present inside the community, whereas the interlinks of a given community c are the set of edges connecting the vertices of community c to the vertices not present in community c.

Multi-objective Optimization Problem

Multi-objective optimization problems optimize two or more objective functions simultaneously. Let us consider a problem where we need to maximize nob objective functions simultaneously:

max F(x) = (f_1(x), f_2(x), ..., f_{nob}(x)), subject to x ∈ X,

where x = (x_1, x_2, ..., x_{noi}) is the input vector (solution), f_1(x), f_2(x), ..., f_{nob}(x) are the objective functions that need to be optimized, and noi is the dimension of the solution vector. We say that a solution x dominates another solution y if all the objective function values of x are better than or equal to the respective objective function values of y, and at least one objective function value of x is strictly better than the respective objective function value of y [15].

First variant: Kernel k-means, Ratio cut, and Modularity as the objective functions:

min KKM(x), min RC(x), max Q(x), subject to x ∈ X.

Here the vector x is a community structure of a network encoded using the locus-based representation explained in subsection 6.3, and X is the set of all possible community structures in the network.

Second variant: Community fitness, Community score, and Modularity as the objective functions:

max CF(x), max CS(x), max Q(x), subject to x ∈ X.

Here, again, the vector x is a community structure of a network encoded using the locus-based representation explained in subsection 6.3, and X is the set of all possible community structures in the network.

Objective functions considered and justification

Kernel k-means (KKM) [16] is used to find dense communities in a network. KKM is computed as follows:

KKM = 2(n − m) − \sum_{i=1}^{m} L(V_i, V_i)/|V_i|,

where n is the number of vertices in the network, m is the number of communities, |V_i| is the number of vertices in community i, and L(V_i, V_j) = \sum_{a ∈ V_i, b ∈ V_j} A_{ab}, where A is the adjacency matrix of the network. KKM should be minimized in order to get structures having denser communities. Ratio cut (RC) [17] is used to find the clusters in a network such that each cluster present in it is sparsely connected to the remaining clusters. The formula for computing the Ratio cut is as follows:

RC = \sum_{i=1}^{m} L(V_i, V̄_i)/|V_i|,

where m is the number of communities in the network, L(V_i, V̄_i) = \sum_{a ∈ V_i, b ∈ V̄_i} A_{ab}, and V̄_i is the set of vertices in the graph not present in the set V_i. Ratio cut needs to be minimized in order to get community structures with fewer interlinks. Community fitness (CF) [18] is another measure used to find dense communities in a network. When it reaches its highest value, the number of external links is minimized.
The formula for computing the CF is as follows:

P(s) = k_in(s) / (k_in(s) + k_out(s))^α,

where s is a community in the network, k_in(s) and k_out(s) are the internal and external degrees of the nodes present in community s, and α is a positive real-valued parameter controlling the community size; the community fitness of a partition is obtained by summing P(s) over its communities. We considered an α value of 1. The higher the value of this parameter, the smaller the size of the communities found. Community score (CS) [5] measures the quality of the division of a network into communities. The higher the CS, the denser the clusters obtained. The formula for computing the CS is as follows:

CS = \sum_{s ∈ S} score(s), with score(s) = M(s) · v_s, M(s) = \sum_{i ∈ s} (μ_i)^r / |s|, v_s = \sum_{i,j ∈ s} A_{ij},

where μ_i denotes the fraction of edges connecting node i to the other nodes in s, |s| denotes the cardinality of s, S is the set of communities, and the exponent r increases the weight of nodes having few connections inside community s (we considered an r value of 1 while conducting experiments). The score of a community s, score(s), is the product of the power mean of s of order r, M(s), and the volume of the community, v_s; A is the adjacency matrix of the network. Modularity [19] is defined as the fraction of the edges that fall within the given groups minus the expected fraction if the edges were distributed at random. The Modularity is computed as follows:

Q = \sum_{s=1}^{k} [ l_s/m − (d_s/(2m))^2 ],

where l_s is the number of intra-links present in community s, d_s is the sum of the degrees of the nodes in community s, m is the total number of edges in the network, and k is the number of communities found in the network. The greater the Modularity value, the more desirable the community structure.

Representation of Solution

The community detection problem, formulated as a multi-objective optimization problem, turns out to be a combinatorial optimization problem. Therefore, we need to suitably represent a community structure, which becomes a solution in optimization parlance. Toward this end, we used the locus-based representation, taking a cue from [20] and [21]. Here, we consider an n-dimensional array to represent a solution, where n is the number of nodes in the network. Each cell index in the array represents a node in the network. A cell with label i, which represents node i in the network, can hold the value i itself or the label of any node connected to node i by an edge in the network. It is to be noted that a single solution can be represented in its various permutations; however, technically all of them are one and the same.

NSGA-III Algorithm

Non-dominated sorting genetic algorithm III (NSGA-III) [2] is a multi- and many-objective optimization algorithm used to optimize three to 15 objective functions simultaneously. This algorithm yields well-diversified and well-converged solutions. It uses a reference-point-based framework in order to select a set of solutions from a substantial number of non-dominated solutions, so as to maintain diversity. For more details, the reader is referred to [2].

Customizations performed

In this paper, we performed two customizations of the NSGA-III-based approach: (i) as a single solution can be represented in various ways (meaning its permutations), if a solution is repeated more than once in the population of any iteration, we replace the duplicate with a randomly generated solution; (ii) we exclude any solution in which the entire network is considered as a single community.

Evaluation Functions

Normalized mutual information (NMI) and Modularity are widely used to assess the performance of the various evolutionary algorithms invoked to detect clusters in a network. NMI [22] is used to measure the likeness between two cluster structures.
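Before detailing NMI, we illustrate the locus-based decoding and the Modularity evaluation described above with a minimal Python sketch. This is our own illustrative code, not the authors' implementation; the function names and the toy graph are assumptions made for the example.

```python
import numpy as np

def decode_locus(genotype):
    """Decode a locus-based genotype into communities.
    genotype[i] = j encodes an (undirected) link between nodes i and j;
    the communities are the connected components of these links."""
    n = len(genotype)
    parent = list(range(n))

    def find(u):                          # union-find with path halving
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    for i, j in enumerate(genotype):
        ri, rj = find(i), find(int(j))
        if ri != rj:
            parent[ri] = rj

    comms = {}
    for node in range(n):
        comms.setdefault(find(node), []).append(node)
    return list(comms.values())

def modularity(A, communities):
    """Q = sum_s [ l_s/m - (d_s/(2m))^2 ] for adjacency matrix A."""
    m = A.sum() / 2.0                     # total number of edges
    deg = A.sum(axis=1)
    Q = 0.0
    for comm in communities:
        idx = np.asarray(comm)
        l_s = A[np.ix_(idx, idx)].sum() / 2.0   # intra-community edges
        d_s = deg[idx].sum()                     # sum of member degrees
        Q += l_s / m - (d_s / (2.0 * m)) ** 2
    return Q

# toy example: two triangles joined by one edge
A = np.zeros((6, 6), dtype=int)
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
genotype = [1, 2, 0, 4, 5, 3]             # each node points to a neighbour
comms = decode_locus(genotype)            # -> [[0, 1, 2], [3, 4, 5]]
print(comms, modularity(A, comms))
```

Union-find keeps the decoding near-linear in the number of nodes, which matters because every individual in the population must be decoded and scored at every generation.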
NMI can help us calculate how close the clusters detected by an algorithm and the ground truth cluster structure are. The minimum and maximum values possible for NMI are 0 and 1, respectively. The higher the NMI value between two cluster structures, the higher their likeness; if the NMI value is 1, then both cluster structures are one and the same. The formula for computing the NMI between cluster structures A and B is as follows:

NMI(A, B) = −2 \sum_{i=1}^{R} \sum_{j=1}^{D} C_{ij} \log( C_{ij} N / (C_{i·} C_{·j}) ) / [ \sum_{i=1}^{R} C_{i·} \log(C_{i·}/N) + \sum_{j=1}^{D} C_{·j} \log(C_{·j}/N) ],

where C_{ij} is the number of nodes appearing in both cluster i and cluster j, present in cluster structures A and B, respectively; C_{i·} (C_{·j}) is the number of elements in cluster i (cluster j) present in cluster structure A (B); N is the total number of nodes in the network; and R (D) is the number of clusters present in cluster structure A (B). To make our framework more generic, we have not considered the NMI of the network, or any other evaluation function requiring knowledge of the ground truth community structure, within the optimization process, as in most real-world networks the ground truth community structure is unknown.

Measures of Convergence and Diversity

To measure the extent of diversity and the state of convergence of the solutions found by multi- and many-objective optimization algorithms such as NSGA-III at the end of a run (in other words, after convergence), two widely used criteria are the inverted generational distance (IGD) [2][23] and the hypervolume (HV) [24]. IGD is computed as follows:

IGD(A) = \sum_{v ∈ P*} d(v, A) / |P*|,

where A is the set of solutions obtained by the algorithm, P* is the set of points present in the Pareto optimal surface, and d(v, A) is the distance from a point v in P* to the solution in A nearest to it. The IGD measure indicates how close the obtained solutions are to the solutions present in the true Pareto front or Pareto optimal surface. In cases where the true Pareto front is unknown, we run the algorithm with a large population size and a large number of generations; the first Pareto front solutions obtained at the end of the execution are then considered as an approximation to the Pareto optimal solutions [25]. In our case, we considered a population size of 500 and 500 generations to approximate the Pareto optimal surface. The hypervolume [24] of a set X is the volume of the space formed by the non-dominated points present in set X together with a reference point. Here the reference point is the "worst possible" point or solution (any point that is dominated by all the points present in the solution set X) in the objective space. For a maximization (minimization) problem with positive (negative) valued objectives, we consider the origin as the reference point. If a set X has a higher hypervolume than a set Y, then we say that X is better than Y.

Dataset Description

Four benchmark datasets were analyzed in this paper: (i) Zachary's Karate Club [26], having 34 nodes and 78 edges, with two ground truth communities (Fig. 1); (ii) Bottlenose Dolphin [27], with 62 nodes, 159 edges, and two ground truth communities (Fig. 2); (iii) American College Football [28], having 115 nodes and 616 edges, with twelve ground truth communities (Fig. 3); and, finally, (iv) Books about US Politics [29], with 105 nodes, 441 edges, and three ground truth communities (Fig. 4). Henceforth, we refer to the datasets Zachary's Karate Club, Bottlenose Dolphin, American College Football, and Books about US Politics as D1, D2, D3, and D4, respectively, for the sake of brevity.

Parameter Setting

We performed a sensitivity analysis with the parameter combinations presented in Table III on all datasets using our proposed variants. We conducted 10 runs for each parameter combination.
We computed the product of the highest Modularity and the highest NMI obtained towards the end of each run and then computed the mean of those products (over 10 runs) for each parameter combination. The parameter combination producing the highest average product of NMI and Modularity is considered the best combination. The best parameter combinations obtained for all datasets are presented as follows. It may be mentioned that in problems where the ground truth is unknown, it is impossible to compute NMI; therefore, we recommend decision making based on Modularity, taking a cue from several works in the literature.

The optimal community structures of the D1 network with the highest Modularity obtained for the best parameter combination (mentioned in subsection 8.1) by the two variants have 4 communities each. Out of these four, two are sub-communities of one community present in the ground truth community structure, and the other two are sub-communities of the other community present in the ground truth community structure. Furthermore, the community structure with the highest NMI obtained using NSGA-III-KRM turned out to be identical to the ground truth community structure. The optimal community structures of the D2 network depicted in Fig. 9 and Fig. 11, with the highest Modularity values obtained for the best parameter combination (mentioned in subsection 8.1) by the two variants, respectively, turned out to be one and the same. This community structure has five communities: one turned out to be the same as a community present in the ground truth, and the other four are sub-communities of the other community present in the ground truth community structure. The optimal community structure of the D2 network with the highest NMI, obtained for the best parameter combination (mentioned in subsection 8.1) using NSGA-III-CCM, is depicted in Fig. 10. Here, one community turned out to be the same as one present in the ground truth, and the other three communities are sub-communities of the other community present in the ground truth. The optimal community structure of the D2 network with the highest NMI, obtained for the best parameter combination (mentioned in subsection 8.1) using NSGA-III-KRM, is depicted in Fig. 12; it yielded the same structure as the ground truth community structure. The optimal community structures of the D3 network depicted in Fig. 13 and Fig. 15, with the highest Modularity obtained for the best parameter combination (mentioned in subsection 8.1) by the two variants, respectively, turned out to be the same, with 10 communities. Out of these, 4 turned out to be identical to those in the ground truth, 3 are similar to those in the ground truth but with two or three extra nodes, while the remaining 3 are similar to those in the ground truth but with two or three fewer nodes. The optimal community structure of the D3 network with the highest NMI, obtained for the best parameter combination (mentioned in subsection 8.1) using NSGA-III-CCM, is depicted in Fig. 14. It has 13 communities. Out of these, 9 turned out to be identical to those in the ground truth, 2 are similar to those in the ground truth but with one or two fewer nodes, while the remaining 3 contain nodes of two small communities present in the ground truth. The optimal community structure of the D3 network with the highest NMI, obtained for the best parameter combination (mentioned in subsection 8.1) using NSGA-III-KRM, is depicted in Fig. 16. It contains 11 communities.
Out of these 11, 6 turned out to be identical to those in the ground truth, 3 are similar to those in the ground truth but with two or three extra nodes, while the remaining 2 are similar to those in the ground truth but with one or two fewer nodes. The optimal community structures of the D4 network depicted in Fig. 17 and Fig. 19, with the highest Modularity obtained for the best parameter combination (mentioned in subsection 8.1) by the two variants, respectively, turned out to be identical, with 5 communities. Out of these 5, 2 are sub-communities of two communities present in the ground truth, with two extra nodes belonging to other communities; the other 3 contain nodes belonging to the third community in the ground truth and nodes left out of the above two communities. The optimal community structure of the D4 network with the highest NMI, obtained for the best parameter combination (mentioned in subsection 8.1) using NSGA-III-CCM, is depicted in Fig. 18. This community structure has 4 communities. Out of these, 2 are sub-communities of two communities present in the ground truth but with two extra nodes belonging to other communities; the remaining communities contain nodes belonging to the third community in the ground truth and nodes left out of the above two communities. The optimal community structure of the D4 network with the highest NMI, obtained for the best parameter combination (mentioned in subsection 8.1) using NSGA-III-KRM, is depicted in Fig. 20. This community structure has 3 communities. Out of these 3, 2 are sub-communities of two communities present in the ground truth, with two extra nodes belonging to other communities; the remaining one contains nodes belonging to the third community in the ground truth and nodes left out of the above two communities. As Modularity is widely used for comparison in the literature, we too compared the Modularity values yielded by the different state-of-the-art approaches in the recently published paper [12] with the optimal Modularity obtained by our methods. This is despite the fact that Modularity is an objective function in both the proposed formulations; the comparison is done for this purpose only. Accordingly, Table I shows that our proposed NSGA-III variants achieved the best or equal Modularity value compared to the remaining approaches. The average NMI for all the datasets obtained by both variants using the best parameter combination is presented in Table II. The communities with the highest Modularity obtained by the two proposed variants are one and the same when compared with the ground truth communities. The plots of the sensitivity analysis are depicted in Figs. S.1 to S.8 in the supplementary material. We observed from Table I that NSGA-III-KRM outperformed NSGA-III-CCM on two datasets, D2 and D3, while producing the same result on D1. This is attributed to the greater information content of NSGA-III-KRM vis-à-vis NSGA-III-CCM, in that the former obtained communities closer to the ground truth. However, both variants of NSGA-III outperformed the MOEA/D variants, i.e., MOEA/D-KRM and MOEA/D-CCM, on all datasets with respect to average Modularity. This is because of the superiority of NSGA-III over MOEA/D in obtaining more diverse and better-converged solutions.
Further, to assess the diversity and convergence of the solutions obtained by the proposed methods, and to see how close the obtained Pareto front is to the true Pareto front (Pareto optimal surface), we computed the ratio of the HV and IGD values of the solution set obtained at the end of each run. Then, we computed the average HV/IGD ratios for each parameter combination. The results obtained are presented in Tables S.I to S.VIII, available in the supplementary material. The ratio HV/IGD is indeed proposed here for the first time as a proxy for the empirical attainment function plots used with bi-objective optimization algorithms, because a similar kind of plot has not yet been proposed in the literature for multi/many-objective optimization algorithms. This is another significant contribution of the study. The results indicate that our proposed variants yielded the best or identical results in terms of Modularity. Hence, we conclude that our proposed variants have found community structures with high Modularity, indicating that the nodes within communities are densely connected with one another and nodes in different communities are well separated, which is a hallmark of this study. We also proposed a new measure, which is an alternative to the empirical attainment function plot available in the bi-objective optimization framework.
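To make the proposed HV/IGD ranking concrete, the sketch below (our own illustrative code, not the authors' implementation) computes IGD exactly and estimates HV by Monte Carlo sampling, assuming pure minimization objectives; the reference point, the approximated Pareto optimal surface, and the toy fronts are assumed inputs.

```python
import numpy as np

def igd(front, pareto_ref):
    """Inverted generational distance: mean distance from each reference
    Pareto point to its nearest obtained solution."""
    d = np.linalg.norm(pareto_ref[:, None, :] - front[None, :, :], axis=2)
    return d.min(axis=1).mean()

def hv_monte_carlo(front, ref_point, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the hypervolume dominated by `front`
    (minimization) inside the box [componentwise min(front), ref_point]."""
    rng = np.random.default_rng(seed)
    lo = front.min(axis=0)
    box = np.prod(ref_point - lo)
    pts = rng.uniform(lo, ref_point, size=(n_samples, front.shape[1]))
    # a sample is dominated if some solution is <= it in every objective
    dominated = (front[None, :, :] <= pts[:, None, :]).all(axis=2).any(axis=1)
    return box * dominated.mean()

# toy 3-objective fronts: rank two solution sets by the HV/IGD ratio
pareto_ref = np.array([[0.0, 0.5, 1.0], [0.5, 0.0, 1.0], [1.0, 0.5, 0.0]])
front_a = pareto_ref + 0.05        # close to the reference surface
front_b = pareto_ref + 0.30        # farther away
ref_point = np.array([2.0, 2.0, 2.0])
for name, f in [("A", front_a), ("B", front_b)]:
    ratio = hv_monte_carlo(f, ref_point) / igd(f, pareto_ref)
    print(name, ratio)             # the higher ratio marks the better set
```

A front lying closer to the reference surface yields a larger HV and a smaller IGD at the same time, so the ratio separates two solution sets more sharply than either indicator used alone.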
Satellite-to-Earth Quantum Key Distribution via Orbital Angular Momentum

In this work, we explore the feasibility of performing satellite-to-Earth quantum key distribution (QKD) using the orbital angular momentum (OAM) of light. Due to the fragility of OAM states, the conventional wisdom is that turbulence would render OAM-QKD non-viable in a satellite-to-Earth channel. However, based on detailed phase screen simulations of the anticipated atmospheric turbulence, we find that OAM-QKD is viable in some system configurations, especially if quantum channel information is utilized in the processing of post-selected states. More specifically, using classically entangled light as a probe of the quantum channel, and reasonably sized transmitter-receiver apertures, we find that non-zero QKD rates are achievable at sea-level ground stations. Without using classical light probes, OAM-QKD is relegated to high-altitude ground stations with large receiver apertures. Our work represents the first quantitative assessment of the performance of OAM-QKD from satellites, showing under what circumstances the much-touted higher dimensionality of OAM can be utilized in the context of secure communications.

I. INTRODUCTION

As one of the most important applications in quantum communications, quantum key distribution (QKD) has been proven to provide unconditional security [1]. Recently, real-world implementations of satellite-based QKD (e.g. [2,3]) have pointed the way towards global-scale and highly secure quantum communication networks [4]. The originally proposed QKD protocols (e.g. [1,5,6]) mainly utilize 2-dimensional encoding. However, other QKD protocols have been generalized to the case of high-dimensional encoding (e.g. [7]), and their unconditional security has been proved (e.g. [8-11]). Quantum information can be encoded in any degree of freedom (DoF) of the photon, but most of the mainstream implementations of QKD (e.g. [2-4]) rely on polarization encoding, a typical 2-dimensional encoding scheme that limits the capacity of QKD systems due to an intrinsically bounded Hilbert space. The orbital angular momentum (OAM) of light has been considered a promising DoF for quantum communications [12]. Unlike the polarization of light, the OAM of light can take arbitrary integer values [13]. The corresponding OAM eigenstates form an orthonormal basis that allows for quantum coding within a theoretically infinite-dimensional Hilbert space, opening up new possibilities for high-capacity quantum communications. As a key resource for quantum communications, entanglement can be encoded in OAM via the spontaneous parametric down-conversion (SPDC) process [14,15]. The distribution of OAM-encoded entanglement through the turbulent atmosphere has been intensively investigated in terrestrial free-space optical (FSO) channels (e.g. [16-21]), with some works demonstrating distribution over 3 km [22]. A recent experiment suggests that OAM entanglement distribution could be feasible over an FSO channel of more than 100 km [23]. Besides the generation and distribution of OAM-encoded entanglement, other recent efforts have paved the way for the practical implementation of OAM-QKD. Any OAM superposition state can be efficiently encoded in single photons thanks to the versatility of a spatial light modulator (SLM) (see e.g. [24,25]). The sorting of OAM photons has also been made possible (e.g. [26-28]), enabling the capability of performing multi-outcome measurements.
Implementations of OAM-QKD have been demonstrated in laboratory conditions with 2-dimensional (e.g. [29]) and higher-dimensional (e.g. [29-32]) encoding. Efforts have also been made to investigate the practical feasibility of performing OAM-QKD in turbulent terrestrial FSO channels [33,34]. Outside the laboratory, OAM-QKD has been demonstrated over turbulent FSO channels of 210 m [35] and 300 m [36]. Considering other types of medium, OAM-QKD has also been demonstrated over a 3 m underwater link [37] and a 1.2 km optical fiber [38]. However, most existing research on OAM-QKD has not considered the context of a satellite-based deployment. As such, the feasibility of long-range OAM-QKD via satellite is still not clear. Previously, we have studied the OAM detection performance in satellite-to-Earth communications [39] and the feasibility of OAM-based entanglement distribution via satellite [40]. In this work, we explore the feasibility of satellite-to-Earth OAM-QKD. Our main finding is that, contrary to conventional wisdom, such QKD is indeed feasible. More specifically, we find that utilizing quantum channel information enables satellite-to-Earth OAM-QKD over a wide range of dimensions under all anticipated circumstances, including the circumstance where a sea-level ground station with a reasonably sized receiver aperture is used. If channel information is not used, then feasible satellite-to-Earth OAM-QKD is confined to large telescopes situated at high-altitude observatories. The remainder of this paper is as follows. In Section II we introduce the necessary background knowledge on OAM eigenstates, atmospheric propagation of light, and the generalized OAM-QKD protocol. In Section III we detail the system model for satellite-to-Earth OAM-QKD. In Section IV we present our key results on satellite-to-Earth OAM-QKD. In Section V we explore the use of quantum channel information to improve the practical feasibility of satellite-to-Earth OAM-QKD. Finally, concluding remarks are provided in Section VI.

II. BACKGROUND

A. OAM eigenstates

OAM-QKD protocols utilize OAM eigenstates and their superpositions for quantum encoding. In cylindrical coordinates, the general form of an OAM eigenstate is given by

u_{p,l}(r, θ, z) = R_{p,l}(r, z) \exp(i l θ),   (1)

where r and θ are the radial and azimuthal coordinates, respectively, z is the longitudinal distance, l is the OAM quantum number, p is the radial node number, and R_{p,l}(r, z) is the radial profile. OAM eigenstates with different l values are mutually orthogonal. In this paper, we choose R_{p,l}(r, z) to be Laguerre-Gauss functions, making the OAM eigenstates correspond to the Laguerre-Gaussian (LG) mode set [41]. R_{p,l}(r, z) is expressed in Eq. (2) as

R_{p,l}(r, z) = \sqrt{ \frac{2 p!}{π (p+|l|)!} } \frac{1}{w(z)} \left( \frac{\sqrt{2} r}{w(z)} \right)^{|l|} L_p^{|l|}\!\left( \frac{2r^2}{w^2(z)} \right) \exp\!\left( \frac{-r^2}{w^2(z)} \right) \exp\!\left( \frac{-i k r^2 z}{2(z^2 + z_R^2)} \right) \exp\!\left( i(2p+|l|+1) \arctan(z/z_R) \right),   (2)

where w(z) = w_0 \sqrt{1 + (z/z_R)^2}, w_0 is the beam-waist radius, z_R = π w_0^2/λ is the Rayleigh range, λ is the optical wavelength, k = 2π/λ is the optical wavenumber, and L_p^{|l|}(x) is the generalized Laguerre polynomial. We denote the single-photon OAM eigenstate of the LG_{pl} mode as |p, l⟩, and this notation is further simplified to |l⟩ as we only consider the p = 0 subspace. We denote the set {|l⟩, −∞ < l < ∞} as the OAM basis and use it as the standard basis throughout this paper. Denoting the dimension as d, the standard basis of d-dimensional OAM-QKD contains d mutually orthogonal OAM eigenstates and thus spans a d-dimensional Hilbert space. Throughout this work we will denote such a d-dimensional Hilbert space as the encoding subspace H_d. In this work, we consider a maximum OAM number of 4 to construct the encoding subspace H_d (a numerical sketch of the LG modes used here is given below).
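As a concrete illustration, the following minimal Python sketch (our own code, not the authors'; the beam waist and grid sizes are assumed values) evaluates p = 0 LG fields of Eq. (2) at the waist plane z = 0, where the curvature and Gouy phase factors reduce to unity, and checks the orthogonality of opposite-l modes.

```python
import numpy as np
from math import factorial, pi
from scipy.special import genlaguerre

def lg_mode(l, p, w0, X, Y):
    """p,l Laguerre-Gaussian field at the waist plane z = 0 of Eq. (2)."""
    r, theta = np.hypot(X, Y), np.arctan2(Y, X)
    norm = np.sqrt(2 * factorial(p) / (pi * factorial(p + abs(l)))) / w0
    radial = (np.sqrt(2) * r / w0) ** abs(l) \
        * genlaguerre(p, abs(l))(2 * r**2 / w0**2) \
        * np.exp(-r**2 / w0**2)
    return norm * radial * np.exp(1j * l * theta)   # helical phase exp(ilθ)

# grid and (assumed) beam parameters
n, width, w0 = 256, 0.5, 0.10          # 0.5 m grid, 10 cm beam waist
x = np.linspace(-width / 2, width / 2, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x)

# d = 2 encoding subspace {|-l0>, |+l0>} with l0 = 4
u_plus, u_minus = lg_mode(+4, 0, w0, X, Y), lg_mode(-4, 0, w0, X, Y)
print(np.sum(np.abs(u_plus) ** 2) * dx**2)                # ~1: normalized
print(np.abs(np.sum(u_plus.conj() * u_minus)) * dx**2)    # ~0: orthogonal
```

The vanishing overlap of the l = +4 and l = −4 modes is exactly the azimuthal orthogonality that the OAM standard basis relies on.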
Specifically, we use the same approach adopted in [21] to construct the encoding subspace H_d. For d = 2, we consider a 2-dimensional encoding subspace spanned by a pair of OAM eigenstates with opposite OAM numbers (i.e., H_2 = {|−l_0⟩, |l_0⟩} with l_0 ≤ 4). For d = 3, we consider a 3-dimensional encoding subspace spanned by a pair of OAM eigenstates with opposite OAM numbers and the OAM eigenstate with zero OAM number (i.e., H_3 = {|−l_0⟩, |0⟩, |l_0⟩} with l_0 ≤ 4). For d > 3, more OAM numbers are involved. For example, for d = 4 the 4-dimensional encoding subspace is spanned by two pairs of OAM eigenstates with opposite OAM numbers (i.e., H_4 = {|−l_1⟩, |−l_0⟩, |l_0⟩, |l_1⟩} with l_0 < l_1 ≤ 4).

B. Mutually unbiased bases

Denoted by M_β = {|ξ^{(β,s)}⟩, β = 1, ..., d+1, s = 0, ..., d−1}, mutually unbiased bases (MUBs) are orthonormal bases defined on a d-dimensional Hilbert space such that

|⟨ξ^{(β,s)} | ξ^{(β',s')}⟩|^2 = δ_{β,β'} δ_{s,s'} + (1/d)(1 − δ_{β,β'}),   (3)

where δ denotes the Kronecker delta function. MUBs play an important role in QKD, since any system prepared in a state of one MUB gives outcomes with equal probability 1/d if measured in any other MUB. Therefore, if the eavesdropper measures the quantum signal in a wrong basis, she will acquire no information (in fact, she will introduce a disturbance). It has been proven that for a prime-power dimension d there exists a complete set of d+1 MUBs [42]. In this work we consider a variety of dimensions ranging from d = 2 to d = 9. When d is a prime number (i.e., 2, 3, 5, 7 in this work), a complete set of d+1 MUBs is found as the eigenstates of the different Weyl operators in the set {Z, XZ^n | n = 0, 1, ..., d−1}. The Z operator is defined as

Z|j⟩ = ϑ^j |j⟩,   (4)

where |j⟩ denotes the standard basis elements and ϑ = exp(i2π/d). The X operator is defined as

X|j⟩ = |(j+1) mod d⟩.   (5)

When d is a prime-power number but not a prime number (i.e., 4, 8, 9 in this work), the construction of a complete set of d+1 MUBs becomes a harder task. In this work we adopt the sets of MUBs given in [43,44] for these dimensions. The only non-prime-power dimension considered in this work is d = 6. Since the maximum number of MUBs is not known for an arbitrary dimension, for d = 6 we use only the 2 MUBs generated from the set {Z, X} (note that this has a negligible impact on the findings of this work). In OAM-QKD, the standard basis is the OAM basis; thus any |ξ^{(β,s)}⟩ is a superposition of the OAM eigenstates that span the encoding subspace H_d.

C. Optical propagation through turbulent atmosphere

The turbulent atmosphere is a random medium with random inhomogeneities (turbulent eddies) of different size scales that are upper- and lower-bounded by an outer scale L_outer and an inner scale l_inner, respectively. These turbulent eddies give rise to small random refractive-index fluctuations, causing continuous phase modulations of the optical beam. This leads to random refraction and diffraction effects, imposing distortions on the optical beam as it propagates through the atmospheric channel. Under the paraxial approximation, the propagation of a monochromatic optical beam ψ through the turbulent atmosphere is governed by the stochastic parabolic equation [45]

2ik ∂ψ(R)/∂z + ∇_T^2 ψ(R) + 2k^2 δn(R) ψ(R) = 0,   (6)

where R = [x, y, z]^T denotes the three-dimensional position vector (in Eq. (6) we use Cartesian coordinates for simplicity), ∇_T^2 = ∂^2/∂x^2 + ∂^2/∂y^2 is the transverse Laplacian operator, and δn(R) = n(R) − ⟨n(R)⟩ represents the small refractive-index fluctuation, with n(R) being the refractive index at R. Note that the turbulent atmosphere satisfies ⟨n(R)⟩ ≈ 1 and δn(R) ≪ 1 [45]. In this work we numerically solve Eq.
This method models the atmospheric channel using multiple slabs with a phase screen located at the midpoint of each slab. Two free-space (vacuum) propagations with one random phase modulation in between are repeatedly performed for each slab until the beam reaches the receiver plane [47]. The split-step method has also been used to study the entanglement evolution of OAM-photon pairs in horizontal atmospheric channels, providing quantitative agreement with analytical results [19,20].

D. Generalized OAM-QKD protocol

QKD protocols can be described and implemented in both the prepare-and-measure (P&M) paradigm and the entanglement-based (EB) paradigm. Although most implementations of QKD are based on the P&M scheme, all P&M QKD protocols have their EB equivalents (note that EB QKD has also been demonstrated over the satellite-to-Earth channel, see e.g. [3]). Furthermore, the EB paradigm is usually adopted to simplify the security analysis. Throughout this work we adopt the EB paradigm for OAM-QKD. Here we briefly recall the procedures of a d-dimensional OAM-QKD protocol utilizing N_B (N_B ≥ 2) MUBs.

1. Alice first generates entangled photon pairs. For every pair of entangled photons Alice keeps one photon at her side and sends the other photon to Bob through a quantum channel.

2. For every photon pair, Alice and Bob randomly (and independently) choose one of the N_B MUBs and perform a d-outcome measurement on their corresponding photon, giving each of them a d-ary symbol.

3. Alice and Bob start the sifting process where they reveal the MUBs that they used for their photon measurements. Specifically, they generate a sifted key by only keeping the symbols from the photon pairs jointly measured in the same MUB.

4. In the parameter estimation process, Alice and Bob compare a small subset of their sifted data to estimate the average error rate Q.

5. With the knowledge of Q, the two parties then carry out subsequent processes, including reconciliation (which mainly includes error correction) and privacy amplification, to produce a final secret key of which Eve has no knowledge.

III. SYSTEM MODEL

A. System settings

Throughout this work we denote the satellite and the ground station as Alice and Bob, respectively. In this section, we describe the system settings for satellite-to-Earth OAM-QKD (as illustrated in Fig. 1(a)). The ground-station altitude is denoted as h_0, the satellite zenith angle at the ground station is denoted as θ_z, and the satellite altitude at θ_z = 0 is denoted as H. The channel distance L is given by L = (H − h_0)/cos θ_z. We denote the aperture radius at the ground station receiver as r_a. To perform OAM-QKD, Alice is equipped with an on-board SPDC source that generates entangled OAM-photon pairs. Both Alice and Bob are equipped with versatile OAM mode sorters that can randomly switch between all available MUBs and perform the corresponding d-outcome measurements.
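As an aside, the prime-dimension MUB construction of Section II B (needed for the d-outcome measurements in step 2 of the protocol) is easy to reproduce numerically. The sketch below is our own illustration and simply diagonalizes the Weyl operators Z and XZ^n:

```python
import numpy as np

def weyl_mubs(d):
    """For prime d, return d + 1 mutually unbiased bases as the eigenbases of
    the Weyl operators {Z, X Z^n | n = 0, ..., d-1}; columns are basis vectors."""
    theta = np.exp(2j * np.pi / d)
    Z = np.diag(theta ** np.arange(d))
    X = np.roll(np.eye(d), 1, axis=0)       # X|j> = |j + 1 mod d>
    bases = [np.eye(d, dtype=complex)]      # eigenbasis of Z: the standard basis
    for n in range(d):
        # X Z^n is normal with distinct eigenvalues for prime d, so the
        # eigenvectors returned by eig form a (numerically) orthonormal basis.
        _, vecs = np.linalg.eig(X @ np.linalg.matrix_power(Z, n))
        bases.append(vecs)
    return bases

# Unbiasedness check for d = 5: every cross-basis overlap satisfies
# |<u|v>|^2 = 1/d, as in Eq. (3).
bases = weyl_mubs(5)
assert np.allclose(np.abs(bases[0].conj().T @ bases[2]) ** 2, 1.0 / 5.0)
```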
The schematic diagram in Fig. 2 illustrates our deployment strategy for satellite-to-Earth OAM-QKD, in addition to all effects we consider. These include turbulence-induced crosstalk, loss (due to a finite-sized aperture), misalignment (due to imperfect beam tracking), and tomography noise (which leads to imperfect channel conjugation when classically entangled light is used as a probe to characterize the quantum channel).

Unless otherwise specified, the following assumptions are adopted throughout this work:

1. We assume that Alice and Bob are perfectly time-synchronized, and they will discard any event where the photon sent by Alice does not click any of Bob's detectors.

2. We assume that the OAM mode sorters used for measurement have a separation efficiency of unity and introduce no additional loss.

3. We restrict ourselves to the infinite key limit, therefore the sifting efficiency is set to 1. We also assume a reconciliation efficiency of 1.

4. In the security analysis we assume that Eve controls the quantum channel and performs a collective attack.

B. Satellite-to-Earth atmospheric channel

1. Turbulence characterization

The strength of the optical turbulence within a satellite-based atmospheric channel can be described by the structure parameter C_n^2(h) as a function of altitude h. C_n^2(h) can be described by the widely used Hufnagel-Valley (HV) model [45]

$$C_n^2(h) = 0.00594\left(\frac{v_{rms}}{27}\right)^2\left(10^{-5}h\right)^{10}\exp\!\left(-\frac{h}{1000}\right) + 2.7\times10^{-16}\exp\!\left(-\frac{h}{1500}\right) + A\exp\!\left(-\frac{h}{100}\right), \qquad (7)$$

where A is the ground-level (i.e. sea-level, h = 0) turbulence strength in m^{-2/3}. In the above equation, v_rms is the root-mean-square wind speed in m/s, which is given by

$$v_{rms} = \left[\frac{1}{15\times10^3}\int_{5\times10^3}^{2\times10^4} V^2(h)\,dh\right]^{1/2}, \qquad (8)$$

where V(h) is the altitude-dependent wind speed profile. In this paper we adopt the Bufton wind speed profile [45]

$$V(h) = V_g + 30\exp\!\left[-\left(\frac{h-9400}{4800}\right)^2\right], \qquad (9)$$

where V_g is the ground-level wind speed.

The effect of the atmospheric turbulence on a propagating beam is quantified by two parameters, namely the scintillation index σ_I^2 and the Fried parameter r_0. The scintillation index is the normalized variance of the intensity. For satellite-to-Earth channels under weak-to-strong turbulence, this parameter is given by [45]

$$\sigma_I^2 = \exp\!\left[\frac{0.49\,\sigma_R^2}{\left(1 + 1.11\,\sigma_R^{12/5}\right)^{7/6}} + \frac{0.51\,\sigma_R^2}{\left(1 + 0.69\,\sigma_R^{12/5}\right)^{5/6}}\right] - 1, \qquad (10)$$

with σ_R^2 being the Rytov variance,

$$\sigma_R^2 = 2.25\,k^{7/6}\sec^{11/6}(\theta_z)\int_{h_0}^{H} C_n^2(h)\,(h-h_0)^{5/6}\,dh. \qquad (11)$$

The Fried parameter characterizes the transverse spatial coherence of the wavefront phase at the receiver plane. For satellite-to-Earth channels, this parameter is given by [45]

$$r_0 = \left[0.423\,k^2\sec(\theta_z)\int_{h_0}^{H} C_n^2(h)\,dh\right]^{-3/5}. \qquad (12)$$

2. Channel modeling

To perform the split-step method we divide the satellite-to-Earth atmospheric channel into N_S slabs bounded by specific altitudes h_j with j ranging from 1 to N_S (note that h_0 is the ground-station altitude, and a larger j indicates a higher altitude). For the j-th (j ≥ 1) slab bounded by h_j and h_{j−1}, its thickness can be estimated as ΔL_j = (h_j − h_{j−1})/cos(θ_z) (note that Σ_j ΔL_j = L). In order to characterize the turbulence within each slab, both σ_I^2 and r_0 are evaluated locally (for the turbulent volume of their corresponding slab). We denote the scintillation index and the Fried parameter for the j-th slab as σ_{Ij}^2 and r_{0j}, respectively. To accurately model the atmospheric channel using multiple slabs with a phase screen located at the midpoint of each slab, we meet the two conditions described in [46] (i.e. σ_{Ij}^2 < 0.1 and σ_{Ij}^2 < 0.1σ_I^2) by setting N_S and h_j through a numerical search. A schematic illustration of our channel modeling (with N_S = 6) is provided in Fig. 1(b). In our simulations N_S ranges from 6 to 12 depending on specific settings.

After determining the widths of the atmospheric slabs, the realizations of the corresponding phase screens are generated using the Fast-Fourier-Transform (FFT)-based spectral domain algorithm [49]. This method involves the filtering of a complex Gaussian random field using the phase power spectral density (PSD) function of the atmospheric turbulence. In this paper, we adopt the modified von Karman model, giving the phase PSD function for the j-th slab

$$\Phi_j(f) = 0.023\, r_{0j}^{-5/3}\,\frac{\exp\left(-f^2/f_m^2\right)}{\left(f^2 + f_0^2\right)^{11/6}}, \qquad (13)$$

where f is the magnitude of the two-dimensional spatial frequency vector in the transverse plane in cycles/m, f_0 = 1/L_outer, and f_m = 0.9422/l_inner [48].
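To illustrate the channel model just described, the following self-contained Python sketch generates a modified von Karman phase screen via the FFT-based spectral method and performs a basic split-step propagation. It is a simplified stand-in for the paper's pipeline (the subharmonic augmentation, the N_S/h_j search, and the PROPER library are omitted; all parameters are placeholders):

```python
import numpy as np

def phase_screen(n, dx, r0, L_outer=5.0, l_inner=0.01, rng=None):
    """One realization of a turbulent phase screen (rad) on an n x n grid of
    spacing dx (m), by filtering complex Gaussian noise with the modified von
    Karman phase PSD of Eq. (13). Low-frequency subharmonics [55] are omitted."""
    rng = np.random.default_rng() if rng is None else rng
    fx = np.fft.fftfreq(n, d=dx)                    # spatial frequency (cycles/m)
    f2 = fx[None, :]**2 + fx[:, None]**2
    f0, fm = 1.0 / L_outer, 0.9422 / l_inner
    psd = 0.023 * r0**(-5.0 / 3.0) * np.exp(-f2 / fm**2) / (f2 + f0**2)**(11.0 / 6.0)
    psd[0, 0] = 0.0                                 # remove the piston term
    df = 1.0 / (n * dx)
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return np.real(np.fft.fft2(noise * np.sqrt(psd) * df))

def vacuum_step(field, dx, wavelength, dz):
    """FFT-based angular spectrum propagation over a vacuum distance dz."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    arg = 1.0 / wavelength**2 - fx[None, :]**2 - fx[:, None]**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * dz), 0.0)  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)

def split_step(field, dx, wavelength, slab_widths, slab_r0, rng=None):
    """Split-step channel: per slab, half a vacuum step, a phase screen at the
    slab midpoint, then the second half of the vacuum step."""
    for dL, r0 in zip(slab_widths, slab_r0):
        field = vacuum_step(field, dx, wavelength, dL / 2.0)
        field = field * np.exp(1j * phase_screen(field.shape[0], dx, r0, rng=rng))
        field = vacuum_step(field, dx, wavelength, dL / 2.0)
    return field
```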
For the free-space propagation, we utilize the FFT-based angular spectrum method (for details of this method one can refer to e.g. [48,50]). In this study we utilize a physical optics propagation library named PROPER [51] to implement this method.

3. Quantum state evolution

To illustrate the undesirable decoherence effects caused by the atmospheric turbulence, we formally describe the evolution of an OAM eigenstate within a satellite-to-Earth channel. Assume that Alice sends a single-photon OAM eigenstate |l_t⟩ to Bob's ground station through an atmospheric channel. Under one realization of the atmospheric channel, the evolution of such a single-photon OAM eigenstate can be described by a unitary operator U_turb(L) [21]. Denoting the received state as |ψ_{l_t}⟩, we have

$$|\psi_{l_t}\rangle = U_{turb}(L)\,|l_t\rangle. \qquad (14)$$

The received single-photon state can be expanded in the OAM basis as

$$|\psi_{l_t}\rangle = \sum_{l} c_{l,l_t}(L)\,|l\rangle, \qquad (15)$$

where $c_{l,l_t}(L) = \langle l|U_{turb}(L)|l_t\rangle$. In this work, the evolution of a single-photon OAM eigenstate is simulated by the atmospheric propagation of the corresponding classical LG beam via the split-step method. In Fig. 3 we plot the intensity and phase profiles of an LG_{03} beam after vacuum propagation (i.e. propagation without atmospheric turbulence) and one realization of atmospheric propagation. After a vacuum propagation, we have $c_{l,l_t} = \delta_{l,l_t}$ due to the orthogonality of OAM eigenstates. After atmospheric propagation, however, the turbulence-induced distortions lead to crosstalk. At the receiver, |ψ_{l_t}⟩ is generally a superposition of OAM eigenstates, and thus it is no longer orthogonal to any OAM eigenstate. The resulting crosstalk causes entanglement decay and thus degrades the performance of OAM-QKD.
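Numerically, the crosstalk amplitudes c_{l,l_t} of Eq. (15) are simply discrete overlap integrals between the propagated field and reference OAM modes at the receiver plane. A short sketch (our illustration; the reference modes could be built with the hypothetical lg_mode helper from the earlier sketch, evaluated at z = L):

```python
import numpy as np

def crosstalk_amplitudes(field, dx, ref_modes):
    """c_{l, l_t} = <l | psi_{l_t}>: overlaps between a received complex field
    and reference OAM modes sampled on the same grid (cf. Eq. (15)).
    ref_modes: dict mapping OAM number l to the reference mode array."""
    c = {}
    for l, ref in ref_modes.items():
        ref = ref / np.sqrt(np.sum(np.abs(ref)**2) * dx**2)  # unit norm on grid
        c[l] = np.sum(np.conj(ref) * field) * dx**2
    return c

# The OAM spectrum |c_l|^2 then quantifies the turbulence-induced crosstalk;
# for a vacuum propagation it collapses to a Kronecker delta at l = l_t.
```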
C. OAM-QKD over the satellite-to-Earth channel

Now let us analyze the performance of the OAM-QKD protocols introduced in Section II D over the satellite-to-Earth channel. Specifically, we are interested in the achievable QKD performance over the satellite-to-Earth channel. The QKD performance is quantified by the secret key rate K in bits per sent photon, and throughout this work we use the unit bits per photon for short. For a d-dimensional OAM-QKD protocol, Alice generates OAM-photon pairs, each pair being in the maximally entangled state

$$|\Phi_0\rangle = \frac{1}{\sqrt{d}}\sum_{l\in H_d} |l\rangle_A |l\rangle_B, \qquad (16)$$

where H_d is a d-dimensional encoding subspace. From each pair one photon is sent to Bob through a satellite-to-Earth quantum channel. At the output, the quantum state shared between Alice and Bob before any measurement is given by

$$|\Phi_{turb}\rangle = \left(\mathbb{1}\otimes U_{turb}(L)\right)|\Phi_0\rangle \in H_d \otimes H_\infty, \qquad (17)$$

where 1 denotes an identity operator acting on Alice's photon, and H_∞ denotes an infinite-dimensional Hilbert space. Although the initial state |Φ_0⟩ can be considered as a finite-dimensional state living in the H_d ⊗ H_d subspace, due to the crosstalk it spreads over the entire infinite-dimensional Hilbert space. Since a practical system can only utilize a finite-dimensional encoding subspace, a necessary procedure is to project the output state |Φ_turb⟩ onto the original H_d ⊗ H_d subspace. This procedure is realized by a post-selection at Bob's side, giving a post-selected (and un-normalized) state

$$|\Phi_{ps}\rangle = \left(\mathbb{1}\otimes O\right)|\Phi_{turb}\rangle, \qquad (18)$$

where O is the filtering operator acting on Bob's photon. Since Bob has no information on U_turb(L), his filtering operator O is equal to $\Pi_d = \sum_{l_p\in H_d}|l_p\rangle\langle l_p|$. By setting O = Π_d, the post-selected state in Eq. (18) can be explicitly given as

$$|\Phi_{ps}\rangle = \frac{1}{\sqrt{d}}\sum_{l\in H_d} |l\rangle_A \otimes \Pi_d\,U_{turb}(L)\,|l\rangle_B \qquad (19)$$

(note that |Φ_ps⟩ is not normalized). In fact, the atmospheric propagation and the post-selection together form a completely positive (and non-trace-preserving) map Π_d U_turb(L). It is obvious that the post-selection results in a loss of photons. However, this operation will not give Eve any information, since the lost photons are simply discarded by Alice and Bob and will not be used in key generation.

Since we are not interested in any specific realization of the atmospheric channel, we perform an ensemble average of |Φ_ps⟩ over different channel realizations. After averaging over channel realizations and performing renormalization, the averaged state shared between Alice and Bob can be given as a mixed state described by

$$\rho_{AB} = \frac{\left\langle |\Phi_{ps}\rangle\langle\Phi_{ps}| \right\rangle}{T}, \qquad (20)$$

where ⟨···⟩ denotes an ensemble average, and $T = \mathrm{tr}\left(\left\langle|\Phi_{ps}\rangle\langle\Phi_{ps}|\right\rangle\right)$ is the trace required for renormalization. Note that T quantifies the photon survival fraction after post-selection.

Now we briefly recall how security analysis is performed and how the key rate is calculated for our OAM-QKD protocols (for a complete and rigorous security analysis, one can refer to [10,11]). Utilizing the photons that survive the post-selection, the secret key rate K_1 can be expressed as

$$K_1 = I(A:B) - \chi(A:E), \qquad (21)$$

where I(A:B) is the classical mutual information between Alice and Bob, and χ(A:E) is the quantum information between Alice and Eve. (Note that here the key rate is per photon actually used in the key generation. For every photon sent, it can be 'lost' either by not hitting the receiver or by not being post-selected via the projection operation. The parameter T states the fraction of sent photons that survive both these loss events. Effectively, the finite-sized receiver aperture is absorbed into the post-selection process.) Considering the fact that Eve holds a purification of ρ_AB, χ(A:E) can be explicitly given as

$$\chi(A:E) = S(\rho_{AB}) - \sum_{a} p(a)\,S(\rho_{B|a}), \qquad (22)$$

where S(·) denotes the von Neumann entropy, a = 0, ..., d−1 denotes Alice's measurement outcome, p(a) denotes the probability distribution of a, and ρ_{B|a} is the state of Bob's photon conditioned on a. In the security analysis it is assumed that all errors are caused by Eve's eavesdropping attempts. The average error rate Q, given in Eq. (23), is the probability that Alice's and Bob's outcomes disagree when their photons are jointly measured in the same MUB, averaged over the MUBs used. Starting from Eqs. (21) and (22), K_1 is found to be a function of Q [10,11]; for a d-dimensional QKD protocol utilizing all (d + 1) MUBs, the corresponding closed-form expression for K_1(Q), Eq. (24), is given in [10,11]. Recalling a non-unity photon survival fraction T, the achievable secret key rate K is given by

$$K = T\,K_1. \qquad (25)$$

IV. NUMERICAL EVALUATION OF QKD PERFORMANCE

In this section, we numerically evaluate the performance of the satellite-to-Earth OAM-QKD protocols analyzed in Section III C. We carry out Monte Carlo simulations to numerically evaluate the secret key rate K. First, we generate 4000 independent realizations of the satellite-to-Earth channel. For each channel realization we perform a series of atmospheric propagations using the split-step method to obtain a realization of |Φ_ps⟩ (see Eq. (18)). Afterwards, realizations of |Φ_ps⟩ are used to obtain T and ρ_AB (see Eq. (20)). Then Q is evaluated from ρ_AB (see Eq. (23)), and K_1 is then evaluated from Q (see Eq. (24)). Finally, K can be evaluated using K_1 and T (see Eq. (25)).

A. General settings

We restrict ourselves to the case of a low-Earth-orbit (LEO) satellite with a maximum satellite altitude H = 500 km. We consider two zenith angles, θ_z = 0° and θ_z = 45°, giving a maximum channel distance of L ~ 500 km and L ~ 700 km, respectively. Note that a higher H leads to worse performance. This is because the beam radius on atmospheric entry increases with H (due to diffraction), and this in turn results in increased distortion on the beam. A larger θ_z also leads to worse performance, since the photon travels a longer distance within the turbulent atmosphere.
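The Monte Carlo evaluation described at the start of this section maps directly onto code. The sketch below (our illustration, not the authors' implementation) estimates T and ρ_AB from sampled post-selected states and evaluates K = T·[I(A:B) − χ(A:E)] per Eqs. (20)-(22) and (25); for brevity it computes I(A:B) from the standard-basis outcomes only, whereas the actual protocol averages over all MUBs and uses the closed-form K_1(Q) of [10,11]:

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def secret_key_rate(phi_ps_samples, d):
    """K = T * (I(A:B) - chi(A:E)) from un-normalized post-selected state
    vectors of length d*d (Alice-index major, i.e. component index a*d + b)."""
    avg_outer = np.zeros((d * d, d * d), dtype=complex)
    T = 0.0
    for v in phi_ps_samples:
        avg_outer += np.outer(v, v.conj())
        T += np.vdot(v, v).real
    avg_outer /= len(phi_ps_samples)
    T /= len(phi_ps_samples)                  # survival fraction, cf. Eq. (20)
    rho = avg_outer / np.trace(avg_outer).real
    # Classical mutual information for joint standard-basis measurements.
    p_ab = np.real(np.diag(rho)).reshape(d, d)
    p_a, p_b = p_ab.sum(axis=1), p_ab.sum(axis=0)
    mask = p_ab > 0
    i_ab = np.sum(p_ab[mask] * np.log2(p_ab[mask] / np.outer(p_a, p_b)[mask]))
    # Holevo quantity chi(A:E) = S(rho_AB) - sum_a p(a) S(rho_B|a), Eq. (22).
    chi = vn_entropy(rho)
    rho4 = rho.reshape(d, d, d, d)            # indices: a, b, a', b'
    for a in range(d):
        block = rho4[a, :, a, :]              # un-normalized rho_B|a
        pa = np.trace(block).real
        if pa > 1e-12:
            chi -= pa * vn_entropy(block / pa)
    return T * max(0.0, i_ab - chi)           # cf. Eq. (25)
```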
Unless otherwise stated, when we refer to our results we will mean for all considered H values (i.e. from 200 km to 500 km) and for all considered θ_z values (i.e. 0° and 45°). Also, throughout this work QKD performances are compared at the same satellite altitudes under the same zenith angles. For the atmospheric parameters, we set A = 9.6 × 10^{-14} m^{-2/3}, which accords with a realistic setting adopted in [52]. We set V_g = 3 m/s, giving a value of v_rms = 21 m/s. We set L_outer = 5 m and l_inner = 1 cm for the atmospheric turbulence [45,53]. For the optical parameters, we set λ = 1064 nm in accord with existing entanglement sources (e.g. [54]), and set w_0 to 15 cm.

The loss of signal at the receiver will be a function of several system parameters as well as the atmospheric conditions. The beam width at the receiver is critical in determining the loss of signal, and is largely dependent on system parameters such as the transmitter aperture (which sets the beam waist w_0) and the distance between the satellite and the ground station. For a given optical wavelength, a given beam width at the receiver, and a given channel distance, the required transmitter beam waist can be easily determined. However, in our calculations we simply set the beam waist (in effect, the transmitter aperture). To orient ourselves, we note that the Micius satellite (which orbits at an altitude of about 500 km), with an aperture size of 0.3 m, provided a beam width of 12 m at ground level at a channel distance of 1200 km [2,3]. For a satellite altitude of 500 km, the smallest beam width at ground level we will have in our calculations will be 2.2 m, corresponding to a 1 dB loss at a receiver aperture of 1 m radius (for a sea-level receiver and a zenith angle of 0°, this corresponds to our w_0 = 15 cm).

We perform all simulations using a numerical grid of 2048 × 2048 points with a spatial resolution of 5 mm. (Note that the spatial resolution, i.e. the grid spacing in the transverse plane, could be adaptively varied along the propagation path to minimize numerical errors in FFT-based wave propagation methods [48]. However, in this work we fix the spatial resolution throughout the simulation. To validate this we perform a vacuum propagation over the length of the channel and compare the resulting simulated beam profile with an independently-derived analytical profile at the same channel distance. Such a test is performed for all considered channel distances and for LG beams with all considered OAM numbers, and we find that all numerical errors are negligible. We also note that, when no phase modulation is set at the phase screens, our simulation results give U_turb(L) = 1.) In generating the random phase screens using the FFT-based method, 3 orders of subharmonics are added using the method introduced in [55] to accurately represent the low-spatial-frequency components contributed by large-scale turbulent eddies.

B. Ideal circumstances

We first explore a rather ideal circumstance for the receiver. Adopting all settings of Section IV A, we initially set the ground-station altitude to h_0 = 3000 m to avoid the strong atmospheric turbulence near sea level. We also first adopt a large receiver aperture of r_a = 4 m, thus providing a zero-loss scenario. First we investigate the performances of 2-dimensional and 3-dimensional OAM-QKD for different l_0 values under such an ideal circumstance. For 2-dimensional OAM-QKD, we find that a large l_0 value generally leads to a higher secret key rate. Under θ_z = 0°, positive key rates of 0.03, 0.05, 0.06 bits/photon can be achieved at H = 500 km for l_0 = 2, 3, 4, respectively. Under θ_z = 45°, we observe a reduction in the secret key rate ranging from 70% to 100%. Specifically, only l_0 = 4 leads to a positive key rate of 10^{-3} bits/photon at H = 500 km. For 3-dimensional OAM-QKD, we find that a larger l_0 value does not always lead to a higher secret key rate. Under θ_z = 0°, despite the observation that l_0 = 1 does not lead to any positive key rate, there is no significant correlation between the achievable secret key rate and l_0 for l_0 = 2, 3, 4. Moreover, no positive key rate can be achieved at H = 500 km for any considered l_0 value.
Under θ_z = 45°, we find that no positive key rate can be achieved by 3-dimensional OAM-QKD. By comparing the QKD performances, we find that the performance of 3-dimensional OAM-QKD is overall inferior to the performance of 2-dimensional OAM-QKD over the satellite-to-Earth channel for a given l_0 value. We then compare the performances of OAM-QKD of dimensions ranging from 2 to 9 under θ_z = 0°, and find that the QKD performance decreases as the dimension increases. Specifically, we find that no performance advantage can be achieved, by OAM-QKD of any dimension, against 2-dimensional OAM-QKD at H > 200 km. Furthermore, we find that OAM-QKD of dimensions larger than 5 achieves no positive key rate at all considered satellite altitudes and under all considered zenith angles. No positive key rate can be achieved by OAM-QKD of dimensions larger than 2 under θ_z = 45°.

It is widely anticipated that the use of higher-dimensional QKD can improve noise resistance and lead to a higher secret key rate. However, all the observations reported in this subsection clearly indicate that an increased dimension cannot improve the performance of OAM-QKD over the satellite-to-Earth channel, even under the ideal circumstance. This finding can be explained by the fact that the maximally OAM-entangled state of a higher dimension is less robust against turbulence (note that a similar phenomenon has been observed in e.g. [21]). This will lead to an increased error rate, which can be large enough to nullify the advantage of higher-dimensional QKD. Therefore, a lower secret key rate is achieved in spite of a higher photon survival fraction (due to an enlarged encoding subspace). In other words, the theoretical capacity advantage provided by increasing the dimension in OAM-QKD is negated by the atmospheric turbulence over the satellite-to-Earth channel.

C. Realistic circumstances

Now we extend our scope to more realistic circumstances. Specifically, we discuss the impact of loss, and of a lower ground-station (receiver) altitude, on the feasibility of satellite-to-Earth OAM-QKD.

1. Loss

The main source of loss in a satellite-to-Earth channel is diffraction loss. Diffraction loss is state-dependent in OAM-QKD since OAM eigenstates with different OAM numbers experience different amounts of diffraction [56]. In order to investigate the impact of loss on the feasibility of satellite-to-Earth OAM-QKD, we adopt all settings of Section IV B except setting the radius of the receiver aperture to r_a = 1 m. At H = 500 km and under θ_z = 0°, setting r_a = 1 m gives losses of 1 dB, 3.4 dB, 6.9 dB, 11.3 dB, and 16.7 dB to OAM eigenstates with OAM numbers 0, 1, 2, 3, 4, respectively. We then re-evaluate the performances of 2-dimensional and higher-dimensional OAM-QKD.
In Fig. 4 we compare the performances of 2-dimensional OAM-QKD, achieved with r_a = 1 m and r_a = 4 m (zero loss), under θ_z = 0° and h_0 = 3000 m. From this figure, we see that the loss degrades the performance of 2-dimensional OAM-QKD, and such a performance degradation is more significant for a larger l_0 value. Higher-dimensional OAM-QKD is more sensitive to loss. For 3-dimensional OAM-QKD, after setting r_a = 1 m we find that no positive key rate can be achieved at H > 300 km under θ_z = 0°. For OAM-QKD of dimensions larger than 3, setting r_a = 1 m we find that no positive key rate can be achieved at H > 250 km. Comparing the performances of OAM-QKD of different dimensions under loss, we find that 2-dimensional OAM-QKD is more robust against loss compared to higher-dimensional OAM-QKD. Indeed, the loss has a greater impact on higher-dimensional OAM-QKD due to its state-dependent nature (see related discussions in e.g. [57]).

2. Receiver altitude

Lowering the ground-station altitude increases the turbulence strength, and intuitively this can degrade QKD performance. To see whether satellite-to-Earth OAM-QKD is feasible under lower ground-station altitudes, we adopt all settings of Section IV B except for lower h_0 values. In Fig. 5 we compare the performances of 2-dimensional OAM-QKD, under θ_z = 0°, for different ground-station altitudes h_0. From this figure, we clearly see that the use of a lower ground-station altitude degrades the performance of 2-dimensional OAM-QKD at a given satellite altitude. We see that a positive key rate can still be achieved for l_0 = 4 at H = 500 km under h_0 = 1500 m. Under θ_z = 45° we find that no positive key rate can be achieved at H > 300 km under h_0 = 2000 m. OAM-QKD of a higher dimension is more sensitive to h_0. For 3-dimensional OAM-QKD, we find that a larger l_0 value is not more robust against performance degradation. We also find that, for h_0 = 2000 m, no positive key rate can be achieved by 3-dimensional OAM-QKD at H > 250 km even under θ_z = 0°. For OAM-QKD of dimensions larger than 3, for h_0 = 2000 m we find that no positive key rate can be achieved at H > 200 km even under θ_z = 0°.

3. Sea-level receiver with reasonably-sized aperture

We then adopt all settings of Section IV A and jointly set r_a = 1 m and h_0 = 0 m to reflect a more realistic scenario where a sea-level receiver with a reasonably-sized aperture is used. Unfortunately, we find that no positive key rate can be achieved by OAM-QKD of any dimension, even under θ_z = 0°.

V. FEASIBILITY THROUGH CHANNEL INFORMATION

In all previous sections we have assumed that channel knowledge is unavailable. It is intuitive to think that channel information can be used to improve QKD performance. Indeed, in quantum communications a natural paradigm is to characterize the quantum channel through quantum process tomography (QPT) [58,59], and cancel out the turbulence-induced effects accordingly [60]. However, QPT is performed at the single-photon level, making the real-time characterization of quantum channels a challenging task. Recently it has been discovered that the state evolution of classically entangled DoFs is equivalent to the state evolution of quantum entangled photons [60]. Such an equivalence allows for the use of non-separable (i.e. DoF-entangled) states of classical light to characterize the quantum channel (e.g. [33,60-62]).
By performing a state tomography on the output classical light, the quantum channel can be readily characterized in real time (for a comprehensive tutorial, see e.g. [63]). In this section we explore the use of quantum channel information to improve the practical feasibility of satellite-to-Earth OAM-QKD. Inspired by [33,60], we utilize the quantum channel information acquired through a real-time quantum channel characterization utilizing non-separable states of classical light, and apply a quantum channel conjugation at the ground station. By quantum channel information we mean the Kraus operator of the channel, and by quantum channel conjugation we mean the application of a quantum conjugate filter that cancels out the turbulence-induced crosstalk.

A. General method

In this section, we demonstrate how a conjugate filter can be found, and we also analyze the impact of using that filter on OAM-QKD. The non-separable states of classical light we use for channel characterization are given in the general form [61]

$$|\Phi_0^C\rangle = \frac{1}{\sqrt{d}}\sum_{m=1}^{d} |D_m^{(1)}\rangle\,|D_m^{(2)}\rangle, \qquad (26)$$

where the first DoF, D^(1), is a DoF that is not affected by the turbulent atmosphere (e.g. polarization or wavelength), and the second DoF, D^(2), is the OAM DoF. We denote the Hilbert spaces spanned by {|D_m^(1)⟩} and {|D_m^(2)⟩} as H^(1) and H^(2), respectively. To faithfully characterize the quantum channel under study, it is required that dim(H^(1)) = dim(H_d) and H^(2) = H_d.

In an OAM-QKD protocol Alice prepares OAM-photon pairs in the maximally entangled state |Φ_0⟩ described by Eq. (16). To help characterize the quantum channel Alice also generates classical light in the corresponding non-separable state |Φ_0^C⟩ described by Eq. (26). While sending one photon of each entangled photon pair to Bob, Alice simultaneously sends the classical light through the same channel. (We assume that the classical light is made orthogonal to the quantum signal using polarization or wavelength multiplexing techniques. For example, if the polarization (wavelength) DoF is used to construct the non-separable state in Eq. (26), the wavelength (polarization) DoF should be used for multiplexing. We note that the turbulence effect on a propagating beam is wavelength-dependent, and this can potentially cause errors in quantum channel characterization. For simplicity, we assume that the wavelengths used for multiplexing and for constructing the non-separable state are chosen to be close enough to the wavelength of the quantum signal. Under such an assumption, the wavelength-dependent nature of the turbulence effect becomes negligible (see discussions in e.g. [61]).) Since the state evolution of |Φ_0^C⟩ is equivalent to the state evolution of |Φ_0⟩, under a specific channel realization Bob can characterize the one-sided OAM quantum channel in the encoding subspace H_d by performing a state tomography on the received classical light. Specifically, Bob finds the Kraus operator M that satisfies

$$\left(\mathbb{1}\otimes M\right)|\Phi_0^C\rangle = \left(\mathbb{1}\otimes\Pi_d\right)|\Phi_{turb}^C\rangle, \qquad (27)$$

where $|\Phi_{turb}^C\rangle = (\mathbb{1}\otimes U_{turb}(L))|\Phi_0^C\rangle$ denotes the state of the received classical light at Bob's side. Note that the right-hand side of Eq. (27) is known to Bob via his state tomography. The Kraus operator M can be expressed in its polar decomposition as

$$M = U|M|, \qquad (28)$$

where U is a unitary operator and $|M| = \sqrt{M^\dagger M}$ is a positive Hermitian operator. |M| can be expressed in its spectral decomposition

$$|M| = \sum_{j} \gamma_j\,|v_j\rangle\langle v_j|, \qquad (29)$$

where γ_j and |v_j⟩ denote the eigenvalues of |M| and their corresponding eigenvectors, respectively. Note that |v_j⟩ can be expressed as superpositions of the standard basis elements. Considering the fact that the γ_j are smaller than 1, the conjugate filter cannot be directly constructed as M^{-1}. This is because |M|^{-1} has eigenvalues larger than 1 and thus cannot be physically implemented due to a violation of the no-cloning theorem. (One can show that directly applying M^{-1} leads to a noiseless amplification. Such an operation is not allowed in a deterministic fashion by the no-cloning theorem.) To construct a conjugate filter that does not violate the no-cloning theorem, inspired by the idea in [33] we consider a conjugate filter M̂ that achieves M̂M ∝ 1.
Specifically, we construct the conjugate filter as

$$\hat{M} = \gamma_{min}\,|M|^{-1}\,U^{\dagger}, \qquad (30)$$

where γ_min = min{γ_j, j = 1, ..., d}. Note that M̂ in Eq. (30) is a local Procrustean filter which can be physically implemented (see experimental demonstrations of OAM Procrustean filters in e.g. [15,64,65]). Under every channel realization, Bob constructs the Kraus operator M, constructs the conjugate filter M̂, and applies M̂ on his photon. Therefore, Alice and Bob share a post-selected state of the form

$$|\Phi_{ps}\rangle = \left(\mathbb{1}\otimes\hat{M}M\right)|\Phi_0\rangle = \gamma_{min}\,|\Phi_0\rangle. \qquad (31)$$

Note that |Φ_ps⟩ is not normalized. From Eq. (31) it can be seen that the quantum channel conjugation results in a probabilistic entanglement distillation. After averaging over channel realizations and performing renormalization, ρ_AB is now given by

$$\rho_{AB} = \frac{\left\langle|\Phi_{ps}\rangle\langle\Phi_{ps}|\right\rangle}{\langle T\rangle} = |\Phi_0\rangle\langle\Phi_0|, \qquad (32)$$

where T = γ_min^2 is the photon survival fraction when the quantum channel conjugation is applied. Note that T can be interpreted as the probability of success of the quantum channel conjugation (which can be as low as 10^{-4} in extreme cases). Following the descriptions in Section III C, ρ_AB and T can then be used to evaluate the secret key rate K. It can be inferred from Eq. (32) that Q = 0 is achieved by the quantum channel conjugation at the cost of a low photon survival fraction.

Here we summarize the assumptions made in this section regarding the quantum channel characterization and the quantum channel conjugation. For simplicity, we assume that the channel Kraus operator M is constructed without error (i.e. perfect state tomography on classical light), and that the exact conjugate filter M̂ is applied to Bob's photon without error. Furthermore, these operations are assumed to be performed in real time. Such an assumption indicates that the time taken to perform a state tomography on the classical light is less than the coherence time of the atmospheric channel (typically on the order of milliseconds [45]). Although no experiment has been demonstrated so far to indicate how fast such a state tomography can be done, in principle all the projective measurements required by such a state tomography can be done simultaneously with a high signal-to-noise ratio. In addition, a recent work [66] has demonstrated that a complete state tomography can be done in one shot if the classical light is in a pure state (note that this is valid under every specific channel realization). We further notice that the paradigm adopted here resembles the concept of Adaptive Optics (AO), where a servo loop system tracks (with a wavefront sensor) and corrects (with a deformable mirror) the turbulence effect in a real-time fashion. Given the fact that current AO systems have no significant trouble keeping up with the temporal evolution of turbulence, we believe that the quantum channel characterization and the following quantum channel conjugation can also be performed in real time.
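The filter construction of Eqs. (28)-(30) amounts to a polar decomposition, conveniently obtained from an SVD. A minimal sketch (our illustration) is:

```python
import numpy as np

def conjugate_filter(M):
    """Procrustean conjugate filter M_hat = gamma_min |M|^{-1} U^dagger, where
    M = U |M| is the polar decomposition (Eqs. (28)-(30)). By construction
    M_hat @ M = gamma_min * identity, so the crosstalk is fully canceled."""
    W, s, Vh = np.linalg.svd(M)              # M = W diag(s) Vh
    U = W @ Vh                               # unitary polar factor
    absM = Vh.conj().T @ np.diag(s) @ Vh     # |M| = sqrt(M^dagger M)
    gamma_min = s.min()
    M_hat = gamma_min * np.linalg.inv(absM) @ U.conj().T
    return M_hat, gamma_min**2               # filter and survival fraction T

# Sanity check on a random stand-in for a measured d = 3 Kraus operator:
rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
M = M / np.linalg.norm(M, 2)                # scale so all gamma_j <= 1
M_hat, T = conjugate_filter(M)
assert np.allclose(M_hat @ M, np.sqrt(T) * np.eye(3))
```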
B. QKD performance with quantum channel conjugation

To numerically investigate the impact of the quantum channel conjugation on QKD performance, we adopt the settings of Sections IV B and IV C and re-evaluate the performances of satellite-to-Earth OAM-QKD of different dimensions. In 2-dimensional OAM-QKD, we assume that the vector vortex beam is used for quantum channel characterization (for a detailed review on vector vortex beams one can refer to e.g. [67]). Specifically, Eq. (26) is explicitly given by

$$|\Phi_0^C\rangle = \frac{1}{\sqrt{2}}\left(|R\rangle|l_0\rangle + |L\rangle|-l_0\rangle\right), \qquad (33)$$

where |R⟩, |L⟩ denote right and left circular polarization states, respectively. In 3-dimensional OAM-QKD, the vector vortex beam cannot be used due to the constraint of the 2-dimensional Hilbert space imposed by the polarization DoF. It has been proposed in [61] that the wavelength DoF of light is a promising candidate to replace the polarization DoF of light in |Φ_0^C⟩. Specifically, Eq. (26) is explicitly given by

$$|\Phi_0^C\rangle = \frac{1}{\sqrt{d}}\sum_{m=1}^{d} |\lambda_m\rangle\,|l_m\rangle, \qquad (34)$$

where λ_m denote different wavelengths. Although no experiment has been demonstrated so far, the use of classical light in a state described by Eq. (34) is theoretically feasible [61]. Since there is no fundamental limitation on dimension if the wavelength DoF is adopted, we also use this paradigm for quantum channel characterization in OAM-QKD of higher (d > 3) dimensions.

First, we compare the QKD performances achieved with and without the quantum channel conjugation under the ideal circumstances of Section IV B. We find that, with the help of the quantum channel conjugation, positive (and improved) secret key rates can be achieved by OAM-QKD of all considered dimensions at all considered satellite altitudes under all considered zenith angles. We also find that the use of smaller OAM numbers leads to a higher secret key rate. For 2-dimensional and 3-dimensional OAM-QKD, this means a smaller l_0 value leads to better performance. For OAM-QKD of dimensions larger than 3, this means using OAM numbers as small as possible to construct the encoding subspace leads to better performance. Comparing the performances achieved by OAM-QKD of different dimensions, we find that an increased dimension can improve the performance of OAM-QKD over the satellite-to-Earth channel when the quantum channel conjugation is applied. Specifically, we find that 5-dimensional OAM-QKD achieves the highest performance at H ≥ 300 km under θ_z = 0°. Under θ_z = 45°, 3-dimensional OAM-QKD achieves the highest performance at H ≥ 250 km.

Then, we adopt the settings in Section IV C 3 and evaluate the performance of OAM-QKD achieved with the quantum channel conjugation under the realistic circumstance where a sea-level (i.e. h_0 = 0 m) ground station and a reasonably-sized (i.e. r_a = 1 m) receiver aperture is used. We find that, with the help of the quantum channel conjugation, positive secret key rates can be achieved by OAM-QKD of all considered dimensions. Such an observation not only holds under θ_z = 0°, but also under θ_z = 45° (where a higher loss and a more severe turbulence effect are anticipated). Specifically, in Fig. 6 we present the QKD performances achieved with the quantum channel conjugation under θ_z = 45°. From this figure we can see that OAM-QKD of dimension 3 outperforms OAM-QKD of all other considered dimensions. Finally, we move to a better circumstance where a high-altitude (h_0 = 3000 m) ground station with a reasonably-sized (i.e. r_a = 1 m) receiver aperture is used, and we evaluate the performance of OAM-QKD achieved with the quantum channel conjugation.
In Fig. 7 we plot the resulting QKD performances under θ_z = 45°. We again find that the quantum channel conjugation leads to positive secret key rates for all dimensions at all considered satellite altitudes, and that 3-dimensional OAM-QKD achieves the highest QKD performance. Comparing the results in Fig. 7 and Fig. 6, we clearly see the significant performance improvements provided by a higher ground-station altitude.

In summary, the quantum channel conjugation leads to positive (and improved) secret key rates at all considered satellite altitudes under all considered zenith angles, even under loss and a low ground-station altitude of 0 m. The quantum channel conjugation also enables the theoretically-predicted secret key rate advantage provided by an increased dimension in OAM-QKD over the satellite-to-Earth channel.

C. Additional noise contributions

We note that in our calculations we have assumed perfect tomography implemented in real time. Of course, in practice this perfect outcome can never be realized. The accuracy and timescale for implementation of any tomography are a function of the specific measurements pursued and the number of signals analyzed, e.g. [68]. However, given that the coherence timescale of the channel is of order a millisecond, and that we are using classical light as the probe (in effect, no limit on sample size), it could be anticipated that enough signals could be analyzed in real time, providing infidelities between the true and reconstructed quantum states of less than 5% [68]. The presence of tomography noise will manifest itself in our key rate calculations through the design of a conjugate filter targeted at a different (erroneous) state, which ultimately leads to the produced state possessing less than maximal entanglement. The error rate Q therefore becomes non-zero, which in turn impacts our final key rate (see Eq. (24)).

We have also assumed that our channel noise is entirely a consequence of phase perturbations and loss (the latter leading to vacuum contributions to the state). Although beam misalignment caused by turbulence-induced beam wander is negligible in the downlink from satellite to Earth, direction tracking errors in the transmitter and/or receiver may also cause misalignment (recall that the satellites are in low orbit, moving across the sky on timescales of minutes). The presence of beam misalignment will lead to additional crosstalk in the received state, which manifests itself in our key rate equations through a smaller survival fraction in the measurement process (see Eq. (25)).

In Fig. 8 we illustrate the impact on our final key rate as a function of the additional noise terms discussed above. Here, the performance of OAM-QKD is shown for a dimension of 3, with the noisy channel conjugation and misalignment in the beam applied. (The effect of misalignment is modeled as a deterministic misalignment operator acting on Bob's photon before the quantum channel conjugation. In other words, the magnitude (ranging from 0 m to 0.3 m) and direction (fixed at +45°) of misalignment are constant under all channel realizations. The effect of a noisy channel conjugation is modeled as a deterministic depolarizing channel acting on Bob's photon after a perfect quantum channel conjugation, and the infidelity of such a depolarizing channel is used to quantify the channel conjugation error. We recognize that our modeling of additional noise terms will not be an exact match to the real-world noise contributions.) The settings are for a satellite altitude of 500 km, and a sea-level ground station with r_a = 1 m under θ_z = 45°. The considered 3-dimensional OAM-QKD protocol utilizes the encoding subspace H_3 = span{|−1⟩, |0⟩, |1⟩}. The losses are 2.7 dB and 7.4 dB to OAM eigenstates with OAM numbers 0 and 1, respectively. We can see that non-zero key rates can be found for a wide range of noise conditions. Beyond a misalignment of 0.225 m or an infidelity of 18%, the key rate rapidly falls to less than 10^{-6} bits/photon. The region of non-zero key rates is at noise levels within current experimental reach.
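To make the noise modeling concrete, the sketch below (our own construction; in particular, the mapping from state infidelity to depolarizing weight is our assumption, not the paper's) applies the depolarizing model described above to an ideally conjugated state and evaluates the resulting standard-basis disagreement probability:

```python
import numpy as np

def depolarize_bob(rho, d, infidelity):
    """Noisy channel conjugation modeled as a deterministic depolarizing
    channel on Bob's photon: rho -> (1-p) rho + p (rho_A x I/d). The weight p
    is chosen (an assumption) so that a maximally entangled input acquires
    the stated state infidelity."""
    p = infidelity * d**2 / (d**2 - 1)
    rho4 = rho.reshape(d, d, d, d)                 # indices: a, b, a', b'
    rho_A = np.einsum('abcb->ac', rho4)            # partial trace over Bob
    mixed = np.einsum('ac,bd->abcd', rho_A, np.eye(d) / d).reshape(d * d, d * d)
    return (1.0 - p) * rho + p * mixed

def qber_standard_basis(rho, d):
    """Disagreement probability when both parties measure in the OAM basis
    (one contribution to the MUB-averaged Q of Eq. (23))."""
    rho4 = rho.reshape(d, d, d, d)
    return 1.0 - sum(np.real(rho4[l, l, l, l]) for l in range(d))

# Example: a perfectly conjugated 3-dimensional state with 5% infidelity.
d = 3
phi0 = np.eye(d).reshape(-1) / np.sqrt(d)          # |Phi_0>, index a*d + b
rho = depolarize_bob(np.outer(phi0, phi0), d, 0.05)
# qber_standard_basis(rho, d) evaluates to ~0.0375 here.
```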
Note that other detector noise components not explicitly mentioned, such as shot noise, dark counts, and losses, are anticipated to be small relative to real-world misalignment noise, e.g. [69]. Any additional detector noise can be readily mapped to an equivalent misalignment error of Fig. 8. Recall that the classical signal is to be set by the system at a much stronger intensity than the quantum signal. As such, most additional detector noise components can be made to have an impact on QKD rates well below the impact caused by a 0.05 m misalignment error.

It should be noted that the quantum channel conjugation investigated in this work is not the only technique that can aid satellite-to-Earth OAM-QKD. It has been shown that AO techniques could improve the performance of OAM-based entanglement distribution in FSO channels (see e.g. [20,21]). AO techniques use a non-entangled classical light source as a probe, and their ability to negate turbulence heavily depends on the number of movable elements used for the required receiver-mirror deformation. But we note that phase perturbations across the transverse plane of the beam, when coupled to diffraction, lead to scintillation, and this cannot be completely negated by AO techniques. However, it is certainly the case that AO applied before any channel conjugation will only lead to improvement in the above results, particularly with regard to corrections of beam wander and direction tracking. No report on the actual use of AO within the context of OAM entanglement distribution through long FSO channels is currently available. In practice, we anticipate that the channel conjugation method used here will lead to better negation of the atmospheric turbulence relative to AO, if either technique is used on its own. However, further research on the coupling of quantum channel conjugation and advanced AO techniques may prove fruitful.

VI. CONCLUSIONS

The OAM of light has been considered a promising DoF that gives access to a higher-dimensional Hilbert space, leading to potentially higher-capacity quantum communications. In this work we explored the feasibility of performing satellite-to-Earth QKD utilizing the OAM of light. Specifically, we numerically investigated the performances of OAM-QKD of different dimensions achieved with different OAM numbers at different satellite altitudes H under different zenith angles θ_z. We found that utilizing the OAM of light in satellite-to-Earth QKD is indeed feasible between a LEO satellite and a high-altitude ground station. First, we considered an ideal circumstance where a high-altitude ground station with a large receiver aperture (no loss) is used. We then moved to less ideal circumstances and discussed the feasibility of satellite-to-Earth OAM-QKD under loss and a lower ground-station altitude.
However, we found that no positive secret key rate can be achieved at a sea-level ground station when a reasonably-sized aperture is used. We then explored the use of quantum channel information as a means to improve the feasibility of satellite-to-Earth OAM-QKD. We assumed such information is acquired through a real-time quantum channel characterization utilizing non-separable states of classical light, and we used this information to perform a quantum channel conjugation at the ground station. We found that the quantum channel conjugation significantly improves the feasibility of OAM-QKD, and leads to positive secret key rates even under circumstances where a sea-level ground station with a reasonably-sized aperture is used. We also found that the quantum channel conjugation enables a key rate advantage (provided by the higher dimensions of OAM-QKD) to be realized.
Effective Young's Modulus Estimation of Natural Fibers through Micromechanical Models: The Case of Henequen Fibers Reinforced-PP Composites

In this study, Young's modulus of henequen fibers was estimated through micromechanical modeling of polypropylene (PP)-based composites, and further corroborated through a single filament tensile test after applying a correction method. PP and henequen strands, chopped to 1 mm length, were mixed in the presence of maleic anhydride grafted polypropylene (MAPP). A 4 wt.% of MAPP showed an effective enhancement of the interfacial adhesion. The composites were mold-injected into dog-bone specimens and tensile tested. The Young's modulus of the composites increased steadily and linearly up to 50 wt.% of fiber content, from 1.5 to 6.4 GPa, corresponding to a 327% increase. Indeed, henequen fibers showed a stiffening capacity in PP composites comparable to that of glass fibers. The intrinsic Young's modulus of the fibers was predicted through well-established models such as Hirsch or Tsai-Pagano, yielding average values of 30.5 and 34.6 GPa, respectively. The single filament test performed on henequen strands resulted in values between 16 and 27 GPa depending on the gauge length, although, after applying a correction method, a Young's modulus of 33.3 GPa was obtained. Overall, the present work presents the great potential of henequen fibers as PP reinforcement. Moreover, relationships between micromechanics models and filament testing to estimate the Young's modulus of the fibers were explored.

Introduction

In recent years, natural fibers have experienced growing demand as plastic reinforcement/filler as a result of the growing environmental consciousness in our society and the need amongst manufacturers for eco-friendly materials [1,2]. Natural fibers are viewed as a sustainable alternative to conventional synthetic fibers (i.e., glass, carbon, aramid) because of their biobased, biodegradable, renewable, and recyclable nature. Further, natural fibers have a low density (around 1.35-1.55 g/cm³), can be purchased at relatively low cost, are non-abrasive to the processing equipment, and are non-harmful to human beings [3-6]. The incorporation of natural fibers in thermoplastic matrices has shown a reasonable enhancement of the mechanical properties in terms of stiffness and rigidity. However, the strength improvement is more reliant on the fiber-matrix interfacial characteristics [7]. These characteristics shown by natural fiber polymer composites have made them suitable for use in various fields such as automotive and building and construction, their use being especially attractive for applications demanding low-density materials [8,9]. In general, it is suggested that the sustainable character of natural fibers may contribute to ecosystem health and sustainability, whereas their low cost, ease of processability, and physicomechanical performance may fulfill the technical and economic interest of industry [10]. Natural fibers can be obtained from different lignocellulosic feedstocks such as wood [11,12], annual plants [13-15], agroforestry residues [16,17], or recycled paper and board products [18,19]. In this work, the case of henequen (Agave fourcroydes), which is native to Yucatan (Mexico) and a close relative of the sisal plant (Agave sisalana), was studied and evaluated as polypropylene (PP) reinforcement.
Despite natural fibers offering many benefits, the lack of compatibility between the lignocellulosic and polymeric materials poses a barrier to the full development of the composites' properties [20]. Such incompatibility is explained by the different surface chemistry of natural fibers and thermoplastic materials. For instance, polyolefins such as polypropylene (PP) and polyethylene (PE) are non-polar and hydrophobic. Moreover, natural fibers have polar groups (i.e., hydroxyl groups) at their surface that provide hydrophilicity to these materials. Consequently, the fibers may self-aggregate via hydrogen bonding and be unevenly dispersed throughout the matrix phase, particularly at elevated fiber contents. The stress-transfer capacity may also drop considerably given the lack of fiber-matrix adhesion, finally affecting the mechanical properties of the material [21]. The scarce fiber-matrix compatibility may also lead to insufficient wetting of the fibers, contributing to void formation and decreasing the water and gas barrier properties [22]. Overall, addressing the issue of fiber-matrix compatibility and poor interfacial adhesion has been one of the most challenging tasks in the composite sector [23].

The above-mentioned shortcomings of natural fiber composites can be avoided, or at least mitigated, by surface modification of the fibers (i.e., alkaline, acetylation, silane, or graft copolymerization treatments, among others) to make them more hydrophobic [24], or by the addition of coupling agents that promote the interactions between both phases [25]. The latter seems to be a more cost-effective and efficient way, at least for large-scale production, to address the issue of the adhesion between fibers and matrix. In the case of polypropylene (PP)-based composites, the use of maleic anhydride grafted polypropylene (MAPP) has shown effective enhancement of the fiber-matrix compatibility [26-28]. Briefly, MAPP's interaction mechanism is based on the formation of chemical bonds via esterification with the hydroxyl groups at the fiber surface. In addition, the PP chains from MAPP may diffuse through the PP matrix via a self-entangling mechanism (physical interactions) [29]. Soleimani et al. [30] stated that the addition of a 5 wt.% of MAPP to flax fiber-reinforced PP composites improved the physical and mechanical properties (water absorption, tensile strength, and impact strength). Nayak et al. reported that the presence of MAPP combined with NaOH-treated sisal fibers increased the tensile strength by 28.22% in comparison to the uncoupled composite, with almost negligible effects on the tensile modulus [31].

As explained, improving the fiber-matrix compatibility is required to obtain technically competitive materials. It is accepted that for engineering and structural applications, the most relevant properties to consider with natural fiber composites are stiffness and dimensional stability [32,33]. The stiffness of materials is generally evaluated by the Young's modulus parameter, which is in essence the slope of the initial linear (elastic) region of the tensile stress-strain curve. The Young's modulus of the composite can be modeled through micromechanics analysis to study the relationships within the composite microstructure in terms of, for example, morphology, orientation, and the fibers' intrinsic properties [34,35].
Micromechanical modeling also allows the prediction of the anisotropic behavior of the composites, which is deemed useful to evaluate the performance of the materials under different loading conditions [36]. A more straightforward manner of determining the intrinsic mechanical properties of the fibers is by submitting them, at the indicated conditions and specifications, to a single filament tensile test. However, the process is tedious, since many tests must be performed to attain reliable results due to the irregular shape and morphology of the strands, which generally leads to high standard deviations. Further, the testing equipment may induce an error in the measurement of Young's modulus. As a consequence, the literature shows that the intrinsic Young's modulus obtained from raw fibers via single filament testing and from the fibers inside the composite through mathematical modeling can be noticeably different [37].

In brief, in this study, composite materials based on PP, henequen strands chopped at 1 mm length, and MAPP were prepared at fiber contents ranging from 10 to 50 wt.%. The stiffness of the composites, reflected in the Young's modulus parameter, was evaluated both at macro and micro scales. In parallel, henequen strands were submitted to a single filament test to evaluate their Young's modulus, the values of which were subsequently corrected for the error induced by equipment-specimen interactions. Overall, the present work stands as a thorough study on the potential of henequen strands as PP reinforcement in substitution of conventional synthetic fibers, and on the relationship between the intrinsic Young's modulus of natural fibers obtained via micromechanics models and via direct tensile testing.

Materials

The polymer matrix was a PP (ISPLEN PP090 G2M) provided by Repsol (Tarragona, Spain). The injection-grade PP presents a melt flow index of 30 g/10 min (230 °C; 2.16 kg) and a density of 0.905 g/cm³. A modified MAPP (Epolene G3015) was incorporated into the materials as a coupling agent. MAPP was acquired from Eastman Chemical Products (San Roque, Spain). Henequen strands from Agave (Agave fourcroydes) were kindly supplied by Centro de Investigación Científica de Yucatán (CICY) (Mérida, Mexico). The lignocellulosic material contains 68.1% of cellulose, 18.2% of hemicellulose, 8.7% of lignin, 3.7% of extractives, and 1.3% of ashes. The strands were initially chopped to 1 mm in length without any further treatments. The average diameter of the strands was previously determined by optical microscopy at 220.8 µm [38]. The obtained fibrous-like material was effectively blended with PP at different percentages.

Methods

The work plan of the present investigation is briefly schematized in Figure 1.

Preparation of Composite Materials

Composite materials were prepared using a Brabender Plastograph mixer (Brabender®, Duisburg, Germany) equipped with two counter-rotating screws to obtain a well-dispersed material. First, PP and MAPP were introduced together into the mixing chamber at 180 °C and 80 rpm until the mixture was completely melted. Henequen fibers, chopped at 3 mm length, were then incorporated into the process, keeping the temperature and rotation speed constant. Reduction of the henequen strand length is crucial to avoid entanglement between the strands. The reduction of the length of the henequen filaments allows a correct dispersion of the fibers and therefore homogeneity of the composite material.
The resulting mixture was discharged from the mixing chamber, cooled down, and ground using a knife mill equipped with a 5 mm mesh screen at the bottom to ensure the size homogeneity of the granules. The obtained granules were stored at 80 °C for 24 h to remove the remaining moisture.

Injection Molding of Standard Specimens and Mechanical Characterization

Dog-bone specimens were produced with a steel mold in an injection molding machine Meteor-40 (Mateu and Solé, Spain). The injection process was carried out at temperatures of 175 °C in zone 1, 175 °C in zone 2, and 190 °C in the nozzle, regarding the three heating areas of the injection machine, as required by ASTM D3641. Before tensile testing, the dog-bone specimens were kept in a climate-controlled chamber at 23 °C and 50% relative humidity for 48 h, according to ASTM D618. The tensile properties were determined using a universal testing machine (INSTRON 1122) working at 2 mm/min, according to ASTM D638 specifications. The Young's modulus was evaluated using an MFA 2 extensometer (Velbert, Germany) on dog-bone specimens for a more precise elongation measurement.

Morphological Analysis of the Fibers

During the compounding processes, fibers may suffer some morphological changes. As a result, it is important to evaluate the morphology of the fibers after compounding. To this end, the composite materials were submitted to a Soxhlet extraction, using decahydronaphthalene (decalin) as the solvent, to dissolve the PP and recover the reinforcing fibers. The morphological features of the fibers, mainly length and diameter, were determined using a MorFi Compact morphological analyzer from Techpap SAS (Grenoble, France).

Scanning Electron Microscopy

Henequen strands were observed by scanning electron microscopy (SEM) (Zeiss DMS 960) to allow a qualitative evaluation of the appearance and irregularities in the shape of the strands.

Single Filament Test

The characterization of the filaments consisted of selecting filaments at random and adhering them individually to cardboard tabs with rectangular openings of equal size, as shown in Figure 2. The filaments were carefully aligned and held with glue. Following the test procedure described in standard ASTM D3822, the test was performed at different gauge lengths of 1, 3/4, 1/2, and 1/4 inches (25.4, 19.05, 12.7, and 6.35 mm, respectively).

Coupling Agent Optimization

The inherent incompatibility between henequen fibers and polypropylene (PP) may inevitably lead to weak interfacial adhesion, scarce stress-transfer capacity, creation of void spaces, and poor water barrier properties. Such issues can be avoided, or at least mitigated, by incorporating maleic anhydride grafted polypropylene (MAPP) into the composites, which may act as a bridge between the lignocellulosic and polymeric phases. MAPP's interaction mechanism has been thoroughly illustrated and discussed in previous works [39,40]. Figure 3 shows the effect of different MAPP percentages (from 2 to 8 wt.%) on the specific energy consumption (SEC) and the tensile properties (strength, Young's modulus, and elongation at break) of 40 wt.% henequen fiber-reinforced PP composites. From Figure 3a one can see that the SEC followed a similar trend to the strength and elongation. In general, energy consumption may increase with melt viscosity due to the increased torque.
Hence, it is suggested that the viscosity of the composites containing a 4 wt.% of MAPP was higher than in the other materials, probably due to the improved adhesion and dispersion of the phases, finally contributing to energy consumption. The Young's moduli of the 40 wt.% reinforced composites were remarkably higher than that of unreinforced PP (1.5 ± 0.1 GPa) and almost independent of the amount of MAPP. Figure 3b shows that the mean value of the Young's moduli remained almost the same regardless of the presence or percentage of MAPP. Thus, the presence of MAPP did not substantially affect the Young's modulus of the composites. This agrees with the literature, which states that the stiffness of composite materials should not be affected by the quality of the interfacial boundary, and thus, by the presence of coupling agents [39,41]. From Figure 3c, it is observed that the tensile strength of the composite without MAPP was close to that of the neat matrix (27.6 ± 0.5 MPa), evidencing scarce fiber-matrix compatibility. The addition of MAPP progressively increased the tensile strength up to a 4 wt.% of the coupling agent, where the tensile strength reached 47.2 ± 1.0 MPa (a 71% increase). At higher MAPP contents, the strength decreased again, which could indicate the saturation of the hydroxyl groups, resulting in self-entanglement and slippage of MAPP molecules [41,42]. The elongation at maximum stress (Figure 3d) followed a similar tendency to the tensile strength. In this case, the parameter increased from 1.9% (0 wt.% MAPP) to 3.3% (4 wt.% MAPP). It is worth noting that MAPP may favor the adequate dispersion and distribution of henequen fibers throughout the matrix phase by avoiding the fibers' self-aggregation, and this could also enhance the strength and elongation at break of the material.

Analysis of the Young's Modulus

PP-based composites with varying amounts of henequen fibers, from 10 to 50 wt.%, and 4 wt.% of MAPP were prepared. Table 1 presents the evolution of the Young's modulus and elongation at maximum stress of the composites. Further, the evolution of these properties with the fiber volume content is represented in Figure 4. The Young's modulus of the composites increased steadily and linearly (Figure 4) with the fiber volume fraction, from 1.5 GPa (neat PP) to 6.4 GPa (50 wt.% reinforced composite). The linear increase of the mechanical property with the fiber content has been regarded as an indicator of adequate dispersion and distribution of the fibers within the polymeric phase. This is considered a remarkable outcome, since some cellulose-rich lignocellulosic fillers/fibers tend to aggregate at elevated percentages, critically decreasing their mechanical performance. Indeed, it has been reported that a 15-25 wt.% fiber content is optimal for polyester and polyolefin-based composites [43-45]. Thereby, the fact that the composites could be effectively loaded with a 50 wt.% of henequen fibers, with no significant drop in their elongation capacity, is considered relevant, especially for the obtention of low-cost and mechanically competitive materials. The composites showed comparable stiffness to that of glass fiber (GF)-reinforced PP composites. For instance, the 30 wt.% henequen-reinforced composites exhibited a similar Young's modulus to a composite containing a 40 wt.% of GFs, yet offering the advantage of a lower-weight and more environmentally friendly material [46-48].
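Since the micromechanical analysis that follows works with the fiber volume fraction V^F while the composites are formulated by weight, a small conversion sketch may be helpful (our illustration; the PP density is taken from the Materials section, while the fiber density of ~1.4 g/cm³ is an assumed typical value for lignocellulosic fibers, not a value reported here):

```python
def volume_fraction(w_f, rho_f=1.4, rho_m=0.905):
    """Fiber volume fraction V^F from the fiber weight fraction w_f, given the
    fiber density rho_f and matrix density rho_m (g/cm^3). rho_f = 1.4 is an
    assumed typical value for lignocellulosic fibers."""
    return (w_f / rho_f) / (w_f / rho_f + (1.0 - w_f) / rho_m)

# e.g. 50 wt.% of fibers corresponds to V^F ~ 0.39 with these densities.
```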
The manufactured composites also showed a greater stiffening potential than other natural fiber-based composites from wood resources [49].

Fiber Contribution to the Young's Modulus

The stiffening potential of henequen fibers can be reliably compared with that of other types of fibers by defining a Fiber Tensile Modulus Factor (FTMF). The FTMF can be obtained by rearranging the well-known modified Rule of Mixtures (mRoM). The mRoM combines the separate contributions of the reinforcement and the matrix to the Young's modulus of a composite, as expressed in Equation (1):

E_t^C = η_e · E_t^F · V^F + (1 − V^F) · E_t^M (1)

In Equation (1), the Young's moduli of the composite, fibers, and matrix are represented by E_t^C, E_t^F, and E_t^M, respectively. η_e represents the modulus efficiency factor; this factor is introduced into the model to correct the stiffening efficiency of the fibers. It depends principally on the orientation and morphological features of the fibers and hence can be expressed as the product of a modulus orientation factor (η_o) and a modulus length factor (η_l). The FTMF can be obtained by isolating the contribution of the fibers to the Young's modulus of the composite, described by η_e·E_t^F, as shown in Equation (2):

FTMF = η_e · E_t^F (2)

The FTMF is obtained as the slope of the line when E_t^C − E_t^M·(1 − V^F) is plotted against V^F at each fiber content. The FTMF is deemed an adequate indicator of the stiffening potential of the fibers and can be used for comparison purposes. In this work, the FTMF obtained for henequen fibers was compared to that of other fiber typologies, such as wood fibers [49] and glass fibers [46], which were selected as main representatives of the categories of natural and synthetic fibers. The FTMFs of henequen, glass, and wood fibers were 14.4, 32.8, and 10.5, respectively. From these values, it can be estimated that, at a constant fiber volume fraction, wood fibers and glass fibers should have a stiffening effect of 0.73 and 2.3 times that of henequen fibers. Regarding weight percentages, Figure 5 shows that 50 wt.% of wood fibers, 40 wt.% of henequen fibers, and 30 wt.% of glass fibers can stiffen PP similarly.

Estimation of the Intrinsic Young's Modulus of Henequen Fibers

The modulus efficiency factor (η_e) and the intrinsic Young's modulus of the fibers are unknown variables in the mRoM. A possible way to estimate the intrinsic Young's modulus of natural fibers is through micromechanical modeling. Two well-established models that have offered an effective prediction of this intrinsic property are the Hirsch and Tsai-Pagano models. The Tsai-Pagano model can be combined with the Halpin-Tsai equations, hereby abbreviated as the TP & HT model. The Hirsch model is described in Equation (3) [50]. The factor β is introduced into the Hirsch model to correct the contribution of the fibers to the Young's modulus. β values close to 0.4 have proved to adequately reproduce the results obtained experimentally for natural fiber composites processed by injection molding [51]. Unlike the Hirsch model, the Tsai-Pagano model in combination with the Halpin-Tsai equations incorporates morphological data from the fibers [52]. The Tsai-Pagano model is shown in Equation (4) [53], whereas the respective longitudinal modulus (E_11) and transverse modulus (E_22) are computed according to the Halpin-Tsai equations, as shown in Equations (5) and (6), respectively [54]. For the application of the Halpin-Tsai equations, the mean fiber length and diameter of the fibers are required.
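Since the model equations are only referenced above, a minimal numerical sketch may help. It uses the standard textbook forms of the Hirsch and Tsai-Pagano/Halpin-Tsai models (the paper's exact Equations (3)-(6) may differ in detail), with invented inputs rather than the paper's measured data:

```python
import numpy as np
from scipy.optimize import brentq

def hirsch_Ec(Ef, Em, Vf, beta=0.4):
    """Hirsch model, standard form: weighted mix of the parallel
    (rule-of-mixtures) and series (inverse rule-of-mixtures) bounds."""
    parallel = Ef * Vf + Em * (1.0 - Vf)
    series = Ef * Em / (Ef * (1.0 - Vf) + Em * Vf)
    return beta * parallel + (1.0 - beta) * series

def tsai_pagano_Ec(Ef, Em, Vf, lf, df):
    """Tsai-Pagano (Ec = 3/8*E11 + 5/8*E22) with Halpin-Tsai E11/E22,
    in their common textbook forms."""
    ratio = Ef / Em
    xi_L, xi_T = 2.0 * lf / df, 2.0          # shape parameters
    eta_L = (ratio - 1.0) / (ratio + xi_L)
    eta_T = (ratio - 1.0) / (ratio + xi_T)
    E11 = Em * (1.0 + xi_L * eta_L * Vf) / (1.0 - eta_L * Vf)
    E22 = Em * (1.0 + xi_T * eta_T * Vf) / (1.0 - eta_T * Vf)
    return 0.375 * E11 + 0.625 * E22

def intrinsic_Ef(model, Ec, *args):
    """Back out the intrinsic fiber modulus by solving model(Ef, ...) = Ec."""
    return brentq(lambda Ef: model(Ef, *args) - Ec, 1.0, 500.0)

# Hypothetical inputs (GPa for moduli, um for lengths); not measured values
Em, Vf, Ec = 1.5, 0.29, 5.4
print("Hirsch:  Ef ~", round(intrinsic_Ef(hirsch_Ec, Ec, Em, Vf), 1), "GPa")
print("TP & HT: Ef ~", round(intrinsic_Ef(tsai_pagano_Ec, Ec, Em, Vf, 550.0, 25.4), 1), "GPa")
```

The same root-finding step works for any micromechanics model that maps an assumed fiber modulus to a composite modulus.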
As is well known, compounding and injection molding of the composites may be accompanied by a significant reduction in fiber length, and such effects may vary with the fiber content [55,56]. These morphological changes were studied, and the results are reported in Table 2. The intrinsic Young's moduli calculated from the Hirsch model and the TP & HT model are also incorporated in Table 2. As hypothesized, Table 2 indicates that henequen fibers, initially chopped to 1 mm in length, were significantly shorter after the mixing and injection molding processes. This is due to the high shear forces applied to the fibers during compounding, which lead to fiber attrition. At higher reinforcement contents, such deterioration is more pronounced because of the increase in mixture viscosity and shearing forces. Besides, the diameter of the fibers remained very stable, around 25.3-25.6 µm. It should be noted that the diameter of the initial strand was around 220 µm, which indicates that during the compounding process the single fibers were detached from the original bundle as a consequence of shearing. Overall, the average intrinsic Young's moduli of henequen fibers were estimated at 30.5 and 34.6 GPa according to the Hirsch and TP & HT models, respectively. In Figure 6, the estimated Young's moduli are compared with the values obtained from the single filament tensile tests. Results showed that with an increase in the gauge length, Young's modulus increased from 16.0 GPa to 27.0 GPa (from 6.35 to 25.4 mm, respectively). Figure 6 also shows relatively high standard deviations, reflecting the variability in the Young's modulus measurements. Since Young's modulus is a fundamental property of the material, such differences were not expected, but they can be explained by the test conditions and the equipment. The measurements of the fiber deformations were made without an extensometer, and the displacement of the clamps was used to compute Young's modulus. When the load is applied, there is slippage between the fibers and the clamps, adding an error to the fiber deformation value. Such error mainly depends on the type of grips and the relative slippage of the filament. The relative slippage of the filaments is expected to be more pronounced at lower gauge lengths, and thus it could also explain the increase of Young's modulus with the gauge length [57]. Experimental values can be corrected by isolating the fiber slippage from the Young's modulus evaluation, following a method that contemplates the interactions between the testing equipment and the specimen under test. This method was first introduced by Guimarães et al. (1978) [58] and further reviewed by other authors [59]. The method has typically been applied for the correction of the Young's modulus of glass and carbon fibers; in this work, the method is further applied to henequen strands. The corrected Young's modulus (E_t^{F*}) is given by Equation (7):

(tan θ_o)^{-1} = L_o / (A_o · E_t^{F*}) + K_m (7)

where θ_o is the slope of the line resulting from the graphical representation of the force (N) against the extension (mm), K_m is a constant that represents the error generated by the testing machine, and L_o and A_o are the gauge distance and cross-sectional area of the filament under test. If (tan θ_o)^{-1} is plotted against L_o/A_o at the different gauge lengths, it is possible to obtain the value of K_m at the y-intercept and E_t^{F*} from the slope of the line. Therefore, the corrected Young's modulus for henequen strands was found to be 33.3 GPa.
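The compliance correction just described lends itself to a short numerical sketch. The form below follows the plotting procedure in the text (synthetic data, with an assumed machine-compliance constant; this is an illustration, not the authors' analysis code):

```python
import numpy as np

def corrected_modulus(gauge_mm, area_mm2, stiffness_N_per_mm):
    """Machine/grip-compliance correction for single-filament tests:
    fit 1/tan(theta_o) = (L_o/A_o)/E + K_m; returns (E in MPa, K_m)."""
    x = np.asarray(gauge_mm) / np.asarray(area_mm2)   # L_o / A_o
    y = 1.0 / np.asarray(stiffness_N_per_mm)          # measured compliance
    slope, Km = np.polyfit(x, y, 1)
    return 1.0 / slope, Km

# The four gauge lengths used above (mm); area of a ~25 um filament (mm^2)
L = np.array([6.35, 12.7, 19.05, 25.4])
A = np.full(4, np.pi * 0.0125 ** 2)
E_true, Km_true = 33300.0, 0.002                      # assumed MPa, mm/N
stiffness = 1.0 / (L / A / E_true + Km_true)          # synthetic F-vs-extension slopes
E_est, Km_est = corrected_modulus(L, A, stiffness)
print(f"Corrected E ~ {E_est / 1000:.1f} GPa, K_m ~ {Km_est:.4f} mm/N")
```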
Remarkably, this value lies between the Hirsch and TP & HT model predictions, which suggests good agreement between the micromechanics models and the tensile testing of the strands. This contrasts with other studies, which stated that micromechanics analysis and single fiber tests do not typically yield similar fiber properties [37]. At this point, the utility of micromechanics models to predict the intrinsic properties of natural fibers is underlined. Among the micromechanics models, the Hirsch model does not require any input of morphological data, unlike the Tsai-Pagano model, since it uses only experimental data from the tensile test. Overall, the Hirsch model stands as a simple, yet effective, way to estimate the intrinsic properties of natural fibers. As expected, the intrinsic modulus is notably higher for glass fibers (18.8 GPa); however, their specific properties are of the same order because of the notably higher density of glass fibers. This makes natural fibers, and thus henequen fibers, especially attractive for low-weight material applications such as in the automotive industry.

Modulus Efficiency, Length, and Orientation Factors

Knowing the intrinsic Young's modulus of the fibers, it is possible to calculate the modulus efficiency factor (η_e) from the mRoM. The influence of the fiber length on the efficiency factor can be estimated through a modulus length factor (η_l) according to the Cox [60] and Krenchel [61] model, as shown in Equation (8) (a computational sketch of this factor is given at the end of this paper). In Equation (8), the mean fiber length and mean fiber diameter are represented by l_F and d_F, respectively, and ν stands for the Poisson's ratio of PP (0.36). Once the modulus efficiency and length factors are computed, a modulus orientation factor (η_o) can simply be isolated from the relationship η_e = η_o·η_l. Table 3 collects the modulus efficiency, length, and orientation factors at the different reinforcement contents. It is observed in Table 3 that the modulus efficiency factor takes values between 0.46 and 0.47. These values are inside the expected range for natural fiber-reinforced polymer composites, between 0.4 and 0.6 [62]. It is suggested that the fibers could effectively stiffen the composite material. The relatively high values obtained for the modulus length factor reveal the importance and significance of morphology, precisely the aspect ratio, in providing stiffness to the material, while the modulus orientation factor was somewhat lower, around 0.5.

Conclusions

In this work, henequen strands were chopped to 1 mm length and melt-extruded with polypropylene for the development of composite materials. A 4 wt.% of maleic anhydride grafted polypropylene was incorporated into the composites to address the issue of interfacial adhesion. Stiffness, hereby represented by the Young's modulus, is considered a crucial parameter to gauge the competitiveness and performance of composite materials, especially for structural applications. The results show that the addition of henequen fibers to polypropylene significantly increased the Young's modulus from 1.5 to 6.4 GPa up to a 50 wt.% of reinforcement, corresponding to an increment of 327%. The behavior of the composites was modeled through micromechanics analysis. Micromechanics modeling provides valuable fiber properties such as the intrinsic Young's modulus, the stiffening efficiency, and the influence of morphology and orientation. The Young's moduli of henequen fibers were evaluated using the Hirsch model and the Tsai-Pagano model combined with the Halpin-Tsai equations.
The models delivered average Young's moduli of 30.5 and 34.6 GPa, respectively, which are of the same order of magnitude as those of other natural strands such as abaca and hemp. These values were compared to the measurements performed on the initial henequen strands through direct filament tensile testing, where, after applying a correction method, a similar Young's modulus of 33.3 GPa was attained.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.
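As an illustrative aside on the Cox-Krenchel length factor of Equation (8) referenced above: a minimal sketch, assuming the common Cox shear-lag form with a square-packing spacing term (the exact equation used in the paper may differ, and all numerical inputs here are assumptions, not measured values):

```python
import numpy as np

def eta_length(lf_um, df_um, Ef_GPa, Em_GPa, nu_m, Vf):
    """Cox shear-lag modulus length factor eta_l = 1 - tanh(x)/x with
    x = beta * l / 2; square fiber packing assumed for ln(R/r)."""
    r = df_um / 2.0
    Gm = Em_GPa / (2.0 * (1.0 + nu_m))           # matrix shear modulus
    ln_Rr = 0.5 * np.log(np.pi / (4.0 * Vf))     # packing assumption
    beta = (1.0 / r) * np.sqrt(2.0 * Gm / (Ef_GPa * ln_Rr))
    x = beta * lf_um / 2.0
    return 1.0 - np.tanh(x) / x

# Inputs of the right order for these composites (assumed, not measured)
eta_l = eta_length(lf_um=550.0, df_um=25.4, Ef_GPa=33.3,
                   Em_GPa=1.5, nu_m=0.36, Vf=0.29)
eta_e = 0.47                                     # efficiency factor from mRoM
print(f"eta_l ~ {eta_l:.2f}, eta_o = eta_e / eta_l ~ {eta_e / eta_l:.2f}")
```

With inputs of this order, the orientation factor comes out near 0.55-0.6, broadly consistent with the value of roughly 0.5 reported above.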
2021-11-18T16:13:41.436Z
2021-11-01T00:00:00.000
{ "year": 2021, "sha1": "502b18cb322f2be8dc4dffe01a716cc85504ee95", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4360/13/22/3947/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ec736afbd7a08770b0940287ae4ab6a5f64ac2d0", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
231760480
pes2o/s2orc
v3-fos-license
Establishing Normative Data on Singing Voice Parameters of Children and Adolescents with Average Singing Activity Using the Voice Range Profile

Purpose: The purpose of this study was to establish and characterize age- and gender-specific normative data of the singing voice using the voice range profile for clinical diagnostics. Furthermore, associations between the singing voice and socioeconomic status were examined. Methods: Singing voice profiles of 1,578 mostly untrained children aged between 7.0 and 16.11 years were analyzed. Participants had to reproduce sung tones at defined pitches, resulting in maximum and minimum fundamental frequency and sound pressure level (SPL). In addition, maximum phonation time (MPT) was measured. Percentile curves of frequency, SPL, and MPT were estimated. To examine the associations of socioeconomic status, multivariate analyses adjusted for age and sex were performed. Results: In boys, the mean of the highest frequency was 750.9 Hz and lowered to 397.1 Hz with increasing age. Similarly, the minimum frequency was 194.4 Hz and lowered to 91.9 Hz. In girls, the mean maximum frequency decreased from 754.9 to 725.3 Hz. The mean minimum frequency lowered from 202.4 to 175.0 Hz. For both sexes, the mean frequency range ∆f showed a constant range of roughly 24 semitones. The MPT increased with age for boys and girls. There was neither an effect of age nor sex on SPLmin or SPLmax, ranging between 52.6 and 54.1 dBA and between 86.5 and 82.8 dBA, respectively. Socioeconomic status was not associated with the above-mentioned variables. Conclusion: To our knowledge, this study is the first to present large normative data on the singing voice in childhood and adolescence based on a high number of measurements. In addition, we provide percentile curves for practical application in clinic and vocal pedagogy, which may be applied to distinguish between normal and pathological singing voice.

Introduction

The voice is an integral part of communication, enabling social interaction in adults and children alike. During childhood and adolescence, processes like growth, maturation, and puberty have an important influence on the development of a child's voice. Vocal disorders during that time can have a severe impact on a child's social behavior and academic achievement, possibly leading to higher unemployment rates in later life [1][2][3].
Those ailments are not rare, as prevalence in children is estimated at 6% [4], with other studies even stating a percentage as high as 24%, especially in urban areas [5,6]. These vocal illnesses may present as frequent throat clearing or coughing, a strained or easily tiring voice, a low or hoarse voice, difficulty being heard, a sensation of tension, pain, or a lump in the throat, and voice breaks while talking. The most common manifestation in childhood is a hyperfunctional voice disorder with vocal nodules as secondary organic changes of the epithelium [6,7]. Therefore, detecting these illnesses should be a focus of phoniatricians and pediatricians. Previously, study populations were mostly too small to derive normative data; some considered exclusively vocally trained or untrained children or specific age groups like infants or boys during voice change [2,8,9]. Due to the lack of normative data, the interpretation of the singing voice remains subjective. This study aimed to establish normative data on the singing voice of mostly untrained children and adolescents, who represent a typical distribution of singing activity in a normal population. Due to an exceptionally high number of 1,578 participants, solid normative data could be created. The voice range profile (VRP) is a common diagnostic tool to qualify and quantify basic parameters of the singing and speaking voice. Titze [10] described it as a standard nonspeech task for the assessment of laryngeal motor control to assess pitch and loudness. Being a noninvasive, semiobjective tool, it is useful to visualize the extent of the frequencies someone may produce when singing at low and high intensity, thus showing the dimensions of vocal frequency and intensity. It might assist the diagnosis of voice problems. Accordingly, irregular findings might indicate voice disorders like hoarseness, glottal chinks, and nodules, or functional misuse of the voice [11,12]. As this study aims to provide normative data, percentile curves have been estimated as well. Percentile curves are commonly used in pediatrics to visualize the age-varying distribution of a parameter. Therefore, the method facilitates the assessment of an individual's performance in the context of the sex- and age-adjusted norm. In addition to frequency and intensity ranges, this study establishes normative data on the maximum phonation time (MPT), which is used as an indicator of phonatory control [13]. Furthermore, a reduction of the MPT is a measure of voice impairment according to the International Classification of Functioning, Disability and Health. Hillel et al. [14] suggest that any shortening of the MPT indicates potential vocal disabilities, like vocal nodules, chronic laryngitis, vocal polyps, and others [15]; therefore, it can be used to assess an individual's voice health status [16]. Besides the general description of the voice profile, the relationship between socioeconomic status and children's voice was assessed. So far, only a few studies have investigated possible associations, even though social background plays an important role in health. In lower-income families, a deficient communication culture, due to the lack of a framework for objective argumentation or patient listening, as well as a high noise level in the family environment, is more common [17]. This might lead to long-term overstraining of vocal capabilities and thus to the development of vocal disorders [18]. These traits might also be found in larger families, regardless of their social status.
Moreover, associations between a child's speaking-voice intensity and personality, especially extraversion and emotional stability, which are more common in families with a higher socioeconomic background, were described by Poulain et al. [19]. Smillie et al. [20] state that socioeconomic status has an impact on children's speaking voice, with children from lower social backgrounds showing more vocal disorders. Therefore, this study tries to show whether socioeconomic status has an influence on the singing voice.

Study Population

This study is part of the LIFE Child Study, an ongoing large prospective cohort study, which started in 2011 (clinical trial No. NCT02550236) [21,22]. The LIFE Child Study has been designed to understand how (epi)genetic, metabolic, and environmental factors influence health and development in children and adolescents in modern society. More than 4,000 children, adolescents, and their families have been recruited so far. The VRP examinations were conducted to gain information on the natural development and variation of voice parameters during childhood and adolescence. The term adolescence used in this study describes participants at the beginning of puberty up to the age of majority of 18 years, as in most countries. Between 2011 and 2016, one VRP measurement was collected from each of the 1,578 children aged between 7.0 and 16.11 years. 91 measurements had to be excluded, as described below. Thus, the analyses are based on a total of 1,487 measurements.

Voice Measurement

The VRP included speaking voice as well as singing voice tasks. The speaking voice of the participants was analyzed by Berger et al. [23] and is not part of this study. The singing voice was assessed with frequency and intensity parameters while the participants reproduced defined pitches presented by the examiner on an electric piano; thus, we describe a specific function, the phonatory capabilities of the larynx. The sequence of tones started at a pitch in the middle of the speaking voice range of each subject, lowering down to a minimum and subsequently advancing to higher frequencies until a maximum was reached. First, the subject was asked to sing as softly as possible, afterward as loudly as possible. If a participant was able to use the high register, this was included in the measurement. Throughout the recording, the investigators provided the subjects with hand signals to coach them to further lower/increase their volume. The examiner was allowed to give instructions and advice even during the recording, as deemed necessary. In case of software malfunction or vocal failure, the examiner could delete the respective recording and start a second attempt. The VRP visualizes each frequency and the corresponding intensities. In addition, the subject was instructed to hold a sung tone for as long as possible at a comfortable pitch and loudness, quantifying the MPT. To exclude any training effect on the MPT, this procedure was only done once. All examiners had been trained and certified by phoniatricians. Furthermore, to ensure high quality over the entire study period and to increase examiner reliability, they were periodically (every 3 months) recertified and supervised by the above-mentioned specialists. To reduce external noise, all measurements were performed in a soundproof room.
Ambient noise did not exceed 40 dB, according to the standard operating procedures recommended by the Union of the European Phoniatricians [24]. Singing voices were recorded using DiVAS® software (XION Medical, Berlin, Germany) running on a portable Windows-based PC. It includes a self-calibrating XION USB microphone headset with a constant distance of 30 cm from the subject's mouth, which secures the exact measurement of the parameters without prior calibration. Unlike adults, who are supposed to stand at a defined spot, the participant could rest on a parent's lap in order to reduce timidity or movements during the examination. These comprehensive standard operating procedures ensure a high and stable quality of the registration of the voice parameters. In this study, the highest and lowest fundamental frequencies (f0max [Hz], f0min [Hz]) were taken into account. The difference between the two, ∆f [semitones] (f0max - f0min), that is, the frequency range of an individual, was expressed in semitones using scientific pitch notation (A4 = a′ = 440 Hz) to allow a straightforward comparison (the conversion is illustrated in the sketch following the Conclusion). Furthermore, the sound pressure level (SPL [dBA]) was expressed by the loudest (SPLmax) and the softest (SPLmin) tone sung by each participant, and their margin ∆SPL was described. Finally, the MPT (s) was assessed.

Exclusion Criteria

To ensure high data quality, 91 measurements had to be excluded. To obtain reliable and analyzable data on a singing voice, more than 4 tones sung softly and another 4 sung loudly needed to be accurately recorded for each child. All measurements were assessed individually, not only visually but also audibly. In the course of this, the accuracy of the pitch-matching was assessed. 72 measurements did not meet the prerequisites. In addition, 12 children were not vocally healthy on the day of examination, having an upper respiratory infection or being hoarse. 7 measurements had to be removed due to technical problems. Hence, of the total of 1,578 measurements, 1,487 could be evaluated.

Sociodemographics and Singing Activity

To determine the socioeconomic status of each participant, the adjusted Winkler index was applied, taking into account the education and occupational status of the parents as well as the equivalent household income. The resulting score can be classified into upper, middle, and lower class [25,26]. The children, if needed with the help of their parents, were asked via a standardized questionnaire following the classification by Fuchs et al. [27] about their degree of vocal strain and vocal training and whether they played a wind instrument, as there are indications in the literature that this may lead to increased strain on the vocal apparatus. Based on the classification of singing activity defined by Fuchs et al. [27], 90.8% of the participating children had no voice training and sing only spontaneously and not in front of an audience. 9.2% of the children engage in occasional or regular organized singing and have voice training, ranging from large groups up to individual lessons. All children were included when estimating reference percentiles because this composition is typical for a Western population.

Statistical Analyses

Statistical analyses were conducted using R for Macintosh version 3.4.1 (RStudio Inc., Boston, MA, USA). Linear regression and multiple linear regression models were used to analyze the associations with age, sex, and socioeconomic status as independent variables.
p values ≤ 0.05 were considered statistically significant. To estimate percentile curves of the voice frequency, an LMS-type method implemented as a generalized additive model for location, scale, and shape (GAMLSS) was applied (see the illustrative sketch following the Conclusion). The method is recommended by the World Health Organization for generating age-dependent reference values. It was used in the World Health Organization multicenter growth study to establish the new international growth standard [28,29].

Ethics

The study was designed in accordance with the Declaration of Helsinki and under the supervision of the Ethics Committee of the University of Leipzig (Reg. No. 264- . Each participant and the authorized representative were informed about the study program, the long-term use of data, potential risks of participation, and the right to withdraw from the study. Informed written consent was provided by all parents and, from the age of 12 years, by the children themselves.

Data Summary

In summary, 1,487 measurements were considered. Of those, 756 (50.8%) were male and 731 (49.2%) female. Table 1 shows the voice characteristics of the study population stratified by age.

Frequency: Effects of Age and Sex

The mean highest fundamental frequency as well as the mean lowest fundamental frequency of male singing voices did not change significantly up to the age of 12 (p ≤ 0.05, f2 = 0.32). During voice change, they lowered significantly until the mean lowest and highest frequencies reached a stable level at the age of 15. The largest slope was seen for males during the voice change between 12 and 15 years of age, where the maximum frequency decreased by 117 Hz. The mean highest fundamental frequency of female singing voices changed significantly with increasing age. It shows, in contrast to male singing voices, a linear lowering of 8.2 Hz (95% CI 11.9-4.6) per year (p ≤ 0.05, f2 = 0.152). Likewise, the mean lowest fundamental frequency of female singing voices lowered by 3.0 Hz (95% CI 3.6-2.4) per year (p ≤ 0.05, f2 = 0.143). In females, the mean maximum frequency also lowers continuously and significantly, but not starkly, from 754.9 Hz (G5) at the age of 7 to 725.3 Hz (F♯5) at the age of 16 years (p ≤ 0.05, d = 0.53). A similar trend can be observed for the minimum frequency, from 202.4 Hz (A♭3) at the age of 7 to 175.0 Hz (F3) at the age of 16 years. Female voices lower by approximately 2 semitones during adolescence. Figure 1 shows the percentile curves for f0max and f0min in males and females. Note the decrease of the frequency margin at the age of 12 years for males. Figure 2 shows the percentile curves for the variable ∆f, which is the difference between f0max and f0min. ∆f is expressed in semitones to ensure an easy comparison. There was no significant age trend in males or females. The frequency range of around 24 semitones (2 octaves) stays stable over age. As expected, the frequency lowers with age, which can be attributed to adolescence for both sexes. Sex has a steady effect on the frequencies. There is a significant difference independent of age, with males showing a lower maximum and minimum frequency at all ages (p ≤ 0.05, f2 = 0.57; Table 1).

SPL: Effects of Age and Sex

In males, the mean SPLmax of the singing voice lowers significantly over age. It shows a linear lowering of 0.05 dBA (95% CI 0.11-0.05) per year (p ≤ 0.05, f2 = 0.124). A similar development can be seen for SPLmin, with a decrease of 0.15 dBA (95% CI 0.23-0.06) per year (p ≤ 0.05, f2 = 0.117).
In females, the means of SPLmax (b = -0.001 [95% CI -0.12 to 0.12], p = 0.63) and SPLmin (b = -0.34 [95% CI -0.12 to 0.05], p = 0.44) show no significant age trends. Figure 3 shows the percentile curves of SPLmax and SPLmin for both sexes.

Maximum Phonation Time

The variable MPT significantly increased with age (p ≤ 0.05, f2 = 0.36). We found a significant difference between the sexes, with males showing significantly longer MPT than females (p ≤ 0.05, d = 0.43; Fig. 4a). At 7 years of age, boys were able to hold a sung tone for an average of 11.1 s; at the age of 16 years, they were able to hold it for around 15 s. Figure 4 shows the respective percentile curves.

Effects of Winkler Score

The score was only available for 811 participants due to missing data. The age and sex distributions of the subsample did not differ from those of the entire sample. 27.1% of the participants belong to lower-income families, 66.8% to the middle class, and 6.1% to the upper class, which resembles the distribution in Germany. For f0min, f0max, and ∆f, there were no significant differences between the socioeconomic classes. In general, there were also no significant differences between the socioeconomic classes for SPLmin and SPLmax. Socioeconomic status showed no influence on the variable MPT.

Discussion

The aim of this present study was to amass normative data on the singing voice of children and adolescents. An unmatched large set of examinations of children aged between 7.0 and 16.11 years provides a unique opportunity to create solid normative data. Siupsinskiene and Lycke [30] describe that it is important for otorhinolaryngologists or phoniatricians to have a quantification of voice quality in order to be able to assess any vocal pathology. This study provides a high quality of measurements and, consequently, equally valuable normative data: first, by implementing standard operating procedures as described above; second, by regular supervision of the measurement process by trained phoniatricians; and third, by re-evaluating the given data in a lengthy process, both visually and audibly. This study helps to quantify voice quality and describe a healthy voice. The percentile curves of these variables might help visualize the development and give a tool for appraising the singing voice. A comparison with previous findings might be complex, as most previous studies have used the fundamental frequency to depict the frequency of each individual by letting subjects read set texts [31][32][33]. Here, we go further by describing the minimum frequency as well as the maximum frequency, giving a much broader view of children's singing voices. The authors also believe that a frequency range, as used here, shows the extent of a healthy voice in a general population of children and adolescents, following the findings of Wuyts et al. [5]. Alterations, especially in the sense of a limited vocal range, may be an indication of possible vocal disorders and lead to further clinical examination. This study shows values for the minimum frequency for boys of 194.4 Hz (G3) at the age of 7 and similar ones for females of the same age (202.4 Hz [A♭3]). These findings resemble the data of nonsinging children described by Siupsinskiene and Lycke [30].
The contrast of frequencies between the sexes is easily visible when comparing 16-year-old males, with a minimum frequency of 81.9 Hz (G♭2), and females, with 175.0 Hz (F3), the latter resembling the values found by Lycke and Siupsinskiene [34] for female nonsingers. A similar descent is shown by the variable maximum frequency, as depicted above. The expected effect of voice change is seen here for both males and females. The lowering of the male voice goes hand in hand with other data on the boy's voice during voice change [35][36][37]. It can be described as roughly one octave. This is true for the female voice as well, but not as strongly, as it only lowers by 2 semitones [38,39]. The frequency range of about 24 semitones stays constant with ageing and is the same for females and males, as shown by other authors as well [11,12,30]. As more than 90% of the participants are nonsingers, the authors of this study deem these frequency variables appropriate to describe the voices of vocally healthy children in a population of mostly vocally inactive people. The SPL as a parameter to describe the voice of an individual has been used by other authors as well, and their findings resemble the ones depicted in this study [8]. The minimum SPL in this study was found to be around 53 dBA and the maximum SPL 85 dBA. Schneider-Stickler [40] described that a healthy voice should reach 90 dBA, even if other studies found lower values [8]. Others describe slightly higher values, even up to 97 dBA [11,41]. Reasons for this gap in SPLmax could be the motivation of the children and the study situation, where the collection of voice parameters was only one part of the many aspects and examinations of the LIFE Child Study. The MPT, an important variable to describe the healthy voice, is an indicator of phonatory control [14]. In this study, normative data for boys aged 7 years show values for the MPT of 11.1 s, and for females of 10.7 s, which supports the findings of other authors [42]. When growing up, the MPT extends for boys up to 15.9 s and for girls to 14.4 s. This can also be seen in values described by Fuchs et al. [43], who stated that the MPT for boys is 3 s longer than for females. This can be explained by the physical development during puberty, as a child's growth generally increases lung capacity [44]. Some authors reckon that various morbidities tend to shorten the MPT [14,16,45]; others discuss the influence of singing activity on the MPT. Yet, Wendler et al. [46] state that too much focus is put on the measurement of the MPT. The difference between male and female voices might also be attributed to the fact that females have a higher prevalence of posterior chinks, which are quite common during puberty [47]. The authors of this study, though, reckon that it is necessary to have reliable data on the MPT. Even if not decisive for discriminating between vocally healthy and vocally impaired children, they at least provide an auxiliary means to support any suspicion. Further studies are needed to validate the implementation of the MPT as an appraisal of a healthy voice. The percentile curves give a tool to appraise the MPT of a subject quickly and easily. Similarly to pediatrics, these percentile curves, not only of the variable MPT, give an overview of individual results and may be used to compare them with those of other children.
They thereby add great value to the clinical routine of phoniatricians and pediatricians, as with the help of this acquired normative data one is able to find irregularities with a fast and noninvasive examination. Socioeconomic class as an important influence on a child's health has been widely described [48,49]. This led to the inclusion of socioeconomic data in this study to determine whether any differences between the social classes regarding the children's voice may be described. Here, socioeconomic status is depicted in 3 classes, as described above. Other studies describe an influence regarding vocal fold nodules [20, 3]; those described the data in another way, so any comparison might be complex. Also, there are significant confounders when associating the singing voice with socioeconomic status, like the number of people living in a family or access to health care, which were not addressed in this study [17,50]. The data given in this study established no significant differences between the 3 social classes regarding any of the parameters. One might conclude that although social class influences other ENT- and phoniatrics-related diseases, it does not have an impact on the extent and characteristics of a child's singing voice.

Limitations

In this study, the authors only took data from one area in Germany. To describe the singing voice on a global scale, multicentric studies need to be conducted. In addition, the authors did not check for a range of ethnicities or native languages, so the normative data might only be suitable for Caucasian subjects due to limited representativeness [51,52]. Further studies need to be conducted to show whether the VRP is the preferable tool for assessing vocal disorders. Musical deficits or exceptional musical ability might interfere with the diagnosis. Also, this study might be limited by the fact that no further distinction was drawn between singers and nonsingers. The participants represent the typical distribution of singing activity in a normal population. Of course, individual conditions, like regular singing in choirs, can lead to a significant extension of the parameters. Additionally, the authors think that to gain profound insight into the voice of a child, one needs to assess both the speaking and the singing voice. Correlations between those two still need to be examined. Important information might be added by a longitudinal study, as this study is a cross-sectional one.

Conclusion

This present study describes reliable data on the singing voice of children with the variables f0min, f0max, ∆f, SPLmin, SPLmax, ∆SPL, and MPT for the first time. Its validity is substantiated by its unmatched high number of measurements. Using the VRP as a tool to examine a child's voice at all ages seems feasible and reliable for gathering the vocal dimension of each subject. Thus, these data allow a valid appraisal of reference values for the given variables of the singing voice of mostly vocally inactive people. The different socioeconomic backgrounds present in society seem to have no effect on vocal parameters in children without voice disorders. With this gained normative data, professionals who deal with children's voices are able to distinguish between normal parameters of the singing voice and parameters that need further investigation, and may intervene early if necessary.
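As an illustrative aside on two computations referenced above, the semitone conversion and the age-dependent percentile curves: the study used an LMS-type GAMLSS model in R, for which the sketch below substitutes ordinary quantile regression on simulated data, purely to show the idea; all values and variable names are assumptions, not the study's data or code.

```python
import math
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def semitone_range(f_min_hz, f_max_hz):
    """Frequency range in equal-tempered semitones: 12 * log2(fmax/fmin)."""
    return 12.0 * math.log2(f_max_hz / f_min_hz)

# ~23.4 semitones for the reported mean extremes of 7-year-old boys
print(f"Delta f ~ {semitone_range(194.4, 750.9):.1f} semitones")

# Simulated MPT-like outcome rising with age, with individual spread
rng = np.random.default_rng(0)
age = rng.uniform(7.0, 17.0, 1000)
mpt = 7.0 + 0.55 * age + rng.normal(0.0, 2.0, 1000)
df = pd.DataFrame({"age": age, "mpt": mpt})

# One quantile-regression fit per percentile curve, quadratic in age
for q in (0.03, 0.50, 0.97):
    fit = smf.quantreg("mpt ~ age + I(age ** 2)", df).fit(q=q)
    at10 = fit.predict(pd.DataFrame({"age": [10.0]})).iloc[0]
    print(f"P{int(q * 100):02d} at age 10: {at10:.1f} s")
```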
Statement of Ethics All procedures performed in studies involving human participants are in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards and under the supervision of the Ethics Committee of the University of Leipzig (Reg. No. 264- . Informed written consent was provided by all parents and, from the age of 12 years, by the children themselves.
2021-02-03T06:20:11.711Z
2021-02-01T00:00:00.000
{ "year": 2021, "sha1": "e1e00f4d8e21a303ec8fc9133ec3be87927606d2", "oa_license": "CCBYNC", "oa_url": "https://www.karger.com/Article/Pdf/513521", "oa_status": "HYBRID", "pdf_src": "Karger", "pdf_hash": "560ca48850ea9dbcdbb2210f33d32778de68f081", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
210354463
pes2o/s2orc
v3-fos-license
Young People's Conceptions and Practices of Safety in Online Environments: An Examination of Challenges, Theoretical Perspectives, Current Research, Findings, and Potential Instructional Interventions

This ongoing research builds on investigations undertaken by Medina & Todd (2016, 2017) that focus on children's safety in online environments. As part of scholarly traditions centering on information and digital literacy, an emerging discourse and arena of research and service development is centering on the concept of digital wellbeing. Digital wellbeing is defined as the capacity of individuals to look after personal health, safety, relationships and work-life balance in digital settings. This paper focuses on the specific aspect of digital safety as one dimension of digital wellbeing, an area marked by attention from authorities, the development of government policies (Ofcom, 2017), and the proliferation of web-based online safety programs. In much of the discourse, safety is framed in terms of protection of children from potential online risks: web content, user generated content, sexual content, sexual messages (sexting), stranger contact, sexual exploitation and online grooming, cyberbullying, personal data misuse, and misrepresentation of identity in online environments (Livingstone, et al, 2012, 2015). The rise in public anxiety and alarm about both the capacity and extent to which the internet is putting children and young people at risk, and a level of moral panic over the impact of technology on childhood innocence, are further shaped by media portrayals of cases (Haddon & Görzig, 2018). For example, the ABC News headline of April 24, 2019: "16 Alleged child predators used social media to lure kids for sex throughout New Jersey"; the ITV report (January 4, 2017): "'Kayleigh's Love Story': Police release powerful film on dangers of online grooming" ( https://www.itv.com/news/2017-01-04/kayleighs-love-story-police-release-harrowing-video-ondangers-of-online-grooming/ ); and The Independent (March 15, 2019): "Young Children Can Easily See Disturbing Content On Youtube Despite Age Restrictions" ( https://www.independent.co.uk/life-style/gadgets-and-tech/youtube-kids-children-videos-age-restriction-peppa-pig-a8824261.html ). At the same time, there is criticism of the efficacy of school-based interventions that seek to develop digital practices in relation to digital safety. According to the US-based Crimes Against Children Research Center, half of the young people in the U.S. report receiving internet safety programs in schools, but little is known about what educational messages, if any, make a difference (Jones, Mitchell & Walshe, 2014). This review posed the question "Are effective prevention strategies being used?" It found that internet safety education does not incorporate proven educational strategies and is not founded on a strong understanding of how young people participate in digital social media, nor on an understanding of the practices they engage in to ensure their digital safety. They also indicated that the program materials they examined were directed toward elementary school aged children or focused on "digital literacy" topics such as privacy settings, online reputations, and avoiding e-scams (2014, p.2). For some years the school library community has embraced these concerns through the articulation of information and digital literacy programs.
This begs the question: are the information/digital literacy frameworks predominant in Library and Information Science practice an appropriate positioning for building our understanding of conceptions and practices in relation to children's safety in online environments? The American Library Association's digital-literacy task force offers this definition: "Digital literacy is the ability to use information and communication technologies to find, evaluate, create, and communicate information, requiring both cognitive and technical skills." http://www.ala.org/ala/acrl/acrlissues/acrlinfolit/infolitoverview/introtoinfolit/introinfolit.htm . In his review of the information literacy landscape, with a focus on developing a sustainable future for the information and digital literacy agenda, Todd (2017) highlighted a number of issues, which included: (1) decades of terminological confusion and power/authority/identity/territory struggles; (2) a plethora of understandings, descriptions and models of information and digital literacy/literacies; (3) hundreds of information literacy models based either on small-scale single research studies or on untested hypothetical models; (4) a similar pattern being emulated with digital literacy, without meta-analysis, with limited theorizing, theory development and pedagogical development; (5) little intellectual critique, and failure to make explicit the various theoretical stances that underpin standards, models, practices; (6) little exploration of what constitutes meaningful pedagogy for information and digital literacy instruction and interventions; and (7) limited substantive articulation of the impacts/benefits of information literacy agendas, beyond mastery of a range of information literacy skills.

[Figure: Plethora of information literacy models (image retrieved 8th July via Google image search: "information literacy models")]

[Figure: Growing plethora of digital literacy models (image retrieved 8th July via Google image search: "digital literacy models")]

In a recent critique of digital and media literacies, Hobbs (2017) argues that educational responses in terms of information and digital literacy are typically framed in terms of skills, often presented as a checklist of skills to be taught. She further argues that digital literacy education generally emphasizes the negative effects of media and attempts to use digital literacy education to mitigate those effects; in addition, digital literacy education tends to target a specific "problem" where a particular vulnerability to media messages is identified and a safety intervention is designed. According to Hobbs, the problem is compounded when researchers develop short-term, (often) grant-funded interventions and report on informal learning practices that involve children and youth who participate in digital media literacy programs or online communities. In terms of assessment and impacts, she notes a predominance of scaled self-report measures drawing on standards-based approaches to information and digital literacy, as well as performance-based measures, all determined by adults.
While there is a plethora of small-scale research studies involving school students in terms of digital competencies, digital literacy, information literacy, and digital citizenship, much of this work focuses on information handling skills development and the attainment of educational standards around a range of pre-determined digital skills, as well as the development of sets of instructional interventions to foster the development of these standards. One of the weaknesses identified here is that educational interventions tend to be skills-centric, without being positioned in a deeper understanding and evidence of the broader sociocultural landscape and its collective and institutional practices: where learners are at, their online experiences, and their own conceptions and practices surrounding safety in online environments. Indeed, it might be said that the skills around children's safety are assumed and driven by concerned adults, and the voices of children, in terms of their own understandings and practices, are largely absent. The problematic of safety in online environments is further compounded by multiple terms such as internet safety, media safety, online safety, digital safety, and cyber safety. Safety is rarely defined, and when it is, the definitions tend to be circular, such as "safety is about trying to be safe", without explication of what "safety" and "safe" are or articulation of underpinning assumptions. In these definitions there are implied notions of children being guided and protected by others, and of children not being capable of protecting themselves. "Protection" tends to be the dominant meaning. The boundaries of online safety are essentially an adult consensus about a range of risks, with a particular focus on risks to the exclusion of opportunities, and on the need to protect children from these risks. Often risks and harm are tied together, even though conceptions or evidence of harm are unclear, given the ethical aspects of measuring harm (Livingstone, et al, 2012). Overall, the scholarly and media discourses present substantive debates around who is responsible for empowering and protecting children online: government, educators, industry (web content and service providers), and families. These debates are also beginning to address several other aspects, such as the continuities between children's online and offline worlds: online activities are viewed as extensions and modifications of practices located in everyday life (Haraway, 1985; Chayko, 2016). Chayko argues that "digital life is simply real life", and that terms such as "virtual", "cyberspace", even "digital", are misleading in that they imply something almost, but not quite, real (p. 60). In addition, there are calls to reject over-celebratory and offensive notions of "digital natives" and "digital immigrants" (Prensky, 2001), and to reject technological determinism accounting for radical societal transformations due to technology (Alder, 2006). There are also calls to address the public anxiety and moral panic over the impact of technology on childhood innocence, to move beyond panicky accounts of the dangerous internet based on a small number of high-profile cases, and to understand that while safety is important, protection must be balanced against enabling children's rights, pleasures and opportunities, including opportunities for risk-taking.
Chayko (2016) states that in the midst of robotics, automation, and device immersion, we need to focus on a critical set of dynamics and realities around human agency (Chayko, 2016, p. 60). All of these discourses and debates speak to moving toward a more encompassing view of the child and human agency constructed around the notion of digital wellbeing. According to JISC (2017), digital wellbeing refers to the "capacity to look after personal health, safety, relationships and work-life balance in digital settings" (JISC, 2017). It gives attention to re-aligning technology with humanity's best interests, acting safely and responsibly in digital environments, using personal digital data for positive wellbeing benefits, using digital media to foster community actions and wellbeing, managing digital stress, workload and distraction, and acting with concern for the human and natural environment when using digital tools. These complexities raise the challenge of researchers engaging in child-centered research. In child-centered research, the concept of the child is considered to be socially constructed: "knowledge creation rather than knowledge extraction", heavily influenced by social interactions and conventions (Clark & Moss, 2011, p. 4; Berger & Luckmann, 1966). It recognizes that children should not be limited by dominant perceptions of them as not possessing the capabilities and competencies to be involved in research (Hart, 1992; James, 2007; Corsaro, 2014). It utilizes methods that "capture the nature of children's lives as lived" rather than studying their actions in contrived situations (Hogan & Greene, 2005, p. 3). It also entails an openness to the use of methods that are suited to children's level of understanding, knowledge, interests and particular location in the social world (Johnson et al, 2014), and it privileges children's voices and perspectives regarding their own experiences: "participative approaches and techniques" that position participants as "experts of their reality" (Hepworth et al., 2014). Livingstone et al.'s work (2012) is particularly significant here. This extensive study was a large-scale survey of 25,142 children aged 9-16 carried out in 25 European Union countries by the EU Kids Online Network (available at: http://www.lse.ac.uk/media-and-communications/research/research-projects/eu-kids-online). It involved 100 researchers from diverse disciplinary backgrounds who studied children's and parental perceptions in relation to: (1) how children use the internet - scoping children's internet use (amount, device, how, location of use); (2) what children do online - mapping of online activities (opportunities exploited, skills developed, risky practices engaged in); (3) what online factors shape their experience - opportunities/risks encountered such as positive content, user generated content, sexual content/messages, stranger contact, bullying, personal data misuse; and (4) identifying the outcomes for children - benefits/harm such as learning, self-esteem, sociality, values, in/exclusion, coping/resilience, being bothered/upset, abuse. The children were interviewed face-to-face to obtain responses and, for more sensitive questions, were given a questionnaire form to complete on their own. For each child, one parent/carer was given a questionnaire with matching questions. The findings are extensive.
They show the widespread extent of young people's substantive engagement in online social networking and their active engagement in building friendship circles, which involves the sharing of personal data but did not seem to make them especially vulnerable to data misuse. Children play "pretend" from childhood, and developmental theories indicate that adolescents do experiment with their identities and self-presentation; this happens to a limited extent in online environments and was less common than expected. Overall, there was no evidence that experimenting with self-presentation is associated with actually experiencing harm from online risks. Cyberbullying is an important risk, with 19% of the children indicating that they had experienced some form of bullying in the previous 12 months. Children who experience more psychological difficulties are more likely to be victims or perpetrators of cyberbullying. 15% of respondents aged 11-16 said they had seen or received sexual messages in the last 12 months, and some saw this as a form of electronically mediated flirtation, although the boundary between what is fun and what is coercive is not clear for children. 23% of respondents had encountered sexual images online and offline (including TV, video, film), mostly due to accidental pop-ups rather than deliberately seeking them. Overall, of those who had encountered such images, 32% were bothered by them, which translates into 4% of the total population of children; most said they coped well and got over the experience quickly. In relation to stranger danger, among the 9% who had offline meetings, most met with someone with a connection to a family member or friend. Older children, especially those who engage in online and offline risky behavior, are the ones more likely to go to offline meetings with complete strangers (Livingstone et al, 2012).

What is safety?

One of the key gaps in all of the digital safety literature, the missing link so to speak, is the absence of any explication of what safety is, and the absence of examining safety as a theoretical construct:

Is safety something you do or part of what you do? For example, drive safely.
Is safe something that you be? For example, I promise to be safe.
Is safety something you take? For example, take safety precautions.
Is safety something you ensure? For example, ensure the health and safety of others.
Is safety a place you go to? For example, the children were taken to safety.
Is safety a real thing or do you just feel it? It looks safe, or does it feel safe?
Is safety something you think or actually are? For example, I'm worried about my safety, but am I really safe here?
Is safety something that just exists when you aren't in danger? For example, the workplace is safe because it is hazard free.
What about when something is called "the safest" or "the safest way"? For example, is that a perception, has it worked before, is it based on fact and data, or just luck? (Quebec WHO)

As mentioned earlier, safety, in the context of discussions of digital safety, implies notions of children being guided and protected by others and of children not being capable of protecting themselves. The boundaries of online safety are essentially an adult consensus about a range of risks, with a particular focus on risks to the exclusion of opportunities. Theories and conceptions of safety are, however, explicated in a body of literature related to human resources, industrial safety, automobile and airline safety, and safety in health care environments.
Journals such as the Journal of Safety Research and Safety Science present a range of theoretical and empirical research, and a number of conceptualizations of safety can be identified.

Theories of Safety: 5 Conceptual Approaches

A broad review of the safety science literature identifies five conceptual approaches to thinking and theorizing about safety. These are briefly elaborated here:

1. Safety as defenses in depth (Reason, 1990, 1997; Vincent et al, 1998): this theory focuses on a systemic understanding of the organizational conditions that provoke human error, including systems safety, as well as the identification of gaps and inadequacies as a basis for reducing error.

2. High reliability theory and safety (Roberts & Rousseau, 1989): this theory has its origins in organizational sociology and focuses on how organizations can achieve consistent, failure-free performance over prolonged periods of time in the face of variable and demanding conditions. This conceptualization underpins much of the work on safety in health care environments.

3. System dynamics and safety (Amalberti, 2001): this theory seeks to depict the dynamic pressures that cause a system to migrate towards the boundaries of safe operations over time; it combines a dynamic systems view of safety and risk with a psychological appreciation of the behavioral drivers underlying violations.

4. Safety as collective mindfulness (Weick, Sutcliffe & Obstfeld, 1999): in this approach, mindfulness is characterized by a continuous effort involving all stakeholders to understand and update routines, procedures, perceptions, expectations and actions based on experience and foresight, to anticipate and become aware of the unexpected, and to have the means for containing the unexpected.

5. Safety as resilience (Hollnagel, Woods & Leveson, 2006): this approach draws on resilience theory in psychology and focuses on the process of adapting well in the face of adversity, trauma, tragedy, threats or even significant sources of risk; it gives attention to interventions based on developing resilience in problematic situations.

It is the latter two theoretical approaches that have salience in relation to children's conceptions and practices of safety in online environments (Condly, 2006). These approaches highlight the importance of moving from protectionist paradigms to empowerment paradigms, understanding diverse experiences, enabling deference to expertise including the voices of children as key stakeholders, developing individual and team alertness, and building flexibility and adaptability into the provision of interventions and solutions to safety. In the EU study (Livingstone, et al, 2011), it was concluded that in the daily lives of children, exposure to risks is part of everyday life, and digital safety interventions should focus on resilience - the development of positive patterns of adaptation in the context of risk or adversity - and coping - efforts to adapt to stress or other disturbances created by the stressor or adversity. This focus on coping and resilience actually emerged out of the EU data. In identifying how children responded to the various threats and risks, three categories of responses emerged, which highlight the importance of moving beyond protectionist approaches to empowerment approaches: (1) fatalistic response: ignore it, hope it goes away, limit use of the internet; (2) communicative response: seek social support and talk to someone (peers, parents),
the most predominant strategy identified in the study; and (3) proactive response: being adaptive, trying to reduce or eliminate harm in the future by deleting messages, deleting content, and blocking senders (e.g., in cyberbullying); a strong level of being upset especially fosters trying to fix the problem, and all of these proactive strategies improve resilience.

Current Research

The current research, undertaken by Medina (2019), builds on the two studies reported in Medina & Todd (2016a, 2016b). The first study, briefly summarized here, involved 148 students in Grades 5-10 in an international school in the Middle East. It utilized a self-reported response to 28 checklist items developed by the Open University UK titled "Being digital: Digital literacy skills checklist", available at: http://www.open.ac.uk/libraryservices/beingdigital/accessible/accessible-pdf-35-self-assessmentchecklist.pdf It also included open-ended questions on how the school library can help in relation to the development of digital literacy. It specifically identified helps needed in terms of: (1) research processes and effective reading in digital environments; (2) digital safety, personal safety, technical safety and managing technical disruptions; (3) intellectual property: citation, authority, copyright, information ethics; and (4) knowledge construction: information evaluation, organization, analysis and synthesis. In this study, students recognized the need to develop their own competencies in relation to staying safe online. The second study involved 425 students in Grades 5-10 in two schools in the Philippines. The students participating in this study took part in a series of regular library classes on the general theme of digital awareness and safety. They undertook a group mind mapping activity to map their collective understanding of unsafe websites and safety responses. In total, there were 38 groups with 5-12 students per group. They focused on specifying how they recognize whether a website is safe or not, and what are some of the actions they take to ensure they are safe in online environments. These students identified six types of unsafe websites: (1) sexual and violent content, (2) malware pop-ups and spam, (3) privacy and security issues, (4) technical errors/virus/auto downloads, (5) unsolicited sharing of problematic information on social media, and (6) search engines providing access to unsolicited sites. This study also indicated that students do have quite an extensive knowledge of the web environment. They had specific knowledge of technical terms such as Deep Web and Torrent, and specific knowledge of problematic web sites and malicious files, including pornographic sites. Their predominant conception of being unsafe online seemed to center on aspects of technical access, technical structures, and the potential for technical harm. There was limited acknowledgement of the role of self in the safety equation, such as stranger danger, establishing privacy boundaries, cyberbullying indicators, managing offensive posts, and dealing with problematic interactions and images. For these students, "unsafe" was predominantly seen as a system-generated problem, not as a personal-social-interaction problem. In their mind maps, they provided little explication of an active role of self in the digital environment. Emerging out of this study is an important need to understand more fully how young people conceptualize safety in digital environments, as well as the practices they engage in, if any, to stay safe in this landscape.
This is presented in Medina's current research (Medina, 2019).

Research Goals

This current study aimed to explore senior high school students' and school librarians' conceptions of digital safety, including their processes, actions, and practices, as they engage with the digital world. The study sought to respond to the following questions:

1. What do students think it means to be safe online? Sub-question: What do students do themselves to be safe online?
2. What do school librarians think it means to be safe online? Sub-question: What do school librarians do themselves to be safe online?
3. What existing library programs are implemented by school librarians in relation to digital safety?
4. How, if at all, do school librarians develop digital safety with students through library instruction?

Based on the findings from participants, this study also sought to create a digital safety plan that can support library instructional intervention and programs across curriculum-based schools.

Methodology

The study used qualitative and quantitative methods that sought to understand the conceptions of students and school librarians related to digital safety. A total of 50 students and 10 school librarians in Qatar participated in the study: students answered an online survey while school librarians responded through a structured interview. The online survey was administered through Google Survey with 24 self-report questions and one open-ended question, while the interview had 11 questions pertaining to their conceptions and practices around digital safety. An invitation was sent to schools using the public directory of schools available through the Qatar National Library website. The schools that confirmed through this invitation were asked to provide a school head's approval. The researchers visited these confirmed schools and gave an orientation about the study and the students' role. After securing the approval from the head, students needed to submit parents' consent signifying that their children were allowed to participate. Those students who completed the consent form with their parents' signature were also asked to provide their personal consent that they were willing to do an online survey. Those students who completed all approvals were allowed to proceed to the online survey. Table 1 shows the list of participating schools. For UK Curriculum 1 and 2, only school librarians completed the consent forms, while the rest were able to provide and complete the required documents.

Students' Digital Life

The data provide perspectives on students' experiences in digital environments, which are presented in terms of: frequency of Internet use (survey question 3), activities they do (survey question 1), how they spend their time online (survey question 2), and use of social media (survey questions 6, 7, and 8). Table 2 shows that students are actively engaged in using the Internet for non-academic related work. This also tells us that Internet use is part of everyday life for most students. Only a few students mentioned that they are not daily users of the Internet.

Activities That Students Engage In Online

As shown in Table 3, the majority of students are active online users for both academic and non-academic related activities. This shows us that their online interactions are part of their daily life.
Some of these center on downloading information, playing video games, updating profiles online, posting their activities, and chatting with others online. Table 4 shows the range of Internet activities that students engage in. These include surfing the web for non-school-related work, research for school, and chatting with someone on a chat site or instant messaging. The data show that students use the Internet for various activities related to their personal, academic and social needs. Table 5 shows that participants are active users of various social media platforms. Facebook seems to be the most popular among them, followed by Instagram and Twitter. Pinterest and WhatsApp appear to be the least popular. Evident from this data is that students use more than two social media platforms to stay connected online.

Concerns for Digital Safety

Regarding their interactions online, Table 6 shows that some students are "always" concerned about their safety online. A significant number of students indicate that they "sometimes" feel concerned about their online interaction. Only a small number of students report that they are "never" concerned about safety in their online engagement.

Online Practices

Table 7 shows what students have posted online. The majority of students share information about their real name and age or date of birth. More than a quarter indicate that they post their personal images or friends' images online.

Table 7. Information posted online (n=50), frequency (percentage):
- The city where you live: 37 (74%)
- The name of your school: 27 (54%)
- The names of any local cities: 20 (40%)
- Your cell phone number: 16 (32%)
- The names of local sports teams (including your school teams): 13 (26%)
- Your home address: 9 (18%)
- The name of a teacher: 3 (6%)

Sharing Information with Strangers

In terms of sharing information with strangers, Table 8 shows that more than half of the students share their real name. Almost half of them post their real age to unknown individuals. Only a small number share their images, cell phone number, and their local cities with strangers. Also, only a small number of them share their home address or a teacher's name with people they have never met. This is consistent with the findings reported earlier that students use the Internet to interact with different people.

Table 8. Information shared with strangers (n=50), frequency (percentage):
- Images (photos or videos) of friends: 10 (20%)
- Your cell phone number: 10 (20%)
- The names of any local cities: 10 (20%)
- The name of your school: 9 (18%)
- The names of local sports teams (including your school teams): 5 (10%)
- Your home address: 4 (8%)
- The name of a teacher: 4 (8%)
- Links: 1 (2%)

Students were also asked about what they consider when they post online. Almost all indicate that their home address should be kept confidential. Additionally, a large number of participants believe that their mobile number is also private information. They think that the city in which they live, the name of their school, and personal information could be unsafe to post. This highlights an important issue regarding Internet safety practices: students still need proper guidance in their online interactions and practices. Table 9 shows students' views on anonymity and their representation of their online identities. A significant number of students indicate that "It is okay for people to log on anonymously". Almost half report that "It is not okay for people to do any of the above". These responses suggest that anonymity, and for some students the creation of fake identities, is an accepted practice when interacting online.
Table 9. Online Identities and Practices (n=50), frequency (percentage):
- It is okay for people to log on anonymously: 31 (62%)
- It is okay for people to create a fake identity: 11 (22%)
- It is okay for people to log on as someone older: 6 (12%)
- It is okay for people to log on as someone younger: 1 (2%)
- It is okay for people to log on as a different gender: 5 (10%)
- It is NOT okay for people to do any of the above: 20 (40%)

Further Commentaries

The last open-ended question sought to determine students' insights about what kind of help they need from their teachers/librarians related to digital safety. Three themes were identified:

Practical Tips and Advice. Almost half of participants expressed that they need support in terms of tips and advice around Internet safety. These include identifying dangerous sites, strategic use of information searching, learning online etiquette, recognizing online threats, determining online warning signs, providing practical activities on digital safety, setting an acceptable password, and listing "dos and don'ts" when online. Some individual comments include: "They can recommend some tips on how to be aware of the possible threats that can be found while surfing the internet" (P15); and "by giving us warnings" (P17).

Instructional Support. Thirteen students emphasized that they need classroom-based support to help them be safe online. They suggest that teachers and librarians provide lessons, workshops, counselling, and activities about digital safety. Some illustrative examples of their comments include: "could present some seminars and workshops" (P38); "Teachers and librarians can help by lecturing students about internet etiquette" (P40); "Teaching us how to avoid certain situations that could lead to harmful websites" (P47); and "By having quarterly counseling about the status online" (P47).

Setting Boundaries and Restrictions. Seven participants believe that putting restrictions or limited access on various websites inside the school could be an excellent way to help them stay safe online. Some recommendations involve putting in place parental controls, monitoring programs, blocking adult websites, and allowing only useful and educational websites. Some illustrative examples of their comments include: "Set up restrictions on certain websites that could only be accessed by students" (P31); "They can provide for parent's (access) ONLY" (P32); "Monitoring potential dangerous online behavior by blacklisting sites where fraud and posers are prevalent" (P34); and "Library computers should be only used for school purposes only" (P36).

Interviews with School Librarians

School Librarians' Conceptions of Digital Safety

The majority of the school librarians recognize the significant role of digital safety as part of students' learning development. Four of them believe that Internet safety plays a role in protecting students' online identities and personal privacy. Some of the comments include: "Digital safety is important because with the vast information coming from the internet or cyber space, there is the GIGO or 'garbage in - garbage out'. And we don't like our students to obtain unreliable information and/or become victims of false information from unsafe websites" (P3); and "It is critical for all users to practice digital safety to protect their and their family's personal information / wellbeing.
Practicing good habits early on will become invaluable later in life" (P4).

Existing Library Programs Used by School Librarians in Relation to Digital Safety

The study also aimed to identify any existing library activity related to digital safety. It was found that six librarians do not implement any activities or programs that support digital safety initiatives. One librarian commented that they have a "filtering activity" which is facilitated by the IT department. One librarian mentioned that they conduct information literacy sessions to teach important points on digital safety. One librarian from a French school reported that digital safety is part of their curriculum as mandated by the Media Education program in partnership with the French government. Some of the comments include: "We have library lectures pertaining to internet use, identifying reliable websites, and the importance of plagiarism" (P3); and "In collaboration with the Spanish teacher who created a brochure on Internet safety, we work together to promote the importance of this. We follow the guidelines provided by the government, particularly on Media Education" (P6).

Resources Used by School Librarians to Teach Digital Safety

The findings show that more than half of the school librarians have not used any resources to facilitate digital safety instruction. One librarian mentioned "Google Scholar" as their reference in teaching online safety. One, however, highlighted collaboration with the IT department to provide the skills and resources on technology. Only one librarian implemented a structured digital safety curriculum with the guidance and support of resources from the French government. Overall, digital safety is not a priority among the school librarians who participated in the study, and it seems to be a new responsibility that must be taken into consideration in the field of school librarianship.

Moving Forward: Research Opportunities and Professional Practice

With increasingly younger children using the Internet on their own, and substantial uptake by schools in terms of pedagogical applications and learning outcomes, there is a growing need for ongoing research that examines not just the risks and opportunities children face on the web, but also delves into their complex thinking and practices in relation to safety in online environments. Such information is critical for developing safe information systems, empowering children to be proactive in their own safety, and enabling educators to frame instructional interventions that nurture thinking about and practicing digital safety. This calls for a deeper exploration of child-centered approaches to research. It is clear that children are deeply enmeshed in the online environment, and researchers and educators must recognize the central importance of learning from them in order to help them develop information practices around safety that are sustainable and durable. This is a research, educational, and social justice challenge. It calls for synergies of theoretical frameworks, methodologies, practices, applications and interventions that contribute to resilience and coping. Researchers are called to develop more child-centered approaches to data collection, and to use these as ways to enable school librarians to develop evidence-based approaches in their daily practice.
These might include approaches such as child-as-expert conversations; interviews/conversations on risks and opportunities; measures of beliefs about internet knowledge, such as objective knowledge (actual knowledge) and subjective knowledge (perceived knowledge); parallel interviews with children and parents/carers; descriptions of individual experiences/cases/critical incidents (such as the Critical Incident Technique; Flanagan, 1954); the use of drawings (Merriman & Guerin, 2006); photovoice and screen capture to capture moment-in-time experiences through photos and screen images; and child-expert responses to hypothetical scenarios. These approaches move beyond the typical checklists of digital skills/competencies, self-reporting of digital skills, or measures of ability to perform specific skills. There is also potential for engaging children in the analysis and synthesis of their data, both as an elaborative and a confirmatory approach to the data. The current research provides some useful indicators for educational interventions. It is critical that interventions are not based on exaggerations of the nature and scale of risks, but rather focus on the development of coping and resilience strategies. There is some evidence that school librarians need to engage more actively in education for digital safety that goes beyond digital skills checklists and teaching to such lists. There is a need to empower children to cope, to provide advice to parents on how to mediate, and to ensure school websites contain appropriate positive support and guidance, not just technical blockages. Educators are challenged to avoid top-down interventionist approaches, which tend to be negative and to ascribe blame and fear (this is akin to bullying tactics). It is important to develop active strategies that equip children to manage online risks themselves insofar as they are able and it is practical to do so. Educators have a role in enabling children to craft meaningful profiles and to establish what constitutes a good profile. Attention should be given to building resilience, coping and self-efficacy, including through developing awareness of self-help resources that build understanding and provide proactive strategies that do not overdramatize the risks. This might include access to anonymous help lines where children can discuss their issues in anonymity and privacy. This is also about the library being a safe and trusted place. In all of these approaches, there is a need to ensure that strategies show the continuity and integration of online and offline experiences. The digital safety agenda also challenges educators to open up communication avenues that create opportunities to seek social support and talk to someone (peers, parents, teachers). Building trust is important. According to the Livingstone et al. (2012) study, when encountering different risks, children are not usually likely to talk to a teacher - the data showed they identified a friend, mother/father, brother/sister, and a trusted adult over a teacher. Teachers were not the ones children trusted for support when they were upset about something related to the internet. Collectively, such approaches and initiatives amount to rethinking digital safety in terms of theories of collective mindfulness and resilience, rather than decontextualized sets of skills.

Biographical Note

Dr Ross Todd is associate professor in the School of Communication and Information at Rutgers, the State University of New Jersey.
He is Director of the Center for International Scholarship in School Libraries (CISSL) at Rutgers University. His scholarly work primarily focuses on the engagement of people with their information worlds, and on understanding how this engagement can facilitate professional action and change, and make a difference to individuals, organizations, societies and nations. Current research interests center on adolescent information seeking and use, with particular emphasis on young people's conceptions and practices in relation to digital safety in online environments. Virgilio G. Medina Jr is Librarian at the Qatar National Library, Doha, Qatar. He recently completed his Master of Arts in Library and Information Studies at University College London (Qatar Campus). He worked in Middle East countries as a school librarian prior to his appointment at the Qatar National Library. His professional and research interests center on young people's engagement with learning technologies, young adult services, and information and digital literacies, including aspects of digital wellbeing and digital safety.
2019-10-10T09:16:25.816Z
2019-10-08T00:00:00.000
{ "year": 2019, "sha1": "a4bbee035e3b984f40f676041cf8362dfe3a03f1", "oa_license": "CCBYNC", "oa_url": "https://journals.library.ualberta.ca/slw/index.php/iasl/article/download/7377/4289", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "277fadb6a13c3f8a83e639148023be960d84ede2", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Sociology" ] }
49526611
pes2o/s2orc
v3-fos-license
A Prospective Pilot Study of the Biometrics of Critical Care Practitioners during Live Patient Care using a Wearable "Smart Shirt"

Objective: To measure the biometrics (heart rate, heart rate variability) of critical care physicians during live clinical patient scenarios. Design: Participants wore the Hexoskin biometric smart shirt (Hexoskin, Carré Technologies, Montreal, Quebec) during live clinical activities. Setting: 24-bed tertiary care children's hospital pediatric intensive care unit. Subjects: Pediatric critical care attendings and fellows. Interventions: Heart rate (HR), respiratory rate (RR), and heart rate variability (HRV) were recorded during clinical shifts. Activities included subject baseline (SB), patient rounds (PR), tracheal intubation (TI), and central line insertion (CL). Measurements and main results: Mean HR for the activities SB, PR, TI, and CL were 81 ± 3.65, 85 ± 4.75, 99 ± 10.83, and 108 ± 8.97 beats per min, respectively. Mean standard deviation dispersion perpendicular and along the axis of identity (SD1/SD2) were 0.244 ± 0.038, 0.220 ± 0.022, 0.180 ± 0.050, and 0.167 ± 0.015, respectively. P values for mean HR, max HR, and HRV were significant when comparing SB with TI (0.010, 0.027, and 0.001) and CL (0.007, 0.001, and 0.012) but not when comparing with PR (0.026, 0.125, and 0.321). Comparison of SD1/SD2 for TI versus CL showed no statistical significance, P=0.578. Poincaré plots confirmed the similar patterns of physiologic activation. Subject baseline and PR plots were fan-shaped, suggesting primary parasympathetic input. TI and CL plots were torpedo-shaped, suggesting sympathetic activation. Conclusion: Study of the biometrics of physicians as they deliver real-time critical patient care is feasible using wearable technology. Critical care activities requiring not only thought, focus, and planning but also the physical execution of technical skills, such as TI or CL insertion, resulted in higher levels of sympathetic activation. Further study of physicians from various specialties and different levels of experience, the use of stress mitigation techniques, and correlation with procedural success or failure is warranted.

Introduction

Biometrics, the science of using measurable characteristics to describe individuals, is often grouped into physiologic or behavioral categories [1,2]. Examples include fingerprints and retinal vessel patterns, which are distinctive to an individual. There is growing interest, however, in measurement of physiologic biometrics, such as heart rate (HR), with regard to health and wellness. Continuous measurement of patient vital signs is standard practice and has led to improvements in morbidity and mortality. In the developing world, as many as 50-80% of children with septic shock die. In developed countries with pediatric intensive care units (PICU), that figure is as low as 13.9% [3]. Most health professionals, while good at monitoring patients, are ignorant of their own biometrics while delivering care under stressful conditions.
Until recently, measurement of HR, respiration rate (RR), and heart rate variability (HRV) required cabled monitors in a physiology laboratory or the use of the bulky, portable Holter apparatus. This made study of live conditions challenging. Devices like the Fitbit (San Francisco, CA), Jawbone (San Francisco, CA), and Nike Fuelband (Beaverton, OR) are sold as a way to lose weight and improve wellness, and employees are incentivized with lower insurance premiums for compliance. Roughly 3.3 million fitness bands/trackers were sold between April 2013 and March 2014 [4]. This study examines wearable-technology biometrics recorded during live clinical scenarios as a measure of physician stress. We hypothesize it is feasible to measure biometric changes in critical care physicians as they deliver care to patients.

Population

The protocols and methods for this study were approved by the Nemours/Alfred I. duPont Hospital for Children Institutional Review Board (project number 780981). In total, one critical care attending (male), two senior fellows (both male), and two junior fellows (one male, one female) participated. There were no exclusion criteria, as this was a voluntary, prospective, observational study.

Study design

The Hexoskin Smart Shirt (Hexoskin, Carré Technologies, Montreal, Quebec) is marketed as the most advanced biometric shirt available, measuring more body metrics than any other wearable technology product and with greater precision. Hexoskin contains two respiratory loops and three cardiac dry textile electrodes. Its integrated activity sensor, respiratory sensor, and heart sensor measure data in real time and record for up to 14 h via a small (40 g) "brain" contained in a pocket pouch. Data synchronize with a phone application and cloud storage and can be analyzed through a secure website. The electrocardiographic sensor is a one-channel, 256 Hz detector covering heart rates from 30 to 220 beats per min (bpm), making it suitable for HRV analysis. Acceleration and activity level, as well as step counting, are standard with each device. Energy expenditure in the form of kilocalories is calculated. Breathing rate, minute ventilation, and a calculation of oxygen consumption are also reported. Finally, the device can also measure inactivity and sleep parameters by tracking total sleep duration, sleep position changes, time asleep in each position, and an estimation of sleep efficiency. The shirt demonstrated low variability, good agreement, and consistency of data in scientific studies [5-7]. Study subjects wore the Hexoskin Smart Shirt during PICU shifts after establishing baseline rest measurements. Heart rate, RR, and HRV were recorded as opportunities arose during shifts. The PICU activities included patient rounds (PR), tracheal intubation (TI), and central line (CL) insertion. Data were time marked via the paired Bluetooth application and then electronically encrypted, password protected, and stored on the Hexoskin website.
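As a concrete illustration of the first processing step such a device implies, namely turning the one-channel, 256 Hz ECG trace into a series of RR intervals for HRV analysis, the following Python sketch uses a simple amplitude threshold for R-peak detection. It is a minimal illustration, not the Hexoskin/VivoSense pipeline; the threshold and refractory period are assumptions chosen only to respect the device's stated 30-220 bpm range.

```python
# Minimal sketch: raw single-channel ECG (256 Hz) -> RR intervals (ms).
# The crude amplitude threshold is illustrative; production R-peak
# detectors (e.g., Pan-Tompkins) are far more robust to motion artifact.
import numpy as np
from scipy.signal import find_peaks

FS = 256  # Hz, the Hexoskin ECG sampling rate

def rr_intervals(ecg: np.ndarray) -> np.ndarray:
    """Return RR intervals in milliseconds from a raw ECG trace."""
    # A ~270 ms refractory period caps detectable HR near 220 bpm,
    # consistent with the device's stated 30-220 bpm range.
    min_gap = int(0.27 * FS)
    threshold = np.mean(ecg) + 2 * np.std(ecg)  # crude R-wave threshold
    peaks, _ = find_peaks(ecg, height=threshold, distance=min_gap)
    return np.diff(peaks) / FS * 1000.0  # ms between successive R peaks
```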
Heart rate variability

Heart rate variability comprises the beat-to-beat changes manifested through regulation by the autonomic nervous system, temporal changes, and respiratory variation. Standard deviation (SD) 1 is the dispersion of points perpendicular to the axis of the line of identity and represents instantaneous beat-to-beat variability. Standard deviation 2 is the dispersion of points along the axis of the line of identity and represents continuous beat-to-beat variability [8-10]. Standard deviation 1 determines the width of the ellipse (short-term variability), whereas SD2 equals the length of the ellipse (long-term variability) [11]. The SD1/SD2 ratio represents the randomness in the HRV time series [11,12]. In 1996, the European Society of Cardiology and the North American Society of Pacing and Electrophysiology created the HRV standards, which VivoSense (Vivonoetics, San Diego, CA) software utilizes to obtain its values. Poincaré plots provide a graphical representation of HRV by plotting each R-R interval (n+1, along the Y-axis) against the previous R-R interval (n, along the X-axis). A line of identity is drawn through the plot, with SD1 representing the dispersion of points perpendicular to that line and SD2 representing points along the line. A long, slender "torpedo" shape is representative of sympathetic activity; a fan or "comet" shape represents a balance between parasympathetic and sympathetic activity. Calculation of SD1/SD2 ratios provided numerical quantitation of parasympathetic/sympathetic activity.

Statistical analysis

A paired t test was performed on maximum HR, average HR, absolute and percentage change in HR, and HRV (using SD1/SD2) as markers of physiologic stress. Heart rate maximum was calculated using the Tanaka formula. Heart rate reserve, defined as the difference between maximum possible HR and resting HR, provides insight as to the intensity of the activity. Calculations compared subject baseline (relaxed while at home) with activities recorded during PICU work. VivoSense software generated HRV calculations and analysis. Each recording was analyzed for artifact with high sensitivity and low interpolation. All sessions had greater than 95% quality based on validated software metrics. Individuals were compared with themselves to eliminate composition and demographic differences.

Participant demographics

Table 1 presents the summary demographic data for each participant. Average age was 34.8 ± 5.03 years, weight 90 ± 15.28 kg, and height 1.74 ± 0.06 m. Body mass index was similar among all males (mean 29.6), with the female fellow's being substantially less (23.1).

Primary determinants

Mean, minimum, and maximum HR; SD; and absolute and percentage change in HR were calculated for participants in each activity. These data are presented in Table 2. Resting baseline HR average for all participants was 81 ± 3.65 bpm. Heart rates ranged from a minimum of 60 bpm to a maximum of 101 bpm. Despite differences in age, sex, and body composition, resting baseline was similar for all participants. Due to the small sample size, there was no analysis to determine if level of athletic fitness correlated with HR averages. During PRs, participants obtained recordings ranging in duration from 49 min, 51 s to 3 h, 23 min, 56 s. Average HR during rounds was 85 ± 4.75 bpm. Heart rate ranged from 57 to 118 bpm. There was no statistically significant difference when comparing mean baseline HR with PR (P=0.259). On average, the HR of participants increased 4.4 bpm (5.2%). There was also no statistically significant difference when comparing resting baseline HR maximum with PR HR maximum (P=0.126). Of note, the average and maximum HR for the third-year fellow were greater than for the remainder of the group.
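To make the quantities used in the analysis concrete, here is a minimal Python sketch of the Poincaré descriptors SD1 and SD2 computed from an RR series, together with the Tanaka maximum heart rate and heart rate reserve quoted with Table 1, and a paired t test of the kind used for the baseline-versus-activity comparisons. All numbers are placeholders, not study data.

```python
import numpy as np
from scipy.stats import ttest_rel

def poincare_sd1_sd2(rr):
    """SD1/SD2 from successive RR intervals (ms): plot of RR(n) vs RR(n+1)."""
    x, y = rr[:-1], rr[1:]
    sd1 = np.std((y - x) / np.sqrt(2), ddof=1)  # width: perpendicular to identity line
    sd2 = np.std((y + x) / np.sqrt(2), ddof=1)  # length: along the identity line
    return sd1, sd2

def tanaka_max_hr(age_years):
    return 208 - 0.7 * age_years  # Tanaka formula, as quoted with Table 1

rr = np.array([820.0, 805.0, 790.0, 810.0, 835.0, 800.0, 815.0])  # placeholder RRs
sd1, sd2 = poincare_sd1_sd2(rr)
print(f"SD1/SD2 = {sd1 / sd2:.3f}")  # lower ratios accompany sympathetic activation

hr_max = tanaka_max_hr(35)
print(f"HRmax at 35 y: {hr_max:.1f} bpm; HRR at resting HR 81: {hr_max - 81:.1f} bpm")

# Paired t test comparing each subject with themselves, baseline vs. activity
# (placeholder per-subject mean HRs):
baseline_hr = np.array([79.0, 84.0, 82.0, 78.0, 81.0])
activity_hr = np.array([96.0, 112.0, 99.0, 93.0, 101.0])
t_stat, p_value = ttest_rel(activity_hr, baseline_hr)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
```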
For the TI activity, participants obtained recordings ranging in duration from 9 min, 2 s to 27 min, 47 s. Average HR during intubation was 99 ± 10.83 bpm. Heart rate ranged from 77 to 147 bpm. There was a statistically significant difference when comparing mean baseline HR to TI (P=0.001). On average, the HR of participants increased 19.8 bpm (24.4%). There was also a statistically significant difference when comparing baseline HR maximum to TI HR maximum (P=0.027). During CL placement, participants obtained recordings ranging in duration from 14 min, 9 s to 40 min, 53 s. Average HR during CL placement was 108 ± 8.97 bpm. Heart rate ranged from 86 to 138 bpm. There was a statistically significant difference when comparing mean baseline HR to CL placement HR (P=0.007). On average, the HR of participants increased 27 bpm (34.3%). There was also a statistically significant difference when comparing baseline HR maximum to CL placement HR maximum (P=0.001). When comparing TI to CL placement, mean HR was 101 and 108, respectively. The greatest rise in HR among all tasks performed was seen during CL insertion (34.4%). However, statistically, there was no significance between these two activities, with a P-value of 0.277 for the mean HR and 1.0 for the maximum HR. Although the CL placement activity produced higher mean, maximum, and change in HR, statistically, these activities could not be differentiated.

Heart rate variability

Data reporting HRV parameters are summarized in Table 3. When comparing SD1/SD2 ratios, the baseline mean was 0.244 ± 0.038. The patient rounds mean was 0.220 ± 0.022. There was no statistically significant difference between these activities (P=0.321). The SD1/SD2 mean for intubation was 0.180 ± 0.050 (P=0.001). The SD1/SD2 for CL placement was 0.167 ± 0.015 (P=0.012). When comparing SD1/SD2 for TI with CL placement, there was also no statistical significance (P=0.578). The SD1/SD2 data followed the same pattern as the HR mean and maximum. Table 4 provides a summary of P values for comparison of the mean HR, maximum HR, and SD1/SD2 ratios during the measured activities to baseline readings. Supplemental Figure 1 depicts the Poincaré plots for each participant and each activity; this is the graphical representation of SD1 and SD2. As stated previously, the wider the spread of data points, the more parasympathetic activity; the narrower the plot, the greater the sympathetic activity. For each participant, baseline and PR plots were fan- or comet-like, suggesting a greater degree of parasympathetic input and less overall stress on the participant. This agrees with the SD1/SD2 values for each activity. Endotracheal intubation and CL placement plots were much more torpedo-shaped, suggesting a greater degree of sympathetic input (Figure 1).

Discussion

In this project, using a "smart shirt," the authors captured the biometric parameters of pediatric critical care physicians while caring for live patients. Although there have been a handful of studies in a simulated environment, this is the first to quantify the biometrics of physicians in real time while caring for PICU patients [13-16]. The mobile technology associated with this smart shirt allowed the physicians to accurately record their own vital signs while they delivered care.
It is stressful to care for critically ill patients in the PICU. By comparison, studies of police work have shown that officers suffer both physical and psychological job stress. Chronic stress and over-production of cortisol have been linked to the reduction of lymphocytes and suggested as a culprit for the high rate of hospital admissions found in the police population [17,18]. A meta-analysis of over 300 articles by Segerstrom and Miller cited that stressors with short temporal parameters elicit potentially beneficial changes in the immune system (the fight-or-flight response); however, as stressors become more chronic, more components of the immune system are affected in a detrimental way [19]. Job stress is also associated with physician burnout. A survey of 6,880 physicians by the American Medical Association and Mayo Clinic evaluated the prevalence of burnout between 2011 and 2014 [20]. Burnout rates were higher for all specialties in 2014, with nearly a dozen specialties experiencing more than a 10% increase. Reasons cited included poor hours, low pay, stressful conditions, work-life balance inequity, excessive paperwork, and regulatory mandates. Self-analysis of biometrics cannot remedy all of these factors but could mitigate the effects of stress on practitioners. In the mid-1950s, Selye described general adaptation syndrome as a physiologic explanation of the body's reaction to stress within a three-phase model: alarm phase, resistance phase, and exhaustion phase [21]. This cascade of events activates the hypothalamic-pituitary-adrenal axis, increasing cortisol and subsequently causing release of norepinephrine and epinephrine, leading to a reflexive increase in HR and RR [22,23]. During insertion of a central venous catheter or breathing tube in an ill pediatric patient, there were statistically significant increases in both mean HR and maximum HR. This phenomenon was expected, as the procedural skills seen in pediatric critical care carry with them certain expectations. The patients are ill and often unstable, with smaller target blood vessels than adults. Peripheral access in children is often difficult to achieve and maintain. The trachea of an infant or child is anatomically different from that of adults; it is smaller, more anteriorly located, and cone-shaped. Children have a larger tongue, relative to mouth size, and an omega-shaped epiglottis that are difficult to control [24,25]. Additionally, a shorter time to desaturation in children, due to a smaller functional residual capacity, makes TI a time-sensitive procedure. Patient rounding, by comparison, is a more cerebral exercise that often affords time to make and change decisions. In an attempt to quantify which of these two procedures was more stressful, a comparison was made based on HR variables alone. We expected that TI would be more stressful than CL, but we did not find a statistically significant difference. However, CL insertion did show a 34% increase over baseline HR compared with only 24% for intubation. Perhaps because CL placement is a longer, multi-step sterile process requiring hand-eye coordination and ultrasound guidance, and because it comes with the risk of vessel injury and arrhythmia, some find it more stressful.
This study also employed HRV as a more accurate method of stress analysis in human subjects. In the 1960s, Hon and Lee [26] and Wolf [27] described HRV as paramount in our understanding of the interplay between stress and physiology. Decreased variability is used as an outcome marker after myocardial infarction, in diabetic neuropathy, and in quantifying the degree of mental stress. Exercise physiologists have used HRV to optimize training and recovery for athletes [28]. Standard techniques of HRV include linear methods, time- or frequency-domain analysis, and geometric methods. Time-domain methods are based on the beat-to-beat intervals and the subsequent SD of those intervals. Frequency-domain methods (power spectral density) assign bands of frequency and count the number of intervals that match or fall into each segment. Time and frequency analysis is limited by assumptions made about the data, such as ectopic beats leading to skewed data, and must be standardized to avoid using recordings of differing durations. Recent data suggest that linear HRV calculations fail to capture upwards of 85% of a subject's HRV, calling their validity into question [29]. Exercise science suggests that geometric or non-linear Poincaré plots are surrogates for time- and frequency-domain analysis in assessing HRV. They provide better repeatability and reliability with smaller random error. They may be more suitable for diagnostic purposes and for assessing individual treatment effect. Also, despite computational challenges, the geometric analysis of Poincaré plots and SD1/SD2 ratios was more accurate [28,30]. Baseline and PR activities by Poincaré plot were fan-shaped, with lower sympathetic tone, when compared with the torpedo-shaped TI and CL plots. Comparison of SD1/SD2 ratios between subject baseline and the individual activities confirmed the findings shown in the simpler HR analysis. Rounding activities were comparable to resting readings. However, both TI and CL insertion SD ratios indicated statistically significant differences with a high degree of sympathetic activation. Again, this may reflect what is at stake, or at least the perceived stakes, for a physician performing a challenging procedure. Questions not answered in this study are the focus of a planned multicenter collaboration. Comparison of HRV measurements between various levels of training within critical care and among different specialties with similar scope of practice (emergency medicine and anesthesia) is planned. At this time, it is unclear if there is a reduction in stress levels associated with training and experience. Some postulate that being unaware of potential consequences in a stressful situation allows the subject to work more freely, unencumbered by anxiety. An interesting survey of medical students found that dealing with the subject of death was particularly stressful. Students reported sometimes coping with alcohol. However, as they progressed in school (one would assume resulting in more experience), they reported a 50% increase in their alcohol intake [31]. Perhaps knowing potential consequences results in more, not less, sympathetic activation, explaining the higher attending response seen during intubation in this study.
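One way to see the connection between the linear and geometric descriptors discussed above is the standard identity SD1 = RMSSD/√2 (exact when the mean successive RR difference is zero), which ties the Poincaré ellipse width directly to the time-domain RMSSD. A brief sketch, reusing the placeholder RR series from the earlier example:

```python
import numpy as np

def rmssd(rr):
    """Time-domain RMSSD (ms) of an RR series."""
    return np.sqrt(np.mean(np.diff(rr) ** 2))

rr = np.array([820.0, 805.0, 790.0, 810.0, 835.0, 800.0, 815.0])
d = np.diff(rr)
sd1 = np.std(d, ddof=0) / np.sqrt(2)  # Poincaré width (population form)
print(f"SD1 = {sd1:.2f} ms, RMSSD/sqrt(2) = {rmssd(rr) / np.sqrt(2):.2f} ms")
# The two coincide when the mean of d is ~0; here they differ only slightly.
```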
The small number of subjects and the inability to compare between training levels limit this study. Other limitations of the study involve the Hexoskin Smart Shirt itself. Reliable readings for HR require the shirt to fit tightly against the skin. In addition, elastic straps designed to adhere the chest and abdominal sensors directly to the skin should be worn to reduce motion artifact. Finally, since it was difficult to predict exactly when a procedure would occur, ultrasonic gel was applied to the leads to increase conductivity. A limitation that, in fact, enhanced the study involved the Hexoskin's 14 h battery life. Rather than record an entire 15 to 24 h shift, the project focused on shorter discrete activities. This prevented sifting through many hours of data, much of which was non-stressful. While this study demonstrates it is possible to quantitate physician biometrics, the true practice implications are unclear. Studies of physician stress are evolving, but the impact on burnout, career longevity, and physician general health is unknown. Comparisons between chronic and acute stressors, call shift variation, patient complexity, and census volume all need to be considered.

Conclusion

Critical care activities requiring not only thought, focus, and planning, but also the physical execution of technical skills such as TI or CL insertion, resulted in higher levels of sympathetic activation and were more stressful. Further study of real-time critical care activities in practitioners with various levels of experience, the use of stress mitigation techniques, and correlation with procedural success or failure is warranted.

Table 1. Baseline characteristics of subjects (*Max HR is 208 - 0.7 × age (years); **HRR is max heart rate minus resting heart rate (baseline HR was used to determine resting HR)).

Table 2. Summary of measured activities (mean HR shown in bold type, followed by range and standard deviation in parentheses; percentage change in mean HR from resting state shown in parentheses after the absolute change in heart rate).

Table 3. Summary of heart rate variability.

Table 4. Summary of P values for each activity when compared to the baseline resting state.
2018-06-29T00:52:31.565Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "e6b36255410c46944a9bdef86527568559ce2098", "oa_license": "CCBY", "oa_url": "http://criticalcare.imedpub.com/a-prospective-pilot-study-of-the-biometrics-of-critical-care-practitioners-during-live-patient-care-using-a-wearable-smart-shirt.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "e6b36255410c46944a9bdef86527568559ce2098", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
232046801
pes2o/s2orc
v3-fos-license
Usual Presentation Has Odds: Unilateral Tibial Hemimelia in One of Dizygotic Twins

Tibial hemimelia is a relatively rare congenital tibial longitudinal deficiency (approximately 1 per 1 million live births), unilateral or bilateral, with a relatively intact fibula. Hemimelia results from a disruption of the lower limb developmental field during embryogenesis, due to slowing or even abortion of the chondrification process, which results in leg length discrepancy. The affected leg commonly appears short and deformed, with knee, ankle, and foot involvement. It may present with a variety of associated anomalies. Surgical treatment varies according to the type and degree of deformity, and reconstructive interventions are still limited. Reported cases of tibial hemimelia are very infrequent, especially tibial hemimelia in twins. Usually, the cases occur in a single embryo or, less frequently, in one of a pair of monozygotic twins; there are no reported cases of tibial hemimelia in one of a pair of dizygotic twins, as this article reports.

Introduction

There is a wide range of congenital long-bone anomalies, including amelia and hemimelia. Amelia is defined as an absence of the entire extremity, while hemimelia is the term used to describe a partial or complete congenital absence of a limb's distal half, as in radial, ulnar, tibial, and fibular hemimelia. Complete hemimelia is an extremity deficiency from the elbow or knee level with an absence of the distal elements. Paraxial (longitudinal) hemimelia is a deficiency of one of the forearm or leg bones extending into the extremity's most distal parts [1]. Tibial hemimelia is a rare congenital anomaly characterized by longitudinal insufficiency of the tibia with a comparatively intact fibula. The occurrence of congenital deficiency of the tibia is approximately 1 per 1 million live births [2-7]. The percentage of monozygotic twins is about 0.8%; therefore, the occurrence of tibial hemimelia in monozygotic twins is 1 case per 125 million [4]. The first reported case of tibial hemimelia was in 1861 [3]. Tibial hemimelia commonly appears as a short and deformed leg with knee and ankle involvement, and the foot will be in an abnormal position. The precise etiology of tibial hemimelia is still unclear. However, families with possible autosomal dominant or autosomal recessive inheritance have been reported in the literature [8], but there is still no clear evidence of X-linked inheritance [9]. At Hospital Universiti Sains Malaysia, we report a rare case of a 16-month-old baby who presented with unilateral left distal tibial hemimelia with equinovarus deformity in one of a pair of dizygotic twins, without a family history of this deformity. Moreover, in this case, the condition is associated with hypospadias and a single dominant artery in the distal left lower limb. As the reconstructive options are limited, the tightened tendon was released and a Z-plasty was performed to increase the foot angle.

Case Presentation

We report a 16-month-old pre-term baby boy, the second of dizygotic twins, born via emergency lower segment cesarean section due to breech presentation of the presenting twin. The baby was diagnosed with unilateral left distal tibial hemimelia with a severe equinovarus deformity and a single dominant artery in the distal left lower limb. The mother had an uneventful pregnancy without any comorbidity, and there is no consanguinity with her husband. A family history of a similar congenital anomaly was negative.
On examination, the baby was not dysmorphic, with intact lip and palate; glans hypospadias with minimal chordee was an associated comorbidity. The hips were stable, and the right lower limb was completely normal with intact movement. The left limb was anterolaterally bowed with shortening of the left tibia. The left fibular head and proximal tibia were palpable. The patient could extend the left knee actively, with knee flexion up to 100 degrees. The left foot was small, supinated, and medially rotated, with severe equinovarus deformity, as shown in Figure 1. The patient could not dorsiflex the left foot but could extend and flex the left toes, although the big toe was underdeveloped.

Discussion

Embryonic mesodermal connective tissue gives origin to all connective tissues in the body, including cartilage and bone. Balanced endochondral and intramembranous ossification results in normal shape and growth of the long bones. Lower limb development involves complex and precise gene interactions that control the positional development of bones [10]. The variety of congenital lower extremity shortening is wide-ranging and can involve any leg bones and joints. In most cases, the definitive cause of long bone deficiency is unknown. However, some factors, such as mutations, teratogens/teratogenic drugs, and early compromise of blood supply, can impair mesenchymal condensation and slow, or even abort, the onset of the chondrification process, causing leg length discrepancy associated with severe permanent morbidity related to abnormal weight-bearing and compromised ambulation. Tibial hemimelia or tibial deficiency is uncommon and markedly less frequent than the fibular variant [10,11]. Its incidence is approximately 1 per 1 million live births, with 0.8% occurring in monozygotic twins (about 1 case per 125 million) [4]. Usually, affected patients show a pronounced family history [9]. In the literature, there are few recorded cases of tibial hemimelia in twins, specifically in identical or monozygotic twins. In 2003, Dayer and Kaelin published a case of tibial hemimelia in monozygotic twins, while in 2010, Leite et al. reported a case of tibial hemimelia in one of a pair of identical twins [4]. Generally, there are no recorded cases, percentages, or data in the literature about incidence in one of a pair of dizygotic twins with no family history of tibial hemimelia, which is against the usual presentation. Clinically, tibial hemimelia was classified into four types according to the Jones classification [12], which was based on radiological criteria; most recently, Paley published a new classification based on the patho-anatomy of progressive deficiency, which as such serves to guide reconstructive options [2]. Tibial hemimelia may present as a solitary disorder (unilateral or bilateral) or be part of more complex malformation syndromes [7,8]. Surgical treatment varies according to the type and degree of deformity. Reconstructive interventions are still limited but have improved significantly over the last decades. Other options include ankle arthrodesis, tendon lengthening or transposition [13], and amputation with prosthetic fitting, especially for the severely affected limb.

Conclusions

Tibial hemimelia is a rare variety of congenital lower extremity shortening and is very infrequently reported in the literature, especially tibial hemimelia in twins, with no reported cases in one of a pair of dizygotic twins born to normal parents without a family history of this deformity.
In conclusion, such odds against the usual presentation should be considered, to elicit possible underlying causes and risk factors.

Additional Information

Disclosures

Human subjects: Consent was obtained by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2021-02-26T05:09:48.185Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "d0043cc0b09d48cdab16816d83b861de22419d8a", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/50056-usual-presentation-has-odds-unilateral-tibial-hemimelia-in-one-of-dizygotic-twins.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d0043cc0b09d48cdab16816d83b861de22419d8a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
267048081
pes2o/s2orc
v3-fos-license
Stochastic Inversion of a Tomographic Pumping Test: Identifying Conductivity Horizontal Correlation and Longitudinal Macrodispersivity

Two important properties for modeling flow and transport in heterogeneous aquifers are the horizontal integral scale (Ih) of hydraulic logconductivity (Y = ln K) and the longitudinal macrodispersivity (αL). Estimating the former generally requires K measurements in many wells, which are typically not available. In this work, we present a method for estimating Ih and αL from hydraulic tomography (HT) pumping tests. The approach is based on stochastic inversion, assuming head and K are stationary random space functions. Head variance and mean head gradient are calculated from measured head data at large distances from the pumping well and for large times from the start of pumping. These are then used in a theoretical formula for head variance in uniform mean flow to obtain Ih and subsequently αL, requiring only previous knowledge of the vertical integral scale (Iv) and the logconductivity variance (σY²). The method is applied to data from an HT pumping test at the Boise Hydrogeophysical Research Site, and results for Ih and αL are in the range previously reported in the literature. Particularly promising results are found for αL, which is seen to be robust, with relatively little variation for different values of Iv and σY².

Introduction

The spatial variability of aquifer hydraulic conductivity K is generally associated with an enhanced rate of solute spreading in groundwater transport, coined macrodispersion. This is an important topic related to a number of applications such as contaminant and colloidal transport in aquifers, kinetics of geochemical reactions, and groundwater bioremediation. In view of the interest in this topic, a considerable effort has been invested in developing various methods of aquifer heterogeneity characterization using field measurements. In the present paper we focus on the hydraulic tomography (HT) method. It consists of conducting pumping at varying intervals along a number of wells and measuring the pressure response by transducers in numerous discrete intervals of observation wells in and surrounding the volume of investigation. The identification of the K spatial distribution from measurements of the pressure head H is known as the inverse problem. Two main inversion approaches were advanced in the past, as discussed in the following. The first approach consists of casting the flow equations in numerical form, with K values at the nodes of the spatial grid regarded as unknowns, while the measured H are known at a set of nodes. This HT analysis entails solving an inverse problem with thousands of unknown variables. While theory on HT inverse problems dates back to early publications such as Carrera and Neuman (1986), Gottlieb and Dietrich (1995), and Yeh et al.
(1996), it is still a very active field of study (e.g., Cardiff & Barrash, 2011; Hochstetler et al., 2016; Zha et al., 2018; Zhao & Illman, 2018); a thorough review of the HT literature and field studies is presented in the work of Cardiff and Barrash (2011). Nevertheless, there are various problematic aspects of the HT inverse methods, the first and foremost of which is that the problem is ill-posed (Bohling & Butler, 2010). In addition, to achieve accurate estimation many data measurements are required, which are often not available. The result of HT inversion is usually presented as a spatial map of hydraulic conductivity, and these maps are often misinterpreted as a deterministic output, despite the fact that the method is Bayesian and only offers a "most likely" result. The method is also extremely demanding in terms of computational effort. Finally, the HT results are also difficult to verify in field experiments, since pointwise measurements, or independent tests for comparison, are usually scarce, and most of the literature focuses on validation using synthetic examples. Thus, only a limited number of studies include application to high resolution 3D field experiments (Berg & Illman, 2011; Cardiff et al., 2012, 2013; Hochstetler et al., 2016; Tiedeman & Barrash, 2020). In addition to the drawbacks of HT discussed above, some important applications, such as prediction of solute plume spreading by groundwater flow, require global parameters characterizing the spatial variability of permeability rather than a pointwise distribution of K. In the same vein, a typical procedure entails generating K values in a large zone of the aquifer, extending past the one investigated by the HT, for instance using geostatistical tools, and this also requires a statistical characterization of the K distribution. Then the salient question is whether it is possible to identify these global parameters directly from the distribution of the head measurements, bypassing the need for a pointwise inversion.
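As a toy illustration of why the pointwise approach is ill-posed (many more unknown nodal K values than head measurements), the sketch below solves a linearized stand-in problem with Tikhonov regularization. The sensitivity matrix here is random, not a real flow solver, and the regularization weight is arbitrary; the point is only that a unique, stable estimate requires adding such a constraint.

```python
# Toy ill-posed inversion: unknowns >> observations, so plain least
# squares is non-unique; Tikhonov regularization selects one solution.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_obs = 500, 60                 # nodal K unknowns vs. head data
A = rng.normal(size=(n_obs, n_nodes))    # stand-in linearized sensitivities
k_true = rng.normal(size=n_nodes)
h_obs = A @ k_true + 0.01 * rng.normal(size=n_obs)  # noisy "heads"

lam = 1.0                                # arbitrary regularization weight
A_aug = np.vstack([A, lam * np.eye(n_nodes)])
b_aug = np.concatenate([h_obs, np.zeros(n_nodes)])
k_est, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
print("data residual:", np.linalg.norm(A @ k_est - h_obs))
```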
To address these questions, a second approach, which we have adopted in the past and pursue here, is a statistical analysis. It entails modeling the aquifer logconductivity field Y = ln K as a stationary random space function which is then characterized by a few statistical parameters: the geometric mean ⟨Y⟩ = ln K_G, the variance σ_Y², and the horizontal I_h and vertical I_v integral scales, respectively, associated with the assumed axi-symmetric auto-correlation function. They are exhaustive if Y is modeled as multi-Gaussian. The stochastic inversion aims at deriving these parameters from the measured H, which is regarded as a realization of a random field as well. Subsequently, realizations of K values can be spatially generated using standard algorithms such as Sequential Gaussian Simulation.

This work deals with the second approach discussed above. While the general idea has been presented in the geostatistical literature (e.g., Firmani et al., 2006; Zech et al., 2015), the current approach is based on discussions in Indelman et al. (1996) and Indelman (2001). It was further developed in recent years (Cheng et al., 2019, 2020) and applied to a synthetic case in Bellin et al. (2020). It was only very recently, in Cheng et al. (2022a), that the stochastic inversion was implemented in a field experiment of HT, as described in the following. The solution for the head H_0(x, y, z, t) in a homogeneous unconfined aquifer depends on three properties: K, the specific storativity S_s, and the specific yield S_y. The approach of Cheng et al. (2022a) was to fit each measured H(t) curve at a given location with an H_0(t) model, where the three parameters in H_0 are coined as equivalent properties. In simple words, the equivalent properties K_eq, S_s,eq, and S_y,eq are the constant properties of an aquifer whose head response fits the measured head response in the actual heterogeneous aquifer. For heterogeneous aquifers the equivalent properties vary in space, and the aim of Cheng et al. (2022a) was precisely to analyze the spatial variation of their statistical moments, that is, the mean and variance (a brief recapitulation of the method and results is given in Section 2.3). In principle, the equivalent properties represent a kind of spatial average over the volume extending from the pumping source to the observation point. At a sufficiently large distance R the mean K_eq value levels off and tends to the aquifer constant effective conductivity in mean uniform flow, which is a key parameter for modeling flow under natural gradient conditions. The identification of the effective properties was one of the main results of Cheng et al. (2022a). However, there was no estimation of correlation (i.e., integral) scales or macrodispersivity in that work, and results were given only for the mean and variance of the aquifer equivalent properties.

The aim of the present study is to develop a procedure for estimating the logconductivity horizontal integral scale I_h and longitudinal macrodispersivity α_L from HT data and to implement the procedure on the Boise Hydrogeophysical Research Site (BHRS) using data from the 2011 Boise Hydraulic Tomography Test (BHTT). We seek the identification of the parameters which quantify the local K statistics, namely K_G, σ_Y², I_h, I_v, by using the same measurements from the BHTT which were used in the analysis of Cheng et al.
(2022a). The approach suggested in previous literature (e.g., Bellin et al., 2020) is to derive a solution of the direct problem for the mean head ⟨H⟩, which satisfies the appropriate boundary and initial conditions, and then to identify the parameters by a best fit with the measured mean head H̄. We attempted to implement this approach by using the solution derived in Indelman (2003) for a constant rate pumping source in an elastic aquifer. Due to the complexity of the problem, the theoretical solution for ⟨H⟩ was derived by a first order approximation in σ_Y². We attempted to apply the inverse procedure by matching the solution of Indelman (2003) to the mean head computed from measurements in the BHTT for early pumping times, in which flow is storativity dominated (i.e., elastic aquifer). This did not succeed due to the difficulty of separating, in the measured signal, the leading order term from the one related to heterogeneity. In other words, we found that for the BHRS, the heterogeneity is not significant enough to clearly quantify its impact on the mean head during early pumping times.

To deal with this issue we turn to the measurement based head variance σ̄_h², which is much more promising than ⟨H⟩ since it is entirely related to heterogeneity. The theoretical first order approximation of σ_h² for flow to a source in an unbounded domain was derived by Severino et al. (2008) in terms of six quadratures, which is much too computationally demanding, rendering its use untenable. Severino (2011), Severino et al. (2019), and Severino and Cuomo (2020) could reduce the number of quadratures for a line source which represents a fully penetrating well in a confined aquifer, but this is not the case we are interested in (i.e., unconfined aquifer and pumping in an interval). To obtain a theoretical expression related to σ_h² that can be calculated, we implement an additional approximation, namely assuming that the theoretical σ_h² pertains to a uniform mean flow driven by the local mean horizontal head gradient J_h = ∂⟨H⟩/∂R (Di Dato et al., 2019). We use the head measurements at the largest times and distances from the pumping source to calculate σ̄_h², where we expect our assumption of uniform mean flow to be most applicable. Under these conditions, a simple analytical expression provides a relationship between the measurement based head variance and the statistical parameters σ_Y², I_h, and I_v.

One of the motivations for focusing on identification of I_h is that it is typically challenging to estimate because of the scarcity of direct measurements of K in the horizontal direction. The encouraging result presented in this work is that the range of identified I_h is in agreement with previous estimates in the literature. An even more promising finding is that the value of the longitudinal macrodispersivity in mean uniform flow, α_L = σ_Y² I_h, is robust, that is, practically independent of the selected σ_Y² values. Moreover, it is in agreement with the values pertinent to a large number of transport experiments analyzed by Zech et al. (2023) for aquifers of a mild level of heterogeneity. It is emphasized that, to the best of our knowledge, the present study is a first attempt to identify flow and transport parameters by stochastic inversion for a continuous pumping field test. In view of the encouraging results we conclude the paper by proposing types of future tomographic tests which are tailored to the purpose of identifying transport parameters.
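Before detailing the plan of the paper, it is convenient to record the structure of the identification chain in schematic form. The display below is a sketch only: the exact shape function F_h(f) is defined by Equations 2-4 and Appendix A, and the quadratic scaling with J_h and I_h is an assumption consistent with the first-order theory rather than a reproduction of the working formulas.

```latex
% Schematic structure of the identification chain; F_h(f) is the shape
% function defined by Equations 2-4 (assumed here, not reproduced):
\begin{align}
  \bar{\sigma}_h^2(R_{\max}, t_{\max})
      &\;\approx\; \sigma_Y^2\, J_h^2\, I_h^2\, F_h(f),
      \qquad f = \frac{I_v}{I_h}, \\[2pt]
  \alpha_L &\;=\; \sigma_Y^2\, I_h .
\end{align}
```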
The plan of the paper is as follows. In Section 2.1 we present the method developed in this work. Section 2.2 provides general information on the BHRS and details of the HT field experiment conducted in 2011. This is followed by a summary of previous relevant analysis carried out using the BHTT data in Section 2.3. Sections 3.1 and 3.2 present the implementation of the method of Section 2.1 on the BHTT data, with Section 3.1 focusing on the mean head and variance, while Section 3.2 presents the results for I_h and α_L. In Section 3.3, we consider the possibility of reducing the amount of field data and evaluate its impact on the results. Finally, Section 4 presents a short summary, the main conclusions of this work, and future recommendations.

Method for Identifying I_h and α_L

In this section we detail the method for estimating the logconductivity horizontal integral scale and the macrodispersivity from HT data. Overall, the method consists of calculating the head variance from the pumping test field measurements, denoted by σ̄_h², and comparing it with a theoretical expression for the head variance, denoted by σ_h². We note that the method is described here only in general terms, while many additional details are provided in the following sections during its implementation on the BHTT data.

We consider that a tomographic constant rate pumping test has been carried out in an unconfined aquifer of thickness D. Figure 1 depicts the layout of the model corresponding to the field test, with pumping source and observation points appearing alongside the coordinate system. Notice that the pumping location is assumed to be at a point in space, though in reality this is the center of the pumping well segment. Considering a Cartesian coordinate system, the pumping source is assumed to be located at x = y = 0 and z = Z (note that Z < 0), where z = 0 is the top of the aquifer (e.g., the initial water table). We also use a cylindrical coordinate system appropriate for the axisymmetric nature of the problem, with R = (x² + y²)^{1/2}, φ = tan⁻¹(y/x) and z, and the pumping location is therefore at R = 0, z = Z. The tomographic test data consist of many head time series H(t) at the various observation points (R, φ, z) and for various tests conducted at different pumping locations Z. The measured pressure head is therefore denoted as H_Z(R, φ, z, t). We further assume that previous analysis has been conducted on the HT data and estimates of σ_Y² and I_v have been made. To model the pumping test at late times, a theoretical model has been previously developed and used extensively, particularly in Cheng et al. (2022a). The model (H_0) assumes a homogeneous aquifer of constant hydraulic conductivity K, specific yield S_y, and a point source discharge Q located at z = Z, with a lower boundary condition of no flow, ∂H_0/∂z = 0 at z = −D (D is the aquifer thickness), and a linearized water table boundary condition S_y ∂H_0/∂t + K ∂H_0/∂z = 0 at z = 0. The late time assumption allows one to neglect the impact of the elastic storage. The details of the problem and the derivation of the solution have been presented in Dagan (1966), and the final expression is given by Equation 1, where J_0 is the zeroth order Bessel function of the first kind.
An analytical expression for σ_h² pertaining to mean uniform flow driven by a given constant head gradient J = (J_h, J_v) in an unbounded domain has been derived by Dagan (1989, Equation 3.7.16 therein). Thus, with H = ⟨H⟩ + h and ⟨H⟩ = H_0 = −J ⋅ x, the first-order approximation of σ_h² = ⟨h²⟩ (where ⟨⟩ denotes averaging) is given by Equation 2, with the auxiliary functions defined in Equation 3. Equation 2 was obtained for an axi-symmetric exponential logconductivity autocorrelation, and additional details of the derivation can be found in Appendix A.

We propose to apply Equation 2 to field data from HT tests in an approximate manner by taking advantage of the asymptotic properties (in space and time) of the mean solution H_0 (Equation 1) for a source pumping from an unconfined aquifer. It was shown by Dagan and Lessoff (2011) that for a fixed R and large t the solution H_0 (Equation 1) tends to the steady one for a source in a confined aquifer. Thus, for sufficiently large R and t, the head becomes vertically uniform and the head gradient varies only horizontally with distance from the well, tending to the steady radial form. In other words, for sufficiently large R and t, when I_h ≪ R, the flow is slowly varying and can be approximated locally by a steady uniform flow. This approximation will be thoroughly examined and discussed in Sections 3.1 and 3.2. Substituting J_v = 0 in Equation 2, the key formula for identification of the statistical parameters becomes Equation 4, where f and F_h are defined in Equation 3.

We now detail the steps of the procedure for estimating I_h and α_L. The first step of the method consists of smoothing the head data signals H(t) by fitting them with a smooth function and filtering out any H(t) curves that appear nonphysical, as will be further discussed in Section 2.2. The second step is to isolate the filtered head data H(t) which pertain to observation locations with approximately the same depth as the pumping location, that is, z ≈ Z. These observation locations are then grouped together based on their distance from the pumping source and their depth, that is, H(t) are grouped with similar R and z ≈ Z values. This is further discussed in Section 2.3. Now the grouped head data have similar R and z, only differing by their φ coordinate. These can be averaged to obtain the mean head H̄(t). We note that we remove the subscript Z from the head notation from here on, since we consider only z ≈ Z. Similarly, the grouped measured head data can be averaged to obtain the head variance σ̄_h² as in Equation 5, where the averaging is over the data with varying φ.

The third step of the method is to calculate the value of σ̄_h² from the measurements which are approximately in the regime of uniform mean flow with negligible vertical gradient (J_v ≈ 0). This is done by taking only the data from the largest times t = t_max and largest distance from the pumping source R = R_max. These head measurements are used in Equation 5 to calculate σ̄_h² for each depth z, and then the results are averaged over the different depths. The final result is a single variance value, represented by σ̄_h²(R_max, t_max).

The fourth step is to obtain the horizontal head gradient J_h from the theoretical solution of H_0 given by Equation 1.
H_0 is used to represent the field data average head H̄(t) by the following approach. We first calculate the equivalent properties K_eq and S_y,eq for each head data signal H(R_max, φ, z, t) by a best fit of H_0 to H(t), for the late time portion of the curves, at each location. This is further discussed in Sections 2.3 and 3.1. After averaging over φ we arrive at K_eq and S_y,eq for each measurement depth z. Substituting these K_eq and S_y,eq values back in Equation 1 provides us with H_0(R_max, z, t), which represents H̄(R_max, z, t). Now we can calculate the derivative of the average head at the largest available time, that is, J_h = ∂H_0/∂R evaluated at R = R_max and t = t_max (Equation 6). After averaging J_h(z) over z we obtain one final value of J_h(R_max, t_max) for the measurements with the largest time and distance from the pumping well.

The fifth and last step of the procedure is to substitute the value of σ̄_h²(R_max, t_max) in the left hand side of Equation 4, and the values of J_h(R_max, t_max), σ_Y², and I_v (the latter two are assumed known from previous analysis) in the right hand side of Equation 4. This allows us to calculate I_h. We note that Equations 1 and 4 are central to the method described above and each pertains to different flow conditions: the former describes well flow in a homogeneous medium, while the latter applies to uniform mean flow in a heterogeneous aquifer. However, each is carefully used for a different objective, that is, the former for the horizontal head gradient dominated by the mean flow and the latter for the head variance resulting from heterogeneity.

While the parameter I_h is of general interest, its main application is to solute transport, and more precisely to its impact on the longitudinal macrodispersivity α_L. The latter quantifies the rate of spreading of solute plumes due to the conductivity spatial variability, and it has been derived by a first-order approximation in σ_Y² (e.g., Dagan, 1989). Thus, after a transient period, α_L stabilizes at the asymptotic constant value α_L = σ_Y² I_h (Equation 7). Despite the first order assumption, this relationship was shown to apply also to large values of σ_Y² (Fiori et al., 2017), based on accurate numerical simulations. Furthermore, it was also shown to be approximately obeyed in a few transport field experiments conducted in highly monitored aquifers (Zech et al., 2023).

Description of the BHRS Hydraulic Tomography Pumping Test

The BHRS is a heavily investigated site located in proximity to the Boise River, about 15 km upstream from downtown Boise, ID, USA. It mainly consists of a fluvial aquifer composed of coarse cobble, gravel, and sand sediments, confined below by a clay layer and unconfined from above. The thickness of the aquifer is approximately 16 m, with slight variations. The site is described in detail and characterized in numerous previous publications, for example, Barrash and Clemo (2002), Barrash and Reboulet (2004), Moret et al. (2004), Barrash et al. (2006), Moret et al. (2006), Clement and Knoll (2006), Clement and Barrash (2006), and Clement et al. (2006). The variance of Y = ln K is reported to have values of 0.36-0.92, with an overall value reported to be 0.49 (Barrash & Cardiff, 2013).

We consider data obtained in a field campaign in the summer of 2011 and published previously in Cardiff et al. (2013) and Cheng et al.
(2022a). Partially penetrating pumping tests were carried out using various 10 cm diameter wells located onsite. The wells onsite are depicted in Figure 2a and consist of 13 wells distributed in a pattern of two rings: an outer ring (wells C1-C6) and an inner ring (wells B1-B6), surrounding the central well A1. Constant rate pumping was carried out, in turn, at various depths with 1 m intervals in three pumping wells: A1, B1, and C1. Furthermore, pressure was recorded in different combinations of seven surrounding wells: B3, C1, C2, C3, C4, C5, and C6. The test setup for an example case, in which pumping is carried out at well B1 and observations in wells B3, C3, C4, C5, and C6, is illustrated in Figure 2b. Each observation well was equipped with a packer-and-port string consisting of seven 1 m open intervals separated by a 1 m inflatable packer above and below. This allowed for independent measurements of head as a function of time, H(t), at successive 1 m intervals in each observation well. Altogether, 2,472 measurements of H(t), corresponding to different observation and pumping locations, were available for data analysis. Pumping flow rate and duration varied between the different tests, with rates in the range 25.35-60.68 L/min and durations between 13 and 15 min.

The data processing procedure is described in detail in Cheng et al. (2022a). The H(t) curves are first smoothed by fitting them with a MATLAB Lowess smoothing function. Then, the data are filtered by removing curves which are suspected to be nonphysical (see Cheng et al., 2022a for more details). Finally, we consider only head measurements which are approximately at the same depth as the pumping interval location. This was found to help simplify the analysis, reducing the impact of boundaries and allowing us to focus on horizontal flow. Thus, if we assume a Cartesian grid in which the pumping location is at x = 0, y = 0, and z = Z, we can describe a head measurement as H_Z(x, y, z, t), where z is the elevation of the measurement point and Z is the elevation of the pumping location (see Figure 1). Only H_Z(x, y, z, t) data in which |z − Z| < 1 m are used in the analysis of this work, and therefore the subscript Z will not be necessary from here on, as we remember that z ≈ Z. We note that the pumping and measurement vertical locations are described as points in space (z and Z), although they represent a meter-long interval. This has been found to be a sufficient representation for modeling purposes. After this data selection we are left with only 235 measured head curves, as detailed in Table 1 (see the next section for additional information).

Summary of Previous Data Analysis

Herein is a brief description of the data analysis and results of Cheng et al.
(2022a), which will be the starting point for this work. The goal of that work was to estimate aquifer equivalent property statistical moments from the measured data. A procedure was presented and carried out for the BHTT data, leading to estimates of the mean and variance of the equivalent properties K_eq, S_s,eq, and S_y,eq. Equivalent properties at a given point are defined as those pertaining to a homogeneous medium, under the same boundary conditions and pumping source, which lead to approximately the same head response H(t) as the one measured in the field experiment at that point. Results showed that average equivalent properties decrease with horizontal distance from the pumping well and appear to stabilize at sufficiently large distances, in line with existing theory for K_eq. The squared coefficient of variation showed similar behavior, with values indicating a weakly heterogeneous aquifer. Estimated values are presented in Table 2 of Cheng et al. (2022a) and are seen to be in agreement with literature values for the BHRS.

A main component of the method proposed in Cheng et al. (2022a) was a separation of each measured head signal H(t) into three time periods. First is the early time period (0 < t < t_E, where t_E is the endpoint of the early time period), in which water is drawn from the elastic storage and the impact of the water table is negligible. Second is the late time period (t > t_L, where t_L is the initial time of the late time period), in which water is drawn mainly from water table drainage and the elastic storage is negligible. Third is the intermediate time period (t_E < t < t_L), in which both elastic storage and water table drainage are significant. It was shown that separate analysis of the early and late time periods is beneficial because it reduces the number of parameters which need to be identified in an inverse procedure, making the process more efficient and accurate, and avoiding the intermediate time period in which models are more error prone. Furthermore, it was shown that results for the mean and variance of K_eq are consistent between the early time and late time analyses, that is, the approach is robust and reliable.

In this work, our goal is to obtain additional aquifer characterization which was not achieved in Cheng et al. (2022a), namely the horizontal correlation scale of conductivity and the macrodispersivity. This will be carried out by analysis of the late time period data. In Table 1 we present the distribution and quantity of H(t) curves from the BHTT data which were used for our analysis. The data are clustered into groups within a range of 2 m in depth and approximately 0.8 m in distance from the pumping source (R); for example, for Z = −6 m and R = 9.6 m, the range is actually −7 m < z, Z < −5 m and roughly 9.1 m < R < 9.9 m, with 10 available H(t) curves from all the pumping tests. This is the remaining data after the processing and selection carried out for a late time period analysis, that is, this was the data previously used in Cheng et al. (2022a) for estimating the moments of K_eq and S_y,eq. It should be noted that for the data of Table 1, it was found that t_L ≈ 200 s.
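To make the data-reduction steps concrete, the following is a minimal Python sketch of the pipeline just described: smoothing each H(t) signal, grouping signals with similar R and z ≈ Z, and forming the azimuthal mean and variance of Equation 5. All names and the input structure are hypothetical; the BHTT processing itself was done in MATLAB (Cheng et al., 2022a).

```python
# Hypothetical sketch of steps 1-2 of Section 2.1: smooth, group, average.
# `signals` is an assumed list of dicts, one per measured H(t) curve:
#   {"R": float, "z": float, "Z": float, "t": 1D array, "H": 1D array}
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def smooth_head(t, H, frac=0.2):
    """Lowess-smooth a single head signal H(t) (stand-in for the
    MATLAB Lowess smoothing used in the original processing)."""
    return lowess(H, t, frac=frac, return_sorted=False)

def group_and_average(signals, R_target, z_target, t_grid,
                      dR=0.4, dz=1.0):
    """Group signals with R within dR of R_target and z ~ Z within dz of
    z_target; return azimuthal mean head and head variance (Equation 5)."""
    group = [s for s in signals
             if abs(s["R"] - R_target) < dR
             and abs(s["z"] - z_target) < dz
             and abs(s["z"] - s["Z"]) < 1.0]      # keep only z ~ Z
    # Interpolate each smoothed signal onto a common time grid.
    curves = np.array([np.interp(t_grid, s["t"],
                                 smooth_head(s["t"], s["H"]))
                       for s in group])
    H_bar = curves.mean(axis=0)                    # mean over phi
    var_h = ((curves - H_bar) ** 2).mean(axis=0)   # Equation 5
    return H_bar, var_h, len(group)
```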
Background to the Implementation of the New Method

The stochastic solution to the inverse problem which we are interested in consists of identifying the statistical structural parameters characterizing Y = ln K, namely K_G, σ_Y², I_h, I_v (or the associated f = I_v/I_h), with the aid of measured H statistical parameters. The general approach forwarded in the literature is to derive the solution of the direct problem for the random H (or the associated K_eq), to obtain its statistical moments, and subsequently to identify the structural parameters by a best fit of measured and theoretical H moments (see, e.g., Dagan, 1989). We attempted to implement this approach on the BHTT data without success, as explained in the next paragraph.

In the case of a three-dimensional pumping source representing a well, one of the approaches was to derive ⟨K_eq⟩ as a function of R in steady flow and then to identify the structural parameters by a best fit with the measurement based K̄_eq (e.g., Bellin et al., 2020; Indelman et al., 1996). A similar approach can be taken by using the solution of Indelman (2003) for the time dependent ⟨H⟩, considering a pumping source with constant discharge in an elastic confined aquifer. The two approaches discussed above are based on solutions obtained by a first-order approximation in σ_Y² and have not been tested against field experiments. We have attempted to implement both approaches using the BHTT data and this, as far as we are aware, is a first such endeavor. Our first attempt was to use K̄_eq calculated by Cheng et al. (2022a, Figure 7 therein); however, it was not conducive, since there was no clear trend of K̄_eq with R. In the second attempt, we used the Indelman (2003) solution and tried to fit the theoretical ⟨H⟩ with the measurement based H̄ as functions of time in the early period. This attempt was also not successful, since the measured H̄ did not allow for separation of the zero order approximation and the heterogeneity induced term of order σ_Y² which make up the theoretical solution. A more promising approach is to fit the theoretical head variance σ_h² and the measurement based one, σ̄_h², since they are directly related to heterogeneity. Deriving σ_h² for steady source flow by Severino et al. (2008) involved performing 6 quadratures numerically, making it not applicable to our procedure due to computational cost. The number of quadratures was reduced to 4 by Severino et al. (2019) for a line source representing a fully penetrating well in a confined aquifer, which is still numerically demanding and does not apply to the configuration of the BHRS. We therefore turn to the mean uniform flow approximation and the method described in Section 2.1.

Analysis of Head Mean and Variance Based on BHTT Measurements

In this section we implement steps 2 and 3 of the method described in Section 2.1. The uniform mean flow approximation adopted in our method requires considering data from large times and distances from the pumping well. Therefore, we selected from Table 1 the two distances R_1 = 9.6 m and R_2 = 12.4 m. These are the largest distances which still provide a significant number of data sets. Furthermore, we concentrate on head measurements at the large times t_1 = 450 s and t_2 = 600 s; t_2 is the total duration of the test, while t_1 was selected for the purpose of comparison. It is recalled that for both t_j (j = 1, 2) the regime is that of pumping mainly from the water table specific yield, that is, the late time period (see Section 2.3).
To derive the mean head H̄(R, z, t), we observe that in Table 1, for each R and z, the data bank consists of a number of head signals H(R, z, φ, t) which differ in the value of the azimuthal angle φ (remembering that Z ≈ z). Due to the assumed axi-symmetric stationarity, the mean head is obtained by averaging over φ. For example, taking R_1 = 9.6 m and z_3 = −6 m, there are 10 available time dependent measured signals which are averaged to arrive at H̄(R_1, z_3, t). Subsequently, the resulting four profiles over the vertical, H̄(R_i, z_k, t_j) for i, j = 1, 2 and z_k = −2k (k = 1, …, 8), are depicted in Figure 3. It is emphasized that the values for z_1 = −2 m, z_7 = −14 m, and z_8 = −16 m are based on a small number of measurements (see Table 1); furthermore, these measurement locations are close to the aquifer boundaries and influenced by the boundary conditions. For this reason we use only the data in the central layer between the depths −12 m ≤ z_k ≤ −6 m, for which the approximation of locally uniform flow is rather justified (see discussion in Section 3.2).

The homogeneous aquifer solution presented in Equation 1 can be used to represent the average head in a heterogeneous aquifer by substituting K and S_y with the equivalent properties K_eq and S_y,eq. Modeling the BHTT data with Equation 1 implies a point source is used to represent the actual interval, that is, replacing the line source with a point source at the center of the interval. In Cheng et al. (2022a), this was found to be an accurate approximation for R > 4.2 m in Table 1. Following the procedure of Cheng et al. (2022a), the values of the equivalent properties K_eq and S_y,eq for each measured head signal H(R, φ, z, t) indicated in Table 1 were obtained by a best fit of the measured H(t) with the solution H_0(R, z, t, Z), where Q was taken as the field test pumping rate value for an interval with center at Z = z. Furthermore, we averaged the values of K_eq and S_y,eq over φ and over −12 m ≤ z_k ≤ −6 m for all the signals of sources located at R_1 or R_2, with the resulting values given in the legend of Figure 3. These values are consistent with the mean ones presented in Cheng et al. (2022a, Figures 7a and 10a). The resulting four plots of H_0(R, z, t, Z) as a function of z for R_1,2 and t_1,2 are presented in Figure 3 (denoted H_0(R_i, z, t_j), where z ≈ Z). It is seen that the agreement between the measurement based H̄ (solid lines) and the H_0 profiles (dashed lines), in the interval of interest −12 m ≤ z_k ≤ −6 m, is quite good. Furthermore, for both measured and computed profiles there is practically no difference between those pertaining to t_1 = 450 s and t_2 = 600 s. This result strengthens our assumption that a quasi-steady regime is achieved for t > 450 s at the selected R = R_1,2.

We examine now the main parameter of interest, namely the head variance, which quantifies the impact of heterogeneity. The calculation procedure is described in Section 2.1. For each of the 4 values of R_i, t_j and the different z_k we averaged [H(R_i, φ, z_k, t_j) − H̄(R_i, z_k, t_j)]² over φ, as prescribed by Equation 5, to obtain σ̄_h²(R_i, z_k, t_j). The results are summarized in Figure 4, which displays the values of σ̄_h² along the vertical. It is seen that, as expected, σ̄_h² varies in a more irregular manner with z_k than H̄, with no clear trend, and the scatter of values is larger for R_1 = 9.6 m than for R_2 = 12.4 m. Again, the results for t_1 = 450 s and t_2 = 600 s practically coincide. Averaging σ̄_h² over the interval of assumed uniformity −12 m ≤ z_k ≤ −6 m, we arrive at the two key values of σ̄_h² = 6 × 10⁻⁶ m² for R_1 = 9.6 m and σ̄_h² = 2 × 10⁻⁶ m² for R_2 = 12.4 m, respectively, valid for t > 450 s. These will serve us in the following in order to identify I_h and α_L.
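As an illustration of the best-fit step used above to obtain K_eq and S_y,eq, a minimal sketch follows; `H0_model` stands in for a numerical evaluation of Equation 1 (not reproduced here), and all names are hypothetical:

```python
# Hypothetical sketch: fit equivalent properties (K_eq, S_y_eq) of the
# homogeneous model H_0 (Equation 1) to the late-time part of one signal.
import numpy as np
from scipy.optimize import curve_fit

def fit_equivalent_properties(H0_model, t, H, R, z, Z, Q, t_L=200.0):
    """H0_model(t, K, Sy, R, z, Z, Q) is an assumed callable that
    evaluates Equation 1; only data with t > t_L (late time) are used."""
    late = t > t_L
    def model(t_late, K, Sy):
        return H0_model(t_late, K, Sy, R, z, Z, Q)
    # Initial guesses of the order of the BHRS literature values.
    p0 = (2.0e-4, 0.05)                      # K [m/s], S_y [-]
    (K_eq, Sy_eq), _ = curve_fit(model, t[late], H[late], p0=p0,
                                 bounds=([1e-6, 1e-3], [1e-2, 0.5]))
    return K_eq, Sy_eq
```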
Identification of I_h and α_L at the BHRS

In Section 2.1 we discussed the approximation which allows us to obtain a simple theoretical expression for σ_h² (see Equation 4). It involves assuming sufficiently large R and t, so that the flow is slowly varying and can be approximated locally by a uniform flow. In the previous section, we found H_0(R, z, t) representing the mean measured head in the late period for the largest R_1,2 and the largest t_1,2 in the central zone −12 m < z < −6 m (see Figure 3). Thus, the idea is to use this H_0 function to calculate J_h via Equation 6. In words, we assume that locally the mean flow behaves like a steady uniform one driven by the head gradient prevailing in the late period for the largest R and t attained in the BHTT. The validity of this simplifying assumption is examined next.

Mean Head Gradient (J_v and J_h) for Large t and R

We turn to evaluate whether the mean flow pertaining to the BHTT at the values of R_1,2 and t_1,2 in the central zone can be regarded as locally uniform with negligible vertical head gradient, as far as the computation of σ_h² is concerned. We will use H_0 to represent H̄, as depicted in Figure 3. While a full theoretical solution for σ_h² in source flow may provide a definitive answer, at present we shall only examine the behavior of J_h = ∂H_0/∂R and J_v = ∂H_0/∂z. These two expressions were derived analytically by differentiation of Equation 1, and then the plots were drawn by plugging in the values of Q, K_eq, and S_y,eq pertaining to the BHTT (K_eq = 2.4 × 10⁻⁴ m/s and S_y,eq = 0.06 are taken from Cheng et al. (2022a)) and examining their behavior for the values of R_1,2, t_1,2 of interest. Thus, in Figure 5a we depict the dependence of J_v upon R at the boundaries of the central layer, z = z_3 = −6 m and z = z_6 = −12 m, at times t_1 = 450 s and t_2 = 600 s. First, it is clear that the flow is quasi-steady, as the results for t_1 and t_2 practically coincide, particularly for the lower layer (z = −12 m). Furthermore, |J_v| decreases toward zero with increasing R, and for large distances (R = R_1,2) a fairly constant value of J_v is seen with varying R and z. This behavior is in line with uniform mean flow, in which J_v is approximately zero and constant with z and R. In a similar manner we display the variation of J_h with R in Figure 5b. Again there is little difference between J_h at t_1,2, and J_h drops with R, varying slowly for R > R_1 = 9.6 m. However, J_h varies slightly between z_3 and z_6 and also for R > R_1, as the assumption of uniform flow is only an approximation. In the computations which follow we adopt as representative values the average of J_h across z, which is close to J_h at z = (z_3 + z_6)/2 = −9 m. The pertinent values are J_v = −10⁻³, J_h = −2.9 × 10⁻³ for R_1 = 9.6 m and J_v = −8.8 × 10⁻⁴, J_h = −2.1 × 10⁻³ for R_2 = 12.4 m. It is emphasized that the contributions of the vertical J_v and horizontal J_h mean head gradient components to σ_h² of Equation 2 are proportional to J_v² and J_h², respectively. Thus, for the aforementioned values, the relative contribution (J_v/J_h)² is equal to 0.12 and 0.17 for R = 9.6 m and R = 12.4 m, respectively. In view of these relatively small values and the various approximations discussed, we may neglect the contribution of J_v altogether and regard the flow as horizontal with J_h given by the above values.
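The gradients can also be evaluated numerically as a check on the analytical differentiation; a minimal sketch, treating `H0_model` from the previous listing as an assumed callable for Equation 1, is:

```python
# Hypothetical sketch: central finite differences of the fitted H_0 as a
# numerical stand-in for the analytical derivatives J_h and J_v.
def head_gradients(H0_model, K_eq, Sy_eq, R, z, Z, Q, t, dR=0.05, dz=0.05):
    """Return (J_h, J_v) = (dH0/dR, dH0/dz) at (R, z, t)."""
    H = lambda r, zz: H0_model(t, K_eq, Sy_eq, r, zz, Z, Q)
    J_h = (H(R + dR, z) - H(R - dR, z)) / (2.0 * dR)
    J_v = (H(R, z + dz) - H(R, z - dz)) / (2.0 * dz)
    return J_h, J_v
```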
Estimation of I_h and α_L

We now substitute the calculated values from Sections 3.1 and 3.2.1 in Equation 4. The measured head variance σ̄_h² in the left hand side of Equation 4 is replaced by its values of 6 × 10⁻⁶ m² for R_1 = 9.6 m and 2 × 10⁻⁶ m² for R_2 = 12.4 m. Then, we replace the corresponding values of J_h = −2.9 × 10⁻³ and J_h = −2.1 × 10⁻³ for R_1 = 9.6 m and R_2 = 12.4 m, respectively, in the right hand side of Equation 4. The result is an equation linking the three parameters σ_Y², I_h, and I_v. We selected I_h as the one to be identified, while adopting for σ_Y² and I_v values from the literature for the BHRS. This choice is motivated by practical considerations regarding aquifer characterization. Indeed, the common approach for estimating the properties consists of collecting K data in a few boreholes, where K is measured along the vertical. This data is typically used for estimates of σ_Y² and I_v based on the vertical variogram of Y. In contrast, estimation of I_h requires information from many boreholes distributed in the horizontal plane to allow deriving the Y horizontal variogram, which is quite prohibitive in many applications. Furthermore, I_h is the key parameter for identification of α_L. For this reason we focus on searching for I_h for a few estimated values of the other parameters.

Three values of σ_Y² were considered: the lowest is σ_Y² = 0.18, which was estimated in a 3D HT inversion of field data from the BHRS; an intermediate value of σ_Y² = 0.36, the lower end of the range reported for the site (see Section 2.2); and, finally, the highest value we considered is σ_Y² = 0.5, which is the common value attributed to the BHRS, obtained from the slug test analysis carried out in Barrash and Cardiff (2013). Literature values of I_v = 2 m have been presented in Cardiff et al. (2013) and I_v = 1.2, 1.5 m in Cardiff et al. (2011). Here, we will consider the range of I_v = 1-2 m and use a representative overall value of I_v = 1.2 m, which appears in Barrash and Cardiff (2013).

We begin the analysis with Figure 6, showing a general representative plot of Equation 4, that is, I_h/I_v as a function of σ_Y² for the considered R_1,2 and t_1,2. It is seen that the value of t_1,2 is immaterial, as the curves coincide; however, the results are somewhat different for the R_1,2 values. It is not clear whether this represents a trend related to the approximation of uniform mean flow (which generally applies for R → ∞) or to the discrepancy between the spread of measured σ̄_h² values for the two R values, as observed in Figure 4. At any rate, the differences are not large, and in order to assess the I_h values pertinent to the BHRS we have averaged them over the two values of R and present the results as a function of I_v for each of the selected σ_Y² separately in Figure 7. It is seen that for the literature value of I_v = 1.2 m the magnitude of I_h varies between 2.7 and 7.2 m for σ_Y² = 0.5 and σ_Y² = 0.18, respectively. These are in good agreement with values of I_h previously stated in the literature for the BHRS, which were found to be in the range of 4-10 m (Barrash & Clemo, 2002; Cardiff et al., 2011, 2013). These results are also within the range of values identified in previous literature for other intensively studied aquifers (Table B1 in Zech et al., 2023). Even considering the entire range of 1 m < I_v < 2 m in Figure 7, values of I_h remain within the interval of roughly 2-8.5 m, showing robustness of the results. Next, we turn to estimating the value of α_L from the BHTT, which is one of the main objectives of the present study. We achieved this by using Equation 7, that is, multiplying I_h by σ_Y², and results are presented in Figure 8 as a function of I_v for the three selected values of σ_Y². The somewhat surprising and encouraging result is the robustness of α_L, as reflected by the closeness of its values irrespective of the σ_Y² values. Thus, it is seen that α_L ≈ 1.3 m for I_v = 1.2 m, while it
varies between 0.8 and 1.6 m for the entire range of explored I_v values.

Based on a thorough analysis of tens of transport field experiments, Zech et al. (2023) have suggested dividing aquifers into three classes of heterogeneity level (see Figure 1 therein). For each level a range of α_L values was recommended as a preliminary choice in applications, in the absence of information on σ_Y² and I_h. Thus, for weak heterogeneity the mean value, based on data from 13 aquifers, is α_L = 1.1 m (standard deviation 1.1 m). It is gratifying to find that the value of α_L identified in the present study for the BHRS, α_L ≈ 1.3 m, is indeed close to the one characterizing aquifers of weak heterogeneity, in agreement with the σ_Y² < 1 criterion.

Impact of Reduced Field Data

The field pumping test configuration underlying the previous analysis includes three pumping wells (A1, B1, C1) and 8 observation wells (see Figure 2a), with a total number of 66 signals used for identification at R_1 = 9.6 m and R_2 = 12.4 m in the central z zone (see Table 1). The investment in the large number of wells and measurements is justified for the experimental setting of the BHTT, for high-resolution investigations of contaminated sites, and for in-situ remediation of contaminated source zones with toxic plumes. However, for the application of the procedure to common plume-scale contamination issues (see next section) such an investment is not practical. It is therefore of interest to employ the extensive data basis in order to examine the impact of reducing the number of wells and measurements on the outcome of the analysis.

For the purpose of evaluating the impact of reducing the available field data, we have considered pumping along only a single well and measurements along only two observation wells. The pumping wells taken into account are A1 and B1, separately (see Figure 2a), with measured pressure signals originating from wells C4 and C5 (for pumping at A1) or C3 and C4 (for pumping at B1). This implies using only data pertaining to R = 9.6 m and results in a total number of measured H(t) curves in the central zone (−12 m < z < −6 m) of 14 for A1 and 28 for B1, respectively. The difference is due to the measurements filtered out, as explained in Section 2.2. It is emphasized that the combination of A1 and B1 signals (42 in total) exhausts their contribution to the total number used in the analysis of the preceding section, since the pumping of C1 did not take any part in the analysis for R = 9.6 m. This is observed by summing the contributing signals in the R = 9.6 m column of Table 1, which yields a similar number (44). However, the following analysis accounts for the impact of reducing the number of pumping wells and measurements, as we will consider each case separately: pumping from A1 with only 14 signals and pumping from B1 with only 28 signals. The following analysis follows the same steps as the preceding section, which dealt with the ensemble of all three pumping wells.
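The identification step applied here, and repeated below for each reduced data set, amounts to solving Equation 4 for I_h and then applying Equation 7. A minimal sketch follows; the shape function F_h(f) is assumed to be available as a numerical routine (for instance from the quadrature sketched after Appendix A), and the schematic form σ_h² = σ_Y² J_h² I_h² F_h(f) is an assumption consistent with the structure of Equations 2-4 rather than a reproduction of them:

```python
# Hypothetical sketch: solve Equation 4 for I_h by root finding, then
# compute alpha_L = sigma_Y^2 * I_h (Equation 7).
from scipy.optimize import brentq

def identify_Ih_alphaL(var_h_meas, J_h, sigY2, I_v, F_h,
                       Ih_lo=0.1, Ih_hi=50.0):
    """var_h_meas : measured head variance [m^2] at (R_max, t_max)
    J_h          : horizontal mean head gradient [-]
    sigY2, I_v   : logconductivity variance and vertical scale (literature)
    F_h          : assumed callable, shape function of f = I_v / I_h
    The bracket [Ih_lo, Ih_hi] is assumed to straddle the root."""
    def residual(I_h):
        # Assumed schematic form of Equation 4.
        return sigY2 * J_h**2 * I_h**2 * F_h(I_v / I_h) - var_h_meas
    I_h = brentq(residual, Ih_lo, Ih_hi)
    alpha_L = sigY2 * I_h          # Equation 7
    return I_h, alpha_L

# Example with the BHTT values for R_1 = 9.6 m (F_h hypothetical):
# I_h, aL = identify_Ih_alphaL(6e-6, -2.9e-3, 0.5, 1.2, F_h)
```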
The first step was the derivation of K_eq and S_y,eq for each H(t) by a best fit of H_0, toward the calculation of H_0 representing the average measured head H̄. Averaging over z in the central zone led to the following values: K_eq = 2.18, 2.28, 2.25 × 10⁻⁴ m/s and S_y,eq = 0.078, 0.051, 0.057 for A1, B1, and the combined A1 and B1, respectively. It is seen that the new values for A1 and B1 are quite robust and close to those of the combined set, which have already been presented in the legend of Figure 3. The same is true for the plots of H_0 as a function of z for t = 600 s (Figure 3), which rely on the K_eq and S_y,eq values, although these are not plotted for A1 and B1 separately here for brevity. We plot the measurement averaged head H̄(z) in Figure 9 for R = 9.6 m and t = 600 s, alongside the result using all of the data (A1 and B1), as well as the fitted H_0 using K_eq and S_y,eq from all the data (replicated from Figure 3). It is seen that in the central zone of −12 m < z < −6 m all the curves practically coincide. Although H̄ based on the A1 data solely is slightly less regular, it is still quite close.

We can conclude at this point that the computed H_0, as well as the measurement based mean head for the total set, are close to H̄ based on a single pumping well and two observation wells with 28 measured signals, and in fairly good agreement even for 14 measured signals. Therefore, we can proceed with the calculations of σ̄_h² for these cases in the following.

The head variance σ̄_h² as a function of z for the A1 and B1 head measurements is calculated, and the results are presented in Figure 10. It is seen that, as expected, the variance is smaller for A1, with an average value over z of σ̄_h² = 3.4 × 10⁻⁶ m², while for B1 the result is σ̄_h² = 5.6 × 10⁻⁶ m²; the latter is close to the variance obtained from all measurements, σ̄_h² = 6 × 10⁻⁶ m², presented in Figure 4. Thus, the smaller number of A1 signals underestimates the total variance, whereas B1 and the two corresponding observation wells capture it. However, the salient question is what is the impact upon the identified parameters I_h and α_L. This is assessed by the same procedure that was carried out previously for all the data, using Equation 4 and substituting the above σ̄_h² values with the previous test parameter values σ_Y² = 0.18, 0.36, 0.50 and I_v = 1.2 m. The final results are I_h = 4.9, 2.5, 1.9 m for A1 and I_h = 8, 4.1, 3 m for B1, as compared with I_h = 7.2, 3.7, 2.7 m using all the data (Figure 7). Thus, A1 underestimates I_h by about 30% relative to the values for the complete set of data, while B1 overestimates by 10%. The agreement can be viewed as quite satisfactory in view of the uncertainty affecting I_h.

Finally, the key parameter α_L = σ_Y² I_h is practically independent of σ_Y², precisely as in Figure 8 for the total set of data. Comparison for I_v = 1.2 m yields α_L ≈ 0.9 m for A1, 1.5 m for B1, and 1.3 m for the total data set (Figure 8). Again, the agreement can be considered quite good in view of the imprecise determination of α_L in field studies (e.g., Zech et al., 2023).
Summary, Conclusions, and Future Application

This work presents a method for estimating the logconductivity horizontal integral scale I_h and the longitudinal macrodispersivity α_L of a phreatic aquifer from measured head data obtained in an HT test, consisting of pumping from intervals along a few wells and measurement of the head response H(t) by ports along observation wells. The method adopts a stochastic inversion approach: the hydraulic properties are regarded as stationary random space functions characterized by a few statistical parameters, and the aim is to identify them with the aid of those derived from the measurements of the random head field. The focus is on Y = ln K, of assumed axi-symmetric auto-correlation, completely characterized by the statistical parameters K_G (the geometric mean), σ_Y² (the logconductivity variance), and the horizontal and vertical integral scales I_h and I_v, respectively. The main idea of the new method is to calculate the variance of the measured head σ̄_h² and the horizontal gradient of the mean head J_h from the HT test data, and then substitute these in a theoretical relationship pertaining to mean uniform flow driven by a constant head gradient. The method is applied to data from an HT test carried out in 2011 at the BHRS, which included three pumping and eight observation wells.

We began our analysis by averaging the observed head on intervals of R (the horizontal radial distance from the pumping wells) and of z (the vertical distance from the water table at z = 0, extending to z = −D, the bottom), that is, over rings surrounding the pumping wells. This was done for the largest time t = 600 s (and t = 450 s for the sake of comparison), at which the flow regime is dominated by water table drainage. The random head is assumed to be stationary over the rings due to the axi-symmetric nature of the Y field and of the well flow. We focused on the two largest R and on the aquifer central interval −12 m < z < −6 m, for which there were many available measurements. We could approximate quite well the mean head by the solution pertaining to a pumping source in a homogeneous phreatic aquifer, provided we replaced K and S_y by their equivalent values, in the spirit of Cheng et al. (2022a). Attempts to identify the Y statistical parameters by using the expected value of K_eq or of H and a first-order solution in σ_Y² of the direct problem were not conducive. Instead, we used the head variance σ̄_h² derived from measurements in the central zone. In the absence of a simple first-order solution for the well flow problem, we made the additional assumption that at the relatively large R and t considered here σ_h² is related to the local mean head gradient and to the statistical parameters by the simple relationship pertaining to mean uniform flow in an unbounded domain. Exchanging the spatial σ̄_h² and the ensemble σ_h², we could obtain a simple relationship between the parameters σ_Y², I_h, and I_v. Since I_h is the most difficult to derive by direct measurements of K, due to the few wells which are typically available, we concentrated on the estimation of I_h and adopted a few values of σ_Y² and I_v from previous literature on the BHRS. The resulting range of identified I_h values is in agreement with previous estimates, suggesting that the new method presented here is promising.
We then used the same data in order to estimate the longitudinal macrodispersivity by the formula α_L = σ_Y² I_h. It turned out to be quite robust, practically independent of the selected σ_Y² values and weakly dependent on I_v. The identified α_L ≈ 1.3 m for the literature value of I_v = 1.2 m is in agreement with the values for aquifers of a similar level of heterogeneity as quantified by σ_Y². This is an important and encouraging result toward application to solute transport prediction. The assumptions and requirements for implementing the new approach are as follows. First, we assume that H and K are stationary random space functions. Then we use data obtained at large horizontal distances from the pumping well and at long times, assuming that under these conditions the flow is approximately uniform in the mean with a negligible vertical head gradient. Furthermore, we approximate the local head gradients using an analytical solution with the parameters ⟨K_eq⟩ and ⟨S_y,eq⟩. Finally, we require previous estimates of ⟨K_eq⟩, ⟨S_y,eq⟩, σ_Y², and I_v, which can be calculated by various methods; for example, the first three are calculated as explained in Cheng et al. (2022a).

Since the present study is a first attempt to apply stochastic inversion to a continuous pumping field test, it is of interest to discuss the future use of the methodology for aquifer characterization. First, we recommend additional theoretical and numerical investigations in order to validate the results based on the various simplifying approximations adopted in the study. This may include developing a simplified first-order solution as well as numerical simulations for deriving the head variance in 3D radial flow, considering confined or phreatic aquifers. Nevertheless, assuming the present results are validated, it is possible to draw a few tentative recommendations following this study.

An envisioned possible characterization configuration to guide plume-scale investigations and treatments, following the results presented in Section 3.3, is a much less extensive and costly pumping test than the full tomographic test. Our results show that the pumping test could consist of only a pumping well and two observation wells at the same distance R (which must be sufficiently large) and at different horizontal angles. We found that for the BHTT this reduced setup, with a total number of pressure signals of around 20, could lead to estimates close to those obtained by the complete configuration. An overall suggested characterization procedure could consist of first measuring K values along the three wells by existing technologies (e.g., Dietrich & Leven, 2009) in order to derive the logconductivity mean and variance, as well as its vertical integral scale. Subsequently, pumping can be carried out along a few small intervals in one of the wells and pressure signals measured at the same elevations in the two other observation wells. Based on the mean head and its variance derived from the signals in the observation wells, the equivalent conductivity, and subsequently the horizontal integral scale and the longitudinal macrodispersivity, shall be derived along the lines of the present study.
Appendix A: Derivation of the Head Variance in Mean Uniform Flow

Two key equations of this work are Equations 2 and 3, which relate the head variance σ_h² to the various parameters of the problem. The result was derived in the book by Dagan (1989, Section 3.7.2). However, the reader interested in the details of the procedure will encounter difficulties, since the developments rely on different preceding sections of the book. For this reason we recapitulate here the derivation in a complete and concise manner.

A1. The Log-Conductivity Variance

The random log-conductivity Y = ln K is written as Y = ⟨Y⟩ + Y′(x), where ⟨⟩ stands for average, ⟨Y⟩ = ln K_G, K_G is the geometric mean, assumed to be constant, and the random Y′(x) is the stationary fluctuation. The two-point covariance of Y′ is given by C_Y(r) = ⟨Y′(x) Y′(x + r)⟩ = σ_Y² ρ(r). The auto-correlation ρ is modeled as axi-symmetric, with the common simplified representation ρ(r′), where r′ = [(r_x² + r_y²)/I_h² + r_z²/I_v²]^{1/2}, with I_h and I_v the horizontal and vertical integral scales, respectively, and f = I_v/I_h < 1 the anisotropy coefficient. A common form of ρ is the exponential ρ = exp(−r′), which will be adopted here without loss of generality (other types are given in Dagan (1989, Section 3.2.3)). We shall frequently use the Fourier Transform Ĉ_Y(k) = σ_Y² ρ̂(k), where as usual f̂(k) = (2π)^{−3/2} ∫ f(r) exp(i k ⋅ r) dr, with dr = dr_x dr_y dr_z, i the imaginary unit, and ∫ standing for the triple integral over the unbounded domain; the inverse yields ρ(r) = (2π)^{−3/2} ∫ ρ̂(k) exp(−i k ⋅ r) dk. For the particular case of ρ a function of r′, it is useful to adopt the transformation k′ = (k_x I_h, k_y I_h, k_z I_v), the wave number vector conjugate to r′. For the exponential ρ = exp(−r′), the integration can be carried out analytically and ρ̂(k′) = I_v I_h² (8/π)^{1/2} (1 + k′²)^{−2}.

A2. The Head H(x) First Order Solution

The equation of steady flow satisfied by the random head, ∇ ⋅ (K∇H) = 0, can be rewritten as ∇²H + ∇Y′ ⋅ ∇H = 0. The head field is split into H = H_0 + h, where H_0 is the solution for a homogeneous medium (Y′ = 0) satisfying the boundary conditions, and h is a perturbation associated with spatial variability. For a mean uniform flow stemming, for instance, from constant heads applied on two planar boundaries, we have H_0 = −J ⋅ x + const, where J is the constant head gradient. Thus, ⟨H⟩ = H_0, while ∇²h + ∇Y′ ⋅ ∇h = J ⋅ ∇Y′, with h satisfying homogeneous boundary conditions. At first order in Y′, h satisfies ∇²h = J ⋅ ∇Y′, with neglected terms of order Y′². Application of the Fourier Transform to the last equation yields the solution ĥ(k) = −i (J ⋅ k / k²) Ŷ′(k) and, by inversion, h(x) = −i (2π)^{−3/2} ∫ (J ⋅ k / k²) Ŷ′(k) exp(−i k ⋅ x) dk.

From the expression of h above, the variance at order σ_Y² follows by using the covariance of the Fourier Transforms of the stationary Y′, ⟨Ŷ′(k) Ŷ′*(k″)⟩ = Ĉ_Y(k) δ(k − k″), where δ is the Dirac operator (see for instance Dagan (1989, Section 1.6)). Substitution in the preceding equation yields σ_h² = (2π)^{−3/2} σ_Y² ∫ (J ⋅ k / k²)² ρ̂(k) dk, which is constant. We consider a head gradient J = (J_h, J_v), that is, of horizontal and vertical components. The two contributions, for flow parallel to the bedding or normal to it, respectively, can be written as separate integrals over k′. Integration can be carried out analytically by switching to spherical coordinates for k′: k′_1 = k′ sin θ cos ϕ, k′_2 = k′ sin θ sin ϕ, k′_3 = k′ cos θ, dk′ = k′² sin θ dθ dϕ dk′, within the limits θ ∈ (0, π), ϕ ∈ (0, 2π), k′ ∈ (0, ∞). Substitution of the above expression ρ̂(k′) = I_v I_h² (8/π)^{1/2} (1 + k′²)^{−2} for the exponential covariance and the triple integration lead to the final expressions of Equations 2 and 3.
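As a numerical cross-check of the analytical result, the spectral integral above can be evaluated directly by quadrature. The sketch below implements σ_h² = (2π)^{−3/2} σ_Y² ∫ (J ⋅ k/k²)² ρ̂(k) dk in spherical coordinates; it is an illustration under the stated exponential-covariance assumption, not a substitute for the closed forms of Equations 2 and 3:

```python
# Hypothetical numerical check of the head variance integral
#   sigma_h^2 = (2*pi)**(-3/2) * sigma_Y^2 * Int (J.k / k^2)^2 rho_hat(k) dk
# for the axi-symmetric exponential covariance, in spherical coordinates.
import numpy as np
from scipy.integrate import nquad

def head_variance(sigY2, J_h, J_v, I_h, I_v):
    def integrand(k, theta, phi):
        st, ct = np.sin(theta), np.cos(theta)
        kx, ky, kz = k * st * np.cos(phi), k * st * np.sin(phi), k * ct
        kprime2 = (kx * I_h) ** 2 + (ky * I_h) ** 2 + (kz * I_v) ** 2
        rho_hat = I_v * I_h**2 * np.sqrt(8.0 / np.pi) / (1.0 + kprime2) ** 2
        # (J.k / k^4)^... : (J.k)^2 / k^4 times the k^2 sin(theta) volume
        # element leaves a k-independent projection factor:
        proj2 = (J_h * st * np.cos(phi) + J_v * ct) ** 2
        return proj2 * st * rho_hat
    val, _ = nquad(integrand, [(0, np.inf), (0, np.pi), (0, 2 * np.pi)])
    return (2.0 * np.pi) ** (-1.5) * sigY2 * val

# With J_v = 0 the result corresponds to Equation 4, and the shape function
# F_h(f) can be extracted (assuming the schematic form used earlier) as
#   F_h = head_variance(sigY2, J_h, 0.0, I_h, I_v) / (sigY2 * J_h**2 * I_h**2)
```

In the isotropic limit (I_h = I_v = I, J_v = 0) this quadrature returns σ_Y² J_h² I²/3, which serves as a basic consistency check of the normalization.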
Figure 1. Schematic illustration of the pumping test model (taken from Cheng et al., 2022a), with pumping source located at (0, 0, Z) and measurement point at (R, φ, z).

Figure 2. The Boise Hydrogeophysical Research Site well layout (taken from Cardiff et al., 2013). (a) The well distribution map from an aerial view. (b) The experimental setup for the case of pumping at B1 and five observation wells. Measurement locations (blue and green circles) and pumping locations (red circles) are indicated.

Figure 3. Mean head as a function of depth z calculated from Boise Hydraulic Tomography Test data for the two R and t values indicated in the legend. The fitted H_0 curves (dashed lines) with equivalent properties (K_eq = 2.3 × 10⁻⁴ m/s, S_y,eq = 0.057 for R = 9.6 m; K_eq = 1.9 × 10⁻⁴ m/s, S_y,eq = 0.048 for R = 12.4 m) are plotted for reference. The shaded area represents the interval of interest away from the boundaries, −12 m ≤ z_k ≤ −6 m.

Figure 4. Variance of head from measured data for different depths z and for the two R and t values indicated in the legend. The overall head variance average in the range −12 m ≤ z_k ≤ −6 m is presented with dashed lines. Circle size is proportional to the number of data points available.

Figure 5. Vertical (J_v, plot (a)) and horizontal (J_h, plot (b)) mean head gradient as a function of R, calculated by differentiation of H_0 (Equation 1), for the t and z values indicated in the legend.

Figure 6. Integral scale ratio I_h/I_v = 1/f as a function of σ_Y², calculated via Equation 4 for R_1,2 and t_1,2. Diamonds indicate results for values of σ_Y² representative of the Boise Hydrogeophysical Research Site.

Figure 7. Horizontal integral scale (average results for R_1,2) as a function of the vertical integral scale for three values of σ_Y² and t = 600 s.

Figure 8. Macrodispersivity as a function of the vertical integral scale for three values of σ_Y² and for t = 600 s.

Figure 9. Mean head as a function of depth z calculated from the reduced Boise Hydraulic Tomography Test data via pumping well A1 or B1 only. The H̄ from all data and the fitted H_0 (dashed lines) are plotted for reference.

Figure 10. Variance of head for different depths z from the reduced Boise Hydraulic Tomography Test data via pumping well A1 or B1 only. The overall head variance average in the range −12 m ≤ z_k ≤ −6 m is presented with dashed lines. Circle size is proportional to the number of data points available.

Table 1. Number of data points (after filtering) for the late time period analysis, considering each pumping elevation Z and observation location R (within 0.8 m of specified values) and for |z − Z| < 1 m.
Distant Multilevel Spinal Metastasis Secondary to Hypopharyngeal Squamous Cell Carcinoma

Head and neck squamous cell carcinomas account for most head and neck malignancies. While multi-modality treatment may be offered for locally advanced cancer, distant metastasis still occurs in a significant number of patients. This paper aims to present a rare case of a patient who developed bony metastases in the cervical spine from a primary hypopharyngeal malignancy status post-laryngopharyngectomy. We report a case of a male patient presenting with acute-on-chronic hypercapnic and hypoxic respiratory failure and two months of dysphagia and weight loss. On arrival, a barium swallow revealed mucosal irregularity of the upper thoracic esophagus as well as narrowing and stenosis. A direct laryngoscopy with biopsy revealed squamous cell carcinoma of the hypopharynx. CT neck and chest were obtained for staging. He underwent a total laryngopharyngectomy, bilateral neck dissections, and a free flap. His final staging was pT4aN2c cM0. Three months post-admission, during inpatient radiation therapy, the patient reported midline neck pain with focal bone tenderness, and an MRI of his cervical and thoracic spine was obtained, with a report concerning for spinal metastasis. A subsequent bone biopsy showed findings consistent with osseous metastasis from a primary hypopharyngeal squamous cell carcinoma. After multidisciplinary goals-of-care discussions, the patient ultimately decided to be discharged to inpatient hospice. This report highlights a rare case of hypopharyngeal carcinoma metastasis to the cervical spine. Despite its rarity and poor prognosis, such a metastasis should be considered in the differential diagnosis of patients with a history of hypopharyngeal squamous cell carcinoma and localizing symptoms.

Introduction

Head and neck squamous cell carcinomas (HNSCCs) arise from the mucosal epithelium of the upper aerodigestive tract and account for most head and neck malignancies, excluding well-differentiated thyroid cancer. The prevalence of HNSCCs varies globally, but their risk factors are well established, as 75-85% of HNSCCs are attributed to tobacco use and alcohol consumption. Oropharyngeal cancer is increasingly associated with human papillomavirus (HPV) infection, especially HPV-16. FDA-approved vaccines covering HPV-16 and HPV-18 offer potential prevention for HPV-positive HNSCC [1,2].

HPV-negative oropharyngeal cancer behaves similarly to HNSCCs from other aerodigestive sites, such as the oral cavity and the larynx, and is primarily associated with tobacco use. No effective screening strategy exists, with early detection relying on physical examination. While some premalignant lesions progress to invasive cancer, many patients present with advanced-stage HNSCCs. Treatment involves surgical resection followed by adjuvant (chemo)radiation or radiation-based treatment (with chemotherapy typically given concurrently for advanced cases) [1,2].

We report a case of hypopharyngeal squamous cell carcinoma status post-laryngopharyngectomy that developed spinal metastases while the patient was undergoing radiation therapy. We share histopathological findings from pharyngeal and spinal biopsies obtained during our patient's hospitalization.
Case Presentation The patient is a 66-year-old male with a past medical history significant for alcoholic cirrhosis (42 drinks/week), tobacco use (35 pack-years), COPD (forced expiratory volume in 1 second/forced vital capacity = 67%; long-acting and short-acting beta agonists daily; no history of exacerbations; home O2), and pulmonary hypertension, presenting with acute-on-chronic hypercapnic and hypoxemic respiratory failure and two months of dysphagia and weight loss (30 pounds). The patient was in his usual state of health until two months prior to presentation, when he noticed progressive shortness of breath with a worsening chronic productive cough. Compared to baseline, he felt more fatigued, had decreased appetite, and noticed he was losing weight. His home O2 needs had increased. Hours before presenting to the emergency room, his O2 saturation was reported to be 54% (baseline 70-80%), and he had been getting gradually more confused, altered, and fatigued. Upon arrival at our institution, the patient was intubated and admitted to the medical ICU for acute-on-chronic hypoxemic respiratory failure. While the etiology was unclear at the time, the patient rapidly improved, was extubated within 24 hours, and returned to his baseline home oxygen requirements. During the hospitalization, a modified barium swallow study was performed to evaluate his dysphagia, and it showed stenosis and mucosal irregularity of the upper thoracic esophagus. A subsequent esophagogastroduodenoscopy revealed a hypopharyngeal mass. Direct laryngoscopy with biopsy was performed, which confirmed a fungating mass overlying the supraglottic, glottic, right piriform, and postcricoid subsites, without esophageal extension. CT of the soft tissue of the neck showed a large, predominantly right-sided hypopharyngeal and supraglottic mass measuring greater than 4 cm, with right thyroid cartilage invasion and tumor extension beyond the larynx, staged at cT4N2cM0. The patient underwent a total laryngopharyngectomy, bilateral neck dissections, and a free flap reconstruction. Pathology from the surgery showed a 5.5 cm tumor centered in the aryepiglottic fold, extending to the right piriform sinus, bilateral posterior pharynx, bilateral hypopharynx, and right upper esophageal mucosa and muscularis propria. The mass abutted the thyroid cartilage, and both lymphovascular and perineural invasion were identified; only one lymph node was negative for metastatic carcinoma. Histology determined a squamous cell carcinoma (Figure 1). FIGURE 1: Hematoxylin and eosin (H&E) stained sections (100x) show the native larynx with infiltrating squamous cells forming sheets and nests. The cells have abundant cytoplasm and form "keratin pearls". He was then started on IMRT at 250 cGy per fraction in 20 fractions. He received 20 total doses of radiation over one month. His hospital course was complicated by MSSA bacteremia, requiring a pause in radiation therapy, and by significant hypoxia requiring 6 L/min oxygen via trach collar throughout the stay. Subsequently, our patient reported midline back pain, and there was concern for potential osteomyelitis from the MSSA infection. An MRI of the thoracic and cervical spine showed diffuse marrow heterogeneity, with concern for extensive osseous metastatic disease involving the cervical and thoracic spine. It was noted to potentially involve the C6, C7, T5, T6, and T12 vertebral bodies (Figures 2A, 2B). Other areas of focal marrow enhancement were seen throughout the spine.
STIR: Short tau inversion recovery A biopsy was performed at the T12 vertebra, showing evidence supporting a squamous cell carcinoma compatible with the patient's known hypopharyngeal primary (Figures 3A-3D). Immunohistochemistry of the sample was positive for CK5/6 and p40. The patient then went on to receive a palliative course of XRT to the spine. Discussion HNSCCs account for roughly 4.5% of cancer diagnoses and deaths worldwide, with 890,000 new cases and 450,000 deaths annually [3]. In 2024, there are 58,450 estimated new cases of HNSCCs and 12,230 estimated deaths, representing 2.0% of all new cancer deaths in the US [4]. Metastasis of HNSCCs is considered to occur in approximately 10%-20% of cases [5,6]. Metastases are most commonly identified in the regional cervical lymph nodes, but distant metastases can be seen in the lung, liver, and mediastinum, with rare reports of sites identified in skeletal muscle [7], heart [8], and spinal cord [9]. Metastases to the bone have historically been thought to be a rare occurrence (estimated at ~1% of distant metastases), but recent literature suggests this is more common than previously understood (5%-10%) [10]. In cases of HNSCCs, bony metastases to the lumbar spine, pelvis, shoulder, thorax, face, and extremities have been identified at rare but appreciable rates of approximately 4% [11]. However, hypopharyngeal squamous cell carcinoma metastasis to the cervical and thoracic spine is exceedingly rare and has yet to be documented with histologic confirmation. In our patient, multiple bony metastases to the cervical and thoracic spine were identified radiographically and confirmed histopathologically. H&E-stained sections of the vertebral biopsy show bone with infiltrating squamous cells forming sheets and nests (Figure 3A). These cells have abundant cytoplasm and moderately pleomorphic nuclei, and form "keratin pearls" (Figure 3B), pathognomonic of SCCs. The subsequent IHC staining definitively identified these lesions as metastatic SCC by demonstrating stains positive for p40 (Figure 3C) and cytokeratin (CK) 5/6. p40 is a nuclear marker expressed in squamous differentiation that is utilized for its superior specificity for SCC [12]. Furthermore, the lesion stains positive for CK 5 and 6 (Figure 3D), a marker highly predictive of a primary tumor of squamous epithelial origin when differentiating metastatic carcinomas of unknown primary site [13]. Despite the rarity of bony metastasis from HNSCCs, and regardless of the patient's prognosis, metastasis should be considered in the differential diagnosis of patients with a history of hypopharyngeal SCC and localizing symptoms. Symptoms, as in our patient's case, may include focal bone pain and tenderness. Localized pain and swelling could potentially limit the range of motion or even cause deformities. More obvious signs of bony metastases include pathologic fractures, spinal cord compression (motor or sensory deficits, bladder or bowel incontinence), and radicular symptoms from compression of spinal nerve roots. The presence of clinical symptoms is often a significant factor in accurately narrowing the differential diagnosis [14]. However, the very rarity of HNSCC metastasis to bone may trigger attribution and availability heuristics, which in this case would have failed the clinician [15]. A history of chronic neck pain (secondary to past trauma, degenerative processes, etc.)
would have further confounded the diagnostic schema and challenged clinical decision-making. As a result of his newly confirmed diagnosis and its poor prognosis, and in congruence with the discussed goals of care, our patient received palliative XRT in accordance with the American Society for Radiation Oncology (ASTRO) Choosing Wisely guidelines [16]. Beyond palliative XRT, other therapeutic options for metastatic SCC exist for patients with better performance status and differing goals of care. The current standard of care for these patients is pembrolizumab/cisplatin/5-fluorouracil, or pembrolizumab alone for PD-L1-positive tumors. Immunotherapy with immune checkpoint inhibitors or a combination of chemoimmunotherapy is an option as well [17]. Patients with metastatic HNSCCs that express programmed death-ligand 1 (PD-L1) should be treated with pembrolizumab or the combination of pembrolizumab and the first-line chemotherapy mentioned above [18]. Conclusions This report is a rare account of cervical and thoracic bony spinal metastases from hypopharyngeal squamous cell carcinoma. New localizing symptoms in patients with a history of advanced HNSCCs should be given attention, as they may represent pathologic recurrence of disease. FIGURE 2: (A) A STIR MRI sequence of the lower cervical and thoracic spine. The radiologist noted diffuse marrow heterogeneity, specifically in the C7, T5, and T12 vertebral bodies. (B) A T2-weighted MRI demonstrating a sliver of epidural soft tissue along the right aspect of T12, noted with an arrow. No significant cord compression was observed, however.
2024-07-19T15:06:30.117Z
2024-07-01T00:00:00.000
{ "year": 2024, "sha1": "46dcfdf43463d77a054173a08b7beb8abf3d086d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.7759/cureus.64715", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a0e5276cadc7b009e25ba01cb9edb3de9c7e2b6c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
267346943
pes2o/s2orc
v3-fos-license
Ethnicity, physical activity, and the dietary intake of boarding high school students: A photovoice mixed method study This study explores the dietary intake of female students and its associated factors to understand their meal experiences in a boarding high school setting through a photovoice study. A mixed-methods study included 60 students in a paper-and-pencil assessment and 8 students in photovoice activities. The nutritionist measured anthropometrics and trained the students to keep multiple 24-hour food records. A linear regression model was used to examine the associations and was complemented by a thematic analysis of the stories behind the produced photos. Results show that most students were of normal weight, while 26.7% were overnourished. Students' mean total energy, protein, fat, and carbohydrate intakes were 1,182 kcal, 38.7 g, 64.2 g, and 141.7 g, respectively, all lower than national recommendations. Ethnicity and physical activity were found to be associated with students' dietary intake. In the photovoice study, 14 photos were produced, and six major themes emerged during focus group discussions. Sundanese and less physically active students tended to have higher dietary intakes. To conclude, students were aware of food shortages at school and made efforts to supplement their meals, but did not make appropriate food choices. Schools should address students' meal preferences in school menus, promote healthy food, and provide practical nutritional information. Unhealthy dietary practices, commonly irregular mealtimes and snacking, contribute to overnutrition among adolescents, especially high school students with busy lifestyles [8-10]. Primarily, adolescents' food consumption is shaped at home. They are aware of family meals and the food prepared by their mothers [10,11]. However, soon after starting school, peers' opinions and food at school tend to lead high school students to avoid eating proper meals [8-10, 12, 13]. High school-aged adolescents in Indonesia prefer to consume more sweet, confectionary, and greasy foods than other age groups [14]. Their calorie and protein intakes tend not to meet the recommendations [14]. Boarding schools may ensure that high school students have regular mealtimes, as the school provides breakfast, lunch, and dinner. In regular schools, skipping breakfast is common among students, which affects their activities [9,15] and daily nutrient intake [16] and leads to negative health outcomes [17]. A study of boarders in Kilimanjaro observed that their average intake of macronutrients met the recommended daily allowance (RDA) [18]. However, Kenyan boarding high school students did not have adequate energy and nutrient intake [19]. Like the aforementioned nations, the majority of boarding schools in Indonesia offer three daily meals. Given concerns about school readiness to implement a school lunch program in terms of resources, boarding schools appear to be a potential starting point [20]. However, data on the actual nutritional status of boarding school students in Indonesia remain scarce. Rimbawan et al. [20] found that students' nutrient adequacy in boarding schools was lower than the standard. Another study reported that, despite the lower overall intake, overweight students derived most of their total energy from snacks, whereas stunted students derived it mainly from main meals [21].
This paper, therefore, aimed to add to those explorations of dietary intake among female high school students in a boarding school and sought to examine its associated factors. The selection of female participants was predetermined because previous research indicated variations in health practices among high school students based on gender. Female students were able to apply a better daily lifestyle, but most of their nutrient intakes remained lower, and their fat intake higher, than those of their male counterparts [18,22]. With the biological changes of the adolescent phase, females of high school age are more prone to nutrient deficiencies [23]. Overweight female adolescents are likely to become obese women in adulthood [24]. Further investigation was conducted in this study to fully capture the meal experience and how the students perceived a balanced meal through photographic storytelling. Methods This study used a mixed-methods design with female boarding high school students. A quantitative study aimed to assess students' dietary intake and its associations with measured variables, such as weight status, sociodemographic factors, physical activity, and nutritional knowledge, while a qualitative study aimed to capture students' meal experiences through a photovoice study [25]. Data collection for quantitative and qualitative information was conducted concurrently from January to March 2022. Information from the qualitative results was then used to provide further explanation of the quantitative results [26]. Participants Recruitment for the study was conducted in a school setting after purposively selecting one boarding high school with an onsite learning permit during the study period and 100% face-to-face classes. Students who met the inclusion criteria (female students in grades 10 and 11 who were willing to participate and had completed parental consent forms) were scheduled for measurements between January and March 2022 during the first period of school. The assessment encompassed anthropometric measures as well as completion of a self-reported questionnaire that included dietary records, sociodemographic information, physical activity, and nutritional knowledge. Students who had dietary restrictions and/or chronic disease, or who failed to complete all the measurements, were excluded from the study. The students were also encouraged to participate in a photovoice study. The photovoice study initially recruited 10% of the total participants, continuing until saturation was achieved. Instrumentation 2.2.1. Dietary Intake The dietary intake of the students was obtained using a three-day food record method covering two non-consecutive weekdays and one weekend day. The students were trained on how to complete the form prior to the study period [27], listing all food they consumed throughout the day, including portion estimation in household measures with a food image aid [28]. The study used food record forms as recommended for nutritional surveys in Indonesia [14].
Anthropometrics Anthropometric measurements were conducted by trained personnel, who measured the body fat percentage, body weight, and height of the students using a Tanita weight scale type BC-541 with a body composition feature and a Seca 206 wall-mounted measuring tape. Body fat percentage [29] and the World Health Organization (WHO) Reference 2007 [30] were used to evaluate the students' body fat and weight status, respectively. Body Mass Index (BMI) for age, generated using WHO AnthroPlus software, was applied to present the distribution of students' weight status: normal weight for a z-score of -2 SD to 1 SD, overweight for a z-score of more than 1 SD, obese for a z-score of more than 2 SD, and underweight for a z-score of less than -2 SD. The z-score value was used for further statistical analysis. Sociodemographic Factors We obtained sociodemographic data from the students using a self-reported questionnaire collecting information such as date of birth (to calculate age in years and months), ethnicity (an open-ended question that was grouped into Betawinese, Javanese, Sundanese, and others based on distribution), parents' education (middle school, high school, diploma, bachelor, master, and PhD), and monthly stipend (Indonesian Rupiah, IDR). Physical Activity The Physical Activity Questionnaire for Adolescents (PAQ-A) was used to measure students' physical activity [31]. Students whose score was 2.75 or more were considered physically active [32]. The PAQ-A score was used for further statistical analysis. The English-language PAQ-A questionnaire was translated into Bahasa by experts, including a nutritionist, a lecturer, and a high school English teacher, prior to the study. Nutritional Knowledge The nutritional knowledge of the students was obtained using a researcher-made general nutrition knowledge questionnaire developed from a literature review [33] following Indonesian food-based dietary guidelines [34]. The questionnaire was reviewed by three nutritionists using inter-objective congruence (IOC) methods and was pilot-tested with 30 high school students. The final general nutrition knowledge questionnaire consisted of 45 questions and reached a Kuder-Richardson formula (KR-20) value of 0.7, which was considered acceptable [35]. The questionnaire included topics on nutrients and sources (12 items), food groups (17 items), dietary recommendations (11 items), and food effects on health (5 items), with dichotomous answers (multiple choice, yes/no, true/false) scored as 1 for a correct answer and 0 for a wrong answer. The general nutrition knowledge questionnaire was originally drafted in English and was translated into Bahasa by experts, including a nutritionist, a lecturer, and a high school English teacher. Data Analysis Data analysis was performed using SPSS version 28.0 (Chulalongkorn University license). Descriptive statistics of participants' characteristics are presented as frequencies with percentages for categorical data and means (standard deviation; SD) or medians (interquartile range; IQR) for continuous data. Simple correlations were analyzed using Spearman's rank correlation test. To examine the extended correlation between the students' characteristics and dietary intake, multiple linear regression was performed. Several regression models were also tested to probe plausible correlations. The dependent variables total energy and protein intake were log-transformed, as the data were skewed. Significance was set at p < 0.05 with a 95% Confidence Interval (CI).
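To make the analysis pipeline concrete, here is a minimal sketch of the workflow just described, written in Python rather than SPSS. The DataFrame `df` and its column names (`energy_kcal`, `paq_a_score`, `bmi_z`, `ethnicity`, `stipend`, `knowledge`, `age`) are hypothetical placeholders, not the authors' actual variable names.

```python
import numpy as np
import pandas as pd  # df: pandas DataFrame assumed already loaded from the survey data
import statsmodels.formula.api as smf
from scipy.stats import spearmanr

def weight_status(z):
    """Classify a WHO BMI-for-age z-score using the cutoffs stated in the Methods."""
    if z < -2:
        return "underweight"
    if z <= 1:
        return "normal"
    if z <= 2:
        return "overweight"
    return "obese"

df["weight_status"] = df["bmi_z"].apply(weight_status)
df["active"] = df["paq_a_score"] >= 2.75  # PAQ-A cutoff for "physically active"

# Spearman's rank correlation between physical activity and energy intake
rho, p = spearmanr(df["paq_a_score"], df["energy_kcal"])

# Log-transform the skewed dependent variable, then fit the fully adjusted model
# (corresponding to Model 4: age, stipend, ethnicity, BMI, activity, knowledge)
df["log_energy"] = np.log(df["energy_kcal"])
model = smf.ols(
    "log_energy ~ age + stipend + C(ethnicity) + bmi_z + paq_a_score + knowledge",
    data=df,
).fit()
print(model.summary())
```

The log transform keeps the residuals closer to normal for the skewed intake variables; coefficients on the log scale are then interpreted as approximate proportional changes in intake.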
Photovoice Procedure The photovoice component included three sequential activities: the initial meeting, photo-taking, and a focus group discussion (FGD) [25]. During the initial meeting, the researcher introduced the photovoice method, the objective of the study, the photo-taking task, and ethics concerns to the students. The session's emphasis was on building rapport between the students and the research team. The students were asked to produce at least one photo describing what they perceived as balanced meals and/or the experience of eating those meals. They were given two weeks to take photos before the FGD session. All the photos were then displayed during the FGD, which lasted about 60-90 minutes. It began with each photographer sharing the stories behind their photo(s) and was followed by commentary from the rest of the group members. FGD guidelines and questions were adapted from previous studies [25,36] and related to (1) the food and the experience, (2) linkage with a balanced meal, and (3) comments or suggestions to peers. The guidelines were also pre-tested with female high school students and young female adults. FGDs were held face-to-face and separately with junior and senior students, based on their ease in explaining their thoughts and experiences. The FGD was moderated, recorded, and transcribed verbatim by the researcher for analysis. Thematic analysis of the photographs' storytelling was performed on the FGD transcripts [37-39]. Data saturation was reached when no new information emerged during additional FGDs. Written and verbal consent for the recording process was obtained from all photovoice participants prior to the study. Quantitative Results A total of 60 female boarding high school students were recruited. The mean age of the students was 15.75 (SD 0.7) years, and students majoring in science subjects were predominant (81.7%) (Table 1). Betawinese (33.3%), Sundanese (26.7%), and Javanese (23.3%) were the top three ethnicities among the participating students. Almost all of the students' parents had completed at least high school. Specifically, 51.7% of the mothers and 35.0% of the fathers had attained high school education, and 38.3% of the fathers had attained a bachelor's degree. The students received an average allowance from their parents of approximately IDR 400,000.00, or USD 26.2, per month. Students' physical activity scores had a median of 2.1 (IQR 0.6), meaning they were physically inactive based on the PAQ-A [32]. The mean score of students' knowledge of general nutrition was 64 (SD 7.3) out of 100.
More than a quarter of the students (26.7%) were overweight, with a BMI z-score value of more than 1 SD. The rest of the students had a normal BMI (73.3%), and no underweight cases were found. The mean BMI z-score of 0.47 SD, in the normal weight category, also reflected this. The proportion of overweight students was higher when the students were assessed using body fat percentages [29]: 58.3% of the students were overfat, and 30% of them were obese. Table 2 shows the body fat percentage measurement results. Not only overweight students (by BMI) had an excess of body fat; so did almost half of the students with a normal BMI (43.2%). The dietary intake of the students is summarized in Table 3. Of the 60 female students, the mean total energy intake was 1,182 kcal, with mean protein, fat, and carbohydrate intakes of 38.7 grams, 64.2 grams, and 141.7 grams, respectively, in their daily consumption. The average distribution of students' energy from protein, fat, and carbohydrate intake was 13.0%, 38.0%, and 48.7%, respectively. The food intake of all students was below the Indonesian RDA [40]. Moreover, energy from fat contributed disproportionately to students' daily intake, leaving the carbohydrate proportion lower than recommendations. Table 4 presents the direct correlations between the independent variables and dietary intake. Significant inverse correlations were found between physical activity and the students' total energy (rs = -0.327, p = 0.011), protein intake (rs = -0.267, p = 0.039), fat intake (rs = -0.278, p = 0.032), and carbohydrate intake (rs = -0.282, p = 0.029). Multiple linear regression was performed to examine the association between students' characteristics and total energy, protein intake, fat intake, and carbohydrate intake as dependent variables. There was no significant association between sociodemographic factors (Model 1, Table 5) and dietary intake. Physical activity was again found to be negatively correlated with students' total energy (β = -0.125, p = 0.032) when it was included in the model with other characteristics, such as stipend, BMI, and nutritional knowledge (Model 2, Table 5). The ethnicity of the students, especially Sundanese, was found to be associated with total energy (β = 0.137, p = 0.025) (Model 3, Table 5). Both physical activity and being Sundanese were significantly associated with dietary intake after adjusting for the full set of plausibly correlated variables: age, stipend, ethnicity, BMI, physical activity, and nutritional knowledge (Model 4, Table 5). Specifically, physical activity was negatively correlated with all dietary intakes: total energy (β = -0.151, p = 0.011), protein intake (β = -0.161, p = 0.011), fat intake (β = -16.940, p = 0.040), and carbohydrate intake (β = -44.996, p = 0.021). Being Sundanese was positively associated with total energy (β = 0.150, p = 0.013), fat intake (β = 16.445, p = 0.048), and carbohydrate intake (β = 39.695, p = 0.044).
Qualitative Results Of the 60 female students, eight were involved in the photovoice study. They represented grades 10 (n = 4) and 11 (n = 4) and produced 14 photos related to their perspectives and experiences with balanced meals and general nutrition. Saturation of the information was reached after the FGD. Six major themes emerged through the discussion: (1) awareness of having unbalanced meals; (2) reflecting on the daily meal and self-defining a balanced meal; (3) consideration of the food lacking at school; (4) having a tendency to improve the meal experience or taste; (5) being receptive during the COVID-19 situation; and (6) experience-based support for peers. Awareness of having unbalanced meals. The first theme describes students' awareness of their everyday menu at school. The students shared their meal experiences and thoughts related to food composition. They also expressed disappointment with not getting food when the food was out. "This is very interesting; the food was so lacking compared with the content of Isi Piringku. It only consisted of rice and tempeh." (4, 11th grade) "The food already contained nutrients, but it was not complete. There were no vegetables, fruit, or meat (animal-based protein). Actually, there was chicken in the soup, but I did not get it." (5, 10th grade, Figure 1) Reflecting on the daily meal and self-defining a balanced meal. The second theme describes the definition of balanced meals from the high school students' perspectives. Participating students used their daily food when explaining what a balanced meal was for them. "Balanced is when the content of the plate includes rice, vegetables, tofu, tempeh, eggs, and meat, and there should be fruits too." (5, 10th grade) "Food that contains protein, carbohydrates, and fiber in balanced portions, not too much or too little, so it's just the right portion." (3, 11th grade, Figure 2) "It includes everything we need, such as vegetables, fruit, rice, tempeh, tofu, and milk." (6, 10th grade) Consideration of the food lacking at school. The third theme depicts students' responses to facing a lack of meal experiences at school. They made some effort, such as preparing or asking someone to bring food from home, or buying some food at the canteen to complete their meals. "I don't like the menu with telur balado (boiled egg with chilli sauce). I usually take the rice only and look for other foods, like abon sapi (meat floss) that I brought from home or some fried food like fritters bought from the canteen." (5, 10th grade) "I look for foods that have vegetables in them, like soto, consisting of chicken, rice noodles, cabbage, and choy sum, and eat it during the day when the school menu has no vegetables." (7, 10th grade) Having a tendency to improve the meal experience or taste. This theme depicts students' efforts to have a good meal despite having the same menu every seven days. Students used the school canteen when they did not want to eat as usual. "This is Boncabe (chili flakes). I brought them from home and added them myself because I like spicy food. I add them on occasion if I feel like eating spicy food." (5, 10th grade) "Because I like sweet soy sauce and crackers, I bought them myself at the canteen." (2, 11th grade) Being receptive during the COVID-19 situation. The students expressed excitement about a new drink that the school provided during the COVID-19 situation. This theme describes students' acceptance of Wedang Jahe (hot ginger drink), with and without scientific reasons.
"There is Wedang Jahe; it's really good for immunity, as currently we are having a pandemic, so there are lots of benefits from drinking the wedang." (3, 11th grade) "I got this food, so I took a photo, and I got Wedang Jahe too." (7, 10th grade) "It [Wedang Jahe] is served by the school every Tuesday and Saturday morning, which started when COVID-19 happened." (5, 10th grade) Experience-based support. The students expressed their concerns about having balanced meals based on their own experiences studying in boarding school. "If you don't get a balanced meal, you can look for it like I usually do, for example, by bringing fruit (from home) and drinking lots of water to be able to concentrate. Then, if you don't like the food, don't just skip it. It's better to find other food so that your stomach is full." (7, 10th grade) "We can eat food like in the picture (sugary drinks and deep-fried food from the canteen), but not too often, and we have to balance it with food that follows Isi Piringku because it will benefit us in the future as adults. If we often eat unhealthy food, then the results will appear when we are old. While we are still young, we have to eat those balanced meals, so when we are old, we will not get sick so easily." (4, 11th grade, Figure 3) Discussion In this study, we found that female boarding high school students' dietary intake was lower than national recommendations. Factors associated with this intake were the students' ethnic origins and physical activity levels. The students were aware of the food lacking at school, and they attempted to supplement their meals and promote eating balanced meals to their peers. Students' average intake of total energy, protein, and carbohydrate only reached half of the national recommendations, while their average intake of fat nearly reached the threshold but was still below the recommendation [40]. The intake was also lower than in previous studies among general high school students, private high school students, and boarding high school students [18,19,41,42]. The food environment and busy school routine were the reasons given for the students' dietary intake, as indicated in the photovoice study. With finite choices and repetitive food, the school meals did not help students meet the recommendations for total energy, carbohydrate, and protein intake [18,19,41]. Food available at school was also greasy, carbohydrate-rich, and lacking in protein [43]. Although the school had a full-service kitchen, which would make it possible to supply healthier foods, these were unavailable because menu and food decisions were made by school staff [44], who required practical nutritional guidelines [45]. Fish provision on the school menu inversely indicated wealth compared to meat or chicken [46]. This concern was acknowledged by the school principal, who emphasized the importance of affordability for the institution.
Despite the overall low intakes, this study discovered that being Sundanese increased the likelihood of a student having higher intakes of total energy, fat, and carbohydrates. This is intriguing, as previous studies have stated otherwise. The Sundanese are mainly known for eating a plant-based diet [47]. Consuming less saturated fat and more fresh vegetables indicates a more anti-inflammatory profile for the Sundanese [47]. However, they may experience a dietary transition due to migration or their new communities' dietary patterns [48]. Dietary practices among students were also influenced by these environments [11, 49-51]. The school, with the aforementioned fixed menu and tuck shops, supplied the students' food. Having a shop inside the school that only sells salty, deep-fried food and sweets was associated with a higher consumption of these unhealthy foods [49]. Still, students stated their intentions to choose healthy food if it was accessible, as convenience was important due to their busy lifestyles [51]. The way Sundanese students coped with the food shortage at school, as depicted through the FGDs in the photovoice activities, supported these findings. Sundanese students, like other students, had food they disliked. As they were able to remember which menu was served on which day, they chose not to eat the food and substituted it with other food they had prepared at home or bought at the shop. The foods disliked by Sundanese students were telur balado (boiled egg topped with chili), stir-fried morning glory/long bean/chayote, karedok (mixed raw vegetables with peanut sauce), fried yellow tofu, and mackerel tuna. Their substituted foods were abon sapi (sweet meat floss), vegetable fritters, omelettes, sunny-side-up eggs, instant noodles, batagor (fried fish cakes with peanut sauce), chocolate buns, and other snacks. When these students did not eat the food they disliked (vegetables or less oily food), they tended to eat more fried food, instant noodles, and sweet food or snacks that were high in fat and carbohydrates as well as total energy. It was noteworthy that the students understood what complete food was but did not make healthier choices [15]. It may be that the Sundanese students snacked on sweets because they were used to eating traditional carbohydrate-rich desserts [48]. Another factor associated with students' dietary intake in the present study was physical activity. Students' physical activity was mostly categorized as inactive, following the PAQ-A manual [32]. Despite COVID-19 measures, the school ran onsite and had no restrictions on outdoor activities. However, the students spent most of their time sitting during recess, with little sport or physical activity. The trend is similar for adolescents in other developing countries [13,52,53]. Although the school allocated another 15 minutes every two days specifically for physical activity besides mandatory physical education (PE), most participating students were just standing instead of moving their bodies. To achieve adequate physical activity, it was suggested that PE educators lead PE classes or allocate outdoor time [53].
The findings of our study indicate a consistent negative correlation between increased levels of physical activity and the consumption of total calories and macronutrients. This finding resonates with previous studies. Among Portuguese adolescents, consumption of free sugar was lower in those who were regularly active [54]. Another study reached the same finding: with more physical activity, the consumption of added sugar or sugar-sweetened beverages was less likely [55]. Also, a lower intake of total energy, carbohydrates, and fat was observed among those who had a higher step count [56]. Healthy foods and behaviors were found to be associated with physical activity. Female adolescents with sedentary behavior of over 8 hours had a higher likelihood of consuming fast food and carbonated drinks than those with shorter sedentary time [55]. During the photovoice FGDs, we observed the students' awareness of the need for sufficient water consumption in relation to moving between school and dormitory activities. The PAQ-A questionnaire contains no item measuring this kind of movement in a boarding school setting, which may be another reason why participants in this study were classified as less physically active. The findings of the present study indicate that increasing students' physical activity may reduce their intake of total energy, fat, and carbohydrates, which is necessary as many female students are overweight or overfat. To support existing habitual physical activity, the school principal could consider letting the PE teacher or an advisor with PE experience accompany the students during morning stretching schedules. The school is also encouraged to consider students' preferences when deciding on the menu cycle and to counsel in-school shops to provide healthier food or snacks, such as steamed rather than deep-fried food. Furthermore, the provision of fundamental nutritional information can offer students and staff valuable knowledge to facilitate improved decision-making regarding their dietary selections. Our study recruited students from one boarding school, which may limit the generalizability of the findings. Also, despite obtaining food records for multiple days and providing training for the students, self-reported dietary intake may include inaccuracies relative to students' actual intake. We relied on students' memories and abilities to estimate food portions, which differed for each individual. Future research may consider assessing dietary intake through 24-hour food recall or food weighing for better estimations. Conclusion Female students were prone to inadequate dietary intake, despite being provided with three meals a day. The intake was found to be correlated with students' ethnicity and physical activity. The photovoice activity revealed that the food provided and available in the canteen influenced the students' food consumption. The students were aware of balanced meals and possible unmet needs. Consequently, they made efforts to complete and improve their meals, including purchasing food from the canteen. Students showed good acceptance of new food provided by the school, regardless of the persistent lack of food.
The findings on students' dietary intake add to the evidence that meals provided in boarding schools need to be improved. The photovoice study corroborated this through the students' meal experiences. The properly balanced meal concepts and the good adaptation to food deficiencies that the students conveyed could help assure understanding of the Indonesian nutrition slogan among adolescents. Finally, the present study suggests that a nutrition program is necessary in boarding schools to support students' diets and nutritional status. Figure 1. Example of a student's awareness of having an unbalanced meal. Table 5. Multiple linear regression model of students' characteristics and dietary intake.
2024-02-01T16:15:54.593Z
2024-01-25T00:00:00.000
{ "year": 2024, "sha1": "b6786a34330f242fe268461113e6f40986e499a3", "oa_license": "CCBYNC", "oa_url": "http://www.ijirss.com/index.php/ijirss/article/download/2628/446", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "3805bba7743d3d4da8ffbe424ec2791b43259c88", "s2fieldsofstudy": [ "Sociology", "Education" ], "extfieldsofstudy": [] }
182479056
pes2o/s2orc
v3-fos-license
BIODEGRADATION OF ENERGY PLANTS SUBSTRATES WITH CULINARY-MEDICINAL MUSHROOMS The processes of decomposition of cellulose and lignin, the most stable natural polymers and at the same time the main components of vegetation and wood, which accumulate annually in huge amounts on Earth, have attracted the attention of scientists during recent decades. The ability of basidial macromycetes to effectively degrade lignocellulosic substrates opens up wide possibilities for their use in biotechnologies for the processing and utilization of waste from the agro-industrial complex. Given the rapid growth of biofuel production in the world, the development of technologies for the biodegradation of the substantial volumes of unreturned residues of bioenergy plants is a relevant task [1, 2]. For Ukraine, an important resource for the production of biofuels is the introduction of highly productive plant varieties from the Poaceae, Polygonaceae, Malvaceae, Amaranthaceae, Asteraceae, Fabaceae, and Brassicaceae. A collection of energy plants has been created at the M.M. Grishko National Botanical Garden of the NAS of Ukraine and includes 365 species, varieties, and forms [1]. Mushrooms utilizing plant residues (including cereal straw, oats, sunflower husk, corn cobs, flax, and other wastes from the agro-industrial complex) are a powerful source of extracellular enzymes capable of degrading the lignocellulose of these substrates to low-molecular-weight compounds. Today, in addition to the valuable nutritional properties of edible species, approximately 130 medicinal functions are known for mushrooms, including antitumor, immunomodulating, antioxidant, radical-scavenging, cardiovascular, anti-hypercholesterolemic, antiviral, detoxifying, hepatoprotective, and antidiabetic activities [3, 4]. For these reasons, mushrooms are certainly a valuable biotechnological object. The aim of the present research was to estimate the degradation of energy plant substrates, the linear growth rates of mycelium, and the yield of mushroom biomass (fruit bodies and mycelium) of several culinary-medicinal mushrooms cultivated on non-toxic residues of energy plants selected in Ukraine.
MATERIALS AND METHODS To study the growth rates, biodegradation of substrates, and fructification, pure cultures of four species of culinary-medicinal mushrooms were used: Pleurotus ostreatus (Jacq.) P. Kumm., Pleurotus eryngii, Flammulina velutipes, and Ganoderma lucidum. RESULTS AND DISCUSSION According to P. Baldrian [5], fungi play an important role in the processes of lignocellulose transformation in the natural environment due to their ability to simultaneously destroy both polysaccharides and polyphenols in the organic layer of soil. At the same time, while some of the main principles of the regulation of enzyme activity are clear, the question of the production and diversity of enzymes at the molecular level remains open. Unlike other researchers, this author emphasizes that not only saprotrophic basidiomycetes and ascomycetes are involved in the decomposition of lignin and cellulose; the contribution of ectomycorrhizal fungi can also be significant [5,6]. V. Elisashvili and G. Kvesitadze pay special attention to the cultivated culinary-medicinal species Pleurotus ostreatus and Lentinus edodes and their biotechnological potential as sources of oxidative and hydrolytic enzymes. The authors propose the use of substrates after cultivation and harvesting as a cheap source of lignocellulolytic enzymes, in particular for bioremediation. Enzymatic degradation of lignocellulosic substrates is carried out mainly with the participation of oxidoreductases and hydrolases, which also determines the significance of these enzymes in the biodegradation of polymers and xenobiotics that are difficult to hydrolyze [7]. I. Pavlov et al. [8] note that the leading mechanism of bioconversion is the action of a multi-enzyme complex whose synthesis depends on the growth substrate, the physiological and biochemical characteristics of the producer strain, and its genetic features. First, the linear growth rate of pure cultures of the selected mushroom strains was determined on agar medium supplemented with aqueous extracts of the investigated substrates at a temperature of 25°C. In these experiments, maximal growth rates for all strains (higher than on the control medium, 8° wort agar) were observed on medium supplemented with an aqueous extract of S. saccharatum. The cultivation of pure cultures of the studied strains of lignotrophic mushrooms on pre-shredded (1-3 cm), moistened, and sterilized substrates from energy plants showed the ability of each species to overgrow the substrates within a certain period of time.
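The linear growth rate measurement described above lends itself to a simple calculation: the slope of colony radius against time. Below is a minimal sketch, in Python, of how such a rate might be computed from periodic radius measurements; the numerical values are illustrative placeholders, not data from this study.

```python
import numpy as np

def linear_growth_rate(days, radii_mm):
    """Estimate the mycelial linear growth rate (mm/day) as the slope of a
    least-squares line fitted to colony radius measurements over time."""
    slope, _intercept = np.polyfit(days, radii_mm, deg=1)
    return slope

# Illustrative measurements: colony radius recorded every other day
days = np.array([0, 2, 4, 6, 8])
radii = np.array([0.0, 7.5, 15.2, 22.8, 30.1])
print(f"Growth rate: {linear_growth_rate(days, radii):.2f} mm/day")
```

Fitting a line rather than taking a single two-point difference smooths out measurement noise in the radial readings.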
It should be noted that the initial substrate volume decreased significantly by the end of the experiment (by a factor of 1.7-2.8) (Figs. 1, 2). In addition, we studied mushroom culture growth on mixed substrates of energy plants and oak sawdust (OS) in a 1:1 ratio. In the case of a 100% oak sawdust substrate, the rate of mycelial growth was strongly suppressed in P. ostreatus and P. eryngii (apparently connected with the high acidity of the substrate, pH 3.8). In the other cases, the addition of oak sawdust also inhibited mycelial growth (pH in the range of 6 to 6.5). The time required to overgrow the substrates at 25°C ranged from 7 days (P. ostreatus) to 12 days (F. velutipes); fruit bodies appeared in G. lucidum (28 and 35 days) and F. velutipes (34 and 38 days), respectively, on the S. saccharatum and C. sativa substrates. For the fruiting stage, the mushroom cultures were kept indoors with daylight and at a temperature of 20°C. It should be noted that among the studied lignotrophs, F. velutipes and G. lucidum utilized the energy plant substrates most actively and showed a high capacity to form fruit bodies in culture (Table). In the variants with P. ostreatus, which colonized all substrates fastest (with the exception of oak sawdust), abundant formation of primordia was observed. Mycelial growth of the king oyster mushroom (P. eryngii) was the most dependent on the substrate, and this species formed fruit bodies on the S. saccharatum (SS) and C. sativa (CS) substrates (Fig. 3). Thus, the obtained results indicate that among the investigated substrates, the most promising for biodegradation with lignotrophic mushrooms are Sorghum saccharatum and Camelina sativa. All studied strains overgrew the energy plant substrates with a significant decrease in their volumes. Mushroom biomass (fruit bodies and mycelium) produced as a result of the proposed cycle, grown on non-toxic waste, can also be used for culinary and medicinal purposes and as an effective fertilizer. Primary biodegradation of energy plant substrates by the mycelium of lignotrophic mushrooms prepares this material for a subsequent cascade of biochemical transformations involving micromycetes of different systematic groups (including yeast fungi) and bacteria to form the final biofuel product in bioreactors. Further studies of the biodegradation potential of macromycetes for lignin-cellulose complexes and the development of biofuel mycotechnologies require the selection and study of the cultural features of highly effective strains promising for the production of biofuels from energy plants of different uses (plants with high contents of sugar, carbohydrate, oil, etc.). CONCLUSIONS Among the investigated substrates of energy plants, the most promising for biodegradation with the studied lignotrophic culinary-medicinal mushrooms are Sorghum saccharatum and Camelina sativa. Mushroom biomass produced in the proposed cycle, when grown on non-toxic waste, can also be used for food and medical purposes, or as a valuable fertilizer.
2019-06-07T21:51:06.218Z
2019-04-09T00:00:00.000
{ "year": 2019, "sha1": "b8b026992f5bb6315d4e5944acb203924edd01ab", "oa_license": null, "oa_url": "http://journalagroeco.org.ua/article/download/163268/166998", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "045cda0ed1f733c6af737adf5a03096df783ffea", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Chemistry" ] }
251111006
pes2o/s2orc
v3-fos-license
Poverty, ICT and Economic Growth in SADC Region: A Panel Cointegration Evaluation: Although there is a wealth of evidence on the relationship between economic growth and poverty, or between poverty and information and communication technology (ICT), there are few studies on the interaction between the three factors. The triangular relationship between poverty, information and communication technology, and economic growth in SADC from 2005 to 2019 was investigated in this study. The research looked at how ICT and economic growth impact poverty in SADC countries, using Mean-Group FMOLS, Mean-Group DOLS, and robustness Mean-Group estimators to achieve its major objective. Principal component analysis was employed to generate a single index value for ICT, and the data were subjected to the relevant econometric tests to achieve robust results. Findings showed that all the variables exhibited poverty-reducing effects in SADC except inflation. The results confirm the existence of the "leapfrogging" hypothesis for the region. It is necessary to strengthen existing bilateral links among member nations of the area to maintain the benefits of ICT's poverty-reducing impacts, economic growth, financial development, and trade openness. As in other advanced and some emerging economies, the digital competence of the region needs to be synchronized for effective ICT service delivery. Introduction In recent years, Information and Communication Technology (ICT) has taken center stage in global economic activities. Contesting the fact that it has turned the world into a global village would be futile, to say the least. Apart from employment generation, its contributions to the GDP of countries, in both regional and country-specific terms, have been quite substantial. Although the social and economic situations of countries within SADC differ marginally, it is also important to state that the degree of ICT service usage depends on member states' economic power. While the argument on the relationship between poverty, ICT, and economic growth continues, we consider it essential to provide a deeper understanding of the extent to which ICT has contributed to economic growth and alleviated poverty in SADC as an economic bloc. The idea of today's SADC was mooted at a summit in Windhoek, Namibia in 1992, when the heads of the frontline states (Angola, Botswana, Mozambique, Tanzania, and Zambia, the original members) met to sign the Southern Africa Development Community (SADC) Declaration and Treaty to replace the Southern African Development Coordination Conference (SADCC). The driving spirit of the region was based on what they referred to as "Towards a Common Future", with the secretariat located at Gaborone in Botswana. As of today, there are sixteen member countries (Angola, Botswana, Comoros, Democratic Republic of Congo, Eswatini, Lesotho, Madagascar, Malawi, Mauritius, Mozambique, Namibia, Seychelles, South Africa, United Republic of Tanzania, Zambia, and Zimbabwe), following the admission of Comoros in 2017. The objectives of this economic bloc, as contained in its article of formation, are: i. development and economic progress; poverty reduction; raising the standard of living of Southern Africans and providing support to the socially disadvantaged through regional integration; ii.
development of a set of shared political principles, systems, and institutions; iii. promotion and defense of peace and security; iv. promotion of self-sustaining development based on member states' interdependence and collective self-reliance; v. achievement of complementarity between national and regional strategies and programs; vi. promotion and maximization of productive employment and resources, effective environmental preservation, and the building and consolidation of the region's long-standing historical, social, and cultural affinities and links. However, one method to achieve these lofty goals, particularly the first, is through ICT investment. As a result, member nations signed the Information and Communications Technology Declaration in 2001 to promote regional ICT policy and strategy. The treaty intends to promote sustainable economic development and technology and to close the digital divide between the area and the rest of the world, among other things. As of 2018, the total GDP of the region accounted for 25.6% of Africa's GDP, with an estimated value of US$721.3bn at an annual growth rate of 1.8%, with services (ICT inclusive) contributing 59.4%. The region occupies a land area of 556,781 km2 with an average life span of 61 years. Since the global financial crisis of 2007-2009, the GDP growth rate of the region has been on a downward trend. For instance, GDP growth fell from 6.8% in 2007 to 0.2% in 2009, rose to 4.5% in 2012, and fell again to 2.1% in 2017 and 1.2% in 2018 [1]. Accounting for the slow growth rate are a rising inflation rate, increasing government debts, and low commodity prices. A close review of mobile phone users within the region shows that South Africa has the highest usage. Within ten years, the number of mobile telephone users in South Africa increased from 33.96 m in 2005 to 87.99 m as of 2015. Although with smaller populations, countries such as Eswatini, Comoros, and Seychelles had 941,000, 424,786, and 148,244 mobile phone users, respectively, as of 2015. In terms of internet users, and surprisingly, Seychelles recorded the highest percentage, with 58.12 percent against South Africa's 51.91 percent as of 2015. At the lowest rungs of the ladder are the Democratic Republic of Congo (3.79%) and Madagascar (4.17%) during the same period [2]. Another challenge facing the region, just as in other regions of sub-Saharan Africa, is poverty. As reported in [3], almost half of the population of the region survives on less than 1 US$ per day. To tackle poverty, which has become an overarching priority in the region, the Regional Poverty Reduction Framework was to take effect in 2013. Other economic issues facing the region, as in other parts of Africa, include the creation of a climate conducive to regional integration, economic growth, poverty eradication, and the channeling of resources toward sustainable development. Another aspect of this study is to investigate the causal relationship between ICT and poverty reduction within the SADC region. However, there are two opposing views about this relationship. At one extreme are those who share the view that ICT does not reduce poverty if it cannot provide for the basic needs of the poor [4]. Furthermore, it has been argued that only rich people can afford the cost of procuring ICT and thus enjoy its services, while the downtrodden masses, who can hardly afford a square meal per day, cannot. Unfortunately, the low-income countries within SADC account for over 60% of the total population of that
region. Similarly, the study by [5] on whether there is any correlation between access to ICT and poverty in South Africa produced mixed results. It was observed in the study that areas with a high level of poverty are expected to experience lower access to ICT services than areas with a low poverty rate, which are likely to enjoy more access. In a textual analysis of policymakers in selected African countries appraising the nexus between ICT and poverty reduction, ref. [6] concluded that policies on ICT in Africa are being tailored toward empowering the poor. Furthermore, ref. [7] appraised the link between ICT and poverty reduction in Romania and confirmed the role of ICT in reducing the poverty level of that country. On the proliferation of mobile phones, ref. [8] asserted that ICT innovation has become a major force to reckon with in the developmental process of any economy. On the other hand, ref. [9] submitted that, despite the technological innovation, opinions in the socioeconomic context differ on the ICT-economic growth hypothesis. Further, studies such as [10,11] revealed a bidirectional relationship between ICT and economic growth, while a unidirectional causality was established in the study by [12]. Drawing on an Afrobarometer survey of 34 African countries, ref. [13] said that the much-celebrated achievements in the growth rate of Africa's economy have neither reduced the continent's poverty rate nor improved the people's living standard. This opinion was corroborated by [14] in his submission that the potential for an ICT explosion in sub-Saharan Africa might not be realized because of the dominance of foreign nationals at the expense of rich cultural values and traditional institutions. Evidence abounds on the relationship between economic growth and poverty, or between poverty and Information and Communication Technology (ICT), but studies on the nexus between these three variables are still scanty in the development economics literature. Not only that, but differing opinions also permeate the triangular relationships among these variables. Ref. [15] opined that ICTs' relevance transcends the creation of sources of income for the most disadvantaged people in society, the improvement in health and educational services, and the reduction in poverty levels. Other researchers have also alluded to the benefits of ICT, especially in the areas of global integration, economic growth enhancement, and the provision of new opportunities for the people, and as an antidote to poverty [15-21]. As pointed out by [22], one key benefit of ICT in Africa is its ability to reduce and eradicate poverty.
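The abstract above notes that principal component analysis (PCA) was used to collapse several ICT indicators into a single index. Although the detailed methodology lies outside this excerpt, a minimal sketch of such an index construction might look as follows; the indicator column names are hypothetical placeholders, not the authors' actual variables.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def ict_index(df, indicators=("mobile_subs", "internet_users", "fixed_tel")):
    """Collapse several ICT indicators into one composite: standardize each
    indicator, then take the first principal component as the index."""
    X = StandardScaler().fit_transform(df[list(indicators)])
    pc1 = PCA(n_components=1).fit_transform(X)
    return pd.Series(pc1.ravel(), index=df.index, name="ict_index")
```

The first component captures the common variation across the indicators; its sign can be flipped, if needed, so that higher index values correspond to greater ICT penetration.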
In 2020, ICT accounted for half of the total GDP of the countries of the world. However, despite the appreciable and impressive achievements in terms of economic growth in the SADC region, poverty remains an issue in the region. Our pertinent research question centers on the ambivalence regarding the triangular relationship between ICT, economic growth, and poverty in the SADC region. Nonetheless, the dearth of strong empirical evidence on the triangular association between poverty, ICT, and economic growth, especially in the regional context, remains unresolved by previous studies, necessitating further research interest. Furthermore, whether a "leapfrogging hypothesis" exists in the SADC is also an issue of debate among policymakers. Thus, our research questions were: (i) What is the direction of causality among these variables [poverty, ICT, and economic growth]? (ii) Do ICT and the economic variables [economic growth, financial development, and trade openness] have long-run poverty-reducing effects? and (iii) Does the leapfrogging hypothesis exist in the SADC region?

Related Review of Literature: [Poverty, ICT, and Economic Growth]

The relationship between ICT and economic growth, on the one hand, and ICT and poverty reduction, on the other hand, has been verified by empirical investigations. Others in the economic literature have looked into the relationship between economic growth and poverty eradication. The nexus between poverty and economic growth, in turn, is at the core of the relationship between poverty and ICT. The understanding here is that ICT evolution impacts poverty positively through economic growth [23-25]. As early as 2003, when researchers predicted that ICT was fast becoming a vital engine for global economic advancement, its link with economic development and poverty reduction was described as an extraordinary venture [26,27]. Other frontline apostles of the nexus between ICT and poverty alleviation include [28,29]. According to [28], if the Millennium Development Goals are to be taken seriously, the contribution of ICT to poverty reduction should be a major issue in the international debate. However, other critics believe that ICT constitutes an additional expense for the poor and therefore adds to their level of poverty [30]. Refs. [16,31] found a direct relationship between a low level of poverty and higher usage of ICT, and vice versa. Ref. [15] employed Eurostat data from 2014-2017 and applied Partial Least Squares for European Union countries to the relationship between investment in ICT and sustainable economic development. One of the study's primary findings is that ICT has a positive link with GDP in the European countries studied. Ref. [18] investigated the relationship between ICT, foreign direct investment, and economic growth for the BRICS economies using OLS with fixed effects, the FMOLS, the DOLS, and the group-mean estimator techniques on data from 2000 to 2014. They discovered that ICT positively aided economic growth for the BRICS economies. However, depending on their level of ICT usage, this link differs from country to country. Ref. [17] used the Autoregressive Distributed Lag approach to examine the relationship between ICT and economic growth in India. They found that ICT had a beneficial impact on the country's economic growth.
Although [32]'s submission on the effects of ICT on economic growth in European Union countries supported theoretical and empirical findings that ICT infrastructure is a major driver of economic growth in EU countries, it was concluded that the degree of the effects varies among member countries depending on the technology under consideration. In a previous study by [10] on the causal link between ICT (telecom) and economic growth in EU nations, it was discovered that these variables have a bidirectional relationship in countries with high incomes but a unidirectional association in those with lower incomes. They concluded that ICT is not a significant determinant of economic growth in developing countries. This implies that opinions on the impact of ICT on economic growth are still divided. Ref. [33] found that improvements in ICT had a statistically favorable link with economic growth in their investigation of the impacts of ICT on economic growth in 54 African countries. The results of the estimated pooled OLS and GMM models backed up the "leapfrogging" hypothesis, indicating that Africa can use ICT to skip developmental phases at both the regional and continental levels. Similarly, [34] found that ICT increases growth at both the global and regional levels in their study of the influence of ICT on economic growth. The leapfrogging hypothesis concerning the relationship between ICT and economic growth was equally verified statistically by that study, which used the OLS, pooled OLS, Two-Stage Least Squares (2SLS), and GMM approaches. The above viewpoints differ from the findings of [35], which disproved the leapfrogging hypothesis. Further investigation into the relationship between ICT and economic growth in Sub-Saharan Africa has yielded varied results [32,36,37]. In their study, Ref. [37] found that ICT proxies such as fixed telephone lines, mobile phones, and internet usage have a positive and statistically linear effect on economic growth in SSA, but the nonlinear statistical analysis shows that these proxies slow down the economic growth of the region. Ref. [38] looked at population growth, GDP per capita, ICT (internet users), and inflation as economic drivers in Nigeria, and found that increases in inflation, GDP per capita, and population negatively affect ICT, and so negatively affect Nigeria's economic growth. Ref. [19], in a study of the influence of ICT diffusion on economic growth in 45 developing Middle East and North Africa (MENA) and SSA nations, found that mobile phones, internet usage, and broadband adoption were the region's key economic drivers. Only the fixed telephone element of the examined ICT variables demonstrated a negative link with economic growth within the research regions when using a two-step Generalized Method of Moments for data from 2007 to 2016. As a result, many low-income earners in the tested nations do not profit from the benefits of ICT. The interchangeability of fixed and mobile telephones could be one cause of this. This is a mixed result that requires further investigation to see how it applies to SADC member countries. While ref. [39] showed that ICT does not contribute much to economic growth in Japan, Ref. [40] revealed a low contribution from ICT to growth in Latin America. Examining the causal relationship between ICT and economic growth in ten Latin American countries, Ref.
[11] revealed that a two-way causality exists in eight of the ten countries under review. However, it was acknowledged that differences in the measurement of ICT could be the cause of the differing results. Development in ICT and financial inclusion go hand in hand, as the relationship between these variables is a symbiotic one [41]. This suggests ICT as a requirement for financial inclusion that enhances sophisticated products which will ultimately support the expansion of the ICT sector. Another dimension to the ICT-economic growth nexus, tagged the "ICT-finance-growth" relationship, is the argument supporting the influence of financial development. The panel analysis in the latest study by [42] on ICT spread and the finance-growth nexus in ECOWAS demonstrates that only the coupling of financial development with ICT has a major impact on economic growth. Financial development almost always has an indirect impact on economic growth through interactions with ICT. This was in contrast to a previous study by [43], which found Granger causality between ICT penetration, financial development, and economic growth in both the short and long run. Overall, governments in the regions should increase their ICT investment if the desired influence of financial development on economic growth is to be felt in the regions. Continuing with the ICT-growth-poverty nexus is the relevance of human and infrastructural development in SADC. Ref. [44] emphasized that economic growth in SADC requires investment in infrastructural facilities such as ICT and human capital development. As against the use of the static panel technique, the study employed dynamic panel data and showed that infrastructural development affects economic growth positively. As revealed in the study, the region has challenges in the areas of power supply, good road networks, inadequate supply of clean water, and the high cost of ICT acquisition, all of which have been eroding the economic growth aspirations of the region, thereby reducing the standard of living of the people and contributing to their poverty level. In the same spirit, Ref. [45] applied regression analysis to examine the link between ICT (mobile, internet, and telephone) and human development in SADC and revealed that the usage of these components exerts a positive effect on human development.

Further, Refs. [46,47] confirmed the leapfrogging hypothesis in their various studies, in that ICT affects economic growth positively in developed and developing economies, while [35] arrived at a contrary conclusion. This is an indication that opinions differ on the effects of ICT on economic growth. Other studies, such as [34], also showed that the effects of ICT on economic growth in developing economies are stronger than in developed countries. This was in tandem with the findings of [21], where financial inclusion was seen as an instrument of economic growth and poverty alleviation.

The above review shows that research into the causality between poverty, ICT, and economic growth has not received much attention in the literature, hence the need to bring the relationship to the fore. The integration of these three concepts is expected to result in a new paradigm and create opportunities for sustainable growth and recovery from the current economic challenges facing the SADC region.
Data and Methodology

To achieve our research aim, this study employed the Mean-Group FMOLS, Mean-Group DOLS, and robustness Mean-Group estimators in addressing the identified research areas in 16 SADC countries over the period 2005 to 2019. This period coincides with the evolution of ICT and the fourth industrial revolution, which was characterized by the digital explosion.

Description of Variables and Data

The analysis covers the link between poverty, ICT, and economic growth in 16 SADC countries from 2005 to 2019. We sourced data mainly from the World Bank World Development Indicators (WDI) and the SADC statistical yearbook. It is a panel data study with financial deepening, trade openness, and inflation as additional explanatory variables.

1. Poverty (POV_it): We measure poverty using household final consumption expenditure per capita growth (annual percentage). This measurement is preferred, especially over the popular headcount ratio, because of data availability (see [25,48]). Apart from the fact that this method of measurement has been used extensively in the literature [23,33], it is often favored in the sense that the required information on the poverty gap and the incidence of poverty, among others, may not be readily available since we are dealing with less developed and developing countries.

2. ICT (ICT_it): Our measurement of ICT for this study consists of a composite index of the following three variables: fixed telephone lines (measured as fixed telephone subscriptions per 100 users), mobile telephone subscribers (measured as mobile cellular subscriptions per 100 inhabitants), and internet users (measured as secure internet servers per 1,000,000 inhabitants). This has been used extensively in the literature (for instance, see [19,42]). The study adopts Principal Component Analysis (PCA) to arrive at a single composite index value for the sub-variables of ICT (see [18,42,43]; a sketch of this construction is given after this list). This approach has the benefit of harmonizing the ICT components into a single linear form [46,47].

3. Economic growth (GDPgr_it): Economic growth is represented by the growth rate of GDP per capita, measured at constant 2010 US$, in line with the United Nations' and the World Bank's measurement of economic growth [49-52].

We also include some variables as controls for the study. These are: (i) financial development (measured by domestic credit to the private sector); (ii) trade openness (measured as trade as a percentage of GDP); and (iii) inflation (measured by the consumer price index, 2010 = 100). The countries in SADC are very interconnected, interdependent, and interrelated, and this informed our decision to include trade openness as an additional control variable. Ref. [53] demonstrated that the effect of financial development on economic growth at a lower level of inflation is greater than at a higher level of inflation. This suggests that financial development and inflation jointly affect growth, even though at different frequencies.

Table 1 presents the summary of the variables and their measurement.
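As a concrete illustration of how such a composite index can be built, the following Python sketch applies PCA to three standardized ICT indicators. The column names and the synthetic panel data are illustrative assumptions, not the study's actual dataset.

```python
# A minimal sketch, under assumed variable names, of building the composite
# ICT index from MCS, FTS, and SIS with principal component analysis.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 16 * 15  # 16 countries x 15 years, stacked long (toy data)
df = pd.DataFrame({
    "mcs": rng.gamma(5.0, 15.0, n),   # mobile cellular subscriptions per 100 people
    "fts": rng.gamma(2.0, 2.0, n),    # fixed telephone subscriptions per 100 people
    "sis": rng.gamma(1.5, 50.0, n),   # secure internet servers per 1,000,000 people
})

# Standardize so no single indicator dominates the component loadings.
z = StandardScaler().fit_transform(df[["mcs", "fts", "sis"]])

# The first principal component serves as the single composite ICT index.
pca = PCA(n_components=1)
df["ict_index"] = pca.fit_transform(z)[:, 0]

print("share of variance captured:", pca.explained_variance_ratio_[0])
print("loadings (mcs, fts, sis):", pca.components_[0])
```

The first component's loadings play the role of the weights in the "weighted average" described in the methodology below.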
Analytical Technique

Following [21,54,55], the triangular relationship between poverty, economic growth, and information and communication technology commences as follows:

POV = f(ICT, ECO),   (1)

where POV is poverty, ICT denotes information and communication technology, while ECO represents economic growth. Expressing Equation (1) in econometric form becomes:

POV_it = α_i + β ICT_it + φ ECO_it + ϕ Z_it + ε_it.   (2)

Here, POV_it denotes poverty, α_i denotes the country-specific intercept, ICT_it is information and communication technology, ECO_it denotes economic growth, and Z_it is a vector of other exogenous variables that affect poverty, i.e., financial development (FD), trade openness (TOP), and macroeconomic uncertainty proxied by the inflation rate (INF) (see [42,53,54]), while i denotes the countries, 1, 2, ..., N; t is the time period, 1, 2, ..., T; and ε_it is a time-varying error term. The parameters to be estimated are denoted by β, φ and ϕ. The a priori outcomes of these parameters are β > 0, suggesting that the effect of ICT on poverty is expected to be positive; φ > 0, indicating the poverty-reducing effect of economic growth; and ϕ > 0, showing the positive effects of the other exogenous variables, except inflation, which is expected to exert a negative impact on the core variables.

Principal Component Analysis (PCA): The study employs principal component analysis (PCA) to generate the ICT index value from mobile cellular subscriptions (MCS) per 100 people, fixed telephone subscriptions (FTS) per 100 people, and secure internet servers (SIS) per 1,000,000 people [18,42]. Principally, PCA is an approach that transforms a set of series into a smaller composite index. The steps involved in the computation of the PCA have been well documented in extant studies [18,43,50,56]. We decomposed the MCS, FTS, and SIS into a weighted average called the ICT value.

Cross-sectional dependence: In panel data analysis, the usual assumption is that disturbances in panel models are cross-sectionally independent, especially when a large cross-section (N) is involved [55]. Meanwhile, in reality, cross-sectional dependence in panel analysis appears to be the rule of the game, and thus it cannot be underestimated [57,58]. Therefore, assuming cross-sectional independence may pose serious problems that result in estimator inefficiency and invalid test estimates. Pesaran's cross-sectional dependence (CSD) test detects potential CSD through the systematic residual correlation across different units, using pairwise correlation coefficients between regressors or residual series [58]. Following [59], given panel data of the form

y_it = α_i + β'_i x_it + ε_it,   (3)

where i = 1, 2, ..., N and t = 1, 2, ..., T, and x_it is an n-dimensional column vector of regressors, the null hypothesis is that there is no cross-sectional dependence, and it is explicitly stated as H_0: ρ_ij = Corr(ε_it, ε_jt) = 0 for i ≠ j, where ρ_ij is the product-moment correlation coefficient of the residuals for a balanced panel sample, expressed in Equation (4) as:

ρ̂_ij = (Σ_{t=1}^{T} ε̂_it ε̂_jt) / [(Σ_{t=1}^{T} ε̂_it²)^{1/2} (Σ_{t=1}^{T} ε̂_jt²)^{1/2}].   (4)

Thus, the cross-sectional dependence test proposed by Pesaran obtains a statistic based on the average of the pairwise product-moment correlation coefficients ρ̂_ij as follows:

CD = [2T / (N(N − 1))]^{1/2} Σ_{i=1}^{N−1} Σ_{j=i+1}^{N} ρ̂_ij.   (5)

Given a relatively small T, as is the case in this study, the asymptotically standard normal Pesaran CD test is represented in Equation (5). The decision rule here is that if the t-statistic value is significant, the Pesaran CD test rejects the null hypothesis of no cross-sectional dependence and concludes that there is cross-sectional dependence in the panel data.
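For illustration, the CD statistic of Eqs. (4) and (5) can be computed directly from a matrix of residuals. The following is a minimal Python sketch with synthetic residuals; the common-shock data-generating process is an assumption made purely for demonstration.

```python
# A minimal sketch of the Pesaran CD statistic, Eq. (5), computed from a
# hypothetical N x T matrix of regression residuals (one row per country).
import numpy as np
from scipy import stats

def pesaran_cd(resid):
    """resid: (N, T) array of residuals from the unit-by-unit regressions."""
    n, t = resid.shape
    # Pairwise product-moment correlations of the residual series, Eq. (4).
    rho = np.corrcoef(resid)
    iu = np.triu_indices(n, k=1)                  # pairs with i < j
    cd = np.sqrt(2.0 * t / (n * (n - 1))) * rho[iu].sum()
    pval = 2.0 * (1.0 - stats.norm.cdf(abs(cd)))  # asymptotically N(0, 1)
    return cd, pval

rng = np.random.default_rng(1)
common = rng.normal(size=15)                      # a common shock induces dependence
resid = 0.8 * common + rng.normal(size=(16, 15))
cd, p = pesaran_cd(resid)
print(f"CD = {cd:.3f}, p = {p:.4f}")  # small p -> reject the null of no CSD
```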
Unit Root Tests

In panel data, some unit root tests are applicable to homogeneous panels while others apply to heterogeneous panels. Put differently, panel unit root tests are further classified into first-generation and second-generation panel unit root tests. The first-generation unit root tests apply to homogeneous panels while the second-generation unit root tests apply to heterogeneous panels (these tests allow the unit root processes to vary among the cross-section units). Homogeneous panels are assumed to have common unit root processes for all the cross-section units; examples used in this study include the Levin-Lin-Chu (LLC) test, the Breitung test, and the Hadri test. Heterogeneous panels are allowed to have possibly different unit root processes; examples include the Im-Pesaran-Shin (IPS) test, the Maddala-Wu test, and the Choi test. Thus, the null and alternative hypotheses for homogeneous unit root tests (except the Hadri test, an extension of the KPSS test to panel data, which reverses the order of the hypotheses) are set as:

H_0: ρ = 1 (indicates the presence of a unit root);
H_1: ρ < 1 for all i (i.e., stationary for all cross-section units).

The null and alternative hypotheses for heterogeneous panels are set as:

H_0: ρ_i = 1 for all i (i.e., all cross-section units contain a unit root);
H_1: ρ_i < 1 for some i (i.e., stationary for some cross-section units and non-stationary for others).

However, a problem with heterogeneous panel unit root tests is that it is difficult to identify which cross-section unit (country) exhibits non-stationarity and which one exhibits stationarity. In addition to the stated heterogeneous unit root tests, the cross-sectionally augmented IPS (CIPS) test suggested by [60] is employed to deal with the CSD. The test permits correlation among error terms across units to address potentially spurious inferences [58].

Westerlund Cointegration Test: To check for the long-run relationship between poverty, ICT, economic growth, financial development, trade openness, and inflation, this study utilizes the [60] panel cointegration test. It does not require an equal length of time series [61]. This approach is not only suitable for cross-sectional time-series studies such as the one at hand, but its outcomes are also robust and reliable as a cointegration technique [18]. The [60] approach is divided into two segments: the group statistics (G_τ and G_α) and the panel statistics (P_τ and P_α), based on an error-correction model (ECM). The hypotheses for the group statistics are stated thus:

Null, H_0: there is no cointegration for any of the cross-section units.
Alternative, H_1: there is cointegration in at least one of the cross-section units.

That is, H_0^G: λ_i = 0 for all i is tested against H_1^G: λ_i < 0 for at least one i. In summary, the two group-mean tests have the alternative hypothesis of cointegration in at least one cross-section unit.

For the panel statistics tests:

Null: adjustment to equilibrium is homogeneous across cross-section units, with λ_i = λ for all i and H_0^P: λ_i = 0.
Alternative, H_1^P: λ_i < 0, i.e., evidence of cointegration for the panel as a whole.

Fully Modified OLS and Dynamic OLS Tests: The next step is the estimation of the coefficients of the long-run association using the FMOLS and DOLS approaches. The choice of these methods was informed by the fact that they give standard error estimates that are consistent, which makes them robust for statistical inference [43]. Consider the following model:

POV_it = α_i + β'κ_it + u_it,

where i indexes the cross-sectional units; t denotes time; α_i is the country-specific effect; POV_it is poverty; β represents the (k,1) vector of parameters; and u_it stands for the stationary disturbance term.
It is assumed that: (i) κ_it is a (k,1) vector of explanatory variables; (ii) the κ_it are I(1) for the cross-section units, such that (iii) κ_it = κ_{i,t−1} + ε_it.

After correcting for serial correlation and accounting for endogeneity in the OLS estimator, the FMOLS estimator is finally specified as:

β̂_FMOLS = [Σ_{i=1}^{N} Σ_{t=1}^{T} (κ_it − κ̄_i)(κ_it − κ̄_i)']^{−1} Σ_{i=1}^{N} [Σ_{t=1}^{T} (κ_it − κ̄_i) POV*_it − T δ̂_εμ],

where POV*_it signifies the transformed variable for POV_it that accounts for the correction of endogeneity, and δ̂_εμ is the term for the serial correlation correction.

The DOLS regression estimation is also specified as:

POV_it = α_i + β'κ_it + Σ_{k=−K}^{K} λ_ik Δκ_{i,t+k} + μ_it,

where α_i is the country-specific effect, λ_ik is a lead (and lag) coefficient, and μ_it is the error term.

Pairwise Dumitrescu-Hurlin (D-H) Panel Causality Test: A further step in this study was to examine the causality among the variables using the D-H causality test. The test is formulated as follows:

Y_it = α_i + Σ_{k=1}^{K} λ_i^(k) Y_{i,t−k} + Σ_{k=1}^{K} β_i^(k) X_{i,t−k} + ε_it,

where Y_it and X_it represent the observable stationary variables, while a uniform lag order (K) is assumed with stable panel data; λ_i^(k) is the autoregressive parameter and β_i^(k) stands for the coefficient of the regression.

The D-H test is appropriate whether T > N or N > T. Although this technique assumes no cross-sectional dependency, it generates robust, strong, and reliable estimates [55] for both balanced and heterogeneous panels.

In a variety of studies, the causality between poverty and ICT has been justified, just as the correlation between economic growth and ICT has been established. Furthermore, other studies have shown the relationship between ICT and economic growth for low-, medium-, and high-income countries and established that causality exists between these two variables. The studies show ICT as a leapfrogging factor in the attainment of the economic developmental status of the developed economies, such that a bi-directional relationship between ICT and GDP was proved. The expanded neoclassical growth model explains how inputs of the production function, which include labour and technological advancement, increase output. In addition, it demonstrates how technology, labor, and capital could all work together to lessen poverty in an economy. Other studies, such as [4,19,43], further confirmed that a long-run and positive correlation exists between ICT and economic growth. The advent and increase in the use of ICT, which has reduced the world into a global village, was confirmed by [4] as having positive causation effects on economic growth, using econometric methods which include panel quantile regression and the control of fixed effects.
Also, [50] confirmed a bi-directional association between energy consumption, represented by ICT, and economic growth when investigating the validity of the symbolic transfer entropy test between 1970 and 2015 in OECD countries. By implication, ICT caused economic growth and vice versa in both studies. Mixed causalities were observed by [37] for 32 sub-Saharan African countries in a study on the effect of financial development on economic growth in Africa. In the study, 24 of these countries exhibited no causality, while 8 revealed causalities running from financial sector development to economic growth. Nonetheless, the poverty-economic growth nexus examined by several other scholars indicates the existence of a unidirectional association that runs from poverty reduction, as represented by employment generation, to GDP, such that increasing the rate of employment enhances productivity. The main finding on the causal relationship between economic growth and infrastructural development for top-ranking African countries showed a unidirectional effect that runs from infrastructural facilities such as ICT to economic growth. The study further argued that the positive association between ICT and GDP can address the problem of poverty in Africa.

Study Limitations

One major limitation of this study is the omission of some digital technologies from the composition of the ICT components due to data availability challenges. These include broadband subscriptions, mobile web services, artificial intelligence, e-commerce, social media, and smart device applications, which are all products of the ICT revolution. They contribute immensely to human awareness, innovation, and economic growth and therefore cannot be underestimated. Another limitation is the omission of investment, which, according to the neoclassical and endogenous growth models, is a major determinant of economic growth. Subsequent studies should therefore endeavor to incorporate these variables to give a broader understanding of the impact of ICT on economic growth within the SADC economic region.

Findings and Discussions

The issue of cross-sectional dependence has become prominent in panel data analysis. The Pesaran CSD test was employed in this study to account for the shortcomings that may result from cross-sectional reliance between the countries under consideration. Table 1 above presents the result of the Pesaran cross-dependence test, which shows that the null hypothesis of no cross-sectional dependence is rejected at the 1% significance level. The presence of cross-sectional dependence suggests that variations in variables between the countries under consideration can influence one another.

Panel Unit Root Tests: To account for the presence of unit roots in the panel data and determine the order of integration, the Levin-Lin-Chu (LLC) and Breitung panel unit root tests were performed on the series. It is evident from Table 2 that all the variables, namely poverty (POV), information and communication technology (ICT), economic growth (ECO), financial development (FD), trade openness (TOP) and inflation (INF), were made stationary at their first difference, I(1). The lags and leads were automatically selected by the AIC criterion.
Based on the results of the G_τ, P_τ and P_α statistics presented in Table 4, the null hypotheses of no cointegration are rejected at the p < 0.01 level of significance.

The Mean-Group FMOLS, Mean-Group DOLS and Robustness Mean-Group Estimators

Table 5 presents the results of the three estimation models employed in this study: the Mean-Group FMOLS, the Mean-Group DOLS, and the robustness Mean-Group estimators. From these results, the two core variables, namely information and communication technology (ICT) and economic growth (GDPGR), were positive and statistically significant at the 1% level. The coefficients of the two variables indicate that a 1% increase in ICT and economic growth will have corresponding effects on the consumption pattern of the people and, by implication, cause poverty to reduce. In a way, there is evidence supporting that the leapfrogging argument holds for the SADC region [33,46,47]. These findings concur with the extant literature on the positive effects of ICT on poverty reduction on the one hand [7,8,21,23,32] and of economic growth on poverty on the other hand [12,19]. With respect to the other specifications, the long-run coefficients for financial development (FD) and trade openness (TOP) were equally positive and statistically significant at the 1% and 5% levels, respectively, on poverty. In support of these findings are [62-67]. In summary, all the variables under consideration exhibited poverty-reducing effects in the SADC sub-region of sub-Saharan Africa except inflation. The effect of inflation was poverty-inducing at the 1% level of significance [68-70]. The results of the Z-bar statistics, as shown in Table 6, indicate that a bidirectional causality existed between ICT and poverty, economic growth and poverty, ICT and economic growth, ICT and financial development, ICT and trade openness, and economic growth and trade openness. On the other hand, a unidirectional causality existed from financial development to poverty, trade openness to poverty, inflation to poverty, and economic growth to inflation. In conclusion, the above analyses suggest that the countries within the SADC will benefit more as a region in terms of economic growth, poverty reduction, trade openness, and ICT penetration. For instance, [34] opined that the effect of ICT on economic growth in developing economies is stronger than in developed countries.
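For readers who wish to reproduce the mechanics behind such Z-bar statistics, the following is a minimal Python sketch of the Dumitrescu-Hurlin procedure described in the methodology above: unit-by-unit Granger regressions, averaged individual Wald statistics, and the large-T standardization. The panel data here are synthetic and purely illustrative.

```python
# A minimal sketch of the Dumitrescu-Hurlin Z-bar statistic on a toy panel.
import numpy as np
import statsmodels.api as sm

def dh_zbar(y, x, k=1):
    """y, x: (N, T) arrays of stationary panels; k: common lag order."""
    n, t = y.shape
    wald = []
    for i in range(n):
        dep = y[i, k:]
        lags = [y[i, k - j - 1 : t - j - 1] for j in range(k)]   # own lags
        lags += [x[i, k - j - 1 : t - j - 1] for j in range(k)]  # x lags
        X = sm.add_constant(np.column_stack(lags))
        res = sm.OLS(dep, X).fit()
        # Individual Wald statistic for H0: all x-lag coefficients are zero.
        R = np.zeros((k, X.shape[1]))
        R[:, 1 + k:] = np.eye(k)
        rb = R @ res.params
        wald.append(float(rb @ np.linalg.solve(R @ res.cov_params() @ R.T, rb)))
    wbar = np.mean(wald)
    return np.sqrt(n / (2.0 * k)) * (wbar - k)   # ~ N(0, 1) under no causality

rng = np.random.default_rng(2)
x = rng.normal(size=(16, 15))
y = 0.7 * np.roll(x, 1, axis=1) + rng.normal(size=(16, 15))  # x Granger-causes y
print("Z-bar statistic:", round(dh_zbar(y, x, k=1), 2))
```

A large positive Z-bar rejects the null of no causality in any unit, mirroring the decision rule applied to Table 6.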
Concluding Remarks

In this article, the long-run relationship between poverty, ICT, and economic growth was evaluated in 16 SADC countries using the panel cointegration approach for the period 2005-2019. To account for the presence of cross-sectional dependence, which may render our analyses invalid, the [59] CD test was carried out, with the rejection of the null hypothesis of no cross-sectional dependence in the variables. The outcomes of the panel unit root tests further revealed that the series was made stationary at first differencing, which paved the way for the Westerlund cointegration test. It was established that a long-run association exists among the variables of interest, namely poverty, information and communication technology, economic growth, and the other established variables in the literature. Empirically, it was further established that the long-run effects of ICT, economic growth, financial development, and trade openness are poverty-reducing, and hence ICT was seen as a tool for accelerating economic growth in SADC as a region. This provides an answer to our second research question. Furthermore, the study established the existence of the leapfrogging hypothesis for this region, which provides an answer to the third research question. However, inflation remains a major challenge to the region, as its effect was poverty-inducing. In answering the first research question, the [71] causality tests were performed on the variables to determine the directions of causality. A bi-directional causality existed between ICT and poverty, economic growth and poverty, ICT and financial development, and ICT and trade openness, while unidirectional causality was established running from financial development, trade openness, and inflation to poverty, and from economic growth to inflation.

Important policy implications of the study are as follows:

1. For the continued benefits of the poverty-reducing effects of ICT, economic growth, financial development, and trade openness to be maintained, there is the need to strengthen the existing bilateral relationships among member countries of the region.

2. As applicable in other advanced and some developing economies, the digital competence of the region needs to be synchronized for better and more effective service delivery.

3. In addition, forward-looking policies that review the roaming rates from countries outside the region should be considered. This is expected to improve the region's revenue and economic activities and ultimately enhance the ICT poverty-reducing outcome for the region.

However, the negative effects of inflation within the SADC region call for joint efforts in terms of policy formulation and implementation to ameliorate its adverse effects on the region. Further studies could dwell on the use of primary data with Logit and Probit techniques. Not only that, but the composition of ICT should also be expanded to include broadband penetration, e-government, telecommunication infrastructure, and online services to serve as robustness checks. Also, digital technologies as a spillover effect can be investigated further.

Table 1. Summary of the measurement of variables and sources of data.

Table 3. Panel Unit Root Tests. Table 3 shows the unit root tests; these outcomes justified the use of the [60] cointegration test, which equally suggests that a long-run association exists between the variables.
*** and ** signify rejection of the null hypotheses at the 1% and 5% error levels, respectively.

Table 5. The Mean-Group FMOLS, Mean-Group DOLS and Robustness Mean-Group Estimators. ** represents the p < 0.001 significance level of rejection. Results were based on annual (panel) data from 2005-2019. Four lags were selected based on the Akaike Information Criterion. E-Views 9.5 was used in performing the test.
2022-07-28T15:18:19.679Z
2022-07-25T00:00:00.000
{ "year": 2022, "sha1": "8a8d2b4f975f558efa6c880009a84d9ee5658887", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/14/15/9091/pdf?version=1658744631", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "ab85560760d3457f4b2461ecf5de5b14881580ad", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [] }
9337882
pes2o/s2orc
v3-fos-license
Dependences of the van der Waals atom-wall interaction on atomic and material properties

The 1%-accurate calculations of the van der Waals interaction between an atom and a cavity wall are performed in the separation region from 3 nm to 150 nm. The cases of metastable He* and Na atoms near metal, semiconductor or dielectric walls are considered. Different approximations to the description of the wall material and the atomic dynamic polarizability are carefully compared. The smooth transition to the Casimir-Polder interaction is verified. It is shown that to obtain accurate results for the atom-wall van der Waals interaction at the shortest separations with an error less than 1% one should use the complete optical tabulated data for the complex refraction index of the wall material and the accurate dynamic polarizability of the atom. The obtained results may be useful for the theoretical interpretation of recent experiments on quantum reflection and Bose-Einstein condensation of ultracold atoms on or near surfaces of different nature.

I. INTRODUCTION

The van der Waals interaction is a well-known example of dispersion forces, and there is an extensive literature devoted to this subject (see, e.g., monographs [1,2,3]). These forces are of quantum origin and they become detectable with a decrease of separation distances between atoms, molecules and macroscopic bodies. Further miniaturization, which is the main tendency of microelectronics, brings more and more attention to the investigation of fine properties of the van der Waals interaction. The van der Waals force between an atom (molecule) and a cavity wall has long been investigated. In Ref. [4] its interaction potential was found in the form V_3(a) = -C_3/a^3 in the nonrelativistic approximation (a is the separation between an atom and a wall). The coefficient C_3 was calculated and measured for different atoms and wall materials, both metallic [5,6,7] and dielectric [8,9]. The theoretical and experimental results were shown to be in qualitative agreement. More precise measurements were performed in Refs. [10,11]. Currently the van der Waals interaction attracts considerable interest in connection with experiments on quantum reflection of ultracold atoms on different surfaces [12,13]. With the increase of separation distances up to hundreds of nanometers and further to several micrometers, the relativistic and thermal effects become significant, changing the dependence of the van der Waals force on separation. At moderate separations up to 1 µm, for atoms described by the static atomic polarizability near a wall made of ideal metal at zero temperature, the interaction potential was found by Casimir and Polder [14] in the form V_4(a) = -C_4/a^4. Both the van der Waals and Casimir-Polder interactions are of much importance in connection with the experiments on Bose-Einstein condensation of ultracold atoms confined in a magnetic trap near a surface [15,16,17]. They may influence the stability of a condensate and the effective size of the trap [17]. Conversely, Bose-Einstein condensates can be used as sensors of the van der Waals and Casimir-Polder forces. The presence of these forces leads to a shift of the oscillation frequency of the trapped condensate [18]. Note that in application to ultracold atoms not their temperature but the temperature of the wall is the characteristic parameter of the fluctuating electromagnetic field giving rise to the van der Waals interaction [18,19].
It is common knowledge that the precision of frequency shift measurements is very high. Interpretation of these measurements requires accurate theoretical results for the van der Waals and Casimir-Polder interactions beyond the expressions given by the simple asymptotic formulas (in fact, the coefficients C_3 and C_4 are not constants but depend on both separation distance and temperature, and there is a smooth joining between the formulas at some intermediate separations). In the case of the Casimir-Polder forces such results were obtained in Ref. [19] for different atoms near a metal wall with account of the finite conductivity of the metal, the dynamic atomic polarizability, and nonzero temperature. In Ref. [18] the influence of the Casimir-Polder force between Rb atoms and a sapphire wall on the oscillations of a condensate was investigated. In the present paper we find accurate dependences of the van der Waals atom-wall interaction on the dynamic polarizability of an atom and the conductivity properties of the wall material. As an example, two different atoms are considered (metastable He* and Na), and metallic (Au), semiconductor (Si) and dielectric (vitreous SiO_2) walls. All calculations are performed within the separation distances 3 nm ≤ a ≤ 150 nm (for Au at larger separations the accurate theoretical results for the Casimir-Polder interaction were obtained in Ref. [19]). The theoretical formalism for the exact computation of the van der Waals interaction is given by the Lifshitz formula [20,21,22] adapted for the configuration of an atom near a wall. At small separations, characteristic for the van der Waals force, it is necessary to use the complete optical tabulated data for the complex index of refraction in order to find the behavior of the dielectric permittivity along the imaginary frequency axis (at separations a ≥ 150 nm, as was shown in Ref. [19], the dielectric function of the free electron plasma model can be used in the case of an Au wall to find the Casimir-Polder interaction). We compare the results obtained by the use of complete data for the dynamic polarizability of an atom and the ones given by the single-oscillator model. This gives the possibility to obtain more accurate results than in Ref. [23], where the single-oscillator model was used for a hydrogen atom near a silver wall, and also to determine the accuracy of the single-oscillator approximation for the dynamic polarizability in the calculations of the van der Waals interaction. It is shown that to calculate the atom-wall van der Waals interaction with an error less than 1% at separations of several nanometers, both the complete optical tabulated data of the wall material and the accurate atomic dynamic polarizability should be used.

The paper is organized as follows. In Sec. II we briefly present the main formulas and notations for the van der Waals interaction between an atom and a cavity wall. Sec. III contains the accurate theoretical results for the van der Waals interaction of He* and Na atoms with an Au wall. In Sec. IV analogous results are presented for semiconductor (Si) and dielectric (vitreous SiO_2) walls. Sec. V contains our conclusions and discussion.

II. LIFSHITZ FORMULA FOR VAN DER WAALS ATOM-WALL INTERACTION

The Lifshitz formula for the free energy of the atom-wall interaction (the wall is at temperature T in thermal equilibrium) can be presented in the form [19,22]

F(a,T) = -k_B T Σ_{l=0}^∞ (1 - δ_{l0}/2) ∫_0^∞ k_⊥ dk_⊥ q_l e^{-2a q_l} α(iξ_l) {2 r_∥(ξ_l, k_⊥) + (ξ_l^2/(q_l^2 c^2)) [r_∥(ξ_l, k_⊥) - r_⊥(ξ_l, k_⊥)]},   (1)

where α(iξ_l) is the atomic dynamic polarizability, k_B is the Boltzmann constant, ξ_l = 2πk_B T l/ħ are the Matsubara frequencies, l = 0, 1, 2, ...,
δ_{l0} is the Kronecker symbol, and the reflection coefficients for the two independent polarizations of the electromagnetic field are

r_∥(ξ_l, k_⊥) = (ε_l q_l - k_l)/(ε_l q_l + k_l),   r_⊥(ξ_l, k_⊥) = (k_l - q_l)/(k_l + q_l).   (2)

In Eqs. (1) and (2) the notations

q_l = (k_⊥^2 + ξ_l^2/c^2)^{1/2},   k_l = (k_⊥^2 + ε_l ξ_l^2/c^2)^{1/2}   (3)

are also introduced, where ε_l = ε(iξ_l) is the dielectric permittivity computed at the imaginary Matsubara frequencies and k_⊥ is the wave vector in the plane of the wall. We will apply Eq. (1) in the separation region 3 nm ≤ a ≤ 150 nm, which corresponds to the van der Waals interaction (near the left-hand side of the interval) and to the transition domain to the Casimir-Polder interaction. In fact, in this region at room temperature T = 300 K the temperature effect is negligible. For the sake of convenience in numerical computations we, however, do not make the approximate change of the discrete summation to an integration over continuous frequencies and use the original exact Eq. (1). For further application in computations, we introduce the dimensionless variables

ζ_l = ξ_l/ω_c,   y = 2a q_l,   (4)

where ω_c ≡ ω_c(a) = c/(2a) is the characteristic frequency of the van der Waals interaction. In terms of the new variables the reflection coefficients (2) are

r_∥(ζ_l, y) = [ε_l y - (y^2 + ζ_l^2(ε_l - 1))^{1/2}]/[ε_l y + (y^2 + ζ_l^2(ε_l - 1))^{1/2}],
r_⊥(ζ_l, y) = [(y^2 + ζ_l^2(ε_l - 1))^{1/2} - y]/[(y^2 + ζ_l^2(ε_l - 1))^{1/2} + y],   (5)

and the free energy (1) takes the form F(a,T) = -C_3(a,T)/a^3 with

C_3(a,T) = (k_B T/8) Σ_{l=0}^∞ (1 - δ_{l0}/2) α(iξ_l) ∫_{ζ_l}^∞ dy e^{-y} [2y^2 r_∥(ζ_l, y) + ζ_l^2 (r_∥(ζ_l, y) - r_⊥(ζ_l, y))].   (6)

In the nonrelativistic limit Eq. (5) leads to r_∥ → (ε_l - 1)/(ε_l + 1), r_⊥ → 0, and Eq. (6) to

C_3(T) = (k_B T/2) Σ_{l=0}^∞ (1 - δ_{l0}/2) α(iξ_l) (ε_l - 1)/(ε_l + 1),   (7)

which gives the usual estimation for the value of the van der Waals constant at the shorter separations. Note that Eq. (7) practically does not depend on temperature. By using the Abel-Plana formula [24] it can be approximately represented by

C_3 = (ħ/4π) ∫_0^∞ dξ α(iξ) (ε(iξ) - 1)/(ε(iξ) + 1).   (8)

In the next two sections Eqs. (5)-(7) will be used for accurate calculations of the van der Waals force between different atoms near surfaces made of metallic, semiconducting and dielectric materials.

III. VAN DER WAALS INTERACTION OF He* AND Na ATOMS WITH A GOLD WALL

To calculate the van der Waals free energy of the atom-wall interaction one should substitute the values of the dielectric permittivity of the wall material and the dynamic polarizability of the atom at the imaginary Matsubara frequencies into Eqs. (5) and (6). We consider the separation distances a ≤ 150 nm (at larger separations the analytical representation for F was obtained in Ref. [19]). The lower applicability limit of this description is set by the characteristic length involving the Fermi velocity v_F and the plasma frequency ω_p [25]. As was proved in Ref. [25], at much larger separations (in fact, starting from a ≈ 3 nm) the usual Lifshitz formula, given by Eqs. (1) and (5), is already applicable. Within the separation region under consideration the characteristic frequency ω_c reaches and even exceeds (at the shorter separations) the plasma frequency (for Au we use ω_p = 1.37 × 10^16 rad/s [26]). For this reason the plasma or Drude dielectric functions are not good approximations for the dielectric permittivity over the whole relevant frequency range, and one should use the complete tabulated data for the complex index of refraction of Au to calculate the imaginary part of the dielectric permittivity Im ε(ω) along the real frequency axis. The dielectric permittivity along the imaginary frequency axis is then found by means of the dispersion relation [27]

ε(iξ) = 1 + (2/π) ∫_0^∞ dω ω Im ε(ω)/(ω^2 + ξ^2).   (9)

The available tabulated data for Au extend from 0.125 eV to 10000 eV (1 eV = 1.519 × 10^15 rad/s). At lower frequencies Im ε(ω) was extrapolated by the imaginary part of the Drude dielectric function,

Im ε(ω) = ω_p^2 γ/[ω(ω^2 + γ^2)],   (10)

where γ = 0.035 eV is the relaxation frequency. It should be reminded also that Eqs. (1), (2), (10) are free from the contradiction with the Nernst heat theorem which arises when the Drude dielectric function is substituted into the Lifshitz formula at nonzero temperature in the configuration of two parallel plates made of real metal (see Refs. [28,29] for more details).
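As an illustration of how Eqs. (9) and (10) are combined in practice, the following Python sketch evaluates ε(iξ) by numerical quadrature. For brevity, the Drude form (10) stands in for the optical tabulated data over the whole frequency axis; this is an assumption made only for this demonstration, since in the actual computations the tabulated Im ε(ω) is used above 0.125 eV.

```python
# A minimal sketch of the Kramers-Kronig evaluation of eps(i*xi), Eq. (9),
# with the Drude form, Eq. (10), standing in for the tabulated Im eps data.
import numpy as np
from scipy.integrate import quad

EV = 1.519e15            # 1 eV in rad/s, the conversion used in the text
omega_p = 1.37e16        # Au plasma frequency, rad/s
gamma = 0.035 * EV       # Au relaxation frequency, rad/s

def im_eps(omega):
    """Drude form, Eq. (10)."""
    return omega_p**2 * gamma / (omega * (omega**2 + gamma**2))

def eps_imaginary_axis(xi):
    """Dispersion relation, Eq. (9), by numerical quadrature."""
    f = lambda w: w * im_eps(w) / (w**2 + xi**2)
    val, _ = quad(f, 1e6, np.inf, limit=200)  # tiny lower cutoff avoids w = 0
    return 1.0 + (2.0 / np.pi) * val

xi1 = 2.47e14  # first Matsubara frequency at T = 300 K, rad/s
print(f"eps(i xi_1) ~ {eps_imaginary_axis(xi1):.0f}")
# For the pure Drude form this reproduces eps(i xi) = 1 + omega_p^2/(xi(xi+gamma)).
```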
The computational results for Au are presented in Fig. 1, where log_10 ε(iξ) is plotted as a function of log_10 ξ starting from the first Matsubara frequency (at T = 300 K one has ξ_1 ≈ 2.47 × 10^14 rad/s and log_10 ξ_1 ≈ 14.4). The other data to be substituted into Eq. (5) are the values of the atomic dynamic polarizability at the imaginary Matsubara frequencies. The accurate data (having a relative error of about 10^-6) were taken from Ref. [30] for the atoms of metastable He* and from Ref. [31] for Na (see also the graphical representation in Fig. 3 of Ref. [19]). It is interesting to compare the values of C_3(a,T) obtained by the use of the highly accurate data for the atomic dynamic polarizability and in the framework of the single-oscillator model

α(iξ) = α(0)/(1 + ξ^2/ω_0^2),   (11)

where for He* it holds α(0) = 315.63 a.u., ω_0 = 1.18 eV [32] and for Na it holds α(0) = 162.68 a.u., ω_0 = 1.55 eV [33] (1 a.u. of polarizability is equal to 1.48 × 10^-31 m^3). The computational results for the van der Waals coefficient C_3 in the case of an Au wall versus separation are represented in Fig. 2 for metastable He* (a) and Na (b) by solid lines. A few calculated results for the values of C_3 are presented in Table I at T = 300 K for different separations indicated in the first column. In columns 2 and 3 the values of C_3 for the He* atom are computed for an ideal metal and by the use of the optical tabulated data for Im ε, respectively, and in both cases with an accurate atomic polarizability. In column 4 the optical tabulated data for Im ε were used in combination with the single-oscillator model for the atomic polarizability of He*. In column 5 the plasma model dielectric function was used in calculations together with an accurate atomic polarizability of He*. In columns 6-9 the calculational results for a Na atom are presented in the same order. As is seen from Fig. 2 and Table I, by comparing columns 3 and 5 for He*, and columns 7 and 9 for Na, one can conclude that the error given by the plasma model decreases from 6.3% for He* and 10% for Na at a = 3 nm to 0.8% for He* and 1% for Na at a = 150 nm. This illustrates the smooth joining of our present results for the van der Waals interaction, obtained by the use of the optical tabulated data for Au, with the analytical results of Ref. [19] for the Casimir-Polder interaction found by the application of the plasma model. The nonrelativistic asymptotic values of C_3 can be calculated by the immediate use of Eqs. (7) and (9) combined with the optical tabulated data for Im ε and the accurate atomic polarizability. This leads to the results C_3 ≈ 1.61 a.u. for He* and C_3 ≈ 1.37 a.u. for Na, in rather good agreement with the data of columns 3 and 7 of Table I computed at the shortest separation a = 3 nm. Note, however, that the asymptotic values, achieved at separations a < 3 nm, may already be outside the application region of the used theoretical approach (see the discussion in the beginning of this section). As was shown in Ref. [19], the account of the atomic dynamic polarizability strongly affects the value of the Casimir-Polder interaction in comparison with the original result [14] obtained in the static approximation. We emphasize that in the case of the van der Waals interaction the influence of dynamic effects is even greater than in the Casimir-Polder case. Thus, if we restrict ourselves to only the static polarizability of the He* atom, the values of C_3 are found to be 11.6 and 1.64 times greater than those given in column 3 of Table I at separations a = 3 nm and a = 150 nm, respectively.
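The order of magnitude of these results is easy to check in the nonrelativistic limit. The following Python sketch evaluates Eq. (8) with the single-oscillator polarizability (11) for He* and the plasma-model permittivity; both approximations are, as discussed above, cruder than the tabulated data, so the output should only be expected to reproduce the scale of C_3, not the 1%-accurate values.

```python
# A minimal sketch: nonrelativistic C_3, Eq. (8), for He* near an Au wall,
# using the single-oscillator alpha, Eq. (11), and the plasma-model eps.
import numpy as np
from scipy.integrate import quad

HBAR = 1.0546e-34                    # J s
EV = 1.519e15                        # 1 eV in rad/s
AU_ALPHA = 1.48e-31                  # 1 a.u. of polarizability, m^3
AU_C3 = 4.36e-18 * (5.292e-11)**3    # 1 a.u. of C_3 = Hartree x Bohr^3, J m^3

alpha0 = 315.63 * AU_ALPHA           # He* static polarizability (Ref. [32] values)
omega0 = 1.18 * EV                   # He* oscillator frequency
omega_p = 1.37e16                    # Au plasma frequency, rad/s

def g(u):
    """Dimensionless integrand of Eq. (8) with xi = u * omega0."""
    xi = u * omega0
    e = 1.0 + (omega_p / xi)**2      # plasma model along the imaginary axis
    return 1.0 / (1.0 + u * u) * (e - 1.0) / (e + 1.0)

val, _ = quad(g, 1e-6, np.inf, limit=200)
c3 = HBAR * alpha0 * omega0 / (4.0 * np.pi) * val
print(f"C_3 ~ {c3 / AU_C3:.2f} a.u.")
# Same order as the C_3 ~ 1.61 a.u. quoted above from the full tabulated data.
```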
IV. VAN DER WAALS INTERACTION OF He* AND Na ATOMS WITH SEMICONDUCTOR AND DIELECTRIC WALLS

In this section we apply the formalism of Sec. II to find the accurate separation dependences of the van der Waals interaction between He* and Na atoms and a Si or vitreous SiO_2 wall. The chosen separation interval 3 nm ≤ a ≤ 150 nm is the same as in Sec. III. In the case of dielectric and semiconductor surfaces there are additional interactions due to the charged dangling bonds at separations of 1-1.5 nm (see, e.g., Ref. [34]). This is a further factor restricting the application of the conventional theory of van der Waals forces at very short distances. The tabulated data for the complex refraction index of Si extend from 0.00496 eV to 2000 eV [26]. This permits one not to use any extension of the data to smaller frequencies when using Eq. (9) in order to find the dielectric permittivity at all contributing imaginary Matsubara frequencies. The computational results for Si are presented in Fig. 3a, where ε(iξ) is plotted as a function of log_10 ξ (ξ is measured in rad/s). The static dielectric permittivity of Si is equal to ε_0 = 11.66. Substituting the obtained results for ε(iξ) and also the data for the atomic dynamic polarizability of He* and Na (the same as in Sec. III) into Eqs. (5) and (6), one finds the dependences of the van der Waals parameter C_3 on separation. The results are shown in Fig. 4a (for He*) and Fig. 4b (for Na). The solid lines are obtained by the use of the accurate atomic dynamic polarizabilities, and the long-dashed lines by using the single-oscillator model given by Eq. (11). The short-dashed lines are obtained with the accurate dynamic polarizability but on the assumption that the dielectric permittivity does not depend on frequency and is equal to its static value. At the shortest separation a = 3 nm the error in C_3 due to the use of the static dielectric permittivity is approximately 13% for He* and 24% for Na. In Table II a few calculated values of C_3 at T = 300 K are presented at the separations listed in column 1. In columns 2 and 3 the values of C_3 for He* are computed by the use of the static dielectric permittivity and the optical tabulated data for Im ε, respectively, and in both cases with the accurate atomic dynamic polarizability. In column 4 the data for Im ε were used in combination with the single-oscillator model for the He* dynamic polarizability. In columns 5-7 the same results for a Na atom are presented. The tabulated data for the complex refraction index of vitreous SiO_2 likewise cover a wide frequency region [26]. This is also quite sufficient to calculate the dielectric permittivity at all contributing Matsubara frequencies by Eq. (9) with no use of any extension of the data. The dependence of ε(iξ) as a function of log_10 ξ for SiO_2 is shown in Fig. 3b. The static dielectric permittivity of SiO_2 is equal to ε_0 = 4.88. The obtained results for ε(iξ) and the data for the atomic dynamic polarizability of He* and Na are substituted into Eqs. (5) and (6). The resulting dependences of C_3 on separation are shown in Fig. 5a (for He*) and Fig. 5b (for Na). As in Fig. 4, the solid lines are related to the use of the accurate dynamic polarizabilities, the long-dashed lines to the single-oscillator model, and the short-dashed lines to the use of the static dielectric permittivity together with an accurate dynamic polarizability. Table III, containing a few calculated results, is organized in the same way as Table II, related to the case of a semiconductor wall.
It permits one to find the errors resulting from the use of the static dielectric permittivity instead of the accurate dependence of ε(iξ) on frequency, and of a single-oscillator model instead of an accurate dynamic polarizability, for an atom near the dielectric wall. Thus, at a = 3 nm the use of the static dielectric permittivity instead of the optical tabulated data leads to a 78% error in the value of the van der Waals coefficient C_3 for He* and to a 95% error for Na. These errors decrease to 2.1% and 6.9%, respectively, if one uses the dielectric permittivity ε̃ ≈ 2.13 corresponding not to zero frequency but to the frequency region of visible light. With the use of ε̃ the largest errors in the value of C_3 are achieved, however, not at the shortest separation but at the largest separation considered here, a = 150 nm (15% for the He* atom and 12.7% for the Na atom). At this separation the use of the static dielectric permittivity ε_0 leads to a 56.6% error (for He*) and a 62% error (for Na). By the comparison of columns 3 and 4 in Table III we conclude that at a separation a = 3 nm the use of the single-oscillator model results in a 5% error for the He* atom and in a 3% error for the Na atom. At a = 15 nm the corresponding errors are 3.6% and 1.2%, respectively. At a separation a = 150 nm the errors due to the use of the single-oscillator model are 0.6% for the He* atom and practically zero for the Na atom, i.e., the single-oscillator model is sufficient.

V. CONCLUSIONS AND DISCUSSION

In the foregoing we have performed accurate calculations of the parameter C_3 describing the van der Waals atom-wall interaction for the atoms of metastable He* and Na near metallic (Au), semiconductor (Si) and dielectric (vitreous SiO_2) walls. The magnitude of the error, given by one or another approximation used, depends qualitatively on the type of atom. By way of example, for the Na atom the use of a single-oscillator model leads to smaller errors than for He*, independently of the wall material. The performed investigation permits one to conclude that accurate calculations of the van der Waals atom-wall interaction at short separations, with an error no larger than 1%, require the use of both the complete optical tabulated data of the wall material and the accurate dynamic polarizability of the atom. This is distinct from the case of the Casimir-Polder interaction with a metallic wall, which can be described with no more than 1% error using the plasma model [19].

TABLE II. The values of C_3 at different separations computed for the semiconductor (Si) described by the static dielectric permittivity (a) and by the optical tabulated data (b), with the accurate atomic dynamic polarizabilities; in column (c) the semiconductor is described by the optical tabulated data and the dynamic polarizability of the atom is given by the single-oscillator model.

TABLE III. The values of C_3 at different separations computed for the dielectric (vitreous SiO_2) described by the static dielectric permittivity (a) and by the optical tabulated data (b), with the accurate atomic dynamic polarizabilities; in column (c) the dielectric is described by the optical tabulated data and the dynamic polarizability of the atom is given by the single-oscillator model.
2017-09-22T01:31:12.981Z
2005-03-03T00:00:00.000
{ "year": 2005, "sha1": "f8fc9c7e74f17f6b877640857188656239a578ee", "oa_license": null, "oa_url": "http://arxiv.org/pdf/quant-ph/0503038", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b5f530adaf1d7777485407cd4599400bf44fa83c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
259090790
pes2o/s2orc
v3-fos-license
Remdesivir in pregnant women with moderate to severe coronavirus disease 2019 (COVID-19): a retrospective cohort study

Data on the efficacy of remdesivir in Coronavirus Disease 2019 (COVID-19) are limited in pregnant patients, since they have been excluded from clinical trials. We aimed to investigate clinical outcomes following remdesivir administration in pregnancy. This was a retrospective cohort study conducted on pregnant women with moderate to severe COVID-19. The enrolled patients were divided into two groups, with and without remdesivir treatment. The primary outcomes of this study were the length of hospital and intensive care unit stay; the respiratory parameters on hospital day 7, including respiratory rate, oxygen saturation, and mode of oxygen support; discharge until days 7 and 14; and the need for home oxygen therapy. Secondary outcomes included several maternal and neonatal consequences. Eighty-one pregnant women (57 in the remdesivir group and 24 in the non-remdesivir group) were included. The two study groups were comparable according to the baseline demographic and clinical characteristics. Of the respiratory outcomes, remdesivir was significantly associated with a reduced length of hospital stay (p = 0.021) and also with a lower level of oxygen requirement in patients on low-flow oxygen [odds ratio (OR) 3.669]. Among the maternal consequences, no patients in the remdesivir group developed preeclampsia, but three patients (12.5%) experienced this complication in the non-remdesivir group (p = 0.024). Furthermore, in patients with moderate COVID-19, the percentage of emergency termination was significantly lower in the remdesivir group (OR 2.46). Our results demonstrated some probable benefits of remdesivir in respiratory and also maternal outcomes. Further investigations with a larger sample size should confirm these results.

Supplementary Information The online version contains supplementary material available at 10.1007/s10238-023-01095-0.

Introduction

Coronavirus Disease 2019 (COVID-19) is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection and has affected more than 623 million individuals worldwide according to data from the World Health Organization (WHO) as of October 13, 2022 [1]. Since the onset of the COVID-19 pandemic, several pharmacological treatments have been introduced using drug repurposing and have shown promising results for the treatment of moderate to severe cases [2]. SARS-CoV-2 uses RNA-dependent RNA polymerase (RdRp) for replication, and remdesivir, a nucleotide analog, inhibits SARS-CoV-2 replication via selective suppression of viral RdRp in vitro [3,4]. Over the past 2 years, several studies and trials have demonstrated the efficacy of remdesivir treatment in non-pregnant adults with severe COVID-19 [5,6]. However, studies such as the WHO Solidarity trial have reported that remdesivir had no effect on the length of hospital stay or survival compared with the standard of care [7]. Other trials have yielded equivocal results regarding the effects of remdesivir [8,9]. Pregnant patients are more vulnerable to severe or critical features of COVID-19 [10]. For instance, the risks of intensive care unit (ICU) admission and the need for mechanical ventilation are approximately two to four times higher among pregnant than matched non-pregnant women [11-13].
Physiological changes in the immune and cardiovascular systems are probably the underlying reason for the increased susceptibility of pregnant women to viral respiratory infections [14]. The risk of severe disease increases especially when respiratory infection occurs in the third trimester [11-13,15]. In addition, pregnancy outcomes can be impacted by COVID-19, resulting in higher rates of preterm birth and cesarean section [16]. Data on the administration of remdesivir in pregnant patients with COVID-19 are scarce, since pregnant women have been excluded from clinical trials of remdesivir. Nonetheless, remdesivir was authorized in pregnant women with severe COVID-19 as of March 21, 2020, through a compassionate-use program [10]. In this retrospective cohort study, we aimed to explore the association between remdesivir and pregnancy outcomes in patients with moderate to severe COVID-19.

Study design and participants

This was a retrospective cohort study conducted on a group of pregnant women with moderate to severe COVID-19 hospitalized in four affiliated hospitals of Shahid Beheshti University of Medical Sciences from September 2020 to March 2022. The study protocol was approved by the Institutional Review Board (IRB) and adhered to the ethical principles outlined in the Declaration of Helsinki. All patients provided informed consent for the use of their de-identified data. Exclusion criteria were: (1) administration of remdesivir after 2 days of hospitalization in the remdesivir group; (2) multiple pregnancy; (3) creatinine clearance less than 30 ml/min; (4) aspartate transaminase (AST) or alanine transaminase (ALT) greater than 5 times the upper limit of normal (ULN); (5) multi-organ failure; (6) history of obstructive or restrictive lung disease; (7) receiving other antiviral agents such as tocilizumab; and (8) not receiving corticosteroids (because of the probable effect of corticosteroids on the disease course, all included patients had received corticosteroids).

Data and outcomes

The required data were extracted from the electronic medical records of the four hospitals. These data included maternal age, gestational age, gravidity and parity, past medical history, signs and symptoms of COVID-19, respiratory parameters, and laboratory measurements (Tables 1, 2, 3). As receiving or not receiving remdesivir was the main predictor variable of the study, the included patients were categorized into two groups, with and without remdesivir treatment. In the remdesivir group, intravenous (IV) remdesivir was administered as a 200 mg single dose on day 1, followed by 100 mg once daily for 5 days, starting within 48 h of hospitalization. Note that remdesivir was provided by the same company for all patients in the affiliated hospitals. Corticosteroids were initiated within 24 to 48 h of hospitalization. According to the national protocol, dexamethasone was given intravenously/orally (6 mg/day) or prednisolone orally (40 mg/day) for 10 days or until discharge. If the gestational age was between 24 weeks and 33 weeks and 6 days, a therapeutic dose of corticosteroid was administered for the maturation of fetal lungs (dexamethasone: 4 doses of intramuscular (IM) 6 mg every 12 h, or betamethasone: 2 doses of IM 12 mg every 24 h). Pharmacological thromboprophylaxis with enoxaparin or unfractionated heparin (UFH) was conducted in those without anticoagulant contraindications, as per the national protocol.
The primary outcome measures of this study were the length of hospital and ICU stay; respiratory rate on day 7; oxygen saturation on day 7; mode of oxygen support on day 7; hospital discharge by days 7 and 14; and the need for home oxygen therapy. The secondary outcome measures of the study were maternal and neonatal outcomes, which are listed in Table 4. Statistical analysis The Statistical Package for the Social Sciences (SPSS version 20, IBM Corp., USA) and R software (version 4.2.2) were used for data analysis. Categorical and continuous variables are presented as frequency (percent) and mean (standard deviation), respectively. Student's t-test (two-tailed) was used to compare continuous variables between the remdesivir and non-remdesivir groups. The Chi-square test or Fisher's exact test was used for the comparison of categorical variables. We considered several variables as confounders in regression analyses, including maternal age, gestational trimester, and COVID-19 severity. Poisson regression was used to evaluate the association between receiving remdesivir and the length of hospital stay (in days). Negative binomial regression was carried out to investigate the association between receiving remdesivir and the length of ICU stay (in days). Exponentiated coefficients (incidence rate ratios (IRR) with 95% confidence intervals (CI)) were estimated for each model. Linear regression analysis was done to assess the association between receiving remdesivir and respiratory rate on day 7 or oxygen saturation on day 7. As event counts were sparse, a Bayesian approach was applied to analyze the categorical outcomes. Therefore, Bayesian binary or multinomial logistic regression analysis was done to investigate the association between receiving remdesivir and the mode of oxygen support on day 7, hospital discharge by days 7 and 14, and the need for home oxygen therapy, using the R package "rstanarm" (version 2.21.3), function "stan_glm" (an illustrative code sketch of these models is given after the Results). A p value of < 0.05 was considered statistically significant. Baseline demographic and clinical characteristics We initially evaluated the electronic data of 304 pregnant patients who were admitted to our four hospitals between September 2020 and March 2022. A total of 81 pregnant women (57 in the remdesivir group and 24 in the non-remdesivir group) were included in this study. Figure 1 shows the flowchart for the derivation of the analytical cohort. None of the included patients had previously been vaccinated against COVID-19. Maternal age ranged from 19 to 49 years (mean: 33.19) in the remdesivir group and from 22 to 44 years (mean: 31.88) in the non-remdesivir group. Most patients were in the third trimester of pregnancy (61.4% in the remdesivir group and 50% in the non-remdesivir group). The two study groups were comparable according to the baseline demographic and clinical characteristics (Table 1). COVID-19 characteristics and laboratory data As shown in Table 2, signs, symptoms, and duration of symptoms prior to admission were comparable between the two study groups. In both groups, fever/chills, cough, and shortness of breath were the most frequent symptoms. The proportions of moderate and severe COVID-19 patients were comparable between the groups (p = 0.083). In addition, there was no significant difference in the respiratory parameters of the two groups, including respiratory rate, oxygen saturation, and the mode of oxygen support on the day of admission. As shown in Table 3, the two study groups were comparable on all laboratory measurements.
Also, liver transaminase levels did not show any significant difference between the two groups. Primary outcomes including clinical and respiratory parameters Poisson regression revealed a significant association between receiving remdesivir and a reduced length of hospital stay (IRR = 0.791; 95% CI 0.655-0.955) (Supplementary Table 1). However, negative binomial regression and linear regression did not show a significant association between receiving remdesivir and the length of ICU stay (p = 0.196), respiratory rate on day 7 (p = 0.197), or oxygen saturation on day 7 (p = 0.096) (Supplementary Tables 2 to 4). Bayesian binary logistic regression demonstrated a significant relationship between receiving remdesivir and hospital discharge by day 7 (odds ratio (OR) = 2.718; 95% CI as reported in Supplementary Table 7). Moreover, the results of Bayesian multinomial logistic regression revealed a significant association between receiving remdesivir and changing the level of oxygen support from low-flow oxygen on the admission day to breathing ambient air without supplemental oxygen on day 7 (Supplementary Table 8), such that patients in the remdesivir group had a greater chance of no longer being in need of oxygen support on day 7 (OR 3.669; 95% CI 1.350-9.025). However, a corresponding reduction in oxygen requirement from high-flow oxygen on the admission day to low-flow oxygen on day 7 did not differ significantly between the two groups (OR 1.00; 95% CI 0.247-4.055). Secondary outcomes including maternal and neonatal findings As shown in Table 4, intrauterine fetal death (IUFD) occurred in only one case, in the remdesivir group, but there was no maternal mortality among all patients. None of the patients in the remdesivir group developed preeclampsia; however, three patients (12.5%) in the non-remdesivir group were affected. As shown in the univariate analysis (unadjusted for confounders), the proportion developing preeclampsia was statistically different between the two groups (p = 0.024). Moreover, the results of Bayesian multivariate logistic regression (Supplementary Table 9) indicated a statistically significant association between receiving remdesivir and a lower chance of developing preeclampsia (OR 27.11; 95% CI 3.67-221.41). Emergency termination via vaginal delivery or cesarean section was indicated in a total of 19 patients (Table 4). The percentage of emergency termination did not differ when all patients (with moderate or severe COVID-19) were considered, but it was significantly lower in the remdesivir group than in the non-remdesivir group among patients who experienced moderate COVID-19 (11.1% vs. 42.9%) (p = 0.042). Moreover, the results of Bayesian multivariate logistic regression (Supplementary Table 10) demonstrated a statistically significant association between not receiving remdesivir and the need for emergency termination in moderate COVID-19 (OR 2.27; 95% CI 1.11-6.05), such that the proportion of emergency termination was significantly lower in patients who had received remdesivir. There was no significant difference in the percentage of oligohydramnios between the groups. Other maternal and neonatal outcomes, such as new-onset hypertension (HTN), deep vein thrombosis (DVT), acute respiratory distress syndrome (ARDS), preterm delivery, birth weight centile, 5-min Apgar score, neonatal intensive care unit (NICU) admission, and neonatal death, showed no significant difference between the two groups.
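As an illustration of the modelling approach described in the statistical analysis section, the following minimal R sketch shows how such models could be specified. It is a sketch only, not the study's actual code: the data frame dat and all column names (remdesivir, hospital_los, icu_los, age, trimester, severity, discharged_d7) are hypothetical placeholders.

library(MASS)      # provides glm.nb() for negative binomial regression
library(rstanarm)  # provides stan_glm() for Bayesian regression

# Poisson regression: length of hospital stay in days, adjusted for the
# confounders named in the text (maternal age, trimester, COVID-19 severity)
m_los <- glm(hospital_los ~ remdesivir + age + trimester + severity,
             family = poisson(link = "log"), data = dat)
exp(cbind(IRR = coef(m_los), confint(m_los)))   # IRRs with 95% CIs

# Negative binomial regression: length of ICU stay in days
m_icu <- glm.nb(icu_los ~ remdesivir + age + trimester + severity, data = dat)

# Bayesian binary logistic regression for a sparse categorical outcome,
# e.g. hospital discharge by day 7
m_d7 <- stan_glm(discharged_d7 ~ remdesivir + age + trimester + severity,
                 family = binomial(link = "logit"), data = dat)
exp(posterior_interval(m_d7, prob = 0.95))      # ORs with 95% credible intervals

The sketch covers only the binary case; the multinomial analyses reported in the text (e.g. the three-level mode of oxygen support on day 7) would require an analogous multinomial model.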
Discussion In this cohort study conducted on pregnant patients with moderate to severe COVID-19, we aimed to assess the impact of remdesivir administered within 48 h of admission. Our results demonstrated that remdesivir might reduce hospital stay and the probability of preeclampsia development, and that it can lead to a lower level of oxygen requirement in patients in need of low-flow oxygen. A further positive effect was a lower chance of emergency termination in patients with moderate COVID-19. Since the onset of the COVID-19 pandemic, several studies in the non-pregnant population have shown inconsistent results regarding the effectiveness of remdesivir [7][8][9]. In addition, data on how COVID-19 is treated during pregnancy, and studies of remdesivir in this population, are even more limited [19]. In a multicenter observational study by Burwick et al., 86 patients, including 67 pregnant and 19 post-partum women with severe and critical COVID-19, were treated with a 10-day course of remdesivir, leading to a high rate of clinical recovery. Their results revealed a reduction in the level of oxygen need in 96% of pregnant and 89% of postpartum women by day 28 of hospitalization. This effect was found in patients on any type of oxygen support, from low-flow to high-flow oxygen and mechanical ventilation [10]. Note that there was no matched control group in their study. Although data on the efficacy of remdesivir are not entirely clear [7,9,20], some recent guidelines, such as those of the National Institutes of Health (NIH), tend to support remdesivir use in adults on no or minimal supplemental oxygen [21], on the grounds that remdesivir might be less effective in more severe cases and in patients on ventilatory support [9,22]. Consistent with these recommendations, our results showed a decreased requirement for supplemental oxygen in patients who were supported with only low-flow, and not high-flow, oxygen. In another study, on 35 pregnant women with moderate COVID-19, Nasrallah et al. [23] reported that early administration of remdesivir within 48 h of admission led to early clinical recovery, whereas delayed treatment was associated with a longer recovery as well as longer hospitalization. They considered recovery to be discharge by day 7 or breathing ambient air. Furthermore, all patients (17 patients) on early remdesivir treatment experienced clinical improvement by day 7, but only 3 out of 11 patients without remdesivir did [23]. In our cohort study, we excluded those who received remdesivir after 48 h of admission to overcome the possible confounding effect of delayed treatment. Our findings revealed that remdesivir may reduce hospital stay and lead to earlier discharge, as we found a significant association between receiving remdesivir and discharge by days 7 and 14. Previous data have shown that the course of COVID-19 is associated with a higher incidence of preeclampsia due to impaired renal function [24,25]. However, no study has reported any correlation between remdesivir and the development of preeclampsia. Our results demonstrated that remdesivir might significantly decrease the probability of preeclampsia. Consistent with this, in the study by Burwick et al., none of the pregnant patients experienced preeclampsia [10]. As such, the reduction in preeclampsia in patients treated with remdesivir could be due to improved renal function. Recent data on the impact of remdesivir on renal function are not completely consistent in the general population. In the Elec et al.
study [26], which investigated the effect of remdesivir in patients with a history of renal transplantation, the authors showed that remdesivir not only does not cause renal impairment, but also leads to an increased eGFR (estimated glomerular filtration rate). On the contrary, some studies have reported an adverse role of remdesivir in renal function [27]. The favorable impact of remdesivir might operate through inhibition of the NF-κB (nuclear factor-κB) and MAPK (mitogen-activated protein kinase) signaling pathways, thus suppressing the expression of the NLRP3 (NOD-, LRR- and pyrin domain-containing protein 3) inflammasome and leading to improvement of lipopolysaccharide-induced acute kidney injury [28]. In line with this, a number of reports have shown that the NLRP3 inflammasome is involved in the pathophysiology of preeclampsia [29][30][31]. It is suggested that future studies investigate the potential role of remdesivir in preventing preeclampsia in pregnant women with COVID-19. The other finding of our study was a significantly lower chance of emergency termination in patients with moderate disease severity in the remdesivir group (11.1%) compared with the non-remdesivir group (42.9%) (OR 2.46). More research is needed, as no previous studies have reported such an effect of remdesivir. This study has its own strengths and limitations. Participants who had not received corticosteroid treatment or who had received the COVID-19 vaccine were not included in the analysis, for optimal matching. We also considered some confounding factors as covariates in the analysis, including maternal age, gestational trimester, and COVID-19 severity. There are two main limitations in this study that could be addressed in future research. The first is the relatively small sample size, even though our sample size is comparable to that of the literature studying related questions. Considering widespread vaccination and the reduction in moderate to severe cases of COVID-19, we were not able to increase the sample size. The second limitation is the short time frame for evaluation of respiratory and clinical outcomes (days 7 and 14 of hospitalization). We could nevertheless suggest some novel findings despite these restrictions. However, further research with a longer duration of follow-up and a larger sample size of pregnant patients should be undertaken. Conclusion Our results suggest that remdesivir may lead to a shorter hospitalization time and a reduction in oxygen need while patients are on low-flow oxygen. With regard to maternal outcomes, it might lower the probability of preeclampsia development and the chance of emergency termination in moderate COVID-19. As with the majority of studies, the design of the current study is subject to limitations. We recommend that further studies with a larger sample size verify these new findings. Author contributions TA and MK designed the work. MMS and SO collected data. ZN, NR and FEM analyzed and interpreted data. TA and MMS wrote the initial draft of the manuscript. PP, MM and SSG edited the manuscript, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. Funding The authors were not supported financially for the preparation of this research. Data availability The datasets analyzed in the current study will be shared via the following e-mail address: maryam.masoumi.sh@gmail.com. Declarations Competing interest All authors declare that they have no potential conflicts of interest.
Ethics approval The protocol of this study was approved by the Institutional Review Board (IRB) and performed according to the ethical principles outlined in the Declaration of Helsinki. Consent to participate All participants provided informed consent for the use of their de-identified data. Consent to publish Patients signed informed consent regarding the publication of their data.
2023-06-07T06:17:49.980Z
2023-06-05T00:00:00.000
{ "year": 2023, "sha1": "385f1685fd7d22cbeb8d4d3f3a34abeb9482110c", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "2c5a46c04ef5502a7ff14759aae66089f2a673f5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
234116913
pes2o/s2orc
v3-fos-license
Receptor mechanism of the infarct-limiting effect of adaptation to normobaric hypoxia The aim of the study was to investigate the involvement of bradykinin, cannabinoid and vanilloid (TRPV1 channel) receptors in the implementation of the infarct-limiting effect of chronic normobaric hypoxia (CNH). Materials and methods. The study was performed on male Wistar rats (n = 117) weighing 250-300 g. Adaptation to CNH was modeled for 21 days at 12% pO2, 0.3% pCO2 and normal atmospheric pressure. A day after adaptation of rats to CNH, coronary artery occlusion (45 min) and reperfusion (2 h) were performed. The following compounds were used in the study: the selective cannabinoid CB1 receptor antagonist rimonabant (1 mg/kg), the selective cannabinoid CB2 receptor antagonist AM630 (2.5 mg/kg), the selective bradykinin B2 receptor antagonist HOE140 (50 μg/kg), and the vanilloid receptor (TRPV1 channel) antagonist capsazepine (3 mg/kg). All antagonists were administered 15 min before coronary artery occlusion. Results. Adaptation to normobaric hypoxia promoted the formation of a pronounced infarct-limiting effect. Blockade of the B2 receptor eliminated the infarct-limiting effect of CNH. Blockade of cannabinoid or vanilloid receptors did not affect the infarct-limiting effect of CNH. Conclusion. The infarct-limiting effect of CNH depends on the activation of the B2 receptor, and the adaptive increase in cardiac tolerance to ischemia/reperfusion does not depend on cannabinoid or vanilloid receptors. INTRODUCTION It is known that chronic moderate hypoxia induces nonspecific myocardial resistance to damage during ischemia and subsequent reperfusion. However, the pathways by which myocardial resistance forms during adaptation to hypoxia remain poorly understood. In particular, the receptor mechanisms of this phenomenon have not been sufficiently studied. Earlier, we found that opioid receptors participate in the infarct-limiting [1] and cytoprotective [2] effects of adaptation to continuous hypoxia. However, other receptor mechanisms remain unexplored. At the same time, an important role of bradykinin, cannabinoid and vanilloid receptors in the regulation of the heart's tolerance to ischemia/reperfusion during ischemic and remote preconditioning is known [3][4][5][6][7]. The aim of this study was to investigate the participation of bradykinin, cannabinoid, and vanilloid receptors (TRPV1 channels) in the implementation of the infarct-limiting effect of continuous normobaric hypoxia (CNH). MATERIALS AND METHODS The study was performed on male Wistar rats (n = 117) weighing 250-300 g. Animals of the experimental groups (adapted to hypoxia) were exposed to CNH (12% pO2, 0.3% pCO2) at normal atmospheric pressure in a chamber for 21 days [1]. The gas composition of the chamber was monitored using TCOD-IR and OLC 20 sensors (Oldham) and the Bio-Nova-204G4R1 apparatus (NTO Bio-Nova) through the MX32 control unit (Oldham). Twenty-four hours before the start of the experiment, the animals of the experimental group were removed from the hypoxic chamber. Rats of the normoxic control groups were kept under standard vivarium conditions. Before the coronary occlusion procedure, the animals were anesthetized with α-chloralose (100 mg/kg i.p.). During subsequent manipulations, the animals received artificial ventilation with atmospheric air, delivered by a SAR-830 Series ventilator (Central Wisconsin Engineers Inc., Schofield, USA) through an intubated trachea.
To perform coronary occlusion, the chest was opened at the intercostal space to the left of the sternum, the heart was freed from the pericardium, and a ligature was placed on the left descending coronary artery in its upper third for 45 minutes. Reperfusion was performed by releasing the ligature, with visual confirmation of the restoration of coronary circulation by hyperemia of the ischemic region [8]. The duration of reperfusion was 2 hours. To determine myocardial infarct size, the ligature previously placed on the left coronary artery was tightened again; the isolated heart was washed through the aorta with physiological saline and stained with 5% potassium permanganate solution. After washing the myocardium with saline, the right ventricle was separated, both ventricles were weighed, and the left ventricle was dissected into sections 1 mm thick parallel to the axis of the heart. Sections of the left ventricle were stained with a 1% solution of 2,3,5-triphenyltetrazolium (37 °C, 30 minutes) and fixed for 1 day in a 10% solution of neutral formalin [8]. Slices were scanned (Scanjet G2710), and the size of the necrosis zone and the area at risk (ischemia/reperfusion zone) were determined planimetrically using an image-analysis software package. Infarct size was expressed as a percentage of the area at risk. The following drugs were used in the study: the selective cannabinoid CB1 receptor antagonist rimonabant (1 mg/kg), the selective cannabinoid CB2 receptor antagonist AM630 (2.5 mg/kg), the selective bradykinin B2 receptor antagonist HOE140 (50 μg/kg), and the vanilloid receptor (TRPV1 channel) antagonist capsazepine (3 mg/kg). All antagonists were administered 15 minutes before coronary artery occlusion. The choice of doses of the pharmacological agents was based on previous data [9][10][11][12]. Statistical data processing was performed using Statistica 6.0 software (StatSoft, Inc.). The mean value (M) and standard error of the mean (SEM) were calculated. The significance of differences between groups was determined using the nonparametric Mann-Whitney U-test; the infarct size calculation and this comparison are illustrated in the code sketch below. The critical significance level was taken as p = 0.05. RESULTS Adaptation to normobaric hypoxia led to the formation of a pronounced infarct-limiting effect: the size of the infarct formed during coronary occlusion-reperfusion, defined as the ratio of the size of necrosis to the area at risk, was 38% less than in non-adapted rats. It should be noted that right ventricular hypertrophy is characteristic of the state of chronic hypoxia (Table 1). It was found that inhibition of cannabinoid CB1 receptors by the selective antagonist rimonabant did not lead to a change in infarct size in rats adapted to CNH (Table 1). These data indicate that CB1 cannabinoid receptors are not involved in the formation of the infarct-limiting effect of CNH. Injection of the selective CB2 cannabinoid receptor antagonist AM630 also did not affect infarct size during coronary artery occlusion and reperfusion in rats adapted to CNH (Table 1). Administration of the selective cannabinoid receptor antagonists to non-adapted rats did not lead to a change in infarct size during subsequent coronary occlusion (Table 1). These data suggest that CB1 and CB2 cannabinoid receptors are not involved in the infarct-limiting effect of CNH. Note: * p < 0.05 compared with the control group, Mann-Whitney U-test. AN - area of necrosis, AR - area at risk (here and in Table 2).
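As an illustration of the planimetric calculation and the group comparison described in Materials and Methods, the following R sketch is provided. The numeric values are hypothetical and serve only to show the computation; they are not measurements from this study.

# Infarct size expressed as a percentage of the area at risk (AN/AR x 100)
infarct_pct <- function(area_necrosis, area_at_risk) {
  100 * area_necrosis / area_at_risk
}

# Hypothetical planimetric measurements (arbitrary units) for two groups
control <- infarct_pct(c(42, 38, 45, 40, 43), c(90, 85, 95, 88, 92))
cnh     <- infarct_pct(c(26, 23, 28, 24, 27), c(91, 84, 93, 87, 90))

# Group mean (M) and standard error of the mean (SEM), as reported in the tables
mean(cnh); sd(cnh) / sqrt(length(cnh))

# Nonparametric Mann-Whitney U-test (wilcox.test in R); p < 0.05 significant
wilcox.test(control, cnh)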
Blockade of bradykinin receptors by the selective antagonist HOE140 led to an increase in infarct size in rats adapted to CNH (Fig. 1). Moreover, in non-adapted rats, blockade of the bradykinin receptors did not affect infarct size. These data indicate that bradykinin receptors are involved in the formation of the infarct-limiting effect of CNH. Inhibition of vanilloid receptors (TRPV1 channels) by the selective blocker capsazepine did not affect infarct size in rats after a course of CNH or in non-adapted animals (Table 2). The obtained data allow us to conclude that there is no connection between TRPV1 channel activation and the formation of cardioprotection during adaptation to normobaric hypoxia. DISCUSSION The problem of myocardial protection against ischemic damage remains relevant, despite significant progress in this area. The reason for this is the lack of effective cardioprotective drugs without strong side effects. Currently, beta-blockers, alpha-2-adrenoreceptor agonists, calcium channel blockers, nitrates, statins, and macroergic compounds have been proposed to protect the myocardium from ischemia-reperfusion injury [13]. The effectiveness of a number of these drugs is insufficient for anti-ischemic protection, which, given their many side effects, casts doubt on the feasibility of their use. Thus, the search for new means of myocardial protection during ischemia-reperfusion remains an urgent task of modern pharmacology. One avenue for a directed search for such agents is the study of the mechanisms of nonspecific adaptive resistance of the myocardium to ischemic damage. Thus, it is known that the myocardium of animals subjected to moderate chronic hypoxia is more resistant to ischemic insults than the myocardium of intact animals [1,8,14]. This phenomenon has been studied for 60 years, but many aspects of the formation of adaptive myocardial resistance remain unexplored, and its receptor mechanisms in particular remain poorly understood. Previous studies in our laboratory have shown the participation of opioid receptors in adaptive cardioprotection [1,2]. The present work revealed that bradykinin receptors are also involved in triggering the defense mechanism during adaptation to chronic hypoxia. Both types of receptors are known to couple to Gi/o proteins located on the cardiomyocyte membrane. Sequential activation of receptors (opioid or bradykinin) and Gi/o proteins triggers an intracellular kinase cascade that engages protein kinase C, NO synthase and tyrosine kinase, and subsequently activates ATP-sensitive potassium channels of the mitochondria [14]. The result of the latter is inhibition of the opening of the mitochondrial permeability transition pore (MPTP), an increase in the resistance of mitochondria to calcium ions, an improvement in mitochondrial energy metabolism, and thus a decrease in the sensitivity of cells to the damaging effects of ischemia and reperfusion [15]. CONCLUSION The obtained results identify bradykinin receptors as one of the key mechanisms of the formation of the infarct-limiting effect of continuous normobaric hypoxia. Taking into account the data on the important role of opioid receptors in cardioprotection in CNH [1,2], the infarct-limiting effect of chronic hypoxia can be said to be implemented through Gi/o-protein-coupled opioid and bradykinin receptors.
Cannabinoid receptors and TRPV1 channels do not participate in the infarct-limiting effect of adaptation to normobaric hypoxia.
2021-05-11T00:04:38.971Z
2021-01-07T00:00:00.000
{ "year": 2021, "sha1": "72ce86691359460c8429e17db88d49da41b5a93c", "oa_license": "CCBY", "oa_url": "https://bulletin.tomsk.ru/jour/article/download/4161/2880", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "485e7497dde94d498dfb2415b0eb03977271fcec", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
259643448
pes2o/s2orc
v3-fos-license
Digital Exclusion of Seniors as a Barrier to Leading a Productive Life in the Fourth Industrial Revolution Purpose – Analysis of the reasons for cyber exclusion of the elderly and a statistical illustration of this phenomenon in Poland. Research method – Literature review, analysis of current statistical data, synthesis. Results – The introduction of modern ICT-based solutions can significantly help older people to remain independent and maintain a high quality of life. This is facilitated by strategic initiatives funded by the European Union, encouraging public units to take action to increase digital competences in society. Introduction The acceleration of the ageing process affects the size, structure and scope of societal needs, but it also forces authorities to look for new ways to provide public and social services efficiently [www 1]. As a consequence of recognising the demographic threats to the effective functioning of countries, the WHO prepared a new strategy document called Healthy Ageing and declared 2020-2030 the Decade of Healthy Ageing [www 2, www 3]. The provisions of this strategy are intended to intensify activities aimed not only at improving the quality of life of older people living today, but also to urge the governments of individual countries to undertake activities preparing the next generations for healthy old age. Measures to prevent discrimination, social exclusion and marginalisation of older people have been identified as most important. It is also crucial to increase their empowerment and presence in society and to improve the care and services offered to this age group. Solutions resulting from the digitalisation and digitisation typical of the Fourth Industrial Revolution were identified in the Healthy Ageing strategy as having a significant impact on the efficiency of public and social service delivery. Digital technologies, especially computers, smartphones, tablets and the Internet, have increasingly permeated all aspects of human life for more than two decades now. The problem, however, is that different age groups use them with different intensity [Hunsaker, Hargittai, 2018; Seifert, Rössel, 2019]. This raises the question of why older adults use the Internet rarely or not at all [Schulz et al., 2015]. An explanation for this phenomenon is increasingly sought not only in the endogenous determinants that characterise the elderly population, but also in exogenous ones – pointing to the role of public governance units in counteracting, mitigating and preventing digital exclusion of people in this age group. Early research on digital exclusion emphasised the importance of endogenous, mainly psychological, factors, arguing that older adults are less likely to use the Internet and other ICTs because they show higher levels of computer anxiety [Cattaneo et al., 2016; Charness, Boot, 2009; Lee et al., 2011; Neves et al., 2013; Silver, 2015], frustration with the difficulty of using user interfaces [Damodaran et al., 2013; Gatto, Tak, 2008; Hussain et al., 2017], negative attitudes towards technology [Kamin et al., 2017; Reisdorf, Groselj, 2017], and concerns about online safety, mainly in relation to the appropriation of personal data [Gatto, Tak, 2008; Hussain et al., 2017; Lee et al., 2011]. The main determinants preventing the use of ICT in older age categorised as endogenous [Bakaev et al., 2008; Charness, Boot, 2009] also included health-related barriers, e.g.
poor eyesight, hand tremor or cognitive impairment, making ICT use difficult and sometimes even impossible [Charness, Boot, 2009; Cresci et al., 2010; Damodaran et al., 2013; Gatto, Tak, 2008; Hussain et al., 2017; Lee et al., 2011; Lelkes, 2013], as well as low education and income [Charness, Boot, 2009; Cresci et al., 2010; Lelkes, 2013; Neves, Amaro, 2012]. Some studies even suggest that it is not age alone, but rather a combination of experience and education level, that determines the level of computer anxiety among older people [Fernández-Ardèvol, Ivan, 2015]. The last finding emphasises the role of different forms of education as an opportunity to increase Internet use among older people. While research has identified many endogenous factors that explain why older adults do or do not use the Internet and other ICTs, less attention has been paid to examining the extent and effects of interventions by public management units to support the digital inclusion of older people, i.e. exogenous determinants. For a long time, research on such interventions focused on online or traditional (face-to-face) training as essential activities for supporting older people in their use of the Internet [Černá, Svobodová, 2018; Damodaran et al., 2013; Esteller-Curto, Escuder-Mollon, 2012; Fernández et al., 2016; Kokol, Stiglic, 2011; Sitti, Nuntachompoo, 2013; Yamauchi et al., 2008], which led to personalised learning being seen for many years as the most commonly used strategy to prevent digital exclusion in older age. However, such individualised accounts of Internet use and non-use in older age are increasingly being questioned. One of the more critical approaches to this topic has been framed as the so-called material praxeology of ageing with technologies [Wanka, Gallistl, 2018, p. 14]. It assumes that the use or non-use of ICTs in old age is not the result of a conscious decision or an individual learning process, but depends on a variety of factors operating within the citizen's social field, mainly the activity of public governance units, the content of discourses and the availability of digital technologies. From this perspective, not using the Internet and other ICTs in old age is not an individual process, but a "co-constituted process in a social field, composed of actors, discourses and power relations" [Wanka, Gallistl, 2018, p. 14]. The above statement implies that public governance units can significantly influence the elimination of cyber-exclusion of older people and thus contribute to increasing their use of the social services provided, including via the Internet and other ICTs. At the same time, it is important to note that while production systems are changing and adapting rapidly to the challenges of digital transformation, redistribution/social care systems – dependent on limited public funding – are poorly innovative. Firstly, digital transformation is creating a new era of industrial production (Industry 4.0), which strongly influences the social sphere and lays the groundwork for the development of Government 4.0, Healthcare 4.0, Welfare 4.0 or Society 5.0. The above-mentioned phenomenon may be called an external modernisation effect on states and their public management entities. Secondly, the digitalisation of the state and the public service sector produces internal modernisation effects.
These are related to the digitisation of processes connected with the labour market, social welfare and health care, among others, and to the technical environment, such as the spread of Internet connections and the expansion of broadband. In addition, internal modernisation consists of institutional support for the development of the individual skills and abilities that digitisation requires for participation in social and professional life. Related to this is the question of how the state and its public governance units will deal with digital exclusion – and what public policy solutions can be identified to counter the effects of digital exclusion, among older people in particular. In order to seek an answer to the problem formulated in this way, it is necessary to define the phenomenon of digital exclusion and characterise its level. Digital exclusion – the concept Digital exclusion is a relatively new concept that emerged as a side-effect of the third industrial revolution and is particularly evident now, during the fourth industrial revolution. Every industrial revolution affects relations in society and causes a transformation of its social structures. The first and second revolutions were long-term processes (the first started at the end of the 18th century, the second at the turn of the 20th century). The next two (the third in the 1970s and 1980s; the fourth, ongoing) are characterised by a rapid acceleration of change in the economic and social spheres. J. Rifkin [Rifkin, 2001, p. 86] stated that the third industrial revolution occurred "immediately after the Second World War and has had a significant impact on the way societies organise their economic activities since the early 1990s". The advent of computers in the 1960s and the development of computerisation introduced large-scale automation of production and radical changes in the labour market, which contributed to the formation of the information society. As a consequence of the third revolution, in addition to the economic transformation, significant changes in the social sphere concerning the functioning of individuals, social groups and entire societies became a reality. The rapid technical progress associated with computerisation and informatisation revolutionised everyday life. Digital technologies began to permeate business activities and almost all aspects of daily and public life, causing many activities to move into the digital space. The development of information and communication technologies has led to knowledge and information becoming strategic resources, rather than labour and capital as before. The computer, the Internet and various digital techniques for producing, collecting and using information have underpinned the creation of the information society [Maier, Emery, Hilliard, 2001, pp. 107-109]. The fourth industrial revolution currently underway is characterised by the explosion of the ubiquitous and mobile Internet, the formation of cyber-physical systems, and the emergence of new developments such as the Internet of things, the Internet of services, the Internet of everything and the possibility of cloud computing (so-called clouds) [Schwab, 2018, pp. 17-21]. As M. Szpunar [Szpunar, 2018, p. 194] emphasises, for an information society living in the era of the fourth industrial revolution, the Internet has become the dominant technology that determines the course of a number of socio-cultural processes.
It is also the main source of knowledge distribution and the primary tool for social interaction. Many researchers [Ampuja, Koivisto, 2014, pp. 447-463] argue that, according to D. Bell's conception, the development of ICT in the information society makes knowledge a common good of the whole society, and free and equal access to it is the basis of democracy. The democratisation of knowledge allows all people to benefit from it without restrictions. The ease of access to information creates the conditions for greater diffusion of knowledge, as the necessary information can be obtained easily and affordably by electronic means. In reality, however, the situation is much more complicated, as emphasised by D. Batorski [2009, p. 224], who states that "not everyone has access to new technologies and not everyone can benefit from them. With the growing importance of computers and the Internet in practically all spheres of life, those who cannot or will not be able to use these technologies will become increasingly disadvantaged and excluded from social life". The development of the information society generates digital exclusion, which is a specific form of social exclusion. Due to the complex nature of the phenomenon of social exclusion, defining it unambiguously poses a number of difficulties. Common to the various definitions is the emphasis that it is a complex, multi-causal and multidimensional phenomenon involving a lack of resources and rights. Ch. Gore and J. B. Figueiredo [2003, p. 18] emphasise that social exclusion can be considered as a condition (equated with relative deprivation), but also as a process that makes it difficult for a part of society to access economic resources, social goods and institutions that affect its destiny. As with the term social exclusion, the concept of digital exclusion is also difficult to define unambiguously. Ł. Arendt [2010, p. 28] defines this phenomenon as the occurrence of "inequalities in various levels of access to computers and the Internet and the use of the possibilities of information and telecommunication technologies for personal and professional purposes, conditioned by the level of information skills of the individual and/or the organisation". The notion of digital exclusion is defined somewhat more broadly by D. Batorski and A. Płoszaj [2012, p. 8]. The cited authors draw attention to the occurrence of "differences between people who have regular access to information and communication technologies and are able to use them effectively and those who do not have such access. These differences relate both to physical access to the technologies and, more broadly, to the skills and resources needed to use them. The problem of digital exclusion is not about the use of technology per se, but rather about the differences in life chances, labour market situation, and opportunities to participate in social and cultural life that emerge between users and non-users or those with insufficient skills of use." The characteristics of social and digital exclusion were presented by Ł. Tomczyk [2013, p. 118], who attributed specific characteristics, and the consequences associated with them, to each of these disadvantageous social phenomena (Table 1). Analysing the designations of digital exclusion proposed by Ł. Tomczyk, it can be noted that the development of modern technologies may in many cases become a factor generating or deepening the stratification of societies. J.
van Dijk [2005, p. 16] emphasises that "social equality is under threat, as some groups of people participate more in the information society than others. Some take advantage of the opportunities it creates, others are unable to do so. Technology enables a better distribution of knowledge, but its complexity and cost may exacerbate existing inequalities and even create large groups of 'misfits' who do not fit into the information society"; he also points to the need to intensify efforts directed at education and the growth of digital skills and competences. Table 1 pairs each consequence of digital exclusion with its effects:
– Communication barriers: inability to connect with members of the public through available synchronous (instant messaging, chat rooms) and asynchronous (e-mail, discussion forums) communication tools.
– Limitation or inability to use public institutions: lack of opportunity to use e-government services (e.g. Internet voting, administrative support – submitting applications and requests via the network).
– Discrimination: lack of regulations effectively transferred into practice to minimise white spots in the access and education of those without IT expertise.
– Lack of market access to universal services and trade: deprivation of the ability to purchase goods online and to use other e-services (e.g. e-banking, e-health, e-libraries).
– Cultural absenteeism: limited access to Internet-generated culture (music, artistic creations, e-literature).
Internet use by senior citizens in Poland Population groups at risk of being digitally excluded are heterogeneous communities. There are many typologies of factors differentiating digital exclusion, but practically all of them point to age, education level and place of residence as important. Referring to the characteristics of the digitally excluded by age, it can be noted that older people are in the most difficult situation. This is confirmed by the results of research conducted by both the Central Statistical Office [www 5, www 6] and CBOS [www 7]. Although the percentage of seniors using the Internet is systematically increasing, it is still significantly lower than in the general population. While in 2015 Internet users made up 21.3% of the 65+ age group, six years later this share had increased to 58.3%. The digital divide between the generations is significant and increases with age. The gap in Internet use is particularly evident in relation to young people: in the 16-24 age group, the share of Internet users in 2021 was 98.4%. There is therefore a clear generation gap in terms of the skills needed to use the Internet (Table 2; source: [www 6, Part 2, Table 4]). The frequency of Internet use in the last 3 months indicates that for the vast majority of users aged 16-24 it is a tool used daily or almost daily (97.5%). J. van Dijk [van Dijk, 2005, p. 13] calls this group of users the information elite. As people get older, Internet use becomes less intensive, and of those aged 65-74 using the Internet in the last 3 months, only 31.3% do so daily or almost daily. In addition, given that almost 42% of older people do not use the Internet at all, the scale of digital illiteracy in this group calls for action at various levels to include older people and develop their digital competences.
People in the 55-64 and 65-74 age groups indicate [www 6, Part 2, Table 5] various reasons for not using the Internet, including the lack of need to use it (16.0% and 32.4% of the respective age groups), lack of appropriate skills (13.4% and 27.1%), lack of appropriate equipment (3.6% and 8%) and the fact that other people do it for them (3.5% and 7.5%). At the same time, it should also be noted that the most frequently used equipment is a smartphone or mobile phone, much less often a laptop or desktop computer. Older people most often used the Internet to communicate, to participate in various forms of entertainment and to seek information about their own or their loved ones' health. In each of the distinguished areas of Internet activity, the groups of people aged 55-64 and 65-74 show a significantly lower level of use of the opportunities to satisfy their needs using modern technologies. The analysis of the data in Table 3 also reveals a low level of Internet use for dealing with administrative matters, both by people in the 65-74 age group (15.8%) and by people of so-called "pre-old age" in the 55-64 age group (35.0%), while for 62.3% of the population aged 25-54 it is an obvious medium of communication with institutions [www 6, Part 2, Table 11]. As E. Ziemba and T. Papaj [2023, p. 362] write, "currently, digitisation is coordinated with the goals and tasks set by the eGovernment Action Plan, and above all the closely related Operational Programme Digital Poland (POPC)". In the POPC for 2014-2020, the digitisation of public management is addressed by Priority Axis II, e-government and open government, comprising four measures: Measure 2.1, High availability and quality of public e-services; Measure 2.2, Digitisation of back-office processes in government administration; Measure 2.3, Digital availability and usability of public sector information; and Measure 2.4, Creation of services and applications using public e-services and public sector information. Many activities of the European Union are in line with this trend, including, among others, the increase in financial resources for ICT development in the last programming periods under the cohesion policy (2007-2013, 2014-2020, 2021-2027). Summary Access to digital devices and digital literacy support people's independent functioning in the economic, administrative, public and social spheres. However, the digital revolution can mean potential problems in accessing a variety of services, including public and social services, for many people. The effective delivery of these to citizens is an issue facing decision-makers in governments implementing the welfare model. Changes in social structures and institutions, the growing aspirations and needs of society, changes in lifestyles and work, the development of technologies and new forms of professional and social activity, changes within the family and interpersonal relations, and the growing willingness of citizens to participate and co-determine in decision-making processes all result in a growing demand for wide access to developed services provided also with the use of ICT. However, even the best-prepared IT infrastructure will not be effective if its addressees do not see the benefits of digitisation and are not digitally literate. The IT revolution and the emergence of new communication tools create unlimited opportunities for accessing different types of information and services, but they also increase the risk of a "communication gap", which particularly affects older people.
Older people are less responsive to changes in the way their needs are met and find it more difficult to adapt to new requirements (e.g. e-registration or e-prescriptions), which means they are also at greater risk of digital exclusion. The low participation of older people in using the Internet may be due to the presence of so-called hard barriers and/or so-called soft barriers. Hard barriers – financial, coverage and equipment barriers, i.e. those related to the lack of access and usability – are less and less decisive for digital exclusion. Soft barriers, on the other hand, are increasingly the decisive factors in not using the computer and the Internet. They include, above all, the lack of skills to use new technologies, the lack of knowledge about what opportunities are offered by computers and the Internet, fears related to the use of the Internet, the lack of interesting services and content, the lack of interest in modern technologies and of the need to use the Internet, and the related self-exclusion. This manifests itself in an aversion to novelty and change and in the belief that new skills can only be learned up to a certain age [www 8]. The main reasons for not using the Internet [www 2, CSO 2021b] in the pre-old age group (55-64) and among the elderly (65-74) are soft barriers: the lack of appropriate skills – 13.4% and 27.1%, respectively – and the lack of need to use the Internet – 16.0% and 32.4%, respectively. In the sphere of hard barriers, health problems and equipment unsuited to the limited capabilities of seniors dominate [www 2, CSO 2021b]. As mentioned earlier, a number of strategic initiatives have emerged in recent years directed at taking action to reduce digital exclusion, particularly among older people. Of course, strategies will only achieve their objectives if people are willing to change their behavioural patterns and the public sector undertakes initiatives aimed at counteracting digital exclusion, especially among older people. To sum up the discussion, it should be noted that the use of digital technologies by older people is important for their access to goods and services. Combating the digital exclusion of seniors is therefore not only one of the factors in building the so-called silver economy, but also a step towards social welfare.
2023-07-11T16:28:17.422Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "3bbbcaadf8dc65ed30774fc943d1276cf2aca5e7", "oa_license": "CCBY", "oa_url": "https://repozytorium.uwb.edu.pl/jspui/bitstream/11320/15021/1/Optimum_1_2023_A_Fraczkiewicz_Wronka_M_Zralek_S_Ostrowska_Digital_Exclusion_of_Seniors.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e17e6a0a6e5e1713e62c7d841de0b6ba7b9c6fc7", "s2fieldsofstudy": [ "Sociology", "Computer Science", "Political Science" ], "extfieldsofstudy": [] }
250282378
pes2o/s2orc
v3-fos-license
Atrial fibrillation and risk of progressive heart failure in patients with preserved ejection fraction heart failure Abstract Aims Understanding of the pathophysiology of progressive heart failure (HF) in patients with heart failure with preserved ejection fraction (HFpEF) is incomplete. We sought to identify factors differentially associated with risk of progressive HF death and hospitalization in patients with HFpEF compared with patients with HF and reduced ejection fraction (HFrEF). Methods and results Prospective cohort study of patients newly referred to secondary care with suspicion of HF, based on symptoms and signs of HF and elevated natriuretic peptides (NP), followed up for a minimum of 6 years. HFpEF and HFrEF were diagnosed according to the 2016 European Society of Cardiology guidelines. Of 960 patients referred, 467 had HFpEF (49%), 311 had HFrEF (32%), and 182 (19%) had neither. Atrial fibrillation (AF) was found in 37% of patients with HFpEF and 34% with HFrEF. During 6 years of follow-up, 19% of HFrEF and 14% of HFpEF patients were hospitalized or died due to progressive HF, hazard ratio (HR) 0.67 (95% CI: 0.47-0.96; P = 0.028). AF was the only marker that was differentially associated with progressive HF death or hospitalization in patients with HFpEF, HR 2.58 (95% CI: 1.59-4.21; P < 0.001), versus HFrEF, HR 1.11 (95% CI: 0.65-1.89; P = 0.7). Conclusions De novo patients diagnosed with HFrEF have a greater risk of death or hospitalization due to progressive HF than patients with HFpEF. AF is associated with increased risk of progressive HF death or hospitalization in HFpEF but not HFrEF, raising the intriguing possibility that this may be a novel therapeutic target in this growing population. Introduction Chronic heart failure (HF) is a leading cause of mortality and morbidity worldwide. 1,2 It is thought to develop because of conditions impacting negatively on left ventricular (LV) function, including ischaemic heart disease, hypertension and valvular heart disease. HF has traditionally been viewed as a failure of LV systolic function, with reduced LV ejection fraction (EF) used to define systolic dysfunction, assess prognosis, and select patients for therapeutic interventions. 3 However, it is well established that HF can occur in the presence of LVEF in the normal range: this so-called HF with preserved EF (HFpEF) now accounts for a substantial proportion of clinical cases of HF. 4,5 It is similarly well established that patients with HFrEF, after an initial insult to LV function and a period of stable symptoms, can enter a downward spiral of declining LV systolic function, characterized by fluid retention, symptomatic deterioration, hospitalization requiring intravenous loop diuretics, and premature death. 6 Clinical trials of drugs targeting activation of the renin-angiotensin-aldosterone system (RAAS) and sympathetic nervous system (SNS), shown to reduce risk of death and hospitalization due to progressive HF in patients with HFrEF, have not shown such favourable results in patients with HFpEF. 7 The results of these trials, and the encouraging results from the recent EMPEROR-Preserved trial, 8 suggest that some of the mechanisms leading to progressive HF in patients with HFpEF are shared with, and others may differ from, those in patients with HFrEF, although studies have not yet addressed this fundamental question, nor have studies directly compared risk factors for progressive HF in unselected patients with a new diagnosis of HFpEF or HFrEF.
Our aim was to explore a wide range of potential risk factors differentially associated with progressive heart failure outcomes in patients with HFpEF versus HFrEF. Methods We performed a prospective cohort study of all patients referred to a secondary care specialist HF clinic, from a primary care catchment of over 750 000 people, between 1 May 2012 and 1 May 2013, with suspicion of HF based upon clinical signs and symptoms of HF and elevated NT-proBNP. Upon arrival at the clinic, demographic details, medical history, height, weight, and medical therapy were recorded, and patients underwent clinical assessment. A venous blood sample was taken for measurement of full blood count, electrolyte concentrations, and assessment of renal and liver function. Blood pressure was taken (right arm, recumbent), and 12-lead electrocardiography and trans-thoracic echocardiography were performed. The prognostic nutritional index (PNI), which assesses nutritional status and inflammatory/hepatic function from clinical marker values, was calculated for each patient using the following equation: PNI = 10 × serum albumin concentration (g/dL) + 0.005 × total lymphocyte count (per mm³). 9 Vital status data were collected using linked Hospital Episode Statistics and Office of National Statistics mortality data. The study complied with the Declaration of Helsinki and received S251 ethical approval (CAG 8-03(PR1)/2013). Natriuretic peptides NT-proBNP concentration was measured in samples taken in primary care using the Immulite 2000 assay (Siemens Healthcare Diagnostics, Camberley, UK) in the biochemistry laboratory at the Leeds Teaching Hospitals NHS Trust. The inter-batch coefficient of variation was 8.9% at 350 pg/mL and 5.9% at 4100 pg/mL. Echocardiography Two-dimensional trans-thoracic echocardiography was performed by senior cardiac sonographers (J. G., M. P., and J. E. L.) blinded to NT-proBNP measurements. Left ventricular (LV) dimensions, left ventricular ejection fraction (LVEF), LV mass, left atrial (LA), and LV Doppler measurements were calculated according to the American Society of Echocardiography (ASE) and European Association of Cardiovascular Imaging (EACVI) guidelines, 10 and LV mass and LA volume were indexed to body surface area. Electrocardiography Standard 12-lead ECGs were recorded at 25 mm/s and analysed by a senior cardiologist blinded to patient characteristics. Classification of atrial rhythm status. Patients' atrial rhythm status was determined by their ECG at the clinic visit. Duration of AF was determined from medical records, and patients with persistent or permanent AF were categorized as having AF. Patient classification Patients were categorized using the European Society of Cardiology 2016 guidelines on the diagnosis of HFrEF or HFpEF. 11 We did not divide patients with EF < 50% into mid-range and reduced ejection fractions; instead, all patients with EF < 50% were classified as HFrEF. Patients with signs and symptoms of heart failure, NT-proBNP > 125 pg/mL and LVEF < 50% were classified as HFrEF; patients with signs and symptoms of heart failure, NT-proBNP > 125 pg/mL, LVEF ≥ 50% and relevant structural heart disease (left atrial volume index (LAVI) > 34 mL/m² or a left ventricular mass index (LVMI) ≥ 115 g/m² for men and ≥ 95 g/m² for women) or diastolic dysfunction (E/e′ ≥ 13 or a mean septal and lateral wall e′ < 9 cm/s) were classified as HFpEF.
Patients with signs and symptoms of heart failure and NT-proBNP > 125 pg/mL who did not meet the ESC criteria for either HFrEF or HFpEF were classified as neither HFrEF nor HFpEF; their final diagnoses can be found in Supporting Information, Table S1. Classification of patient outcomes Patient follow-up continued for a minimum of 6 years in surviving participants. HF hospitalization was defined a priori, using patient records, as new onset or worsening of signs and symptoms of heart failure with evidence of fluid overload requiring at least 24 h (overnight) hospitalization and the use of intravenous diuretics, 12 and progressive HF death was defined as death occurring after a documented period of symptomatic or hemodynamic deterioration. 13,14 The combined endpoint of progressive heart failure was defined as either first HF-related hospitalization or HF/cardiac-related death. Statistical analysis All statistical analyses were performed using IBM SPSS Statistics version 26 (IBM Corporation, Armonk, NY, USA). Normal distribution of data was confirmed using skewness tests. Continuous data are presented as mean ± standard deviation or median [interquartile range] if non-normally distributed; categorical data are shown as percentage (number). Groups were compared using two-sided Student's t-tests or ANOVA for normally distributed continuous data, Mann-Whitney or Kruskal-Wallis tests for non-normally distributed continuous data, and two-sided Pearson χ² tests for categorical data. Survival of groups was compared with Kaplan-Meier curves and log-rank tests, or Cox proportional hazards regression analysis, for which non-normally distributed data were log10 or natural log transformed to achieve normality. To explore whether the extent of association between specific covariates and the composite outcome of progressive HF death or hospitalization differed statistically between people with HFrEF and HFpEF, interaction terms were added to the models (an illustrative code sketch of this approach follows the patient characteristics below). Statistical significance was defined as P < 0.05. Results Between 1 May 2012 and 1 May 2013, 982 patients with suspected heart failure and NT-proBNP > 125 pg/mL were referred. Of these, 22 had echocardiographic images of insufficient quality to assess cardiac structure and function, and so 960 patients were included in this analysis. Patient characteristics Of the 960 patients referred, HFpEF was the most common diagnosis (n = 467; 49%), followed by HFrEF (n = 311; 32%) and neither HFpEF/HFrEF (n = 182; 19%). As shown in Table 1, patients with HFpEF were older than those with HFrEF, more often female, more likely to have a history of hypertension, and less likely to have a history of ischaemic heart disease than patients with HFrEF. As expected, patients with HFpEF differed significantly in LVEF from patients with HFrEF, but all other echocardiographic variables were similar. The number of patients prescribed disease-modifying medical therapy was typical for a population newly referred with suspicion of HF.
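As an illustration of two computational steps described in Methods, the PNI equation and the interaction-term analysis, the following minimal R sketch is provided. It assumes a hypothetical data frame hf with placeholder column names and is not the authors' actual analysis code.

library(survival)  # coxph() and Surv() for proportional hazards models

# Prognostic nutritional index from the equation given in Methods:
# PNI = 10 x albumin (g/dL) + 0.005 x total lymphocyte count (per mm^3)
hf$pni <- 10 * hf$albumin_g_dl + 0.005 * hf$lymphocytes_per_mm3

# Cox model for the composite outcome of progressive HF death or
# hospitalization; the af:hf_subtype term tests whether the association
# of AF with outcome differs between HFpEF and HFrEF (p for interaction)
fit <- coxph(Surv(follow_up_days, progressive_hf_event) ~
               af * hf_subtype + age + sex, data = hf)
summary(fit)  # the 'af:hf_subtype' row reports the interaction estimate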
Factors associated with progressive heart failure hospitalization or death in heart failure with preserved ejection fraction or heart failure with reduced ejection fraction

During the follow-up period there were 125 episodes of progressive heart failure hospitalization or death; 66 (53%) of these occurred in patients with HFpEF (of which 33 (50%) were due to progressive HF death and 33 (50%) to progressive HF hospitalization), and 59 (47%) in those with HFrEF (of which 29 (49%) were attributable to progressive HF death and 30 (51%) to progressive HF hospitalization). In patients with HFpEF, 14% died or were hospitalized due to progressive HF during the follow-up period, compared with 19% of patients with HFrEF (Figure 1B); the age- and sex-adjusted hazard ratio was 0.67 (95% CI: 0.47-0.96; P = 0.028). Univariate predictors of hospitalization or death from progressive heart failure in HFpEF and HFrEF are shown in Table 2. Among a range of potential prognostic markers, the only factor differentially associated with risk of hospitalization or death due to progressive heart failure in HFpEF versus HFrEF was the presence of atrial fibrillation (p for interaction = 0.021), which persisted after adjusting for age and sex (Table 3); survival curves for patients with and without atrial fibrillation in HFpEF or HFrEF are shown in Figure 2A,B. We then examined the characteristics of patients with HFpEF and HFrEF with and without atrial fibrillation (Table 4). Approximately 36% of patients with HFpEF had atrial fibrillation, as did 34% of those with HFrEF (P = non-significant). Patients with HFpEF and atrial fibrillation were older, more likely to be male, and had a faster resting heart rate and lower systolic blood pressure; these differences were not apparent in the HFrEF group. We therefore performed further analysis to account for the potential influence of these factors on the interaction between atrial fibrillation and HFpEF in association with progressive heart failure adverse outcomes. After adjusting for age, sex, heart rate, and systolic blood pressure, the interaction between atrial fibrillation and HFpEF persisted, suggesting that these factors did not contribute substantially to the interaction. We also divided patients into tertiles of NT-proBNP: baseline NT-proBNP predicted death and/or hospitalization due to progressive heart failure in patients with both HFpEF (log-rank P < 0.001) and HFrEF (log-rank P < 0.001) (Supporting Information, Figure S1), with no significant difference between HFpEF and HFrEF, confirming our interaction analyses (Table 3).

Discussion

By exploiting a unique prospective cohort study specifically designed to examine prognostic markers in patients with new-onset HFpEF or HFrEF, we present novel findings that significantly add to our understanding of the pathophysiology of HFpEF. We show that patients with HFpEF have a reduced but important risk of hospitalization or death due to decompensated HF compared with patients with HFrEF. We also show that atrial fibrillation is the only marker of increased risk of hospitalization or death due to decompensated HF that distinguishes patients with a new diagnosis of HFpEF from patients diagnosed with HFrEF.
Characteristics of patients with European Society of Cardiology defined heart failure with preserved ejection fraction

Consistent with our earlier reports of patients with HFpEF, 15,16 the ESC criteria for the diagnosis of HFpEF identified a cohort which was older and predominantly female. [17][18][19][20] Many of these studies did not examine progressive HF death or hospitalization in these patients.

Atrial fibrillation as a predictor of progressive heart failure in patients with heart failure with preserved ejection fraction

In univariate analysis we found a number of shared predictors of risk of hospitalization or death due to decompensated HF in patients with HFpEF and HFrEF. The only marker of hospitalization or death due to decompensated HF that discriminated between patients with HFpEF and HFrEF was atrial fibrillation, which was associated with a greater than two-fold increase in risk of decompensated HF in patients with HFpEF. After adjustment for a number of variables including resting heart rate, sex, age, and systolic blood pressure, atrial fibrillation remained differentially associated with progressive HF outcomes between patients with HFpEF and HFrEF, suggesting other factors may account for this intriguing observation.

[Figure 1. Long-term outcomes of patients with either heart failure with reduced (HFrEF) or preserved (HFpEF) left ventricular ejection fraction. Survival curves of (A) total survival and (B) death or hospitalization from progressive heart failure over 6 years in patients presenting to secondary care with suspected heart failure, classified according to European Society of Cardiology 2016 guidelines.]

Atrial fibrillation and heart failure with preserved ejection fraction

Atrial fibrillation is a common co-morbidity in people with HFpEF and may precede, coincide with, or develop following a diagnosis of HFpEF. 21 In the present study, almost 40% of patients with HFpEF had atrial fibrillation at presentation. In longitudinal studies, the development of atrial fibrillation after the diagnosis of HFpEF has been shown to increase the risk of death. 22 The present study applies the ESC 2016 diagnostic criteria 11 and for the first time demonstrates that at first diagnosis of HF, despite a similar prevalence of atrial fibrillation to that in patients with HFrEF, patients with HFpEF and atrial fibrillation are more than twice as likely to die or be hospitalized urgently due to progressive heart failure. Interestingly, data from the CASTLE-AF randomized trial of AF ablation in patients with HFrEF and EF < 35% demonstrated that patients assigned to ablation had a reduced incidence of death or HF hospitalization. 24 Benefits were observed with a reduction in AF burden from 60% with medical therapy to 25% with ablation, suggesting that a reduction in the time spent in AF may be enough to provide clinical benefit. These data contrast with the results of our study; however, CASTLE-AF had a relatively small number of participants, lacked blinded randomization and treatment allocation, 25 and a relatively high number of patients dropped out or were lost to follow-up. Sartipy et al. also present data at odds with ours, from the Swedish HF registry, demonstrating adverse outcomes in patients with HFrEF and AF. 26 However, hospitalized patients accounted for 64% of the patients recruited into the original Swedish registry, 27 suggesting a more unstable population than our ambulatory cohort.
[Figure 2. Long-term outcomes of patients with either heart failure with reduced (HFrEF) or preserved (HFpEF) left ventricular ejection fraction, by presence or absence of atrial fibrillation. Survival curves of death or hospitalization from progressive heart failure, showing the adverse effect of atrial fibrillation in those with (A) heart failure with preserved ejection fraction but not in those with (B) reduced ejection fraction.]

[Table 3. Absolute and adjusted hazard of hospitalization or death due to progressive heart failure in HFpEF and HFrEF due to atrial fibrillation.]

HFpEF and AF share similar risk factors and pathophysiological mechanisms, 28 and while our data do not identify the mechanisms underpinning this relationship, possibilities emerge from previous studies. One possibility is that HFpEF may be the result of a systemic disorder, which exerts a deleterious influence on the ventricle as well as on the atria. 29 A second possibility is that changes in left atrial geometry are central to the pathogenesis of atrial fibrillation-induced progressive heart failure in patients with HFpEF. 30 Atrial involvement in HFpEF is well recognized: disadvantageous remodelling of the left atrium and an excess of incident atrial fibrillation are consistently observed in patients with HFpEF. 22,30 Recent studies suggest that left atrial function and remodelling are independently associated with the onset of HF in the asymptomatic healthy population. 31 Sanchis et al. reported that up to 45% of patients presenting with new-onset symptoms to a dedicated HF clinic had left atrial dysfunction as the unique underlying mechanism of their HF symptoms, further supporting left atrial dysfunction as a potential driver of the HFpEF syndrome and a key pathogenic factor in its progression. 32 In addition to atrial geometry and function, reduced left ventricular filling due to the lack of 'atrial kick' associated with AF might be particularly important in patients with HFpEF, owing to the elevated filling pressures and impaired ventricular relaxation experienced by these patients. 33 The strong clinical and epidemiological affinity of AF and HFpEF supports a common mechanistic substrate for the two diseases: inflammatory and fibrotic biomarkers predict both AF and HFpEF, and metabolic disorders have been linked to growth and inflammatory effects of epicardial adipose tissue. 22 The results of the EMPEROR-Preserved trial 8 and the post-hoc analysis of the TOPCAT trial 34 raise the opportunity to learn how sodium-glucose cotransporter 2 (SGLT2) inhibitors and aldosterone antagonists could influence these common mechanisms and impact the deleterious AF/HFpEF relationship. To our knowledge, ours is the first study to identify a distinct and potentially treatable baseline clinical feature that is linked to a specific outcome in patients with HFpEF, thereby raising the intriguing possibility that electrical or pharmacological treatment of atrial fibrillation aiming for sinus rhythm in patients with HFpEF has the potential to slow disease progression.

[Table 4. Characteristics of patients presenting to secondary care, based on the European Society of Cardiology guidelines for the diagnosis of heart failure with preserved ejection fraction (HFpEF) and heart failure with reduced ejection fraction (HFrEF), with and without atrial fibrillation.]

Strengths and limitations of current study

This report has several strengths compared with earlier work in the field.
While our own work, 15,16 and that of others, [17][18][19][20] confirms that HFpEF per se has a more favourable prognosis than HFrEF, a strength of our report is the unselected nature of the cohort studied, resulting in a mean age of over 83 years for HFpEF patients attending the clinic from a large and diverse adult population, and hence being truly representative of patients now presenting on a day-to-day basis. A second strength is the comprehensive assessment of mode of death and hospitalization, providing a deeper understanding of the natural history of HFpEF. Some limitations need to be highlighted. We did not collect changes in medical therapy, changes in atrial rhythm status, or imaging data during the follow-up period, limiting our ability to relate any change in these characteristics to outcome data. Our study, being single-centre, may be of limited generalizability; however, the diverse characteristics of the area served by our centre, recently described by ourselves, 35 mitigate this potential weakness. We did not examine LV function invasively, so our categorization of HFpEF relied on non-invasive assessment of clinical status. The observational nature of the study, whilst opening new avenues for investigation, means our insights into mechanisms of disease aetiology are hypothesis-generating.

Conclusions

HFpEF is a growing healthcare problem associated with significant morbidity and mortality. The mechanisms underlying the development and complications of HFpEF are poorly understood. Our dataset demonstrates that patients with HFpEF have a lower risk of progressive HF than patients with HFrEF. The critical finding that atrial fibrillation may drive the progression of disease in patients with HFpEF provides a platform to develop and evaluate treatments targeting atrial fibrillation for the burgeoning group of patients suffering from HFpEF.

[Figure S1. Long-term outcomes of patients with either heart failure with reduced (HFrEF) or preserved (HFpEF) left ventricular ejection fraction, by tertiles of NT-proBNP. Survival curves of death or hospitalization from progressive heart failure for tertiles of NT-proBNP in those with (A) heart failure with preserved ejection fraction and (B) heart failure with reduced ejection fraction.]
2022-07-06T06:16:58.514Z
2022-07-04T00:00:00.000
{ "year": 2022, "sha1": "eac475590b6b57d3f504020d83f4658fe1aac8a8", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1002/ehf2.14004", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "3121ae6f57c28ec6a27571e755b3245275780ef2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
244782281
pes2o/s2orc
v3-fos-license
Intellectual Potential as a Factor in Achieving the Socio-Economic Purposes of the Territory Development

The globalization of the modern economy entails not only progressive changes and scientific and technological breakthroughs, but also the inevitability of crisis phenomena. In this regard, innovation activity and its basis, intellectual potential (i.e., a certain stock of achievements, knowledge, and experience), acquire special value. Each region chooses its own direction of development; this choice is shaped by individual factors, but the determining one is the intellectual potential of the territory, the degree of development of which determines whether the region can achieve innovative development that affects the production level of the entire country. The article studies the intellectual potential of a subject of the Russian Federation by determining the prevailing sectors of the economy and the needs of their development. A comprehensive analysis of an important social and economic indicator is presented, which serves as a theoretical and practical tool for choosing a strategy for developing the intellectual potential of a particular region. By comparing the resources of different regions, the need for a strictly rational choice of a particular industry was demonstrated. An important social and economic indicator was studied, which contributes to a rational choice of development path for the region, based on the intellectual potential of the territory and the intellectual capacity of employees.

INTRODUCTION

A correct understanding of the nature of a particular region has a fruitful impact on its development, since it is impossible to demand cultural and research activity from a predominantly innovative and industrial region. It is likewise problematic to develop the intellectual potential of a region in the field of, for example, digital technologies, while the territory mainly develops agriculture or tourism. It is important to take into account the resources and characteristics of the territory when developing its intellectual potential. Thus, an incorrect redistribution or financial investment can lead to low turnover in the relevant, functioning sphere of a particular region and, of course, to the ineffectual use of funds. Intellectual potential is a tool not only for socio-economic analysis, but also for the organizational management of financial and investment activities, determining the most profitable and effective areas of investment in a particular region. To establish the practical importance of this institution, its theoretical component is needed: the definition of the term, the nature of its origin, and the study of primary sources.

The impact of human intelligence on the development of the economy is difficult to overestimate. It is no coincidence that many researchers have paid great attention to this phenomenon in their works. These include J. Galbraith, who in the second half of the XX century formulated the concept of intellectual potential. In a letter to M.
Kalecki in 1969, the scientist associated this concept with "intellectual activity" [1]. The scientific literature considers two similar, but not identical, definitions: potential and capital. For further study, it is necessary to determine which activity is basic for each of them. "Intellectual capital" is interpreted as the totality of human abilities and the possibility of their optimal application, whereas "intellectual potential" refers to the use by an economic entity of its reserves and opportunities in the field of interest. T. Stewart in 1991 formulated the definition of intellectual capital as the sum of all the knowledge of a company's employees: "... patents, processes, management skills, technologies, experience and information about consumers and suppliers". Combined together, this knowledge increases the competitiveness and economic efficiency of the firm. In turn, V. M. Shepelev considers potential in the context of the probabilistic possibilities of a subject's activity in certain circumstances. He assigns a crucial role to the impact on the staff. According to his concept, in order to improve the efficiency of an enterprise, it is necessary not only to study and improve the level of knowledge and professional competencies of employees, but also to apply modern management strategies and create new conditions for the maximum disclosure of human capabilities.

The most important contributions to the study of this issue, in our opinion, were made by T. Stewart, who defined intellectual capital as the sum of all skills and knowledge, and V. M. Shepelev, whose works outlined the role of the subject in managing existing intellectual capabilities. Based on the content of the above works, it is possible to formulate a definition of "intellectual potential" as applied to a territory. Thus, "intellectual potential" can be understood as a set of opportunities, potentials, resources, and relevant conditions of a region; the concept also includes the organizational and analytical activities of an authorized entity for the rational and effective management of the opportunities available in a particular region. In our opinion, the content of the "intellectual potential of the territory" requires, as T. Stewart pointed out, "the most valuable asset", i.e., employees, subjects with certain knowledge and skills who realize the potential of the region. These are clearly numerous and interrelated components that create and multiply intellectual potential through their actions. It is also necessary to give the officially existing definition: O. V. Loseva, in her understanding and theoretical justification of the definition of "intellectual potential", confirms the validity and relevance of the definitions given above. In addition, she includes two interrelated components in the term. First, the resource potential, which represents the mental abilities of the staff, their accumulated professional experience, creativity, and unconventional thinking. Secondly, the potential of the conditions created in the region to maximize the potential of the staff.
MATERIALS AND METHODS

The materials for the study were economic statistics, scientific and other literature on personnel management, periodicals, data from statistical collections, as well as materials from official sources on the Internet. The quantitative data on the sources of formation of a region's intellectual potential, and the indicators that form the reserve for the development of the intellectual potential of the region (the Rostov region), were analyzed.

The analysis of the level and prospects for the development of the intellectual potential of the region, and the identification of promising industries and areas of intellectual potential development, was carried out using the following methods of scientific knowledge:

− the historical research method, used in the study of the prerequisites for the emergence of the definition of "intellectual potential";

− the statistical research method. This method, along with the synthesis method, allowed us to collect and summarize the information received about the state of the issue, i.e., about the prospects for the development of the intellectual potential of the region, and thus to understand the vectors of the region's development;

− the graphic method. This method was used most extensively in the work; it allowed us to formulate a methodology for assessing the innovative potential of the region and to show graphically an algorithm for identifying promising industries and areas of development of the intellectual potential of the Rostov region, conveying the information to the reader as effectively as possible.

The study was conducted on the example of the Rostov region. The data for the study were taken for 2020-2021.

The main hypothesis of the study: the development of the intellectual potential of the region should be based on the development of the economic sectors that represent the intellectual core of the region, which will significantly increase the effectiveness of actions to achieve the socio-economic challenges of the development strategy of the region and the country as a whole.

RESULTS

After determining the theoretical component of the studied economic and organizational institution, the definition of the intellectual potential of the region becomes a primary task, since the identification of the two above-mentioned components, first of all, allows for the rational use of forces and means, paying attention to the actually existing opportunities rather than to "imaginative representations of a particular consequence, when using a certain type of resource". Secondly, the definition of the above components allows one to quickly influence the inefficient or undeveloped potential of a particular region by identifying the problem area. A further important theoretical issue is the choice of a strategy for the development of the intellectual potential of the Rostov region, the preference for which depends on the development of certain enterprises that are of the greatest interest to the region. The following approaches to the assessment of the intellectual potential of the region, i.e. strategies, were identified:

1. The production-industry approach. Within the framework of this approach, the prospects and opportunities of the leading economic sectors of the region are analyzed and evaluated. For each industry, indicators are developed that determine its effectiveness and the value of its contribution to the gross regional product.
2. The statistical approach. Its disadvantages include insufficient consideration of the specific features of a particular region. Thus, given the above, we can conclude that the second concept is "incentive" in nature, since it is not aimed at the development of a specific industry that distinguishes a particular region, but involves the maintenance and implementation of common resources that are not individual in nature.

The Southern Federal District comprises numerous subjects, each of which has distinctive properties, but none of them contains city-forming enterprises. This implies the choice of a universal concept and the setting of appropriate tasks for enterprises. For example, Rostov-on-Don has a fairly developed cultural and educational environment, which explains its numerous schools and universities, among the latter the Southern Federal University, the Don State Technical University, the Rostov State University of Economics, and others.

For a more complete study of the state of the cultural and educational environment, we analyze the educational environment that contributes to the development of the intellectual potential of the region. In total, there are 67 universities, 47 youth scientific associations, 55 councils of young specialists, and 13 organizations of young scientists in the Rostov region. At the level of departments, faculties, and laboratories, there are more than 250 youth research teams. Since 2009, the Council of Young Scientists and Specialists has been operating under the Government of the Rostov region. But this does not imply competition with other regions that specialize exclusively in scientific and educational activities. Here the following subjects of the Russian Federation stand out: the Kaluga, Nizhny Novgorod, Tambov, Tver, Vladimir, Yaroslavl, and Moscow regions, which have more extensive potential in this area than the Rostov region [2]. Thus, the educational activities of the Southern Federal District and the Rostov region are aimed exclusively at maintaining the existing resources and providing the social infrastructure of the city, thereby complying with the established strategy and the corresponding tasks [3,4]. When answering the question of what determines the intellectual potential of a region, in our opinion, one should be guided by the initial strategy, since it determines the type of region and its potentials and resources, whether unique or widespread.

In connection with the entry into force of the Strategy of Scientific and Technological Development of the Russian Federation and the Program "Digital Economy of the Russian Federation", the regions and the country were tasked with entering a new innovative stage of development, including making digital transformations in all spheres of activity of the population of the regions and the country as a whole [5]. During the COVID-19 pandemic, the entire country was forced to switch to a remote format in educational and professional activities, which created the need to adapt digital technologies in a short time. In the current situation, the labor market has also changed, creating a need for personnel who have mastered digital competencies, which directly affects the vector of development of the intellectual potential of the Rostov region [2].
Continuing the analysis of the activities of Rostov-on-Don, it is necessary to determine the priority sectors that are developing on the basis of the existing potential, namely, soil resources [6]. It is agricultural activity that is the prerogative of the Rostov region: it was determined earlier that there is no "competition" in the field of educational and cultural activities with other regions, but in the field of agriculture the Rostov region occupies one of the leading positions, which is ensured by the resource needs of society and various economic enterprises, for example, supplying food products to markets and shops, agricultural fairs, etc. The cultivated grain crops are widely used by livestock enterprises, which are also common in the Rostov region. Thus, agriculture is the predominant "core" of the strategy, resulting in the development of related industries: animal husbandry, crop production, and gardening. The following indicators of the volume of investment in the Rostov region confirm the "special" attention directed at the development of the agricultural sector: annual investments in the agricultural sectors of the Rostov region amount to about 30 billion rubles. It is obvious that without state support this industry would not have received such a volume of investment. Thanks to grant support, 11 new farms were created. From the official website of the Government of the Rostov region, the following state of agricultural activity was determined: the share of the Rostov region in Russian agricultural production is 5%, and in that of the Southern Federal District, 30%; i.e., we assign the agricultural sector to the priority sectors of the economy [1]. In addition to agriculture, the following priority clusters are registered here [7]:

⎯ the innovative cluster "Don Dairy Products" for the production and processing of dairy products;

⎯ the road cluster;

⎯ the innovation and technology cluster "Southern Constellation";

⎯ the innovative territorial cluster of marine instrumentation "Marine Systems";

⎯ the information and communication technology cluster;

⎯ the national industrial cluster of agricultural engineering;

⎯ the Volgodonsk industrial cluster of nuclear engineering;

⎯ the territorial cluster "Don Valley".

DISCUSSION

The research made it possible to identify the intellectual core of the region, which consists of the region's clusters, large enterprises, and priority sectors of the economy. The listed clusters of the region form this intellectual core, so they are of interest for study [8]. Thus, the Rostov region needs to develop the intellectual potential of the agricultural, transport, digital, information and communication, educational, food, scientific-technical, and instrument-making spheres for the full development of the resources it contains. That is, it is important for educational institutions of secondary special and higher education to prepare graduates with competencies, knowledge, and skills in the listed priority areas of activity of the Rostov region. Although the studied content of the intellectual potential of the territory consists of two components, the structure of "intellectual potential" is much broader and consists of the following elements (Figure 1).
The elements of intellectual potential are as follows:

⎯ human potential. This element duplicates one of the components of intellectual potential, since the implementation of the region's resources requires professional skills and knowledge. Every region needs labor personnel, and in this regard the question of their quality characteristics arises. The question "What kind of people does the region need?" can be answered with certainty: it requires professionals in their field whose knowledge goes beyond what is widespread in textbooks; people who understand the "intellectual nature" of the region and the basic needs of its realization.

⎯ organizational potential. At the beginning of the study, we independently defined the term intellectual capital and put forward the assumption that among the components of the intellectual potential of a territory is an organizational entity that exercises general management of the activities of enterprises and embodies the resources of the region in its activities. After studying the relevant literature, this hypothesis was rejected, but within this structure the organizational potential is important. In our opinion, this element is not only a controlling one, but also a link between the resource capabilities of the region and its intellectual capabilities [9][10][11].

⎯ information potential. As S. E. Egorova and N. G. Kulakova point out, this is the relationship between conditions and resources, i.e., the use of accumulated knowledge and skills and their improvement, on the one hand, and the possibility of developing innovations, on the other [12].

⎯ competitive potential. As with economic enterprises, competition has a fruitful effect on the use of innovations, new technologies, etc. In our opinion, such a factor cannot be excluded in relation to the region. For example, to increase the indicators of agricultural development in one region relative to another, it is possible to use innovations in agricultural machinery, etc. The need for such an application arises in connection with the emergence of the competitiveness factor.

⎯ investment potential. The investment potential lies not only in the material financing of an industry or a combination of industries, but also determines the solvency of the region, which likewise reflects its economic condition. The material component of any activity has a special aspect, since full and timely financing allows one to plan activities; moreover, in some industries a successful organization is impossible without monetary support provided in the shortest possible time. In this regard, such a resource as the "provision of subsidies from the budget" has a social, economic, and regulatory nature [13].

[Figure 2. The model of statistical assessment of the intellectual potential of the region based on the identified priorities of its development.]

1) The social direction is implemented by performing the function of the state to provide material assistance in accordance with the Decree of the Government of the Russian Federation "About granting subsidies" No. 761 dated December 14, 2005, Federal Law 88-FL "About supporting small and medium-sized businesses" dated June 14, 1995, and Federal Law 209-FL "On the development of small and medium-sized businesses" dated July 24, 2007. The social function is traced in the determination and payment of subsidies.
2) The economic nature does not require a detailed explanation, since support of the enterprises that develop the main industries of a particular region supports not only the development of economic potential, but also the implementation of the internal resources of the region, in line with the chosen strategy for realizing the intellectual potential of the territory.

3) The regulatory nature is determined by the ability of a particular region to steer economic activity through the provision of subsidies and payments.

The resource component of the intellectual potential of the region is the aggregate potential of the organizations located on its territory [14]. Of particular value are organizations that have a high probability of forming scientific-technical or industrial clusters [15]. In our opinion, such an organization may be called a "core", since a certain enterprise concentrates around itself a wide network of others, usually of a contiguous character "with the centre", while the region provides organizational and financial assistance in building them. So, for example, in the Rostov region, whose main strategy is agriculture, the "intellectual core" will be "Rostselmash", which directly produces agricultural machinery, together with all the clusters present on its territory. Having identified the priority areas for the development of the region's intellectual potential, we form a model for the statistical assessment of the intellectual potential of the region (Figure 2).

According to the model shown in Figure 2, we propose indicators for assessing intellectual potential, with the help of which it will be possible to determine the level of the intellectual potential of the region in the future (Table 1).

In our opinion, the intellectual potential of the Rostov region is defined as a resource, since: 1) the main opportunities, potentials, and abilities of the Rostov region lie in its "fertile soils", as a result of which agriculture was identified as the main strategy for the development of intellectual potential; 2) the intellectual potential of the Rostov region cannot be defined as "intellectual" in the narrow sense, since agriculture is engaged in the production exclusively of resources that ensure the functioning of such areas as animal husbandry, horticulture, and crop production, which excludes an intellectual orientation; 3) the "intellectual core" of the Rostov region, identified through the conducted research, is "Rostselmash", engaged in the production of heavy agricultural machinery, and not a scientific centre, educational institution, etc.; the final argument is the presence in the Rostov region of numerous economic enterprises ("collective farms") engaged in the collection and processing of the "harvest", and not in scientific activities.

CONCLUSION

As a result of the study, a comprehensive analysis of an important social and economic indicator was carried out, which provides theoretical and practical tools for choosing a strategy for developing the intellectual potential of a particular region. By comparing the resources of different regions, the need for a strictly rational choice of a particular industry was demonstrated, since an erroneous choice leads to the depletion of the resource and financial capabilities of the region.
"Intellectual potential" as a scientific term, has undergone a long theoretical development by comparing scientific opinions.As a result, he acquired practical tools in the form of a strategy that provides for the presence or absence of unique opportunities in the region, so scientists have fixed the following types of strategies (approaches): "Production-industry "and "Statistical approach".The strategic choice is interdependent with the production tasks of the enterprises of the corresponding region, which was determined by comparing the Rostov region, which has no scientific and cultural potential, and the cities of different regions with an exclusively scientific and educational direction.Also, based on the analysis that allowed us to identify the vectors of intellectual potential development, it can be argued that the Rostov region needs to develop the intellectual potential of agricultural, transport, digital, information and communication, educational, food, scientific and technical and instrument-making spheres for the full development of the contained resources.That is, it is important for educational institutions of secondary special and higher education to prepare graduates with competencies, knowledge and skills in the listed priority areas of activity of the Rostov region. Within the framework of the conducted research, an important social and economic indicator was studied, which contributes to the rational choice of the development of the region, based on the intellectual potential of the territory and the intellectual capabilities of employees.Economic development is an important component of society, and it also implies the realization of constitutional human and civil rights.Thus, each region needs rational economic development, which provides intellectual potential. Figure 1 Figure 1 Elements of the methodology for assessing intellectual potential Table 1 .i 2 . Indicators for assessing the intellectual potential of the Rostov region.The sphere of intellectual potential development of the Rostov region Indicators for assessing the achieved level of intellectual potential of the region Calculation method 1. Innovation and entrepreneurship sphere Innovative products and services (Qi) in the total volume of goods and services shipped (Q) Income of the population from business activities (Рba) in the total amount of cash (Р) organizational, technological, or marketing innovations (Оi), to the total number of organizations surveyed for the entire period (О) Cultural and educational sphere Employees who have completed additional training, advanced training, internship, from among the economically active population Employee training costs related to the development and use of information and communication technologies (ICT) in the total amount of ICT costs
2021-12-02T16:33:06.492Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "d40cd940669781cf9c56e3b7ee1d11616b6b213b", "oa_license": "CCBYNC", "oa_url": "https://www.atlantis-press.com/article/125964591.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "af59c4424616ea0fc958dabc0513c1725769195e", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [] }
17032941
pes2o/s2orc
v3-fos-license
Odd Poisson Bracket in Hamilton's Dynamics

Some applications of the odd Poisson bracket to the description of classical and quantum dynamics are presented.

Introduction

Mathematicians (the first of them was Buttin 1 ) proved that in the phase superspace, apart from the usual even Poisson bracket, there also exists another bracket of the Poisson type, namely the odd Poisson bracket (OB), which has nontrivial Grassmann grading. In physics the OB first appeared as an adequate language for the quantization of gauge theories in the well-known Batalin-Vilkovisky scheme 2 . However, the dynamical role of the OB was not well understood until papers 3,4 , in which a possibility of reformulating Hamilton dynamics on the basis of the OB was proved for classical systems having an equal number of pairs of even and odd (relative to the Grassmann grading) phase coordinates. Earlier, the prescription 5 for the canonical quantization of the OB was suggested, and several odd-bracket quantum representations for the canonical variables were also obtained. In contrast with the even Poisson bracket case, some of the odd-bracket quantum representations turned out to be non-equivalent 6 . Recently the direct connection of the odd-bracket quantum representations for the canonical variables with the quantization of the classical Hamilton dynamics based on the OB has been established 7 . In this report I concentrate on the dynamical aspect of the OB, that is, on the description with the help of the OB of both the classical and quantum dynamics of systems in superspace.

The report is organized as follows. The main properties of the odd bracket are presented in Section 2. In Section 3 it is shown that Hamilton's equations of motion obtained by means of the even Poisson bracket with the help of an even Hamiltonian can be reproduced by the odd bracket using an equivalent odd Hamiltonian. The odd-bracket quantum representations for the canonical variables are described in Section 4. In Section 5 the problem of quantization of systems with the odd bracket is considered on the simplest example of the supersymmetric one-dimensional oscillator.

Properties of the odd Poisson bracket

First, we recall the necessary properties of the various graded Poisson brackets. The even and odd brackets in terms of the real even y_i = (q_a, p_a) and odd η_i = θ_α canonical variables have, respectively, the forms (1) and (2), where the left-pointing and right-pointing ∂ denote the right and left derivatives, and the notation ∂_x = ∂/∂x is introduced. By introducing, apart from the Grassmann grading g(A) of any quantity A, its corresponding bracket grading g_ε(A) = g(A) + ε (mod 2) (ε = 0, 1), the grading and symmetry properties, the Jacobi identities, and the Leibniz rule are uniformly expressed for both brackets (1), (2) as relations (3a)-(3d); relations (3a)-(3c) have the shape of the Lie superalgebra relations in their canonical form 8 , with g_ε(A) being the canonical grading for the corresponding bracket. In terms of arbitrary real dynamical variables x^M = (x^m, x^α) = x^M(y, η), with the same number of Grassmann-even x^m and odd x^α coordinates, the odd bracket (2) takes the form (4). The matrix ω̄_{MN}, inverse to ω̄^{MN} (5) and consisting of the coefficients of an odd closed 2-form, in view of the odd bracket properties (3a)-(3c), can be represented in the form of the graded strength (6), where g(M) = g(x^M) and ∂_M = ∂/∂x^M.
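The display equations (1) and (2) did not survive text extraction. For orientation, a plausible reconstruction in LaTeX is given below, assuming the standard conventions for the even (Poisson-Martin) and odd brackets used in this literature, with the odd bracket pairing equal numbers of even variables y_i and odd variables η_i; the signs and the factor of i are convention-dependent assumptions, not verbatim from the source.

```latex
% Conjectured forms of (1) and (2); signs and the factor of i are assumptions.
\{A,B\}_0 \;=\; A\!\left[\sum_a\!\left(
      \frac{\overleftarrow{\partial}}{\partial q_a}\frac{\overrightarrow{\partial}}{\partial p_a}
    - \frac{\overleftarrow{\partial}}{\partial p_a}\frac{\overrightarrow{\partial}}{\partial q_a}\right)
    + i\sum_\alpha
      \frac{\overleftarrow{\partial}}{\partial\theta_\alpha}
      \frac{\overrightarrow{\partial}}{\partial\theta_\alpha}\right]\! B, \tag{1}

\{A,B\}_1 \;=\; A\!\left[\sum_i\!\left(
      \frac{\overleftarrow{\partial}}{\partial y_i}\frac{\overrightarrow{\partial}}{\partial\eta_i}
    + \frac{\overleftarrow{\partial}}{\partial\eta_i}\frac{\overrightarrow{\partial}}{\partial y_i}\right)\right]\! B. \tag{2}
```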
The coefficients of the 1-form Ā(d) = dx^M Ā_M satisfy the conditions (7). As can be seen from (6), ω̄_{MN} is invariant under gauge transformations (8) with the functions χ̄ as parameters.

Classical dynamics in terms of the odd Poisson bracket

Let us consider a Hamilton system containing an equal number n of pairs of real canonical variables that are even and odd with respect to the Grassmann grading. We require that the equations of motion of the system be reproduced both in the even Poisson-Martin bracket (1) with the help of the even Hamiltonian H and in the odd bracket (2) with the Grassmann-odd Hamiltonian H̄, that is, relations (9) hold 3,4 , where t is the proper time. Using definitions (4) and (1) together with (5), (6), the relations (9) can be represented as the equations (10) for the unknowns H̄ and Ā_M, given H and the even matrix ω^{MN} corresponding to the even bracket (1). In order to solve Eqs. (10) it is convenient to use real coordinates x^M, canonical with respect to the even bracket (1), which contain among the canonically conjugate pairs the pair consisting of the proper time t and the Hamiltonian H. It follows from Eqs. (9) that the rest of the canonical quantities z^M are integrals of motion of the system under consideration: even I_1, ..., I_{2(n-1)} and odd Θ_1, ..., Θ_{2n}. In terms of these coordinates x^M, Eqs. (10) take the form (11).

The quantities Ā_M, χ̄ and H̄ can be expanded in powers of the Grassmann variables Θ_α as in (12a,b). The Θ_α coefficients are real Grassmann-even functions of the even variables x^m = (t, H, I_1, ..., I_{2(n-1)}) and are chosen to be antisymmetric in the indices contracted with Θ_α. In terms of these functions the gauge transformations (8) have the form (13), where the expansion in components with different index symmetries has been used for the tensor antisymmetric in all indices but the first. The additive character of the transformations for the functions B_{[αα_1...α_{2k}]} (k = 0, 1, ..., n-1) allows us to put them equal to zero in the expression (12b) for Ā_α by choosing χ_{αα_1...α_{2k}} = -B_{[αα_1...α_{2k}]}. This gauge choice amounts to the gauge condition Θ_α Ā_α = 0. Using this condition and Eqs. (11), we obtain the equality (14), which, being substituted back into Eqs. (11), leads to the simple equations (15). Thus, in consequence of (15), the solution of Eqs. (11) for Ā_M and H̄ in the chosen gauge is that the nonzero coefficients A_{mα_1...α_{2k-1}} and B_{(αα_j)α_1...α_{j-1}α_{j+1}...α_{2k}} in the expansions (12a,b) for Ā_M are arbitrary functions (denoted as a_{mα_1...α_{2k-1}} and b_{(αα_j)α_1...α_{j-1}α_{j+1}...α_{2k}}, respectively) of all the even variables H and I_1, ..., I_{2(n-1)} except the proper time t, and the odd Hamiltonian is expressed in terms of these functions with the help of Eq. (14). Using the gauge transformations (13) with arbitrary functions χ_{α_1...α_{2k-1}}(t, H, I), we obtain the general solution of Eqs. (11) in an arbitrary gauge (16).

Note that the solution of the analogous problem of finding the even brackets and the corresponding even Hamiltonians which lead to the same equations of motion has a similar structure, with the difference that the odd quantities Ā_M, χ̄ and H̄ have to be replaced by even ones. Thus, we have extended the notion of bi-Hamiltonian systems to the case when the Hamiltonian-bracket pairs giving the same equations of motion have opposite Grassmann grading.
Quantum representations of the odd Poisson bracket

The procedure of odd-bracket canonical quantization given in 5,6 consists in splitting all the canonical variables into two sets, in dividing all the functions of the canonical variables into classes, and in introducing the quantum multiplication *, which is either the ordinary product or the bracket composition, depending on which classes the co-factors belong to. One of the classes has to contain the normalized wave functions, and the result of the multiplication * of any quantity with a wave function Ψ must belong to the class containing Ψ. This procedure generalizes to the odd-bracket case the canonical quantization rules for the usual Poisson bracket {..., ...}_Pois., which, for example, in the coordinate representation for the canonical variables q and p is defined in the standard way, with Ψ(q) the normalized wave function depending on the coordinate q.

In 5,6 two non-equivalent odd-bracket quantum representations for the canonical variables were obtained by using two different ways of dividing the functions. But these ways do not exhaust all the possibilities; in 7 a more general way of division is proposed, which contains the ones given in 5,6 as limiting cases.

Let us build quantum representations for an arbitrary graded bracket under its canonical quantization. To this end, all canonical variables are split into two sets equal in number, so that neither of them contains pairs of canonical conjugates. Note that to make such a splitting possible for the even bracket (1), a transition has to be made from the real self-conjugate odd canonical variables to pairs of odd variables which are simultaneously complex and canonically conjugate to each other. Composing from the integer degrees of the variables of one set (we call it the first set) monomials of odd (2s + 1) and even (2s) uniformity degrees, and multiplying them by arbitrary functions of the variables of the other (second) set, we divide all functions of the canonical variables into classes designated as εO_s and εE_s, respectively. For instance, in the general case the odd-bracket canonical variables can be split so that the first set contains the even y_i (i = 1, ..., n ≤ N) and odd η_{n+α} (α = 1, ..., N - n) variables, while the second set involves the remaining variables 7 . The classes of functions obtained under this splitting have a form in which the factors before the arbitrary function f(η_i, y_{n+α}) are the monomials with the uniformity degrees indicated above. These classes satisfy the corresponding bracket relations, and the classes {εO_s, εE_s} form a superalgebra with respect to addition and the quantum multiplication *_ε (ε = 0, 1) defined for the corresponding bracket by (18). Note that the classes εO_0 and εE_0 form a subsuperalgebra.
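The defining formulas for the quantum multiplication (18) above and the quantum bracket (20) referenced in the next paragraph were lost in extraction. A hedged LaTeX reconstruction is given below: (18) follows the rule stated in the prose (ordinary product or bracket composition, depending on the class of the left co-factor), and (20) is the standard graded commutator built from the star product; the normalization constant ζ (and any factors of i or ℏ absorbed into it) is an assumption, not from the source.

```latex
% Conjectured reconstruction of (18) and (20); \zeta is an assumed normalization.
A *_\epsilon B \;=\;
\begin{cases}
  A\,B, & A \in {}^{\epsilon}E,\\[2pt]
  \zeta\,\{A,B\}_\epsilon, & A \in {}^{\epsilon}O,
\end{cases} \tag{18}

[A,B\}_\epsilon \;=\; A *_\epsilon B \;-\; (-1)^{q_\epsilon(A)\,q_\epsilon(B)}\, B *_\epsilon A. \tag{20}
```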
In terms of the quantum grading q_ε(A) of any quantity A, introduced for the appropriate bracket, the grading and symmetry properties of the quantum multiplication *_ε, arising from the corresponding properties (3a,b) of the bracket and the Grassmann composition of any two quantities A and B, are uniformly written as (19a-c). With the use of the quantum multiplication *_ε and the quantum grading q_ε, let us define for any two quantities A, B the quantum bracket ((anti)commutator) [A, B}_ε (understood as acting on the wave function Ψ, which is considered to belong to the class E) in the form (20) 5-7 . If A, B ∈ εE, then, due to (19c), the quantum bracket between them equals zero; in particular, the wave functions (anti)commute. If A, or both of the quantities A and B, belong to the class εO, then in the first case, due to the Leibniz rule (3d), and in the second one, because of the Jacobi identities (3c), a relation follows from the definitions (18) and (20) that establishes the connection between the classical and quantum brackets of the corresponding Grassmann parity. Note that the quantization procedure also admits the reduction to O_0 ∪ E_0.

The grading q_ε determines the symmetry properties of the quantum bracket (20). Under the above-mentioned splitting of the odd-bracket canonical variables into two sets, the grading q_1 equals unity for the variables y_i ∈ 1O, η_i ∈ 1E (i = 1, ..., n ≤ N) and zero for the remaining canonical variables y_{n+α} ∈ 1E, η_{n+α} ∈ 1O (α = 1, ..., N - n). Therefore, in this case the quantum odd bracket is represented by anticommutators between the quantities y_i, η_i and by commutators for the remaining relations of the canonical variables. If the roles of the first and second sets of canonical variables are interchanged, the quantum bracket is represented by anticommutators between y_{n+α}, η_{n+α} and by commutators in the other relations. In 5,6 the odd-bracket quantum representations were obtained for the cases n = 0, N, containing, respectively, only commutators or anticommutators.

Quantization of systems with the odd Poisson bracket

As the simplest example of the use of the odd-bracket quantum representations in the quantization of classical systems based on the odd bracket 7 , let us consider the one-dimensional supersymmetric oscillator, whose phase superspace x^A contains a pair of even (q, p) and a pair of odd (η_1, η_2) real canonical coordinates. In terms of the more suitable complex coordinates z = (p - iq)/√2, η = (η_1 - iη_2)/√2 and their complex conjugates z̄, η̄, the even bracket is written as (21), and the even Hamiltonian H, the supercharges Q_1, Q_2 and the fermionic charge F have the forms (22); in particular, H = z̄z + η̄η. The odd Hamiltonian H̄ and the appropriate odd bracket, which reproduce the same Hamilton equations of motion as those resulting from (21) with the even Hamiltonian H (22), i.e., which satisfy the condition (9), can be taken as H̄ = Q_1 and (23).

The complex variables have the advantage over the real ones that, with their use, the splitting of the canonical variables into the two sets z̄, η̄ and z, η simultaneously satisfies the requirements necessary for the quantization of both of the brackets (21), (23). Besides, any of the vector fields εX_{A_i} = -i{A_i, ...}_ε for the quantities {A_i} = (H, Q_1, Q_2, F), which describe the dynamics and the symmetry of the system under consideration, splits into the sum of two differential operators dependent on either z̄, η̄ or z, η.
For instance, from (21)-(23) we obtain the relations (24). The diagonalization does not take place in terms of the variables x^A = (q, p; η_1, η_2). In accordance with the above-mentioned splitting of the complex variables, we can perform one of the two possible divisions of all the functions into classes; these classes are common to both of the brackets (21), (23), play a crucial role in their canonical quantization, and lead to the same quantum dynamics for the system under consideration. If z̄, η̄ are attributed to the first set, then the corresponding function division is εO_s = (zη)^{2s+1} f(z, η); εE_s = (zη)^{2s} f(z, η). These representations can thus be used for the quantization of classical systems based on this bracket, and we should apparently expect that they are also applicable for the quantization of more complicated classical systems with the odd bracket.

Acknowledgements

The author is sincerely thankful to Professor Abdus Salam, the International Atomic Energy Agency and UNESCO for hospitality at the International Centre for Theoretical Physics, Trieste. He would also like to thank the Organizers of the School and Workshop on Variational and Local Methods in the Study of Hamiltonian Systems, in particular Profs. A. Ambrosetti, A. Bahri, G. F. Dell'Antonio and G. Vidossich, who kindly gave him the opportunity to present this report at the Workshop.
2014-10-01T00:00:00.000Z
1995-03-30T00:00:00.000
{ "year": 1995, "sha1": "4048587524c070a2d743af3cc35e57e6868afd88", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "32cd8c441f94b8716eec1b755d3b4d918e1e2fb0", "s2fieldsofstudy": [ "Mathematics", "Physics" ], "extfieldsofstudy": [ "Physics" ] }
8033527
pes2o/s2orc
v3-fos-license
Overexpression of Bmi-1 contributes to the invasion and metastasis of hepatocellular carcinoma by increasing the expression of matrix metalloproteinase (MMP)-2, MMP-9 and vascular endothelial growth factor via the PTEN/PI3K/Akt pathway

Hepatocellular carcinoma (HCC) is one of the most common malignant tumours and it carries a poor prognosis due to a high rate of recurrence or metastasis after surgery. Bmi-1 plays a significant role in the growth and metastasis of many solid tumours. However, the exact mechanisms underlying Bmi-1-mediated cell invasion and metastasis, especially in HCC, are not yet known. In the present study, we sought to evaluate the expression of Bmi-1 in HCC samples and its relationship with clinicopathological characteristics and prognostic value; we also investigated related mechanisms underlying Bmi-1-mediated cell invasion in HCC. Our results showed that Bmi-1 is upregulated in HCC tissues compared to matched non-cancer liver tissues, and its expression is positively associated with tumour size, metastasis, venous invasion and AJCC TNM stage, respectively; multivariate analysis showed that high expression of Bmi-1 was an independent prognostic factor for overall survival. In addition, shRNA-mediated inhibition of Bmi-1 reduced the invasiveness of two HCC cell lines in vitro by upregulating expression of phosphatase and tensin homolog deleted on chromosome 10 (PTEN), inhibiting the phosphatidylinositol 3-kinase (PI3K)/Akt signalling pathway, and downregulating the expression and activities of matrix metalloproteinase (MMP)-2 and MMP-9 and vascular endothelial growth factor (VEGF). These data demonstrate that Bmi-1 plays a vital role in HCC invasion and that Bmi-1 is a potential therapeutic target for HCC.

Introduction

Hepatocellular carcinoma (HCC) is one of the most common and lethal cancers in the world and is the second leading cause of cancer-related death in China (1). Despite remarkable progress in HCC diagnosis and treatment, the prognosis of patients with HCC remains very poor due to the high rate of intrahepatic and distant metastasis after resection or transplantation (1). The 5-year survival rate is limited to 25-39% after surgery, and systemic therapy with cytotoxic agents provides marginal benefit (2). Therefore, the discovery of molecules and/or signal transduction pathways essential to the carcinogenesis and malignant behaviour of HCC cells, especially their invasion and metastasis, is important for improving the prognosis of HCC patients.
B cell-specific Moloney murine leukaemia virus insertion site 1 (Bmi-1), a member of the Polycomb group (PcG) of proteins, which repress the transcription of their target genes via an epigenetic mechanism (3-5), was originally identified as an oncogene cooperating with c-Myc in a murine lymphomagenesis model (6). Subsequent studies identified the essential role of Bmi-1 in embryonic development and in the maintenance of self-renewal of both normal and malignant human mammary stem cells (7). Bmi-1 also regulates cellular processes including cell cycle progression, apoptosis and senescence, as well as immortalisation, by repressing the INK4A locus, which encodes two tumour repressor proteins, p16Ink4a and p19Arf (the mouse homologue of human p14ARF) (8), and by inducing telomerase activity (9). In addition, there is accumulating evidence that Bmi-1 is overexpressed in a variety of human malignant neoplasms, such as melanoma (10), breast cancer (11), bladder cancer (12), pancreatic cancer (13) and HCC (14-16). Furthermore, Bmi-1 is involved in tumour development and progression and is associated with a poor prognosis (17). For example, Bmi-1 expression is significantly correlated with nodal involvement, distant metastasis and clinical stage of colon and gastric cancers (18,19). Overexpression of Bmi-1 was associated with the invasion of nasopharyngeal carcinomas and predicted poor survival (20). Inhibition of Bmi-1 leads to decreased invasion of cervical cancer cells (21). Taken together, these data strongly indicate that Bmi-1 contributes to more aggressive behaviour of cancer cells, particularly with respect to invasion and metastasis. However, the exact mechanisms by which Bmi-1 mediates tumour cell invasion and metastasis, especially in HCC, remain largely unknown.

In the present study, we examined the expression profile of Bmi-1 in patients with HCC and compared Bmi-1 expression with clinicopathological parameters by immunohistochemical analysis. We also determined the survival and prognostic value of Bmi-1 expression for HCC patients by the Kaplan-Meier method and Cox proportional hazards model. Finally, we evaluated the effects of Bmi-1 depletion on the invasive behaviour of HCC cell lines in vitro and investigated potentially related mechanisms.

Materials and methods

Tissue specimens. Sixty-two HCCs and corresponding non-cancer liver tissues were obtained from patients of the Department of Hepatobiliary Surgery, Xijing Hospital of the Fourth Military Medical University (Xi'an, China), between March 2004 and September 2006. Informed consent for research use of the specimens was obtained in all cases, and all study protocols were approved by the Ethics Committee for Clinical Research of the Fourth Military Medical University. None of the patients received radiotherapy or chemotherapy before routine surgery. All of the specimens were fixed in 10% buffered formalin solution and embedded in paraffin, and consecutive 4-µm-thick sections were cut.

Immunohistochemistry.
Paraffin-embedded sections were deparaffinised with xylene, rehydrated and then immersed in 3% hydrogen peroxide solution for 10 min to inhibit endogenous peroxidase activity. For antigen retrieval, slides were boiled in 0.01 mol/l sodium citrate buffer (pH 7.0) for 10 min in a microwave oven. After being blocked with 1% bovine serum albumin (BSA), the sections were incubated with mouse monoclonal anti-Bmi-1 antibody (1:50, Abcam, Hong Kong, China) at 4˚C overnight. Following incubation with biotinylated secondary antibody, a streptavidin-biotin complex/horseradish peroxidase was applied. Finally, antibody binding was visualised with 3,3'-diaminobenzidine (DAB) and the sections were counterstained with hematoxylin. The primary antibody was replaced by PBS in negative controls. Two pathologists who were blinded to the clinical and histopathologic outcomes evaluated the results of the staining independently. Bmi-1 expression was scored for staining intensity and extent of involved tissue. The staining intensity was scored as 0 (no staining), 1 (weakly stained), 2 (moderately stained), or 3 (strongly stained). The extent of staining was scored as 0 (<5%), 1 (5-25%), 2 (26-50%), or 3 (>50%), according to the percentage of positively stained cells. The sum of the intensity and extent scores was used as the final staining score ranging from 0 to 9. We defined Bmi-1 expression according to the final scores as follows: 0-1, negative; 2-9, positive.

Cell culture. Three human hepatocellular carcinoma cell lines, HepG2, SMMC-7721 and MHCC97-H, and a normal hepatocyte cell line, HL-7702, were obtained from the American Type Culture Collection (Manassas, VA, USA) and were maintained in DMEM (Gibco, Gaithersburg, MD, USA) supplemented with 10% fetal bovine serum (Invitrogen, Carlsbad, CA, USA) at 37˚C in a humidified chamber with 95% air and 5% CO2.

Construction of lentiviral vectors and transfection. Lentivirus vectors for human Bmi-1 small hairpin RNA (shRNA) encoding a green fluorescent protein (GFP) and a puromycin resistance gene were constructed, packaged and purified by GeneChem Corp. (Shanghai, China). Bmi-1 shRNA was designed according to the human Bmi-1 mRNA sequence (GenBank accession no. NM_005180). The shRNA target sequence was 5'-CGGAAAGTAAACAAAGACAAA-3' and a negative control shRNA was provided by GeneChem. Cells were seeded in 24-well plates overnight before transfection for a target confluence of 30-50%. For transfection, the appropriate amounts of lentiviruses, determined according to the MOI value (number of lentiviruses : number of cells), were mixed with medium containing polybrene and added to the cells. After 24 h of transfection at 37˚C, the medium was replaced by fresh DMEM containing 10% FBS. Three days after transfection, cells were selected with 2 µg/ml puromycin for 3 days and harvested for subsequent studies.
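To make the scoring rule concrete, here is a minimal, hypothetical Python helper. The function name and input validation are our own, and combining the two subscores by simple addition follows the wording of the text; only the cutoff (final score 0-1 negative, otherwise positive) is stated explicitly.

```python
def bmi1_ihc_score(intensity, extent_pct):
    """Combine staining intensity (0-3) and the percentage of positively
    stained cells into a final Bmi-1 IHC score and a negative/positive call.
    Sketch of the rule described in the Methods; helper name is hypothetical."""
    if intensity not in (0, 1, 2, 3):
        raise ValueError("intensity must be 0, 1, 2 or 3")
    if extent_pct < 5:        # <5% positive cells
        extent = 0
    elif extent_pct <= 25:    # 5-25%
        extent = 1
    elif extent_pct <= 50:    # 26-50%
        extent = 2
    else:                     # >50%
        extent = 3
    score = intensity + extent
    return score, ("negative" if score <= 1 else "positive")

# Example: moderate staining (2) in 40% of tumour cells -> score 4, "positive".
print(bmi1_ihc_score(2, 40))
```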
RNA extraction and quantitative real-time PCR. Total RNA was extracted with TRIzol reagent (Invitrogen) according to the manufacturer's instructions. Total RNA (1 µg) was reverse-transcribed into cDNA using the PrimeScript RT reagent kit (Takara, Japan) in accordance with the manufacturer's instructions. Bmi-1 expression levels were quantified by real-time quantitative polymerase chain reaction (PCR). Bmi-1 mRNA levels were standardised to glyceraldehyde 3-phosphate dehydrogenase (GAPDH) as a reference housekeeping gene. The forward primer for Bmi-1 was 5'-GCTTCAAGATGGCCGCTTG-3'; the reverse primer was 5'-TTCTCGTTGTTCGATGCATTTC-3'. The forward primer for GAPDH was 5'-GCACCGTCAAGGCTGAGAAC-3'; the reverse primer was 5'-TGGTGAAGACGCCAGTGGA-3'. Quantitative real-time PCR was performed in a Bio-Rad iCycler iQ5 (Bio-Rad, Hercules, CA, USA) with SYBR Master Mix (Takara) according to the manufacturer's instructions. Each reaction was performed in a final volume of 20 µl containing 2.0 µl of appropriately diluted cDNA, 1.0 µl (10 µM) of forward and reverse primers specific for human Bmi-1 or GAPDH, 10 µl of SYBR Premix Ex Taq and 6.0 µl of water. The cycling conditions were as follows: a denaturation step at 95˚C for 3 min; 40 cycles of denaturation at 95˚C for 10 sec, specific annealing at 59˚C for 30 sec and elongation at 72˚C for 30 sec. At the end of the cycles, the temperature was raised to 95˚C for 1 min. The melting curve was obtained by first cooling the samples to 55˚C for 1 min, followed by 81 cycles (30 sec/cycle) in which the temperature was raised by 0.5˚C per cycle to a maximum of 95˚C.

Protein extraction and western blot analysis. Cells were lysed in ice-cold RIPA lysis buffer containing 50 mM Tris-HCl (pH 7.4), 1% Triton X-100, 5 mM EDTA, 1 mM leupeptin, 1 mM phenylmethylsulfonyl fluoride, 10 mM NaF and 1 mM Na3VO4, and then centrifuged at 20,000 × g for 30 min at 4˚C to remove debris. Protein concentrations were determined by a BCA assay (Pierce, Rockford, IL, USA). Equal amounts of cell lysate protein were subjected to SDS-polyacrylamide gel electrophoresis (PAGE) and transferred to polyvinylidene difluoride (PVDF) membranes. Membranes were blocked with 5% non-fat dry milk in Tris-buffered saline with Tween-20 for 1 h, then incubated overnight at 4˚C with specific primary antibodies. Primary antibodies against Bmi-1 were purchased from Abcam, and primary antibodies against MMP-2, MMP-9, VEGF, PTEN, Akt, p-Akt and GAPDH were purchased from Santa Cruz Biotechnology (Santa Cruz, CA, USA). The membranes were next incubated with horseradish peroxidase-conjugated secondary antibodies and then developed with an enhanced chemiluminescence detection system (Amersham Life Science, Piscataway, NJ, USA) according to the manufacturer's instructions.
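The quantification step is not spelled out beyond normalisation to GAPDH; a common choice for this kind of SYBR green assay is the 2^-ΔΔCt method, sketched below. The function name and the triplicate Ct values are assumptions for illustration only.

```python
import numpy as np

def fold_change_ddct(ct_target_kd, ct_ref_kd, ct_target_ctrl, ct_ref_ctrl):
    """Relative Bmi-1 expression by the 2^-ddCt method, normalised to the
    GAPDH reference gene; inputs are replicate Ct values."""
    dct_kd = np.mean(ct_target_kd) - np.mean(ct_ref_kd)        # dCt, knockdown
    dct_ctrl = np.mean(ct_target_ctrl) - np.mean(ct_ref_ctrl)  # dCt, control
    return 2.0 ** -(dct_kd - dct_ctrl)                         # fold change vs control

# Illustrative (made-up) triplicates: Bmi-1 and GAPDH Ct in knockdown and control.
fc = fold_change_ddct([28.1, 28.3, 28.0], [17.9, 18.0, 18.1],
                      [25.0, 25.2, 25.1], [18.0, 18.1, 17.9])
print(f"Bmi-1 expression after knockdown: {fc:.2f}x of control")
```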
Invasion assay in vitro. Transwell cell culture chambers (8-µm pore size; Millipore, Billerica, MA, USA) were used for in vitro invasion assays. The upper side of the filter was covered with Matrigel (Collaborative Research Inc., Boston, BD, USA) (1:3 dilution with serum-free DMEM) before the assays. Cells (5×10^5) were serum-starved for 24 h and then transferred in 350 µl serum-free DMEM to the upper chamber, and DMEM with 15% fetal bovine serum was added to the lower chamber as a chemoattractant. The cells were incubated under normoxic conditions for 24 h. Cells on the upper side of the filter were removed, and cells that remained adherent to the underside of the membrane were fixed in 4% formaldehyde and stained with 0.5% crystal violet for 10 min. For pharmacological inhibition assays with LY294002, cells were pre-treated for 2-4 h and the treatment was continued during the invasion experiment. Finally, the number of invasive cells was counted in ten contiguous fields of each sample and the average was determined.

ELISA assay. An enzyme-linked immunosorbent assay (ELISA) (Amersham, Buckinghamshire, UK) was used to quantify the individual activities of MMP-2, MMP-9 and VEGF. The samples were thawed on ice and all reagents were equilibrated to room temperature; assays were carried out according to the manufacturer's instructions.

Statistical analysis. The data are expressed as the means ± SD. Correlations between clinicopathological variables and Bmi-1 expression were analysed with Pearson's χ² tests. Survival curves were calculated using the Kaplan-Meier method and compared using the log-rank test. The Cox proportional hazards model was used to explore the value of clinicopathological factors and Bmi-1 expression on survival. Variance analysis between groups was performed by one-way ANOVA, and the significance of differences between control and treatment groups was tested using Dunnett's multiple comparisons test. All statistical analyses were performed using the SPSS software package (SPSS, Chicago, IL, USA). P<0.05 was considered statistically significant.

Results

Overexpression of Bmi-1 in HCC tissues. We evaluated 62 tissue specimens from HCC patients by immunohistochemistry for Bmi-1 expression. Consistent with previous reports (14), Bmi-1 protein was mainly observed in neoplastic epithelial cell nuclei. Positive staining for Bmi-1 protein was observed in 46.8% (29/62) of HCC tissues. By contrast, no staining or only weak staining was observed in normal liver tissues. Staining of representative samples is presented in Fig. 1.

Overexpression of Bmi-1 was associated with the progression of HCC. We compared Bmi-1 expression with the clinicopathological parameters of the 62 patients to investigate the clinical significance of Bmi-1 expression during hepatocyte carcinogenesis. As shown in Table I, there was no correlation between the expression of Bmi-1 and certain clinical features, such as age, gender, tumour location, histological grade, satellite lesions, tumour number and AFP level. However, Bmi-1 expression was strongly associated with tumour size, metastasis, venous invasion and AJCC TNM stage. This result indicated a correlation between Bmi-1 expression and HCC invasion and metastasis.
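The survival comparisons reported below follow the Kaplan-Meier, log-rank and Cox workflow named in the Statistical analysis section. A minimal sketch using the lifelines Python package (our stand-in for the SPSS analysis) could look like this; the column names and follow-up values are hypothetical.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up table: months of follow-up, death indicator,
# and Bmi-1 immunohistochemistry status (1 = positive).
df = pd.DataFrame({
    "months":   [6, 14, 60, 32, 58, 22, 60, 9, 47, 12],
    "dead":     [1, 1, 0, 1, 0, 1, 0, 1, 0, 1],
    "bmi1_pos": [1, 1, 0, 1, 0, 1, 0, 1, 0, 1],
})
pos, neg = df[df.bmi1_pos == 1], df[df.bmi1_pos == 0]

# Kaplan-Meier estimates per Bmi-1 group.
kmf_pos = KaplanMeierFitter().fit(pos.months, pos.dead, label="Bmi-1 positive")
kmf_neg = KaplanMeierFitter().fit(neg.months, neg.dead, label="Bmi-1 negative")

# Log-rank comparison of the two survival curves.
lr = logrank_test(pos.months, neg.months,
                  event_observed_A=pos.dead, event_observed_B=neg.dead)
print("log-rank P =", lr.p_value)

# Cox proportional hazards model (univariate here; adding the other
# clinicopathological covariates gives the multivariate model of Table II).
cph = CoxPHFitter().fit(df, duration_col="months", event_col="dead")
cph.print_summary()
```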
High Bmi-1 expression is associated with the adverse prognosis of HCC and is an independent prognostic factor. To evaluate the overall survival rate of HCC patients in relation to Bmi-1 expression, we carried out Kaplan-Meier survival analysis and the log-rank test. The result demonstrated that patients with positive Bmi-1 expression had a significantly shorter 5-year survival rate than patients with negative Bmi-1 expression (P<0.001, log-rank test; Fig. 2).

Table II. Univariate and multivariate analyses of overall survival for 62 HCC patients.

Bmi-1 expression in HCC cell lines and knockdown by shRNA. A shRNA vector that co-expresses GFP was generated for stable and efficient Bmi-1 reduction in HCC cells, and the transfection efficiency was assessed by fluorescence microscopy. Almost all HepG2 and MHCC97-H cells were successfully transduced with the lentivirus shRNA vector (Fig. 4A and B). These results confirmed that Bmi-1 shRNA was successfully introduced into the HepG2 and MHCC97-H cells. As shown in Fig. 4C-F, endogenous Bmi-1 mRNA and protein levels were significantly reduced in HepG2 and MHCC97-H cells transfected with Bmi-1 shRNA vectors.

Suppression of Bmi-1 repressed invasion of HCC cells in vitro. Because high Bmi-1 expression was positively associated with venous invasion (P=0.009) and metastasis (P=0.025), we further determined whether Bmi-1 was involved in the invasion and metastasis of HCC. To examine whether suppression of Bmi-1 in HCC cell lines affected their invasive properties, we conducted transwell invasion assays in vitro. The numbers of HepG2 and MHCC97-H cells transfected with Bmi-1 shRNA vectors invading through the filter were markedly lower than the numbers in the negative control and mock groups (Fig. 4G and H). Bmi-1 knockdown dramatically inhibited the invasiveness of HepG2 and MHCC97-H cells.

Suppression of Bmi-1 decreased the expression of MMP-2, MMP-9 and VEGF. Because Bmi-1 knockdown inhibited HCC cell invasion, we also investigated its effect on metastasis-related genes. MMP-2, MMP-9 and VEGF play important roles in cancer invasion and metastasis (23), including HCC (22). We determined the protein levels of these three genes by western blotting after transfection. As shown in Fig. 5A, transfection of HepG2 and MHCC97-H cells with Bmi-1 shRNA vectors reduced MMP-2, MMP-9 and VEGF protein levels. We confirmed the effect of Bmi-1 shRNA on MMP-2, MMP-9 and VEGF levels by ELISA. As shown in Fig. 5B-G, Bmi-1 knockdown in HCC cells significantly decreased MMP-2, MMP-9 and VEGF levels. These data indicate that the effects of Bmi-1 on invasion may be mediated by MMP-2, MMP-9 and VEGF.

Suppression of Bmi-1 increased PTEN expression and decreased p-Akt expression. One previous report indicated that Bmi-1 can downregulate the transcription of PTEN (24). Therefore, we investigated whether PTEN was upregulated in HCC cells with Bmi-1 knocked down. As shown in Fig. 5H, PTEN levels were increased in HCC cells with Bmi-1 knockdown compared to the mock and control groups. These results demonstrated that PTEN was upregulated by Bmi-1 silencing. PTEN is a tumour suppressor with phosphatase activity that can inhibit tumour metastasis via negative regulation of the PI3K/Akt pathway (25). Moreover, the PI3K/Akt signalling pathway is known to play a major role in signalling pathways responsible for the invasion and migration of various cancers (26). Furthermore, PTEN regulates the expression of MMPs and VEGF in HCC (27). Upregulation of Bmi-1 can activate the PI3K/Akt pathway (24). Therefore, we considered that Bmi-1 participates in the invasion and metastasis of HCC by activation of the PI3K/Akt pathway. To test this hypothesis, we examined the levels of phosphorylated Akt and total Akt. Western blot analyses showed less phosphorylated Akt in HCC cells with Bmi-1 knockdown compared to the negative control and mock groups, but no change in the total amount of Akt. This experiment demonstrated that knockdown of Bmi-1 inhibited the Akt pathway (Fig. 5H).
To further study whether Bmi-1 participates in the invasion and metastasis of HCC cells via the PI3K/Akt pathway, HepG2 and MHCC97-H cells were treated with the highly specific PI3K/Akt pathway inhibitor LY294002. LY294002 (10 µM) alone reduced HCC cell invasion. However, treatment with LY294002 in HCC cells with Bmi-1 knockdown did not further reduce the invasion ability compared to HCC cells treated with LY294002 alone or HCC cells with Bmi-1 knockdown alone (Fig. 5I and J). These results suggested that Bmi-1 may promote HCC cell invasion through activation of the PI3K/Akt pathway with subsequent regulation of MMP-2, MMP-9 and VEGF expression.

Discussion

HCC is the fifth most common malignancy in the world and the third most common cause of cancer-related death (28), and the high recurrence rate of intrahepatic and distant metastasis is a major obstacle to improving the survival of patients with HCC (1). Therefore, it is vital to clarify the mechanisms and identify key factors underlying invasion and metastasis in order to develop novel treatments and cures. In this study, we identified and functionally characterised Bmi-1 as an important player in HCC progression. Our study demonstrates that Bmi-1 is overexpressed in HCC tissue and cells and that its overexpression contributes to invasion and metastasis by increasing the expression of MMP-2, MMP-9 and VEGF via the PTEN/PI3K/Akt pathway.

Recently, many studies have revealed that Bmi-1 is upregulated in a variety of human malignancies and is involved in tumour invasion and metastasis. In breast cancer, overexpression of Bmi-1 is associated with lymph node involvement and distant metastasis (29). In addition, in colon cancer, Bmi-1 expression is significantly correlated with nodal involvement, distant metastasis and clinical stage (18). In this study, we examined Bmi-1 expression in HCC samples and corresponding non-cancer liver tissues. We found that Bmi-1 was significantly overexpressed in HCC tissues compared with matched normal liver tissues, which is consistent with previous reports (14,15). Of note, a previous study reported that Bmi-1 was also positively expressed in surrounding non-cancer liver tissue and cirrhotic liver but not in distant normal liver tissue (16), which suggested that Bmi-1 might play a role in the early stages of HCC. We determined that overexpression of Bmi-1 was strongly associated with tumour size, metastasis, venous invasion and AJCC TNM stage, while it was not correlated with other clinicopathological parameters, such as age, gender, tumour location, histological grade, satellite lesions, tumour number and AFP level. Our study suggests that Bmi-1 may participate in the late progression and aggressive biological behaviour of HCC. Our results were consistent with those of Sasaki et al (15), which indicated that the expression of Bmi-1 and EZH2 was heterogeneous and associated with vascular infiltration, histological grade and cell proliferative activity in HCC and HC-CC. However, in conflict with our findings were the reports of Effendi et al (14) and Wang et al (16), which indicated that Bmi-1 expression did not correlate with any clinicopathological parameters, including tumour size, histological differentiation, metastasis and recurrence. These differences across studies may be due to the tissue samples being obtained from HCC patients with different stages of disease, or may reflect population differences. Notably, the distribution of disease stages in these studies differed. Another explanation for the discrepancies might be the different
protocols used for immunohistochemistry, including antibody dilution, development time and the positivity criteria applied, especially the score used to discriminate positivity. For example, in the study of Wang et al (16), cytoplasmic staining of Bmi-1 was considered positive as well; however, in the other three studies, including ours, cells were considered positive for Bmi-1 only when nuclear staining was observed. To further understand the significance of Bmi-1 expression in HCC, multi-centre studies and additional samples are necessary.

Moreover, the Kaplan-Meier analysis showed that patients with positive Bmi-1 expression had significantly worse overall survival compared to patients with negative Bmi-1 expression, indicating that Bmi-1 protein may serve as a factor of poor prognosis for patients with HCC. The multivariate analysis found that Bmi-1 expression could be an indicator of worse patient outcome, independently of known clinical prognostic indicators such as TNM stage. These data suggest that high Bmi-1 expression is correlated with worse patient outcome and may serve as an independent prognostic factor for patients with HCC, similar to pancreatic cancer (13) and nasopharyngeal carcinoma (20).

An important finding of our study was that Bmi-1 was positively associated with metastasis and venous invasion of HCC. To further investigate the role of increased Bmi-1 expression in HCC invasion, we stably knocked down Bmi-1 expression in two HCC cell lines by transfection with lentiviral vectors expressing Bmi-1-targeting shRNA. The suppression of Bmi-1 expression significantly inhibited the invasion of HCC cells in vitro. In breast cancer and nasopharyngeal cancer, silencing endogenous Bmi-1 expression can reduce the motility and invasiveness of cancer cells (20,29). Mouse xenograft studies indicate that coexpression of Bmi-1 and H-Ras in breast cancer cells can induce an aggressive and metastatic phenotype with an unusual occurrence of brain metastasis (30). These findings indicate that Bmi-1 contributes to increased aggressive behaviour in cancer cells.

Tumour invasion and metastasis are complex, multistage processes by which cancer cells undergo genetic alterations that result in their acquisition of the ability to degrade and migrate through the extracellular matrix (ECM) (31). Of the several families of ECM-degrading enzymes, the most extensive are the matrix metalloproteinases (MMPs), a large family of structurally related zinc endopeptidases that collectively degrade all essential components of the ECM, including type IV collagen, laminin, proteoglycans and glycosaminoglycans (32). Among the previously reported human MMPs, MMP-2 and MMP-9 play the most important roles in tumour invasion and metastasis because of their specificity for degrading the basement membrane (23,33). Many studies indicate that MMP-2 and MMP-9 are correlated with an aggressive, invasive or metastatic tumour phenotype and participate in the invasion and metastasis of cancers, including HCC (34,35).

Another important molecule involved in tumour cell invasion and metastasis is vascular endothelial growth factor (VEGF). Angiogenesis is essential for carcinogenesis and for tumour growth and metastasis. The most potent tumour angiogenic factor, VEGF, can stimulate the proliferation of endothelial cells in many human cancers. VEGF expression is commonly upregulated in tumours and plays a key role in the invasion and migration of tumour cells (36), including HCC (22).
These results indicate that MMP-2, MMP-9 and VEGF play an important role in HCC cell invasion. Therefore, we hypothesised that these metastasis-related proteins were involved in Bmi-1-mediated invasion. To test this hypothesis, we investigated the expression and activities of MMP-2, MMP-9 and VEGF. Bmi-1 knockdown decreased the expression and activities of MMP-2, MMP-9 and VEGF. These results suggest that Bmi-1 knockdown inhibits HCC cell invasion by suppression of MMP-2, MMP-9 and VEGF. Meng et al demonstrated that knockdown of Bmi-1 inhibits lung adenocarcinoma cell migration and metastasis by diminishing VEGF secretion via the PTEN/PI3K/Akt signalling pathway (37), and Jiang et al showed that Bmi-1 promotes the aggressiveness of glioma by activating the NF-κB/MMP-9 signalling pathway (38). However, the potential mechanisms of interaction between Bmi-1, MMPs and VEGF in HCC invasion are poorly understood.

It is known that the PI3K/Akt signalling pathway is involved in many cellular processes including proliferation, apoptosis, cell cycle progression, cell motility, angiogenesis, invasion and metastasis (39). The PI3K/Akt signalling pathway also regulates the expression of MMPs and VEGF (26,27). In this study, Bmi-1 knockdown reduced phosphorylated Akt levels, accompanied by inhibition of the protein expression and activities of MMP-2, MMP-9 and VEGF. We further found that inhibition of the PI3K/Akt pathway with LY294002 in HCC cells with Bmi-1 shRNA did not further reduce the invasion ability of these cells. Thus, downregulation of Bmi-1 leads to inhibition of the PI3K/Akt pathway and its downstream targets (MMP-2, MMP-9 and VEGF) and ultimately reduces the invasion of HCC cells.

The tumour suppressor gene PTEN is one of the most commonly lost or mutated phosphatase genes in a variety of human cancers, including HCC (40). PTEN antagonises PI3K/Akt signalling, thereby negatively regulating aggressive tumour behaviour. One previous study showed that upregulation of Bmi-1 can activate the PI3K/Akt pathway by downregulating the transcription of PTEN via a direct association with the PTEN gene locus (24). We also found that Bmi-1 knockdown increased the expression of PTEN.

Taken together, Bmi-1 is upregulated in HCC tissues compared to adjacent normal liver tissues, and overexpression of Bmi-1 is associated with tumour size, metastasis, venous invasion and AJCC TNM stage. High Bmi-1 expression is associated with the adverse prognosis of HCC and is an independent prognostic factor for overall survival. Bmi-1 enhances the invasion of HCC cells in vitro by inhibiting the expression of PTEN, thereby activating the PI3K/Akt pathway and ultimately increasing the expression and activity of MMP-2, MMP-9 and VEGF. Therefore, inhibition of Bmi-1 could be useful as a therapeutic strategy to inhibit invasion and improve survival in HCC.

Figure 2. Overall survival curves of 62 HCC patients after curative hepatectomy assessed by Kaplan-Meier analysis according to Bmi-1 expression. Patients with positive Bmi-1 expression were significantly associated with shorter overall survival (P<0.001, log-rank test).

Figure 3. Bmi-1 expression in three HCC cell lines (HepG2, SMMC-7721 and MHCC97-H) and an immortal hepatocyte cell line (HL-7702). (A) Real-time PCR was performed to assess the mRNA levels of Bmi-1. (B) Western blot analyses were performed to assess the expression levels of Bmi-1.
Figure 4. Suppression of Bmi-1 repressed invasion of HCC cells in vitro. (A and B) Efficiency of lentivirus shRNA vector transfection. Left, fluorescence microscopy of transfected HCC cells (x200). Right, light microscopy of transfected HCC cells (x200). (C-F) Endogenous Bmi-1 mRNA and protein levels were significantly reduced in HepG2 and MHCC97-H cells with Bmi-1 shRNA vectors compared with the negative control shRNA vector-transfected cells and untransfected cells, examined by real-time PCR and western blotting, respectively. (G and H) Transwell invasion assay of HepG2 and MHCC97-H cells with knockdown of Bmi-1 vs the negative control shRNA vector-transfected cells and untransfected cells. MOCK, untransfected cells; NC, negative control shRNA vector-transfected cells; KD, Bmi-1 shRNA vector-transfected cells. The data represent the means ± SD. *P<0.05, compared to KD groups.

Figure 5. Suppression of Bmi-1 decreased the expression and activities of MMP-2, MMP-9 and VEGF via increased PTEN expression and decreased p-Akt expression. (A) Protein expression levels of MMP-2, MMP-9 and VEGF were examined by western blotting. (B-G) The proteolytic activity of MMP-2, MMP-9 and VEGF was measured by ELISA. MOCK, untransfected cells; NC, negative control shRNA vector-transfected cells; KD, Bmi-1 shRNA vector-transfected cells. *P<0.05 compared to KD groups. (H) Protein expression levels of PTEN, phosphorylated Akt (p-Akt) and total Akt (Akt) were examined by western blotting. (I and J) HepG2 and MHCC97-H cells were treated with Bmi-1 shRNA vectors and/or 10 µM LY294002 to evaluate invasion capacities. *P<0.05 compared to non-treated HCC cells. The data represent the means ± SD.
Recent trends in Pediatric Hematology Oncology fellowship match and the workforce impact

In the manuscript authored by Macy et al., global trends of the pediatric subspecialty workforce are described by reporting the numbers of applicants entering fellowship training from 2001 to 2018.¹ They conclude there has been an increase in the number of physicians entering pediatric subspecialty training. We would like to highlight discrepant changes in accrual into Pediatric Hematology Oncology (PHO) fellowship training compared to other subspecialties (Table 1). Among the larger fellowship programs, Macy et al. reported PHO to have a marked increase in the number of individuals entering fellowship in 2008–2018. However, in the recent PHO fellowship matches there has been a significant and consistent decrease in total applicants. The 2021 fellow match resulted in an 8% decrease in applications and left nearly half (44%) of PHO fellowship programs unfilled (Table 2). Across the country, PHO programs have been scrambling to fill these unoccupied positions. More broadly, fellowship program directors and division chiefs are trying to understand the reasons behind this decline (Table 2). Interestingly, the number of programs and positions has increased consistently during this time, with 72 PHO fellowship programs (representing 176 positions) participating in the 2021 Match. Other pediatric subspecialty programs did not demonstrate the same decline in fellowship applications (Table 1). Herein we try to offer some reasons that may have contributed to these findings.

Macy et al. highlight that one of the most common factors influencing resident career decisions is future job opportunity. We hypothesize that challenges in the PHO workforce may be contributing to the decrease in PHO fellow applications. The job market has been perceived by graduating fellows to be more competitive in the past several years. Fellows often report few job prospects upon graduation in "desirable" practices or geographical locations. Alternatively, they choose to continue training in a subspecialty fellowship to augment their research portfolio and increase clinical expertise in a particular niche, such as neuro-oncology, hemostasis, or stem cell transplant, with the hopes of "becoming more marketable" for highly sought-after academic positions.² We worry that this additional training, along with a perception of a paucity of faculty positions in PHO, is negatively influencing resident subspecialty decisions.
The PHO provider workforce is also changing, which may be impacting employment opportunities for fellow graduates. Hord et al. reported that from 2012 to 2015 PHO practices employed more advanced practice practitioners, who perform some of the roles of a PHO physician and may decrease the number of faculty positions in a given practice.³ Additionally, subspecialty compensation may also contribute to the decrease in PHO fellow applications. According to the Association of American Medical Colleges 2019-20 Faculty Salary Report, Pediatric Critical Care has the highest median salary at the assistant professor level of any pediatric subspecialty, approximately $250,000 per year. Fields with higher median salaries tend to have the most applicants and the highest percentages of filled fellowship spots compared to those with lower median compensation (Table 1). Pediatric residents choosing a subspecialty are likely incorporating future wage potential into their career decisions, which may contribute to increased interest in higher-paying pediatric specialties.

Recently, we have witnessed additional external factors impacting training programs. In 2018, restrictive US immigration and travel policies were implemented, which may have influenced international clinicians applying to and entering US fellowship programs. During this time period, we have seen a decline in the number of PHO international applications. The coronavirus disease 2019 pandemic has also had a devastating economic impact on academic medical centers and has resulted in an employment freeze in many markets, thus limiting PHO faculty positions for graduates. The pandemic likely has also contributed to the decrease in fellowship applications in 2020, as pediatric residents have deferred future career decisions to remain closer to home and family until quarantine and social distancing measures abate. We certainly hope that we see a rebound of pediatric residents with an interest in a career in PHO, but it is essential to be aware of the changing landscape of our specialty. Further investigation is warranted into how these factors are affecting the career decision making of pediatric residents, to better understand fellowship applications as well as the pediatric subspecialty workforce overall.

ADDITIONAL INFORMATION

Competing interests: S.C.B. serves on the advisory board for Ipsen Pharmaceuticals and Fennec Pharmaceuticals.

Consent statement: No consent was required for this commentary.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
A Note on the Calculation of Averages in Superconducting Cable-in-Conduit Conductors

We show that there are two different ways of calculating the average electric field of a superconducting cable-in-conduit conductor, depending on the relation between the current transfer length and the characteristic self-field length.

I. INTRODUCTION

Measuring the volt-ampere characteristic of superconducting cable-in-conduit conductors is one of the most important ways to completely characterize their DC behavior. In order to measure the voltage drop along the conductor, voltage taps are placed on the conduit (a Ti or stainless-steel jacket) at some distance apart in the high field region of the sample. This distance should be greater than the length of the twist pitch of the last cabling stage in order to comply with the requirement of complete transposition of strands. The superposition of the magnetic field produced by the transport current in the sample and the external background field B_0 results in a linear magnetic field gradient in the sample cross-section, as shown in Fig. 1. On a certain line the magnetic field will vary between a minimum B_min and a maximum value B_max. Each strand among the thousands of strands in the cable 'travels' on a complicated spiraled path through this field gradient and therefore has different locally defined critical currents. Two typical strand trajectories are shown in Fig. 1.

FIG. 1: Two typical strand trajectories in a given section of the cable with a field gradient as a result of the superposition of the uniform background field with the self-field of the cable. In this configuration the field gradient points in the Ox direction.

Assuming that all strands are charged uniformly with current by a perfect joint, as soon as the strands arrive in the high field region of the cable, the condition is met that some strands will happen to carry a current higher than the local critical current imposed by the local magnetic field in the cable at this position. Depending on the degree of transversal conductance between the strands, a more or less intensive current transfer process starts. If the transversal conductance is high and the length of the strand path is large, at steady state the current in each strand will be modulated, following more or less the magnetic field pattern seen by the strand on its path. The measured electric field is then an average of the electric fields generated by the strands in the cable. The average electric field can be calculated by integrating the electric field along the length of the strands. Assuming that the cable is ergodic [1], this average is equal to the ensemble average over the cable cross-section calculated using the strand geometrical probability distribution [2]. This geometric averaging and some subtleties involved in its calculation are the object of the present investigation.

II. TWO CHARACTERISTIC LENGTHS IN CABLE

Except for cables made of insulated strands, in real cables a certain amount of current transfer is always possible. Depending on the inter-strand transversal resistance, the current transfer length L_CT is defined as the length needed to balance an initial current inhomogeneity.
Extending (without proof) to a full-size cable the relation proposed earlier by Ekin [3] for the filaments in a strand, we can define a current-transfer length L_CT by

$$ L_{CT} = D_c \sqrt{\frac{\rho_t}{n\,\rho_c}} \qquad (1) $$

where D_c is the cable diameter, ρ_t is the transversal resistivity, a measure of the inter-strand contact resistance, and ρ_c is the resistivity criterion, ρ_c = E_c/J_c, defined with the help of the electric field criterion E_c and the critical current density J_c. n is the power-law index from the nonlinear current-voltage characteristic of the superconducting strands,

$$ E = E_c \left( \frac{J}{J_c} \right)^{n} \qquad (2) $$

The strand position in the cable cross-section P(x(z), y(z)) can be described with very good accuracy by the following equations:

$$ x_i(z) = \sum_{k=1}^{S} r_k \cos\left( \frac{2\pi z}{p_k} + \varphi_{ik} \right), \qquad y_i(z) = \sum_{k=1}^{S} r_k \sin\left( \frac{2\pi z}{p_k} + \varphi_{ik} \right) \qquad (3) $$

where z is the axial coordinate along the cable axis, r_k and p_k are the radius and the twist-pitch of the k-th stage, and S is the number of stages. The index i ∈ 1..N and the phases φ_ik are introduced to describe different strands with different initial positions. N is the number of strands in the cable. We adopt here the convention that the twist-pitch increases with the index k, the smallest being p_1 and the greatest p_last = p_S.

If we follow and record the position of one strand at sufficiently many slices at coordinates z_n = n p_S, an integer multiple of the last twist-pitch length, we will see that these recorded positions cover the cross-section of the cable almost uniformly. Moreover, if we look at all strand positions in one slice we will see a similar uniform distribution. From a statistical point of view we cannot distinguish between the two pictures. In this case we say that the cable is ergodic, and we can replace the average over the length of one strand by an average over all strands in one slice. Stated more simply, the average over the length L of any physical cable property X can be replaced by a cross-section average with a suitable probability distribution function w(x, y),

$$ \bar{X} = \frac{1}{L} \int_0^L X(s)\, ds = \int_A X(x,y)\, w(x,y)\, dx\, dy \qquad (4) $$

with s the coordinate along the cable length and X̄ the average of X.

The fact that the cable cross-section is circular and is covered by a magnetic field distribution in the form of a linear gradient, as shown in Fig. 1, forces us to calculate the probability distribution function of the strands having coordinate between x and x+dx but an arbitrary value of y. All these strands feel the same magnetic field. This new distribution function can be calculated with the relation

$$ w(x)\, dx = \frac{dN(x)}{N} \qquad (5) $$

where dN(x) is the number of strands having coordinate between x and x + dx, and N is the total number of strands in the cable. If the strands are uniformly distributed in the cable cross-section with a density n_0, dN(x) is proportional to the area of the stripe of width dx (Fig. 1), dN(x) = n_0 dA = 2 n_0 √(r_c² − x²) dx, and N to the total cross-section area of the cable, N = n_0 A = n_0 π r_c². After substitution in Eq. 5 one obtains

$$ w(x) = \frac{2}{\pi r_c^2} \sqrt{r_c^2 - x^2} \qquad (6) $$

If the twist-pitch length of the last stage, p_last, is long, the strands in the high field region of the cross-section field gradient will have a local critical current which is lower than the carried current, and the strands in the low field region of the gradient a critical current higher than their own current. This is illustrated in Fig. 2a. In this case the averages can be calculated with a different probability function, p(B), which can be obtained from the probability distribution w(x). Let us represent the magnetic field distribution in the cable as

$$ B(x) = B_0 + kx \qquad (7) $$

where B_0 is the background magnetic field and k a proportionality constant.
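As a quick numerical sanity check of the ergodicity argument, the short Python sketch below samples the trajectory of Eq. 3 at many axial slices and compares the histogram of x with the geometric distribution w(x) of Eq. 6. The stage radii, pitches and phases are illustrative assumptions (no cable geometry is specified in the text), and the agreement is only approximate, in line with the "almost uniformly" covering asserted above.

```python
import numpy as np

# Sample one strand's x-coordinate (Eq. 3) at many slices and compare its
# histogram with w(x) = 2*sqrt(rc^2 - x^2)/(pi*rc^2) (Eq. 6).
rng = np.random.default_rng(0)
r_k = np.array([0.0008, 0.0020, 0.0050, 0.0110])   # stage radii [m], assumed
p_k = np.array([0.025, 0.065, 0.160, 0.450])       # stage pitches [m], assumed
phi = rng.uniform(0.0, 2.0 * np.pi, r_k.size)      # random initial phases
rc = r_k.sum()                                     # outermost radius of the path

z = np.linspace(0.0, 5000.0 * p_k[-1], 500_000)    # many axial slices
x = (r_k[:, None] * np.cos(2.0 * np.pi * z / p_k[:, None] + phi[:, None])).sum(0)

hist, edges = np.histogram(x, bins=50, range=(-rc, rc), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
w = 2.0 * np.sqrt(np.clip(rc**2 - centers**2, 0.0, None)) / (np.pi * rc**2)

# Residual between the sampled marginal and the geometric law; it shrinks
# as more cabling stages are added, consistent with the ergodicity claim.
print("mean |histogram - w(x)| =", np.abs(hist - w).mean())
```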
As shown in Fig. 1, we now treat x as a random variable defined on the interval [−r_c, r_c] with the distribution w(x) given by Eq. 6. By virtue of Eq. 7, B(x) is also a random variable. Let us now calculate the probability distribution function of B(x), a function of the random variable x with probability distribution function w(x). We start with the standard definition of the probability distribution of a random variable which is a function of another random variable with known distribution,

$$ p(B) = \int_{-r_c}^{r_c} \delta\big(B - B(x)\big)\, w(x)\, dx \qquad (8) $$

The integral in Eq. 8 can be calculated with the help of the following known relation in the theory of the δ-distribution function,

$$ \delta\big(f(x)\big) = \sum_i \frac{\delta(x - x_i)}{|f'(x_i)|} \qquad (9) $$

where the sum runs over the roots x_i of f(x) = 0. One obtains

$$ p(B) = \frac{2}{\pi \Delta B^2} \sqrt{\Delta B^2 - (B - B_0)^2}, \qquad \Delta B = k r_c \qquad (10) $$

This function is the probability distribution of the field B ∈ [B_min, B_max]. If we perform a change of variable B − B_0 → B, the probability distribution function can be written as

$$ p(B) = \frac{2}{\pi \Delta B^2} \sqrt{\Delta B^2 - B^2} \qquad (11) $$

where now the variable B ∈ [−ΔB, ΔB]. Let us apply this relation to the calculation of the average critical current in a cable. From Eqs. 4 and 6 we have

$$ \bar{I}_c = \int_{-r_c}^{r_c} I_c\big(B(x)\big)\, w(x)\, dx = \int_{-\Delta B}^{\Delta B} I_c(B_0 + B)\, p(B)\, dB \qquad (12) $$

If the twist pitch is shorter than the current-transfer length, the current-carrying capacity of the strand will be determined by the critical current at the highest field in the cable cross-section. The strand that crosses the B_max position at x = r_c will keep its current I = I_c(B_max) all the way down from that section and will create a circular region of constant current A_n. The same is true for all other strands that pass through a region with a field intensity B_0 < B < B_max, thus creating circular regions of constant current A_{n−1}, A_{n−2}, ..., A_2, A_1, A_0, as shown in Fig. 2b. The linear field gradient in the cable cross-section, created by the overlapping of the self and background fields, is replaced by a circular current distribution. Applying Eq. 5 to this case and observing that dN(x) = n_0 2πx dx and N = n_0 π r_c², we obtain the distribution function for this case,

$$ w(x) = \frac{2x}{r_c^2} \qquad (13) $$

where x ∈ [0, r_c]. The probability distribution of the field can be calculated similarly to the calculation performed before for the long twist-pitch case. We have

$$ p(B) = \frac{2(B - B_0)}{k^2 r_c^2} \qquad (14) $$

with B in the range [B_0, B_max]. If we change the field variable B − B_0 → B and observe that k r_c = ΔB, we can write the probability distribution as

$$ p(B) = \frac{2B}{\Delta B^2} \qquad (15) $$

where B ∈ [0, ΔB]. The average critical current calculated with this distribution is

$$ \bar{I}_c = \frac{2}{\Delta B^2} \int_0^{\Delta B} I_c(B_0 + B)\, B\, dB \qquad (16) $$

and applies if the last twist pitch of the cable is shorter than the current-transfer length.
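To make the difference between the two regimes concrete, the sketch below evaluates Eqs. 12 and 16 numerically. The Kim-type I_c(B) model and all field values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np
from scipy.integrate import quad

# Average critical current under the long- (Eq. 12 with Eq. 11) and
# short-twist-pitch (Eq. 16) distributions. All numbers are assumptions.
B0, dB = 10.0, 1.5                         # background field, self-field amplitude [T]
Ic = lambda B: 300.0 * 5.0 / (5.0 + B)     # assumed Kim-type Ic(B) [A]

p_long = lambda b: 2.0 * np.sqrt(dB**2 - b**2) / (np.pi * dB**2)   # Eq. 11
p_short = lambda b: 2.0 * b / dB**2                                # Eq. 15

Ic_long, _ = quad(lambda b: Ic(B0 + b) * p_long(b), -dB, dB)
Ic_short, _ = quad(lambda b: Ic(B0 + b) * p_short(b), 0.0, dB)

print(f"<Ic> long twist pitch : {Ic_long:.1f} A")
print(f"<Ic> short twist pitch: {Ic_short:.1f} A")  # weighted toward the high-field side
```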
III. INCLUDING THE ANGULAR DISTRIBUTION IN THE AVERAGE PROCESS

The average calculated with the geometrical probability distribution function does not take into account the fact that not all strands cut the cable section at the same angle. A longitudinal cut through a cable reveals this problem. As shown in Fig. 3, the cut cross-section of strands close to the cable center is very large (elongated ellipses), indicating that these strands are almost parallel to the cable axis. Moving away from the cable axis in the radial direction, the cross-section area of the strand cuts decreases, reaching a minimum (almost circular) at the cable edge, where the strands are almost perpendicular to the cable axis. Therefore, besides the uniform distribution of strand positions in the cable cross-section, which leads to the geometrical probability distribution w(x) of Eq. 6, we also have a radial distribution of strand angles relative to the cable axis. Both distributions overlap, coexist and are interrelated. The cable phase-space is therefore not restricted to the set of points {x_i(z), y_i(z)}_{i∈N}. It should be extended to the set

$$ \{x_i(z),\, y_i(z),\, x'_i(z),\, y'_i(z)\}_{i \in N} \qquad (17) $$

with x′_i = dx_i(z)/dz and y′_i(z) = dy_i(z)/dz, where x′_i and y′_i are the tangents of the angles the strand makes with the Ox and Oy axes. In this section a new average formula will be developed which accounts for this extension.

Usually, one calculates the average electric field by considering a geometry as in Fig. 1, where the vertical stripe of width dx includes all strands sensing the same local magnetic field B(x). The strand cut-angle has a radial distribution, as illustrated in Fig. 3. The angular average is calculated using a set of concentric rings of width dr containing strands having the same cut-angle θ(r), as shown in Fig. 4. The variation of the cut-angle with the radius is encoded in the r-dependence of θ, a smooth convex function. In order to calculate the probability distribution function, we make a change of variables from the initial variables x, y to the variables x, r. The transformation is

$$ x \equiv x(r, x) = x; \qquad y \equiv y(r, x) = \sqrt{r^2 - x^2} \qquad (18) $$

The average electric field Ē is calculated with the basic relation

$$ \bar{E} = \frac{1}{A} \int_A E(x, y)\, dx\, dy \qquad (19) $$

where A is the cross-section area of the cable. If we use the Jacobian of the transformation (18),

$$ \frac{\partial(x, y)}{\partial(x, r)} = \frac{r}{\sqrt{r^2 - x^2}} \qquad (20) $$

so that

$$ dx\, dy = \frac{r}{\sqrt{r^2 - x^2}}\, dx\, dr \qquad (21) $$

we get

$$ \bar{E} = \frac{2}{\pi r_c^2} \int_{-r_c}^{r_c} dx \int_{|x|}^{r_c} dr\, E(x) \cos\theta(r)\, \frac{r}{\sqrt{r^2 - x^2}} \qquad (22) $$

where we have taken into account that only the normal component of the current density, J cos(θ), contributes to the axial electric field (the factor 2 accounts for the two halves, y > 0 and y < 0, of the cross-section). Separating the variables one finally obtains

$$ \bar{E} = \int_{-r_c}^{r_c} E(x) \left[ \frac{2}{\pi r_c^2} \int_{|x|}^{r_c} \cos\theta(r)\, \frac{r\, dr}{\sqrt{r^2 - x^2}} \right] dx \qquad (23) $$

This formula has a very simple structure. The average electric field is the "sum" of the local electric fields in the cable cross-section stripes of constant field, times a weighting factor w(x) which is contained in the second integral (the r-integral). The limiting case is when all strands are parallel to the cable axis. Then θ(r) = 0 for all r ∈ [0, r_c] and we get

$$ w(x) = \frac{2}{\pi r_c^2} \sqrt{r_c^2 - x^2} \qquad (24) $$

which is the well-known geometric probability distribution function [2] for a simply connected cable with circular cross-section. In other words, we can keep the standard formula for the average electric field but with a modified weight or probability distribution function

$$ \tilde{w}(x) = \frac{2}{\pi r_c^2} \int_{|x|}^{r_c} \cos\theta(r)\, \frac{r\, dr}{\sqrt{r^2 - x^2}} \qquad (25) $$

Unfortunately, the integral in Eq. 25 has no analytical solution in terms of simple functions. It is remarkable that the angular distribution keeps track of the power-law index n. In [4] it was inferred, based on unpublished numerical simulations, that θ(r) is a monotonically increasing function of r/r_c whose shape depends on n, with θ_0 ∼ 43°. It can be seen that this dependence satisfies the condition that the strand angle is small close to the cable axis and large at the cable edge.
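The weight of Eq. 25 is straightforward to evaluate by quadrature. Since the precise θ(r) profile is not reproduced here, the sketch below assumes a simple linear ramp from 0 on the axis to θ_0 = 43° at the edge, purely for illustration; the cable radius and the profile are made-up inputs.

```python
import numpy as np
from scipy.integrate import quad

# Quadrature of the modified weight (Eq. 25) against the geometric weight
# (Eq. 6 / Eq. 24). The theta(r) ramp below is an assumed stand-in profile.
rc = 0.02                                   # cable radius [m], assumed
theta0 = np.deg2rad(43.0)                   # edge cut-angle, ~43 deg
theta = lambda r: theta0 * r / rc           # assumed linear profile

def w_mod(x):
    # Integrable 1/sqrt singularity at r = |x| is handled by adaptive quad.
    f = lambda r: np.cos(theta(r)) * r / np.sqrt(r**2 - x**2)
    val, _ = quad(f, abs(x), rc)
    return 2.0 * val / (np.pi * rc**2)

w_geo = lambda x: 2.0 * np.sqrt(rc**2 - x**2) / (np.pi * rc**2)

for x in (0.0, 0.5 * rc, 0.9 * rc):
    print(f"x/rc = {x/rc:.1f}:  w_mod = {w_mod(x):.2f},  w_geo = {w_geo(x):.2f}")
```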
IV. CONCLUSIONS

The analysis presented in this paper shows that there are two limiting cases concerning current transfer in cable-in-conduit conductors, and these influence the averaging procedure that must be performed when calculating different cable properties. Let us follow the strand position as it meanders in the cable cross-section, starting from a position at or very close to the B_min position. The initial current in the strand is I ≤ I_c(B_max). As the strand moves into regions with higher field B ≥ B_min, where I ≥ I_c(B), some current transfer ought to take place to neighboring strands. If the transversal inter-strand resistance is negligibly small, the excess current δI = I − I_c(B) is easily transferred, and the current in the strand follows nearly the variation of I_c(B(x)) in the cable cross-section. The consequence is that the cable cross-section is divided into vertical stripes where the field and the current in the strands are constant, and the average is calculated with the distribution function of Eq. 6.

If the transversal resistance is very large, the surplus current cannot be transferred, and strands with an excess current will penetrate into the high field region of the cable. In this case the cable cross-section is divided into circular regions where the local magnetic field varies linearly between B_0 at the center and B_max at the cable boundary. The average is calculated with a different distribution function, Eq. 15. The boundary value of the transversal resistivity, ρ*_t, between the two regimes can be set by comparing the current-transfer length, Eq. 1, with the length of the last twist pitch of the cable, p_S,

$$ L_{CT} = D_c \sqrt{\frac{\rho_t^*}{n\,\rho_c}} = p_S \qquad (27) $$

Solving for ρ*_t one gets

$$ \rho_t^* = n\, \rho_c\, \frac{p_S^2}{D_c^2} \qquad (28) $$

As can be seen, if the inter-strand transversal resistivity ρ_t ≤ ρ*_t, the current redistribution is very effective. The limit value is proportional to n, the power-law index of the strands, increases with the square of the last twist-pitch length, and is inversely proportional to the square of the cable diameter. Cables with large n, long last-stage twist-pitch length p_S and small diameter D_c have large ρ*_t values and a better tolerance to higher transverse inter-strand contact resistance. In both cases, formulas for the calculation of the probability distribution function of the magnetic field have also been presented.

A second issue treated in this paper is connected with the fact that the strands in a cable-in-conduit conductor do not cut the cable cross-section at right angles, nor at any other constant angle. The cut angle is rather distributed, increasing monotonically from a small value for strands near the center of the cable to almost 90° at the cable border. The strand is therefore characterized not only by the position coordinates x_i(z) and y_i(z) in the cable cross-section but also by the angle it makes with a given transversal cut through the cable. The cable phase-space must be extended to the set {x_i(z), y_i(z), x′_i(z), y′_i(z)}_{i∈N} with x′_i = dx_i(z)/dz and y′_i(z) = dy_i(z)/dz. The new distribution function taking this effect into account is given in Eq. 25.

The statistical mechanical approach sketched in this paper, based on the concept of viewing the strands as particles in motion, could be very useful for the complete understanding of the complicated thermal and electromagnetic properties of cable-in-conduit conductors with twisted strands and a multi-stage structure. We used here only one concept borrowed from statistical mechanics, the concept of ergodicity, which allows one to replace the average over the length by an average over the cable cross-section.
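As a closing numerical illustration of Eq. 28, the following sketch plugs in order-of-magnitude values; every number is an assumption chosen for illustration, not data from the paper.

```python
# Order-of-magnitude illustration of the boundary resistivity, Eq. 28.
n = 20          # power-law index (assumed)
E_c = 1e-5      # electric field criterion, 10 uV/m (assumed)
J_c = 1e9       # critical current density, A/m^2 (assumed)
p_S = 0.45      # last-stage twist pitch, m (assumed)
D_c = 0.04      # cable diameter, m (assumed)

rho_c = E_c / J_c                          # resistivity criterion, rho_c = E_c / J_c
rho_t_star = n * rho_c * p_S**2 / D_c**2   # Eq. 28

# For rho_t <= rho_t_star the current-transfer length (Eq. 1) is shorter than
# the last twist pitch: redistribution is effective and the Eq. 6 averaging
# applies; for larger rho_t the short-twist-pitch averaging (Eq. 15) applies.
print(f"rho_c = {rho_c:.1e} ohm*m,  rho_t* = {rho_t_star:.1e} ohm*m")
```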
The diagnostic and prognostic value of nuclear matrix protein 22 in bladder cancer

Background: This study aimed to evaluate the diagnostic and prognostic value of urine nuclear matrix protein 22 (NMP22) for bladder cancer.

Methods: A retrospective analysis was performed on 229 patients with bladder cancer who underwent transurethral resection of bladder tumor (TURBT) between 2015 and 2018 in our hospital. The sensitivity of NMP22 was calculated to evaluate its diagnostic value. Logistic regression analyses were applied to investigate the prognostic value of NMP22 for pathologic features in bladder cancer.

Results: The sensitivity of NMP22 for the detection of bladder cancer was 28.82%, and the false negative rate was 71.18%. The sensitivity of NMP22 for the detection of low-grade and high-grade disease was 11.11% and 38.51%, respectively. NMP22 had significantly higher sensitivity for the detection of high-grade bladder cancer (P<0.001). The sensitivity of NMP22 for the detection of Ta, T1 and T2 disease was 20.78%, 50.85% and 25.00%, respectively (Ta, noninvasive papillary carcinoma; T1, tumor invades subepithelial connective tissue; T2, tumor invades muscularis propria). NMP22 had significantly higher sensitivity for the detection of T1 disease (P<0.001). Univariate and multivariate logistic regression analyses suggested NMP22 might be an independent prognostic factor for high-grade (P<0.001) and T1 disease (P<0.001) in patients with bladder cancer.

Conclusions: Although the sensitivity of NMP22 was significantly higher in the detection of T1 and high-grade bladder cancer, the false negative rate was high. Besides, NMP22 might be a prognostic factor for high-grade and T1 bladder cancer, but considering the limitations of this study, further studies are needed.

Surgery remains one of the most important treatments for resectable disease. For unresectable bladder cancer, chemotherapy and radiotherapy are classical treatments, and the emergence of immunotherapy and targeted therapy in recent years provides more treatment options for bladder cancer that cannot be controlled by radiotherapy and chemotherapy (2,3). Considering that the prognosis of unresectable bladder cancer is significantly worse than that of resectable bladder cancer (3), the early diagnosis of bladder cancer is an important topic in clinical practice.

In the early diagnosis of bladder cancer, in addition to traditional cystoscopy and urine cytology, molecular biomarkers are used more and more because they are noninvasive and easy to implement. The molecular biomarkers of bladder cancer, that is, the components with diagnostic value in the urine of patients with bladder cancer, include exfoliated tumor cells, proteins, genes, and tumor metabolites. The detection of these biomarkers can provide valuable information for the diagnosis and follow-up of bladder cancer. At present, there are many biomarkers for the molecular diagnosis of bladder cancer. Some detect specific proteins in urine, such as the bladder tumor antigen (BTA) test, the nuclear matrix protein 22 (NMP22) test, and the Cytokeratin 8 and 18 fragments test; some detect DNA in urine, such as the AssureMDx test; some detect mRNA in urine, such as the Xpert Bladder Cancer test and the CxBladder Detect test; some detect tumor-associated cellular antigens or chromosomal aneuploidy in urine sediment, such as the ImmunoCyt test and the UroVysion test (4). Among these examinations, NMP22 is one of the most widely used in clinical practice.

Nuclear matrix proteins (NMPs) were isolated by Berezney in 1974 (5).
As a protein fraction accounting for about 10% of all nuclear proteins, the NMPs build the nuclear matrix together with the peripheral lamins and pore complexes, and play an important role in DNA replication and transcription (6). In 1996, a urinary protein named NMP22 was isolated by Keesee (7). Later it was reported that in bladder cancer the malignant transitional cells contained up to 80 times higher concentrations of NMP22 than normal transitional cells (8). In non-muscle invasive bladder cancer (NMIBC), NMP22 was positive in 71.8% of cases, while cytology was positive in 42.8% of cases for comparison (8). The NMP22 assay has gained United States Food and Drug Administration (FDA) approval as an aid in the initial diagnosis of bladder cancer, and has been applied in clinical practice for years.

With the increasingly popular use of NMP22 in clinical practice, the sensitivity and specificity of NMP22 have been verified by several studies within the past few years (8-10); however, the prognostic value of NMP22 in bladder cancer had not been investigated yet. Considering this, we conducted this study evaluating the association between urine NMP22 and the pathologic features in patients with bladder cancer, to find out whether NMP22 could be a prognostic factor in bladder cancer; meanwhile, the diagnostic value of NMP22 for bladder cancer was studied as well. We present the following article in accordance with the REMARK reporting checklist (available at http://dx.doi.org/10.21037/tcr-20-1824).

Patients

A retrospective analysis was conducted to identify patients with bladder cancer at Peking University Cancer Hospital between the years 2015 and 2018. These patients were enrolled: (I) adult patients aged 18 years or older, male or female; (II) diagnosed with bladder cancer and not previously treated with any kind of surgery; (III) underwent TURBT; (IV) bladder urothelial carcinoma confirmed by pathology report with complete grading and staging information; (V) with a urine NMP22 assay report. The following patients were excluded: (I) complicated with other urogenital diseases, including acute or chronic inflammation of the urinary system; (II) combined with tumors at other sites; (III) complete pathology reports not available. The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013). This study was approved by the Ethics Committee of Peking University Cancer Hospital (approval No. 150204) and informed consent was obtained from all the patients.

Assay methods

The Alere NMP22 BladderChek Test kit (Scarborough, Maine, USA) was used to qualitatively detect urine NMP22. This test uses a lateral flow immunochromatographic strip encased in a plastic cartridge to detect NMP22 qualitatively in the patient's urine sample. The antibodies in the lateral flow immunochromatographic strip are monoclonal antibodies (MAbs) raised against nuclear mitotic apparatus protein (NuMA) extracted from a cervical cancer cell line by the method of Fey and Penman (11). Two different MAbs are used, one as a capture antibody and one as a reporter antibody. All samples were processed according to the instructions provided by the manufacturer. The voided urine samples were collected in plastic urine containers and tested within 2 hours at room temperature. According to the written instructions and previous studies (10), a urine NMP22 level of 10 units/mL or above was considered positive.
Study design

Clinical characteristics data collected included the patient's medical record number, age, gender, and NMP22 assay result. The pathologic features included histological tumor grade and pathologic tumor stage (T stage). A single voided urine sample was collected prior to TURBT in all patients. The pathologic staging and histological grading of the bladder cancer were based on the American Joint Committee TNM Staging System for Bladder Cancer (7th edition, 2010) (12). Ta refers to noninvasive papillary carcinoma, T1 refers to tumor invading the subepithelial connective tissue, and T2 refers to tumor invading the muscularis propria. All these data were obtained through a review of our hospital's electronic medical records.

Statistical analysis methods

Descriptive statistics were used to summarize patients' characteristics. Categorical variables are presented as numbers and percentages; continuous variables are presented as median and interquartile range (IQR). Patients were divided into two groups by the NMP22 result: an NMP22-negative group and an NMP22-positive group. The t-test was used to compare continuous variables between these two groups, and the chi-square test and crosstabs test were used to compare categorical variables. Patients were then divided into two groups by pathologic grade. The t-test was used to compare continuous variables between patients with low-grade bladder cancer and those with high-grade disease, and the chi-square test and crosstabs test were used to compare categorical variables between these groups. Later, all the patients were divided into three groups by pathologic T stage (Ta, T1 and T2 groups). One-way ANOVA was used to compare continuous variables between these three groups, and the chi-square test and crosstabs test were used to compare categorical variables. Logistic regression analysis was performed to determine the potential prognostic factors for pathologic outcomes in patients with bladder cancer using univariate and multivariate analyses. Statistical analysis was performed using Stata 14 for Windows (StataCorp, College Station, TX, USA). All tests were two-sided and a value of P<0.05 was considered statistically significant.

Data

Patients' clinicopathologic characteristics are shown in Table 1.

Analysis and presentation

The sensitivity of NMP22 is shown in Table 2 and Figure 1. The sensitivity of NMP22 for the detection of bladder cancer was 28.82%, and the false negative rate was 71.18%. No significant difference could be detected regarding age between NMP22-positive patients and NMP22-negative patients (P=0.095). There was no significant difference in the distribution of gender between these two groups (P=0.618). The sensitivity of NMP22 for the detection of low-grade disease and high-grade disease was 11.11% and 38.51%, respectively. NMP22 had significantly higher sensitivity for the detection of high-grade bladder cancer (P<0.001). The sensitivity of NMP22 for the detection of Ta, T1 and T2 disease was 20.78%, 50.85% and 25.00%, respectively. NMP22 had significantly higher sensitivity for the detection of T1 disease (P<0.001). Univariate and multivariate logistic regression analyses for prognostic factors associated with pathologic tumor grade are presented in Table 3.
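A minimal sketch of the computations behind these figures: the sensitivity call below uses the overall counts reported above (66 NMP22-positive of 229 confirmed cancers), while the logistic model runs on a hypothetical toy table with assumed column names, standing in for the Stata analysis.

```python
import pandas as pd
import statsmodels.api as sm

def sensitivity(tp, fn):
    """Sensitivity = TP / (TP + FN). Every patient here has biopsy-proven
    cancer, so an NMP22-negative result counts as a false negative."""
    return tp / (tp + fn)

# Overall figures reported above: 66 NMP22-positive of 229 confirmed cancers.
print(f"overall sensitivity = {sensitivity(66, 229 - 66):.2%}")   # 28.82%

# Hypothetical per-patient table for the logistic regression (toy values).
df = pd.DataFrame({
    "high_grade": [1, 1, 0, 1, 0, 0, 1, 1],
    "nmp22_pos":  [1, 0, 1, 1, 0, 0, 1, 0],
    "male":       [1, 1, 0, 1, 1, 0, 0, 1],
})
X = sm.add_constant(df[["nmp22_pos", "male"]])
fit = sm.Logit(df["high_grade"], X).fit(disp=0)
print(fit.params)   # log-odds; exponentiate for odds ratios
```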
Univariate and multivariate logistic regression analyses for prognostic factors associated with pathologic tumor stage are presented in Table 4. Gender (male) and NMP22 (positive) (CI: 1.69-5.74) were statistically significantly associated with pathologic stage, suggesting that gender (male) and NMP22 (positive) might be independent prognostic factors for T1 bladder cancer. Univariate and multivariate analyses suggested that age was not a prognostic factor for pathologic stage (P=0.216 and P=0.433, respectively), while age (older), gender (male), and NMP22 (positive) might be independent prognostic factors for pathologic high-grade bladder cancer.

Discussion

So far, no examination is perfect for the early diagnosis of bladder cancer. Cystoscopy is the gold standard, but it is invasive, expensive, and inconvenient. Cytology is relatively cheap and convenient, but its sensitivity for low-grade disease is insufficient (13). NMP22, as a cheap and convenient assay with adequate sensitivity and specificity, has been recommended for daily practice in some studies (14,15), but other studies have reported that its diagnostic performance is limited (16,17). According to the meta-analysis by Wang and colleagues, the reported sensitivity of NMP22 varies widely, ranging from 5.56% to 100% (15), and the conclusions of different studies differ substantially as well. In this study, only 66 out of 229 patients with pathologically confirmed bladder cancer had a positive NMP22 result. The sensitivity was 28.82%, and the false negative rate was 71.18%. We consider this false negative rate too high to recommend using NMP22 alone in the early diagnosis of bladder cancer.

The large variation in the reported sensitivity of NMP22 begins with its detection principle. NMPs form the non-chromatin network framework of the nucleus, which determines the morphology of the nucleus and organizes the DNA into a three-dimensional structure (18). They play an important role in DNA replication, transcription, RNA processing, regulation of gene expression, and other processes (19). NMPs are insoluble proteins, but they can be decomposed during apoptosis and released into the surrounding environment. More than ten kinds of NMPs have been identified, some of which are tissue-specific and tumor-specific. NMP22 is one of many nuclear matrix proteins and is specific to urothelial cells; its content in cancerous urothelial cells is 80 times higher than in normal cells (8). NMP22 in bladder cancer cells can be released into the urine in the form of cleavage fragments or complexes during apoptosis, and detecting these components in urine can help determine whether bladder cancer is present. However, the disadvantages of this approach are also obvious. Because the rates of apoptosis and exfoliation of bladder cancer cells are inconsistent, the concentration of NMP22 released into the urine changes constantly; the urinary NMP22 concentration is therefore not stable but varies with time. For the same bladder cancer patient, the NMP22 concentration may differ considerably between the first urination in the morning and urine excreted after drinking a large amount of water in the afternoon. Repeated testing may improve the detection rate, but the cost may be unacceptable to patients; at the same time, no study has examined the relationship between the number of repeat tests and the detection rate.
In addition, the test urine is collected by the patients themselves, so whether patients handle the urine according to the doctor's instructions is uncertain, which may also affect the results. Clinical studies have observed this phenomenon: across different studies (15), the sensitivity and specificity of NMP22 vary considerably, which is another reason we believe there is a risk in using NMP22 alone. However, this does not mean that NMP22 is worthless. During cystoscopy, some early bladder cancers or carcinoma in situ may be difficult to detect with the naked eye, but these tumor cells can release NMP22 into the urine. If cystoscopy is combined with NMP22 detection, the detection rate of bladder cancer can be significantly increased; relevant studies have confirmed that, with NMP22 combined with cystoscopy, the detection rate of bladder cancer can be as high as 99% (9). Therefore, it is most important to fully understand the advantages and disadvantages of the NMP22 test and to apply it appropriately in clinical practice.

The sensitivity of NMP22 might differ between pathologic stages and grades. Jamshidian and colleagues conducted a study and reported that the sensitivity of NMP22 for Ta, T1, and T2 disease was 31.3%, 90.0%, and 66.7%, respectively (20). In our study, the sensitivity of NMP22 for Ta, T1, and T2 disease was 20.78%, 50.85%, and 25.00%, respectively, and the NMP22 test had significantly higher sensitivity for the detection of T1 stage tumors (P<0.001). In terms of pathologic grade, in Jamshidian's study the sensitivity for grade 1, 2, and 3 disease was 66.7%, 81.8%, and 84.6%, respectively, while in this study the sensitivity of NMP22 for low-grade and high-grade disease was 11.11% and 38.51%. These results cannot be compared directly, since the grading systems differ; still, the sensitivity of NMP22 was significantly higher for the detection of high-grade disease (P<0.001) in our study, and Wang and colleagues' pooled analyses reached a similar conclusion (15). Although the sensitivity of NMP22 was significantly higher in the detection of T1 and high-grade bladder cancer, the false negative rate was too high to recommend using NMP22 alone in the early diagnosis of bladder cancer, as missing NMP22-negative tumors is dangerous for any patient. Detection of NMP22 expression in tissue might increase the sensitivity of the test, but the NMP22 assay is designed for noninvasive detection of bladder cancer, and tissue testing would negate its significance as a noninvasive test. In addition, once tissue has been obtained, the diagnosis of bladder cancer can be made directly by pathological analysis, and NMP22 detection is not needed.

The expression of NMP22 in other tumors relates to the specificity of NMP22 detection. NMP22 is one of many nuclear matrix proteins and exists specifically in urothelial cells; its content in cancerous urothelial cells is 80 times higher than in normal cells (8). NMP22 is therefore designed for the detection of bladder cancer, and research on NMP22 is almost entirely concentrated in the bladder cancer field. We searched the relevant reports and found only two studies in other cancers (21,22). These two studies analyzed the value of NMP22 in the diagnosis of renal cell carcinoma and found that NMP22 might have diagnostic value for renal cell carcinoma, but no follow-up studies have been conducted.
Prognostic factors help to predict patients' prognosis, usually survival and recurrence. In this study, we analyzed the predictive value of NMP22 for the pathological grading and staging of bladder cancer; that is, whether a patient with a positive NMP22 result is more likely to be diagnosed with bladder cancer of a certain grade and stage. There are many prognostic factors in bladder cancer, including oncogenes and tumor suppressor genes (Ras, ErbB, Rb, TP53, p21), indicators related to cell proliferation and apoptosis (Ki-67, Fas, FasL), vascular endothelial growth factor (VEGF), epidermal growth factor (EGF), and transforming growth factor (TGF). These factors have some value in predicting the survival and recurrence of bladder cancer (23). Regarding the prognostic value of NMP22 for bladder cancer, or the association of NMP22 with the pathologic features of bladder cancer, we identified two previous studies. Zippe and colleagues (24) analyzed 18 patients with biopsy-confirmed bladder cancer and 312 patients with benign disease of the bladder and found no difference in NMP22 values when grade I and II cancers were compared with grade III cancers, and no significant difference between superficial (Ta/Tis/T1) and invasive (T2/T3) cancers. Jamshidian and colleagues (20) studied 76 patients with bladder cancer and 75 volunteers without bladder cancer and found a significant association between the level of urine NMP22 and the pathologic stage and grade of bladder cancer. In our study, logistic analysis revealed that NMP22 was an independent prognostic factor for high-grade and T1 bladder cancer. Our conclusion differed from Zippe's; a possible explanation is that only 18 patients with bladder cancer were enrolled in Zippe's study, a sample size that might have been too limited to detect a significant difference. Jamshidian's study included 76 patients with bladder cancer, a larger sample than Zippe's, and its conclusion was similar to ours. In summary, our study had the largest sample size, and our conclusion was that NMP22 might be a prognostic factor for high-grade and T1 bladder cancer; further studies are needed to clarify this conclusion.

However, some weaknesses can be identified in this study. This was a retrospective study with a limited sample size, and patients with T2 disease were not followed, meaning that some patients who underwent radical cystectomy might have had different pathologic results later. Survival was also not analyzed, because non-muscle invasive disease and muscle invasive disease should not be mixed in the same survival analysis. For patients with non-muscle invasive disease, the follow-up time was not long enough to detect a difference in survival between NMP22-negative and NMP22-positive patients; for patients with muscle invasive disease, 16 cases was too small a sample to detect a significant difference in survival. In addition, we did not analyze the specificity of the NMP22 test for detecting bladder cancer, because only patients with confirmed bladder cancer were included. In this retrospective study, all patients were diagnosed with bladder cancer in the outpatient clinic, admitted to hospital for surgery, and confirmed as having bladder cancer by postoperative pathology; these patients routinely completed the NMP22 examination after admission.
The sensitivity of NMP22 for bladder cancer was calculated as: number of NMP22-positive bladder cancer patients/(number of NMP22-positive bladder cancer patients + number of NMP22-negative bladder cancer patients) ×100%. Specificity would be calculated as: number of NMP22-negative patients without bladder cancer/(number of NMP22-negative patients without bladder cancer + number of NMP22-positive patients without bladder cancer) ×100%. Since we did not include any patients without a diagnosis of bladder cancer, we could not analyze the specificity of NMP22 detection. In addition, some influencing factors, such as smoking status and body mass index (BMI), were not taken into consideration in this study.

Conclusions

In conclusion, although the sensitivity of NMP22 was significantly higher in the detection of T1 and high-grade bladder cancer, the false negative rate was too high to recommend using NMP22 alone in the early diagnosis of bladder cancer. NMP22 might be a prognostic factor for high-grade and T1 disease, but considering the limitations of this study, further studies are needed to clarify this conclusion.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013). This study was approved by the Ethics Committee of Peking University Cancer Hospital (approval No. 150204) and informed consent was taken from all the patients.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
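As a check on the formulas above, the standard 2x2 definitions can be written out directly; this is an illustration only, not part of the original analysis.

```python
# Standard 2x2 definitions underlying the formulas above (illustrative only).
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Percentage of diseased patients the test correctly flags positive."""
    return 100.0 * true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Percentage of disease-free patients the test correctly flags negative."""
    return 100.0 * true_neg / (true_neg + false_pos)

# Reported cohort: 66 NMP22-positive of 229 confirmed bladder cancer patients.
print(f"Sensitivity: {sensitivity(66, 229 - 66):.2f}%")  # 28.82%
# Specificity cannot be computed here: the cohort includes no patients
# without bladder cancer, so true negatives and false positives are unknown.
```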
2020-12-03T09:05:31.380Z
2020-05-11T00:00:00.000
{ "year": 2020, "sha1": "cefc62173548583ea5d09947d973b370f0993c4c", "oa_license": "CCBYNCND", "oa_url": "https://tcr.amegroups.com/article/viewFile/45685/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f1a05656d6fc55d80a30123a5a8cf66be068c783", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
203503059
pes2o/s2orc
v3-fos-license
Improved consistency of bond-line thickness when conducting single lap-shear joint tests

Graphical abstract

Method details

Despite its recent criticism, single lap-shear joint (SLJ) testing is still the most commonly used method for the characterization of adhesive joint behavior and strength in industries such as automotive, aerospace, and oil & gas [1]. The SLJ test configuration is often criticized due to the number of setup parameters that can negatively affect the consistency of the test results unless controlled. Two such key parameters are correct alignment of the substrates and control of the bond-line thickness of the adhesive. Because the shear and peel stress distributions are concentrated at the edge of the joint overlap area, ensuring the joint maintains its rectangular shape with no over-fill or under-fill at these free edges is imperative to maintain consistency with this joint setup [2]. This method aims to ensure correct alignment and consistent bond-line thickness with every series of tests, thereby reducing the likelihood of error within single batches and for batch-to-batch comparisons.

Similar to the recommendations in the original standard [3], the method involved preparing the test joints in panels of at least five bonded samples. For this method, a bond-rig was designed which could accommodate ten bonded lap-shear samples per test (the .stp files are included in the Resource material). This new bond-rig design ensured correct alignment of the samples and controlled the bond-line thickness of each sample. The design also ensured that the fillet at the edge of the adhesive joint has the same dimensions from batch to batch, so that the geometric parameters that may influence the bond strength of each bonded sample are kept consistent.

As can be seen from the CAD model of the new bond-rig in Fig. 1, the bond overlap length was controlled at 12.7 mm as per ASTM D1002-10 [3]. Each lap-shear coupon was 101.6 mm in length; therefore, by aligning each sample with the +x and -x edges of the bond-rig, the overlap length was controlled at 12.7 mm. As can be seen in Fig. 1, the top level was 1.92 mm higher than the bottom level. This gave a bond-line thickness of 0.3 mm when taking into account the thickness of the lap-shear samples (1.62 mm). This bond-line thickness was selected because it fell within the optimal bond-line thickness range of the 2-part Scotch-Weld 2216 epoxy adhesive used in the early parts of this work, as recommended by the manufacturer, 3M. It was also found to be suitable for a 1-part epoxy film adhesive used later, namely Hexcel 300 g/m2 Redux 312.

With the bond-rig designed to control the bond-line thickness and sample alignment, a number of samples were bonded to determine if the new design had improved the consistency of the joints. Previous bonding studies utilized a 15 mm non-stick release film to prevent any samples sticking to the bond-rig as a result of adhesive over-fill once pressure was applied. However, wrinkling of the non-stick release film during curing was found (see Supplementary material) to have a negative effect on maintaining a consistent bond-line thickness in the joint, and this was evident after several curing cycles. To improve consistency and remove this effect, a CoBlast FEP (fluorinated ethylene propylene) release coating was deposited on the parts.
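For illustration, the bond-line thickness follows directly from the rig geometry described above (step height minus coupon thickness); the sketch below simply encodes that arithmetic, with illustrative names.

```python
# Illustrative arithmetic for the rig geometry described above: the bond-line
# thickness is the rig step height minus the lap-shear coupon thickness.
STEP_HEIGHT_MM = 1.92       # top level sits 1.92 mm above the bottom level
COUPON_THICKNESS_MM = 1.62  # thickness of each lap-shear sample

bond_line_mm = STEP_HEIGHT_MM - COUPON_THICKNESS_MM
print(f"Bond-line thickness: {bond_line_mm:.2f} mm ({bond_line_mm * 1000:.0f} um)")
# -> 0.30 mm (300 um), within the adhesive's recommended range
```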
The process of depositing a fluoropolymer using CoBlast has previously been reported for the deposition of lubricious coatings on superelastic nitinol for medical device applications [4]. The application of this release layer eliminates the need for any release films or sprays. This one-step surface treatment greatly improves the accuracy of the bond-line thickness and is an optional improvement on the recommended joint setup in the original method [3].

The method of clamping the bonded samples was investigated, as there were inconsistencies in the bond-line thickness depending on the position of the samples in the bond-rig. Two clamping methods were investigated, as shown in Fig. 2(a) and (b). The initial setup (Fig. 2(a)) utilized drilled and tapped holes as anchor points to secure a clamping bar which ran the full length of the bond area. However, this was found to result in an uneven bond-line within any given sample, with thickness increasing from left to right, as per Fig. 3. This was attributed to the unsupported overhang of the clamping arm on the right-hand side causing additional squeeze-out compared with the supported left side (±34% variation from the average value). It was also noted that hand-tightening of the clamping arm screws resulted in varying bond-line thicknesses, often below the anticipated 300 µm. In order to obtain a more consistent bond-line thickness both within and between samples, an alternative clamping arrangement was investigated, as per Fig. 2(b). Here, weights are placed along the bonding rig running the full length of the y-direction. These weights exerted a uniform force on each bonded sample during the cure cycle. This approach was found to significantly reduce the variation of bond-line thickness between samples, with an average bond-line thickness of 254 µm falling within the desired range. The variation of bond-line thickness within any given sample was reduced to less than 10% of the average thickness. This second approach was successfully implemented in a recent study comparing surface preparation techniques and finishes of bond joints [5] (Supplementary Fig. S1).

Conclusion

By ensuring equal weight distribution and using a bond-rig as detailed in Fig. 1, it is possible to greatly improve the consistency of the setup parameters and reduce the likelihood of over- or under-fill in the joint. The additional use of the CoBlast FEP release coating further improves the setup process, ensuring that the joint maintains its rectangular shape along with a consistent bond-line thickness across the test panels. The release surface has been used extensively over the course of a year without issue (over 100 cure cycles); the samples can still be removed with ease, with very little wear or adhesion to the surface evident. Noticeable improvements have also been observed in the consistency of bond strength results, demonstrating that once the setup parameters are kept consistent, the single lap-shear joint test can provide very valuable information on the behavior and strength of adhesive joints and reduce user effects. Furthermore, the presence of bond-line defects such as porosity or micro-voids could be eliminated by incorporating a vacuum bagging system with the bond-rig discussed herein. This was not included in the present work due to time constraints but is recommended for future work if more accuracy is required.
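To make the variation figures quoted above concrete, the half-range of thickness measurements along a sample can be expressed as a percentage of their mean; the measurement values in the sketch below are hypothetical, chosen only to mirror the reported ±34% and <10% cases.

```python
# Hypothetical bond-line thickness measurements (um) along one sample; the
# values are illustrative, not data from the study.
import statistics

def percent_variation(measurements_um: list[float]) -> float:
    """Half-range of the measurements as a percentage of their mean."""
    mean = statistics.mean(measurements_um)
    half_range = (max(measurements_um) - min(measurements_um)) / 2
    return 100.0 * half_range / mean

clamped = [200, 240, 280, 330, 400]   # thickness rising left to right
weighted = [245, 250, 255, 258, 262]  # near-uniform under dead weights

print(f"Clamping bar: +/-{percent_variation(clamped):.0f}% of mean")   # ~34%
print(f"Dead weights: +/-{percent_variation(weighted):.0f}% of mean")  # ~3%, <10%
```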
2019-09-17T03:01:57.744Z
2019-09-10T00:00:00.000
{ "year": 2019, "sha1": "ca1d101ec444c0fd8bd732549bb6ce6425eca626", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.mex.2019.09.002", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ab900362835233201e8e2486dc9e7584884c8375", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
247307582
pes2o/s2orc
v3-fos-license
Disasters as Ambivalent Multipliers: Influencing the Pathways from Disaster to Conflict Risk and Peace Potential Through Disaster Risk Reduction

Disasters, including disaster-related activities, have been shown to precipitate, intensify, and lengthen violent conflicts, yet disasters have also demonstrated the potential to reduce violent conflict, encourage cooperation, and build peace. Disaster-conflict and disaster-peace literature has sought to establish causal and linear relationships, but research has not explored with the same rigour the causal mechanisms linking these phenomena in long-term processes of social–political change and how they are influenced by human actions and inactions. This research fills this gap by drawing on in-depth interviews with disaster risk reduction (DRR) professionals in 25 disaster- and conflict-affected countries in South Asia, the Middle East, and Africa to analyse the pathways leading from disasters and disaster-related activities to violent conflict and peace. The findings highlight how these pathways can be deliberately swayed towards peace potential through DRR.

Disasters (including those influenced by climate change) may instigate social-political instability and precipitate, intensify, or lengthen violent conflicts ('conflicts') (Eastin, 2016;Gawande et al., 2017;Nel & Righarts, 2008;Raleigh & Kniveton, 2012). Extant literature has made strides towards unpacking the relationships between disasters, conflicts, and peace, but the findings are mixed. Some have argued that disasters may increase the risk of conflict, but not uniformly or deterministically (Bernauer et al., 2012;Ide et al., 2020;Slettebak, 2013). Others have found that disasters do not increase the risk of conflict (Couttenier & Soubeyran, 2013) and, in some circumstances, may contribute to cooperation and peace through 'disaster diplomacy' (Kelman, 2012). This article argues that these seemingly incompatible findings are generated by three main oversights in the literature. First, significant research focuses on the relationship between disaster and conflict or disaster and peace, but fewer studies seek to understand these relationships in concert and form synthetic conclusions on how disasters may be related to both conflict and peace. Second, disasters are often conceptualised as exogenous events or 'shocks' rather than recognising them as complex processes that are socially constructed over long time horizons and intertwined with human actions and inactions (exceptions include Siddiqi, 2018;Wisner et al., 2004). Third, most research has sought to establish a causal relationship between disasters and conflicts, with comparatively less attention paid to the investigation of causal pathways and mechanisms (Xu et al., 2016) (exceptions include Ide et al., 2020;Van Baalen and Mobjörk, 2018), and even less research identifies how concerted action can shift these pathways. As a result, there remains a lack of integrated understanding of the disaster-conflict-peace nexus and scant guidance on when, where, and how to act to reduce disaster-related conflict risks and increase peace potential. The dominant discourse that disasters simply increase conflict risks validates securitised and top-down approaches to disaster management as a form of social control rather than providing multifaceted support to disaster- and conflict-affected populations (Baker & Ludwig, 2018;Pyles et al., 2017).
The present research contributes a more cohesive and grounded understanding of the disaster-conflict-peace nexus by presenting empirical evidence from 32 in-depth interviews with DRR professionals at the front lines in 25 disaster- and conflict-affected countries in the Middle East, South Asia, and Africa. The analysis (1) investigates disaster-conflict and disaster-peace pathways in places affected by ongoing violent conflicts, (2) elucidates how human actions and inactions determine these pathways, and (3) lays new groundwork for how DRR can influence these pathways towards peace.

Theorising the Disaster-Conflict Relationship

Disasters connect to conflicts through a complex chain, beginning with how disasters arise. The United Nations Office for Disaster Risk Reduction (UNDRR) defines disaster as 'a serious disruption of the functioning of a community or a society at any scale due to hazardous events interacting with conditions of exposure, vulnerability and capacity, leading to one or more of the following: human, material, economic and environmental losses and impacts' (UNDRR, 2017). This definition underscores that disasters are not natural but are socially constructed through patterns and histories of human actions and inactions that give rise to vulnerabilities (Ball, 1975;Chmutina & von Meding, 2019;O'Keefe et al., 1976). Where disasters occur in conflict-affected contexts, conflicts thus also contribute to the root and proximate causes of disasters (Peters, 2019, 2021;Wisner et al., 2004), suggesting that the relationship between disasters and conflicts is complex and may be reinforced through disaster and conflict vulnerabilities (Peters & Kelman, 2020;von Uexkull et al., 2016). Human actions and inactions, including disaster mitigation and prevention, preparedness, response, recovery, and reconstruction, are referred to in this article as 'disaster-related activities', and they are part and parcel of why and how disasters manifest as well as how they may contribute positively or negatively to conflict risks. Disasters and disaster-related activities are often coupled in this article, as they often cannot, and should not, be artificially separated.

When disasters are construed as exogenous and time-bounded natural events, the causal chains linking them to conflicts are misunderstood. For example, exaggerating the role of climatic factors in the 2006-2010 drought in Syria underplays human culpability in both the creation of drought and the Syrian Civil War commonly associated with it (2011-ongoing) (De Châtel, 2014;Fröhlich, 2016). Decades of water mismanagement and large-scale overexploitation of water resources instrumentally contributed to extreme drought conditions, and the Syrian regime failed to respond to the drought in ways that lessened vulnerabilities and averted further humanitarian crisis. Other countries in the Levant experienced the same meteorological conditions but not the same armed conflict and political instability. This underscores that disaster-conflict links cannot be explained through the severity or frequency of natural hazards alone (Brzoska, 2018).

Layered Conditions and Pathways from Disaster to Conflict and Peace

This literature review focuses on the role of resource scarcities and distribution in linking disasters with violent conflicts and peace.
When they are not avoided, disasters almost by definition lead to increased needs of survivors, which may be paired with a decrease in available resources. Much disaster-conflict research has adapted the environmental scarcity thesis to explain how disasters may increase conflict risks through relative deprivation, frustration, and weakened state institutions (Homer-Dixon, 1994;Kahl, 2006;Slettebak, 2012). For example, drought conditions may escalate resource competition over limited natural resources and, thus, violent conflict amongst pastoral communities in northern Kenya (Njiru et al., 2012), and dwindling freshwater and land resources may drive resource competition and violent conflict between farmers and pastoralists in Nigeria (Audu, 2014). Van Baalen and Mobjörk (2018) identified that natural resource scarcity in East Africa can degrade livelihood conditions, increase and change migration and mobility patterns, shape how armed groups make tactical decisions, and provide opportunities to exploit local grievances. However, the presence of formal institutions (e.g., resource management structures) and informal institutions (e.g., customary practises of sharing resources) mediate the relationship between natural resource scarcity and conflict (Linke et al., 2018). Beyond natural resource scarcities, disasters can create scarcities in other basic resources, like food and water, which have also been connected with increased conflict risks (Nel & Righarts, 2008). The inequitable distribution of resources may further explain why disasters may lead to conflicts. Disasters are not 'great equalisers', and disaster-related activities can not only fail to mitigate conflict risks but also serve to reinforce them. Disasters have a propensity to impact marginalised groups more frequently and intensely due to the vulnerabilities forced upon them (Susman et al., 1983), but resources spent on disasters (including disaster relief and DRR) tend to centre on and benefit the most privileged groups in society (Cuny, 1983). Furthermore, disasters can be leveraged as opportunities to advance political objectives like targeted marginalisation through post-disaster aid and development (Harvey, 2014;Klein, 2007). The inequitable distribution of post-disaster resources can lead to new or heightened grievances, which, in addition to promoting conflict on their own, can also be exploited by armed groups (Van Baalen & Mobjörk, 2018). An influx of resources can lead to shifts in relative bargaining positions of conflict parties and generate favourable conditions for rebel groups to recruit supporters and gather arms (Brancati, 2007;Brzoska, 2018;Kikuta, 2019;Nel & Righarts, 2008). Enabling conditions common in conflict-affected contexts may promote these disaster-conflict pathways. Disaster-related resource scarcities alone may not lead to violence in the absence of contextual factors like 'negative othering', low power differences, and recent political change (Ide, 2015), and they may have more pronounced effects on violent conflicts where intense competition over resources preexists (Brancati, 2007) and where there is ethnic fractionalisation and ethno-political exclusion (Brzoska, 2018;Couttenier & Soubeyran, 2013;Ide et al., 2020;Schleussner et al., 2016).
Weaknesses (and sometimes strengths) in the institutional setting, poverty, and resource-dependent livelihoods have also been identified as characteristics that increase conflict risks following disasters (Brzoska, 2018). Despite these pathways and conditions, disasters do not deterministically heighten conflict risks. For example, disasters can lead to patterns of conflict and cooperation in a single context that materialise at different points in the unfolding of a disaster and its response (Oliver-Smith, 1979;Pelling & Dill, 2010), and disasters have been linked to both humanising and 'othering' adversaries (Le Billon & Waizenegger, 2007). Disasters have been found to act as windows of opportunity for change (Birkmann et al., 2010) and may even encourage conflict de-escalation, cooperation between adversaries, or formal peace processes. Regimes that respond capably and compassionately can earn popular support and promote social cohesion (Olson & Gawronski, 2010;Slettebak, 2012). Disasters can provide an impetus to strengthen or create new institutions that engender trust-building, including inclusive institutions that address disasters (Brzoska, 2018;Hyndman, 2009). Disaster relief and recovery activities that address the root causes of strained social relationships may function as a form of conflict prevention or peacebuilding (Arai, 2012), and international support and resources for peacebuilding in places of geopolitical importance may further bolster openings for peace processes in the wake of disaster (Klitzsch, 2014).

Exploring the Potential for DRR to Mitigate Conflict and Promote Peace

A nascent body of literature has explored how DRR may promote peace in pre-disaster phases including sustainable development as well as in post-disaster reconstruction and learning phases (Peters et al., 2019a;Peters & Peters, 2021). The field of DRR takes actions to save lives by systematically understanding and acting upon the root causes of disasters, and DRR activities can be integrated into all disaster phases, including pre-disaster prevention, preparedness, and mitigation; recovery (e.g., distributing relief in ways that prevent additional deaths and promote coping capacities); and reconstruction and learning (e.g., 'building back better' and promoting a culture of prevention in sustainable development). Prevention may link DRR to peacebuilding, which encompasses a range of initiatives taken before, during, and after conflicts to prevent lapses or relapses into violent conflict (Peters & Peters, 2021). Yet, there is a lack of guidance on how to conduct DRR in conflict-affected contexts (Peters et al., 2019b). The globally agreed Sendai Framework for Disaster Risk Reduction (2015) and other mainstream DRR policies and frameworks do not address issues related to conflict and peace in an effort to be perceived as politically neutral, even though conventional approaches to DRR, such as working primarily with state actors, in conflict-affected contexts can directly or indirectly play into conflict dynamics (Peters, 2019).
Despite the challenges, DRR is possible even in high-intensity conflict contexts (Mena & Hilhorst, 2020), an evidence base to which the present research contributes, and DRR has the potential to work 'on' (and not just around) conflicts through targeted and unorthodox approaches, such as engaging with non-state armed groups, informal institutions, or affected groups whose conflict and disaster vulnerabilities overlap (Peters, 2019;Walch, 2018). Beyond the equitable provision of resources, it may matter how DRR activities are designed and implemented to explicitly promote cohesion, cooperation, and peace.

Methods

The present research seeks to explore and deepen an understanding of how disasters and disaster-related activities may be linked to violent conflict and to investigate to what extent and how DRR activities can influence these pathways and contribute to peace potential. To do so, I conducted in-depth semi-structured interviews with 32 DRR professionals with experience in designing, implementing, and/or evaluating DRR programming in conflict-affected contexts. The participants worked with development, humanitarian, and advocacy international and national non-governmental organisations (I/NGOs), multilateral organisations, and networks in 25 countries in Western Africa, Northern Africa, Middle Africa, Eastern Africa, Western Asia, and Southern Asia (see Figure 1) (30 participants from these regions, 2 from US/Europe) (see Appendix A for a description of the participants and the disaster and violent conflict contexts). I selected participants based on their on-the-ground and direct experience with DRR in conflict-affected contexts and their knowledge of and interest in the topics of this research. I identified initial participants among attendees of a pre-conference event on DRR in contexts affected by violence, conflict, and fragility hosted by the Overseas Development Institute at the Africa-Arab Platform on Disaster Risk Reduction in October 2018 in Tunis, Tunisia. This participatory event opened formal space to discuss the challenges and imperative of implementing DRR in conflict-affected contexts. I recruited additional participants using snowball sampling methods, which are often used to identify participants with significant experience and knowledge, including those in conflict-affected areas (Cammet, 2006), by asking participants to refer DRR professionals with expertise on these topics. I terminated interviews when the study reached data saturation and new data became repetitive of previously collected data (Sandelowski, 2008). Each interview lasted approximately 90 min on audio conferencing platforms (primarily Skype and WhatsApp) from January to March 2019. Remote interviews enabled practitioners in difficult-to-reach regions experiencing armed conflict and crisis to be included in this study. I asked the participants semi-structured interview questions (see Table 1) and tailored follow-up questions on the cumulative effects of disasters and disaster-related activities on violent conflict and peace and the atmosphere of vulnerability in which they arise, rather than isolating the effects of a particular disaster on conflict and peace. All participants focused their responses on DRR programming, which was present to varying extents in all of the included conflict-affected contexts.
Where DRR was integrated with disaster relief, these activities were considered relevant and described as 'disaster-related activities'. Disasters and social and political violence were considered relevant, as described by participants, when they (1) occurred in the same places and time periods and (2) directly involved and affected the areas in which and people with whom participants conducted DRR programming. Participants discussed large- and small-scale, rare and frequent disasters, which similarly impact conflict and peace (Brzoska, 2018). Even relatively small and unreported disasters can have profound effects on people's lives, especially those living in the margins. Participants discussed violent conflicts spanning various levels of intensity and frequency (e.g., violent protests, mass riots, communal violence, one-sided violence, guerrilla activity, terrorism, and civil conflict) in active and post-conflict contexts.

Table 1. Semi-structured interview questions.
1. How do disasters impact conflict and/or peace dynamics?
2. Do disaster risk reduction (DRR) projects directly or indirectly contribute to conflict prevention or peacebuilding? If so, how?
3. Do DRR projects directly or indirectly cause or exacerbate conflict? If so, how?

I transcribed the interviews and assigned them unique codes corresponding to region and interview number. For participants with professional experience in multiple countries or regions, I specified the contexts relevant to the data excerpts. I conducted a data-driven thematic analysis following the six phases of thematic analysis established by Braun and Clarke (2006): (1) becoming familiar with the data, (2) generating initial codes, (3) searching for themes, (4) reviewing themes, (5) defining and naming themes, and (6) producing the report. I reported the results with an overarching analytic narrative based on these themes, which are representative of perspectives shared by all or nearly all participants, with succinct and illustrative extracts and quotes attributed to specific interviews. I retained and presented unique or contradictory passages to avoid artificially smoothing over tensions and inconsistencies within the dataset (Braun & Clarke, 2006). The last step of my methodology was to conduct participant validation by sharing a draft paper with the methods and synthesised results over email. I solicited participant feedback on accuracy and reflections on the overall content in an effort to co-create knowledge (Birt et al., 2016;Harvey, 2015). I incorporated participant corrections and comments into the final results.

Natural Resource Scarcities Play a Limited Role in Driving Conflicts

The data provided limited support for the theory that disasters drive violent conflict by creating or exacerbating conditions of natural resource scarcity, with evidence coming from East Africa. Multiple participants in the East Africa region discussed how droughts and floods can contribute to violent communal conflicts related to seasonal land and natural resource use involving pastoralists and farmers. One participant explained that violent communal conflict in Kenya and Ethiopia is driven by drought: 'Whenever there is water and pasture shortage, there is always conflict over resources' (EAfrica06). However, other evidence suggested that causation cannot be inferred from the sequencing of natural resource scarcity and conflict, and these relationships are not universal across places.
Some groups in South Sudan employ violence when gathering and defending resources like food and ammunition at the end of the dry season when resources are scarce, but they do so in anticipation of decreased mobility and access to resources during the impending rainy season (EAfrica11). Violent communal conflicts in Niger tend to occur during the short rainy season when livestock graze, but 'when dry season comes, then the conflict stops' (WAfrica15). Participants working in Ethiopia and South Sudan explained that drought may be reported by conflict parties as the cause of conflicts to justify and conceal what is actually political and territorial violence (EAfrica05; EAfrica11).

Basic Resource Scarcities can Exacerbate Conflicts

The data provided strong evidence that institutional failures to meet basic needs (e.g., food, water, power, medical care) surrounding disasters can exacerbate violent conflict risks. Virtually all participants discussed how disasters can increase needs for basic resources and services, which are often already high in conflict-affected areas. Across regions, these pressures can be met with weak formal and informal institutional capacities and unwillingness to cooperate to provide these resources and alleviate suffering. For example, a participant in Sudan described how youth attempted to become involved in disaster response following major floods in 2013 and 2015:

The local government does not support these youth groups, because these are roles or services that the local governments should have been providing. They fear that the transfer to youth groups will reflect poorly on their image and…undermine their [political] power. (NAfrica03)

Grievances may be doubly created by civil society's unmet needs and unmet desires to cooperate. Similarly, both governments and civil society may have a limited tolerance for international cooperation to meet needs. For example, the 2004 Indian Ocean Tsunami opened the door for INGOs to conduct humanitarian relief work in Sri Lanka, but after the emergency recovery period passed, the government asked most INGOs to leave the country due to suspicions of interference in national and local politics (SAsia29). I/NGOs working on disasters in conflict-affected areas inherently become part of the conflict, sometimes inadvertently and other times through directly attempting to influence politics. I/NGOs may also influence conflict dynamics through an influx of resources. For example, a participant working in Nepal described that militia groups extort I/NGOs and capture a considerable percentage of DRR resources, which can be used to support agendas for violence (SAsia25).

Inequitable Distribution of Resources Contributes to Political Conflict Risks and Social Cleavages

The data provided strong evidence that the inequitable distribution of resources is related to conflict risks. Many participants across regions explained that limited capacities for disaster-related activities may force them to be implemented unevenly or asynchronously, with some groups receiving benefits after others or not at all. For example, following the 2015 Gorkha earthquake in post-conflict Nepal, local governments distributed relief materials first and primarily to their political supporters, which threatened the national political peace process (SAsia23; SAsia24). In post-conflict Sri Lanka, frequent disasters create enormous needs, and the government provides resources mainly to target groups that align with its political agenda (SAsia29).
All participants noted that disasters tend to impact marginalised groups more severely and frequently, but multiple participants across regions of the study disclosed that disaster-related activities taken before, during, and after disasters often inequitably distribute resources (such as disaster mitigation and recovery aid) in ways that reflect patterns of exclusion, marginalisation, and discrimination (i.e., structural violence). For example, according to the experiences of a participant in a region in India affected by violent communal conflict, post-disaster needs assessments are typically conducted by officials belonging to higher castes, who tend to prioritise the needs of those from the same caste and sideline those from lower castes in recovery and reconstruction (SAsia22). This not only leads to short-term effects on the ability of marginalised groups to meet their immediate needs, but inequitable 'building back better' efforts further disadvantage them for future disasters. In places like Bangladesh, historical patterns of structural violence can make marginalised communities wary to accept DRR resources and support out of fear that they will lose ownership of their land when it is improved, leaving them at continued high risk of disasters (SAsia21). The inequitable distribution of disaster-related resources can reinforce structural violence that fuels cleavages, and real or perceived deprivation may grow alongside the capacity of disaffected groups to fight. When needs are unmet by formal institutions, communities are left to meet their own needs by procuring (and defending) resources, sometimes violently. During extended periods of drought, villages with water points are often unwilling to share their own depleted resources, especially when their own needs are unmet, and increasingly distressed groups may resort to the violent appropriation of resources to survive, as seen in participant experiences in Kenya, Somalia, Cameroon, Chad, Niger, Nigeria, and Senegal (EAfrica09; MAfrica/WAfrica15). Disasters can motivate people to strengthen intragroup ties and 'other', exclude, and even place blame on outsiders, particularly where there is a turbulent history of interactions between groups. These social effects can escalate into violence. For example, a participant working in Burundi, the DRC, and Zimbabwe explained: The conflict may not initially be a violent conflict, but it may gradually move toward violent conflict when people feel their very survival is being threatened when they don't have access to a piece of land or area and where people who occupy those areas feel they are being threatened if they don't keep outsiders out of their areas. (EAfrica/MAfrica04) However, other participants described that people may at least temporarily support each other in the aftermath of a disaster. In some circumstances, the survival needs of a group more broadly defined (e.g., those affected by a disaster) take priority, and identity-based tensions may be temporarily set aside in places like Pakistan (SAsia26). Following deadly floods and landslides during active conflict in Cameroon, separatist groups and the military provided support and sympathy to victims and their families, which contributed to 'resolving conflict in some situations' (MAfrica13). However, this temporary social effect did not extend to a political or durable peace. 
DRR can Work Through the Environment

DRR can play a central role in preventing disaster-related conflicts by preventing or mitigating disasters: where the manifestation of disasters contributes to the causes of conflict, disaster prevention can also contribute to conflict prevention. Disasters are in part created through human activities, like unsustainable resource management and patterns of development. For instance, people contribute to the creation of droughts in Kenya and Somalia due to the destruction of forests and environmental degradation (EAfrica09). Similarly, people cause floods in Sierra Leone through poor land management leading to erosion and through discarding materials in waterways and overwhelming drainage systems. DRR activities can help avoid these disasters: 'After removing those [discarded] materials, everyone was happy, because this is the only year they never experienced flooding, but the rains were very heavy' (WAfrica17). Working on natural resources, regardless of the strength of its relationship with conflict, can also be an entry point for DRR to address diverse causes of conflict and contribute to peacebuilding. For example, DRR activities can mitigate disaster risks in ways that also create shared local benefits and profits, such as water reservoir projects in Ethiopia and Kenya, which can serve as bridges to cultivating prosocial relationships (EAfrica06). These DRR activities move beyond 'do no harm' and sometimes even explicitly seek to promote social transformation and peace: 'It [DRR] safeguards resources, equitable share and access to those resources, and conversations around peacebuilding…that begins with the acknowledgement that there is a reason why we fight', explained a participant in Kenya (EAfrica07).

DRR can Work Through Politics

Where parties are motivated to meet immediate post-disaster needs, there may be opportunities to temporarily mitigate conflict and reduce vulnerabilities. Following a disaster, adversaries may pause hostilities and even engage in short-term cooperation, like agreeing to open borders to allow for the movement of humanitarian supplies or displaced people in Afghanistan (SAsia18) or arranging a temporary truce to enable I/NGOs to deliver relief items in Kenya (EAfrica07). The combination of halted violence and provision of humanitarian support can offer reprieve to beleaguered communities with long-standing unmet needs and simultaneously reduce disaster and conflict vulnerabilities. A participant in Libya claimed, 'In a conflict situation, the best thing to happen is to have a natural disaster [sic]' (NAfrica02). However, this perspective was not shared by other participants. Disasters only rarely motivate political cooperation that improves local conditions at the peripheries of power, and communities engaged in high-intensity violence are most often left with meager or zero DRR support in the aftermath of disasters due to conditions of insecurity, based on participant experiences in Cameroon, Chad, Niger, Nigeria, and Senegal (MAfrica/WAfrica15). Unmet public needs can aggravate popular grievances and bring awareness to problems in governance, which can be leveraged to effect positive political change.
For example, a DRR programme in the DRC works to empower citizens to hold their decision makers accountable for delivering DRR-related resources:

The effect on the people is that there is a revolt or an awakening that leads them to demand more, which in a way is a very positive thing for citizens around the world facing difficult situations…to demand more from their governments and duty bearers (MAfrica04).

Such DRR programming encourages peaceful collective action for change and fosters improved institutions and governance, both of which can extend to providing alternatives to violence regarding non-disaster issues. Other participants suggested that disasters can stimulate the creation of new cooperative institutions between adversaries. For example, conflict parties in Afghanistan created a joint disaster response plan after experiencing a major disaster (SAsia18), though other participants noted that what is on paper does not always translate into practice in post-conflict places, based on participant experiences in Nepal and Sierra Leone (SAsia24; WAfrica17). Sustained DRR programming can help adversaries understand that the reduction of shared disaster risks requires cooperation to 'work together as one system' (WAfrica17). For example, early warning systems for flooding on the Helmand River and Kabul River can forge, and indeed depend on, cooperation between riparian communities in Afghanistan and those in Iran and Pakistan, respectively, where upstream communities provide formal and informal flood warnings to downstream communities and, by doing so, strengthen good will and ties across political boundaries (SAsia18). DRR more often seeks to 'supplement and support' existing institutions involved in disaster-related activities, including through the provision of information and skills in programmes in Burundi, DRC, and Zimbabwe (EAfrica/MAfrica04). DRR programming is dependent on long-term relationships with local leadership in places ranging from Ethiopia and Kenya to Afghanistan (EAfrica06; SAsia18). A participant working in Kenya explained:

It [DRR] requires a lot of diplomacy, a lot of networking, a lot of building relationships…We found that if you are able to get a community leader who becomes your advocate, they are able to champion the cause and translate some of the capacity-building interventions we have done for them and take it a step further [to reduce conflict] (EAfrica09).

A participant working in Ethiopia and Kenya described that existing institutions and practices 'are the best community wealth' for conducting DRR, and the inclusion of clan leaders, elders, chiefs, community leaders, and religious leaders as well as government actors can provide avenues for 'approaching conflict in different ways' corresponding with local capacities for peace (EAfrica06). However, a participant working in Sierra Leone warned that DRR must be careful not to align closely with contested leaders, because 'certain communities might not be willing to be part and parcel about whatever that leader supports' (WAfrica17).

DRR can Work on Societal Relationships

Some of the most effective ways that DRR can influence causal pathways towards peace may be through community-based programming. In Yemen, DRR education and training programmes increase awareness of the norms that give rise to linked disaster-conflict risks and build capacities for youth-led peace activities (WAsia32).
Fostering awareness and understanding of these patterns may represent a first step towards innovating solutions that reduce dual risks, and in-depth analysis of intersecting needs and vulnerabilities can be used to guide further programming. However, DRR programming must balance how hard it pushes for prosocial changes with 'respect to local traditions and involvement of local community leaders' to be effective in places like post-conflict Liberia (WAfrica16). Virtually all participants across regions emphasised the importance of community ownership of activities. Opportunities to reduce disaster and conflict risks typically arise through long-term programming and trust-building, according to a participant in Lebanon (WAsia30). DRR can appeal to intersectional identities that help people re-humanise their adversaries. DRR must be sensitive to the identity dynamics of active and latent conflicts to ensure that programming does not exacerbate violence by working first or primarily with one ethnic group, for example, even if they have objectively greater needs, which corresponds with 'do no harm' (EAfrica/MAfrica04). However, DRR programmes may also be able to influence former adversaries to find or create a common identity as they pursue solutions to 'the same needs and problems' and work for 'the greater good' (EAfrica09). DRR programmes in places like Egypt take an explicit future orientation to encourage people to create a safer society together rather than confront and become entangled in difficult aspects of a violent past (NAfrica01). Several participants working in Kenya, Libya, and Afghanistan explained that even people embroiled in violence want to play an active role in ensuring a better future for themselves and their children, and DRR can provide an entry point for creating that future together (EAfrica07; NAfrica02; SAsia18). A participant working in Libya shared:

Here we are fighting at the same time what divides people in their communities and trying to create a forum for them where they see the commonalities between them and bring them back to how they can be more active citizens. It is important to focus on the process and not just the results…It's a challenge because you are dealing with human beings that are feeling hatred against them or fragmented society and feeling that they have no option or decision about the future for their children, so they don't have anything to lose in the context of manmade disasters or armed conflict. It [DRR] is like lighting a candle in the middle of this darkness (NAfrica02).

DRR may also strategically focus on inclusion and accessibility for a less divisive identity cutting across conflict lines, such as people with health conditions or impairments:

You would have way more people from different diverse backgrounds and ethnicities coming together and forming groups, since they consider themselves as persons with disabilities first and not that particular group or community or identity. [This is a] small building block from which bigger peacebuilding and conflict prevention actions could be made (EAfrica07).

DRR programming can support the development of shared informal institutions based on shared challenges.
For example, communities in post-conflict Sierra Leone transcended tensions to form disaster management committees: 'People helped each other, and this made people understand that they should help each other and work as a team in the face of challenges' not only in relation to disasters but also more broadly (WAfrica17). In flood response in Pakistan, a local policy was enacted for Muslim-based NGOs to provide aid only to non-Muslim communities and vice versa, which laid the groundwork for social practices of mutual assistance (SAsia26). In a disaster reconstruction project to build safer shelters in two villages embroiled in violent communal conflict in the DRC, the project facilitated a forgiveness ceremony to initiate the work, and this led to 'both sides of the village beginning to work together to build shelters for each other […] They were able to sort of turn that page of the conflict from the past' (MAfrica04). After the project concluded, villagers continued to build shelters across former conflict lines and doubled the number of shelters built through self-sustained cooperative action, while also developing relationships resistant to violence.

Policy Recommendations to Shift the Needle Forward

The findings refute simple conclusions about disasters or hazards as a cause of violent conflict, instead highlighting how disaster-related activities mediate these relationships. The results demonstrate that disasters and disaster-related activities have the potential to influence, but not determine, violent conflict risk and peace potential in conflict-affected regions; disasters can magnify or ameliorate existing conflicts and shape whether subsequent conflicts are addressed violently or nonviolently. This research provides evidence that DRR is possible to varying extents in diverse contexts affected by violent conflict, though organisations tend to avoid or conduct minimal programming amidst high-intensity armed conflicts. The majority of participants in this study were actively working on peace, either through an explicit programme aim or as a necessary foundation for effective DRR.

Disasters and disaster-related activities influence both conflict risks and peace potential through diverse and intersecting pathways that may occur simultaneously and restrain or catalyse each other, potentially stimulating uneven or even unexpected outcomes. For example, the findings suggest that a disaster can increase awareness of structural violence and poor governance, and this awareness may represent the first step towards overt conflict and eventually reordering relationships in ways that sustain peace (Curle, 1971). However, durable peace is far from a certain outcome where vulnerabilities to disasters and conflicts are mutually reinforcing. The findings also suggest that DRR can contribute to conflict risk and peace potential at different institutional scales. For example, different social groups may act cooperatively to reduce their shared disaster risks where state-sponsored services are limited, which could ameliorate communal conflict while magnifying conflict risk between civil society and the state. Regimes may try to avoid these conflict risks by stymying civil society participation in delivering resources, but this can inadvertently aggravate tensions, as was demonstrated in Sudan. Disasters may create spaces for new interactions involving governments and social groups, which could lead to novel sources of conflict or cooperation.
DRR and disaster relief are often painted as politically neutral, but even seemingly innocuous activities (or the lack thereof) can influence conflict risk and peace potential. It is important to recognise that in conflict-affected contexts, humanitarian and development interventions, including DRR, become part of the conflict (Haider, 2014). Rather than striving to merely 'do no harm' (Anderson, 1999), this research provides evidence that DRR can actively encourage pathways to peace potential through activities taken before, during, and after disasters that reduce vulnerabilities, improve equitable resource distribution, encourage cooperation, and, in some cases, find opportunities for social and/or political (re)integration (see Table 2). This may involve addressing cleavages and challenging historical norms directly; in other cases, such as in Egypt, it may be advantageous to avoid contentious issues and instead focus on the future. DRR may have the greatest opportunities for advancing peace potential where programming is designed to address multiple pathways that are self-reinforcing.

The findings highlight the need for intentionality in the design and implementation of DRR in conflict-affected regions in order to reduce disaster and violent conflict risks and build peace, leading to the following policy recommendations:

1. The global DRR community must explicitly address issues of conflict and peace beyond 'do no harm' in order to influence the relationships between disasters, conflicts, and peace. DRR may inadvertently contribute to conflict, but only through careful design and implementation will it contribute to peace. International DRR mechanisms should support DRR practitioners working in places affected by violent conflict with training on integrated disaster and peace programming, and facilitate partnerships with organisations focused on peacebuilding and conflict prevention.

2. DRR should invest in and support diverse societal capacities at multiple institutional levels, keeping in mind that existing leadership may not represent all affected groups, including those that are marginalised, and alternative actors. DRR efforts should learn from the diverse experiences of affected people on the ground, including local initiatives already underway that address the context-specific linkages between disasters, conflicts, and peace. Effective DRR depends on social and political inclusion and cooperation, and, in turn, DRR should seek to catalyse the development of prosocial relationships, leadership, and institutions that can facilitate alternatives to violence.

This research relied on the experiences of DRR rather than peacebuilding professionals, though applied work on disasters, peace, and conflict is not neatly partitioned. Future research should test context- and conflict-stage-specific pathways for disaster-related activities to minimise conflict and support turning points towards peace. Disasters may be ambivalent multipliers of conflict and peace, but peace-oriented DRR activities can sway the tides of change towards peace.
2022-03-09T16:28:58.446Z
2022-03-07T00:00:00.000
{ "year": 2022, "sha1": "c25a3d1f085a51b75def93219fb8193818de79cd", "oa_license": "CCBY", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/15423166221081516", "oa_status": "HYBRID", "pdf_src": "Sage", "pdf_hash": "a80ae1eaa7ea8e5e4cf958365e69c06320c73ef6", "s2fieldsofstudy": [ "Political Science", "Sociology" ], "extfieldsofstudy": [] }
78089401
pes2o/s2orc
v3-fos-license
The Non-Linear Relationship between Electricity Consumption and Temperature in Taiwan: An Application of the STR (Smooth Transition Regression) Model

This study builds non-linear econometric models to analyze the effects of temperature on electricity consumption in Taiwan, using the smooth transition regression (STR) model and monthly time-series data from 1983 to 2012. The empirical results indicate that there is a non-linear relationship between electricity consumption and temperature in Taiwan. Furthermore, all six estimated threshold temperatures lie between 25.364°C and 27.156°C, and the average of the threshold temperatures is 26.384°C. This implies that Taiwan's electricity consumption grows non-linearly when the average temperature is higher than the threshold temperature. In addition, the estimated threshold temperature has policy implications for Taiwan's policy makers, in that it can serve as a reference for framing policies for managing electricity demand in Taiwan.

Introduction

Electricity consumption is driven by many types of human activity, such as heating, air conditioning, and lighting in both the business and residential sectors, with major contributions coming from operating equipment in industrial sectors. Whilst lighting and operating equipment might not be directly linked to climate change, heating and air conditioning are directly tied to air temperature [1]. All the climate-change-related impacts on electricity demand and supply can be readily observed from the quantifiable effects of temperature on the use of heating and air conditioning, and these effects are usually described by measurements based on the concepts of heating degree days (HDDs) and cooling degree days (CDDs). HDDs are defined as the sum of deviations of actually measured temperatures below the reference temperature (or base temperature) over a given time period; in contrast, CDDs are the sum of deviations of average temperatures above the reference temperature over a given time period. The data frequency for the given time period is usually daily, weekly, or monthly. The reference temperature is defined as the temperature level at which no additional electricity is used for heating or cooling. That is, if the air temperature is comfortable for humans, less electricity will be consumed for heating or cooling.

The reference temperature is generally taken to be 18.3°C (65°F) [2]. However, Parkpoom and Harrison [3] used 11.7°C (53°F) as the reference temperature in Thailand; Howden and Crimp [4] determined 17.5°C (63.5°F) to be the reference temperature for Sydney; Ahmed et al. [5] proposed 14.3°C (57.7°F) as the reference temperature for the State of New South Wales in Australia after their calculation; and Zachariadis and Hadjinicolaou [6] employed 18°C (64.4°F) and 22°C (71.6°F), respectively, as the reference temperatures of HDDs and CDDs for Mediterranean Europe. In sum, reference temperatures can differ across geographical regions.

Global warming could lead to increases in CDDs and decreases in HDDs, as concluded by Benestad [7], whose report indicates that climate change could trigger more energy consumption due to air conditioning in hot areas. De Cian et al.
[8] used panel data from 31 countries to investigate the relationship between energy consumption and variations in temperature. Their empirical results suggest that a higher average temperature leads to more energy consumption during hot seasons in warmer countries, but less energy is consumed during cold seasons in colder countries. Hekkenberg et al. [9] assessed the electricity demand pattern in the relatively temperate climate of the Netherlands. They used daily data over the period from 1970 to 2007 to investigate possible trends in the temperature dependence of electricity demand. Although the Netherlands has its minimum electricity demand in the summer months, their empirical results showed significant increases in the temperature dependence of electricity demand in the months of May, June, September, and October and during the summer holidays. That is, their alarming result sends a signal to raise expectations of additional peaks of electricity consumption in summer under the influence of climate change.

Moral-Carcedo and Vicéns-Otero [10] found that the relationship between electricity demand and temperature is nonlinear, and that the nonlinearity is reflected in the threshold temperatures. They employed the threshold regression (TR) model and the logistic smooth transition regression (LSTR) model to build the relationship between electricity demand and temperature in Spain using daily data from 1995 to 2003. In their research, they created a working-day-effect variable to capture the variations in electricity demand caused by activities in the industrial and commercial sectors as well as by the behaviors of households during holidays and on working days. Hence, they could eliminate those effects from electricity demand and then focus more on the pure effects of temperature on electricity demand. Their results showed that the threshold temperatures of the TR model are 15.5°C (59.9°F) and 18.4°C (65.1°F), and the threshold temperature of the STR model is 18°C (64.4°F).

Bessec and Fouquau [11] investigated the relationship between electricity demand and temperature in 15 European countries over the period from 1985 to 2000 using monthly data. They applied a panel smooth transition regression (PSTR) model to describe the relationship between electricity demand and temperature in those countries and to find their threshold temperatures. In addition, in order to estimate the pure effects of temperature on electricity demand, they also followed Moral-Carcedo and Vicéns-Otero [10] and used dummy variables representing summer holidays and time trends to filter out other sources of electricity consumption. Their results showed that the nonlinear pattern was more pronounced among the warmer of the 15 European countries.

Lee and Chiu [12] used the PSTR model, taking into account potential endogeneity biases, to examine the relationship between electricity demand and temperature in 24 OECD countries over the period from 1978 to 2004. They provided evidence of a U-shaped relationship between electricity consumption and temperature in the 24 OECD countries, with a threshold temperature of approximately 11.7°C (53°F).
In sum, from the literature mentioned above we can highlight two main findings. First, the relationship between electricity consumption and temperature has shown nonlinearity in past cases, so when establishing an econometric model to estimate the effects of temperature on electricity consumption in Taiwan, we should allow for a possible nonlinear relationship between electricity consumption and temperature. Secondly, the threshold temperature has policy implications, such as guidance for the management of electricity demand and supply, and strategies for mitigating the impact of climate change on electricity. As an example of policy implications for electricity management, the Taiwanese government has, since 2011, pursued an energy-saving policy that asks public-sector offices to operate air conditioners only if the air temperature is higher than 26°C (78.8°F). In addition, once the real threshold temperature is found, it can be applied to the computation of CDD data for Taiwan to describe the patterns between temperature and electricity consumption both in the past and in the future.

Data Source and Descriptive Statistics

In this study, we use monthly time-series data covering the period from 1983 to 2012. The original data on electricity consumption per capita (kWh) are collected from MOEABOE [13], and the gridded dataset of historical climate information from TCCIP [14] is used to compute the monthly average temperature (°C) over the period from 1983 to 2012. The variation of temperature in summer is between 0.556°C and 0.840°C, which indicates that we observe only minor variations of summer temperature over the past three decades in Taiwan. Figure 1 shows the average electricity consumption and the average temperature, respectively. We can see that the month with the largest electricity consumption per capita in a year is August; however, the month with the highest temperature in a year is July, meaning that non-temperature impacts on electricity consumption should be considered. Therefore, in Section 2.2, we filter out the effects of non-temperature factors on electricity consumption.

Filtered Electricity Consumption

In order to examine the pure effects of temperature on electricity consumption, we first remove the effects of other factors on electricity consumption [10] [11]. In particular, Bessec and Fouquau [11] indicated that three major components must be considered when filtering out the other effects on electricity consumption. The first component is the demographic trend, the second is the technological trend, and the third is the monthly seasonality related to activity. However, our electricity consumption data are divided by population, so the effects of the demographic trend have already been removed. Then, following Moral-Carcedo and Vicéns-Otero [10] as well as Bessec and Fouquau [11], the two remaining components are filtered out from electricity consumption by employing Equation (1),

$$EC_t = \beta_0 + \beta_1 t + \beta_2 D_t + \varepsilon_t, \tag{1}$$

where $EC_t$ represents electricity consumption at time $t$; $t$ denotes the time trend; and $D_t$ is a dummy variable equal to one in July and August and zero in the other months. The dummy variable is used to remove the effects of summer holidays on electricity consumption [10].
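To make this filtering step concrete, the following is a minimal sketch of the regression in Equation (1) in Python, assuming the consumption series sits in a pandas object; the data generated here are synthetic and every variable name is illustrative rather than taken from the paper's actual series.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative monthly electricity consumption per capita (kWh), 1983-2012.
idx = pd.date_range("1983-01", "2012-12", freq="MS")
rng = np.random.default_rng(0)
ec = pd.Series(100 + 0.2 * np.arange(len(idx)) + rng.normal(0, 5, len(idx)),
               index=idx)

# Equation (1): EC_t = b0 + b1*t + b2*D_t + e_t,
# where t is a time trend and D_t = 1 in July and August (summer holidays).
X = pd.DataFrame({
    "trend": np.arange(len(idx)),
    "D": idx.month.isin([7, 8]).astype(float),
}, index=idx)
X = sm.add_constant(X)

fit = sm.OLS(ec, X).fit()

# The filtered consumption FEC_t is the residual series of Equation (1).
fec = fit.resid
print(fit.params)   # estimated trend and summer-holiday effects
print(fec.head())
```

The residual series `fec` then plays the role of $FEC_t$ in the scatter plot and STR estimation that follow.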
$FEC_t$ stands for the filtered electricity consumption, and it is the estimated residual series of Equation (1). Figure 2 is the scatter plot of filtered electricity consumption against temperature over the period from 1983 to 2012. We also report a regression line between filtered electricity consumption and temperature with a polynomial of order three. However, we cannot see a U-shaped relationship between electricity consumption and temperature in Figure 2. In Taiwan, people usually use more gas and oil products for cooking and heating in the cold seasons (from October to February) and air conditioners for cooling in summer, which explains the absence of a U-shaped relationship between electricity consumption and temperature in Taiwan. However, although Figure 2 suggests a positive linear relationship between electricity consumption and temperature, we still believe that there could be a threshold temperature in this relationship. We therefore employ smooth transition regression models [15] to investigate whether a nonlinear relationship exists between electricity consumption and temperature in Taiwan.

Methodology and Empirical Model

This study employs the STR model to analyze the pure effects of temperature on electricity consumption in Taiwan. However, before estimating the STR model, we first have to test whether the time-series data are stationary; that is, we have to conduct a unit root test for each series. Hence, we introduce the unit root test methodology used in this study in the following section.

Unit Root Test

The stationarity of time-series data has usually been examined in the past literature with the Augmented Dickey-Fuller (ADF) test and the Phillips-Perron (P.P.) test, provided by Dickey and Fuller [16] and Phillips and Perron [17], respectively. However, neither the ADF test nor the P.P. test considers the possibility of a structural break in the time-series data. Therefore, to address this problem, we employ the unit root test with structural breaks provided by Saikkonen and Lütkepohl [18] and Lanne et al. [19]. If there is a shift in the data generating process (DGP) of the level data, it should be taken into account in the unit root testing. A shift function $f_t(\delta)'\varphi$ and a deterministic trend are included in the DGP of the time series $I_t$, as in Equation (2),

$$I_t = \mu_0 + \mu_1 t + f_t(\delta)'\varphi + \varepsilon_t, \tag{2}$$

where $\delta$ and $\varphi$ are unknown parameters of the shift function, with $\delta$ a scalar parameter between 0 and 1, and $\varepsilon_t$ is the error term. Once a break point is fixed, Saikkonen and Lütkepohl [18] and Lanne et al. [19] suggested that the unit root test of Equation (2) could be estimated by a generalized least squares (GLS) procedure under the null hypothesis of a unit root. In addition, Lanne et al. [19] provided the critical values for this unit root test.

Smooth Transition Regression (STR) Model

The STR model is widely used to describe nonlinear relationships in time-series data. The univariate form of the STR model was proposed by Chan and Tong [20] and subsequently developed by Luukkonen et al. [21] and Teräsvirta [15] [22]. Areosa et al. [23] further showed how to estimate STR models with endogenous variables. Hence, according to Teräsvirta [15], the STR model can be specified as Equation (3).
$$y_t = \pi' z_t + \theta' z_t \, G(\gamma, c; s_t) + u_t, \tag{3}$$

where $t$ represents the time dimension; $y_t$ is the dependent variable; $G(\gamma, c; s_t)$ represents the transition function with transition variable $s_t$; and $\pi$ and $\theta$ represent the linear part and the nonlinear part of the model, respectively. $\gamma$ is a slope parameter that governs the speed of transition from one regime to the other, and $c$ is the threshold of the transition variable. In the equation, $z_t = (1, y_{t-1}, \ldots, y_{t-p}, x_t, \ldots, x_{t-q})'$, where $y_{t-p}$ indicates the autoregressive terms up to the optimal lag length $p$ of the dependent variable, and $x_{t-q}$ denotes the independent variable with lag length $q$. If $\gamma \to \infty$, the STR model reduces to the threshold regression (TR) model, meaning that if the transition variable is larger than $c$, the transition function equals one, whereas if the transition variable is smaller than $c$, the transition function equals zero. In addition, if $\gamma \to 0$, the STR model reduces to a linear model.

Generally, the transition function takes one of two functional forms, the logistic function and the exponential function:

$$G(\gamma, c; s_t) = \left[1 + \exp\{-\gamma(s_t - c)\}\right]^{-1}, \tag{4}$$

$$G(\gamma, c; s_t) = 1 - \exp\{-\gamma(s_t - c)^2\}. \tag{5}$$

The first step of STR estimation is to examine whether there is a nonlinear relationship between the dependent variable and the transition variable. If a nonlinear relationship exists, the second step is to identify the form of the regime switch. Both steps can be carried out via coefficient tests on the following auxiliary regression, shown in Equation (6):

$$y_t = \beta_0' z_t + \beta_1' z_t s_t + \beta_2' z_t s_t^2 + \beta_3' z_t s_t^3 + e_t. \tag{6}$$

The null hypothesis of linearity between the dependent and independent variables, against the alternative of nonlinearity, is examined by the coefficient test

$$H_1: \beta_1 = \beta_2 = \beta_3 = 0,$$

whose test statistic follows an F distribution; the F version of the test is suggested because of its better small-sample properties [15] [21] [24]. The three coefficient tests extending this null hypothesis are:

$$H_2: \beta_3 = 0; \qquad H_3: \beta_2 = 0 \mid \beta_3 = 0; \qquad H_4: \beta_1 = 0 \mid \beta_2 = \beta_3 = 0.$$

If the testing result rejects the null hypothesis $H_1$, a nonlinear model should be selected to describe the relationship between the variables. Subsequently, an appropriate nonlinear model can be selected via these three coefficient tests: the strongest rejection of $H_2$ or $H_4$ points to the logistic form (LSTR), whereas the strongest rejection of $H_3$ points to the exponential form. A grid search method is employed to find the parameters $\gamma$ and $c$, and the model with the minimum sum of squared residuals (SSR) from the grid search provides initial values of $\gamma$ and $c$ for the estimation of the STR model.

Empirical Model

Therefore, we establish a smooth transition regression model to describe the nonlinear relationship between electricity consumption and temperature. Regarding the assumption of exogeneity of temperature with respect to electricity consumption, Chen et al. [25] examined Granger causality between energy consumption and CO2 emissions using data from 188 countries, and they observed only unidirectional causality from energy consumption to CO2 emissions. Chang [26] also suggested unidirectional causality from electricity consumption to CO2 emissions. Therefore, we can say that increasing electricity consumption directly raises CO2 emissions, indirectly leads to a higher average temperature through global warming, and in turn electricity consumption increases again directly due to the higher average temperature.
In short, electricity consumption can indirectly affect temperature through climate change in the long term, and in turn temperature can directly affect electricity consumption [27]. That is, we can estimate the STR models under the assumption of exogeneity. Hence, our empirical model is written as Equation (12),

$$FEC_t = \pi' z_t + \theta' z_t \, G(\gamma, c; TEMP_t) + u_t, \tag{12}$$

where $t$ denotes the time dimension; $FEC_t$ represents filtered electricity consumption at time $t$; $TEMP_t$ denotes temperature at time $t$; $u_t$ stands for the residuals, with a mean of zero and constant variance; and $G(\gamma, c; TEMP_t)$ is the transition function with transition variable $TEMP_t$. As mentioned before, the parameters $\gamma$, $c$, $\pi_2$, and $\theta_2$ are the key parameters in the following estimation.

Unit Root Testing

Before performing the unit root testing, we divide the sample into six groups, with a sample period of five years for each group. Table 3 reports the results of the unit root testing for each data series. We can see from the ADF and P.P. tests that the series $FEC_t$ is stationary in each sample period. The ADF and P.P. test results for $TEMP_t$ are similar to those for $FEC_t$, rejecting the null hypothesis of a unit root, which means the series $TEMP_t$ is also stationary in each sample period. When we consider the DGP of the series with possible structural breaks, impulse dummies and shift dummies are used to detect such breaks. The unit root tests with structural breaks show that the null hypothesis of a unit root is rejected at the 1% significance level for both $FEC_t$ and $TEMP_t$ when the DGP includes impulse dummy variables, meaning all series are stationary in each sample period. However, when the DGP includes shift dummy variables for structural breaks, some series reject the null hypothesis while others do not. In sum, we conclude from our unit root testing that both $FEC_t$ and $TEMP_t$ are stationary in levels; that is, we estimate the empirical model with level data. (Notes: ***, **, and * refer to significance at the 1%, 5%, and 10% levels, respectively; values in parentheses are standard errors.)

Estimated Result of the STR Model

Subsequently, we establish the LSTR model for each of the six sample periods. Table 5 presents the estimated results of the STR model for each sample period; the appropriate lag length of the dependent variable is chosen by the minimum AIC (Akaike Information Criterion) with a maximum lag length of 10. We report only the estimated coefficients of the key parameters in Table 5, to focus on the threshold temperature of electricity consumption. The estimated results for the linear part of all models indicate that positively significant relationships exist between electricity consumption and temperature in Taiwan. Furthermore, the largest coefficient of $\pi_2$ is estimated for the period from 2003 to 2007 (Model 5), meaning that during those five years a 1°C increase in temperature was associated with the largest additional electricity consumption of any period. In addition, for the estimated results of the nonlinear part, Model 2, Model 3, and Model 6 show that positively significant relationships also exist between electricity consumption and temperature. This implies that as the transition variable (temperature) increases, the relationship between electricity consumption and temperature becomes even more positive.
Turning to the slope parameter $\gamma$, the estimated values range from 4.088 to 39.400, and four of the six models provide a significant value of $\gamma$. In addition, the estimated values of the parameter $c$ range from 25.530°C to 27.156°C, which implies that electricity consumption transforms continuously according to the logistic function when the temperature level reaches an inflexion point. For instance, according to the result of Model 6, when the temperature reaches 26.884°C, the relationship between electricity consumption and temperature becomes more sensitive, and increases in temperature cause nonlinear increases in electricity consumption.

At this point, we can make three remarks. Firstly, the estimated values of the threshold parameter $c$ (25.530°C to 27.156°C) differ from the value of 18.3°C generally used as the reference temperature for cooling degree days (CDDs) in the past literature. Secondly, the average of all threshold values is 26.384°C, and this value is not only close to the official reference temperature of CDDs in Taiwan but also similar to the threshold temperatures of CDDs used by Holtedahl and Joutz [28] (74°F and 80°F). Thirdly, the threshold temperature is not a fixed value; it can fluctuate across time periods. Moreover, we can reasonably infer that the El Niño Southern Oscillation (ENSO) is one reason for the varying threshold temperatures across time periods. We believe that El Niño events lead to warmer winters and hotter summers; that is, there are more hot days in a year when an El Niño event occurs. Needless to say, more hot days can change how people use electricity. For instance, as an El Niño event makes people feel hot in a warmer winter, people will use air conditioners to create a comfortable indoor temperature, which lowers the threshold temperature. On the contrary, a La Niña event makes summers and winters colder, so people will turn on air conditioners only at a higher temperature; this could lead to a higher threshold temperature.

To take this into consideration, we define an ENSO score based on the Oceanic Niño Index (ONI) data sourced from CPC [29]. According to the definition of ENSO events, we assign scores from 4 to −4 to distinguish a very strong El Niño year (a score of 4), a strong El Niño year (3), a moderate El Niño year (2), a weak El Niño year (1), a neutral year (0), a weak La Niña year (−1), a moderate La Niña year (−2), and a strong La Niña year (−3). The relationship between ENSO scores and threshold temperatures can be seen in Figure 3.
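As a rough numerical illustration of how $\gamma$ and $c$ shape the estimated regimes, the sketch below evaluates the logistic transition function at the Model 6 threshold (c = 26.884°C) for the two ends of the reported $\gamma$ range, and then demonstrates the grid-search idea from the methodology on synthetic data. The temperature series, the choice of $z_t$, and the grid bounds are illustrative assumptions, not the paper's actual data or settings.

```python
import numpy as np

def G(s, gamma, c):
    """Logistic transition function of the LSTR model."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

# --- Effect of the slope parameter gamma at the Model 6 threshold ---
temp = np.linspace(20.0, 33.0, 14)
c6 = 26.884                               # Model 6 threshold (2008-2012)
for g in (4.088, 39.400):                 # ends of the reported gamma range
    print(f"gamma={g:6.3f}:", np.round(G(temp, g, c6), 2))
# A small gamma gives a gradual regime change; a large gamma makes G jump
# from ~0 to ~1 near c, so the LSTR model approaches a TR model.

# --- Grid search for (gamma, c): conditional on them, the STR is linear ---
def str_ssr(y, Z, s, gamma, c):
    """SSR of y_t = pi'z_t + theta'z_t*G + u_t, fitted by least squares."""
    X = np.hstack([Z, Z * G(s, gamma, c)[:, None]])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

rng = np.random.default_rng(1)
s = rng.uniform(15, 32, 300)              # illustrative temperature series
Z = np.column_stack([np.ones_like(s), s])  # illustrative z_t = (1, TEMP_t)'
y = 1.0 + 0.05 * s + (0.3 * s) * G(s, 20, 26.4) + rng.normal(0, 0.2, 300)

grid_g = np.linspace(1, 40, 40)
grid_c = np.linspace(24, 29, 51)
best = min((str_ssr(y, Z, s, g, c), g, c) for g in grid_g for c in grid_c)
print("grid-search estimates: gamma=%.2f, c=%.2f" % (best[1], best[2]))
```

The minimizing pair would then serve as starting values for the full nonlinear estimation, as the methodology describes.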
Model Diagnostics

The quality of the estimated nonlinear models is examined against misspecification, as would be done for a linear model. Specification tests such as the serial correlation test [30], the ARCH-LM test [31], a normality test, a parameter constancy test, and the no-remaining-nonlinearity test are employed for this purpose.

Conclusions

This study discusses the relationship between electricity consumption and temperature in Taiwan for the period from 1983 to 2012. In order to extract more information from our data, we divide the full sample into six groups with a sample period of five years for each group before conducting the empirical estimation. Furthermore, we employ the STR model to estimate the nonlinear relationship between electricity consumption and temperature for each sample period. In addition, we also find threshold temperatures in the nonlinear relationship between electricity consumption and temperature for each sample period.

The empirical results show that there are positively significant effects of temperature on electricity consumption in Taiwan in each sample period. When we focus only on the estimated results for the linear part of the model, we find that the purely linear effects of temperature on electricity consumption keep rising over the whole sample period. However, some estimated results for the nonlinear part of the model are positively significant while others are not. That is, we cannot conclude that the total pure effects of temperature on electricity consumption also keep magnifying over the whole sample period.

On the other hand, we identify the threshold temperature estimated by the STR estimation for each sample period. The threshold temperatures are 26.892°C (1983-1987), 25.530°C (1988-1992), 25.364°C (1993-1997), 27.156°C (1998-2002), 26.477°C (2003-2007), and 26.884°C (2008-2012); in sum, the average threshold temperature over the period from 1983 to 2012 is 26.384°C. Based on our empirical results, we can say that the pure effects of temperature on electricity become much more sensitive once the temperature reaches the threshold temperature. In addition, Taiwan has a subtropical climate with higher humidity and a higher yearly average temperature (about 22°C), meaning that Taiwan is warmer than countries with temperate and frigid climates. Hence, the threshold temperatures estimated in this study, between 25°C and 27°C, are reasonably comfortable for people living in a subtropical climate such as Taiwan's. Furthermore, the estimated threshold temperature carries the same meaning as the reference temperature of CDDs. That is, if the air temperature is higher than the threshold temperature, electricity consumption will increase. For instance, policy makers could use the threshold temperature as the reference temperature and propose a policy asking people to reduce electricity consumption when the air temperature is higher than the reference temperature, in order to save electricity and promote efficiency in electricity use.

In addition, the increasing use of air conditioning, driven by rising temperatures and the desire for a comfortable living environment, appears to be a serious problem for the power supply in Taiwan under the influence of global warming. Santamouris et al. [32] concluded that a 1% increase in temperature, given a threshold temperature of 24°C in warm countries, would lead to a 3.5% increase in peak electricity demand. That is, if the ambient temperature is higher than the threshold temperature, the risk of power shortage rises sharply. Taiwan is now aiming for a non-nuclear homeland and is actively developing renewable energy (mainly wind and solar power) to cover the expected losses of nuclear energy. However, wind and solar power are not stable sources of power supply given the current limitations of power storage technology in Taiwan. Therefore, the threshold temperature can send a warning signal that not only promotes the idea of energy saving but also suggests more power system expansion planning in preparation for greater operating reserves in the future. Thus, the estimated threshold temperature has policy implications for policy makers, who can use the threshold temperature in this study as a reference for making electricity management policies in Taiwan.
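To illustrate the policy use suggested above, namely taking the estimated threshold as the base temperature for cooling degree days, here is a minimal sketch. The daily temperatures are synthetic; only the two base values (the conventional 18.3°C and the paper's average threshold of 26.384°C) come from the text.

```python
import numpy as np
import pandas as pd

def cooling_degree_days(daily_mean_temp, base):
    """CDD = sum of positive deviations of daily mean temperature above base."""
    return np.maximum(daily_mean_temp - base, 0.0).sum()

# Illustrative daily mean temperatures for one summer month in Taiwan.
days = pd.date_range("2012-07-01", "2012-07-31", freq="D")
rng = np.random.default_rng(2)
temps = pd.Series(28.0 + rng.normal(0, 1.5, len(days)), index=days)

for base in (18.3, 26.384):  # conventional base vs. estimated average threshold
    print(f"base {base:6.3f} C -> CDD = {cooling_degree_days(temps, base):6.1f}")
# A higher, locally estimated base yields far fewer CDDs, changing how
# temperature-driven electricity demand would be measured and forecast.
```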
Figure 3. Historical ENSO score and threshold temperature.

Figure 4. Logistic transition functions of the six models, all with temperature as the transition variable (Model 1 to Model 6 shown from left to right and top to bottom). Model 1 has the steepest slope of the logistic transition function among the six models, meaning the speed of its transition between the two regimes was the fastest, during the period from 1983 to 1987. By contrast, Model 5 has the gentlest slope, showing that the speed of its transition between the two regimes was the slowest, during the period from 2003 to 2007.

Table 1. Descriptive statistics on temperature.

Table 2. The estimated result of filtered electricity consumption.

Table 3. Results of unit root test.

Table 4. Results of the appropriate nonlinear model test. F1, F2, F3, and F4 denote the F statistics for H1, H2, H3, and H4, respectively. The p-values of F1 reject the null hypothesis of the linear model at the 1% significance level for every sample period, meaning that the nonlinear relationship between electricity consumption and temperature should be considered in the empirical model for each period. Following the model-selection rules described above, F2 or F4 has the strongest significant p-value among F2, F3, and F4 in every period, indicating that the appropriate model for each sample period is the LSTR model.

Table 5. The estimated results of STR models.
Table 6. Testing for the serial autocorrelation of residuals (F-values).

The null hypothesis of the parameter constancy test is constant parameters, against the alternative hypothesis of smooth continuous changes in the parameters. The testing results cannot reject the null hypothesis, indicating that the parameters of the six models are constant in both regimes. We also check whether there is remaining nonlinearity after all the STR models have been fitted; the null hypothesis of the no-remaining-nonlinearity test is that there is no additive nonlinearity in the STR model. The testing results are all statistically non-significant, which means there is no additive nonlinearity in any of the STR models.

Table 7. Results of specification tests.
2019-03-11T16:12:13.046Z
2018-04-08T00:00:00.000
{ "year": 2018, "sha1": "5616fd2ad2c134eaa02d8d62603c6d41a979d6d5", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=83621", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "5616fd2ad2c134eaa02d8d62603c6d41a979d6d5", "s2fieldsofstudy": [ "Environmental Science", "Economics" ], "extfieldsofstudy": [ "Economics" ] }
221378131
pes2o/s2orc
v3-fos-license
Suturing the Anterior Cruciate Ligament Using a No. 16 Intravenous Catheter Needle in Avulsion Anterior Cruciate Ligament Injury

Avulsion anterior cruciate ligament injuries are more common in pediatric patients. There are several methods of fixation available for these injuries (tibial intercondylar eminence fractures), such as the pullout suture technique, screw fixation, and suture anchor fixation. Currently, a pullout technique is widely used for fixation. We propose a pullout technique using a modified No. 16 intravenous catheter needle to suture the anterior cruciate ligament fibers instead of a suture hook or suture passer. We also use a single anterior tibial tunnel for this arthroscopic pullout fixation technique, to reduce the risk of physeal injury that multiple tibial tunnels pose in pediatric patients.

An anterior cruciate ligament (ACL) tear is one of the most common sports injuries. This injury usually results from twisting the knee during a sports activity. In adults, the injury site is usually a midsubstance ACL tear or the femoral footprint, because bone is stronger than ligaments. This is in contrast to very young patients, in whom the skeletally immature bone is weaker than the ligaments, so avulsion ACL injuries are more common in pediatric patients. The key to treating this injury is to anatomically fix the avulsion fragment to the fracture base. There are several methods of fixation available for these injuries, such as the pullout suture technique, [1-3] screw fixation, and suture anchor fixation. [4]

For the pullout technique, there are 2 important steps to achieve stable fixation. First, the surgeon must suture the ACL bundle near the base of the bony fragment using a suture hook or a suture passer. This device is quite expensive and not available in many countries for patients with limited finances. The second step is to create more than one tibial tunnel, which most surgeons use for pullout fixation. [1-3] In pediatric patients, more tunnels create a greater risk of physeal injury. With regard to these problems in pediatric patients, we experimented with a method using a modified No. 16 intravenous (IV) catheter needle (Nipro, Bridgewater, NJ) with a PROLENE No. 1 suture (Ethicon, Raynham, MA) to suture the ACL fibers instead of a suture hook or suture passer, and, using this method, we also required only one anterior tibial tunnel for this arthroscopic pullout fixation technique.

Surgical Technique

The patient was prepared and sterilely draped in the supine position. A tourniquet was applied and inflated to 250 mm Hg. The knee was set at 90° of flexion. Knee arthroscopy was then performed through anterolateral (AL), accessory AL, and anteromedial (AM) portals. The arthroscopic sheath with a camera was inserted into the knee joint via the AL portal. An arthroscopic diagnosis was performed, and the result showed an avulsion injury of the ACL from the tibial bone (Fig 1). After the arthroscopic diagnosis, the camera was moved to the accessory AL portal. A No. 16 IV catheter needle was bent into a 45° crescent curve using 2 needle holders (Fig 2 A and B) and loaded with a PROLENE No. 1 suture. The modified No. 16 IV catheter needle was introduced through the AM portal and used to suture the ACL bundle above the bony fragment at the anterior half of the ACL, from the medial side (Fig 3A) to the lateral side (Fig 3B). A suture retriever was used to grasp the PROLENE No. 1 suture through the AL portal. The modified No.
16 IV catheter needle was then used to stitch the ACL bundle from the medial side to the lateral side at the posterior half of the ACL. The two PROLENE No. 1 sutures were replaced with ETHIBOND No. 5 (Ethicon, Somerville, NJ) using a shuttle relay technique (Fig 4 A and B). The 4 limbs of the ETHIBOND No. 5 were grasped and brought out through the AM portal using the suture passer. A shaver was inserted through the AM portal to remove the fibrous tissue and prepare the base of the avulsion fragment and the fracture base for fixation (Fig 5). A guide pin was inserted from the AM aspect of the tibia to the anterior border of the fracture base using an ACL aiming device. We used the elbow type of aiming device because the key is to use the tip of the aiming device to compress the avulsion fragment during creation of the tibial tunnel (Fig 6). The ENDO-BUTTON drill bit was used to drill a hole to create the tibial tunnel. A loop suture of ETHIBOND No. 2 was inserted through the tibial tunnel using the guide pin, and the looped ETHIBOND No. 2 suture was then passed through the AM portal using a suture passer. The 4 limbs of ETHIBOND No. 5 were passed through the tibial tunnel by a shuttle relay technique, pulled down to reduce the avulsion fracture, and tightened over a 3-hole small plate at the AM surface of the tibia. Knee stability was good on examination with the Lachman test and anterior drawer test. Postoperative radiographs (Fig 7 A and B) showed good reduction of the avulsion fracture compared with the preoperative radiographs (Fig 7 C and D). The entire surgical technique is shown in Video 1, with audio narration. Tables 1 and 2 present the key points, advantages and disadvantages, pitfalls, and some tips for using this technique.

Postoperative Management

The patient's knee was immobilized in the full extension position with a knee brace for 3 weeks, during which the patient was allowed to walk with partial weight-bearing on the operated leg using axillary crutches. At 3 weeks, the patient began range-of-motion exercises and was encouraged to slowly increase the amount of weight borne on the leg.

Discussion

The principle of treating an avulsion ACL injury is to fix the bony fragment to the fracture base. In contrast, the purpose of ACL tear treatment is to reconstruct the ACL using an autograft or allograft. In the past, the gold standard treatment for an avulsion ACL injury was open reduction and internal fixation. There were many surgical approaches to fix the bony fragment, such as screw, suture anchor, and pullout fixation. Nowadays, arthroscopic surgery is widely used rather than open surgery, but the methods of fixation are the same. The benefits of arthroscopic fixation are that it is less invasive and the patient normally has a faster recovery. Screw fixation is a good technique for securing and fixing the bony fragment, but it is most suitable for large fragments. The suture anchor technique provides good stability for the avulsion fragment by compressing the bony fragment against the fracture base and can achieve good clinical outcomes. There are 2 places in which the suture anchor can be applied: at the edge of the fracture site or at the central bed of the fracture. With the pullout suture technique, there are 2 important steps for proper fixation of the avulsion fragment. First, the ACL fibers must be pierced near the base of the bony fragment, a step for which most surgeons use a suture hook or suture passer.
These devices are expensive and can be used only once or twice. For this modified technique, we experimented with using a modified No. 16 IV catheter needle as the suture hook and suture passer. The surgeon can adjust the needle to create the desired curve, i.e., a direct curve, a right 45° curve, or a left 45° curve, using a needle holder. This device is inexpensive and available in all hospitals. When using this device, surgeons should handle it gently and avoid excessive manipulation, because there is a risk of breaking the tip of the catheter.

The second step is to create the tibial tunnel. The surgeon can create 1, 2, 3, or 4 tunnels. [5,6] More than 1 tunnel can provide good security of fixation. We created 1 anterior tunnel to pull the suture through. Normally, the pitfall of this type of pullout technique is that an anterior gap remains after fixation. The sutures that are passed through the anterior tunnel compress the avulsion fragment. The key is to preserve the posterior fibrous tissue at the posterior border of the fragment as a posterior hinge. Creating only 1 tibial tunnel is also a benefit in patients with an open physis, as the more tunnels there are, the greater the chance of physeal injury.

To achieve a good clinical outcome, there are some key points to note when using this surgical technique. In subacute and chronic cases, the surgeon should be careful to remove all of the fibrous tissue between the avulsion fragment and the fracture base. The surgeon should prepare the bone between the avulsion fragment and the fracture base before fixation. The posterior soft tissue at the posterior border of the avulsion fragment and fracture base should be left intact and used as a hinge after securing the fixation. The surgeon should test knee stability after the fixation. To avoid physeal injury, the surgeon should create the tibial tunnel above the physis or create more than one tunnel as the situation indicates. The elbow-type ACL aiming device should be used to create the tibial tunnel, because the tip of the aiming device can also be used to hold the avulsion fragment.

Pitfalls

To prevent an anterior gap remaining at the fracture site after the procedure, the avulsion fragment should be fixed with the leg in the extension position. There is a risk of the catheter tip breaking if the surgeon uses it with excessive manipulation.
2020-08-13T10:08:26.503Z
2020-08-01T00:00:00.000
{ "year": 2020, "sha1": "b6b035d41f9437aedbac4ab899de9098bc7a39c9", "oa_license": "CCBYNCND", "oa_url": "http://www.arthroscopytechniques.org/article/S2212628720301079/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ed7dd4aa28d6bac29ea7a5649933b7b7efc94fa9", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
231392021
pes2o/s2orc
v3-fos-license
Post-traumatic stress disorder and its associated factors among people who experienced traumatic events in east African countries, 2020: a protocol for systematic review and meta-analysis

Background: Post-traumatic stress disorder (PTSD) is the most commonly reported mental health consequence following disasters and traumatic events, either natural or man-made. No pooled prevalence or pooled estimates of associated factors have yet been reported. Therefore, this study aims to determine the pooled prevalence of PTSD and estimate the pooled effect of associated factors.

Methods: English-language published articles will be retrieved using the following databases: PubMed/Medline, Africa-wides, Science Direct, Cochrane Library, Global Health, Google Scholar, EMBASE, and psycINFO. Research reports will be searched from October 10, 2020 to November 10, 2020. The quality of the research reports will be assessed using the Newcastle-Ottawa Scale. Relevant information from the retrieved research reports will be extracted into a Microsoft Excel format. After extraction, the data will be imported into STATA version 14.0 for analysis. An appropriate guideline for reporting a systematic review and meta-analysis will be used, i.e. the Preferred Reporting Items for Systematic reviews and Meta-Analyses. A random-effects meta-analysis model will be used to estimate the DerSimonian and Laird pooled prevalence of PTSD and its associated factors.

Discussion: This study aims to determine the pooled prevalence of PTSD and estimate the pooled effect of associated factors. Several studies have reported an increasing magnitude of PTSD and its determinants in different populations. This might be due to reasons such as the little attention given to the issue. Therefore, this study will try to fill this gap by providing new evidence-based results to attract policymakers' attention.

Background

According to the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5), post-traumatic stress disorder (PTSD) is diagnosed in people who have experienced traumatic events in their day-to-day activities, directly or indirectly. Traumatic events include exposure to actual or threatened death, serious injury, being diagnosed with a life-threatening disease, torture, the sudden unexpected death of a loved one, and military combat or sexual violence [1]. PTSD manifestations are more common among people who have experienced traumatic events than among their counterparts [2]. PTSD is the most commonly reported mental health consequence of disasters and traumatic events, whether natural or man-made [3]. The burden of untreated PTSD is enormous, since it causes prolonged morbidity, impairment in day-to-day activities, and poor quality of life in all dimensions, including health, productivity, and social interaction [1,4,5]. It affects all population groups who have experienced stressful life events, regardless of individual characteristics including gender, age, and race [6,7]. PTSD contributes a substantial percentage of the burden of disability in both the developed and the developing world [8,9]. According to the recently conducted World Mental Health Survey, PTSD was among the most frequently occurring and debilitating psychiatric disorders, reported to account for 54.8% and 41.2% of disability in developed and developing countries, respectively [10]. Globally, around 8 million people develop PTSD in a single year [11].
Stress-related disorders, including PTSD, were projected to be the second leading cause of disability by the year 2020 in a World Health Organization survey estimating the burden of disease [12]. PTSD has been reported to account for about 0.4% of total years lived with disability, and this has been estimated to increase to 0.6% globally [13]. The global economic burden of stress-related mental illness is expected to rise in the coming decade [12].

Several studies have been conducted worldwide to report the prevalence of PTSD and its determinants among people exposed to traumatic life events. The lifetime prevalence of PTSD in the general population of the United States of America (USA) was shown to be 8% [14]. A study from Iran reported the prevalence of PTSD among commercial motor vehicle drivers to be 19.2% [15]. Another study, from Korea, reported a one-year prevalence of PTSD among subway drivers of 5.6% [16]. A study from Israel among people exposed to terrorism showed a PTSD prevalence of 9.4% [17]. In a large epidemiological survey conducted between 1997 and 1999 among survivors of war or mass violence in low-income countries, the prevalence of assessed PTSD was 37.4% in Algeria, 28.4% in Cambodia, 15.8% in Ethiopia, and 17.8% in Gaza. In this survey, conflict-related trauma was a risk factor for PTSD present in all four samples. Torture was abundantly reported in all samples except Cambodia. Psychiatric history and current illness were risk factors in Cambodia and Ethiopia. Poor camp quality was associated with PTSD in Algeria and Gaza. Daily hassles were associated with PTSD in Algeria. Youth domestic stress, death or separations in the family, and alcohol abuse in parents were associated with PTSD in Cambodia [18].

A number of studies conducted in Ethiopia have demonstrated the variety of traumas experienced and their associated factors. A cross-sectional study conducted in Addis Ababa, Ethiopia reported a PTSD prevalence of 22.8% among survivors of road traffic accidents. In this study, factors such as being female, having poor social support, duration since the accident (1-3 months), and having depression were reported as significantly associated with PTSD [19]. In another cross-sectional study, conducted in southwest Ethiopia, the prevalence of PTSD was reported to be 12.6%. Factors such as a history of a near-miss road traffic crash, depression, and high cannabis use were reported as having a significant association with PTSD [20]. Another study, conducted among landslide survivors in Ethiopia, showed a PTSD prevalence of 37.3%. In this study, factors including female sex, divorce, sustained physical injury, a history of mental illness, a family history of mental illness, poor social support, and high perceived stress were reported as significantly associated with PTSD [21].

PTSD has been investigated in different studies conducted in east African countries. Although overall there is a high prevalence of PTSD and mental health co-morbidities following traumatic events, the reported prevalence varies greatly between studies, as do the factors associated with PTSD. Therefore, the main goal of this systematic review and meta-analysis will be to determine the pooled prevalence of PTSD and its associated factors among people who have experienced traumatic events in east African countries.
Research questions about the pooled prevalence and associated factors of PTSD

This study plans to examine the pooled prevalence of PTSD among people who experienced traumatic events in east African countries in 2020. As the literature has indicated different prevalences of PTSD among different groups of people exposed to traumatic events, this systematic review and meta-analysis will examine the degree of variation in the prevalence of PTSD between countries and population groups. The sources of heterogeneity will also be identified by sorting the data based on the studies' methodologies. This systematic review and meta-analysis is also expected to determine the pooled effect estimates of factors associated with PTSD among people who experienced traumatic events in east African countries, 2020.

Identification and selection of studies

A systematic review will be conducted of published English-language research reporting the prevalence and associated factors of PTSD among people who experienced traumatic events in east African countries. Research reports will be searched in the following databases: PubMed/Medline, Africa-wides, Science Direct, Cochrane Library, Global Health, Google Scholar, EMBASE, and psycINFO. Research reports from October 10 to November 10, 2020 will be included. The primary search terms will be (prevalence OR epidemiology OR magnitude) AND (post-traumatic stress disorder OR PTSD OR acute stress disorder) AND (associated factors OR predictors OR risk factors) AND (Ethiopia) AND (Kenya) AND (Uganda) AND (Tanzania). An appropriate guideline for reporting a systematic review and meta-analysis will be used, i.e. the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA-P) [22].

Inclusion criteria

All relevant research reports available in the search until November 10, 2020 will be included based on the following inclusion criteria: (1) the study reported the prevalence of PTSD; (2) the study was conducted in east African countries; (3) the study applied a probability sampling technique.

Exclusion criteria

The following studies will be excluded: (1) experimental, qualitative, and psychometric studies; (2) studies not written in English; (3) studies that cannot be fully accessed after a request is made to their author by email.

Data extraction

MS and ET will independently extract all the necessary data using a standardized data extraction format. This format will be prepared to include the following items: first author, publication year, region of the study, sample size, the screening tool used to detect PTSD, response rate, the prevalence of PTSD, and the factors associated with PTSD. Cross-checking will be done by MS and ET following the searches. Further discussion will be held to achieve consensus, and double extraction will be performed to resolve any disagreements between the two authors. If required, we will contact the original authors for clarification. We will apply kappa statistics to indicate the difference between observed and expected agreement between authors occurring at random or by chance only. We will also conduct a sensitivity analysis to assess the robustness of the meta-analytic results.

Outcome measurements

We have two objectives in this systematic review and meta-analysis study.
These are to determine the pooled prevalence of PTSD among people exposed to traumatic events in East African countries and to estimate the pooled effects of factors associated with PTSD among people who experienced traumatic events in these countries. The pooled prevalence of PTSD will be calculated using STATA version 14.0. The pooled effect estimates of factors associated with PTSD will also be calculated; odds ratios will be derived from the retrieved research reports using two-by-two tables.

Quality assessment

The quality of the research reports included in this systematic review and meta-analysis will be assessed using the Newcastle-Ottawa Scale adapted for cross-sectional studies [23]. Articles that meet the minimum requirement, i.e. at least 50% of the quality assessment criteria, and high-quality articles, meaning those that score at least 6 out of 10, will be included in the analysis.

Statistical procedure

The relevant information from the retrieved research reports will be extracted into a Microsoft Excel format. The data will then be imported into STATA version 14.0 for analysis. The characteristics of the original articles will be presented using text, tables, and forest plots. The standard error of the prevalence for each original article will be calculated using the binomial distribution formula. The prevalences of the included studies will be checked for heterogeneity using the heterogeneity χ² test and the I² statistic. A random-effects meta-analysis model will be used to estimate the DerSimonian and Laird pooled prevalence of PTSD and its associated factors. Publication bias will be checked by performing Begg's rank correlation and Egger's regression intercept tests at a 5% significance level. If there is evidence of publication bias in our analysis, we will perform a Duval and Tweedie non-parametric 'trim and fill' analysis to formalize the use of the funnel plot, estimate the number and outcomes of missing studies, and adjust for theoretically missing studies. Subgroup analysis will be conducted to identify the impact of variables in particular groups on the pooled prevalence of PTSD. We will perform a leave-one-out sensitivity analysis to check how the estimated pooled prevalence of PTSD, and the conclusions drawn from it, change when a single study is removed from the analysis. We will use the sensitivity analysis results to identify possible sources of heterogeneity if heterogeneity is detected during the analysis. A prediction interval will be computed to reflect the variation of the pooled prevalence of PTSD across different settings. We will make recommendations for further research.

Discussion

This study aims to determine the pooled prevalence of PTSD and estimate the pooled effects of its associated factors. Several research studies using differing methods have reported the magnitude of PTSD and its determinant factors in varying populations, and have shown that the prevalence is increasing. This might be due to reasons such as little attention being given to the issue. This study will try to fill this gap by providing new evidence-based results to attract policymakers' attention. The results of this study will hopefully identify questions for future research. This study will be reported using a standardized reporting guideline for systematic reviews and meta-analyses of observational studies, the Preferred Reporting Items for Systematic reviews and Meta-Analyses Protocols (PRISMA-P) checklist [22].
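As a purely illustrative companion to the statistical procedure outlined above, the following Python sketch applies the binomial standard error formula, DerSimonian and Laird random-effects pooling, and the I² heterogeneity statistic to placeholder numbers. The study prevalences and sample sizes shown are hypothetical, not extracted results; the actual analysis will be run in STATA 14.0 as stated in the protocol.

```python
# Minimal sketch of random-effects pooling of prevalences (hypothetical inputs).
import numpy as np

# (prevalence p, sample size n) for k hypothetical primary studies
studies = [(0.228, 423), (0.126, 402), (0.373, 255)]
p = np.array([s[0] for s in studies])
n = np.array([s[1] for s in studies])

# Standard error of each prevalence from the binomial distribution formula:
# SE = sqrt(p * (1 - p) / n)
se = np.sqrt(p * (1.0 - p) / n)
var = se ** 2

# Fixed-effect (inverse-variance) weights and Cochran's Q
w = 1.0 / var
p_fixed = np.sum(w * p) / np.sum(w)
q = np.sum(w * (p - p_fixed) ** 2)
df = len(p) - 1

# DerSimonian-Laird estimate of the between-study variance tau^2
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights, pooled prevalence, and its standard error
w_re = 1.0 / (var + tau2)
p_pooled = np.sum(w_re * p) / np.sum(w_re)
se_pooled = np.sqrt(1.0 / np.sum(w_re))

# I^2: percentage of total variability attributable to heterogeneity
i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0

print(f"Pooled prevalence: {p_pooled:.3f} "
      f"(95% CI {p_pooled - 1.96 * se_pooled:.3f}-{p_pooled + 1.96 * se_pooled:.3f}), "
      f"I^2 = {i2:.1f}%")
```

The quantities printed here, the pooled prevalence with its 95% confidence interval and I², are the same ones the planned STATA forest plots and heterogeneity checks will report.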
A strength of this study is its plan to include all relevant observational studies in the systematic review and meta-analysis. A further strength is the independent searching, selection, and data extraction carried out by two independent reviewers. The study will be limited if a high level of heterogeneity is observed during the analysis.

Abbreviations

PRISMA-P: Preferred Reporting Items for Systematic Reviews and Meta-analysis Protocols; PTSD: Posttraumatic stress disorder; USA: United States of America.
Lightning characteristics in Eastern Mediterranean thunderstorms during different synoptic systems

Abstract

Thunderstorm activity takes place in the Eastern Mediterranean mainly during the boreal fall and winter seasons, under synoptic systems of Red Sea Trough (RST), Red Sea Trough that closed a low over the sea (RST-CL), and Cyprus Low (during fall, FCL, and winter, WCL). In this work we used the Israeli Lightning Location System ground strokes dataset (between October 2004 and December 2010) to study the properties of lightning strokes and their link to the thermodynamic conditions in each synoptic system. It is shown that the lightning activity dominates over the sea during WCL and FCL systems (with maximum values of 37 strokes per 25 km² day⁻¹ in WCL, and 54 in FCL) and has a dominant component over land during the RST and RST-CL days. The stronger instability (high CAPE values of 621 ± 466 J kg⁻¹) during RST-CL days, together with the higher altitude of the clouds' mixed-phase region (3630 ± 316 m), results in higher ground stroke density during this system (compared to all others) but a lower fraction of positive ground strokes (3 ± 0.5 %). In general, the fraction of positive strokes was found to be positively correlated with the wind shear values in the layer between 0 and −25 °C. It increases from 1.2 ± 1 % in early fall to 17 ± 7 % in late winter (during FCL and WCL days) and can be linked to the decrease in the sea surface (and lower tropospheric) temperature during those months, due to an impact on the vertical location of the charge centers. The diurnal cycle of the lightning activity was examined for each synoptic system. Under WCL conditions no preferred times were found along the day (as activity relates to the timing of frontal systems). During the fall systems (FCL and RST-CL) there is a peak in lightning activity during the morning hours, probably related to the enhanced convection driven by the convergence between the eastern land breeze and the western synoptic winds. The distributions of peak currents in FCL and WCL systems also change from fall to winter and include more strong negative and positive strokes toward the end of the winter.

Introduction

Thunderstorms in the Eastern Mediterranean (EM) region are associated with three synoptic systems: Cyprus Low (CL), Red Sea Trough (RST), and a hybrid system of Red Sea Trough that closes a low over the sea (RST-CL) (Levin et al., 1996; Altaratz et al., 2003; Ziv et al., 2009). These storms occur between the boreal fall and early spring (September-April). Throughout summer the area is influenced by the subsidence of the subtropical high, which inhibits deep convection.

CL is a mid-latitude low pressure system that is usually generated in the Bay of Genoa near the lee side of the Alps (Buzzi and Tibaldi, 1978) and moves eastwards over the Mediterranean Sea (Alpert et al., 1990; Shay-El and Alpert, 1991). The low-level low is accompanied by an upper-level trough that transports cold air into the region. According to Shalev et al.
(2011), who analyzed lightning activity over the region, more than 71 % of winter-time lightning flashes and 41 % of fall flashes occur under this synoptic system. While transported over the Mediterranean, the cold continental air mass is destabilized and its moisture content increases due to the interaction with the warm sea. In proximity to the eastern coast of the Mediterranean Sea, the friction at the land-sea interface and the convergence between the western synoptic winds and the eastern land breeze further aid convection (Heiblum et al., 2011; Ziv and Yair, 1994; Ziv et al., 2009). Under such conditions thunderclouds develop over the sea and are then advected inland.

RST is a low-level trough that extends from the African Monsoon along the Red Sea towards the EM (Dayan et al., 2001; Kahana et al., 2002). When the location of the upper-level trough enables southern winds in the mid-troposphere, it triggers transport of tropical moisture towards the EM. Under such conditions thunderclouds generally develop over the southern and eastern parts of the EM, exhibiting relatively high cloud bases, ∼ 2-3 km above sea level. These conditions occur mainly during the fall months (October-November) and are associated with ∼ 43 % of the flashes at this time of year (Shalev et al., 2011). In some cases the RST system extends towards the Mediterranean Sea and forms a closed low over the sea, in a complex hybrid situation. Here we name this type of system RST-CL.

Previous works that studied lightning activity over Israel used two types of measuring systems. Levin et al. (1996) investigated the link between the polarity of lightning strokes and the ambient wind conditions using the Tel-Aviv CGR3 lightning flash counter. Yair et al. (1998) used the same detection system to examine the ratio between intra-cloud (IC) and cloud-to-ground flashes along the year. Both works were limited to a small domain around the city of Tel-Aviv due to the short detection range of the CGR3 instrument (Mackerras and Darveniza, 1994). Altaratz et al. (2003) and Shalev et al. (2011) surveyed the climatology of ground lightning flashes over a period of several years using data from the national Israel Lightning Location System (ILLS). Altaratz et al. (2003) investigated lightning characteristics over land vs. over the sea, while Shalev et al. (2011) discussed the link between the temporal-spatial properties of lightning strokes and the synoptic conditions. They proposed that spatial patterns of lightning strokes can be used as a proxy for the prevailing synoptic condition.

In the present work we study the thermodynamic conditions during specified synoptic systems and their link to the generation of lightning strokes and their properties. We focus on the main synoptic systems that produce electrical activity over the EM: (a) WCL, (b) FCL, (c) RST and (d) RST-CL. The ground stroke characteristics during these four synoptic systems are analyzed and linked to the prevailing thermodynamic conditions.

Israel Lightning Location System (ILLS)

The ILLS is composed of eight sensors located along Israel (marked on Fig.
5). The present configuration consists of three types of sensors: five are electric-field based, two are magnetic-field sensors, and one is a combined sensor that monitors both magnetic and electric fields. The detection algorithm is based on time-of-arrival and magnetic field direction techniques to retrieve information on the peak current intensity, polarity, location, and time of impact of ground strokes. The data used in this study were retrieved by the ILLS between October 2004 and December 2010. During the period between 2004 and 2007 only seven detectors were operational.

The detection efficiency over Israel, where the point of impact is covered by all sensors, is estimated to be 80-90 % (Yair et al., 2014). The spatial detection accuracy over this region is ∼ 500 m and the temporal resolution for detection of successive strokes is ∼ 15 µs. The detection efficiency decreases away from the network center (Katz and Kalman, 2009). At distances larger than 100 km from the Israeli coastline it is estimated to be only 50 %.

In this study we analyzed stroke data and not flashes. Based on the findings of Shalev et al. (2011), the multiplicity of cloud-to-ground flashes detected by the ILLS is 1.1 (using thresholds of 1 km and 0.2 s). Therefore we chose to use the raw stroke data, as it represents flash data to a good approximation and does not force the usage of thresholds.

Sensitivity test of detection efficiency

As stated above, the detection efficiency of the ILLS depends on the distance from the center of the network. To reduce errors caused by undetected weak strokes, we investigate the detection efficiency as a function of location and peak current. The focus is on negative strokes, because weak positive strokes, with peak currents smaller than or equal to 10 kA, are excluded from the dataset since they are considered to be partly attributable to intra-cloud discharges (Cummins et al., 1998).

The detection efficiency is estimated here by analyzing the distributions of peak currents as a function of distance from the center of the ILLS detector network. The center was set at 32°4′ N and 35° E and the analysis was done for days of WCL only. However, similar results are obtained for the other synoptic systems as well. The distributions for specified narrow peak current ranges are presented as a function of longitude (Fig. 1a and c) and latitude (Fig. 1b and d). Black boxes mark a distance of 250 km from the approximate center of the ILLS (marked by a black arrow).

Figure 1 shows that small peak current strokes (< 10 kA) are detected only near the center of the detector array. This difference is more pronounced in the longitudinal direction. In order to avoid this bias towards small currents near the center of Israel, we chose to eliminate negative strokes with peak currents smaller than 10 kA from the analysis. In order to obtain sufficient statistics, on the one hand, and to minimize errors due to detection efficiency, on the other hand, the study region is limited to a radius of 250 km from the ILLS center. In addition, days with fewer than 20 detected strokes are excluded from the dataset. A detailed description of the ILLS detection efficiency and location errors is given by Katz and Kalman (2009) and Manoochehrnia et al. (2007).
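Taken together, the data-cleaning rules described in this section reduce to three filters: drop weak strokes of either polarity, restrict the domain to 250 km around the network center, and discard sparse days. The following Python sketch illustrates these filters under stated assumptions; the record field names and the haversine helper are ours for illustration and are not the actual ILLS processing code.

```python
# Illustrative sketch of the three cleaning rules (assumed record layout).
import math

ILLS_CENTER = (32.07, 35.0)   # approximate network center, 32 deg 4 min N, 35 deg E
MAX_RANGE_KM = 250.0          # radius limiting detection-efficiency bias
MIN_STROKES_PER_DAY = 20      # days with fewer detected strokes are dropped

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance via the haversine formula (Earth radius 6371 km)."""
    r = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def clean_strokes(strokes):
    """strokes: list of dicts with 'day', 'lat', 'lon', 'peak_ka' (signed kA)."""
    kept, per_day = [], {}
    for s in strokes:
        pk = s["peak_ka"]
        # Rule 1a: weak positives (<= 10 kA) are partly intra-cloud discharges.
        if pk > 0 and pk <= 10.0:
            continue
        # Rule 1b: weak negatives (< 10 kA) are detected only near the center.
        if pk < 0 and -pk < 10.0:
            continue
        # Rule 2: restrict to 250 km around the network center.
        if distance_km(s["lat"], s["lon"], *ILLS_CENTER) > MAX_RANGE_KM:
            continue
        kept.append(s)
        per_day[s["day"]] = per_day.get(s["day"], 0) + 1
    # Rule 3: discard days with too few strokes for robust statistics.
    return [s for s in kept if per_day[s["day"]] >= MIN_STROKES_PER_DAY]
```

In practice the 250 km cut and the per-day minimum would be tuned to the network's measured detection efficiency, as the paper does via the peak-current distributions in Fig. 1.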
Classification of lightning strokes according to the synoptic system

Daily parameters of lightning strokes, i.e. peak current intensity [kA] and polarity, location [lat. and long., degrees], and time of ground impact [UTC], are grouped according to the prevailing synoptic conditions and the season. The seasons are defined according to the boreal seasonal distribution: September to November as fall and December to February as winter. The synoptic conditions (see Table 1) are defined based on examination of daily maps, at 1° × 1° resolution, of wind direction at 925 hPa and mean sea level pressure, produced by the Global Data Assimilation System (GDAS, Kanamitsu, 1989). The maps at 12:00 and 24:00 UTC are examined with respect to the timing of the detected strokes.

For specifying CL conditions, the following criteria are used (see Table 1): (a) a center of low pressure at sea level located north or west of the Israeli coast, (b) northwest to southwest winds at the 925 hPa level over the central coast of Israel, and (c) the absence of a significant low pressure system south of Israel. For RST conditions the criteria are: (a) a center of low pressure at sea level located south or south-east of Israel, (b) northeast to southeast winds at 925 hPa, and (c) the absence of a dominant low system north or west of the Israeli coast. For RST-CL conditions: (a) two distinct centers of low pressure, one located south of Israel and the other north (or west) of the Israeli coast, and (b) western winds at low levels. Only days that precisely meet these criteria are selected for the analysis. Hence the presented analysis does not cover the entire occurrence of lightning in the study region (17.3 % of the stroke data is omitted from the analysis).

The classification of lightning days for the four synoptic systems produced 51, 15 and 12 lightning days for the conditions of CL, RST-CL and RST, respectively, for the fall season. The corresponding numbers of strokes are 80 861, 57 026 and 5118. For the conditions of WCL, 128 lightning days with a total of 98 963 strokes are analyzed.

The GDAS dataset was also used to determine the thermodynamic conditions during each synoptic system. The heights of the 0 and −25 °C isotherms and the Convective Available Potential Energy (CAPE) values were estimated for the area between 32°-33° N and 34°-35° E as representative values for our study area.

MODerate resolution Imaging Spectro-radiometer (MODIS)

Data from the MODIS instrument onboard the Aqua satellite are used to estimate the sea surface temperature (SST). The temperature is estimated for the marine area bounded between 30°-34° N and 32°-36° E, using the 11 µm band at a spatial resolution of 16 km².

Results

This section presents characteristics of lightning strokes over the EM, analyzed per synoptic condition and season. It includes three subsets for the fall season, RST, RST-CL and FCL, and one subset for the winter season, WCL. The link to the thermodynamic conditions is examined first.

Thermodynamic conditions

First, the characteristic thermodynamic conditions prevailing during each synoptic system are examined, because they determine the properties of thunderclouds (e.g. vertical dimension, updraft speed, water and ice content) and hence the electrical activity. Figure 2 presents the height and depth (thickness) of the atmospheric layer located between the 0 and −25 °C isotherm levels. In this atmospheric layer resides the mixed-phase region of clouds, which is the most relevant to the non-inductive charging mechanism involving graupel, ice particles and supercooled water (Takahashi, 1978; Saunders et al., 1991; Saunders, 2008). Examination of the vertical location and depth of this atmospheric layer as a function of synoptic condition and season can shed light on the potential for thunderstorm development and lightning production.

During the WCL days the 0 °C isotherm is located around 2 ± 0.5 km (Fig.
2, y axis), much lower than its mean location in the fall synoptic systems, due to the much colder conditions prevailing in winter. During RST conditions, the location of the 0 °C isotherm is the highest (3.7 ± 0.5 km); it is 3.4 ± 0.4 km during RST-CL and 3.1 ± 0.5 km during FCL. This means that the mixed-phase region in thunderclouds during the fall season is located higher in the atmosphere than in winter thunderclouds, with the RST thunderclouds located highest within this season (Fig. 2, red centroid). The depth of this atmospheric layer, between 0 and −25 °C (Fig. 2, x axis), can inform us about the instability of the atmosphere during thunderstorm events. It can be deduced from the thermal lapse rate along this atmospheric layer, where we expect to find the mixed-phase region in thunderclouds. A thinner layer implies more unstable conditions, meaning a larger lapse rate (a difference of 25 °C over a shorter distance); a deeper layer, on the other hand, represents smaller instability. During RST conditions this atmospheric layer is the thinnest, hence it represents the most unstable conditions compared to the other three synoptic systems.

The instability of the atmosphere can be represented by the mean CAPE as well. The CAPE represents the entire atmospheric column, not only the specific layer between 0 and −25 °C examined in Fig. 2, and it is a commonly used parameter for characterizing the thermodynamic conditions during thunderstorms. Figure 3 presents the daily CAPE values per synoptic system as a function of the daily number of detected strokes. Centroids of the four synoptic systems are marked as circles.

The relative position of the mean CAPE values for the four synoptic systems is quite similar to that shown in Fig. 2 by the depth of the layer between 0 and −25 °C. Clearly, a general increasing trend in ground stroke production with increasing CAPE values is observed. The trend is in agreement with many previous studies conducted around the globe (Randell et al., 1994; Williams and Stanfill, 2002) and in Israel (Shalev et al., 2011). Larger CAPE values (stronger instability), which characterize the synoptic systems of the fall season, can better support the conditions required for charge separation within thunderclouds. Those requirements for efficient charging include strong updrafts (Deierling and Petersen, 2008) and enhanced graupel and ice mass fluxes (Deierling et al., 2008). These factors have been shown to be correlated with stronger electrical activity. Thus the conditions during RST-CL events create the strongest instability and, as a consequence, the best conditions for electrification (as expressed by the daily number of strokes in Fig. 3). During the fall season the temperature of the lower atmosphere and the SST (see Sect. 3.2.4.b) are still fairly high; the combination with cold air in the upper troposphere therefore creates very unstable conditions (Ziv et al., 2009).

The RST data in Fig. 3 (red circle) are exceptional among the fall season systems and lie below the line connecting the data of the other synoptic systems. They represent fewer strokes for high values of CAPE, which seems to contradict the previous findings.
A possible explanation for this contradiction is a higher relative fraction of IC flashes, which are not detected by the ILLS. The IC flashes can be attributed to the relatively higher altitude of the mixed-phase region in the clouds during the RST days, as implied by the higher level of the 0 °C isotherm in Fig. 2. Clouds located higher in the atmosphere are expected to produce a higher ratio of IC to CG flashes (Pierce, 1970; Prentice and Mackerras, 1977; Yair et al., 1998).

It can be noticed that the characteristic CAPE values during fall and winter thunderstorms in Israel are small compared to CAPE values measured around the globe during summer thunderstorms (∼ 1000s of J kg⁻¹; Williams and Renno, 1993; Williams et al., 2005). The CAPE in our region is similar to that of other winter thunderstorms, like those typical of Japan (Suzuki et al., 2011).

A positive linear correlation between the CAPE values and the number of strokes per day is found for WCL (R = 0.41) and FCL (R = 0.46). The two linear fits and equations are presented in Fig. 3. The rate of increase in the number of strokes per day, on the order of 4.6 strokes per unit of CAPE, is similar in both seasons. Next we examine in more detail the characteristics of the lightning strokes and their link to the thermodynamic conditions.

Number of daily strokes as a function of month and synoptic system

Figure 4 shows the monthly averages of the number of strokes per day (bars) for the four synoptic system subsets. The average number of thunderstorm days per month is plotted as a red curve. The error bars indicate the inter-annual variation per month.

The highest electrical activity per thunderstorm day occurs during October in the RST-CL system (2734 ± 974 strokes day⁻¹). The smallest number of strokes per day is for the FCL system during September. This may be attributed to the very small dataset for this month, which includes only one day with 85 strokes. The inter-annual variability in the number of strokes per day (indicated by black error bars) is very large for the fall months. It demonstrates the large inter-annual variability in the magnitude of electrical activity; there are on average only a few days of electrical activity during these months. During winter, on the other hand, the inter-annual variance is small, with a mean value of 733 ± 97 (Fig. 4d).

The level of electrical activity per day, as measured by the number of ground strokes, is highly correlated with the mean value of CAPE per synoptic system (Fig. 3). The high flash number per day during RST-CL systems is supported by the relatively high CAPE values during these thunderstorm days and is in agreement with Ziv et al. (2009).

Spatial distribution

The general spatial distribution of lightning strokes detected by the ILLS exhibits a butterfly shape, as is clearly shown in Fig. 5c. This pattern is a direct result of the system configuration and the resultant detection efficiency, due to the elongated position of the sensors. The western part of the detection area, which spreads mostly over the sea and coastal regions, is electrically active during all synoptic systems (Fig. 5a-d). The eastern region, which covers a continental area mainly over eastern Israel, Jordan and Syria, is mostly associated with lightning activity during RST and RST-CL systems. Under conditions of WCL, the higher stroke density is detected over the sea and near the coast (Fig.
5a). This location of stronger electrical activity can be associated with larger instability over the sea during the winter, as the sea is a source of moisture and heat (Shay-El and Alpert, 1991). In addition, the convergence of the eastern land breeze with the prevailing western synoptic wind near the coast is a key mechanism for enhanced convection and intensification of thunderclouds (Heiblum et al., 2011). During FCL days, the stroke density over the sea is larger than on WCL days and the region of high density is wider. This can be explained by the higher instability during FCL days (higher CAPE, Fig. 3), which enables significant convection over the whole eastern Mediterranean Sea without the help of convergence near the shore, as in winter. For the RST and RST-CL days there is high activity both over the sea and over land. During RST conditions the main region of activity is determined by the location of the trough's axis: a western axis, with regard to the coast line, creates significant activity over the sea, and an eastern axis over land. This is the reason for the two separate centers shown in Fig. 5d.

These results again demonstrate the higher stroke density during fall compared to the winter season. The average numbers of strokes per pixel (25 km²) per day are much higher in Fig. 5b and c compared to Fig. 5a. Maximum daily values of strokes per 25 km² are 37 in WCL, 54 in FCL, 33 in RST and 131 in RST-CL.

Diurnal cycle

The diurnal cycle of lightning activity during the different synoptic systems is presented in Fig. 6. It shows significant differences in the timing of activity between the various systems. For example, the lightning occurrence during RST and FCL synoptic conditions, both during fall, exhibits opposite cycles. In RST conditions the cycle has two peaks of higher electrical activity, one centered in the afternoon hours (around 17:00 LT) and a second peak in the middle of the night (around 24:00 LT, Fig. 6a). The afternoon peak demonstrates the significance of solar land heating that drives stronger convection.

During FCL, on the other hand, there is a single peak in electrical activity centered at morning times, around 09:00 LT (Fig. 6c). The likelihood of lightning activity during the late morning hours is ∼ 50 % higher than in the evening hours. This diurnal cycle can be explained by the build-up of the convergence near the coast during the night and early morning, between the synoptic western wind and the eastern land breeze, which reaches its maximal magnitude in the early morning hours. During the RST-CL system (Fig. 6b) the single peak near 06:00 LT is less significant and occurs earlier than the morning peak of the FCL system. During the rest of the day the likelihood is almost constant at ∼ 3 %. During winter, under WCL conditions, the probability is ∼ 4 % along the entire day, suggesting that the likelihood of electrical activity at any given hour is about equal (Fig. 6d). This could be related to the timing of the passage of cold fronts over the study region. Yet, one can see two minor peaks centered around 06:00 and 22:00 LT. The early morning peak is similar to the one found for the conditions of RST-CL and similar to what was reported by Altaratz et al.
(2003) for CL conditions.

Fraction of positive strokes

The relative fraction of positive strokes, out of the total cloud-to-ground strokes, as a function of month and for the different synoptic systems is presented in Fig. 7. Rakov and Uman (2003) discussed a global upper average limit of 10 %. Our results show that under FCL and WCL systems there is a strong increase in the fraction of positive strokes over the months. The fraction increases from 1.2 ± 1 % in early fall (FCL) to 17 ± 7 % in late winter (WCL). A higher fraction of positive strokes during winter storms was observed in previous studies over this region (Yair et al., 1998) and in studies conducted worldwide (e.g. Hojo et al., 1989; Ezcurra et al., 2002; Finke and Hauf, 1996). This tendency in the fraction of positive strokes emphasizes the impact of the changing thermodynamic conditions on thundercloud properties and hence on the processes of charge separation. Rakov and Uman (2003) reviewed several mechanisms that can explain an increase in the fraction of positive strokes in the cold season.

The first mechanism is related to the decrease in the distance between the positive charge center in the cloud and the ground. During winter, due to the decrease in tropospheric temperatures and in the altitude of the mixed-phase region in clouds (see Fig. 2), the upper positive charge center is closer to the ground. This may increase the probability of positive strokes. An additional proposed mechanism is the better exposure of the upper positive charge center, located near the cloud top and in the anvil, to the ground, due to stronger wind shear in the winter (Brook et al., 1982; Levin et al., 1996; Williams and Yair, 2006). To further explore the dependence of the positive stroke fraction on local thermodynamic factors, we examine its link to (a) the wind shear and (b) the SST in the next sections.

a. Wind shear

Yair et al. (1998) and Levin et al. (1996) showed an exponential dependence of the relative fraction of positive ground flashes on the wind shear over Israel. Following the above studies and using the GDAS database, we estimate the wind shear from the prevailing winds at the levels of the 0 and −25 °C isotherms (not shown). Our results suggest a linear dependence under conditions of CL, with regression coefficients of 0.29 for FCL and 0.32 for WCL. The correlations suggest that for an increase in wind shear of 1 s⁻¹ the fraction of positive strokes increases by ∼ 15 to 18 %. This finding supports the "tilted dipole" hypothesis (Brook et al., 1982): when the cloud is tilted, the screening of the upper positive charge center from the negatively charged surface is less effective, and thus the likelihood of positive strokes from the upper positive charge center increases.

The different fractions of positive strokes under WCL and FCL conditions, even with similar magnitudes of wind shear, suggest that the generation of positive strokes is strongly related to the thermodynamic conditions of the atmosphere (Fig. 3) and the vertical dimension of the clouds (Fig. 2), which were shown to differ between these two synoptic systems.
b. SST

The contribution of the SST, as a key factor controlling the thermodynamic conditions of the troposphere, is examined here. Figure 8 shows a strong monthly negative correlation (R = −0.99) between the SST and the fraction of positive strokes. The dependence shows that for every ∼ 1.5 °C decrease in SST there is a linear increase of ∼ 1 % in the fraction of positive strokes: the colder the sea surface, the larger the fraction of positive ground strokes.

This finding supports the suggested mechanism of a higher fraction of positive strokes due to a smaller distance between the upper positive charge center in the cloud and the surface, as discussed above. According to this idea, the likelihood of positive strokes increases with decreasing SST due to the derived impact on the temperature of the atmosphere, the smaller vertical development of the cloud, and the distance of the positive charge center from the ground (Kitagawa and Michimoto, 1994). Figure 2 indeed indicates that the mixed-phase layer base height is lower by ∼ 1-1.5 km during the winter season.

Distributions of peak currents

Figure 9a presents the peak current distributions for negative and positive strokes for the four synoptic systems. There are more negative strokes than positive ones under all synoptic systems (Fig. 9a). The median peak current for the negative strokes in all systems is similar, between −18.1 and −20 kA. The positive stroke distribution has a longer tail, meaning a higher probability of larger peak currents compared to the negative strokes. Therefore the median values of their distributions, for the different synoptic systems, are at least 20 % higher than the corresponding absolute negative values.

The larger subsets for FCL and WCL conditions enable us to follow variations in the monthly peak current distributions, from early fall to late winter, for conditions of CL. Figure 9b presents the monthly peak current distributions for negative and positive strokes between September and February. Over time, both the fraction and the maximal magnitude of strong currents of both polarities increase. Similar trends have been observed over the east coast of the United States (Orville et al., 1987) and over the Sea of Japan (Hojo et al., 1989). The zoom-in window (Fig. 9c) demonstrates that the increase in strong currents comes simultaneously with a decrease in the fraction of weak negative currents between −35 and −10 kA. The reasons for the increase in the magnitude of the strong currents along the year are unclear and will be studied in future work.

Discussion and conclusions

The main synoptic systems that produce thunderstorms over the eastern Mediterranean are Red Sea Trough (RST), Red Sea Trough that closed a low over the sea (RST-CL) and Fall Cyprus Low (FCL) during fall, and Cyprus Low during the winter (WCL). This study presents the statistical properties of lightning strokes over the Eastern Mediterranean (spatial and temporal distribution, diurnal cycle, fraction of positive strokes and distribution of peak currents) as a function of season and synoptic condition, i.e. synoptic system. These lightning properties are directly related to the thermodynamic conditions prevailing during each system (Ziv et al., 2009).
During RST conditions the thunderclouds are located higher in the atmosphere (the level of the 0 °C isotherm is 3.7 ± 0.5 km) and the atmosphere is very unstable (mean CAPE value of 515 ± 615 J kg⁻¹). The source of moisture for these clouds is mid-tropospheric transport from Africa (Kahana et al., 2002). The mean number of ground strokes per day is ∼ 80 % lower compared to RST-CL days, even though the difference in the CAPE values between these two types of synoptic systems is minor. A possible reason for this difference can be the higher location of the thunderclouds in RST events, which suggests a higher production of IC flashes that are not detected by the ILLS. This assumption is in agreement with Yair et al. (1998), who reported a maximum in the IC to CG ratio in the fall. Under RST conditions the strokes are distributed mostly over the sea or over land east and south of Israel, but less over the central and northern parts. The main peaks of activity are in the late afternoon and in the middle of the night. However, there is some uncertainty in these results due to the small dataset measured for this synoptic system. The low fraction of positive strokes (2.5 ± 0.8 %) is correlated with the higher location of the mixed-phase part of the thunderclouds in the atmosphere, which leads to a larger distance between the upper positive charge center in the thundercloud and the ground (Rakov and Uman, 2003).

During RST-CL events the mean CAPE values are about 17 % higher (621 ± 466 J kg⁻¹) than during RST events and the average daily number of ground strokes is about five times larger. The strokes are distributed over the sea and above the entire land area of Israel. There is a peak in the thunderstorms' electrical activity during the early morning hours, correlated with the time of maximal convergence over the sea between the synoptic west wind and the easterly land breeze (Heiblum et al., 2011). Although the wind shear is comparable to what is found for WCL conditions, the average fraction of positive strokes in RST-CL is only 3 ± 0.5 %, similar to the RST case. This can be explained by the large vertical separation between the cloud base and the surface, which inhibits discharge from the upper positive charge center.

Another type of synoptic system that appears in the fall months is the FCL. This synoptic system is similar to the WCL system, as evident from the position of the low pressure in the surface maps. The difference between the two seasons is in the thermodynamic conditions, which drive significant differences in the properties of thunderclouds and their lightning stroke production. During FCL the 0 °C isotherm is located about one km higher in the atmosphere compared to its location on WCL days. The CAPE is about three times higher in FCL events and the mean number of strokes per day is about two times larger. The geographic center of the electrical activity is over the sea during both systems. The diurnal cycle of FCL presents a peak in the late morning, while during WCL there is no significant peak, likely due to the arbitrary timing of the passage of the cold fronts, which are the major driver of the electrical activity.

The fraction of positive strokes steadily increases from 1.2 ± 1 % in FCL days in September (early fall) to 17 ± 7 % in WCL systems in February (late winter). We note that during CL events (combining both FCL and WCL) the fraction of positive strokes is correlated with the wind shear (within the 0 to −25 °C layer) and inversely correlated with the SST. A colder environment implies a shorter distance between the positive charge centers and the ground. This finding is in agreement with other studies that pointed to a higher fraction of positive strokes in winter storms in Japan (Suzuki, 1992).
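As a hedged illustration of the seasonal diagnostic summarized above (the monthly fraction of positive ground strokes and its linear relation to SST), the following Python sketch reproduces the computation on placeholder values; the monthly numbers are invented to mimic the reported trend and are not the measured ILLS or MODIS data.

```python
# Sketch of the monthly diagnostics behind Figs. 7-8 (placeholder inputs).
import numpy as np

def positive_fraction(peak_currents_ka):
    """Fraction (%) of positive strokes in a set of signed peak currents."""
    pk = np.asarray(peak_currents_ka, dtype=float)
    return 100.0 * np.sum(pk > 0) / pk.size

print(positive_fraction([-20.0, -35.0, 15.0, -18.0]))  # -> 25.0

# Hypothetical monthly means, September (warm sea) through February (cold sea)
sst_c = np.array([28.0, 26.0, 23.5, 21.0, 19.0, 17.5])   # SST, deg C
pos_frac = np.array([1.2, 3.0, 6.0, 9.5, 13.0, 17.0])    # positive fraction, %

# Least-squares line and Pearson correlation, as in the SST analysis
slope, intercept = np.polyfit(sst_c, pos_frac, 1)
r = np.corrcoef(sst_c, pos_frac)[0, 1]
print(f"slope = {slope:.2f} % per deg C, r = {r:.2f}")
```

The negative slope and strongly negative correlation are the qualitative signatures reported in the paper; the printed magnitudes are artifacts of the placeholder numbers.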
The distributions of peak currents in CL conditions also change from fall to winter and include more strong negative and positive strokes toward the end of the winter, with larger median peak currents of positive strokes. This finding is in agreement with observations of lightning properties in winter storms in Japan, where similar conditions occur (Matsui and Hara, 2014).

Overall, the electrical activity over the eastern Mediterranean is somewhat unique with respect to mid-latitude and equatorial regions, as it takes place during fall and winter and not during summer time. It resembles the electrical activity in the Japanese winter in many respects (Williams and Yair, 2006). The winter conditions dictate a smaller vertical extent and weaker dynamics. Further work is needed to better quantify the link between the statistical properties of lightning strokes and the macro/micro-physical properties of thunderclouds. Such research has the potential to better quantify the complex relationships between the dynamic and thermodynamic conditions that together determine the nature of the electrical activity of thunderstorms.
Author contributions. The first two authors, Y. Ben Ami and O. Altaratz, had equal contribution.

Figure 1. Distribution of peak currents during the WCL system [25 km²]. Data are plotted as a function of longitude (a and c) and latitude (b and d). Distributions are calculated for steps of 5 kA in peak current intensity. Panels (a and b) are for peak currents between 0 and −25 kA, and (c and d) present the range between −25 and −45 kA. Black boxes mark a distance of 250 km from the center of the ILLS, which is marked by black arrows. Absolute currents larger than 45 kA are not shown.

Table 1. Criteria used for classification of synoptic systems.
An Introduction to Exdysivity Index for Organizational Change Capability Assessment

The dynamic change capabilities of organizations are a prerequisite for success, long-term growth and sustainability (Moran & Brightman, 2000; Andreeva & Victoria, 2006; Barreto, 2010; Halkos, 2012). Although organization development (OD) study involves planned changes that would help businesses to stay competitive in the marketplace, there is no effective and reliable change indicator that can reflect the need for and level of change capabilities. Apparently, organizational change management requires a multi-perspective approach rather than a single approach to all change situations (Andreeva, 2008). To achieve successful and sustainable change, an effective change measurement is the key (Moran, Baird & Brightman, 2000). This study aims to propose the development idea for a change indicator, or so-called "exdysivity index (EI)", as the change capability assessment and requirement for change intervention at both the international and individual organization level.

Introduction

The fast-moving and volatile environment has forced businesses to stay agile and adapt themselves all the time. Historical success cannot guarantee future existence in the marketplace. Companies with past success may find that their performance drops dramatically due to the lack of sufficient changes to critical facets of business operations and management. Organizational change is important and necessary for long-term growth and sustainability.

Organization development (OD) is a process that can help organizations build their capacity to change and to achieve effectiveness in terms of financial performance, customer satisfaction and employee engagement (Cummings & Worley, 2009). The capacity to change is critical to improving the competitive advantage that is key to the success of organizations.

The success of organizations can be assured through the achievement of goals and targets. Profit organizations focus on financial achievement in terms of sales, return on investment, profitability, etc. Non-profit organizations focus on objective fulfillment. However, not many organizations can maintain their competitiveness and sustainability in the long term. Andreeva & Victoria (2006) suggest that it is difficult to keep a competitive advantage over long periods without developing the capability to change.
The change capability involves a number of areas covering both generic and specific competencies. Organizations need to know what constitutes the change capability. Currently, there is no reliable tool to measure, assess, and provide informative results that can disclose the strengths and weaknesses affecting the change capability of an organization or firm.

As organizational change through the development of change capabilities is necessary for long-term survival (Barreto, 2010; Halkos, 2012), the success of change processes depends on a number of factors, for example: employees' perception of human relationship value (Jones, Jimmieson & Griffiths, 2005), alignment of the value system (Burnes & Jackson, 2011), and matching of the change management strategy with stages to overcome resistance sources (habits and perceived risks) (Aladwani, 2001; Self & Schraeder, 2009). In order to cope with organizational change successfully, Judge, Thoresen, Pucik & Welbourne (1999) addressed seven traits influencing an individual (locus of control, generalized self-efficacy, self-esteem, positive affectivity, openness to experience, tolerance for ambiguity, and risk aversion). Coping with organizational change was also related to both extrinsic (salary, job level, plateauing, job performance) and intrinsic (organizational commitment, job satisfaction) career outcomes. Lindell & Drexler (1979) commented that judgmental measures were still used as indicators of real organizational changes.

However, in order to help drive the organizational change process effectively, a systematically developed indicator can be an alternative. This paper discusses the importance of and necessity for the OD community to develop a well-established mechanism that helps identify the level of change capacity. The measurement index under the name "exdysivity" is developed and proposed for future research.

Business success and sustainability

In the rapidly changing environment, firms need to act proactively to ensure their distinctive competencies and sustainability. The average period for which firms are able to sustain competitive advantage has decreased significantly over time (Barreto, 2010). Firms find it harder to achieve long-term competitive advantage under hypercompetitive or high-velocity environments. Strategic management suggests what needs to be done so that businesses can survive and maintain their existence in the marketplace. In general, successful business operation is measured by a number of metrics such as sales growth, profitability and return on investment. It is the prime responsibility of management to ensure that operations are at the most efficient and effective level. As a result, the stakeholders get their return in the form of dividends, share price capital gains, and so on.

It is important for businesses to understand the importance of change capability as one of the elements that enhance competitive advantage and long-term survival. It is important for employees and organizations to adapt themselves to ensure change effectiveness (Halkos, 2012).
What constitutes business success and sustainability? Moran & Blauth (2009) argue that vision and strategy have been communicated extensively, but day-to-day action is not emphasized enough to get buy-in and engagement. The past and today's challenges of change, in the view of the leader of an organization, can be compared as shown in Table 1 below.

Table 1. The past and today leaders' view on change (Moran & Blauth, 2009)

Organizational change capability

The terms used to describe the ability or capacity of an organization to change vary, with some differences in scope, such as organizational capacity for change (OCC) (Judge & Elenkov, 2005), dynamic change capability (Andreeva & Victoria, 2006), and dynamic capabilities (Barreto, 2010). Barreto (2010) proposes the definition of dynamic capability as "the firm's potential to systematically solve problems, formed by its propensity to sense opportunities and threats, to make timely and market-oriented decisions, and to change its resource base."

The construct of organizational change capacity can be conceptually grouped into eight dimensions (Judge, W.Q. & Elenkov, D., 2005) as follows:

1. Trustworthy leadership: The ability of senior executives to earn the trust of the rest of the organization and to show the members of the organization the way to meet its collective goals.
2. Trusting followers: The ability of the rest of the organization to constructively dissent and/or follow enthusiastically.
3. Capable champions: The ability of an organization to attract, retain, and empower change leaders to evolve and emerge.
4. Involved mid-management: The ability of middle managers to effectively link senior management with the rest of the organization.
5. Innovative culture: The ability of the organization to establish norms of innovation and encourage innovative activity.
6. Accountable culture: The ability of the organization to carefully steward resources and successfully meet predetermined deadlines.
7. Systems communications: The ability of the organization to communicate vertically, horizontally, and with customers.
8. Systems thinking: The ability of the organization to focus on root causes and recognize the interdependencies within and outside the organizational boundaries.

The strategy for the sustainable success of a firm has been widely discussed in the past two decades. Andreeva & Victoria (2006) argue that the organization's ability to sustain and renew its competitive advantages is most important under a continuously changing environment. Kruasom & Saenchaiyathon (2014) addressed the competitive advantage created from a resource-based view, with key strategies consisting of knowledge management capability, technological capability, innovative capability, and human resource capability.

What kind of change is important to the company's success and survival? According to research by several authors, the following are examples of key organizational change capabilities (Andreeva & Victoria, 2006; Halkos, 2012):
• Superior product development and innovation
• Business model change
• Merger and acquisition integration
• Work process change and improvement (Trkman, 2010)

Some changes are generic and some are specific, resulting in enhanced competitive advantages. However, change activities can be imitated by other organizations, and knowledge of change activities can be transferred through employees who move from one company to another. Thus the focus should be put not only on what to change but also on how to change; this could form a core capability of the organization. The elements of change capability should be identified so that they can be improved to contribute to the performance of the organization. Andreeva & Victoria (2006) suggest that the change capability of an organization consists of 3 steps:

1) To see new opportunities for change development.
2) To realize what changes are needed.
3) To implement the changes successfully.

Barreto (2010) suggests that dynamic capabilities evolve through mechanisms including learning, knowledge articulation, knowledge codification, trial and error, improvisation, and imitation.

How to determine the success level of change capability

Across organizational change efforts, it is apparent that the success rate is relatively low. Burnes & Jackson (2011) argue that there is substantial evidence that approximately 70% of all change initiatives fail. The risk of failure is greater than before (Moran & Brightman, 2000), for example in mergers and acquisitions (M&As) (Schraeder & Self, 2003). Measurement is key to successful and sustainable change (Moran & Brightman, 2000): the more an organization's goals can be quantified, and progress toward these goals linked to individual performance, the more successful and long-lasting change is likely to be.

Key performance indicators (KPIs) are used by many organizations for performance measurement and control. Two common KPIs that have been used for management purposes are productivity and efficiency. As change capability becomes one of the key success factors, it is worth considering how to determine the success of change efforts and interventions through the dynamic change capabilities.

Most studies of change capability do not discuss how an organization can know its own change capability. There should be a reliable tool that can help assess the level of the change capability of an organization so that it can be compared with competitors (Sullivan, 2000). With competition, organizations will strive to become better than they are, or to keep a high level from dropping. For stakeholders, the change capability indicator can become a target that reduces the risk of lower performance and increases confidence in the ability of the organization's management to survive in the long term.

Action value and action plan

It is substantially challenging to translate a sustainability strategy into action and drive it through a complex organization (Epstein & Roy, 2001). Chrusciel & Field (2006) identified that an action plan addressing the critical factors for dealing with changes can increase the chance of a successful change transformation. Action is especially important as the feedback after the assessment. Piderit (2000) argued that the change process should be egalitarian, fostering ambivalent attitudes toward change. The process of organizational change should include top-down, planned change and a bottom-up approach.
Emergence of exdysivity index

Volberda (1996) identified hyper-competition as the force that moves firms to act more quickly and boldly in making changes. How well can an organization renew its change capabilities? Renewal is a continuous, cumulative process for coping with a fast-changing, unpredictable, and complex environment. Exdysivity is a natural change process that can be cultivated in an organization to become an operational routine. Exdysivity imitates the skin shedding of reptiles, such as the snake, which naturally renews its skin many times a year for growth and healthier skin. Exdysivity focuses on action-taking in the most natural manner, as without action, changes cannot happen. A high-exdysivity organization needs to be proactive in developing the change capability, which serves as a basis for competitive advantage.

The main focus under exdysivity is action. Each action has an impact on the operation, contributing to the growth and success of an organization. A different action renders a different economic value; without actions, value creation is unforeseeable.

How can the exdysivity of an organization be improved? This involves people skills, the management system, and organizational infrastructure such as the information system, communication system, and human resource development system. In order for an organization to manage the change process, the Malcolm Baldrige criteria for performance excellence suggest leadership, strategic planning, customer and market focus, information and analysis, human resource focus, process management, and business results as the framework (Chrusciel & Field, 2003).

The exdysivity index consists of a set of measurement criteria that will be used to assess an organization; the major change capability areas under assessment are reflected in the conceptual criteria described below. In order to understand the current status of an organization with regard to change capability, a systematic assessment should be developed and the results disclosed, so that necessary changes and interventions can be designed to improve the weaknesses or build on the strengths. The exdysivity level of an organization can be determined through the conceptual criteria shown in Figure 2 (overview of conceptual criteria for the assessment of a firm's exdysivity) below. There are five distinctive criteria under the assessment process, as follows:

1. Strategic planning and execution - To become successful, organizations need to develop effective and powerful strategies in order to guide people towards the achievement of targets and goals. The formulated strategies have to be based on the capabilities of the organization, with a clear-cut timeline for the execution and implementation of related plans (Chrusciel & Field, 2006).

2. Resource management and development - The available resources of an organization should be efficiently utilized to maximize potential through effective management and operations (Yeung & DeWoskin, 1998). Resources cover both tangible and intangible assets; human resources are considered among the most valuable assets of the organization. The capability to use and develop existing resources is crucial for the success of the organization.
3. Change culture and mindset institution - For change to become part of an organization's best practices, appropriate OD interventions, including building readiness for change through employees' perceptions of organizational culture (Jones, Jimmieson & Griffiths, 2005), should be made to institute the change culture and mindset in people, in other words, the inner shift in people's values, aspirations, and behaviors (Karp, 2004; Nah & Lau, 2001). This requires extensive challenge to the status quo and stimulus for change processes. Without natural change as part of the working culture, it can be difficult for an organization to maintain effective capabilities.
4. Action efficiency and effectiveness - Most past research on change and dynamic capabilities does not focus on action taking, even though action is key to the success of the capability development process. Action science needs further research so that the effectiveness of any concept applied to an organization can be assured.
5. Realization of growth, success, problem and failure - It is important for an organization to keep close watch on the development and improvement of its capabilities. Measuring performance results can lead to understanding and adjusting the strategy, plans, and actions so that they better respond to the fast-changing environment (Chrusciel & Field, 2006). The change process should address how a firm has created value in the past, present, and future for its customers and other stakeholders (Karp, 2004).

In terms of the time dimension, the assessment should cover three major periods: past, present, and future. The past interval should be traced back at least five years; the present period may cover the current year to date; the future period can cover the next three to five years.

The past time-frame assessment focuses on the analysis and review of historical data to understand performance in relevant areas. This may start from a review of financial performance such as sales, expenses, and profit and loss, and can be expanded to cover other functions within the organization, such as production, sales and marketing, human resources, logistics, and information technology.

The present time-frame assessment aims at understanding the current status and how the responsible people within an organization react to opportunities and threats. Environmental changes can be a critical factor triggering the actions to be taken; thus assessing what is currently going on can give an impression of the up-to-date capability of the organization.

The future time-frame assessment helps the evaluator understand the capability of an organization from a forward-looking perspective and assess the propensity for action taking by individuals under each function and by the team itself. Planned future capability development can help confirm the consistency and continuity of the competitive advantages that the organization has developed and maintained for long-term sustainability.

The above conceptual assessment criteria should be supported by more detail, designed in a systematic way, with appropriate weights assigned to each checking item. The reliability of the measurement needs to be confirmed through sufficient testing before the methodology can be applied to a real system. Smith (2002) suggested success rates by type of measure, as shown in Table 2 below.
Table 2: Success Rates by Type of Measure

Once the instrument for measuring change capability has been confirmed for acceptance and reliability, the methodology can be applied, observing the consequences and reactions of the target organization. The expected output of the assessment is the comments and suggestions that the assessment body provides, which can lead to a further improvement plan or needed interventions.

The final result of the assessment activity can take the form of an index. The index score ranges from 0.00 to 100.00, representing the level of change capability, or exdysivity level; the highest exdysivity level is 100.00. Each capability is examined and measured, with a different weight assigned under each category, before the scores are combined into the total score according to the designed formula. A report should be provided with explanations and suggestions for further improvement.
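To make the scoring scheme concrete, the sketch below combines per-criterion scores into a single 0.00-100.00 index. The five criteria mirror the assessment areas above, but the weights and scores are illustrative assumptions; the paper does not publish its weighting formula.

```python
# A minimal sketch of the weighted exdysivity index described above.
# The weights and scores are hypothetical, not values from the paper.

CRITERIA_WEIGHTS = {
    "strategic_planning_and_execution": 0.25,
    "resource_management_and_development": 0.20,
    "change_culture_and_mindset": 0.20,
    "action_efficiency_and_effectiveness": 0.20,
    "realization_of_results": 0.15,
}

def exdysivity_index(scores: dict) -> float:
    """Combine per-criterion scores (each 0-100) into a 0-100 index."""
    if abs(sum(CRITERIA_WEIGHTS.values()) - 1.0) > 1e-9:
        raise ValueError("criterion weights must sum to 1")
    missing = set(CRITERIA_WEIGHTS) - set(scores)
    if missing:
        raise ValueError(f"missing scores for: {missing}")
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

# Example assessment with hypothetical scores:
scores = {
    "strategic_planning_and_execution": 72.0,
    "resource_management_and_development": 65.0,
    "change_culture_and_mindset": 58.0,
    "action_efficiency_and_effectiveness": 61.0,
    "realization_of_results": 70.0,
}
print(f"Exdysivity index: {exdysivity_index(scores):.2f}")  # 65.30
```

Any real deployment would, as the paper notes, require the weights to be calibrated and the instrument's reliability to be tested before the index is compared across organizations.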
Case Study and Discussion
A qualitative study of a Thai retail company (disguised name "AA company") was performed from 2014 to 2017. This case study is an example of a change capability effort and the consequences and results observed at the end of the study.

AA company has been in the fashion retail business in Thailand for over 30 years. It grew through, and survived, unstable political periods and the financial crisis of 1997. The company realized that the business was at risk from domestic uncertainty, which affected sales and performance from time to time. One strategy the management selected was to expand the business overseas so that revenue would not rely solely on domestic sales.

After two years of feasibility study and due diligence, the company obtained approval from the board of directors to acquire a group of companies in Malaysia (disguised name "BB company") in a similar line of business. BB company had been in the fashion business in Malaysia for over 20 years with its own developed brands.

In the first year after the acquisition, the company needed to change the ERP system to comply with the GST (Goods and Services Tax) law, which took effect on 1 April 2015. The parent company (AA) had invested in an SAP system, which was complicated and required a considerably higher investment than the cheaper local system that had been proposed.

After reviewing the business plan in 2014, the Thai country manager proposed liquidating a shoe brand that had incurred continued losses for several years. This decision was made after considering the brand's competitiveness and future growth plan; it was judged not worth continuing the business.

In the view of the management at AA, the inventory level and its aging were very high and needed urgent improvement. The buying budget was reduced and more clearance sale events were implemented. At the same time, the design of new products and merchandise development were centralized and tightly controlled.

The changes caused resistance and challenges from local staff in Malaysia. At the same time, sales dropped continuously from the start of GST, compounded by higher competition from new brands and fierce price reductions; some competitors launched a significantly higher proportion of new products in the low price range.

The cost at BB started to climb with the higher cost of the new system and the additional implementation of POS (point of sale) terminals at all counters. Management fees from Thailand, representing the allocated cost of the management at AA, were charged to BB. This resulted in deteriorating performance, especially on the bottom line.

A cost reduction plan was introduced, starting with no salary adjustment and a lower bonus in 2016. An MSS (Mutual Separation Scheme) was introduced with the expectation of reduced operational costs. This caused a higher attrition rate, as employees started looking for new jobs with higher salaries and more job security.

In 2015, restructuring took place at AA at the top management level, including the CEO and CFO positions. Shortly afterwards, the country manager in Malaysia was replaced by a local person from outside. This left the long-serving staff with low morale, and resignations among key management eventually increased.

Discussion
The case study demonstrates the change efforts developed by both companies over the three years of the study in response to a continuously changing environment. However, the consequences did not lead the companies to better performance. BB company still struggled to survive its economic difficulties, and AA company had to consolidate BB company's performance into the group's financial statements. It is possible that the companies did not see clearly the critical success factors (CSFs) for the project (Muller & Jugdev, 2012) or which areas of change urgently needed attention (Gersick, 1991). In addition, certain improvement initiatives might have needed higher priority than others; that is, the change content ("what needs to be changed") influenced the change implementation method (Andreeva, 2008). Moreover, launching too many changes at once, such as the new ERP system and the reorganization, increased employee stress through higher workloads, making change less attractive, which could lead to the failure of the change interventions (Vakola & Nikolaou, 2005). When implementing change, management needs to be aware of the ways that personal issues can affect employees' thoughts, feelings, and behavior. According to Bovey & Hede (2001), a balanced approach to change is necessary, covering both technical and human factors, including unconscious processes such as defense mechanisms. For successful ERP implementation, Nah & Lau (2001) identified 11 critical factors: (1) ERP teamwork and composition; (2) change management program and culture; (3) top management support; (4) business plan and vision; (5) business process reengineering with minimum customization; (6) project management; (7) monitoring and evaluation of performance; (8) effective communication; (9) software development, testing, and troubleshooting; (10) a project champion; and (11) appropriate business and IT legacy systems. If a tool existed that could assess the company's total change requirements, it might have helped both companies allocate proper resources to the most important things first. Schraeder & Self (2003) suggested that the overall evaluation process for a merger and acquisition (M&A) should put more effort into assessing cultural compatibility or fit prior to the engagement of the two firms. In short, making change alone is not enough for a company to survive in the long term; it is a matter of what needs to be changed and how it can be done to really deliver positive results.
Conclusion
This paper reviews the importance of organizational change capabilities based on research mostly conducted in the early 21st century.

In this study, a number of criteria for measuring the exdysivity of an organization were identified. Even though the proposed systematic tool has not been tested in a real setting, it offers a challenging and potentially high-impact proposal to the OD community. The tests to be performed can serve as a reference for further study and development, and are expected to contribute to the business community through better and more sustainable performance, resulting in a reasonable return for all stakeholders.

The exdysivity index can be applied to assess an organization's capability and effectiveness in pursuing the change process and change management. The result of the assessment can identify both the strengths and the weaknesses that an organization should focus on; the disclosed areas for improvement can lead to the development of best practices for the benefit of the organization.

The development of a reliable tool and system to disclose organizations' ability to change is a challenge in today's volatile and fast-moving environment. A great deal of research and development needs to be done to attract wider attention from interested academics.

This study opens up opportunities for future research on the development of a reliable tool for assessing organizations' capability to adapt and respond to change. The benefits of the proposed index can lead to other related topics worth further research. However, sufficient funding is needed to support the surveys required to establish an acceptable confidence level for the results.

As an organization needs to embrace change management as its competitive advantage, the elements of change capability can contribute to the success of the organization in achieving both financial and non-financial targets. The change process takes into account the fast-changing environment and operational change improvement (see Figure 1).

Figure 1: The relationship of exdysivity and change capability to organization targets.
Figure 2: Overview of conceptual criteria for assessment of a firm's exdysivity.
Efficacy of Compound Herbal Medicine Tong-Xie-Yao-Fang for Acute Radiation Enteritis and Its Potential Mechanisms: Evidence from Transcriptome Analysis

Acute radiation enteritis (ARE) is a common complication of radiotherapy for pelvic and abdominal malignancy. This study was designed to investigate the efficacy of Tong-Xie-Yao-Fang (TXYF) on ARE and to explore the underlying mechanisms by microarray analysis. The ARE rat model was established by a single abdominal irradiation with a gamma-ray dose of 10 Gy. Next, the ARE rats were treated with distilled water, TXYF, or glutamine by gavage for 7 consecutive days according to the scheduled groups. For each group, jejunal tissue was taken at 6 h after gastric lavage. The morphology of intestinal tissue was observed by hematoxylin and eosin (H&E) staining under a light microscope. The villus height and the whole-layer thickness in the TXYF-treated groups were significantly better than those of the model control group. Transcriptome analysis was performed using the Agilent SurePrint G3 Rat GE V2.0 microarray. A total of 90 differentially expressed genes (DEGs), including 48 upregulated genes and 42 downregulated genes, were identified by microarray and bioinformatics analysis. Protein–protein interaction (PPI), Gene Ontology (GO), and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses were conducted to explore the possible mechanisms by which DEGs take part in the TXYF-mediated therapeutic process for ARE. In conclusion, we show that TXYF has a protective effect on the intestinal tissue of rats with ARE and summarize several DEGs, suggesting possible mechanisms of TXYF-mediated efficacy for ARE.

1. Introduction
In recent years, a clinical consensus on radiation therapy of tumors has been reached, and treatment has gradually been standardized [1,2]. However, with the growing incidence of tumors and the increasing use of radiotherapy, increasing numbers of patients inevitably develop acute radiation enteritis (ARE) after radiation therapy for pelvic and abdominal malignancies [3]. ARE is a common intestinal complication during and after radiotherapy for abdominal and pelvic malignancies. Clinical manifestations of ARE commonly include abdominal pain, diarrhea, and bloody stool, and even sepsis, systemic inflammation, and multiple organ dysfunction syndrome, threatening patients' lives [4,5]. However, there is no standardized treatment for ARE, and symptomatic support is the main treatment at present. Although western medicine can achieve certain effects in clinical practice, the overall effect is still not satisfactory. As one of the treasure houses of China's traditional medicine, traditional Chinese medicine has unique advantages in the treatment of gastrointestinal diseases [6-8]. Tong-Xie-Yao-Fang (TXYF) is one of the classic prescriptions of traditional Chinese medicine, consisting of four herbal drugs: Rhizoma Atractylodis Macrocephalae, Radix Paeoniae Alba, Pericarpium Citri Reticulatae, and Radix Saposhnikoviae. This prescription has been widely applied in the clinical treatment of gastrointestinal diseases in China, including rectal ulcer syndrome and irritable bowel syndrome [9-11]. Li et al. applied a UPLC-MS/MS method to identify eleven bioactive components of TXYF, including 1 lactone, 2 monoterpene glucosides, 1 alkaloid, 5 flavonoids, and 2 chromones, helping to reveal the pharmacological basis of TXYF [12]. However, whether TXYF is efficacious against ARE, and through what mechanisms, remains largely unclear.
In this study, we established a physiologically relevant ARE rat model and observed the efficacy of TXYF on ARE by evaluating the pathological morphology of the small intestinal mucosa after irradiation and treatment. We further explored the potential mechanisms of TXYF in treating ARE based on transcriptome analysis and bioinformatics analysis.

2. Methods and Materials
2.1. Reagents and Instruments. TXYF was prepared from Rhizoma Atractylodis Macrocephalae, Radix Paeoniae Alba, Pericarpium Citri Reticulatae, and Radix Saposhnikoviae, purchased from the Wuxi Hospital of Traditional Chinese Medicine and combined in 6:4:3:2 proportions. The raw components were soaked in a 10-fold volume of distilled water for 0.5 h and boiled twice, first for 1.5 h and then for 1 h. The two decoctions were filtered, mixed together, concentrated to a 1:1 ratio (100% concentration), and stored at 4 °C for later use. TXYF was diluted in distilled water to a concentration of 4.92 g/mL and stored at room temperature before use.

2.2. Animal Model. Forty-eight male Sprague-Dawley (SD) rats weighing 200-220 g (No. 201805475) were purchased from Changzhou Cavens Animal Co. Ltd (Changzhou, China). All rats were housed in the Animal Experiment Center of Wuxi People's Hospital (SYXK(SU)2015-0004), maintained at constant temperature and humidity with a 12/12 h light/dark cycle according to the guidelines established by the Animal Core Facility of Nanjing Medical University. The 48 SD rats were randomly divided into four groups, A-D. Group A (n = 12) was given no treatment, while group B (n = 12), group C (n = 12), and group D (n = 12) underwent whole abdominal irradiation at a single dose of 10 Gy. From day 1 after irradiation, groups A and B were given distilled water, while group C was given TXYF (4.92 g/100 g) and group D was given glutamine (0.3 g/100 g) by gavage for 7 consecutive days. The volume of medicine was 2 mL/100 g/d, and the same volume of distilled water was given to groups A and B. The rats were euthanized by an overdose of the anesthetic sodium pentobarbital by injection. For each group, jejunal tissue was taken at 6 h after gastric lavage, and the morphology of intestinal tissue was observed by hematoxylin and eosin (H&E) staining (KenGEN BioTECH, Nanjing, China) under a light microscope. Three sections from each sample were checked, and the sections were independently evaluated by two pathologists. All experimental procedures were approved by the Supervisory Committee of Nanjing Medical University Animal Council.

2.3. RNA Extraction and Microarray Scanning. Three samples were extracted randomly from groups B and C, respectively. Total RNA was extracted from these six jejunal tissue samples using the mirVana™ isolation kit (Ambion, Austin, TX, USA) and quantified with a NanoDrop ND-2000 (Thermo Scientific), and RNA integrity was assessed using an Agilent Bioanalyzer 2100 (Agilent Technologies). Sample labeling, microarray hybridization, and washing were performed according to the manufacturer's standard protocols. Briefly, total RNA was transcribed to double-stranded cDNA, then synthesized into cRNA and labeled with Cyanine-3-CTP (OE BioTECH, Shanghai, China). The labeled cRNAs were hybridized onto the Agilent SurePrint G3 Rat GE V2.0 microarray. After washing, the arrays were scanned with an Agilent Scanner G2505C (Agilent Technologies).
2.4. Microarray Data Analysis. Feature Extraction software (version 10.7.1.1, Agilent Technologies) was used to analyze array images and obtain raw data. GeneSpring (version 13.1, Agilent Technologies) was employed for the basic analysis of the raw data. To begin with, the raw data were normalized with the quantile algorithm. Probes for which 100% of the values in at least one condition were flagged as "Detected" were chosen for further analysis. Differential probes were then identified through fold change (FC) as well as P values calculated with the t-test; the thresholds for up- and downregulated genes were FC ≥ 1.5 and P value ≤ 0.05. Afterward, hierarchical clustering was performed to display the distinguishable expression patterns of the probes among samples. Finally, Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses were applied to determine the roles of differentially expressed genes (DEGs) corresponding to the differential probes.

2.5. Protein-Protein Interaction (PPI) Network Construction. The protein-protein interaction (PPI) network was constructed by loading all the DEGs into the STRING database (https://string-db.org/) [13]. For all other parameters, the default settings were used. The *.tsv-format network files were loaded into the cytoHubba plug-in of the Cytoscape software [14]. We defined the top 5 genes as those with the highest prediction scores calculated by the MCC algorithm. In addition, the PPI network diagram was visualized with Cytoscape.

2.6. Statistical Analysis. All statistical analyses were performed in IBM SPSS Statistics 25.0. Most of the data were analyzed by Student's t-test or one-way ANOVA followed by Tukey's test. Unless otherwise noted, all data are presented as means ± SDs of five independent experiments. For all analyses, differences were considered statistically significant if P values were less than 0.05.
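As an illustration of the group comparison described in Section 2.6, the sketch below runs a one-way ANOVA followed by Tukey's HSD on synthetic villus-height measurements for the four groups. The numbers are placeholders rather than the study's data, and SciPy/statsmodels stand in for the SPSS workflow the authors actually used.

```python
# A minimal sketch of one-way ANOVA + Tukey's HSD; the villus-height
# values below are hypothetical, not measurements from the paper.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = {
    "A_control":   rng.normal(420, 25, 12),  # villus height (um), n = 12/group
    "B_model":     rng.normal(290, 25, 12),
    "C_TXYF":      rng.normal(380, 25, 12),
    "D_glutamine": rng.normal(375, 25, 12),
}

# One-way ANOVA across the four groups
f_stat, p_value = f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3g}")

# Tukey's HSD post hoc test for all pairwise comparisons
heights = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(heights, labels, alpha=0.05))
```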
3.1. Establishment of the Rat Model for Irradiation-Induced Acute Radiation Enteritis. There was no significant difference in the mental state or food intake of the rats in each group before irradiation. On day 2 after irradiation, the rats in groups B, C, and D had worse mental states and lower food intake than those in group A. In addition, the ARE rats showed obvious diarrhea, while the control rats continued to defecate normally. For each group, jejunal tissue was taken at 6 h after the last gastric lavage, and H&E staining revealed dramatic destruction in the intestines of irradiated animals, whereas the intestinal structure of control rats remained normal (Figures 1(b) and 1(c)). Further quantitative analysis confirmed the successful establishment of the ARE rat model: compared with control rats, ARE rats receiving no treatment exhibited significant decreases in villus height, crypt depth, mucosa thickness, and full thickness (Figures 2(a)-2(d)). All of these results confirmed the occurrence of ARE in rats from group B.

3.2. Efficacy of Tong-Xie-Yao-Fang on Acute Radiation Enteritis. As described above, on day 2 after irradiation, the rats in groups B, C, and D had worse mental states and lower food intake than group A rats. From day 1 after irradiation, groups A and B were given distilled water, while group C was given TXYF and group D was given glutamine by gavage for 7 consecutive days. Glutamine is an effective drug for ARE in both clinical practice and the laboratory according to previous publications [15,16]. On day 7 after irradiation, the above symptoms of the rats in group B still persisted with no improvement; moreover, one rat died on day 3. In contrast, the mental status and responsiveness of rats from groups C and D improved, and food intake increased. H&E staining and quantitative analysis confirmed significant increases in villus height and full thickness in TXYF-treated rats, although crypt depth and mucosa thickness showed no notable difference (Figures 1(b), 1(c), and 2(a)-2(d)). Taken together, these findings suggest that TXYF has obvious efficacy against ARE in vivo.

3.3. Quality Control of Microarray Analysis. To explore the potential mechanism of the TXYF-mediated efficacy for ARE, we next performed transcriptome analysis of jejunal tissues from groups B and C. Before the subsequent analysis, we applied relative log expression (RLE) boxplots and principal component analysis (PCA) to control the quality of the microarray data. The RLE boxplots revealed good symmetry of the data, suggesting that the quality of the total RNA was reliable (Figure S1). Through PCA, the distribution of samples was examined to verify the rationality of the experimental design and the uniformity of the biological duplicate samples. As shown in Figure 3, samples from the same group were distributed close together in two-dimensional (Figure 3(a)) or three-dimensional space (Figure 3(b)), suggesting that the samples involved in this research were representative and biologically repeatable.

3.4. Identification of Differentially Expressed Genes. Differential expression levels were checked with univariate Student's t-tests. In total, we identified 115 differential probes with FC ≥ 1.5 and P value ≤ 0.05 as potential candidates accounting for TXYF-induced efficacy (Figures 4(a) and 4(b)). A heatmap of hierarchical clustering analysis is a useful tool to reveal the expression differences of differential probes intuitively. The abundance profiles of the differential probes (Figure 4(c)) exhibited satisfactory discrimination between the two groups. The 115 differential probes corresponded to 90 functional DEGs, comprising 48 upregulated and 42 downregulated genes. The list of the 90 DEGs is shown in Table S1.

Figure 4: Heatmap and clustering analysis of ARE rats and normal controls. (a) The raw data were normalized, log2-transformed, and represented as a scatter plot in a two-dimensional rectangular coordinate plane. (b) The t-test was used to analyze the differential probes, represented as a volcano plot. (c) Cluster analysis of the expression of the 115 differential probes in the normal and ARE groups.
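The sketch below applies the same filtering criteria (FC ≥ 1.5 or ≤ 1/1.5, with P ≤ 0.05 from a two-sample t-test) to a random placeholder expression matrix rather than the study's GeneSpring output, simply to make the DEG-selection step explicit.

```python
# A minimal sketch of DEG filtering by fold change and t-test p-value.
# The expression matrix is random placeholder data, not the study's arrays.
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
# log2-scale intensities: 3 model (group B) vs 3 TXYF-treated (group C) arrays
data = pd.DataFrame(rng.normal(8, 1, size=(1000, 6)),
                    columns=["B1", "B2", "B3", "C1", "C2", "C3"])

model, treated = data[["B1", "B2", "B3"]], data[["C1", "C2", "C3"]]
log2_fc = treated.mean(axis=1) - model.mean(axis=1)  # difference of log2 means
fold_change = 2 ** log2_fc                           # linear-scale fold change
_, p_values = ttest_ind(treated, model, axis=1)      # per-probe two-sample t-test

up = (fold_change >= 1.5) & (p_values <= 0.05)
down = (fold_change <= 1 / 1.5) & (p_values <= 0.05)
print(f"{int(up.sum())} upregulated, {int(down.sum())} downregulated probes")
```

With purely random input, few probes pass both thresholds, which is the expected behavior of the filter under the null.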
3.5. Protein-Protein Network Construction and Enrichment Analysis. We searched the 90 DEGs in STRING and visualized the network using Cytoscape (Figure 5(a)). Based on the MCC algorithm, we extracted the top five genes (Alas2, Hba1, Hba2, LOC689064, and Hbb-b1) showing the closest connections with other genes, with Alas2 showing the closest connections (Figure 5(b)). GO enrichment analysis was next performed to predict the functional roles of the DEGs across three aspects: biological processes (BP), molecular functions (MF), and cellular components (CC). Several functional roles of both upregulated genes (Figure 6(a), Table S2) and downregulated genes (Figure 6(b), Table S3) were uncovered. With the KEGG enrichment analysis, a total of eight pathways were identified, including three pathways related to upregulated genes and five pathways related to downregulated genes (Table 1). Among all pathways, the top-ranking enriched term for downregulated genes was the TNF signaling pathway, which is the pathway most strongly associated with inflammation (Figure 7).

4. Discussion
The mechanism of ARE occurrence is very complicated. Modern medicine holds that ARE is mainly caused by reduced mitosis in mucosal crypts, destruction of the intestinal mucosal barrier, and acute inflammation [17]. The intestinal epithelium renews quickly, approximately every 3-5 days, and the sensitivity of human tissues to radiation is proportional to their proliferative capacity [18]. Therefore, the rapidly proliferating intestinal epithelium is more sensitive to ionizing radiation, and the risk of damage is relatively large. Radiation can inhibit or even halt the mitosis of stem cells in the intestinal crypts and cause degeneration and necrosis, interrupting the supply of cells to the villi, shortening the villi to bareness, and even destroying the structural integrity of the intestinal mucosa [19]. With the popularity of radiation therapy for tumors, the incidence of ARE is increasing substantially. However, no standardized treatment has been established for ARE in clinical practice to date.

In this study, we established a physiologically relevant ARE rat model. The jejunal mucosal villi showed obvious edema, accompanied by partial shedding of villous epithelial cells, decreased crypt depth, and inflammatory cell infiltration, indicating the successful establishment of the ARE model. After the application of TXYF, the villi were more complete, the mucosal edema was milder, and villus height and crypt depth were increased in the experimental rats, significantly improved over the model control group. Glutamine is an effective drug for ARE; encouragingly, no notable difference in bowel morphology was observed between the TXYF and glutamine groups. This suggests that TXYF can reduce tissue damage and accelerate intestinal repair, showing promising efficacy for ARE in vivo.

In the past decades, the vigorous development of high-throughput sequencing technology and computer-aided analytical methods has greatly promoted the flourishing of big data applications [20-22]. As an effective research strategy, transcriptome analysis has been widely used in several aspects of clinical and basic medical research [23]. Because compound Chinese medicines contain multiple active ingredients, their mechanisms of action tend to be complex. At present, transcriptome analysis is being applied to uncover the mechanisms of compound Chinese medicines in clinical practice [24]. To explore the potential mechanism of TXYF in treating ARE, six samples from ARE rats and TXYF-treated rats were submitted for transcriptome analysis. We identified 90 DEGs in total, and we then conducted PPI network construction and GO enrichment analysis to get a better view of the overall DEGs in TXYF-treated tissues. As the results showed, Alas2 exhibited the closest connections with other genes. It has been shown that growth hormone can increase Alas2 gene expression in the rat brain [25,26]; however, the roles of Alas2 in ARE have not been defined. GO enrichment analysis predicted the functional roles of both the upregulated and the downregulated DEGs.
With the KEGG enrichment analysis, eight significant pathways were identified. Among all pathways, the top-ranking enriched terms were Malaria and the TNF signaling pathway for upregulated and downregulated genes, respectively. As an inflammation-associated pathway, the role of TNF signaling in ARE has not been investigated. Inhibition of TNF pathways can suppress inflammation, and TXYF treatment downregulated Tab3, Bcl3, and Tnfaip3 gene expression, inhibiting the TNF pathway to some extent. Overall, further research should be performed to explore the potential mechanisms of TXYF in treating ARE.

Figure 7: An overview of the TNF pathway. Tab3, Bcl3, and Tnfaip3 were downregulated and had potential inhibitory effects on the TNF pathway.

5. Conclusion
In summary, we observe that TXYF has promising efficacy against ARE. Further analysis reveals several DEGs in the jejunal tissues in response to TXYF treatment, suggesting possible mechanisms of TXYF-mediated efficacy for ARE. We hope to establish a theoretical basis for TXYF-based treatment of radiation enteritis.

Data Availability
Data will be provided on request through the corresponding author of this article.
Prunus mume is an important ornamental woody plant that grows in tropical and subtropical regions. Freezing stress can adversely impact plant productivity and limit the expansion of a species' geographical range. Understanding cold-responsive genes could potentially lead to new ways to enhance plant freezing tolerance. Members of the serine/threonine protein kinase (CIPK) gene family play important roles in abiotic stress responses; however, the function of CIPK genes in P. mume remains poorly defined. A total of 16 CIPK genes were first identified in P. mume. A systematic phylogenetic analysis was conducted in which 253 CIPK genes from 12 species were divided into three groups. Furthermore, we analysed the chromosomal locations, molecular structures, motifs, and domains of CIPK genes in P. mume. All of the CIPK sequences contained NAF domains, and their promoter regions contained cis-acting regulatory elements related to stress responses. Three PmCIPK genes were identified as Pmu-miR172/167 target sites. Transcriptome data showed that most PmCIPK genes presented tissue-specific and time-specific expression profiles. Nine genes were highly expressed in flower buds in December and January, and 12 genes were up-regulated in stems in winter. The up-regulation of 12 PmCIPK genes during cold stress treatment was confirmed by qRT-PCR. Our study improves understanding of the role of the PmCIPK gene family in the low temperature response in woody plants and provides key candidate genes and a theoretical basis for cold-resistance molecular-assisted breeding in P. mume.

Subjects: Agricultural Science, Forestry

INTRODUCTION
Low temperature damage is an environmental stress that severely limits the geographic distribution and cultivation range of perennial plants (Weiser, 1970). Plants have evolved specific and effective molecular mechanisms to defend against low temperature injury. A number of functional cold-response genes have been confirmed in plants, and some of these genes are closely related to Ca2+ (e.g., C-repeat binding factor, CBF). Ca2+ signals represent a universal transduction signal in plants that is translated by elaborate Ca2+-binding proteins, many of which function as Ca2+ sensors and act on downstream responses (Kudla et al., 2018). The large number of probable SCaBP/CBL-PKS/CIPK combinations indicates that the Ca2+/SOS3/SOS2 signalling pathway is widely used in plants (Zhu, 2001, 2002). Calcineurin B-like proteins (CBLs) form functional complexes with CBL-interacting protein kinases (CIPKs, SnRK3s) to relay plant responses to many environmental signals and to regulate ion fluxes (Hashimoto et al., 2012), and the CBL-CIPK complexes perform important functions in signal transduction pathways in which Ca2+ is a second messenger, especially for various non-biological signals that regulate ion transporter activity (Luan, 2009; Zhu, 2016). The function of the CBL-CIPK network has been investigated quite intensively in recent years. In Populus euphratica, PeCBL/PeCIPK complexes have been identified and shown to function in the regulation of Na+/K+ homeostasis (Zhang et al., 2013a).
During the last few decades, many CBL-CIPK complexes have been shown to be involved in signal transduction during responses to salt and osmotic stress; however, few studies have concentrated on the role of the CBL-CIPK network during the cold stress response in plants. Recent studies have revealed that CIPK gene family members show significant increases in transcript levels after cold stress treatments (Chen et al., 2011; Niu et al., 2018). The plasma membrane protein COLD1 senses cold stress and produces a cytosolic Ca2+ signal. Calcium-dependent protein kinases (CPKs) and CBL-CIPK complexes transmit Ca2+ signalling to activate the mitogen-activated protein (MAP) kinase cascade, and activated MAPKs induce the phosphorylation of transcription factors (TFs) such as calmodulin-binding transcription activators and inducer of CBF expression (ICE) genes to induce the expression of cold-responsive genes (Zhu, 2016). Expression patterns indicated that ZmCIPK genes were up-regulated under abiotic stress, and 19 ZmCIPK genes responded to cold stress (Chen et al., 2011). The protein kinase CIPK7 is activated by CBL1 to enhance cold tolerance (Huang et al., 2011). The transcript levels of 4 BnaCIPK genes showed significant increases after cold stress treatment (Zhang et al., 2014). Overexpression of OsCIPK03 increased the tolerance of positive transgenic plantlets to cold stress (Xiang, Huang & Xiong, 2007).

CIPKs exhibit a conserved modular structure comprising a CIPK-specific C-terminal regulatory domain and a junction domain (Stout, Foster & Matthews, 2004). The latter contains the phosphatase interaction (PPI) domain and the autoregulatory NAF domain (Albrecht et al., 2001). The NAF domain, as the minimum protein module required for interaction, is both essential and sufficient to mediate interaction with the CBL calcium sensor proteins (Leulliot et al., 2007). The NAF domain, a 24-amino-acid domain named after the characteristic amino acids N, A, and F, is found in a plant-specific subgroup of kinases that interact with CBLs (Albrecht et al., 2001). Upon the interaction of CBLs with CIPKs, the auto-inhibitory NAF domain is released from the kinase domain, producing an active kinase conformation (Weinl & Kudla, 2009). Whereas the N-terminal part of CIPKs includes a conserved catalytic domain typical of serine/threonine kinases, the much less conserved C-terminal domain is unique to these serine/threonine protein kinases (Stout, Foster & Matthews, 2004).

Prunus mume is an important ornamental woody plant with diverse features, including winter flowering, colorful petals, a characteristic aroma, and green branches (Zhang et al., 2018a). P. mume also flowers early and can enrich the landscaping of cold areas in early spring. As a woody plant native to southern China, P. mume, which tolerates temperatures as low as -19 °C in the dormant period, has been domesticated for a long time, and some varieties have been cultivated across East Asia. However, P. mume is more sensitive to low temperatures than other woody plants such as Acer negundo and Viburnum plicatum var. tomentosum (Irving & Lanphear, 1967). Therefore, low temperature is a key factor limiting P. mume survival and growth in cold regions. Previous studies of CIPK genes have focused on herbaceous plants, with few reports of CIPK genes in woody plants. Recently, whole genome sequencing and genome resequencing of P. mume were completed, laying a foundation for exploring the molecular mechanism of cold resistance in P. mume at the molecular level (Zhang et al., 2012, 2018a).
Our aims are to clarify whether PmCIPK genes respond to low temperatures in P. mume and to provide new insight for further molecular dissection of the biological functions underlying cold tolerance in perennial woody plants.

Phylogenetic tree construction and calculation of Ka/Ks ratios
To study the phylogenetic relationships between CIPK genes in P. mume and other species, CIPK proteins from three species (P. mume, A. thaliana, and O. sativa) and CIPK proteins from nine Rosaceae species were used in a multiple sequence alignment with ClustalX 2.0.11 software (Larkin et al., 2007). Subsequently, a phylogenetic tree based on the sequences was constructed via the maximum likelihood (ML) method with 1,000 bootstrap replicates and the Jones-Taylor-Thornton model in MEGA X (Kumar et al., 2018). The phylogenetic trees of the CIPK sequences were annotated and embellished using the online Evolview tool (http://www.evolgenius.info/evolview/#login) (He et al., 2016).

Gene structures, protein tertiary structures and motif prediction
The exon/intron structures of PmCIPK genes were obtained with Gene Structure Display Server 2.0 (Hu et al., 2015) using genomic sequences and structural information. The PmCIPK protein sequences were submitted to Multiple Expectation Maximization for Motif Elicitation (MEME; http://meme-suite.org/index.html) (Brown et al., 2013) analysis to identify conserved motifs and structural divergence. The PmCIPK proteins were submitted to Pfam and SMART (http://smart.embl-heidelberg.de/) for analysis and confirmation of the CIPK-specific functional domains. The tertiary structures and homologs of PmCIPKs were predicted using the online server Phyre2 (http://www.sbg.bio.ic.ac.uk/phyre2/html) (Kelley & Sternberg, 2009).

Plant material and qRT-PCR analysis
Six-month-old seedlings grown at 24 °C under long-day conditions (16-h light/8-h dark) were used to examine the response of PmCIPK genes to cold. We incubated seedlings in soil at 4 °C at approximately 65% humidity. Leaves from treated seedlings were sampled at 0, 1, 4, 6, 12, and 24 h for total RNA isolation. First-strand cDNA synthesis was performed using a TIANScript First Strand cDNA Synthesis Kit (Tiangen, Beijing, China) according to the manufacturer's instructions. qRT-PCR was carried out using a PikoReal real-time PCR system (Thermo Fisher Scientific, CA, USA) with SYBR® Premix Ex Taq™ (TaKaRa, Dalian, China). The reactions were performed in a 10 μL volume containing 5 μL of SYBR® Premix Ex Taq II, 0.25 μL each of forward and reverse primers (Table S1), 0.5 μL of cDNA, and 3 μL of ddH2O. The reactions were run under the following conditions: 95 °C for 30 s; 40 cycles of 95 °C for 5 s and 60 °C for 40 s; 60 °C for 30 s; and an end step at 20 °C. The analyses were performed in triplicate. The relative expression levels of PmCIPK genes were calculated using the 2^-ΔΔCt method with the protein phosphatase 2A (PP2A) gene from P. mume as the reference gene. The final data were subjected to an analysis of variance.
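A minimal sketch of the 2^-ΔΔCt (Livak) calculation used above, with PP2A as the reference gene; the Ct values below are hypothetical placeholders, not measurements from this study.

```python
# A minimal sketch of the 2^-ddCt relative-quantification step.
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Livak 2^-ddCt: normalize the target gene to the reference gene
    (PP2A here), then compare treated vs control conditions."""
    dd_ct = ((ct_target_treated - ct_ref_treated)
             - (ct_target_control - ct_ref_control))
    return 2 ** (-dd_ct)

# Hypothetical Ct values for one PmCIPK gene at 6 h of 4 C treatment vs 0 h
fold = relative_expression(24.1, 20.0, 26.3, 20.1)
print(f"Relative expression: {fold:.2f}-fold")  # ~4.29-fold up-regulation
```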
Identification of CIPK genes in P. mume
Based on the HMMER search using the CIPK model, 16 non-redundant PmCIPKs were identified in the P. mume genome, and 194 CIPKs were identified in the other 10 species from the Rosaceae genomes. The putative CIPKs were named based on the hmmsearch E-values for the NAF domain (Table S2). The PmCIPK proteins showed a high level of sequence similarity (i.e., more than 51% identity on average). The predicted sizes of the 16 PmCIPKs ranged from 420 (PmCIPK15) to 508 (PmCIPK16) amino acids (aa), and the average molecular weight (MW) was 51.24 kDa. The predicted isoelectric points (pI) varied from 7.01 (PmCIPK15) to 9.59 (PmCIPK3), and all of these proteins were alkaline (pI > 7) (Table 1). The PmCIPK genes were unevenly distributed in the P. mume genome: they were densely distributed on Chr1, Chr2, and Chr3, which contained five, five, and three genes, respectively, whereas Chr4, Chr6, and Chr7 contained no PmCIPK genes (Fig. S1). A similar phenomenon of unbalanced chromosomal distribution of CIPK genes was also shown in the apple (Niu et al., 2018). The unbalanced distribution of genes may be related to species evolution and genetic variation.

Phylogenetic tree analysis and calculation of Ka/Ks ratios
Based on the multiple sequence alignments, we used the ML method to construct a phylogenetic tree of all CIPK sequences from P. mume, A. thaliana, and O. sativa. Based on the reviewed (SWISS-PROT) AtCIPKs and OsCIPKs, the 16 PmCIPKs were divided into three groups (Group I, Group II, and Group III) (Fig. S2). To better understand the evolutionary relationships between CIPKs, an ML phylogenetic tree was constructed using the CIPKs from 12 species, including 10 Rosaceae species (Fig. 1). Similar to the tree described above, the CIPKs were divided into three groups, and every group included similar clades (I-a, I-b; II-a, II-b; and III-a, III-b). Group I was the largest, containing 100 CIPKs, whereas Group III was the smallest, consisting of 77 members, indicating that CIPKs were distributed unevenly among the groups. The CIPKs from the Rosaceae genera were distributed uniformly in every small clade, whereas CIPKs from O. sativa tended to cluster together. Notably, all intron-rich PmCIPKs clustered in Group I, similar to the clustering of CIPKs from Zea mays (Chen et al., 2011). The PmCIPKs, PpCIPKs, and PaCIPKs clustered together and had similar distributions in the phylogenetic tree.

In the present study, three PmCIPK gene pairs were identified (PmCIPK11/3, PmCIPK16/3, and PmCIPK10/9), indicating that whole genome duplication/segmental duplications (SDs) may have been involved in the expansion of the CIPK gene family in P. mume (Table S3). In terms of evolution, a Ka/Ks ratio > 1 implies positive selection (adaptive evolution), a ratio = 1 implies neutral evolution (drift), and a ratio < 1 implies purifying selection (see the note to Table 1 for the PmCIPK gene pairs).
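As a worked illustration of this interpretation, and of converting a synonymous-substitution rate (Ks) into a divergence time via T = Ks/(2λ), the sketch below uses hypothetical Ka and Ks values. The clock rate λ is a commonly used plant synonymous-substitution rate and is an assumption here; the rate actually used by the authors is not shown in this excerpt.

```python
# A minimal sketch of Ka/Ks interpretation and divergence-time estimation.
LAMBDA = 6.1e-9  # synonymous substitutions per site per year (assumed rate)

def selection_mode(ka: float, ks: float) -> str:
    """Classify the selection regime from the Ka/Ks ratio."""
    ratio = ka / ks
    if ratio > 1:
        return f"Ka/Ks = {ratio:.2f}: positive selection"
    if ratio < 1:
        return f"Ka/Ks = {ratio:.2f}: purifying selection"
    return "Ka/Ks = 1: neutral evolution"

def divergence_mya(ks: float, rate: float = LAMBDA) -> float:
    """Divergence time in million years: T = Ks / (2 * rate)."""
    return ks / (2 * rate) / 1e6

# Hypothetical Ka and Ks giving a ratio of 0.28, like the PmCIPK10/9 pair
print(selection_mode(ka=0.14, ks=0.50))
print(f"~{divergence_mya(0.50):.1f} Mya")  # ~41.0 Mya with this assumed rate
```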
Motif prediction, gene structure, and protein tertiary structures
The serine/threonine protein kinases are likely found in all eukaryotes and have roles in a multitude of cellular processes. The kinase, NAF, and PPI domains are the basic characteristics of the CIPK family in plants, and conserved domain analysis showed that all 16 PmCIPKs contained the kinase, NAF, and PPI domains (Fig. 2B). MEME analysis showed that all PmCIPK proteins shared six motifs contained within the typical domains of plant CIPKs (the protein kinase domain and the NAF domain). A total of 14 distinct motifs were identified, and motif 1, motif 2, motif 3, motif 4, motif 5, and motif 9 were found in all of the PmCIPKs (Fig. S3). Furthermore, most of the PmCIPKs contained similar motifs, located in similar positions and with similar sizes in the PmCIPK sequences. Similar arrangements of motifs reflected close phylogenetic relationships between the PmCIPK genes (Fig. 2).

Exon-intron diversity is not only an important part of gene family evolution but may also influence gene expression. To understand the annotated features of the PmCIPK genes, we used GSDS 2.0 to plot the exon-intron coordinates. The results showed that, of the 16 PmCIPKs, 10 genes (62.5%) had exon numbers mainly concentrated in the range of one to three, while six genes (37.5%) contained multiple exons. We found that closely clustered PmCIPK genes in the same clades have similar exon numbers and UTR exon numbers, even though exon numbers differ significantly across the PmCIPK genes. Interestingly, the evolutionary relationships of the PmCIPKs divided them into two groups based on their different numbers of exons (Fig. 2C).

The three-dimensional structures of the 16 PmCIPK proteins were predicted (Fig. S4). Using template modelling, all CIPK sequences were modelled with 100.0% confidence by the single highest-scoring template. Five identical top-scoring proteins were found for all PmCIPK sequences. The hypothetical protein c6c9dB belongs to the serine/threonine-protein kinase MARK1, while the other four belong to the protein kinase family. Serine/threonine-protein kinase MARK1 (c6c9dB) shared 28-45% sequence identity with the PmCIPKs, which was anticipated as it is a homologue of CIPK, with 100% probability. The other four protein kinases (c5ebzF, c4wnkA, c3pfqA, and c4cfh) shared 24-44% sequence identity with the PmCIPKs (Table 2; Table S4).

Cis-acting regulatory elements and miR167/miR172 target site analysis
The structure of a promoter affects the affinity between the promoter and RNA polymerase, thereby affecting the level of gene expression. Cis-acting regulatory elements related to stress responses, including MYB, MYC, DRE, ABRE, and TC-rich repeats, were identified in the promoters of the PmCIPKs (Fig. 3). These elements were unevenly distributed in the promoter regions, and MYB elements were relatively abundant. MYBs are considered crucial TFs in responses to water, salt, drought, and cold (Li, Ng & Fan, 2015; Urao et al., 1993). Most of the PmCIPKs, except PmCIPK4, PmCIPK10, PmCIPK14, and PmCIPK15, contained ABRE (ACGTG) cis-promoter elements. ABRE is involved in abscisic acid (ABA) and VP1 responsiveness to abiotic stresses (Hattori, Terada & Hamasuna, 1995; Hobo et al., 1999). Meanwhile, DRE and TC-rich repeats, which respond to dehydration and cold stress or high-salt stress, were found in PmCIPK promoter regions (Germain et al., 2012; Narusaka et al., 2003).

MicroRNAs are important regulators of gene expression associated with abiotic stress in plants. Recent studies showed that miR167 and miR172 play a key role in the low temperature stress response in plants (Koc, Filiz & Tombuloglu, 2015; Zhang et al., 2008). In P. mume, four miR172 members (Pmu-miR172a-d) and four miR167 members (Pmu-miR167a-d) have been identified (Wang et al., 2014). Notably, two PmCIPKs (PmCIPK5 and PmCIPK6) were targeted by Pmu-miR172s, and only PmCIPK13 was targeted by Pmu-miR167s according to the psRNATarget online software (Fig. 4). We drew the energy dot plots for Pmu-miR172a, Pmu-miR172c, and Pmu-miR167b; the ΔG values were 3.7, 5.9, and 3.9 kcal/mol, respectively, at 37 °C in the plot files (Fig. S5). The energy required for the microRNA-target interaction is considered an important determinant of the response of mRNAs to miRNAs (Kertesz et al., 2007). miR167 and miR172 are highly conserved miRNA families among plants (Jones-Rhoades & Bartel, 2004). Wang et al. (2014) have shown that Pmu-miR167 and Pmu-miR172 family members negatively regulate their targets during the flower bud development process.
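For intuition about how a target site is recognized, the sketch below scans an mRNA for perfect antiparallel complementarity to the miRNA seed (nucleotides 2-8). This is a much simpler criterion than the expectation scoring and duplex free-energy calculation used by psRNATarget, and both sequences are hypothetical placeholders rather than the Pmu-miR172/167 or PmCIPK sequences.

```python
# A minimal sketch of miRNA seed-match scanning (simplified target search).
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_sites(mirna: str, mrna: str, seed=(1, 8)) -> list:
    """Return 0-based mRNA positions whose window is perfectly
    complementary (antiparallel) to the miRNA seed (nucleotides 2-8)."""
    seed_seq = mirna[seed[0]:seed[1]]  # miRNA seed, 5'->3'
    # The mRNA site (5'->3') is the reverse complement of the seed
    target = "".join(COMPLEMENT[b] for b in reversed(seed_seq))
    return [i for i in range(len(mrna) - len(target) + 1)
            if mrna[i:i + len(target)] == target]

mirna = "UGGAAUGUUGUCUGGCUCGAGG"        # hypothetical miR167-like sequence
mrna = "AUGCCAGAUCACAUUCCAGCAUUCAUC"    # hypothetical CDS fragment
print(seed_sites(mirna, mrna))          # [10]
```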
Expression pattern analysis of PmCIPKs
Across different tissues (bud, leaf, root, stem, and fruit), the heat maps of the 16 PmCIPK genes showed differential expression, suggesting specific functions at particular stages of development (Fig. 5A; Table S5). For instance, seven genes (PmCIPK1, PmCIPK2, PmCIPK5, PmCIPK6, PmCIPK9, PmCIPK12, and PmCIPK13) exhibited high expression in the leaf. All of the PmCIPK genes were expressed during the bud dormancy process, and most of them had specific functions at particular stages of development (Fig. 5B; Table S6). We found that six PmCIPKs in subset I showed high expression levels at the NF stage, whereas subset II genes were significantly down-regulated except for PmCIPK10. Some of the genes were significantly up-regulated during the dormancy-breaking process.

To further analyse the expression patterns of PmCIPK genes in the cold response, the PmCIPK genes were clustered based on different growth regions and periods. We found that the expression profiles of PmCIPK genes clustered significantly across three periods (autumn, winter, and spring). The expression levels of most PmCIPK genes were significantly up-regulated in autumn and winter, showing a response to cold acclimation and low-temperature stress. However, PmCIPK14 and PmCIPK15 showed high expression levels in the spring. Clustering of PmCIPK genes across different regions produced results similar to those obtained across different periods (Fig. 5C; Table S7): most up-regulated PmCIPK genes clustered in winter. The clustering results showed that the expression levels of most PmCIPK genes were up-regulated at low temperature. Among the up-regulated genes, we identified a SnRK (SNF1-related protein kinase, Pm004487) that could respond to the ABA complex and MAPK pathways of stress signalling in plants.

To examine the signalling pathways among PmCIPK proteins, KOALA (KEGG Orthology And Links Annotation) analysis, based on the K number assignment of KEGG GENES using SSEARCH computation, was carried out. The PmCIPK13 sequence was identified in the scoring scheme for K numbers, and its K number was K07198, which is associated with the AMP-activated protein kinase (AMPK) signalling pathway (ko04152). The major stress signalling pathways in plants are associated with the mammalian AMPK kinase, suggesting that these pathways may have evolved from an energy-sensing pathway (Zhu, 2016). The catalytic domain of PmCIPK13 was similar to that of the mammalian AMPK. To further elucidate the putative functions of PmCIPKs, predicted interactions of PmCIPKs were established based on homologous proteins in A. thaliana (Fig. 6). The interaction network showed no direct interactions among the CIPKs; instead, the CIPKs were linked through interactions with CBLs, as has been reported in many species. PmCIPK13 may play an important role in the interaction network, which is associated with abiotic stress tolerance.

Quantification of gene expression
Cold acclimation was found to contribute to the maximum freezing tolerance of plants.
To investigate the function of PmCIPKs, the expression levels of PmCIPKs under low temperature treatment were examined by qRT-PCR. The PmCIPK genes were up- or down-regulated over the time course of the 4 °C treatment (Fig. 7). We compared the expression patterns of these CIPK genes across P. mume, A. thaliana, and O. sativa during cold stress, and the expression levels of some CIPK genes were similar among these plants (Fig. S6). The majority of PmCIPKs were up-regulated within 4 or 6 h, except for PmCIPK1, PmCIPK2, PmCIPK4, and PmCIPK7; the highest expression levels were found at 6 h, after which expression was down-regulated as the cold treatment continued. We found that the target genes of Pmu-miR172/167 (PmCIPK5, PmCIPK6, and PmCIPK13) were up-regulated following cold treatment compared with the control, while the expression levels of other genes (PmCIPK3, PmCIPK16) were relatively stable. Only PmCIPK7 was rapidly up-regulated at 24 h under the 4 °C treatment. The expression levels of PmCIPK5, PmCIPK6, PmCIPK8, and PmCIPK13 were up-regulated and showed changes similar to those of the PmCBF gene, which has been shown to be related to the cold response (Zhang et al., 2013b).

DISCUSSION
Freezing stress hinders the growth of most plants and is an increasing threat to the expanded cultivation of plants such as the perennial woody plant P. mume. Wheat crops grown under normal temperature conditions are killed by freezing temperatures of approximately -5 °C, but the species can survive temperatures as low as -20 °C after cold acclimation (Thomashow, 1999). As reported in previous research, Ca2+ plays an important role in the cold stress response, and the functions of the CIPK-CBL complex are closely related to Ca2+. The expression levels of CBF/DREB1 genes were enhanced by a vacuolar Ca2+/H+ antiporter (CAX1) to confer increased freezing tolerance (Catala et al., 2003). In P. mume, cold stress-related genes have been cloned, functional assays using the complete genome have been carried out, and the exact roles of C-repeat/DRE binding factors (CBF/DREB) in cold tolerance have been investigated (Sun et al., 2015; Zhang et al., 2013b). The expression levels of PmCIPKs were up-regulated or down-regulated by cold treatment, although there were differences in the expression levels of these genes (Fig. 7). These expression patterns were similar to those of their homologs in wheat and maize (Chen et al., 2011; Sun et al., 2015), indicating that PmCIPKs might play important roles in the cold response. Domestication has been a conventional breeding method for crop improvement; however, creating useful traits via conventional breeding, including in P. mume, has required considerable time, and plants have evolved during long-term breeding. The percentage of shared genes is as high as 55.0% among six Rosaceae species (P. mume, P. persica, F. vesca, P. avium, M. × domestica, and P. yedoensis), as shown by comparative genome analysis involving TFs, functional genes, and uncharacterized genes (Chagné et al., 2014). The P. mume genome contains nine ancestral chromosomes, which is unique in the Rosaceae family, revealing that the ancestral chromosomes evolved into eight current chromosomes in P. mume after 11 fusion events,
seven current chromosomes in the strawberry after 15 fusion events, and 17 current chromosomes in the apple after one whole genome duplication and five fusion events (Shulaev et al., 2011; Velasco et al., 2010; Zhang et al., 2012). We identified 16-30 CIPK genes in each of the 10 Rosaceae species, whereas the number of CIPK genes in Prunus species was relatively concentrated, ranging from 16 to 20. In the phylogenetic tree, Prunus species showed closer affinity relationships within the Rosaceae family (Fig. 1), and genome resequencing analysis indicated a strong signature of introgression in Prunus species (Zhang et al., 2018a). The diversification of Prunus genomes traces back to before 66 Mya, followed by a period of successive splits (36-44 Mya) (Chagné et al., 2014).

The NAF domain is found in a plant-specific subgroup of CIPKs that mediates the interaction with the CBL calcium sensor proteins. The CIPK-CBL complexes may connect low temperature-induced calcium signalling with the ICE or SnRK2.6/OST1 cascades, because CIPK-CBL binds to calcium and calmodulin (Zhu, 2016). Recent studies have shown that low temperature can activate SnRK2.6/OST1 and that SnRK2.6 interacts with and phosphorylates ICE1 to regulate CBF-COR gene expression during cold stress induction and freezing tolerance (Ding et al., 2015). A total of 16 PmCIPK genes were identified based on the NAF domain, and most PmCIPK genes showed significant transcript accumulation during the low temperature period. Among the up-regulated genes, we identified a SnRK (SNF1-related protein kinase, Pm004487) that could respond to the ABA complex and MAPK pathways of stress signalling in plants. Hormone signal transduction and MAPK signalling pathways play a critical role in abiotic stress in plants (Danquah et al., 2014). The binding of a CBL protein to the regulatory NAF domain of a CIPK protein causes activation of the kinase in a calcium-dependent manner (Albrecht et al., 2001). The CBL/CIPK complex acts as a Ca2+ sensor involved in ABA signalling and stress-induced ABA biosynthesis pathways and contributes to the regulation of early stress-related CBF/DREB TFs (Guo et al., 2002).

Tissue-specific transcription profiles of the PmCIPK genes were identified, and most of the CIPK genes were differentially expressed in different tissues and at different stages of dormancy release. PmCIPKs were expressed specifically in the leaf and stem, with tissue-specific expression profiles similar to those of CIPKs from M. × domestica and O. sativa (Kanwar et al., 2014; Niu et al., 2018). PmCIPK4, PmCIPK7, PmCIPK10, and PmCIPK11 were expressed in buds, whereas PmCIPK5, PmCIPK9, and PmCIPK13 were expressed in fruits, suggesting that they might affect seed size and embryonic development, similar to AtCIPKs in A. thaliana (Eckert et al., 2014).
Notably, PmCIPK3, PmCIPK7, PmCIPK8, PmCIPK12, PmCIPK15, and PmCIPK16 were expressed at low levels during the dormancy stages and were quickly up-regulated during dormancy release. We speculate that these PmCIPKs might play important roles in the process of dormancy release. Expression of AtCBL1 is induced by cold stress, and AtCBL1 associates with AtCIPK7, which mediates plant responses to cold stress (Huang et al., 2011). Similarly, BdCIPK31 improved osmoprotectant biosynthesis and ROS detoxification to enhance low-temperature tolerance in transgenic tobacco (Luo et al., 2018). Here, PmCIPK1, PmCIPK2, PmCIPK5, PmCIPK6, PmCIPK10, PmCIPK13, and PmCIPK14 were significantly regulated in the depths of winter (Fig. 5) and might be involved in the low-temperature response that protects flower buds at subfreezing temperatures. When plants encounter cold stress, a series of cellular activities and molecular mechanisms is activated that allows the plant to adapt to low temperatures (Shi, Ding & Yang, 2018). Previous studies have shown that miR167 and miR172 play key roles in the cold stress response in plants (Koc, Filiz & Tombuloglu, 2015; Zhang et al., 2008). In P. mume, two PmCIPKs (PmCIPK5 and PmCIPK6) were targeted by Pmu-miR172s, and only PmCIPK13, which belongs to Group I, was targeted by Pmu-miR167s (Fig. 1; Fig. S2). These genes contain multiple exons, and their target sites are all located in the coding region (Fig. 4). PmCIPK5 is highly homologous to the Arabidopsis AtCIPK3 (AT2G26980.4) gene. The expression of AtCIPK3 is responsive to cold, drought, ABA, high salt, and wounding, and AtCIPK3 can act as a "node" between the ABA-dependent and ABA-independent pathways in the cold response signalling network (Kim, 2003). Xiang, Huang & Xiong (2007) reported that overexpression of the OsCIPK3 gene could dramatically strengthen tolerance to cold stress in transgenic plants. These proteins include a putative protein phosphatase 2C (PP2C) binding site (Vlad et al., 2009). The core ABA co-receptor complex, which includes the PP2C protein, plays an essential role in the response to various adaptive stresses (Guo et al., 2001). Three cold response genes (PmCIPK5, PmCIPK6, and PmCIPK13) were screened out, laying a foundation for subsequent studies. However, many gaps remain in our knowledge of the mechanisms underlying cold acclimation, and further analyses are needed to increase our understanding of these processes. This is essential not only for defining a molecular mechanism of cold acclimation but also for agricultural production in geographical regions where crop and horticultural plant species are grown.

CONCLUSIONS

Although the molecular functions of some CIPK-CBL protein complexes have been verified in the herbaceous plants A. thaliana and O. sativa, their functions in woody plants remain unclear. In this study, we performed the first genome-wide identification of the CIPK gene family in P. mume.
Sixteen PmCIPK genes were identified, and 12 PmCIPK genes, including the Pmu-miR172/167-targeted genes PmCIPK5 and PmCIPK6, were up-regulated by cold treatment. Nine PmCIPKs were highly expressed in flower buds in December and January, and twelve PmCIPKs were up-regulated in stems in winter. Our results suggest that the roles of PmCIPKs in regulating the response to low-temperature stress may confer freezing tolerance on plants, especially during the freezing period of winter.

Figure 2: Phylogenetic analysis and gene structure of PmCIPK genes. (A) Phylogenetic tree constructed from the full amino acid sequences of PmCIPKs. (B) Motif distribution of PmCIPK proteins; protein motif architectures were determined with the Pfam and MEME online tools. (C) Exon and intron structures of PmCIPK genes; yellow round-cornered rectangles represent exons, black lines represent introns, and blue round-cornered rectangles represent UTRs. (DOI: 10.7717/peerj.6847/fig-2)

Figure 3: Types and numbers of cis-acting promoter elements involved in the stress response. The x-axis represents the 1.5 kb upstream promoter region of PmCIPK genes; the y-axis represents the number of cis-elements. (DOI: 10.7717/peerj.6847/fig-3)

Figure 7: Expression levels of PmCIPK genes under 4 °C treatment. The expression levels of the 16 PmCIPK genes (A-P) were obtained by qRT-PCR; the protein phosphatase 2A (PP2A) gene was used as the internal control to standardize each reaction. (DOI: 10.7717/peerj.6847/fig-7)

Table 1: The PmCIPK gene family members in P. mume. The Ka/Ks ratio was meaningful for only one PmCIPK gene pair (PmCIPK10/9), at 0.28, indicating predominantly synonymous change; the divergence time of the PmCIPK10/9 gene pair ranged from 41.24 to 76.38 Mya.

Table 2: Confidence values and sequence identities for the homologous relationships of the PmCIPKs.
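To make the Pmu-miR172/167 target-site identification discussed above concrete, a toy sketch of complementarity-based target scanning. The sequences are invented, and real predictions rely on dedicated tools with weighted seed-region scoring rather than a plain mismatch count:

```python
# Toy sketch of miRNA target-site scanning by sequence complementarity, in the
# spirit of the Pmu-miR172/167 target prediction discussed above. The miRNA
# and transcript sequences are invented stand-ins.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    return "".join(COMPLEMENT[b] for b in reversed(rna))

def scan_targets(mirna: str, transcript: str, max_mismatches: int = 3):
    """Slide the reverse complement of the miRNA along the transcript and
    report windows with few enough mismatches to be candidate target sites."""
    probe = reverse_complement(mirna)
    hits = []
    for i in range(len(transcript) - len(probe) + 1):
        window = transcript[i : i + len(probe)]
        mm = sum(a != b for a, b in zip(window, probe))
        if mm <= max_mismatches:
            hits.append((i, mm))
    return hits

mirna = "AGAAUCUUGAUGAUGCUGCAU"  # hypothetical 21-nt mature miRNA
site = reverse_complement(mirna)
# embed the site in a fake coding sequence with one mismatch introduced
mutated = site[:8] + ("A" if site[8] != "A" else "G") + site[9:]
transcript = "GGCAAUG" + mutated + "UGACCAA"

for pos, mm in scan_targets(mirna, transcript):
    print(f"candidate target site at position {pos} with {mm} mismatch(es)")
```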
Far-from-equilibrium attractors for massive kinetic theory in the relaxation time approximation

We investigate whether early- and late-time attractors for non-conformal kinetic theories exist by computing the time evolution of a large set of moments of the one-particle distribution function. For this purpose we make use of a previously obtained exact solution of the 0+1D boost-invariant massive Boltzmann equation in the relaxation time approximation. We extend prior attractor studies of non-conformal systems by using a realistic mass- and temperature-dependent relaxation time and explicitly computing the effect of varying both the initial momentum-space anisotropy and the initialization time on the time evolution of a large set of integral moments. Our findings are consistent with prior studies, which found that there is an attractor for the scaled longitudinal pressure, but not for the shear and bulk viscous corrections separately. We further present evidence that both late- and early-time attractors exist for all moments of the one-particle distribution function that contain greater than one power of the longitudinal momentum squared.

Introduction

A fundamental question in far-from-equilibrium relativistic dynamics in recent decades has been the extent to which relativistic viscous hydrodynamics can describe such dynamics. Exact solutions of the relativistic Boltzmann equation within the relaxation time approximation (RTA) have proven critical in evaluating the accuracy of dissipative hydrodynamic models. These solutions also shed light on the evolution of far-from-equilibrium attractors, which merge smoothly onto viscous hydrodynamics at late times yet remain valid earlier, where linearized treatments fail.

While most studies have concentrated on conformal systems, there is growing interest in non-conformal systems, where the dynamics are influenced by more than one scale. In this proceedings contribution, we extend prior exact solutions for a massive gas [1] by computing all moments of the one-particle distribution function. In addition, we use a temperature- and mass-dependent relaxation time and search for separate attractors for the shear and bulk viscous corrections. The work we report on, which originally appeared in Ref. [2], goes beyond prior works in which it was assumed that the relaxation time was either constant or inversely proportional to the temperature.

We found in Ref. [2] that moments with greater than one power of the longitudinal momentum squared possess both forward and early-time (pull-back) attractors. However, we found that although the shear and bulk viscous corrections do not have early-time attractors on their own, the difference between them does, indicating an attractor in the scaled longitudinal pressure. These findings are consistent with and expand upon the findings of other groups.

Setup

We assume Bjorken flow, in which case, in Milne coordinates, one has u^τ = 1 and u^x = u^y = u^ς = 0, where τ = √(t² − z²) is the longitudinal proper time and ς = tanh⁻¹(z/t) is the spatial rapidity. Therefore, all scalar quantities depend only on the longitudinal proper time τ. We start with the RTA Boltzmann equation

p^μ ∂_μ f = C[f] ,  (1)

where p^μ is the particle four-momentum, u^μ is the four-velocity of the local rest frame, a · b ≡ a^μ b_μ, and C[f] = (p · u / τ_eq)(f_eq − f) is the collisional kernel in the relaxation time approximation. This equation takes a simple form when written in terms of the boost-invariant variables w ≡ t p_L − z E and p_T [1, 3]:

∂f(τ, w, p_T)/∂τ = [f_eq(τ, w, p_T) − f(τ, w, p_T)] / τ_eq(τ) .  (2)
Here, the exact solution to eq. (2) is [3]

f(τ, w, p_T) = D(τ, τ_0) f_0(w, p_T) + ∫_{τ_0}^{τ} [dτ'/τ_eq(τ')] D(τ, τ') f_eq(τ', w, p_T) ,  (3)

where the damping function D is defined as D(τ_2, τ_1) = exp[−∫_{τ_1}^{τ_2} dτ''/τ_eq(τ'')], f is the one-particle distribution function, and f_eq is the equilibrium distribution, assumed here to be a Boltzmann distribution. Written in terms of the boost-invariant variables, the equilibrium distribution function is

f_eq(τ, w, p_T) = exp[−√(w² + (p_T² + m²) τ²) / (τ T(τ))] .  (4)

The initial distribution function f_0(w, p_T) is specified at τ = τ_0 and assumed to be of spheroidally deformed (Romatschke-Strickland) form [4, 5],

f_0(w, p_T) = exp[−√((1 + ξ_0) w²/τ_0² + p_T² + m²) / Λ_0] ,

where ξ_0 is the initial anisotropy parameter and Λ_0 is the initial transverse momentum scale.

The relaxation time τ_eq is defined as τ_eq(T, m) = 5 η̄ γ(m̂)/T, with γ(m̂) ≡ 3κ(m̂)/(1 + ε/P) and m̂ ≡ m/T. By plotting γ(m̂), we observed that γ(m̂) goes to unity in the massless limit and grows linearly at large m/T, which corresponds either to fixed temperature and large mass or to fixed mass and small temperature [2]. We note that γ(m̂) ≥ 1 implies that a massive gas always relaxes more slowly to equilibrium than a massless one.

To understand the properties and dynamics of the system, we work with general moments of the one-particle distribution function [2, 6, 7], which can be expressed in terms of the boost-invariant variables as

M^{nl}(τ) ≡ ∫ dP (p · u)^n (p · z)^{2l} f(τ, w, p_T) ,

with z^μ the longitudinal basis vector, together with the general moments scaled by their equilibrium values, M̄^{nl} ≡ M^{nl}/M^{nl}_eq; in the late-time limit (τ → ∞), if the system approaches equilibrium, then M̄^{nl} → 1. Taking a general moment of eq. (3) and evaluating the necessary integrals, one obtains a closed integral equation for M^{nl}(τ) (the explicit form is given in Ref. [2]). Finally, specializing to the case n = 2 and l = 0 and enforcing Landau matching, ε(τ) = ε_eq(T), one obtains an integral equation for the effective temperature T(τ), which can be solved iteratively.

Since we obtain all the scaled moments, we can compute the viscous corrections scaled by the equilibrium pressure. One finds that the bulk viscous correction can be expressed as Π̄ ≡ Π/P = −m²(M^{00} − M^{00}_eq)/(M^{20}_eq − m² M^{00}_eq) and the shear viscous correction as π̄ ≡ π/P = 1 − M̄^{01} + Π̄. Next, one obtains expressions for the scaled moments in the Navier-Stokes limit by adding the shear and bulk corrections to the equilibrium result and scaling by the equilibrium moments, within the 14-moment approximation [8] and the Chapman-Enskog approximation [9]; the explicit expressions, referred to below as eqs. (8) and (9), are given in Ref. [2].

Results and Conclusions

In this proceedings contribution, we report on our extension of prior research into the presence of attractors in non-conformal kinetic theory. We explored the time evolution of a large set of integral moments of the distribution function using an exact solution of the boost-invariant Boltzmann equation within the RTA, and we released the computational code along with the paper [2]. Our findings, summarized in fig. 1, are consistent with recent research that found late- and early-time attractors in the scaled longitudinal pressure but not in the shear and bulk viscous corrections taken separately.

In contrast to the conformal situation, our results indicate that no early-time attractor exists for moments with l = 0. We found that the Chapman-Enskog approximation agrees better with the exact solution than the 14-moment approximation, especially for small masses and higher-order moments (see fig. 1).
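To make eqs. (2)-(4) concrete, a minimal numerical sketch of the damping function D and of the fixed-point iteration used to solve integral equations of this type. The relaxation-time law and the update rule here are simplified stand-ins (a conformal-like τ_eq and a toy matching equation), not the full kernels of Ref. [2]:

```python
# Minimal sketch of the damping function D(tau2, tau1) from eq. (3) and the
# fixed-point iteration pattern used to solve such integral equations.
# The relaxation-time law and the update rule are placeholders; the actual
# Landau-matching kernels of Ref. [2] involve special functions omitted here.
import numpy as np

ETA_BAR = 5.0 / (4.0 * np.pi)       # assumed specific shear viscosity
tau = np.linspace(0.1, 10.0, 400)   # proper-time grid in fm/c

def tau_eq(T):
    """Conformal-like relaxation time, tau_eq = 5 eta_bar / T (gamma = 1)."""
    return 5.0 * ETA_BAR / T

def damping(T_profile):
    """D[i, j] = exp(-int_{tau_j}^{tau_i} dtau'' / tau_eq) for i >= j."""
    inv = 1.0 / tau_eq(T_profile)
    # cumulative integral of 1/tau_eq via the trapezoid rule
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (inv[1:] + inv[:-1]) * np.diff(tau))))
    return np.exp(-(cum[:, None] - cum[None, :]))  # only i >= j entries are used

def fixed_point(T0, n_iter=50):
    """Iterate T_{k+1} = F[T_k] on the grid; F is a toy stand-in for the
    matching equation, with the same free-streaming + feed-in structure."""
    T = np.full_like(tau, T0)
    for _ in range(n_iter):
        D = damping(T)
        feed = np.array([np.trapz(D[i, : i + 1] * T[: i + 1] / tau_eq(T[: i + 1]),
                                  tau[: i + 1]) for i in range(len(tau))])
        T_new = D[:, 0] * T0 + feed      # mirrors the structure of eq. (3)
        if np.max(np.abs(T_new - T)) < 1e-10:
            break
        T = T_new
    return T

T_profile = fixed_point(T0=0.6)  # GeV; illustrative initial temperature
print(f"T(tau_f) = {T_profile[-1]:.4f} GeV after iteration")
```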
Furthermore, we found that at late times the first-order gradient expansion does not adequately describe the bulk viscous correction. Moreover, our analysis shows a consistent approach to the forward attractor for all moments and a semi-universal behavior of the early-time dynamics of the l = 0 moments for phenomenologically relevant initialization times. This semi-universality translates into a slight uncertainty of the attractor in non-conformal systems, but it still supports the utility of attractors in models of heavy-ion collisions.

Figure 1: Scaled moments M̄^{nl} as a function of rescaled time for the case m = 1 GeV and T_0 = 1 GeV. The solid black lines correspond to the attractor solution, the solid red lines are the first-order 14-moment predictions of eq. (8), and the solid green lines are the first-order Chapman-Enskog predictions of eq. (9). Top: the non-solid lines are specific initial conditions initialized at τ_0 = 0.1 fm/c with α_0 = 1/√(1 + ξ_0) ∈ {0.12, 0.25, 0.5, 1, 2}. Bottom: the non-solid lines are specific initial conditions initialized with ξ_0 = 0 at τ_0 ∈ {0.01, 0.02, 0.04} fm/c. Note that for M̄^{10} and M̄^{01} the 14-moment and Chapman-Enskog predictions are precisely the same.
Using latent class and quantum models to value equity in health care: a tale of 2 stories

Surachat Ngorsuraches, PhD

ABSTRACT

Cost-effectiveness analysis (CEA) with the quality-adjusted life-year (QALY) was introduced to address health equity concerns in value assessment. However, the QALY fails to capture patient preference. Stated preference methods (eg, the discrete choice experiment [DCE]) have been increasingly used to incorporate patient preference into the value assessment framework in health care, but methods that address a moral dimension such as health equity do not yet exist. The objective of this paper was to describe 2 stated preference approaches that can empirically value health equity. First, decision-maker perceptions of the prevalence of equity dimensions in DCE choice tasks are identified, and a latent class model based on random utility theory is proposed to derive the value of equity from decision makers with different perceptions of the prevalence of equity dimensions. Second, equity attributes are incorporated into DCE choice tasks, and a quantum choice model, which can capture stochasticity during the decision process in the mind of the decision maker, is used to value equity. These approaches will improve existing value assessment methods so that they address health equity adequately.

To love another person is to see the face of God. -Victor Hugo, Les Misérables

Despite the increasing interest in developing assessments of the novel value elements identified by the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) Special Task Force [1], relatively few attempts have been made to value equity. None of the US value assessment frameworks directly defined equity or proposed an empirical measure of it; only some potential elements (eg, unmet need) were considered [2]. In 2017, cost-effectiveness analysis (CEA) with quality-adjusted life-years (QALYs) was introduced to address health equity concerns in value assessment [3]. Health economists have primarily used 2 approaches, equity impact analysis and equity trade-off analysis, to address these concerns [3]. Equity impact analysis quantifies the distribution of costs and effects by equity-relevant variables; two examples are extended CEA and distributional CEA. Equity trade-off analysis, on the other hand, quantifies trade-offs between improving total health and other equity objectives. Its two main approaches are equity constraint analysis and equity weighting analysis, which count the cost of equity and value the importance of equity, respectively. As in equity impact analysis, CEA with QALYs has been used to capture the health loss associated with choosing a more equitable option in equity constraint analysis, and the QALY serves as a common metric in equity weighting analysis. Despite the wide use of the QALY metric in value assessment, it remains controversial because it is a single-dimensional and generic health measure. Furthermore, CEA with QALYs fails to capture patient preference and does not address the heterogeneity of patient preference. Although stated preference methods (eg, the discrete choice experiment [DCE] and best-worst scaling [BWS]) have been increasingly used to incorporate patient preference into the value assessment framework in health care, methods that address a moral dimension, such as health equity, do not exist. There is, therefore, a critical need to improve existing methods that inadequately address health equity issues.
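Both proposed approaches build on random utility theory, in which choice probabilities take a multinomial logit form. A minimal sketch, assuming linear utilities; the attribute names and preference weights below are invented for illustration:

```python
# Minimal random-utility sketch: multinomial logit choice probabilities for a
# DCE choice task. Attribute names and coefficients are illustrative only.
import math

def choice_probabilities(alternatives, weights):
    """U_j = sum_k beta_k * x_jk ; P(j) = exp(U_j) / sum_i exp(U_i)."""
    utilities = [sum(weights[k] * alt[k] for k in weights) for alt in alternatives]
    m = max(utilities)                      # subtract max for numerical stability
    exp_u = [math.exp(u - m) for u in utilities]
    total = sum(exp_u)
    return [e / total for e in exp_u]

# Two hypothetical treatment profiles: health gain (QALYs) and out-of-pocket cost ($)
alternatives = [
    {"health_gain": 0.8, "cost": 5000},
    {"health_gain": 0.5, "cost": 2000},
]
weights = {"health_gain": 2.0, "cost": -0.0004}  # invented preference weights

probs = choice_probabilities(alternatives, weights)
print([f"{p:.3f}" for p in probs])

# Marginal willingness to pay for 1 QALY = -beta_health / beta_cost,
# the quantity compared across latent classes ($X vs $Y) in the first approach
wtp = -weights["health_gain"] / weights["cost"]
print(f"marginal WTP per QALY: ${wtp:,.0f}")
```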
Proposed Approaches

Only a few stated preference studies that explicitly acknowledge and explore the moral dimensions of choice behavior exist [4,5]. One of the major challenges is that sometimes the moral choice is obvious, while at other times it is more latent or implicit [4]. A moral dimension may be considered to some extent in the choice situations of these stated preference studies; however, these studies do not explicitly consider the moral dimension of the decision. Approaches from various disciplines should be harnessed to rigorously translate people's considered preferences, to address health equity issues, and to give guidance to health care decision makers [4]. This paper proposes to use 2 approaches, previously developed and applied to examine the moral dimension in the transportation field, to overcome this challenge and address equity concerns in health care [4,6]. The objective of this paper is to describe 2 approaches that empirically value health equity and capture decision-maker preferences to contribute to value assessment under the realization of health equity. These approaches can be used to ensure the systematic consideration of health equity in decision making, and they will also determine how equity enters the preferences of decision makers, indicated as another challenge by the ISPOR Special Task Force.

FIRST APPROACH

This approach uses a DCE with a latent class model to derive the value of equity. Figure 1 shows the steps of this approach. In general, a DCE describes various choice tasks for a study treatment or health care service by its attributes (eg, efficacy, side effects, and costs). To engage patients in this approach, these attributes should be obtained from patient experiences. Each choice task contains various hypothetical alternatives with different attributes and levels, randomly combined by a rigorous DCE design method. Participants (eg, patients, providers, policymakers, and taxpayers) are asked to choose the 1 alternative that they prefer from each choice set. To value equity and capture decision-maker preferences for value assessment under the consideration of equity, an individual's perception of the prevalence of equity dimensions in the choice tasks needs to be determined. Subsequently, responses to the DCE choice tasks from participants with different perceptions of the prevalence of equity dimensions are obtained.

A literature review from the fields of economics and psychology was used to develop the conceptual model of an individual's moral choice behavior and suggested creating a latent variable, "perceived moral salience," that can be used to reflect the individual's identification of the moral dimension [4]. The model identifies the task environment, the individual's personality, and moral norms as factors influencing the individual's identification of the moral dimension (Figure 2). This model indicates that people can simply adjust their decision making when they encounter or identify a task environment with a moral dimension. Morality should, at least to some extent, be considered a personality trait; as a result, different individuals from the same culture behave differently when faced with the same moral situation. Many people prefer to stick with norms, even if these norms conflict with their personal preferences. Therefore, this paper proposes to construct the perceived equity salience variable as a function of the task environment (eg, the presence or absence of explicit verbal cues about health equity), the individual's personality (eg, social value orientation), and the prevailing moral norm of equity, to determine whether the individual identifies a given choice situation as having an equity dimension or not.

FIGURE 2: Conceptual Model of an Individual's Moral Choice Behavior (adapted from Chorus 2015 [5]); the model distinguishes decision makers with and without equity concern, whose attribute valuations are compared to derive the value of equity.

Based on random utility theory (RUT), the perceived equity salience variable helps develop a latent class model. Individuals with different levels of the perceived equity salience variable are assigned with different probabilities to the "decision makers with equity concern" and "decision makers without equity concern" classes. These 2 classes are specified to differ in terms of preference weights or values for the study attributes, based on the marginal rates of substitution between each attribute and cost. In other words, 2 value sets for the treatment or health care service attributes, determined with and without equity concerns, are obtained. These 2 value sets can be compared to reflect the value of equity. Preferences for the attributes under the realization of equity can be captured from the class of decision makers with equity concerns.

For example, this approach can estimate the social value of health gains from treatments for an underrepresented population (or for more specific populations). A DCE survey is designed to include health gain and cost as study attributes, and taxpayers are asked to respond to the survey. Marginal willingness-to-pay (WTP) for health gains can then be calculated. Table 1 shows an example of the results: the marginal WTPs for health gains from the perspective of taxpayers with and without perceived equity salience are $X and $Y, respectively. Assuming $X and $Y include the implicit value of equity, the difference between these amounts reflects the value of equity for 1 unit of health gain. This approach can also examine how equity enters the participant preferences, identified by the ISPOR Special Task Force as an unclear issue [1]. For instance, the latent variable perceived equity salience could be modified to capture whether the participants focus on equity in overall well-being or specifically on equity in health.

TABLE 1: An Example of Results From the Latent Class Approach (marginal willingness to pay for health gain: $X with perceived equity salience, $Y without).

SECOND APPROACH

This approach uses a DCE that incorporates equity attributes into the individual alternatives in the choice tasks to derive the value of equity. Figure 3 shows the steps of this approach. Participants (eg, patients, providers, policymakers, and taxpayers) are asked to complete 2 sets of DCE choice tasks based on individual preference. The first set involves trade-offs among treatment or health care attributes, including cost; traditional choice models (eg, the mixed logit model) can be used for this first set of choice tasks. The second set also includes equity attributes (eg, increased benefits and decreased risks for others from the study treatment or health care), meaning that the participants need to make choices with equity concerns. An experimental study indicated that people not only preferred a resource allocation rule that most benefited them but also judged it to be fairer and more moral [7]. In this choice context, people may evaluate the choice tasks differently because of the equity attributes; their decisions would change depending on their acceptance or dismissal of the equity components. The study also suggested that people could change their views about equity in a matter of minutes as they learned where their interests lay. Therefore, the stochasticity during the decision process in the mind of the individual decision maker needs to be captured to reflect behavioral realism.

RUT, which follows the classic theories of probability and has dominated the choice modeling field for decades, has been criticized as being inadequate for explaining moral choice behavior. Such behavior can be complex, because decision makers may choose an alternative based not only on its more attractive concrete attributes but also on whether they believe the alternative to be an overall morally contentious option [6,8]. Recently, quantum probability theory has been introduced in cognitive psychology [9]. One of the key differences between quantum theory and the classic theories of probability is that the distributivity law for "and" and "or" propositions, A ∧ (B ∨ C) = (A ∧ B) ∨ (A ∧ C), does not need to hold [9]. This difference resulted in the creation of a new theory of probability, called quantum logic or quantum probability. Quantum probability can be used to efficiently reflect "changes in perspective or state of mind" of respondents as a result of the incorporation of a moral attribute in stated preference choice tasks. In 2020, a quantum choice model was introduced as a flexible new approach for understanding moral decision making in the field of transportation research [6]. Given the success of the quantum choice model in cognitive psychology and transportation, one possible application is to use it to address health equity.

The concept of the quantum choice model here is adapted from Hancock et al. [6,8]. Conceptually, a participant considers a choice task containing 2 alternatives. Under quantum probability theory, the participant starts with a belief state; at this belief state, the participant may either have some underlying preference in favor of 1 alternative or feel indifferent between the 2 alternatives. When a decision is made, the belief state is projected onto the chosen alternative, and the alternative whose projected vector has the larger amplitude represents the higher probability of being chosen. However, when an equity attribute is added, it can impact the participant's choice by moving the participant's initial belief state (a quantum rotation) to either an "ethical answerability belief state" or a "not ethical answerability belief state". As a result, the participant starts from this new state (a change in perspective or state of mind) instead of their initial belief state, and the probabilities of choosing the 2 alternatives are altered. In other words, the quantum model can mathematically capture a change in perspective through a quantum rotation. Therefore, conceptually, the choice model is improved, or fits differently, when the equity attributes are considered. From this model, patient preferences that contribute to value assessment under the realization of health equity are captured. Finally, the value of each attribute derived from the 2 sets of DCE choice tasks, with and without equity attributes, can be compared and used to reflect the value of equity.

For example, this approach can be used to estimate the social value of health gains from treatments. Two DCE surveys are designed, with and without equity attributes such as health gain for individuals from an underrepresented population. Table 2 shows an example of the results, indicating the observed choice share for an alternative A and the results from the choice models. If the equity attribute is considered and the observed choice share of the alternative consistently… On the other hand, the basic model could provide the opposite results. Similar to the first approach, how equity enters the preferences of the participants can be examined by modifying the equity attributes, for instance, focusing on equity in overall well-being or specifically on equity in health.

FIGURE 3: Using the quantum choice model approach: valuing attributes with and without equity concern to derive the value of equity.

Conclusions

This paper describes 2 novel approaches, latent class and quantum models, to rigorously value equity in health care. The latent class model is based on RUT with the classical theories of probability that have been widely used in choice modeling, while the quantum choice model is flexible enough to capture complex decisions such as decisions under equity. These approaches will improve existing value assessment methods that inadequately address health disparities and underrepresented populations.

DISCLOSURES

This study received no outside funding. Ngorsuraches has received research grants from Bristol Myers Squibb and through the University of Utah and PhRMA Foundation.
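To make the quantum-rotation mechanism of the second approach concrete, a toy numerical sketch. The belief state, rotation angle, and projection axes are illustrative and are not estimated from any survey data:

```python
# Toy sketch of the quantum-rotation choice mechanism described in the
# SECOND APPROACH: rotate a 2-dimensional belief state, project onto the
# alternatives, and read off choice probabilities as squared amplitudes.
import math

def choice_probs(belief, theta=0.0):
    """Rotate the belief state by theta, then project onto the basis vectors
    for alternatives A and B; squared amplitudes give choice probabilities."""
    a, b = belief
    a_rot = math.cos(theta) * a - math.sin(theta) * b
    b_rot = math.sin(theta) * a + math.cos(theta) * b
    norm = a_rot**2 + b_rot**2
    return a_rot**2 / norm, b_rot**2 / norm

belief = (math.cos(math.pi / 5), math.sin(math.pi / 5))  # mild preference for A

p_no_equity = choice_probs(belief)                   # no equity attribute shown
p_equity = choice_probs(belief, theta=math.pi / 8)   # equity attribute rotates the state

print(f"P(A), P(B) without equity attribute: {p_no_equity[0]:.3f}, {p_no_equity[1]:.3f}")
print(f"P(A), P(B) with equity attribute:    {p_equity[0]:.3f}, {p_equity[1]:.3f}")
```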
A Numerical Case Study on Contact Analysis with Large Displacement

In geometrically nonlinear analysis of plane frame structures, Bernoulli-Euler beam elements, which take the shear deformation to be zero, are generally used. In node-element contact cases, however, when a contact point approaches one of the two element ends, the unbalanced forces diverge and the analysis becomes impossible, precisely because the shear deformation is neglected. In such cases, an element force equation based on Timoshenko beam theory, which accounts for shear deformation, is quite effective. In this study, a contact element consisting of both element ends and a contact node, derived from Timoshenko beam theory, is applied to several numerical examples. As a result, its convergence performance is verified over almost the entire element, and accurate solutions are obtained.

Introduction

In geometrically nonlinear analysis of frame structures, the general formulation of the finite element method (FEM) derives the geometric stiffness from the relation between the strain inside the element body and the nodal displacements. The tangent stiffness method (TSM), proposed by Goto, Hane and Tanaka [1], instead defines the element behavior in the element force equation. The geometric stiffness, derived by differentiating the equilibrium equation between element edge forces and nodal forces, can therefore express the nonlinearity caused by the rigid body displacement of each element.

In node-element or element-element contact with large displacement, FEM may require difficult and complex procedures to create a new node when contact occurs. For example, Konyukhov et al. [3] had to use solid elements to simulate a contact model of a simple 2-dimensional beam. TSM, on the other hand, uses the element force equation defined in an element coordinate system with a statically determinate and stable support condition. It is therefore easy to add one or more contact points to an element and to obtain solutions in perfect equilibrium, just as with an ordinary two-node element. Nizam et al. [4] applied a three-node contact beam element based on Timoshenko beam theory to 2-dimensional frame structures. This "Timoshenko contact element" makes it easy to obtain solutions even when the contact node approaches an element end, thanks to the effect of shear deformation. In this study, several numerical examples are presented and the rationality of the Timoshenko contact element is verified.

The tangent stiffness method (TSM) evaluates the geometric nonlinearity caused by the rigid body displacement of an element when the element deformation is defined in the element force equation. The unbalanced forces are made to converge by an iteration process equivalent to the Newton-Raphson method. In addition, strict compatibility and equilibrium equations are applied in the iteration process to converge the unbalanced forces; TSM therefore has extremely high convergence performance.
General formulation

Here, an element has two ends, and the vector of element edge forces at both edges is represented by S. Considering a plane coordinate system, if the external force vector is represented by U and the equilibrium matrix by J, the equilibrium condition can be expressed as

U = J S .  (1)

Differentiating Equation (1) gives the tangent stiffness equation

δU = J δS + δJ S = (K_O + K_G) δu .  (2)

Here, the differentiation of Equation (1) simultaneously extracts δS and δJ, which enables expression as a linear function of the displacement vector δu in the local coordinate system. K_O represents the element stiffness matrix, which simulates the element behavior corresponding to the element stiffness, while K_G is the tangent geometric stiffness.

Equilibrium condition in the contact problem

The tangent geometric stiffness of the contact problem is obtained by differentiating the equilibrium equation. Figure 1 shows the element edge forces for a single contact element, and Figure 2 shows the nodal forces. The rotation of the contact point c is neglected, so that node has two degrees of freedom. The element edge force vector S, consisting of the axial force N, the edge moments M_i and M_j, and the contact force Y_c, is defined by Equation (3), and the corresponding nodal force vector U is given by Equation (4). In other words, by differentiating Equations (3) and (4), the tangent geometric stiffness is obtained from the equilibrium between S and U.

Timoshenko contact element

Timoshenko beam elements are effective when the shear deformation cannot be neglected. Figure 3 shows the equilibrium condition of a beam with an elastic, stable support condition under the action of the axial force N, the edge moments M_i and M_j, and the contact force Y_c. Figure 3 also shows the locations of the geometric and kinematic variables; it is assumed that the contact force Y_c acts within the span of the beam. This local coordinate system is a simple but accurate idealization of the frictionless node-element contact problem. The element force equation of the Timoshenko contact element is given as Equations (5) to (8).

Contact of a couple of cantilevers

As shown in Figure 4, two cantilevers face each other and are in contact beforehand. Each cantilever is divided into 8 equal elements. Node 1 and node 18 are fixed completely, and a compulsory displacement in the V direction is applied at node 1; every contact point then begins to slide. The material parameters in this example are E = 2.0 × 10^7 N/m^2, A = 3.0 × 10^-4 m^2, I = 2.2 × 10^-8 m^4, ν = 0.4, and G = 7.142 × 10^6 N/m^2. The increment of compulsory displacement per step is 0.208 m. Figure 5 shows the deformed shape at the 10th incremental step, and Figure 6 shows the stable convergence process of the TSM. Only 9 iterations are needed to reduce the unbalanced forces by eight orders of magnitude, even though many contact events occur simultaneously. Figure 7 shows the solution at the 18th incremental step, in which every node approaches a node on the other cantilever extremely closely. The ratio l_j/l, which expresses the position of the contact node on the element (see Figure 3), is 0.9739. Even in such a severe condition, the solution is obtained with a stable convergence process of only 12 iterations, as shown in Figure 8.
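To illustrate the iteration pattern described above, a schematic Newton-Raphson loop in which a tangent stiffness drives the unbalanced force to zero. The "structure" below is a toy one-degree-of-freedom nonlinear spring standing in for the frame/contact formulation:

```python
# Schematic Newton-Raphson iteration for converging unbalanced forces, in the
# spirit of the TSM loop described above. The toy geometrically nonlinear
# spring below is a placeholder for the actual frame/contact formulation.
import numpy as np

def internal_force(u):
    """Toy nonlinear internal force: linear spring plus a geometric term."""
    return 100.0 * u + 40.0 * u**3

def tangent_stiffness(u):
    """Consistent tangent dF_int/du, playing the role of K_O + K_G."""
    return 100.0 + 120.0 * u**2

def solve_step(f_ext, u0=0.0, tol=1e-10, max_iter=20):
    u = u0
    for k in range(max_iter):
        residual = f_ext - internal_force(u)  # unbalanced force
        if abs(residual) < tol:
            return u, k
        u += residual / tangent_stiffness(u)  # Newton-Raphson update
    raise RuntimeError("unbalanced force did not converge")

# incremental loading: each step starts from the previous equilibrium state
u = 0.0
for step, f in enumerate(np.linspace(20.0, 200.0, 10), start=1):
    u, iters = solve_step(f, u0=u)
    print(f"step {step:2d}: F = {f:6.1f}, u = {u:.6f}, iterations = {iters}")
```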
Three cantilevers and one contact point

As shown in Figure 9, this example consists of three independent cantilevers with 11 nodes and a contact node. Node 1, node 22, node 23, and the contact node are fixed completely. The compulsory displacement is applied in the V direction at the contact node. The material parameters in this example are E = 2.0 × 10^7 N/m^2, A = 3.0 × 10^-4 m^2, I = 2.2 × 10^-8 m^4, ν = 0.4, and G = 7.142 × 10^6 N/m^2. Figure 10 shows the deformed shape when the contact node reaches cantilever A. Figure 11 then shows the solution when node 11 on cantilever A contacts cantilever B, and Figure 12 shows the moment when node 12 on cantilever B contacts cantilever C. When node 12 approaches node 30 (Figure 13), the unbalanced force diverges, as shown in Figure 14. In such a case, if the incremental calculation is restarted from the previous equilibrium solution, a converged solution can be obtained, as shown in Figure 15, by applying a larger compulsory displacement and switching the configuration of the contact element to the neighboring element. This is the simplest procedure that allows the analysis to continue. The contact node approaches the end of cantilever A, as shown in Figure 16, and then jumps from cantilever A to cantilever B, as shown in Figure 17; at the same time, the contact between the contact node and cantilever A is released. By the same process, the contact node approaches the end of cantilever B, as shown in Figure 18, and then jumps from cantilever B to cantilever C, as shown in Figure 19; again, the contact between the contact node and cantilever B is released at the same time. Figure 20 shows the solution as the contact node approaches the end of cantilever C, and in Figure 21 all contact has been released.

Conclusions

In this study, two numerical examples were presented and the rationality of the Timoshenko contact element was verified. In the first example, even when the contact node approaches the end of an element and the shear deformation cannot be neglected, a converged solution is obtained by using the Timoshenko contact element, which accounts for shear deformation. In the second example, the unbalanced force diverged when the contact node approached an element end; in that case, a converged solution is obtained by applying a larger compulsory displacement to the contact node from the previous equilibrium solution and moving the contact to the neighboring element. In other words, this numerical technique makes it possible to avoid the distortion problem. As a result, the convergence performance of the Timoshenko contact element is verified over almost the entire element. As a future prospect, it is expected that converged solutions can be obtained for contact analysis with large displacement in 3-dimensional frame structures.
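The restart procedure used in the second example can be summarized in control-flow form. A toy sketch, in which a stub solver fails when the contact node sits too close to an element end; all structural details are placeholders, not the actual TSM implementation:

```python
# Toy sketch of the restart-and-switch strategy from example 3.2. The "solver"
# below is a stub that fails when the contact node sits too close to an
# element end (lj/l near 1), mimicking the divergence seen in Figure 14.
class ContactState:
    def __init__(self, element, ratio):
        self.element = element  # index of the element carrying the contact
        self.ratio = ratio      # lj/l, position of the contact node

    def switch_to_neighbour(self):
        # hand the contact node over to the next element; its local
        # coordinate restarts near the opposite end
        return ContactState(self.element + 1, self.ratio - 1.0)

def solve_increment(state, du):
    new_ratio = state.ratio + du
    if new_ratio > 0.98:        # stub for "unbalanced force diverges"
        raise RuntimeError("unbalanced force diverged")
    return ContactState(state.element, new_ratio)

state = ContactState(element=0, ratio=0.10)
du = 0.10
while state.element < 2:
    try:
        state = solve_increment(state, du)
    except RuntimeError:
        # restart from the previous equilibrium with a larger increment and
        # switch the contact configuration to the neighbouring element
        state = solve_increment(state.switch_to_neighbour(), 2.0 * du)
    print(f"element {state.element}, lj/l = {state.ratio:.2f}")
```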
Figure 1: Element edge forces for the contact element.
Figure 2: Nodal forces for the contact element.
Figure 3: Timoshenko contact element in the simply supported beam coordinate system.
Figure 5: Shape of the solution at the 10th step.
Figure 7: Shape of the solution at the 18th step.
Figure 10: First contact with cantilever A.
Figure 11: Second contact, A to B.
Figure 12: Third contact, B to C.
Figure 13: Node 12 approaching node 30.
Figure 16: Contact node approaching the end of cantilever A.
Figure 17: Contact node jumping from A to B.
Figure 18: Contact node approaching the end of cantilever B.
Figure 19: Contact node jumping from B to C.
Figure 20: Contact node approaching the end of cantilever C.
Public health work in Sweden during the COVID-19 pandemic

Abstract

Background and objectives: This study examines the consequences of the COVID-19 pandemic for public health practice carried out at the local and regional levels in Sweden. The work includes, for example, interventions in health care, schools and preschools, social services, and non-profit organisations.

Methods: By means of written questions and interviews in municipalities, regions, county administrative boards, networks, and organisations, we investigated whether public health-related interventions had decreased, increased, or changed as a result of the COVID-19 pandemic. Data were analysed by content analysis.

Results: The results show that a large number of interventions, from a variety of local and regional actors and aimed at broad target groups, were cancelled or paused during the time of our survey. Eventually, many, but not all, of the cancelled interventions were replaced with other options, most of which fall under the following themes:

• Digital solutions and support over the phone instead of physical meetings.
• Outdoor activities instead of indoor activities.
• Organisational adaptations, for example, from drop-in visits to booked appointments and from open activities to scheduled visits.

The interviews also revealed that public health issues had been highlighted and that existing collaboration structures were a success factor in managing the consequences of the COVID-19 pandemic. The risk of and concern about the spread of infection, and compliance with the authorities' recommendations, were stated to be the main reasons why public health-related interventions had decreased, increased, or changed.

Conclusions: Both general public health practice and targeted interventions in health care and municipal activities were cancelled or rescheduled according to our survey. Because many public health-related interventions have an equalising effect on health, this can be of great importance for groups that are socially, economically, or health-wise vulnerable.

Background and objectives: The COVID-19 pandemic has highlighted how public health depends on many areas of society, and several aspects of public health can be affected. We evaluated how the COVID-19 pandemic, and the measures taken to reduce its spread, impacted public health in Sweden during 2020.

Methods: We systematically compiled international research on the pandemic's impact on public health, we examined the living conditions of groups at a particularly increased risk of ill health, and we collected and analysed Swedish data on lifestyles, health, injury, and illness during the pandemic compared with previous years.

Results: Most people have, in one way or another, been affected by the pandemic and by society's preventive measures. However, some groups have suffered more than others. Groups who were already at an increased risk of ill health before the pandemic have been most affected, for example in schools, on the labour market, and in society in general. There is a risk of increased health inequality, not only related to morbidity and mortality from COVID-19 during 2020, but also in terms of the effects on living conditions.

Conclusions: The consequences of the COVID-19 pandemic pose major challenges for public health, and the measures taken to limit its spread interrelate with social and economic conditions. In Sweden, health inequalities have remained the same or increased over the years. Our study suggests that the consequences of the pandemic will reinforce health inequalities. It is too early to determine what the pandemic's full impact on public health will be. Nevertheless, health promotion and preventive measures need to be strengthened and prioritised in order to maintain good public health and reduce inequalities.
Abstract citation ID: ckac129.007

Prevention of alcohol, drugs and tobacco during the COVID-19 pandemic: consequences and inequalities in Swedish municipalities

Background and objectives: The COVID-19 pandemic and the measures taken to prevent the spread of the virus challenged public health practice at the local and regional levels in Sweden. The objective of this study was to follow up on how local ADT prevention (alcohol, drugs, and tobacco) in Sweden was affected during 2020-2021.

Methods: All Swedish municipalities (N = 290) were included in surveys on how the pandemic affected local ADT prevention. Response rates ranged between 82 and 91 percent. Quantitative data were analysed with reference to socioeconomic and demographic conditions, and qualitative data were analysed thematically.

Results: A majority of the municipalities reported a decrease in ADT prevention, especially prevention aimed at groups such as parents, children, and young people. There was no correlation between the decrease in municipal ADT prevention and sociodemographic conditions. A majority of the municipalities reported that activities were adapted, often with a digital approach.
Adaptation of ADT prevention was less common in smaller municipalities and in municipalities where residents had lower levels of education and lower incomes. An increase in activities, as a consequence of measures to prevent the spread of the virus, was more common in larger municipalities and in municipalities with a greater proportion of residents with higher educational backgrounds and higher incomes.

Conclusions: ADT prevention carried out by municipalities in Sweden was initially deeply affected by the COVID-19 pandemic and by the measures taken to prevent the spread of the virus. However, activities were adapted over time, mainly with a digital approach. The ability to adapt differed depending on the sociodemographic conditions of the municipalities. Follow-up studies on ADT prevention and on the consequences of the digital approach during 2021 will be presented at the conference.

Background and objectives: The COVID-19 pandemic has posed challenges for traditional public health practice. In the area of alcohol, drugs, and tobacco, local and regional actors have largely moved from physical to digital solutions to handle the barriers imposed by the pandemic. To strengthen the knowledge base in the area, this project aimed to explore how the local transition to, and management of, digital solutions within alcohol, drug, and tobacco prevention might support the policy drive in Sweden towards equity in health.

Methods: This was a qualitative study in which 13 local coordinators from 7 municipalities participated. Data were collected through 9 individual and 2 group interviews (semi-structured). The analysis was inductive and followed a thematic analysis approach to identify, analyse, and present patterns (themes) in the data.

Results: Three themes were developed, illustrating how the local implementation of digital solutions in the area of alcohol, drug, doping, and tobacco prevention might support the transition towards equity in health by 'making time and resources available for development and innovation', 'improving the ability to reach and engage with vulnerable groups', and '(re)shaping initiatives to act inclusively'.

Conclusions: As illustrated by the experiences of the local coordinators, the municipalities seem to have managed the challenges of the pandemic well. To a large extent, they appear to have adapted their work to remain operational by transitioning to digital solutions. Considering that the pandemic has been challenging in various ways, the finding that operations kept running should not be underestimated. However, besides being able to largely maintain the status quo at a time when traditional modes of working were inadequate or inappropriate, the results illustrate how the municipalities have added numerous (digital) tools to their toolbox for use in the continuing drive towards good and equitable health.
The Anti-tubercular Activity of Noni Fruit in Inhibiting the Growth of Multidrug-Resistant Tuberculosis Bacteria

Multidrug-resistant tuberculosis (MDR-TB) is a tuberculosis infection that is resistant to treatment with at least the two most powerful anti-tuberculosis drugs, isoniazid and rifampicin. The increased morbidity and mortality of MDR-TB cases are obstacles to the control of tuberculosis (TB), creating a need for supportive treatment with natural products that can contribute to TB therapy, such as noni fruit. The main objective of this study was to test whether an extract of noni fruit inhibits the growth of MDR-TB bacterial strains and to compare it with anti-TB drugs. Morinda citrifolia Linn (noni) fruit was extracted with ethanol (96%). The extract was filtered through Whatman No. 1 filter paper and evaporated to dryness on a water bath until the solvent had evaporated completely, yielding the crude extract. The experiment was divided into 3 groups: group I, negative control; group II, positive control; and group III, crude noni fruit extract, alone and in combination with anti-TB drugs (K, AK, and OF). Each group was divided into three dose subgroups of 30 mg/ml, 40 mg/ml, and 50 mg/ml. The anti-tuberculosis activities of the noni fruit extract and of K, AK, and OF against MDR-TB bacteria were tested by susceptibility testing using the proportion method on Lowenstein-Jensen (LJ) medium. The anti-tubercular activity of noni fruit was determined from the minimum inhibitory concentration (MIC) for bacterial growth at doses of 30 mg/ml, 40 mg/ml, and 50 mg/ml. The research design was a post-test-only control group design, analysed using analysis of variance and post hoc tests. The Morinda citrifolia Linn (noni) fruit extract showed anti-tubercular activity, inhibiting the growth of MDR-TB bacteria at all tested doses (p value = 0.000). At a dose of 30 mg/ml, the mean number of MDR-TB colonies was 59.00 ± 27.81, and at 40 mg/ml it was 1.50 ± 2.81, while at a dose of 50 mg/ml no MDR-TB colonies grew on the medium. The combination of noni fruit extract with anti-tuberculosis drugs was the group with the lowest growth, inhibiting and eliminating MDR-TB bacteria at a dose of 30 mg/ml (0.00 ± 0.00). The experimental results confirm that the Morinda citrifolia Linn (noni) fruit extract has anti-tubercular activity comparable to anti-TB drugs, and that the combination of noni fruit extract and anti-TB drugs was the most effective at inhibiting the growth of MDR-TB bacteria.

Introduction

Tuberculosis (TB) is a contagious infectious disease that has become a public health problem worldwide [1]. Handling and controlling TB has become more complicated as the front-line anti-tuberculosis drugs have gradually become ineffective for TB therapy [2]. Drug resistance can occur due to inadequate and irregular drug use, which causes mutation of drug-target genes such as the katG gene for isoniazid and the rpoB gene for rifampicin [3,4]. The increase in MDR-TB cases is an obstacle to TB control. According to the WHO Global Tuberculosis Control report, MDR-TB patients receiving treatment have a cure rate of less than 60%; because TB treatment takes a long time, patients who drop out of treatment while remaining bacteriologically positive allow resistant bacteria to develop into extensively drug-resistant tuberculosis (XDR-TB), the designation for strains resistant to first- and second-line anti-tuberculosis drugs [5,6].
Due to the facts mentioned above, and to TB latency, new anti-TB drugs and better therapeutic strategies against TB are urgently required. New drug candidates should shorten standard treatments and be effective against MDR-TB bacteria. These efforts call for supportive treatment with natural products that have therapeutic effects on health, such as noni (Morinda citrifolia Linn) [7,8]. Noni is one of the medicinal plants reported to have therapeutic and nutritional value [7,9,10]. The leaves are widely used in traditional medicine and have been used for arthritis, atherosclerosis, boils, burns, cancer, chronic fatigue syndrome, circulatory weakness, cold sores, congestion, constipation, diabetes, gastric ulcers, gingivitis, heart disease, hypertension, and infections [11,12]. Noni root has been reported to have significant inhibitory effects on the proliferation of human lung and colon cancer cells [13,14]. The fruit juice of noni contains a polysaccharide-rich substance with antibacterial, antiviral, antifungal, antitumor, anthelmintic, analgesic, hypotensive, and anti-inflammatory activity, as well as immune-enhancing effects [15,16]. Previous pharmaceutical studies showed that noni fruit extract effectively inhibits gram-positive and gram-negative bacteria such as Staphylococcus aureus, P. aeruginosa strains, Streptococcus mutans [17], Bacillus subtilis, Proteus morganii, Pseudomonas, and Escherichia coli [7,18]; moreover, the plant is also used to control groups of pathogenic bacteria such as Salmonella and Shigella [19]. Another in vitro antimicrobial assay, conducted on ethanol extracts and hexane fractions of noni leaves at 100 mg/ml, reported anti-tubercular activity in Mycobacterium tuberculosis cultures, with inhibition rates of 89% and 95%, respectively [20]. It has also been reported that an ethanol extract of noni fruit possessed in vitro anti-mycobacterial effects against Mycobacterium tuberculosis bacteria at a minimum inhibitory concentration (MIC) of 40 mg/ml [21]. The anti-mycobacterial activities of noni are attributed to the presence of active constituents such as secondary metabolites and lectins [22]. According to phytochemical investigations, approximately 200 compounds have been identified from different parts of noni [23]. Major components of the noni plant include scopoletin, flavonoids, octanoic acid, potassium, vitamin C, terpenoids, alkaloids, and anthraquinones, with the antibacterial compounds ranging from 5.94 g to 36.52 g per 100 g of dry material [24-26]. The antimicrobial activity is generally attributed to various phytochemicals in noni extract that target bacterial cell membranes and cellular biochemical pathways [23,27]. Noni molecules react with lipids in bacterial cell membranes, increasing cell-membrane permeability, disrupting the membranes, breaking cell homeostasis, inactivating enzymes, and denaturing proteins in bacteria [15,19]. The bacterial cell wall therefore leaks intracellular components, which also facilitates the movement of antimicrobial compounds into the cytoplasm, inducing cell death [28]. This study investigated the in vitro anti-tubercular activity of noni fruit extract, used individually or in combination with second-line anti-TB drugs, against MDR-TB bacterial strains at doses of 30 mg/ml, 40 mg/ml, and 50 mg/ml. Minimum inhibitory concentration (MIC) values were used to assess the synergistic activity of the noni fruit extract.
Study design
The research used a post-test-only control group design [29].

Samples
The samples in this research were Mycobacterium tuberculosis strains resistant to the first-line anti-tubercular drugs (rifampicin and isoniazid), based on drug-susceptibility testing at the Health Laboratory of West Java Province, derived from the sputum of patients with active TB; 54 strains were obtained in total. Each sample was added aseptically to Lowenstein-Jensen (LJ) medium and incubated at 37°C for 2 weeks at an optimum pH of 7.0.

Reagents and antibiotics
The reagents used in this study were ethanol, ethyl acetate, n-hexane, aqua bidestillata, and iron, purchased from Sigma-Aldrich.

Plant material and extraction
The plant material of the present study was fresh mature noni fruit collected during the rainy season in Cibeber, South Cimahi (2.8 km from Cimahi city), West Java province, Indonesia. Voucher specimens of the noni fruit (EN no. 241571) were identified and authenticated at the Biomedical Sciences Laboratory of the School of Health Sciences Jenderal Achmad Yani, Cimahi. The fresh mature noni fruit was cleaned, sliced into small pieces, shade-dried at 50°C, and ground to powder. The powdered noni fruit material (500 g) was macerated with 500 ml of ethanol (96%) in increasing order of polarity, from non-polar to highly polar, at 50°C for 2 hours. The extract was filtered through Whatman No. 1 filter paper and evaporated to dryness on a water bath until the solvent had evaporated completely, yielding the crude extract.

Experiment protocol
The experiment was performed in vitro and divided into groups, i.e., group I (negative control), group II (positive control), group III (crude noni fruit extract), and combinations of the crude noni fruit extract with anti-TB drugs (K, AK, and OF), each tested at doses of 30 mg/ml, 40 mg/ml, and 50 mg/ml.

Statistical analysis
All data were analyzed using univariate analysis of variance, and the differences in means ± standard deviation between the groups were evaluated by least significant difference (LSD) and Duncan's tests. Statistical analyses used a 95% confidence interval (p ≤ 0.05). The experimental protocol of the current study was approved by the Commission of Health Research Ethics of the Faculty of Medicine, Diponegoro University, and General Hospital dr. Kariadi Semarang, with the issuance of an Ethical Clearance letter.

Result
The inhibition achieved by the noni fruit extract alone showed that the extract inhibited the growth of resistant Mycobacterium tuberculosis best, with inhibition of ≥80% compared to the negative control and 24% compared with the positive control (Table 3).

Discussion
Anti-TB drugs, whether as single formulations or as fixed-dose combinations, play the major role in killing MTB or MDR-TB bacteria [2]. Anti-TB drugs are strongly bactericidal and bacteriostatic in inhibiting the growth of Mycobacterium bacteria. The treatment of infectious diseases such as MDR-TB faces a serious problem worldwide, as microorganisms become resistant to multiple antimicrobial agents, which leads to an increased incidence of TB cases [31]. Based on this phenomenon, the development of new therapeutics from traditionally used local plants such as noni fruit may improve the effectiveness of anti-TB treatment [26]. Noni fruit is a medicinal plant and, like some other compounds in noni root and leaves, has proven antibacterial activity [24,25]. The results of the present study showed that the crude noni fruit extract produced a significant difference in the growth of MDR-TB bacteria. The antibacterial activity of noni against certain infectious bacterial strains has been reported previously [7,11,27,32,33], including anti-mycobacterial activity of noni extracts [17,20], and noni fruit killed M. tuberculosis bacteria at 40 mg/ml [21].
The anti-tubercular activity of the noni extract might be influenced by the presence of secondary metabolites such as phenolic compounds [34]. The phenolic compounds contained in noni fruit range from 5.94 to 36.52 g/100 g of dry material [26,35]. Phytochemicals in noni are associated with the defense mechanisms of plants through their repellent or attractant properties, protection against biotic and abiotic stresses, and maintenance of the structural integrity of plants [30,36]. In addition, noni molecules react with lipids in bacterial cell membranes, increasing cell-membrane permeability, disrupting the membranes, breaking cell homeostasis, inactivating enzymes, and denaturing proteins in bacteria [18,19]. The bacterial cell wall therefore leaks intracellular components, which further disrupts the transport of important organic ions into the cells, resulting in inhibition of growth and even cell lysis [36][37][38]. The decrease in the MIC value showed that the combination of noni fruit extract and anti-TB drugs has synergistic activity that can inhibit the growth of MDR-TB at low doses. The combination of noni fruit extract and anti-TB drugs possibly produces synergistic and complementary effects of the active substances, enhancing activity beyond that of the individual agents [39,40]. Such combinations improve the antibacterial potential of the noni fruit extract and anti-TB drugs against the mycobacterial cell wall [41]. Based on several studies, noni extract has effective antibacterial activity that can inhibit and eliminate bacteria. Another study found noni fruit to be relatively non-toxic [10,35].

Conclusion
The present study brought out the fact that the noni fruit extract has anti-tubercular activity against MDR-TB bacteria. This study is important because pulmonary tuberculosis remains a global health problem, with MDR-TB and even XDR-TB cases increasing due to drug resistance. A treatment strategy is required that includes supportive treatment with natural ingredients such as Morinda citrifolia (noni), which may contribute to the treatment of TB. Noni is a plant that is easy to obtain, has relatively minor side effects, and is easily accessible to the public. This study aimed to explain the anti-tubercular activity of the extract and compounds of noni fruit in inhibiting the growth of MDR-TB bacteria.

Conflict of interest
This manuscript has not been published and is not under consideration for publication in any other journal or any other type of publication (including web hosting), either by me or by any of my co-authors.
Decrease in Population and Increase in Welfare of Community Cats in a Twenty-Three Year Trap-Neuter-Return Program in Key Largo, FL: The ORCAT Program

The objective of this study was to evaluate the effect of a long-term (23-year) trap-neuter-return program on the population size of community cats in the Ocean Reef Community and to describe the demographic composition and outcome of enrolled cats. A retrospective study was performed using both cat census data collected between 1999 and 2013 as well as individual medical records for cats whose first visit occurred between 3/31/1995 and 12/31/2017. Medical record entries were reviewed to determine program inputs, cat outcomes, retroviral disease prevalence, and average age of first visit, sterilization, and death through 6/11/2018. Change over time was analyzed via linear regression. The free-roaming population decreased from 455 cats recorded in 1999 to 206 recorded in 2013 (55% decrease, P < 0.0001). There were 3,487 visits recorded for 2,529 community cats, with 869 ovariohysterectomies and 822 orchiectomies performed. At last recorded visit, there were 1,111 cats returned back to their original location, and 1,419 cats removed via adoption (510), transfer to the adoption center (201), euthanasia of unhealthy or retrovirus positive cats (441), died in care (58), or outcome of dead on arrival (209). The number of first visits per year decreased 80% from 348 in 1995 to 68 in 2017. The estimated average age of the active cat population increased by 0.003 months each year (P = 0.031) from 16.6 months in 1995 to 43.8 months in 2017. The mean age of cats at removal increased 1.9 months per year over time (P < 0.0001) from 6.4 months in 1995 to 77.3 months in 2017. The mean age of cats at return to the original location was 20.8 months, which did not change over time. The overall retrovirus prevalence over the entire duration was 6.5%, with FIV identified in 3.3% of cats and FeLV identified in 3.6%. Retrovirus prevalence decreased by 0.32% per year (P = 0.001), with FIV decreasing by 0.16% per year (P = 0.013) and FeLV decreasing 0.18% per year (P = 0.033). In conclusion, a trap-neuter-return program operating for over two decades achieved a decrease in population and an increase in population welfare as measured by increased average age of population and decreased retrovirus prevalence.
INTRODUCTION
Trap Neuter Return (TNR) programs exist in large part to reduce population size and growth rate by decreasing reproduction (1)(2)(3)(4)(5). Reductions in population size are desirable due to concerns regarding wildlife predation, public health and nuisance factors (6). In addition to reducing population size or growth, TNR is also promoted as a method for improving cat welfare (3,4,(7)(8)(9)(10). TNR of free-roaming cats may decrease predation as compared to populations that are not sterilized or provided anthropogenic food sources (11). TNR allows for the provision of veterinary care, including vaccination against infectious disease, treatment of injuries and illnesses, and humane euthanasia for animals found to be suffering. It is also a method for promoting humane communities by avoiding euthanasia as a means of population control or nuisance abatement. Multiple studies have shown TNR to be effective in reducing population size or curtailing population growth, but they are complicated by the fact that many colonies are not geographically restricted (2,4,(12)(13)(14). The presence of a long-term TNR program with both population level and detailed individual information was a unique opportunity to study the impacts of sustained TNR on a geographically isolated population of free-roaming cats. The objective of this study was to evaluate the effect of a long-term (23 year) TNR program on the population size of community cats in the Ocean Reef Community and to describe the demographic composition and outcome of cats enrolled in the TNR program. These findings can be used by shelters and other invested parties to estimate the impact of TNR on cat welfare and provide input parameters for mathematical models used to estimate the impact of TNR programs on community cat populations.

Study Community
The community of Ocean Reef occupies ∼2,500 acres on the northernmost tip of Key Largo in the Florida Keys. It is a peninsula approximately four miles long and a mile wide, with a single gated road staffed 24 h a day leading into the community. This private club is bordered on three sides by water and on the fourth by protected state and federal conservation land.
Ocean Reef contains ∼1,700 homes, although much of the occupation is seasonal and there is a correspondingly large number of seasonal workers. Five unaltered cats were brought to Ocean Reef by a groundskeeper to perform rat control in the 1960s. While the cats controlled the rat problem successfully, by the 1980s the number of cats had grown large enough to be themselves considered a nuisance to the increasing number of residents. Over 2,000 cats are stated anecdotally to have been present in the 1980s. Population control measures, which included lethal methods, were instituted to control the cat population. As an alternative to lethal measures, an individual resident began to trap cats and bring them to a local veterinarian for neutering. In 1995, the Ocean Reef Community Association (ORCA) supported the opening of a spay/neuter clinic in Ocean Reef and the formation of the ORCAT program to provide sterilization, care, and feeding to the free-roaming cats (15). In 2006, the Grayvik Animal Care Center opened, which contains a full-service veterinary and grooming clinic for the pets of residents in addition to a cat adoption center and sanctuary. There has been a single individual in the role of director of the ORCAT program since its inception, maintaining feeding stations, creating individual cat medical records, and performing episodic surveys of the population. This position reports to the Vice President of Ocean Reef and is accountable for annual goals. Only two veterinarians have been the main providers of services for the population, one from 1995 to 1998, and the other since 1998. Surveys of the cat population were performed between 1999 and 2013. Documented population surveys were not executed after 2013, although cats continued to be cared for and TNR efforts continued. Surveys were recorded by marking feeding stations on a paper map and recording the total number of cats per feeding station. The number and location of feeding stations were determined by homeowner preference, convenience, and minimization of feeding station colony size. The initial number of feeding stations was large in order to facilitate complete trapping of colonies, which was easier with smaller numbers of cats per colony, and to minimize fighting between cats. All cat counts were performed by the caretaker. Cats were trapped when un-marked individuals were noted at feeding stations, or when previously sterilized cats required veterinary care. Individual medical records for each cat were maintained in paper files. Each cat's visit (check-in to check-out at the medical center) was documented in the medical record. At their first visit, cats were routinely neutered, marked by ear-tipping, vaccinated with FVRCP, rabies and FeLV vaccines, and dewormed (pyrantel pamoate, praziquantel). They were also tested for FIV antibodies and FeLV antigen; cats that tested positive for either retrovirus were typically euthanized prior to administration of routine preventive care. Cats were determined to be euthanized for retrovirus status if they were euthanized concurrently with a positive test and there was no evidence that the cat was otherwise significantly unhealthy. A date of birth was estimated through the joint effort of the caretaker and veterinarian. Upon re-trapping, cats were provided with vaccine boosters for FVRCP, rabies and FeLV, and medical care as required. Microchipping of cats was implemented beginning in mid-2005.
Study Design
A retrospective study was performed using both aggregate cat census data spanning the years 1999-2013 and a review of individual medical records for cats whose first visit occurred from 3/31/1995 through 12/31/2017. Feeding stations and their associated populations were geocoded to visualize the change in population over time through Geographic Information System mapping technology. Geographic changes were visualized via hexbin maps in order to protect privacy. The paper-based medical records were coded and entered into a custom database. The associated cat demographics and outcomes were used to generate descriptive statistics and graphs. For population-level analyses based on individual records (estimated count, average age, and age structure of the population), a likely date of death was calculated for each cat with an outcome of returned. The estimated date of death was determined by calculating the mean age at outcome for cats with an outcome of DOA or euthanasia. This was compared to their age at return and, if they were younger, the difference was calculated and added to their date of return to determine a likely date of death. If older, an additional 12 months was added to determine the likely date of death. The data for the population-level analyses were then constructed by creating a scaffold consisting of each day contained within the study period and performing an outer join with the individual records to select cats with a date of birth less than or equal to the scaffold date and a date of death (or estimated death) greater than or equal to the scaffold date. The average age of the cat population per year was determined by calculating the age of each cat per year between birth and removal by death or likely death, which included euthanasia, died in care, dead on arrival (DOA), or missing in action (MIA). The status of MIA was assigned to cats that had not been sighted at their usual feeding station for an unusual period of time, as determined by the caretaker. Cats removed from the active population by adoption were not included in the average age analysis. Linear regression was used to analyze change over time. Significance was set at p < 0.05 for all quantitative analyses.

Program Inputs
A total of 1,691 gonadectomies were performed, including 869 ovariohysterectomies and 822 orchiectomies. Over 18% of cats (479) were found to be already sterilized at their first visit, whether from sterilization before the official ORCAT program started in 1995, duplicate cats, trapping efforts by individuals, or lost/abandoned cats. Of the cats found to be already sterilized, 196 (40.9%) were also previously ear tipped; however, 13 of these ear tipped cats were noted not to be ORCAT's. An additional 165 non-sterilization surgeries were performed to treat injuries. A total of 2,327 FeLV, 1,897 rabies, and 2,727 FVRCP vaccines were administered. Over 2,800 fecal examinations were completed, and 2,327 FIV/FeLV tests were performed. Of the female cats undergoing ovariohysterectomy, 11.5% were pregnant, with a mean of 4 fetuses (range 1-6).

Retroviral Prevalence
The overall retrovirus seropositivity was 6.5%, with 9 cats positive for both FIV and FeLV. The overall prevalence of FIV was 3.3% and that of FeLV was 3.6%.

Cat Outcomes
Outcomes for visits were classified as either returned or removed, with an average of 50.0% (range 16.7-83.3%) of visits ending in removal per year (Figure 10).
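Before turning to the individual outcomes, note that the daily scaffold-and-join reconstruction described under Study Design can be sketched in a few lines. The following is a minimal illustration in Python with pandas; the three records and all dates are hypothetical placeholders, not the program's actual database code:

import pandas as pd

# Hypothetical individual records: date of birth and (estimated) date of death.
cats = pd.DataFrame({
    "cat_id": [1, 2, 3],
    "born": pd.to_datetime(["1995-04-01", "1996-06-15", "1999-01-10"]),
    "death": pd.to_datetime(["1999-05-20", "2003-02-01", "2001-08-30"]),
})

# Scaffold of every day contained within the study period.
scaffold = pd.DataFrame({"day": pd.date_range("1995-03-31", "2017-12-31", freq="D")})

# A cat counts as active on a day if born on or before that day and with a
# (possibly estimated) date of death on or after that day.
active = scaffold.merge(cats, how="cross")
active = active[(active.born <= active.day) & (active.death >= active.day)].copy()

# Daily population size and average age in months (~30.44 days per month).
active["age_months"] = (active.day - active.born).dt.days / 30.44
daily = active.groupby("day")["age_months"].agg(["size", "mean"])
daily.columns = ["population", "mean_age_months"]

Regressing the resulting counts (or their yearly averages) against time then yields trend estimates of the kind reported for the census and the model.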
Removal included adoption, transfer to the Grayvik center, died in care, euthanasia, and DOA, while returned included the outcomes of released and missing in action (MIA). Of the 1,869 visits ending in release, 318 (17.0%) released the cat to a different location than where it had been trapped, due to a conflict with the original location. For the final disposition (outcome of the last recorded visit), 1,111 cats were released back to their outdoor location, and 1,419 cats were removed via adoption (510), transfer to the adoption center (201), died in care (58), euthanasia of unhealthy or retrovirus-positive cats (441), or an outcome of DOA (209) (Figure 11). Six of 9 (67%) cats were euthanized for double-positive retrovirus status, 61 of 73 (84%) for FeLV-positive status, and 45 of 67 (67%) for FIV-positive status, with the remainder of the euthanized cats, 329 (75%), euthanized due to health. Cats that were DOA had cause of death split between trauma (43.1%), unknown (43.1%), trapped in fumigation tent (9.1%), and illness (4.8%). Trauma was primarily …

Estimated Age Structure and Sex Distribution
The mean estimated age of cats at first visit was 21 months (95%CI 20 to 23), with a range of 0 (newborn) through 275 months. For cats sexually intact at first visit (2,026), the mean age was 11 months (95%CI 10 to 11), with a range of 0 through 204 months. For cats already sterilized and ear-tipped at first visit, the mean age was 70.3 months (95%CI 62.5 to 78.2), with a range of 6.7-204 months. For cats already sterilized but with no documented ear-tip, the mean age at first visit was similar to that of previously sterilized cats with an ear-tip, at 76.4 months (95%CI 69.1 to 83.6), with a range of 2.0-275 months. For previously sterilized cats, the age at first visit increased by 0.01 months per year (P = 0.043). There was no change over time in the age of cats intact at first visit. The estimated average age (calculated age of cats without an outcome of removed) of the active cat population increased by 0.003 months each year (P = 0.030; Figure 12). The estimated age structure fluctuated over time (Figure 13). Overall, the mean age of cats at removal was 41.3 months (95%CI 38.2 to 44.4), which increased 1.9 months per year (P < 0.0001). The mean age at adoption was 11.3 months (95%CI 9.2-13.5), which did not change significantly over time. The mean age at euthanasia was 82.1 months (95%CI 75.3 to 88.8), which increased over time by 4.0 months per year (P < 0.0001). The mean age of DOA/MIA cats was 58.7 months (95%CI 51.2 to 66.2). Females accounted for 52% (95%CI 49.7 to 53.6) of the population at first visit. The mean age of females at first visit was 22.9 months (95%CI 20.7 to 25.1), while it was 19.4 months (95%CI 17.4 to 21.3) for males. Females that were intact at first visit had a mean age of 11.0 months (95%CI 9.8 to 12.1), while males intact at first visit had a mean age of 9.8 months (95%CI 8.7 to 10.9). Females that were previously sterilized were the oldest at first visit, with a mean of 79.3 months of age (95%CI 71.5 to 87.2), and previously sterilized males had a mean of 67.5 months of age (95%CI 60.1 to 74.9). Females had a mean age of 32.7 months (95%CI 30.0 to 35.5) at last visit, while males had a mean age of 31.0 months (95%CI 28.3 to 33.6). Females intact at first visit had an age at last visit of 21.1 months (95%CI 18.9 to 23.4), while males had an age of 21.5 months (95%CI 19.2 to 23.9).
Females found to be sterilized at first visit had a mean age of 87.5 months at last visit (95%CI 79.3 to 95.7), while males had a mean age of 78.6 months (95%CI 70.8 to 86.3) at last visit.

Population Estimate Compared to Census
The model of the estimated cat population based on individual records was found to decrease significantly over time (P < 0.0001). The decrease was similar to the census values, with comparable slopes (−0.06 for the census, −0.05 for the model). The difference in count per year between the census values and the model for years included in the census ranged from −20 to 30%, with a mean difference of 3.4%. This model estimated the free-roaming population to be 83 in 2017 (Figure 14).

DISCUSSION
The findings of this study are congruent with prior intensive TNR sites which show a decrease in population over time (2, 16-18). The geographic restriction of this location and the duration of the program partially address critiques of previous studies regarding the length of observation and the unknown effects of immigration and emigration (12,19). … population data available (http://www.hurricanecity.com/city/keylargo.htm). Changes noted in the age structure and modeled population in 2007 and 2013 were due to temporary disruption of the program's trapping efforts. In 2006 the program moved into the new Grayvik Center, and focus was temporarily diverted from trapping. In 2012 and 2013 there was a temporary change in directorship, which resulted in decreased trapping efforts. The changes observed in the population numbers and age structure subsequent to these two disruptions underscore the importance of continuity in trapping efforts. Despite the geographically restricted location, there was evidence of a significant amount of introgression (sterilized cats that were not ear tipped), possibly cats brought by seasonal community members or workers that were lost or abandoned, or cats from outside geographic areas that were deliberately abandoned. Previously sterilized but not ear-tipped cats most likely represent only 10-20% of lost or abandoned animals, given sterilization rates in at-risk populations (20). The high quality and visibility of the program, which provided food and veterinary care, may have encouraged abandonment of cats if owners believed that the cats would be well taken care of after abandonment. Abandonment may also have occurred if owners believed that cats would be better off under the care of the program rather than surrendered to a shelter where they would face the risk of euthanasia. Interestingly, nine cats sterilized and with ear-tips were noted in the record not to have been sterilized or ear tipped through ORCAT, which suggests deliberate abandonment or, less likely, cats taken to an alternative clinic for TNR surgery by an individual. Introgression, particularly of intact cats, has been noted to be a barrier to decreasing cat populations over time through TNR efforts (13,21,22). It is unclear whether the introgression observed here was higher or lower than in other geographic areas. Access to this location is limited and controlled through a 24-h manned gate, decreasing the likelihood of casual abandonment of cats. It is also geographically isolated, decreasing the chance of cats migrating from adjacent locales. However, human occupation is highly seasonal, which may increase the chance of loss or abandonment by part-time residents and staff.
Given the strict control and geographic isolation, required microchipping, sterilization, and licensure of cats might decrease the introgression of intact cats. Retroviral prevalence decreased over time as expected, given the elimination of significant risk factors for infection (fighting, mating, vertical transmission) via sterilization, removal of positive cats, and vaccination against FeLV. The point-of-care test that was employed to test for FIV and FeLV is reported to have the best performance for detecting FeLV, with a calculated positive predictive value of 100% for FeLV and between 50 and 84% for FIV depending on prevalence (23). The FeLV vaccine was an adjuvanted killed vaccine that required 2 doses 3-4 weeks apart for efficacy. Because of the inability to safely and humanely house unsocial cats for the duration necessary to booster the vaccine, many cats received only 1 dose. In addition, many cats did not receive recommended re-vaccinations. It is unknown what level of protection may have been afforded by a single FeLV vaccination, and it should be noted that not even fully vaccinated cats are completely protected from infection. For naturally exposed cats, infection with FeLV is approximately 3 times more likely in those unvaccinated as opposed to fully vaccinated (24).

Limitations
The data are limited in that they were collected for programmatic record-keeping rather than epidemiologic analysis. The censuses were not collected at regular intervals, and the years of collection were not regularly spaced. The month of collection was not standard. Cat populations tend to be seasonal, with peak populations observed in the summer and the lowest populations observed in the winter and spring (25). However, neither month nor season was significant in this limited analysis. This may have been due to the preferential removal of juveniles, which make up the vast majority of seasonal variation, or simply a lack of sufficient data points. Classic markers of animal welfare (such as growth, reproduction, body damage, disease, immunosuppression, adrenal activity, behavior anomalies, and self-narcotization) (26) were either not systematically captured or were not captured in a way that could be compared to animals not enrolled in the TNR program; welfare assessment was therefore limited to measures of life expectancy and a single class of disease prevalence. These measures of cat welfare do not account for concerns that returning rather than routinely euthanizing trapped cats could increase animal suffering due to non-retroviral disease or trauma (in other words, that free-roaming cats would be better off dead). Another limitation is that all population estimates were counts by a single caretaker. Multiple population census methods would have been ideal, as caretakers may underestimate the number of cats (1). However, this caretaker was highly knowledgeable about the entire population, with which she interacted on a daily basis, which may minimize concerns regarding the accuracy of the count. Twenty cats were added to census estimates by the caretaker to account for potential undercounting. The small size of each colony, particularly in later years, should also have made count estimates more accurate. Nearly all ages were estimates, which makes analysis of age-related data more challenging. The estimated average age of the free-roaming cat population may be biased toward an older age, as cats with undocumented removals may have continued to contribute to the average age of the population.
This bias was minimized by intensive efforts on the part of ORCAT to document outcomes such as MIA and by requests to the community to bring cats that were found dead to the clinic to be outcomed as DOA. The estimated date of death for cats with an outcome of released was based on the average age of death for DOA and euthanized cats, with cats older than that average age at time of release being estimated to live for only an additional 12 months. In conclusion, a TNR program operating for over two decades achieved a decrease in population and an increase in population welfare as measured by increased average age of the population and decreased prevalence of retroviruses.

AUTHOR CONTRIBUTIONS
RK collected the data, created the database, entered data, analyzed the data, and was the main author of the manuscript. HC entered data, drafted the introduction of the manuscript and edited the entire manuscript. JL contributed to the study design and data analysis, funded data collection, and edited the manuscript.
E-selectin S128R polymorphism and severe coronary artery disease in Arabs

Background: The E-selectin p.S128R (g.A561C) polymorphism has been associated with the presence of angiographic coronary artery disease (CAD) in some populations, but no data are currently available on its association with CAD in Arabs.

Methods: In the present study, we determined the potential relevance of the E-selectin S128R polymorphism for severe CAD and its associated risk factors among Arabs. We genotyped Saudi Arabs for this polymorphism by PCR, followed by restriction enzyme digestion.

Results: The polymorphism was determined in 556 angiographically confirmed severe CAD patients and 237 control subjects with no CAD as established angiographically (CON). Frequencies of the S/S, S/R and R/R genotypes were 81.1%, 16.6% and 2.3% in CAD patients and 87.8%, 11.8%, and 0.4% in CON subjects, respectively. The frequency of the mutant 128R allele was higher among CAD patients compared to the CON group (11% vs. 6%; odds ratio = 1.76; 95% CI 1.14-2.72; p = .007), indicating a significant association of the 128R allele with CAD among our population. However, stepwise logistic regression of the 128R allele together with different CAD risk factors showed no significant association.

Conclusion: Among the Saudi population, the E-selectin p.S128R (g.A561C) polymorphism was associated with angiographic CAD in univariate analysis but lost its association in multivariate analysis.

Background
E-selectin (endothelial leukocyte adhesion molecule; ELAM1) is a 115-kD cell surface glycoprotein expressed on endothelial cells after activation by cytokines, and it mediates the adhesion of circulating monocytes and lymphocytes to endothelial cells. This adherence to activated arterial endothelium is one of the earliest detectable events in the pathogenesis of atherosclerosis [1]. Double-knockout mouse experiments suggested that E-selectin plays an essential role in both early and advanced stages of atherosclerotic lesion development and that mutations in cellular adhesion molecules like E-selectin may act as genetic risk factors for coronary atherosclerosis [2,3]. Additionally, the involvement of E-selectin in cardiovascular diseases is suggested by the fact that it is expressed only in activated endothelial cells. The amino acid change from serine (S) to arginine (R) at codon 128 (S128R), which corresponds to an A > C nucleotide change at position 561 (A561C), in the epidermal growth factor-like domain of the E-selectin gene has been implicated in the pathogenesis of CAD in several ethnic groups, including Germans, Japanese, Americans, Chinese and Africans [4][5][6][7][8][9][10]. The 128R mutant allele was significantly more frequent in CAD patients than in controls (12.6% versus 6.7%, 17.4% versus 7.1%, and 19.5% versus 10.6%) in Japanese [4], German [11] and white American [6] populations, respectively. However, no previous studies are available on a possible association of this polymorphism with CAD among Arabs. Furthermore, apart from one study, which did not find a link between this mutation and CAD in Austrian patients with diabetes mellitus [12], there is hardly any data in the literature pertaining to the possible association of this mutation with different CAD risk factors. Therefore, the aim of this investigation was to evaluate the potential relevance of the E-selectin 128R polymorphism for angiographic CAD and its risk factors in Arabs, using the Saudi population as a study model.
Study population
Two groups of Saudi individuals were recruited for the present study. The patient group comprised 556 candidates (396 males and 160 females; mean age 50 ± 16 yr) of Saudi Arabian descent with angiographically documented severe CAD. The inclusion criterion for CAD was the presence of angiographically determined narrowing of the coronary vessels by at least 70%, which we define as severe disease. Exclusion criteria for CAD were major cardiac rhythm disturbances, incapacitating or life-threatening illness, major psychiatric illness or substance abuse, history of cerebral vascular disease, neurological disorder, and administration of psychotropic medication. A second group of 237 individuals (105 males and 132 females, mean age 50 ± 17 yr) undergoing surgery for heart valvular diseases, and those who reported with chest pain but were established to have no significant coronary stenosis by angiography, were recruited as angiographed controls (CON). Exclusion criteria for this group included, among others, diseases such as cancer, autoimmune disease, or any other disorders likely to interact with the variables under investigation. This study was performed in accordance with the regulations laid down by the Hospital Ethics Committee, and all participants signed an informed consent.

DNA preparation
Five ml of peripheral blood were collected in EDTA tubes from all participating individuals after obtaining their written consent. DNA was extracted using the PUREGENE kit from Gentra Systems (Minneapolis, MN, USA) and stored at -20°C in aliquots until required.

Determination of CAD risk factors
Serum cholesterol and triglyceride levels were measured routinely in the main Hospital Pathology Laboratory. Triglyceride levels >1.8 mmol/L and total cholesterol levels >5.2 mmol/L were considered elevated. Diabetic patients either had a known history of diabetes mellitus or were diagnosed as such according to the American Diabetes Association criteria [13]. Diagnosis of myocardial infarction was based on the consensus specified by the European Society of Cardiology and the American College of Cardiology [14]. Body mass index (BMI) was determined for all participants, and individuals with BMI ≥ 30 were considered obese in accordance with the Centers for Disease Control and Prevention (Atlanta, GA, USA). Information about all other risk factors was procured either through patient interviews or by referring to their medical records.

Detection of the S128R (A561C) polymorphism
This was carried out by polymerase chain reaction (PCR) amplification followed by PstI restriction enzyme digestion. For DNA amplification, we used the forward primer 5'-AGT AAT AGT CCT CCT CAT CAT G-3' and the reverse primer 5'-ACC ATC TCA AGT GAA GAA AGA G-3', designed to amplify a 186 bp fragment of the E-selectin gene [6,11]. Each 25 µl PCR reaction contained 2.5 µl of 10X reaction buffer with MgCl2 (Amersham Pharmacia Biotech, Piscataway, NJ, USA), 10 pmol of each primer, 100 pmol/µl each of the deoxynucleoside triphosphates (deoxyadenosine triphosphate, deoxyguanosine triphosphate, deoxycytidine triphosphate and deoxythymidine triphosphate) (Perkin-Elmer Corporation, Foster City, CA, USA) in Tris-HCl buffer, 1 unit of Taq DNA polymerase (Amersham Pharmacia Biotech, Piscataway, NJ, USA) and 50 ng of genomic DNA template.
The mixture was denatured at 95°C for 5 min, and the PCR reaction was carried out for 35 cycles in a GeneAmp 9600 PCR system (Perkin-Elmer Corporation, Foster City, CA, USA) under the following conditions: denaturation at 95°C for 1 min, annealing at 54°C for 45 sec, extension at 72°C for 1 min, and a final extension cycle of 72°C for 7 min. The PCR products were electrophoresed on a 1% agarose gel and detected with 0.5 µg/ml ethidium bromide to confirm the correct amplicon size. The products were digested using the PstI restriction enzyme (Stratagene, La Jolla, CA, USA), and the resultant fragments were resolved on a 4% MetaPhor agarose gel (FMC Bio-products, Rockland, Maine, USA) in TE buffer containing 0.5 µg/ml ethidium bromide. The sizes of the digested amplicons were determined using a 50-bp ladder (Amersham Pharmacia Biotech, Piscataway, NJ, USA). As a quality control, we confirmed by direct sequencing the genotype status of 384 random samples representing the three different genotypes.

Statistical analysis
Genotype frequencies in the various groups were compared by the Chi-square test. Multivariable logistic regression was used to study the effect of the E-selectin 561C allele (128R allele) on CAD status, incorporating other variables (coronary risk factors) into the model. Additionally, we tried multiple logistic regression models involving the inclusion of genotype × CAD risk factor interactions. All analyses were performed using SPSS v.10 (SPSS Inc., Chicago, USA) statistical analysis software. A two-tailed p value < .05 was considered statistically significant. We performed a power analysis employing nQuery Adviser version 4, using the two scenarios described in the results section, and we concluded from this analysis that the study was adequately powered.

Results
We found that the mutant 128R allele accounted for 11% of alleles in the CAD group, which was significantly higher than in the CON group (6%). The odds ratio for the risk of CAD associated with the 128R allele was 1.76 (95% CI 1.14-2.72; p = .007), indicating a significant association of this allele with CAD in our population. The variables showing an association (p ≤ .05) in Table 1 were then entered into a stepwise logistic regression in order to study the possible combined effect of the mutant 128R allele with other risk factors on angiographic CAD. The variables retained in the model were hypertension (p = .03), diabetes mellitus (DM) (p < .001), hypercholesterolemia (p < .001), hypertriglyceridemia (p = .008), MI (p < .001) and gender (p < .001) (Table 2). As for the power calculation, we performed a power analysis employing nQuery Adviser version 4 using the following two scenarios. First, when we compared the risk of CAD among S/R and R/R carriers to S/S, we obtained an odds ratio of 1.67 (p = .023). With α = .05 and a two-sided test, the proportion of S/R and R/R among CAD patients is .189 and among controls is .122; the number of controls = 237 and the number of cases = 556 (we used the average, which is 397). Using this information, we calculated the power in nQuery as 74%. Second, when we analyzed the risk of CAD for R alleles compared to S alleles, we obtained an odds ratio of 1.76 (p = .007). With α = .05 and a two-sided test, the proportion of C alleles among CAD patients is .106 and among controls is .063; the number of control alleles = 474 and the number of case alleles = 1112 (we used the average, which is 793). Using this information, we calculated the power in nQuery as 86%. From this power calculation, we concluded that our study is adequately powered.
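For reference, the allele-level association above can be reproduced directly from the reported proportions. The sketch below (Python; the 2x2 counts are back-calculated approximations from the published percentages, not the authors' raw data) computes the odds ratio and its 95% confidence interval via the standard log-odds method:

import math

# 2x2 allele table, back-calculated from the reported proportions
# (R allele: ~.106 of 1,112 CAD alleles; ~.063 of 474 control alleles).
a, b = 118, 994   # CAD group: R alleles, S alleles (approximate)
c, d = 30, 444    # CON group: R alleles, S alleles (approximate)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
# -> OR = 1.76, 95% CI 1.16-2.66 (close to the reported 1.76, 1.14-2.72)

The small discrepancy in the interval comes from rounding the back-calculated counts; with the exact genotype counts, the published interval would be recovered.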
Discussion
In the past 3 decades, the incidence of CAD has been on the rise in Saudi Arabia. According to the biggest study conducted so far, the overall prevalence of this disease in Saudi Arabia is 5.5% [15]. The rise in the incidence of CAD has been attributed to major changes in the lifestyle of the Saudi population. High-fat diets, obesity, diabetes, and smoking, all of which are considered CAD risk factors, have become more prevalent, and people are leading a more sedentary lifestyle. Two independent studies predicted an incremental increase in the development of CAD in the Saudi population because of sharp rises in CAD risk factors such as obesity, hypercholesterolemia, diabetes, hypertriglyceridemia, and high blood pressure [16,17]. Early detection of individuals genetically susceptible to CAD can lead to early intervention, and knowledge of genetic susceptibility to CAD has value in providing risk information and guiding decision-making. To our knowledge, this study is the first to evaluate the prevalence of the E-selectin S128R polymorphism and its potential relevance for angiographic CAD and its associated risk factors in an Arab population. The first step was to determine the prevalence of the different E-selectin genotypes in our general population of Saudi Arabs. We found that S/S is the most abundant and R/R the least common genotype among Arabs. The prevalence of the 128R allele was 6% in our controls, which is almost identical to the rates observed in Germans (7.1%) and Japanese (6.7%) and slightly higher than in Africans (3.7%) [9] and Chinese (0%) [10]. When we compared the genotype frequencies in the CAD versus the CON group, the odds ratio suggested an association between these genotypes and CAD; however, possibly because of the relatively small number of individuals with the S/R and R/R genotypes, this analysis did not attain significance (see Table 1). On the other hand, a significant odds ratio and p value were obtained when we compared the frequency of the 128R mutant allele in the CAD and CON groups, pointing to an association of this allele with CAD in our Saudi Arab population. These results are comparable to the findings in the Japanese [4], German [11], white American [6] and Chinese [10] populations, but in contrast to a previous study in CAD patients with type 2 DM [12]. Although the frequencies of several classical risk factors for CAD, including elevated cholesterol and triglyceride levels, DM, age, hypertension, gender and MI, were higher in the CAD patients compared to controls, when we entered these risk factors together with the mutant 128R allele into a multiple variable logistic regression, the association was no longer significant.

Conclusion
In summary, the mutant 128R allele of the E-selectin gene is associated with angiographic severe CAD in Saudi Arabs. This association is lost after adjustment for traditional CAD risk factors.
Micro-Computed Tomography Soft Tissue Biological Specimens Image Data Visualization

Visualization of soft tissues in microCT scanning using X-rays is still a complicated matter. There is no simple tool or methodology for setting up an optimal look-up-table while respecting the type of soft tissue. A partial solution may be the use of a contrast agent. However, this must be accompanied by an appropriate look-up-table setting that respects the relationship between the soft tissue type and the Hounsfield units. The main aim of the study is to determine experimentally derived look-up-tables and the relevant values of the Hounsfield units based on statistical correlation analysis. These values were obtained from the livers and kidneys of 24 mice stored in ethanol solutions, summarized as the centroid of the area under the opacity look-up-table curve. Samples and a phantom were scanned by a Bruker SkyScan 1275 micro-CT and a Phywe XR 4.0 and processed using CTvox and ORS Dragonfly software. To reconstruct the micro-CT projections, NRecon software was used. The main finding of the study is that there is a statistically significant relationship between the centroid of the area under the look-up-table curve and the number of days for which the animal sample was stored in an ethanol solution. H1 of the first hypothesis, i.e., that the Spearman's correlation coefficient does not equal zero (r1 ≠ 0) for this relationship, was confirmed. On the other hand, there is no statistically significant relationship between the centroid of the area under the look-up-table curve and the concentration of the ethanol solution. In this case, H1 of the second hypothesis, i.e., that the Spearman's correlation coefficient does not equal zero (r2 ≠ 0) for this relationship, was not confirmed. Spearman's correlation coefficients were −0.27 for the concentration and −0.87 for the number of days stored in ethanol solution in the case of the livers of 13 mice, and 0.06 for the concentration and 0.94 for the number of days stored in ethanol solution in the case of the kidneys of 11 mice.

Introduction
Computed tomography (CT) imaging is widely used in medical practice, mainly due to the non-invasiveness of this method, its good spatial resolution, and the relatively short acquisition time of the required images. In addition to conventional CT, which is mainly used for both diagnostic and therapeutic procedures [1], high-spatial-resolution CT, or micro-CT (µCT), is also used in practice [2]. These systems work on the same basic principle as typical CT, except that the X-ray-tube-detector system does not perform the rotational motion; instead, the specimen itself rotates. Their advantage lies in the ability to scan samples with much higher spatial (<50 µm) and contrast resolution. It is precisely because of the non-destructiveness of this method in evaluating the scanned volume at high resolution that µCT devices are much used in both research and industry. Furthermore, even if the intrinsic noise of commercial µCT detectors is not a problem in many cases (especially in industry), the imaging of more delicate structures with a lower range of CT numbers (equivalent to Hounsfield units, HU), and the associated lower attenuation contrast between structures such as soft tissues, is greatly hampered by this noise.
The present work, in collaboration with the Department of Anatomy at the 3rd Faculty of Medicine, Charles University (Prague, Czech Republic), is primarily focused on the visualization of soft tissues in particular, as well as the search for a methodology that could optimize the overall visualization process and make it easier. This also involves research and experimentation with new procedures and contrast agents to achieve the contrast between soft tissues that is required for visualization [3][4][5][6][7]. Two papers have dealt with similar issues. The first, i.e., [8], used an image histogram for the optimal look-up-table (LUT) setup, so its intention is not the same as ours. The second, i.e., [9], uses only the LUT settings. However, in neither case was it possible to find a link between soft tissue type and Hounsfield units. We have been using µCT for a long time, and we need to visualize soft tissues in experimental animals. There is a long tradition of contrast agent usage, but contrast agents are not always the best option, and we often encounter problems when using them. This is why we tried to find a connection between the LUT setting and the HU values of individual structures and organs; even the mentioned contrast agents could be of some value in this. Additionally, we wanted to find out whether other possibilities of contrast enhancement exist, e.g., due to air bubbles such as those in contrast agents for ultrasound. This was considered especially when experimenting with the PHYWE XR 4.0 on various structures in chicken tissues (phantom). Finally, we are also thinking about semi-automated or fully automated settings of the LUT using fuzzy logic. Some of these ideas were verified within this study. The main aim of this study is to optimize the soft tissue contrast visualization of biological specimens (mouse liver, kidneys) using a contrast agent and a proper LUT. Based on this aim, we wanted to statistically verify the relationship between the centroid of the area under the look-up-table curve and the number of days for which the animal sample was stored in the ethanol solution, as well as the concentration of the ethanol solution. Therefore, two hypotheses were formulated. H1 (each occurrence of H1 means that it is an alternative hypothesis) of the first hypothesis states that the Spearman's correlation coefficient does not equal zero (r1 ≠ 0) for the relationship between the coordinates of the center of gravity (centroid) of the area under the LUT and the number of days for which the sample was stored in the ethanol solution. H1 of the second hypothesis states that the Spearman's correlation coefficient does not equal zero (r2 ≠ 0) for the relationship between the coordinates of the center of gravity (centroid) of the area under the LUT and the ethanol concentration in the solution.

Materials, Devices, and Software
The database of µCT images from the Dept. of Anatomy of the 3rd Medical Faculty of Charles University was used for the study. All intravenous contrast agent applications followed a similar pattern: first, the contrast agent is introduced into the tail vein; animal euthanasia and µCT scanning then follow. Concerning the Aurovist contrast agent, the solution from the ampoule is injected into the tail vein, into the bloodstream, under isoflurane anesthesia; scanning in the µCT apparatus follows.
Concerning the timing, scanning is performed after the complete distribution of the Aurovist gold nanoparticles to the tissues of the animal, including both the arteries and veins of the circulatory system. The preparation steps for the experimental animals were the following. Animals (mice) were housed under a standard light/dark cycle at the First Faculty of Medicine, Charles University, at standard room temperature. Each mouse received one ampulla of Aurovist contrast agent (15 nm gold particles in 40 mg of solution; Nanoprobes, Yaphank, NY, USA) into the tail vein. In the case of the whole-mouse experiment, mice were euthanized with pentobarbital 2 h after Aurovist application and scanned in the micro-CT. This timing was selected because rigor mortis develops within 2 h after death. Animals (chickens) were obtained by micro-surgery from eggs on day 19 of development. First, a rectangular window was dissected out of the egg wall, and a solution of 40% KI was applied intra-amniotically to euthanize the chicken (heartbeat stop). After the removal of the dead chicken from the egg, it was placed into 80% ethanol for 7 days. Animals (rabbits) were donated by Prof. Berndt Minnich, University of Salzburg, Vascular Unit, Austria. Post-mortem, rabbits were pinned in a supine position onto a wax plate. After opening of the abdomen, the aorta abdominalis was identified and separated from the surrounding structures. A ligature around the aorta abdominalis ensured exclusive upper-body casting. A tube was inserted into the artery cranially to the first ligature and tied in place (6.0 sutures). After opening of the aorta, the blood vessels were rinsed with 0.9% Ringer's solution (60 mL/h) at 37°C until clear reflux appeared. Thereafter, the casting medium (Mercox-Cl-2B, Ladd Research Inc., Burlington, VT, USA), diluted with monomeric methyl methacrylate, was injected at 99 mL/h by an electric syringe pump (Habel PSA 50, Sky Electronics S.A., Grenoble, France) or a pneumatic injection pump (ComServ OG, Ebenau, Austria). When the injected resin became viscous, the animals were left in place for at least 30 min at room temperature (RT) to initiate resin polymerization. The upper bodies of the rabbits were cut off and tempered in water (60°C, 24 h). Subsequently, the water was replaced by potassium hydroxide (7.5%, 40°C, twice for 24 h), followed by hydrochloric acid (2%, RT, 24 h) and formic acid (5%, RT, 15 min). Finally, the casts were washed thrice and frozen in distilled water (−20°C) and freeze-dried (FreeZone 77520, Labconco Corp., Kansas City, MO, USA). The database consisted of ex vivo chickens in and out of the shell, aged 5 to 19 days. Chickens outside the shell were preserved in 50-100% ethanol solution, some of them additionally in formaldehyde solution. In addition, a few chicken samples contained injected gold nanoparticles from the Aurovist contrast agent. Prior to acquisition, samples were left in air after removal from the solution and were scanned with Bruker's SkyScan 1275 (Bruker MicroCT, Kontich, Belgium) after evaporation of the solvent. Another part of the database consisted of ex vivo preparations of mice injected with the Aurovist contrast agent containing 15 nm gold nanoparticles (AuNPs) prior to scanning. A 50% solution of the Omnipaque contrast agent was then used intravenously before scanning the mouse organs themselves (liver, lungs, spleen, and heart).
Some mouse organs, such as liver, kidney, brain, heart, and leg, were kept for one week in a solution of potassium iodide (KI, Merck, Czech Republic) at a concentration of 100 mM in 20% ethanol before scanning. Lastly, the database included corrosive preparations in which a resin (Mercox) [10] was infused into the vascular system and allowed to harden; the remaining tissue was then dissolved in lye by maceration, and the sample was scanned. Corrosive preparations of rabbit and rat lung and heart were selected for the purpose of this study. To implement the idea of visualizing blood vessels in soft tissue using the contrast that would be created by the presence of air in the vascular cavities, chicken flesh was used as a phantom in which cylindrical tubes/capillaries of various diameters, made of different materials, were placed. These were two silicone tubes, a glass capillary with sealed ends, and a plastic straw (bioplastic PLA). This phantom, depicted in Figure 1, was then scanned with the Phywe XR 4.0 (PHYWE Systeme GmbH & Co. KG, Göttingen, Germany) experimental setup. Two µCT systems were used to scan the above samples.
The ex vivo chicken samples were scanned with a professional µCT system, Bruker SkyScan 1275, located at the Department of Anatomy, 3rd Faculty of Medicine, Charles University, while the Phywe XR 4.0 experimental setup was used to scan the phantom of the chicken vasculature and capillaries at the Dept. of Biomedical Technology, Fac. of Biomedical Engineering, Czech Technical University in Prague. Bruker's SkyScan 1275 has been specifically designed for fast scanning using advances in X-ray source technology and efficient flat-panel detectors. The X-ray source has a selectable anode voltage ranging from 20 to 100 kV, and the maximum available power is 10 W. The assembly includes an active 3-megapixel CMOS flat-panel detector with a maximum possible resolution of up to 4 µm (at maximum magnification), and it is capable of accommodating a sample diameter of 96 mm and a height of 120 mm. All setups used with the above-mentioned µCT are available in Appendix B, i.e., Tables A2-A19. The XR 4.0 experimental setup from the German company Phywe was designed for teaching purposes and for laboratory experiments with X-rays. Its advantage lies, among other things, in the relatively easy and quick interchangeability of the X-ray tube, which facilitates experiments with different anode materials [11,12]. The XR 4.0 can also be used as a µCT system thanks to an expansion kit supporting the computed tomography function. This set consists of a fixed copper X-ray tube that illuminates the specimen on a rotary table driven by a stepper motor with a minimum adjustable table rotation angle of 0.086°. The attenuated X-ray radiation is then incident on an XRIS detector (CMOS) with an active area of 5 × 5 cm² and a maximum possible resolution of 48 µm [11,12]. The selectable anode voltage of the assembly is 5-35 kV, with an adjustable emission current in the range up to 1 mA. The setup with detailed settings is available in Appendix B, i.e., Table A1. Methods In the field of digital image data processing (digital images), various point operators or functions are very often used to perform a single-point assignment, i.e., to transform the value of an image point (pixel) from an input image (labeled a) at a given position (x, y) to another value within the output image (labeled b), but at the same position (x, y). In general, this situation is indicated in Figure 2.
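As an aside that is not part of the paper (which performs these operations inside Bruker's NRecon and CTvox rather than in code), a minimal Python/NumPy sketch of such a point operator is shown below; the function name and test data are illustrative assumptions.

import numpy as np

def apply_point_operator(image, lut):
    # lut is a 256-entry table: for every pixel, the input gray level a at
    # position (x, y) is mapped to the output gray level b = lut[a] at the
    # same position (x, y).
    lut = np.asarray(lut, dtype=np.uint8)
    assert lut.shape == (256,)
    return lut[image]

# Identity operator: multiplying every pixel value by 1 copies the input.
identity = np.arange(256, dtype=np.uint8)
img = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
assert np.array_equal(apply_point_operator(img, identity), img)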
Table Concept Let us consider an experiment where we multiply the pixel values in the input image by 1. If we consider the usual input dynamic range of grayscale digital images (i.e., 0 to 255, where 0 represents the black level and 255 represents the white level), then the output image will produce a result that is identical to the values in the input image. For example, an input value of 10 would be multiplied by 1 and the result would also be 10, and this would continue for all pixel values in all positions. This then results in a simple linear function that can be represented as a straight line at a 45° angle passing through the origin of the coordinates; in other words, it is a simple copy of the input image. Such an example can also be expressed very simply in mathematical notation, namely as the slope equation of a straight line, b = k·a + q, as reported in Figure 3. Figure 3. Example of a point operator (function) that implements a copy of the input image on the output; in this case, the value of the slope is k = 1, and the value of the y-intercept of the straight line (where the line crosses the y-axis of the graph) is q = 0. With this defined operator, we can easily make various changes to image parameters such as brightness and contrast, both when we need to highlight selected gray levels and when we need to suppress them. Examples of different transfer functions are shown in Figures 4 and 5. From the above examples, it is clear that these transfer functions or LUTs can significantly influence the visualization process of the given image data. Thus, for example, the transfer function in Figure 4 (panel 2) affects the brightness of the image, and in Figure 4 (panel 3) it affects the contrast; in Figure 4 (panel 6), a transfer function for realizing a so-called binary image (i.e., an image that contains only two levels, most often black and white) is shown, and yet another suitable example can be seen in Figure 5 (panel 12), which is suitable for correcting an image that was taken under greatly reduced or increased lighting conditions. The latter two examples are further illustrated on the image data in Figures 6 and 7, respectively.
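A hedged sketch of how transfer functions of these kinds could be generated as LUTs in Python/NumPy follows; the panel references repeat the text above, but the concrete parameter values (offset, slope, threshold, gamma exponent) are illustrative assumptions rather than the values behind the paper's figures.

import numpy as np

a = np.arange(256, dtype=np.float64)  # input gray levels 0..255

def linear_lut(k, q):
    # b = k*a + q, clipped back into the 0..255 dynamic range.
    return np.clip(k * a + q, 0, 255).astype(np.uint8)

identity_lut = linear_lut(k=1.0, q=0.0)     # copy of the input (Figure 3)
brightness_lut = linear_lut(k=1.0, q=40.0)  # brightness change (cf. Figure 4, panel 2)
contrast_lut = linear_lut(k=2.0, q=-128.0)  # contrast change (cf. Figure 4, panel 3)
binary_lut = np.where(a < 128, 0, 255).astype(np.uint8)    # binary image (cf. Figure 4, panel 6)
gamma_lut = (255.0 * (a / 255.0) ** 0.5).astype(np.uint8)  # correction for poor lighting (cf. Figure 5, panel 12)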
Transfer Functions Related to the Hounsfield Units (HU) There is a very important process in CT and µCT imaging, namely the conversion of the attenuation of the passing X-rays to a grey level. Based on the physiological properties of human vision, 256 gray levels are very often used. The range of so-called Hounsfield units (HU) is typically −1024 to +3071, which is 4096 values, a much larger dynamic range than the 256 gray-level values. For this reason, only a so-called window on a given tissue (a limited HU range), which has a width of 256 values or greater, needs to be displayed. Thus, it is never possible to see all tissues at once; see the soft tissue window, bone window, etc. In practice, this situation is often implemented by a transfer function that has HU values on the x-axis and gray-level values from 0 to 255 on the vertical axis. The situation is also depicted in Figure 8, which is an example of this situation in the NRecon software [15].
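The paper performs this windowing in NRecon; purely as an illustration (and not NRecon's actual implementation), a window of a given center and width could be mapped onto 256 gray levels as follows, where the soft tissue window values are common textbook numbers used only as an example.

import numpy as np

def hu_window_to_gray(hu, center, width):
    # Map the HU window [center - width/2, center + width/2] linearly onto
    # gray levels 0..255; HU values outside the window saturate to black/white.
    lo = center - width / 2.0
    gray = (np.asarray(hu, dtype=np.float64) - lo) * (255.0 / width)
    return np.clip(gray, 0, 255).astype(np.uint8)

hu_values = np.array([-1024.0, 0.0, 40.0, 70.0, 3071.0])
print(hu_window_to_gray(hu_values, center=40, width=400))  # soft tissue window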
The practical impact of this situation is also the arrangement of the image data format in the form of so-called DICOM (Digital Imaging and Communications in Medicine) files. Such a file contains both image data and metadata [17], and each DICOM browser can display both. The structure of DICOM files consists of so-called tags, and examples of these tags in relation to HU are the following items: the DICOM attribute number (0028,1053), whose name is "Rescale slope", and the DICOM attribute number (0028,1052), whose name is "Rescale intercept". These parameters can then be substituted into the equation for the parameters k and q in Figure 3. These items can also be read. Transfer Functions Design Possibilities Within the CTvox software, a histogram of the attenuation is available together with an editor that can be used to freely modify the transfer functions as LUTs; i.e., the program can either simply connect individual markers or use them as nodes to create a so-called spline curve. In the spline mode called volume rendering, the program then assigns to each voxel the corresponding degree of the currently selected parameter (the so-called emission color [18]) depending on the attenuation of the voxel. There are five parameters to choose from: opacity, red (R) channel, green (G) channel, blue (B) channel, and luminance (L), as reported in Figure 9c. If all three RGB channels are combined, only two parameters are available (i.e., opacity and luminance), and the voxels are assigned their opacity and grayscale. In the other case, four parameters are available, namely opacity together with all three RGB channels. In addition to assigning opacities, individual voxels can also be colored differently in this way, or segmented by color. In the case of opacity, a purple opacity curve (initially a straight line, with just two markers) appears, in order to indicate that it is active. Opacity is the opposite of transparency. Hence, we will need to map low intensities to low opacities (i.e., high transparency), as shown in Figure 9c.
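Outside of CTvox, a marker-based opacity curve of this kind can be imitated in a few lines; the sketch below (an illustration under our own naming, not CTvox's internal logic) linearly interpolates between markers, mapping low intensities to low opacities. A spline marker mode could be approximated by substituting scipy.interpolate.CubicSpline for np.interp.

import numpy as np

def opacity_lut(markers, n_levels=256):
    # markers: (intensity, opacity) pairs acting as nodes; np.interp connects
    # them piecewise-linearly, like the simple marker mode described above.
    xs, ys = zip(*sorted(markers))
    return np.interp(np.arange(n_levels, dtype=np.float64), xs, ys)

# The initial straight line with just two markers: fully transparent at
# intensity 0, fully opaque at 255 (low intensity -> high transparency).
lut = opacity_lut([(0, 0.0), (255, 1.0)])
volume = np.random.randint(0, 256, size=(16, 16, 16))
voxel_opacity = lut[volume]  # per-voxel opacity for volume rendering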
Results A total of 53 transfer functions (LUTs) were designed, 42 of which were validated and optimized to emphasize the soft tissues of selected biological samples from µCT (mouse, rat, rabbit) using selected contrast agents (Aurovist, Omnipaque, KI in ethanol solution, Mercox). The remaining 11 transfer functions could not be verified due to the absence of contrast in the soft tissue samples from chickens. In total, 18 images and transfer functions are included within this text. Out of the 53 transfer functions, one transfer function could be suggested that would likely highlight the tongue of a 19-day-old chick scanned with the Bruker SkyScan 1275 [19] at an anode voltage of 40 kV. A total of 24 mouse samples were statistically processed via correlation. A normality test was performed for all these samples in Matlab (Campus Wide Matlab, R2021a, The MathWorks, Inc., Natick, MA, USA) via the Lilliefors test. The tested values were obtained for the livers and kidneys of 24 mice in ethanol solutions as the centroid value of the area under the opacity LUT curve (see the example in Figure 10). Except for one dataset, all the others do not have a normal distribution according to this test. Results are available within Tables 1-4 below. From this it follows that Spearman's correlation coefficient is required. In the case of a homogeneous triangle, the resulting centroid's x coordinate is the average of the x coordinates of its vertices [20]. The hypotheses mentioned within the Introduction were evaluated based on these data. This was a test of the alternative hypothesis that the Spearman correlation coefficient is not equal to 0. We used a relationship (see the link within [21]) where, given H0: r = 0, the Student's t statistic has a distribution with d.f. (degrees of freedom) N-2 (see also the link within [22]). Overall, 18 LUTs out of the 53 were also validated on selected biological samples, a selection of which is illustrated within this section and reported in Figures 9 and 11-13, with more details, according to the contrast agent used, in Appendix A, specifically in Figures A1-A18. Part A1 (Figures A1 and A2) is related to the ethanol contrast, part A2 (Figures A3-A7) to Aurovist, part A3 (Figures A8-A11) to Omnipaque, part A4 (Figures A12-A16) to potassium iodide (KI) in 20% ethanol solution, and part A5 (Figures A17 and A18) to Mercox resin. The captions of these figures contain detailed descriptions of all possible findings thanks to the better contrast visualization; this is the reason why these figures are included at higher spatial resolution in Appendix A.
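For readers who want to reproduce this pipeline outside Matlab, a small Python sketch follows; the centroid data are synthetic stand-ins and the helper name is our own, but the quantities mirror those described above: the Lilliefors normality test, the x coordinate of the centroid of the area under the opacity LUT curve, Spearman's correlation coefficient, and the Student's t statistic with N-2 degrees of freedom.

import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import lilliefors

def lut_centroid_x(opacity):
    # x coordinate of the centroid of the area under the LUT curve:
    # the gray levels weighted by the opacity values.
    x = np.arange(len(opacity), dtype=np.float64)
    return np.sum(x * opacity) / np.sum(opacity)

demo_opacity = np.linspace(0.0, 1.0, 256)  # the initial straight-line LUT
print(lut_centroid_x(demo_opacity))        # centroid x of the area under it

# Synthetic stand-in: centroid values for samples stored 1..8 days in ethanol.
rng = np.random.default_rng(1)
days = np.arange(1, 9, dtype=np.float64)
centroids = 100 + 5 * days + rng.normal(0, 3, size=days.size)

print(lilliefors(centroids))  # (KS statistic, p-value) of the normality test
rho, p_value = stats.spearmanr(days, centroids)
n = days.size
t_stat = rho * np.sqrt((n - 2) / (1 - rho**2))  # Student's t with d.f. = N-2
print(rho, p_value, t_stat)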
Discussion The main finding of this study is that there is a statistically significant monotonic relationship between the centroid of the area under the LUT curve and the number of days for which the animal sample was stored in the ethanol solution. H1 of the first hypothesis, that the Spearman correlation coefficient does not equal zero (r1 ≠ 0) for this relationship, was confirmed. On the other hand, there is no statistically significant monotonic relationship between the centroid of the area under the LUT curve and the concentration of the ethanol solution; see Tables 1-4. In this case, H1 of the second hypothesis, that the Spearman correlation coefficient does not equal zero (r2 ≠ 0) for this relationship, was not confirmed. Within Table 4, we can see the relevant average values for selected storage days of the specimen and the related Hounsfield units, i.e., CT numbers, as well.
These values are very important and can be assigned to the centroid parameters of a LUT curve incorporating the relevant parameter of days for which the animal sample (mouse liver or kidney) was stored in ethanol solution. Another finding of the study is that it is not possible to use one universal transfer function for proper soft tissue visualization of µCT images. The use of an appropriate contrast agent has a very significant effect on soft tissue visualization. It was possible to visualize the capillaries inside the chicken flesh in the vasculature phantom. In the case of the mouse heart, it was possible to segment and colorfully distinguish the tissue of the heart muscle from the rest of the blood vessels. Due to the lack of one optimal and universal transfer function (LUT), a total of 53 transfer functions were designed using the CTvox software [13] and optimized to highlight soft tissues in particular. In the case of the mouse liver, mainly its external structures can be seen, while in the case of the kidney, brain, and heart, the internal structures are partially visible. In the mouse leg specimen, the transfer function emphasizes partly soft tissue and partly hard tissue (bone). For the corrosive preparations, the soft tissue structures are shown by the contrast created between the Mercox resin and the air. Since the CT numbers of the synthetic resin are around 70 HU [2], it is possible to speak of a soft tissue equivalent in this case (range 40-80 HU). Unfortunately, it was not possible to properly visualize soft tissues or organs from any of the chick scans. In fact, it is not certain whether the scan of a stained chicken head of a 19-day-old chick at an anode voltage of 40 kV succeeded in highlighting just the tongue of this chick, or whether it shows only the cartilaginous part of its beak. In the ex vivo samples of chickens prepared in ethanol and formaldehyde solutions, there was a presumption of contrast in the µCT images through protein coagulation. This assumption was probably only partially borne out, and the result shows the interior of the chicken stained differently in blue and red, corresponding to a slight difference in the CT numbers of soft tissue that could not otherwise be further visualized. Furthermore, the ethanol solution in whole chickens did not have the opportunity to penetrate and subsequently evaporate from the animal cavities, which would have caused the solution to be replaced by air and would in turn have produced better contrast in the µCT images (comparable to the evaporation of ethanol from the cavities of the animal organs in which it had evaporated). It is also interesting to note that the plastic straw made of polyethylene has approximately the same CT number as the chicken meat and therefore cannot be distinguished from the meat as the beam passes through it. The glass capillary was the best visible of all materials, as the glass-air interface is a very high-contrast object against soft tissue. This was followed by the larger-diameter silicone capillary, and the third was the smaller-diameter capillary. Thus, in contrast to polyethylene, all other materials were clearly distinguishable from the chicken meat (Figure 1). In the majority of cases, the output of CTvox shows that using a lower anode voltage is the optimal choice for scanning soft tissue on µCT. In the case of bone segmentation, the choice of voltage did not play any significant role.
For a more prominent presentation of the results, the processing of the oldest chicks prepared in ethanol was deliberately chosen, mainly because the chick is not sufficiently anatomically developed until around day 19 after egg fertilization, as this is the age at which chicks usually hatch. When younger chicks were examined, many structures were not yet sufficiently developed and, therefore, not visible. Four chicken samples also contained the contrast agent Aurovist, consisting of 15 nm gold nanoparticles. Specifically, there were two scans at 10 days of age and another two at 12 days of age; unfortunately, no internal (soft) anatomical structures could be identified in this case either. This may have been due to the young age of the chicks, which may not yet have properly developed these structures. A second explanation could be that a very low amount of the Aurovist contrast agent was injected into the sample. A limitation of this study is the relatively small number of included images and contrast agents. Another limitation is the purely experimental level of visualization, i.e., there is a subjective dependence on the operator and a possible variable deviation. In the statistical processing of the data with different KI concentrations, only two different concentrations were used (50% and 100%), which was also a limitation. Conclusions In addition to the contrast agents used above, it is possible to use other, more suitable contrast agents with very interesting properties. The idea is to perform a follow-up study based on the selection of an optimal contrast agent with a relevant LUT. Comparative experiments are prepared with a set of contrast agents (Sonovue, Iomeron, Optison, Omnipaque, Visipaque, Xenetix, Optiray, and Ultravist) in order to select an appropriate one. A semi-automatic/automatic LUT setup is assumed to be utilized. Informed Consent Statement: Not applicable. Data Availability Statement: The complete original data are not publicly available due to their large volume. However, all LUTs are available as images and *.tf files, as are the relevant images; please use the link within [23]. Acknowledgments: Non-commercial licenses of the software measure CT from Phywe [11,12], CTvox [13] and NRecon [15] from Bruker, and Dragonfly [14] from ORS were used. We would like to thank Phywe, Bruker, and Object Research Systems for their kind support and feedback. Conflicts of Interest: The authors declare no conflict of interest.
Appendix A After reconstructing the data with NRecon, we designed several transfer functions in CTvox to emphasize the different structures of the selected samples, especially soft tissues. The CTvox outputs are divided into 5 subsections, according to the method of sample preparation or the contrast used. This appendix is an additional section that contains details and data supplemental to the main text above (see the Results section). Figures of replicates for experiments of which representative data are shown in the main text above are added here at higher spatial resolution and with detailed descriptions. Figure A2. Dyed 19-day-old chicken (acquisition at anode voltage 40 kV, contrast agent ethanol). Imaging after 5 days of ethanol fixation with incipient soft tissue contrast of the trachea, diaphragm, and heart muscle. The contrast in this phase of fixation also corresponds to the density of the surface structures of skin derivatives (feathers). Appendix A.2. Aurovist In ex vivo mice with a gold nanoparticle contrast agent (Aurovist), we designed transfer characteristics/functions that colorfully highlight soft tissue structures, particularly in blood vessels and airways. Figure A3. The Aurovist intravascular contrast displayed the entire vascular bed of the mouse. Clearly visible here are the vasculature of the kidneys and their placement in the retroperitoneum on the sagittal section, the vessels of the thoracic organs, the hepatic vessels of all lobes, and the supply of the intestines in the abdominal cavity and pelvic plexus. In the section at the level of the thorax, there is a detail of the pulmonary system, where, according to the symmetry of the distribution and branching of the vessels, we can also describe their loss due to occlusion. Figure A4. Color resolution of mouse vessels on the body surface (red) and internal organs (green). This distinction also offers a comprehensive view important for the orientation and description of superficial and parietal vessels. In the head area, the branching of the external carotid artery is visible, then the interscapular brown fat vessels, the superficial thoracic branching of the thoracic artery and the intercostal vessels, the parietal lumbar and sacral vessels, as well as the plexuses around the spinal canal and the limb vessels. The blood vessels of the kidneys and liver here form a network corresponding to the location of the organs in the abdominal cavity, and the blood vessels of the heart and lungs to the organs in the chest. This arrangement allows the vessels to be compared at different topographic sites in the body.
This is a physiological finding. According to the drawing of the vasculature, we can also describe possible pathological conditions, the reconstruction of the liver parenchyma and its irregular vascular bed, or the proliferation of blood vessels in tumors. For the corrosive slides, transfer characteristics/functions emphasizing only the soft tissue structures (not the soft tissue itself) have been proposed, using the contrast between resin and air. The images show the rabbit and rat airways, and in the case of the rat, the heart is visible along with the vasculature as well. Figure A18. The corrosive specimen of the heart and blood vessels shows both the coronary arteries of the heart, the basic platform for incipient myocardial infarction, and the branching of the pulmonary artery, as well as the pulmonary artery diameters and their angles of spacing and segmentation. This is a physiological finding. The arrangement of the pulmonary arteries and their branching is disturbed in pulmonary embolism, vascular wall pathology, and also in lung tumors. Appendix B This appendix is an additional section that contains details and data supplemental to the main text from the point of view of the experimental settings of both setups, i.e., the Phywe and Bruker systems. (Table A1 excerpt: Parameter | Value | Physical Unit; object-source distance | 310 | mm.) The scan settings for scanning the biological samples of chickens outside the shell with the SkyScan 1275 are shown in the following tables. The table header consists of a number followed by the capital letter D; this is the age of the chicken, given in days. This may be preceded either by another number followed by the capital letter E, in which case it is the concentration of the ethanol solution in which the chicken was prepared, or by the letter F, which indicates that the chicken was prepared in a formaldehyde solution.
In four cases, the concentration of the ethanol solution is preceded by the capital letter A; this denotes the contrast agent Aurovist. The scanning parameters for scanning the biological samples of chickens in the shell with the SkyScan 1275 are shown in the following tables. The table header consists of a number followed by the capital letter D; this is the age of the chicken, given in days. The scan settings for scanning the mouse biological samples with the Aurovist contrast agent (letter A), the rabbit and rat biological samples with the Mercox resin contrast agent (letter M), and the mouse organ biological samples with the Omnipaque contrast agent (letter O) and the potassium iodide (KI) contrast agent dissolved in 20% ethanol solution (letters KI), all acquired with the SkyScan 1275 system, are shown in the following tables.
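To make the naming convention concrete, a short Python sketch that parses such table-header labels (the example strings are constructed from the rules above, not copied from Appendix B) could look as follows.

import re

# D = age in days; optional E = ethanol concentration or F = formaldehyde;
# optional leading A = Aurovist contrast agent.
pattern = re.compile(r"^(?P<aurovist>A)?(?:(?P<ethanol>\d+)E|(?P<formaldehyde>F))?(?P<days>\d+)D$")

for label in ["19D", "80E19D", "F12D", "A50E10D"]:
    print(label, pattern.match(label).groupdict())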
2022-05-15T15:11:48.730Z
2022-05-12T00:00:00.000
{ "year": 2022, "sha1": "54ebde586fc620969ace339aee947b0d9684ca81", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/12/10/4918/pdf?version=1652424617", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "6ede7fe7a5d2e8046a957999f55c4a2e05de6184", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
55000469
pes2o/s2orc
v3-fos-license
Effects of Growing Conditions of Marigold in Ilam District, Nepal Different growing conditions, with temperature and sunlight variation, could produce variation in the flower growth and quality of marigold. Plant growth and flowering characteristics are compared in three different conditions, viz. plastic house, shade house, and open field. Three varieties of marigold, viz. Marvel Yellow, Marvel Orange, and Marvel Garland, were grown with similar cultural practices and observed. Each variety, with 6 replications, was grown in the three different growing conditions. Plant height, leaf formation, length of leaves, number of days to flower, number of flowers per plant, and post-harvest analysis were recorded. Plant growth characteristics, followed by flowering behavior, were found to be significantly better in plants grown under the plastic house. During the post-harvest experiment, the Marvel Garland variety grown under the plastic house with the wet post-harvest treatment was found to be significantly better compared to the other conditions. Introduction Different growing conditions result in varied climatic attributes in flower cultivation. The growing conditions followed by commercial flower producers depend upon various factors, including temperature and daylight. Flower development of many ornamental annual flowering plants is harmonized according to the season by using changes in day and night length, which indicates that the flowering response of these plants is related to their photoperiodic response [1]. Temperature also has a significant role in a plant's development, which increases linearly from its base temperature to the optimum [2]. A change in the pattern of rainfall in the hilly regions of Nepal is also the main reason for changes in the flowering, fruiting, and harvesting time of major flowers. Different types of plant species which will affect agricultural crops are shifting from lower altitudes to higher altitudes [3]. Growing flowers in this cool climate is much better if the growing conditions can be managed to maintain temperature and moisture. The vegetative as well as generative (productive) phases of crops are positively affected when grown under a plastic tunnel compared to the open field [4]. In some flowers, an increase in day temperature has also resulted in a decrease in flower quality [5]. Moreover, with increased growing day temperature, a higher leaf area along with an earlier rate of flowering was observed in some earlier research [6]. A reduction in daylight duration has also been found to increase the flowering days of some species of ornamental plants [7]. Higher temperature and daylight, up to an optimum level, favor the process of photosynthesis. Only 8% to 10% of the energy in sunlight is converted to assimilate in the form of reduced sugars [8]. An increase in photosynthesis optimizes the process of plant growth and development [9].
Marigold is one of the most important flowers in the context of Nepal. It has religious as well as cultural importance. This flower is packed with fern leaves and mango leaves to welcome people in different ceremonies, as well as to offer the flowers to the gods in different offerings [10]. Ilam is one of the subtropical and hilly regions of Nepal, with a huge amount of rainfall during the months from June to September. Marigolds are flowers which can be grown in a wide variety of soils. Marigold requires a mild climate for luxuriant growth and profuse flowering. During severe winter, including frost, plants and flowers are killed and blackened. Generally, marigold flowers are grown almost throughout the year, but the main usage time of this flower is the Tihar festival [11]. The local varieties of Ilam could not produce the quality of flowers in market demand, so the Floriculture Association Nepal (FAN) wants to find out the production and quality criteria for other varieties in the context of Ilam. Therefore, to promote the growth of marigold cultivation and production in Ilam, this research was conducted to find out the growing condition of marigold varieties which are commercially grown in Nepal. Methodology The research was conducted in the Department of Horticulture and Floriculture Management, Mahendra Ratna Multiple Campus, Ilam. The experiment was conducted from July to November 2013. Seeds of the three varieties of marigold (viz. Marvel Yellow, Marvel Orange, and Marvel Garland) were sown in a solarized nursery bed (the bed was covered with plastic for 21 days for solarization). A nursery bed of size 3 × 1 m, raised 15 cm, was prepared by adding 10 kg FYM per square meter. The seeds were sown line by line with a row-to-row spacing of 10 cm. The nursery bed was then covered with a semi-open plastic tunnel. Watering was done each day until the plants germinated. After germination, watering was done thrice a week. After four weeks, the seedlings were transferred into three different growing conditions (viz. shade house, semi-open plastic house, and open field). The shade house was a place covered with thatch, into which partial sunlight (5-6 hours) entered on a sunny day. The semi-open plastic house was a house covered with white plastic on the top, with the sides of the plastic house partially open (15 cm on all sides at the bottom for the purpose of aeration). In each growing condition there were 6 replications of marigold plants of the three different varieties. Manuring of the soil was done with nitrogen:phosphorus:potash in the ratio of 2:1:1 and 15 kg of compost manure/m². The spacing of plants in the different growing conditions was 30 × 30 cm. Watering was done thrice a week, and weeding was also conducted once a week.
The data recording was done one week after transplanting the seedlings from the nursery to the different growing conditions. One extra row of plants of each variety was planted as a border, and also between the different varieties, to prevent varietal and human interactions. The increment in plant height was recorded at weekly intervals, until the formation of the flower bud, with a measuring scale. The previous week's value was subtracted from that of the recorded week, the height increment was thus obtained, and the data were filled into the tabulated format. For measurement of the longest leaf, 2 to 3 of the longest leaves were measured every week to identify the longest one, and the value was recorded in tabulated form; this was also conducted until the formation of the flower bud. For recording the number of leaves formed on the plant, the leaves which were fully open, excluding the germinating leaves, were counted every week. The previous week's value was subtracted from that of the recorded week, the number of leaves formed was thus obtained, and the data were filled into the tabulated format. These data were also recorded until the plant formed a flower bud. For the number of days to flower, the days were recorded up to the appearance of the fully open flower (the outward petals fully turned outward and downward). The plants were then kept in the field for four weeks. Only the fully open flowers were recorded as the number of flowers on the plant within this time frame of four weeks. The post-harvest experiment was conducted in the horticulture laboratory of the Department of Horticulture and Floriculture Management. Three fully open flowers from each plant were harvested for the purpose of the post-harvest experiment. From each growing condition, 18 flowers were taken, of which 9 flowers were randomly selected for the dry experiment. In this dry experiment, the flowers were kept on dry plastic at a spacing of 8 × 8 cm. For the wet experiment, the same procedure was followed, but the flowers were sprayed with distilled water every day in the evening (5-6 pm). The days of post-harvest life were recorded by observing the dehiscence and damage of the outermost petals of the flowers. Data Analysis The collected data were entered into an Excel program. The collected data were then loaded into the Minitab software. Minitab (version 16) was used to conduct the analysis of variance (GLM procedure). Comparisons of means were performed with Tukey's pairwise comparison test at p ≤ 0.05. SigmaPlot was used for graphical presentation.
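The analysis itself was run in Minitab; purely as an orientation for readers working in Python, a minimal sketch of an equivalent GLM-style ANOVA with Tukey's pairwise comparisons at p ≤ 0.05 might look as follows (on synthetic stand-in data, since the raw field measurements are not reproduced here).

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
conditions = np.repeat(["plastic_house", "shade_house", "open_field"], 18)
varieties = np.tile(np.repeat(["Yellow", "Orange", "Garland"], 6), 3)
# Synthetic flower counts standing in for the recorded data.
flowers = rng.poisson(lam=np.where(conditions == "plastic_house", 12, 8))

df = pd.DataFrame({"condition": conditions, "variety": varieties, "flowers": flowers})
model = ols("flowers ~ C(condition) + C(variety)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))                                # ANOVA (GLM) table
print(pairwise_tukeyhsd(df["flowers"], df["condition"], alpha=0.05))  # Tukey's pairwise test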
Results and Discussion The increment in plant height per week was significantly higher (p ≤ 0.05) in the plants growing inside the plastic house compared to the plants grown in the shade house and open field (Figure 1). The number of leaves formed per week was found to be significantly (p ≤ 0.05) higher in plants grown in the plastic house and shade house compared to the open field in the case of Marvel Yellow and Marvel Orange. But in the case of Marvel Garland, the number of leaves formed per week was significantly higher (p ≤ 0.05) in the plastic house compared to the other growing conditions (Figure 2). Higher temperature might have had a positive impact on the leaf formation process in the case of Marvel Garland. The length of the longest leaf was found to be significantly higher (p ≤ 0.05) in the shade house compared to the other two growing conditions (Figure 3). In this context, lower light might have had a positive impact on the elongation of the leaves of the plants. Plants in the plastic house and open field flowered significantly (p ≤ 0.05) earlier compared to the plants grown under the shade house in all three varieties under study (Figure 4). The number of flowers per plant was found to be significantly higher (p ≤ 0.05) in the plants growing inside the plastic house compared to the plants grown in the shade house and open field (Figure 5). Increased temperature, up to an optimum level, along with photosynthetic light inside the plastic house might be the reason for the higher increment of plant height and flower number, along with the earlier flowering time [12] [13]. The optimum temperature and light received by flowering plants in the plastic house might have resulted in the decrease in the number of leaves and leaf length. This suggests that it might be the result of dry matter partitioning into the generative phase of the plant rather than the vegetative phase [14]. Increasing temperature promoted plant height and flowering in the species in the present study. This is in accordance with previous results in studies with ornamental annual plants [15]. Some earlier research also revealed that at higher growing temperatures marigold plants flower earlier compared to lower-temperature growing conditions [16]. This result could also be explained by the fact that plant growth is comparatively slower under lower-temperature conditions than under higher growing temperatures due to lower carbon use efficiency, as shown in earlier research [17]. In the open field there might be only ample light, but in the case of the plastic house there is ample light along with higher temperature. So, with the higher temperature and light, photosynthetic activity increases, with the production of food materials in ample amounts, and this results in better growth of the plant. This also shows that during this time period of the year the optimum temperature could be obtained only under the plastic house, rather than the shade house or open field.
Post-harvest life was found to be significantly higher in the flowers grown under the plastic house compared to the shade house and open field. A comparison of varieties shows that Marvel Garland has significantly more days of post-harvest life than Marvel Yellow and Marvel Orange. In the case of the dry and wet experiments, the flower samples under wet storage conditions had a significantly longer post-harvest life compared to dry storage conditions (Table 1). Higher temperature and light might be favorable for photosynthesis, resulting in the formation of a higher amount of starch granules. This might be the reason for the longer post-harvest life inside the plastic house compared to the other growing conditions, which is in line with earlier research conducted on petunia plants showing a lower vase life in flowers grown at lower temperature [18]. Different genetic factors are the cause of the longer post-harvest life of Marvel Garland compared to the other two varieties. Maintenance of cell turgidity and higher water absorbance under wet conditions might be the reason for the longer post-harvest life in the wet experiment compared to the dry experiment, which is similar to earlier research [19] showing that wet storage of Narcissus flowers improved post-harvest life, including maintenance of membrane integrity and soluble proteins, followed by a reduction in alpha amino acids. Conclusion It could be concluded that plants growing under lower light and lower temperature conditions show reduced growth as well as flowering characteristics of marigold. The overall conclusion is that the Marvel Garland variety grown under the plastic house grows better, followed by wet post-harvest treatment, in the context of Ilam district, Nepal. Figure 1. Effect of growing conditions on the increment of plant height per week (in cm) in different varieties of marigold; text represents the significant difference values. Figure 2. Effect of growing conditions on the number of leaves formed per week in different varieties of marigold; text represents the significant difference values. Figure 3. Effect of growing conditions on the length of the longest leaf (in cm) in different varieties of marigold; text represents the significant difference values. Figure 4. Effect of growing conditions on the number of days to flower in different varieties of marigold; text represents the significant difference values. Figure 5. Effect of growing conditions on the number of flowers per plant in different varieties of marigold; text represents the significant difference values. Table 1. The results showing mean values of post-harvest life along with the significance level; text represents the significant difference values.
2018-12-16T00:40:55.753Z
2014-11-18T00:00:00.000
{ "year": 2014, "sha1": "db587d78d6d6ff2e3bbd9c8b117dc74c466c870d", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=51499", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "db587d78d6d6ff2e3bbd9c8b117dc74c466c870d", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
245783356
pes2o/s2orc
v3-fos-license
IP Indian Journal of Neurosciences A review: Management of motor neuron diseases Motor neuron diseases are a group of chronic sporadic and hereditary neurological disorders characterized by progressive degeneration of motor neurons. These might affect the upper motor neurons, lower motor neurons, or both. The prognosis of motor neuron disease depends upon the age at onset and the area of the central nervous system affected. Amyotrophic lateral sclerosis (ALS) has been documented to be fatal within three years of onset. This activity focuses on amyotrophic lateral sclerosis as the prototype of MND, which affects both the upper and the lower motor neurons, and discusses the role of the inter-professional team in the differential diagnosis, evaluation, treatment, and prognostication. It also discusses various other phenotypes of MND with an emphasis on their distinguishing features in requisite detail. This is an Open Access (OA) journal, and articles are distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License, which allows others to remix, tweak, and build upon the work non-commercially, as long as appropriate credit is given and the new creations are licensed under the identical terms. Introduction Motor neuron diseases are a group of diseases that lead to progressive destruction of neurons and gradual deterioration of striated muscle function, resulting in high mortality and morbidity. The average incidence of MNDs is 1-3 cases per 100,000. 1 The prevalence of MND ranges from one to nine cases per 100,000 worldwide. Of this group, amyotrophic lateral sclerosis is by far the most common, comprising approximately 80%-90% of MND cases. 2 However, studies have shown at least a 10% error rate in the diagnosis of ALS. ALS and MND in general present with nonspecific symptoms like limb weakness, fasciculations, and fatigue, which might comprise both classic upper/bulbar and lower motor neuron clinical symptoms. Additionally, there are no reliable markers to differentiate ALS from other types of MND, leading to diagnostic dilemmas. 3,4 In the United Kingdom, the term "motor neuron disease" (MND) is more commonly used. MND is a disease that strikes people in their middle to late years of life, with an average onset age of 58 years. 5 Despite being the third most prevalent neurodegenerative illness after Alzheimer's and Parkinson's, MND is rather rare, with an apparent uniform incidence of about 2/100,000 in areas where epidemiological data are available. 6 Despite its rarity, the disease has sparked a lot of interest because of its terrible course, which has put it at the centre of the ethical discussion over end-of-life decision-making and physician-assisted suicide. 7 The incidence of MND is claimed to be growing; however, this is most likely due to better diagnosis, improved knowledge of the disease, and an ageing population. 8 The incidence rises after the age of 40, peaks in the late 60s and early 70s, and then rapidly drops. The majority of patients die within two to five years of being diagnosed. According to recent research, the male to female ratio in MND is approaching one. 5-10% of the cases are familial, with the remainder being sporadic. 9 Etiology According to the clinical spectrum, MND involves degeneration of upper motor neurons in the motor cortex, of lower motor neurons in the brainstem and spinal cord, or of both.
MND has several variants, but the most prevalent is amyotrophic lateral sclerosis (ALS), which causes upper and lower motor neuron signs and symptoms. In one study, 94% of patients had the ALS variant of MND. 10,11 The appearance of upper and lower motor neuron signs usually begins focally and progresses to involve contiguous regions of the body with decreasing severity. Approximately 66% of cases begin in the limbs and 33% in the bulbar group of muscles; only a small percentage begin with respiratory muscle involvement. 12 There are two other main variants of MND: primary lateral sclerosis (PLS) and progressive muscular atrophy (PMA). One study showed that PLS accounts for around 2% of MND cases, and PMA for 4% of cases. Many of these patients tend to progress to the ALS variant over time. 13 Progressive bulbar palsy is a term that is often used to describe the bulbar onset of MND. 14 Compared to patients with an ALS variant, patients with PLS tend to present 5 to 10 years earlier, have less limb wasting and fewer bulbar symptoms throughout the course of the illness, and survive six to seven years longer. 15 PMA is associated with degeneration of the lower motor neurons of the spinal cord in the absence of upper motor neuron involvement. Cognitive impairment is commonly identified in MND. Subclinical cognitive deficits and frontal lobe dysfunction may be present in MND. 16 Various genetic mutations are associated with MND, such as in TAR DNA-binding protein 43 (TDP-43), which is also implicated in frontotemporal dementia and parkinsonism. 17 Epidemiology The incidence of motor neuron disease has been shown to approach 2 to 3 per 100,000 population, while a lower frequency (less than 1 per 100,000) of ALS has been demonstrated in the South and East Asian community. 18 Ancestral origin has been shown to have a significant impact on disease risk in ALS, with higher survival in racially heterogeneous or admixed populations (as compared to the White or Black communities). A shorter survival has also been reported in European ALS patients (2 years) as compared to the Asian population (4 years). 19 Approximately 10 to 15 percent of individuals with ALS have familial disease. The estimated lifetime risk of sporadic ALS is one in 400. While bulbar-onset ALS has been shown to be more common in females, spinal-onset illness has been shown to be more common in males. Progressive muscular atrophy (PMA) represents 2.5 to 11 percent of cases of MND. With an incidence of 0.02 per 100,000 population, it is a much rarer form of the disease. It is predominantly seen in males, with a male to female ratio ranging from 3 to 7.5 to 1. The median age of onset is 68 years, which is older than that of ALS patients. 20 Differential Diagnosis The disease that most commonly mimics amyotrophic lateral sclerosis is degenerative spondylotic myeloradiculopathy in the cervical and/or lumbosacral spine, which may present with radicular pain. This may be characterized by progressive symptoms that plateau later in the course of the disease. Although imaging has been advised to differentiate between the two entities, co-existent degenerative illness may be present in those with ALS. 21
Complications Most patients with ALS die of respiratory failure within three years of the onset of the disease. Progressive weakness and wasting of the limb and respiratory muscles have been attributed to be the underlying precipitants of respiratory failure. Severe dysphagia may lead to weight loss, choking, and aspiration. Prolonged, effortful mealtimes and coughing on attempting to swallow are other factors that have the ability to impact the patient's quality of life. 22 Progressive deterioration in the activities of daily living, inability to ambulate, and issues related to prolonged immobilization, such as superficial skin infections, decubitus ulcers, and deep venous thrombosis, are also seen in these patients. 23 Enhancing Healthcare Team Outcomes A multidisciplinary team consisting of neurologists, respiratory physicians, physical medicine and rehabilitation specialists (respiratory physiotherapists), gastroenterologists, and dieticians may have an active role in the management of the patient. Palliative medicine input may be required for the management of pain, difficult-to-control symptoms (such as resistant breathlessness at the end of life), and psychosocial issues. 24 The inclusion in the multidisciplinary team of social care practitioners and staff who have been trained to manage patients in the setting of their homes has also been advised. Identification of the unique needs of the family and provision of coordinated care in the home setting may become necessary as the patient approaches the terminal stage of decline. Good end-of-life care may be provided by specialist palliative teams. 25 Acknowledgment We owe our gratitude to all those researchers who have made this review possible. This review did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Conflict of Interest The author declares no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Source of Funding None.
2022-01-07T16:09:01.227Z
2022-01-15T00:00:00.000
{ "year": 2022, "sha1": "0b1628e3783989139c5ae1925a2e99ae2e96d1fa", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.18231/j.ijn.2021.053", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "126240df7c4b3097c6e00e52358796cd15863160", "s2fieldsofstudy": [ "Psychology", "Biology", "Medicine" ], "extfieldsofstudy": [] }
235709498
pes2o/s2orc
v3-fos-license
Personality Determinants of Success in Men’s Sports in the Light of the Big Five The aim of the study is to describe personality profiles and determinants of success in sports in relation to the Big Five Personality Model. In order to achieve this aim, the personality profiles of players from various sports disciplines were set against the personality profile of champions—players who are considerably successful in sports competitions. Subsequently, an attempt was made to determine which personality traits significantly determine belonging to the group of champions—and therefore determine success in sport. The participants were men aged between 20 and 29 from the Polish population of sportsmen. A total of 1260 athletes were tested, out of whom 118 were qualified to the champions sample—those athletes had significant sports achievements. The research used the NEO-FFI Personality Questionnaire. Basic descriptive statistics, a series of Student’s t-tests for independent samples using the bootstrapping method, as well as a logistic regression model were performed. In relation to other athletes, champions were characterized by a lower level of neuroticism and a higher level of extraversion, openness to experience, agreeableness, and conscientiousness. An important personality determinant was neuroticism: the lower the level of neuroticism, the greater the probability of an athlete being classified as a champion. There are differences between champions and other athletes in all personality dimensions in terms of the Big Five. Based on the results of the research, it can be stated that personality differences should be seen as a consequence of athletes’ success, rather than as a reason for athletes’ success, given their age between 20 and 29. Introduction A problem that has long been of interest to sports psychologists, coaches, and athletes alike concerns the determination of the personality traits of a champion [1,2]. This particular task would involve the identification of the athletes' personality traits which are essential to their success in sport [3,4]. For instance, Garland and Barry [5] carried out an experiment on American college athletes, varying in terms of physical fitness and sport level, to test the relationship between personality, as measured by the 16-Factor Personality Questionnaire, and their sports performance. It was shown that personality traits such as belief rigidity, extraversion, group dependence, and emotional stability accounted for 29% of the variation in physical fitness. Davis [6], in turn, tried to predict the success of professional hockey players by measuring their personality traits, but found no correlation. He believed that success was influenced by more important psychophysical factors. In another study, Lerner and Locke [7] measured the willingness of American college athletes to compete in relation to their achievement motivation. To this end, they used the Sports Orientation Questionnaire, and measured their endurance by performing squats. Similarly, as in Garland and Barry [5], a relationship was found between personality and success. Psychological factors such as goal setting and self-efficacy were shown to mediate the influence of personality on athletic performance. In a cutting-edge experiment by Piedmont, Hill, and Blanco [8], four different Division 1 NCAA soccer teams were tested with the Big Five model.
Coach ratings for several dimensions of player performance and actual game statistics were also collected. Regression analysis indicated that the personality dimensions of neuroticism and conscientiousness explained about 23% of the variance in the coaches' ratings, while conscientiousness was the only predictor of actual game statistics, explaining about 8% of the variance. A slightly different study was carried out by McKelvie, Lemieux, and Stout [9] on groups of university athletes (divided into contact and non-contact disciplines) and non-athletes, with the use of the Eysenck Personality Inventory. Extraversion did not differ significantly between athletes and non-athletes, nor between contact and non-contact sportsmen, but was higher for athletes in general compared to American academic norms. In the case of neuroticism, successful athletes scored significantly lower than unsuccessful athletes. As neither extraversion nor neuroticism scores changed over the four years of continuous research, one might conclude that people with higher extraversion and lower neuroticism are drawn to academic sports. In another study, Anghel, Banica, and Ionescu [10] found that the personality traits of elite athletes were dependent on, and distinctive of, the sports discipline they trained in. The athletes were characterized by low neuroticism and high extraversion and conscientiousness, but the intensity of individual personality traits depended on the trained sport discipline. This indicates the existence of a general personality profile of athletes, in which the intensity of personality traits is determined by the particular sports discipline. Mirzaei, Nikbakhsh, and Sharififar [11] made further attempts to investigate the relationship between personality traits and sports performance in the Big Five model. The research sample included more than 200 non-elite soccer players and futsal players. It was shown that among the personality traits, only conscientiousness had a significant correlation with sports performance; conscientiousness was the only predictor of sports performance. Then, Kim, Gardant, Bosselut, and Eys [12] conducted an experiment on a sample of team sports players and showed that low neuroticism and high extraversion and conscientiousness all influence informal role-taking in a sports team, depending on the type of team. The same year, Steca, Baretta, Greco, D'Addario, and Monzani [13] examined more than 800 athletes and non-athletes with the use of the Big Five model. It was shown that the most successful athletes in their discipline had higher scores than the non-athletes in every dimension of the Big Five except neuroticism, in which they scored lower. In contrast, less successful athletes outperformed the non-athletes only in extraversion and agreeableness. Athletes who were more successful in their competitive sports (champions) showed greater emotional stability (lower neuroticism), extraversion, openness to experience, agreeableness, and conscientiousness than less effective athletes. Moreover, individual athletes turned out to be more energetic and open-minded than team athletes. In another study, Piepiora and Witkowski [14] tried to generate psychological personality profiles of athletes performing individual and team disciplines, depending on the type of pressure exerted on the opponent in the starting situation.
Differences were found on the scales of neuroticism and conscientiousness between sports disciplines in which pressure is exerted indirectly on the opponent and disciplines in which the pressure is exerted directly on the opponent. The study groups, with the exception of volleyball players and football players, differed from each other on the neuroticism scale, while the volleyball players showed less agreeableness and conscientiousness than other athletes. Taking the above research and reflections as the starting point for the formulation of the research problem, it can be assumed that, according to the Big Five model, the personality profile of sports champions, relative to the population of less successful athletes, is characterized by lower neuroticism and higher extraversion, openness to experience, agreeableness, and conscientiousness [15,16]. However, there is ambiguity in relation to the type of sport, competition classes, or cultural affiliations. Personality traits correspond to the specificity of the trained sports discipline and its goals and challenges. The personality profiles of the athletes are at similar levels, but they are not identical. Among athletes, it is extremely difficult to distinguish and define the most favorable type of personality, as it is largely influenced by the trained sports discipline, which determines the personal conditions of athletes [17][18][19][20]. Therefore, it was deemed necessary to verify which personality traits, and to what extent, define sports champions and determine success in sports. The research problem was an attempt at defining the personality profile of sports champions and the personality determinants of success in sport in the light of the Big Five factor model. In connection with the above, the personality profiles of players from various sports disciplines in the areas of combat sports [21], individual sports [22], and team sports [23] were compared with the personality profile of champions [16], i.e., players who are very successful in sports rivalry. Subsequently, attempts were made to determine which personality traits significantly determine belonging to the group of champions—and thus determine success in sport. For this purpose, the Big Five model was used, as it adheres to the definition of personality traits understood as behavioral properties showing interindividual variability and intra-individual temporal and situational stability. It adopts a number of methodological assumptions that define the status of personality traits as "basic" dimensions of personality. The Big Five model defines the most general characteristics of behavior that are actual, invariant, universal, and biologically conditioned [24]. Participants The research was carried out between 1 October 2015 and 30 September 2019. The subjects of the study were men, selected intentionally and non-randomly from the Polish population of sportsmen. The criteria for the non-random, purposeful selection of respondents were: free will to participate in the study; senior age (between 20 and 29 years of age); at least the second or higher sports class; many years of sports experience (three years or more); a current competition license; and documented sports achievements at various levels of rivalry (national, continental, and world).
A total of 1260 competitors were tested, 30 each from the following sports disciplines: alpine skiing, American football, archery, athletics-long runs, athletics-short runs, ballroom dancing, basketball, beach volleyball, biathlon, bodybuilding, Brazilian jiu-jitsu, break dance, canoeing, cycling, fitness, floorball, football, futsal, handball, horse riding, indoor volleyball, judo, ju-jitsu, kickboxing, kyokushin karate, mixed martial arts, mountaineering, Olympic karate, orienteering, Oyama karate, rugby, shidokan karate, shotokan karate, snowboarding, sport climbing, sport shooting, swimming, taekwondo, tennis, tobogganing, ultimate frisbee, and wrestling. Such a distribution of disciplines depended on the respondents' willingness to participate in the study. From the above population, 118 athletes were qualified to the sample of champions. Players with international sports successes were defined as champions. Therefore, the criterion for qualifying Polish players to the sample of champions was their 1st, 2nd, or 3rd place in international sports competitions. This includes medalists of the World Championship, the European Championship, the World Cup, the European Cup, the World Games 2017, and other ranked international tournaments in their sports disciplines. The following champions with significant sports achievements were identified: from alpine skiing (3), archery (5), ballroom dance (2), beach volleyball (2), biathlon (4), bodybuilding (4), Brazilian jiu jitsu (4), break dance (2), canoeing (2), cycling (2), equestrian (1), fitness (4), floorball (2), futsal (2), ju jitsu (5), judo (3), kickboxing (4), kyokushin karate (6), mixed martial arts (4), mountaineering (1), Olympic karate (1), orienteering (3), Oyama karate (4), shidokan karate (5), short (2) and long runners (8), shotokan karate (6), snowboard (3), sports climbing (3), swimming (3), taekwondo (5), target shooting (1), toboggan (3), volleyball (7), and wrestling (2). The other 1142 athletes were sportsmen with only national (Polish) sports successes. Only the best results of the respondents as of the day of the study were included, and the achievements of already tested players were not subsequently updated. Method The NEO-FFI Personality Questionnaire was selected to examine the athletes' personality in terms of the Big Five factor model [25]. The selection was justified by: the grounding of the NEO-FFI in the theoretical model and its relatively strong methodological formalization compared to other approaches developed within the five-factor personality model; good psychometric characteristics; rich factual documentation of the measurement accuracy for the factors of the original version, which allows one to assume that the inventory may be useful in scientific and practical research; and an administration time acceptable to the athletes. The items of the NEO-FFI Personality Questionnaire form five scales measuring the factors of the Big Five model. They are marked with abbreviations of the first letters of the factors: neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness. For the purposes of this study, the acronym NEOAC was adopted, i.e., the above-mentioned sequence of factors. The NEO-FFI Personality Questionnaire is internally consistent.
Its validity was demonstrated on the basis of research on the relationship between the results of the questionnaire and the assessments of the subjects made by observers, the heritability of the measured traits, and their correlation with other dimensions of personality and temperament. The factor validity was also verified. The results allow for a full description of the respondents' personality in the five-factor approach of the Big Five and for forecasting their adaptation to the professional environment [24,25]. Moreover, the NEO-FFI assumes a maximum examination time of one hour. Such a duration was acceptable to the athletes, who participated of their own free will. Data Analysis In order to verify the research problem, statistical analyses were performed using IBM SPSS Statistics, version 25 (IBM Polska, Warsaw, Poland). Beforehand, basic descriptive statistics were calculated for each sports discipline included in the study. It was decided not to calculate normal distribution tests for each personality trait in each discipline due to the relatively small sample sizes and the multiple comparisons. Both of these factors could render the conclusions drawn from the results of such tests incorrect. For this reason, the so-called rule of thumb was used for the analysis of skewness values. If the skewness value for a given variable ranged from −2 to 2, then it could be concluded that the distribution of that variable is not too asymmetric, which allows for the use of parametric tests. In the case of comparisons of differently classified data, the skewness values for the compared groups were checked before the analysis. Each time, they fell within the accepted range. In order to solve the research problem, Student's t-tests for independent samples and a logistic regression model were performed. The model presents an exploratory analysis of how individual personality traits predict belonging to the champion group. It was necessary because t-tests only verify differences in a single dimension. Procedure All respondents consented to the processing of data related to their participation in the research by the researcher. The project received a positive opinion (number 20/2019) of the Senate Committee on Ethics of Scientific Research at the University School of Physical Education in Wrocław. Results The sample of champions consisted of 118 men (9% of the respondents), and the sample of other athletes, 1142 men (91% of the respondents). In order to verify the research problem, a number of Student's t-tests were carried out for independent samples using the bootstrapping method, set at 10,000 samples and a 95% confidence interval. Five Student's t-tests were performed, and the statistical significance level for the analyses was set at α = 0.01. The test results showed statistically significant differences in all personality traits from the Big Five model. In the case of neuroticism, a very strong effect was observed. A moderately strong effect was observed for extraversion and conscientiousness, and weak effects were observed for openness to experience and agreeableness. Sports champions were characterized by a lower level of neuroticism and a higher level of extraversion, openness to experience, agreeableness, and conscientiousness than the group of other athletes. The exact values of the performed tests are presented in Table 1. The samples are presented graphically in Figure 1.
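The bootstrapped independent-samples t-test described above is straightforward to reproduce in outline. The sketch below is a minimal illustration with made-up score vectors standing in for the study's raw NEO-FFI scores (which are not public), combining scipy's t-test with a simple percentile bootstrap of the mean difference:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical neuroticism scores: 118 champions vs. 1142 other athletes.
champions = rng.normal(15, 6, size=118)
others = rng.normal(26, 7, size=1142)

# Classical independent-samples t-test (the study used alpha = 0.01).
t_stat, p_value = stats.ttest_ind(champions, others)

# Percentile bootstrap of the mean difference: 10,000 resamples, 95% CI.
boot_diffs = [
    rng.choice(champions, champions.size).mean() - rng.choice(others, others.size).mean()
    for _ in range(10_000)
]
ci_low, ci_high = np.percentile(boot_diffs, [2.5, 97.5])
print(f"t = {t_stat:.2f}, p = {p_value:.3g}, 95% bootstrap CI: [{ci_low:.2f}, {ci_high:.2f}]")
```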
Finally, in order to verify the analyzed results, a logistic regression model was prepared in which, based on personality traits, an attempt was made to classify the respondents into the group of sports champions or the group of other athletes. In the first step, all personality traits were introduced as predictors of the athletes' level. The null model was characterized by 90.6% correct classifications, which results from the ratio of the number of other athletes to all research subjects. The classification threshold, based on the ROC analysis, was set to 0.7. The model with five predictors was statistically significant, χ²(5) = 425.68, p < 0.001, and Nagelkerke's pseudo-R² was 0.62, which means that the proposed model explains about 62% of the variance. The Hosmer-Lemeshow goodness-of-fit test was statistically insignificant, χ²(8) = 7.49, p = 0.485. The entire model correctly classified 94.3% of the observations. The analysis of the significance of the predictors in the discussed model showed that only neuroticism significantly predicted belonging to the champions group or to the other athletes group. For this reason, another model was created in which neuroticism was the only predictor. The second model was statistically significant, χ²(1) = 423.02, p < 0.001, and Nagelkerke's pseudo-R² was 0.62. The goodness-of-fit test was again statistically insignificant, χ²(7) = 13.44, p = 0.062. The overall percentage of correct classifications was also 94.3%. A pseudo-R² of 0.62 for a single personality variable indicates that the other athletes differ greatly from the champions in their level of neuroticism. In the t-test analysis, the effect size was d = 1.81, which is a very high result; it is rarely seen, but evidently the two groups are quite different in this respect. The other personality measures did not contribute to the percentage of explained variance. Therefore, the second model, with the neuroticism measure as the only predictor, turned out to be as good as the model with five predictors. This means that neuroticism was the key personality trait that predicted the level of achievement among the tested athletes.
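The two-step model comparison above can be sketched in a few lines. This is an illustrative reconstruction with hypothetical scores, not the study's data, showing the single-predictor logistic model and its classification rate at a 0.7 probability threshold:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Hypothetical data: 118 champions (label 1) and 1142 others (label 0),
# separated mainly along the neuroticism axis, as in the reported results.
neuroticism = np.concatenate([rng.normal(15, 6, 118), rng.normal(26, 7, 1142)])
labels = np.concatenate([np.ones(118), np.zeros(1142)])

# Single-predictor model: champion membership from neuroticism alone.
model = LogisticRegression().fit(neuroticism.reshape(-1, 1), labels)

# The study set its classification threshold at 0.7 based on ROC analysis;
# here we simply threshold P(champion) at 0.7 for illustration.
p_champion = model.predict_proba(neuroticism.reshape(-1, 1))[:, 1]
predicted = (p_champion >= 0.7).astype(int)
print(f"correct classifications: {np.mean(predicted == labels):.1%}")
print(f"coefficient for neuroticism: {model.coef_[0][0]:.3f}")  # negative slope expected
```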
A relationship was established in the developed model: the lower the level of neuroticism, the greater the probability of being classified as a sports champion. The relationship is presented in Table 2. Discussion The analyses showed statistically significant differences in all personality dimensions in the five-factor approach of the Big Five; namely, sports champions were characterized by a lower level of neuroticism and a higher level of extraversion, openness to experience, agreeableness, and conscientiousness than other athletes. This personality profile of sports champions confirmed earlier research reports [16,[21][22][23]26], and at the same time contradicted the research of Mirzaei and colleagues [11], which suggested that only high conscientiousness correlated with sports results. Whether the personality determinants of success in sport were formed solely in the course of a many-year sports career, or were already present at the beginning of sports practice, remains an open question. Therefore, the opinions of respected scientists such as Allen [17][18][19][20] or Vealey [27] cannot be ruled out. Factors disrupting or supporting the development of a young athlete are created by his immediate environment. This, in turn, is expressed in self-esteem, which has a significant impact on the shaping of the personality and competences of talented players. The obtained results were further analyzed with the logistic regression model. On the basis of the five-factor personality model, attempts were made to classify the researched population into the group of sports champions or the group of other athletes. The research results showed that neuroticism was an important personality trait, allowing athletes to be classified according to their level of sports achievement; the lower the level of neuroticism, the greater the probability of being classified as a sports champion. The numerous relationships found in the research between personality dimensions and athletes across various samples allow us to conclude that the results concerning neuroticism as a personality determinant of success in sport are highly probable and may be universal. The only predictor of sports results, and thus a personality determinant of success in sports in terms of the Big Five, was neuroticism. The dimension of neuroticism reflects emotionality in terms of experiencing negative emotions, i.e., emotional adaptation versus emotional imbalance. The sports champions were distinguished by very low neuroticism; thus it can be assumed that they were emotionally stable, calm, relaxed, and able to deal with stress without experiencing anxiety, tension, and irritation, whereas the other athletes had a higher level of neuroticism compared to the champions. This means that their negative emotions influenced their adaptation to the environment. Neurotic people are prone to irrational ideas and are relatively less able to control their drives and cope with stress. This is due to the general excitability of the vegetative system: the reactions are too strong in relation to the strength of the acting stimuli. Emotionally unstable competitors experience very intense pre-start states and can collapse in the face of important competitions. It can be expected that in difficult situations, their efficiency of perception, speed and accuracy of sensorimotor responses, efficiency of thinking processes, and the quality and effectiveness of action will deteriorate significantly.
The dimension of neuroticism includes six formally distinguished components: anxiety, aggressive hostility, depression, impulsiveness, hypersensitivity, and excessive self-criticism. Therefore, champions may be distinguished from other sportsmen by a low level of anxiety, which has a positive effect on motivation [6]; low aggressive hostility, which triggers the state of start readiness, translating into the control of arousal before and during the competition, and bravery understood as fighting until the very end [28]; low depressiveness, which indicates an optimistic mood and a positive attitude [29]; low impulsiveness, which crystallizes emotion control [30]; low hypersensitivity, which gives good concentration of attention and the need for strong sensory impressions, as well as the ability to cope with failure and experience success [31]; and finally, low self-criticism, which determines self-confidence and self-efficacy [32]. Taking the above into consideration, the greatest cognitive value of this paper is the demonstration that neuroticism is an important personality condition for success in sport. Therefore, one should adopt a broad perspective in analyzing neuroticism components as mental determinants of sports success. As there are no data regarding whether social factors influence the personality of the surveyed sportsmen, one should also pay attention to the role of the social environment of sportsmen. This knowledge may be useful in the detection and proper development of sports talents, the modernization of sports training, and the better adaptation of athletes to the environment after the end of their career. It is also important to notice that sports activities shape the personality of players [1][2][3][4][33]. Therefore, the differences in personality shown in this study can be seen as a consequence of the athletes' success, rather than as a reason for athletes' success, given their age between 20 and 29. Sports activity can be seen as a self-confidence generator. Under the influence of training, adepts start to improve in a given discipline, and this moderates their personalities. Athletes become convinced that they are the authors of their own fate and that they create their own lives. This is why the successes achieved by players build the strong personalities of athletes. The obtained research results also provide a new argument about the health aspects of sports training (in the context of health through rational, long-term sports training) in personality development. There are few empirical studies on the relationship between motor, technical, and tactical training and the results of personality tests. Hence, the possibilities of a broader interpretation of the research results from an interdisciplinary perspective are limited. At this point, the strengths and limitations of the conducted cognitive experiment should also be noted. The research sample was homogeneous in terms of ethnicity, gender, and the age range of 20-29 years. Athletes of other nationalities, women, and other age groups were not included. The research was conducted on a large group of respondents from sports disciplines popular in Poland. However, it was not possible to examine athletes from all sports disciplines trained in Poland. The group of champions included Polish athletes with international sports successes. Therefore, the obtained research results can only be applied to a specific population of athletes.
Thus, the following conclusion can be drawn: a low level of neuroticism is a personality determinant of success in sport among Polish male athletes between the ages of 20 and 29. However, one must bear in mind that the personality determinants of success in sport in various disciplines are distinct. This is due to the specificity of sports competition in martial arts [21], individual [22], and team [23] sports, as well as the different psychological requirements they place on competitors [1,33]. However, the general personality profile of athletes in terms of the Big Five is low neuroticism, high extraversion and conscientiousness, and average openness to experience and agreeableness [4,17]. In comparison with the reports by Allen [20], it was noticed that low neuroticism also has a significant role in the personality differentiation of champions from the rest of the athletes. It has been shown that a low level of neuroticism may be a personality determinant of sport success among Polish athletes between the ages of 20 and 29, and that its intensity depends on the sports discipline. It is therefore suggested that coaches analyze the personality conditions of players for sports competition, as these have a significant impact on sports results. Hence, in sports theory, one should adopt broad perspectives of personality component analyses as mental determinants of sports success. Conclusions There are differences between champions and other athletes in all personality dimensions in terms of the Big Five. Sports champions were characterized by a lower level of neuroticism and a higher level of extraversion, openness to experience, agreeableness, and conscientiousness in relation to other athletes. Analysis of the obtained data with the logistic regression model showed that only neuroticism was an important personality determinant predicting the level of achievement among the studied athletes: the lower the level of neuroticism, the greater the probability of classifying the athlete into the champion group. Champions are presumably balanced and usually resistant to stress. They are not very sensitive to various stressors. They have a better attention span, and they do not panic in difficult situations. Their well-being is stable, and their emotional reactions are adequate to the stimuli. Therefore, the sports development of athletes without knowledge of the specific features and personality structure of representatives of various sports may be an artificial and ineffective activity. It remains an open question whether the personalities of the champions were shaped only in the course of their many years of sports career, or whether they already distinguished champions at the beginning of their sports practice. Therefore, based on the results of the research, it can be argued that personality differences should be seen as a consequence of the athletes' success, rather than as a reason for the athletes' success, given their age between 20 and 29. Data Availability Statement: The authors confirm that the data supporting the findings of this study are available within the article.
2021-07-03T06:16:55.801Z
2021-06-01T00:00:00.000
{ "year": 2021, "sha1": "210facb79f9e7f9ad0e27ed034fb5ad5f010df48", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/18/12/6297/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1a1352d98046dd32f05677961fc831ed0e1817eb", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Medicine" ] }
214082388
pes2o/s2orc
v3-fos-license
Problem based learning in mechanical engineering to train student’s creativity Creative thinking is a highly significant skill that will become increasingly necessary because of the growth in complex problems caused by the rapid development of technology and social movements worldwide. Therefore, educators should train creativity in their students in order to enable their success as future citizens. Student creativity is very important in teaching and learning because every student is unique and has their own creativity. PBL can be implemented in various disciplines, such as in Mechanical Engineering programs. Using a problem-based learning approach is an alternative and effective way to introduce, discuss, and learn about a given (creativity) topic or concept. This learning model develops high-level thinking skills in problem-oriented situations and integrates new knowledge to train students' creativity, providing authentic problems and meaningfully emphasizing real-time data from websites. This research is a quasi-experimental study with a pre-test/post-test experimental design. The development procedures in this study include (1) research and information collection; (2) planning; (3) development of the initial product form; (4) field testing and product revision; (5) revision of the final product; and (6) dissemination and implementation. Data were collected using test, observation, documentation, and interview techniques. The quantitative data were analyzed with a one-sample t-test. Real-time data is information delivered immediately after observation or data collection; various real-time data can be collected through website addresses on the internet. The module product in this study is a learning model device that integrates real-time data to train students' creativity. This study shows that the implementation of Problem Based Learning can enhance the creativity of Mechanical Engineering students at Universitas Nusantara PGRI Kediri. Introduction Mechanical engineering lessons generally require physics as their scientific basis. Students argue that physics lessons are difficult because they encounter many mathematical equations, so physics is identified with numbers and formulas. As a result, the expected learning objectives become difficult to achieve, both in terms of motivation and learning achievement. In reality today, lecturers dominate learning activities. This can be seen from the way lecturers teach, namely by explaining the material in front of the class without involving students to participate actively in the learning process. This causes even intelligent students to experience difficulties in expressing their ideas. To overcome this, an effective way to increase learning independence and the development of ideas is through the learning process. Every educator wants their students to be creative and to have critical thinking skills. So, during learning activities, the lecturer must present them with complex problems like the problems in everyday life that they usually know [1]. Problem-based learning (PBL) is an ideal learning approach that can be used by teachers to help students solve non-routine problems [2]. In the PBL process, students find solutions to complex problems by discussing with their classmates [3]. PBL involves active student learning as opposed to traditional passive learning methods.
Many studies show that students enjoy discussing with their friends in a group, which is very different from conventional learning, where they only listen to the teacher's explanations [4]. Engineering Physics is one of the topics that aims to develop scientific skills, attitudes, and values. For this reason, students are required to be competent in using language to understand, develop, and coordinate ideas from information in order to interact with others. The active role of students in teaching activities can be observed through the following characteristics [5]: (1) Each student shows his role as 'main character' in learning activities. (2) A large number of students ask critical questions. (3) A large number of students answer questions raised by their opponents in good and systematic language. (4) A number of students respond to the questions and answers of their opponents. (5) There is an attitude that shows respect for the opinions of others. Self-Regulated Learning is an effort to regulate oneself in learning, encompassing metacognition, motivation, and active behaviour. Students who have Self-Regulated Learning will actively engage in learning activities [6]. So, if students feel that a lesson or discussion is not understood, they will be more active in learning it, for example by planning what will be studied again, monitoring learning outcomes, evaluating the learning outcomes achieved, repeating, organizing their learning, trying to achieve optimal performance, and seeking help from friends, lecturers, or people who are considered more knowledgeable. Self-Regulated Learning is a form of students' effort in motivating themselves to achieve optimal results in learning. So, it can be said that the better the Self-Regulated Learning, the better the achievement that can be reached. Conversely, if students have low Self-Regulated Learning, they are less able to plan, monitor, and evaluate learning well, and less able to manage their potential and resources, so that their learning results are not optimal with respect to their own potential. The novelty of this research is that the implementation of PBL is augmented with real-time data and websites. PBL using real-time data and websites is a realization of the constructivist view of teaching. Vygotsky, in Arends, explained that constructivist theory is the basic foundation for PBL, emphasizing that students conduct investigations in their environment and build meaningful personal knowledge [5]. PBL using real-time data, if applied consistently in the classroom, can develop problem-solving skills, creativity, and self-confidence [7]. PBL can also help students gain independence and professional skills in dealing with complex, interdisciplinary, and real problem situations, and is able to foster creative ideas to find solutions [8]. The implementation of real-time data and websites through PBL extends the reach of students to communities in various parts of the world. Methods This research is a quasi-experimental study with a pre-test/post-test experimental design. This research took one class in Mechanical Engineering at Universitas Nusantara PGRI Kediri taking the Engineering Physics course. A sample of 26 students participated in this research.
The activities in this study can be specified in the following stages: (1) reviewing the literature to study learning theories related to student learning independence; (2) formulating learning methods to train learning independence based on students; (3) validating the learning methods and devices through Focus Group Discussion (FGD) activities; and (4) implementing the learning method to train student learning independence. The data were analyzed with a one-sample t-test. Results and Discussion The learning outcomes of applying PBL using real-time data from websites in the Engineering Physics course show that student learning independence increased with the implementation of PBL using real-time data from websites in Engineering Physics learning. In this study, students are given problems about real-time data that they can study through the internet. The problems presented in PBL are related to real problems or daily-life problems that are often encountered by students. The problems presented at each meeting are adjusted to the topics studied at that meeting. In each lesson, students are required to be active in solving the problems presented to them; this is in accordance with PBL's character as student-centered learning. In each meeting conducted through PBL, lecturers do not dominate learning; instead, students must be active in discovering for themselves the new knowledge they will learn. The role of the lecturer in PBL is as a facilitator who provides guidance and direction where necessary. Problem Based Learning (PBL) provides students with the opportunity to gain content knowledge and understanding [8]. PBL meets the Four Essential Rules of 21st Century Learning: (1) instruction should be student-centred; (2) education should be collaborative; (3) learning should have context; and (4) schools should be integrated with society. The principle that learning should have context is almost the same as one of the characteristics of PBL, which is to present authentic problems in learning [5]. Learning is not very meaningful for students if it does not affect their lives outside the place of study. Therefore, subject matter needs to be linked to students' daily lives. Lecturers develop learning methods that enable students to connect with the real world, helping students find values, meanings, and beliefs about what they are learning that they can apply in their daily lives [9]. Considering that problems in daily life always develop in the direction of advancing information and technology, the learning process using PBL is also required to change; among other things, the presentation of authentic problems can be integrated with technology. Authentic problems today are not limited to the environment around students; problems from all around the world can be easily known thanks to the advancement of information technology. Therefore, the presentation of authentic problems in PBL can use real-time data, which is a collection of information that is delivered immediately after observations or data collection [7]. Implementing real-time data in learning makes the problems more unstructured. Students are allowed to experience real-world problem solving using real-time data in the same way that scientists do. Various real-time data that will be used must be effective and efficient in supporting the learning process.
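For illustration, such real-time data can be pulled into a classroom exercise with a few lines of code. The sketch below uses the public USGS earthquake GeoJSON feed as one example of a website offering live data (the feed URL and fields are those published by USGS; any comparable live source would serve the same pedagogical purpose):

```python
import requests

# Public USGS feed of earthquakes from the past hour (updated continuously).
FEED_URL = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_hour.geojson"

def fetch_recent_quakes():
    """Fetch real-time earthquake records that students can analyze in a PBL task."""
    response = requests.get(FEED_URL, timeout=10)
    response.raise_for_status()
    data = response.json()
    # Each feature carries a magnitude and a location description.
    return [
        (f["properties"]["mag"], f["properties"]["place"])
        for f in data["features"]
        if f["properties"]["mag"] is not None
    ]

if __name__ == "__main__":
    for magnitude, place in fetch_recent_quakes():
        print(f"M{magnitude:.1f}  {place}")
```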
Given that most of the learning process is carried out in the classroom, various real-time data related to the subject matter can be collected from website addresses that provide real-time data on the internet [9]. As observed values are constantly updated, sometimes the numbers consist of only the present value [10]. Conclusion This research shows that PBL with real-time data from websites in the Engineering Physics course can be implemented by integrating the steps of PBL with authentic problems based on real-time data from websites. This study shows that the creative thinking skills of students taking the Engineering Physics course taught with PBL increased.
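For completeness, the one-sample t-test named in the Data Analysis stage can be sketched as follows. The pre/post creativity scores below are invented placeholders for the 26 students' actual data, which the paper does not publish:

```python
import numpy as np
from scipy import stats

# Hypothetical pre-test and post-test creativity scores (0-100) for n = 26
# students; illustrative values only, not the study's actual data.
rng = np.random.default_rng(0)
pre = rng.normal(62, 8, size=26)
post = pre + rng.normal(7, 5, size=26)  # assume an average gain of ~7 points

# One-sample t-test: is the mean gain (post - pre) different from zero?
gain = post - pre
t_stat, p_value = stats.ttest_1samp(gain, popmean=0.0)
print(f"mean gain = {gain.mean():.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```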
2019-11-28T12:36:29.747Z
2019-11-01T00:00:00.000
{ "year": 2019, "sha1": "9b4e850898beddd45d3121020180a43ebc5ad7db", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1280/5/052072", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "1dc542fa6cb0093f243cac7b5c2f99131adfc6ab", "s2fieldsofstudy": [ "Engineering", "Education" ], "extfieldsofstudy": [ "Psychology", "Physics" ] }
136304239
pes2o/s2orc
v3-fos-license
Atomic-Scale Friction of Monolayer Graphenes with Armchair- and Zigzag-Type Edges During Peeling Process We numerically studied the atomic-scale friction of the monolayer graphene sheet during the nanoscale peeling process by molecular mechanics simulation. The zigzag behavior appears twice in the force curve during the surface and line contacts between the graphene sheet and the graphite surface. During the surface contact, the graphene sheet takes the atomic-scale sliding motion, which exhibits the transition from the continuous to the stick-slip sliding, particularly for the graphene with the armchair-type free edge. The period of the zigzag structures for the stick-slip motion in the peeling force curve nearly corresponds to the lattice period of the graphite, depending on the lattice orientation and the edge structure of graphene. During the line contact, the graphene sheet also takes the stick-slip sliding motion. Comparison between armchair- and zigzag-type free edges reveals the difference in the characteristic atomic-scale sliding of the graphene sheet. These findings indicate the possibility of not only the direct observation of the atomic-scale friction of the graphene sheet at the tip/surface interface but also the identification of the lattice orientation and the edge structure of the graphene sheet. [DOI: 10.1380/ejssnt.2010.105] I. INTRODUCTION The carbon nanostructures such as carbon nanotube (CNT) and graphene have recently attracted great interest as components of electronic, magnetic, and optical devices. We have so far studied the peeling mechanics of the carbon nanotube (CNT) adsorbed onto the graphite surface both theoretically [1][2][3] and experimentally [4,5]. It is clarified that the transition from the line- to the point-contact between the CNT and the graphite surface occurs during the peeling process [1][2][3][4][5]. On the other hand, since the success of its experimental isolation [6], the potential of various applications of graphene has been discussed by many researchers [7,8]. Therefore the peeling mechanics of the graphene sheet is also very important; it can be regarded as the elementary process of the macroscopic sticky tape, such as the gecko-foot-mimic adhesives [9][10][11], or of the microscopic extension of a crack in the fracture process. In our preliminary experiments, we have already succeeded in peeling a multilayered graphene plate with a thickness of several µm by using an atomic force microscopy (AFM) tip [12]. Here a two-component epoxy resin adhesive is used to bond the graphene plate to the AFM tip. The junction formed between the AFM tip and the graphene should be mechanically rigid enough to measure the elasticity of the graphene sheet during the peeling process. Ahead of experiments, we have theoretically reported the nanoscale peeling behaviors of the monolayer graphene sheet lifted at the center position, based on molecular mechanics simulation [13]. The peeling force curve exhibits the nanoscale change of the graphene shape from the surface to the line contact. However, clear atomic-scale behaviors of the graphene sheet have not been found yet during the peeling process. In this paper, characteristic atomic-scale sliding behaviors of the graphene sheet are found during both the surface and line contacts in the case of lifting the edge of the graphene sheet. It is clarified that the free edge structure markedly influences the atomic-scale peeling process.
These simulated results indicate the possibility of not only the direct observation of the atomic-scale friction of the graphene sheet at the tip/surface interface but also the identification of the lattice orientation and the edge structure of the graphene sheet. II. MODEL AND METHOD OF SIMULATION The same model as that used in the previous work [13] is adopted, as illustrated in Fig. 1(a): a rectangular-shaped monolayer graphene sheet with sides of 38 Å × 21 Å comprising 310 carbon atoms, adsorbed onto a rigid rectangular graphene sheet (called the 'graphite surface' hereafter) with sides of 164 Å × 58 Å comprising 3536 carbon atoms. The initial position of the graphene is set so that the AB stacking registry between the graphene sheet and the graphite surface is satisfied, as shown in Fig. 1(b). The green-colored outermost atoms at the left edge of the graphene sheet are assumed to be attached to the AFM tip apex [Fig. 1]. In Sec. III, the atomic-scale peeling process for the armchair-type edge is discussed. Then, in Sec. IIIC, the case of the zigzag-type edge is also discussed. The free edge structure markedly influences the atomic-scale peeling process. For each lifting edge height z of the graphene sheet, the total energy V_total = V_cov + V_vdW is minimized using the conjugate gradient (CG) method [14]. Here the covalent bonding energy V_cov [15] and the nonbonding energy V_vdW [16,17] are considered. Thus the optimized shape of the graphene sheet and the peeling forces acting on the lifted left edge, F_x and F_z, are calculated during the peeling process. A. Nanoscale peeling behavior within x−z plane When the left edge of the monolayer graphene sheet is lifted, the shape of the graphene sheet markedly changes during the peeling process within the x−z plane, as illustrated in the figures; the lifted edge receives the averaged attractive interaction force per one carbon atom. B. Atomic-scale sliding within x−y plane Fig. 3(a) shows the atomic-scale zigzag structures within the surface- and line-contact regions, which can be explained by the following atomic-scale sliding motions of the graphene sheet within the x−y plane. Surface-contact region During the surface-contact region between C and E in Fig. 3(a), the z−F_z curve takes the atomic-scale zigzag structures from I to VII [Fig. 4(c)]. The period of the zigzag behavior of the F_z curve decreases from 3.7 Å to 2.5 Å, as shown in Fig. 4(a), as the peeling proceeds. The lattice spacing of the graphite surface, 2.5 Å, appears in the peeling force curve, particularly in the stick-slip region. Line-contact region During the line-contact region between G and H in Fig. 3(a), the z−F_z curve takes other atomic-scale zigzag structures, as shown in Fig. 5(a). C. Edge effect The free edge of the graphene sheet discussed in the previous section is 'armchair type.' However, it is well known that the edge structure plays quite an important role in the electronic properties of graphene, which can also be expected to influence mechanical properties such as the peeling process. Therefore, in this section, the peeling process of the graphene sheet with the 'zigzag-type' free edge is discussed. In the simulation, the model obtained by rotating Fig. 1(b) by 30° is used [Fig. 6(a)], and the left zigzag edge is lifted to simulate the peeling process, while the right free edge is also zigzag type.
As a result, the nanoscale peeling process within the x−z plane and the global shape of the force curve [Fig. 6(b)] are similar to those of the armchair-type case. During the line contact, the difference between the armchair- and zigzag-type edges is enhanced; Fig. 8(a) reflects the zigzag stick-slip motion of the graphene sheet. IV. DISCUSSIONS AND CONCLUSIONS In this paper the atomic-scale sliding motion of the monolayer graphene sheet during the peeling process is found by molecular mechanics simulation. For the graphene sheet with the armchair-type free edge, the transition from the continuous to the stick-slip motion of the graphene sheet is also found, which can be explained as follows: the peeling process induces an increase of the peeled area of the graphene sheet and a decrease of the surface-contact area. Considering that the peeled area of the graphene sheet acts as an effective spring, as shown in Fig. 9(a), the increase of the peeled area makes the effective spring softer, and the decrease of the surface-contact area decreases the energy barrier to slide the graphene sheet. Finally, the peeling process induces the transition from the continuous to the stick-slip sliding motion of the graphene sheet, together with the decrease of the period and amplitude of the z−F_z curve. Here the period of the zigzag structures of the peeling force curve, particularly in the stick-slip region, corresponds to the lattice spacing of the graphite surface along the [1230] direction, 2.5 Å. For the graphene sheet with the zigzag-type free edge, the period becomes the lattice spacing along the [1010] direction, 2.9 Å. This means the sliding length of the graphene sheet along the x direction becomes nearly equal to the peeled length along the z direction. Zigzag structures of the peeling force curve with the same period of about several Å have also been observed in our preliminary experiments using multilayered graphene, which will be reported elsewhere [12]. Of course, if the number of peeled graphene sheets is reduced, a direct comparison between the present simulation and the experiment will become possible. Another interesting point is that the behavior of the lateral force curve, F_x(z), is qualitatively the same as that of the vertical force curve, F_z(z), during the surface contact, as shown in Fig. 9(b). Therefore it can be said that the peeling force curve, F_z(z), directly reflects the atomic-scale friction force, F_x(z), which decreases to 0.019 eV/Å ≈ 30 pN for z = 27.8 Å [Fig. 9(b)]. This ultralow friction force, F_x, is derived from the superlubricity at the interface between the graphene sheet and the graphite surface [18][19][20]. Furthermore, the effect of the edge structure on the peeling process is clarified by comparing the armchair- and zigzag-type free edges. The atomic-scale structure of the force curve during the surface contact reflects the lattice spacing of the graphite surface, so the period of the atomic-scale structure of the force curve can tell us the atomic-scale lattice orientation and the structure of the free edge of graphene. Such information can be used for the control of the electronic properties of the graphene sheet adsorbed onto the substrate. Therefore this paper indicates the possibility of the identification of the lattice orientation and the edge structure of the graphene sheet. The peeling process discussed in this paper is closely related to the atomic-scale wear of graphite and the graphene tip formation in friction force microscopy [21].
When the tip is pushed onto the surface to less than the critical tip height, the outermost graphene layer is attached to the FFM tip, which results in the formation of the graphene tip. In that case, the graphene sheet takes the surface contact with the second-layer graphene, and it takes the two-dimensional stick-slip motion. However, it is difficult to observe the stick-slip motion directly during the scan process, due to the very small gap between the FFM tip and the graphite surface. On the other hand, if the peeling process is used, it can be expected that the contact at the AFM tip/graphite interface has a wider space to be observed directly by, for example, Transmission Electron Microscopy (TEM). This paper indicates the possibility of a direct observation of the stick-slip motion of the graphene sheet, that is to say, the elementary process of atomic-scale friction or superlubricity occurring at the tip/graphite surface interface.
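The quasi-static relaxation scheme of Sec. II (minimizing V_total = V_cov + V_vdW by the conjugate gradient method at each lifting height) can be illustrated with a toy model. The sketch below is only schematic: it substitutes a one-dimensional chain with harmonic 'covalent' bonds and a Lennard-Jones substrate attraction for the paper's actual graphene force fields, and all parameter values are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-ins for the paper's potentials: harmonic bonds ("covalent") plus a
# Lennard-Jones attraction to a flat substrate at z = 0 ("vdW"). All values invented.
N, K = 20, 40.0               # number of atoms, bond stiffness (eV/Angstrom^2)
EPS, SIG = 0.01, 3.4          # LJ depth (eV) and length (Angstrom), graphite-like scale

def total_energy(z_free, z_lift):
    """V_total = V_cov + V_vdW for a chain whose first atom is held at z_lift."""
    z = np.concatenate(([z_lift], z_free))        # constrained lifting edge
    v_cov = 0.5 * K * np.sum(np.diff(z) ** 2)     # harmonic bond stretching
    r = z + SIG                                   # distance of each atom to substrate
    v_vdw = np.sum(4 * EPS * ((SIG / r) ** 12 - (SIG / r) ** 6))
    return v_cov + v_vdw

# Quasi-static peeling: raise the lifting edge stepwise, relaxing by CG each time.
z_free = np.zeros(N - 1)
for z_lift in np.linspace(0.0, 10.0, 21):
    res = minimize(total_energy, z_free, args=(z_lift,), method="CG")
    z_free = res.x                                # reuse relaxed shape as next guess
    print(f"z_lift = {z_lift:4.1f} A  ->  V_total = {res.fun:8.4f} eV")
```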
2019-04-28T13:07:44.953Z
2010-03-06T00:00:00.000
{ "year": 2010, "sha1": "29ceb611b6a348a9436da1e1fb1510b8dcc8161e", "oa_license": "CCBY", "oa_url": "https://www.jstage.jst.go.jp/article/ejssnt/8/0/8_0_105/_pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "01f32df0648f040d5aacc7a6d7eb7dd4cd244c9b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
7675202
pes2o/s2orc
v3-fos-license
Pharos: Collating protein information to shed light on the druggable genome The ‘druggable genome’ encompasses several protein families, but only a subset of targets within them have attracted significant research attention and thus have information about them publicly available. The Illuminating the Druggable Genome (IDG) program, initiated in 2014, has the goal of developing experimental techniques and a Knowledge Management Center (KMC) that would collect and organize information about protein targets from four families, representing the most common druggable targets, with an emphasis on understudied proteins. Here, we describe two resources developed by the KMC: the Target Central Resource Database (TCRD), which collates many heterogeneous gene/protein datasets, and Pharos (https://pharos.nih.gov), a multimodal web interface that presents the data from TCRD. We briefly describe the types and sources of data considered by the KMC and then highlight features of the Pharos interface designed to enable intuitive access to the IDG knowledgebase. The aim of Pharos is to encourage ‘serendipitous browsing’, whereby related, relevant information is made easily discoverable. We conclude by describing two use cases that highlight the utility of Pharos and TCRD. INTRODUCTION In 2014, the National Institutes of Health (NIH) initiated the Illuminating the Druggable Genome (IDG) program (https://commonfund.nih.gov/idg/index). The goal of the IDG program is to shed light on poorly characterized proteins that can potentially be modulated using small molecules or biologics. The program comes at a time when genomic information suggests that at least 3000 gene-coded proteins can be 'drugged', yet only 10% of these potential targets have an FDA-approved drug (1). From the point of view of funded research, Edwards et al. (2) reported a bibliometric analysis indicating that 75% of research is focused on studying only 10% of the known mammalian proteins. Based on data that we accumulated to develop the Target Central Resource Database (TCRD), during the period 2011-2015 the NIH funded 270,491 R01 project grants to study 7934 targets, and just 11 (0.14%) of those targets accounted for 10% of the R01s funded. There are multiple reasons for having understudied, or even unstudied, targets, some of which are discussed in Edwards et al. (2). We refer to these unstudied proteins as 'dark'. Clearly, there is a need to be able to access comprehensive, diverse data about protein targets and present such data in a manner that can be used to shed light on potential dark targets. To achieve these goals, the IDG initiated the Knowledge Management Center (KMC), which was initially tasked with collating and disseminating data on approximately 1700 targets from the four families enriched for existing drug targets: ion channels, nuclear receptors, GPCRs and kinases. However, current efforts have gone beyond these four families, to consider all ∼20 000 human protein targets, motivated by the opportunity to expand what is considered druggable (3). These efforts have culminated in the Target Central Resource Database (TCRD), an integrated database of diverse data sources and data types, and a multimodal web-based platform called Pharos, to disseminate and explore the data within TCRD.
These resources allow researchers to explore all data around dark targets in the context of well-studied targets. There currently exist a number of resources that have aggregated data around genes or protein targets. For example, GeneCards (4) and UniProt (5) are comprehensive resources on genes and protein targets, respectively, that aggregate a wide variety of information, with the former including extensive links to commercially available tools (e.g. antibodies) to probe targets. While information on antibodies and other tools is collected in TCRD, it goes beyond these to include downstream data types such as mouse phenotype information (http://www.mousephenotype.org/) and GWAS (https://www.ebi.ac.uk/gwas/) data. Furthermore, Pharos attempts to present these varied datatypes in a comprehensive, linked fashion, rather than simply displaying individual data types independently. A recently released resource that is somewhat similar in nature to the current work is OpenTargets (https://www.opentargets.org/). However, the scope of OpenTargets is primarily to enable disease-specific target validation, as opposed to broadly collating knowledge about all targets. Another resource focusing on the druggable genome is DGIdb (6), a database that collects drug-gene interactions. By definition this resource focuses on well-studied targets and thus does not address the specific challenge of dark targets catalogued via the IDG program. DrugBank (7) and the Therapeutic Target Database (8) also aggregate data for protein targets, but their primary focus is the targets of drugs, and thus by definition they do not contain information on understudied or unstudied targets, for which small molecule probes may not be available. The current paper describes the Pharos platform that presents the contents of the TCRD. In the following sections, we describe the overall architecture, the data sources considered in the TCRD and the user interface features implemented in the Pharos platform.

MATERIALS AND METHODS

TCRD is the central data repository for the IDG KMC and the primary data source for the IDG-KMC project-wide web portal, Pharos. TCRD integrates diverse datasets relevant to human genes and proteins, using well-defined workflows that employ source APIs, and also serves as a platform for data integration and analytics. The Pharos application is the interface to the TCRD data and provides both an HTML user interface and a REST API. TCRD releases are imported into a local database for pre-processing (which primarily focuses on transforming, indexing and linking different data types to enable rapid retrieval) and then displayed by Pharos. For an example of data transformations for tissue expression data, see Supplementary Information. While all TCRD data are available via the Pharos application, users wishing to work with the original, unprocessed form of the TCRD database can access it from http://juniper.health.unm.edu/tcrd/. An ER diagram of the TCRD database is available in Supplementary Figure S1, and licensing information for individual data sources contained within the TCRD is available in Supplementary Table S1. Source code for the Pharos platform is available, under the MIT license, from https://spotlite.nih.gov/ncats/pharos.

Data sources

The datasets in TCRD comprise a wide array of knowledge and data types about genes, proteins and small molecules collected and processed from numerous resources.
It includes text-mined bibliometric associations and statistics from the biomedical and patent literature, mRNA and protein expression data, disease and phenotype associations, bioactivity data, drug-target interactions, and processed datasets about the functions of genes and proteins from 66 resources organized into 114 datasets imported from the Harmonizome (9). TCRD also makes use of existing biological ontologies, which we integrated to construct the bespoke Drug Target Ontology (DTO, http://drugtargetontology.org). The full list of data sources is included in Supplementary Table S2.

Target classifications

Based on the data collected for each target, the KMC has constructed a high-level classification scheme, termed the Target Development Level (TDL). The TDL characterizes the degree to which targets are studied or not studied, as evidenced by publications, tool compounds and other features. The TDL scheme serves as the primary grouping of targets, clearly delineating those targets that are unstudied (labeled Tdark) from those that have more information about them (labeled Tclin if associated with approved drugs with known mechanism of action (10); Tchem if associated with small molecule activities in ChEMBL; or Tbio if not associated with small molecule or drug activities but having a GO MF or BP leaf term annotated, or else a confirmed OMIM phenotype). DrugCentral (11) aggregates target-disease information and drug-target bioactivity data, which are used to categorize Tclin and Tchem, and feeds into TCRD. See http://juniper.health.unm.edu/tcrd/ for a more in-depth description of the TDL classification scheme. Along with the TDL scheme, we have employed the DTO to support a formal classification and annotation of the IDG protein families, building on top of prior classification schemes for kinases (13), GPCRs (12)(13)(14), ion channels (15) and nuclear receptors (13). Though the DTO, being an ontology, allows for sophisticated inferencing and hypothesis generation, Pharos currently employs the DTO primarily as a simple classification scheme to complement the TDL categories.
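To make the TDL rules concrete, the following is a minimal sketch (in Python) of the assignment logic described above. This is an illustration only, not the KMC's actual implementation, and the boolean flag names are hypothetical:

```python
# Hypothetical TDL assignment following the rules described in the text.
# Precedence: Tclin, then Tchem, then Tbio, else Tdark.

def assign_tdl(approved_drug_with_known_moa: bool,
               chembl_small_molecule_activity: bool,
               go_mf_or_bp_leaf_term: bool,
               confirmed_omim_phenotype: bool) -> str:
    """Return the Target Development Level for a single protein target."""
    if approved_drug_with_known_moa:
        return "Tclin"
    if chembl_small_molecule_activity:
        return "Tchem"
    if go_mf_or_bp_leaf_term or confirmed_omim_phenotype:
        return "Tbio"
    return "Tdark"

# A target with no approved drug and no ChEMBL activity, but a confirmed
# OMIM phenotype, is classified as Tbio:
print(assign_tdl(False, False, False, True))  # -> Tbio
```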
Presentation and usability features

The Pharos platform is designed to be broadly applicable and of use to both computational and non-computational scientists. The platform focuses on three classes of users: biologists and clinical researchers (with an interest in characterizing and validating novel targets and identifying key small molecules or biologics), funding agencies (with an interest in exploring the research landscape so as to generate new ideas for research funding and direction) and finally computational scientists (with an interest in data mining and supporting target validation projects). Thus Pharos provides a REST API (https://pharos.nih.gov/idg/api/v1) that supports programmatic access to search functionality and all data contained within TCRD. The API is designed to be self-describing and responses are made in JSON format. The API is of primary interest to computational scientists and developers building novel applications on top of it (a brief usage sketch is given at the end of this section). However, we anticipate that the most common interaction is via the web interface. Hence we focus on a description of features implemented in the web interface that enhance usability and exploration of the knowledgebase.

Search functionality

As noted above, Pharos ingests a TCRD release and performs a pre-processing step prior to data display. The pre-processing step primarily focuses on linking or transforming a number of data types to allow for rapid retrieval and visualization. A key pre-processing step is indexing the relevant fields for a given entity (i.e. target, disease and compound) to support free-text search, autosuggest and complex filtering functionality. The combination of free-text search and filters allows for easy drill-down when faced with large result sets. Text search is enhanced by the availability of autocomplete suggestions, grouped by categories. An example of this behavior is shown in Figure 1A. The autocomplete feature is designed to be the primary entry point for exploring target data and for hypothesis generation. In addition to text search, sequence similarity search allows the user to paste in an amino acid sequence and identify targets with a similarity greater than a user-specified cutoff. Finally, a batch search function is also available that allows a user to paste in multiple gene symbols or protein accession codes and retrieve their records in one go. Most searches (including general text searches) will return hits for targets, publications, ligands and diseases. The user interface is designed to support intuitive drill-down into the hits within each of these types of entities, with a particular focus on protein targets. This is enabled using faceted filters that support easy construction of complex filtering rules. Figure 1C is a screenshot of the main entry point to the list of targets obtained via a search or by browsing all available targets. The filter panel on the left-hand side consists of five filters that we consider the most commonly used. Selecting a filter automatically filters the list of targets, and multiple filters are combined using logical AND. Each filter also shows the count of entities that match a given filter value and, when selected, displays the number of matching entities (which may differ from the first count owing to the inclusion of other filters). Pharos uses 51 filters that include ontology terms (e.g. GO (16), Disease Ontology (17), DTO and PANTHER (18)), NIH grant types and counts, tissue expression data, pathway relationships and so on. Combining filters allows one to construct sophisticated queries. For example, identifying multiple targets associated with two or more diseases could lead to co-morbidity hypotheses (19). The list of filters can itself be narrowed using text search, and complex filter combinations can be saved by simply bookmarking the URL. This enables easy sharing of specific searches between users. All data viewable in the interface are available for download, both for individual targets and for multiple targets. The data are made available in the form of multiple CSV-formatted files contained in a single ZIP archive, with metadata describing the columns included as a text file within the archive. As one of the goals of the IDG KMC was to organize data on unstudied and understudied targets, the notion of a target dossier (similar in concept to the OpenPHACTS consortium target dossier, http://td.inab.org/) was developed to allow a user to collect data as they browse the database. The dossier is analogous to an e-commerce shopping cart and allows a user to collect targets, diseases and publications as they continue browsing. The dossier functionality supports multiple dossiers, allowing the user to collect information for separate purposes, e.g., different projects. Data associated with the entities in any given dossier can be downloaded as on the main interface. Similarly, all visualization tools available on the main interface can be applied to the entities contained within a dossier.
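As promised above, here is a minimal sketch of programmatic access via the REST API. Only the API root (https://pharos.nih.gov/idg/api/v1) and the JSON response format come from the text; the "targets" resource path, the paging parameters and the response fields used below are assumptions for illustration:

```python
# A hedged sketch of querying the Pharos REST API with Python.
# The resource path and response layout below are assumed, not documented here.

import requests

API_ROOT = "https://pharos.nih.gov/idg/api/v1"

def fetch_json(path, **params):
    """GET a resource under the API root and decode the JSON response."""
    resp = requests.get(f"{API_ROOT}/{path}", params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()

# Hypothetical usage: retrieve a page of targets and print a field or two.
page = fetch_json("targets", top=10, skip=0)   # paging parameter names assumed
for entry in page.get("content", []):          # "content"/"name" keys assumed
    print(entry.get("name"))
```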
A common task when exploring understudied targets is to compare the data available around them to other targets. In particular, comparison to targets in the same family could be useful in understanding whether more resources should be expended on illuminating the understudied ones. While we expect that an in-depth analysis will be performed using custom tools and data exported from Pharos, the user interface supports visual (side-by-side) target comparison of two or more targets (e.g. https://pharos.nih.gov/idg/targets/compare?q=Q05586,Q9UBN1).

[Figure 1 caption, parts (B) and (C): (B) the Table of Contents widget on a target detail page provides the user with an overview of the data types available for the target being viewed and allows direct navigation to individual data types; in addition, the widget supports download of all data for the current target and viewing the JSON representation of the target available from the underlying API. (C) A screenshot of the target list view, obtained either via free-text search or by browsing the entire set of targets.]

Target detail pages

All information about a target is accessible via individual pages. The goal of these pages is to display all data that have been collected by the KMC about the given target. Because of the wide variety of data types that are collected, these pages can be quite large. To enhance usability, each page provides a table of contents (Figure 1B) that enables the user to jump directly to data types of interest or download all the available data for the current target. Individual data types are represented as panels, with link-outs or visualizations depending on the nature of the data types (e.g., tissue expression data as a color-coded homunculus; Supplementary Information, Supplementary Figure S3). In contrast, for publications associated with a target, a list of them is provided in a table, but in addition a summary using a word cloud generated from the abstracts is presented. For many data types, such as grant applications or GO terms, the user interface enables using that data to perform a new search, allowing for easy (even serendipitous) exploration of the IDG target space.

Data visualization components

Depending on the data type, Pharos implements a number of visualizations throughout the interface. For example, the target list view employs visualizations including radial pie charts, word clouds and sunburst (20) diagrams, depending on whether the data type is categorical (e.g. target class or target family), textual (GO or UniProt keywords) or hierarchical (Drug Target Ontology or PANTHER classification). In the target list view these visualizations act as filters: clicking on a given pie segment in the Target Family visual, for example, will filter the current list of targets to match the selected target family. An important aspect of the work done by the IDG KMC is to collect and process a wide variety of heterogeneous datasets that describe the properties and functions of genes and proteins. A key resource that was developed by the IDG KMC is a simplified uniform representation of knowledge about genes and proteins. This project is called the Harmonizome (9).
The Harmonizome datasets provide a numeric representation of 72 million associations between all mammalian genes and their attributes, collected from 66 major open online resources. Using metadata associated with each data source, we summarized knowledge around a target by aggregating the 114 datasets from the Harmonizome into 41 sub-groups and visualizing this as a radar chart. When displayed in a column in the table of targets, the plots provide a visual summary of the amount and type of knowledge that is available about the target. Importantly, this allows the user to scan the table to examine the shape of the radar plots for each target. In other words, radar plots that look similar (Figures 1C and 2A) imply that the corresponding targets have similar data types associated with them. Individual radar plots can be expanded and then explored using different aggregation schemes, as well as overlays combining the radar plots for several targets or groups of targets (Figure 2B). An additional visualization of Harmonizome data is the harmonogram (9). This is essentially a heatmap representation of the cumulative probabilities for the target/dataset associations. The data sources are presented on the Y-axis and the targets on the X-axis. The visualization in Pharos is interactive, allowing zooming and selections. Figure 3 displays harmonograms for two sets of targets. Figure 3A corresponds to kinase targets with a Tclin classification (i.e. relatively well studied). This is evidenced by a heatmap that is largely populated, with high cumulative probability values. On the other hand, Figure 3B is the harmonogram for GPCRs with a Tdark classification. It is evident that there are far fewer data associations for this target set (grey representing no data). More specifically, the bands of grey represent holes in the knowledge space for this set of targets. The interactive visualization enables grouping of data sources by their type (e.g. genomic data sources or chemical data sources), allowing the user to easily identify targets for which specific types of data may be missing (or poorly populated). We refer the reader to Supplementary Information for a discussion of other visualization components available in Pharos.

Ranking targets

The interface allows one to rank targets using a variety of parameters, including the novelty score (a measure of the extent to which the published literature refers to the target) and the PubMed Score (described at https://pharos.nih.gov/idg/pmscore). We have also implemented a ranking scheme based on Harmonizome data. Specifically, for each target we compute the sum of the cumulative probabilities across all 114 data sources captured in the Harmonizome and define this as the Data Availability Score (DAS); a minimal sketch of this computation follows. Clicking on the radar chart column header allows one to sort the targets based on their total knowledge availability as represented by the DAS. It is important to realize that target ranking based on individual parameters is only a first step in target prioritization. While there are examples of target prioritization using individual parameters such as GO or DO terms (21), in general target prioritization is heavily contextual, where the context could be a disease state or a biological process.
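The sketch below illustrates the DAS computation just described, assuming each target's Harmonizome cumulative probabilities are available as a mapping from dataset name to value; the data structure and example values are hypothetical:

```python
# Data Availability Score: the sum of a target's cumulative probabilities
# across the 114 Harmonizome datasets. Datasets with no data for a target
# (grey harmonogram cells) are simply absent and contribute nothing.

def data_availability_score(cume_probs: dict) -> float:
    """Sum the per-dataset cumulative probabilities for one target."""
    return sum(cume_probs.values())

# Toy example using three of the 114 datasets:
target_probs = {"GWAS Catalog": 0.91, "ChEMBL": 0.42, "OMIM": 0.77}
print(round(data_availability_score(target_probs), 2))  # -> 2.1
```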
Use cases

The wide variety of data presented via Pharos supports multiple modes of interaction, ranging from guided browsing to direct access to specific target pages. We describe two use cases that highlight the role that Pharos could play in enabling research on understudied targets.

Novel targets that may play a role in obesity. A number of targets play an important role in the regulation of food intake; dysregulation of these targets can lead to a variety of metabolic disorders and play a role in obesity (22). We can start from the Target view (https://pharos.nih.gov/idg/targets) and use the Disease filter to search for targets associated with 'Obesity'. This gives 432 targets, which we can further filter down by using the GWAS Trait filter (selecting 'Obesity'). This leaves 18 targets, of which 15 do not belong to the IDG families. Thus we focus on the GPCR, ion channel and kinase targets. At this stage we can download data on these targets, or else save them to a new dossier titled 'Obesity Targets'. In parallel, we can view the data availability around these targets by generating a harmonogram, which would highlight that KCNMA1 is well studied experimentally, whereas ALPK1 is somewhat sparser. Focusing on ALPK1, we see from the target detail page that it is part of 15 funded grants. We can then rerun the search using grant 5R01NS044385-12 as the query and identify the targets studied in it that are associated with obesity (via the Disease filter). As expected this includes ALPK1, but it also identifies KIF7, which was not in our initial search results (since it did not belong to the GPCR, kinase or ion channel families). Given that KIF7 is under study, it may be useful for further investigation and thus could be added to the 'Obesity Targets' dossier. At any point it is also possible to identify other diseases that targets are associated with (such as gout for ALPK1) and then explore data associated with those targets, saving items of interest to the dossier for later study.

Identify diseases and researchers that are related directly or indirectly to nociception targets. Targets involved in nociception are spread out amongst multiple protein families. We consider a user who is interested in diseases (and their associated targets) that are related to nociception (or comorbid with diseases related to nociception). The starting point would be a text query for 'nociception', which will generate a result set of 77 targets, 1 disease and 30 publications. The user could simply focus on the identified disease (Neuropathy, hereditary sensory and autonomic) and stop there. However, it is useful to explore what diseases are associated with the 77 targets. To contrast established and novel targets, the results could be reduced to focus on the 32 Tclin and Tdark targets via the Development Level filter. At this point the Disease filter will list diseases associated with this subset. These include ones that are clearly related to nociception (such as pain agnosia and neuralgia) but also others that are not obviously related (such as cancers and a number of psychiatric disorders). The user could focus on one or more of these diseases and explore the targets associated with them (possibly saving these in a custom dossier). To identify researchers, the user can employ the TechDev PI filter to identify IDG-funded researchers working on any of the targets. The user could drill down to the specific targets being actively studied and, from the interface, get in touch with the lab conducting the experiments.
In parallel, using the R01 Count filter, the researcher could select targets for which there are multiple funded grants, then explore the targets being studied as part of those grants, and jump out to NIH RePORTER (https://projectreporter.nih.gov/) to get further details on who is studying these targets.

DISCUSSION

Given background information from TCRD, Pharos serves as an entry point into the druggable genome initially envisaged by the IDG program, but it has gone beyond the initial set of ∼1700 targets to incorporate the entire human proteome. As a result, users now have a much richer contextual space within which data on understudied targets may be considered. Given the wide variety of data types collected by TCRD, effective access and presentation via Pharos enables users to find what they want, but also points users in the direction of related, possibly relevant information that they may not have considered initially. Ongoing work focuses on incorporating more data types into TCRD (in particular epigenomic and metabolomic data) and expanding some of the current data types. For example, grant funding data has been very useful to identify research 'hot spots', and inclusion of health economic data would provide a complementary view of which targets are currently of interest versus those in which interest is growing. Other efforts include better highlighting of provenance (why and where did something match a search query), target prioritization (via similarity searches and temporal analysis of appropriate data sources to identify 'rising targets') and use of the semantic capabilities of the DTO. In conclusion, the Pharos platform is designed to allow efficient exploration of the currently defined druggable genome, with the ability to go beyond this pre-defined subset of targets. Together with integration of experimental results from the IDG-funded Technology Development groups, this platform will support research scientists wishing to understand the knowledge landscape around the druggable genome, with the hope of shedding light on the dark corners and thereby expanding what is considered druggable.
Factors Affecting the Care of Patients with Malignant Hypertension

…healthy life into the seventh or eighth decade. If they do not, a questioning eye is turned upon the factors responsible. One of the most important of these is the standard of the medical care offered in prevention or treatment of illness. A woman who is delivered of a defective child because she had rubella in the early months of pregnancy no longer considers this an act of God. She is more likely to blame her general practitioner or other aspects of community medical services for not ensuring that she was vaccinated before she became pregnant. The quality of medical care has become a lively issue. Some doctors are both puzzled and hurt when the quality of medicine is questioned. They argue that medicine can do more today than it ever could in the past and the public should be grateful for its achievements. The public viewpoint is rather different. Triumphs of science and technology are celebrated for a short period and then assimilated into the general fabric of experience. If, as a result of failures by the practitioners responsible, the full benefits of those past triumphs are not reaped, the public have a right to ask for remedial action. The quest for quality in medical care is a direct consequence of the greater efficacy of the treatment that can be offered. It is difficult to measure the quality of medical care. What to measure and how to measure it both present great problems. The medical care system is very complex.
An adverse outcome may arise because of the failure of an individual practitioner or ancillary to do something that a normally competent person should have done. Alternatively, it may occur because of an organisational failure. The failure to vaccinate a young woman against rubella might occur because the doctor whom she consulted was unaware of the risks and benefits of the vaccine and failed to give appropriate advice. Or it might occur because there had been a mix-up by a clerk or computer operator who was preparing lists of women who had and had not been vaccinated in order to issue reminders. This is a comparatively simple example. The situation is much more complicated when considering the outcome of an illness that requires the intervention of several different specialties and literally dozens of individuals during an admission to hospital. Morrell (1970) proposed five headings for assessing the quality of medical care. [The list of five headings is not legible in the source.] Obviously the most important of these is outcome, but it is one of the most difficult about which to gather evidence. Most studies on the quality of care come down to investigations of the processes that were used, most often by study of the written case records. To relate process to outcome it is usual, at present, to confine investigations of the quality of care to illnesses where there is an effective form of treatment for a serious disease. This article describes the application of such techniques to the problem of severe hypertension.

Malignant Hypertension

Malignant hypertension (papilloedema) and accelerated hypertension (retinal cotton-wool spots or haemorrhage) have a poor prognosis without treatment. About 90 per cent of patients with malignant hypertension will die in the first year if they are not treated, and about 70 per cent of those with accelerated hypertension will die during the same period. Survival of patients with malignant hypertension can be prolonged substantially by anti-hypertensive therapy. The investigations reported here form part of a larger project to investigate the care and treatment of hypertensive patients, particularly those with malignant or accelerated hypertension. The information comes from three different studies. 1. This was an examination of the case records of patients dying in the Greater London area in 1974-76 whose death certificates mentioned malignant hypertension among the causes of death (Dollery et al., 1976). By courtesy of the Office of Population Censuses and Surveys (Dr Adelstein), copies of the death certificates were made available and general practitioners and hospital doctors were asked to lend their records for information to be extracted from them. A total of 100 patients' deaths were investigated over a period of approximately two years. These patients were necessarily a biased sample because they all died. 2. This investigation was designed to provide comparative data on patients with malignant hypertension who had survived. The National Morbidity Study recorded diagnoses of the patients, between 1970 and 1973, in 71 general practices scattered throughout the UK. Among these practices there were 165 individuals in whom malignant hypertension was recorded as the diagnosis. With the help of the OPCS these practitioners were approached to request the extraction of information similar to that recorded in the study of mortality in London (Bulpitt et al., 1979). 3.
This was a randomised controlled trial carried out in the hypertension clinics at Hammersmith Hospital, the Radcliffe Infirmary in Oxford, and King's College Hospital in London to compare the information content of standard hospital records and computerised hospital records in the initial care of patients with hypertension (Dollery et al., 1977). The computer records contained pre-printed tables for recording all the features that were held to be important in the history, examination and investigation of a hypertensive patient; the standard records were plain sheets of paper upon which the doctor could write anything he wished.

Diagnosis

The randomised controlled trial of computerised versus standard records revealed striking under-recording of important clinical information. It also highlighted the difficulty of distinguishing between two states: (1) a definite record that the condition was not present; (2) no entry in the record. Thus, in the standard case records there was a positive record of a past history of stroke in 0.7 per cent of patients and a negative record that there had not been a stroke in 19 per cent. There was no record of any sort in the other 80.3 per cent and it was thus impossible to know from the case notes whether or not the patient had suffered a stroke. In the computer records there was a positive note that the patient had suffered a stroke in 0.7 per cent and a written record that there never had been a stroke in 98.6 per cent. As the patients were randomly allocated to the two sets of records, it would appear in this instance that the heavy under-recording in the standard records was probably of negative information. For depression, however, 76 per cent of the standard records had no record of any sort, whereas only 2 per cent of the computer records had no record. Positive records for depression were present in 16 per cent of the standard records and 38 per cent of the computer records. In this case the probable explanation is that there had been roughly equal omission from the standard records of both positive and negative results. Thus, there were substantial omissions from the standard records in respect of diagnostic information that ought to be recorded about hypertensive patients in a specialised hypertension clinic. A similar under-recording has been described in the USA by Frohlich (1971), who investigated the notes of hypertensive patients from various hospitals in Oklahoma City. There was a record, either positive or negative, concerning stroke in only 24 per cent. The National Morbidity Study highlighted a similar problem over the diagnosis of malignant hypertension in general practice. We received details of 92 patients who were recorded as having a diagnosis of malignant hypertension; only 14 had a positive record of papilloedema and 10 had cotton-wool spots or haemorrhages but not papilloedema. The remainder were roughly equally divided between those who did not have the retinal features of malignant or accelerated hypertension (34) [text missing in source] hypertension, who was co-operating with treatment, died of cerebrovascular disease, especially cerebral haemorrhage, or of renal failure, and this would be reinforced if death followed a period of poor blood pressure control. No such case would exist if the patient was moribund at presentation or if he or she died of a myocardial infarction after a long period of relatively good blood pressure control. A prominent feature of the patients with malignant hypertension who died in London was poor control of blood pressure.
The overall average pressure throughout treatment was 189/117 mmHg. Twenty-two per cent had very bad control, with the average diastolic blood pressure on treatment exceeding 125 mmHg. Another worrying feature was the infrequency of blood pressure readings when the patients attended their general practitioner. The frequency of visits, about once a month, appeared appropriate for patients with poor blood pressure control. Unfortunately, a reading of blood pressure was recorded on only 38 per cent of these visits. There was considerable inter-doctor variability, the frequency of blood pressure records per visit ranging from zero to 75 per cent after the diagnosis of malignant hypertension had been made. Treatment with hypotensive drugs was usually energetic during the last few weeks of the patients' lives, but in the early stages it was often less so. Twenty-six per cent of patients who were followed for more than a few months were treated with low doses of only one or two hypotensive drugs. Nineteen per cent stopped treatment for a time. Only rarely could the reason be identified. Failure to re-start therapy after a period in hospital for surgery was the explanation in one case, the advice of a friend in another. Mental illness (depression, psychosis or alcoholism) was an important contributory factor in producing non-compliance with treatment. Another factor in the London series was the preponderance of possibly avoidable causes of death. Twenty-two per cent of those who died with true malignant hypertension had cerebral haemorrhage mentioned on their death certificates, and 60 per cent renal failure. The average blood pressure of those dying of cerebral haemorrhage, 198/120 mmHg, was significantly higher than in all other patients combined. The blood pressure control achieved in the malignant hypertensives involved in the National Morbidity Study was better, on average 180/109 mmHg, but still far from ideal. Of the 14 deaths from malignant hypertension, 5 were from intracranial bleeding and 2 from renal failure.

Discussion

Malignant hypertension is becoming much less common, presumably as a result of widespread treatment of moderate hypertension, which prevents a progression to the accelerated phase. Once the patient enters the malignant phase the prognosis is still not very good (Breckenridge et al., 1970). The mean survival time in the London study was only 25 months after diagnosis. In the National Morbidity Study 62 per cent of those with papilloedema or retinal cotton-wool spots were alive six years after entry into the study. In theory most cases of malignant hypertension should be preventable by effective case-finding programmes for patients with moderate or severe benign hypertension. If renal function is preserved at the time of diagnosis, life expectancy should be satisfactory provided that good blood pressure control is achieved. Unfortunately, many patients continue to present with appreciable renal failure. The average blood urea at presentation in the London study was 16.5 mmol/litre. Even if they retain their renal function, many have poor blood pressure control and eventually die of renal failure or cerebral haemorrhage. Some of these deaths are clearly preventable by more effective use of existing hypotensive agents and more punctilious follow-up.
It is hard to defend a doctor who sees a patient with known malignant hypertension whose last recorded blood pressure was high and who either does not record a blood pressure reading on this visit or, if he does, leaves the dose of drugs unaltered. Patient factors and the social environment also played a large part and can only be incompletely documented from the clinical records. Could anything have been done to help patients who were persistently not adhering to therapeutic advice or who had mental illnesses that interfered with their ability to co-operate? These data emphasise, once again, that hypertension is a chronic illness that requires attentive and skilful follow-up of the more severe cases. We hope that by focusing attention upon the need for more effective blood pressure control and better follow-up, the number of people dying of malignant hypertension can be still further reduced.
Shenxiong glucose injection inhibits oxidative stress and apoptosis to ameliorate isoproterenol-induced myocardial ischemia in rats and improve the function of HUVECs exposed to CoCl2

Background: Shenxiong Glucose Injection (SGI) is a traditional Chinese medicine formula composed of ligustrazine hydrochloride and Danshen (Radix et rhizoma Salviae miltiorrhizae; Salvia miltiorrhiza Bunge, Lamiaceae). Our previous studies and others have shown that SGI has excellent therapeutic effects on myocardial ischemia (MI). However, the potential mechanisms of action have yet to be elucidated. This study aimed to explore the molecular mechanism of SGI in MI treatment.

Methods: Sprague-Dawley rats were treated with isoproterenol (ISO) to establish the MI model. Electrocardiograms, hemodynamic parameters, echocardiograms, reactive oxygen species (ROS) levels, and serum concentrations of cardiac troponin I (cTnI) and cardiac troponin T (cTnT) were analyzed to explore the protective effect of SGI on MI. In addition, a model of oxidative damage and apoptosis in human umbilical vein endothelial cells (HUVECs) was established using CoCl2. Cell viability, Ca2+ concentration, mitochondrial membrane potential (MMP), apoptosis, intracellular ROS, and cell cycle parameters were detected in the HUVEC model. The expression of apoptosis-related proteins (Bcl-2, Caspase-3, PARP, cytoplasmic and mitochondrial Cyt-c and Bax, and p-ERK1/2) was determined by western blotting, and the expression of cleaved caspase-3 was analyzed by immunofluorescence.

Results: SGI significantly reduced ROS production and serum concentrations of cTnI and cTnT, reversed ST-segment elevation, and attenuated the deterioration of left ventricular function in ISO-induced MI rats. In vitro, SGI treatment significantly inhibited intracellular ROS overexpression, Ca2+ influx, MMP disruption, and G2/M arrest in the cell cycle. Additionally, SGI treatment markedly upregulated the expression of the anti-apoptotic protein Bcl-2 and downregulated the expression of the pro-apoptotic proteins p-ERK1/2, mitochondrial Bax, cytoplasmic Cyt-c, cleaved caspase-3, and PARP.

Conclusion: SGI could improve MI by inhibiting the oxidative stress and apoptosis signaling pathways. These findings provide evidence to explain the pharmacological action and underlying molecular mechanisms of SGI in the treatment of MI.

Introduction

Ischemic heart disease accounts for approximately 50% of all cardiovascular diseases (CVDs) and is the leading cause of human mortality (Fan et al., 2017). Myocardial ischemia (MI) is defined pathologically as myocardial cell death due to prolonged ischemia (Thygesen et al., 2018) and is a major challenge in the clinical setting (Wu et al., 2019; Zhai K. et al., 2021). The development of MI is marked by complex molecular mechanisms, such as Ca2+ overload (Chang et al., 2019), reactive oxygen species (ROS) accumulation, and apoptosis (Bugger and Pfeil, 2020), and these mechanisms are driving the search for new drugs and therapies for the prevention and treatment of MI. Traditional Chinese medicine (TCM) and its formulas are the most common therapy for CVD in China because of their superiority and wide-ranging regulatory effects. Examples of TCM for the treatment of CVD include Shenxiong Glucose Injection (SGI), Danhong injection, and Shensong Yangxin Capsule (Jiang et al., 2021). SGI (Figure 1) is a TCM injection composed of ligustrazine hydrochloride and Danshen (Radix et rhizoma Salviae miltiorrhizae; Salvia miltiorrhiza Bunge, Lamiaceae).
It has the function of activating blood circulation to dissipate blood stasis (Zheng, 2015). In addition, the pharmacological functions of SGI include microcirculation improvement, thrombosis inhibition, and anti-endothelial-injury, antioxidant, anti-inflammatory, and antiplatelet-aggregation effects. SGI is widely used in the clinical setting for the treatment of various diseases including MI, coronary heart disease, acute ischemic stroke, and angina pectoris (Zheng, 2015; Zhou et al., 2018). Randomized controlled trials have demonstrated that SGI significantly improves total efficiency, exiting rate, and health quality in CVD (Lv et al., 2019). Moreover, SGI has a significant curative effect on central nervous system diseases, such as cerebral ischemia, acute cerebral infarction, and vertebrobasilar insufficiency vertigo. However, there are limited studies exploring the underlying functional mechanism(s) of SGI.

Apoptosis is a highly regulated process of cell death. Human vascular endothelial cell apoptosis is closely related to the occurrence and development of various CVDs associated with high mortality (Zhai et al., 2017; Gong et al., 2019). Hypoxia induction is an important factor in the pathophysiological mechanism of MI (Cheng et al., 2016). Oxidative stress leads to apoptosis of vascular endothelial cells, which can respond immediately to changes in hypoxic stimulation owing to their direct contact with blood (Yang et al., 2018). Inhibiting the apoptosis of vascular endothelial cells is considered an important means of restoring blood and nutrient supply to the damaged area and thus improving CVD (Kukumberg et al., 2021). Human vascular endothelial cell dysfunction and apoptosis are critical for the occurrence and development of CVD (Zhai et al., 2015; Wu et al., 2019). Human umbilical vein endothelial cells (HUVECs) are a standard model for studying human vascular endothelial cells at the cellular level.

In this study, the pharmacological action and molecular mechanisms of SGI in the treatment of MI were investigated in a rat model of MI induced by isoproterenol (ISO) and in HUVECs with oxidative damage and apoptosis induced by CoCl2. [Figure 1: SGI and its compositions.] Findings from the study provide a scientific basis for future research on SGI application in the treatment of MI.

Animals and treatments

Specific pathogen-free (SPF) male Sprague-Dawley rats were obtained from the Guizhou Medical University Laboratory Animal Center (permission no. SCXK (Qian) 2018-0001). All rats (weight 200-220 g, 7-8 weeks old) were randomly divided into five experimental groups (n = 6 per group): control, model, and low-dose, medium-dose, and high-dose SGI. The rats were injected subcutaneously with ISO (50 mg/kg/day) for 2 days to induce the MI model. The dose and pattern of injection followed a previous study (Sammeturi et al., 2019). Subsequently, the control and model groups were injected with GS (0.6 ml/100 g, i.v.), while the low-, medium-, and high-dose SGI groups were given SGI (0.3, 0.6, and 1.2 ml/100 g, i.v., respectively; equivalent to 1/6, 1/3, and 2/3 of the clinical doses, i.e., 3 mg/kg ligustrazine hydrochloride and 0.6 mg/kg danshensu, 6 mg/kg ligustrazine hydrochloride and 1.2 mg/kg danshensu, and 12 mg/kg ligustrazine hydrochloride and 2.4 mg/kg danshensu) for 4 days. All rats were sacrificed 24 h after the final treatment. The study design is shown in Figure 2A.
The animal experiments were reviewed and approved by the Animal Care Welfare Committee of Guizhou Medical University.

Measurement of electrocardiogram (ECG)

A standard limb lead II ECG was recorded by a BL-420F biological function experiment system (Chengdu Taimeng Technology Co., Ltd., Chengdu, China) in each group. Four hypodermic needle electrodes were used to record the ECG in anesthetized rats: 1) the left upper limb was connected by a yellow electrode; 2) the left lower limb was connected by a green electrode; 3) the right upper limb was connected by a red electrode; 4) the right lower limb was connected by a black electrode (Song, 2021).

Echocardiographic and hemodynamic assessment of cardiac function

Echocardiograms were performed using an M6Vet ultrasound machine (Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Guangdong, China) equipped with a C11-3S scanning probe transducer. The M-mode recording method was used to measure the left ventricular (LV) end-systolic diameter (LVESD) and LV end-diastolic diameter (LVEDD). Mitral valve E wave peak velocity (MV E vel) was also measured (Qian et al., 2018).

Determination of serum concentrations of cTnI and cTnT

Blood samples from all rats were collected from the femoral artery and centrifuged at 1,000 g for 20 min to isolate serum. Subsequently, the serum concentrations of the marker enzymes cTnT and cTnI were measured by enzyme-linked immunosorbent assay (ELISA) kits (Shanghai Zhuocai Biotechnology Co., Ltd, Shanghai, China) according to the manufacturer's protocols.

Assay of ROS in rat hearts

As previously described, heart samples were washed in cold PBS and single-cell suspensions were prepared using mechanical methods. ROS production was determined using a ROS detection kit (Nanjing Jiancheng Bioengineering Institute, Nanjing, China), according to the manufacturer's instructions. Briefly, cells were collected and washed twice with PBS. Each sample was mixed with 1 mL PBS and 1 μL 2′,7′-dichlorodihydrofluorescein diacetate (DCFH-DA) (final concentration 10 μmol/L) for 30 min at 37°C. ROS fluorescence intensity was measured by using a fluorescence microplate reader (Thermo Scientific, MA, United States).

Cell culture

HUVECs were obtained from the Chinese Academy of Sciences Cell Bank (Shanghai, China) and were cultured in RPMI-1640 medium containing 10% fetal bovine serum at 5% CO2 and 37°C. Cells from passages 5-10 were utilized for the experiments.

[Figure 2 caption fragment: (D) the ROS level in the heart tissues (fluorescence value) of MI rats, measured using a BL-420 biological function experiment system and the DCFH-DA probe, respectively; ***p < 0.001 vs. the control group; ###p < 0.001 vs. the model group.]

Establishment of the HUVECs model and selection of SGI dosage

Cells (8×10^4 cells/ml) were seeded in 96-well plates and cultured for 24 h. For the establishment of the model, HUVECs in the logarithmic phase were treated with different concentrations of CoCl2 (0.2-1.4 mmol/L) for 6, 12, 24, and 48 h. For the selection of SGI dosage, HUVECs were treated with different concentrations of GS and SGI (4%-24%, v/v) for 24 h. Cell viability was detected with an MTS (Promega, WI, United States) assay according to the manufacturer's instructions.

SGI treatment

Cells (8×10^4 cells/ml) were seeded in 96-well plates and cultured for 24 h. The control group was treated with 2% GS and the CoCl2 group was treated with 2% GS and 1.4 mmol/L CoCl2.
After pretreatment with SGI for 6 h, the SGI groups were treated with different concentrations of SGI (0.5%, 1%, and 2%) combined with 1.4 mmol/L CoCl2 for 24 h. Cell viability was detected with an MTS (Promega, WI, United States) assay.

Measurement of intracellular ROS

HUVECs (8×10^4 cells/ml) were seeded in 6-well plates for 24 h and treated as required. Detection of intracellular ROS was performed as previously described (Long et al., 2022). Briefly, according to the manufacturer's instructions, HUVECs were stained with DCFH-DA (10 μmol/L), incubated in the dark for 30 min, and then washed three times with PBS. The level of intracellular ROS was detected by a fluorescence microplate reader (Thermo Scientific, MA, United States) and fluorescence images of ROS were captured by a confocal laser scanning microscope (CLSM) (Carl Zeiss AG, BW, Germany).

Western blot analysis

HUVECs (8×10^4 cells/ml) were seeded in 6-well plates for 24 h and treated as required. After treatment, HUVECs were lysed with RIPA lysis buffer (containing 1% PMSF). Insoluble materials were removed by centrifugation for 10 min at 12,000 g and 4°C to obtain total protein. The protein concentration was measured by a BCA kit. Equal amounts of protein from each sample were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred onto a PVDF membrane. Nonspecific sites were blocked by incubating the membranes in 5% BSA buffer. Thereafter, the membranes were incubated with the appropriate primary antibodies (GAPDH, 1 in 2,000 in BSA buffer; ERK1/2, p-ERK1/2, Bcl-2, Bax, Cyt-c, COX-IV, Caspase-3, and PARP, 1 in 1,000 in BSA buffer; cleaved caspase-3, 1 in 500 in BSA buffer) overnight at 4°C. The membranes were washed with Tris-buffered saline containing Tween 20 (TBS-T) and incubated with the appropriate secondary antibodies (1 in 2,000 in TBS-T buffer). After washing with TBS-T five times (5 min each), the membranes were visualized by a Bio-Rad imaging system (Bio-Rad, CA, United States) (Chen et al., 2022). Analogously, the expression of Bax and Cyt-c in mitochondria and cytoplasm was measured.

Immunofluorescent analysis

HUVECs (8×10^4 cells/ml) were seeded in petri dishes for CLSM for 24 h and treated as required. HUVECs were then washed three times with PBS, fixed in 4% paraformaldehyde for 20 min, and incubated in 0.5% Triton X-100 for 20 min. Subsequently, cells were blocked with 5% BSA at room temperature for 30 min and incubated with a cleaved caspase-3 antibody (1:500 in 1% BSA) at 4°C overnight. Subsequently, cells were washed with PBS and incubated with the fluorescent secondary antibody (1:500) for 1 h in the dark at room temperature. The samples were incubated with DAPI (4′,6-diamidino-2-phenylindole) for 5 min and viewed using a CLSM.

Data analysis and statistics

All data are expressed as mean ± standard deviation (SD). SPSS software (IBM, IL, United States) was used for statistical analysis. Student's t-tests were employed to compare differences between the GS and SGI groups. One-way analysis of variance (ANOVA) with post hoc analysis, followed by the Least Significant Difference test (for normal distributions) or Dunnett's t-test (for non-normal distributions), was used to compare differences among multiple groups. p < 0.05 was defined as statistically significant.
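A minimal sketch of this statistical workflow using SciPy in place of SPSS is shown below. The measurements are invented placeholder values, not data from the study, and SciPy does not bundle the LSD post hoc test, so only the t-test and the omnibus ANOVA are illustrated:

```python
# Hypothetical illustration of the reported statistics: a Student's t-test
# between two groups and a one-way ANOVA across multiple groups.

from scipy import stats

# Invented cell-viability values (%) for six replicates per group.
gs      = [98.2, 101.5, 99.7, 100.4, 97.9, 100.1]
sgi     = [104.1, 102.8, 105.6, 103.3, 104.9, 103.7]
control = [100.0, 99.1, 101.2, 100.7, 98.8, 100.3]
cocl2   = [48.9, 52.3, 50.1, 47.6, 51.2, 49.4]

t_stat, p_val = stats.ttest_ind(gs, sgi)             # GS vs. SGI groups
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

f_stat, p_val = stats.f_oneway(control, cocl2, sgi)  # omnibus one-way ANOVA
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")          # post hoc tests would follow if p < 0.05
```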
Effect of SGI on ISO-induced MI in rats

ST-segment elevation on an ECG is a crucial indicator for evaluating the degree of MI. ISO causes MI predominantly through oxidative stress, an important indicator of which is ROS (Xue et al., 2021). In this study, compared with the control group, the ST-segment of the ECG (black arrow, Figure 2B) was significantly elevated in the ISO-induced rats (Figure 2C), which is consistent with previous literature (Xue et al., 2021). In addition, as shown in Figure 2D, ROS was overproduced in the model group compared with the control group (p < 0.001). However, the three SGI dose groups exhibited significantly improved ISO-induced ST-segment elevation and ROS production (p < 0.001, Figures 2C, D). These results suggest that SGI produces an obvious antioxidant effect in the treatment of MI.

SGI improves ISO-induced changes in hemodynamic parameters and cardiac function

Echocardiography analysis demonstrated that SGI treatment significantly improved the LV function of MI rats compared with the model group, as evidenced by increased EF(%) and FS(%) (p < 0.001, Figures 3A, C). The MV E vel is used to evaluate LV diastolic function. SGI treatment (0.3, 0.6, and 1.2 ml/100 g) alleviated the ISO-induced change in MV E vel (p < 0.001, Figure 3D). Furthermore, SGI treatment significantly reduced the serum concentrations of cTnT and cTnI (p < 0.001, Figures 3E, F). These results demonstrated that SGI could protect LV systolic and diastolic function against MI.

SGI impairs CoCl2-induced injury in HUVECs

As CoCl2 at 1.4 mmol/L resulted in a 50% decrease in the cell survival rate, this was the concentration selected for the establishment of the HUVECs model (Figure 4A). As shown in Figure 4B, there was no difference in the survival rate of HUVECs between the SGI groups and the GS groups in the concentration range of 4%-12% (v/v). However, SGI at concentrations of 0.5%, 1%, and 2% significantly improved the cell survival rate compared with the CoCl2 group (p < 0.01, Figure 4C). Therefore, SGI concentrations of 0.5%, 1%, and 2% were employed for further study. Meanwhile, morphological observations revealed that SGI treatment could reduce the shrinkage and debris of the cells and the cell damage induced by CoCl2 (Figure 4D). These findings suggest that SGI could impair CoCl2-induced injury in HUVECs.

SGI reduces intracellular ROS production

ROS are an important indicator of oxidative damage and of the early stage of apoptosis (Pang et al., 2020). CLSM images showed that, compared with the control group, there was a significant increase in intracellular ROS under the induction of 1.4 mmol/L CoCl2. Treatment with SGI (0.5%, 1%, and 2%) decreased CoCl2-induced ROS production (Figure 5A).

[Figure 4: SGI inhibits CoCl2-induced injury in HUVECs. (A) HUVECs were treated with different concentrations of CoCl2 for the indicated times and cell viability was measured by MTS assay. (B, C) HUVECs were treated with SGI and GS from 4% to 24% for 24 h and cell viability was measured by MTS assay. (D) Cell morphology was examined by a light microscope. ***p < 0.001 vs. the control group; ##p < 0.01, ###p < 0.001 vs. the CoCl2 group; $p < 0.05, $$$p < 0.001 vs. the GS group.]

Similar results were observed using a fluorescence microplate reader (SGI at 1% and 2%; p < 0.001, Figure 5B). These findings suggest that SGI could exert protective effects against oxidative damage in HUVECs.

SGI attenuates CoCl2-induced MMP disruption in HUVECs

Intracellular ROS accumulation causes MMP collapse (Cui et al., 2018).
CLSM analysis and flow cytometry showed that the MMP of HUVECs was decreased in the CoCl2 group in comparison with the control group (p < 0.001), while treatment with SGI (0.5%, 1%, and 2%) had the opposite effect (p < 0.01, Figures 6A, B).

SGI decreases cytosolic Ca2+ concentrations and G2/M arrest

Multiple studies have reported that ROS contributes to intracellular Ca2+ overload (Yu et al., 2014; Pang et al., 2020). Consistent with these results, ROS overproduction induced by CoCl2 in the current study elevated the level of cytosolic Ca2+. However, after treatment with SGI (0.5%, 1%, and 2%), the intracellular Ca2+ concentration was significantly decreased (p < 0.001, Figure 7A). Moreover, as shown in Figure 7B, CoCl2 induced an increase in G2/M arrest in the cell cycle, which was reversed by treatment with SGI (0.5%, 1%, and 2%) (p < 0.001). These results suggest that SGI could exert an anti-apoptosis effect by inhibiting the influx of Ca2+ and the G2/M arrest induced by CoCl2.

SGI protects HUVECs from CoCl2-induced apoptosis

Vascular endothelial cell apoptosis is one of the causes of vascular endothelial dysfunction. The flow cytometry results showed that the apoptosis rates were significantly increased upon treatment with CoCl2 (p < 0.001), while treatment with SGI (0.5%, 1%, and 2%) significantly decreased the levels of apoptosis (p < 0.001, Figure 8). These results suggest that SGI could rescue HUVECs from CoCl2-induced apoptosis.

Effect of SGI on the expression of apoptosis-associated proteins in CoCl2-induced HUVECs

To further elucidate the mechanism of the anti-apoptosis effects of SGI, the expression of apoptosis-associated proteins was measured. As shown in Figure 9A, after incubation with CoCl2, the phosphorylation level of ERK1/2 was increased (p < 0.01, compare lane 1 to lane 2), the expression of the anti-apoptotic protein Bcl-2 was decreased, and the expression of the pro-apoptotic protein Bax was increased (p < 0.001, compare lane 1 to lane 2). Treatment with SGI (0.5%, 1%, and 2%) increased the Bcl-2/Bax expression ratio (p < 0.001, compare lane 2 to lanes 3, 4, and 5) and decreased the ratio of p-ERK1/2 to ERK1/2 (p < 0.01, compare lane 2 to lanes 3, 4, and 5), compared with the CoCl2 group. Additionally, CoCl2 induced significant increases in the levels of cleaved caspase-3 (p < 0.001, compare lane 1 to lane 2) and cleaved PARP (p < 0.01, compare lane 1 to lane 2), which could lead to apoptosis. However, SGI treatment downregulated the ratios of cleaved caspase-3/caspase-3 (p < 0.001, compare lane 2 to lanes 3, 4, and 5) and cleaved PARP/full-length PARP (p < 0.05, compare lane 2 to lanes 3, 4, and 5), and thus protected HUVECs from CoCl2-induced oxidative injury and apoptosis. Moreover, similar results for cytoplasmic Cyt-c and mitochondrial Bax after SGI treatment were observed by western blotting (p < 0.001, Figures 9B, C). The immunofluorescence assays also showed downregulation of cleaved caspase-3 in the SGI groups (p < 0.001, Figure 9D). These data reveal that SGI protected HUVECs against the apoptosis pathway by maintaining mitochondrial homeostasis and restoring mitochondrial functions.
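The ratios reported in this section are derived from band densitometry. A minimal sketch of that arithmetic is shown below with invented intensity values; normalization to GAPDH follows the Methods, and using COX-IV as the loading control for mitochondrial fractions would be an assumption on our part:

```python
# Hypothetical densitometry arithmetic for the western blot ratios above.
# Band intensities are invented; each band is first normalized to its
# lane's loading control, then expressed as the reported ratios.

def normalize(band: float, loading_control: float) -> float:
    """Normalize a band intensity to its lane's loading control."""
    return band / loading_control

gapdh = 1.00                          # loading control intensity for this lane
bcl2  = normalize(0.35, gapdh)
bax   = normalize(0.80, gapdh)
cleaved_casp3 = normalize(0.62, gapdh)
casp3         = normalize(0.95, gapdh)

print(f"Bcl-2/Bax: {bcl2 / bax:.2f}")                              # anti-/pro-apoptotic balance
print(f"cleaved caspase-3/caspase-3: {cleaved_casp3 / casp3:.2f}")  # apoptosis execution
```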
Further in vitro assays revealed that SGI exerted antioxidative and anti-apoptotic effects by inhibiting ROS production, MMP collapse, and Ca2+ influx, and by regulating the expression of apoptosis-related proteins. Therefore, the underlying mechanism of the protective effects of SGI on MI involves inhibition of oxidative stress and apoptosis (Figure 10).

ISO is a recognized drug for inducing MI in animal models through multiple mechanisms, predominantly oxidative stress and Ca2+ overload (Xue et al., 2021). Our results showed that ISO can induce ST-segment elevation and a marked increase in ROS, consistent with a previous study. However, these indicators were significantly decreased in the SGI groups, indicating that SGI could reduce the myocardial injury induced by ISO. Moreover, cTnI and cTnT testing has become standard practice for the diagnosis and early exclusion of MI (Apple et al., 2017). In this study, ISO induced an increase in the cardiac damage biomarkers cTnT and cTnI, which was significantly suppressed by SGI treatment. LV systolic and diastolic dysfunction during MI has emerged as an important indicator of cardiac function (Zhang et al., 2016; Fu et al., 2020). In echocardiography, EF(%) and FS(%) values were decreased and LV systolic dysfunction was observed in the MI rats. However, SGI treatment improved LV systolic and diastolic function and the changes in hemodynamic parameters. These results demonstrate that SGI improves the recovery of cardiac function after ISO injury.

The vascular endothelium has an important role in the cardiovascular system. The loss of vascular endothelial cell function is a key event in the occurrence and development of vascular diseases (Hou et al., 2015). Vascular endothelial cell dysfunction is predominantly characterized by vascular endothelial cell apoptosis, in which oxidative stress is an important factor. Multiple studies have demonstrated the importance of protecting vascular endothelial cells against hypoxia-induced injury in CVD (Yang et al., 2018; Wu et al., 2019). CoCl2 is a chemical hypoxia-modelling reagent that inhibits the catalysis of prolyl hydroxylase in the cell to cause an intracellular hypoxic state, thus creating a hypoxic environment under normoxic conditions (Chen et al., 2018). The apoptosis induced by CoCl2 is due to the increased production of free radicals (including ROS) mediated by hypoxia. Increased accumulation of ROS may lead to Ca2+ influx. Meanwhile, the increase in intracellular Ca2+ concentration also induces additional ROS formation, which leads to mitochondrial dysfunction and further aggravates vascular endothelial cell apoptosis (Yu et al., 2014). Therefore, the balance between the production of ROS and the elimination of excess ROS is essential to maintain the redox state and homeostasis in cells. Our results show that the level of ROS and the concentration of Ca2+ increased in CoCl2-exposed HUVECs and that SGI treatment protected against hypoxia-induced injury.

There is accumulating evidence that excessive ROS induces apoptosis by phosphorylating and activating the ERK1/2 pathway (Ding Y. et al., 2020). Apoptosis regulation in HUVECs involves phosphorylation of the ERK signaling pathway (Mi et al., 2019). Multiple studies have shown that oxidative stress can lead to the activation of ERK1/2, causing apoptosis (Ding Y. et al., 2020; Huang et al., 2020).
In our study, CoCl2 increased the phosphorylation level of ERK1/2 and the apoptosis rate of HUVECs. An imbalance between the anti-apoptotic protein Bcl-2 and the pro-apoptotic protein Bax (Li J. et al., 2019) can lead to opening of the mitochondrial permeability transition pore (mPTP) and consequent release of cytochrome c (Cyt-c), which then activates the apoptosis effector caspase-3 (Pang et al., 2020). Our results demonstrated that the pro-apoptotic protein Bax, cytoplasmic Cyt-c, and cleaved caspase-3 were significantly increased in HUVECs induced by CoCl2, while expression of the anti-apoptotic protein Bcl-2 was decreased, which collectively indicated that the mitochondrial apoptosis pathway was activated. SGI treatment was able to downregulate the expression of Bax, cytoplasmic Cyt-c, and cleaved caspase-3 and upregulate the expression of Bcl-2. Caspase-3 is the most critical apoptosis execution protein and acts on a variety of substrates. Cleaved caspase-3 can further act on PARP (Back et al., 2015). Full-length 116-kDa PARP is cleaved by cleaved caspase-3 into two fragments, a 24-kDa DNA-binding fragment and an 89-kDa catalytic fragment, resulting in apoptosis (Wei et al., 2012). The G2/M transition in the cell cycle is a major checkpoint that prevents cells from entering mitosis with damaged DNA, thereby maintaining the genomic integrity of offspring. Our results indicated that SGI could relieve the G2/M arrest induced by CoCl2 to inhibit apoptosis. SGI could also significantly reduce the cleavage of PARP to inhibit apoptosis.

There is often a considerable gap between the antioxidant effects observed in preclinical trials and those achieved in clinical trials, but preclinical studies remain important (Qin et al., 2009). As a clinically marketed drug, SGI has shown curative effects in clinical practice. Oxidative damage is an important pathophysiological basis for the occurrence and development of MI (Zhang M. et al., 2022). In this study, we found that the effects of SGI against MI occur through inhibition of the oxidative stress and apoptosis pathways. The study provides a scientific basis for the use of SGI in the prevention and treatment of CVD. However, the study does have several limitations. Firstly, the main components of SGI responsible for its effects against the oxidative stress and apoptosis pathways, and the ROS-scavenging mechanism of SGI, have not yet been elucidated and should be investigated in future studies. Moreover, the antioxidative and anti-apoptotic effects of SGI need to be confirmed in subsequent clinical studies.

Conclusion
SGI is a common medicine used in the clinical treatment of MI in China. The mechanism of SGI may be related to antioxidative and anti-apoptotic activity, mediated by reducing ROS production and regulating the intrinsic mitochondrial-mediated apoptosis pathways. The study elaborates on the protective effects of SGI against MI and provides a scientific basis for the action of SGI in MI. Furthermore, the study highlights a method and strategy for further investigations on the mechanism(s) of other TCM formulas.

Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.

Ethics statement
The animal study was reviewed and approved by the Animal Care Welfare Committee of Guizhou Medical University.
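The ratio-based readouts reported above (p-ERK1/2 to ERK1/2, Bcl-2/Bax, cleaved caspase-3 to caspase-3, cleaved PARP to full-length PARP) are typically derived from band densitometry normalized to a loading control. The excerpt does not show the authors' quantification pipeline, so the sketch below is only a minimal illustration of that kind of calculation; the lane intensities and the use of GAPDH as the loading control are hypothetical.

```python
import numpy as np

# Hypothetical densitometry readings (arbitrary units) for one blot:
# lanes 1-5 correspond to control, CoCl2, and CoCl2 + SGI (.5%, 1%, 2%).
# GAPDH as a loading control is an assumption, not taken from the paper.
bands = {
    "GAPDH":    np.array([100.0, 98.0, 102.0, 99.0, 101.0]),
    "p-ERK1/2": np.array([20.0, 55.0, 40.0, 33.0, 28.0]),
    "ERK1/2":   np.array([60.0, 58.0, 61.0, 59.0, 60.0]),
    "Bcl-2":    np.array([50.0, 22.0, 30.0, 38.0, 45.0]),
    "Bax":      np.array([25.0, 52.0, 44.0, 36.0, 30.0]),
}

def normalize(protein: str) -> np.ndarray:
    """Normalize band intensities to the loading control, lane by lane."""
    return bands[protein] / bands["GAPDH"]

# Ratios of the kind reported for Figure 9A:
p_erk_ratio = normalize("p-ERK1/2") / normalize("ERK1/2")
bcl2_bax_ratio = normalize("Bcl-2") / normalize("Bax")

for lane, (pe, bb) in enumerate(zip(p_erk_ratio, bcl2_bax_ratio), start=1):
    print(f"lane {lane}: p-ERK1/2 / ERK1/2 = {pe:.2f}, Bcl-2/Bax = {bb:.2f}")
```

With values like these, the CoCl2 lane shows a raised p-ERK1/2 ratio and a collapsed Bcl-2/Bax ratio, and the SGI lanes move both back toward the control, mirroring the direction of the changes described in the text.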
2023-01-07T16:26:46.084Z
2023-01-05T00:00:00.000
{ "year": 2022, "sha1": "09aac66bf25690ef5fb6d37049fded25f779316e", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "09aac66bf25690ef5fb6d37049fded25f779316e", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
260620925
pes2o/s2orc
v3-fos-license
Intraspecific divergence in essential oil content, composition and gene expression patterns of monoterpene synthesis in Origanum vulgare subsp. vulgare and subsp. gracile under salinity stress

Background: Oregano (Origanum vulgare L.), one of the important medicinal plants in the world, has valuable pharmacological compounds with antimicrobial, antiviral, antioxidant, anti-inflammatory, antispasmodic, antiurolithic, antiproliferative and neuroprotective activities. Phenolic monoterpenes such as thymol and carvacrol, of great medical importance, are found in oregano essential oil. The biosynthesis of these compounds is carried out through the methylerythritol-4-phosphate (MEP) pathway. Environmental stresses such as salinity might improve the secondary metabolite content of medicinal plants. The influence of salinity stress (0 (control), 25, 50 and 100 mM NaCl) on the essential oil content, composition and expression of the 1-deoxy-D-xylulose-5-phosphate reductoisomerase (DXR), γ-terpinene synthase (Ovtps2) and cytochrome P450 monooxygenase (CYP71D180) genes involved in thymol and carvacrol biosynthesis was investigated in two oregano subspecies (vulgare and gracile).

Results: Essential oil content increased at a low NaCl concentration (25 mM) compared with non-stress conditions, whereas it decreased as salinity stress intensified (50 and 100 mM). Essential oil content was significantly higher in subsp. gracile than in subsp. vulgare. The highest (0.20 mL pot−1) and lowest (0.06 mL pot−1) essential oil yields were obtained in subsp. gracile at 25 and 100 mM NaCl, respectively. The content of carvacrol, as the main component of the essential oil, decreased with increasing salinity level in subsp. gracile, but increased in subsp. vulgare. The highest expression of the DXR, Ovtps2 and CYP71D180 genes was observed at 50 mM NaCl in subsp. vulgare, while in subsp. gracile the expression of these genes decreased with increasing salinity levels. A positive correlation was obtained between the expression of the DXR and Ovtps2 genes and carvacrol content in both subspecies. On the other hand, a negative correlation was found between the expression of CYP71D180 and carvacrol content in subsp. gracile.

Conclusions: The findings of this study demonstrate that both oregano subspecies can tolerate NaCl salinity up to 50 mM without a significant reduction in essential oil yield. Also, moderate salinity stress (50 mM NaCl) in subsp. vulgare might increase the carvacrol content, partly via increased expression levels of the DXR, Ovtps2 and CYP71D180 genes.

About 20% of the world's land, as well as about half of the arable irrigated land in the world, is affected by salinity [33]. As a worldwide issue, soil salinization restricts agricultural production due to its adverse effect on plant growth and production [34,35].

Fig. 1 Proposed pathway for carvacrol and thymol biosynthesis in oregano and thyme in plastids (Crocoll, 2011). DXR: 1-deoxy-D-xylulose-5-phosphate reductoisomerase; Ovtps2: γ-terpinene synthase; CYP71D180: cytochrome P450 monooxygenase.

Soil salinity reduces soil
water potential, leaf water potential and the turgor pressure of plant cells, consequently inducing osmotic stress [36]. High accumulation of ions (Na+ and Cl−) under saline conditions prevents K+ and Ca2+ uptake and leads to ion imbalance [37]. Salinity increases reactive oxygen species (ROS) in plant cells [38], which causes lipid peroxidation, membrane degradation, and DNA and protein damage [39]. To deal with saline conditions, plants use various strategies such as ionic homeostasis and partitioning, ion transport, osmotic adjustment, the antioxidant defense system, and polyamine biosynthesis [40]. Furthermore, plant secondary metabolites notably improve plant growth and survival under biotic and abiotic stresses [41,42], and their biosynthesis and accumulation are influenced by environmental stresses such as salinity [43]. It has been demonstrated that environmental stresses might change both the quality and quantity of plant secondary metabolites by influencing the expression of the genes involved in their biosynthesis [44]. Studies have shown that soil salinity changes essential oil biosynthesis and composition in several plant species such as Salvia officinalis [45], Satureja hortensis [46], and Melissa officinalis [47].

To our knowledge, the effect of salinity stress on the content of terpenes and the expression of their biosynthetic genes has not yet been evaluated in O. vulgare. Due to the presence of valuable compounds in the essential oil of O. vulgare, studying the expression of the genes involved in their biosynthesis, and its association with the accumulation of these compounds under salinity conditions, may be of great interest for the pharmaceutical and industrial market. Hence, for the first time, the expression of the genes involved in the biosynthesis of the valuable secondary metabolites carvacrol and thymol was compared in two oregano subspecies (gracile and vulgare) under various salinity levels. Moreover, the association between gene expression levels and their corresponding compounds, as well as changes in essential oil content, oil yield and composition, were also studied under salinity conditions.

Essential oil content and yield
Essential oil content was significantly affected by salinity treatments and subspecies. According to the results, essential oil content increased at a low NaCl concentration (25 mM) compared with non-stress conditions, whereas it decreased as salinity stress intensified (50 and 100 mM). Briefly, essential oil content was significantly higher in subsp. gracile than in subsp. vulgare (Fig. 2). Essential oil yield was significantly influenced by salinity treatments, subspecies and their interaction. In the vulgare subspecies, essential oil yield decreased with increasing salinity, but the difference between 0, 25 and 50 mM NaCl was not significant. In the gracile subspecies, essential oil yield increased with increasing salinity up to 25 mM and then decreased at higher salinity levels; again, the difference between 0, 25 and 50 mM NaCl was not significant (Fig. 3). A positive relationship was found between essential oil content and yield in both subspecies (Fig. 6a, b).

Chemical composition of essential oil
The alterations of essential oil compounds in O. vulgare under salinity stress are presented in Table 1.
According to the results of the GC-MS analysis, the total numbers of volatile compounds detected in the gracile and vulgare subspecies were 23 and 27, respectively. The dominant constituents of the essential oils were carvacrol, carvacrol methyl ether, γ-terpinene, thymol, cis-α-bisabolene and p-cymene in both subspecies. The results revealed a different impact of salinity on the chemical composition of the essential oil in the two subspecies. The highest percentages of carvacrol (60% and 47.36%) were recorded under non-stress conditions and at 50 mM NaCl in the gracile and vulgare subspecies, respectively. Although in the gracile subspecies the percentage of carvacrol decreased with the application of salinity stress, no significant difference was observed between salinity treatments for this compound. Conversely, in the vulgare subspecies, the percentage of carvacrol rose with increasing salinity levels. Although the trend of thymol changes in the two subspecies did not follow a distinct pattern, in both subspecies the amount of thymol in the non-stress treatment was higher than in the salinity treatments. The findings of this research demonstrated that p-cymene was significantly increased in both subspecies with increasing salinity stress; however, no significant differences were found between the 25, 50 and 100 mM salinity treatments in the vulgare subspecies. In both subspecies, the amount of γ-terpinene increased up to 25 mM NaCl and then decreased with increasing salinity. Furthermore, the trends of change in carvacrol methyl ether and cis-α-bisabolene did not follow a specific pattern; however, the vulgare subspecies had higher contents of carvacrol methyl ether and cis-α-bisabolene under all salinity treatments (Fig. 4).

According to the results, monoterpenes were the main group of identified components in both subspecies. The essential oil of subsp. gracile contained 93.65%, 93.18%, 93.02% and 94.6% monoterpenes at the different levels of salinity, respectively. Oxygenated monoterpenes had the highest percentage within the monoterpene subclass; of these, carvacrol, carvacrol methyl ether and thymol were the major components. 1,8-Cineole, an oxygenated monoterpene, was detected only at 100 mM NaCl in subsp. gracile. Monoterpene hydrocarbons were the second subclass of the monoterpenes, among which γ-terpinene and p-cymene were identified as the dominant components. Sesquiterpene hydrocarbons were the next subclass of compounds found in subsp. gracile oil, reaching their highest percentage (4.18%) at 50 mM NaCl, with cis-α-bisabolene identified as the major component. In addition, oxygenated sesquiterpenes were not detected in subsp. gracile. In contrast, the oil of subsp. vulgare contained 89.19%, 91.53%, 92.23% and 91.57% monoterpenes and 6.05%, 4.91%, 4.78% and 5.76% sesquiterpenes at the different levels of salinity, respectively. Oxygenated monoterpenes were the most dominant subclass of compounds in subsp. vulgare; of these, carvacrol, carvacrol methyl ether and thymol were the major components. Furthermore, the highest percentage of oxygenated monoterpenes (61.39%) was found at 50 mM NaCl. The highest level of monoterpene hydrocarbons (35.07%), the second subclass of compounds, was observed at 25 mM NaCl, of which γ-terpinene and p-cymene were identified as the predominant components. Sesquiterpene hydrocarbons were another dominant subclass of compounds, with cis-α-bisabolene as the major component. Spathulenol and caryophyllene oxide were the only oxygenated sesquiterpenes identified, in the non-stress treatment in subsp. vulgare. Moreover, α-phellandrene was detected at 25
mM NaCl in subsp. gracile, whereas in subsp. vulgare it was absent only at 50 mM NaCl. Sabinene, a monoterpene hydrocarbon, was not identified in the gracile subspecies but was found in the non-stress treatment and at 25 mM NaCl in the vulgare subspecies (Table 1). Correlation analysis showed a negative relationship between γ-terpinene and p-cymene in subsp. vulgare. Also in this subspecies, negative relationships were obtained between γ-terpinene and thymol with carvacrol, whereas the correlation between p-cymene and carvacrol was positive. In addition, the correlation of γ-terpinene and p-cymene with thymol was negative (Fig. 6a). In contrast, in subsp. gracile, a negative correlation was observed between γ-terpinene and p-cymene. Also, a negative relationship was obtained between p-cymene and carvacrol, as well as between γ-terpinene and thymol. Furthermore, positive relationships were observed between carvacrol and thymol, and between γ-terpinene and carvacrol (Fig. 6b).

Gene expression levels
To partly unravel the molecular mechanism by which salinity stress alters the essential oil content in the two studied oregano subspecies, the expression levels of the DXR, Ovtps2 and CYP71D180 genes were investigated under various salinity levels in these subspecies for the first time. The expression levels of the studied genes were significantly affected by salinity treatments, subspecies and their interaction. The highest DXR expression was observed at 50 mM NaCl in the vulgare subspecies, while the lowest expression was observed in the gracile subspecies under salinity stress. Furthermore, the highest expression of Ovtps2 was observed at 50 mM NaCl in the vulgare subspecies; however, in the gracile subspecies the relative expression of this gene decreased with increasing salinity. Similar to the DXR gene, the highest relative expression of CYP71D180 was obtained at 50 mM salinity in the vulgare subspecies, whereas in the gracile subspecies the expression of this gene decreased with increasing salinity up to 50 mM and then increased at 100 mM salinity (Fig. 5).

A positive relationship was observed between the expression of the DXR, Ovtps2 and CYP71D180 genes and carvacrol in the vulgare subspecies, while the correlation of these genes with thymol content was negative. Also, a negative correlation was found between the relative expression of the Ovtps2 gene and γ-terpinene, while the correlation of this gene with p-cymene was positive in this subspecies (Fig. 6a). In contrast, in the gracile subspecies, a positive correlation was obtained between the relative expression of Ovtps2 and γ-terpinene, whereas the correlation of this gene with p-cymene was negative. There was a negative relationship between the expression of CYP71D180 and carvacrol, while a positive correlation was obtained between DXR and Ovtps2 gene expression and carvacrol content. Also, in the gracile subspecies, the correlation between the three studied genes and thymol content was positive (Fig. 6b).

Discussion
To deal with salinity, plants adjust their growth and development along with a reorganization of primary and secondary metabolism [48]. The results of several investigations demonstrate that the biosynthesis of secondary metabolites in medicinal plants is strongly affected by environmental factors [23, 49-51].

Fig. 4 Effect of salinity × Origanum vulgare subspecies (subsp. vulgare and subsp. gracile) on γ-terpinene, p-cymene, carvacrol, thymol, carvacrol methyl ether and cis-α-bisabolene content in the essential oil. Columns with different letters have significant differences (p < 0.05).

Furthermore, the difference between the content and
composition of the essential oil in medicinal plants depends on various factors such as cultivar, genetics and environmental conditions [52]. However, studies have shown that these changes may be caused by differential expression of the enzymes involved in the production of these compounds under salinity conditions [53,54]. In this investigation, essential oil content was influenced by salinity stress and subspecies. The highest percentage of essential oil was achieved in subsp. gracile at 25 mM salinity; however, the essential oil content decreased under 50 and 100 mM NaCl stress. Under moderate salinity stress, the stimulation of essential oil production can be due to a higher density of essential oil glands [55]. Moreover, the increase in essential oil content in plants may be due to a salinity-induced reduction of primary metabolites and the improved availability of intermediary products for secondary metabolite synthesis [54,55]. According to previous studies, essential oil content increased with the intensity of salinity in Salvia officinalis [56] and Ocimum basilicum [55], whereas it decreased with increasing salinity in O. majorana [57] and Mentha piperita [58]. Moreover, the highest essential oil yield was observed at a low salinity level in subsp. gracile, while the essential oil yield in subsp. vulgare decreased with increasing salinity. Similarly, high salinity levels led to a decline in essential oil yield in some plant species such as Trachyspermum ammi [59] and Matricaria sp. [60].

The chemical composition of O. vulgare essential oil has been studied in several works [9,15,25,61], and there is high variety in the essential oil composition of this plant. The main components of the essential oil in O. vulgare are thymol, carvacrol, γ-terpinene, p-cymene, β-myrcene and β-bisabolene [2,9,13]. In this study, the main components under all salinity treatments and in both subspecies were carvacrol, γ-terpinene, p-cymene, thymol, carvacrol methyl ether and cis-α-bisabolene. It can be considered that the accumulation of some main compounds, as a defense mechanism in medicinal plants, adapts them to stress conditions by inducing changes in cellular metabolism [62]. Salinity stress can affect the essential oil composition of plants depending on its severity. In previous reports, the percentage of main compounds increased with the severity of NaCl stress in comparison with non-stress conditions in Salvia officinalis [48,53,63], S. mirzayanii [64] and Ocimum basilicum [55].

Monoterpenes in plants have high commercial value and can be used in the perfume, anti-cancer and pesticide industries [65]. The two dominant components of oregano essential oil are the phenolic monoterpenes thymol and carvacrol, which are well known for their antiherbivore, antimicrobial, medicinal and antioxidant activities [25]. In this study, the expression of an early gene (DXR), an intermediate gene (Ovtps2) and a late gene (CYP71D180) in the MEP pathway, involved in thymol and carvacrol biosynthesis [30,31], was evaluated, and the results showed that salinity stress significantly affected their expression. This might be due to the role of terpenes in defense pathways and signal transduction in oregano.
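For orientation, the route sketched in Fig. 1 and discussed here can be written down compactly. A minimal representation is given below; the intermediate MEP-pathway steps between DXR and geranyl diphosphate are omitted, and the p-cymene step is marked as proposed, since, as noted further on, the enzymes forming the aromatic ring are still unknown.

```python
# Compact representation of the proposed thymol/carvacrol route (after Fig. 1).
PROPOSED_ROUTE = [
    # (substrate, product, enzyme/gene)
    ("1-deoxy-D-xylulose-5-phosphate", "methylerythritol-4-phosphate", "DXR"),
    ("geranyl diphosphate", "gamma-terpinene", "Ovtps2 (gamma-terpinene synthase)"),
    ("gamma-terpinene", "p-cymene", "unknown (proposed intermediate step)"),
    ("p-cymene", "thymol / carvacrol", "CYP71D180 and other CYP450 homologues"),
]

for substrate, product, enzyme in PROPOSED_ROUTE:
    print(f"{substrate} --[{enzyme}]--> {product}")
```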
Based on the results, a positive correlation was obtained between the expression of DXR and that of Ovtps2 and CYP71D180 in both subspecies. According to previous studies, γ-terpinene and p-cymene are the main precursors of thymol and carvacrol in oregano and thyme, and γ-terpinene is synthesized by the γ-terpinene synthase enzyme from geranyl diphosphate [25,29]. The results of Ovtps2 gene expression in subsp. vulgare indicated that salt stress increased the expression of this gene compared with the control. Furthermore, Ovtps2, as an intermediate gene in the pathway of thymol and carvacrol biosynthesis, was more affected than the DXR and CYP71D180 genes at all salinity levels. In oregano, the contents of thymol and carvacrol in leaves are related to the expression of Ovtps2 [25]. In this study, the relative expression of Ovtps2 was increased in subsp. vulgare at 50 mM salinity, while the percentage of γ-terpinene (a precursor of thymol and carvacrol) decreased; in other words, a negative correlation was found between the expression of Ovtps2 and γ-terpinene at this salinity level. The lack of congruence between the transcriptional levels of the genes and their corresponding compounds (lower gene expression but more compound production) may be due to the effect of stress on enzymatic activity or to changes in transcription and post-translational processes [50,66]. Post-translational modification of proteins is a very important factor in regulating the plant response to stress conditions [67] and can regulate protein function, location, half-life and protein interactions to reduce the potential damage caused by environmental stresses [68]. However, the activity of the enzymes under salinity stress was not studied in this investigation. Also, the highest expression of the studied genes and the highest carvacrol content were observed in the vulgare subspecies under 50 mM salt stress. Gene expression levels may vary depending on the stress and the plant species [69]. The higher expression of these genes in response to moderate salinity stress may reflect the elevation of phenolic monoterpenes such as carvacrol. Similarly, higher expression of biosynthesis genes in response to abiotic elicitors has been associated with increases in the corresponding metabolites in plants such as Tanacetum parthenium (L.) Sch. Bip. [70] and Nigella sativa L. [71]. On the contrary, despite the higher expression of the DXR, Ovtps2 and CYP71D180 genes at 50 mM salinity, the content of thymol decreased. In previous studies, high transcription levels and high carvacrol production in thyme and oregano were correlated with the genes encoding CYP71D180 and CYP71D181 [72]. Therefore, the reduction of thymol can be attributed to CYP71D. In the present study, severe salinity stress reduced the expression of CYP71D180 in subsp. vulgare, which was consistent with the trend of carvacrol changes. It can be concluded that salinity stress probably reduces the amount of carvacrol in the vulgare subspecies by reducing the expression of CYP71D180. However, in plants treated with severe salinity concentrations, despite a decline in the expression of CYP71D180, the biosynthesis of thymol (a carvacrol isomer) increased, indicating that other CYP450 homologues are likely involved in increasing thymol production. Of note, 11 CYP450 gene sequences have been isolated from oregano and thyme by Crocoll et al.
[25]. Previous studies have shown that there is a significant relationship between the activity of CYP450 family enzymes and the production of monoterpenes such as carvacrol and thymol in oregano [30,31]. In the formation of thymol and carvacrol from γ-terpinene, the aromatic hydrocarbon p-cymene has been proposed as an intermediate [73]; however, its participation and the nature of the enzymes involved in the formation of the aromatic ring are still unknown [72]. In this study, the trend of Ovtps2 changes was consistent with that of p-cymene at the different salinity levels.

Also, a positive relationship was observed between the relative expression of the studied genes and carvacrol in subsp. vulgare and, inversely, a negative relationship was obtained with thymol production. In addition, a negative relationship was observed between γ-terpinene and carvacrol in the vulgare subspecies. Similarly, Morshedloo et al. [31] reported a negative correlation between γ-terpinene and carvacrol in O. vulgare subsp. gracile under drought stress. A negative relationship between carvacrol and thymol was also found in this subspecies. Hence, it can be concluded that γ-terpinene is a precursor of carvacrol; on the other hand, carvacrol is an isomer of thymol, and the two can be converted into each other. In the gracile subspecies, a positive correlation was observed between the relative expression of DXR and Ovtps2 and carvacrol, but the correlation between CYP71D180 and carvacrol was negative. Presumably, the negative association between carvacrol and CYP71D180 may be due to the role of other enzymes of the cytochrome family in this pathway [31]. It has been demonstrated that Ovtps2, as the main terpene synthase, produces half of the total terpene content [30]. In this study, a direct relationship was obtained between the expression of Ovtps2 and thymol synthesis in subsp. gracile. The findings of this investigation are in line with Crocoll et al. [30], who reported a positive correlation between Ovtps2 gene expression and γ-terpinene and thymol production in O. vulgare.
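The correlation statements above come from Pearson's coefficient (see the Statistical analysis section below). A minimal sketch of how such a gene-compound correlation is computed is shown here; the expression and compound values are hypothetical placeholders rather than the study's data, which are not reproduced in this excerpt.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-sample values across salinity treatments (placeholders only):
# relative Ovtps2 expression and carvacrol percentage for eight samples.
ovtps2_expression = np.array([1.0, 1.4, 2.1, 1.2, 0.9, 1.6, 2.3, 1.1])
carvacrol_percent = np.array([42.0, 45.5, 49.0, 44.0, 41.0, 46.0, 50.5, 43.5])

# Pearson's r quantifies the linear association between the two variables.
r, p = pearsonr(ovtps2_expression, carvacrol_percent)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```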
Conclusions
Overall, the results revealed that the essential oil content increased up to 25 mM NaCl and then decreased. Also, the gracile subspecies had a higher essential oil content than the vulgare subspecies. No significant difference was found between the NaCl treatments (0, 25 and 50 mM) in terms of essential oil yield in either subspecies. Carvacrol, the main component of the essential oil, decreased with increasing salinity levels in subsp. gracile but increased in subsp. vulgare. The highest expression of the DXR, Ovtps2 and CYP71D180 genes was observed at 50 mM NaCl in subsp. vulgare. A positive relationship was observed between the expression of DXR, Ovtps2 and CYP71D180 and carvacrol content in subsp. vulgare, and between the expression of DXR and Ovtps2 and carvacrol content in subsp. gracile, while a negative association was observed between the expression of DXR, Ovtps2 and CYP71D180 and thymol content in subsp. vulgare. In contrast, the correlation of DXR, Ovtps2 and CYP71D180 with thymol content in subsp. gracile was positive. Therefore, given the pharmacological properties of carvacrol and its economic value in the food and cosmetics industries, it can be suggested that future studies enhance its production by increasing the expression of the DXR, Ovtps2 and CYP71D180 genes under controlled conditions. Also, studying the expression of salt-inducible genes/transporters in both subspecies and their relationship with the genes involved in the MEP pathway under salinity conditions, as well as transcriptome analysis using RNA-seq in both subspecies, might lead to a comprehensive view of the MEP pathway in the studied subspecies and genetically close genera.

Plant material and growing conditions
Seeds of two subspecies of oregano (O. vulgare subsp. vulgare and O. vulgare subsp. gracile) were obtained from the collection of medicinal plants in the Department of Horticultural Science, Urmia University (West Azerbaijan province, Iran). The plant samples were identified by Hossien Maroofi (Research Center of Agriculture and Natural Resources of Kurdistan, Sanandaj, Iran). Voucher specimens were deposited at the herbarium of the Department of Horticultural Science, Faculty of Agriculture, Urmia University, Iran. The experiment was performed as a factorial in a completely randomized design (CRD) with three replications during 2019-2020. Seeds of the two subspecies were planted in plastic pots in the research greenhouse of Urmia University. Each pot (diameter: 25 cm; height: 30 cm) was filled with a 3:2 ratio of soil and sand. The physical and chemical characteristics of the soil used in the pots were: pH 8.02, EC 1.27 dS m−1, organic matter 0.62%, total nitrogen 0.12%, available P 9.45 ppm, exchangeable K 0.46 meq/100 g soil, and a sandy loam texture. The greenhouse temperature was in the range of 20 ± 2 to 28 ± 2°C with 50-60% relative humidity under natural sunlight. After seed germination, the seedlings were thinned and finally 7 plants were kept in each pot. The plants were irrigated evenly with ordinary water until reaching the 6-8 leaf stage. After this stage, they were subjected to salinity stress for 45 days (until the flowering stage). The salinity treatments comprised four levels of saline irrigation (0, 25, 50 and 100 mM NaCl). To avoid sudden shock from salinity stress, the salinity treatments were gradually brought to the final concentration over three irrigation stages. At the full flowering stage, 10 fully developed leaves were harvested from each treatment and transferred to a -70 °C freezer to evaluate the
relative expression of the DXR, Ovtps2 and CYP71D180 genes. Then, the aerial parts of the plants were cut 10 cm above the soil for essential oil extraction and analysis.

Essential oil extraction
The aerial parts of oregano were shade dried, and plant material (20 g) was then subjected to hydro-distillation (Clevenger apparatus, 2.5 h) for essential oil extraction. The essential oil content was expressed as volume per dry weight percentage (% v/w). The collected essential oils were dehydrated over anhydrous sodium sulfate and stored in dark sealed vials at low temperature (4°C) until analysis.

GC-MS analysis of plant volatiles
Gas chromatography/mass spectrometry (GC-MS) was used for analysis of the essential oil components. An Agilent 7890 gas chromatograph paired with a 5975A mass spectrometer, equipped with an HP-5 MS capillary column (5% phenyl methylpolysiloxane; 30 m length, 0.25 mm i.d., 0.25 μm film thickness) (Agilent Technologies, Wilmington, DE, USA), was used for the GC-MS analysis. The oven temperature program was held for 3 min at 80°C, then raised at 10°C min−1 to 200°C and kept for 15 min at 200°C. The temperatures applied to the injector, transfer line and ion source were 240°C, 280°C and 230°C, respectively. The carrier gas was helium (flow rate of 1 mL min−1), with an electron impact (EI) energy of 70 eV. The injector was set in split mode (split ratio of 1:50) and the injection volume was 1.0 μL. Mass spectra were scanned in the range of 40-500 amu. The constituents of the essential oil were identified by comparing the calculated linear retention indices and mass spectra with those reported in the NIST 05 and Wiley 07 libraries.

RNA isolation and cDNA synthesis
Total RNA of O. vulgare leaves was extracted using the RNX-Plus kit according to the manufacturer's instructions (Sinaclon, Iran). After evaluating the quality and quantity of the RNA using 1% agarose gel electrophoresis and a NanoDrop ND-1000, cDNA was synthesized using the RevertAid First Strand cDNA Synthesis Kit according to the manufacturer's instructions (Thermo Fisher Scientific, USA). Negative control reactions using reverse transcriptase minus (-RT) and non-template control (NTC) reactions were performed to check for genomic DNA contamination and reagent contamination, respectively.

Real time PCR reactions
The relative expression of the genes was investigated using real time PCR (Rotor-Gene Q Pure Detection, Qiagen) in the treated plants compared with the control. Gene-specific primer pairs were selected from previous studies [25,31]. Real time PCR reactions were carried out with three biological replicates in a final volume of 12.5 μL using the Maxima SYBR Green/ROX qPCR Master Mix kit (Thermo Fisher Scientific, USA), according to the manufacturer's instructions. Initial activation of the enzyme was done at 95°C for 10 min in one cycle, followed by 40 cycles of denaturation at 95°C for 10 s, annealing at 58-60°C for 15 s and fluorescence data collection at 72°C for 20 s. The actin gene was used as the reference gene to normalize the data. The relative expression of the studied genes was calculated from the obtained Ct values by the ΔΔCt method [74].
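The ΔΔCt calculation referenced above can be written out explicitly. The sketch below assumes the common 2^-ΔΔCt (Livak) form with actin as the reference gene, as stated in the methods; the Ct values are hypothetical.

```python
def relative_expression(ct_target, ct_actin, ct_target_ctrl, ct_actin_ctrl):
    """2^-ddCt relative expression of a target gene versus the control
    condition, normalized to the actin reference gene."""
    d_ct_treated = ct_target - ct_actin            # normalize treated sample
    d_ct_control = ct_target_ctrl - ct_actin_ctrl  # normalize control sample
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical mean Ct values for DXR under 50 mM NaCl vs. the control:
fold = relative_expression(ct_target=24.1, ct_actin=18.0,
                           ct_target_ctrl=25.6, ct_actin_ctrl=18.2)
print(f"DXR relative expression (50 mM vs. control): {fold:.2f}-fold")
```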
Statistical analysis
The experiment was performed as a factorial experiment in a CRD with three replications. The data obtained were subjected to analysis of variance (ANOVA) followed by comparison of the means using Duncan's multiple range test at the p < 0.05 level in SAS 9.2 software. The relationships between the main constituents of the essential oil and gene expression levels were estimated using Pearson's correlation coefficient in R software.

Fig. 2 Effect of salinity stress and Origanum vulgare subspecies (subsp. vulgare and subsp. gracile) on essential oil content (%). Columns with different letters have significant differences (p < 0.05).

Table 1 Essential oil components in O. vulgare subsp. vulgare and O. vulgare subsp. gracile under salinity stress.
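As an aside on the GC-MS identification step described in the methods above, linear retention indices under a temperature program are conventionally computed with the van den Dool-Kratz formula against a homologous series of n-alkanes. A minimal sketch follows; the retention times are hypothetical, and the exact alkane series used by the authors is not given in this excerpt.

```python
def linear_retention_index(t_x: float, t_n: float, t_n1: float, n: int) -> float:
    """Linear retention index (van den Dool & Kratz) for temperature-programmed
    GC: the analyte elutes between the n-alkanes with n and n+1 carbons."""
    return 100.0 * (n + (t_x - t_n) / (t_n1 - t_n))

# Hypothetical retention times (min): analyte between C10 (10.20) and C11 (12.40).
lri = linear_retention_index(t_x=11.05, t_n=10.20, t_n1=12.40, n=10)
print(f"LRI = {lri:.0f}")  # ~1039; compared against library values to identify a peak
```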
2023-08-07T13:39:41.871Z
2023-08-07T00:00:00.000
{ "year": 2023, "sha1": "aa9f3a8b9e7f4f1d8c0616d0fe1b1774f172f7a3", "oa_license": "CCBY", "oa_url": "https://bmcplantbiol.biomedcentral.com/counter/pdf/10.1186/s12870-023-04387-5", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d47e9958408497728a6b98498b1cc28d975fb668", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
14257643
pes2o/s2orc
v3-fos-license
Diagnostic value of endobronchial and endoscopic ultrasound-guided fine needle aspiration for accessible lung cancer lesions after non-diagnostic conventional techniques: a prospective study

Background: Lung cancer diagnosis is usually achieved through a set of bronchoscopic techniques or computed tomography-guided transthoracic needle aspiration (CT-TTNA). However, these procedures have a variable diagnostic yield and some patients remain without a definite diagnosis despite being submitted to an extensive workup. The aim of this study was to evaluate the efficacy and cost of linear endobronchial (EBUS) and endoscopic ultrasound (EUS) guided fine needle aspiration (FNA), performed with one echoendoscope, for the diagnosis of suspicious lung cancer lesions after failure of conventional procedures.

Methods: One hundred and twenty-three patients with an undiagnosed but suspected malignant lung lesion (paratracheal, parabronchial, paraesophageal) or with a peripheral lesion and positron emission tomography (PET) positive mediastinal lymph nodes, who had undergone at least one diagnostic flexible bronchoscopy or CT-TTNA attempt, were submitted to EBUS and EUS-FNA. Patients with endobronchial lesions were excluded.

Results: Of the 123 patients, 88 had a pulmonary nodule/mass and 35 were selected based on mediastinal PET positive lymph nodes. Two patients were excluded because an endobronchial mass was detected at the time of the procedure. The target lesion could be visualized in 121 cases and FNA was performed in 118 cases. A definitive diagnosis was obtained in 106 cases (87.6%). Eighty-eight patients (72.7%) had non-small cell lung cancer, 15 (12.4%) had small cell lung cancer, and metastatic disease was found in 3 patients (2.5%). The remaining 15 negative cases were subsequently diagnosed by surgical procedures: 12 patients (9.9%) had a malignant tumor and in 3 (2.5%) a benign lesion was found. The overall sensitivity, specificity, positive and negative predictive values of EBUS and EUS-FNA to diagnose malignancy were 89.8%, 100%, 100% and 20.0%, respectively. The diagnostic accuracy was 90.1% in a population with a 97.5% prevalence of cancer. The ultrasonographic approach avoided expensive surgical procedures and significantly reduced costs (p < 0.001).

Conclusions: Linear EBUS and EUS-FNA are able to improve the diagnostic yield for suspicious lung cancer lesions after non-diagnostic conventional techniques. These techniques, performed with one scope, can be offered to patients with accessible lesions as an intermediate step for diagnosis, since they may avoid more invasive procedures and hence reduce costs.

Background
Lung cancer is a major health problem and the most common cause of cancer-related mortality worldwide [1]. In patients with suspected malignant lesions, a rapid and precise diagnosis is crucial to determine optimal treatment. Flexible bronchoscopy (FB) and computed tomography-guided transthoracic needle aspiration (CT-TTNA) are the main modalities employed to achieve this purpose. The appropriateness of each method depends on numerous factors such as tumor size and location, accessibility of the primary tumor, local availability of expertise with a particular technique and the potential complications of the procedure. The preferable technique is the one that can be performed on an outpatient basis and gives the most information about the diagnosis with minimum risk to the patient [2].
CT-TTNA is a standard diagnostic procedure; however, it cannot be performed on all lesions and carries a significant risk of complications, namely pneumothorax [3]. Flexible bronchoscopy and its ancillary procedures (bronchial washings, brushing and biopsies) are frequently used in lung cancer diagnosis and have a high diagnostic yield in endobronchial tumors, but they have a limited ability to diagnose peripheral, submucosal or peribronchial lesions [4]. The addition of "blind" transbronchial needle aspiration (TBNA) is able to improve the diagnostic yield in some extraluminal tumors, but it is influenced by lesion size and location, operator experience, and tumor type, among other factors [5]. Some patients remain without a definitive diagnosis despite being submitted to several procedures and have to undergo a surgical biopsy that, although definitive, is invasive and not always suitable for those with advanced disease and significant comorbidities.

The accuracy and safety of TBNA have been increased by the development of endosonography. Linear real-time endobronchial (EBUS) and esophageal endoscopic ultrasound (EUS) are minimally invasive techniques that are able to image and sample, under direct vision, the intended structures in order to obtain specimens from pulmonary and mediastinal lesions. Both procedures have proven their value, individually or in combination, in mediastinal staging [6,7], but there is limited data on the importance of combining these two techniques in lung cancer diagnosis. The aim of this study was to evaluate the value of linear EBUS and EUS guided fine needle aspiration (FNA), performed with one echoendoscope, in patients with a suspicious lung lesion in whom conventional procedures had failed to yield a diagnosis.

Patients
Individuals with an undiagnosed suspicious lung cancer lesion who had undergone at least one diagnostic attempt with conventional techniques (FB or CT-TTNA) were prospectively enrolled in the study. Patients were eligible if chest CT documented a tumor arising in an area accessible to EBUS or EUS (paratracheal, parabronchial or paraesophageal lesions) without direct endobronchial signs on a previous bronchoscopy, or if they had a peripheral pulmonary parenchymal lesion (that could not be assessed by linear EBUS and EUS-FNA) with fluorodeoxyglucose (FDG) positron emission tomography (PET) positive mediastinal/hilar lymph nodes (SUV > 3.5). Patients were excluded from the study if the tumor was not abutting the airways or the esophagus; if the primary peripheral lesion was not associated with PET positive mediastinal lymph nodes; or if they already had a confirmed diagnosis of lung cancer and were sent for mediastinal staging. Based on the described criteria, between June 2008 and June 2011 a total of 123 patients were referred from several institutions for EBUS and EUS-FNA diagnosis. The study was approved by the ethical committee of Centro Hospitalar Lisboa Norte, Hospital Pulido Valente, and written informed consent was obtained from all patients.

Procedure
All procedures were done in an outpatient setting. EBUS and EUS-FNA were performed under general anesthesia with a flexible ultrasound bronchoscope (BF-UC160F, Olympus, Japan). The scope was first inserted through the tracheobronchial tree towards the area of the target lesion in a standard approach.
In the same session, the same linear echoendoscope was advanced into the stomach and then slowly withdrawn into the esophagus while making circular movements to localize anatomic landmarks and the intended lesion, according to the previously described technique [7]. The target mass or lymph nodes were identified using ultrasound imaging (EU-C60, Olympus, Japan). Subsequently, the lesions/lymph nodes were punctured at the level of the trachea, main carina, right and left main bronchus, bronchus intermedius, right and left basal trunk, and esophagus. A dedicated needle (NA-201SX-4022, Olympus, Japan) was placed inside the targeted structures to collect material. At least 4 needle passes were done. A cytopathologist was not present during the procedure and rapid on-site examination (ROSE) was not performed. The operating chest physician judged the macroscopic appearance of each sample and, when it was found to be inadequate or insufficient, additional punctures were performed. The aspirated specimens were immediately expelled onto a glass slide, from which a small amount of material was placed on two glass slides and the smears stained using the Papanicolaou technique. Afterwards, a needle wash was made into a container with preservative liquid. The samples were homogenized in a vortex for 10 minutes and centrifuged at 1200 rpm for 5 minutes. The pellet was fixed in PreservCyt solution (Hologic Inc, Iberia) and processed using the T2000 ThinPrep System (Hologic Inc, Iberia). The obtained preparation was stained by the Papanicolaou method for cytological examination. The samples were classified as positive (presence of malignant cells), negative (no malignant cells, an adequate cellular component and absence of bronchial epithelial cell contamination) or inadequate (no cellular component, blood, merely bronchial epithelial cells, or insufficient material to achieve a definitive diagnosis). Whenever possible, immunohistochemistry was performed to acquire additional information. A positive result for malignancy was accepted as evidence and the patient was treated accordingly. Negative and inadequate results were confirmed by subsequent surgical procedures.

Economic analysis
The estimated costs were based on the Portuguese National Health Service (NHS) according to the Health System Central Administration (ACSS) regulation prices for institutions and integrated services of the NHS (nº 132/2009). Costs were calculated in Euros (€) for patients submitted to conventional techniques and to EBUS/EUS-FNA, and estimated for avoided surgical procedures.

Statistical methods
The data were entered into a database and analyzed with the SPSS statistical software package (SPSS 18.5, Chicago, Illinois, USA). A descriptive analysis was carried out in which categorical variables were expressed as absolute and relative frequencies and continuous variables as means. Sensitivity, specificity, accuracy, and positive and negative predictive values were calculated using the standard formulas (a short sketch of these formulas is given below). Student's t-tests were used to compare the costs of procedures.

Patient characteristics
During the study period, a total of 123 patients met the inclusion criteria (7.9% of all patients submitted to FB or CT-TTNA for lung cancer diagnosis). Of these, 92 were males and 31 females, with a mean age of 63.1 years (range 38-88). Patients' demographics are summarized in Table 1. One hundred and one patients (82.1%) were current or former smokers, with a mean of 46.3 pack-years (range 15-125).
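The "standard formulas" mentioned in the Statistical methods above can be made explicit. The sketch below uses a 2x2 contingency table inferred from the results reported further on (106 true positives, 12 malignant cases missed by FNA including 3 lesions that could not be punctured, 3 benign lesions correctly negative, and no false positives among the 121 visualized lesions); these counts are a back-calculation that reproduces the published figures, not the authors' own worksheet.

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard diagnostic performance measures from a 2x2 contingency table."""
    total = tp + fp + tn + fn
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / total,
        "prevalence": (tp + fn) / total,
    }

# Inferred counts: these reproduce the study's 89.8% sensitivity, 100%
# specificity, 100% PPV, 20.0% NPV, 90.1% accuracy and 97.5% prevalence.
for name, value in diagnostic_metrics(tp=106, fp=0, tn=3, fn=12).items():
    print(f"{name}: {value:.1%}")
```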
In 47 cases (38.2%) the pulmonary target lesion was located in the right lung close to the central airway, in 28 (22.8%) in the left lung adjacent to the main airways, and in 13 cases (10.6%) the mass could only be characterized as central, since the radiological findings were confined to hilar and mediastinal structures and did not involve a specific pulmonary lobe. The mean size of the lung lesions was 32.1 mm (range 17-64 mm). Thirty-five patients (28.4%) were selected based on the PET-CT scan. In these, there was high primary lung mass FDG uptake as well as mediastinal and/or hilar PET positive lymph nodes with a mean short-axis diameter of 17.2 mm (range 13-22 mm). In 104 cases (84.5%) the patients had initially been submitted to a non-diagnostic FB with accessory sampling techniques and in 19 (15.5%) to a CT-guided TTNA; these 19 CT-TTNA cases were all performed on peripheral lung lesions. In the subgroup of 13 patients with central lesions, 11 were immediately sent for EBUS and EUS-FNA after the first failed diagnostic attempt (7 cases had been submitted to a blind TBNA). After the initial non-diagnostic exam, a second diagnostic procedure was performed in 48 patients (39.0%) (TTNA in 33 patients and FB in 15 patients) and a third in 10 of these patients (8.1%) (FB in 7 patients and TTNA in 3 patients) before they were sent for EBUS and EUS-FNA. In 64.6% of cases there was a shift between the first and second procedures, i.e., patients who had been submitted to FB were afterwards submitted to CT-TTNA and vice versa.

Procedure details
The mean procedure time was 35.5 minutes (range 21-65 minutes) and all patients were discharged home after the examination. There were no complications related to the procedure. Two patients were excluded from the study since they had direct tumor signs (an endobronchial mass) when the endosonography was performed. In 121 cases the target lesion/lymph nodes could be visualized by EBUS/EUS, although FNA could only be performed in 118 cases (97.5%); in three cases there was a major vessel interposition, confirmed by Doppler mode, that prevented a safe puncture. In 43 patients (36.4%) the target lesion could be assessed by both EBUS and EUS-FNA, in 67 patients (56.8%) it could only be punctured by EBUS-TBNA, and in 8 cases (6.8%) the lesion was inaccessible through an endobronchial approach and was sampled by EUS. The lesions were punctured a mean of 5.4 times (range 4-9 times). A total of sixty-nine lymph nodes were sampled in the 35 patients selected based on the presence of PET positive lymph nodes (1.97 per patient). These were punctured at the level of Mountain-Dressler stations 2R (n = 2), 2L (n = 3), 4R (n = 15), 4L (n = 9), 7 (n = 23), 8 (n = 2), 10R (n = 6), 10L (n = 7) and 11L (n = 2) (Table 2). EBUS and EUS-FNA provided a definitive diagnosis in 106 cases (87.6%). Of these patients, there were 88 cases (72.7%) of non-small cell lung cancer: the tumor was characterized as lung adenocarcinoma in 50 patients, squamous cell carcinoma in 23 patients and large cell carcinoma in one patient (14 patients had undifferentiated carcinoma). Small cell lung cancer was identified in 15 patients (12.4%). In three cases the pulmonary lesions were secondary to breast cancer, thyroid adenocarcinoma and renal cell carcinoma. In two cases, malignancy was suspected but could not be confirmed (inadequate sample) and in 10 cases the sample was negative for malignant cells.
The fifteen undiagnosed cases were further submitted to mediastinoscopy (n = 2), video-assisted thoracoscopy (n = 5) and open thoracotomy (n = 8). The cytologically inadequate samples proved to be lung adenocarcinoma (with extensive necrosis) and large cell neuroendocrine carcinoma on surgery. The negative cases were characterized as atypical carcinoid tumor (n = 2), lymphoma (n = 2), large cell lymphoepithelioma-like carcinoma (n = 1), thoracic paraganglioma (n = 1), sarcomatoid carcinoma (n = 1), hamartoma (n = 1), inflammatory pseudotumor (n = 1) and benign granular cell tumor (n = 1) (Figure 1). Among the cases in which the primary lesion could not be punctured, one had a final diagnosis of squamous cell carcinoma and two of lung adenocarcinoma. The sensitivity of EBUS and EUS-FNA to diagnose malignancy was 89.8% and the specificity was 100%. The positive predictive value and negative predictive value were 100% (95% CI 96.5-100%) and 20.0% (95% CI 7.0-45.2%), respectively. The overall diagnostic accuracy was 90.1%, and there were no significant differences between the group of patients with centrally located lesions and those with PET positive mediastinal lymph nodes. The prevalence of cancer in the study population was 97.5%.

Table 2: Sampled lymph node stations and procedure details. Left paratracheal superior (2L): 3; right paratracheal inferior (4R): 15; left paratracheal inferior (4L): 9; subcarinal (7): 23; paraesophageal (8): 2; right hilar (10R): 6; left hilar (10L): 7; interlobar right (11R): 2. Aspirations per lesion (mean, range): 5.4 (4-9). Complications: 0.

The mean procedure cost for each patient submitted to the non-diagnostic conventional procedures was 273.73€. The cost of EBUS and EUS-FNA, done with a single scope under general anesthesia, was 740.2€ per patient. The cost of mediastinoscopy or video-assisted thoracoscopy was 2524.25€ per patient and that of thoracotomy was 5047.22€ per patient. The use of ultrasonography avoided more invasive procedures in 106 cases and led to estimated cost savings of 272382€ (78461€ for EBUS/EUS versus 350843€ for surgical procedures) (p < 0.001). The EBUS and EUS-FNA approach saved approximately 2570€ per patient.
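The cost figures quoted above follow from simple arithmetic over the listed tariffs. The sketch below only checks the aggregate numbers; the split of the 106 avoided surgeries between mediastinoscopy/thoracoscopy and thoracotomy is not stated, so the surgical total is taken as reported.

```python
EBUS_EUS_COST = 740.2   # per-patient cost of the single-scope procedure (EUR)
N_DIAGNOSED = 106       # patients spared a surgical biopsy

ebus_total = EBUS_EUS_COST * N_DIAGNOSED  # 78,461.2 EUR (reported as 78461)
surgical_total = 350_843.0                # reported estimate for avoided surgery
savings = surgical_total - ebus_total     # ~272,382 EUR, as reported
per_patient = savings / N_DIAGNOSED       # ~2,569.6 EUR (~2570 as reported)

print(f"EBUS/EUS total: {ebus_total:.0f} EUR")
print(f"Estimated savings: {savings:.0f} EUR ({per_patient:.0f} EUR per patient)")
```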
This percentage is even higher for extraluminal and peripheral lesions since the sensitivity of the FB with all modalities combined is much lower when compared to endobronchial lesions [2,3]. CT-TTNA is frequently used by pulmonologists and interventional radiologists because it provides a more accurate way of diagnosing peripheral pulmonary nodules and masses than FB although it has the potential to cause a non-negligible rate of iatrogenic complications, especially pneumothorax [3]. Its sensitivity clearly depends on the size and location of the lesion, the size and type of the needle, the number of needle passes, and the presence of on-site cytopathology examination [4,8]. The pooled sensitivity of TTNA reported for 12363 patients in 61 studies was 90% [4]. A small yet important number of patients with radiologically suspicious pulmonary lesions remain without a definitive diagnosis after extensive workup (7.9% in our study). Some patients are referred to other procedures such as mediastinoscopy, thoracoscopy or even thoracotomy [9] however they are more invasive, more expensive and associated with increase morbidity and mortality when compared to ultrasonography guided-FNA. Others, not fitted for surgical interventions, are submitted to the repetition of the previous attempted conventional procedures. In the present study, all patients had at least one non-diagnostic FB or CT-TTNA and 39% were submitted to a second non-diagnostic technique which lead to a delay diagnosis and increased patient's anxiety. Nonultrasound guided TBNA is a well-established technique for lung cancer diagnosis and mediastinal staging however it is not routinely done in many institutions. In the present study, blind TBNA was attempted in only 14% of cases. The absence of formal training programs, the existence of technical problems with the procedure, concerns regarding its safety, needles cost, inadequate cytopathology support and variable diagnostic yield have been responsible for limited use of this procedure [5,10]. Over the last few years, EBUS and EUS-FNA have been introduced to medical practice and numerous studies have evaluated their impact in lung cancer mediastinal staging. They have a high diagnostic yield and are simpler, less invasive, cost-effective and allow sampling of lymph node stations not accessible to mediastinoscopy [6]. These techniques have gained their place in lung cancer staging algorithm substituting and complementing more invasive techniques such as mediastinoscopy. Furthermore, real-time linear EBUS and EUS-FNA can constitute an important option to diagnose lung cancer in a single procedure, principally in patients who present centrally located tumors not visible on routine bronchoscopy. Eckardt and coworkers have assessed the value of EBUS-TBNA for the diagnosis of radiologically suspicious chest lesions in a 36 months retrospective study [11]. The technique was able to provide a diagnosis in 55% of cases and the diagnostic yield was higher in central parenchymal lesions compared to enlarged lymph nodes. These results are consistent with the work published by Tournoy et al. [12] that reported a sensitivity of 84% in the diagnosis of central lung lesions not visible at routine bronchoscopy and by Nakajima et al. [13] that obtained 94.1% sensitivity and 94.3% diagnostic accuracy rate in a small population of 35 patients. EUS-FNA has also been used to diagnose centrally located lung tumors abutting the esophagus [14,15]. 
The choice between EBUS and EUS depends entirely on the availability of equipment, expertise and the location of the suspicious lesion. Two studies [7,16] reported that the combination of EUS-FNA and EBUS-TBNA was better than either alone for mediastinal assessment and achieved a very high diagnostic yield. Herth et al. have shown that the use of a single scope is feasible, effective and safe in primary mediastinal staging [7]. In our population, a final diagnosis was achieved by EBUS-TBNA alone in 56.8% of cases, while 6.8% of cases had a definitive diagnosis based on the performance of EUS-FNA. The combination of these procedures provided a wider sampling method, and there was a clear advantage to performing them in the same setting, since it maximized resources. There were also benefits regarding equipment handling and costs from using one ultrasound bronchoscope to perform EBUS and EUS-FNA for diagnostic purposes. A further relevant point, demonstrated in our study, is that tissue may be acquired from highly suspicious metastatic central lymph nodes to diagnose lung cancer if the main peripheral lesion cannot be easily accessed. Since an important percentage of patients with lung cancer also have mediastinal lymph node enlargement at the time of clinical presentation, EBUS and EUS-FNA can play an important role in the diagnostic algorithm. However, it is important to distinguish patients with an established lung cancer diagnosis sent for mediastinal staging from those referred with enlarged lymph nodes without a definitive diagnosis, since the described yield in this second group of patients is usually lower [11,17,18]. In contrast, one of the first published studies reported 96% sensitivity of EUS-FNA in diagnosing mediastinal lymph node malignancy in a small sample of patients with suspected lung cancer in whom bronchoscopy failed to establish the diagnosis [19]. In a more recent study, Lee et al. reviewed their results in a heterogeneous population of 126 patients who underwent EBUS-TBNA to diagnose highly suspicious lung cancer lesions (151 lymph nodes and 44 lung masses) [20]. Eight patients had endobronchial tumor invasion, only 48.4% of patients had a previous attempt to obtain a diagnosis, and in these cases the most common cytopathological technique was sputum cytology, which has a low sensitivity, especially when the tumor is extraluminal. The overall sensitivity and diagnostic accuracy were 97.2% and 97.6%, respectively. Our results also demonstrate that highly suspicious radiological and PET positive lymph nodes constitute good targets for EBUS and EUS-FNA lung cancer diagnosis. The sensitivity of these minimally invasive techniques combined (89.8%) was lower when compared to their use for mediastinal staging in patients with known lung cancer (96%) [7]; however, they allowed broader access to the target lesions compared to other diagnostic studies. In addition, the high lung cancer prevalence in this population (not entirely unexpected, since all the included patients had a strongly suspicious clinical picture complemented by suggestive accessory exams) and the correct selection of cases that could benefit from these techniques may explain the high sensitivity. A further important factor that may contribute to increasing the diagnostic yield in such patients is the number of aspirates per lesion. Lee et al.
[21] reported optimal results obtained with successive aspirates up to the third EBUS-TBNA pass per lesion, and these results were confirmed in subsequent studies [22]. Since the number of needle passes may modify the sensitivity of the technique, we performed at least four passes per lesion (mean 5.4/lesion) in order to obtain an adequate sample. Another concern regarding diagnostic procedures is safety. EBUS and EUS-FNA are usually associated with a very good safety profile and a low risk of adverse events. In this study they were also found to be safe, since there were no complications associated with the procedures. The use of Doppler mode helped avoid vessels adjacent to or within the target lesion and, in three cases, dictated that the lesion could not be safely punctured. EBUS plus EUS-FNA had a significant influence on the diagnostic management of our patients, since they were able to spare more invasive procedures, such as surgical exploration, in 87.6% of cases. This approach may constitute an important strategy for patients in whom surgery is not an option because of comorbid conditions or advanced metastatic disease. Regarding the economic impact, a study by Steinfort et al. [23] confirmed that EBUS-TBNA staging of NSCLC was a cost-beneficial strategy in comparison with surgical techniques. For lung cancer diagnosis the use of conventional techniques is still less expensive compared to EBUS and EUS-FNA, but our results reinforce the use of linear ultrasonography when there are no endobronchial tumor signs and the intended lesion is adjacent to the main airway or esophagus, since it is a cost-effective approach. Some limitations in our study justify further discussion. It should be noted that this is not a randomized trial and, while it is a prospective study, it included a selected population with peripheral and central lung lesions without visible endobronchial signs. Patients were excluded whenever the tumor was not abutting the airways or the esophagus; if there was aerated lung parenchyma surrounding the lesion; or if the primary peripheral lesion was not associated with PET positive enlarged central lymph nodes. Even though the population represented in this study comprises just a part of the suspected lung cancer patients frequently assessed in clinical practice, we are aware that this particular group of patients usually poses a more challenging diagnosis. Further selection bias may exist since the work was carried out in a referral interventional pulmonology unit, although this may be minimized by the fact that in our center CT-TTNA is executed by the authors in association with interventional radiologists. Nevertheless, our data must be confirmed in larger multicenter randomized trials. Finally, one can always argue that some of the included patients had a primary indication for EBUS or EUS-FNA instead of being submitted to the described conventional procedures. Previous work has shown that EBUS-TBNA can be performed as the first diagnostic procedure in patients with pulmonary masses and concomitant mediastinal lymph nodes [24]. We cannot forget that although EBUS and EUS-FNA are frequently advocated for lung cancer lymph node staging, they are not widely available and, at least in some countries, are performed in selected referral centers.

Conclusions

In conclusion, real-time EBUS and EUS-FNA are able to provide a high yield in lung cancer diagnosis after failure of conventional techniques.
They may represent an additional step for diagnosing lung cancer, since they seem able to replace more invasive diagnostic procedures in selected cases. Both techniques are extremely safe and cost-effective when they are performed by pulmonologists with a single linear ultrasound bronchoscope.
THE FACTORS AFFECTING THE EXPORT POTENTIAL AND ITS FORMATION UNDER THE CONDITIONS OF INTEGRATION

The aim of the article is to analyze the factors affecting the export capacity of the country and their interpretation by domestic and foreign scientists; to distinguish the basic classification and to form a unified system of factors that affect the export potential of Ukrainian agricultural enterprises. On the basis of this classification of factors, the main stages of development of the export opportunities of agrarian formations in the context of European integration are distinguished. The method. The theoretical and methodological basis of the study is a critical analysis of the fundamental works devoted to increasing export potential. The given tasks were solved on the basis of a systematic approach, using the scientific methods of analysis and synthesis. The method of abstraction was used to research the economic essence of the factors that affect the export potential of the enterprise. The combination of the methods of analysis and synthesis was used to determine the priority directions of export opportunities development. The results of the research. An analysis of theoretical approaches to identifying the factors influencing the export potential of the enterprise is provided. This allowed identifying several directions of classification. The results of the research helped to shape a common classification of factors that affect the export potential of agricultural enterprises and to distinguish integration processes as a separate group of factors. On the basis of the classification of the main factors, the phases of development of export opportunities in the context of approximation of the Ukrainian economy to European standards were formed. The prospects for developing the export potential of domestic agriculture under the influence of international integration processes are grounded. They consist of the ability to increase efficiency by redistributing export volumes among major importers. The directions for activating the increase in efficiency of the export potential of the basic agricultural branches, by creating a positive image of agricultural enterprises, and the directions for improving product competitiveness and meeting the quality standards of the European Union, are determined. Recommendations for improving regulatory policy when forming the legal provision of export potential in the context of European integration are developed (selective support of structural changes in ways that exclude the possibility of trading partners using protectionist instruments; creating an information system for foreign trade; expert and consultative support of export transactions). The practical meaning. The practical meaning of the results is the formation of a unified classification and the grounding of the factors that influence the export potential of agricultural enterprises. The conclusions can be used in further improving the mechanism of foreign policy in Ukraine, in particular in the process of improving the legal and regulatory framework, and in the process of assessing the prospects for the development of international cooperation between Ukraine and foreign countries by Ukrainian ministries and special agencies. Significance/originality.
The formed classification of factors will help to provide a better understanding of export potential formation and can be used in management decisions by the leaders of farms in Ukraine and other countries.

Introduction

The export potential of the enterprise, as an economic category and an object of analysis, has received increased attention from scientists and practitioners in international business. The experience of highly developed and rapidly developing countries has shown that the dynamism of the positive development of most national economies is largely achieved through an effective export policy, and depends on the level of excellence of the processes associated with the formation and use of the export potential of both the state and economic entities, as well as on the selection of methods and tools that can ensure the successful course of these processes. However, despite the large number of domestic and foreign studies, there is currently no unified classification of the factors affecting the export potential of agrarian business. The relevance of the topic stems from the fact that, under conditions of an unstable economic environment, every subject of foreign economic activity must solve the problem of improving the efficiency of its foreign trade operations; this is a very important task in the current economic environment. Thus, economic entities face the need to analyze the factors affecting the efficiency of foreign trade activities.

The classification of the factors that affect the export potential of agricultural enterprises

Analyzing the theoretical foundations and the essence of export potential, we concluded that export potential is a central element of state economic growth. The overall development of the national economy and its monetary system, the foreign trade balance and the standard of living depend on the dynamics of exports. However, export potential does not depend only on the volume of exported goods, but on a large number of factors that influence its formation. Exploring the influences on export potential, it should be noted that a unified classification does not exist, but all the factors that scientists have proposed can be attributed to two main groups: internal and external (Figure 1). The internal factors include those that influence the formation of the export potential of farm businesses. A. O. Fatenok-Tkachuk divides these factors into three groups: innovation and business activity of the enterprise, increasing the economic potential of the entity, and support and stimulation of enterprises with high export potential at the state level (Fatenok-Tkachuk, 2008). In another work, this author (Lipych & Fatenok-Tkachuk, 2010), in order to evaluate the impact of factors on the development of the export potential of the company, distinguishes two levels of factors: the micro and the macro level. At the micro level, internal and external factors are distinguished. The internal factors are caused by the enterprise's activity, while the external ones result from public policy, the activity of competitors and the environment in which the entity operates, so these factors act at the level of the state where the company operates. At the macro level there are factors related to foreign competitors and the partner country. In addition, the external factors are separated into those caused by the activities of the state and those caused by the activity of market players.
Olga Homenyuk attributes to the internal factors those related to the intra-corporate potential of enterprises, which represents the ability to develop exports by using the internal strengths and competitive factors of the company in order to gain the identified target export markets. External factors relate to the external-market export potential, which represents the ability to develop exports by finding new, and developing existing, target markets for the export-oriented products of the enterprise, using external environment factors (Homenyuk, 2014). As we can see, this scientist includes two main elements among the factors: the intra-corporate environment and the sphere of foreign markets; in our view, however, external factors also include the political, financial and legislative environment of the country in which the economic subject performs its financial and economic activity. Dadalko proposed a classification of the internal factors on the basis of the enterprise's marketing environment and distinguished several groups of factors: 1) characteristics of the firm (size, international competence, the number of managers with international experience), product, industry and export market; 2) psychological characteristics and management advantages; 3) the choice of the target market and segment. In order to analyze the factors influencing export potential, they are divided into: 1) controlled ones, which may be varied in the right direction; 2) uncontrolled ones, which are in turn divided into those controllable and uncontrollable by the state. The scientists propose to distinguish two groups of factors depending on the degree of their control: endogenous, related to a company's activity, its foreign marketing strategy and performance management; and exogenous, which include political, geographical, natural and environmental characteristics (Formenok, Dubkov, & Dadalko, 2011). Dunska A. R. considers that the most important internal factors of export potential development include the following: 1) business management organization; 2) informational support of foreign economic activity; 3) export production planning; 4) accounting and analysis of export deliveries; 5) HR management (Dunska, 2013). Baban T. O., exploring the export potential of barley, distinguished the following external factors with the greatest influence on the barley export potential of Ukrainian agricultural enterprises under modern conditions: 1. Supply factors: natural resource potential in the producing countries; the overall development of agricultural production in producing countries; supply of competing products; the seasonality factor. 2. Demand factors: demographic factors; purchasing power in importing countries; geographical location; development and structure of livestock; consumption of competing cultures; the availability of information about the conditions of Ukrainian barley sale. 3. Institutional factors: regulation of the barley trade by exporters, importers and international organizations. The factors that hinder the development of innovative capacity include infrastructural problems and characteristics of the business environment, energy inefficiency, the complexity of investment protection, insufficient development of information and communication technologies, etc.
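For readers who prefer a structured overview, the classifications surveyed above can be restated as a nested data structure. The Python sketch below is only an organisational restatement of the groupings already described by the cited authors; the labels are abbreviated paraphrases, not new factors.

```python
# An organisational restatement of the factor classifications surveyed
# above; groupings follow the cited authors, abbreviated for brevity.
export_potential_factors = {
    "internal": {
        "Fatenok-Tkachuk (2008)": [
            "innovation and business activity of the enterprise",
            "increasing the economic potential of the entity",
            "state-level support of high-export-potential enterprises",
        ],
        "Dunska (2013)": [
            "business management organization",
            "informational support of foreign economic activity",
            "export production planning",
            "accounting and analysis of export deliveries",
            "HR management",
        ],
    },
    "external": {
        "Baban T. O. (barley exports)": [
            "supply factors", "demand factors", "institutional factors",
        ],
    },
    "by degree of control (Formenok, Dubkov, & Dadalko, 2011)": {
        "endogenous": "company activity, marketing strategy, management",
        "exogenous": "political, geographical, natural, environmental",
    },
}
```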
At the same time, the factors positively affecting the export activities of agricultural enterprises in Ukraine include the intensification of economic cooperation between Ukraine and the European Union. I. Sakalo also supports this point of view. She considers the positive factors to be the following:
- expansion of the areas of cooperation between Ukraine and the EU;
- strengthening cooperation of both government and business partners;
- the implementation in practice of the foundations of partnership and cooperation;
- removing obstacles (imperfections in the tax and judiciary systems, reducing corruption) due to integration processes;
- regular monitoring and evaluation of trade relations, which will be able to identify weak and strong areas and in turn strengthen annual progress;
- reduction of trade barriers through changes in standardization, strengthening administrative control, etc. (Sakalo).
While the factors above, which contribute to the development of export potential, relate to environmental factors, the scientist M. Nalyvayko identifies internal factors influencing the development of export potential and considers that they include: 1. Improving the competitiveness of goods. 2. Reducing the cost of goods. 3. Increasing the range of goods for various kinds of markets. 4. Development and implementation of the export policy of the company. 5. Search for, and continuous analysis of, new markets. 6. Search for new, and analysis of existing, suppliers of materials and components for the production company. The author notes that the attraction of foreign investment is a factor influencing export potential. We share this point of view, because foreign investment plays an important role in the present economic development of many countries, including those in economic crisis. The volume of foreign investment in the world is growing rapidly, thus increasing the role of international production and the international division of labor in the global economy. It is impossible to leave aside the impact of foreign investment on the export activities of enterprises, because thanks to investments agricultural enterprises can update their technical and technological base and increase product competitiveness. Considering that the export potential of the country consists of the export potential of several parts of the economy, L. P. Petrenko identifies three main elements that most affect its formation: 1) export competitiveness of the product; 2) export competitiveness of the manufacturer; 3) export competitiveness of the producing country (Petrenko, 2008). Summarizing the above, it is worth noting that in Ukraine there are conditions for export potential development: strengthening and increasing the export potential of agricultural enterprises is a primary factor of economic reform, and the increase in agricultural export potential is a significant source of foreign currency, which is necessary for social and economic development.

Development of export potential in the context of European integration

The development of export potential is associated with the integration processes taking place in the country. Nowadays the course of international economic integration is the engine of the world's economies.
Integration as a process means measures aimed at eliminating discrimination between economic units belonging to different states; integration considered as a state can be represented as the absence of various forms of discrimination between national economies. Thus, the disparities between countries' development have to be negated in the process of integration. This means that, in order to balance development, resource allocation must be completed; to avoid an unjust balance, a rational distribution of resources, in terms of money or barter, is required. We are more inclined to the opinion of M. Arah that integration is the process of creating an optimal structure of the international economy and the smooth functioning of coordination and unification mechanisms (Arah, 1998), and we agree that it is a process of economic cooperation which promotes the convergence of economic mechanisms. The export of goods and services is, in our opinion, one of the forms of economic cooperation. In its economic essence it relates to economic integration because, as some scientists (Hrontkovska, Riaba, Ventsuryk, & Krasnovska, 2014) consider, economic integration is a process of convergence, mutual adaptation and gradual integration of national economic systems. Other authors (Kozak, Lohvinova, Zakharchenko, & Kravchenko, 2012) complement this point of view, indicating that the association of economies is intended for the free movement of goods, services, capital and labor across national borders. When considering the factors of influence on export potential, we mentioned external factors, including the isolated impact of foreign institutions. This impact is realized through the integration process. Summarizing the views of scientists, we have identified the main directions of Ukraine's integration into the world economy. Considering the works of the many scientists who have researched integration processes, it can be argued that the most promising way for Ukraine's entry into the global economy is «European integration». N. V. Osadcha (Osadcha, 2011) gives the clearest definition of the category «European integration» for Ukraine. She notes that this is a way to modernize the economy by attracting foreign investment and new technologies, increasing the competitiveness of domestic producers, and gaining the possibility of entering the single EU internal market.

[Figure 2. The impact of integration processes on export potential (regions: North and South America, the European Union countries, Asian countries, the CIS countries, African countries). Source: the author's summarizing.]
The final turn towards European integration took place after the Revolution of 2013-2014. Ukraine quickly began to reorient from the Russian market to the European one. While in 2010-2013 the share of Russian exports from Ukraine was approximately equal to, or a little higher than, the share of the European Union, in 2014-2015 the share of the Russian Federation in Ukrainian exports declined, while the share of the EU increased. Thus, in 2015 the tendency toward reduced exports to Russia continued, and their proportion in the general flow of merchandise exports was, for the first time, less than 15% (The unified comprehensive strategy and action plan for the development of agriculture and rural areas in Ukraine in 2015-2020 years). Analyzing these figures, we see that exports depend on the global integration process, which is why we consider it appropriate to reflect the impact of integration processes on the export potential of the company (Figure 2). While integrating into the world economy, countries choose which direction of integration is beneficial for them in social and economic terms, and easier in the process of unification. The extension of export potential in the context of European integration is an important aspect of improving the economic situation of the country, but according to the statistical information it is necessary to note that exports of agricultural products to the European Union are insignificant. The reason is their low competitiveness; as noted above, the outdated technical and technological base of agricultural enterprises contributes to this. As a result, production does not meet the requirements of the European market. Among the products that Ukraine exports to the EU, cereal crops, oilseeds, vegetable oil and food waste dominate, but the product line has prospects to expand, and we suggest considering the main stages of increasing the export potential of agricultural enterprises at the national level and the impact of the regulatory legal base that supports it (Figure 3). Entering foreign markets is impossible without a certain image of the economic entity, which depends on the quality of the products it produces, on the ability to use modern methods of management and negotiation, and on the established image of the country on the world economic scene. For Ukrainian agricultural enterprises the quality of production is an important factor of this image. The improvement of production quality is the next stage in the development of export potential. Only after the products produced correspond to quality standards can they be exported from the territory of Ukraine.

Conclusions

The study identified and comprehensively researched the key factors influencing the development of the export potential of agricultural enterprises. Summarizing the research of domestic and foreign scientists, it should be
noted that the factors influencing the export potential of the enterprise are systemic, because together they form the export potential of the enterprise, and interdependent, because they are united by a single purpose: to ensure the development of the foreign economic activity of the enterprise. The implementation of measures for each factor of export potential development will improve the competitive position of the company and the effective promotion of the enterprise in the target foreign market. Thus these factors affect not only the development of export potential; in the final result, they are able to enhance the competitiveness of the whole enterprise.

[Figure 1. The classification of the factors that affect the export potential of agricultural enterprises. Source: summarized by the author on the basis of the conducted research.]
The development and use of adolescent mobile mental health (m-mHealth) interventions in low- and middle-income countries: a scoping review

Adult mental health challenges frequently stem from undiagnosed poor mental health earlier in life. With increasing levels of poor adolescent mental health and insufficient health care resources in low- and middle-income countries, mobile mental health may offer expanded service access. Little is known about mobile mental health interventions for adolescents in low- and middle-income countries. Our aim was to review the literature on mobile mental health intervention development and use for adolescents in low- and middle-income countries. We searched the APA PsycInfo, Web of Science, Psychiatry Online, and Ebscohost databases using keywords and phrases. Screening of the 6953 retrieved articles generated 6 articles that met the inclusion criteria. Arksey and O'Malley's adapted framework was followed, using rigorous inclusion criteria and screening by two reviewers. Studies showed high heterogeneity. Two studies used short message service text messaging platforms, one used phone call reminders, two used smartphone applications (WhatsApp or game-based), and one study compared different short message service, web-based and smartphone app offerings. Generally, adolescents had a positive perception of mobile mental health interventions. Helpline messages, peer group sessions, access to a counsellor and games set in real-life environments were some of the preferred contents of mobile mental health interventions. Noted barriers include low personal mobile phone ownership, leading to a lack of confidentiality, data costs and limited internet access. While adolescents in low- and middle-income countries find mobile mental health interventions acceptable and supportive, challenges remain. Mobile mental health interventions can potentially overcome barriers associated with face-to-face care, such as high cost and stigma. However, more research is needed to overcome these challenges and build the evidence base in low- and middle-income countries for this field to grow.

Introduction

The World Health Organization (WHO; 2022) defines mental health as 'a state of well-being in which an individual realises his or her own abilities, can cope with normal stresses of life, can work productively and can make a contribution to his or her community' (para 1). Mental disorders, which are health conditions that change one's thinking, mood, or behaviour associated with impaired function, are common in adolescents and young people. It is estimated that mental illness, which refers to all diagnosable mental disorders, accounts for approximately 14% of the global burden of disease and the main burden of disease in children and adolescents (Mokitimi et al., 2018; Yatham et al., 2018). Of the 970 million people living with mental health disorders globally, over 80% reside in low- and middle-income countries (LMICs), which typically have large youth populations (WHO, 2022, para 2; Yatham et al., 2018).
Approximately three-quarters of mental disorders that occur across the lifespan have their first onset during adolescence (Solmi et al., 2022). This stage (ages 10-19) is defined as a unique and formative time in which multiple physical, emotional, and social challenges, including exposure to poverty, abuse or violence, can make adolescents vulnerable to mental health disorders (Chulani & Gordon, 2014). Poor mental health in adolescence can hinder optimal development and functioning (Mokitimi et al., 2018). Furthermore, mental health disorders negatively impact social relationships, physical health, and academic performance, resulting in low wage earnings in adulthood (Grist et al., 2017). Evidence shows that many mental health conditions in adulthood stem from undiagnosed mental disorders during childhood, impacting the ability of individuals to meaningfully contribute to their communities (Hatcher et al., 2019). Yet adolescent mental disorders in LMICs remain largely untreated due to reluctance from young people to seek help; services that are limited and often not tailored to youth, to mental health, or to both; and other barriers such as cost, confidentiality concerns, stigma, and lack of mental health literacy (Ridout & Campbell, 2018; Seko et al., 2014). It is therefore important to identify innovative approaches for delivering sustainable mental health care in terms of reach and impact, particularly among populations at higher risk such as adolescents. Over the last two decades, there has been a rapid advancement in digital technologies such as smartphones, mobile digital applications, and social media. Health care workers have leveraged these technologies to deliver health information and interventions in the form of mobile health (Seko et al., 2014). More recently, the adoption of mobile health technology for mental health has grown exponentially. Approaches to care using mobile technology offer an alternative to providing in-person care, and may improve the quality and availability of care and reduce barriers to face-to-face help-seeking, such as stigma or concerns about talking about one's mental health challenges (Ben-Zeev et al., 2017; Grist et al., 2017). The advantages of using mobile mental health (MMH) technology include accessibility, the potential for anonymity, timely feedback, and lower cost compared to traditional mental health service delivery (Liverpool et al., 2020). MMH interventions also have the potential to overcome language barriers by allowing users to use a language they are comfortable with, if they are developed in local languages (Noack et al., 2021). Remote support utilising MMH has the potential to overcome geographical barriers to care and provides mental health support to hard-to-reach groups, such as adolescents who may not seek help through traditional care pathways (Grist et al., 2017). More importantly, MMH approaches have great potential for impact in under-resourced and developing countries, such as LMICs with limited access to, and poor quality of, health care. Despite MMH's potential to address the treatment gap in mental health care in LMICs, little is known about the use of MMH, its development and implementation, availability and acceptability, and barriers to use among adolescents. To address this gap in the literature, this scoping review was conducted to summarise existing work in adolescent MMH in LMICs and to provide steps for future developments of MMH interventions for adolescents.
Method

This scoping review process followed an adapted version of Arksey and O'Malley's (2005) framework, which involves identifying the research question, identifying relevant studies, study selection, charting the data, and collating, summarising and reporting the results. In brief, the research question was: What is the availability and acceptability of MMH interventions for adolescents, and what has facilitated their successful implementation or acted as barriers to their use in LMICs? MMH interventions included in the review were defined as: short message service (SMS) or text messaging, smartphone games, phone calls, and smartphone applications. Search terms and databases were determined in consultation with the authors. The following search string was adapted for each database: mobile health OR digital health OR mhealth OR text messag* AND design OR development OR intervention AND adolescent OR young people OR youth AND mental health. Searches were conducted in the following databases: PsycINFO, Ebscohost, Web of Science, and Psychiatry Online. In addition, an electronic search in Google Scholar and reference list checking of relevant articles were conducted to identify any additional articles missed during the initial search. Titles, abstracts and full-text articles were screened for relevance against the following inclusion criteria: (1) the article is a research study; (2) the focus of the study was mobile health, defined as medical and public health practice supported by digital technologies for mental health intervention; (3) the target population included adolescents aged 10 to 19 years; (4) the study was conducted in an LMIC; and (5) the article was written in English. Articles were excluded if they focused on adults or the general population. There was no restriction on publication dates. Grey literature, for example dissertations and conference proceedings, was excluded.

Screening process

Rayyan software (Ouzzani et al., 2016) was used to manage the process of screening and selecting research articles. Articles were screened by the author (SM) and research assistant (MM) between July and September 2022.

Charting the data

The final articles selected for data extraction were grouped according to the type of MMH intervention used. After reading the articles in full, key information was entered into a data extraction table and reviewed by the co-authors independently to cross-check the validity of the data characterisation. Following this process, three domains were identified using Braun and Clarke's guidelines for thematic analysis (Terry et al., 2017).

Collating, summarising, and reporting the results

Data were extracted according to key characteristics, including author, year of publication, country, population, sample size, age of participants, the type of MMH intervention, and whether it was a development, evaluation, or feasibility and acceptability study. Key findings from each article were also recorded.

Ethical considerations

No ethical approval was required for this review article.
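As an illustration of the method just described, the boolean search string can be assembled programmatically, and the study-selection flow reported in the Results section below can be tallied from the stated counts. This is a minimal sketch in Python; the counts are those reported in this review, while the grouping of terms into parenthesised AND-blocks is an assumption about how the string was adapted per database.

```python
# Assemble the boolean search string used across databases (the grouping
# into parenthesised AND-blocks is an assumed per-database adaptation).
blocks = [
    ["mobile health", "digital health", "mhealth", "text messag*"],
    ["design", "development", "intervention"],
    ["adolescent", "young people", "youth"],
    ["mental health"],
]
query = " AND ".join("(" + " OR ".join(terms) + ")" for terms in blocks)
print(query)

# Tally the selection flow using the counts reported in the Results.
identified = 6953
after_dedup = identified - 624      # 6329 screened by title
after_title = after_dedup - 6013    # 316 screened by abstract
after_abstract = after_title - 277  # 39 full-text articles assessed
included = after_abstract - 33      # 6 studies met the inclusion criteria
print(after_dedup, after_title, after_abstract, included)  # 6329 316 39 6
```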
Description of studies

Initially, 6953 articles were identified, and 624 duplicates were removed. Following this, 6329 articles were screened by title, and 6013 were excluded. The remaining 316 articles were screened by abstract, and 277 were excluded. Of the remaining 39 full-text articles assessed for eligibility, 33 did not meet the criteria due to the wrong population, age group, location, and/or grey literature. A total of six full-text empirical articles met the inclusion criteria. See the flow diagram in Figure 1 for the review process. The six studies were published between 2014 and 2022, with the majority (five) published since 2019. There were three qualitative studies, one quantitative study, one mixed-method study, and one study that followed an iterative approach. The sample sizes of the studies ranged from 17 to 124 adolescent participants. Two of the six studies included only female participants. The studies were conducted in Nigeria (1), Jamaica (1), China (1), Kenya (1), and India (2). The six studies explored adolescent perceptions of using mobile health interventions for their mental health needs. Two of the six studies used short message service (SMS) text messaging platforms as an MMH intervention. One of the six studies focused on phone call reminders, while two others focused on smartphone applications such as WhatsApp and games. Finally, one was a feasibility study to determine the viability of using mobile health resources such as SMS communication, web-based and smartphone applications. Three domains were identified from the six studies, namely: (1) adolescents perceive MMH approaches as useful, (2) access to mobile devices and other barriers impacting the use of MMH, and (3) MMH content preferred by adolescents.

Domain 1: adolescents perceive MMH approaches as useful. All six studies reported high levels of satisfaction from adolescents with MMH interventions. Maloney et al. (2020) reported that more than 50% of their participants indicated they would be interested in using a smartphone application to monitor their health. In Duan et al. (2020), most participants reported that the SMS text messaging intervention could benefit them, and they were willing to receive text messages. Positive experiences with WhatsApp chats were reported by Chory et al. (2022), where adolescents said these chats created a feeling of community and peer support among adolescents living with HIV. Similarly, according to Chandra et al. (2014), adolescents reported that they preferred receiving messages with quotes and brief phrases on positive well-being and felt that someone was there to care for them. Gonsalves et al. (2019) found that a digital blended self-help format was acceptable to school-going adolescents with or at risk of anxiety, depression, and conduct difficulties. Phone call reminders for adolescents with perinatal depression increased their capacity to engage with treatment and motivated clinic attendance (Kola et al., 2022). MMH treatment was an added opportunity for adolescent mothers to engage with care. All the studies reviewed reported various benefits of using MMH identified by adolescents, such as cost-effectiveness and confidentiality. For example, Chandra et al.
(2014) highlighted that SMS texting is a low-cost method for adolescents, as it does not require mobile data or access to a smartphone device. This is particularly advantageous in settings where more advanced mobile technologies, such as smartphones, are not accessible and affordable. Using an MMH platform with pseudonyms allowed participants to share their experiences openly without any fear of revealing their real identity. Participants reported that they could talk about difficult topics that they would not be able to discuss in other settings, such as stigma and mental health challenges such as depression and anxiety (Chory et al., 2022). This finding is similar to that of Gonsalves et al. (2019). In the latter study, participants reported that they felt supported by a blended problem-solving mobile game and felt safe sharing their problems within the game. In addition, participants in the study by Duan et al. (2020) also found the SMS text-messaging intervention beneficial, as the messages they received encouraged them to self-reflect and stay calm. In Kola et al. (2022), the use of the MMH intervention reduced participants' logistical challenges of face-to-face communication during their clinic visits.

Domain 2: access to mobile devices and other barriers impacting the use of MMH. One study conducted in India reported a low level of individual phone ownership, with many adolescents sharing their phones with their families (Chandra et al., 2014). Also in India, Gonsalves et al. (2019) reported the sharing of smartphone devices, where adolescents used devices that belonged to a parent, mainly fathers. In contrast, Maloney et al. (2020) reported that all but one participant in Jamaica had access to a mobile phone. A small number of participants in China believed that the SMS text messaging intervention would not be helpful because their parents limited their use of, and access to, their mobile phones during the day (Duan et al., 2020). Chory et al. (2022) did not report mobile device ownership challenges among adolescents in Kenya, as all participants were provided with a smartphone for the duration of the study. Participants were allowed to keep their mobile phones after the research ended. Five of the six studies highlighted concerns raised by adolescents regarding their use of MMH, such as a lack of confidentiality, for two primary reasons: (1) sharing their devices with parents and (2) parents having access to their mobile devices (Chandra et al., 2014; Duan et al., 2020). As a result, adolescents felt that they would not benefit from the MMH interventions. For example, in Duan et al. (2020), parents of adolescents kept their mobile phones for them during most of the day, making it difficult for them to access text messages. Moreover, there was no confidentiality for the adolescents. This lack of confidentiality limited how adolescents engaged with mobile platforms and what they shared on these platforms. Adolescents in the study by Gonsalves et al.
(2019) speculated that concerns might arise from parents and teachers about the amount of time spent on their mobile phones. Adolescents also reported their caregivers as potential barriers to using MMH interventions, as some caregivers did not understand why the participants were using their phones for longer periods (Chory et al., 2022). Participants identified household chores and school responsibilities as further barriers to MMH intervention participation. With household chores and schoolwork, adolescents had limited time to engage with the MMH interventions. Another common barrier to using MMH identified across all studies was data costs and/or access to the internet. Maloney et al. (2020) speculated, with no direct evidence, that adolescents might be unwilling to exhaust their mobile phone data on MMH resources. Rural participants had less frequent access to Wi-Fi compared to their peers in urban areas (Maloney et al., 2020). Participants in the study by Gonsalves et al. (2019) recommended that MMH interventions should be accessible offline and available independent of internet access.

Domain 3: MMH content preferred by adolescents. Four of the six articles reported specific preferences that the participants held for MMH interventions. In an SMS text messaging intervention, most participants preferred helpline messages rather than positive mood-lifting messages (Chandra et al., 2014). The helpline message encouraged them to call or text back if they felt angry, sad, or anxious. Many participants used the helpline to send text messages or calls when upset or angry (Chandra et al., 2014). The opportunity to text back or call someone was strongly preferred because it gave adolescents a sense of support and care. In a game intervention, participants preferred games with stories set in various real-world environments, with choices that could be explored (Gonsalves et al., 2019). Moreover, some participants indicated that online group sessions may help provide a sense of togetherness among users. This finding is consistent with Chory et al.'s (2021) study, where participants reported that WhatsApp group chats offered a feeling of community and peer support among adolescents. However, for more severe problems, participants preferred private chat functions as an option for counsellor support (Gonsalves et al., 2019). Similarly, participants reported that private chats with a counsellor helped them to discuss sensitive topics (Chory et al., 2021). Encouragement and company, meeting and communicating with friends online, the provision of coping strategies, and receiving individualised or tailored messages were some of the additional preferred content of participants in the study by Duan et al. (2020).
Discussion

This scoping review aimed to report on current evidence for MMH design, development or intervention for adolescents in LMICs. The scarcity of published research on this topic reflects the limited focus on adolescent mental health interventions in LMICs, despite increased mobile phone ownership and use in these countries (International Telecommunications Union [ITU], 2022). It also reflects that the field of MMH is a small but growing research area, one that increased dramatically due to the coronavirus pandemic, which required rapid advances in the use of MMH to deliver and ensure continuity of care (Bantjes, 2022). Nonetheless, the three domains identified provide insight into the perception and usefulness of MMH interventions. Adolescents in LMICs perceive MMH approaches as useful and a viable alternative for accessing mental health care, using various approaches for delivering and improving their mental health. However, barriers such as not owning or sharing a mobile phone, access to phones by parents, and mobile data and internet access may hinder adolescents from using MMH approaches. Adolescents' perceptions of MMH approaches and barriers to MMH implementation, use, and acceptability are explored in this section. Adolescent users reported mobile health interventions to be beneficial in all of the research studies included in this review, though the reported benefits varied based on the mobile health intervention used. Our review found that MMH interventions have the potential to overcome service access barriers, such as treatment costs and long waiting hours at clinics, because they provided continuous low-cost access to care outside clinical settings. While adolescents perceive MMH interventions as useful, studies have reported high dropout rates in MMH interventions (Hall et al., 2022). Research by Välimäki et al. (2017) found that participants in the intervention group dropped out more often, showing that they may not have been fully engaged with the web-based interventions. User engagement was also reported as a critical challenge for mental health digital applications (apps), with a reported 4% user retention rate 2 weeks after the first download (Melcher et al., 2022). Bauer et al. (2020) reported limited downloads and poor retention in mental health apps, particularly outside of clinical trials and research settings. The benefits reported by adolescents highlight the potential of MMH interventions to overcome not only access-related barriers but also confidentiality and stigma concerns associated with face-to-face treatment. Therefore, MMH interventions should ensure anonymity and confidentiality; for example, users should have the option to hide their identity. However, they should be able to consent to being identified by a professional counsellor if there is a suspected emergency or the user is in danger, for example with suicidal thoughts. Our findings are consistent with those of studies conducted in non-LMIC countries. In Seko et al. (2014), confidentiality and privacy were the youths' most commonly stated concerns. Similarly, 74% of US college students listed data privacy as their top concern for mental health apps (Melcher et al., 2022). While MMH shows potential for use in adolescents, challenges persist. For the MMH potential to be fully realised, critical issues such as data protection and appropriate ethics and safety frameworks specifically for MMH still need to be considered.
The reviewed articles showed a trend in the use of text messaging and smartphone application interventions. Quotes and brief messages on positive well-being, encouragement and coping strategies are the types of messaging that adolescents seem to prefer in LMIC contexts. This can be attributed to the fact that SMS text messages do not require internet access. This is evident in the study by Kola et al. (2022). In the latter study, over 50% of adolescents indicated that they preferred receiving mental health information as text messages, while very few preferred such information as videos on cell phone apps. Similarly, Akinfaderin-Agarau et al. (2012) found that adolescents preferred SMSs. The most frequently cited reason by adolescents was that it is cheaper. Moreover, adolescents stated they can still receive SMS texts with a poor network connection. Adolescents highly rated a sense of community from a group chat, but they equally appreciated being able to chat with a counsellor privately. This is consistent with the finding from Liverpool et al. (2020). The latter study found that adolescents' willingness to use the MMH intervention was influenced by the ability to connect with others. They found that adolescents were more likely to engage with the MMH intervention if it facilitated conversations with others, because they wanted to know that others had similar experiences. While acceptability was generally favourable, the findings of this review highlight that implementing MMH interventions is not a 'one size fits all' process but rather requires a nuanced understanding of adolescent needs in particular communities. The design and development of MMH interventions should involve the end-users, paying attention to their preferences. Watson et al. (2023) emphasise the importance of engaging young people in making decisions for their health. Watson and colleagues recommend that young people should be included in all stages of the deliberative priority-setting process. Organisations such as the WHO and the United Nations International Children's Emergency Fund also acknowledge the importance of involving young people in making decisions concerning their health (Watson et al., 2023). It is important to note that only one of the six studies designed and developed an MMH intervention. The rest of the studies used existing platforms, such as WhatsApp or a short messaging service platform, to deliver the intervention. The limited range of MMH interventions in LMICs is also evident when compared to literature from non-LMICs. For example, a review conducted by Liverpool et al. (2020) on engaging children and young people in MMH interventions in high-income countries identified six different intervention types, namely (1) apps, (2) virtual reality, (3) websites, (4) game and computer-assisted programmes, (5) robots and digital devices, and (6) text messaging, while our review identified only three, namely SMS text messages, smartphone apps and digital games. This finding highlights a need for organically developed platforms tailored explicitly for adolescents and specific to their socioeconomic and cultural contexts. For this field to grow in LMICs, researchers need to explore adolescents' preferences for MMH interventions in conjunction with a range of platforms.
Recent studies report widespread mobile phone ownership and use among young people (Nwaozuru et al., 2021). According to the ITU's (2022) latest report, 65% of individuals in LMICs own a mobile phone. According to a survey assessing mobile phone ownership in 24 developing nations, more than half of the population in each of the 24 countries owns a cell phone (Feroz et al., 2021). However, cell phone ownership, meaning full ownership with control over when and by whom the phone is accessed, is not reflected in our review, and this may partly be due to affordability (Chandra et al., 2014). Evidence from previous reviews of adolescent MMH has shown that mobile phone ownership overcomes privacy concerns, which is an important factor for adolescents (Feroz et al., 2021). In contrast, our findings show that phone ownership may actually be low among adolescents in LMICs, and particularly among those who might benefit from MMH interventions. Sharing phones with family members or parents, and guardians having access to the adolescent's phone, was a common finding in our review, resulting in limited privacy for the adolescent. The privacy issue highlights the challenges adolescents may face with little autonomy and confidentiality in their homes. The issue of privacy is not unique to LMICs. These findings highlight the importance of MMH interventions being discreet and easy to hide, to avoid the potential stigma attached to experiencing mental health challenges. A review conducted by Grist et al. (2017), appraising the efficacy of MMH apps for adolescents, all developed in high-income countries, also reported privacy concerns, related not to parents but to peers having access to their mobile phones. According to Ben-Zeev et al. (2017), mobile health apps can be password protected and can automatically log out after minutes of no use. Again, the importance of involving adolescents in the design and development of the MMH intervention is evident. The role of parents and caregivers should also be considered in overcoming these challenges. Adolescents value anonymity and confidentiality, and these must be considered, but there needs to be careful consideration of safety issues in case of an emergency, for example a user showing signs of suicidal intentions. Sufficient IT resources are needed to handle such cases securely while maintaining anonymity and confidentiality where possible. Moreover, a dedicated counsellor should be available to offer support through counselling and provide important information to help users cope while waiting for a referral to mental health professionals. Data costs appear to be the most common barrier to adolescents using MMH interventions in LMICs; this was reported across all six studies in our review. It is important to note that limited access to the internet may deter users from engaging with MMH interventions. Ben-Zeev et al.
(2017) found that participants may only engage with mobile health services if they are free or if Wi-Fi access is available. On the other hand, internet access was not reported as a barrier in high-income countries. Instead, young people in Australia, the United States and Hong Kong reported that they are online almost constantly (Ridout & Campbell, 2018). This finding warrants further research to develop context-specific interventions that account for local implementation challenges by involving adolescents across all stages of the design and development of any MMH interventions where they are end-users. Furthermore, while adolescents' mobile phone use would allow access to MMH, constant mobile phone use has risks. Research by Girela-Serrano et al. (2022) investigating the impact of mobile phone use on children and adolescents shows an association between mobile phone use and depression, anxiety and behavioural problems. They add that 'problematic smartphone use' may be responsible for poor mental health (Girela-Serrano et al., 2022, p. 2). Future research is needed among LMIC adolescents to evaluate whether the potential benefits of greater smartphone access for adolescents, including access to health services and support, outweigh the potential risks observed in high-income countries.

Strengths and limitations

A strength of this study is that we followed established guidelines for scoping reviews by Arksey and O'Malley (2005). Two independent reviewers were involved in the appraisal of the research studies, and the findings contribute to the limited research available on MMH for adolescents in LMICs. One limitation of this research is that it is possible that other relevant publications were not included, because only four electronic databases were searched for peer-reviewed sources. Moreover, sources written in languages other than English and grey literature were excluded. Therefore, our conclusions are based on a very small set of studies, and any conclusions offered are tentative.

Implications for future research

There is a lack of research on adolescent MMH in low-income countries. All six studies in our review were conducted in middle-income countries. Overall, the review points to the need for more research on adolescent MMH in LMICs. New theoretical and conceptual frameworks are needed to guide the design and development of MMH interventions. Future research should clearly state what was used and consider the medicolegal and ethical frameworks needed to support implementation. The studies included in the review were very brief, and thus the results provided were based on short-term outcomes. Therefore, future research should focus on longitudinal studies to assess the long-term effects of MMH interventions. While MMH is a promising tool for improving adolescent mental health, there are potential risks associated with its use. Therefore, more attention should be devoted to minimum safety standards and data protection.
Conclusion

This review identified only six studies, from five LMICs, that reported the development or use of mobile health interventions for adolescent mental health. The results indicate that mobile technologies for adolescents are a promising tool in LMICs. However, the results highlight the importance of involving adolescents in the design and development process of MMH interventions, particularly around issues of access and privacy. Future research needs to broaden the scope to include preferred content highlighted by adolescents and consider the different aspects of mental health literacy, encouraging messages, and coping tools, among others. While MMH interventions are promising and show potential for addressing gaps in mental health care, they should not be considered the only solution for addressing the global burden of mental disorders. Instead, they should be used in combination with existing ways of providing care to complement and reinforce the services provided by therapists and trained health care professionals.

Table 1. Study characteristics. Key findings reported across the included studies:
- Participants found the MMH intervention beneficial. The preferred time for delivering messages was in the evenings and after a stressful event during a crisis. Participants preferred encouragement and company, coping strategies, and individualised messages.
- A blended self-help format was acceptable to adolescents. However, limitations, including infrastructure or the ability to connect with other users, were identified.
- Participants found the gamified and narrative formats to be engaging. Participants particularly enjoyed features such as user choice, rewards and quizzes.
- The development of smartphone-based mental health resources for Jamaican adolescents is feasible. However, barriers such as stigma and embarrassment are more likely to hinder help-seeking. Things to consider in the development process include incorporating culturally relevant language and user preferences.
- The development and implementation of the MMH intervention have the potential to increase access to mental health care among a vulnerable population.

MMH: mobile mental health; SMS: short message service.
Using physics-informed neural networks to compute quasinormal modes

In recent years there has been an increased interest in neural networks, particularly with regard to their ability to approximate partial differential equations. In this regard, research has begun on so-called physics-informed neural networks (PINNs) which incorporate into their loss function the boundary conditions of the functions they are attempting to approximate. In this paper, we investigate the viability of obtaining the quasi-normal modes (QNMs) of non-rotating black holes in 4-dimensional space-time using PINNs, and we find that it is achievable using a standard approach that is capable of solving eigenvalue problems (dubbed the eigenvalue solver here). In comparison to the QNMs obtained via more established methods (namely, the continued fraction method and the 6th-order Wentzel, Kramer, Brillouin method) the PINN computations share the same degree of accuracy as these counterparts. In other words, our PINN approximations had percentage deviations as low as $(\delta\omega_{_{Re}}, \delta\omega_{_{Im}}) = (<0.01\%, <0.01\%)$. In terms of the time taken to compute QNMs to this accuracy, however, the PINN approach falls short, leading to our conclusion that the method is currently not to be recommended when considering overall performance.

I. INTRODUCTION

In recent years there has been an increased interest in the use of neural networks (NNs) as functional approximators [1-3]. The interest lies in the fact that NNs are versatile, as demonstrated by their success in various applications such as natural language processing, image recognition and, more recently, scientific computing [4-6]. In scientific computing, they have been shown to be robust and data-efficient solvers of the partial differential equations that govern diverse systems studied in mathematics, science and engineering [4]. In general, NNs can be trained once and used in a variety of situations that are within the scope of the problem they were trained on. The advantage of applying a trained NN is that it expedites the computation of later solutions whereas, by contrast, more traditional numerical approximating methods would require an inefficient process beginning from scratch each time a solution is derived. Furthermore, NNs are also natively parallelisable, which adds to their higher computational efficiency compared to other numerical approximations.

A new technique has recently been developed to assist in creating NNs that can act as functional approximators, which takes inspiration from boundary-type problems where the boundary conditions of the function are used to solve for the underlying function as-is; namely, physics-informed neural networks (PINNs) [4,7]. In this regard, we are interested in determining whether these types of NNs could be used to compute the quasi-normal modes (QNMs) of black holes. The QNMs of black holes have been studied for many years and it is well-known that they are correlated to the parameters of the black holes that generate them; as such, they act as a telltale sign to probe the properties of black holes [8-10]. Over the years, numerous techniques have been used to determine the numerical values of the QNMs of black holes using the radial equations that govern the perturbations of black holes [9]. Some examples are the Wentzel, Kramer, Brillouin (WKB) method, the asymptotic iteration method (AIM), and the continued fraction method (CFM) [11-13].
Although all of these approaches have been successful in solving the radial equations of black holes to determine the numerical values of the QNMs, they do have computational limitations [8]. The WKB method, in particular, becomes progressively difficult to apply when more accurate results are needed, since achieving this requires painstaking derivations of higher-order approximations. In this work, we intend to show that PINNs can potentially supplement the extant techniques as a new alternative method for obtaining the black hole QNMs, with its unique advantages and limitations. Furthermore, we will compare the accuracy of PINNs to the already established methods and test their generalisability when applied to black hole perturbation equations.

Our motivation for using the equations of QNMs to test the usefulness of PINNs is that the equations that govern the QNMs are based on only a few parameters, namely a black hole's physical properties, and their boundary conditions are well-defined for the system. As such, the boundary conditions act as a regularisation mechanism that sufficiently limits the space of admissible solutions and contributes to the NN's stability [4]. Furthermore, in astrophysical circles, there has been an increased interest in black hole QNMs given the recent landmark detections of gravitational waves at the VIRGO and LIGO detectors [14,15].

The paper is set out in the following manner. In the next section, we describe the equations that govern the QNMs for various space-times. In section III we present the currently accepted methods for determining the QNMs and proceed to touch on the new PINN approach in section IV. Finally, in sections V and VI, we discuss the results obtained from applying PINNs and compare them to the QNMs obtained from the canonical methods.

II. THE RADIAL PERTURBATION EQUATIONS OF BLACK HOLES

In this section, we will derive the equations required to determine the numerical value of QNMs, beginning with the simplest space-times and then building up to more complex ones, which will eventually be encoded into the numerical methods. We begin with the Schwarzschild metric.

A. The asymptotically flat Schwarzschild solution

We consider scalar-type perturbations (and later, electromagnetic, Dirac and gravitational perturbations); then, in order to derive the radial equations required to determine the QNMs, we begin by considering the equation of motion, which is given by the Klein-Gordon equation [16,17]:

$$\frac{1}{\sqrt{-g}}\,\partial_\mu\!\left(\sqrt{-g}\, g^{\mu\nu}\, \partial_\nu \Phi\right) - m^2 \Phi = 0, \tag{2.1}$$

where $\Phi$ is a scalar field with mass $m$ perturbing the black hole's space-time as given by the metric $g$. In the case of the Schwarzschild black hole, the metric is given as:

$$ds^2 = -f\, dt^2 + \frac{1}{f}\, dr^2 + r^2\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right), \tag{2.2}$$

where $f = 1 - 2M/r$ is the metric function, with $M$ and $r$ representing the mass of the black hole and the radial distance from the centre of the black hole, respectively. The last two terms on the right-hand side of this equation represent the metric of a 2-sphere [16]. As $r \to \infty$, we expect to recover a weak-field approximation of the metric wherein the components of the metric tensor can be decomposed into the flat Minkowski metric tensor $\eta_{\mu\nu}$ plus a small perturbation $|h_{\mu\nu}| \ll 1$; that is: $g_{\mu\nu} \approx \eta_{\mu\nu} + h_{\mu\nu}$ [16].
Considering the massless form of the Klein-Gordon equation, where $m = 0$ in equation (2.1), and plugging into it the metric given in equation (2.2), we obtain:

$$-\frac{1}{f}\,\partial_t^2 \Phi + \frac{1}{r^2}\,\partial_r\!\left(r^2 f\, \partial_r \Phi\right) + \frac{1}{r^2 \sin\theta}\,\partial_\theta\!\left(\sin\theta\, \partial_\theta \Phi\right) + \frac{1}{r^2 \sin^2\theta}\,\partial_\varphi^2 \Phi = 0. \tag{2.3}$$

In this explicit form, we can derive the equation of massless scalar fields in the Schwarzschild background in terms of the radial coordinate $r$ via a separation of variables, $\Phi = \sum_{\ell, m} r^{-1}\, \psi(r)\, Y_{\ell m}(\theta, \varphi)\, e^{-i\omega t}$ [18]. By mapping the resulting one-dimensional differential equation onto an infinite domain given by a tortoise co-ordinate, $x$, we find [11,19]:

$$\frac{d^2\psi}{dx^2} + \left[\omega^2 - V(r)\right]\psi = 0, \tag{2.4}$$

where:

$$V(r) = f(r)\left[\frac{\ell(\ell+1)}{r^2} + \frac{2M}{r^3}\right], \tag{2.5}$$

$$x = \int \frac{dr}{f(r)}. \tag{2.6}$$

Here $n$, $\ell$, and $m$ are the principal, multipole, and azimuthal numbers, respectively [9]. Equation (2.4) defines an eigenvalue problem which is non-Hermitian, unlike the Schrödinger equation [10]. For asymptotically flat astrophysical black holes, the eigenfunctions, $\psi$, that solve this equation have asymptotic behaviour governed by [13]:

$$\psi(x) \sim e^{\pm i\omega x}, \quad x \to \pm\infty. \tag{2.7}$$

Transforming from the infinite domain of the tortoise co-ordinate to the finite domain of a new variable, $\xi = 1 - 2M/r$, it can be shown that equation (2.4) takes the transformed form given in Ref. [13], where $\chi(\xi)$ is a complex-valued scale factor; the QNM boundary conditions have been incorporated into that transformed equation (2.8). The importance of this transformation is that it maps the domain from one that is infinite, i.e. $-\infty < x < +\infty$, to one that is finite, i.e. $0 \le \xi < 1$. As such, it is now possible to numerically solve the perturbation equation, since the domain is now finite and the QNM boundary conditions are implicitly accounted for.

For electromagnetic field perturbations of Schwarzschild black holes, the same Schrödinger-like radial equations are obtained by following the same procedure as for the massless scalar fields. However, in this case, the equation of motion considered is the source-free Gauss-Ampère law of Maxwell's equations [16,20]:

$$\nabla_\nu F^{\mu\nu} = \frac{1}{\sqrt{-g}}\,\partial_\nu\!\left(\sqrt{-g}\, F^{\mu\nu}\right) = 0,$$

where $F^{\mu\nu}$ is the electromagnetic field tensor. Applying the components of the electromagnetic field tensor $F^{\mu\nu}$, we can determine the radial perturbation equation from Maxwell's equations; we can then simplify the equations with indices $\mu = \theta$ and $\mu = \varphi$ to obtain the Schrödinger-like perturbation equations. In short, we arrive at a wave equation for $a_0(t, r)$, which represents the electromagnetic field perturbations. Thus, if we have $a_0(t, r) = a_0(r)e^{i\omega t}$, then converting to tortoise co-ordinates we retrieve equation (2.4), where $\psi(r) = a_0(r)$ and $V(r) = \ell(\ell+1) f(r)/r^2$ is the effective potential of an asymptotically flat Schwarzschild black hole perturbed by an electromagnetic field. For gravitational perturbations, the equations have the same form except for the effective potential $V(r)$. Refs. [20,21] outline concisely the steps for arriving at the wave equations for these direct metric perturbations on a Schwarzschild black hole.

B. The Schwarzschild (anti)-de Sitter solution

We shall also consider asymptotically curved space-times that are solutions to Einstein's equations with a non-zero cosmological constant. The cosmological constant, denoted by $\Lambda$, encodes the curvature of space-time via the relation $\Lambda = \pm 3/a^2$, where $a$ is the cosmological radius [22,23]. The metric in this case takes the same form as equation (2.2), with the metric function generalised as indicated below (equation (2.14)). With this metric as a starting point, the radial perturbation equation derived for a 4-dimensional (anti)-de Sitter Schwarzschild black hole is of the same form as equation (2.4) but with a more general effective potential (equation (2.15)) given in Ref. [13], where $f(r) = 1 - 2M/r - (\Lambda r^2)/3$ is the metric function for (anti)-de Sitter Schwarzschild space-times and $s = 0, 1/2, 1$ and $2$ denote the spins of the scalar, Dirac, electromagnetic and gravitational fields, respectively.
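As a side note on the practical side of these definitions, the quantities that enter the near extremal analysis of the next subsection (the horizons of the Schwarzschild-de Sitter metric function and the surface gravity) are simple to evaluate numerically. The sketch below is our own illustrative aid, not code from the paper; it uses only the metric function quoted above and the standard surface gravity definition $\kappa = \tfrac{1}{2} f'(r_b)$, with parameter values chosen arbitrarily.

```python
import numpy as np

def sds_horizons(M=1.0, Lam=1e-4):
    """Roots of f(r) = 1 - 2M/r - Lam*r^2/3 for Schwarzschild-de Sitter.

    Multiplying f(r) = 0 by -3r/Lam gives the cubic r^3 - (3/Lam) r + 6M/Lam = 0.
    """
    coeffs = [1.0, 0.0, -3.0 / Lam, 6.0 * M / Lam]
    roots = np.roots(coeffs)
    real = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
    r0, rb, rc = real  # negative root, event horizon, cosmological horizon
    return r0, rb, rc

def surface_gravity(rb, M=1.0, Lam=1e-4):
    """kappa = f'(r_b)/2, with f'(r) = 2M/r^2 - 2*Lam*r/3."""
    return 0.5 * (2.0 * M / rb**2 - 2.0 * Lam * rb / 3.0)

r0, rb, rc = sds_horizons()
print(f"r_b = {rb:.6f}, r_c = {rc:.6f}, kappa = {surface_gravity(rb):.6e}")
```

For small Lam this recovers the expected limits, r_b close to 2M and r_c close to sqrt(3/Lam), which is a quick sanity check before specialising to the near extremal regime where r_b and r_c approach each other.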
C. Near extremal Schwarzschild and Reissner-Nordström-de Sitter solutions

A final case we shall consider is that of Reissner-Nordström-de Sitter black holes, albeit in the near extremal case. The metric of a Reissner-Nordström-de Sitter black hole is of the same form as equation (2.2), with [22]:

$$f(r) = 1 - \frac{r_s}{r} + \frac{r_Q^2}{r^2} - \frac{\Lambda r^2}{3}, \tag{2.16}$$

where $r_s = 2M$ (the Schwarzschild radius) and $r_Q^2 = Q^2/4\pi\epsilon_0$. Generally, when solving radial perturbation equations, the nature of the effective potentials precludes applying a direct, analytical approach to deriving exact QNMs [11]. However, in special cases, such as this one involving near extremal Schwarzschild and Reissner-Nordström-de Sitter black holes, the effective potentials can be transformed to yield differential equations with known analytic solutions [9].

To obtain the effective potentials of non-rotating black holes in the near extremal (anti)-de Sitter case, we consider the relevant metric function, $f(r) = 1 - 2M/r - \Lambda r^2/3$. The solutions to $f(r) = 0$ are $r_b$ and $r_c$, which are the black hole's event horizon and the space-time's cosmological radius, respectively (where $r_c > r_b$). For $r_0 = -(r_b + r_c)$, the metric function can be given as [23]:

$$f(r) = \frac{\Lambda}{3r}\,(r - r_b)(r_c - r)(r - r_0), \tag{2.17}$$

where $a^2 = r_b^2 + r_b r_c + r_c^2$ and $2M a^2 = r_b r_c (r_b + r_c)$. The surface gravity, $\kappa$, associated with the black hole event horizon $r = r_b$ is defined as [23]:

$$\kappa = \frac{1}{2}\left.\frac{df}{dr}\right|_{r = r_b}. \tag{2.18}$$

In the near extremal de Sitter case, the cosmological horizon $r_c$ of the space-time is very close (in the co-ordinate $r$) to the black hole horizon $r_b$, so that $(r_c - r_b)/r_b \ll 1$, and the corresponding approximations of Ref. [23] apply (equation (2.19)). Also, since the domain of $r$ is within $(r_b, r_c)$ and $r_b \sim r_c$, we find that $r - r_0 \sim r_b - r_0 \sim 3r_b$. In turn, the metric function equation (2.17) becomes:

$$f(r) \approx \frac{(r - r_b)(r_c - r)}{r_b^2}. \tag{2.20}$$

With this new form of the metric, the relation between the tortoise co-ordinate and the radial co-ordinate (2.6) reduces to:

$$x = \frac{1}{2\kappa}\,\ln\!\left(\frac{r - r_b}{r_c - r}\right). \tag{2.21}$$

Substituting this expression for $r$ into the $f(r)$ equation (2.17), we find the expression for $f(x)$ as [23]:

$$f(x) = \frac{(r_c - r_b)^2}{4 r_b^2}\,\frac{1}{\cosh^2(\kappa x)}. \tag{2.22}$$

With this metric function, the effective potential of a near extremal Schwarzschild-de Sitter black hole is an inverted Pöschl-Teller potential [23]:

$$V(x) = \frac{V_0}{\cosh^2(\kappa x)}, \tag{2.23}$$

where $V_0 = \kappa^2 \ell(\ell+1)$ for massless scalar and electromagnetic perturbations. Ref. [24] determined the analytic expressions of the QNM eigenfunctions and eigenfrequencies [22-24], with the eigenfrequencies given by

$$\frac{\omega}{\kappa} = \sqrt{\frac{V_0}{\kappa^2} - \frac{1}{4}} - i\left(n + \frac{1}{2}\right),$$

where $\xi^{-1} = 1 + \exp(-2\kappa x)$ and $\beta = -1/2$.

Extending from near extremal Schwarzschild-de Sitter black holes, Ref. [22] showed that an inverted Pöschl-Teller potential can also be used to represent the effective potential of Reissner-Nordström black holes perturbed by scalar fields. This is due to the fact that, for any de Sitter black hole in the near extremal limit, the metric function $f(r)$ takes the form given in Ref. [22] in terms of $\delta = (r_2 - r_1)/r_1$, where $\kappa_1$ is the surface gravity at the horizon, $r_1$ and $r_2$ are two consecutive positive roots of $f(r)$, and $x$ is the tortoise coordinate whose domain lies within $(r_1, r_2)$. For both the Schwarzschild and Reissner-Nordström-de Sitter cases, the terms $r_1$ and $r_2$ are the event and cosmological horizons, respectively, with $r_2 > r_1$. In the near extremal limit where $r_2 \sim r_1$, the metric function for a near-extremal Reissner-Nordström-de Sitter black hole takes the same form as equation (2.22). Therefore, when considering the near extremal limit, non-rotating black holes share the same mathematical expression for the metric function, which in turn results in the same expression for the effective potential, equation (2.23).
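As a quick numerical illustration of the closed form quoted above, the snippet below evaluates the near extremal QNM frequencies from the Pöschl-Teller result of Refs. [22-24], with $V_0 = \kappa^2 \ell(\ell+1)$ for scalar and electromagnetic perturbations. The helper function and its defaults are our own sketch, not the paper's code.

```python
import numpy as np

def pt_qnm(kappa, ell, n=0):
    """Quasinormal frequency of an inverted Poschl-Teller potential.

    omega = kappa * [ sqrt(V0/kappa^2 - 1/4) - i (n + 1/2) ],
    with V0/kappa^2 = ell*(ell + 1) for scalar/electromagnetic
    perturbations of near extremal de Sitter black holes [22-24].
    """
    v0_over_k2 = ell * (ell + 1)
    return kappa * (np.sqrt(v0_over_k2 - 0.25) - 1j * (n + 0.5))

# Fundamental (n = 0) modes, in units of the surface gravity kappa.
for ell in (1, 2, 3):
    print(ell, pt_qnm(kappa=1.0, ell=ell))
```

Because the real part grows with the multipole number while the imaginary part depends only on the overtone number, these frequencies provide a convenient analytic benchmark against which the NN approximations of section V can be validated.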
A. Ferrari and Mashhoon approach

Ref. [24] showed the connection between the QNMs of black holes and the bound states of inverted black hole effective potentials. The effective potential, denoted by $U$ in Ref. [24], is parametrised by some constant $p$ and is invariant under the combined transformations $p \to p' = \Pi(p)$ and $x \to -ix$. By considering $x \to -ix$, the Schrödinger-like perturbation equation (2.4) transforms into a bound-state problem, where $\varphi(x; p) = \psi(-ix; p')$ and $\Omega(p) = \omega(p')$. The QNM boundary conditions then become: $\varphi \to \exp(\mp\Omega x)$, as $x \to \pm\infty$. After solving this problem to find $\Omega$ and $\varphi$, the corresponding QNMs can then be found using the inverse transformations. The values of $\omega$ that are determined from the bound states $\Omega$ are known as proper QNMs. Ref. [24] demonstrated this approach using an inverted Pöschl-Teller potential to approximate the effective potential of a Schwarzschild black hole. The former was used because the bound states of a Pöschl-Teller potential are well-known and could then provide approximate analytic formulas for the QNMs of the Schwarzschild black hole [24].

B. WKB Method

The WKB method is a semi-analytic technique that has been used to approximately solve the radial equation of black hole perturbations since 1985, as first proposed by Schutz and Will [25], who computed the QNMs of an asymptotically flat Schwarzschild black hole. It had already been established as an approximating technique for solving the time-independent Schrödinger equation.

C. Continued Fraction Method

In a 1985 paper [12], Leaver put forward the method of continued fractions (previously used to compute the electronic spectra of the hydrogen molecule ion) to compute the QNM spectra of both stationary and rotating black holes. Overall, this approach was found to be very accurate for higher-order $n$ modes, especially after the improvement made by Nollert [26]. It has been used in the context of Schwarzschild, Kerr and Reissner-Nordström black holes [8,12,27].

D. Asymptotic Iteration Method

The AIM is another semi-analytic technique for solving black hole perturbations. In the context of black hole QNMs, this approach was developed by Ref. [13], who made improvements to a more traditional algorithm to make it markedly more efficient. In Ref. [13] the improved AIM was used to compute QNMs for cases involving (A)dS, Reissner-Nordström and Kerr black holes. In later research, it was used to calculate QNMs of general dimensional and non-singular Schwarzschild black holes [28,29]. Compared to other extant approximation techniques, the improved AIM was shown to be as accurate as Leaver's CFM [13].

IV. PHYSICS-INFORMED NEURAL NETWORKS

As briefly recapped above, several techniques already exist for solving radial equations in order to obtain the QNMs of black holes. To supplement them, we now introduce PINNs as an alternative to these methods. Firstly, we introduce the idea of deep neural networks and how they can act as universal function approximators. We then introduce PINNs and how they can be used to solve ordinary differential equations (ODEs) and partial differential equations (PDEs).

A. Deep Neural Networks

Deep neural networks are a system of interconnected computational nodes loosely based on biological neural networks and, mathematically, can be formulated as compositional functions [6,30]. In contrast to shallow neural networks, which are networks with just a single hidden layer, these NNs are composed of two or more hidden layers [3].
In many applications, the latter are favoured because they are capable of replicating the complexity of functions and, at the same time, generalise to unseen data better than shallow models [31]. Of the several available types of structures (or architectures) of deep neural networks, the simplest and most common one is the feed-forward neural network (FNN). An FNN is generally structured layer by layer as [30]:

$$\mathcal{N}^{\ell}(x) = \sigma\!\left(W^{\ell}\,\mathcal{N}^{\ell-1}(x) + b^{\ell}\right),$$

where $\sigma$ denotes nonlinear activation functions that operate elementwise on $W^{\ell}\,\mathcal{N}^{\ell-1}(x) + b^{\ell}$, with weight matrices $W^{\ell}$ and bias vectors $b^{\ell}$. Examples of frequently used activation functions are the hyperbolic tangent (tanh) and the logistic sigmoid $1/(1 + e^{-x})$. Given that these are nonlinear functions, the values at each of the output nodes are nonlinear combinations of the values at the nodes in the hidden and input layers [32].

Key seminal research on NNs, such as Refs. [33-35], has shown that deep neural networks are universal function approximators. That is to say, when NNs have a sufficient number of neurons they can approximate any function and its partial derivatives [30], though in practice this is constrained by the limit on the size of NNs that can be set up before they lead to overfitting. In such cases, the NN model gives the illusion of a good model that captures the underlying pattern in data, while a true test of its accuracy, by means of exposing it to an unseen test dataset, reveals a fallible model that gives poor predictions and a high generalisation error [3,30].

In general, training deep NNs entails minimising a loss function that measures the deviation of its approximations from the expected solutions. Analogous to linear least squares regression, the loss function is minimised by tuning the many parameters in the deep neural network (which are the elements of its weight matrices and bias vectors) with the effect of steering its approximations closer to the target functions. Mathematically, the weights and biases are tuned according to the gradient-descent update equations:

$$W^{\ell} \leftarrow W^{\ell} - \eta\,\frac{\partial C}{\partial W^{\ell}}, \qquad b^{\ell} \leftarrow b^{\ell} - \eta\,\frac{\partial C}{\partial b^{\ell}},$$

where $C$ is the loss function and $\eta$ is the learning rate. Ultimately, the Adam optimiser is employed in our investigation of PINNs. It is a standard optimisation algorithm that extends from classical methods of stochastic gradient descent [3,36].

PINNs are neural networks that are constrained to satisfy given differential equations and their associated initial/boundary conditions [37]. Autodiff is a technique that is used in PINNs to compute the partial derivatives of the NN approximations and thus embed the governing PDEs and associated boundary conditions in the loss function. Given that it facilitates "mesh-less" numerical computations of derivatives, it endows PINNs with several advantages over traditional numerical discretisation approaches for solving PDEs (such as the finite difference and finite element methods) that can be computationally expensive due to complex mesh-generation [6,30,38]. For example, Refs. [6,30] demonstrated the advantage of applying NN-aided techniques over traditional mesh-based techniques to approximate solutions with steep gradients. The latter give rise to unphysical oscillations when the meshes have low resolution, hence higher resolutions are required to remove these undesirable oscillations, which can be prohibitively expensive and lead to excessive execution times [6]. Remarkably, the same level of accuracy that is achieved by higher resolution meshes (in mesh-based schemes) can be achieved more efficiently in PINNs. In such cases, PINNs could be a viable alternative for solving PDEs. It is worth noting that derivatives of Padé approximations can be utilised as an alternative to PINNs and autodiff, which is a part of the PINN algorithm.
They have indeed been applied in extensions of the WKB method for computing black hole QNMs [28]; therefore, the focus has been to compare their performance (and that of other established approaches for black hole QNMs) with the novel PINNs in this physical context.

The basic structure of PINNs can be divided into two components [30,38]: a neural network surrogate on the one hand, and the physics constraints (the governing equations, initial/boundary conditions and regularisation terms) that enter the loss function on the other.

In general, to expand on the methodology, PINNs solve PDEs that are parameterised by $\lambda$, satisfied by a dependent variable [39] $u(x)$, and expressed generally as [30]:

$$f\!\left(x;\,\frac{\partial u}{\partial x_1}, \ldots, \frac{\partial u}{\partial x_d};\,\frac{\partial^2 u}{\partial x_1 \partial x_1}, \ldots;\,\lambda\right) = 0, \quad x \in \Omega.$$

Along with a given PDE are its boundary conditions:

$$\mathcal{B}(u, x) = 0, \quad x \in \partial\Omega,$$

where $\mathcal{B}(u, x)$ stands for Dirichlet, Neumann, Robin or periodic boundary conditions. Note that both steady-state and dynamic systems can be solved using PINNs; for the latter, time $t$ is considered to be a special component of $x$, and $\Omega$ contains the time domain. As such, initial conditions are treated as a type of Dirichlet boundary condition on the spatio-temporal domain [30]. PINNs can also be applied to inverse problems, where the goal is to determine $\lambda$ given a dataset (which can be small) of $u(x)$ at given points $x \subset \Omega$.

The PINN algorithm for solving PDEs

PINNs follow these steps when solving forward, inverse and eigenvalue problems [30]:

1. Build a neural network $\hat{u}(x; \theta)$ with parameters $\theta$: The neural network $\hat{u}(x; \theta)$ takes in $x$ as input and is a surrogate of the function $u(x)$ that satisfies the governing PDE and boundary/initial conditions, while $\theta = \{W^{\ell}, b^{\ell}\}_{1 \le \ell \le L}$ is the set of all weight matrices and bias vectors in the neural network [30].

2. Specify the training dataset $\mathcal{T}$: In the case of forward and eigenvalue problems, we specify a dataset of "unlabelled" randomly distributed points in the domain (also known as residual points). The points within $\mathcal{T}_f \subset \Omega$ are used to restrict the NN approximation $\hat{u}(x; \theta)$ to satisfy the physics imposed by the PDE. Similarly, the boundary points of the spatio-temporal domain $\mathcal{T}_b \subset \partial\Omega$ are used to restrict the NN to satisfy the physics represented by the initial/boundary conditions. For inverse problems, since $\lambda$ is missing from the PDE, a "labelled" dataset of $u(x)$, denoted by $\mathcal{T}_o$, is required in addition.

3. Specify a loss function by adding the weighted Euclidean norm of the PDE, boundary conditions, and other regularisation functions: In general, the loss function of PINNs will be given as [30]:

$$\mathcal{L}(\theta; \mathcal{T}) = w_f\,\mathcal{L}_f(\theta; \mathcal{T}_f) + w_b\,\mathcal{L}_b(\theta; \mathcal{T}_b) + w_r\,\mathcal{L}_r(\theta; \mathcal{T}_r), \tag{4.5}$$

where $w_f$, $w_b$, $w_r$ are weights and, again, $\theta = \{W^{\ell}, b^{\ell}\}_{1 \le \ell \le L}$, with $\ell$ specifying a hidden layer as defined in section IV A [5]. Additionally, $\mathcal{L}_f$ and $\mathcal{L}_b$ are loss terms due to the PDE and initial/boundary conditions, respectively:

$$\mathcal{L}_f(\theta; \mathcal{T}_f) = \frac{1}{|\mathcal{T}_f|} \sum_{x \in \mathcal{T}_f} \left\| f\!\left(x;\,\frac{\partial \hat{u}}{\partial x_1}, \ldots;\,\hat{\lambda}\right) \right\|_2^2, \qquad \mathcal{L}_b(\theta; \mathcal{T}_b) = \frac{1}{|\mathcal{T}_b|} \sum_{x \in \mathcal{T}_b} \left\| \mathcal{B}(\hat{u}, x) \right\|_2^2,$$

where the circumflex in $\hat{u}$ and $\hat{\lambda}$ denotes that these are the NN's approximations of the dependent variable and of any unknown PDE parameters of inverse problems. The loss term $\mathcal{L}_r$ represents regularisation functions in general. For example, for forward problems this term is left out, while for inverse problems it is the error between the NN approximations and a "labelled" dataset of $u(x)$:

$$\mathcal{L}_r(\theta; \mathcal{T}_o) = \frac{1}{|\mathcal{T}_o|} \sum_{x \in \mathcal{T}_o} \left\| \hat{u}(x) - u(x) \right\|_2^2.$$

4. Train the FNN towards the optimal weights and biases $\theta^*$ by minimising the loss function $\mathcal{L}(\theta; \mathcal{T})$: The goal of training is to optimise $\theta$, $\hat{u}$ and $\hat{\lambda}$ such that we have $\theta^* = \arg\min_\theta \mathcal{L}(\theta; \mathcal{T})$. Note that the loss function is highly nonlinear and nonconvex with respect to $\theta$; thus, gradient-descent optimisers such as Adam are often used during training. The disadvantage of a nonconvex optimisation problem is the difficulty of finding unique solutions compared to traditional numerical methods of solving PDEs [30].

The DeepXDE package

The DeepXDE package is customised primarily for constructing PINN models.
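Before turning to the toy problem discussed next, the following minimal sketch shows what an inverse Schrödinger-like setup in the spirit of equation (2.4) can look like in DeepXDE, with a complex frequency treated as two external trainable variables. This is our own illustration, not code from the paper: the potential, the anchor data, the truncated domain and the hyperparameters are placeholder assumptions.

```python
import numpy as np
import deepxde as dde
from deepxde.backend import tf

# Trainable real and imaginary parts of omega; the initial guesses are ours.
omega_re = dde.Variable(1.0)
omega_im = dde.Variable(-0.3)

def ode(x, psi):
    """Residual of psi'' + (omega^2 - V)psi = 0 split into real/imaginary parts."""
    u_xx = dde.grad.hessian(psi, x, component=0)   # (Re psi)''
    w_xx = dde.grad.hessian(psi, x, component=1)   # (Im psi)''
    u, w = psi[:, 0:1], psi[:, 1:2]
    V = 0.5 / tf.cosh(x) ** 2                      # placeholder potential (assumption)
    a = omega_re ** 2 - omega_im ** 2 - V          # Re(omega^2 - V)
    b = 2.0 * omega_re * omega_im                  # Im(omega^2)
    return [u_xx + a * u - b * w, w_xx + a * w + b * u]

geom = dde.geometry.Interval(-1.0, 1.0)            # finite domain, as in the text

# For an inverse problem, a small "labelled" set of psi values anchors the solution;
# the anchor points and values below are dummies purely for illustration.
anchor_x = np.array([[0.0], [0.5]])
bc_re = dde.icbc.PointSetBC(anchor_x, np.array([[1.0], [0.8]]), component=0)
bc_im = dde.icbc.PointSetBC(anchor_x, np.array([[0.0], [0.1]]), component=1)

data = dde.data.PDE(geom, ode, [bc_re, bc_im], num_domain=100)
net = dde.nn.FNN([1, 20, 20, 20, 2], "tanh", "Glorot uniform")  # 3 hidden layers
model = dde.Model(data, net)
model.compile("adam", lr=1e-3,
              external_trainable_variables=[omega_re, omega_im])
tracker = dde.callbacks.VariableValue([omega_re, omega_im], period=1000)
model.train(iterations=30000, callbacks=[tracker])
```

The structure mirrors the description in the text: dde.grad supplies the autodiff derivatives, the anchor terms and the ODE residual populate the loss of equation (4.5), and the callback plays the role of the dde.callback mentioned below for tracking omega during training.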
To help elaborate on the DeepXDE package, we consider here a toy problem that was discussed in Ref. [13], which involves the same Schrödinger-like differential equation as equation (2.4) but with an inverted symmetric Pöschl-Teller potential $V_{PT}(x)$ (equation (4.10)) [13]. In the tortoise co-ordinate $x$, the domain of our problem is infinite, i.e. $x \in (-\infty, +\infty)$, where the QNM boundary conditions are given by equation (2.7). Via quasi-exactly solvable theory, Ref. [40] found the exact solutions of equation (2.4) with $V = V_{PT}$, where the eigenfunctions involve $\chi_n$, a polynomial of degree $n$ in $\sinh(x)$, and $n \in \mathbb{Z}_0^+$ [13].

As a first step to finding the approximate solutions using PINNs, we need to change to a new coordinate $y = \tanh(x)$, which maps the infinite domain $-\infty < x < +\infty$ to the finite domain $-1 < y < +1$, so that equation (2.4) becomes the transformed equation (4.13) of Ref. [13]. In this form, numerical implementation of this problem in PINNs becomes possible.

We test the feasibility of solving equation (4.13), given as an inverse problem, using DeepXDE. We specify $\omega$ as an unknown to be tuned while the PINN undergoes training. The total loss function $\mathcal{L}(\theta; \mathcal{T})$ of the PINN, in this case, is a weighted sum of the squared Euclidean ($L_2$-) norm of the physical constraints, similar to equation (4.5) [30]. The network consists of one input node for the co-ordinate $y$, while the output layer has two output nodes for the real and imaginary parts of the approximate solution $\hat{\psi}$.

In building PINNs, the code we used mirrors the two-component structure of PINNs discussed in section IV B. The code is fairly intuitive, as it is a high-level representation that closely resembles the mathematical formulation [30]. Beginning with the physics constraints, our ODE is defined using the DeepXDE functions for executing the first and second-order derivatives via autodiff; that is, dde.grad.jacobian and dde.grad.hessian, respectively. We define $\omega$ with the function tf.Variable and have represented it with $\hat{\omega}_{Re}$ and $\hat{\omega}_{Im}$ in figure 2. To provide Dirichlet boundary conditions and a labelled dataset, as needed to solve our inverse problem, we define both the real and imaginary parts of the known eigenfunction $\psi(y)$ that satisfies equation (4.13), evaluated numerically at the true boundary points $y = -1$ and $y = +1$.

At this stage, we have defined the physics constraints of the PINN, but for completeness we also set up the deep neural network (our surrogate model). In the code we define an FNN with one input node, two output nodes and three hidden layers with 20 nodes per layer. In each of the hidden layer nodes, we use the nonlinear activation function tanh, considering that it is a smooth, infinitely differentiable function [41]. Generally for PINNs, "smooth" activation functions are preferred over the ReLU-like non-smooth activation functions, since the former have demonstrated significant empirical success [42]. For this reason, the tanh function is chosen here by default; however, it is worth noting that (of late) adjustable, smooth functions such as Swish have proven to outperform fixed functions such as tanh in terms of convergence rate and accuracy [41,43]. Swish is defined by $x \cdot \mathrm{Sigmoid}(\beta x)$, where $\beta$ is a trainable parameter. The function dde.Model combines the FNN with the physical constraints to form a complete PINN. We also add the "callback" function dde.callback in the algorithm so as to track the values of $\omega$ as training progresses.

The eigenvalue solver

One such algorithm that we have investigated was initiated in Ref. [7] to solve quantum eigenvalue problems using unsupervised NNs (also called data-free surrogate models).
The authors experimented with their "eigenvalue solvers" on well-known equations in quantum mechanics; namely, the time-independent Schrödinger equation with an infinite square well potential and, in another case, a quadratic potential function of a quantum harmonic oscillator. Although their approach is similar to the PINNs, in terms of embedding learning biases in the loss function, there is an additional feature which allows the eigenvalue solver to scan the eigenvalue space in a scheduled manner and progressively find several eigenvalues in a single training. The eigenvalue solvers in Ref. [7] are built using the PyTorch library. To solve equation (4.13), the total loss function takes the form:

$$\mathcal{L} = \mathcal{L}_{ODE} + \mathcal{L}_{reg}, \tag{4.20}$$

where $\mathcal{L}_{ODE}$ is the mean squared residual of the governing differential equation over the training points,

$$\mathcal{L}_{ODE} = \frac{1}{|\mathcal{T}_f|} \sum_{y \in \mathcal{T}_f} \left\| f\!\left(y;\,\hat{\psi},\,\hat{\omega}\right) \right\|_2^2, \tag{4.21}$$

with $f$ denoting the residual of equation (4.13). In equation (4.22), $\mathcal{L}_{reg}$ is a weighted sum of regularisation functions, where the weights $w_f$, $w_E$, $w_{drive}$ are typically set to one [7]:

$$\mathcal{L}_{reg} = w_f\,\mathcal{L}_f + w_E\,\mathcal{L}_E + w_{drive}\,\mathcal{L}_{drive}. \tag{4.22}$$

Individually, the regularisation functions are:

$$\mathcal{L}_f = \frac{1}{\langle \hat{\psi}^2 \rangle}, \qquad \mathcal{L}_E = \frac{1}{\hat{E}^2}, \qquad \mathcal{L}_{drive} = e^{-\hat{E} + c}, \tag{4.23}$$

where $\mathcal{L}_f$ and $\mathcal{L}_E$ steer the learning algorithm away from zero as a possible value for the eigenfunction and eigenvalue, respectively. For this purpose, the mathematical form of these loss terms has the PINN approximations ($\hat{\psi}$ and $\hat{E}$) inversely proportional to the loss, so that as they approach zero they are penalised by high loss values. The crucial term in these unsupervised NNs is $\mathcal{L}_{drive}$, which motivates the NN to scan through the space of eigenvalues. This is achieved by adding within the training algorithm a mechanism that increases the constant $c$ in $\mathcal{L}_{drive}$ at regular intervals, after an arbitrary number of training epochs. It is important to note that without the $\mathcal{L}_{drive}$ loss component the PINNs lack the necessary constraint to learn eigenvalues other than the first energy level they initially gravitate towards during training, which is often, but not always, the ground energy level. In this scheme, the solvers can learn only one eigenfunction and eigenvalue at a time, and further eigenpairs are found only when at least one of the other eigenpairs is known. As a consequence, a classification approach (with, for example, output nodes of the PINN representing the dependent variable) may not be applicable, because the loss function will only have the wherewithal to learn a single eigenstate in each training, regardless of the input data, since it is unlabelled and is randomly selected from a domain where one eigenstate cannot be separated spatially from the other solutions.

The key PyTorch functions used in defining our physics constraints include torch.autograd, which executes automatic differentiation to find the first and second derivatives in $\mathcal{L}_{ODE}$ given by equation (4.21). With the physics constraints defined, we set the structure of our FNN: 2 input nodes, 1 output node and 2 hidden layers with 50 nodes each (see figure 4), where our chosen activation function is the trigonometric function sine. This activation function has been found to greatly accelerate the NN's convergence to eigenstates compared to more common functions, e.g. sigmoid and ReLU [7,45]. Compared to the code in DeepXDE, the eigenvalue solvers provided more flexibility when customising the training algorithm. The total loss function in our training algorithm was defined according to equations (4.20)-(4.23). To generate n_train points from the domain of our example problem, $u \in (-1, 1)$, we used the PyTorch function torch.linspace. In terms of optimisation, the standard Adam optimiser is applied [36]. Ultimately, the training phase follows after all parameters for training the model (such as the number of training epochs) have been defined.
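The following condensed PyTorch sketch illustrates the ingredients just described, applied for simplicity to the real Schrödinger problem $-\psi'' = E\psi$ on $(-1, 1)$ with $\psi(\pm 1) = 0$ rather than to equation (4.13). It is our own sketch, not the paper's code: the architecture, the hard-coded boundary factor, the weights and the drive schedule are illustrative assumptions.

```python
import torch

torch.manual_seed(0)

class Sine(torch.nn.Module):
    def forward(self, x):
        return torch.sin(x)

# 1 input (x), 1 output (psi); sine activations as described in the text.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 50), Sine(),
    torch.nn.Linear(50, 50), Sine(),
    torch.nn.Linear(50, 1),
)
E = torch.nn.Parameter(torch.tensor(1.0))          # trainable eigenvalue
opt = torch.optim.Adam(list(net.parameters()) + [E], lr=8e-3)

x = torch.linspace(-1.0, 1.0, 100).view(-1, 1).requires_grad_(True)
c = 0.0                                            # drive constant, raised on a schedule

for epoch in range(100_000):
    opt.zero_grad()
    psi = (1 - x**2) * net(x)                      # enforces psi(+/-1) = 0 by construction
    dpsi = torch.autograd.grad(psi, x, torch.ones_like(psi), create_graph=True)[0]
    d2psi = torch.autograd.grad(dpsi, x, torch.ones_like(dpsi), create_graph=True)[0]
    loss_ode = ((d2psi + E * psi) ** 2).mean()     # residual of -psi'' = E psi
    loss_f = 1.0 / (psi ** 2).mean()               # penalises the trivial psi = 0
    loss_E = 1.0 / E ** 2                          # penalises E = 0
    loss_drive = torch.exp(-E + c)                 # pushes the search to larger E
    loss = loss_ode + loss_f + loss_E + loss_drive
    loss.backward()
    opt.step()
    if epoch > 0 and epoch % 20_000 == 0:
        c += 5.0                                   # scheduled increase of the drive
print(f"learned E = {E.item():.4f} (exact levels: (n*pi/2)^2 = 2.467, 9.870, ...)")
```

Hard-coding the boundary behaviour through the $(1 - x^2)$ factor is one common alternative to a separate boundary loss term; for the QNM problem, the complex frequency would instead be handled with two such trainable parameters, as in section V.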
In our case, we chose the following parameters: 100 training points, 100 000 training epochs and a learning rate of $8 \times 10^{-3}$. Figure 5 shows the resulting NN approximations of the eigenvalues and eigenfunctions. Note that $\mathcal{L}_{drive}$ is only included in the loss function of this example, for a complete demonstration of the method. However, it is not applied in the QNM computations, resulting in the PINNs converging on one eigenvalue (as we will see), rather than several (as in the many plateaus of figure 5). As seen in the example, the flips between eigenvalues occur arbitrarily, without any method of controlling when they occur. Therefore, this loss term requires further investigation, outside this present work, to make it less random.

V. RESULTS: QNM COMPUTATIONS WITH THE EIGENVALUE SOLVER

The results from our investigation of the performance of PINNs when applied to the computation of QNMs shall now be presented, where it is important to note that, generally for deep neural networks, there are no set rules for customising them, since they are statistical tools with too many parameters to admit any meaningful physical interpretability. Taking this into account, in this work we have carried out grid-search-like experimentation on the NN hyperparameters to discover the choices with the best performance. Specifically, we considered a range of values for three hyperparameters, namely the number of training points, the number of training epochs, and the number of nodes per layer, keeping the other hyperparameters (e.g. optimiser and activation function) fixed.

In this work, we have focussed on computing the QNMs of a Schwarzschild black hole in the asymptotically flat and near extremal de Sitter cases. For the former, we considered massless scalar, Dirac, electromagnetic and gravitational field perturbations; while for the latter, we only considered massless scalar fields, where the equations look the same as for near extremal Reissner-Nordström-de Sitter black holes. Due to this, the QNMs of near extremal Schwarzschild-de Sitter black holes can be treated more generally as the QNMs of near extremal non-rotating de Sitter black holes.

A. Scanning hyperparameters

Figure 6 graphs the results we obtained from testing different hyperparameter configurations for computing the QNMs of an asymptotically flat Schwarzschild black hole. Note that the accuracy values measure the deviation of the NN approximations from Leaver's QNMs, whose precision is up to 4 decimal places [12,18]. As seen in figure 6, the percentage deviations of our computations remain the same across all hyperparameter configurations. In all but a few cases, the percentage deviations for the real and imaginary parts of the QNMs hover around about 0.009% and -0.042%, respectively. Both these values correspond to a 4 decimal place precision, making the NN approximations as good as Leaver's CFM. Note that beyond 4 decimal places we cannot reliably determine the accuracy of our NN approximations based on the QNMs given in the literature [12,18]. The red cells given in the right panel of figure 6 correspond to cases where the eigenvalue solvers veer away from determining the QNMs with a minimum loss, which are the $n = 0$ modes. These are few in comparison to the "normal" cases where the eigenvalue solvers converge to a loss-minimising solution. The displayed training times and percentage deviations were obtained by iterating the eigenvalue solver algorithm automatically and scanning through the specified range of hyperparameter combinations.
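A scan of this kind is straightforward to automate. The sketch below shows one possible shape for such a loop; train_qnm_solver is a hypothetical wrapper around the training loop sketched earlier (its name, signature, dummy return values and the grid values are ours, not the paper's), and the reference value is only of the order of the fundamental scalar mode, with exact tables to be taken from [12,18].

```python
import itertools
import time

def train_qnm_solver(n_points, n_epochs, n_nodes):
    """Hypothetical wrapper: trains an eigenvalue solver, returns (omega_re, omega_im).

    A stand-in returning dummy values so the loop runs; replace with real training.
    """
    return 0.2209, -0.2098

reference = (0.2209, -0.2098)  # illustrative fundamental-mode value; see [12,18]

results = []
for n_points, n_epochs, n_nodes in itertools.product(
        [50, 100, 200], [50_000, 100_000], [20, 50, 100]):
    t0 = time.perf_counter()
    om_re, om_im = train_qnm_solver(n_points, n_epochs, n_nodes)
    elapsed = time.perf_counter() - t0
    dev_re = 100.0 * (om_re - reference[0]) / abs(reference[0])
    dev_im = 100.0 * (om_im - reference[1]) / abs(reference[1])
    results.append((n_points, n_epochs, n_nodes, elapsed, dev_re, dev_im))
```

Recording both the elapsed time and the percentage deviations per configuration is what allows the accuracy/efficiency trade-off discussed below to be read off directly from the grid.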
Note that the total loss was set according to equations (5.1) and (5.2), where $\theta$, $\mathcal{T}$, $\hat{\chi}$ and $\hat{\omega}$ have their definitions from sections II A and IV B 1. We have considered the ODE given by Ref. [13], which is a transformation of the radial perturbation equation.

B. QNMs of near extremal non-rotating black holes

In our discussion of black hole perturbation equations in section II C, we have seen a special case where the effective potential is given exactly by an inverted Pöschl-Teller potential; namely, the near extremal Schwarzschild and Reissner-Nordström-de Sitter black holes. In these cases, analytic expressions of the QNMs are known, and we could reliably test the accuracy of our NN approximations against the exact QNMs, $\omega/\kappa = \sqrt{\ell(\ell+1) - 1/4} - i(n + 1/2)$, where $\ell$ and $n$ are as defined in section II A.

In table I, the exact QNMs for $n = 0$ and $\ell = 1, \ldots, 3$ are compared with the NN approximations ($\omega_{eigenNN}$). The latter were obtained by embedding the governing differential equation of near extremal non-rotating de Sitter black holes and extra regularisation terms in the loss function. In the last column of table I are values that were produced by adding a seed value loss term to the loss function. The seed value loss term measures the deviation of the NN approximations from specific $n$- and $\ell$-dependent seed values close to the exact QNMs (i.e., accurate up to a certain number of decimal places, e.g. 2 decimal places in this case). The goal of the seed loss term is to steer the NN towards specific QNMs among the several possible differential equation residual minimisers (or eigenstates) that exist for a chosen multipole number $\ell$.

The plots in figure 7 are the NN approximations of the eigenpairs $(\omega, \psi)$ associated with table I, where the first three multipole numbers for the $n = 0$ mode are superimposed. These are the QNM eigenfunctions that obey the asymptotic behaviour expected for astrophysical asymptotically de Sitter black holes [9]; as was pointed out previously, this is the purely outgoing behaviour of equation (2.7). More importantly, figure 8 shows the evolution of the real ($\omega_{Re}$) and imaginary ($\omega_{Im}$) parts of the NN's approximations of the QNMs as they train for 100 000 epochs. These plots were obtained from our computations without the seed loss term in the loss function. In the plots of figures 9-12, the green line represents the seed values of $\omega$ that were embedded in the loss function of our eigenvalue solvers. Note that the NN converges towards the expected QNMs, rather than the seed values, which are given up to 2 decimal places. The QNM values given in tables I-V are in geometrical units.

Our results show signs of the expected limitations, listed in Ref. [38], of solving PDEs with NNs, which have been observed in various applications of physics-informed machine learning. One is the fact that complicated loss functions (with many terms in the governing equations) lead to highly non-convex optimisation problems. As a result, the training process may not be sufficiently stable, and convergence to the global minimum may not be guaranteed [38]. We can see this by contrasting the results obtained from our computations involving the relatively simple differential equation for near extremal non-rotating black holes with those from the relatively more complex perturbation equations of asymptotically flat Schwarzschild black holes. For the former, figure 8 shows that our NN quickly converges towards the expected solution regardless of the multipole number, even in the absence of a seed loss term to further constrain the eigenvalue solvers.
However, for the latter, the NN has more difficulty converging in lower multipole number cases, as seen in figure 9, where the NN converges faster for $\ell = 2$ when compared to $\ell = 0$. In fact, without the seed loss term, when solving asymptotically flat Schwarzschild black holes the NN fails to converge for $\ell = 0, 1, 2$ but does so for $\ell > 3$. In our attempts to solve even more challenging problems, such as the perturbation equations of asymptotically flat Reissner-Nordström and asymptotically (anti)-de Sitter Schwarzschild black holes, the instability appears to be more pronounced, as the NNs fail to converge on the expected QNMs for these cases. To alleviate this constraint and broaden the scope of PDEs to be solved, we will need to add to our eigenvalue solvers some stronger constraints or features that address instability.

VI. DISCUSSIONS AND CONCLUSION

In summary, we have explored the possibility of implementing PINNs as a new technique to solve black hole perturbation equations. We considered two variations of PINNs built with the DeepXDE and PyTorch packages in Python. To give some background on the underlying physics, we began by revisiting the perturbation equations for static, spherically symmetric black holes, particularly asymptotically flat and (anti)-de Sitter Schwarzschild black holes, whose perturbations are described by one-dimensional Schrödinger-like eigenvalue problems. Our goal was to determine when and how PINNs can be best applied to solve these equations, which are generally difficult to solve analytically, and to compute the QNMs of black holes. Since PINNs are extensions of deep neural networks, we outlined NNs in section IV A, in terms of their structure and the mechanisms behind their function approximation abilities. Afterwards, PINNs were described with illustrative examples showing how physics constraints are embedded in the loss function of a NN. These constraints include the governing PDE, its associated boundary conditions and regularisation functions.

Of the two variations of PINNs considered in this work, the eigenvalue solvers were implemented to compute the QNMs of asymptotically flat Schwarzschild and near extremal non-rotating de Sitter black holes. Given that the latter scenario has known exact formulae for the QNM frequencies (given by Refs. [22-24]), we were able to reliably validate the accuracy of our NN approximations. We obtained QNM values with up to 6 digit accuracy and plots showing the evolution of the NN's approximation of the QNMs over a 100 000 epoch training phase. The plots showed that the NN's approximation quickly converged towards the expected solutions, regardless of the multipole number or the existence of a seed loss term in the loss function. Regarding the more analytically intractable problems, we managed to solve the perturbation equations of asymptotically flat Schwarzschild black holes by embedding the equations themselves, the QNM boundary conditions and a seed loss term into the loss function of the eigenvalue solvers. The computed QNMs have the same level of accuracy as those obtained through Leaver's CFM [12] or Konoplya's 6th order WKB method [46] (at least up to 4 decimal places as given in the literature [18]). However, in terms of efficiency, our eigenvalue solvers take several minutes to train, compared to the few seconds to a minute it takes to generate accurate results using other techniques such as the WKB method.
We also found that the efficiency of PINNs could be optimised by setting up the NN using lower values in the range of hyperparameter values that we tested; that is, the number of training epochs, the number of training points and the number of nodes per layer. To date, we have been able to compute only the fundamental mode frequencies (i.e. $n = 0$, $\ell \ge 0$) which, as it turns out, are the least damped, longest-lived modes compared to higher overtones with $n > 0$. This is because we have added regularisation terms that simultaneously penalise the NN for learning trivial eigenfunctions and encourage it to learn the most energetic QNMs, which happen to occur when $n = 0$ for any given $\ell$. Potential future work would seek a modification of the eigenvalue scanning mechanism similar to that introduced by Ref. [7], which would allow for the computation of higher overtones for our complex-valued QNMs.

Concerning the question of the stability of PINNs as they increase in depth, that is still very much an open problem, in general, within the literature, and is in the early stages of investigation. When studying PINNs so as to mimic the analysis of numerical discretisation techniques, convergence and stability are related to how well the NN learns from the physical laws embodied by the governing PDEs and initial/boundary conditions [41]. It is well-known that there is a bias-variance trade-off that comes with choosing the right depth of a neural network, where too deep a neural network may lead to overfitting and inefficiency. The most recent literature laying out a theoretical framework for PINNs includes Refs. [48,49], which provide formal findings regarding PINNs and their convergence when dealing with linear problems, including second order elliptic, hyperbolic and parabolic type PDEs. Utilising these recent developments is important to understand and improve on PINNs in their continued use to compute QNMs.

As discussed in section II A, our NNs exhibit signs of instability, which we suspect to be a result of the level of complexity in the loss function, which makes for a highly non-convex optimisation process [38]. This is counter-intuitive to our initial expectation that PINNs can accurately solve any PDE (regardless of complexity) if it is formulated in a finite domain and its associated boundary conditions are properly set up. This was not the case in our attempts at applying eigenvalue solvers to the Reissner-Nordström case. To overcome this instability in future work, one plausible approach would be to consider the recent work in Ref. [42], which shows that a "self-scalable" activation function leads to PINNs that are less susceptible to spurious stationary points, an obstacle in highly non-convex loss functions.

A final point to note concerning the limitations of PINNs is their relative inefficiency compared to the extant methods for computing QNMs. Further investigation needs to be done to improve the performance of the eigenvalue solvers, as they currently do not surpass the efficiency of established methods such as the WKB method and the AIM. Overall, PINNs have not developed far enough to be applied broadly in the study of black hole perturbations. In conclusion to their seminal work on PINNs, Ref. [4] pointed out that this method should not be viewed as a replacement for classical numerical methods for solving PDEs, but rather as a method that can bring added merits, such as implementation simplicity, to accelerate the rate of testing new ideas.
In a similar vein, the application of PINNs to QNMs brings at least a new angle from which to study the perturbation equations, even though they are not as efficient as the canonical methods. As is, the PINN approach may only work for computing the fundamental QNMs, though not only for four-dimensional Schwarzschild black holes but also for general dimensional Schwarzschild black holes (described in Ref. [9]), given the similarity of the differential equations. Despite the present challenges (which are expected for a burgeoning method), this approach to computing QNMs is worth pursuing further, as it demonstrates the same level of accuracy as the leading existing methods.
An Augmented Reality-Based Mobile Application Facilitates the Learning about the Spinal Cord

Health education is one of the knowledge areas in which augmented reality (AR) technology is widespread, and it has been considered as a facilitator of the learning process. In the literature, there are still few studies detailing the role of mobile AR in neuroanatomy. Specifically, for the spinal cord, the teaching-learning process may be hindered due to its abstract nature and the absence of three-dimensional models. In this sense, we implemented a mobile application with AR technology named NitLabEduca for studying the spinal cord with an interactive exploration of 3D rotating models at the macroscopic scale, theoretical content on its specificities, animations, and simulations regarding its physiology. To investigate NitLabEduca's effects, eighty individuals with and without previous neuroanatomy knowledge were selected and grouped into control and experimental groups. Divided in this way, they performed learning tasks and were assessed through a questionnaire. We used the System Usability Scale (SUS) to evaluate the usability level of the mobile application and a complementary survey to verify the level of adherence to the use of mobile applications in higher education. As a result, we observed that participants of both groups who started the task with the application and finished with text had more correct results in the test (p < 0.001). SUS results were promising in terms of usability and the learning factor. We concluded that studying the spinal cord through NitLabEduca seems to favor learning when used as a complement to the printed material.

Introduction

The mobile technology evolution has provided changes in education [1,2]. Initially, the advances were centered on devices such as Personal Digital Assistants (PDAs), tablets, notebooks, and mobile phones [3]; then, the goal was to allow students to learn outside the classroom [4]; and later, the concept of active spaces emerged, wherein student mobility is related to the real environment context, characterized by the use of mixed reality in learning [5]. The union of virtual and real realities enabled immersion and interactivity because of 3D displays, high-resolution graphics, and animations, so applications with this technology have been implemented as supplements to printed books [6]. Universities usually provide environments conducive to experiencing new educational processes, such as the concept of mobile learning (m-learning), which is characterized by autonomy, collaboration, and mobility among individual students [2]. Learning objects in education implemented for mobile devices are increasingly widespread. In the health area, augmented reality (AR) technology is one of the possible facilitators for learning human anatomy. Three-dimensionality increases student interaction. Mixed reality is a term that can be seen as a subcategory of XR and seeks to create an environment where users cannot distinguish between virtual and real objects [21].

Augmented reality technology has been used in different contexts, including in the health domain [22]. For example, health professionals have used AR in clinical practice with effectiveness both in preoperative planning and during surgical intervention processes [23]. Professionals have also used AR-assisted telemedicine to virtually reconstruct the human body of a patient using a 3D model [24]. Augmented reality technology is especially useful in health education [25].
Health professionals have used AR to learn the theory and practice of procedural skills through remote learning and teaching supervision during clinical practice [26]. In addition, AR technology provides a simulation environment that can reduce complications in the initial practices of health students. AR-based solutions have assisted medical residents and future surgeons in virtually learning surgical techniques before, during, and after the procedures through simulations, since on-site training in the operating room exposes patients to risks due to lack of student experience [27,28]. These solutions enable a balance between patient safety and the educational background of the medical professional, and enable the teaching of complex skills in a controlled environment, which allows students to make mistakes without the adverse consequences of real life. Moreover, learning with 3D technologies can increase motivation, student involvement, and spatial knowledge representation [29].

Technology has revolutionized the representation of the human body [12]. Medical students use AR for three-dimensional comprehension of human anatomy in an interactive way [30], since this technology seems to reduce cognitive overload during the learning process [9]. Three-dimensional anatomical structures are models that aid in the spatial interpretation of forms. The perspective of 2D images is restricted to length and width, limiting the student to abstraction, and although 3D images add depth, they are limited by not interacting with the user [31,32]. On the other hand, mobile devices allow for the continuous updating of the contents of health practice through self-directed learning more quickly, even outside the traditional teaching environment [2,33]. In this context, mobile AR technologies have the potential to stimulate spatial learning by materializing abstract concepts and simplifying the understanding of the content [34].

Related Work

There are different studies focused on m-learning [2] and mobile AR [35]. However, to the best of our knowledge, few studies have been developed on mobile AR technologies for human anatomy education. Representing three-dimensional anatomical structures is not an easy task in education, but such structures are models that assist in the spatial interpretation of forms and describe the human body. Traditionally, anatomical teaching is performed with real or synthetic cadavers found in laboratories and classrooms in educational institutions. Ferrer-Torregrosa et al. [12] developed an augmented reality tool called ARBOOK, which provides content related to the anatomy of the lower limbs and thus helps students to learn the content independently, at desired times and places. The results demonstrated its usefulness for motivating students to learn autonomously and interpret spatial information. The magic mirror system [36] is an augmented reality application that virtually displays the student's anatomical organs, and enables interaction and immersion. As a result, students felt motivated to learn human anatomy. Another application with the same purpose, a magic mirror system [30], demonstrated a better three-dimensional understanding of the human body's anatomical structures. Birt et al. [22] examined two interventions with AR and VR technologies in two higher education classrooms. That study concentrated on student perceptions of learning physiology and anatomy, and on skill acquisition in airway management.
In the first classroom, the use of the Oculus Rift resulted in greater student involvement and minimal distraction compared to a mobile AR application. In the second one, these technologies provided a significant improvement in student performance through the use of simulations.

Discussion and Research Questions

In contrast to the related studies, we focus on using a mobile AR application for human neuroanatomy education, specifically the process of learning the topographic and functional anatomy of the spinal cord. Our objective in performing this study was to develop NitLabEduca and evaluate its performance as an auxiliary means in the teaching-learning process of the spinal cord. Therefore, the contribution of this paper is threefold: (i) we propose and detail the mobile AR application NitLabEduca; (ii) we present the results of an experimental evaluation that compared the use of printed material and the proposed application to acquire knowledge; and (iii) we identify the usability and the learning ability factor of the proposed application. To those ends, we defined the following research questions (RQ):

• (RQ1) Can NitLabEduca improve the teaching-learning process of the spinal cord?
• (RQ2) What is the usability performance of NitLabEduca?
• (RQ3) What is the learning ability factor of NitLabEduca?

Research Characterization

This research is an experimental and comparative study, which assessed, through questionnaires applied to volunteers (i.e., students) after a learning task, the use of printed material and a mobile application with AR as means of acquiring knowledge. To this end, NitLabEduca was implemented for studying the ascending and descending pathways of the spinal cord. NitLabEduca is a mobile application that uses AR technology and enables users' interactivity with a three-dimensional model of the spinal cord on a macroscopic scale for the learning process. The proposed application also has test resources that evaluate users' results in a statistical performance model.

Sampling

The study participants were 80 students from the neuroscience class in the physiotherapy course at the Federal University of Piauí (UFPI), Brazil. Regarding inclusion criteria, students of the neuroscience discipline were selected from the physiotherapy course at UFPI, aged between 18 and 25 years, men and women. Exclusion criteria were: individuals with a history of biological determinants that could change results (e.g., psychotropic medications, fatigue, and alteration of body temperature); abnormal or corrected audiovisual impairments; individuals with severe impairments when moving hands or fingers; individuals with thinking disorders (i.e., hallucinations); other neurological disorders and severe psychiatric disorders; musculoskeletal conditions that could cause bias; and individuals with global cognitive deterioration. Participants signed a statement with details of the experimental conditions, the study objective, and informed consent terms. Data collection procedures were initiated only after prior approval by the Ethics and Research Committee of the UFPI (number 3,683,221).

Experimental Procedure

We gathered 40 participants in a classroom at the UFPI and organized them on school chairs. We divided them into two groups, according to Figure 1: a group (n = 20) composed of subjects with previous knowledge of the spinal cord from the discipline (GPK) and another (n = 20) without prior knowledge of the spinal cord (GWK).
Forty additional students repeated this procedure, giving a total of 80 participants in the experiment. All of them performed the learning tasks in the classroom.

We treated the learning conditions as a crossover; that is, we conducted the experiment in two phases for both conditions, as illustrated in Figure 2. In the first phase, members of condition A received printed material, and members of condition B received a tablet with NitLabEduca installed and available for use. Participants used both the mobile application and the printed material to study the spinal cord and its ascending and descending pathways, i.e., the same content in different formats. Prior to the initial study period, we allotted 10 min for participants in condition B to become familiar with the mobile device and the proposed software. At the end of 45 min of study in both conditions for the first phase, students took a 15-min test in which they were to describe the spinal cord's structures as presented in the materials made available for both conditions.

In the second phase (Figure 2), the groups reversed the study conditions: students in condition A received tablets, and members of condition B received printed material. As in the first phase, we allowed participants in condition A 10 min to become familiar with the mobile device and AR software. After 45 min of the second phase, we applied a 15-min test in both conditions; again, participants had to describe the spinal cord structures as presented in the printed material and in AR. At the end of the second phase, participants answered the System Usability Scale (SUS) questionnaire in the Portuguese language [37], as shown in Figure 2. This scale allowed us to subjectively evaluate the usability of the mobile application [38,39]. Besides usability, the authors of [40] showed how to evaluate the learning ability factor through the analysis of items 4 and 10 of the SUS questionnaire. We also prepared a complementary questionnaire to gather information to assist the research analysis and to verify the level of student adherence to the use of mobile applications in higher education. The questionnaire had the following questions (originally written in Portuguese and translated to English for this manuscript):

(1) Do you have a smartphone?
(2) Do you like neuroanatomy?
(3) Have you already used mobile software to study?
(4) Do you believe that, by using mobile educational software, you understand better?
(5) Is the possibility of using software at any time and anywhere interesting?
(6) Would you use another mobile educational software to help your studies?

The Mobile Educational Application NitLabEduca

In this section, we first present the tools used to develop NitLabEduca and then describe its main features.

Implementation Aspects

For the NitLabEduca implementation, we used the software 3ds Max (https://www.autodesk.com/products/3ds-max/overview) for the three-dimensional modeling of the spinal cord; 3ds Max is a tool for modeling 3D objects [43]. After applying texture to the object, we performed the image modeling and rendering process. We exported the spinal cord's model in Object File Wavefront 3D format, which we used for the three-dimensional objects. We manipulated the 3D objects in the project with the engine Unity (https://unity.com/), a software tool for developing 3D games supported on multiple platforms.
We took advantage of the Vuforia extension (https://developer.vuforia.com/) to create mobile augmented reality environments and QR codes (https://www.qrcode.com/en/index.html) [44]. This extension helped us to implement the spinal cord's animations with user interaction [45] via scripts written in the C# language. We used the integrated development environment Android Studio to develop the mobile application, which also enabled access to the questionnaires about the spinal cord and to user performance statistics. The database of the mobile application was Google Firebase.

Features

The mobile AR application enables the study of the spinal cord and its ascending and descending pathways through user interaction with rotating 3D models. On the first screen (Figure 3a), users can log in or register in the NitLabEduca application to access the study material. After logging in, different menu options are available (Figure 3b,c). The button Medula Espinal (i.e., "spinal cord" in Portuguese) gives access to the complete image of the spinal cord with the possibility of visualizing individual pathways (Figure 4). This feature allows users to interact with the three-dimensional object by manipulating the whole spinal cord or the parts that compose it. Users can increase and reduce the size of the object. Each pathway of the spinal cord has theoretical content naming it and describing its technical aspects. All pathways can be studied individually (see the video in the supplementary material). The button Quiz allows users to answer questions related to the covered content (Figure 5a). This feature is a questionnaire with multiple-choice questions (four answers each) that aims to assess knowledge about the spinal cord. The button Estatística (i.e., "statistics" in Portuguese) displays quantitative measurements of the user's performance in answering the quiz available in the application (Figure 5b). This feature is useful because it allows users to check their progress in understanding the spinal cord. The button Print 3D provides a file in STL format containing the 3D model of the spinal cord, which can be printed on 3D printers. We sliced this model into several layers and provided the resulting file in NitLabEduca. This feature is useful when users prefer to interact with physical objects.

Statistical Analysis

We performed a three-way ANOVA with group (with previous knowledge vs. without previous knowledge) as the inter-subject factor and condition (student started with printed material vs. student started with NitLabEduca) and moment (first test and second test) as intra-subject factors. We investigated the two-factor interactions using Student's t-tests. Effect size was estimated as partial eta squared (η²p) in the ANOVA analysis and as Cohen's d for the Student's t-tests. We used Mauchly's test to evaluate the sphericity hypothesis and the Greenhouse-Geisser procedure (G-G) to correct degrees of freedom. Data normality and homoscedasticity were previously verified by the Shapiro-Wilk and Levene tests, respectively. We calculated the statistical power and the 95% confidence interval (95% CI) for the dependent variables. We interpreted statistical power as low from 0.1 to 0.3 and high from 0.8 to 0.9. We interpreted effect magnitude using the recommendations of [46]: insignificant <0.19; small from 0.20 to 0.49; medium from 0.50 to 0.79; large from 0.80 to 1.29. We applied a Bonferroni alpha correction for the interaction analyses, adjusting the significance threshold to p ≤ 0.025. We conducted all analyses using SPSS for Windows version 20.0 (SPSS Inc., Chicago, IL, USA). A minimal sketch of the paired comparison and effect size computation follows.
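To make the reported comparisons concrete, the following is a minimal sketch, not the authors' SPSS pipeline, of how the paired t-test and a paired-samples Cohen's d (here defined as the mean difference divided by the standard deviation of the differences, one of several common conventions) can be computed; the score arrays are synthetic placeholders, not study data.

```python
# Minimal illustrative sketch (assumption: a scipy/numpy re-implementation,
# not the authors' SPSS workflow) of the paired t-test and Cohen's d.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic placeholder scores for one crossover arm (39 paired observations,
# matching the reported df = 38); the real study data are not reproduced here.
first_test = rng.normal(loc=6.0, scale=2.0, size=39)
second_test = first_test + rng.normal(loc=1.5, scale=1.0, size=39)

# Paired (dependent-samples) t-test between the two test moments.
t_stat, p_value = stats.ttest_rel(second_test, first_test)

# Cohen's d for paired data: mean of the differences / SD of the differences.
diff = second_test - first_test
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"t({diff.size - 1}) = {t_stat:.3f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```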
Number of Hits

The three-way ANOVA showed an interaction between the condition and moment factors (F(1, 152) = 15.897, p < 0.001, η²p = 0.10, power = 98%). In the interaction analysis, the paired t-test demonstrated that there was no statistically significant difference between the second-test moments for subjects who started with NitLabEduca and finished with printed material (p > 0.05). On the other hand, a statistically significant difference was observed between the first and second test moments; t(38) = 7.616, p < 0.001, d = 0.26. Additionally, for subjects who started the study with printed material and finished with NitLabEduca, no statistically significant difference was observed (p > 0.05), whereas for those who started with NitLabEduca and ended with printed material, a statistically significant difference was found; t(38) = 9.894, p < 0.001, d = 0.27. These findings indicate that subjects who started the study with NitLabEduca and finished with printed material increased the number of hits in the test by 27%, whereas those who started with printed material and finished with NitLabEduca increased their hits by 11% (Figure 6).

Table 1 presents the participants' answers to the 10 items of the SUS questionnaire. Participants responded favorably to the use of the mobile application with AR: the frequencies of response options 2 (disagree) and 1 (totally disagree) show that most participants (74%) disagreed with the negative statements of the even-numbered items, and the frequencies of options 4 (agree) and 5 (totally agree) show that most respondents (68%) agreed with the positive statements of the odd-numbered items. On the other hand, some participants were indifferent to both negative and positive statements, as identified by option 3 (neutral): 17% in the even items and 23% in the odd ones. In Table 2, a descriptive analysis of the score for each of the ten items is presented. Item 5 presented the lowest mean score, 2.50; 49% of the subjects agreed that the functions of the system were well integrated, while 26% were neutral and only 16% disagreed with the positive statement of the item. The results for item 9 demonstrate that participants felt confident in using the system: 49% agreed and 13% fully agreed with the statement. Although items 8 and 9 have the same mean (2.6), item 8 has the largest standard deviation (1.07), showing a greater spread of scores on this question. In the descriptive analysis of the SUS scale, the maximum score was 100 points, with a mean of 71, a median of 72.5, a standard deviation of 13.7, and a standard error of 1.53. NitLabEduca thus showed good usability, with a mean score classified as "C" on the grade scale for the SUS questionnaire [47]. Figure 7 represents the distribution of the mean SUS score: scores between 90.1 and 100 corresponded to 5% of participants, from 80.1 to 90 to 21%, and between 70.1 and 80 to 26%. Score distributions for both groups are presented in Figure 8 (GPK) and Figure 9 (GWK), in which each circle represents a participant. To define the learning ability factor, items 4 and 10 of the SUS are summed and multiplied by 12.5; the other items, summed and multiplied by 3.125, determine the usability factor, according to [40]. Table 3 shows the usability and learning ability of the application; a minimal sketch of this scoring follows.
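As an aside, the usability/learnability split can be computed directly from the raw responses. The sketch below follows the conventional SUS item adjustment (odd, positively worded items contribute response minus 1; even, negatively worded items contribute 5 minus the response), which we assume was applied here, together with the factor weights from [40]; the example response vector is hypothetical.

```python
# Sketch of SUS scoring with the usability/learnability split of [40].
# Assumes the standard SUS item adjustment; the responses below are hypothetical.

def sus_scores(responses):
    """responses[i] is the 1-5 answer to SUS item i+1 (10 items in total)."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    # Odd items are positively worded (r - 1); even items negatively (5 - r).
    adj = [(r - 1) if (i + 1) % 2 else (5 - r) for i, r in enumerate(responses)]
    overall = sum(adj) * 2.5                          # classic 0-100 SUS score
    learnability = (adj[3] + adj[9]) * 12.5           # items 4 and 10
    usability = (sum(adj) - adj[3] - adj[9]) * 3.125  # remaining eight items
    return overall, usability, learnability

# Hypothetical participant: overall 70.0, usability 68.75, learnability 75.0.
print(sus_scores([4, 2, 4, 2, 3, 2, 4, 3, 4, 2]))
```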
The learning factor shows a mean value of 77, and a greater learning ability suggests easier learning through NitLabEduca. Figure 10 presents the score distribution of the learning ability factor: scores between 90.1 and 100 correspond to 15% of the total, those between 80.1 and 90 to 26%, and scores between 70.1 and 80 to the highest percentage (34%). Scores between 60.1 and 70 correspond to 15%, and 10% of scores fall below 50, representing unacceptable values [47].

Complementary Questionnaire

The results showed that 35% of the participants stated that they did not like neuroanatomy teaching, and 72.5% reported that, after using the application, they believed they could understand the content better. In the GPK, there were more individuals who liked the discipline (45%), and 67.5% assumed to have learned better after using NitLabEduca. On the other hand, 75% of GWK participants stated that they liked neuroanatomy teaching, and 77.5% of those believed they understood the content better after using the application (Figures 11 and 12).

Main Findings and Theoretical Discussion

Our findings demonstrated that the NitLabEduca application seems to be more efficient at the beginning of the teaching-learning process as a complement to the printed material (RQ1). In the experiment with NitLabEduca, we observed that its use at the beginning of the study facilitated the spatial abstraction of the individual in manipulating visual patterns, which indicates a better understanding when passing to the content in printed material, thereby increasing performance in the learning process. We understand that, when participants visualized the structure of the spinal cord and its ascending and descending pathways through interaction with the virtual object, the proposed mobile application stimulated their spatial ability and facilitated theoretical understanding. In this case, the participants' spatial ability seems to have been engaged more strongly, which facilitates the absorption of information [48]. Spatial skills at the early stage of the learning path are significant to the theory of knowledge [6]. In this context, the dynamic three-dimensional visual stimuli in learning with NitLabEduca seem to favor learning. The results demonstrated that the stimulation of spatial ability is fundamental for learning the spinal cord pathways, content covered in the neuroanatomy discipline. This was also observed by [13] during the learning process in the anatomy discipline. Therefore, the representation of three-dimensional anatomical structures as models to aid in the spatial comprehension of forms is a factor that favors learning [12]. In this context, the NitLabEduca application's performance was positively evaluated due to the positive impact of the three-dimensional teaching-learning model observed in our results, demonstrating the influence of this model on neuroanatomy education compared to the traditional teaching process [14]. In health education, the acquisition of knowledge is characteristically experiential, self-sufficient, and practical [32]. NitLabEduca brings together these characteristics, and in this sense, students can take advantage of it to acquire knowledge related to the spinal cord, because it allows users to combine physical world experiences with the virtual environment. These characteristics are in accordance with the constructivist learning theory [49,50] and the experiential learning theory [51].
The findings for the SUS, which assessed the subjective feelings and satisfaction levels of individuals towards NitLabEduca, demonstrated good usability of the application (RQ2), which is considered adequate for the teaching-learning process according to the interpretation of SUS scores [39]. They show that the proposed mobile application is easy to use, allowing students to focus their attention on the proposed study topic [52] and thereby supporting an effective learning process [53]. According to the SUS, NitLabEduca allowed students to immerse themselves in the content and interact with it, and it demonstrated good applicability in the learning process [40] (RQ3). It can therefore be understood as an evolution of the traditional teaching-learning model of the neuroanatomy discipline, migrating from two-dimensional images in printed matter to AR 3D technology and the digital dissection of organs of the human body [9,54]. The popularization of smartphones and their multifunctionality, omnipresence, and portability have driven the spread of m-learning [55] and contributed to the diffusion of "digital natives", for whom technology is integrated into everyday life [56]. Mobile devices can incorporate AR, which supports immersion and interactivity through 3D animation [6] and enables the creation of various educational materials that support and facilitate learning [57]. This is in line with the cognitive theory of multimedia learning [58], since the use of mobile AR in the teaching-learning process involves realistic experiences through visual simulations [6].

Limitations and Future Work

Despite the results of the research, we noted some limitations indicated by participants during the NitLabEduca evaluation. The most obvious was related to running the application and visualizing three-dimensional images on heavy tablets. To visualize the spinal cord, a direct line of sight between the camera of the mobile device and a QR code was necessary. This procedure seemed to generate muscular discomfort in students, since the tablets used in the research weighed 500 g and handling of the mobile device lasted the 45 min of the task, which certainly affected the perceived usability of the application. Indeed, Lee et al. [59] demonstrated the impact of tablets on physical problems in the user's body, which can cause stiffness, pain, and discomfort in the back and shoulders. Further studies using lighter devices and shorter tasks, on the spinal cord pathways and other anatomical structures, may provide new insights into the applicability of AR in teaching and learning. Of the individuals who had prior knowledge of the neuroscience discipline, 13 were repeating the course and declared in the complementary questionnaire that they did not like neuroanatomy. The evaluations of NitLabEduca via SUS by these students were lower than the acceptable mean [47]. This outcome seems to have been motivated by the group's situation as repeating students and their possible difficulty with the studied theme. It may be related to the inseparability of the cognitive process from motivated reasoning [60], in which emotions are inseparable from mental processes and motivated reasoning overrides evidence.
Future research can assess whether the use of AR applications can generate positive inspiration [61] in neuroanatomy students, hence impacting their perceptions during the teaching-learning process. Additional research may focus on comparative experiments that also consider 3D printed models of the spinal cord, in addition to mobile AR applications and printed material. We believe that such a comparison would deepen our comparative analysis and answer additional research questions. In fact, the modeling and reconstruction of anatomical structures for 3D printing is a trend in supporting education [17]. Therefore, adding an evaluation with 3D printed models would enrich the analysis [62,63].

Conclusions

The aim of this study was to develop a mobile AR solution focused on studying the spinal cord and to evaluate its use as an auxiliary means in the teaching-learning process. For this purpose, we implemented NitLabEduca and performed an experimental evaluation with 80 neuroanatomy students to identify the effects on learning and to assess the usability of the proposed application. From our research, we conclude that studying the spinal cord using NitLabEduca favors learning and may complement the traditional teaching-learning model, enhancing the knowledge acquisition process. NitLabEduca revealed itself to be a resource with potential for both spatial abstraction and functional understanding of the spinal cord pathways. Moreover, NitLabEduca demonstrated good usability in the teaching-learning process and can supplement printed material, enriching the training of students and health professionals.
A Rare Case of Acute Infectious Purpura Fulminans Caused by Klebsiella Pneumoniae and Human Herpesvirus Type 5

Background: Purpura fulminans (PF), a rare, life-threatening disorder, is a hematological emergency in which there is skin necrosis, disseminated intravascular coagulation (DIC), and protein C deficiency. In PF, the skin necrosis and DIC are secondary to protein C deficiency. This may progress rapidly to multiorgan failure caused by the thrombotic occlusion of small- and medium-sized blood vessels.

Case Report: This article presents the case of a 22-year-old male with fever as well as necrotic and purpuric skin lesions. The ultrasound and computed tomography scans revealed infections in the skin wounds as well as venous microthrombosis and thrombosis in multiple intracranial and pulmonary vessels. The laboratory tests showed signs of sepsis, thrombocytopenia, an abnormal decrease in protein C and antithrombin III, DIC, multiple organ and system failures, gastric varices, and gastrointestinal hemorrhage. The blood, sputum, and secretions under the skin lesions were cultured and were positive for Klebsiella pneumoniae. The results of the high-throughput genetic testing of the pathogenic microorganism DNA were consistent. In addition, human herpesvirus type 5 was detected. The histopathological examination of the skin lesions revealed pathological features consistent with PF. After successful treatment by the departments of Dermatology, Emergency Critical Care Medicine, and the Intensive Care Unit, the patient was discharged after 67 days of hospitalization.

Conclusion: Adults with acquired protein C and/or S deficiency states, including certain bacterial and viral infections, who drink alcohol and take varieties of non-steroidal anti-inflammatory analgesics at the same time, may develop acute infectious PF. Clinicians should be aware of this for early diagnosis and treatment.

Introduction

Purpura fulminans (PF) is a rare syndrome of intravascular thrombosis and skin hemorrhagic infarction and necrosis that usually occurs in children. It progresses rapidly, with vascular collapse and systemic disseminated intravascular coagulation (DIC). The pathogenesis of the disease has recently been classified according to the following three types of trigger mechanisms: (1) neonatal PF is related to the genetic deficiency of the natural anticoagulant proteins C and S; (2) idiopathic or chronic PF occurs after the incubation period of various viral infections; and (3) the most classic and common form of this disease, called acute infectious PF (AIPF), occurs when it is superimposed on bacterial infections; this kind of PF is related mainly to acquired protein C deficiency. Meningococcus and the varicella virus are the two most commonly implicated pathogens.

Case Report

On day two, the patient experienced pain in the skin lesions and in the back, headache with vertigo, sputum-free cough, diarrhea, and stomach ache, all gradually increasing in severity. On day seven, the patient developed abdominal pain. Hematemesis was 20 mL, and blood in the stool was 1500 mL. Occult blood tests of fecal and stomach contents were both positive. An ordinary gastroscopy and duodenal examination showed gastrointestinal bleeding, but no bleeding points were found. A pulmonary artery CT angiography (CTA) showed multiple pulmonary embolisms in each lobe, segment, and sub-segment of both lungs.
An abdominal CT revealed the following: (1) suspected liver damage; (2) multiple embolisms in the middle and right hepatic veins, the main portal vein, the left and right portal branches, the splenic vein, and the superior mesenteric vein; and (3) obvious swelling of the left inguinal area with mixed density of the subcutaneous fat, which was considered inflammatory.

On day 10, the patient had difficulty breathing, irritability, and a progressive decline in blood oxygen saturation and partial pressure of oxygen. Respiratory failure was considered. He was transferred to the Department of Emergency Critical Care Medicine and was given sedative drugs by injection. A right subclavian vein puncture and catheterization were performed to provide intravenous hypernutrition. There were no special findings on fiberoptic bronchoscopy. A blood biochemical analysis showed liver function damage, renal function damage, hypokalemia, hyperchloremia, and hypernatremia; the blood gas indicated decompensated metabolic acidosis. Without sedation, the patient was in a continuous coma, with a T of 37.6°C-39.5°C.

On day 20, the patient was given continuous renal replacement therapy. The patient's consciousness improved, and he was able to respond. He had a body T of 37.6°C-38.5°C. He defecated tar-like stools with a positive occult blood test twice, for a total of 800 mL. The PLT count increased to 61 × 10⁹/L, after which the patient was extubated and oxygen was administered by mask. On day 22, the patient was returned to the Dermatology Department.

On day 24, his blood chloride increased, his oxygen partial pressure was 48%, his carbon dioxide partial pressure was 23%, and his HGB was 48 g/L; he was transferred to the Intensive Care Unit (ICU). In addition, femoral artery puncture catheterization with monitoring by Pulse index Continuous Cardiac Output (PiCCO) (Pulsion Medical Systems, Germany), radial artery puncture catheterization with invasive blood pressure monitoring, and peripherally inserted central venous catheterization were performed. On day 30, the patient still had active bloody stools (170 mL), and a bruise (15 × 20 cm) with edema appeared on the right upper limb. On day 36, his T was 37°C-39°C, and his HGB was 80 g/L. He was returned to the Dermatology Department.

On day 41, red liquid drained from the patient's gastric tube, accompanied by palpitations, shortness of breath, and convulsions after slight activity. On day 44, his breathing difficulty worsened. A mask was given at 66-78% oxygen saturation, and the patient was transferred to the ICU again. He was given thoracentesis catheter drainage, tracheal intubation, and mechanical ventilation. On day 52, his condition gradually improved; his T was 37°C-37.8°C, the PLT count was 355 × 10⁹/L, and his coagulation function had improved. On day 56, the patient was in a stable condition with a normal body T, and he was returned to Dermatology. On day 57, a gastroscopy showed varicose veins in the fundus of the stomach and chronic non-atrophic gastritis. On day 66, a portal venous system CTA showed that (1) the manifestations of splenic infarction and (2) the thromboses of the portal vein, hepatic arteriovenous system, and inferior vena cava had disappeared. On day 67, the patient's thigh-root skin lesion area was significantly reduced, the texture had become soft, and there was no tenderness.
The purpura-like skin lesions on both calves had completely subsided, his vital signs were stable, and all laboratory indicators were stable. Additionally, his weight had dropped to 46 kg. Therefore, the patient was discharged.

Daily laboratory tests showed the following: continually increasing WBC and EO counts; thrombocytopenia (Figure 2A); elevated PT/APTT; an elevated D-dimer; reduced AT; low FIB, raising concern for DIC (Figure 2B); decreased serum total protein and albumin; abnormally elevated transaminases; high levels of serum bilirubin and bile acids, suggesting liver damage (Figure 2C); abnormally elevated muscle enzymes (Figure 2D); abnormally elevated creatinine and uric acid, raising concern for renal failure (Figure 2D); abnormally elevated fasting blood glucose and serum iron (Figure 2E); proteinuria; fecal occult blood; and an abnormal decrease in protein C to 43.1% (reference range 70.0-140.0%).

Pathogen culture: the blood, sputum, and secretions under the skin lesions were cultured for a variety of bacteria, including multi-drug-resistant bacteria (Table 1). Three specimens were positive for Klebsiella pneumoniae (Figure 3A). The results of the high-throughput genetic testing of the pathogenic microorganism DNA were consistent. Human herpesvirus type 5 was detected in both the patient's blood and skin lesions (Table 2). Additionally, a microscopic examination of oral mucosa fungi was positive (Figure 3B).

Histopathological examination of the skin lesions revealed the following: (1) the formation of an epidermal serous callus; (2) extensive liquefaction and degeneration of the basal layer; (3) stenosis and occlusion of most of the vascular lumina of the dermis and subcutaneous tissue; (4) fibrinoid necrosis of the blood vessel walls; (5) inflammatory thrombosis filling part of the blood vessel cavities; (6) extensive necrosis of the dermal collagen sheets; (7) hyperplasia of the superficial subcutaneous adipose tissue; (8) necrosis of the fat cells in the lobules; and (9) mixed inflammatory cell infiltration (Figures 4 and 5).

Rescue and treatment: (1) continuous renal replacement therapy, tracheal intubation, mechanical ventilation, continuous alternating infusion of ordinary frozen plasma and cryoprecipitate clotting factors, and suspended RBC and apheresis PLT transfusions were administered when necessary; (2) a variety of sensitive and broad-spectrum antibiotics were utilized and administered based on experience or drug sensitivity results; and (3) medium-dose glucocorticoids (i.e., methylprednisolone), intravenous immunoglobulin, hemostasis, acid suppression and stomach protection, liver protection, etc., were given (Figure 6: main medications and treatments). After treatment, the patient's condition improved, and all abnormal indicators gradually returned to normal.

Discussion

When a patient is diagnosed with AIPF, their skin lesions appear as punctate purpura that quickly turn into large, fused ecchymotic areas. With further necrosis, a typical hard eschar forms, which is characteristic of the disease. Bilateral symmetric gangrene is also a feature of PF, which usually requires amputation. In fact, some studies indicate that the amputation rate is 19% [3]. Sepsis, DIC, and concomitant multisystem organ failure are the biggest obstacles to survival, with a reported mortality rate as high as 50% [4]. This article presents a case of AIPF due to Klebsiella pneumoniae, which is rarely associated with PF. Indeed, Gram-negative rods are less commonly involved.
Klebsiella quasipneumoniae is a non-motile, non-spore-forming Gram-negative rod isolated from human infection samples that includes two subspecies: Klebsiella quasipneumoniae subsp. quasipneumoniae and Klebsiella quasipneumoniae subsp. similipneumoniae. It is a typical opportunistic pathogen and is often already present in the host, causing infection when the host's resistance drops. Klebsiella pneumoniae is a common cause of infectious disease in hospital and community settings and is associated with a variety of clinical conditions, including bacterial pneumonia, meningitis, endocarditis, liver abscess, intra-abdominal infection, urinary tract infection, and bloodstream infection. Klebsiella pneumoniae is reported to be the second most common cause of Gram-negative bacteremia. Recently, various microbial factors and virulence genes of Klebsiella pneumoniae have been reported to be associated with its specific clinical features and a high mortality rate [5,6].

Tsubouchi et al reported that a 75-year-old woman died of skin hemorrhage, purpura, gangrene, and sepsis caused by Klebsiella pneumoniae infection [7]. Iacovelli et al reported that ceftazidime/avibactam was used to successfully treat a case of pneumonia, septic thrombophlebitis, and right ventricular wall endocarditis caused by carbapenemase-producing Klebsiella [8]. Nguyen et al found that a patient with acute liver failure, possibly due to acetaminophen overdose and a Klebsiella pneumoniae bacterial infection, developed rapidly progressive purpura. This patient suffered from acral gangrene and DIC, with decreased protein C and S activity. A skin biopsy revealed microthrombi in the dermal blood vessels [9]. Olowu reported Klebsiella-induced PF in a 3.5-year-old Nigerian girl with fever, vomiting, diarrhea, difficulty breathing, swollen feet, and gangrenous toes. The fourth and fifth toes of her left foot were amputated on day 18 of the illness [10]. Disse et al reported that a 17-day-old neonate suffering sepsis-associated PF due to Klebsiella oxytoca was given broad-spectrum antibiotics, ventilation, diuretics, protein C substitution, and a burn protocol. With this treatment, the patient's limbs were successfully preserved with scarring [11].

Similar to the above cases, our patient, who was simultaneously infected with Klebsiella pneumoniae in the blood, skin wounds, and lungs, rapidly progressed with small-vessel microthrombosis, multiple large-vein thromboses, pulmonary embolism, and extensive cerebral infarction. He also presented with a very severe form of DIC, which involves disturbances in the balance of procoagulant and anticoagulant endothelial cell activities [12], leading to prolonged PT and PTT. Thrombocytopenia, however, is common to all cases. This disorder is triggered by bacterial endotoxins, which mediate vascular injury through a variety of effects on the alternative pathway of complement, NEUT, endothelial cells, factor XII, and MONO. Activated MONO secrete proinflammatory cytokines, such as TNF-α, IL-1β, IL-6, IL-12, and IFN-γ, which consume proteins C and S and AT. They also play an important role in mediating the biological effects of bacterial endotoxins. These factors subsequently release other cellular mediators and activate NEUT and endothelial cells; these events invariably lead to the adhesion of the activated NEUT to the endothelial cells, thereby causing vascular injury and DIC with varying degrees of thrombosis and/or bleeding.
In AIPF, the degree of thrombosis is usually profound and includes serious ischemic injuries to tissues and organs [13]. Human betaherpesvirus 5, also known as human cytomegalovirus (HCMV), is a common infection in humans. Herpesviruses persist in the host by switching between two modes: latent infection and lytic infection [14]. The principal modes of transmission of HCMV are through the respiratory tract or close contact, and its main target cells are vascular endothelial cells, which act as a viral reservoir. Multiple factors can cause persistent HCMV infection, resulting in endothelial cell dysfunction and activation of proinflammatory signaling systems, including the NF-κB, SP-1, PI3K, and PDGF receptor pathways. This infection increases the expression of endothelial cell adhesion molecules; it also alters the proteolytic activity of MONO and macrophages and recruits new MONO from the bloodstream, and contact between MONO and HCMV-infected endothelial cells allows the transfer of virus particles to MONO and further dissemination. Conversely, endothelial cell injury promotes thrombosis, which becomes a predisposing factor for cardiovascular and cerebrovascular diseases [15]. The experiments by Rahbar et al found that after HCMV infected vascular endothelial cells in vitro, PLT adhesion and aggregation were significantly higher than in uninfected cells, and von Willebrand factor, vascular cell adhesion molecule 1, sialyl Lewis X, intercellular adhesion molecule 1, etc., were markedly elevated, demonstrating increased expression on endothelial cells. Both effects may increase thrombosis and leukocyte adhesion. The increased thrombosis is dependent on active viral replication and can be inhibited by foscarnet and ganciclovir [16]. Rinaldi et al reported venous thromboembolism in a 62-year-old woman and a 20-year-old Caucasian woman due to acute cytomegalovirus infection, possibly caused by HCMV-induced endothelial damage and coagulopathy [14].

It has also been observed that long-term drinkers taking therapeutic doses of acetaminophen may in effect experience an overdose, leading to severe hepatotoxicity. Patients with resulting liver damage may develop secondary hepatic dysfunction and coagulopathy with impaired protein synthesis, leading to acquired protein C and S deficiency, or they may be potential candidates for this deficiency [9,17]. Jack et al reported that a 32-year-old woman was instructed to take two extra-strength acetaminophen tablets every four to six hours for minor aches and pains after a car accident. At the same time, this patient increased her usual alcohol consumption over the ensuing three days. Eight hours before her admission to the hospital, the patient noted rapidly spreading purpuric lesions on her hands and feet with tight edema. Laboratory tests revealed elevated liver enzyme values and reduced protein C [17]. Regarding other non-steroidal anti-inflammatory drugs, Kosaraju et al described a case of a healthy woman who developed DIC after a single 60-mg intramuscular dose of ketorolac. She had large areas of ecchymosis all over her face, trunk, and extremities [18]. The patient in the present study was taking normal doses of ibuprofen and diclofenac sodium for one week at the beginning of the disease but did not stop drinking alcohol, so this combination may be one of the triggers of the complex pathogenesis. A review of the relevant literature showed only one other case description of diclofenac-related PF with DIC [19].
Conclusion

The clinical manifestations of AIPF are skin purpura, DIC, and microvascular thrombosis in the skin and other tissues. In addition, thrombosis and hemorrhagic infarction of even larger organs may occur. Thrombosis, one of the consequences of AIPF, leads to multiple organ failure and a high mortality rate. A delayed diagnosis of AIPF can lead to severe adverse clinical consequences, such as amputation and death. Therefore, special emphasis on testing for AIPF should be placed on adults with acquired protein C and S deficiency states, including certain bacterial and viral infections, who also drink alcohol and simultaneously take different varieties of non-steroidal anti-inflammatory analgesics, as these factors may lead to AIPF. Most importantly, the timely and repeated administration of intravenous human immunoglobulin, cryoprecipitate coagulation factors, and apheresis PLTs can improve the prognosis and reduce the disease's mortality. It is hoped that this complex case can inspire multidisciplinary clinical thinking by physicians and dermatologists.

Ethics Approval and Consent to Participate

This study was conducted with the approval of the Ethics Committee of the Second Affiliated Hospital of Kunming Medical University and in accordance with the Declaration of Helsinki. Written informed consent was obtained from the patient.

Consent for Publication

The patient signed a document of informed consent.
Protein Turnover in Mycobacterial Proteomics

Understanding the biology of Mycobacterium tuberculosis is one of the primary challenges in current tuberculosis research. Investigation of mycobacterial biology using the systems biology approach has deciphered much information with regard to the bacilli and tuberculosis pathogenesis. The modulation of its environment and the ability to enter a dormant phase are the hallmarks of M. tuberculosis. Until now, proteome studies have been able to reveal much about the roles of various proteins, mostly in growing M. tuberculosis cells. It has been difficult to study dormant M. tuberculosis by conventional proteomic techniques, with very few proteins found to be differentially expressed. The discrepancy between proteome and transcriptome studies leads to the conclusion that a certain aspect of the mycobacterial proteome is not being explored. Analysis of protein turnover may be the answer to this dilemma. This review, while giving a gist of the proteome response of mycobacteria to various stresses, analyzes the data obtained from abundance studies versus data from protein turnover studies in M. tuberculosis. This review brings forth the point that protein turnover analysis is capable of discerning more subtle changes in protein synthesis, degradation, and secretion activities. Thus, turnover studies could be incorporated to provide a more in-depth view into the proteome, especially in dormant or persistent cells. Turnover analysis might prove helpful in drug discovery and in a better understanding of the dynamic nature of the proteome of mycobacteria.

Introduction

Tuberculosis is one of those diseases that has yet to relinquish its grip on mankind [1]. Complete eradication of tuberculosis is still not anticipated in the foreseeable future [2]. The problem resides in the ability of M. tuberculosis to persist in a presumably dormant state [3]. Thus, persistence contributes to latent tuberculosis infection, which serves as an enormous reservoir of infection [4,5]. According to the WHO, 184 nations have adopted the Directly Observed Therapy Short Course treatment regimen for their national tuberculosis control programs [6]. But the long duration of these regimens has made it difficult to maintain compliance in many areas. As a consequence, noncompliance contributes to the emergence of multidrug-resistant and extensively drug-resistant M. tuberculosis strains, which pose an even greater threat. Currently, multidrug-resistant tuberculosis infections have a mortality rate >50%, and their cure requires a 2-year course of expensive and highly toxic treatments. Extensively drug-resistant tuberculosis infection is even deadlier. Thus, there is an urgent medical need for new drugs and treatment regimens that can better manage latent tuberculosis infection [7-9]. Although over a dozen anti-tuberculosis drugs are available, a treatment regimen of significantly less than six months has not been fully established. For tuberculosis chemotherapy, there currently exist four first-line drugs, six second-line drugs, four approved drugs with anti-TB activity, and at least four promising drugs in clinical trials [10]. These drugs target many aspects of M. tuberculosis cellular structures and biological processes, e.g., transcription, protein synthesis, cell wall synthesis, the catalase-peroxidase enzyme, ATP synthesis, DNA replication, and cofactor synthesis.
Although shorter (<6 months) treatment regimens have been created with combinations of the existing drugs, the relapse rates of the shorter regimens were consistently higher than that of the standard treatment regimen [11,12]. The standard 9-month treatment regimen was based on 50 years of clinical practice, but the exact mechanism of how it works remains unclear. It is well known that isoniazid is only active against growing M. tuberculosis [13] and is inactive against anaerobic bacteria [14]. The traditional view is that persistent M. tuberculosis resides within hypoxic granuloma lesions in a static mode. This, however, does not fully agree with the fact that isoniazid is 90% effective in eliminating persistent M. tuberculosis from latent tuberculosis infection patients. On the other hand, if M. tuberculosis resided in aerobic microenvironments in latent tuberculosis infection, the treatment would not take as long as 9 months to kill the bacilli. There has been an ongoing debate over the microenvironments in which M. tuberculosis resides during latent tuberculosis infection [3,15-17]. The complex physical, biochemical, and microbiological milieu of M. tuberculosis in tuberculosis disease has been a major obstacle that hinders the development of shorter treatment regimens to eradicate the disease.

Even after years of pursuit to understand the biology of this pathogen, we have only been able to uncover a very small percentage of its modus operandi, especially in dormant cells. At one point in time, when tuberculosis was ravaging the world population, effective chemotherapy was believed to be successful in halting the spread of tuberculosis. However, the disease returned many years later in the form of bacilli reactivated from the latent state. It was realized that chemotherapy does not always completely abolish the bacteria from the host [18,19]; rather, the bacteria undergo a transformation due to environmental stress, which allows them to change their metabolic activity and transition into dormant or persistent cells, a phase which came to be known as latent tuberculosis [5,20-22]. It was soon realized that very little was actually known about the biology of M. tuberculosis and that the bacillus is a complex organism in the host. The complexity of M. tuberculosis arises from the fact that it is able to survive and proliferate inside macrophage phagosomes in spite of exposure to various stresses in the phagosome, to modulate the host phagosomal environment [23], to acquire nutrients required for growth [24,25], and finally to change its metabolic state when the macrophage is able to halt its proliferation.

Efforts were soon underway to study the biology of actively replicating bacilli as well as the dormant forms. The sequencing of the M. tuberculosis genome was an important step forward in understanding the bacterium [26]. Biochemical studies coupled with transcriptome analyses in vitro and in vivo started to unravel the genes expressed during the adaptation of M. tuberculosis to different stresses in the phagosome and the genes expressed during an infection [27,28]. Another important aspect of the pathobiology of tuberculosis was to understand the immune system. Any pathogen needs to successfully overcome the non-specific and specific immune reactions in order to establish an infection. Hence, understanding macrophage biology also becomes important in conjunction with the biology of M. tuberculosis. With advancing technologies, the trend towards systems biology emerged strongly.
Along with transcriptome analyses, proteomic and metabolic profiling also began to gain ground. With advances in studying the proteome, from conventional 2D gel analysis to high-sensitivity analyses using mass spectrometers, the proteome is seen as a promising target for the systems biology approach. The fundamental advantage that proteome analyses offer over transcriptome analyses is that they focus on functionally relevant species; proteomics is also important for drug-related information, since drugs primarily target proteins. However, the field of proteomics is still evolving. Whereas transcriptome analyses can identify thousands of genes differentially expressed in an organism, proteomics is still struggling to reach the depth of analysis that a transcriptome study achieves [29,30]. Classical proteomics has concentrated on determining the abundances of various proteins in the proteome [31,32]. Quantitating differentially expressed proteins has given useful information regarding the status of the cell at a given point [30,33]. At the global level, proteomic analyses do not permit the reliable quantification of absolute abundances; hence, most proteomic studies concentrate on the relative abundances of proteins in two separate states. However, a pathogen does not have definite steady states in the host and is constantly trying to overcome host immune responses in order to survive and establish infection. Thus, the interplay between the host and the pathogen creates a dynamic response in both, which is not explored using classical proteomics. The dynamic nature of the pathogen is one of the primary reasons why such a discrepancy exists between the available transcriptome and proteome data [34].

In this review, we discuss the knowledge obtained on the biology of M. tuberculosis from current proteomic studies. Additionally, we explore strategies to study the dynamics of the M. tuberculosis proteome using the protein turnover analysis technique. Protein turnover, along with relative protein abundance values, can help analyze the dynamics associated with a proteome system. Using our analysis of protein turnover and abundance values in stressed M. tuberculosis as an example, we show that protein turnover measurements are more sensitive than relative abundance measurements and that a combination of both measurements, together with transcriptomics, can depict a more complete picture of the pathogen in the host. The analysis of the dynamic responses of the pathogen might help in new drug discovery. More importantly, it might help in understanding the biology of persistent M. tuberculosis, on which information is still very limited. A simple kinetic sketch of why turnover is more informative than abundance alone is given below.
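As an illustration, framed here by us rather than taken from any single study reviewed, protein turnover is commonly described in dynamic-labeling proteomics by first-order kinetics, in which a protein pool P(t) is produced at a synthesis rate k_syn and degraded with rate constant k_deg:

\[
\frac{dP}{dt} = k_{\mathrm{syn}} - k_{\mathrm{deg}}\,P(t), \qquad P_{\mathrm{ss}} = \frac{k_{\mathrm{syn}}}{k_{\mathrm{deg}}}
\]

The steady-state abundance P_ss is unchanged whenever k_syn and k_deg shift in proportion, so abundance-based comparisons are blind to precisely the coordinated changes in synthesis and degradation that turnover measurements, which estimate the two rates separately, can resolve.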
Proteome Analyses in Mycobacteria

Mycobacterial proteomes have been analyzed over the years under a number of different conditions involving nutritional, acidic, oxidative, and low-oxygen stresses. Here we review some of the important findings in mycobacterial biology obtained by examining the response of the bacilli to various stresses.

Modulation of the host phagosomal environment

The initial stage, when a bacterium enters the macrophage, is the focal point that determines whether the bacillus is successful in establishing an infection. M. tuberculosis is transmitted through the airways and establishes infection primarily in the lungs. The first resistance it encounters comes from alveolar macrophages. These macrophages engulf the bacteria through phagocytosis for pathogen killing, infection neutralization, and antigen presentation. One of the important things learnt from early studies on M. tuberculosis is that it is an intracellular parasite known to reside inside macrophages and that it can modulate the host cells in order to survive and proliferate. Many studies have focused on the characterization of modulated phagosomes as well as on the mechanisms of modulation by the bacilli. These studies have shown that M. tuberculosis is able to live in a niche or microenvironment created inside the phagosome [25]. Two aspects became important from this knowledge: the first was to understand how the phagosome responds to intracellular pathogens, and the second was to decipher how the bacilli are able to overcome the stresses mediated by professional phagocytes.

Macrophages internalize various particulate materials through the concerted action of several surface receptors. The engulfment leads to the formation of membrane-bound organelles, i.e., phagosomes. Phagosomes are dynamic bodies: each phagosome recruits a variety of proteins through endosomal pathways, and when it finally recruits acidic vesicles called lysosomes, it matures into a phagolysosome. Proteome analyses have revealed many interesting characteristics about the nature of phagosomes and phagolysosomes. Phagosomes contain many GTPases and hydrolases, and proteins are recruited by the phagosomes as they mature along the endosomal pathway. Many proteins are recycled, and hence proteins found in the initial phagosomes may not be found in the final composition of the phagosome. For example, while hydrolases such as cathepsin A and β-hexosaminidase are already present in high amounts in the early phagosome, other hydrolases such as cathepsin S and the cleaved form of cathepsin D appear later during phagolysosome maturation [35]. Also, many phosphoproteins such as Rab are recruited by the phagosome, indicating that phagosome maturation and action result from signaling processes involving a cascade of phosphoproteins [36]. Another important point to note is that the composition of the phagosome varies with the type of material in it. Comparison of phagosomes containing intracellular pathogenic versus non-pathogenic mycobacterial species has indicated that the phagosomal composition depends on the pathogenic species in the phagosome [25].

Recruitment of proteins such as NRAMP1 and NRAMP2 seems to be important for the regulation of different inorganic ions in the phagosome, and phagosomes containing intracellular pathogens have differing levels of inorganic ions. Inorganic ions such as iron, manganese, and zinc are critical for the growth and survival of mycobacteria, and the secretion of biomolecules to alter the composition of phagosomes is known to occur with many pathogens, such as Salmonella species [37,38]. In M. tuberculosis, studies have shown that the bacilli secrete siderophores to chelate iron away from host storage molecules such as transferrin and lactoferrin [24,39]. Apart from acquiring important inorganic ions, it is also important for the bacilli to stop the maturation of the phagosome into an acidic compartment. The phagosome matures into an acidic vesicle that then recruits hydrolytic enzymes to help neutralize the pathogen before proceeding to antigen presentation [23,40].
Analysis of the intraphagosomal M. tuberculosis proteome, comparing proteins from broth-grown versus intraphagosomal bacilli using 2D gel analysis, revealed proteins such as phosphoglycerate mutase I, a lipid carrier protein (Rv1627c), TrkA, a putative potassium uptake protein, and other conserved hypothetical proteins. Apart from these, other proteins involved in the global stress response, such as peptidyl-prolyl isomerase (Ppi), were found to be involved in both low-pH and low-iron conditions. The increase in the putative lipid transfer protein Rv1627c suggests that mycobacterial lipids may play an important role in the virulence of M. tuberculosis [41]. The presence of these proteins indicates that various proteins are transcribed and secreted in the phagosome to facilitate the survival of the bacilli in host macrophages.

Alteration of the phagosomal composition is also achieved by interference with various signaling processes in the macrophage, leading to arrest of phagosomal maturation. As stated before, phagosomal maturation depends on several signaling cascades. Interference with host signaling achieves the sole purpose of survival for M. tuberculosis by inhibiting phagosomal maturation and phago-lysosomal fusion. Through the heterogeneous secretion of proteins, lipids, and glycolipids into the phagosome, the bacilli are able to interfere with phagosomal maturation, antigen presentation, apoptosis, and the stimulation of bactericidal responses triggered by host signaling cascades. The glycolipid lipoarabinomannan (LAM) in the bacilli is modified into Man-LAM by the addition of mannose and is an abundant glycolipid in pathogenic mycobacteria. Interference with host signaling pathways by mycobacteria is also accomplished by altering levels of secondary messengers such as Ca2+ and by interfering with phosphoinositide 3-kinase (PI3K) signaling. Inhibition of the lipid kinases that are important for phagosomal maturation is another aspect of interference with host signaling. Macrophages infected with potentially harmful pathogens have the ability to undergo apoptosis, which halts the spread of the pathogens. Host macrophages infected with M. tuberculosis have altered apoptotic pathways due to phosphorylation of the apoptotic protein Bad, which prevents the activation of other downstream apoptotic proteins. M. tuberculosis also inhibits the immune system by inducing the production of IL-10. It is also known to modulate the activation of the MAPK and JAK-STAT pathways, which induce pro-inflammatory responses, thereby successfully downregulating the host immune response against the infection [42].

Interference with host signaling pathways is not the only method that M. tuberculosis uses to arrest phagosomal maturation or phago-lysosome fusion. Walburger et al. [10], through mutational analysis, showed that M. tuberculosis secretes protein kinase G into the macrophage. This was one of the first instances in which a eukaryotic-like signaling pathway was shown to modulate the host phagosomal composition, thereby promoting survival of the bacilli. The conclusion from the various responses of M. tuberculosis listed above is that the macrophage phagosome and the bacilli display dynamic responses within the system. By appropriating various nutrients needed for survival in the phagosome and at the same time facilitating its survival by secreting biomolecules into its environment, M. tuberculosis achieves the balance required to survive, proliferate, and establish an infection.
Mycobacterial response to low iron stress

Of all the micronutrients that M. tuberculosis needs to grow and survive, iron is probably the most important. Many studies have shown that an increase in host iron levels exacerbates the onset of tuberculosis [43]. Even though iron is an abundant metal, it is not readily available in free form. It is then no wonder that the bacilli spend significant cellular resources acquiring this metal. This is not an easy task, since in the event of an infection the host immune system sequesters iron away from the pathogen as a non-specific immune response [44]. Macrophages activated by IFN-γ tend to lower the concentration of iron-binding molecules, especially transferrins, in the macrophage endosomes. Iron acts as an important co-factor for as many as forty different enzymes, the most significant of them being the respiratory enzymes [45]. Apart from this, iron is also needed for protective enzymes such as the catalase-peroxidase KatG and superoxide dismutase (SOD), which scavenge free radicals [46]. Mycobacteria are more sensitive to hydrogen peroxide-mediated stress in the absence of iron [47]. In order to acquire iron in the host, M. tuberculosis secretes siderophores, which chelate iron away from host transferrin and lactoferrin [39,48]. Siderophores are typically salicylate-containing biomolecules. In M. tuberculosis, these siderophores are called mycobactins. M. tuberculosis contains two types of mycobactins, a soluble form and a membrane-bound form. It is known that both mycobactins act in a concerted manner to transport iron into the mycobacterial cell. However, the exact mechanism by which they act has not yet been completely deciphered [49]. Since drugs designed either to block siderophore action or to mimic siderophores could be used against the bacilli, the mechanism of mycobacterial siderophore action is currently among the leading topics of research [50,51].

Although iron is an important metal, its uptake and use have to be tightly regulated because of its ability to form free radicals in the presence of hydrogen peroxide (Fenton's reaction) [52]:

Fe²⁺ + H₂O₂ → Fe³⁺ + OH⁻ + •OH

The regulation of iron metabolism is governed by the iron-dependent regulatory protein (IdeR), which controls the uptake of iron by regulating the genes responsible for the synthesis of siderophores. The level of iron in the cell is important for both the host and the pathogen. Iron is not only required for immune regulatory functions; it is also needed for bacterial multiplication and survival. More interestingly, iron has been known to enhance the action of isoniazid (INH) and pyrazinamide on M. tuberculosis [53,54]. Hence, iron is an important virulence factor. The response of M. tuberculosis to low iron levels has been recorded at both the mRNA and protein levels. Most of the global response to iron starvation consists of a rearrangement in the synthesis of various metabolic enzymes [55]. The bacilli seem to regulate their metabolism to counter the stress imposed by low iron levels [21]. Mycobactin-producing genes are also upregulated in response to low iron levels. Proteins such as bacterioferritin (Bfr), which is involved in iron storage, seem to be downregulated when the iron level is low. It is interesting to note that proteins such as KatG and SOD are downregulated in response to a low-iron condition [47,56]. This serves the purpose of not spending energy on the synthesis of proteins that are not needed in the absence of iron.
The downregulation of KatG under low iron conditions is interesting because KatG is the primary catalase peroxidase and peroxynitritase in M. tuberculosis [46]. The downregulation of such an important protein might leave the bacilli vulnerable to the oxidative stress that is part of the immune response to infection. Studies involving strains with a non-functional KatG have shown that other proteins such as alkyl hydroperoxide reductase C (AhpC), thioredoxin C (TrxC) and thiol peroxidase (Tpx) appear to take over the function of protecting the cell from oxidative damage when KatG is unavailable [57][58][59][60][61]. The situation in which KatG is unavailable is not unimaginable, because most INH-resistant strains carry a mutated, non-functional KatG [62]. Another important iron regulator protein is the ferric uptake regulator (Fur). Fur is upregulated under high iron conditions and is also known to control the transcription of genes involved in bacterial iron uptake. Analyses involving both Fur and IdeR showed that the two regulators are important for maintaining and monitoring iron levels in the bacterial cell. Elaborate reviews of the regulation of Fur and IdeR can be found in [49,63,64]. Low availability of iron is also known to induce the dormancy response, as a response to starvation or lack of nutrients [21]. Acidification of phagosome As explained earlier, acidification of the phagosome is an important phenomenon in the macrophage, as it leads to neutralization of the pathogen since most bacterial enzymes are rendered inactive under acidic conditions. Most in vitro studies on the response of M. tuberculosis to acidic pH have been carried out via transcriptome analysis [28,65]. Two studies analyzed the transcriptomes of M. tuberculosis and M. smegmatis under acidic conditions (pH 5.0 and pH 5.5) and identified 291 and 81 differentially expressed genes, respectively. Of these differentially expressed genes, the most significant were lipF and sigF, which were induced >1.5-fold. These two genes were not induced in bacilli residing in J774 macrophages infected with live M. tuberculosis, which correlates with the observation that phagosomes containing live bacilli fail to acidify [28,66,67]. A study of phagosomes containing M. avium revealed that even though LAMP-1 was acquired by the phagosome, it failed to acquire the vesicular proton-ATPase needed for acidification, which may be an important step by which mycobacteria inhibit phagosomal acidification [68]. An in vitro low-pH proteome analysis carried out in our lab using the non-pathogenic M. smegmatis strain identified ca. 1,070 proteins in total, of which 241 were differentially expressed. Gene ontology (GO) analysis showed that three GO terms dominated the upregulated proteins: stress response (GO:0006950), amino acid metabolic process (GO:0006520) and monosaccharide metabolic process (GO:0005996). The GO terms that dominated the downregulated proteins were transmembrane ion transport (GO:0034220), polyol metabolic process (GO:0019751), nucleotide-sugar biosynthetic process (GO:0009226) and polysaccharide metabolism (GO:0005976). Repression of ion channels can be seen as an adaptation to the higher proton concentration in the environment outside the mycobacterial cell. 
Other responses indicate that metabolic systems are regulated in the bacilli upon exposure to low pH, especially sugar metabolism, with increased monosaccharide metabolism relative to polysaccharide metabolism, as obtained by KEGG pathway analysis [69]. Exposure to nitric oxide and dormancy Apart from nutrient starvation and acid stress, M. tuberculosis is also exposed to oxidative stress in the form of reactive nitric oxide (NO). NO is generated by the iNOS pathway, which metabolizes L-arginine. NO is an uncharged molecule composed of seven electrons from nitrogen and eight electrons from oxygen; this combination of 15 electrons results in an unpaired electron that makes NO paramagnetic and a radical. The majority of biological molecules contain bonds filled with two electrons and are not reactive toward NO. However, a select range of molecules with unpaired electrons in their outer orbitals, such as a haem iron, are highly susceptible to reaction with NO. Because iron-containing enzymes and proteins are abundant in the cell, NO can potentially bind these biomolecules as a free radical and render them unstable [58]. Because NO is a potent vasodilator, its synthesis in macrophages takes place only for a very short time. Transcriptome analyses have shown that exposure to NO induces dormancy in the bacilli. Short exposure to NO resulted in the differential expression of a specific set of genes called the DosR/S regulon or the dormancy regulon [70]. The genes under the regulation of DosR/S constitute the DosR regulon, which consists of ca. 53 genes. DosR is important for initiating the shift from an actively growing state to a non-replicating state, or dormancy. The effects of DosR were shown to be somewhat transitory, which led to the definition of an 'enduring hypoxic response' (EHR) [71]. A set of 230 EHR genes is induced after the transitory response of the DosR regulon. These genes are independent of the initial DosR/S-mediated response; they are induced later and are sustained during the dormancy phase, suggesting that they could be more relevant to the dormant state of M. tuberculosis. Because many tuberculosis patients might receive treatment long after the initial infection, the EHR genes should represent a better pool of drug-target candidates against dormant M. tuberculosis [72]. Dormancy is an important aspect of mycobacterial pathogenesis since it leads to a population of bacteria in the host known as latent bacteria. The hallmark of latency is that the bacilli reside in a specialized compartment of dead cells and necrotic tissue known as a granuloma, where hypoxic and low-nutrient conditions occur. To adapt, the bacilli lower their metabolism and can reactivate later if the host immune system is compromised [20,22]. For many years, significant effort has been put not only into understanding the biology of dormant cells but also into devising strategies to eliminate dormant cells using conventional drugs [73,74]. One of the important characteristics of dormant cells is their low level of metabolism along with their non-replicating status. A proteome analysis of non-replicating persistent M. tuberculosis in the NRP-1 and NRP-2 stages of the Wayne model of dormancy revealed that global protein expression decreased under both NRP-1 and NRP-2 conditions. 
A total of 38 and 128 proteins were found to be differentially expressed more than 2-fold in the NRP-1 and NRP-2 stages, respectively. Most of the downregulated proteins corresponded to the transcriptional and translational machinery, whose repression may be required to adapt to low oxygen levels. Many of the upregulated proteins had unknown functions. Trehalose metabolism proteins were upregulated, consistent with the fact that trehalose is known to be a stress protectant and can also serve as a carbon source [75]. Starck et al. [76] showed that M. tuberculosis cells grown under anaerobic conditions upregulated the α-crystallin homologue and GroEL2, along with a few metabolic proteins that were also found to be upregulated under starvation in other independent studies [27,77]. These results show that hypoxic stress and exposure to NO enforce similar stress-response patterns in proteins related to carbon metabolism and stress response. In macrophages, the initial progress of mycobacteria can be halted by the host immune system or by an anti-TB drug that targets actively replicating bacteria. The host immune system initiates a set of stresses such as deprivation of iron, low oxygen, and exposure to NO. The exposure of the bacilli to those stress factors or drugs, however, might also initiate the dormancy response, which in turn renders a subpopulation of the bacilli resistant to the host assault or drug treatment. At this stage, the tuberculosis infection might appear to be cured but can resurface later if the immune system is weakened by aging, poor nutrition, or co-infection with HIV [15,78]. Because dormancy is related to starvation and is an important aspect of mycobacterial pathogenesis, most proteome studies are directed towards understanding what drives the bacilli towards dormancy and how the bacilli cope with survival in the dormant state. Turnover with Respect to Proteome Protein turnover is an important biological phenomenon in an organism and plays a role as important as gene transcription and protein translation [79]. The synthesis of new proteins and the degradation of old ones form a dynamic process in an organism. Turnover not only helps to clear old proteins but also aids fast adaptation to a new condition or environment, as organisms can alter their protein turnover rates [80]. Apart from this, turnover also brings new proteins into play without placing much strain on the resources of the organism. Early studies of turnover rates concentrated on single proteins [81]. Some early global protein turnover studies involved the identification of different E. coli proteins with different turnover rates; one of those studies showed that a dynamic state for individual proteins exists in non-growing as well as growing cells [82]. Studies of turnover give a dynamic view of the changes occurring in protein abundance, and when applied at the scale of the whole proteome, they allow us to study the dynamic nature of the entire proteome [79]. The M. tuberculosis genome contains approximately 4,000 protein-coding genes, yet most dormancy studies of the proteome detected far fewer proteins that were differentially expressed. Those that were detected were mostly metabolic proteins and a few stress-response proteins [75]. 
One explanation is that current technology may lack the sensitivity to detect the more subtle responses of non-replicating persistent M. tuberculosis. It is widely known that a strong correlation between transcriptome studies and proteomic investigations is found in very few cases. Most proteomic studies successfully identify hundreds of proteins in M. tuberculosis using the sensitive mass spectrometers available today; however, each study finds very few differentially regulated proteins, especially in the case of intraphagosomal or persistent bacteria. In a proteomic analysis of intraphagosomal M. tuberculosis, Mattow et al. [41] acknowledged that there appears to be very little correlation between their previous transcriptome studies and their proteomic analysis. One advantage of studying the proteome is that it focuses on the functionally relevant species; insight into the proteome therefore provides a more direct interpretation of drug activity. With advances in mass spectrometry, the investigation of complex proteomes has become more accessible, although this upward trend has mostly been in the field of instrumentation [31]. Proteomics has so far focused on steady-state abundances of various proteins [83], comparing the increase or decrease of protein abundance in one state versus another. Even though this approach yields a great deal of information, it does not completely capture the flux in the system that arises from the dynamic nature of the organism, e.g., the dynamics of M. tuberculosis as it constantly tries to overcome the stresses in the phagosome. One advantage of proteome studies over transcriptome analyses is that transcriptomics cannot capture the dynamics of the functional gene products; many time-course analyses have of course been carried out, but the question remains as to how much of that data is functionally relevant to the gene products that carry out the functions. The point is that one needs a technique that not only utilizes the data available from transcriptome studies but also focuses on the proteome, so that the dynamics of the system are taken into account. With the recent advances in high-resolution mass spectrometry, protein turnover analysis at the global level promises to bring valuable insights into the dynamic nature of a proteome [79]. Studies of global protein turnover in an organism investigate the two main processes that contribute to steady-state protein abundance: protein synthesis and protein degradation. In logarithmically growing bacterial cells, the rate of protein synthesis is much greater than that of protein degradation. Under stresses that lead to a non-replicating state, the rate of protein degradation increases dramatically relative to that of protein synthesis, leading to faster protein turnover. Under these conditions, the coupling of gene transcription and protein translation is not the only factor that determines protein abundance; thus, a discrepancy arises between transcriptomics and proteomics data. The dynamic process of protein turnover determines the abundance of the protein in the cell [55,56,84,85]. In addition, protein secretion also affects the steady-state abundance of a protein in the cell [56]. 
Hence, to fully comprehend the abundance of proteins in the cell, methods have to be developed that take into account not only gene transcription but also protein synthesis, degradation, secretion, and probably modification as well. Protein turnover discerns more subtle changes in a cell The advent of highly automated, high-resolution mass spectrometry technology promises to bring in-depth insight into the dynamic nature of a proteome at the global level. The work of Pratt et al. [85] demonstrated the determination of protein degradation rate constants in a steady-state population of yeast grown in a chemostat. The authors used isotope labeling along with 2D gel analysis to study protein turnover and advocated that protein turnover is 'a missing dimension in proteomics.' In another study, Cargile et al. [84] labeled E. coli cells with 13C to study the relative synthesis over degradation ratio (S/D). These pilot works demonstrated the global analysis of protein turnover with individual protein identifications, but their data were not correlated with abundance values. In our laboratory, we used M. smegmatis, a non-pathogenic surrogate of M. tuberculosis, to study protein turnover under the stresses of low pH and low iron. As mentioned before, M. tuberculosis is exposed to both stresses in the phagosome until the bacilli overcome them, and we were interested in the effect of low pH and low iron on global protein turnover [55]. The cells were grown with two different methodologies for the two types of stress. For the pH stress, the cells were initially grown in 14N-containing media at pH 7.0. Once the cells were in the initial log phase, they were divided into two flasks, the media was doped with 50% 15N, and the pH was reduced to 5.0 in one of the flasks. The cells were harvested after one doubling and analyzed for protein turnover using LC/LTQ-FTMS. For the low iron analysis, the cells were first grown in 15N-containing media until mid-log phase, collected by centrifugation, and resuspended in 14N-containing media. The cells were then allowed to grow for one doubling and harvested for analysis by LC/LTQ-FTMS. Turnover analysis of M. smegmatis under the two stress conditions revealed two different patterns. In the low pH condition, many proteins had increased turnover at pH 5.0 compared to pH 7.0. This is an expected reaction, because the bacterium has to readjust its proteome to counter the stress posed by the increased proton concentration. The correlation coefficient for the low-pH-shocked cells was small, indicating that the proteins in cells exposed to pH 5.0 underwent extensive readjustment in different directions. Under low iron stress, the high correlation coefficient suggested either that there was not much rearrangement of turnover values or that most proteins changed in a similar direction. Proteins like KatG and Tpx, which are important for the protection of mycobacterial cells against oxidative stress, had low protein turnover values under both low iron and low pH conditions. A recent study of M. tuberculosis Tpx suggested that it may be an important protein against oxidative stress, since Tpx mutants were unable to survive in macrophages in an infected mouse model. It would be interesting to analyze how the low turnover of Tpx correlates with the survival of mycobacteria in the cell. One protein found to have increased synthesis was RNase E (Rne). 
It is known that under certain stress conditions, cells downregulate their metabolism until suitable growth conditions return. RNase E, an important enzyme involved in the turnover of mRNA, is key to bacterial mRNA degradation and processing. Low iron conditions may create a need for increased degradation of mRNAs in the cell in order to inhibit protein synthesis, which may necessitate increased synthesis of RNase E, as indicated by the increase in its synthesis over degradation ratio. Increased synthesis of RNase E may be important for tighter regulation of mRNAs and for survival in adverse environmental conditions. RNase E levels were also found to increase during adaptation to starvation, and we suspect that an increase in RNase E synthesis may be important to dormant cells as well. We thus demonstrated the successful study of protein turnover at the global level under different stress conditions. An open question, however, is how protein turnover values correlate with protein abundances. To compare protein abundance and protein turnover values in M. tuberculosis under iron-regulated conditions [56], we collaborated with Dr. Issar Smith's laboratory at The Public Health Research Institute (TPHRI) to grow the cells under iron-replete and iron-depleted conditions. M. tuberculosis was initially grown in iron-depleted, unlabeled media containing a 14N nitrogen source. Log-phase cells were transferred to two different flasks containing 15N-labeled media; one of the flasks contained iron whereas the other remained iron depleted. In summary, a comparison was made between cells that were transferred from a low iron to a high iron condition. The cells were harvested and analyzed for protein turnover using the high-resolution nano-LC/LTQ-FTMS system (Figure 1). The proteins synthesized de novo in the 15N-labeled media constitute the new fraction; the proteins that remain unlabeled constitute the old fraction, synthesized prior to the transfer to the 15N-labeled media and surviving the degradation and secretion processes. The data obtained from the comparison of protein abundance and protein turnover values showed that protein turnover is a much more sensitive measurement for discerning changes in the proteome than an abundance measurement alone. This improved ability of turnover analysis to discern more subtle changes in protein synthesis, degradation, and secretion activities is due to the fact that newly synthesized (labeled) and pre-existing (unlabeled) proteins are quantified separately, so that the levels of de novo protein synthesis, protein degradation, and protein secretion can each be deduced. Upon the transfer of late-log-phase cells from a low iron to a high iron medium, protein abundance measurements showed that of the 104 proteins we identified, only five were upregulated and 16 were downregulated in the HI media. The relative abundance of KatG was upregulated in cells grown in the high iron media. Protein turnover analysis comparing cells grown in low iron and high iron media showed that more proteins had increased synthetic activity in the high-iron-grown cells: in the cells grown in the low iron media, the S/D increased for 24 proteins and decreased for eight, whereas in the cells grown in the high iron media, 56 proteins had increased S/D and five had decreased S/D. 
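To make the quantities in this analysis concrete, the sketch below illustrates how old-fraction and new-fraction abundances can be combined into turnover (S/D) and total-abundance ratios. The intensity values, protein selection and 2-fold flagging threshold are hypothetical placeholders for illustration only, not data from the study.

```python
# Minimal sketch: deriving turnover (S/D) and abundance ratios from
# labeled (new, 15N) and unlabeled (old, 14N) extracted-ion intensities.
# All numbers are hypothetical illustrations, not measured data.

proteins = {
    # name: (A_old_LI, A_new_LI, A_old_HI, A_new_HI)
    "KatG": (1.0e6, 0.8e6, 1.1e6, 3.5e6),
    "Bfr":  (2.0e6, 2.2e6, 1.9e6, 0.6e6),
}

def turnover_and_abundance(a_old, a_new):
    """Return total abundance A_T = A_L + A_M and turnover S/D = A_M / A_L."""
    return a_old + a_new, a_new / a_old

for name, (old_li, new_li, old_hi, new_hi) in proteins.items():
    t_li, sd_li = turnover_and_abundance(old_li, new_li)
    t_hi, sd_hi = turnover_and_abundance(old_hi, new_hi)
    abundance_ratio = t_hi / t_li  # HI vs. LI total abundance
    print(f"{name}: abundance ratio = {abundance_ratio:.2f}, "
          f"S/D(LI) = {sd_li:.2f}, S/D(HI) = {sd_hi:.2f}")
    # A protein whose S/D shifts sharply while its total abundance barely
    # changes is a candidate for degradation or secretion of new protein.
    if sd_hi / sd_li > 2 and 0.5 < abundance_ratio < 2:
        print(f"  -> {name}: turnover responds although abundance is flat")
```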
Comparison of protein abundance measurements to protein turnover measurements clearly suggests that protein turnover provides more information with which to uncover the dynamic response of the proteome (Figure 1). In addition to providing information about the synthesis and degradation of proteins, turnover analysis can also indicate whether a protein has been secreted, when the turnover values are analyzed together with the protein abundance measurements. In our study of proteome dynamics, we found that some proteins showed little change in relative abundance even though their synthesis had increased significantly. As stated before, the relative abundance of a protein in the cell can be affected not only by synthesis or degradation but also by secretion. In our turnover analysis, we found discrepancies between the abundance values of certain proteins and their turnover values. A previous proteomic study of M. tuberculosis culture filtrates showed that many of those proteins are secreted into the culture filtrate; proteins such as FbpC2, KatG, and the mammalian cell entry protein Rv0172 were also predicted to be secreted [86]. These results support the idea that protein turnover in combination with abundance analysis can predict the secretion of proteins and reveal the interconnected roles of protein synthesis, degradation, and secretion in determining protein abundances in cells. These analyses illustrate that protein turnover can divulge information that classical proteomics does not provide. The integration of data from transcriptome studies, abundance measurements and turnover analyses will likely provide a more complete picture of the dynamics associated with the proteome and, to some extent, reconcile the discordances between transcriptome and proteome analyses. Figure 1: Protein relative abundance and turnover of iron-starved M. tuberculosis cells in response to a high-iron (HI) and low-iron (LI) condition. The abundance of a protein is represented by an extracted ion-chromatographic intensity (A). A protein is quantified for its old-fraction abundance (A_L), new-fraction abundance (A_M), and total abundance (A_T); A_T is the sum of A_L and A_M, and A_M/A_L represents the S/D, i.e., protein turnover. The three M-A plot panels illustrate the total abundance ratios of proteins between the HI and LI cells (Panel A), the protein turnover in the HI cells (Panel B), and the protein turnover in the LI cells (Panel C). Proteins with a >2-fold change (p < 0.05) in total abundance ratio or turnover are marked with black triangles and diamonds. Adapted from reference [56] with permission. Implication of proteome turnover studies in mycobacteria M. tuberculosis is a potent human pathogen that parasitizes macrophages. It induces vigorous immune responses, yet persists inside macrophages, evading host immunity. Although a variety of control and eradication measures have been implemented against tuberculosis, such as vaccination, aggressive chemotherapy, and public health surveillance, tuberculosis remains a major global health problem, continuing to cause nearly two million deaths and nine million new infections per year. Ca. one third of the world population is infected with latent M. tuberculosis. Efforts to search for effective vaccines against tuberculosis and new drugs that can rapidly sterilize latent M. tuberculosis 
are hindered by the lack of information about the metabolism and immunogenicity of the intracellular bacterium in its latent form. Two important questions relate to the chemotherapeutic agent and vaccine development efforts: (a) does intracellular M. tuberculosis synthesize unique, immunologically important proteins? and (b) is the protein synthesis profile of latent intracellular M. tuberculosis so different from that of in vitro models that new drug target candidates are needed for faster treatment of latent M. tuberculosis? Elucidation of M. tuberculosis protein synthesis within infected macrophages will provide valuable information to further understand the molecular basis of tuberculosis latency, to provide novel targets for drug development, and possibly to discover a more potent vaccine candidate. With the advent of high-precision, automated mass spectrometry instrumentation to support large-scale proteomic studies, protein turnover analysis at the global level is of potentially increasing importance for biomedical research [79]. One application of protein turnover analysis would be to study M. tuberculosis proteome dynamics in its non-replicating state in an intracellular environment or under an in vitro culturing condition. Whereas over a hundred research articles have been published on mycobacterial proteomes, only a few dealt with non-replicating M. tuberculosis [27,75,87], and information about protein turnover in non-replicating or dormant M. tuberculosis is scarce in the literature. Given the potential importance of proteome dynamics in bacterial cell sporulation or dormancy [81,[88][89][90], a study of mycobacterial protein turnover at the global level [55,56] will likely help to advance our understanding of the molecular basis of M. tuberculosis persistence. The metabolic requirements of M. tuberculosis in latency remain unclear and difficult to study. The long therapeutic regimen required to treat latent tuberculosis infection is probably due in part to the lack of a direct target specific to dormant M. tuberculosis. While there are new drug treatment regimens and several new anti-tuberculosis drugs in the development pipeline that aim to shorten the treatment period and to overcome multi-drug-resistant strains [72], most of these new drugs are still based on existing classes of antimicrobial compounds that target the conventional pathways and molecular machinery critical for the growth of M. tuberculosis. These drugs could still be countered by drug-resistant strains that emerge from non-adherence to a prolonged regimen against latent tuberculosis infection. Thus, the need to discover novel drug targets, especially those against dormant bacteria, is urgent. A 'simple but nonetheless vexing problem' [72] in target discovery against non-replicating M. tuberculosis is that many methods rely on a growth-inhibition measurement to assess the effect of drug treatment. We showed that, at a global level, a protein-turnover measurement was more capable of discerning protein synthesis and degradation activities than a protein relative-abundance measurement alone [56]; those data suggest that protein dynamics analysis with concomitant turnover and abundance measurements could add a valuable alternative for the drug target discovery problem in non-replicating M. tuberculosis. 
The concomitant turnover and abundance measurement approach could also be useful for detecting and validating drug treatment effects at an early phase, during which the most relevant drug effect can be isolated from other non-specific cell stress responses. We anticipate that protein turnover analysis will also be a useful tool to profile the proteins uniquely expressed in intracellular M. tuberculosis, i.e., bacilli grown in macrophages. In one conceivable experimental design, for example, one could first label culture broth-grown M. tuberculosis cells with a stable isotope, such as 15N or 13C, and then use the labeled M. tuberculosis cells to infect macrophages grown in an unlabeled culture medium. In this way, new proteins synthesized after the M. tuberculosis cells enter the macrophages will be unlabeled and thus distinguishable from the old proteins pre-existing in the labeled M. tuberculosis before infection. Because the abundances of the old and new proteins can be determined separately [56], the proteins more prone to degradation could also be profiled. Based on the quantitation of old, new and total protein abundances, one can deduce the proteins that are probably secreted into the phagosomal compartment. Secreted proteins, especially in an intracellular environment, are more likely to be involved in the modulation of the immune response and in antigen interaction, and would therefore be more likely to be effective vaccine candidates. Conclusions Protein turnover is an essential part of a metabolically active organism. It is also one of the important parameters affected when an organism adapts to an environment. Study of protein turnover at the global level helps us to understand the dynamics of the proteome as it changes from one state to another. Proteome dynamics involves the continuous degradation and synthesis of proteins, and the steady-state abundance of a protein in the cell is a function of its synthesis and degradation. M. tuberculosis undergoes significant changes at the proteome level as the bacilli try to establish an infection while residing inside macrophages; this corresponds to changes in the synthetic and degradative processes in the mycobacterial cell. An increase in protein abundance can result from an increase in synthesis, a decrease in degradation, or both; likewise, a decrease in protein abundance can result from decreased synthesis, increased degradation, or both. A change in both the synthetic and degradative processes in the same direction may not greatly affect the net abundance of the protein; however, a change in turnover should still be detectable. The additional importance of protein turnover studies comes from the fact that protein abundance and transcriptome analyses can augment turnover data, thereby identifying proteins that are differentially expressed as well as proteins that are secreted outside the cell. Until now, proteome analyses did not provide that kind of information. M. tuberculosis is able to overcome a weakened immune system through the variety of stress response mechanisms described above and proliferate. In an immuno-competent host, however, the bacilli might proliferate for some time before their growth is arrested. This leads to another phase of mycobacterial physiology, i.e., the dormant or latent phase, characterized by non-dividing cells and a low metabolic profile. 
The biology of the dormant, non-dividing state of mycobacteria is still a mystery, and current analytical methods involving conventional proteomic approaches have proven inadequate for deciphering it. With recent advances in quantitative proteomics and mass spectrometry, it is now possible to study global protein turnover during many different phases of an organism's life. The data obtained in our turnover studies came from growing populations of M. tuberculosis; however, the sensitivity of our analyses suggests that the turnover analysis could be extended to dormant M. tuberculosis cells as well. Dormant bacteria are non-dividing but metabolically active and, interestingly, remain sensitive to changes in their environment. Since protein turnover is a function of synthesis and degradation, turnover allows us to look at proteins more closely, increasing the chances of detecting subtle changes in the proteome of dormant M. tuberculosis. This translates to increased identification of novel drug targets for dormant M. tuberculosis. The limitations in studying protein turnover lie in the design of experiments. Protein turnover studies are performed using isotopic labels for two different states; in vitro, these experiments can be done suitably by labeling the cells at a certain time point before or after the stress. However, to study the turnover of the proteome under in vivo conditions, newer methods need to be developed, since it would be difficult to label the pathogen in vivo. Studies on M. tuberculosis also need to be augmented by studies of macrophage biology. Protein turnover analysis can likewise be used to study phagosomal compartmentalization in the macrophage. Since phagosome maturation is a dynamic process, knowledge of its cascading events might give us insight into how a pathogen such as M. tuberculosis is able to modulate it, thereby fuelling research on molecules that interfere with the modulation of host phagosomes by M. tuberculosis.
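As a numerical illustration of a point made in the conclusions above — that coordinated changes in synthesis and degradation can leave abundance unchanged while turnover shifts — the following sketch evaluates a simple first-order kinetic model. The rate constants are hypothetical and chosen only to make the effect visible; this is not a model fitted to any of the data discussed here.

```python
# Minimal sketch, assuming first-order kinetics: dP/dt = k_syn - k_deg * P.
# At steady state P* = k_syn / k_deg, so doubling both rate constants
# leaves abundance unchanged while turnover doubles.

def steady_state(k_syn, k_deg):
    """Steady-state protein level for dP/dt = k_syn - k_deg * P."""
    return k_syn / k_deg

# Hypothetical baseline vs. stressed state with a coordinated up-shift
baseline = {"k_syn": 10.0, "k_deg": 0.5}   # arbitrary units
stressed = {"k_syn": 20.0, "k_deg": 1.0}   # both rates doubled

p_base = steady_state(**baseline)
p_stress = steady_state(**stressed)

print(f"abundance: baseline = {p_base:.1f}, stressed = {p_stress:.1f}")  # identical
print(f"turnover (k_deg): baseline = {baseline['k_deg']}, "
      f"stressed = {stressed['k_deg']}")                                 # doubled

# An abundance-only comparison would report no change between the two
# states, whereas a turnover measurement (e.g., S/D from isotope
# labeling) would reveal the doubled flux through synthesis/degradation.
```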
2017-06-20T07:17:12.102Z
2009-08-28T00:00:00.000
{ "year": 2009, "sha1": "3b32a59e6c8d7581e74e458bcee648f05465a403", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/14/9/3237/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3b32a59e6c8d7581e74e458bcee648f05465a403", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
54446960
pes2o/s2orc
v3-fos-license
The possibility of using shogaol for treatment of ulcerative colitis Objective(s): This study aimed to investigate the effect of Shogaol on dextran sodium sulfate (DSS)-induced ulcerative colitis (UC) in mice compared to an immune-suppressant chemotherapeutic medicine, 6-thioguanine (6-TG). Materials and Methods: Thirty-six adult BALB/c mice were divided into six groups: group 1 (negative control): no DSS exposure and no treatment; group 2 (positive control): DSS exposure without treatment; group 3 (vehicle control): DSS exposure and olive oil treatment; group 4: DSS exposure and 0.3 mg/kg 6-TG treatment; group 5: DSS exposure and 20 mg/kg Shogaol treatment; and group 6: DSS exposure and 40 mg/kg Shogaol treatment. At day 16, the mice were euthanized and UC was evaluated according to colon length, histological index score and expression scores of the epidermal growth factor receptor (EGFR). Results: The disease activity index (DAI) and histological index scores of mice treated with 40 mg/kg body weight (BW) Shogaol were lower than the corresponding scores of mice treated with 6-TG. In addition, the rate of healing in the former mice was approximately 3-fold higher than that of the latter, as indicated by the lack of EGFR expression in colonic glands and macrophages. Conclusion: These findings showed that the therapeutic effect of 40 mg/kg BW Shogaol could be better than that of 6-TG in the treatment of UC, and may draw attention to the priority of using this inexpensive plant-derived substance for treatment of inflammatory bowel diseases, because treatment with 6-TG is usually associated with adverse side effects. Introduction The risk of disease in the gastrointestinal tract is high because of the continuous exposure to numerous bacteria as well as food-derived and environmental toxins (1). Crohn's disease and ulcerative colitis (UC), two major types of inflammatory bowel disease (IBD) with multifactorial etiology, are characterized by both acute and chronic inflammation of the intestine and cause an enormous burden to public health (2). In the last few decades, various models of experimental IBD have been developed to characterize the complexity of IBD pathogenesis, delineate the underlying molecular mechanisms and improve treatment options (3,4). Dextran sodium sulfate (DSS), a water-soluble, negatively charged, sulfated polysaccharide with a highly variable molecular weight ranging from 5 to 1400 kDa, is employed to induce colitis in the mouse, the most widely used animal model of colitis (5). DSS-induced colitis in mice is a suitable model characterized by morphological and histological features similar to acute and chronic UC in humans, such as diarrhea, hematochezia, weight loss, mucosal ulceration, and extensive mucosal damage (6,7). Patients with IBD are conventionally treated with steroidal and non-steroidal anti-inflammatory drugs, immune-suppressants, and/or antibiotics; however, these medications temporarily induce and maintain remission in only 45% of patients. In addition, they have numerous side effects, and drug tolerance has been observed in some patients (8,9); therefore, the search for new medications for IBD patients must continue. 
Shogaol, one of the phenolic constituents of ginger, has antimicrobial, antioxidant, anti-inflammatory, analgesic, antipyretic, anti-diabetic, antiemetic, antitussive, and hypotensive effects (10), and recently there has been growing interest in Shogaol for its potential effects against cancers, such as ovarian, lung, skin, colon and liver cancers (11). Epidermal growth factor (EGF), the prototypical ligand for the EGF receptor (EGFR), is secreted by the submandibular and Brunner's glands under physiological conditions (12). However, it can be produced by other cell types under pathological conditions, such as by intestinal epithelial cells in response to injury (13). The biological functions of EGFR include promotion of cellular proliferation, differentiation, migration, and survival (14). The present study aimed to investigate the possible protective effect of Shogaol on DSS-induced colitis in BALB/c mice in comparison with 6-thioguanine (6-TG), an immunosuppressant chemotherapy conventionally used for UC, based on scoring of the disease activity index (DAI), histological index and EGFR expression. Animals and treatments Thirty-six adult, male and female BALB/c mice weighing 25-30 g were purchased from the Animal House at the College of Veterinary Medicine, University of Sulaimani (Sulaimaniyah Governorate, Iraq), housed at the same facility under temperature- and light-controlled conditions, and permitted consumption of tap water and standard food ad libitum. All procedures involving mice in this study were carried out humanely, in accordance with the principles outlined in the Guide for the Care and Use of Laboratory Animals, and with the approval of the Ethics Committee at the College of Veterinary Medicine, University of Sulaimani. All mice, except the negative control (group 1), were exposed to 5% DSS (molecular weight 40 kDa; Carl Roth GmbH + Co. KG) via drinking water (5% weight/volume) for 5 days to induce UC (15). The treatment-containing water was changed every day. Following that, the DSS-exposed mice were divided into 5 groups (groups 2 to 6) as follows: group 2 (positive control), left without treatment; group 3 (vehicle control), treated with 1 ml/kg body weight (BW) olive oil (8873.1, Carl Roth); group 4, treated with 0.3 mg/kg BW 6-TG (16) prepared in a vial of sterile water (Biochem, Chemopharma, France); and groups 5 and 6, treated with 20 mg and 40 mg/kg BW Shogaol (≥90%, Sigma-Aldrich), respectively. The Shogaol was dissolved in olive oil as a vehicle. All treatments (other than the 5% DSS) were given as a single daily dose by oral gavage (a total of 8 doses for each treatment over 10 days, i.e., four days of treatment followed by a one-day interval). Assessment of colitis Body weight measurements and disease activity index score The BW was measured 3 times over the 16-day experimental period (starting weight, post-DSS exposure weight, and post-treatment weight), and the mice were inspected for the consistency of their stool and the presence of rectal bleeding around the anus. In addition, the DAI score (Table 1) was used to describe the severity of UC (17). Colon measurements and histological scoring At the end of the experimental period, the mice were anesthetized with ketamine and xylazine and then euthanized by cervical dislocation. The abdomen was opened and the entire colon was resected, placed on clean filter paper, and its length (in a relaxed position without stretching) was measured with a ruler. 
Following that, the colon was emptied of its contents, dissected longitudinally and washed with neutral buffered saline. The proximal and distal colon portions were separated from each other, placed on separate filter papers for 2 min and immediately fixed in 10% neutral buffered formalin for 24 hr. Subsequently, colon samples were obtained and subjected to a series of histopathological preparations. Transverse colon tissue sections (4 μm thick) were obtained using a rotary microtome, stained with hematoxylin and eosin and examined under different magnifications by light microscopy (Leica, Germany). The histological index score (Table 2) was used to assess the severity of UC in mice of the different groups according to the histopathological morphology (18). Immunohistochemistry staining Colon sections (4 μm thick) were mounted on positively charged slides and allowed to dry for 1 hr at room temperature followed by 1 hr in an incubator at 60 °C. The sections were deparaffinized and rehydrated with xylene and graded alcohol solutions. Antigen retrieval was performed by boiling in a pressure cooker for 20 min in citrate buffer. Endogenous peroxidase activity was blocked by dipping the slides in 0.3% hydrogen peroxide for 10 min. Following that, the sections were covered with 3% goat serum for about 1 hr to block non-specific binding. The slides were then placed in a humid chamber and incubated for 1 hr with rabbit anti-EGFR polyclonal antibodies (Dako, Germany), followed by three washes (2 min each) in buffer. The sections were then incubated with biotinylated goat anti-rabbit secondary antibodies (Bio SB, USA) for 30 min, washed three times in buffer, incubated with horseradish peroxidase-streptavidin (Envision, Bio SB) for 30 min and washed again four times in buffer. Tissue staining was visualized using 3,3′-diaminobenzidine (DAB) substrate solution (Bio SB, USA) for 10 min and counterstained with hematoxylin. The slides were then dehydrated, mounted and examined by light microscopy (Leica, Germany) to detect positive immunohistochemistry (IHC) staining in the colonic epithelial cells and macrophages. Statistical analysis The results of the present study are stated as means ± SE, and the statistical analysis of variation among the experimental groups was performed with the paired t-test and the Pearson correlation coefficient test. P-values less than 0.05 were considered significant. All statistical analyses were performed using SPSS software version 22. Results UC extent in mice of the different groups was assessed at the end of the experiment according to BW changes, DAI score, colon length, histological score and mortalities. Table 3 shows the changes in BW of the mice on day 1 (starting weight), day 5 (post-DSS exposure weight) and day 16 (post-treatment weight). A significant decrease in BW was apparent on day 5 compared to the starting BW in all groups of mice except group 1 (negative control). On day 16, mice of group 2 (positive control; DSS exposure with no treatment) and group 3 (vehicle control) still showed a significant weight loss in comparison with their starting BW, whereas mice of group 4 (DSS exposure with 6-TG treatment), group 5 (DSS exposure with 20 mg/kg BW Shogaol treatment), and group 6 (DSS exposure with 40 mg/kg BW Shogaol treatment) regained their normal BW and showed no significant differences compared to their starting BW. 
This finding revealed that treatment of mice with either concentration of Shogaol (20 or 40 mg/kg BW) conferred protection against the weight loss caused by 5% DSS exposure approximately similar to that accomplished by the 6-TG treatment. Disease activity index score (DAI score) Mice of the negative control group were scored zero, having no symptoms of UC, in comparison with the positive control mice, which were scored 12, having prominent blood in their stool, diarrhea and rectal bleeding (Figure 1). The DAI scores of mice in group 3 (vehicle control; DSS exposure and olive oil treatment), group 4 (DSS exposure and 6-TG treatment), group 5 (DSS exposure and 20 mg/kg BW Shogaol treatment), and group 6 (DSS exposure and 40 mg/kg BW Shogaol treatment) were 9, 6, 4, and 3, respectively. No mortalities were observed in any group (Figure 2). Table 4 shows the average length of the colon in each group of mice at the end of the experiment. In comparison with mice of group 1 (negative control), a significant decrease (P<0.05) in colon length was shown in mice of group 2 (positive control) and group 3 (vehicle control); a non-significant decrease (P>0.05) was observed in group 4 (DSS exposure and 6-TG treatment) and group 5 (DSS exposure and 20 mg/kg BW Shogaol treatment), and only a minimal decrease was shown in group 6 (DSS exposure and 40 mg/kg BW Shogaol treatment). Representative colon images belonging to mice of the different study groups are shown in Figure 3. Histological scoring of ulcerative colitis severity The histopathological examination showed inflammation of the colon in mice of the DSS-exposed groups in comparison with the negative control group, which showed normal colon morphology, and the histological scoring (according to the extent of epithelial erosions or ulcerations and the extent of inflammatory cell infiltration) exhibited different levels of colitis severity in the different groups of DSS-exposed mice. In general, the inflammation and the total histological index score were more severe in the distal colon than in the proximal colon in all DSS-exposed mice except those of the vehicle and 6-TG treatment groups (Figures 4-6). The highest score (Sum score 6) was recorded for the distal colonic segment in mice of group 2 (positive control; DSS exposure without treatment) and the lowest (Sum score 0) was recorded for the proximal colonic segment in mice of group 6 (DSS exposure and 40 mg/kg BW Shogaol treatment). The histological score of colitis in mice of the 6-TG treatment group was 2 for both the proximal and distal colonic segments. EGFR expression in the colon IHC staining of the colonic tissue sections revealed variable scores of EGFR expression in macrophages (within the lamina propria and submucosa) and in the lining epithelial cells of the mucosal glands in the different groups of mice (Figure 7). Negative expression (Sum score 0) was apparent in mice of the negative control group, strong expression (Sum score 9) in mice of the positive control group, moderate expression (Sum score 6) in mice of the vehicle control group, moderate expression (Sum score 4) in mice of the DSS exposure and 6-TG treatment group, weak expression (Sum score 1) in mice of the DSS exposure and 20 mg/kg BW Shogaol treatment group, and negative expression in mice of the DSS exposure and 40 mg/kg BW Shogaol treatment group. Discussion DSS-induced colitis is one of the most commonly used models mimicking the features of human IBD (21), and is useful to explore novel clinical approaches in colitis treatment (22). 
A significant loss of BW was evident at the end of the experimental period (day 16) in mice of the positive control (DSS exposure without treatment) and vehicle control (DSS exposure with olive oil treatment) groups in comparison with the negative control group, whereas mice of the Shogaol treatment groups (especially the 40 mg/kg BW treatment) exhibited mean BWs approximately comparable with those of the negative control and 6-TG treatment groups. This result indicates that the Shogaol treatment may offer a protective effect against the BW loss caused by DSS-induced colitis. The DAI parameters of colitis (blood in stool, diarrhea and rectal bleeding, clearly evident in mice of the positive control group) were significantly decreased in mice of the 6-TG and Shogaol treatment groups, as well as in the vehicle control group, in comparison with mice of the positive control group. The reduction in DAI score in the vehicle control group was probably due to the antioxidant effect of olive oil (23). Interestingly, the DAI scores of the 20 and 40 mg/kg BW Shogaol treatment groups were lower than that of the 6-TG treatment group. Similarly, the average colon length was significantly decreased only in mice of the positive control and vehicle control groups compared to that of the negative control group; it was non-significantly decreased in mice of the 6-TG and Shogaol treatment groups. These results demonstrate that the Shogaol treatment resulted in amelioration of colitis through its anti-inflammatory effect (24). Histopathological examination of the colon tissue sections revealed that the DSS exposure succeeded in inducing colitis (which appeared to be more severe in the distal than the proximal colonic segment), and the histological scoring showed that the highest severity score was recorded for the distal colonic segment in mice of the positive control group and the lowest for the proximal colonic segment in mice of the 40 mg/kg BW Shogaol treatment group. The histological score of colitis in mice of the 6-TG treatment group was 2 for both the proximal and distal colonic segments. This finding indicates that the Shogaol treatment might be better than 6-TG at ameliorating colitis, consistent with the finding of Zhang et al. (25), who stated that oral delivery of nanoparticles loaded with 6-Shogaol is able to attenuate inflammation of the colon in a murine model of UC. The results of IHC staining of colonic tissue sections revealed variable EGFR expression in the different groups of DSS-exposed mice in comparison with the negative control group, which showed negative expression. These results are in agreement with the findings of Wright et al. (13) and Dubé et al. (26), who stated that EGFR signaling plays a central role in the regulation of colon epithelial biology and the response to injury and inflammation. In addition, Lu et al. (27) reported that EGFR is activated in colonic macrophages in mice with DSS-induced colitis and in patients with UC. In the groups of DSS-exposed mice, the score of EGFR expression was strong in the positive control group (without treatment), moderate in the vehicle control and 6-TG treatment groups, weak in the 20 mg/kg BW Shogaol treatment group and negative in the 40 mg/kg BW Shogaol treatment group. These findings reveal that the different treatments performed in this study resulted in variable levels of amelioration of DSS-induced colitis. 
The negative EGFR expression in the 40 mg/kg BW Shogaol treatment group, compared to the moderate expression in the 6-TG treatment group, indicates that Shogaol is possibly better than 6-TG in the treatment of UC. Conclusion The results of this study revealed that Shogaol, a phenolic extract of ginger, has potent curative effects on DSS-induced colitis in the mouse model. The oral 40 mg/kg BW Shogaol treatment improved the health of the mice, as indicated by the recovery of normal BW and the reduced DAI score, and repaired the colonic damage caused by DSS, as indicated by the colon length measurements and the histological index and EGFR expression scores of the colonic tissue sections. These findings may draw attention to the priority of using this inexpensive plant-derived substance over the chemotherapeutic remedy 6-TG for the treatment of IBD such as UC and Crohn's disease, because treatment with 6-TG is usually associated with adverse side effects including hepatotoxicity, nephrotoxicity, and bone marrow suppression leading to anemia, leukopenia and thrombocytopenia (28,29).
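To illustrate the kind of within-group comparison described in the Statistical analysis section above (a paired t-test on starting versus post-treatment body weights), the sketch below uses SciPy. The body-weight values are hypothetical placeholders, not the study's data, which were analyzed in SPSS.

```python
# Minimal sketch of the within-group paired comparison described above:
# starting body weight (day 1) vs. post-treatment body weight (day 16).
# All weights are hypothetical placeholders, not data from the study.
from scipy import stats

day1_bw  = [27.1, 26.4, 28.0, 25.9, 27.5, 26.8]   # g, one mouse per entry
day16_bw = [26.8, 26.9, 27.6, 26.2, 27.3, 27.0]   # same mice after treatment

t_stat, p_value = stats.ttest_rel(day1_bw, day16_bw)  # paired t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# p < 0.05 would indicate a significant change from the starting BW,
# mirroring the significance criterion used in the study.
```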
2018-12-17T19:11:14.413Z
2018-09-01T00:00:00.000
{ "year": 2018, "sha1": "0cab29fc25ebfa09735bc1f8d1d38f5d6cf915ee", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "0cab29fc25ebfa09735bc1f8d1d38f5d6cf915ee", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233572357
pes2o/s2orc
v3-fos-license
Evaluation of some distributional downscaling methods as applied to daily precipitation with an eye towards extremes Statistical downscaling (SD) methods used to refine future climate change projections produced by physical models have been applied to a variety of variables. We evaluate four empirical distributional-type SD methods as applied to daily precipitation, which, because of its binary nature (wet vs. dry days) and tendency for a long right tail, presents a special challenge. Using data over the Continental U.S., we use a 'Perfect Model' approach in which data from a large-scale dynamical model is used as a proxy for both observations and model output. This experimental design allows for an assessment of the expected performance of SD methods in a future high-emissions climate-change scenario. We find performance is tied much more to configuration options than to the choice of SD method. In particular, proper handling of dry days (i.e., those with zero precipitation) is crucial to success. Although SD skill in reproducing day-to-day variability is modest (~15-25%), about half that found for temperature in our earlier work, skill is much greater with regard to reproducing the statistical distribution of precipitation (~50-60%). This disparity is the result of the stochastic nature of precipitation, as pointed out by other authors. Distributional skill in the tails is lower overall (~30-35%), although in some regions and seasons it is small to non-existent. Even when SD skill in the tails is reasonably good, in some instances, particularly in the southeastern United States during summer, absolute daily errors at some gridpoints can be large (~20 mm or more), highlighting the challenges in projecting future extremes. While both temperature and precipitation are changing, there are some fundamental differences. Temperature is eventually expected to rise everywhere, whereas the sign of the precipitation response varies by location, season, and between models; indeed, different RCMs forced by the same GCM can yield opposite-signed precipitation responses (Karmalkar, 2018; Holtanova et al., 2019). Hence, projecting changes in precipitation is particularly challenging, especially at the regional scale. Variations in temperature and precipitation also differ fundamentally in that temperature varies more smoothly both spatially and temporally, whereas precipitation is often discontinuous in both space and time. Accordingly, the distribution of precipitation is often highly asymmetrical and skewed, whereas temperature is usually more well-behaved. Additionally, precipitation is characterized by two aspects, the (binary) occurrence and the actual distribution of amounts on wet days, which makes statistical modelling of precipitation much more difficult. Of particular relevance is the historical and projected increase in extreme precipitation events (IPCC, 2013). Compounding this is the disproportionate contribution of rare heavy events (Pendergrass and Knutti, 2018), such that, globally, the wettest 12 days typically account for approximately half the annual total, with this concentration projected to increase in the future. To mitigate the deficiencies of physical climate models and provide information for policymakers better suited to local areas, a wide variety of statistical downscaling (SD) techniques have been developed (Maraun and Widmann, 2018). Recently, this author team, members of the Geophysical Fluid Dynamics Laboratory (GFDL) Empirical Statistical Downscaling (ESD) team (https://www.gfdl.noaa.gov/esd_eval), 
has focused on evaluating some of these techniques. For an overview of SD and of our evaluation philosophy, see our earlier works and the references cited therein (Dixon et al., 2016; Lanzante et al., 2018; Lanzante et al., 2019a, hereafter L19a; Lanzante et al., 2019b, hereafter L19b). As a caution we note that even the best SD methods will fail to produce credible results if the driving physical climate model is flawed in its representation of circulation features (Hall, 2014; Maraun et al., 2017). Furthermore, large-scale models such as GCMs may not realistically represent the sub-grid processes needed to simulate extreme precipitation (Giorgi et al., 2016; Maraun et al., 2017), in which case high-resolution models may be needed. We use a Perfect Model (PM) approach to test the 'stationarity assumption' inherent to all SD methods, which implicitly assume that relationships defined during a historical period, intended to calibrate the method against observations, remain valid in a future epoch when the climate has changed. The PM provides 'future observations', which do not exist in the real world. However, as discussed below (see 2.1), it is important to note that the idealized nature of our PM design does not allow us to assess all sources of non-stationarity. This is a follow-up to our recent SD work, which assessed and improved the representation of tails (L19a) and assessed daily maximum temperature (L19b). The methods we consider here and previously (L19a; L19b) are from a class of SD techniques that operate distributionally, hence the expectation that they are better suited than other SD techniques to reproducing extremes (i.e., the tails). It is worth noting that this exercise is at a severe disadvantage (Maraun, 2013) because deterministic methods, such as those used here, cannot bridge the scale mismatch between the GCM and observations, particularly for precipitation, which has considerable sub-grid-scale variability (Chen and Knutson, 2008). Nevertheless, there is value in our assessment because: (a) SD output from these methods is widely used in impact studies, (b) SD methods can provide bias correction, and (c) SD methods are often embedded in multivariate methods capable of bridging the scale mismatch. 2 | DATA AND METHODOLOGY 2.1 | Data GFDL-HiRAM-C360 model data were introduced by Dixon et al. (2016) and used by Lanzante et al. (2018), L19a and L19b. We provide only a brief description, as the reader is referred to these earlier works, especially Dixon et al. (2016), for more details, as well as appendix A of L19a for data availability. Daily precipitation covering a rectangular region surrounding the Continental United States (CONUS), excluding oceanic points, constitutes our PM data. Thirty years of data from a GCM driven by historical forcings cover the period 1979-2008. Thirty years of data based on three 10-year ensembles driven by forcings from a high-emissions scenario (RCP8.5) cover the period 2086-2095. Via our shorthand we refer to historical and future epochs, treating the high-resolution data as proxy observations ('OBS') and the spatially coarsened data as 'model' or 'GCM' output, even though all are GCM data. Note that SD methods are typically faced with two challenges: (a) the spatial scale mismatch between model and OBS and (b) model biases. Often in common usage the term 'statistical downscaling' is a misnomer, lacking explicit mention of (b). The fact that model values represent spatial averages results in (a). Strictly speaking, the SD methods we use are bias correction methods. 
Since we use a single physical model to generate both model and pseudo-observations, our PM design explicitly introduces only challenge (a); thus we are not able to assess non-stationarities resulting from model biases in mean state or in climate change signals. More complex PMs, deriving 'OBS' and 'GCM' values from two different physical models, would also explicitly introduce challenge (b). However, by way of spatial averaging our approach can introduce biases implicitly by altering distributions. We have chosen our simpler design in initial work as it facilitates easier diagnosis.

2.2 | Downscaling methodology

Guided by L19b, we use the two best performing methods for daily maximum temperature, one conceptually simple, quantile delta mapping (QDM) (Cannon et al., 2015), and one more complex, kernel density distribution mapping (KDDM) (McGinnis et al., 2015). We also use bias correction quantile mapping (BCQM) (Lanzante et al., 2018), which is both conceptually simple and very widely used. Finally, we consider PresRat, a modification of QDM designed specifically for precipitation (Pierce et al., 2015). Note that two of L19b's methods utilized the anomaly approach, which is inappropriate for precipitation, a positive definite quantity. Below we briefly introduce the four methods; the interested reader is referred to L19b and references therein for more details.

| BCQM

BCQM, one of the most widely used SD methods, is often referred to as 'quantile mapping', although some use this term more generally in reference to various distributional SD methods. It is computed as:

D_f = F_{O_h}^{-1}[F_{M_h}(x)]    (1)

where F is the cumulative distribution function (CDF), F^{-1} its inverse, x the M_f value to be downscaled, and D_f the resulting downscaled value. Equation (1) maps x, via its percentile in the M_h distribution, to the corresponding quantile of the O_h distribution.

| KDDM

KDDM uses a complex, multi-step algorithm involving kernel density estimation to smooth O_h and M_h, which have first been standardized to zero mean and unit variance, separately for each month of each year. Subsequent integration of the generated distribution functions followed by inverse standardization yields the desired transfer functions. We use R code kindly supplied by the KDDM authors (https://github.com/sethmcg/climod).

| QDM

QDM can be thought of conceptually as using M_f as a first guess, but modifying it via a correction factor, which varies by position in the distribution. The correction factor, while additive for most variables, is multiplicative for precipitation. With τ = F_{M_f}(x) denoting the percentile of x in the M_f distribution, its additive form is:

D_f = x + [F_{O_h}^{-1}(τ) − F_{M_h}^{-1}(τ)]    (2)

and its multiplicative form is:

D_f = x × F_{O_h}^{-1}(τ) / F_{M_h}^{-1}(τ)    (3)

We use R code made available by the QDM authors (https://github.com/cran/MBC).

| PRAT

PresRat, hereafter referred to as PRAT, is a simple modification of QDM which preserves the model-predicted future change in mean precipitation. Each value of D_f computed from (3) is multiplied by a calendar-month specific correction factor K:

K = (M̄_f / M̄_h) / (D̄_f / Ō_h)    (4)

where bars indicate climatological means over a specific calendar month.
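To illustrate how these transfer functions operate, the following is a minimal sketch of empirical BCQM, multiplicative QDM, and the PRAT rescaling using plug-in empirical CDFs. It is not the production code referenced above (which is in R); the function names, the use of simple interpolation for the quantile functions, and the synthetic gamma-distributed samples are our own assumptions.

```python
import numpy as np

def ecdf(sample, x):
    """Empirical CDF of `sample` evaluated at x (plug-in estimate)."""
    return np.searchsorted(np.sort(sample), x, side="right") / len(sample)

def quantile(sample, tau):
    """Empirical quantile function (inverse CDF) via interpolation."""
    return np.quantile(sample, np.clip(tau, 0.0, 1.0))

def bcqm(x, o_h, m_h):
    """Equation (1): map x, via its percentile in M_h, onto the O_h quantiles."""
    return quantile(o_h, ecdf(m_h, x))

def qdm_multiplicative(x, o_h, m_h, m_f):
    """Equation (3): x scaled by the O_h/M_h quantile ratio at x's percentile in M_f.
    Note: with many zero values the denominator can vanish, one motivation for
    the dry-day handling options discussed in the next subsection."""
    tau = ecdf(m_f, x)
    return x * quantile(o_h, tau) / quantile(m_h, tau)

def presrat(d_f, o_h, m_h, m_f):
    """Equation (4): rescale QDM output so the model-predicted change in mean
    precipitation is preserved (K would be computed per calendar month)."""
    k = (m_f.mean() / m_h.mean()) / (d_f.mean() / o_h.mean())
    return d_f * k

# Toy example with synthetic right-skewed, precipitation-like samples.
rng = np.random.default_rng(0)
o_h = rng.gamma(0.5, 6.0, 3000)   # pseudo-observations, historical
m_h = rng.gamma(0.7, 4.0, 3000)   # model, historical (biased)
m_f = rng.gamma(0.7, 5.0, 3000)   # model, future
d_f = presrat(qdm_multiplicative(m_f, o_h, m_h, m_f), o_h, m_h, m_f)
```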
| Configuration options

Although some authors have fixed specific options in their particular implementation, here we make a distinction between SD methods and configuration options. We consider an SD method to be associated with overarching principle(s), whereas configuration options are specific choices made in implementation. We evaluate configuration choices since in much prior work the consequences of these choices have not been considered. Some of our motivation stems from Vrac et al. (2016), hereafter V16, who did assess some configuration options, although not in a PM setting. The added complexity of precipitation yields several additional choices, four of which we consider.

The first is whether to use additive (A) versus multiplicative (M) scaling. Conventional wisdom has dictated the latter for precipitation, as additive correction can yield negative values, although these can be reset to zero. We consider the additive option since we are not aware of any prior studies that have evaluated this approach for precipitation. Another option is frequency adjustment (freqadj), in which a threshold (above the US trace value of 0.01 in.) is chosen yielding the same fraction of dry days in M_h as found in O_h, by setting M_h values below the threshold to zero. The same threshold is applied in the future. Frequency adjustment is aimed at accounting for the well-known GCM 'drizzle bias' (Stephens et al., 2010), hereafter DBIAS, the widespread tendency for GCMs to simulate an excess number of small amounts of precipitation compared with real-world observations. A fundamental issue in precipitation SD is how to deal with days having zero precipitation. One could ignore them and apply SD only to non-zero values, or SD could be applied to all values, including the zeros. We introduce the option ignore0, which when on (off) applies to the former (latter) case. A further issue arises with ignore0 off, namely the potential for a large number of identical values (i.e., zeros). Cannon et al. (2015) added a small random value (distributed uniformly over [0, trace/2]) to each zero before downscaling. We refer to this option as below trace noise (BTN), which can either be on or off. A sketch of these pre-processing options is given at the end of this subsection.

Note that the four configuration options are not applicable to all of our SD methods. The choice of additive versus multiplicative is not applicable to BCQM or KDDM by the nature of their algorithms. Furthermore, freqadj and ignore0 are 'baked into' the complex KDDM code. For QDM and PRAT all four options are viable, which is an extension to the methodology given by the original authors. Although we consider four configuration options, our list is not exhaustive. For example, the time windows used for SD training and evaluation can vary: 12 monthly or 4 seasonal non-overlapping windows, or overlapping moving windows of different lengths, to name a few possible choices. There is a tradeoff: wider windows yield larger sample sizes for training but narrower ones are better able to resolve seasonally varying relationships. Window choice may introduce artefacts (Dixon et al., 2016; especially Figure 4b). Hence, configuration options beyond those considered here may have consequences.
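Here is a minimal sketch of the freqadj and BTN pre-processing steps under our reading of the descriptions above; the trace value in mm, the function names, and the random-generator handling are our own assumptions.

```python
import numpy as np

TRACE = 0.254  # US trace value, 0.01 in. expressed in mm (an assumption here)

def freqadj(m_h, o_h, m_f):
    """Frequency adjustment: choose a threshold (at or above the trace) such
    that M_h has the same dry-day fraction as O_h, zero out M_h values below
    it, and apply the same threshold to the future model data."""
    dry_frac_obs = np.mean(o_h <= TRACE)
    threshold = max(np.quantile(m_h, dry_frac_obs), TRACE)
    return (np.where(m_h <= threshold, 0.0, m_h),
            np.where(m_f <= threshold, 0.0, m_f))

def btn(values, rng):
    """Below trace noise: replace zeros with uniform noise on [0, trace/2] so
    that the empirical CDF becomes invertible (Cannon et al., 2015)."""
    noise = rng.uniform(0.0, TRACE / 2.0, size=values.shape)
    return np.where(values == 0.0, noise, values)

def wet_only(values):
    """ignore0 on: train/apply the transfer function on non-zero days only."""
    return values[values > 0.0]
```

With ignore0 off and BTN on, the zeros participate in the mapping as distinct small values, which is essentially the SSR configuration of V16 discussed in section 3.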
| Tail schemes

Special attention is warranted for the tails, which present a greater challenge than the remainder of the distribution. We examined this issue in considerable detail and devised special procedures (L19a) which were evaluated extensively (L19b). We use the limited tail adjustment scheme (LIM) as per L19a and L19b. LIM is applied only to tail values, after initial application of any arbitrary SD method. For application of LIM the user decides a priori how many values at the end of the distribution are to be modified, by specification of the parameter tail-length (TLN). The user also specifies, via the parameter NPT, the number of values to be used in performing the tail adjustment. Previously (L19b) we found that TLN = 10 and NPT = 10 yield good results for temperature. While smaller values of NPT produce poorer results, increasing NPT beyond 10 generally yields little gain. Conceptually, LIM operates by computing a constant correction factor from the NPT points and applying it to the TLN points. For example, with NPT = 5 and TLN = 10, to apply LIM to the right tail the correction factor is computed using the 11th through 15th largest values and then applied to the 10 largest values. The correction factor is either additive, used for most variables such as temperature, or multiplicative, assumed more appropriate for precipitation. Here our LIM results are multiplicative, following conventional wisdom, with NPT = 10 and TLN = 10, based on L19a. We only apply LIM to the right tail, as left-tail values are small, marginally larger than the trace value. A sketch of the scheme follows.
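The following is a minimal sketch of a multiplicative LIM right-tail adjustment consistent with the worked example above. It is a simplified reading of L19a, not the authors' code; in particular, the choice of correction factor (the mean ratio of downscaled to raw values over the NPT anchor points) is our own assumption.

```python
import numpy as np

def lim_right_tail(raw, downscaled, tln=10, npt=10):
    """Limited tail adjustment (LIM), multiplicative form, right tail only.

    A constant correction factor is estimated from the NPT values just inside
    the tail (here, the ratio of downscaled to raw values there) and applied
    to the TLN largest raw values, replacing their downscaled counterparts.
    """
    order = np.argsort(raw)                 # ascending; tail = last TLN entries
    tail = order[-tln:]
    anchor = order[-(tln + npt):-tln]       # e.g., 11th-15th largest for npt=5
    factor = np.mean(downscaled[anchor] / raw[anchor])
    adjusted = downscaled.copy()
    adjusted[tail] = factor * raw[tail]
    return adjusted
```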
| Evaluation procedure

Evaluation statistics are presented by treatment, which we define as a combination of an SD approach (one SD method, either as the base algorithm or additionally with LIM adjustment in the right tail) and a set of configuration options as detailed in section 2.2.5. Each of the three 10-year future ensembles is downscaled separately and verification statistics are averaged over the ensembles. SD is performed separately at each gridpoint and for each of four standard seasons: DJF (December, January, February), MAM, JJA, and SON. Results are presented mostly as averages over the four seasons, with some limited results given for DJF and JJA. The use of seasons rather than months (as in our earlier works for daily temperature) is aimed at providing adequate sample sizes, given that for some approaches the presence of dry days reduces the available sample size, sometimes substantially.

Our primary metric is the mean absolute error (MAE), which we use in two different ways. In the traditional synchronous application, MAE is based on differences between values of D_f and O_f occurring on the same day. We also apply it asynchronously (MAE-ord) such that paired values of D_f and O_f represent the same order statistics. While MAE is a measure of how well SD represents day-to-day weather variations, MAE-ord measures how well SD reproduces the statistical distribution of values. MAE-ord is motivated by guidance provided by Maraun and Widmann (2018) and Maraun et al. (2019) regarding statistical model evaluation. Contrasting results based on these two similar but distinct metrics will help illustrate issues raised by Maraun (2013) regarding the stochastic nature of precipitation. After computing MAE or MAE-ord over a season, or averaging the four seasonal values, it is converted to a standard skill score (Wilks, 2006; L19a; L19b):

SS = 100 × [1 − MAE(D_f) / MAE(M_f)]

where the raw GCM output (M_f) serves as the reference. We then average skill over all non-ocean gridpoints. Averaging utilizes the biweight mean (Lanzante, 1996), which guards against the effects of outliers. The biweight is data adaptive, behaving more like the arithmetic mean for 'well-behaved' data and more like the median otherwise. As in L19a and L19b we compute separate verification statistics for different portions of the distribution, referred to as distributional categories (CATs): CAT 1 (CAT 9) consists of the lowest (highest) value in the sample, CAT 2 (CAT 8) the second–third lowest (highest) values, CAT 3 (CAT 7) the fourth–sixth lowest (highest) values, and CAT 4 (CAT 6) the 7th–10th lowest (highest) values. Finally, CAT 5 consists of all values in the sample. When considering extremes we devote our attention to the right tail, as values in the left tail are very small. Some results are presented as averages over the entire right tail (CAT 6–9), weighted by the number of values in each category.

Because of the binary nature of precipitation occurrence we utilize an additional metric, a fractional error in the number of dry days, computed separately for M_f and D_f. It is computed as the number of days in error divided by the total number of days. For example, in the case of D_f, if D_f and O_f are both wet days or both dry days there is no error; on the other hand, if one is wet and the other dry we count this as an error. As above, given fractional errors for both M_f and D_f we compute a skill. A sketch of these metrics is given at the end of this section.

In order to estimate statistical significance we use the same procedure developed previously, referring the reader to L19a (especially appendix B) for details, with only a brief overview here. We first compute the mean skill over our domain, separately for the two SD approaches of interest. The difference between these two skills is the quantity for which significance is sought. We perform two separate Monte Carlo simulations, with 1,000 trials each, to derive a distribution of differences in skill. The position of the actual difference within this randomly derived distribution determines the significance level. The first step in the process is to estimate the spatial degrees of freedom in order to account for the fact that gridpoint values are not independent of one another. For each trial we apply random translational shifts in both the north–south and east–west directions to the pair of maps. Next we pattern correlate the original and shifted pairs of maps and use the distribution of correlations to infer an effective block size. In the second step, for each trial we randomly shuffle blocks of gridpoints between the two original maps and compute the difference in mean skill between the permuted maps. The distribution of these differences is used to assess significance. In order to ensure robust results we report three significances, based on conservative and liberal objective estimates as well as a very conservative subjective choice (L19a). We modify the subjective choice of an effective number of blocks of 4 × 2 (latitude by longitude gridpoints), used in our earlier works for temperature, to 7 × 4 for precipitation, based on Huang et al. (1996) and Richman (1986).
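Below is a minimal sketch of the MAE, MAE-ord, dry-day fractional error, and skill-score computations as we have described them; the array names and the trace threshold are our own assumptions.

```python
import numpy as np

TRACE = 0.254  # dry-day threshold in mm (an assumption; see section 2.2)

def mae(pred, obs):
    """Synchronous MAE: same-day pairing, measuring day-to-day weather skill."""
    return np.mean(np.abs(pred - obs))

def mae_ord(pred, obs):
    """Asynchronous MAE: pair order statistics, measuring distributional skill."""
    return np.mean(np.abs(np.sort(pred) - np.sort(obs)))

def dry_day_error(pred, obs):
    """Fraction of days whose wet/dry state disagrees with the observations."""
    return np.mean((pred > TRACE) != (obs > TRACE))

def skill(err_downscaled, err_raw_gcm):
    """Skill (%) relative to the raw GCM as reference: 100 = perfect,
    0 = no improvement, negative = worse than the raw GCM."""
    return 100.0 * (1.0 - err_downscaled / err_raw_gcm)

# Example: distributional skill of downscaled output d_f relative to raw m_f:
#   skill(mae_ord(d_f, o_f), mae_ord(m_f, o_f))
```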
3 | RESULTS

3.1 | Evaluation of skill over the entire distribution

Table 1 summarizes skill (based separately on MAE and MAE-ord) averaged over the entire domain for various SD treatments. Rather than considering every possible combination, we limit ourselves to a manageable number from which we can draw conclusions considered representative of the class of SD methods examined in our PM framework. Our shorthand for treatments uses the first letter of the SD method followed by the ordinal row number from the first column of Table 1, which specifies configuration options. During the discussion we also refer to Table 2, with results from a limited number of significance tests, pairwise between two treatments. For convenience, Table 2 lists group numbers (i.e., G1–G7) for sets of related significance tests.

TABLE 1 Skill (%) averaged over the domain for various SD methods and configurations (T/F/I/B), referenced by treatment number (first column). CAT 5 is for the entire distribution whereas CAT 6–9 is averaged over the right tail; CAT 6–9 L is based on the LIM adjustment averaged over the right tail. (Columns: Method, T, F, I, B; MAE and MAE-ord skill for CAT 5, CAT 6–9, and CAT 6–9 L; D-DRY. Values not recoverable here.)

The most noteworthy aspect of Table 1 is the clear distinction between skills based on MAE versus MAE-ord, with the former substantially lower. Higher skill for MAE-ord reflects the fact that while quantile mapping substantially improves the distribution of values compared with the raw GCM output, it exhibits less skill in reproducing the day-to-day weather sequencing, as measured by MAE. This can be explained by the arguments made by Maraun (2013) that attempting to bridge the scale mismatch between OBS and GCM using a deterministic method such as quantile mapping yields results with a corrupted time sequence. As such, MAE skill levels are disappointingly low (~20–25%), about half that found for temperature in our earlier work (L19a; L19b). Sub-grid-scale variability, which is at the core of the problem, is much less of an issue for temperature than it is for precipitation. It is worth noting that the relative ordering of skills by treatment is generally quite consistent between the two metrics; the correlation of MAE and MAE-ord skills across treatments exceeds 0.9 for CAT 5 and CAT 6–9. Thus, comparisons between treatments, one of the main motivations for this work, are not much affected by the choice of metric. For the remainder of this work we draw conclusions based primarily on MAE-ord.

Next we examine overall skill (CAT 5) for four different configurations of BCQM (B1–B4), based on combinations of freqadj on/off and ignore0 on/off. BCQM was chosen for this purpose since it is the simplest of our methods, has the fewest possible options, and is a very commonly used distributional SD method. Three of the configurations (B2–B4) yield skill of the same order whereas B1 performs much more poorly. Significance tests in Table 2 for group G1 indicate that the outlier treatment (B1) is highly significantly worse than the others, while the two better treatments (B2 and B4) are not different from one another. The key factor is that the better treatments have ignore0 off. However, use of freqadj can substantially mitigate the effect of having ignore0 on (B3), although this treatment is still significantly worse than the two better treatments. Below (3.5.2) we explore the reasons for this behaviour, explaining why it is configuration-specific but not SD method-specific. Finally, we consider K5, which has skill a bit lower than B4, but not significantly so.

Next we examine results based on a variety of treatments for QDM, chosen for this purpose since previously (L19b), for temperature, it was found to be as good as or better than any of the tested distributional methods, and has the most configuration options. In the literature, conventional wisdom from a variety of SD methods suggests that multiplicative scaling rather than additive adjustment should be used for precipitation. One of the reasons for this is that the additive approach may result in negative precipitation, which then must be reset to zero. Since this assumption is rarely tested we have applied a number of such treatments with different configurations. Indeed, the multiplicative approach is superior in all cases. Not coincidentally, the best approach for QDM is Q6, the default configuration in the code made publicly available by the creators of QDM (Cannon et al., 2015). However, while the multiplicative and additive QDM treatments tend not to differ significantly from one another, the best of the former group (Q6) is marginally significantly better than the best of the latter group (Q12), as seen in G2.
Finally, we note that the best versions of BCQM (B4) and QDM (Q6) do not differ significantly (G2). The final method under consideration is PRAT, which (see above) is a variant of QDM. Having used QDM to explore various configurations, we limit the number of PRAT treatments. Not surprisingly, the best (P19) has the same configuration as the best QDM (Q6). Although P19 is not better than B4, it is marginally significantly better than both Q6 and K5 (see G3), and certainly better than the poorest version of PRAT (P17), which has ignore0 on.

TABLE 2 Significance level (%) testing the hypothesis that there is no difference in skill between the two specified SD treatments; treatment numbers refer to Table 1. (Values not recoverable here.)

Next we consider the effect of the BTN option. Although we have not made extensive tests, there are several pairings that differ only in the BTN setting (Q6/Q11, Q8/Q9, Q13/Q14, and Q15/Q16). In general there is very little difference. Finally, Table 3 gives a limited comparison between our findings and those involving the four configuration scenarios of V16: Positive Correction (PC), Direct Approach (DA), Threshold Adaptation (TA), and Singularity Stochastic Removal (SSR). These are analogous to our configurations: PC (freqadj = off, ignore0 = on), DA (freqadj = off, ignore0 = off, BTN = off), TA (freqadj = on, ignore0 = on), and SSR (freqadj = off, ignore0 = off, BTN = on). Although V16 utilized a single downscaling method (CDFt; Michelangeli et al., 2009), not used here because of sub-par performance (L19b), our use of multiple methods enables us to demonstrate much greater sensitivity of results to configuration than to SD method. In agreement with V16 we find that, of the methods tested here using our PM experimental design, PC is by far the worst approach, with the other three yielding fairly similar results, although TA may be slightly poorer than DA and SSR. We also agree that SSR (also used by Zhang et al., 2009 and Cannon et al., 2015) is preferable because of its flexibility in dealing equally well with cases in which M_h has more wet days than O_h (i.e., the DBIAS) as well as the inverse. Recall that use of freqadj can only remedy the DBIAS, not the inverse. The SSR approach is also simpler in that it avoids having separate corrections for occurrence and amount.

| Evaluation of skill in the tails of the distribution

Skill based on MAE-ord averaged over the right tail using the base algorithm (CAT 6–9 in Table 1) is about half that for CAT 5 (~25–30% vs. ~50–60%). Other than the B1 outlier, differences in CAT 6–9 skill between treatments are generally small and few if any are likely to be significantly different, as evidenced by the B4 versus B3 comparison (G4), which exhibits fairly typical differences. Application of the LIM adjustment yields small improvements which are likely to be mostly insignificant, with again not much difference between treatments (G4). Furthermore, tail skill for our preferred treatment (P19) does not differ significantly from that of other leading treatments (G5). Skill based on MAE shows a very different pattern. While CAT 6–9 skill is much poorer than for CAT 5, LIM yields substantial improvement. However, MAE skill in the tails for both the basic algorithm and the LIM adjustment is negative, indicating that they provide no improvement over the raw GCM. Here the results in the right tail are strikingly different than was found previously for temperature (L19b), for which skill in the tails was comparable to that over the whole distribution.
For further perspective on tail performance, Figure 1 shows MAE and MAE-ord, along with their associated skills, for P19 as a function of distributional category. High skill in the left tail is of little practical importance since the values there are quite small. Aside from consistency with the general points made above, this figure demonstrates the rapid increase in error going farther out in the right tail. Although P19, likely as good as any of the tested treatments, shows considerable improvement over the raw GCM, errors in the tail (representing domain averages) are still quite large, ~5–20 mm.

| Evaluation of dry-day skill

The right-most column of Table 1 gives skill based on the dry-day error metric. Other than for B1, which is highly significantly worse than the other treatments (G6), differences between treatments tend to be small. This is indicated by the fact that while P19 differs by more than two percentage points from both B4 and K5, these differences are not even close to being significant (G7). Note that the actual fractional error (not shown) does not differ much between SD treatments, with that for M_f ~0.27 and that for D_f ~0.18.

| Spatial patterns of skill and MAE

The pattern of DJF skill for CAT 5 in Figure 2a shows lower skill in coastal regions, echoing the lower skill previously identified for temperature downscaling in coastal regions. Although diagnosis of this feature is beyond the scope of this work, the mechanistic explanations from our earlier work would not seem applicable here. In the tails (Figure 2c,d) the patterns are roughly similar to their corresponding CAT 5 patterns, but with a considerable reduction in overall skill. As a consequence, substantial areas of the domain have near-zero or negative skill. For MAE-ord the DJF patterns for CAT 5 and CAT 6–9 (Figure 3a,c) both have higher values over the Southeast and along the extreme West Coast, which correspond roughly to the DJF climatology (not shown). For JJA, while the CAT 6–9 pattern (Figure 3d) also corresponds reasonably well to the JJA climatology (not shown), the CAT 5 pattern (Figure 3b) does not, with the largest errors in the higher latitudes of interior North America. The domain-averaged MAE-ord of ~5 mm for CAT 6–9 seen in Figure 1 masks the strong regionality seen in Figure 3d, where the tail errors along the East Coast, and particularly the Southeast, are typically several times larger, ~20 mm, even in areas exhibiting considerable skill (Figure 2d). There is an interesting contrast between Figures 2b and 3b that relates to precipitation frequency. While the lowest skill is found both in the mountainous West and the Northern Interior, the former (latter) has relatively low (high) MAE-ord. As illustrated below (3.5.2), having a very large number of dry days (i.e., zero values) complicates distributional mapping. The extreme aridity of the Far West results in either an insufficient sample for analysis or very small errors, while the Interior has just enough more precipitation, in both total amount and frequency, to trigger the complications.

| Jupiter and Coaldale

The mountainous West has some distinctive variations in skill and MAE-ord characterized by sharp gradients aligning with topography (Figures 2a and 3a). In the West, the basic nature of this pattern for CAT 5 is similar among the four SD methods and all seasons (not shown) except JJA, which differs somewhat due to climatologically much drier conditions. These variations can be explained in terms of the interaction between topography and non-stationarity introduced by climate change.
To illustrate this, two relatively nearby points are chosen for which the behaviour is strikingly different. Near Jupiter, CA (Figure 4), CAT 5 MAE-ord skill for P19 is highly negative (−62.0%), yet at a gridpoint near Coaldale, NV (Figure 5), skill is impressively high (90.9%). Here the use of different seasons and members (see the captions of Figures 4 and 5) was made to accentuate the disparity, but is not crucial to the conclusions. Jupiter is upwind of the Sierra Nevada in a region of rapidly rising elevation from west to east, whereas Coaldale is downwind on the plateau. As a result, Jupiter is located near an orographically forced climatological local maximum of precipitation while Coaldale is near a climatological minimum. Accordingly, for Jupiter (Coaldale) the much larger GCM footprint encompasses areas of less (greater) precipitation in the surrounding areas. As seen in Table 4, at Jupiter (Coaldale) the mean precipitation of OBS is greater (less) than that of the GCM. Although the climate change signal at both locations is that of drying, this effect is more pronounced at Jupiter, which, since it is wetter, has more to lose via climate change.

A better understanding is gained by examining quantile-quantile (qq) plots at the two locations. These plots are like traditional x-y plots except that instead of each x-y pair coming from the same point in time, they come from the same relative location in their respective distributions. For example, the right-most (left-most) point consists of the maximum (minimum) x-value paired with the maximum (minimum) y-value. To complement the qq plots we also show corresponding plots of CDF curves. For added clarity, both types of plots are shown zoomed in (top) on the most salient features as well as over the full range (bottom) of values (with nonlinear scaling). The qq plots (Figures 4a,c and 5a,c) consist of three curves, each with GCM on the abscissa and OBS or DWN on the ordinate; black (red) depicts the historical (future) relationship. Thus, black represents what is available to 'train' the downscaling method while red represents 'truth'. The cyan curve represents what the SD method generated as its rendering of the future. One can think of the green line, with a slope of 1, as the starting point: downscaling moves from it to the cyan curve, with the best possible result lying on the red curve. The point at which curves cross y = x shows where the bias changes sign. For points above (below) the green line, OBS is greater (less) than GCM. Changes from black (historical) to red (future) indicate non-stationarity.

In the case of Jupiter (Figure 4a,c) the downscaled (cyan) departs significantly from the truth (red) for most of the distribution. To understand the downscaling operation we use QDM, the results for which (not shown) are quite similar, only slightly worse. However, the basic principles apply to any of the methods used herein operating via analogous bias correction principles. Downscaling via QDM consists of using M_f as a 'first guess' and then modifying it through a multiplicative correction factor (Equation (3)). The percentile is that of M_f from its own distribution. At first glance it seems curious that downscaling does so poorly given that the red curve is not that different from the black one. But these curves only represent relative relationships. The key lies in the fact that the distributions have shifted significantly towards lower values in the future (Figure 4b,d) due to drying (Table 4).
Another important factor is that the black curve shifts from a local ratio (OBS to GCM) much larger than 1 (i.e., above the green line) at the high end of the distribution to values less than 1 at the low end. This ratio is proportional to the multiplicative correction factor applied by QDM. Note that ratio values less than 1 at the very low end of the distribution are to be expected as per the well-known DBIAS, which results from the GCM having a larger footprint than the OBS. Consider an arbitrary value of M_f which is to be downscaled. The correction factor (Equation (3)) is determined by applying the percentile of this value from the M_f distribution to the O_h and M_h distributions. However, because of considerable drying from the historical to the future period, the quantile (i.e., amount of precipitation) corresponding to this percentile will be higher in the historical than in the future period. Thus, the correction factor which is applied will be biased towards the high end of the historical distributions. Since, as we noted above, the O_h/M_h ratio increases with increasing precipitation amount, the resulting correction factor will be too large. There is an equivalence between the explanations based on the qq plots and the CDFs (Figure 4b,d). A small numeric illustration of this percentile-shift effect is given at the end of this subsection.

Next consider Coaldale (Figure 5), located in the orographically induced downwind 'rain shadow' region. The qq curves are below the green line due to the DBIAS, but unlike at Jupiter they do not cross above it for higher values, because the DBIAS operates only for low values of precipitation. For larger amounts of precipitation, especially convective, a localized intense area of precipitation is more likely surrounded by less intense precipitation, leading, in effect, to an inverse of the DBIAS. The fact that Coaldale is so arid (compare the y-axis extents in Figures 4a,c and 5a,c) precludes it from reaching the inverse DBIAS regime which Jupiter is able to attain. Furthermore, although there is drying, it is less dramatic than at Jupiter. The combination of the drying and the percentile-shift effect does force the cyan curve above the black curve, just as was the case for Jupiter, but the amount is much less. But this shift has a positive effect by pushing the cyan curve closer to the red curve, in contrast to Jupiter, where it pushed it away from the red curve. In passing we point out that the discontinuity in the cyan curve is a reflection of the SD adjustment for the DBIAS (i.e., converting some wet days to dry days). It is more prominent here because of the very dry climate, which results in a 'zooming in' on the low end of the distribution. In summary, drying due to climate change and the inevitable DBIAS operate in opposite fashion between the two locations. At Jupiter they conspire to yield a poor downscaling result. Because the magnitudes are smaller at Coaldale they have a smaller effect, but fortuitously they combine to produce an exceptionally good result.
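To make the Jupiter mechanism concrete, here is a minimal numeric sketch with entirely hypothetical gamma-distributed samples, chosen only to mimic a wetter OBS gridpoint, a drier large-footprint GCM, and a strong future drying; it reproduces the qualitative effect described above rather than the actual Jupiter data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Shapes chosen so that the O_h/M_h quantile ratio is < 1 at low percentiles
# (the DBIAS) and > 1 at high percentiles, as at Jupiter.
o_h = rng.gamma(0.4, 20.0, 50000)   # pseudo-OBS, historical: wet, heavy-tailed
m_h = rng.gamma(1.0, 6.0, 50000)    # GCM, historical: larger footprint, drizzly
m_f = rng.gamma(1.0, 3.0, 50000)    # GCM, future: strong drying

def ratio(tau):
    """O_h/M_h quantile ratio; proportional to the QDM correction factor."""
    return np.quantile(o_h, tau) / np.quantile(m_h, tau)

a = 5.0                              # an arbitrary precipitation amount (mm)
tau_h = np.mean(m_h <= a)            # percentile of that amount historically
tau_f = np.mean(m_f <= a)            # its percentile after drying: higher
print(f"percentile: {tau_h:.2f} -> {tau_f:.2f}")
print(f"correction: {ratio(tau_h):.2f} -> {ratio(tau_f):.2f}")
# Drying raises the percentile of any given amount, so the multiplicative
# correction is drawn from higher in the historical curves, where the
# O_h/M_h ratio is larger, and the downscaled values overshoot.
```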
| Alpena

Motivation for this example comes from results shown earlier (Table 1) comparing four variants of BCQM. Treatment B1, having ignore0 on and frequency adjustment off, produced much worse results. Maps (not shown) reveal a considerable similarity between the patterns of seasonal variation in B1 skill and the patterns of seasonal variation in climatological amount and frequency of precipitation. Namely, relatively low skill for B1 corresponds with climatologically small amounts and daily frequencies of precipitation, with the poorest performance during JJA. Furthermore, seasonal patterns of skill for our favoured approach, P19, are quite similar to those for the better performing variants of BCQM (B2–B4).

The qq plots in Figure 6a,c for a point near Alpena, KS typify the poor performance of B1. The relationship between OBS and GCM is similar between the historical (black) and future (red) periods. When configured optimally, with ignore0 off, BCQM (B4) and PRAT (P19) yield similar and reasonable results (violet and cyan circles, respectively) that follow the O_h/M_h and O_f/M_f curves quite well. On the other hand, turning ignore0 on, with all other settings the same, yields again similar, but this time extremely poor, results for both BCQM and PRAT (violet and cyan pluses, respectively). It is striking that skill at this location for B1 is −26.9% while that for B4 is 52.9%.

In our PM experimental design, we find that poor performance with ignore0 on and frequency adjustment off is not downscaling-method specific but instead is much more likely to occur when two conditions are met: (a) a large difference in dry-day frequency between GCM and OBS and (b) high dry-day frequency (~80% or greater). As shown in Table 5, at Alpena during both time periods condition (a) is met, with a disparity of ~18–24%. Differences of ~10–20% are common at a majority of gridpoints, and in some locations/seasons are as high as ~40–75%. Condition (b) is met since more than 80% of the days are dry for OBS in both the historical and future periods. The reason for the problem (B1) is that when (a) is met, mapping occurs between disparate portions of the OBS and GCM distributions. Condition (b) exacerbates the problem by reducing the sample used to define the distributions. For example (Table 5), at Alpena the upper 42.8% of the M_h distribution is mapped to the upper 18.9% of the O_h distribution with ignore0 on and frequency adjustment off. However, if frequency adjustment were invoked with ignore0 on, the upper 18.9% of both distributions would be used in the mapping. Finally, if both options were off then 100% of both distributions would be used in the mapping (see the sketch below). As seen in Table 5, an additional consequence of the more equitable mapping (B4) is the good representation of precipitation frequency (89.5 vs. 87.4), as opposed to the mismatched case (B1), where a large DBIAS remains (64.8 vs. 87.4). We can visualize the mechanisms for the poor performance of B1 via the qq plot for Alpena (Figure 6a,c).
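The following is a minimal sketch of the mismatch just described, using the Alpena wet-day fractions quoted from Table 5; the synthetic samples are hypothetical, constructed only so that the model is wet on ~42.8% of days while the observations are wet on ~18.9%.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10000
# Hypothetical samples mimicking Table 5: OBS is dry on ~81.1% of days,
# the GCM on only ~57.2% (the drizzle bias, DBIAS).
o_h = np.where(rng.random(n) < 0.811, 0.0, rng.gamma(0.7, 8.0, n))
m_h = np.where(rng.random(n) < 0.572, 0.0, rng.gamma(0.9, 3.0, n))
print(np.mean(o_h > 0), np.mean(m_h > 0))   # ~0.189 vs. ~0.428

# ignore0 on, freqadj off (B1-like): wet days only, so the upper 42.8% of the
# model distribution is mapped onto the upper 18.9% of the observed one, and
# the surplus of model wet days is never removed.
wet_o, wet_m = o_h[o_h > 0], m_h[m_h > 0]
x = np.median(wet_m)                # a modest model wet-day amount
tau_wet = np.mean(wet_m <= x)       # its percentile among model wet days
print(np.quantile(wet_o, tau_wet))  # mapped to a much larger OBS amount

# ignore0 off (B4-like): all days, zeros included, so percentiles refer to
# the same portions of both distributions and dry-day frequency is preserved.
tau_all = np.mean(m_h <= x)
print(np.quantile(o_h, tau_all))
```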
| DISCUSSION AND CONCLUSIONS

We have compared a number of distinct distributional downscaling methods as applied to daily precipitation in a Perfect Model context, as a follow-up to our recent studies involving daily maximum temperature (L19a; L19b). Applying a more stringent metric (MAE), geared towards assessing agreement in day-to-day variability, yields skill of ~20–25%, barely half of that found in our earlier studies for temperature. Because of the more stochastic nature of precipitation (Maraun, 2013) we emphasize results based on a metric that assesses agreement of distributions (MAE-ord). By this metric skill is ~50–60% overall and about half of that in the right tail. This is distinctly different from temperature, for which skill in the tails could be boosted to a level comparable to that for the remainder of the distribution (L19a; L19b). Although downscaling overall yields useful MAE-ord skill (~30–35%) for values in the right tail of the distribution, there are considerable seasonal and regional variations.

More importantly, even when skill is attained the magnitude of the errors can be considerable, for example ~15–25 mm in the southeastern U.S. during summer. We remind the reader that our Perfect Model design is somewhat idealized, accounting only for the mismatch in spatial scale between OBS and GCM but not for differences in the underlying climate states; thus, real-world downscaling performance may differ.

Compared with temperature, downscaling of precipitation via distributional methods is more complex, having more configuration choices. Certain of these may be more consequential than the choice of SD method. These configuration choices result from the fact that precipitation consists of two aspects: (a) the binary occurrence of precipitation (dry vs. wet days) and (b) the distribution of precipitation conditional on occurrence. An equivalent but simplifying approach is to treat dry days as having zero precipitation, yielding a single distribution. Ultimately, how these zero values are handled is crucial. In our PM framework the poorest performance occurs when SD methods train and apply a transfer function to only the non-zero daily precipitation values, without a frequency adjustment to account for the DBIAS. However, the use of a frequency adjustment when downscaling only the non-zero values largely remedies the situation. The best configuration occurs when downscaling all values (zeros included) without a frequency adjustment. With regard to the use of configuration options our findings are in general agreement with V16. Using optimal configurations, comparisons between several different downscaling methods do not always yield conclusive differences. PresRat, which involves a tweak to QDM, was found to be comparable to BCQM, while KDDM and QDM were found to be comparable to one another; the former pair were deemed marginally statistically significantly better than the latter pair.

For diagnostic purposes we have provided some examples which highlight the mechanisms responsible for good or bad performance in our PM framework. Poor performance can result from non-stationarity introduced via climate change, as was the case at Jupiter. Surprisingly, especially good performance can result when non-stationarity due to climate change by chance compensates for a deficiency in the downscaling method (Coaldale); thus, two wrongs can make a right. Finally, when excluding all dry days from the SD, we demonstrated that for locations (such as Alpena) having infrequent precipitation (typically on less than 20% of the days) the nearly ubiquitous DBIAS leads to an inappropriate mapping between the OBS and GCM distributions, leading to very poor performance.

Our case studies identified an intrinsic weakness of distributional methods when applied to precipitation. Because of the DBIAS, for low values of precipitation the ratio of O_h to M_h will be less than 1. However, for larger values, often the bulk of the distribution, this ratio will typically be greater than 1 because of the spotty nature of precipitation and the larger GCM footprint. This effect is accentuated in convective regimes. Furthermore, this ratio often increases with increasing precipitation amount, again due to convection, which tends to produce greater, more isolated bullseye values.
An inherent weakness of the class of quantile mapping methods is that, while distributional methods operate via mappings between relative positions within distributions, certain physical constraints operate on the absolute amounts of precipitation via spatial scale. A perturbing factor such as climate change (Jupiter) or excessively infrequent precipitation (Alpena) can distort the mapping in a manner that yields a physically inconsistent result, that is, one in which the DBIAS and its inverse get mapped to each other. Other perturbing factors, which have yet to be identified, may exist as well. The underlying characteristics of precipitation which often lead to poor results are not inherent to other, better behaved variables such as temperature.

It is intriguing that for some seasons and locations errors in the right tail can be quite large even when downscaling has demonstrable skill. One wonders what effect these errors might have on extreme value analysis (EVA) of precipitation, which has frequently been applied to raw GCM output. Recently, Lopez-Cantu et al. (2020) performed EVA on CONUS precipitation and found large differences among five downscaled datasets. In future work we intend to explore this issue in a PM context.

Finally, as a bridge back to our earlier PM SD evaluations for daily maximum temperature (tasmax), which were based solely on MAE skill, here we have computed MAE-ord skill for tasmax for a limited number of cases from L19b. In this comparison we report only the averages of the three SD methods that correspond most closely to B4, Q6 and K5. For the basic approach for CAT 5, going from MAE to MAE-ord, skill (%) increases from ~42 to 67 for tasmax, compared with ~22 to 58 for precipitation (see Table 1). Using the LIM adjustment and averaging over the tail (CAT 6–9), skill increases from ~46 to 57 for tasmax, compared with approximately −8 to 32 for precipitation. Hence, the improvement in skill based on MAE-ord over that for MAE is greater for precipitation than for temperature, as expected given the more stochastic nature of the former. Furthermore, the improvement in the tails is much greater for precipitation, even though tail performance is much poorer compared with overall (CAT 5) performance for precipitation than for temperature.
Knocking Down CDKN2A in 3D hiPSC-Derived Brown Adipose Progenitors Potentiates Differentiation, Oxidative Metabolism and Browning Process

Human induced pluripotent stem cells (hiPSCs) have the potential to be differentiated into any cell type, making them a relevant tool for therapeutic purposes such as cell-based therapies. In particular, they show great promise for obesity treatment as they represent an unlimited source of brown/beige adipose progenitors (hiPSC-BAPs). However, the low brown/beige adipocyte differentiation potential in 2D cultures represents a strong limitation for clinical use. In adipose tissue, besides its cell cycle regulator functions, the cyclin-dependent kinase inhibitor 2A (CDKN2A) locus modulates the commitment of stem cells to the brown-like type fate, mature adipocyte energy metabolism and the browning of adipose tissue. Here, using a new method of hiPSC-BAP 3D culture, via the formation of an organoid-like structure, we silenced CDKN2A expression during hiPSC-BAP adipogenic differentiation and observed that knocking down CDKN2A potentiates adipogenesis, oxidative metabolism and the browning process, resulting in brown-like adipocytes by promoting UCP1 expression and beiging markers. Our results suggest that modulating CDKN2A levels could be relevant for hiPSC-BAP cell-based therapies.

Introduction

Obesity is considered the main risk factor for type 2 diabetes (T2D), mainly due to the excessive accumulation of adipose tissue (AT) [1]. The expansion of AT in obese individuals is a direct cause of the comorbidities, due to the excessive accumulation of triglycerides (TG) within adipocytes, leading to inflammation and insulin resistance. In mammals, there are two major types of AT that are anatomically and functionally distinct: white (WAT) and brown (BAT). White adipocytes store excess energy as TG and release free fatty acids as an energy substrate during periods of negative energy balance. BAT differs from WAT by its cellular origin, and is specialized in energy expenditure and the production of heat, mainly through active fat oxidation [2]. Elevated energy expenditure in BAT is correlated with high expression levels of a specific mitochondrial protein named uncoupling protein 1 (UCP1). More recently, the presence of a subtype of thermogenic adipocytes within WAT that also expresses UCP1 has been reported. These inducible adipocytes, named beige, are distinct from white and brown adipocytes. They mainly arise upon noradrenergic stimulation or cold exposure. The conversion of white adipocytes into brown-like adipocytes is called browning [3]. Obese individuals are characterized by increased WAT mass and decreased brown and beige AT mass and activity [4]. Increasing energy expenditure by BAT activation or by promoting the browning of WAT may represent a new therapeutic avenue to prevent insulin resistance in obesity and T2D [5]. In humans, cold exposure enhances metabolic activity and thermogenesis in BAT. This increase is accompanied by increased insulin sensitivity in diabetic patients [6]. The transplantation of BAT or brown adipocytes isolated from human adipose progenitors (APs) into the visceral cavity of mice reverses metabolic syndrome and T2D, constituting a potential translatable therapeutic tool to improve metabolic health [7].
Since beige adipocytes can arise through de novo differentiation from undifferentiated APs or via the conversion of mature white adipocytes into UCP1-positive cells, referred to as transdifferentiation [8–10], the identification of selective molecular pathways and underlying mechanisms involved in beige adipocyte biogenesis may represent a first step towards innovative therapeutic options. Genome-wide association studies have established that several single nucleotide polymorphisms, including loss-of-function mutations in the cyclin-dependent kinase inhibitor 2A (CDKN2A) locus, affect glycemia, insulin values and T2D risk, implying a role in glucose and insulin sensitivity regulation [11,12]. The human CDKN2A locus encodes two proteins, the cyclin-dependent kinase inhibitory (CDKI) p16INK4a protein and the p53 regulatory protein p14ARF (p19ARF in mice). The p16INK4a protein is a potent CDKI preventing the binding of CDK4/6 to Cyclin D, controlling the CDK4-pRB-E2F1 pathway, whereas p14ARF mainly exerts its activity via the inhibition of MDM2, a ubiquitin ligase that promotes the degradation of p53 [11,12]. In AT, besides its cell cycle regulator functions (i.e., anti-proliferative and tumor suppressor), the CDKN2A locus also controls the commitment of stem cells to the brown-like type fate and mature adipocyte energy metabolism [13–15]. We have shown that mice with a germline disruption of the Cdkn2a gene (Cdkn2a−/−) fed a high-fat diet are protected against diet-induced obesity (DIO) by increasing thermogenesis via inguinal (ing) WAT beiging, resulting in improved insulin sensitivity associated with the activation of the PKA pathway [16]. In this study, we also observed that CDKN2A expression is increased in adipocytes from obese, compared to lean, subjects [16]. Consistent with these findings, a recent study reported that silencing Cdkn2a expression in cold-inducible beige APs results in a rejuvenation of beige adipocyte formation, restoring cold-induced thermogenesis in old mice [13]. The authors also showed that silencing Cdkn2a expression in UCP1+ cells within ingWAT that display progenitor-like characteristics stimulates new beige fat formation through cell proliferation via a cell-autonomous role [17]. Overall, these data indicate the existence of an inverse correlation between the expression level of CDKN2A and beige adipocyte activity, further supporting the notion that cell-cycle genes may be involved in controlling a white-to-beige/brown fat transition that involves APs, beige adipocyte expansion and their activity in a cell-autonomous manner. Human induced pluripotent stem cells (hiPSCs) have the potential to be differentiated into any cell type, making these cells an unlimited source for studying cell-based therapy. In particular, several studies have established the therapeutic potential of hiPSCs differentiated into brown adipocyte progenitors (hiPSC-BAPs) against obesity and associated metabolic disorders [18]. However, the use of hiPSC-BAPs in 2D cultures is limited by their low adipogenic capacity and their low expression levels of UCP1 [18]. Here, to overcome this limit, we report a new method of 3D culture, via the formation of an organoid-like structure, which enhances the capacity for differentiation and the browning process of hiPSC-BAPs [19]. We have previously reported that silencing CDKN2A expression during hiPSC-BAP adipogenic differentiation in 2D cultures promotes UCP1 expression [16].
In this study, we investigated the effects of CDKN2A silencing in hiPSC-BAPs in improved 3D cultures. Understanding how CDKN2A acts to initiate a thermogenic program in hiPSC-BAPs is a first step towards activating beiging as a new putative therapy to alleviate the effects of obesity and to prevent insulin resistance and T2D. RNA-sequencing (RNA-seq) analysis and kinase activity profiling of hiPSC-BAPs further demonstrate that CDKN2A silencing enhances pathways involved in adipogenesis, oxidative metabolism and the browning process, resulting in the reprogramming of brown-like adipocytes by promoting UCP1 expression and beiging markers.

Generation of hiPSC-Derived Brown-like Adipospheres

The generation of spheroids and their adipogenic differentiation was performed as we recently described in detail [21]. Briefly, 1 × 10^6 hiPSC-BAPs were seeded per well of a 24-well Ultra-Low Attachment (ULA) plate (Corning 3473, Fisher Scientific, Illkirch-Graffenstaden, France) for three days for spheroid formation. Then, to differentiate spheroids, the growth medium was changed to a differentiation medium composed of EBM-2 (Lonza, Colmar, France) supplemented with 0.1% FCS, IBMX (0.5 mM), dexamethasone (0.25 µM), T3 (0.2 nM), insulin (170 nM), rosiglitazone (1 µM), SB431542 (5 µM), and an EGM-2 cocktail (Lonza, CC-3121) including ascorbic acid, hydrocortisone and EGF. IBMX and dexamethasone were maintained only for the first 3 days of differentiation. SB431542 and EGF were removed after the first 9 days of differentiation. Spheroids were maintained in the differentiation medium for up to 20 days, with the medium changed once a week.

2.1.2. siRNA Transfection

siRNAs (Human CDKN2A siRNA SMARTpool, GE Healthcare Bio-Sciences, Rosersberg, Sweden) were transfected at the time when hiPSC-BAPs were in suspension for spheroid formation. siRNAs (100 nM) were transfected in a medium containing 60% DMEM low glucose, 40% MCDB-201, 1× ITS, dexamethasone (10^−9 M), and sodium ascorbate (100 mM) using the HiPerFect (Qiagen, Courtaboeuf, France) transfection reagent as described by the supplier. Cells were then maintained in conditions to form spheroids and were induced to differentiate as described above.

RNA Extraction and RNA-Sequencing

Total RNA was extracted from the 3D hiPSC-BAPs at D0 and D10 of differentiation using TRIzol™ Reagent (Sigma-Aldrich, Saint-Quentin-Fallavier, France). The quality of the RNAs was verified with RNA 6000 Nano chips on the Agilent 2100 Bioanalyzer. Purified RNA (200 ng) was used for the library preparation. Briefly, RNA libraries were prepared using the TruSeq Stranded mRNA Library Preparation Kit (Illumina, San Diego, CA, USA) following the manufacturer's instructions. The libraries were sequenced on the NextSeq system (Illumina) using a paired-end 2 × 75 bp protocol. Three biological replicates per condition were sequenced. The GEO accession number for the sequencing data is GSE223241.

Protein Extraction and PamGene Kinase Assay

Proteins from spheroids and adipospheres were extracted for the PamGene kinase assay as previously described [16]. Tyrosine kinase (PTK) and serine/threonine kinase (STK) activity was investigated with PTK and STK microarrays purchased from PamGene (PamGene International BV, 's-Hertogenbosch, The Netherlands). The experiments were performed as described in the manufacturer's instructions.

Bioinformatic Analysis

For RNA sequencing, the demultiplexing of the sequence data was performed using bcl2fastq Conversion Software (Illumina; bcl2fastq v2.19.1).
The trimming of adapter sequences was performed using cutadapt (version 1.7.1). Read quality was assessed using FastQC (v0.11.5). Subsequently, sequence reads from FASTQ files were aligned to the human genome GRCh38, downloaded from Ensembl 108. Alignment was performed using the STAR aligner (version 2.5.2b). Over 19 million 75-bp paired-end reads were generated per sample. Normalized counts of the different genes and isoforms were computed using RSEM (v1.2.31) with a GTF from Ensembl 108. Finally, differential expression analysis was performed using R version 3.6.3 and the DESeq2 package v1.24.0. An adjusted p-value < 0.05 together with log2FC > 1 or log2FC < −1 was set as the threshold. We then performed pathway analysis using the core analysis function of Ingenuity Pathway Analysis (IPA) (Qiagen), and Gene Set Enrichment Analysis was done using GSEA software version 4.3.2 (GSEA; http://software.broadinstitute.org/gsea/ (accessed on 3 October 2022)). All GSEA data shown had a p-value < 0.05. For PamGene analysis, image acquisition and data analysis were performed according to the manufacturer's instructions as previously described [16]. Data and upstream kinase analyses were conducted using the BioNavigator software v.6.3.67.0 developed by PamGene. Peptides and kinases with an adjusted p-value < 0.05 and logFC > 1 or logFC < −1 were set as thresholds.
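As an illustration of the differential-expression filtering step, here is a minimal sketch in Python applying the stated thresholds to a results table such as one exported from DESeq2; the file name and the column names (padj, log2FoldChange) are assumptions based on DESeq2's default output format.

```python
import pandas as pd

# Hypothetical export of DESeq2 results (one row per gene).
res = pd.read_csv("deseq2_results_D10_vs_D0.csv", index_col=0)

# Thresholds as stated in the methods: adjusted p < 0.05 and |log2FC| > 1.
significant = res[(res["padj"] < 0.05) & (res["log2FoldChange"].abs() > 1)]
up = significant[significant["log2FoldChange"] > 1]
down = significant[significant["log2FoldChange"] < -1]
print(len(up), "up-regulated;", len(down), "down-regulated")
```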
Characterization of the Differentiation Process of hiPSC-BAPs into Adipocytes in 3D Culture

Adipogenic differentiation of hiPSC-BAPs was performed as summarized in Figure 1. Briefly, hiPSC-BAPs were plated, transfected with siRNA, and differentiation was triggered 3 days later, for 10 days. RNA-seq and PamGene experiments were performed before (D0) and after (D10) differentiation. Expression profile differences in the transcriptome of spheroids before differentiation (D0) vs. adipospheres after differentiation (D10) were determined by RNA-seq analysis. The 3D culture markedly affects hiPSC-BAP mRNA expression levels. Transcriptomic analysis revealed 3484 significantly differentially expressed genes (1644 up-regulated and 1840 down-regulated) between the D0 and D10 groups (Figure 2A). Overall, RNA-seq analysis revealed that adipogenesis (i.e., upregulation of PPARγ and CEBPα; downregulation of DIO2), markers of mature adipocytes (i.e., upregulation of ADIPOQ and PLIN1), oxidative metabolism pathways and browning adipocyte capacity (i.e., upregulation of FABP4, CIDEA, PGC1α and UCP1) are markedly activated in 10-day-differentiated 3D adipospheres (Figure S1). We also found that mRNA expression levels of DIO2, an enzyme that catalyzes T4 to T3 conversion [22], were markedly downregulated. Given that T3 is present in the differentiation medium of hiPSC-BAPs, the reduction of DIO2 may reflect active adipocyte differentiation which already adopts a brown-like phenotype [23]. After IPA and GSEA analysis, we found that several pro-adipogenic pathways were markedly modified in the D10 vs. D0 groups. We observed the activation of the cholesterol biosynthesis, LXR/RXR and PPAR signaling pathways (Figures 2C and S3), and the repression of the sirtuin, matrix metalloprotease, acute phase response, osteoarthritis, hepatic fibrosis and TGFβ signaling pathways (Figures 2B and S3). Rosiglitazone, CEBPs, IL4 and Vascular Endothelial Growth Factor (VEGF) are major upstream regulators of the up-regulated pathways (Figure S2). Other over-expressed pathways were those involved in cellular oxidative metabolism, such as oxidative phosphorylation, fatty acid oxidation and ketogenesis, as well as amino acid and noradrenaline degradation pathways (Figures 2C and S3), which are essential for adipogenesis and browning adipocyte capacity [24,25].

A global decrease in phosphorylation was observed on both STK and PTK arrays at D10 vs. D0 (Figure 3A,B). Significant differences in phosphorylation were evidenced for 5 out of 144 peptides (STK, Figure 3A and Table S2) and for 14 out of 196 peptides (PTK, Figure 3B and Table S2). Using a combined BioNavigator and IPA analysis to identify potential upstream kinases, we found several kinases that displayed differential STK (Figure 3C and Table S1) and PTK (Figure 3D and Table S1) phosphorylation at D10 vs. D0.

Knock-Down of Cdkn2a Potentiates the Capacity for Adipogenic Differentiation of Spheroids at D0

Given that CDKN2A might be required in the AP-specific browning process, we decided to assess selective molecular pathways and the underlying mechanisms involved in this process in CDKN2A-deficient hiPSC-BAPs. Expression profile differences in the spheroid transcriptome at the progenitor stage of differentiation (D0) between CDKN2A-deficient and control spheroids were determined by RNA-seq. Transcriptomic analysis revealed 245 significantly differentially expressed genes (121 up-regulated and 124 down-regulated) between both groups (Figure 5A). The reduction in CDKN2A mRNA expression levels was validated in spheroids at D0 (Figure S4). Overall, RNA-seq analysis showed that CDKN2A-deficient spheroids exhibit greater adipogenic potential (i.e., downregulation of DIO2) with an anti-inflammatory profile (Figure S4). Computational analysis indicated that pro-adipogenic pathways such as LXR/RXR, PPAR and CXCR4 (chemokine receptor) are activated (Figures 5C and S5), whereas pathways involved in the inflammatory response and TGFβ signaling were repressed (Figures 5B and S5) in CDKN2A-deficient vs. control spheroids [24]. Pro-adipogenic factors such as PTGER2 (prostaglandin receptor), VEGF, AREG (retinoic acid signaling) and FOXM1 (Forkhead Box M1) are major upstream regulators of the up-regulated pathways (Figure S5), and pro-inflammatory signaling pathways (IL6, IL1α, IL17α, NFκB, IL1β, TNF) are major upstream regulators of the down-regulated pathways (Figures S5 and S6) [24]. Up-regulation of the molecular pathways involved in oxidative activity was also observed (Figure S6), whereas no significant change was evidenced in UCP1 RNA expression levels at the D0 progenitor stage (Figure S4). In line with the RNA-seq analysis, global pro-adipogenic pathways whose phosphorylation levels were modified were identified, such as cell cycle regulation, adipogenesis, VEGF, fibroblast growth factor (FGF), as well as PTEN and JAK2/STAT3 signaling pathways (STK, Figure 4A and PTK, Figure 4B), which are implicated in proliferation/differentiation during the early stages of adipogenesis [24,25].
The CDKN2A products p16INK4a and p19ARF are key regulators of the activity of kinases involved in cell proliferation and senescence [16]. We postulated that modified kinase activity may be involved in the browning process, and we performed a global kinome analysis in CDKN2A-deficient hiPSC-BAPs. Significant differences in phosphorylation levels for 1 out of 144 peptides (STK, Figure 6A and Table S4) and for 10 out of 196 peptides (PTK, Figure 6B and Table S4) were evidenced between CDKN2A-deficient and control spheroids. Using a combined Bionavigator and IPA analysis, we identified several potentially modulated signaling pathways and upstream kinases (STK, Figure 6C; PTK, Figure 6D and Table S3). Among them, glucocorticoid (GC) receptor (GR) signaling pathways (STK, Figure 7A), immune pathways (CD28 signaling in T-helper cells, IL15 production, CTLA4 signaling in cytotoxic T lymphocytes), as well as FGF, NGF, Focal Adhesion Kinase (FAK) and insulin receptor signaling (PTK, Figure 7B and Table S3), which are implicated in proliferation/differentiation during the early stages of adipogenesis [24], were evidenced.
Computational analysis revealed that, in addition to the pro-adipogenic pathways already highlighted at D0, global cellular oxidative metabolism (glycolysis, oxidative phosphorylation, TCA cycle, fatty acid β oxidation, ketogenesis, leucine and valine degradation) and the browning process (white adipose tissue browning pathway, AMPK signaling) are markedly activated [25] (Figures 8C and S9). PPARγ and Sterol Regulatory Element Binding Transcription Factors (SREBF1 and 2) are major upstream regulators of the up-regulated pathways (Figure S8). Several kinase pathways, such as p70S6K, PI3K/AKT and sirtuin signaling, which affect the proliferation and differentiation of pre-adipocytes, are repressed [24] (Figure 8B). We then analyzed the effect of the knock-down of CDKN2A on the kinome of adipospheres. Significant differences in phosphorylation for 6 out of 144 peptides (STK, Figure 9A and Table S6) and for 8 out of 196 peptides (PTK, Figure 9B and Table S6) were evidenced. Using the Bionavigator analysis to identify potential upstream kinases, we identified several signaling pathways that displayed modified STK (Figure 9C and Table S5) and PTK (Figure 9D and Table S5) phosphorylation. Following combined computational analysis, we observed that, in addition to the pro-adipogenic and kinase pathways already highlighted by RNA-seq, AMPK and p38 MAPK, which are key players in the browning process [25], are markedly modulated. Phosphorylation differences linked to the modulation of Gαq- and Gαs-coupled G protein-coupled receptors (GPCR) (STK, Figure 10A) and pro-inflammatory signaling pathways (IL15, IL7) (PTK, Figure 10B) were also evidenced [24].

Discussion

Several studies have pointed out the therapeutic potential of hiPSC-BAPs as a promising novel therapy to alleviate the effects of obesity and T2D [18,19]. However, their low capacity for differentiation into brown-like adipocytes in 2D cultures hampers their use for further therapeutic approaches [19]. In recent years, 3D cell culture techniques have received much attention, as they might provide more accurate models of tissues. Indeed, 3D cultures generate changes in lipid accumulation and gene expression, which may lead to differentiation that more closely resembles the in vivo situation [26]. In order to improve on the differentiation capacity of 2D cultures, we developed a novel and more efficient method using 3D cultures of hiPSC-BAPs, via the formation of an organoid-like structure [19].
Our data confirm previous experiments showing that differentiation into adipospheres improves adipogenesis and browning process capacities compared to conventional monolayer BAP differentiation [16]. Fate decisions of multipotent progenitor cells to differentiate into adipocytes are driven by specific signaling pathways. In particular, the adipogenic process occurs in two major phases, commitment to APs and terminal differentiation, which are determined by modified transcriptional, epigenomic and metabolic activities [24]. Here, we showed that the differentiation of hiPSC-BAPs from spheroids to adipospheres in 3D culture results in marked transcriptomic and phosphorylation changes. Comparative transcriptome and kinome analyses of spheroids before differentiation (D0) vs. adipospheres after differentiation (D10) revealed that adipogenesis, oxidative metabolism pathways and browning adipocyte capacity are markedly activated in 10-day-differentiated 3D adipospheres. RNA-seq analysis revealed the dynamic expression changes that occur during the commitment of APs toward adipocyte differentiation (i.e., repression of the osteoarthritis and hepatic fibrosis pathways). TGFβ and sirtuin signaling pathways, which have emerged as critical anti-adipogenic players, were downregulated. TGFβ1 and 2 and SIRT1 inhibit PPARγ and CEBPα expression [27,28], and TGFβ1 inhibition suppresses the proliferation and induces the differentiation of hiPSCs [19]. By contrast, transcription factor signaling pathways (LXR/RXR, CEBPs and PPARγ), which are the master regulators of adipogenesis [29], were activated. These events are required to promote the growth arrest and differentiation of pre-adipocytes and the progressive expression of a lipogenic transcriptional program (activation of the glycolysis, oxidative phosphorylation, fatty acid oxidation and ketogenesis pathways). In line with these findings, a decrease in the phosphorylation levels of the MAPK kinases (ERK, JNK and p38), as well as of the FGF pathway, which are key regulators of early adipogenic events [24], was evidenced by Pamgene. This might reflect the terminal differentiation of 3D adipospheres into mature adipocytes. In basal 3D culture conditions, the increases in UCP1 [25], IL4 [30] and VEGF [31] expression levels, which are key to the brown adipocyte lineage, suggest that the adipocytes have already adopted a brown-like phenotype. Indeed, IL4 enhances the differentiation of APs into committed beige adipogenic precursors [32], and VEGF is synthesized in, and promotes angiogenesis in, BAT [33]. The activation of cholesterol biosynthesis [34] and angiogenesis (i.e., upregulation of VEGF) [33], and the repression of matrix metalloprotease [35] and hypoxia/inflammation pathways (i.e., downregulation of HIF1α [36] and of the acute phase response (APR)), might reflect active adipocyte-like remodeling and expansion of the adiposphere. The APR is an early response to inflammation which hampers lipid and glucose utilization in adipocytes [37]. In line with its canonical role in cell-cycle progression and differentiation, the CDKN2A locus is well known to promote adipogenesis [15]. It might also be a key determinant of brown adipocyte fate, although the underlying mechanisms and cellular pathways remain elusive [13,15]. Thus, we next assessed the molecular pathways involved in the browning process in CDKN2A-deficient hiPSC-BAPs. CDKN2A-deficient spheroids at D0 exhibit greater adipogenic potential with an anti-inflammatory profile (Figure 11).
However, no increase in the browning process was evidenced at this stage. In addition to the repressed TGFβ and activated adipogenic pathways already highlighted in basal conditions, the most striking observations were the identification of additional modulated signaling pathways, namely the activation of CXCR4 and the repression of multiple pro-inflammatory signaling pathways. GR and insulin signaling pathways also displayed significant differences in phosphorylation levels. GCs, present in most adipogenic differentiation cocktails, are potent inducers of adipogenesis in vitro. Human pre-adipocytes express GR, through which GCs stimulate the expression of PPARγ and C/EBPα during adipogenesis [38,39]. Activation of GR decreases pro-inflammatory cytokine expression [40], which is known to inhibit adipogenesis through various pathways, thus constraining the hyperplastic expandability of AT [41]. Insulin is also a powerful inducer of stem cell commitment to adipogenesis, via activation of the PI3K/Akt and MAPK signaling pathways, to promote pro-adipogenic transcription [42]. CXCR4 promotes the proliferation of APs and is required for the acquisition of brown adipocyte features; it also prevents inflammation [43,44]. Strikingly, a marked enrichment of oxidative metabolism and browning process pathways was observed in 10-day-differentiated CDKN2A-deficient 3D adipospheres (Figure 11). Although a global increase in adipogenesis and cellular oxidative pathways was already evidenced in basal conditions, the silencing of CDKN2A potentiates these pathways, along with WAT browning pathways, at the adiposphere stage of differentiation.
The finding of SREBF1 and PPARγ/RXR as major upstream regulators of the up-regulated pathways might reflect their dual role in regulating adipogenic and lipogenic pathways [45]. The marked transcriptional activation of AMPK and p38 MAPK, together with the modulation of phosphorylation levels in the AMPK and Gα signaling pathways, reinforces the idea that the adipospheres are fully committed to differentiate into mature brown-like adipocytes [25]. In line with these findings, most of the molecular pathways that control pre-adipocyte proliferation (i.e., sirtuin, ERK/MAPK, FAK, PI3K, p70S6K) [24] were down-regulated, suggesting the terminal differentiation of mature adipocytes (Figure 11). IL15, whose production pathway displays differential phosphorylation, is also known to lower the proliferation rate of pre-adipocytes [46]. AMPK has a dual role in adipogenesis: it inhibits adipogenesis by blocking the early mitotic clonal expansion, and it later activates the differentiation of pre-adipocytes into mature brown adipocytes [47]. Several studies have reported that AMPK signaling is instrumental in the browning, as well as in the energy expenditure, of beige adipocytes [48]. Activation of intracellular AMPK increases intracellular cAMP and PKA activity, resulting in induced intracellular lipolysis in BAT [48]. p38 MAPK signaling is also a key player in browning [49], acting as a downstream effector kinase of cAMP/PKA signaling in brown adipocytes [50]. During the early phase of adipogenesis, both cAMP and GC signaling pathways promote transcriptional activation, resulting in the commitment of APs to a pre-adipocyte fate and the differentiation of pre-adipocytes [51]. Gαs signaling via GPCRs that activate cAMP/PKA signaling and UCP1-dependent thermogenesis also regulates brown/beige adipocytes [52]. One limitation of our study is the lack of functional tests to further investigate the brown fat properties of CDKN2A-deficient adipospheres at the cellular level. Additional experiments comparing phenotypic differences between control and CDKN2A-deficient adipospheres are also needed to better appreciate whether CDKN2A contributes to increased BAT function and/or morphology. Moreover, at this stage, we cannot rule out that knocking down CDKN2A in hiPSC-BAPs could stimulate cell proliferation. However, no difference in size or cell phenotype was observed microscopically between control and CDKN2A-deficient adipospheres throughout the differentiation process and up to 21 days of culture (data not shown). In addition, the RNA-seq data of control and CDKN2A-deficient spheroids and adipospheres did not reveal marked modifications in the expression levels of genes involved in signaling pathways that control proliferation. This suggests that modulating CDKN2A expression in hiPSC-BAPs does not lead to uncontrolled proliferation of adipospheres. Given that silencing CDKN2A expression in hiPSC-BAPs has a limited effect on cell proliferation, it is tempting to speculate that this locus drives pathways distinct from those regulating the cell cycle to potentiate the browning process in a 3D system. Here, we demonstrated that CDKN2A plays an important role in brown-like adipogenic recruitment and maturation in a cell-autonomous manner, acting through pathways other than those that regulate the cell cycle.
In particular, we showed that the AMPK, p38 MAPK and Gαs/cAMP/PKA signaling pathways are key targets of CDKN2A silencing (Figure 11). Additional studies are needed to further delineate the contributions of these kinases and to identify both the direct and indirect activators underlying the induction of the browning process in CDKN2A-deficient stem cells. Targeting alternative CDKN2A signaling pathways that may not be involved in tumor-suppressive and anti-proliferative effects, but which drive the browning process in APs, may represent a new strategy to reprogram the cellular response and develop therapeutic approaches against obesity and T2D.

Conclusions

In conclusion, our results suggest that the CDKN2A locus is an important regulator of adipogenesis, oxidative metabolism and the browning process in a cell-autonomous manner.

Supplementary Materials: The following supporting information is available online at www.mdpi.com/xxx/s1. Figure S1. Comparative analysis of mRNA expression levels (i.e., Transcripts Per Million (TPM)) between D0 spheroids and D10 adipospheres in 3D hiPSC-BAPs. Figure S2. Upstream regulator analysis of differentially regulated pathways in 10-day-differentiated 3D adipospheres (D10 vs. D0). Figure S3. Gene set enrichment pathway analysis (GSEA) enrichment plots of representative gene sets from differentially regulated gene expression in 10-day-differentiated 3D adipospheres (D10 vs. D0). Figure S4. Comparative analysis of mRNA expression levels (i.e., Transcripts Per Million (TPM)) between control and CDKN2A-deficient D0 spheroids in 3D hiPSC-BAPs. Figure S5. Upstream regulator analysis of differentially regulated pathways between control and CDKN2A-deficient D0 spheroids. Figure S6. Gene set enrichment pathway analysis (GSEA) enrichment plots of representative gene sets from differentially regulated gene expression between control and CDKN2A-deficient D0 spheroids. Figure S7. Comparative analysis of mRNA expression levels (i.e., Transcripts Per Million (TPM)) between control and CDKN2A-deficient D10 adipospheres in 3D hiPSC-BAPs. Figure S8. Upstream regulator analysis of differentially regulated pathways between control and CDKN2A-deficient D10 adipospheres. Figure S9. Gene set enrichment pathway analysis (GSEA) enrichment plots of representative gene sets from differentially regulated gene expression between control and CDKN2A-deficient D10 adipospheres.
2023-03-15T15:19:01.647Z
2023-03-01T00:00:00.000
{ "year": 2023, "sha1": "c9899cbba81c5c53005d41e098e208fa40eca119", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4409/12/6/870/pdf?version=1678457423", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9dd9c42e3b12319100fd035f193e1de3fb656bf5", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
233200297
pes2o/s2orc
v3-fos-license
Ischemic Nephropathy Following Occlusion of Abdominal Aortic Aneurysm Graft: A Case Report

In this report, we present a case of a 55-year-old female with a past medical history of abdominal aortic aneurysm (AAA) graft, femoral-femoral bypass graft, questionable history of chronic kidney disease (CKD), abdominal hernia repair, alcoholic pancreatitis, chronic abdominal pain on opioids, and tobacco abuse who presented with acute on chronic abdominal pain with an unexplained rise in creatinine and anuria. The patient was found to have complete occlusion of the AAA graft and was determined to have ischemic nephropathy (IN).

Introduction

Ischemic nephropathy (IN) is defined as the progressive reduction of the glomerular filtration rate (GFR) as a result of diminished renal blood flow [1]. Etiologies include renovascular occlusive diseases (RVD) such as atherosclerotic renal artery stenosis (RAS) and, less commonly, fibromuscular dysplasia [2]. Findings that indicate renovascular disease include severe hypertension (HTN) that is resistant to treatment, an acute rise in serum creatinine following the administration of angiotensin-converting enzyme (ACE) inhibitors or angiotensin receptor blockers (ARBs), and flash pulmonary edema [3]. Doppler ultrasonography (US) is typically the first tool used to evaluate for renal vascular disease, but this examination may be time-consuming and its results are operator-dependent [1,4]. We present a case of IN secondary to occlusion of an abdominal aortic aneurysm (AAA) graft to highlight the difficulty of its diagnosis and to identify opportunities for the selection of appropriate imaging techniques. This article was previously presented as a virtual poster at the 2020 PeeDee Local Chapter of the Society of Hospital Medicine on November 14, 2020, and was awarded second place.

Case Presentation

A 55-year-old female with a past medical history of AAA graft, femoral-femoral bypass graft on clopidogrel, questionable history of chronic kidney disease (CKD), recent diagnosis of posterior reversible encephalopathy syndrome (PRES), uncontrolled HTN, abdominal hernia repair, alcoholic pancreatitis, cholecystectomy, chronic abdominal pain on opioids, and tobacco abuse presented with a three-day history of acute on chronic abdominal pain with associated nausea, vomiting, constipation, and decreased urine output. She reported chronic mild diffuse abdominal tenderness, with right-sided abdominal tenderness developing suddenly, described as sharp pain radiating to her back. The patient had gone to multiple hospitals for treatment; however, she had been turned away for concern about drug-seeking behavior. The patient had been taking high doses of hydromorphone (4 mg three times a day) for her abdominal pain and diazepam 5 mg daily as needed for anxiety for the last several months. She denied trauma, fever, chills, diarrhea, dysuria, and hematuria. On admission, the patient was afebrile, with a heart rate of 92 beats per minute. Blood pressure was elevated at 173/93 mmHg and oxygen saturation was normal on room air. Physical examination was significant for abdominal surgical scars and right flank tenderness. No rebound tenderness was present. Laboratory workup was significant for a white blood cell (WBC) count of 13.7 K/mm3 with a neutrophil predominance (80.8%), hemoglobin (Hgb) of 17.9 gm/dL, platelets (PLT) of 368 K/mm3, anion gap of 17 mEq/L, creatinine of 4.8 mg/dL, and GFR of 9 mL/min/1.73 m2. Lactic acid, albumin, lipase, and lipid panels were unremarkable.
Seven months prior to her presentation, the patient's kidney function had been within normal limits, but one month later, she had been noted to have fluctuating creatinine and stage 4 CKD, as shown in Figure 1. Her acute kidney injury (AKI) had been attributed to HTN and PRES, which had improved with conservative management. She had been discharged at that time with a recommendation for outpatient and nephrology follow-up. On presentation in the emergency room, imaging with contrast was not performed due to her acute renal failure and concerns for contrast-induced nephropathy (CIN). Instead, the patient underwent a non-contrast CT of her abdomen and pelvis, which showed stable calcifications in the central abdomen favoring chronic pancreatitis, severe atrophy of the left kidney greater than the right, right renal vascular calcifications, and stable postsurgical changes from cholecystectomy and femoral-femoral bypass surgery (Figure 2A). She was started empirically on ceftriaxone for suspected pyelonephritis versus another abdominal source of infection such as abscess, along with intravenous fluids, ondansetron, morphine, and fentanyl. On day two, the patient's symptoms persisted, with worsening leukocytosis of 17.1 K/mm3 and worsening creatinine and GFR. She reported anuria overnight and was transitioned to piperacillin/tazobactam for empiric treatment of an abdominal source of infection. Nephrology and general surgery were consulted for further evaluation. Given the unremarkable imaging and labs, general surgery suspected that the patient's symptoms were likely chronic, owing to her known history of chronic pancreatitis. Nephrology performed further workup of the fluctuating creatinine. Urinalysis was obtained by straight catheterization and was negative for infection and red blood cells but showed a prominent urine protein of 100 mg/dL. The urine protein-creatinine ratio was calculated to be 13.6 g/day (normal level: <0.2 g/day; nephrotic range: >3.5 g/day) and the fractional excretion of sodium (FENa) was calculated to be 0.4%. Complement C3, complement C4, cytoplasmic anti-neutrophil cytoplasmic antibodies (C-ANCA), perinuclear anti-neutrophil cytoplasmic antibodies (P-ANCA), serum protein electrophoresis (SPEP), and urine protein electrophoresis (UPEP) were within normal limits. On day three, the patient reported persistent abdominal pain requiring hydromorphone 4 mg four times a day. Intravenous fluids were continued given the potential pre-renal etiology. Leukocytosis improved from 17.1 K/mm3 to 12.3 K/mm3. A renal Doppler was performed, which showed a small left kidney and increased echogenicity of the right kidney consistent with medical renal disease, with no hydronephrosis or abscesses. The imaging was discussed with radiology, who reported adequate flow on the renal Doppler; however, they were unable to read it accurately due to a technically poor study. On day four, the patient began to have mild anasarca and shortness of breath with persistent anuria. On day five, intravenous fluids were discontinued due to increased work of breathing. A CT of her chest was obtained, which showed bilateral moderate pleural effusions, progressive bilateral lower lobe consolidations, and new development of bilateral ground-glass infiltrations consistent with potential flash pulmonary edema. A tunneled catheter was placed and dialysis was initiated for volume overload.
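For reference, the FENa reported in the workup above is computed with the standard formula FENa = (urine Na × plasma Cr) / (plasma Na × urine Cr) × 100. The sketch below is illustrative only: the patient's raw urine and plasma chemistries are not given in the report, so the input values are invented; a result below 1% is classically interpreted as suggesting a pre-renal (perfusion-limited) cause, consistent with the pre-renal etiology considered here.

```python
def fena_percent(urine_na, plasma_na, urine_cr, plasma_cr):
    """Fractional excretion of sodium, in percent.

    Units cancel as long as both sodium values share units and both
    creatinine values share units.
    """
    return (urine_na * plasma_cr) / (plasma_na * urine_cr) * 100.0

# Invented example values; plasma creatinine of 4.8 mg/dL matches the case.
print(f"FENa = {fena_percent(urine_na=20, plasma_na=140, urine_cr=100, plasma_cr=4.8):.1f}%")
```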
Due to the unexplained creatinine rise and persistent abdominal pain, CT angiography (CTA) of the abdomen and pelvis was obtained, which showed an occluded femoral-femoral bypass graft, occlusion of the aorta to the right and left common femoral artery bypass graft (patent one year prior), and impaired renal perfusion due to complete occlusion of the aorta just after the superior mesenteric artery (SMA) branches off. There were minimal collateral vessels to the kidneys. The celiac artery and SMA were patent, with reconstitution of the left common femoral artery via collateral vessels (Figure 2B). The inferior mesenteric artery was reconstituted via a collateral vessel called the arc of Riolan (Figure 2B, arrow). Vascular surgery was consulted for potential surgical intervention. The patient was placed on a heparin drip and transferred to the intensive care unit (ICU). Due to her significant coagulopathy and limited renal function requiring dialysis, it was determined that the risks of surgical intervention outweighed the benefits. Hence, she was treated with supportive care and appropriate pain management. Her heparin drip was discontinued and apixaban was added to her clopidogrel. She was transferred out of the ICU in stable condition and with relief at knowing the origin of her pain. She was discharged for outpatient follow-up and routine hemodialysis.

FIGURE 2: CT vs. CTA of abdomen and pelvis. A: Non-contrast CT of the abdomen and pelvis showed stable surgical changes and a limited view of the vasculature. B: CT angiography (CTA) of the patient's abdomen and pelvis showed complete occlusion of the abdominal aortic aneurysm graft and poor vasculature. A collateral vessel called the arc of Riolan (arrow) is shown, which is formed between the proximal superior mesenteric artery and the inferior mesenteric artery in the setting of severe vascular occlusion [5]. CT: computed tomography.

Discussion

Progressive kidney dysfunction and HTN are the predominant features of IN [1]. Patients who have a progressive reduction in GFR of unknown etiology should be evaluated for renovascular disease, which manifests as an acute rise in serum creatinine after angiotensin blockade, resistant malignant HTN, new-onset HTN (more likely atherosclerotic disease if the patient is more than 50 years of age; more likely due to fibromuscular dysplasia if the patient is less than 50 years), fluctuation of creatinine with volume status, flash pulmonary edema, congestive heart failure, and deterioration of renal function after placement of an endovascular aortic stent graft [1,6,7]. Of note, 60-90% of IN cases are due to atherosclerosis, 10-30% are due to fibromuscular dysplasia, and less than 10% are due to vasculitis, thromboembolic disease, and other causes [6]. The exact prevalence of IN is unknown, but it has been estimated to be responsible for end-stage renal disease (ESRD) in approximately 5-22% of patients who are more than 50 years old [1]. Other risk factors for atherosclerosis include age of more than 50 years, hyperlipidemia, and tobacco use [6]. Early detection and medical therapy are important, as a delay in diagnosis could lead to worsening of the disease; yet studies have shown that kidney function may deteriorate even after renal revascularization, with no apparent mortality benefit [8].
To evaluate for IN, labs should include serum creatinine and a urine protein-creatinine ratio (a mild to moderate degree of proteinuria that is usually not in the nephrotic range) to assess renal dysfunction, a urinalysis to rule out glomerulonephritis, and serologic studies to rule out rheumatologic disease, including antinuclear antibodies, C3, C4, and anti-neutrophil cytoplasmic antibodies [6]. Patients with suspected IN should be considered for renal arteriography, which is considered the gold standard; however, duplex US has been the initial test of choice due to its low cost, accessibility, and reported high sensitivities and specificities (approximately 90%). Although many use renal Dopplers as a screening tool for RAS and IN, ultrasounds are operator-dependent, with failure rates as high as 20% [9]. CTA and gadolinium-enhanced magnetic resonance angiography (MRA) have the highest diagnostic probability, as per a meta-analysis of 55 studies comparing CTA, MRA, US, and captopril scintigraphy [1,10]. If clinical suspicion is still high, CTA or MRA should be ordered regardless of kidney function. In a retrospective study of more than 12,000 patients, the incidence of AKI appeared to be independent of contrast exposure, even in patients with CKD stage 4 and higher [11]. Other studies have shown that the incidence of acute adverse events from contrast is similar to that in patients who are not exposed, and that the incidence of CIN may be overestimated [12,13]. MRA is another option; however, this test is typically avoided in patients with renal insufficiency due to adverse effects such as gadolinium-associated nephrogenic systemic fibrosis (NSF) [6]. Some studies have shown that the adverse effects of gadolinium-based contrast agents may be overestimated as well, with NSF occurring in only 0.07% of CKD stage 4 and stage 5 patients [14]. If there is high concern for severe renal damage from imaging, peri-procedural hydration, low-osmolar or iso-osmolar contrast, and reducing the contrast load appear to be most beneficial for patients at risk of CIN. If fluid overload is an issue, mixed studies suggest that pharmacologic agents such as acetylcysteine and fenoldopam may be of some benefit in reducing renal injury [15,16]. Treatment of RAS and IN includes management of HTN, cholesterol, heart failure, and pulmonary edema, and prevention of nephropathy. Angiotensin blockers are recommended in patients with early ischemic renal disease and may reduce cardiovascular mortality by up to 10%; however, they are of limited utility due to acute creatinine rise and hyperkalemia [17]. Patients can have a creatinine rise of more than 30% above baseline, and this medication should be discontinued in the setting of continued renal dysfunction [6]. Statin and antiplatelet therapy have also shown mortality benefit and should be started in patients with RAS or concern for atherosclerosis [18]. Patients who have failed medical treatment for resistant HTN and those who have recurrent flash pulmonary edema or heart failure should be considered for percutaneous transluminal renal angioplasty (PTRA) with or without stent placement. If kidney deterioration is chronic, kidney size is less than 8.0 cm, or the resistive index (a US-calculated measurement of renal blood flow) is equal to or greater than 0.80, patients tend to have little improvement in clinical status and renal function after revascularization [2].
Surgical revascularization is another option; however, in a small randomized trial comparing angioplasty with surgery, improvements in HTN and renal function were found to be similar, supporting nonsurgical intervention as the first-line treatment. In addition, patients who undergo surgical revascularization may have in-hospital mortality as high as 10% [18]. Some reports have shown that more than 35% of patients with renovascular disease will require dialysis and have accelerated mortality. In patients who require renal replacement therapy due to renovascular disease, a 25-month median survival and a five-year survival rate of 18% have been reported [2].

Conclusions

When dealing with patients with known severe vascular disease, clinicians should maintain a high index of suspicion for IN, especially in the setting of an unexplained creatinine rise, HTN, and signs of volume overload. Patients requiring chronic narcotics and those who are suspected of narcotic abuse should be carefully examined for organic causes of their symptoms. The quality of renal Dopplers is operator-dependent, and further investigation with CTA or MRA is warranted if clinical suspicion remains high for RAS or IN, even in the setting of CKD. Further studies are required to evaluate appropriate imaging modalities in patients with severe coagulopathy and complicated anatomy.

Additional Information

Disclosures. Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2021-04-11T05:12:23.096Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "ecb8120f8a9069e816510213e3b562b9bd3d482b", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/53422-ischemic-nephropathy-following-occlusion-of-abdominal-aortic-aneurysm-graft-a-case-report.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ecb8120f8a9069e816510213e3b562b9bd3d482b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
28265567
pes2o/s2orc
v3-fos-license
Crystal Structure of MpPR-1i, a SCP/TAPS protein from Moniliophthora perniciosa, the fungus that causes Witches' Broom Disease of Cacao

The pathogenic fungus Moniliophthora perniciosa causes Witches' Broom Disease (WBD) of cacao. The structure of MpPR-1i, a protein expressed by M. perniciosa when it infects cacao, is presented. This is the first reported de novo structure determined by single-wavelength anomalous dispersion phasing upon soaking with selenourea. Each monomer has flexible loop regions linking the core alpha-beta-alpha sandwich topology that comprise ~50% of the structure, making it difficult to generate an accurate homology model of the protein. MpPR-1i is monomeric in solution but packs as a crystallographic heptamer with high (~70%) solvent content. The greatest conformational flexibility between monomers is found in loops exposed to the solvent channel that connect the two longest strands. MpPR-1i lacks the conserved CAP tetrad and is incapable of binding divalent cations. MpPR-1i has the ability to bind lipids, which may play a role in its infection of cacao. These lipids likely bind in the palmitate binding cavity, as observed in tablysin-15, since MpPR-1i binds palmitate with affinity comparable to that of tablysin-15. Further studies are required to clarify the possible roles and underlying mechanisms of neutral lipid binding, as well as their effects on the pathogenesis of M. perniciosa, so as to develop new interventions for WBD.

In addition to lipid binding motifs, SCP/TAPS proteins are characterized by a large central CAP cavity, as large as 1638 Å3 in the case of Pry1 26. Early studies of SCP/TAPS proteins indicated that the central CAP cavity contains a tetrad of residues, two His and two Glu, that bind divalent cations including Zn2+ and Mg2+ 15,22,29,30. The tetrad was shown to be important for the Zn2+ binding and heparin-sulfate-dependent inflammatory modulation mechanisms of the cobra CRISP natrin 29. The tetrad residues are contributed by four poorly conserved CAP motifs defined by Gibbs and colleagues 22. Additionally, the CAP cavity is independent of the lipid cavities and is not connected to them within the monomer. A crystallographic dimer is formed in the Pry1 crystal structure, in which the central CAP cavity is connected to the CBM 26. It remains unknown whether this crystallographic dimer has any functional role 26. Furthermore, the CAP tetrad is not required for sterol transport, because SmVAL4, a CAP protein lacking the tetrad, is able to effectively transport sterol in vivo and bind sterol in vitro 31. Additionally, mutating the tetrad did not reduce the ability of Pry1 to bind and transport sterols 27. These studies indicate that SCP/TAPS proteins have independent lipid and cation binding functions. Despite having a conserved alpha-beta-alpha sandwich topology, SCP/TAPS proteins are ~50% loops, which makes it difficult to predict their structures 13,26,31,32. We present in this report the structure of MpPR-1i, a SCP/TAPS protein expressed by M. perniciosa during the biotrophic stage of WBD, in basidiomes, and in monokaryotic mycelia 33. MpPR-1i has less than 25% sequence identity with any of the structures in the Protein Data Bank, which hampered efforts at solving the structure using molecular replacement. The crystal structure of MpPR-1i was determined using the selenourea (SeUrea) soaking method to solve the phase problem 34. This is the first de novo structure determined by SeUrea phasing.
Using TLC analysis, a neutral saturated lipid was found bound to recombinant MpPR-1i (Figure S.6). Attempts at identifying the lipid by mass spectrometry failed, likely due to experimental limitations related to the ionization of neutral lipids, as was previously observed in studies of HIF-3α, where the authors identified the nature of the phospholipids but were unable to identify neutral lipids 35. Interestingly, the crystal structure of MpPR-1i did not reveal any electron density for bound lipid, which is not unusual considering the low resolution of the structure, and could also result from the crystallization agents outcompeting the lipid or from the conformational flexibility of the lipid. The lipid identified by TLC was acquired during recombinant production in E. coli and may not be the same lipid that MpPR-1i binds endogenously when M. perniciosa infects cacao. Future studies beyond the scope of this manuscript include identifying the major lipids secreted during this infective process and determining whether MpPR-1i is capable of binding to them.

Structure Determination. All attempts at molecular replacement failed, which was not unexpected since MpPR-1i shares less than 25% sequence identity with any known structure. Despite the large number of sulfur atoms, attempts at single-wavelength anomalous phasing using the S signal (S-SAD) failed. Single-wavelength Se anomalous data were collected to 2.9 Å resolution after soaking a single crystal with SeUrea, and nine SeUrea binding sites were identified. Using these phases, 1225 amino acid residues corresponding to seven monomers were built into the asymmetric unit (Table 1). In the refined model, six SeUrea are located at the interfaces of adjacent monomers, while three occupy relatively weak binding sites (Figure S.7). SeUrea interacts with the side-chain carboxamide group of Gln68 and the main chain of Val122 through hydrogen bonds (Figure S.7). The structure was refined and extended to higher resolution using a 2.43 Å native data set. Coordinates and structure factors for both models have been deposited in the Protein Data Bank under accession numbers 5V50 (native) and 5V51 (SeUrea).

Overall Structure of MpPR-1i. Each monomer of MpPR-1i has a conserved alpha-beta-alpha sandwich topology made up of 3 β strands sandwiched between two helical domains, connected by loops (Fig. 1a). One of these loops connects the two longest β strands, extends out from the core structure, and is exposed to the solvent channel in the crystal. There are seven monomers in the asymmetric unit, which form a pseudo-sevenfold screw axis when viewed along the diagonal of the cell (Fig. 1b,c and d). The MpPR-1i crystal has very high solvent content, ~70%, which is clearly demonstrated by the solvent channel seen in the crystal packing viewed along a cell dimension (Fig. 1c). The main chains of the MpPR-1i monomers are very similar, with rmsd values ranging from 0.19 Å to 0.27 Å. The most variable regions between the monomers are loop regions, notably the solvent-exposed loop connecting the two longest β-strands, as well as the N- and C-terminal loops (Fig. 2a). The amino termini of six monomers have the same orientation, while one (labeled monomer B) has a different orientation (Fig. 2a). While six monomers have conserved C-terminal loops, the main- and side-chain residues starting from Leu155 in the carboxyl terminus of one (labeled monomer C) are flipped into a conformation opposite to that of the other monomers.
Notably, residues Tyr158 and Tyr159 in monomer C are oriented 90° away from what is observed in the other monomers (Fig. 2a and b). The interface between adjacent monomers appears to be crucial for crystal packing and has a buried surface area of ~800 Å2 per monomer. None of the intermolecular contacts between monomers have more than 8 hydrogen bonds, and the majority of the residues at the monomer interface are hydrophobic, as illustrated by the interface between monomers A and B (Fig. 2c and d).

Central CAP cavity. Like other reported SCP/TAPS protein structures, MpPR-1i has a large central CAP cavity (Fig. 3a,b) 13,15,22,29,36-38. The volume of the CAP cavity of MpPR-1i is 1334.39 Å3, which is comparable to the large size previously observed in Pry1, at 1638 Å3. In many CAP proteins, the central CAP cavity contains a tetrad formed by residues from four signature CAP motifs: His from CAP1, Glu from CAP2, His from CAP3, and Glu from CAP4. These tetrad residues bind divalent cations including Zn2+ and Mg2+ (Fig. 3c,d) 13,15,16,21,22,24,29,30,39. MpPR-1i, like SmVAL4, lacks the tetrad that binds divalent cations in other SCP/TAPS proteins 31 (Figs 3 and 4). This explains why MpPR-1i does not bind the Zn2+ used in the crystallization solution. It remains unknown why some SCP/TAPS proteins have the conserved tetrad while others do not; however, the absence of the tetrad in MpPR-1i means it lacks the ability to bind divalent cations and will not be involved in heparin-sulfate-dependent inflammatory modulation mechanisms like natrin 29. The binding affinity of MpPR-1i for palmitate (Fig. 5a and b) was determined using our established in vitro lipid-binding assay 27, and this analysis showed that MpPR-1i binds palmitic acid. The estimated equilibrium constant for MpPR-1i is a Kd of 107 μM, which is comparable to that of tablysin-15, with a Kd of 94 μM 36 (Fig. 5c).

Discussion

Selenourea phasing. All attempts at molecular replacement failed regardless of the search model used, so we tried phasing by anomalous diffraction. Although the crystallization condition contains zinc acetate, no anomalous signal for Zn2+ ions was observed in any of the data sets, which was expected since MpPR-1i lacks the CAP tetrad. SeUrea soaking provided sufficient anomalous signal to phase the crystal structure of MpPR-1i. The low-resolution SAD data at 2.9 Å have enough anomalous signal to locate the Se atoms, and enough reflections to build the whole model even without native data. This approach enables the quantitative use of SeUrea and can be adapted for phasing other structures. As previously discussed, SeUrea does not form a stable aqueous solution, so a reducing agent like sodium sulfite (Na2SO3) or TCEP is added to slow down the oxidation of SeUrea 34. The stability of SeUrea was improved by using a higher concentration of Na2SO3 to prepare the 1 M SeUrea/Na2SO3 solution, allowing the stock solution to be stored at −20 °C for several months.

Oligomerization of MpPR-1i. MpPR-1i forms a unique crystallographic heptamer, which likely has no functional relevance, as MpPR-1i forms monomers in solution. Evidence supporting the monomer includes DLS revealing a MW of ~20 kDa, the absence of dimerization peaks in MS, the similar molecular mass of ~17 kDa on both reduced and non-reduced gels, and the elution of the protein off a sizing column as a sharp peak with a molecular mass of ~17 kDa. The formation of both monomers and dimers has been previously reported for other SCP/TAPS proteins.
While some, like Na-ASP-2, GLIPR-1, and Pry1, form dimers in solution, others, like SmVAL-4, form monomers 13,15,26,31. Interestingly, none of the dimers formed within the heptamer are similar to the packing of the two-CAP Na-ASP-1 or to the dimer in Pry1 that connects the CAP cavity 14,26. While the formation of the crystallographic heptamer has no apparent functional relevance, it explains the failure of phasing by S-SAD, because the heptamer has only 42 S atoms out of 18,732 total atoms, which gives a weak anomalous S signal compared to the strong Se signal from SeUrea soaking.

Comparison of MpPR-1i with other SCP/TAPS proteins. Using PDBFold, the structures most similar to MpPR-1i were identified as the apo structure of the human Golgi-associated PR-1 protein GAPR-1 16,24, Pry1 from yeast 26, SmVAL4 from Schistosoma mansoni 31, the NMR structure of a plant P14a 17, and the structures of the human glioma pathogenesis-related protein (sGLIPR1) 15. MpPR-1i shares 19.4%, 24.2%, 20.8%, 24.3% and 20.2% sequence identity with these proteins, respectively. While the core alpha-beta-alpha sandwich topology is conserved, MpPR-1i has different loop regions as well as different helix and strand lengths compared to the other structures (Fig. 4). The regions of greatest flexibility have been implicated in ligand binding and make up ~40% of the structure. Interestingly, the caveolin binding motif (CBM) loop, which has been implicated in cholesterol binding in Pry1, is significantly shorter in MpPR-1i than in other CAP proteins (Fig. 4). The shortened length of the CBM loop significantly reduces the size of the sterol binding cavity, rendering it barely large enough to accommodate dioxane and definitely too small to accommodate cholesterol (Fig. 3). Thus, the structural data strongly suggest that MpPR-1i lacks the ability to bind cholesterol. In vivo and in vitro analyses of the implications of the small CBM for sterol binding by MpPR-1i are currently being investigated and will be published elsewhere.

Lipid binding function of MpPR-1i. MpPR-1i gene expression was detected in monokaryotic mycelia, basidiomata, and especially in the green broom stage of the disease 33, which suggests participation in fungal pathogenesis. The observation that MpPR-1i binds to a neutral lipid suggests that it can accommodate fatty acids in its large, open palmitate binding cavity between α-helices 1 and 4 (Fig. 6a and b), as observed in SmVal4 and tablysin-15 31,38. Tablysin-15 is a protein present in the saliva of the horsefly Tabanus yao, which scavenges cysteinyl leukotriene, an eicosanoid lipid that promotes the inflammatory response 38. During plant infection, lipolytic enzymes target host cellular membranes, releasing free fatty acids, such as oxylipins, that have roles in plant immunity 40. Indeed, the binding affinity measured in our established in vitro lipid binding assay was comparable to that previously observed for tablysin-15 28. Therefore, MpPR-1i could act similarly to tablysin-15, sequestering lipids that potentiate the plant defense response. Further studies are needed to determine the binding of MpPR-1i to free fatty acids that are important in plant immunity.

Conclusions

The structure of MpPR-1i was determined by SeUrea phasing. This is the first de novo structure determined using this phasing technique and reveals the applicability of this method to a new structure with >70% solvent content. MpPR-1i is a compact CAP protein that is a monomer in solution but is packed as a high-solvent-content crystallographic heptamer.
The loops connecting the two longest strands are exposed to the solvent channel and exhibit the largest inter-monomer conformational flexibility. MpPR-1i retains the palmitate binding cavity while the sterol binding CBM cavity is smaller than previously observed in other SCP/TAPS proteins. Future studies include assessing the mechanisms of lipid binding by MpPR-1i. Mass Spectrometry. Lyophilized protein was reconstituted by addition of water and 5% acetonitrile prior to mass spectrometry (MS) analysis using an Impact II QTOF mass spectrometer (Bruker Daltonics), equipped with a Qtof Control and Electrospray source. MS spectra were acquired in positive ion mode using water, 5% acetonitrile, and 0.1% formic acid. Instrument parameters were set as follows: nebulizer gas (Nitrogen) pressure, 2 Bar; Capillary voltage, 4.500 V; ion source temperature, 180 °C; dry gas flow, 9 L min-1; spectra rate acquisition between m/z 300-2000. Recombinant protein expression and purification of Crystallization and selenourea soaking. Lyophilized In vitro palmitate binding assay. The radioligand binding assay was performed as described previously 43,44 . Purified protein (100 pmol) in binding buffer (20 mM Tris, pH 7.5, 30 mM NaCl, 0.05% Triton X-100) was incubated with [ 3 H]-palmitic acid (100-400 pmol) for 1 h at 30 °C. Protein was removed from unbound ligand by adsorption to Q-sepharose beads (GE healthcare, USA), the beads were washed, and the protein-bound radioligand was quantified by scintillation counting. To determine non-specific binding, the binding assay was performed without the addition of the protein. Data Collection and Structure Determination. Synchrotron X-ray diffraction data were collected at wavelength of 0.978 Å on Southeast Regional Collaborative Access Team (SER-CAT) 22-ID beam-line at the Advanced Photon Source, Argonne National Laboratory, USA. Data sets were processed with HKL2000 45 in space group P2 1 with the "auto-correction" option turned during scaling. The best SeUrea soaked crystals diffract to 2.9 Å, while the best native crystals diffract to 2.43 Å. Attempts to solve the crystal structure of MpPR-1i by molecular replacement by submitting both data to BALBES online server failed 46 . Parallel attempts at phasing using multiple MR search models, truncated CAP proteins, and polyalanine models [13][14][15] with PHASER 47, 48 were also unsuccessful. The phenix.anomalous signal in PHENIX package was used to estimate the correlation coefficient for anomalous data set processed without merging Friedel pairs 49,50 . Correlation coefficient for anomalous data set (CC ano ) at different resolution is shown in Figure S.8. SHELXD was used to find the sub-structure of the anomalous data and identified six Se 51 ; however, attempts to build the polyalanine model even with relatively higher resolution native data using SHELXE failed. After switching to Phenix.Autosol for phasing and model building with Phenix.Autobuild, an initial model with R = 0.37 and R free = 0.41 was obtained, indicating that the correct solution was found 52 . Buccaneer was adopted for further model building which resulted in an 88% complete model with R = 0.29 and 984 residues assigned into seven chains. The highest quality single chain was extracted and used as the molecular replacement model in PHASER 53 to generate a more complete model. The SeUrea binding sites were cross validated by anomalous difference map and the heavy-atom sites found by Phenix. Autosol, then incorporated into model by Coot 53 . 
Thereafter, the structure was iteratively and manually adjusted in Coot and refined using REFMAC5 54,55 and PHENIX 52. The occupancies of the SeUrea molecules were also refined. Data collection and structure refinement statistics are listed in Table 1.
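As a point of reference for the binding data reported above (Kd ≈ 107 μM for palmitate), the radioligand assay described in the methods is typically analyzed by fitting a one-site specific binding model, B = Bmax·[L]/(Kd + [L]). The sketch below is illustrative only: the paper's raw counts are not given, so the ligand concentrations and bound values are invented, and this is not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site(L, bmax, kd):
    """One-site specific binding: B = Bmax * L / (Kd + L)."""
    return bmax * L / (kd + L)

# Invented ligand concentrations (uM) and bound signal (arbitrary units).
L = np.array([10, 25, 50, 100, 200, 400], dtype=float)
bound = np.array([8.0, 17.5, 29.0, 44.0, 58.0, 71.0])

# Fit Bmax and Kd; p0 gives rough starting guesses for the optimizer.
(bmax, kd), _ = curve_fit(one_site, L, bound, p0=[100.0, 100.0])
print(f"Bmax ~ {bmax:.1f}, Kd ~ {kd:.0f} uM")
```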
2017-11-13T09:14:24.611Z
2017-08-10T00:00:00.000
{ "year": 2017, "sha1": "53da69ea97216a2b41bafafa742696681bf51cae", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1038/s41598-017-07887-1", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3f5ac4e77565eae048da913dd441259248e0d79a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
159416332
pes2o/s2orc
v3-fos-license
Integración de las metodologías Building Information Modeling 5D y Earned Value Management a través de una herramienta computacional (Building Information Modeling 5D and Earned Value Management methodologies integration through a computational tool)

Many construction projects present uncertainty in their budgets and schedules, and their management of time and costs is inconsistent. There are methodologies and techniques that improve the management of construction projects: techniques such as Earned Value Management (EVM), ideal for planning, monitoring and controlling the management of time and costs during the execution of projects, and methodologies such as Building Information Modeling (BIM), recognized for improving the planning and design of construction projects. This paper proposes the integration of BIM and EVM through an OpenBIM software called COST-BIM, designed in the JAVA programming language and the NetBeans 8.0.1 development environment. It manages construction project time and costs under a single interface, consisting of four modules and fifteen processes. The software is validated through a real social interest housing project (Vivienda de Interés Social, VIS), comparing the original budget, schedule, EVM indicators and EVM projections of the project versus those generated by the tool. COST-BIM manages construction projects from planning, through execution, to monitoring and control, making it a useful software tool for construction managers who strive to increase the performance of their projects.

Introduction

Construction companies who have managed building projects know that they usually take more time and money than initially planned (Abadie et al., 2013). Therefore, construction managers are interested in managing and controlling their schedules and budgets in order to reduce costs and maximize profits (Hitt et al., 2006). However, preparing them requires collecting large amounts of data from different professionals, who generally do not interact nor communicate efficiently among themselves during the design phase of project deliverables, thereby entailing incompatibilities, data losses and reprocessing during the integration process. This produces uncertainties in the quantification of resources (materials, labor and equipment) associated with the deliverables that define the planned schedule and budget, which subsequently causes variations between the planned and executed schedule and budget of the project. But uncertainty in the planned schedule and budget is not the only cause of such variations in construction projects. According to Abadie et al. (2013), it is only one of several causes; the most common are bad planning, inappropriate definition of the project's objectives and scope, poor communication among the professionals involved, ineffective project management and supervision, unsuitable identification of project risks, and deficient systems for estimating and controlling project time and costs.
Furthermore, they add that the issue of budget and schedule variations is present at a global level. They analyzed 975 building projects and found that only 5.4% of the projects were within budget, and 36.4% incurred cost overruns above 50% of the planned budget. These data demonstrate the impact of the variations between the schedule and the planned and executed budget on the project's final cost. However, this phenomenon is not inherent to the planning phase; during the execution phase, there are often delays which cause cost overruns, because there is no system controlling the performance of the work actually completed (Abadie et al., 2013).

Based on the above, the variation between the planned and executed schedule and budget of construction industry projects lies in the absence of a system that integrates the tools necessary to manage this kind of project, from its planning and design stage to its execution, tracking and control, in a single interface. Nevertheless, technology is radically changing building management practices. Nowadays, there are methodologies like Building Information Modeling (BIM) and techniques like Earned Value Management (EVM) which are capable of managing construction projects from the planning and design stages to their execution, tracking and control, respectively.

The aforesaid clearly implies the need to reduce the variations between the planned and executed schedule and budget of construction projects. Consequently, using techniques like EVM, ideal for monitoring and controlling time and cost management during project execution (Project Management Institute, 2005), and the BIM methodology, recognized for improving the planning and development of construction projects, this research developed a BIM-EVM integration by creating a software tool called COST-BIM. This software aims at improving the management of construction project schedules and budgets within an interoperable virtual environment for the planning, design, execution, tracking and control of building projects. The tool integrates the BIM methodology and the EVM technique on a single interface, thanks to its interoperability with OpenBIM tools, with the purpose of combining the benefits of each in construction project management and offering building managers a tool that helps them control the schedule and budget of their projects.

Background

Stevens (1986) proposed a project management tool based on the performance curve, integrating project cost and time. Miyagawa (1997) set forth a construction manageability planning system (CMY Planner). Eastman et al. (2008) introduced BIM as a more integrated design and construction process, whose goal is to produce better quality buildings at a lower cost. Enshassi and Abuhumra (2016) perceive BIM benefits in the design phase, because collaborative tasks are performed among all stakeholders from the very beginning of the project; therefore, every aspect of the design can be coordinated. Other authors add that any change made in the design is reflected in the entire model, thus eliminating errors and saving time when changing the design drawings and models. Staub-French and Khanzode (2007) set out that the adoption of the 4D model allows linking a schedule to 3D elements, thereby producing a constructive simulation. They state that the benefits are multiple: identification of conflicts in the design phase,
improved productivity, fewer change orders, better cost control, and detection of building interferences. Chou, Chen, Hou, and Lin (2010) established the need to display project information visually and automatically for an efficient project control process. Isaza et al. (2015) indicate that BIM reduces risks by 66%, improves collaborative work among professionals by 63%, reduces data reprocessing by 60%, reduces design time by 48%, increases productivity by 67%, and integrates processes inside the organization by 75%. From these studies it can be deduced that BIM on its own is not enough to manage construction projects during their entire life cycle, especially because its greatest potential lies in the design stage, for preparing building models, rather than in the execution, tracking, and control stages. On the other hand, Czarnigow (2008) analyzes the implementation problems of the EVM technique. He recommends the use of technology or programming to achieve its successful implementation and make the most of its overall potential. Considering this, many researchers have developed methods to obtain the benefits of the EVM technique. Kim (2009) evaluated EVM in residential projects, analyzed the importance of the Cost Performance Index (CPI) and the Schedule Performance Index (SPI), and introduced a framework for assessing these indicators and using them to improve project performance. Based on the above, authors like Jrade and Lessard (2015) propose an integrated time and cost management system (ITCMS). It is composed of an EVM platform developed in Microsoft Excel and MS Project, which synchronizes the construction model with the project's time and cost parameters. The project's 3D BIM model was developed in Autodesk Revit 2013 and Autodesk Quantity Takeoff. Activity durations are allocated in MS Project to establish cost estimates, and these data are integrated into a Microsoft Excel datasheet to generate EVM curves in MS Project. Su et al. (2015) developed an object-oriented model in MS Visual C#, called CSIS, which links BIM elements to their respective costs and scheduled times, automatically calculates total project costs and duration, and exports this information to MS Project or Primavera Project Planner. Consequently, it has been demonstrated that BIM improves project management in the design and planning stages. Moreover, EVM has been found to be a useful technique that allows proper time and cost supervision and control, in accordance with the project scope, so that building managers can compare progress against the planned baseline and then evaluate whether their construction will meet the budget and schedule objectives. Therefore, a BIM-EVM integration through COST-BIM is proposed, building on the standard EVM relations summarized below.
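For reference, the EVM indicators and estimates used throughout this paper follow the usual PMI definitions, summarized here in LaTeX notation (EV = earned value, PV = planned value, AC = actual cost, BAC = budget at completion, PD = planned duration; the time estimate EAC_t given last is one common earned-schedule approximation):

```latex
\begin{align*}
CPI &= \frac{EV}{AC}, & SPI &= \frac{EV}{PV},\\
CV  &= EV - AC,       & SV  &= EV - PV,\\
EAC &= \frac{BAC}{CPI}, & VAC &= BAC - EAC,\\
EAC_t &= \frac{PD}{SPI}. &&
\end{align*}
```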
Development of COST-BIM Software The COST-BIM software follows an object-oriented model and was programmed in NetBeans, a powerful integrated development environment for building applications on the Java platform. Its development required the creation of an IDE project: a group of Java source code files, together with the associated metadata, files with project-specific characteristics, and the build script that controls the compilation and execution configuration of the tool. COST-BIM was created with the idea of integrating the BIM and EVM technologies; therefore, its interface was developed according to the project management processes (planning, execution, tracking, and control) and, in turn, in accordance with a BIM Execution Plan (BEP) framework. The development of this plan enables data management during the project and is based on the recommendations of the BIM Project Execution Planning Guide (Messner, 2010). COST-BIM is an OpenBIM software tool that manages construction projects in a BIM-EVM system, prevents data leaks and reprocessing, and automatically generates and updates the budget imported from a 5D BIM model (with quantities and unit values). Additionally, it exports the schedule data to other BIM tools for 4D simulation, and performs EVM analysis by generating project indicators and trend-based estimates by activity. BIM-EVM System The BIM-EVM system is achieved through the interoperability of COST-BIM with other OpenBIM software. COST-BIM proposes an efficient system that facilitates project and management coordination and improves communication among stakeholders, thereby ensuring the storage, integration, and synchronization of project data, such as the scope, schedule, and budget, in a single BIM-EVM interface. Methodology of the BIM-EVM System The BIM-EVM system addresses the planning, execution, tracking, and control of a construction project through its interface, by creating the budget and schedule, EVM indicators, and performance curves of any kind of building project, centralized in the project's 3D BIM model and WBS (see Figure 3). The integration of the BIM-EVM interface with COST-BIM requires a BIM modeling software and a software for the control and management of BIM construction models. In this case, Autodesk Revit was used for BIM modeling and Autodesk Navisworks for model control and management; however, COST-BIM allows using any BIM software that maintains certain interoperability parameters. Figure 4 describes the integration of BIM and EVM and how it covers the planning, execution, monitoring, and control of the project, through the integration of the project's scope, time, and cost management. COST-BIM was validated to test the viability and potential of the BIM-EVM integration. The results were corroborated by testing them in two scenarios: during the planning stage and during the construction phase. The data analysis and systematization were verified through functionality tests of the system, using data from a real project. As for the validation of the results generated by the software, the COST-BIM budget and schedule were compared against those provided by the partner building company. In order to corroborate the EVM module, it was necessary to develop a Microsoft Excel datasheet, because the building company controls neither the schedule nor the budget through EVM.
COST-BIM Operation The BIM-EVM system is launched by creating the project's WBS in a .txt file. Afterwards, the BEP form is issued, which records the persons in charge and the responsibilities associated with the BIM model (see Figure 5). The project's WBS is linked through Keynotes. Based on the 2D data of the pilot project, the BIM model is built in the modeling software and all the project's relevant data are entered, such as the cost of each BIM element. The quantities and costs generated in the BIM modeling software are exported from the 3D BIM model to an .xlsx file. This file provides the link between COST-BIM and any BIM modeling software. The table contains the following data: 1. the Keynote, which classifies the ID of each activity subject to modeling within the project WBS; 2. the level where the element is located (this information is important for the configuration of the budget, schedule, EVM, and 4D analysis); 3. the physical characteristics of the element (length, height, width, area, volume), which provide the element quantities for the budget; 4. the characteristic of the element to be analyzed. This last field is highly relevant, because the unit cost of the element is inserted here, which later allows calculating the budget automatically. After the BIM model is completed, the quantities are exported to an .xlsx file and then imported into COST-BIM. The schedule is automatically generated based on the project WBS; a duration is allocated to each activity in a Gantt chart, respecting the logical precedence of the project activities (see Figure 6). Finally, a .csv file is created with the schedule data, in order to import it into the software for control and management of BIM construction models, with the aim of producing the 4D simulation. It is important to mention that, when the project quantity .xlsx file is imported into COST-BIM, the budget is created automatically according to the project WBS, simultaneously with the schedule. The budget (see Figure 7) is presented in a three-level tree structure (chapters, subchapters, items), with their respective control accounts. To conclude, the project's performance is measured through earned value. The planned and executed data are compiled and processed; then, data such as the actual cost (AC) and the percentage complete to date for each activity (based on the 4D simulation) are entered, with the aim of obtaining the indicators for the budget and schedule (see Figure 8), together with the EVM charts (see Figure 9). Pilot Project The BIM-EVM interface of COST-BIM was validated with a real social-interest housing (SIH) project located in Usme, Bogotá D.C., Colombia. The project consists of 18 towers of 6 floors, with 4 apartments per floor, for a total of 432 housing units, whose construction was to be completed in 2018. This type of project is composed of repetitive modular spaces, which simplify the modeling process, and that is why it was chosen for the validation. The total built area is 23,456 m², with saleable areas from 47 m² to 61 m². Figure 10 describes the model characteristics. It should be noted that, to simplify the validation, the process analyzed the budget and schedule of a double module (2 towers) of 24 apartments, from the foundation to the roof. The budget of this double module is $1,446,108,083.00 and the timeline is 349 days.
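Before turning to the validation data, the quantity import and automatic budget roll-up described in the COST-BIM Operation section can be made concrete with a minimal, illustrative Java sketch. It is not COST-BIM's actual source code (which is not published here); it assumes a simplified row format (Keynote, quantity, unit cost) already parsed from the exported .xlsx file, and all class and field names are hypothetical.

```java
import java.util.*;

// Hypothetical, simplified view of one row exported from the 5D BIM model.
class QuantityRow {
    final String keynote;   // WBS identifier, e.g. "01.02.03" (chapter.subchapter.item)
    final double quantity;  // length, area, volume... as exported for the element
    final double unitCost;  // unit cost entered on the BIM element

    QuantityRow(String keynote, double quantity, double unitCost) {
        this.keynote = keynote;
        this.quantity = quantity;
        this.unitCost = unitCost;
    }
}

public class BudgetRollup {
    // Rolls item costs up the WBS tree: every prefix of the keynote
    // (chapter, subchapter, item) accumulates the cost of its descendants.
    static SortedMap<String, Double> rollUp(List<QuantityRow> rows) {
        SortedMap<String, Double> budget = new TreeMap<>();
        for (QuantityRow r : rows) {
            double cost = r.quantity * r.unitCost;
            String[] parts = r.keynote.split("\\.");
            StringBuilder prefix = new StringBuilder();
            for (String p : parts) {
                if (prefix.length() > 0) prefix.append('.');
                prefix.append(p);
                budget.merge(prefix.toString(), cost, Double::sum);
            }
        }
        return budget;
    }

    public static void main(String[] args) {
        List<QuantityRow> rows = List.of(
            new QuantityRow("01.01.01", 120.0, 35_000.0),  // e.g. a foundations item
            new QuantityRow("01.01.02", 80.0, 42_000.0),
            new QuantityRow("01.02.01", 300.0, 18_500.0)); // e.g. a masonry item
        rollUp(rows).forEach((wbs, cost) ->
            System.out.printf("%-10s %,15.2f%n", wbs, cost));
    }
}
```

In this toy form, the three-level tree of Figure 7 corresponds to the keynote prefixes of length one, two, and three.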
Data Validation The budget module presents a difference of 0.85% (see Figure 11) in relation to the original project, arising from the project's reinforcing steel: the building company quantifies this activity based on the amounts provided by the structural engineer, whereas the BIM-EVM system quantifies it from the 3D BIM model. This difference is not significant, because it represents less than 1% of the original budget. Likewise, the budget generated by COST-BIM fits the project planning. Regarding the schedule module, there is a difference of 4.3% (see Figure 12) in relation to the original schedule. The difference lies in the impossibility of programming the holidays of the Colombian calendar in COST-BIM, an inconvenience that will be solved in future versions of the software. At the cutoff date of 09/10/2017, the pilot project presents a progress of 75%, a planned value of PV = $1,428,342,939, an actual cost of AC = $1,399,028,473, and an earned value of EV = $1,396,076,094 (see Figure 14). Figure 14. COST-BIM-generated Indicators, Estimates and Curves Based on the information in Figure 14, the budget shows a negative performance, with a CPI = 0.75 lower than 1, which indicates an inefficient budget investment during the project execution. Thus, the final cost of the project is estimated at EAC = $1,939,200,658, showing a variation between the planned budget and the executed budget of VAC = -$480,735,724. Regarding the project schedule, it presents delays resulting from the failure to meet the dates predetermined in the planned schedule. The project did not start as planned on 11/03/2016, but on 11/10/2016, a one-week delay caused by the non-compliance of the earthworks contractor. Subsequently, a two-week delay was produced by the non-compliance of the masonry contractors. Given this scenario, the project schedule is in a negative situation, with an SPI = 0.90 lower than 1, which indicates poor performance. Therefore, a duration of EACt = 385 days is estimated, instead of the 334 days established in the planned schedule. Thus, the project will be completed on 11/23/2017 and not on 10/02/2017, as envisaged in the planning. According to the generated indicators and estimates, COST-BIM forecasts a negative scenario for the pilot project up to the cutoff date. Systems like the one proposed in COST-BIM could mitigate this kind of negative scenario by identifying the project activities causing the variations between the planned and executed schedule and budget. Consequently, these systems can be a useful tool for construction managers during the planning and design phase, and they gain even more relevance during execution, tracking, and control, supporting the decision-making process aimed at changing the course of the project, provided that the system is implemented from the beginning of the project. Regarding the pilot project, the implementation of BIM-EVM in the COST-BIM tool demonstrates that it operates as expected and meets the previously determined requirements and specifications, because it generates the necessary alerts for assertive decisions during the project execution.
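As a companion to Figure 14, the following minimal Java sketch shows how the standard EVM indicators and estimates reported above can be computed from PV, EV, AC, BAC, and the planned duration. It is an illustrative reimplementation of the textbook EVM formulas, not COST-BIM's own code; the inputs are purely hypothetical round numbers, chosen only to reproduce the CPI and SPI magnitudes discussed above, and EACt = PD / SPI is one common earned-schedule approximation.

```java
public class EvmIndicators {
    // Standard PMI earned-value relations.
    static double cpi(double ev, double ac) { return ev / ac; }
    static double spi(double ev, double pv) { return ev / pv; }
    static double eac(double bac, double cpi) { return bac / cpi; }   // cost estimate at completion
    static double vac(double bac, double eac) { return bac - eac; }   // variance at completion
    static double eacT(double plannedDays, double spi) { return plannedDays / spi; } // time estimate

    public static void main(String[] args) {
        // Hypothetical inputs (millions of an arbitrary currency unit).
        double pv = 1_000.0, ev = 900.0, ac = 1_200.0, bac = 1_446.0;
        double plannedDays = 334.0;

        double cpi = cpi(ev, ac);  // 900 / 1200 = 0.75
        double spi = spi(ev, pv);  // 900 / 1000 = 0.90
        System.out.printf("CPI  = %.2f%n", cpi);
        System.out.printf("SPI  = %.2f%n", spi);
        System.out.printf("EAC  = %,.0f%n", eac(bac, cpi));
        System.out.printf("VAC  = %,.0f%n", vac(bac, eac(bac, cpi)));
        System.out.printf("EACt = %.0f days%n", eacT(plannedDays, spi));
    }
}
```

With a CPI below 1, EAC exceeds BAC and VAC turns negative, which is exactly the pattern COST-BIM flags for the pilot project.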
Discussion Currently, there are different OpenBIM project management software tools, ranging from 2D BIM to 5D BIM. Researchers like Su, Chen, and Chien (2015) developed CSIS, which is interoperable with Autodesk Revit and able to integrate the budget and schedule into a single interface. This tool covers only the planning phase of building projects, because its interface lacks a cost and time control module. Jrade and Lessard (2015) developed ITCMS, which is able to integrate both the budget and the schedule, adding time and cost control through the EVM technique. However, the use of several different tools for construction project management causes data reprocessing and leaks. Likewise, the software market aimed at construction project management offers different alternatives; Table 1 lists some of them and indicates their weaknesses with regard to COST-BIM and the BIM-EVM system. It can therefore be inferred that COST-BIM reduces the gap found in applications such as ITCMS, CSIS, and other software on the market. The advantage of COST-BIM is that it integrates time and cost into a single interface, in the same way as the BIM methodology and the EVM technique, thereby managing projects from planning and design through execution, tracking, and control in a single environment, and controlling the performance of the schedules and budgets of construction projects. Conclusions COST-BIM adapts itself to the BIM methodology and gets the best of the benefits of the modeling and control software available for BIM construction model management, because it allows coordinating and visualizing construction projects, with the aim of associating time and cost with each component of a 5D construction model. Likewise, it adjusts itself to, and promotes, the good practices of the Project Management Institute (PMI), by developing schedules and budgets according to PMI recommendations. Change control within the 3D BIM model and performance control against the project baselines were also considered, based on the project WBS. It has the potential to improve the workflow in construction project management and, additionally, the following benefits were identified in its implementation in the pilot project: the capacity to associate detailed time and cost values with each component of the building model during the planning phase; the automation of the cost estimating and budget preparation process; the creation of a time and cost baseline, which serves as a reference for an EVM performance analysis at any cutoff date of the project; and the estimation of the possible course of the project, using performance trends to offer the project manager a tentative scenario of the project's future. The COST-BIM software, together with BIM tools, is bound to improve good construction practices and foster the implementation of the EVM technique and the BIM methodology in the building industry. Moreover, it will help construction managers make key decisions at the right time during the project planning and execution stages, since the tool offers the possibility of analyzing indicators in an overall way (analysis by project) or specifically (analysis by control object, work packages, and deliverables), because it analyzes them independently. In the future, COST-BIM modules could include risk and quality management, since these could be directly related to 5D BIM models; this could help construction managers to manage their projects efficiently.
Furthermore, we recommend that future versions of this management tool be linked to accounting or cost control tools, in order to process the actual costs generated by each deliverable in a more automated and accurate way, since currently the user has to enter these costs manually.

Figure 3. Methodology of the BIM-EVM System

Figure 6. Schedule Module

Table 1. COST-BIM vs. Other Tools (disadvantages in relation to COST-BIM)
Edificar: its interface does not integrate the schedule, thereby generating data reprocessing; it does not export schedule data automatically from its interface for 4D simulations; its EVM is very broad and does not calculate indicators by activity.
Opus 2015: it is not OpenBIM; it does not integrate the schedule in its interface, which prevents schedule data export for 4D simulations.
Microsoft Excel: its interface does not integrate the schedule; it generates data leaks and reprocessing.
Presto: its interface does not integrate the schedule; it does not control the schedule or the budget through EVM.
Sinco-ERP: it is not OpenBIM; it does not integrate the schedule in its interface; it does not control the schedule or the budget through EVM.
Primavera P6: it does not generate the budget based on the BIM model; therefore, budget updating presents data leaks and reprocessing.
Tilos Software: it is not OpenBIM; it does not control the schedule or the budget through EVM.
Vico Software: it does not control the schedule or the budget through EVM.
Synchro Software: it does not control the schedule or the budget through EVM.
TRP Channels in Human Prostate This review gives an overview of morphological and functional characteristics of the human prostate. It will focus on the current knowledge about transient receptor potential (TRP) channels expressed in the human prostate, and their putative role in normal physiology and prostate carcinogenesis. Controversial data regarding the expression pattern and the potential impact of TRP channels on prostate function, and their involvement in prostate cancer and other prostate diseases, will be discussed. INTRODUCTION The prostate, a mainly exocrine gland that is conserved in all male mammals, is not essential for the survival of the individual, but it plays an essential role in the reproduction of the species. Prostate excision abolishes fertilization by natural means, whereas artificial insemination remains achievable, suggesting a crucial role in natural male fertility [1]. The gland comprises glandular and nonglandular regions; within the glandular region lies the transition zone, where benign prostatic hyperplasia (BPH) typically occurs. The nonglandular region is mainly composed of the anterior fibromuscular region, constituting 30% of the prostate mass [4,5] (Fig. 1). The glandular part consists of pluripotent stem cells; basal epithelial cells residing in the basal compartment of the epithelial layer and expressing CK5 and CK14; transit-amplifying cells in the basal layer, which form an intermediate cell type between undifferentiated stem cells and fully differentiated secretory cells; neuroendocrine cells that regulate growth, differentiation, and the secretory activity of the epithelium via paracrine secretion of hormones; and terminally differentiated luminal secretory cells, clustered in acini and morphologically characterized by abundant secretory granules and enzymes [6,7]. Acini drain into a system of branching epithelial ducts and tubules that end directly in the urethra. The nonglandular part, or stroma, is composed of smooth muscle cells, fibroblasts, connective tissue, and blood vessels. This part is highly innervated, mainly by adrenergic efferents. Innervation of the Prostate The majority of the acini contain a dense subepithelial plexus of nerves [8], and several autonomic ganglia are clustered at the capsule surface [5]. In the past, it was generally believed that the prostate was exclusively innervated by sympathetic efferents; however, ultrastructural imaging studies on the human prostate also showed the presence of cholinergic nerve fibers, mainly associated with the glandular epithelium [9]. Some studies even suggested that the density of cholinergic nerve fibers exceeded that of adrenergic nerve fibers in the overall prostate [10]. Adrenergic receptors consist mainly of α1-adrenoceptors (α1A > α1D), primarily localized to the prostate fibromuscular stroma, and α2-adrenoceptors (α2A, α2B, α2C), mainly associated with blood vessels [2,11]. Muscarinic receptors in the prostate mainly belong to the M1 subtype and are expressed on epithelial cells at the protein level; a smaller population of M2 receptors is found on the stromal cells [12]. Parasympathetic outflow to the prostate stimulates the secretion of prostate fluids, whereas sympathetic efferents evoke expulsion of prostate fluids during emission. The significance of the nonadrenergic-noncholinergic (NANC) nerves, also present in the prostate, is still unclear [13]. Prostate Cancer Prostate cancer (PCa) is the most commonly diagnosed noncutaneous cancer in men, and the second and third most common cause of cancer-related death in North America and Europe, respectively [14,15].
Moreover, the number of afflicted men is increasing rapidly as the population of males over the age of 50 continues to grow. In clinical practice, PCa is usually diagnosed by an abnormal digital rectal examination or by finding elevated PSA levels in the blood. The diagnosis is then confirmed by transrectal ultrasound (TRUS)-guided biopsies, which are pathologically examined and scored according to the Gleason scoring system. The Gleason score is the most frequently used grading system for PCa, used to evaluate the differentiation grade of the tumor; it is based on scoring the two most predominant glandular differentiation patterns in the prostate sample [16]. Although organ-confined PCa is curable with radical prostatectomy or radiotherapy, there are only limited treatment options for metastasized disease (hormonal treatment, chemotherapy). Huggins and Hodges described that surgical orchidectomy leading to androgen ablation was an effective and very reproducible treatment for metastatic PCa [17], a discovery for which they were awarded the Nobel Prize in 1966. This treatment is founded on the principle that prostate epithelial cells depend on androgens for their survival, as castration in a male rat leads to a loss of up to 90% of the total epithelial cells [18]. Unfortunately, androgen deprivation, either by surgical or chemical castration, or by administration of androgen receptor inhibitors, is unable to destroy all tumoral cells, and over time, growth of androgen-independent prostate tumor cells will lead to tumor progression, regardless of the hormonal status of the patient. Hence, PCa progresses and recurs during hormone therapy to a lethally resistant phenotype [19]. Transient Receptor Potential Channels The transient receptor potential (TRP) channel superfamily consists, in mammalian species, of 28 cation-permeable channels that are ubiquitously expressed and share a high degree of structural homology; that is, they form tetramers in which each TRP channel subunit consists of six putative transmembrane segments, a putative pore-forming loop between S5 and S6, and intracellularly located amino- and carboxy-termini [20,21]. Based on homology criteria, mammalian TRP channels can be divided into six subfamilies: TRPC (canonical or classical), TRPV (vanilloid), TRPM (melastatin), TRPML (mucolipin), TRPA (ankyrin-like), and TRPP (polycystin). A typical feature of these TRP channels is their ability to be activated by a wide range of chemical and mechanical stimuli. As such, they can be envisioned as the polymodal molecular sensors of the cell. Most, but not all, of the TRP channels function as Ca2+ pathways, cause cell depolarization, and also form intracellular pathways for Ca2+ release from various intracellular stores, such as the endo- and sarcoplasmic reticulum, lysosomes, and endosomes [22]. Beyond their sensory functions, they are broadly involved in diverse homeostatic functions. It is not surprising, therefore, that dysfunctions of these TRP channels are involved in the pathogenesis of several diseases [20,21,23,24]. Many TRP channels have so far been described in the genitourinary tract [25,26] and more specifically in the prostate, where they are suggested to play a role in normal prostate physiology and prostate diseases, most importantly in prostate carcinogenesis. This paper will review the current scientific evidence about the expression of TRP channels in the human prostate and their possible role in prostate diseases.
TRPM8 Expression Pattern of TRPM8 TRPM8 was cloned in 2001 as a novel prostate-specific gene by screening a prostate cDNA library [27]. One year later, it was shown that TRPM8 encodes a cold- and menthol-sensitive ion channel in trigeminal ganglion and dorsal root ganglion neurons (TGN and DRG) [28,29,30]. Using in situ hybridization on paraffin-embedded sections, it was shown that TRPM8 mRNA was expressed solely in the epithelial cells of the prostate, but not in the vascular smooth muscle cells and endothelium. Moreover, the mRNA levels in BPH and PCa appeared to be higher than in the normal prostate [27]. TRPM8 was shown to be expressed at the protein level via immunohistochemistry [31,32] and Western blot experiments [32], both in the apical epithelial cells and in the smooth muscle cells of the human prostate [31,32]. Several groups have studied the expression of TRPM8 in different PCa cell lines. In primary cultures of prostate epithelial cells, the density of the TRPM8 membrane current was increased in cancerous compared to normal cells. These currents exhibited classical cold/menthol receptor-like responses (strong outward rectification and a reversal potential close to 0 mV) [32]. Moreover, RT-PCR analysis of these cells revealed an up-regulation of TRPM8 in PCa-derived cells [32]. Also in tumoral cell lines, such as LNCaP ("lymph node carcinoma of the prostate," a widely used cell line derived from a supraclavicular lymph node metastasis and expressing the androgen receptor [AR] [33]), TRPM8 was detected by RT-PCR [27]. Zhang and Barritt also suggested a functional role for TRPM8 in LNCaP cells, since temperatures below 28°C or application of 100 μM menthol, which is sufficient for TRPM8 activation, led to an increase in [Ca2+]cyt [34,35]. In contrast, Mahieu et al. provided convincing evidence that menthol-induced Ca2+ release from intracellular stores in LNCaP cells was not mediated by TRPM8 [36]. Moreover, other authors also rejected a functional role for TRPM8 in LNCaP cells, since the application of WS-12, icilin, or menthol had no effect in Ca2+ imaging experiments [37,38]. Androgen Regulation of TRPM8 Henshall et al. suggested that TRPM8 is androgen regulated, as TRPM8 mRNA expression decreased in a xenograft mouse PCa model after castration [40]. Moreover, TRPM8 mRNA levels in LNCaP cells decreased significantly upon withdrawal of androgens [34] or treatment of the cells with AR antagonists [31]. It is doubtful, however, that the AR is essential for TRPM8 expression, since TRPM8 is expressed in PC-3 cells, which lack the AR [34]; the latter has been contested by other authors [32]. Further, Bidaux et al. reported that TRPM8 expression requires a functional AR: transfection of the AR into PNT1A cells, which do not express the AR under normal physiological conditions, induced the appearance of TRPM8, which could be reversed by incubation with siRNA against the AR [31]. Using single-cell RT-PCR and immunohistochemistry, they showed that TRPM8 is mainly located in the apical epithelial secretory cells that express CK8 and CK18, and not in the basal epithelial cells expressing CK5 and CK14 [31]. Primary cultures of prostate epithelial cells expressed the AR, TRPM8, CK8, and CK18 after 12 days, but after 20 days the cultured cells displayed a more basal epithelial phenotype, expressing CK5 and CK14, but not the AR and TRPM8 [32].
Moreover, it seems that the AR regulates the membrane translocation of TRPM8, since TRPM8 resides in the ER in the absence of the AR and only appears in the plasma membrane when the AR is expressed. The authors postulated the hypothesis of a shift from plasma-membrane TRPM8 in normal, apical, fully differentiated epithelial cells to ER TRPM8 in metastatic PCa cells during prostate carcinogenesis [32]. Clinical Relevance of TRPM8 So far, only a few studies have been conducted on the clinical relevance of altered TRPM8 expression in PCa. PSA (or human kallikrein 3), a glycoprotein that acts as a serine protease, is the blood biomarker most frequently used in the detection and follow-up of PCa. It is, however, not a perfect tumor marker, lacking both the sensitivity and the specificity to detect PCa. A recently published randomized controlled trial concluded that the rate of death from PCa was reduced by 20% due to PSA screening, but this was associated with a high risk of false-positive diagnoses and treatments for indolent cancers [41]. Thus, there is a need for a more appropriate biomarker. Several studies using quantitative RT-PCR revealed a significantly increased expression of TRPM8 mRNA in malignant prostate samples in comparison to nonmalignant tissue, suggesting that the level of TRPM8 in biopsy specimens could be used in the diagnosis of PCa [42,43]. This elevation was statistically significant, unlike the relative transcript-level elevation of PSA mRNA [42]. However, no clear correlation of TRPM8 expression with the pathological grade of PCa could be found [43]. Moreover, Henshall et al. showed a strong correlation between the level of TRPM8 mRNA expression and disease relapse after radical prostatectomy, as loss of TRPM8 was associated with a significantly shorter PSA relapse-free survival [40]. TRPM8 was even suggested as a possible target structure for immunotherapy of prostate tumors, by generating cytotoxic T lymphocytes that could lyse TRPM8-expressing LNCaP cells [43]. Role of TRPM8 in Prostate In prostate carcinogenesis, the role of TRPM8 remains unclear, although different roles have been suggested. It has been suggested that TRPM8 plays a role in PCa cell proliferation, since siRNA-induced silencing of TRPM8 in LNCaP cells led to an increased number of cells undergoing apoptosis. Similarly, application of 10 μM capsazepine, a TRPM8 antagonist, decreased the viability of LNCaP cells, indicating that TRPM8 is required for LNCaP survival [34]. In primary cultures of prostate cells, application of 10 μM geraniol, a known TRPM8 agonist, led to increased cell proliferation [44]. In addition to a possible role in PCa cell proliferation, it has also been suggested that TRPM8 is a Ca2+ release channel in the ER. In patch-clamp experiments, Thebault et al. showed that the application of 100 μM menthol to LNCaP cells did not elicit a classical TRPM8-mediated current, but rather a current with the characteristics (inward rectification and divalent cation selectivity) of a current mediated through a store-operated channel (SOC). They reported a role for TRPM8 in LNCaP cells as a cold/menthol-sensitive ER Ca2+ release channel, with ISOC activation being secondary to menthol-mediated ER store depletion [39], as was suggested by Zhang and Barritt [34]. Other possible roles for TRPM8 have also been suggested.
Since TRPM8 is colocalized with CK18 in primary epithelial prostate cultures, a role for TRPM8 as an epithelial phenotype stabilizer has been proposed [32]. TRPM8 could also act as an oncogene, since it is up-regulated in prostate adenocarcinoma, but also in other neoplastic lesions, such as colon carcinoma, melanoma, and breast and lung adenocarcinoma [27]. Lastly, Zhang and Barritt suggested a role for TRPM8 in ion and protein secretion in prostate epithelial cells [34]. TRPV6 Expression Pattern of TRPV6 TRPV6, an epithelial TRP channel highly selective for Ca2+ in organs that reabsorb Ca2+, was originally cloned from rat duodenum as a Ca2+ transport protein [45]. Northern blot analysis showed strong expression of TRPV6 transcripts in the prostate [46]. In 2001, using Northern blot and in situ hybridization, Wissenbach et al. described that TRPV6 was present in PCa tissue specimens and in lymph node metastases, but not in BPH or in the normal prostate. The most elevated levels of TRPV6 mRNA were found in high-grade, locally advanced (pT3a/b) prostate tumors, whereas no TRPV6 mRNA was detectable in low-grade PCa, suggesting that TRPV6 could be a promising prostate tumor marker [47]. Two allelic variants of TRPV6 have been described, TRPV6a and TRPV6b, differing in five base pairs but not encoding ion channels with different properties; the onset of PCa seemed to be independent of the TRPV6 genotype [48]. In 2001, Peng et al. [51] confirmed these results via in situ hybridization experiments, but claimed that TRPV6 was also expressed in normal epithelial cells, BPH tissue, and LNCaP cells. TRPV6 was also detected at the protein level in immunohistochemistry experiments, and its up-regulation was also demonstrated in other malignancies of the breast, thyroid, colon, and ovary [49]. Interestingly, TRPV6 mRNA expression correlated significantly with the Gleason score and the pathological stage (TRPV6 was absent in the normal prostate, BPH, and pT1a/b lesions, but appeared in higher pathological stages) [50]. Androgen Regulation of TRPV6 Peng et al. suggested in 2001 that TRPV6 expression is androgen controlled, showing that the administration of AR antagonists to LNCaP cells resulted in a twofold increase of TRPV6 mRNA levels, whereas adding dihydrotestosterone (DHT) decreased TRPV6 levels [51]. In contrast, TRPV6 mRNA expression studies revealed decreased TRPV6 expression levels in androgen-deprived human prostates [50]. Other authors found that the application of AR antagonists or DHT had no significant effect on TRPV6 expression in LNCaP cells at all [52]. siRNA knockdown of the AR, however, induced a significant decrease of TRPV6 expression. Moreover, it was suggested that TRPV6 is regulated by the AR in a ligand-independent manner and that the AR constitutes an essential cofactor of TRPV6 gene transcription in LNCaP cells [52]. TRPV6 and KCa3.1 Cell hyperpolarization will always increase the driving force for Ca2+ entry via Ca2+-permeable ion channels such as TRPV6. Ca2+ entry via these channels depends on coactivation of the intermediate-conductance, calcium-activated potassium channels (IKCa or, according to the IUPHAR nomenclature, KCa3.1 or SK4) [53], which are expressed in LNCaP cells as well as in primary prostate epithelial cultures. Moreover, KCa3.1 seems to be preferentially expressed in PCa tissue, leading to hyperpolarization of the plasma membrane, after which TRPV6 opens and Ca2+ influx occurs.
siRNA knockdown of KCa3.1 and pharmacological blocking of KCa3.1 led to decreased cell proliferation in LNCaP cells [54]. Role of TRPV6 in Prostate Several authors have suggested a role for TRPV6 in cell proliferation. TRPV6 increased the proliferation rate of HEK cells in a Ca2+-dependent manner. As TRPV6 slightly enhanced the global resting [Ca2+]IC, such small changes could indeed increase the proliferation rate, suggesting a causal relationship between PCa progression and TRPV6 expression [55]. Lehen'kyi et al. showed that silencing of TRPV6 in LNCaP cells led to a decreased number of viable cells. They suggested a role for TRPV6 in LNCaP proliferation by mediating Ca2+ entry, followed by the activation of Ca2+-dependent NFAT ("nuclear factor of activated T cells," a nuclear transcription factor) signaling pathways. As such, TRPV6 increased cell survival and induced apoptosis resistance [52]. TRPC The TRPC subfamily consists of the mammalian TRP channels most closely related to Drosophila TRP. TRPC channels can be considered channels activated subsequent to stimulation of receptors that activate different isoforms of phospholipase C [20]. Expression Pattern of TRPC Channels The first TRPC channel to be described in the prostate was TRPC3; using Northern blot analysis, the expression of this gene was described in the normal prostate [56]. On the other hand, a more extensive quantitative TRP expression study in human prostate samples revealed the abundant expression of TRPC1, TRPC4, and TRPC6, whereas TRPC3, TRPC5, and TRPC7 were hardly detected [57]. In addition to the normal prostate, immunohistochemistry revealed expression of TRPC6 in BPH and, more importantly, a significant overexpression in PCa specimens. Higher pathological stages of PCa tended to have increased TRPC6 expression, but these differences were not statistically significant among pT2, pT3, and pT4 PCa [58]. Role of TRPC Channels in Prostate The functional role of TRPC channels in the prostate has been investigated in human primary prostate epithelial cell cultures, using antisense assays of TRPC1, TRPC3, TRPC4, and TRPC6. It was postulated that TRPC1 and TRPC4 are exclusively involved in ATP-stimulated, store-operated Ca2+ entry (SOCE), whereas TRPC6 is the diacylglycerol-gated channel mediating the store-independent Ca2+ influx stimulated by α1-adrenergic receptor (α1-AR) agonists. Moreover, treatment of the cultures with α1-AR agonists enhanced cell proliferation, in contrast to ATP, which had an inhibitory effect. Therefore, the authors concluded that TRPC6 is a crucial mediator of the proliferative effects of α1-AR agonists, while TRPC1 and TRPC4 are the major contributors to SOC activation in response to ATP [59]. In LNCaP cells, TRPC1 and TRPC3 were overexpressed after prolonged intracellular Ca2+ store depletion, owing to the decreased levels of [Ca2+]CYT. LNCaP cells overexpressing TRPC1 and TRPC3 showed an increased [Ca2+]IC response to α-adrenergic stimulation, but SOCE remained unaffected. Thus, expression of TRPC1 and TRPC3 is not sufficient for SOC formation [60]. TRPM2 Recently, the presence of TRPM2 was demonstrated in laser-microdissected, tumoral epithelial human prostate cells using quantitative RT-PCR. The analysis showed a high expression of TRPM2 transcripts in 75% of the malignant epithelial cells in comparison to the matched microdissected benign cells of the surgical specimens. In addition, TRPM2 RNA was detected in LNCaP and PC-3 cells.
In PC-3 cells, TRPM2 was expressed not only in the plasma membrane and the cytosol, but also in the nucleus, in contrast with benign cell lines, where nuclear expression was absent [61]. Importantly, TRPM2 is also expressed in lysosomes and may act as a lysosomal Ca2+ release channel [62]. Furthermore, siRNA knockdown of TRPM2 inhibited cell growth in PC-3 cells, but not in benign prostate cells, suggesting that TRPM2 is essential for PCa cell proliferation [61]. TRPM4 TRPM4, a nonselective cation channel activated by intracellular Ca2+, was cloned in 2001 after screening a brain, placenta, and testis cDNA library. Via Northern blot analysis, TRPM4 was also detected in prostate tissue [63]. Later it turned out that this clone was a short splice variant of the full-length TRPM4. The full-length TRPM4 was subsequently called TRPM4b, while the short splice variant was named TRPM4a. The presence of TRPM4b transcripts in prostate (adenocarcinoma) was confirmed in two different studies [64,65]. TRPV1 TRPV1, a heat-activated, nonselective cation channel, was originally identified as the receptor for capsaicin, the pungent ingredient in hot chili peppers. It is one of the most polymodal TRP channels, being activated by, among others, heat, voltage, protons, and exogenous (capsaicin, piperine) and endogenous vanilloids (anandamide and 2-arachidonoylglycerol [2-AG]) [66], and is mainly expressed in sensory neurons [20,67]. TRPV1 expression has been described at the mRNA [68] and protein [69] levels in the human prostate and in LNCaP cells [69]. TRPV1 expression is up-regulated in high-grade PCa in comparison to the normal prostate [70]. Moreover, using a competitive resiniferatoxin (RTX) binding assay (in BPH cells and LNCaP cells) and Ca2+ imaging experiments (in LNCaP cells), authors reported that TRPV1 is functionally active in BPH and LNCaP cells [69]. Regarding localization, immunohistochemistry showed a predominant expression of TRPV1 in nerves throughout the prostate [71]. It was reported that capsaicin inhibits the growth of PC-3 cells with an IC50 of 20 μM [72]. However, these apoptotic effects were not mediated by TRPV1, but resulted from a direct inhibitory effect of vanilloids on coenzyme Q, which promotes reactive oxygen species production resulting in apoptosis, and from an activation of caspase 3. In line with this hypothesis, capsazepine, a TRPV1 antagonist, could not block the capsaicin-induced inhibition of PC-3 cell growth [72]. An indirect TRPV1-mediated apoptosis pathway has also been suggested, through the increase of [Ca2+]IC upon capsaicin application, leading to the activation of Ca2+-dependent enzymes such as endonucleases, proteases, and transglutaminases, resulting in DNA injury, cytoskeleton damage, and protein alteration, respectively [73]. Thus, a clear role for TRPV1 in prostate cell proliferation has not been established. In addition to its putative role in prostate cell proliferation, a role for TRPV1 has also been suggested in chronic prostatitis [71]. TRPV2 Recently, it was shown via quantitative RT-PCR that TRPV2 expression is 12 times higher in metastatic PCa samples (originating from bone) than in localized PCa samples. TRPV2 was not expressed in LNCaP cells [77]. The authors further postulated that TRPV2 is de novo expressed during PCa tumor progression to a castration-resistant phenotype. Moreover, mice bearing tumors provoked by xenografted PC-3 cells were treated with siTRPV2.
The weight of the tumors in mice treated with siTRPV2 was significantly smaller than in mice treated with control siRNA, an effect attributed to the suppression of cell migration. In vitro, TRPV2 did not exhibit effects on cell proliferation [77]. TRPA1 TRPA1 was detected at the mRNA level in the prostate. In immunohistochemistry experiments, it was postulated that TRPA1 resides in the prostate epithelial cells, with no difference in expression between the normal and BPH prostate [81]. Furthermore, TRPA1 is expressed on cannabinoid receptor (CB) 1- and CB2-positive nerve fibers in the stroma and the epithelium, and activation of these TRPA1-expressing fibers induced relaxation of prostate smooth muscle. In BPH, the immunoreactivity of TRPA1, CB1, and CB2 was significantly reduced [82]. Since TRPA1 [82] and TRPV1 [71], which are both important actors in nociception, are expressed along nerve fibers in prostate tissue, these channels may play a role in the pathogenesis of nonbacterial prostatitis and prostatodynia, and they might be considered a new pharmacological target. DISCUSSION Although PCa is the most commonly diagnosed nonskin cancer in men, there is a major lack of predictive prognostic biomarkers that can distinguish prospectively between aggressive and indolent disease. Although some factors, such as a high Gleason score or a short PSA doubling time, predict a worse prognosis, some patients will progress even with apparently low-risk disease. Several TRP channels have been described in the prostate at the RNA level, the protein level, or the functional level, using prostate material of diverse nature: freshly frozen human prostate tissue, paraffin-embedded tissue, LNCaP cells, PC-3 cells, and primary epithelial prostate cells. In addition, several of these channels have been identified as possible prognostic biomarkers of PCa. Importantly, the results described above should be interpreted with care, taking into account that PCa research lacks a well-validated model. LNCaP cells are widely used as an androgen-dependent PCa epithelial cell model. However, this cell line was established from a metastatic supraclavicular lymph node lesion of a patient with hormone-refractory prostate adenocarcinoma. These LNCaP cells bear a mutation in the AR gene, encoding a promiscuous AR that can bind steroids other than androgens. As such, it is questionable whether these cells are truly androgen dependent [87]. The role and the functional expression of TRPM8 in LNCaP cells are still a matter of debate; according to some authors, TRPM8 is functionally expressed [34], whereas other authors concluded that there is no functional role at all for TRPM8 in LNCaP cells [36,37,38]. Another controversial issue in LNCaP cells is the androgen regulation of TRPV6. Peng et al. showed that there was a twofold increase of TRPV6 mRNA levels upon application of 1 μM bicalutamide [51], an AR antagonist, whereas Lehen'kyi et al. reported that 10 μM bicalutamide had no significant effect on TRPV6 mRNA [52]. Therefore, the general impact of all TRPM8 and TRPV6 experiments performed in LNCaP cells on the development and treatment of PCa should be considered with caution, given the diverse results, the doubtful prostatic and epithelial nature of these cells, and their contested androgenic status. In addition, in PC-3 cells, another widely used cell line in PCa research, data about TRPM8 mRNA expression are contradictory [32,34].
Similarly, controversial data exist regarding the expression of TRPC channels in the different PCa models; TRPC3, TRPC5, and TRPC7 are expressed in LNCaP cells [60], while these channels have a very low expression level in human prostate specimens [57]. In contrast, TRPC6 is present in human prostate specimens at the mRNA [57] and protein [58] levels, but not in LNCaP cells [60]. These conflicting data emphasize the definite need for better PCa models. A more appropriate model for PCa research seems to be the use of primary epithelial cell cultures of normal and cancerous prostate tissue [32,59]. Unfortunately, due to the difficulty and labor-intensiveness of this technique, primary cultures are not yet widely used, and little is known about the epithelial characteristics of these cells. The majority of all the TRP channels mentioned have been described in "human prostate tissue." As discussed before, the prostate is a mixture of a wide variety of cells, and the studies reporting TRP channel expression nearly never distinguish between epithelial and stromal cells. TRPM8, TRPV1, and TRPA1 are TRP channels with clearly defined roles in the sensory neuronal system [20]. These channels have been described in random prostate tissue [27,68,81], but it is unclear whether they are expressed in epithelial cells or in neuronal cells. The presence of a sensory nervous system in the prostate was long ignored in the urological literature and was demonstrated by McVary et al. in 1998 [13]. By injecting a tracer into the ventral prostate of a rat for retrograde labeling of the afferent nerves, they showed that a sensory innervation is present in prostate tissue; the majority of afferent nerve fibers innervating the rat prostate projected to the L5 and L6 DRG [13]. Therefore, it is possible that TRPM8, TRPV1, and TRPA1 expression in random human prostate tissue originates from afferent neurons innervating the prostate, rather than from the epithelial cells. TRP expression studies are also hampered by a lack of well-characterized and specific antibodies [88]. In the PCa literature, several studies concluded that TRPM8 was expressed in apical epithelial cells based solely on immunohistochemistry experiments using a polyclonal TRPM8 antibody without a clear positive or negative control [31]; thus, the antibody could have reacted nonspecifically. The up-regulation of TRPM8 [27] and TRPV6 [47] in PCa tissue was demonstrated using in situ hybridization studies, which is not a truly quantitative technique. One can wonder whether this technique was the most appropriate for concluding that TRPM8 and TRPV6 are up-regulated in PCa. It was further demonstrated that TRPM8 is up-regulated not only in prostate adenocarcinoma, but also in other neoplastic lesions. Since TRPM8 is also up-regulated in other cancer types, and since TRPM8 has a fairly established role in the sensory nervous system (for a review, see [30]), it is questionable whether TRPM8 really is a prostate-specific protein and thus can act as a new pharmacological target for PCa. In summary, we can conclude that several TRP channels have been identified in the human prostate using non- or semi-quantitative methods, but no TRP channel has a definite, clear role in prostate physiology or carcinogenesis. The majority of the TRP expression studies in the human prostate have used random prostate tissue, whereas the prostate itself is an extremely heterogeneous organ. All TRP expression studies of the prostate should be cell-specific and should be read with care.
Quantitative methods and, where possible, functional data are indispensable. There is a definite need for more appropriate prostate epithelial cell models, such as primary cultures of prostate epithelial cells, but the latter should be more thoroughly characterized.
Detection and Visualization of Android Malware Behavior Introduction Collecting a large amount of data issued by applications for smartphones is essential for making statistics about the applications' usage or for characterizing the applications. Characterizing applications might be useful for designing both an anomaly-detection system and/or a misuse-detection system, for instance. Nowadays, smartphones running on an Android platform represent an overwhelming majority of smartphones [1]. However, Android platforms put restrictions on applications for security reasons. These restrictions prevent us from easily collecting traces without modifying the firmware or rooting the smartphone. Since modifying the firmware or rooting the smartphone may void its warranty, this method cannot be deployed on a large scale. From the security point of view, the increase in the number of internet-connected mobile devices worldwide, along with the gradual adoption of LTE/4G, has drawn the attention of attackers seeking to exploit vulnerabilities in mobile infrastructures. Therefore, malware targeting smartphones has grown exponentially. Android malware is one of the major security issues and fastest growing threats facing the Internet in the mobile arena today. Moreover, mobile users increasingly rely on unofficial repositories in order to freely install paid applications whose protection measures are at best dubious or unknown. Some of these Android applications have been uploaded to such repositories by malevolent communities that incorporate malicious code into them. This poses strong security and privacy issues both to users and operators. Thus, further work is needed to investigate threats that are expected due to the further proliferation and connectivity of gadgets and applications for smart mobile devices. This work focuses on monitoring Android applications' suspicious behavior at runtime and visualizing their malicious functions to understand the intention behind them. We propose a platform-independent behavior monitoring infrastructure composed of four elements: (i) an Android application that guides the user in selecting, instrumenting, and monitoring the application to be examined; (ii) an embedded client that is inserted in each application to be monitored; (iii) a cloud service that collects the applications to be instrumented and also the traces related to the function calls; and (iv) a visualization component that generates behavior-related dendrograms out of the traces. A dendrogram [2] consists of many U-shaped lines connecting nodes that hold data of the Android application (e.g., the package name of the application, the Java classes, and the methods and functions invoked) in a hierarchical tree. As a matter of fact, we are interested in the functions and methods that are frequently seen in malicious code. Thus, malicious behavior can be highlighted in the dendrogram based on a predefined set of anomaly rules. An overview of the monitoring system is shown in Figure 1.
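To illustrate the kind of anomaly rules mentioned above, the following is a minimal Java sketch that flags trace entries whose invoked methods match a predefined list of API calls frequently seen in malicious code. The rule list and the trace format are purely illustrative assumptions, not the ones used by the actual infrastructure.

```java
import java.util.List;

public class AnomalyRules {
    // Illustrative rule set: fully qualified method prefixes often associated
    // with suspicious behavior (premium SMS, device identifiers, dynamic code).
    static final List<String> SUSPICIOUS_PREFIXES = List.of(
        "android.telephony.SmsManager.sendTextMessage",
        "android.telephony.TelephonyManager.getDeviceId",
        "dalvik.system.DexClassLoader",
        "java.lang.Runtime.exec");

    // Returns true if a traced call should be highlighted in the dendrogram.
    static boolean isSuspicious(String tracedCall) {
        return SUSPICIOUS_PREFIXES.stream().anyMatch(tracedCall::startsWith);
    }

    public static void main(String[] args) {
        // Hypothetical trace lines of the form "class.method".
        List<String> trace = List.of(
            "com.example.app.MainActivity.onCreate",
            "android.telephony.SmsManager.sendTextMessage",
            "java.lang.Runtime.exec");
        trace.forEach(call ->
            System.out.println((isSuspicious(call) ? "[FLAG] " : "       ") + call));
    }
}
```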
Monitoring an application at runtime is essential to understand how it interacts with the device through key components such as the provided application programming interfaces (APIs). An API specifies how some software components (routines, protocols, and tools) should act when invoked by other components. By tracing and analyzing these interactions, we are able to find out how applications behave, handle sensitive data, and interact with the operating system. In short, Android offers a set of API functions for applications to access protected resources [3]. The remainder of the paper is organized as follows. Section 2 provides the notions behind the components used in the rest of the paper. Next, the related work is discussed in Section 3. We then describe the monitoring and visualization architecture in Section 4, while we provide the details of the implementation of our system in Section 5. Later, in Section 6, we evaluate the proposed infrastructure and the obtained results by using 8 malware applications. Limitations and conclusions are presented in Sections 7 and 8, respectively. Background Web Services extend the World Wide Web infrastructure to provide the means for software to connect to other software applications [4]. RESTful Web Services are Web Services that use the principles of REpresentational State Transfer (REST) [5]. In other words, they expose resources to clients that can be accessed through the Hypertext Transfer Protocol (HTTP). Regarding the Android operating system (OS), it is divided into four main layers: applications, application framework, middleware, and Linux kernel. (i) Applications. The top layer of the architecture is where the applications are located. An Android application is composed of several components, among which are Activities and Services. Activities provide the user interface (UI) of the application and are executed one at a time, while Services are used for background processing, such as communication. (ii) Application Framework. This layer is a suite of Services that provides the environment in which Android applications run and are managed. These programs provide higher-level Services to applications in the form of Java classes. (iii) Middleware. This layer is composed of the Android runtime (RT) and the C/C++ libraries. The Android RT is, in turn, composed of the Dalvik Virtual Machine (DVM) and a set of native (core) Android functions. (Android version 4.4 introduced a new virtual machine called the Android runtime (ART). ART offers better performance than the DVM through a number of new features, such as ahead-of-time (AOT) compilation, enhanced garbage collection, improved application debugging, and more accurate high-level profiling of apps [6].) The DVM is a key part of Android, as it is the software where all applications run on Android devices. Each application executed on Android runs in a separate Linux process with an individual instance of the DVM, meaning that multiple instances of the DVM exist at the same time. This is managed by the Zygote process, which generates a fork of the parent DVM instance with the core libraries whenever it receives a request from the runtime process. (iv) Linux Kernel. The bottom layer of the architecture is where the Linux kernel is located. This provides basic system functionality, like process and memory management. The kernel also handles a set of drivers for interfacing Android and interacting with the device hardware.
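Since the embedded client reports traces to the cloud service over RESTful Web Services, a minimal sketch of such a client is given below. The endpoint URL, the JSON field names, and the trace format are hypothetical; the sketch only shows the general pattern of POSTing a trace record over HTTP with the standard HttpURLConnection class, which is available both on Android and in plain Java.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class TraceUploader {
    // Hypothetical cloud endpoint that stores trace records.
    private static final String ENDPOINT = "https://example.org/api/traces";

    // Sends one traced call as a small JSON document; returns the HTTP status.
    static int upload(String packageName, String tracedCall, long timestamp) throws Exception {
        String json = String.format(
            "{\"package\":\"%s\",\"call\":\"%s\",\"ts\":%d}",
            packageName, tracedCall, timestamp);
        HttpURLConnection conn =
            (HttpURLConnection) new URL(ENDPOINT).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(json.getBytes(StandardCharsets.UTF_8));
        }
        int status = conn.getResponseCode();
        conn.disconnect();
        return status;
    }

    public static void main(String[] args) throws Exception {
        int status = upload("com.example.app",
            "android.telephony.SmsManager.sendTextMessage",
            System.currentTimeMillis());
        System.out.println("Server replied with HTTP " + status);
    }
}
```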
In standard Java environments, Java source code is compiled into Java bytecode, which is stored within .class format files. These files are later read by the Java Virtual Machine (JVM) at runtime. On Android, on the other hand, Java source code that has been compiled into .class files is converted into .dex files, frequently called Dalvik Executables, by the "dx" tool. In brief, the .dex file stores the Dalvik bytecode to be executed on the DVM. Android applications are distributed as Android application package files (APKs, with the .apk extension), the containers of the application binary that hold the compiled .dex files and the resource files of the app. Every Android application is packed using the zip algorithm. An unpacked app has the following structure (several files and folders) [7]: (i) an AndroidManifest.xml file: it contains the settings of the application (metadata), such as the permissions required to run the application, the name of the application, and the definition of one or more components such as Activities, Services, Broadcast Receivers, or Content Providers. Upon installing, this file is read by the PackageManager, which takes care of setting up and deploying the application on the Android platform. (ii) a res folder: it contains the resources used by the application. By resources, we mean the app icon, its strings available in several languages, images, UI layouts, menus, and so forth. (iii) an assets folder: it stores noncompiled resources. This folder contains application assets, which can be retrieved by the AssetManager. (iv) a classes.dex file: it stores the classes compiled in the dex file format to be executed on the DVM. (v) a META-INF folder: this directory includes MANIFEST.MF, which contains a cryptographic signature of the application developer certificate to validate the distribution. The resulting .apk file is signed with a keystore to establish the identity of the author of the application. Besides, to build Android applications, a software development kit (SDK) is usually available, allowing access to the APIs of the OS [8]. Additionally, two more components are described in order to clarify the background of this work: the Android-apktool [9] and the Smali/Baksmali tools. The Android-apktool is generally used to unpack and disassemble Android applications; it is also used to assemble and pack them. It is a tool set for reverse engineering third-party Android apps that simplifies the process of assembling and disassembling Android binary .apk files into Smali .smali files, and of restoring the application resources to their original form. It includes the Smali/Baksmali tools, which can decode resources (i.e., .dex files) to a nearly original form of the source code and rebuild them after some modifications are made. This enables all these assembling/disassembling operations to be performed automatically in an easy yet reliable way. However, it is worth noting that repackaged Android binary .apk files can only possess the same digital signature if the original keystore is used; otherwise, the new application will have a completely different digital signature.
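Because an .apk is an ordinary zip archive, the structure described above can be inspected with nothing more than the standard java.util.zip classes. The following small, illustrative sketch (the file name is a placeholder) lists the entries of an APK and checks for the components just enumerated; note that the AndroidManifest.xml inside a built APK is in binary XML form, so the sketch only verifies its presence.

```java
import java.util.Collections;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class ApkInspector {
    public static void main(String[] args) throws Exception {
        // Placeholder path; any .apk file is simply a zip archive.
        try (ZipFile apk = new ZipFile("app.apk")) {
            boolean hasManifest = false, hasDex = false, hasSignature = false;
            for (ZipEntry entry : Collections.list(apk.entries())) {
                String name = entry.getName();
                System.out.printf("%8d  %s%n", entry.getSize(), name);
                if (name.equals("AndroidManifest.xml")) hasManifest = true;
                if (name.endsWith(".dex"))              hasDex = true;
                if (name.startsWith("META-INF/"))       hasSignature = true;
            }
            System.out.println("manifest: " + hasManifest
                + ", dex code: " + hasDex
                + ", signature block: " + hasSignature);
        }
    }
}
```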
[10], where they developed a transparent instrumentation system for automating the user interactions to study different functionalities of an app. Additionally, they introduced runtime behavior analysis of an application using input/output (I/O) system calls gathered from the monitored application within the Linux kernel. Bugiel et al. [11] propose a security framework named XManDroid that extends the monitoring mechanism of Android in order to detect and prevent application-level privilege escalation attacks at runtime based on a given policy. The principal disadvantage of this approach is that the modified Android framework has to be ported to each of the devices and Android versions on which it is intended to be deployed. Unlike [10, 11], we profile only at the user level, and therefore we do not need to root the phone or to change the Android framework if we would like to monitor, for example, the network traffic.

Other authors have proposed different security techniques regarding permissions in Android applications. For instance, Au et al. [12] present a tool to extract the permission specification from Android OS source code. Unlike the other methods, the modules named Dr. Android and Mr. Hide, which are part of an app proposed and implemented by Jeon et al. [13], do not intend to monitor any smartphones. They aim at refining the Android permissions by embedding a module inside each Android application. In other words, they can control the permissions via their module. We also embed a module inside each Android application, but it is used to monitor the Android application instead.

In the work by Zhang et al. [3], a system called VetDroid is proposed, which can be described as a systematic analysis technique using an app's permission use. By using real-world malware, they identify the callsites where the app requests sensitive resources and how the obtained permission resources are subsequently utilized by the app. To do that, VetDroid intercepts all the calls to the Android API and synchronously monitors permission check information from the Android permission enforcement system. In this way, it manages to reconstruct the malicious (permission-use) behaviors of the malicious code and to generate a more accurate permission mapping than PScout [12]. Briefly, this system [3] applies dynamic taint analysis to identify malware. Different from VetDroid, we do not need to root or jailbreak the phone, nor do we conduct the permission-use approach for monitoring the smartphone.

Malware detection (MD) techniques for smart devices can be classified according to how the code is analyzed, namely, static analysis and dynamic analysis. In the former case, there is an attempt to identify malicious code by decompiling/disassembling the application and searching for suspicious strings or blocks of code; in the latter case, the behavior of the application is analyzed using execution information. Examples of the two named categories are Dendroid [2], a static MD for Android OS devices, and Crowdroid, a system that clusters the system call frequencies of applications to detect malware [14]. Also, hybrid approaches have been proposed in the literature for detection and mitigation of Android malware. For example, Patel and Buddhadev [15] combine Android application analysis and machine learning (ML) to classify applications using static and dynamic analysis techniques. A genetic-algorithm-based ML technique is used to generate a rules-based model of the system.
A thorough survey by Jiang and Zhou [16] charts the most common types of permission violations in a large data set of malware. Furthermore, in [17], a learning-based method is proposed for the detection of malware that analyzes applications automatically. This approach combines static analysis with an explicit feature map inspired by a linear-time graph kernel to represent Android applications based on their function call graphs. Also, Arp et al. [18] combine concepts from broad static analysis (gathering as many features of an application as possible) and machine learning. These features are embedded in a joint vector space, so typical patterns indicative of malware can be automatically identified by a lightweight app installed on the smart device. Shabtai et al. [19] presented a system for mobile malware detection that takes into account the analysis of deviations in application network behavior (the app's network traffic patterns). This approach tackles the challenge of detecting an emerging type of malware with self-updating capabilities; it is based on a runtime malware detector (anomaly-detection system) and is also a standalone monitoring application for smart devices.

Considering that [17] and Arp et al. [18] utilize static methods, they suffer from the inherent limitations of static code analysis (e.g., obfuscation techniques, junk code to evade successful decompilation). In the first case, their malware detection is based upon the structural similarity of static call graphs that are processed over approximations, while our method relies upon real function calls that can be filtered later on. In the case of Drebin [18], transformation attacks that are nondetectable by static analysis, as, for example, those based on reflection and bytecode encryption, can hinder an accurate detection.

In [19] we have a detection system that continuously monitors app executions, but there is a concern about the efficiency of the detection algorithm used by this system. Unfortunately, in this case, the authors could not evaluate the impact of the Features Extractor and the aggregation processes on the mobile phone resources, due to the fact that an extended list of features was taken into account. To further enhance the system's performance, it is necessary to retain only the most effective features in such a way that the runtime malware detector yields relatively low overhead on the mobile phone resources.

Our proposed infrastructure is related to the approaches mentioned above and employs similar features for identifying malicious applications, such as permissions, network addresses, API calls, and function call graphs. However, it differs in three central aspects from previous work. First, we perform runtime malware detection (dynamic analysis) but abstain from confining detection to a protected environment, as the dynamic inspections done by VetDroid do. While such systems provide detailed information about the behavior of applications, they are technically too involved to be deployed on smartphones and detect malicious software directly. Second, our visual analysis system is based on accurate API call graphs, which enables us to inspect the app directly in an easy-to-follow manner in the cloud. Third, we are able to monitor not just the network traffic, but most of the restricted and suspicious API calls in Android. Our platform is more dynamic and simpler than the other approaches mentioned above.
General overviews of the state of security in mobile devices and approaches to deal with malware can be found in [20] and in the work by Suarez-Tangil et al. [21], as well as in recent surveys by Faruki et al. [7] and Sufatrio et al. [6]. Malware in smart devices still poses many challenges and, on different occasions, a tool for monitoring applications at a large scale might be required. Given the different versions of Android OS, and with a rising number of device firmwares, modifying each of the devices might become a nontrivial task. This is the scenario in which the infrastructure proposed in this paper best fits. The core contribution of this work is the development of a monitoring and instrumentation system that allows a visual analysis of the behavior of Android applications for any device on which an instrumented application can run. In particular, our work results in a set of dendrograms that visually render the API calls invoked by an Android malware application, obtained by dynamic inspection during a given time interval, with the suspicious ones visually highlighted. Consequently, we aim to fill the void of easy-to-follow visual security tools for Android environments in the technical literature.

Platform Architecture

When Android applications are executed, they call a set of functions that are either defined by the developer of the application or are part of the Android API. Our approach is based on monitoring a desired subset of the functions (i.e., hooked functions) called by the application and then uploading information related to their usage to a remote server. The hooked function traces are then represented in a graph structure, and a set of rules is applied to color the graphs in order to visualize functions that match known malicious behavior.

For this, we use four components: the Embedded client and the Sink on the smartphone side, and the Web Service and the Visualization component on the remote server side.

A work flow depicting the main elements of the involved system is shown in Figure 2. In Stage 1, the application under study and a set of permissions are sent to the Web Service. Next, the main processing task of Stage 2, labeled as the hooking process, is introduced. In this case, hooks or logging codes are inserted in the functions that require at least one of the permissions specified at the previous stage. The new "augmented" application will be referred to as APP' from now on. Stages 3, 4, and 5 consist of running APP', saving the traces generated by APP' in the server's database, and showing the results as visualization graphs, respectively. The aforementioned infrastructure for platform-independent monitoring of Android applications is aimed at providing behavioral analysis without modifying the Android OS or requiring root access to the smart device.
4.1. Embedded Client and Sink. The monitoring system consists of two elements: an embedded client that will be inserted into each application to be monitored and a Sink that will collect the hooked functions that have been called by the monitored applications. The embedded client simply consists of a communication module that uses the User Datagram Protocol (UDP) for forwarding the hooked functions to the Sink. Here, JavaScript Object Notation (JSON) is used when sending the data to the Sink, which allows sending dynamic data structures. In order to know the origin of a hooked function that has been received by the Sink, the corresponding monitored application adds its application hash, its package name, and its application name to the hooked function; we call the result a partial trace. The partial traces are built by the prologue functions (i.e., hook functions), which are placed just before their hooked functions and which modify the control flow of the monitored application in order to build the partial traces corresponding to their hooked functions, passing the partial traces as parameters to the embedded client. Only the partial traces are built by the monitored application, so that we add little extra overhead to the monitored application. The insertion of the embedded client and of the prologue functions in the Android application that is to be monitored is explained in Section 4.3.

The embedded client is written using the Smali syntax and is included in each of the monitored applications at the Web Service, at the same time that the function hooks are inserted, before the application is packed back into an Android binary .apk file.

The Sink, on the other hand, is implemented as an Android application, for portability, consisting of both a service and an activity; the service is started at boot time. It is responsible for receiving the partial traces issued from all the monitored applications' clients via a UDP socket, augmenting the partial traces to get a trace (i.e., adding a timestamp and the hash of the ID of the phone), storing them, and sending them over the network to the Web Service. As for the activity, it is responsible for managing the monitored applications via a UI, sending the applications to hook to the Web Service, and downloading the hooked applications from the Web Service. By hooked applications, we mean the applications in which hooks have been inserted. Once an application has been hooked, we can monitor it.

Before storing the traces in a local database, the Sink first stores them in a circular buffer which can contain up to 500 traces. The traces are flushed to the local database when any of the following conditions are met: (i) when the buffer is half full, (ii) when the Sink service is shutting down, or (iii) upon an activated timeout expiring. This bulk flushing enables the Sink to store the traces more efficiently. Unfortunately, if the service is stopped by force, we lose the traces that are present in the circular buffer. Once the traces are persisted in the local database, the timeout is rescheduled. Every hour, the Sink application tries to send the traces that remain in the local database out to the Web Service. A trace is removed locally upon receiving an acknowledgment from the Web Service. An acknowledgment is issued when the Web Service has been able to record the trace in a SQL database with success. If the client cannot connect to the Web Service, it will try again at the next round.
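To make the data path concrete, the following Java sketch shows what the forwarding step of the embedded client amounts to: serializing a partial trace as JSON and sending it to the Sink over UDP. The real client is generated in Smali; the class name, JSON field names, and the port number below are our own illustrative assumptions.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    // Sketch of the embedded client's role: forward one partial trace to
    // the Sink via UDP as a JSON object. Names and port are assumptions.
    public class EmbeddedClientSketch {
        private static final int SINK_PORT = 5000; // placeholder port

        public static void sendPartialTrace(String appHash, String packageName,
                                            String appName, String hookedFunction)
                throws Exception {
            // A partial trace carries the app hash, package name, app name,
            // and the hooked function name; the Sink later augments it with
            // a timestamp and the hash of the device ID.
            String json = String.format(
                "{\"appHash\":\"%s\",\"package\":\"%s\",\"app\":\"%s\",\"function\":\"%s\"}",
                appHash, packageName, appName, hookedFunction);
            byte[] payload = json.getBytes(StandardCharsets.UTF_8);
            try (DatagramSocket socket = new DatagramSocket()) {
                // The Sink runs on the same device, hence the loopback address.
                DatagramPacket packet = new DatagramPacket(
                    payload, payload.length,
                    InetAddress.getLoopbackAddress(), SINK_PORT);
                socket.send(packet); // fire-and-forget: UDP is connectionless
            }
        }
    }

The connectionless send is the design point here: the monitored application never blocks waiting for the Sink, which is also what keeps the responsiveness overhead low (see Section 6).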
When a user wants to monitor an application, a message with the package name as payload is sent to the Sink service, which keeps track of all the applications to monitor in a list. When a user wants to stop monitoring a given application, a message is sent to the Sink service, which removes it from its list of applications to monitor.

The Web Service. This server provides the following services to the Sink: uploading applications, downloading the modified applications, and sending the traces. The key part of the whole system, where the logic of the presented method lies, is the tool that instruments the application, a process known as "hooking," which we explain in the following. The Web Service, implemented as a Servlet on a Tomcat web application server, is a RESTful Web Service which exposes services to clients (e.g., Android smartphones) via resources. The Web Service exposes three resources, which are three code pages enabling the Sink to upload an application to hook, download a hooked application, and send traces. The hooking process is explained in more detail in Section 4.3.1.

The file upload service allows the Sink to send the target application to monitor and triggers the command to insert all the required hooks and the embedded client into the application. Also, it is in charge of storing the submitted Android binary .apk file on the server and receiving a list of permissions. This set of permissions limits the number of hooks to monitor, hooking only the API function calls linked to these permissions. Conversely, the file download service allows the Sink to download the previously sent application, which is now prepared to be monitored. A ticket system is utilized in order to keep track of the current application under monitoring. The trace upstream service allows the Sink to upload the traces stored on the device to the server database and remove the traces from the device's local SQLite database. Upon receiving traces, the Web Service records them in a SQL database and sends an acknowledgment back to the Sink. In case of failure on the server side or in the communication channel, the trace is kept locally in the SQLite database until the trace is stored in the server and an acknowledgment is received by the Sink. In both cases, it might occur that the trace has just been inserted in the SQL database and no answer is sent back. Then the Sink would send the same trace again and we would get a duplication of traces. However, the primary key mechanism implemented in the SQL database prevents the duplication of traces. A primary key is composed of one or more data attributes whose combination of values must be unique for each data entry in the database. When two traces contain the same primary key, only one trace is inserted, while the insertion of the other one throws an exception. When such an exception is thrown, the Web Service sends back an acknowledgment to the Sink so as to avoid the Sink resending the same trace (i.e., forcing the Sink to remove from its local database the trace that has already been received by the Web Service).
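The deduplication logic just described can be sketched as follows. The table layout and column names are assumptions of ours, chosen to mirror the trace fields named in the text; the JDBC calls are standard, although some drivers (SQLite in particular) may surface the duplicate as a plain SQLException naming the violated constraint rather than the specific subclass caught here.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLIntegrityConstraintViolationException;

    // Sketch of the Web Service's trace-recording step. A composite primary
    // key makes a retransmitted trace collide on insertion; the duplicate is
    // dropped but still acknowledged, so the Sink stops resending it.
    // Table and column names are illustrative, not taken from the paper.
    public class TraceStore {
        // e.g. CREATE TABLE traces (device_hash TEXT, app_hash TEXT,
        //        function TEXT, ts BIGINT,
        //        PRIMARY KEY (device_hash, app_hash, function, ts));
        public boolean recordTrace(Connection db, String deviceHash,
                                   String appHash, String function, long ts) {
            String sql = "INSERT INTO traces (device_hash, app_hash, function, ts)"
                       + " VALUES (?, ?, ?, ?)";
            try (PreparedStatement st = db.prepareStatement(sql)) {
                st.setString(1, deviceHash);
                st.setString(2, appHash);
                st.setString(3, function);
                st.setLong(4, ts);
                st.executeUpdate();
                return true; // stored: acknowledge to the Sink
            } catch (SQLIntegrityConstraintViolationException dup) {
                // Same primary key: the trace was already recorded.
                // Acknowledge anyway so the Sink removes its local copy.
                return true;
            } catch (Exception e) {
                return false; // no acknowledgment: the Sink will retry later
            }
        }
    }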
Instrumenting an Application. In this section, we first describe the process of inserting hooks into an Android application and then we show an example of a hook implementation. A tutorial on instrumentation of Android applications is presented by Arzt et al. in [22].

However, before proceeding with the insertion of instrumentation code into the decompiled APK below, we would like to clarify the effect of disassembling the uploaded applications, that is, the differences between the original code and the code generated after instrumentation. Briefly, the disassembling of the uploaded application is performed by using the Smali/Baksmali tool, which is an assembler/disassembler, respectively, for the dex format (https://source.android.com/devices/tech/dalvik/dex-format.html). This is the format used by Dalvik, one of Android's JVM implementations. Thus, the disassembling is able to recover an assembler-like representation of the original Java code. This representation is not the original Java source code (Baksmali is a disassembler, not a decompiler, after all). However, Baksmali creates a representation that is both an exact replica of the original binary code behavior and high-level enough to be manipulated in an easy way. This is why we can add additional instructions to instrument the original code for our purposes and then reassemble it back to a dex file that can be executed by Android's JVM. On the other hand, as discussed in [22], instrumentation of applications outperforms static analysis approaches, as instrumentation code runs as part of the target app, having full access to the runtime state. This explains the rationale behind introducing hooks in order to trace core sensitive or restricted API functions used at runtime by the apps. In other words, the Smali code reveals the main restricted APIs utilized by the apps under test, even in the presence of source code obfuscation. We can therefore resort to monitoring these restricted APIs and keep tracking those Android suspicious programs' behavior.

Step (iii) can be subdivided into several substeps:

(1) adding the Internet permission in the AndroidManifest to enable the embedded client inserted in the application to hook to communicate with the Sink via UDP sockets,

(2) parsing the code files and adding invocation instructions to the prologue functions before their corresponding hooked functions: when the monitored application is running, before calling the hooked function, its corresponding prologue function will be called and will build its corresponding partial trace. The list of desired functions to hook is provided by the administrator of the Web Service. For instance, if the administrator is interested in knowing the application's usage, he will hook the functions that are called by the application when starting and when closing,

(3) adding a class that defines the prologue functions: it is worth noting that there will be as many prologue functions as functions to hook. Each prologue function builds its partial trace. Since we do not log the arguments of the hooked functions, the partial traces that are issued by the same monitored application will only differ by the name of the hooked function. It is also worth noting that the prologue functions are generated automatically.

Since every Android application must be signed with a certificate to be installed on the Android platform, we use the same certificate to check whether the hooked application comes from our Web Service. For this, the certificate used in the Web Service has been embedded in the Sink application. This prevents attackers from injecting malicious applications by using a man-in-the-middle attack between the smartphone and the Web Service.
Hook Example. Consider a case where the function sendTextMessage, used to send short messages (SMS) on the Android platform, is to be logged in a monitored application. This function is called in the main activity class of the application corresponding to the code in Listing 1. As for the class shown in Listing 2, it defines the prologue functions and the function responsible for passing the partial traces, built by the prologue functions, to the embedded client. For space reasons, we do not show the embedded client.

In the main activity class corresponding to the class shown in Listing 1, the function sendTextMessage is called at line (4), with its prologue function log_sendTextMessage placed just before it at line (3). Since the hooked function may modify common registers used for storing the parameters of the hooked function and for returning objects, we have preferred placing the prologue functions before their hooked functions. The register v1 is the object of the class SmsManager needed to call the hooked function. As for the registers v2 to v6, they are used for storing the parameters of the hooked function. Since our prologue functions are declared as static, we can call them without instantiating their class (Listing 2), and therefore we do not need to use the register v1.

An example of the monitor log class is shown in Listing 2. The name of the class is declared at line (1). At lines (3) and (10), two functions are defined, namely, log_sendTextMessage and sendLog. The former function, the prologue function of the hooked function sendTextMessage, defines a constant string object containing the partial trace at line (5) and puts it into the register v0. Then the function sendLog is called at line (6) with the partial trace as parameter. The latter function saves the partial trace contained in the parameter p0 into the register v0 at line (13). At lines (15) and (16), two new instances are created, respectively: a new thread and a new instance of the class EmbeddedClient. These instances are initialized at lines (17) and (18), respectively. Finally, the thread is started at line (19) and the partial trace is sent to the Sink. It is worth noting that, in these two examples, we have omitted some elements of the code, which are replaced by dots to facilitate the reading of the code.
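Since the Smali listings themselves are not reproduced here, the following Java sketch shows, under our own naming assumptions, the logic that Listing 2 is described as implementing: a static prologue function that builds the constant partial trace and a sendLog helper that hands it off on a fresh thread. In the real system the hand-off goes to the embedded client (cf. the EmbeddedClientSketch above); here it is replaced by a print so the sketch stays self-contained.

    // Java-level sketch of the monitor log class described for Listing 2.
    // The generated code is actually Smali; names are kept close to the
    // text (log_sendTextMessage, sendLog) but are illustrative.
    public class MonitorLog {

        // Prologue (hook) function inserted just before sendTextMessage.
        // It is static, so the instrumented call site needs no extra object.
        public static void log_sendTextMessage() {
            // Arguments of the hooked function are not logged, so the
            // partial trace is a constant string per hooked function.
            String partialTrace = "sendTextMessage";
            sendLog(partialTrace);
        }

        // Hands the partial trace off on a new thread, so the monitored
        // application's own control flow is barely delayed.
        public static void sendLog(final String partialTrace) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    // Stand-in for instantiating the embedded client,
                    // which forwards the trace to the Sink over UDP.
                    System.out.println("-> Sink: " + partialTrace);
                }
            }).start();
        }
    }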
Visualization. The visualization of anomalous behavior is the last component of the proposed architecture. In order to perform a visual analysis of the applications' behavior in a simplified way, a D3.js graph was used (D3, for Data-Driven Documents, is a JavaScript library available at http://d3.org/). D3 is an interactive, browser-based data visualization library used to build anything from simple bar charts to complex infographics. In this case, it stores and deploys graph-oriented data on a tree-like structure named a dendrogram, using conventional database tables. Generally speaking, a graph visualization is a representation of a set of nodes and the relationships between them shown by links (vertices and edges, resp.).

This way, we are able to represent each analyzed application's behavior with a simple yet illustrative representation. In general, the graphs are drawn according to the schema depicted in Figure 3. The first, left-hand (root) node, "Application," contains the package name of the application, which is unique to each of the existing applications. The second, middle node (parent), "Class," represents the name of the Android component that has called the API call. The third node, "Function" (the right-hand or child node), represents the names of functions and methods invoked by the application. It is worth noting that each application can include several classes and each class can call various functions or methods. In other words, function calls are located in the right-hand side of the dendrogram. For each node at this depth we look for known suspicious functions derived from a set of predefined rules, as described below.

4.4.1. Rules "Generation". The rules aim to highlight restricted API calls, which allow access to sensitive data or resources of the smartphone and are frequently found in malware samples. These could be derived from static analysis, where the classes.dex file is converted to Smali format, as mentioned before, to get information concerning functions and methods invoked by the application under test. On the other hand, it is well known that many types of malicious behaviors can be observed during runtime only. For this reason we utilize dynamic analysis; that is, Android applications are executed on the proposed infrastructure (see Figure 2) and we interact with them. As a matter of fact, we are only interested in observing the Java-based calls, which mainly correspond to the runtime activities of the applications. This includes data accessed by the application, the location of the user, data written to files, phone calls, sending SMS/MMS, and data sent to or received from the networks.

For the case that an application requires user interactions, we resort to doing that manually so far. Alternatively, for this purpose one can use the MonkeyRunner toolkit, which is available in the Android SDK.

In [18, 23], the authors list API function calls that grant access to restricted data or sensitive resources of the smartphone and are very often seen in malicious code. We base our detection rules on those suspicious API calls. In particular, we use the following types of suspicious APIs: (vi) API calls frequently used for obfuscation and loading of code, such as DexClassLoader.loadClass() and Cipher.getInstance().

Here the rule module uses the above-mentioned API calls to classify the functions and methods invoked at the runtime of the applications into three classes, that is, Benign, Adware, or Malware. In this way, we can generate IF-THEN rules (cf. rules-based expert systems). Next we show example rules that describe suspicious behavior; some of the rules generated by us are similar to or resemble the ones in [24], namely,

(1) a rule that fires when the examined app is not allowed to get the location of the smart device user: IF Not (ACCESS_FINE_LOCATION) AND CALL getLastKnownLocation THEN Malware,

(2) another rule which might detect that the application is trying to access sensitive data of the smartphone without permission: IF Not ( ) AND CALL getImei THEN Malware.
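As a minimal sketch of how such IF-THEN rules can be evaluated against the collected traces, consider the following Java fragment, which encodes rule (1) above. The data structures and class names are our own assumptions; the paper does not specify the rule module's implementation.

    import java.util.Set;

    // Minimal sketch of evaluating rule (1): an app that calls
    // getLastKnownLocation without holding ACCESS_FINE_LOCATION is
    // flagged as Malware. Data structures are illustrative.
    public class RuleEngine {
        public enum Verdict { BENIGN, ADWARE, MALWARE }

        public Verdict applyLocationRule(Set<String> grantedPermissions,
                                         Set<String> invokedFunctions) {
            boolean hasPermission = grantedPermissions
                .contains("android.permission.ACCESS_FINE_LOCATION");
            boolean callsLocation =
                invokedFunctions.contains("getLastKnownLocation");
            // IF Not(ACCESS_FINE_LOCATION) AND CALL getLastKnownLocation
            // THEN Malware
            if (!hasPermission && callsLocation) {
                return Verdict.MALWARE;
            }
            return Verdict.BENIGN;
        }
    }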
Our approach selects from the database those executed functions that match the suspicious functions described in the rules. The package name and class name of such a function are colored according to the "semaphoric" labeling described in Section 6.1.

To illustrate the basic idea we choose a malware sample, known as FakePlayer, in order to draw its graph. Thus, by means of running the filtering and visualization operations we end up with the graph of the malware, shown in Figure 4. The system allows adding new rules in order to select and color more families of suspicious functions.

Testbed and Experimentation

Before introducing the reader to the results of using the monitoring and visualization platform, we need to explain the testbed. We first describe the experiment setup; then we follow the steps of running the client-side Sink.

Client-Side Monitoring. The activities in Figure 5(a) display all the applications installed on the device that did not come preinstalled, from which the user selects a target application to monitor. Once an application is selected, the next step is to choose which permission or permissions the user wants to monitor. This can be observed in the third snapshot (white background) of Figure 5(c). Following the permissions clearance, the interface guides the user along several activities, starting with the uploading of the selected application, which is sent to the Web Service where the hooks are inserted. After this hooking process has finished, the modified application is downloaded from the Web Service. Afterwards, the original application is uninstalled and replaced by the modified application. Finally, a toggle allows the user to start and stop monitoring of the application at any time.

We focus on the functions of the Android API that require at least one permission. This allows the user to select from the Sink those permissions that are to be monitored for each application, and allows understanding how and when these applications use the restricted API functions. The PScout [12] tool was used to obtain the list of functions in the "API permission map." The permission map obtained (Android version 4.2, API level 17) contains over thirty thousand unique function calls and around seventy-five different permissions. Besides, it is worth mentioning here that we refer to functions associated with a sensitive API, as well as with sensitive data stored on the device and privacy-sensitive built-in sensors (GPS, camera, etc.), as "restricted API functions." The first group is any function that might generate a "cost" for the user or the network. These APIs include, among others [8], Telephony, SMS/MMS, Network/Data, In-App Billing, and NFC (Near Field Communication) Access. Thus, by using the API map contained in the server's database, we are able to create a list of restricted ("suspicious") API functions.

The trace managing part is a service that runs in the background with no interface and is in charge of collecting the traces sent from the individual embedded clients located in each of the monitored applications. It adds a timestamp and the hash of the device ID and stores them in a common circular buffer. Finally, the traces are stored in bulk in a common local SQLite database and are periodically sent to the Web Service and deleted from the local database.
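Relating this back to the permission map just described, the selection of hooks can be pictured as a simple filtering step: the union of the map entries for the user-selected permissions yields the functions to instrument. The map layout below is our own simplification of the PScout output.

    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Sketch of deriving the list of functions to hook: the permission map
    // relates each permission to the API functions guarded by it, so the
    // hook list is the union of the entries for the selected permissions.
    public class HookSelector {
        public Set<String> functionsToHook(Map<String, Set<String>> permissionMap,
                                           Set<String> selectedPermissions) {
            Set<String> hooks = new HashSet<>();
            for (String permission : selectedPermissions) {
                Set<String> guarded = permissionMap.get(permission);
                if (guarded != null) {
                    hooks.addAll(guarded); // hook only calls tied to selection
                }
            }
            return hooks;
        }
    }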
In summary, the required steps to successfully run an Android modified instrumented application are listed in Figure 5(d) and comprise the following.

Step 1 (select permissions). Set up and run the platform. Choose an application APP to be monitored on the device. Select the permission list.

Step 2 (upload the application (APK)). When this command is launched to upload the application to the Web Service, the hooking process is triggered.

Step 3 (download modified application). This starts the downloading of the hooked application.

Step 4 (delete original application). This command starts the uninstallation process of the original application.

Step 5 (install modified application). This command starts the installation process of the modified application using Android's default application installation window.

Step 6 (start monitoring). Finally, a toggle is enabled and can be activated or deactivated to start or stop monitoring that application, as chosen by the user.

Results

To evaluate our framework, in this section we show the visualization results for several different applications, both benign and malicious. Then we proceed to evaluate the Sink application in terms of CPU utilization and the ratio of partial traces received. Finally, we estimate the CPU utilization of a monitored application and its responsiveness.

Visual Analysis of the Traces. As mentioned before, a set of predefined rules allows us to identify the suspicious API functions and, depending on their parameters (e.g., the application attempts to send SMS to a short code that uses premium services), we assign colors to them. This enables us to quickly identify the functions and associate them with related items. On top of that, applying the color classification to each node of the graph associated with a function, in accordance with the color code (gray, orange, and red) explained below, allows a "visual map" to be partially constructed. Furthermore, this graph is suitable to guide the analyst during the examination of a sample classified as dangerous because, for example, the red shading of nodes indicates malicious structures identified by the monitoring infrastructure.

In particular, to give a flavor of this analysis, the dendrogram of FakePlayer in Figure 4 provides the user with an indication of the security status of the malware. Different colors indicate the level of alarm associated with the currently analyzed application:

(i) Gray indicates that no malicious activity has been detected, as of yet.

(ii) Orange indicates that no malicious behavior has been detected in the graph, although some Adware may be present.

(iii) Red indicates in the graph that a particular application has been diagnosed as anomalous, meaning that it contained one or more "dangerous functions" described in our blacklist. Moreover, it could imply the presence of suspicious API calls such as sendTextMessage with forbidden parameters, or the use of restricted API calls for which the required permissions have not been requested (root exploit).

So, it is possible to conduct a visual analysis of the permissions and function calls invoked per application, where this kind of "semaphoric labeling" allows us to identify easily the benign (gray and orange) applications. For instance, in Figure 4 there is a presence of malware, and the nodes are painted in red.
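The semaphoric labeling can be sketched as a straightforward mapping from rule verdicts to node colors; the enum below is our own shorthand for what the visualization encodes, reusing the Verdict type from the RuleEngine sketch above.

    // Sketch of the "semaphoric" color code applied to dendrogram nodes.
    // The mapping mirrors the three alarm levels described in the text.
    public class NodeColoring {
        public enum Color { GRAY, ORANGE, RED }

        public Color colorFor(RuleEngine.Verdict verdict) {
            switch (verdict) {
                case MALWARE: return Color.RED;    // dangerous function matched
                case ADWARE:  return Color.ORANGE; // adware suspected only
                default:      return Color.GRAY;   // nothing malicious yet
            }
        }
    }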
The dendrogram shown for FakePlayer confirms its sneaky functionality: it forwards all the SMS sent from the device to a previously set phone number while remaining unnoticed. For the sake of simplicity, we reduce the API function call sendTextMessage(phoneNo, null, SMS_Content, null, null) to sendTextMessage(phoneNo, SMS_Content). It uses the API functions to send four (see Figure 4) premium SMS messages with digit codes in them in a matter of milliseconds. Of course, sending an SMS message does not have to be malicious per se. However, if, for example, this API is used with numbers less than 9 digits in length beginning with a "7", combined with SMS messages, this is considered a costly premium-rate service, and a malware that sends SMS messages without the user's consent. The malware evaluated sends SMS messages that contain the following strings: 846976, 846977, 846978, and 846979. The message may be sent to a premium SMS short code number, "7132," which may charge the user without his/her knowledge. This implies financial charges. Usually, when this malware is installed, a malicious Broadcast Receiver is enrolled directly to pass broadcast messages from the malicious server to the malware, so that the user cannot tell whether specific messages are delivered or not. This is because the priority of the malicious Broadcast Receiver is higher than that of the SMS Broadcast Receiver. Once the malware is started, through the function call sendTextMessage of the SMS Manager API on the service layer, a message with a premium number is sent, which is shown in Figure 4.
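The premium-rate heuristic just described (a short destination number beginning with "7") can be sketched as follows; the thresholds are taken from the example in the text, and the class and method names are ours.

    // Sketch of the premium-rate SMS heuristic described for FakePlayer:
    // a destination number shorter than 9 digits that begins with "7"
    // (e.g., the short code 7132) is treated as a paid premium service.
    public class PremiumSmsRule {
        public boolean isSuspiciousDestination(String phoneNo) {
            if (phoneNo == null || phoneNo.isEmpty()) {
                return false;
            }
            boolean shortNumber = phoneNo.length() < 9;
            boolean startsWithSeven = phoneNo.startsWith("7");
            return shortNumber && startsWithSeven;
        }
    }

    // Example: isSuspiciousDestination("7132") == true, so a trace of
    // sendTextMessage("7132", "846976") would be colored red.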
Interactive Dendrograms. In general, it is necessary to conduct the visual analysis from different perspectives. To do that we have developed an interactive graph visualization. So, we have four options or features in the D3 visualization of the application to monitor, namely, (a) selection of full features of the application (Goodware checkbox, Adware checkbox, and Malware checkbox), (b) the Goodware checkbox indicating that the app is assumed to be Goodware, (c) the Adware checkbox of the application, and (d) the Malware checkbox to look for malicious code. The analyst can choose to observe a particular Java class or function by typing its name inside the search box and clicking on the related search button.

Figures 6 and 7 illustrate a big picture of the whole behavioral performance of the malware DroidKungFu1, whose package name is com.nineiworks.wordsXGN, and of the malicious function calls invoked. For the sake of simplicity, we shorten the package name of DroidKungFu1 to wordsXGN in the dendrogram; as a matter of fact, we apply a similar shortening elsewhere. Here, malicious code and malware are interchangeable terms. The possible outcomes are Goodware or Malware. Nevertheless, the proposed infrastructure might be capable of evaluating a third option, Adware, in a few cases. In this paper we do not describe the IF-THEN rules for the third kind of outcome, and we restrain the possible outcomes to the two mentioned options.

We have used 7 rules in our experimentation; the most frequently used ones are listed in Table 1 (second column; note that rules 1 and 2 are the same). They mainly cover cases of user information leakage.

6.3. Client-Side CPU Use Analysis. We define the CPU utilization of a given application as the ratio between the time when the processor was in busy mode only for this given application, at both the user and kernel levels, and the time when the processor was either in busy or idle mode. The CPU times have been taken from the Linux kernel through the files /proc/stat and /proc/pid/stat, where pid is the process id of the given application. We have chosen to sample the CPU utilization every second.

The CPU utilization of the Sink application has been measured in order to evaluate the cost of receiving the partial traces from the diverse monitored applications, processing them, and recording them in the SQLite database, while varying the time interval between two consecutive partial traces sent. We expect to see that the CPU utilization of the Sink increases as the time interval between two consecutive partial traces sent decreases. Indeed, since the Sink must process more partial traces, it needs more CPU resources. This is confirmed by the curve in Figure 9. The CPU utilization tends towards 30% when the time interval between two consecutive partial traces received tends to 10 ms, because the synthetic application takes almost 30% of the CPU for building and sending partial traces, and the rest of the applications utilize the rest of the CPU resources. When no monitored applications send partial traces to the Sink and the Sink is running in the background (i.e., its activity is not displayed on the screen), it consumes about 0% of the CPU.

The CPU utilization of a synthetic application has also been measured in order to evaluate the cost of building and sending the partial traces to the Sink while the time interval between two consecutive partial traces sent was varied. We expect to see a higher CPU utilization when the application is monitored. Indeed, since the synthetic application must build and send more partial traces, it needs more CPU resources. This is confirmed by Figure 10. We note that the increase in CPU utilization of the application can be up to 28% when it is monitored. The chart shows an increase in application CPU utilization to a level up to 38%, which is justified when the monitoring is fine-grained at 10 ms. However, this high frequency is not likely to be needed in real applications.

Responsiveness. We define an application as responsive if its response time to an event is short enough. An event can be a button pushed by a user. In other words, the application is responsive if the user does not notice any latency while the application is running. In order to quantify the responsiveness and see the impact of the monitoring on the responsiveness of monitored applications, we have measured the time spent executing the prologue function of the synthetic application. We have evaluated the responsiveness of the monitored application when the Sink was saturated by partial trace requests, that is, in its worst case. The measured response time was on average less than 1 ms, so the user does not notice any difference whether the application is monitored or not, even when the Sink application is saturated by partial traces. This is explained by the fact that UDP is connectionless and therefore the client sends the partial traces directly to the UDP socket of the Sink without waiting for any acknowledgments.
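A sketch of the per-process CPU sampling described in Section 6.3 is given below. It reads the aggregate jiffies from /proc/stat and the process's user+kernel jiffies from /proc/<pid>/stat, then computes the utilization between two samples. The field offsets follow the standard proc(5) layout; the class name is ours, and for simplicity the parsing assumes the process name (field 2 of /proc/<pid>/stat) contains no spaces.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    // Sketch of sampling an app's CPU utilization from the Linux proc
    // files: utilization = (busy time of the process at user+kernel level)
    // / (total busy+idle time), taken between two samples.
    public class CpuSampler {
        // Sum of all jiffies on the first line of /proc/stat ("cpu ...").
        static long totalJiffies() throws IOException {
            try (BufferedReader r = new BufferedReader(new FileReader("/proc/stat"))) {
                String[] f = r.readLine().trim().split("\\s+");
                long sum = 0;
                for (int i = 1; i < f.length; i++) sum += Long.parseLong(f[i]);
                return sum;
            }
        }

        // utime (field 14) + stime (field 15) of /proc/<pid>/stat,
        // assuming the process name contains no spaces.
        static long processJiffies(int pid) throws IOException {
            try (BufferedReader r = new BufferedReader(
                     new FileReader("/proc/" + pid + "/stat"))) {
                String[] f = r.readLine().split("\\s+");
                return Long.parseLong(f[13]) + Long.parseLong(f[14]);
            }
        }

        // Percentage of CPU used by pid over a one-second window,
        // matching the paper's one-second sampling interval.
        public static double sample(int pid) throws Exception {
            long t0 = totalJiffies(), p0 = processJiffies(pid);
            Thread.sleep(1000);
            long t1 = totalJiffies(), p1 = processJiffies(pid);
            return 100.0 * (p1 - p0) / (t1 - t0);
        }
    }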
Limitations

So far we have illustrated the possibilities of our visual analysis framework by analyzing 8 existing malicious applications. We successfully identified different types of malware according to the malicious payload (e.g., privilege escalation, financial charges, and personal information stealing) of the app, while using only dynamic inspection in order to obtain the outcomes. Even though the results are promising, they only represent a few of the massive number of malware samples attacking today's smart devices. Of course, the aim of this system is not to replace existing automated Android malware classification systems, because the final decision is made by a security analyst.

Although we propose here a malware detector system based on runtime behavior, it does not have the detection capabilities to monitor an application's execution in real time, so this platform cannot detect intrusions while running. It only enables detecting past attacks.

Also, one can see that malware authors could try to avoid detection, since they can gain knowledge of whether their app has been tampered with or not. As a result, the actual attack might not be deployed, which may be considered a preventive technique. Moreover, it is possible for a malicious application to evade detection by dynamically loading and executing Dalvik bytecode at runtime.

One of the drawbacks of this work could be the manual interactions with the monitored application during runtime (over some time interval). Also, the classification needs a more general procedure to obtain the rule-based expert system. The natural next step is to automate these parts of the process. For example, in the literature there are several approaches that can be implemented in order to automatically generate more IF-THEN rules [15], or one can resort to the MonkeyRunner kit available in the Android SDK to simulate the user interactions. Of course, the outcomes of the 8-sample malware study presented here are limited to the longest time interval used in the study, which was 10 minutes. Extending this "playing" time with the app, using tools for the automation of user interactions, could provide a more realistic graph and better pinpoint the attacks of the mobile malware.

Another limitation of this work is that it can only intercept Java-level calls and not low-level functions that can be stored as libraries in the applications. Thus, a malicious app can invoke native code through the Java Native Interface (JNI) to deploy attacks on the Android ecosystem. Since our approach builds on monitoring devices that are not rooted, this attack vector is out of the scope of our research.

It is worth mentioning that our API hooking process does not consider Intents. The current version of the infrastructure presented in this paper is not capable of monitoring the Intents sent by the application, as sending Intents does not require any kind of permission. Not being able to monitor Intents means that the infrastructure is not able to track whether the monitored application starts another app for a short period of time to perform a given task, for instance, opening a web browser to display the end-user license agreement (EULA). Adding this feature would also allow knowing how the target application communicates with the rest of the third-party and system applications installed on the device.

Ultimately, this framework could be useful for final users interested in what apps are doing on their devices.
Conclusions

We provide a monitoring architecture aimed at identifying harmful Android applications without modifying the Android firmware. It provides a visualization graph, named a dendrogram, in which function calls corresponding to predefined malware behaviors are highlighted. Composed of four components, namely, the embedded client, the Sink, the Web Service, and the visualization, it allows any Android application to be monitored without rooting the phone or changing its firmware.

The developed infrastructure is capable of monitoring several applications simultaneously on various devices and collecting all the traces in the same place. The tests performed in this work show that applications can be prepared to be monitored in a matter of minutes and that the modified applications behave as they were originally intended to, with minimal interference with the permissions they use. Furthermore, we have shown that the infrastructure can be used to detect malicious behaviors by applications, such as the monitored FakePlayer, DroidKungFu1, DroidKungFu4, and SMSReplicator, and many others taken from the dataset of the Android Malware Genome Project.

Evaluations of the Sink have revealed that our monitoring system is quite reactive, does not lose any partial traces, and has a very small impact on the performance of the monitored applications.

A major benefit of the approach is that the system is designed to be platform-independent, so that smart devices with different versions of Android OS can use it. Further improvements on the visualization quality and the user interface are possible, but the proof-of-concept implementation has been demonstrated to be promising. For future work, we plan to extend the current work in order to develop a real-time malware detection infrastructure based on network traffic and on a large number of apps.

Figure 1: Overview of the monitoring system.
Figure 2: Schematics and logical stages of the system.
Listing 1: Main activity class.
Figure 4: The simplified dendrogram of the malware FakePlayer, generated using D3. At the upper left corner of the figure there is a combobox to select the monitored malware (here, for simplicity, we use a shortened version of the package name of the app, i.e., androidapplication1). Lining up to the right of the combobox, there are three activated checkboxes, labeled Goodware in blue, Adware in orange, and Malware in red. At the upper right corner of the figure, there is a search button that allows us to look for classes or functions. The complete package name of the malware FakePlayer is org.me.androidapplication1.MoviePlayer.
5.1. Experiment Set Up. All the experiments have been realized on a Samsung Nexus S with Android Ice Cream Sandwich.
Figure 5: User interface of the Sink. (a) Choosing the application, (b) selecting the menu for permissions, (c) electing the permissions, and (d) steps of the monitoring process.
Figure 8: Dendrograms of the tested application. (a) Graph of DroidKungFu4 in full features, and (b) graph of the malicious functions invoked by DroidKungFu4. The full package name of DroidKungFu4 is com.evilsunflower.reader.evilXindong13.
Figure 9: CPU utilization of the Sink.
Figure 10: Difference of CPU utilization between an application monitored and nonmonitored.
2017-02-17T08:44:35.884Z
2016-04-01T00:00:00.000
{ "year": 2016, "sha1": "5f73b2adba51e4861b71fbb5b735830d10443fb3", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/jece/2016/8034967.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "39947dfc303a7a8c459c2743371483cc3a9f8546", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
209264790
pes2o/s2orc
v3-fos-license
A study on awareness about temporary contraceptive methods among women in reproductive age group

India was the first country in the world to launch a family planning programme in 1952, with the objective of "reducing birth rate and to stabilise the population". Gradually, the focus of the programme moved away from population control to population stabilization, and was then integrated with the maternal and child health programme. Family planning became an important tool to reduce maternal and child mortality. Family planning is defined as a conscious decision by individuals or couples to choose for themselves when to start having children, how many children to have, how to space them, or when to stop having children.

INTRODUCTION

India was the first country in the world to launch a family planning programme in 1952, with the objective of "reducing birth rate and to stabilise the population". Gradually, the focus of the programme moved away from population control to population stabilization, and was then integrated with the maternal and child health programme. Family planning became an important tool to reduce maternal and child mortality. [1] Family planning is defined as a conscious decision by individuals or couples to choose for themselves when to start having children, how many children to have, how to space them, or when to stop having children.

In 2015, India reported 15.6 million abortions, at a rate of 47.0 abortions per 1000 women aged between 15 and 49 years, which is one third of total pregnancies. The high rate of abortion follows a high number of unintended pregnancies. The rate of unintended pregnancies was 70.1 per 1000 women aged 15-49 years, almost half the pregnancies that were reported during that period. [2]

A woman's ability to choose when to become pregnant has a direct impact on her health and well-being. Closely spaced and ill-timed pregnancies/childbirths contribute to some of the world's highest infant mortality rates. Both mother and infant also have a greater risk of death and poor health.

This study focuses on the awareness about temporary contraceptive methods among women in the reproductive age group (15-45 years). The temporary contraceptive methods may be broadly classified as below:

Barrier methods

These methods prevent sperm from entering the uterus. Barrier methods are removable, easy to use, and have few side effects. They include methods such as condoms and diaphragms.

Hormonal methods

Hormonal methods cause changes in the woman's reproductive cycle and include birth control pills, birth control patches, the emergency contraception pill, implants, and so on. Unlike barrier methods, hormonal methods do not interfere with sex.

Intrauterine methods

An intrauterine device or IUD is put in the woman's uterus. There are two types of IUD: the copper IUD or an IUD with hormones implanted on it.

The United Nations has estimated that the world population grew at an annual rate of 1.23 percent during 2000-2010. With a definite slowing down of population growth in China, it is now estimated that by 2030, India will most likely overtake China to become the most populous country on the earth, with 17.9 percent of the world's population living here. [3] According to the 2011 census, the total population in Tamil Nadu is 7,21,47,030, with a growth rate of 15.6% (2001-2011). Despite the increase in usage of contraceptive devices, there still exists a deficit in the awareness of contraceptive use within the community. [4]
The need to study the awareness of temporary contraceptives among women is important to avoid abortion and MTP and to reduce maternal mortality. Through this study, individuals have also acquired knowledge about temporary contraceptive methods and the places from where they can get the hormonal pill, condoms, or IUCD. Any misconception regarding contraceptive methods was cleared.

The purpose of this study is to:

• Assess the awareness of temporary contraceptive methods among women within the reproductive age group of 15 to 45 years in the community
• Explore women's understanding and interpretations of contraceptives
• Suggest ways to improve birth control and enhance maternal and child health programmes.

METHODS

Study design: cross-sectional descriptive study (samples were drawn from the relevant population and studied once). Data were collected with a pre-tested questionnaire by the interview method.

Procedure

Ethical clearance: the process of acquiring relevant data for the study began after its approval by the IRB, SMCH.

Informed consent from participants: female patients within the reproductive age group of 15-45 years who had given informed consent to participate constitute the sample for this study. The interview method using a pretested questionnaire was the method used in the acquisition of relevant data.

Questionnaire: the questionnaire, which was validated by the Obstetrics and Gynaecology Department, has 14 questions to assess the awareness of the patient. After providing the information sheet and acquiring informed consent, questions were asked and answers were recorded on paper.

Statistical analysis

The data collected were entered, organized, and quantified in an Excel spreadsheet.

Socio-demographic details

The socio-demographic details of the 100 women participants who took part in the study are as follows:

Age distribution

The samples were classified under 3 age groups and studied. Thus, we understand that 38% of the women who participated in the study belonged to the age group 36-45.

Educational qualification

The educational qualification of the women was assessed and classified under primary education, secondary education, high school, graduate, and others. Thus, we understand that 98% of the women in the sample were literate. 41% of the women had secondary-level education.

Occupational status

The occupational status of the women was enquired about. The various occupational levels were professional, skilled worker, unemployed, and others. The occupational status of the women is as depicted in Figure 1 below. Thus, it is clear that 63% of the women in the community were unemployed and 13% of the women had an occupation of professional level.

Religion

The religious practice of the women was enquired about, the various religions being Hindu, Christian, Muslim, and others. Thus, it is understood that the majority of respondents were Hindus and only 2 percent were Muslims.

Marital status

The women were asked about their marital status. They were either married or not married.

Knowledge of contraceptives and various contraceptive methods

The awareness regarding contraception and various contraceptive methods, like condoms, IUCD, injectables, hormonal pills, and emergency contraception, was assessed. The data are classified as depicted in Figure 2 below.

Figure 2: Awareness of various contraceptives by women.

From the above figure it is clear that 72% of the women were aware of condoms and only 39% of women were aware of emergency contraception.
In this study, 92% of women were aware of the sterilization method, and the source of information was health personnel.

Knowledge regarding procurement of temporary contraceptive devices

The women were assessed regarding their knowledge on the procurement of temporary contraceptive devices, the sources being government hospitals, health centers, private health institutions, medical shops, and pharmacies. The responses are depicted in Figure 3 below. The total will not equal 100, as the same sample could give multiple answers. We observe that 7% of the women are not aware of where to get these services and 56% of the women are aware that these can be obtained at the government hospitals.

Source of information

The source of information of the women was assessed, their sources being media, husband, family, and health personnel. The total will not equal 100, as the sample could give multiple answers. Thus, we understand that most of the women, 48%, got their information from health personnel.

DISCUSSION

This is a cross-sectional study that was conducted to assess the awareness of temporary contraceptive methods among women within the reproductive age group of 15 to 45 years in the community. The results obtained from the study show that:

• Nearly 50% of women acquired information on contraception from health personnel. This is in congruence with the results obtained from studies conducted in Udipi, Karnataka, slums in Mumbai, and Shillong, Meghalaya, where health workers had served as the source of information for the women. 8-

CONCLUSION

The results from this study show that:

• 100% of the sample population is aware of at least one method of contraception
• More than 60% of women were aware of at least one of the temporary contraceptives: condoms (72%), IUCD (65%), and hormonal pills (61%)
• Less than 40% of women were aware of the emergency contraception pill
• 7% of the women did not know where to procure contraceptives
• Health personnel were the source of information about contraceptives for nearly half of the women (48%).

The success of family planning programs can only be achieved by increasing the awareness of the various contraceptives available. It is important for contraceptive information providers to have sound knowledge of the various methods of contraception and their proper usage to remove fears about contraception. Difficulty in accessing the contraceptive provider limits the usage of contraceptives. Hence, it is necessary that supplies of contraceptives are accessible, available, and affordable to the general public with ease. To improve awareness, PHCs may expand their coverage/health care facilities to peripheral areas. The government may also utilize the media to increase the awareness of contraceptives so that proper family planning methods are adopted.
Theoretical Study of Signature-splitting and Signature-inversion in Doubly-odd Nuclei in A∼80 Mass Region

Projected Shell Model calculations have been performed using angular-momentum-projected two-quasiparticle states with a simple quadrupole-quadrupole + monopole-pairing + quadrupole-pairing Hamiltonian to study the nuclear structure properties of the doubly-odd 80Br and 82Rb isotones. The present calculations reproduce reasonably well the available experimental data on the yrast bands and also predict new high-spin states in these nuclei, where current data are still sparse. The phenomena of signature-splitting and signature-inversion have also been studied in detail within the context of the Projected Shell Model.

Introduction

A unified theoretical description of the structure of doubly-even, odd-mass and doubly-odd nuclei is one of the main goals of nuclear structure research. While a good amount of research has been reported on the nuclear structure of doubly-even and odd-mass nuclei in the recent past, the structure of their doubly-odd counterparts is still not well understood. This may be because doubly-odd nuclei have coexisting low-lying two-quasiparticle states that can have very small energy differences between them, e.g., less than 100 keV. It is not easy for theoretical models to reproduce such small differences, and hence understanding their intricate structure becomes difficult. However, with the advent of recent experimental techniques and computational advances, it has now become feasible to explore the nuclear structure of the odd-odd nuclei present in the Segre chart. In the A~70-80 mass region around the pf-shell in particular, invigorating new data on odd-odd nuclei have been made available by experiments in recent years. These data suggest that nuclei in this mass region exhibit various interesting phenomena such as rotational alignments, shape coexistence, signature-splitting and signature-inversion. Of these, the phenomena of signature-splitting and signature-inversion have attracted considerable research attention in the recent past, because knowledge of these phenomena can help one understand the dynamics of nuclear structure in a particular mass region. Nuclei in the mass regions 80, 100, 130 and 160 have been found to exhibit signature splitting and signature inversion. Signature is a quantum number associated with symmetry under rotation of a deformed nucleus around the principal axis by 180°, such that a rotational band splits into two sequences according to the signature. The shifting of the energy levels between the two sequences at a given rotational frequency is called signature splitting (SP), and it is characterized by a staggering in the energy. In the present work, we study this phenomenon in the mass region A~80 along with the phenomenon of signature-inversion (where an expected favoured branch, lower in energy, becomes unfavoured at higher spins). It is evident from the literature that the signature inversion in the mass 70-80 region is related to the filling of the high-j g9/2 proton and g9/2 neutron subshells [1] and is a sign of the transition from mainly single-particle excitations at low spins to more rotational (collective) motion at higher spins.
We aimed at studying these characteristic features of the mass 80 region to understand the role of the g9/2 orbital along with other nuclear structure properties. The nuclei chosen for the present study are the doubly-odd N=45 isotones 80Br and 82Rb.

Brief theory of the Projected Shell Model

A brief explanation of the Projected Shell Model (PSM) [2], along with the important input parameters, is given hereunder. The PSM calculation generally begins with the deformed Nilsson single-particle states having deformation ε2, with pairing correlations included by BCS calculations. As a result of the Nilsson-BCS calculations we get a set of quasiparticle (qp) states. The shell model basis is then constructed by building multi-qp states. In this process, the rotational symmetry of the states is broken; it is then recovered by the angular-momentum projection technique so as to form a shell model basis in the lab frame. Lastly, a two-body shell model Hamiltonian is diagonalized in this projected space. The qp subspace chosen for the present work is spanned by the basis set

$$\{ a^{\dagger}_{\nu}\, a^{\dagger}_{\pi}\, |0\rangle \}, \qquad (1)$$

where the $a^{\dagger}$'s are the quasiparticle (qp) creation operators, the ν's (π's) denote the neutron (proton) Nilsson quantum numbers, which run over low-lying orbitals, and $|0\rangle$ is the Nilsson + BCS vacuum (0-qp state). Each configuration in equation (1) consists of one quasineutron and one quasiproton. The indices ν and π in eq. (1) are general; for example, a two-qp state can be of positive parity if both quasiparticles i and j are from the same major N shell, or of negative parity if i and j are from N shells differing by ΔN=1. For the current odd-odd nuclei, the low-lying two-qp states with positive parity are those in which both the neutron and the proton occupy the N=4 fpg shell. The configuration space is obviously large in this case compared with the nearby odd-mass nuclei, and usually several configurations contribute to the shell model wave function of a state with nearly equal weightage. This makes the numerical results very sensitive to the shell filling, and the theoretical predictions for doubly-odd nuclei become far more challenging. The PSM Hamiltonian used in the present calculations consists of the harmonic-oscillator single-particle Hamiltonian and a sum of schematic (quadrupole-quadrupole (Q.Q) + monopole-pairing + quadrupole-pairing) forces, and is of the form

$$\hat{H} = \hat{H}_0 \;-\; \frac{\chi}{2} \sum_{\mu} \hat{Q}^{\dagger}_{\mu} \hat{Q}_{\mu} \;-\; G_M\, \hat{P}^{\dagger} \hat{P} \;-\; G_Q \sum_{\mu} \hat{P}^{\dagger}_{\mu} \hat{P}_{\mu}, \qquad (2)$$

where $\hat{H}_0$ is the spherical single-particle Hamiltonian, which in particular contains a proper spin-orbit force whose strengths (i.e., the Nilsson parameters κ and μ) are taken from [3]. The second term in the Hamiltonian (2) is the Q.Q interaction, and the last two terms are the monopole and quadrupole pairing interactions, respectively. It should also be noted that in the present calculations the configuration space contains three major shells, N = 2, 3, 4 for protons and N = 3, 4, 5 for neutrons. Moreover, Z=8 and N=20 are taken as inert cores for protons and neutrons respectively. The shell model space in the present calculations is truncated at a deformation ε2 = 0.195 for both these nuclei, which is very close to the values given in Refs. [8, 52-54]. Yrast bands for 80Br and 82Rb up to a maximum spin of 18ħ are calculated and plotted in figures 1(a) and 1(b) respectively. These results are compared with the available experimental data in the same figures, where the experimental data are taken from the NNDC database [4,5,6].
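For completeness, the angular-momentum projection mentioned above is carried out, in the standard PSM formulation [2], with the projector (quoted here from that standard formulation rather than from the text above):

$$\hat{P}^{I}_{MK} = \frac{2I+1}{8\pi^{2}} \int d\Omega \; D^{I\,*}_{MK}(\Omega)\, \hat{R}(\Omega),$$

where $\hat{R}(\Omega) = e^{-i\alpha \hat{J}_z}\, e^{-i\beta \hat{J}_y}\, e^{-i\gamma \hat{J}_z}$ is the rotation operator over the Euler angles $\Omega = (\alpha, \beta, \gamma)$ and $D^{I}_{MK}(\Omega)$ is the Wigner D-function. Acting on the intrinsic two-qp states of eq. (1), this operator generates the angular-momentum-projected basis in which the Hamiltonian (2) is diagonalized.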
On analysing the available data, we find that the ground-state band is a positive-parity band in both these nuclei, with band-head at 1^+, where the experimental data are available up to a maximum spin of 14^+ for 80Br and 17^+ for 82Rb. Our PSM calculations also predict the ground-state band-heads at K^π = 1^+ for both these nuclei. Further, from figures 1(a-b) it is clear that the calculated PSM data reproduce the available experimental data with a satisfactory degree of agreement, the maximum gap between the experimental and calculated energy levels being ~0.2 MeV, for the 9^+ state in 82Rb. It was noted by Döring et al. [1] that with the decrease in the number of protons, i.e., while going from 82Rb to 78As, the 6^+ state shifts from 191 keV in 82Rb to 357 keV in 80Br to 621.9 keV in 78As, indicating that filling the g9/2 proton orbital becomes energetically more expensive. In the present PSM calculations, we find that the 6^+ state in 82Rb occurs at 157 keV, whereas in 80Br it is found at 370.7 keV. Our PSM results therefore support the findings of Döring et al. and also point out that the g9/2 proton orbital does not play any major role in producing the low-lying states in these isotones, thereby resulting in a less deformed nucleus.

Signature-splitting and signature-inversion

Signature-splitting and signature-inversion are best understood by plotting the quantity [E(I) − E(I − 1)]/2I versus the spin I of the initial state [7]. The plots of [E(I) − E(I − 1)]/2I vs. I for the N=45 isotones 80Br and 82Rb are shown in figures 2(a) and 2(b) respectively. It is clear from the figures that signature splitting is very much present in the middle and higher spin ranges in these nuclei. It is noticed that, in the low-spin region, the even-spin states are lower in energy while the odd-spin states are higher in energy for both of these nuclei, whereas the energy of the odd-spin states becomes lower after the reversal in the phase of the staggering (signature inversion) takes place at the spin I = 12ħ. Experimentally, too, a signature inversion is observed in both these nuclei around the intermediate spin of 11ħ, where the yrast band is found to be composed of two bands, Bands 1a and 1b, which are identified as the α = 0 and α = 1 signature partners, respectively. It may be pointed out here that inversion of the signature in the vicinity of 11ħ is a generally observed feature in the positive-parity yrast bands of doubly-odd nuclei in this mass 70-80 region. It is understood to occur due to the underlying $\pi g_{9/2} \otimes \nu g_{9/2}$ quasi-particle configuration [8], and the results of our calculations also corroborate this. Moreover, for 82Rb, Shen et al. [8] also performed PSM calculations, but in their work they were not able to reproduce the signature inversion at the observed spin.

Summary

In order to gain better knowledge of the structure of doubly-odd nuclei, particularly those lying in the A=70-80 mass region, PSM calculations have been performed on the N=45 isotones 80Br and 82Rb. The calculated data reproduce reasonably well the reported experimental data on the yrast bands and also predict new high-spin states in these nuclei, where current data are still sparse. The phenomena of signature splitting and signature-inversion are also studied, and the role of the $\pi g_{9/2} \otimes \nu g_{9/2}$ configuration in producing the inversion of signature around the spin 11ħ is also established through the present calculations.
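As a concrete illustration of the staggering analysis above, the quantity [E(I) − E(I − 1)]/2I can be computed directly from a band's level energies. The sketch below uses made-up level energies (not the calculated or experimental values); a zigzag in S(I) signals signature splitting, and a phase reversal of the zigzag marks the inversion:

```python
import numpy as np

def staggering(spins, energies):
    """Return S(I) = [E(I) - E(I-1)] / (2I) for consecutive levels in a band.

    spins    : increasing spins I (in units of hbar), one per level
    energies : level energies E(I) in MeV, in the same order as `spins`
    """
    spins = np.asarray(spins, dtype=float)
    energies = np.asarray(energies, dtype=float)
    # Differences between adjacent levels, divided by 2I of the upper level.
    return (energies[1:] - energies[:-1]) / (2.0 * spins[1:])

# Illustrative (made-up) yrast-band energies: a smooth rotor reference
# plus a small odd-even stagger mimicking signature splitting.
spins = np.arange(1, 15)                  # I = 1 ... 14
energies = 0.05 * spins * (spins + 1)     # rotor-like trend, MeV
energies = energies + 0.02 * (-1) ** spins

for I, val in zip(spins[1:], staggering(spins, energies)):
    print(f"I = {I:2d}  S(I) = {val:+.4f} MeV")
```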
Kisspeptin modulates sexual and emotional brain processing in humans BACKGROUND. Sex, emotion, and reproduction are fundamental and tightly entwined aspects of human behavior. At a population level in humans, both the desire for sexual stimulation and the desire to bond with a partner are important precursors to reproduction. However, the relationships between these processes are incompletely understood. The limbic brain system has key roles in sexual and emotional behaviors, and is a likely candidate system for the integration of behavior with the hormonal reproductive axis. We investigated the effects of kisspeptin, a recently identified key reproductive hormone, on limbic brain activity and behavior. METHODS. Using a combination of functional neuroimaging and hormonal and psychometric analyses, we compared the effects of kisspeptin versus vehicle administration in 29 healthy heterosexual young men. RESULTS. We demonstrated that kisspeptin administration enhanced limbic brain activity specifically in response to sexual and couple-bonding stimuli. Furthermore, kisspeptin’s enhancement of limbic brain structures correlated with psychometric measures of reward, drive, mood, and sexual aversion, providing functional significance. In addition, kisspeptin administration attenuated negative mood. CONCLUSIONS. Collectively, our data provide evidence of an undescribed role for kisspeptin in integrating sexual and emotional brain processing with reproduction in humans. These results have important implications for our understanding of reproductive biology and are highly relevant to the current pharmacological development of kisspeptin as a potential therapeutic agent for patients with common disorders of reproductive function. FUNDING. National Institute for Health Research (NIHR), Wellcome Trust (Ref 080268), and the Medical Research Council (MRC). Introduction Unraveling the intrinsic links among sex, emotion, and reproduction relies on a focused exploration of putative factors. Identifying a factor that unites these fundamental components of human behavior has until now remained elusive. The reproductive hormone kisspeptin (encoded by KISS1) has recently emerged as a crucial activator of the reproductive axis acting in the hypothalamus to stimulate downstream secretion of reproductive hormones (1)(2)(3). However, the expression of KISS1 and its cognate receptor (encoded by KISS1R) is not limited to the hypothalamus. Significant KISS1/KISS1R expression has been reported in limbic brain structures in rodents (4)(5)(6)(7)(8) and humans (9,10), but little is known about the role of kisspeptin in these areas. The limbic system has established roles in emotional and reproductive behavior and so may provide a physiological framework uniting sex, emotion, and reproduction in humans. In this study, we employed kisspeptin administration to healthy men to explore this further. We hypothesized that kisspeptin administration modulates limbic brain activity in response to sexual and emotional stimuli and associates with related behavioral measures. 
To test our hypothesis, we performed a randomized, double-blinded, 2-way crossover, placebo-controlled study in 29 healthy heterosexual young men to explore the effects of kisspeptin administration on limbic brain activity in response to sexual and emotional stimuli as well as additional psychometric measures (protocol summarized in Figure 1 and participant characteristics in Supplemental Tables 1 and 2; supplemental material available online with this article; https://doi.org/10.1172/JCI89519DS1). We administered kisspeptin or vehicle, used emotional images to trigger underlying limbic brain activity, and mapped kisspeptin's modulation of this activity using functional MRI (fMRI). During the emotional images task, participants viewed sexual-, nonsexual couple-bonding-, negative-, and neutral-themed images. Following this, images of happy, fearful, and neutral emotional faces were presented to participants.

Results

Kisspeptin administration increased circulating kisspeptin, but not testosterone, oxytocin, or cortisol. Baseline kisspeptin, gonadotrophin, and testosterone levels were equivalent between study visits in the 29 healthy heterosexual young men (Supplemental Table 2). Kisspeptin administration led to significant increases in circulating kisspeptin levels, reaching steady-state levels for the duration of the fMRI and psychometric questionnaire sessions, as expected (Figure 1A). Importantly, the fMRI questionnaire sessions were performed before any downstream increases in testosterone (Figure 1C), which are known to occur after 90 minutes following kisspeptin exposure in humans (11,12). Kisspeptin administration had no effect on other relevant hormones that could affect limbic activity, including oxytocin and cortisol (Figure 1, D and E).

Kisspeptin administration enhanced limbic brain activity in response to sexual images, which correlated with psychometric measures. Heterosexual young men viewing sexual images exhibited enhanced activity in key limbic and paralimbic structures during kisspeptin compared with vehicle administration (Figure 2A and Supplemental Table 3). In keeping with this, analysis of a priori limbic and paralimbic anatomically-defined regions of interest (ROIs) (Supplemental Figure 1) revealed that, in response to sexual images, kisspeptin enhanced brain activity in the anterior and posterior cingulate as well as the left amygdala (Figure 2B), regions expressing kisspeptin and kisspeptin receptors (4-10) and consistent with areas of activation by sexual stimuli in previous physiological studies (13)(14)(15). Next, we correlated brain activity in the anatomical ROIs (Supplemental Figure 1) with psychometric measures to explore functional relevance (while correcting for multiple comparisons).

Figure 1. Experimental protocol and effects of kisspeptin administration on hormone levels. Twenty-nine healthy young men participated in a randomized, double-blinded, 2-way crossover, placebo-controlled study. They participated in 2 study visits: one for intravenous administration of kisspeptin (1 nmol/kg/h) and one for intravenous administration of an equivalent volume of vehicle for 75 minutes. Participants completed baseline and intrainfusion questionnaires (Q) and underwent functional MRI scanning while performing image tasks (see Methods). (A) Kisspeptin infusion resulted in increased circulating kisspeptin levels reaching a plateau at 30 minutes after initiation. Therefore, there were stable circulating kisspeptin levels during the fMRI and intrainfusion psychometric assessments (n = 29). (B) In parallel, kisspeptin increased circulating LH levels (n = 29). (C-E) Kisspeptin had no effect on circulating testosterone (n = 29), oxytocin (n = 13), or cortisol levels (n = 29). Data depict mean ± SEM. ****P < 0.0001, 2-way ANOVA.
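The plateau in circulating kisspeptin by about 30 minutes is the behavior expected of a constant-rate infusion into a single well-mixed compartment. The sketch below is purely illustrative (a textbook one-compartment model with made-up rate, clearance, and half-life values; it is not the authors' pharmacokinetic analysis):

```python
import numpy as np

def infusion_concentration(t_min, rate, clearance, half_life_min):
    """Concentration during a constant-rate infusion, one-compartment model:
    C(t) = (R0 / CL) * (1 - exp(-k_e * t)), with k_e = ln(2) / t_half.
    """
    k_e = np.log(2) / half_life_min
    return (rate / clearance) * (1.0 - np.exp(-k_e * t_min))

# Made-up parameters, chosen only to show the approach to steady state.
rate, clearance, half_life = 1.0, 0.5, 6.0
c_ss = rate / clearance                       # steady-state concentration
t = np.arange(0, 76, 15)                      # minutes, matching sampling times
for ti, ci in zip(t, infusion_concentration(t, rate, clearance, half_life)):
    print(f"t = {ti:2d} min  C/C_ss = {ci / c_ss:.3f}")
```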
Kisspeptin administration enhanced limbic brain activity in response to nonsexual couple-bonding images, which correlated with psychometric measures. On viewing nonsexual couple-bonding-themed images, kisspeptin administration resulted in brain activation patterns similar to those observed above in response to sexual images, including activation of the anterior and posterior cingulate and amygdala (Figure 3, A and B). However, kisspeptin also markedly enhanced activity in the thalamus and globus pallidus: regions previously implicated in "romantic love" (17,18) and expressing kisspeptin receptors in humans (9) (Figure 3B). Likewise, the amygdala is implicated in bonding (18), and we observed that kisspeptin's enhanced activation of the amygdala in response to bonding images related to improvements in positive mood (r = 0.69, P < 0.001, Figure 3C).

Kisspeptin administration did not modulate limbic brain activity in response to other themed images or during a battery of nonlimbic tasks. Kisspeptin administration did not modulate limbic brain activity in response to negative-, neutral-, happy-, or fearful-themed images (Supplemental Figures 3 and 4 and Supplemental Table 3). In addition, kisspeptin had no effect on brain activity during a battery of nonlimbic tasks (visual, auditory, motor, language, calculation; Supplemental Figure 5).

Kisspeptin administration enhanced frontal brain activity in response to negative images and reduced negative mood. Although kisspeptin had no effect on limbic structures when viewing negative images in our study, kisspeptin instead enhanced activity in a region around the frontal pole extending caudally to the paracingulate gyrus (Figure 4A and Supplemental Table 3), involving structures important in human negative-mood regulation (19) and expressing kisspeptin receptors (9). In keeping with this, we observed that although kisspeptin administration did not affect positive mood (Figure 4B), kisspeptin administration elicited a reduction in negative mood (P = 0.031, Figure 4C).

Discussion

In this study, we demonstrate that the reproductive hormone kisspeptin enhances limbic brain activity specifically in response to sexual and bonding stimuli and that these responses correlate with psychometric measures of sexual and emotional processing. Sexual and emotional responses are fundamental drivers of human behavior, and the links among sex, bonding, and reproduction ultimately ensure the survival of most mammalian species (20). However, the pathways involved are multiple, complex, relatively poorly understood, and involve reproductive and metabolic hormones, pheromones, neuronal networks, peripheral organs, and various sensory signals, among others. Our data suggest a potential role for kisspeptin as an important neuromodulator, linking sexual and emotional brain processing with the reproductive axis.

Visually evoked sexual arousal is a frequent occurrence in men, and brain activity associated with visual sexual stimuli has been explored in several previous studies. These studies have examined a wide range of brain structures in response to sexual-themed images and revealed a processing network involving structures including the hypothalamus, amygdala, thalamus, cingulate, insula, precentral gyrus, and occipital cortex (13)(14)(15)(21)(22)(23)(24)(25)(26)(27)(28). Furthermore, activations in structures including the thalamus and cingulate correlate with physiological sexual arousal (as assessed by penile tumescence) (13). The involvement of these structures therefore suggests cognitive (cingulate, thalamus), emotional (amygdala, insula), motivational (precentral gyrus), and physiological (thalamus) components to sexual arousal, from the appraisal of a stimulus as sexual through to the autonomic activation in readiness for sexual behavior (13,27,28).

Kisspeptin sits at the apex of the reproductive axis, above gonadal hormones such as testosterone that are known to be involved in sexual and emotional processing (29). Kisspeptin signaling is also essential in the "timing" of reproduction, from regulating gonadotropin-releasing hormone (GnRH) pulsatility, oestrous cyclicity, and sexual development to aging (30). In our study, kisspeptin enhanced activity in key limbic and paralimbic structures when heterosexual young men viewed sexual images. These included the anterior and posterior cingulate as well as the left amygdala, consistent with areas of activation observed in the above studies (13)(14)(15)(21)(22)(23)(24)(25)(26)(27)(28) and with regions expressing kisspeptin and kisspeptin receptors (4-10). Therefore, we demonstrate that kisspeptin administration enhances activation in key established areas of the sexual-processing network. It is interesting that, although kisspeptin enhanced activity in both the right and left amygdala, this only reached statistical significance on the left. Although the right amygdala often shows greater enhancement during image-related emotion stimulation (31,32), the left amygdala is more often engaged in sexual (14) and emotional processing in men (33), and so in this study, kisspeptin may be preferentially acting on the left amygdala in keeping with these studies. Future studies may seek to examine whether there is a lateralization of kisspeptin and kisspeptin receptor expression in the amygdala to address this further.

We then proceeded to correlate modulations in brain activity with our psychometric data to provide functional relevance. Interestingly, kisspeptin's enhancement of several structures of the sexual-processing network (including the cingulate, putamen, and globus pallidus) correlated with reduced sexual aversion, suggesting a role for kisspeptin in sexual disinhibition. Drive and reward traits are primary components of the BAS, which has key functions in bringing the individual together with biological rewards such as sex and food (34,35). Furthermore, previous studies have shown that these traits predict fMRI responses to appetizing foods (36) and sexual images (37). The neural substrate of the BAS comprises structures belonging to the mesolimbic reward and fronto-striatal-amygdala-midbrain networks (36,37). Intriguingly, in our study, kisspeptin activated key components related to these networks (including the hippocampus, amygdala, and cingulate) more in participants with lower baseline drive and reward traits in response to viewing sexual images. It is interesting to speculate as to a functional reason for this. Kisspeptin was able to enhance activity in components of this reward circuitry more in participants who were less reward responsive. This could serve as a functional mechanism for enhancing reward-system activity during sexual arousal (in those generally less responsive to reward), so as to drive a desire for reproduction in these individuals. Collectively, these data suggest that kisspeptin not only enhances activation in established structures of sexual arousal, but that this activation correlates with behavioral measures of reward, drive, and sexual aversion. Consistent with the expression pattern of kisspeptin and its cognate receptor in these regions (4-10), we provide evidence for kisspeptin as a neuroendocrine modulator of the human brain sexual-processing network.

In addition to sexual stimulation, an important precursor to reproduction is the desire to bond with a partner. Studies of bonding have examined different types: romantic love, maternal love, and unconditional love. Studies of romantic love demonstrate activations in dopamine-rich and basal ganglia structures such as the putamen, thalamus, and globus pallidus (17,38,39), which are associated with reward (40), pair-bonding (41), and euphoria (16). In addition, activations are commonly seen in areas associated with mental associations (e.g., hippocampus and thalamus) and emotional areas also implicated in sexual processing (e.g., cingulate and amygdala) (17,38,39,42). There is substantial overlap with the processing networks in maternal love, including the cingulate, globus pallidus, amygdala, and dopaminergic brain areas (43). Activations are also observed in reward and dopamine-rich areas (e.g., globus pallidus and cingulate) in unconditional love (44). Taken together, these studies suggest a common subcortical dopaminergic reward-related brain system as well as higher-order cortical cognitive centers driving love and bonding.

In the current study, kisspeptin modulated the response to bonding images in regions similar to those seen with sexual images, including the anterior and posterior cingulate and amygdala, with the addition of activation in the thalamus and globus pallidus. These activations by kisspeptin match regions implicated in romantic love, maternal love, and even unconditional love in the aforementioned studies, as well as being sites of kisspeptin and kisspeptin receptor expression (4-10). Furthermore, we observed that kisspeptin's enhanced activation of the amygdala in response to bonding images correlated with improvements in positive mood. Taken together, these data demonstrate that kisspeptin enhanced activity in key "romance and bonding" structures in response to viewing couple-bonding images and that this correlated with improved positive mood. We therefore provide evidence in humans of a role for kisspeptin in the processing of sexual and bonding stimuli, both of which are critical in driving reproduction at a behavioral level.

Consistent with the correlation between kisspeptin's enhancement of amygdala activity and improvements in positive mood in humans above, recent rodent data suggest antidepressant-like effects for kisspeptin via the serotonergic system (45). In our study, kisspeptin enhanced prefrontal activity in response to negative images; a region expressing kisspeptin receptors (9). This is consistent with studies of negative-evoked stimuli, demonstrating predominant activation in prefrontal regions commonly implicated in response inhibition and self-control.
Greater activity in these regions assists internalized representations of safety to minimize fear and anxiety to negative stimuli (19). In keeping with this, we observed that kisspeptin administration elicited a reduction in negative mood, providing human evidence of an antidepressant-like effect for kisspeptin, a finding with clear clinical implications. The hippocampus is heavily involved in producing emotions. In our study, sexual- and bonding-themed stimuli resulted in positive increases in activity in the hippocampus (i.e., increases in mean percentage of blood-oxygen-level-dependent [BOLD] signal change), in line with previous studies (25,46). In other words, the images were able to stimulate hippocampal activity. However, there was no significant difference in this increased activity between kisspeptin and vehicle administration. Overall, these data suggest that kisspeptin may have a greater effect on other limbic structures involved in emotional processing, such as the amygdala, cingulate, thalamus, and globus pallidus, rather than the hippocampus. It is salient to note that the effects of kisspeptin on the limbic system were confined to sexual and couple-bonding images, with no limbic effects in response to negative-, neutral-, happy-, or fearful-themed images. In addition, kisspeptin had no effect on brain activity during a battery of nonlimbic tasks (visual, auditory, motor, language, calculation). These data highlight that kisspeptin acts specifically to enhance limbic activity only to sexual and couple-bonding stimulation in our study, which is particularly pertinent given its established role as a potent reproductive hormone (1)(2)(3). It is also noteworthy that kisspeptin administration had no effect in the current study on other relevant hormones that could affect limbic activity, including testosterone, oxytocin, and cortisol, as well as attention and anxiety. Furthermore, previous studies demonstrate that kisspeptin administration has no effect on other endocrine hormones, including growth hormone, prolactin, and thyroid-stimulating hormone in humans (47). It is important to consider the physiological implications of our findings using the experimental paradigm employed in this study. Physiologically, kisspeptin is predominantly synthesized and secreted from kisspeptin neurones in the infundibular nucleus of the hypothalamus in humans (48) and the arcuate nucleus (ARC) and anteroventral periventricular nucleus (AVPV) in rodents (49). This kisspeptin then activates kisspeptin receptors on GnRH neurones, stimulating pulsatile GnRH release into the hypophyseal-portal circulation and downstream reproductive hormones. This secretion does appear to be pulsatile in rodents (49), and work in monkeys demonstrates pulsatile kisspeptin secretion (every 30 to 90 minutes) into the hypophyseal-pituitary circulation (50). In this study, kisspeptin was administered peripherally, as it is obviously not possible to administer it into the hypothalamus in humans, and we acknowledge that this differs from physiological kisspeptin release. However, the levels of kisspeptin achieved in this study are similar to those observed physiologically in normal pregnancy (51,52). In addition, the kisspeptin levels observed in this study were similar to the kisspeptin levels achieved in previous studies in which peripheral kisspeptin administration stimulated oocyte maturation in in vitro fertilization protocols (53) and restored luteinizing hormone (LH) pulsatility in women with hypothalamic amenorrhoea (54).
Furthermore, our study and others have demonstrated that peripheral kisspeptin administration does not result in downregulation of the reproductive axis in the time frame used in this study in healthy men (54)(55)(56)(57). As such, while our protocol does not precisely mimic normal physiology, in the current study, we administered doses of kisspeptin that have previously been shown to have physiological and sustained reproductive effects. Another important point to consider is whether peripherally administered kisspeptin can get into the brain. Peripheral kisspeptin can access GnRH neurones via their dendritic terminals in the organum vasculosum of the lamina terminalis (OVLT) outside the blood-brain barrier (58,59). To examine other brain areas, we administered radiolabeled kisspeptin peripherally to male mice and demonstrated that it can access the brain, including limbic structures (Supplemental Figure 6). Although this study was performed in mice, it suggests that peripheral kisspeptin can cross the blood-brain barrier and directly access brain regions expressing kisspeptin and kisspeptin receptors (4-10). Future work will no doubt examine the neuronal pathways involved, expanding on established interactions among kisspeptin (6), GABA (60), and nitric oxide (61) pathways. In conclusion, we implicate kisspeptin as a modulator of reproductive hormones, limbic brain activity, and behavior. This is supported by the findings of kisspeptin and kisspeptin receptor expression in limbic and paralimbic structures (4-10). We demonstrate that kisspeptin administration enhances limbic responses to sexual and bonding stimuli and that this activity correlates with reward measures, improved positive mood, and reduced sexual aversion. In addition, kisspeptin attenuates negative mood. This suggests that kisspeptin, in addition to its established role in the reproductive hormonal cascade, can also influence related sexual and emotional brain processing, thereby providing integration among reproduction, sexual responses, and bonding. These findings have important ramifications for our understanding of reproductive biology. Delineation of the precise neuronal networks by which kisspeptin exerts these effects will be an exciting field of future study, and recent advances in the use of optogenetics to stimulate endogenous kisspeptin neurones may serve as useful tools (49). Furthermore, in rodents, the kisspeptin receptor is necessary for male olfactory partner preference (62), with limbic kisspeptin neurones integrating into olfactory and reproductive circuits (6). This suggests that in humans, kisspeptin-olfactory processing may provide another area of future study. We observed that kisspeptin's enhancement of brain activity correlated with improvements in positive mood and reduced sexual aversion while kisspeptin also reduced negative mood. Therefore, this raises interesting directions for the pharmacological use of kisspeptin in disorders of sexual and emotional processing. For example, studies of kisspeptin administration in patients with depression and psychosexual disorders may prove fruitful as well as informing current work to develop kisspeptin as a potential therapeutic for common reproductive disorders, including male hypogonadism (56), hypothalamic amenorrhoea (54), and hyperprolactinaemia (63), and as a trigger for ovulation in in vitro fertilization (53). Therefore, our data also have important clinical relevance given the continued development of kisspeptin as a potential therapeutic. 
Methods

Participants

Thirty-one healthy young men were recruited from advertisements in the local press following a medical screening appointment. Two participants were excluded due to excessive head motion during fMRI scanning (a priori, >2 mm), leaving a final study group of 29 healthy young men (age 25.0 ± 0.9 years). This sample size was chosen in order to give sufficient power to detect a difference in fMRI activity following a hormonal intervention compared with vehicle. This number compares favorably with previous fMRI studies (64) and is also in line with empirically derived estimates of optimal sample sizes in fMRI studies, which suggest that an n of 20 to 24 is the minimal number that should give sufficient power to detect moderate-sized effects (65), and with our previous work (66). In this way, we ensured adequate power to detect significant differences while also allowing for natural variation in responses. Twenty-five participants were right-handed and 4 participants were left-handed, approximating the prevalence of left-handedness in the general population (67). While handedness can have strong effects on brain lateralization for some cognitive functions (e.g., language, spatial attention), there is no evidence that it reverses lateralization of sexual and emotional processing (68). The inclusion of both left- and right-handed participants is in line with recent recommendations to include both in neuroscience studies in order to better reflect the general population (67). All participants were heterosexual with normal basal reproductive hormone levels (for participant characteristics, see Supplemental Table 1). Participants were free of current and past physical or psychiatric illness and were naive to psychoactive substances, prescribed or illicit, for a minimum of 6 months prior to their screening appointment. In addition, participants were excluded if there was any history of sexual aggression/abuse/phobia or psychotherapy/counselling. All participants had normal or corrected-to-normal vision.

Study design

The 29 participants participated in 2 study visits each, as part of a randomized, double-blinded, 2-way crossover, placebo-controlled protocol (summarized in Figure 1A). This allowed participants to act as their own controls to minimize interparticipant variations in healthy physiology. All studies commenced in the morning to ensure peak basal reproductive hormone levels. Participants consumed a normal breakfast on their study days. Participants were required to abstain from alcohol, caffeine, and tobacco from midnight before their study visits. In addition, participants were asked to abstain from sexual activity from midnight before their study visits, as sexual activity prior to the study could result in changes in testosterone levels (69,70) and residual limbic brain activity (21), as well as a postejaculatory refractory period (71) and sexual exhaustion (72). On arrival, participants were asked to change into loose hospital scrubs and lie supine for 30 minutes to relax. Intravenous cannulae were then inserted into each antecubital fossa to allow blood collection (at time-points -30, -15, 0, 15, 30, 45, 60, and 75 minutes) and infusion of kisspeptin or vehicle. Participants completed psychometric questionnaires as detailed below. At time-point 0 minutes, a 75-minute infusion of either kisspeptin or vehicle was commenced. Participants and fMRI data analysts (L. Demetriou and M.B.
Wall) were blinded as to the identity of each infusion, and the order of infusions was randomized by an independent investigator (using Research Randomizer, www.randomizer.org). Based on our previous experience of kisspeptin infusions, a dose of 1 nmol/kg/h of kisspeptin-54 was selected so as to provide steady-state levels of circulating kisspeptin from 30 to 75 minutes (during fMRI scanning and questionnaires), but avoid any increase in testosterone in this initial time frame, as previously demonstrated (11,12). Kisspeptin-54 (Bachem) was made up in gelofusine (B. Braun) and infused as previously described (12). Vehicle (gelofusine) was administered at a rate equivalent to the kisspeptin infusion.

Assays

Blood was collected to measure circulating kisspeptin, LH, and testosterone levels, as previously described (12), and to ensure that baseline reproductive hormone levels were equivalent between study visits (Supplemental Table 2). Cortisol was measured on serum samples using an automated delayed 1-step immunoassay (Abbott Diagnostics) that uses chemiluminescent microparticle immunoassay technology. The precision of the assay was 10% or less total coefficient of variation (CV) for serum samples, with values between 83 nmol/l and 966 nmol/l. The functional sensitivity of the assay was 28 nmol/l or less, and the limit of detection was 22 nmol/l or less. Oxytocin was measured using nano-liquid chromatography-mass spectrometry (nLC-MS). The method was based on that described in Brandtzaeg et al. (73). This featured a reduction/alkylation step to liberate strongly protein-binding oxytocin, selected reaction monitoring, and a labeled internal standard, but with some modifications; for high-speed analysis (oxytocin retention time, 1.6 minutes), a short 5-mm column (200 μm ID, PepSwift, P/N 164558, Thermo Scientific) was used for both trapping and chromatographic separation. Steps were taken to ensure robust high-speed analysis. The monolithic polystyrene/divinylbenzene column material allowed for well-resolved elution of proteins still present after sample preparation. An off-line solid-phase extraction step (using Millipore C18 ZipTips; lot: R3PA16379, and elution with 30/70 ACN/0.1% formic acid [aqueous], v/v) removed lipids prior to injection, which could have otherwise been retained by the LC stationary phase. Silica capillaries were silanized to avoid secondary interactions with the biosamples.

Psychometric questionnaires

Participants were asked to complete a number of psychometric questionnaires. Before commencing their first study, participants completed the Patient Health Questionnaire-9 to screen for depression (PHQ-9, which excluded depressive illness in our cohort, as participant scores were below the threshold for depressive disorder) (74). The State-Trait Anxiety Inventory (STAI Y2-Trait; ref. 75) excluded anxiety traits in our cohort, as scores were within the normal range (76). The Behavioral Inhibition System Scale (BIS) assessed sensitivity to anticipation of punishment, and the BAS assessed sensitivity to reward, desired goals, and fun (34). Greater BIS scores reflect a greater predisposition to anxiety, while greater BAS scores reflect a greater predisposition to engage in goal-directed efforts and positive feelings. The BAS scale is subdivided into 3 associated components as follows: drive, pursuit of desired goals; fun-seeking, desire for new rewards; reward, positive responses to occurrence or anticipation of reward (34). We used the following questionnaires: Sexual Desire Inventory-2 (SDI-2) to formally assess dyadic (i.e., with partner) and solitary sexual desire, which confirmed that all participants had appropriate sexual desires (77); Passionate Love Scale (PLS) to assess frequency and persistence of passionate feelings, with all participants within the average-passionate categories (78); and Love Attitudes Scale to assess individual love style (see Supplemental Table 1 for styles) (79). Participants completed a second set of questionnaires before (to confirm no baseline differences between visits, Supplemental Table 2) and during their infusions (kisspeptin or vehicle), comprising the following: the Positive and Negative Affect Schedule (PANAS), which involved participants scoring 20 different emotions and feelings. Higher positive-affect scores reflect greater enthusiasm, alertness, energy, and pleasurable engagement, and higher negative-affect scores reflect greater distress and unpleasurable feelings (Figure 4, B and C) (80). We also used the State-Trait Anxiety Inventory (STAI Y1-State) (75), designed to assess for any effects of the infusions on anxiety at that moment (rather than in general, as in STAI Y2-Trait, above) (Supplemental Figure 2F) (22,75). Sexual arousal and desire were assessed during kisspeptin and vehicle infusion using the multidimensional SADI, with no differences observed (Supplemental Figure 2, A-D) (22,81). The SADI questionnaire contains a 55-descriptor scale examining evaluative (e.g., passionate, sexy), negative (e.g., frigid, aversion), physiological (e.g., tingly, throbs in genital area), and motivational (e.g., lustful, urge to satisfy) components (81). These well-established questionnaires were selected as they have also previously been used to examine various psychometric parameters associated with reproduction that may correlate with brain activity (Supplemental Table 1) (22,75). Finally, participants completed the D2 Test of Attention during their infusions to confirm no differences in participant concentration and attention between infusions that could have confounded activations (Supplemental Figure 2E) (75).
MRI procedure

The MRI session consisted of the following scans: localizers, a high-resolution T1-weighted anatomical image, a B0 field-map image, the emotional images task, a resting-state fMRI scan, the emotional faces task, and finally, the fMRI battery task. The entire session lasted approximately 45 minutes. Participants used a mirror mounted on the head coil to view a screen mounted in the rear of the scanner bore, where visual stimuli were back-projected through a wave guide in the rear wall of the scanner room. Participants also wore headphones in order to receive auditory stimuli and instructions from the researchers, as required. Physiological monitoring devices (pulse-oximeter and a respiratory belt) were attached to the participant, and these data were recorded using a standard data-recording system (AD Instruments PowerLab) in the control room. Infusions were performed using a Medrad Spectris Solaris MRI-compatible injection system, which was also controlled using a remote panel in the control room.

Emotional images task. This task used an event-related design lasting 16 minutes. Different images from the following categories were presented: sexual, nonsexual couple-bonding, negative, and neutral, sourced from freely available and copyright-free stock image libraries on the internet. Each image was presented on screen for 3 seconds in a single run of 100 trials (25 trials of each image type) with a jittered inter-trial interval (ITI) of 2 to 10 seconds (based on a Poisson distribution). The participants were instructed to rate the pleasantness of each image on a 5-point scale ranging from "not at all" to "very pleasant" using a 5-key response box.
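The trial structure just described is straightforward to sketch programmatically. Below is a purely illustrative Python sketch (not the authors' stimulus-delivery code; the exact Poisson parameterization of the jitter is not stated in the text, so lambda = 4 is an assumption):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

CATEGORIES = ["sexual", "bonding", "negative", "neutral"]
TRIALS_PER_CATEGORY = 25
STIM_DURATION_S = 3.0

# Pseudo-randomized order: 25 trials of each of the 4 image categories.
trials = np.repeat(CATEGORIES, TRIALS_PER_CATEGORY)
rng.shuffle(trials)

# Jittered inter-trial intervals: Poisson-based draws truncated to 2-10 s
# (lambda = 4 is an illustrative assumption).
itis = np.clip(rng.poisson(lam=4, size=trials.size), 2, 10)

# Onset times: each trial starts after the previous stimulus plus its ITI.
onsets = np.cumsum(np.concatenate([[0.0], STIM_DURATION_S + itis[:-1]]))
for onset, cat in list(zip(onsets, trials))[:5]:
    print(f"t = {onset:6.1f} s  ->  {cat}")
```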
Emotional faces images task. This was a block-design task lasting 12 minutes. Participants were shown faces with either happy, fearful, or neutral expressions, selected from the Karolinska Directed Emotional Faces set (83). An equal number of male and female faces were selected for the task. Each face was presented on screen for 3 seconds, and 10 faces of the same expression were presented in each 30-second block. Rest blocks (also 30 seconds) were also included, and there were 6 repetitions of each block type, presented in a pseudo-random sequence (24 blocks in total). To ensure alertness of the participants throughout the task, they were asked to respond to each face by pressing one of 2 buttons (index and middle finger) on the response box to indicate whether it was a male or female face.

Battery task. This was a fast event-related task lasting 5 minutes. The experiment was adapted from Pinel et al.'s original design (84) and contained a variety of stimuli to assess different sensory and cognitive functions: visual, auditory, motor, language, and calculations. The instructions/stimuli were either presented on screen (visual) or via the headphones (auditory). The 4 trial types were as follows: (a) flashing checkerboards (horizontal or vertical orientations, 20 trials); (b) simple mental calculations (audio or visual instructions, 20 trials); (c) pressing the left or right response key 3 times (visual or audio instructions, 20 trials); and (d) listening to or reading short sentences (20 trials). The combination of these 4 tasks and the variation in auditory or visual instructions allowed the mapping of 5 basic functional brain networks: visual, auditory, calculation, motor, and language (Supplemental Figure 5). The trials were presented in pseudo-randomized order in a single run of 100 trials of 3 seconds each. Randomly intermixed within the stimulus sequence were 20 null (blank screen) trials (also 3 seconds) in order to provide a baseline condition.

MRI acquisition. All scanning was performed on a 3T Siemens Trio scanner with a 32-channel phased-array head coil. Anatomical images were acquired at the beginning of each scan using a T1-weighted MPRAGE pulse sequence (1 mm isotropic voxels, TR = 2300 ms, TE = 2.98 ms, flip angle = 9°). Functional images were acquired using a 3D Echo Planar Imaging (EPI) sequence with the following parameters: TR = 2000 ms, TE1 = 13 ms, TE2 = 31 ms, flip angle = 80°, 36 axial slices, voxel size = 3 mm isotropic. The number of volumes acquired for each task differed depending on the task length: 485 for the emotional images task, 365 for the faces task, and 155 for the fMRI battery task (each task included an additional 5 volumes beyond the end of the stimulus sequence as an end-buffer period).

fMRI data analysis. Image processing was performed using FSL (www.fmrib.ox.ac.uk/fsl/) version 5.0.4 (FMRIB's Software Library; Oxford Centre for Functional Resonance Imaging of the Brain [FMRIB]). Anatomical images were skull-stripped using the BET extraction tool in FSL. Only the TE = 31 images from the dual-echo sequence were analyzed, as these provide the best BOLD contrast in the majority of the brain. Functional image series were preprocessed using the following parameters: high-pass filter, 100 s; head motion correction; 6 mm (full width at half maximum [FWHM], Gaussian) spatial smoothing. Finally, the results of the analysis were coregistered to the T1 structural image of the individual and a standard anatomical template in the MNI152 space. For the analysis of all the tasks, a standard general linear model (GLM) was used, as implemented in the FEAT module in FSL. Regressors derived from the onset times of each stimulus condition were convolved with a gamma function in order to simulate the hemodynamic response function (HRF). Regressors derived from head-motion parameters were also included in the first-level models as regressors of no interest. For the event-related tasks (emotional images and fMRI battery tasks), the first temporal derivatives of each stimulus time series were also included. Contrasts were defined that isolated activity related to each stimulus condition relative to the baseline and also compared between 2 stimulus conditions, as appropriate. Two sets of group-level models computed the mean task/stimuli-related activation across all participants and all scans and compared the 2 treatment conditions. A regressor of no interest was included in the latter analyses to model the treatment/session order in order to control for any potential order effects. A mixed-effects (FLAME-1) model was used to enable generalization of results to the population. A statistical threshold of Z = 2.3 (P < 0.05 cluster-corrected for multiple comparisons) was used for all group analyses. A priori anatomical limbic and paralimbic ROIs were selected for further analysis. These were the amygdala, hippocampus, anterior and posterior cingulate, thalamus, globus pallidus, and putamen, based on the expression pattern of KISS1/KISS1R in the limbic and paralimbic system in humans (9, 10) and established structures involved in sexual and emotional processing (15, 17-19, 21-28, 38, 39, 43, 46, 85-87) (Supplemental Figure 1). ROIs were defined in standard stereotactic space using the Harvard-Oxford cortical and subcortical atlases (distributed by https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/). The mean of all voxel values within each ROI was extracted from the brain images for each participant per session, and a statistical analysis was performed as below.

Statistics

Statistical analyses were performed by a statistician (P. Bassett). Data were normally distributed by Kolmogorov testing. Hormone level data analysis was performed using 2-way ANOVA. Baseline and treatment effect psychometric data were analyzed using multilevel linear regression. Two-level models were used, with individual measurements contained within participants. Terms in the model included both treatment (kisspeptin or vehicle) and visit order. fMRI data task analysis was performed using a GLM with regressors (see details in Supplemental Methods). A statistical threshold of Z = 2.3 (cluster-corrected for multiple comparisons) was used for all group fMRI analyses. For the fMRI ROI analysis, a 2 (stimulus, bonding vs. sexual) by 2 (treatment, kisspeptin vs. vehicle) by 8 (ROI) repeated-measures ANOVA revealed significant differences for all the main effects and 2-way interactions (P < 0.01) except for the stimulus by treatment interaction. The 3-way interaction
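To make the GLM and ROI pipeline described above concrete, here is a minimal illustrative sketch in Python. It is not the FSL/FEAT implementation the authors used: the gamma HRF parameters, data, and mask below are all made up, and only the core ideas (an HRF-convolved regressor, a least-squares fit, and ROI mean extraction) are shown:

```python
import numpy as np
from scipy.stats import gamma

TR = 2.0          # repetition time in seconds, as in the acquisition above
N_VOLS = 200      # made-up scan length

# Gamma-function HRF sampled at the TR (shape/scale are illustrative).
t = np.arange(0, 30, TR)
hrf = gamma.pdf(t, a=6, scale=1.0)
hrf /= hrf.sum()

# Boxcar regressor built from stimulus onsets (seconds), convolved with the HRF.
onsets = np.array([10, 40, 70, 100, 130])
boxcar = np.zeros(N_VOLS)
boxcar[(onsets / TR).astype(int)] = 1.0
regressor = np.convolve(boxcar, hrf)[:N_VOLS]

# GLM: least-squares fit of a design matrix (regressor + intercept) to a
# voxel's time series; here, one synthetic voxel with a known effect of 2.5.
rng = np.random.default_rng(1)
voxel = 2.5 * regressor + 100 + rng.normal(0, 0.5, N_VOLS)
X = np.column_stack([regressor, np.ones(N_VOLS)])
beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
print(f"estimated effect size: {beta[0]:.2f}")

# ROI summary: mean of all voxel values within a binary mask, as in the
# ROI analysis above (mask and data shapes are made up).
data = rng.normal(size=(4, 4, 4))
mask = np.zeros((4, 4, 4), dtype=bool)
mask[1:3, 1:3, 1:3] = True
print(f"ROI mean: {data[mask].mean():.3f}")
```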
The effect of emotions, promotion vs. prevention focus, and feedback on cognitive engagement

The purpose of the study was to explore the role of emotions, promotion-prevention orientation and feedback in cognitive engagement. In the experiment, participants had the possibility to engage in a categorization task three times. After the first categorization, all participants were informed that around 75% of their answers were correct. After the second categorization, depending on the experimental condition, participants received feedback about either success or failure. Involvement in the third categorization depended on participants' decision whether to take part in it or not. Each time, before and after categorization, the emotional state was assessed. Results showed that promotion orientation predicted experiencing curiosity before the task, which in turn led to higher cognitive engagement in the first categorization. Promotion and prevention orientation moderated the type of emotional response to positive feedback. Promotion orientation also predicted cognitive engagement after feedback of success was provided. Generally, the results confirmed the positive effect of positive emotions as well as promotion orientation on cognitive engagement.

In this paper we attempt to analyze the contribution of emotions (their intensity as well as their type), the situation (failure vs. success) and the individual characteristic of motivational orientation (promotion vs. prevention) to cognitive engagement. First, it was assumed that cognitive engagement is a function of emotional appraisal shaped by one's motivational orientation, either promotion- or prevention-focused. Second, it was predicted that experienced emotions, which reflect the evaluation of the situation determined by these motivational orientations, mediate the relation between motivational orientation and cognitive engagement in a particular activity.

Emotions and action

One of the prominent functions of emotions emerges from the examination of their contribution to undertaking actions (Frijda, 2008; Johnson-Laird & Oatley, 1992). First, behavioral tendencies (approach and avoidance) are automatically triggered by evaluation processes (Neumann, Förster, & Strack, 2013). According to Kolańczyk (2004), such automatic evaluation of a stimulus or a situation contains a dominant affective component, which relates to the valence of information extracted at the subliminal level; therefore it occurs prior to semantic processing. This means that the meaning ascribed to the situation is largely based on the extracted positive or negative valence of information. Such processes serve mainly adaptive functions (Greenwald & Banaji, 1995; Izard, 1993; Neumann et al., 2013), as they allow people to grasp what is good and worth approaching, and what is bad and should be avoided (see Izard, 2009, on the affective system of emotion activation). Emotion can be understood in this context as a source of basic information (Schwarz & Clore, 2003), as it refers to the individual's primary attitude towards a given situation or object, taking the form of an appraisal of the situation's or object's significance and the associated degree of pleasure and aversion for that individual (Frijda, 2008). Second, emotions direct one's actions and activate particular action programs (Damasio, 1994; Frijda, 2008).
To put it in other words, "the internal emotional signals have causal effects within the organism, preparing it psychologically for each general class of action" (Johnson-Laird & Oatley, 1992, p. 207). Such a relation of emotions to action patterns is based on a cognitive evaluation of the situation, which occurs predominantly on an unconscious level and determines an appropriate course of action due to the activation of a particular repertoire of action specific for the given emotion (see table 9.1, Oatley & Jenkins, 1996, p. 253). (This research was supported by grant NN 106 282839 given to Agata Wytykowska.) The close relation of basic emotions to actions is also stressed by Izard (2009), who claims that experiencing a particular emotion activates a specific mode of cognitive processing and also certain behavioral tendencies, both corresponding to the prevailing emotion one has consciously in mind. For instance, curiosity motivates one to explore and learn and guarantees engagement in the task. Frijda (2004) connects emotions with actions through the notion of states of action readiness, which he ascribes to emotions in order to emphasize their motivational properties. Action readiness refers to being set for an action or for achieving a particular aim (Frijda, 2012), rather than to performing some specific activity. Therefore it is related to the individual's degree and kind of engagement in the world, and results in the capacity to spend time and effort in dealing with life demands (Frijda, 2012; Higgins, 2006). There are different action tendencies, such as attending, moving forward or moving away (Frijda, 2012), all being elicited by different appraisal dimensions like those listed by Roseman (2008), Scherer (1984) or Smith and Ellsworth (1985). Joy, for instance, is related to the aim of enhancing engagement in the current situation. However, joy will transform into action only if, from the perspective of the individual's goals, it brings benefits and an action repertoire is available. As Frijda (2004, p. 158) underlined, "action follows only under certain conditions, including the presence and availability of an action repertoire, an equilibrium of the cost and benefits of action, and the presence of resources and motivation to consider the cost and benefits". Accordingly, the process of transforming action readiness into a particular action involves processes of cognitive appraisal, which in turn are under the influence of individual goals that determine sensitivity to particular signals or events.

Appraisal processes, feedback and action

According to appraisal theories, it is the interpretation of events rather than the events themselves that gives rise to emotions (e.g., Scherer, Schorr, & Johnstone, 2001; Siemer, Mauss, & Gross, 2007; Smith & Lazarus, 2001). However, the relation between appraisals and emotions is conditioned by different factors. On the one hand, the latest studies clearly indicate that individual differences may moderate the relationship between evaluation and emotion (Kuppens & Tong, 2010; Kuppens, Van Mechelen, Smits, De Boeck, & Ceulemans, 2007; Tong, 2010).
The appraisal of the very same situation made by different individuals will vary and thus will lead to the emergence of different emotions (for a review: Kuppens & Tong, 2010), because in each case such evaluation becomes individualized, as it is done through the prism of one's goals (Dweck & Leggett, 1988), self-regulatory standards (Scholer & Higgins, 2008), and personality features that define sensitivity to particular aspects of a given situation (Rusting, 1998). On the other hand, constructivist models of emotions assert that there are events or scenarios that operate like prototype events triggering particular emotions. Such an event-emotion relation is founded on a biological basis as well as on cultural history (Jasielska, 2013). Two kinds of events in particular, namely success and failure, seem to have very clear emotional connotations. The signal that is sent when subgoals are attained prompts the individual to keep the same line of action. When a goal is not reached, a different emotion signal is sent (presumably taking the form of experiencing frustration or sadness), which encourages the individual to disengage from that goal. Furthermore, feedback provides information about the level of performance and, indirectly, about the likelihood of achieving success (Kluger & DeNisi, 1996; Łukaszewski, 2002). It also allows monitoring progress towards the achievement of the goal (Carver & Scheier, 1998) and regulates the effort invested in action (Venables & Fairclough, 2009). It is acknowledged that positive feedback increases engagement in action, while negative feedback decreases active involvement. Results of research partially confirm this assertion (e.g., Boggiano & Barrett, 1985). Carver and Scheier (2011) claim that feedback about successful performance, provided on the way to goal attainment, may, by inducing positive affect, increase the effort invested in achieving the objective and thus sustain the motivation for further action. However, the meta-analysis of 131 scientific papers performed by Kluger and DeNisi (1996) revealed that although positive feedback improved the average efficiency of performance, for example by leading to the setting of more ambitious goals and to increased effort (Bandura, 1997; Williams, Donovan, & Dodge, 2000), it also had some negative effects, as it decreased effort in one third of the cases or even led to cessation of the action. Moreover, it was found that the kind of feedback did not significantly differentiate the results obtained. Negative feedback most often led to an improvement in performance, unless it was extremely adverse. According to Carver and Scheier's self-regulation theory (1998), negative feedback should result in the intensification of efforts in order to reduce the detected discrepancy between the standard and the level of performance (negative feedback loop regulation). However, some people lower their standards in response to negative feedback (downward goal revision following negative feedback) instead of increasing their efforts (Kluger & DeNisi, 1996; Williams, Donovan, & Dodge, 2000). The lack of the expected motivational advantage of positive feedback may be due to the fact that positive feedback is treated as information about success, that is, about reaching a target level of performance with which the individual is satisfied, and thus a withdrawal of effort and a reduction of motivation to continue the task take place (Wright & Brem, 1989).
One of the shortcomings of the studies analyzed by Kluger and DeNisi (1996) is that the emotional response to the feedback obtained was not controlled, even though such a reaction is a key mediating agent of feedback-performance relations (Ilies & Judge, 2005; Ilies, Judge, & Wagner, 2010), since the feedback-standard comparison produces an evaluation and an emotional response whose motivational properties shape further behavior (Carver & Scheier, 1998). Another important element, which the meta-analysis performed by Kluger and DeNisi (1996) found to be absent, are the individual variables that mediate the impact of feedback on performance and, moreover, play the role of moderators of the emotional reaction to feedback. Results of studies done by Higgins and his collaborators (Förster, Grant, Idson, & Higgins, 2001; Idson & Higgins, 2000; Van-Dijk & Kluger, 2011) indicate, for instance, that promotion and prevention focus moderates the motivational effect of feedback on performance.

Promotion vs. prevention orientation

The promotion-focused vs. prevention-focused regulatory mode distinguished by Higgins (Higgins, 1997; Idson & Higgins, 2000; Scholer & Higgins, 2008) is a psychological dimension associated with sensitivity to feedback. It differentiates both a person's affective response to feedback (Higgins, Shah, & Friedman, 1997) and a person's further engagement in the performance (Molden, Lee, & Higgins, 2008). Promotion-focused regulation is associated with advancement needs, accomplishments, and aspirations, while prevention-focused regulation emphasizes safety, responsibility, and security needs (Higgins, 1997). A particular regulatory mode can be either situational (Crowe & Higgins, 1997) or chronically used, in which case it acquires the character of an individual difference (Higgins, 1997). People guided by promotion-focused regulation pay attention to gains, and achieving positive results becomes the aim of their actions. On the other hand, those with prevention-focused regulation are vigilant to information about mistakes made, and thus protecting oneself against the occurrence of errors constitutes the purpose of their activity (i.e., they protect themselves against threats and strive to ensure that there is no loss; Molden et al., 2008). Such variation in sensitivity to a particular type of information (gain vs. loss) will consequently lead to different levels of engagement in goal pursuit, depending on the type of feedback received. Engagement in activity should increase after defeat in prevention-focused individuals, whereas in promotion-focused ones it should increase after success (Higgins & Spiegel, 2004; Idson & Higgins, 2000; Scholer & Higgins, 2008). Van-Dijk and Kluger (2011) obtained similar results in an experimental study involving 131 participants. Their research confirmed the assertion that under a situationally induced promotion-focused orientation, positive feedback increases motivation far better than negative feedback. In the case of an induced prevention-focused orientation this relation is reversed: it is the negative feedback that increases motivation much more than the positive feedback. The aforementioned results gained additional confirmation in the recent studies of Shu and Lam (2011) and Jarzebowski, Palermo, and Van de Berg (2012).

Engagement

We use the term cognitive engagement for a construct concerning effort, persistence, and concentration on a cognitive activity.
As such, cognitive engagement is expressed in the time spent and/or the effort taken to deal with a particular activity or situation (Frijda, 2012). We stress that one of the key features prompting an individual's willingness or capacity to engage is sensitivity to particular aspects of events, resulting from the individual's values, aims, and needs. Therefore, engagement, especially when it is not motivated externally, occurs when the behavior meets personal goals or motivational orientations and is maintained by appropriate emotions. Fredrickson (2001) presents a different approach to engagement. She distinguishes emotional engagement that stems from and is maintained by affective responses such as interest, excitement, and stress. Such emotional engagement in the task might arise if the task is important for the individual. However, the more important the task is, the greater the risk that it will induce anxiety (Podsakoff, LePine, & LePine, 2007). The present study concerned a particular kind of engagement, namely the cognitive one. It was assessed through a complex procedure in which participants were to engage three times in a categorization task involving pairing pictures.

Research objectives

The first aim of the study was to analyze the emotional reaction to two different types of feedback. The first one was positive feedback providing information that about 75% of the task was done correctly. This type of feedback is one of the most commonly used; it is positive but still leaves room for improving one's level of performance. The second type was feedback about success vs. failure. Depending on the experimental condition, participants were provided with information that they had performed better or worse than before. We predicted that the emotional reaction evoked by feedback would depend on the previous emotional state as well as on promotion-prevention-focused orientation. It was expected that excitement and feeling pleased would be characteristic emotional responses to the first, positive feedback among promotion-oriented individuals. On the other hand, it was expected that prevention-oriented participants would rather react to such feedback with calmness. Further, it was predicted that feedback about success would strengthen these emotional responses, whereas information about failure should result in feeling depressed if individuals are promotion-oriented, and in feeling tense or uneasy if a person is more prevention-oriented. The second aim of the study was to verify the pure effect of emotions and of promotion-focused and prevention-focused orientation, as well as of the interaction between them, on cognitive engagement in the task. We expected that promotion-oriented individuals would interpret the first positive feedback as information along the lines of "you are on the right track, go on, it could be better" and as a consequence would be more cognitively engaged in the second categorization task. More prevention-oriented individuals would rather interpret such feedback as "it is not bad, I did not fail, there is no need to be more engaged". The third aim of the study was to analyze how situational conditions like success and failure modify the contribution of emotions and promotion-prevention-focused orientation to cognitive engagement in the task.
As in previous studies (e.g., Idson & Higgins, 2000; Pikuła & Wytykowska, in preparation; Shu & Lam, 2011; Van-Dijk & Kluger, 2011), we expected that cognitive engagement would increase after failure in prevention-oriented individuals, and after success among those promotion-oriented.

Method

Participants

One hundred and ninety senior secondary school students (108 women and 82 men; M age = 18.6; SD = .27) participated in the study. Fourteen participants who did not complete the scales (Asendorpf, 2010) were excluded from the final analyses.

Design and procedure

Participants were randomly assigned to one of three experimental conditions, each with a particular content of the feedback provided after the second categorization task: (1) success ("you did better than before"); (2) failure ("you did worse than before"); (3) control ("you did the same as before"). Initially, participants were asked to complete three personality questionnaires (the BIS/BAS scale and two other measures assessing promotion and prevention orientation), followed by an experimental, fully computerized procedure assessing experienced emotions, the expected level of performance, and cognitive engagement. The procedure had a three-stage structure. In the first stage, participants' emotions were assessed, and right afterwards the first categorization task began. Fifteen pictures appeared on the right side of the computer screen. Participants were instructed to find as many correct pairs of matching pictures as they could. In order to pair two pictures, one had to double-click each of them, one after another. Afterwards, the paired pictures were to be named, and participants were asked whether they wanted to continue making pairs or to finish the task. When the categorization came to an end, participants obtained the first feedback (the same for all conditions), stating "you were able to find X% of correct pairs", where X varied randomly within the interval 74-77%. In the second stage, participants' emotions were assessed again, but before the second categorization task appeared on the screen, a measurement of the expected level of performance in the second trial took place. The second categorization task was the same as the first one; only a different set of pictures was used. When participants finished the task, they were again provided with feedback, this time different in every experimental group (success condition: "you did about X% better than before"; failure condition: "you did about X% worse than before"; control condition: "you did the same as before"; where X varied randomly within the interval 4-7%). The subjective meaning of this percentage interval had been assessed in a pilot study, and such variation was used to make the feedback more realistic. In the third and last stage of the experimental procedure, after receiving the second feedback, participants' emotions were assessed again. Then participants had to decide whether to proceed to the third categorization or not; the decision time was recorded. Subsequently, participants did the third categorization, after which emotions were assessed once more, just as in the first stage. The schema of the experiment is presented in Table 1.

Materials

Promotion vs. prevention orientation was measured using the Polish version of the Regulatory Focus Questionnaire (RFQ; Higgins, Friedman, Harlow, Idson, Ayduk, & Taylor, 2001; Polish version: Pikuła, 2012). The RFQ refers to orientations (i.e.,
anticipatory goal reactions) to new tasks or goals that are formed on the basis of a subjective history of success or failure in promotion and prevention self-regulation. The questionnaire consists of eleven items divided into two scales. The first scale measures the promotion focus and consists of 6 items, such as "Compared to most people, are you typically unable to get what you want out of life?" (a reversed item). The second scale measures the prevention focus and includes 5 items, such as "Did you get on your parents' nerves often when you were growing up?". Participants are instructed to assess how frequently specific events actually occur or have occurred in their life using a 5-point Likert scale from 1 (never or seldom) to 5 (very often). In the Polish version both scales have satisfactory reliability: α = .67 for the promotion scale and α = .82 for the prevention scale. Similar differences between Cronbach's alpha values are observed for the original scale (Higgins et al., 2001, p. 8). The mean score for the prevention scale was M = 3.40 (SD = .50), while for the promotion scale it was M = 3.33 (SD = .40). The promotion-prevention-focused orientation indicator was computed by subtracting the mean score of the prevention scale from that of the promotion scale (this solution has been adopted from Higgins, Friedman, Harlow, Idson, Ayduk, & Taylor, 2001; Kolańczyk, Bąk, & Roczniewska, 2013). The higher the score, the more promotion-focused the person is. In some analyses this indicator was used as a three-level nominal variable created by cutting the distribution at half a standard deviation (0.5 SD = .34) below and above the mean score (M = .0096). This variable had three levels, in which 3 (N = 52) meant promotion orientation, 2 (N = 67) a balance between promotion and prevention orientation, and 1 (N = 51) prevention orientation.
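As a concrete illustration of the scoring just described, here is a minimal sketch of how the promotion-prevention indicator and its three-level version could be computed. This is a sketch only: the input arrays and function names are hypothetical, and only the subtraction rule and the cutoffs at the sample mean plus/minus 0.5 SD come from the text.

```python
import numpy as np

def pp_indicator(promotion_means, prevention_means):
    """Promotion-prevention index: promotion scale mean minus prevention scale mean."""
    return np.asarray(promotion_means, float) - np.asarray(prevention_means, float)

def trichotomize(index):
    """Three-level nominal variable with cutoffs at the sample mean +/- 0.5 SD
    (in the reported sample, M = .0096 and 0.5 SD = .34)."""
    m, half_sd = index.mean(), 0.5 * index.std(ddof=1)
    levels = np.full(index.shape, 2)      # 2 = balance between both orientations
    levels[index >= m + half_sd] = 3      # 3 = promotion orientation
    levels[index <= m - half_sd] = 1      # 1 = prevention orientation
    return levels
```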
Individual differences in BIS and BAS sensitivity were assessed using the BIS/BAS scale (Carver & White, 1994) in the Polish adaptation by Wytykowska (Mueller & Wytykowska, 2005). Since this construct was not analyzed in the present paper, we refrain from an extended presentation of the scale. Emotions were assessed using a short scale based on the items used by Higgins, Shah, and Friedman (1997). Eight emotions were taken into account: feeling depressed, tense, uneasy, discouraged, excited, pleased, interested, and calm. Based on Carver and Scheier's self-regulation theory (1998; Carver, Sutton, & Scheier, 2000; Roczniewska & Kolańczyk, 2014), it was expected that feeling depressed would be a reaction to failure in promotion-focused individuals, while prevention-focused ones would react in such circumstances with feeling tense or uneasy. In the case of success, prevention-focused individuals would react with feeling calm, and promotion-focused ones would feel excited and pleased. The emotions of curiosity and discouragement were introduced as emotions which, respectively, maintain and reduce engagement in the activity. Participants were asked to rate on a 6-point scale (not at all, a little, moderately, quite much, much, very strongly) the extent to which they experienced the abovementioned emotions at the time of measurement. The same scale was used four times, for all assessments of experienced emotions, each time with the emotions presented in random order. Expected level of performance was measured on a 3-point scale, where 1 indicated "I'll do worse", 2 "I'll do the same", and 3 "I'll do better". This variable was analyzed only as a predictor of cognitive engagement in the second categorization. Distribution of the expectancy scores showed that only 3% of the sample expected to perform worse in the next categorization, 54% expected the same level of performance, and 43% expected to perform better. Due to this fact, we excluded the 3% of such cases and dichotomized the variable. Pictures used in the experiment were either taken from a book (Horne & Wootton, 2010) or prepared by Pikuła and Kwiatkowska during a master's seminar. Each of the 45 pictures (15 pictures per set) presented a single item, such as milk, a cow, a bicycle, or a piano. Since no rule was provided on how to categorize the pictures, participants could use concrete as well as more metaphorical rules to come up with matching pairs. In order to strengthen their cognitive engagement in categorization, they were asked to name each created pair. The number of pairs created was the indicator of cognitive engagement.

Emotional reactions to the first feedback. The modifying role of promotion-prevention focus orientation

First, we analyzed the correlations between promotion-prevention focus and emotions experienced before the beginning of the experiment. Results of the correlation analysis showed that promotion orientation correlated significantly and positively with curiosity (r(168) = .248, p < .01) and excitement (r(168) = .188, p < .05), and negatively with the feeling of unease (r(168) = -.174, p < .05). Prevention focus was not significantly related to any emotions. Thus, the type of orientation shapes the emotional attitude towards the task. In order to check the contribution of the first positive feedback ("you were able to find around 75% of correct pairs") to experienced emotions, taking into account promotion-prevention orientation, an analysis of variance with repeated measures was conducted, with promotion-prevention orientation as a three-level between-subjects factor (promotion vs. balance vs. prevention) and the repeated measure of each emotion as a within-subjects factor. Due to the number of analyses, only statistically significant results will be presented. Excitement. The analysis revealed a main effect of the repeated measure, F(1,166) = 16.19, p < .001, eta2 = .089. Feedback increased the excitement level (M1 = 3.35, M2 = 3.77). Analysis of the simple effects of the interaction of promotion-prevention orientation and the repeated measure of excitement revealed that the level of promotion-prevention orientation differentiated the excitement level in the first measurement, before receiving feedback, F(2,166) = 5.65, p < .01, eta2 = .064. Participants with a dominating promotion orientation were more excited (M = 3.75) than those with a predominant prevention orientation (M = 3.00). What is more, feedback increased the level of excitement in participants with a predominant prevention orientation.

Emotional reaction to the second feedback about success or failure. The modifying role of promotion-prevention orientation

Several three-way ANOVAs with repeated measures were conducted. The between-subjects factors were feedback (success vs. failure) and promotion-prevention orientation (promotion vs. balance vs. prevention), while the within-subjects factor was a single emotion measured before and after the feedback was provided. The dynamics of the emotional change were analyzed between the second and the third measurement of emotions (see the schema of the experimental design in Table 1). Only statistically significant results are presented.
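Before the results, here is a minimal sketch of how a mixed-design ANOVA of this kind (one between-subjects factor, one repeated measure) could be run in Python. The long-format data frame, its column names, and the use of pingouin are all assumptions made for illustration, not the software or data used by the authors.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n = 30  # hypothetical participants, 10 per orientation group

# One row per participant per measurement occasion (long format).
ids = np.repeat(np.arange(n), 2)
orientation = np.repeat(np.tile(["promotion", "balance", "prevention"], n // 3), 2)
time = np.tile(["pre", "post"], n)
excitement = rng.normal(3.5, 1.0, 2 * n).round(1)  # placeholder ratings

df = pd.DataFrame({"id": ids, "orientation": orientation,
                   "time": time, "excitement": excitement})

# Mixed ANOVA: orientation (between) x time (within) on excitement.
aov = pg.mixed_anova(data=df, dv="excitement", within="time",
                     subject="id", between="orientation")
print(aov[["Source", "F", "p-unc", "np2"]])  # np2 = partial eta squared
```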
Calmness. Results showed a statistically significant simple effect of the interaction between prevention orientation and the feedback of success for the calmness dynamic, F(1,111) = 7.81, p < .01, eta2 = .066. In prevention-oriented participants, calmness increased after the feedback of success was provided (M2 = 4.12, M3 = 4.54). Feeling pleased. The two-way interaction of the dynamic of feeling pleased and feedback was significant, F(1,112) = 3.97, p < .05, eta2 = .05. The feedback of success generally increased the level of feeling pleased (M2 = 3.60, M3 = 4.10). The analysis of simple effects showed that feeling pleased increased (M2 = 4.10, M3 = 4.69) after the feedback of success was provided mainly within the group of promotion-oriented subjects, F(1,112) = 3.73, p < .05, eta2 = .04.

The effect of emotions, promotion-prevention orientation, and their interaction on cognitive engagement

To check how emotions, promotion-prevention orientation, and their interaction predict cognitive engagement, hierarchical regression analyses were conducted. The first hierarchical regression analysis was conducted for cognitive engagement measured as the number of pairs created in the first categorization. For the regression analysis, only those emotions were chosen that were significantly related to cognitive engagement. Therefore, in the first step curiosity was entered as a predictor, in the second step promotion-prevention orientation was entered, and in the third step the interaction between them. The results showed that curiosity is a significant predictor of cognitive engagement, ΔR2 = .101, F(1,164) = 18.408, p < .001, indicating that curiosity promotes greater cognitive engagement (b = .318, p < .001). Promotion-prevention orientation also turned out to be a significant predictor of cognitive engagement, explaining an additional portion of variance, ΔR2 = .05, F(1,163) = 14.23, p < .001. These results show that being more promotion-oriented might result in being more cognitively engaged (b = .318, p < .001). The interaction was not a statistically significant predictor, ΔR2 = .02, p = .491. The summary of the results is displayed in Table 2. To explore the character of the relationship between curiosity, promotion-prevention orientation, and cognitive engagement, a mediation analysis was performed. Because promotion-prevention orientation is a variable describing a relatively stable individual attitude that promotes experiencing particular kinds of emotions (for a review: Scholer & Higgins, 2008), it was assumed that promotion-prevention orientation would be a predictor of engagement, while curiosity would mediate this relationship. We tested a mediation model following the procedure described by Hayes (2012). Model 4 from the PROCESS macro (Hayes, 2012) was used, with 10,000 bootstrap resamples requested. The model is evaluated by comparing the direct effect (the effect of X on Y when controlling for the mediator) with the indirect effect (the portion of the effect of X on Y transmitted through the mediator), i.e., the mediation index. If the indirect effect is significant, then mediation occurs. Results showed that the total effect model was statistically significant, F(1,164) = 12.04, p < .001. As Figure 1 illustrates, the unstandardized coefficient between promotion and curiosity was statistically significant, as was the unstandardized coefficient between curiosity and cognitive engagement. The bootstrap unstandardized indirect effect was .95, and the 95% confidence interval ranged from .31 to 1.62.
Since zero was not in the confidence interval, we could conclude that the indirect effect was statistically significant. For the direct effect, the bootstrap unstandardized effect was .42 and the 95% confidence interval ranged from -.22 to 1.03; since it included zero, the direct effect was statistically insignificant. The mediation is full, since the direct effect is not significant while the indirect effect as well as the total effect remain statistically significant (Hayes, 2012). The second hierarchical regression analysis was conducted for cognitive engagement measured as the number of pairs created in the second categorization, after the first positive feedback. Since before the second categorization participants had estimated their expectation of success in it, the expectation of success was entered into the model as a predictor, as well as its interactions with emotion and with promotion-prevention orientation. For the regression analysis, only those emotions were chosen that were significantly related to cognitive engagement. Again, curiosity turned out to be positively related to cognitive engagement, r = .343, p < .001. In the first step, the expectancy of success, curiosity, and promotion-prevention orientation were entered, while in the second step we entered the interaction between promotion-prevention orientation and expectancy of success as well as between expectancy and curiosity. Neither the first model, ΔR2 = .03, F(3,162) = 1.49, p = .219, nor the second one, ΔR2 = .006, F(5,160) = 1.49, p = .342, was statistically significant. The results are presented in Table 3. They indicate that neither the expectancy of successful performance, nor curiosity or promotion-prevention orientation, nor their interactions allowed predicting cognitive engagement.

The effect of success and failure, emotions, and promotion-prevention orientation on cognitive engagement

Cognitive engagement was measured as the number of pairs created in the third categorization. Participation in the third categorization depended on the individual decision whether to engage in another task or not; 117 participants decided to do the third categorization. Since the present research focuses on the impact of success or failure on cognitive engagement, the analysis did not include the control group, in which the feedback informed, "you did the same as before".

[Table 2. Regression analysis evaluating the independent contribution of curiosity, promotion-prevention orientation, and their interaction to cognitive engagement in the first categorization. Note: * p < .05]

Thus, the final analyses included 75 people, of which 35 were from the "success" group and 40 from the "failure" group. To examine significant predictors of cognitive engagement, a hierarchical regression analysis was conducted. In the first step, the feedback about success or failure, curiosity, feeling pleased, and promotion-prevention orientation were entered, while in the second step the interactions between feedback and the emotions as well as between feedback and promotion-prevention orientation were added. Results are shown in Table 4. While the first model was not statistically significant, ΔR2 = .05, F(4,68) = .86, p = .50, the second one was, ΔR2 = .164 (p < .01), F(7,65) = 2.48, p < .05. These results indicate that emotions as well as promotion-prevention orientation predict cognitive engagement, but these effects depend on the success or failure condition.
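Before exploring these interactions through the moderation models below, here is a minimal sketch of the percentile-bootstrap logic behind the PROCESS-style model-4 mediation test reported above (with X = promotion-prevention orientation, M = curiosity, Y = number of pairs created). This illustrates the general method under stated assumptions, not the authors' exact macro or data.

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=10_000, seed=0):
    """Percentile-bootstrap CI for the indirect effect a*b in a simple
    mediation: a from regressing M on X, b from regressing Y on M
    controlling for X (unstandardized OLS coefficients)."""
    rng = np.random.default_rng(seed)
    x, m, y = map(np.asarray, (x, m, y))
    n = len(x)

    def ab(idx):
        a = np.linalg.lstsq(np.column_stack([np.ones(n), x[idx]]),
                            m[idx], rcond=None)[0][1]
        b = np.linalg.lstsq(np.column_stack([np.ones(n), x[idx], m[idx]]),
                            y[idx], rcond=None)[0][2]
        return a * b

    boots = np.array([ab(rng.integers(0, n, n)) for _ in range(n_boot)])
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return ab(np.arange(n)), (lo, hi)  # point estimate and 95% CI
```

If the 95% interval excludes zero, the indirect effect is judged significant, which is the criterion applied in the analysis above.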
To explore the nature of these interactions, three moderation models were tested using the PROCESS macro (model 1) prepared by Hayes (2012), with 10,000 bootstrap resamples. In the first moderation model, promotion-prevention orientation was the predictor, the feedback was the moderator, and cognitive engagement was the dependent variable. The model was significant, F(3,69) = 2.74, p < .05. The bootstrapped conditional effects are presented in Table 5. The obtained results show that, after the feedback about success was provided, the promotion-oriented subjects were more cognitively engaged in categorization. In the second moderation analysis, the predictor was feeling pleased, the feedback was the moderator, and cognitive engagement was the dependent variable. The obtained results show that after the feedback of success was provided, feeling pleased fostered higher cognitive engagement, but only at the level of a statistical tendency.

[Table 3. Regression analysis evaluating the independent contribution of expectancy, curiosity, and promotion-prevention orientation, and their interaction to cognitive engagement in the second categorization]

[Table 4. Regression analysis evaluating the independent contribution of feedback of success or failure, curiosity, uneasiness, feeling pleased, promotion-prevention orientation, and their interaction to cognitive engagement in the third categorization]

Discussion

The present study had three major research objectives. Firstly, it was designed to answer the question of how emotional reactions are shaped in response to feedback and whether they depend on promotion or prevention orientation. The second aim was to answer the question of how emotions and promotion-prevention orientation influence cognitive engagement in the task. The third objective was the analysis of the influence of feedback about success and failure, of emotions, and of promotion-prevention orientation, as well as the relations between them, on cognitive engagement. The obtained results revealed that the emotions experienced before taking part in the experiment were determined by promotion focus, while prevention focus by no means differentiated the emotions occurring before the task. The stronger the promotion-focus orientation was, the more intense the excitement and curiosity one experienced. Such a pattern of results is consistent with findings from other studies showing that eagerness is a particular emotion accompanying promotion-focused regulation during an activity in which an individual might gain something, develop, or move forward (Siegel & Higgins, 2001). It is an emotion that energizes actions and sustains engagement. Experiencing curiosity together with excitement is, on the one hand, an effect of their close ties (Frijda, 2004), as both are placed in the same quadrant of Russell's circumplex model of emotions (Russell, 1980). Experiencing curiosity results in an increase of motivation and engagement in the exploration of the environment (Izard & Ackerman, 2000). Indeed, this function of curiosity finds confirmation in the observed overall mediating role of curiosity in the level of engagement in the first categorization task. The observed outcomes showed the moderating role of promotion-prevention orientation in shaping the emotional response to positive feedback (that is, the feedback that does not clearly indicate whether the task has finished with success or defeat).
Individuals with a predominant promotion orientation reacted to positive feedback with an increase of excitement, whereas prevention-focused individuals reacted with a decrease of uneasiness and increased curiosity. These findings are consistent with the work of Higgins' team (e.g., Higgins & Spiegel, 2004), as well as of Kolańczyk (2004), who found that promotion focus is regulated by aims defined as gains or advancement and by nurturance needs. In such self-regulation, information about the positive effects of action evokes positive emotions supporting action. The reaction of excitement is almost synonymous with the emotion of eagerness, which occurs as a dominating emotional state when seeking means of advancing success. Therefore, these results suggest that the obtained feedback was taken "as good fortune" by promotion-focused individuals, as the type of information processing operating under promotion-focused regulation is characterized by a concentration on those elements that support further activity and bring the person closer to the goal (for a review see Scholer & Higgins, 2008). Individuals with a predominant prevention orientation reacted to positive feedback with a decrease of uneasiness and an increase of curiosity. This result complies with how the aim of action is defined in "non-loss" prevention-focused regulation. Positive feedback provides information about achieving this goal: one managed to prevent the loss, and thus distress decreases (see also Carver & Scheier, 2011). What is interesting is that after the positive feedback was provided, prevention-focused individuals experienced increased curiosity. One may suspect that the factor which enabled the appearance of this positive emotion was the earlier described decrease in anxiety. Individuals with a promotion orientation approached the task with a higher level of curiosity from the very beginning, whereas in prevention-focused ones this emotion could, so to speak, occur only once they became assured that their task performance was not bad. Feedback about success or defeat had an influence on the experienced emotions. As expected, information about success was associated with the experience of being pleased. Interestingly, we were able to detect only the prevention-specific emotional reactions to feedback, while the effects of prevention orientation on cognitive engagement were not significant. Prevention-oriented individuals expressed calmness when confronted with successful feedback and tension in the face of failure. These results again confirm the different emotional consequences of success in goal achievement for promotion and prevention orientations. Prevention-focused regulation is aimed at avoiding losses. When the regulatory standard cannot be achieved, such individuals react with increased tension, which is close to the emotion of fear. On the other hand, they calm down if they manage to protect themselves against loss and failure, which is consistent with other research findings (e.g., Carver & Scheier, 1998; Roczniewska & Kolańczyk, 2014; Liberman, Idson, & Higgins, 2005; Scholer & Higgins, 2008). The relation revealed in this study between promotion orientation and experiencing emotions that promote flourishing (Fredrickson, 2001) confirms the connections of this orientation with optimism and well-being observed in other studies (Grant & Higgins, 2003).
What is more, by providing evidence that the emergence of emotions depends on promotion-prevention orientation, the results of this study deliver further evidence that the individual predispositions shaping the appraisal processes consequently shape the kind of emotional experience (see also Kuppens & Tong, 2010). The analysis of factors fostering cognitive engagement showed that curiosity and promotion focus were positive predictors of cognitive engagement in the first categorization. Furthermore, the mediation analysis revealed that curiosity is a much more significant predictor of engagement. This result confirms the role of curiosity in greater task engagement (Izard & Ackerman, 2005). The analysis failed to determine significant predictors of cognitive engagement in the second categorization, although curiosity proved to be a positive but weak correlate of engagement. One of the factors explaining such an occurrence might be the character of the feedback. Although it was positive, it still left room for improvement, which could lead to a blurring of differences in the level of engagement between more promotion-focused and more prevention-focused individuals. Such a hypothesis also finds support in the data discussed above regarding emotional reactions to positive feedback. Given that curiosity and promotion orientation proved to be significant predictors of cognitive engagement in the first categorization, and that after the first feedback curiosity significantly increased in the prevention-oriented group, all participants began the second categorization with a sufficiently even level of curiosity, so this feeling was no longer a differentiating factor. The analysis of the influence of feedback about success or failure on the level of engagement revealed that such feedback by itself had no significant effect. This result is not surprising in the light of the discussion described above about the motivational consequences of success and failure (for a review: Kluger & DeNisi, 1996). The level of engagement in the third categorization turned out to depend on the feedback about success, the emotion evoked by this information, and promotion orientation. Feeling pleased was also related to a higher cognitive engagement in the last categorization. This result confirms findings from other studies, according to which positive emotions are associated with a greater cognitive engagement (Reschly, Huebner, Appleton, & Antaramian, 2008). Promotion orientation was also associated with a greater cognitive engagement after receiving feedback about success. Such a finding is consistent with the results of studies conducted by Higgins and collaborators (for a review see Scholer & Higgins, 2008; Higgins & Spiegel, 2004). These findings also confirm the previous result obtained by Pikuła (2012), showing that the increase of engagement depended on promotion orientation but only after receiving information about success. In Pikuła's research, the increase of engagement also depended on prevention focus, but only after feedback of failure. The fact that most of the performed analyses revealed mainly an active role of promotion-focused orientation in shaping cognitive engagement, as well as in emotional reactions, may be a result of how the experiment was structured. The first, positive feedback shaped the experimental situation as one better fitting promotion-focused regulation.
Hence, it demonstrates that certain individual characteristics begin to regulate behaviour more strongly when the situation contains an element to which a particular individual characteristic is sensitive. Prevention-focused regulation manifested itself mainly in the emotional domain, in a manner consistent with how the emotional response to the achievement of goals, or the lack of accomplishment, runs in the case of prevention-focused regulation. This is confirmed by Higgins's studies on regulatory fit (Scholer & Higgins, 2008), that is, the situation in which the regulatory characteristics of promotion and prevention orientations manifest themselves predominantly in situations compliant with the regulatory standards of a given orientation (an opportunity for gain for promotion, and a possibility to avoid a loss for prevention). In a previous study (Pikuła, 2012; Pikuła & Wytykowska, in preparation), the same experimental design was used with one exception: payment for the categorization task. The results showed that engagement in categorization was predicted mainly by promotion and prevention focus. What is more, it may be assumed that emotions were ruled out of the regulation of behaviour, as they were significantly related neither to prevention or promotion focus, nor to the feedback provided, nor to engagement in any of the three categorization tasks. This could suggest that when the motivation to maximize gains and minimize losses is activated by the possibility of earning or losing money, participants might tend to ignore their emotions in order to achieve their aims. A situation which may bring more tangible gains or losses activates basic approach and avoidance motivation; one decides to approach something if it enables one to achieve a desired goal and maximize gains. To sum up, the presented study revealed and confirmed the role of positive emotions in cognitive involvement and in shaping expectations regarding future outcomes. In addition, it replicated the findings of Higgins and his collaborators and provided further evidence for acknowledging the crucial role individual characteristics play in shaping emotional reactions to a situation.

Limitations and future research

In order to improve the quality of the study and broaden its scope, several steps could be undertaken. First, the assessment of emotional reactions could be more indirect, in order to weaken the potential impact of the need for social approval embedded in self-report measurement. A scale that could provide more reliable information about the current mood is the IPANAT, in the Polish adaptation by Wróbel (2012). Additionally, the assessment could be more extensive, taking into account several other, more or less complex affective states (such as restlessness, frustration, pride, or hope) and allowing the examination of the clarity of emotions, for example by means of response-time measurements. Secondly, it would be worthwhile to control, at different moments of the research (that is, before, during, and after), participants' evaluations of how important engagement in the experiment/categorization tasks is for them. Future research should also take into account gender and age differences in affectivity (Jasielska & Szczygieł, 2007; Szczygieł, 2007), which may influence cognitive engagement.

Acknowledgment: This research was supported by grant NN 106 282839 given to Agata Wytykowska.
2019-05-09T13:11:30.332Z
2015-09-01T00:00:00.000
{ "year": 2015, "sha1": "3c259fa40947a613d41c1b30e9b51f769bd53de6", "oa_license": null, "oa_url": "http://journals.pan.pl/Content/99880/PDF/04.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "2936fd76013b27b5234e50eadb612501175c9993", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
237452416
pes2o/s2orc
v3-fos-license
Benzocyclobutene-functionalized hyperbranched polysiloxane for low-k materials with good thermostability

ABSTRACT Although hyperbranched polysiloxanes have been extensively studied, they have limited practical applications because of their low glass transition temperatures. In this study, we synthesized a benzocyclobutene-functionalized hyperbranched polysiloxane (HB-BCB) via the Piers-Rubinsztajn reaction. The synthesized material was cured and crosslinking occurred at temperatures greater than 200 °C, forming a low-k thermoset resin with high thermostability. The structure of the resin was characterized using nuclear magnetic resonance (NMR) spectroscopy, viz. 1H NMR and 13C NMR spectroscopy. 29Si NMR spectroscopy was used to calculate the degree of branching. Differential scanning calorimetry, dynamic mechanical analysis, and thermogravimetric analysis revealed that the cured resin possesses good high-temperature mechanical properties and exhibits a high thermal decomposition temperature (Td5 = 512 °C). In addition, the cured resin has a low dielectric constant (k = 2.70 at 1 MHz) and a low dissipation factor (2.13 × 10−3 at 1 MHz). Thus, the prepared resin can function as a low-k material with excellent high-temperature performance. These findings indicate that the performance of the crosslinked siloxane is significantly attributable to the introduction of BCB groups and the formation of the highly crosslinked structure.

Introduction

Signal delay of interconnects is the main obstacle in the development of integrated microelectronic devices with high density. To overcome this limitation, it is necessary to develop high-performance materials with low dielectric constants, i.e., low-k materials [1]. An ideal low-k material needs to exhibit a low dielectric dissipation factor, good thermal stability, and a low coefficient of thermal expansion. Various low-k organic materials have been developed, such as polyimides [2-5], organic siloxanes [6], polybenzoxazoles [7,8], epoxy resins [9,10], etc. Among them, benzocyclobutene (BCB) resins are considered to be high-performance materials with a wide range of potential applications [3,11-15]. The ring-opening reaction of cyclobutene in BCB, followed by crosslinking and polymerization, forms a highly crosslinked network structure [16]. The low polarity of this network results in a reduced dielectric constant and low water absorption. In addition, the high extent of crosslinking enhances the thermal stability and increases the glass transition temperature of BCB-based resins [17]. Moreover, BCB can be thermally cured at relatively low temperatures without the formation of byproducts. Polysiloxanes are another important class of insulating materials that exhibit good thermal and chemical stabilities. However, their low glass transition temperature (Tg) limits their application as high-performance dielectric materials. To overcome this limitation, crosslinkable groups such as BCB can be introduced into the polysiloxane network to increase Tg by thermal curing of BCB. Many methods have been reported to incorporate BCB into siloxane polymers, such as the Heck reaction [17,18], the hydrosilylation reaction [19], and the hydrolysis and condensation of BCB-functionalized oxosilanes [12,20]. The Piers-Rubinsztajn (P-R) reaction has attracted considerable attention as an effective and facile approach for the synthesis of polysiloxanes [21-24].
Unlike other reactions, which require precious metal catalysts such as platinum in the hydrosilylation reaction and palladium in the Heck reaction, the P-R reaction is catalyzed by tris(pentafluorophenyl)borane [B(C6F5)3], a strong Lewis acid [25]. In addition, this reaction is rapid even under mild conditions and leads to the release of an alkane gas, which aids in monitoring the reaction progress. The generation of Si-OH groups is also prevented in the P-R reaction; thus, the product formed has a low water absorption rate [6]. Recently, we prepared linear BCB-functionalized polysiloxane oligomers by the P-R reaction of a BCB-functionalized dimethoxysilane monomer (A2) with silphenylene/silbiphenylene hydrosilanes (B2) [26]. After curing, the linear oligomers showed low dielectric constants of 2.76 and 2.78 at 1 MHz. However, the synthesis of the monomers A2 and B2 was complicated, and the dielectric constant could be decreased further. Hyperbranched polymers usually exhibit different properties compared to linear polymers because of their unique structure, in which numerous reactive functional groups are concentrated at the chain terminals [27]. Compared with a linear polymer, a hyperbranched polymer presumably possesses a larger free volume, owing to the formation of porous voids between different chains in the structure. Thus, the formation of a hyperbranched structure could be an effective approach to reduce the dielectric constant of polymers [7,28,29]. Based on this presumption, we synthesized a BCB-functionalized hyperbranched polysiloxane (HB-BCB) by performing the P-R reaction with commercial diethoxymethylsilane (an AB2-type monomer) to construct a hyperbranched polysiloxane, and then substituting the remaining Si-OEt groups with Si-H-functionalized BCB groups in a cascade manner. This is a facile and convenient method to prepare BCB-functionalized siloxane polymers. After curing, the material exhibited a low dielectric constant of 2.70 at 1 MHz, good high-temperature performance, and high thermostability.

Characterization methods

Nuclear magnetic resonance (NMR) spectroscopy, viz. 1H NMR, 13C NMR, and 29Si NMR, was conducted using a Bruker DRX-500 spectrometer with tetramethylsilane (TMS) as the reference and CDCl3 as the solvent. Fourier transform infrared (FT-IR) spectra were recorded on a Bruker VERTEX 70 FT-IR spectrophotometer. Differential scanning calorimetry (DSC; TA Q200 calorimeter) and thermogravimetric analysis (TGA; TA Instruments Q500) were performed at a heating rate of 10 °C min−1 under a N2 atmosphere. Dynamic mechanical analysis (DMA; TA Instruments Q800) was conducted using the three-point bending mode at a heating rate of 3 °C min−1 and a test frequency of 1 Hz. The dielectric properties were measured by the parallel-plate capacitor method using a Keysight E4980A precision LCR meter and a Keysight 16451B test fixture at room temperature. The molecular weight was determined using gel permeation chromatography (GPC; Waters 1515 system) equipped with a refractive index (RI) detector, using tetrahydrofuran (THF) as the solvent at a flow rate of 1.0 mL min−1. Calibration was accomplished with polystyrene standards.
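As a side note on the parallel-plate method mentioned above, the relative permittivity follows from the standard capacitor relation k = C·d/(ε0·A). A minimal sketch is given below; the numeric values (sample thickness, electrode diameter, measured capacitance) are hypothetical placeholders, not measurements from this work.

```python
from math import pi

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def relative_permittivity(capacitance_f, thickness_m, electrode_diameter_m):
    """k = C * d / (eps0 * A) for a disc-shaped parallel-plate sample."""
    area = pi * (electrode_diameter_m / 2) ** 2
    return capacitance_f * thickness_m / (EPS0 * area)

# Hypothetical example: a 1 mm thick disc on a 38 mm electrode reading ~27 pF.
print(relative_permittivity(27e-12, 1e-3, 38e-3))  # roughly 2.7
```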
Preparation of HB-BCB

In a 25 mL three-necked round-bottomed flask equipped with a magnetic stirring bar, DEMS (1.34 g, 0.01 mol) and toluene (5 mL, 4.28 g) were added under a nitrogen atmosphere using a three-way stopcock, and the reactor was placed in an ice-water bath. While stirring, B(C6F5)3 (4 mg, 0.0078 mmol) in toluene was injected into the reaction mixture using a syringe. The ice-water bath was then removed to increase the reaction rate. The generation of bubbles indicated the initiation of the P-R reaction. After the generation of bubbles stopped, BCBSiH (1.62 g, 0.01 mol) was slowly added to the reaction mixture using a syringe. The resulting solution was stirred for 1 h. The reaction was terminated by adding Al2O3 to deactivate the catalyst. The mixture was stirred for 10 min and filtered using a filter paper and an ultrafiltration membrane (0.45 μm). The solvent, toluene, was removed by rotary evaporation, and HB-BCB was obtained as a colorless viscous liquid.

Curing of the resin

To prepare the bulk polymer, the glass mold with HB-BCB was degassed in a vacuum oven at 130 °C for 0.5 h, and then heated sequentially at 180 °C/1 h, 210 °C/1 h, 230 °C/1 h, 250 °C/1 h, 260 °C/1 h, and 290 °C/0.5 h under a nitrogen atmosphere. After cooling to 25 °C, the thermoset was removed from the glass mold and polished to an appropriate size for DMA and TGA analyses (Figure 1).

Preparation and characterization of HB-BCB

Scheme 1 shows the synthetic approach used to prepare HB-BCB. DEMS was used as a precursor for the P-R reaction, and a hyperbranched polysiloxane (HB-DEMS) was produced (step 1). Then, BCBSiH was added to substitute the ethoxy groups (-OEt) with the BCB functional groups, which enables the crosslinking of the oligomer (step 2). As measured by GPC, HB-BCB showed a low average molecular weight of 1555 g/mol; thus, it is considered to be an oligomer. The 1H and 13C NMR spectra are shown in Figure 2. The peaks corresponding to the Si-H proton (4.5 ppm) and the Si-OCH2CH3 protons (3.81 and 1.24 ppm) have almost disappeared in the 1H NMR spectrum of HB-BCB, indicating that the P-R reaction reached completion. In addition, the integration traces of peak 1 and peak 3 have a ratio of 3:1, which is in accordance with the stoichiometric ratio of DEMS and BCBSiH in the reaction. However, the formation of a hyperbranched structure cannot be verified solely on the basis of the 1H and 13C NMR spectra, because the branched chains are formed by the Si-O linkages. Thus, the 29Si NMR spectrum of the intermediate, HB-DEMS, was used to characterize the hyperbranched structure (Figure 3). The signals corresponding to the dendritic units (D), linear units (L), and terminal units (T) of the hyperbranched polymer can be clearly observed in the 29Si NMR spectrum of HB-DEMS. It has been reported that, for silicon atoms connected to the same number of oxygen atoms, the introduction of -OEt groups increases the chemical shift value by approximately 7-10 ppm, indicating that larger chemical shift values correspond to Si atoms with more -OEt groups [30]. On the basis of this observation, the peaks in the 29Si NMR spectrum of HB-DEMS were easily assigned. The degree of branching (DB) was calculated by the following equation: DB = (D + T)/(D + L + T) (1), where D, L, and T are the relative amounts of the D, L, and T units, respectively. These amounts can be evaluated from the ratio of the integral intensities of the peaks in the 29Si NMR spectrum [31]. Using equation (1), the DB of HB-DEMS was found to be approximately 0.45.
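As a quick numerical check of equation (1), a minimal sketch follows; the unit fractions below are hypothetical placeholders chosen only to illustrate a value near the reported DB of about 0.45.

```python
def degree_of_branching(d, l, t):
    """Degree of branching from the relative amounts of dendritic (D),
    linear (L), and terminal (T) units (29Si NMR peak integrals)."""
    return (d + t) / (d + l + t)

# Hypothetical normalized integrals, for illustration only.
print(degree_of_branching(d=0.25, l=0.55, t=0.20))  # -> 0.45
```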
In the 29Si NMR spectrum of HB-BCB, there was no peak corresponding to the T unit. In addition, the intensity of the peak attributed to the L unit decreased significantly, possibly due to the substitution of the -OEt groups by the -O-SiMe2BCB groups. These findings suggest that most of the core silicon atoms of the T and L units in HB-DEMS are converted to D units in HB-BCB (Scheme 2). A new peak is observed at approximately −2 ppm, and it can be ascribed to the -O-SiBCB group. Since the hyperbranched framework of HB-DEMS remains unchanged after the substitution reaction, it is reasonable to assume that the DB of HB-BCB is the same as that of HB-DEMS (0.45).

[Figure 3. 29Si NMR spectra of HB-DEMS (above) and HB-BCB (below).]
[Scheme 2. T and L units change into D cores as HB-DEMS is converted to HB-BCB.]

Curing behavior

The curing behavior of the synthesized HB-BCB was studied using DSC (Figure 4). The crosslinking process is exothermic and begins at a temperature of 200 °C. The peak exotherm is observed at 251 °C, and the crosslinking reaction reaches completion at 340 °C. These findings are in accordance with those reported for other BCB-based polymers. This curing behavior can be ascribed to the ring-opening of the four-membered ring in BCB, leading to the formation of o-quinodimethane intermediates, along with the crosslinking of the ring-opened BCB units. No significant glass transition was observed in the DSC curve of cured HB-BCB because it is highly crosslinked, and the movement of its segments is limited. The formation of the highly crosslinked HB-BCB was also verified using FT-IR spectroscopy (Figure 5). The following discussion is based on the theoretical FT-IR data obtained by Ocola et al. [32]. A comparison of the FT-IR spectra reveals that the peaks at 1432 cm−1 and 2830 cm−1, which are assigned to the CH2 deformation and its overtone band, are not present in the spectrum of cured HB-BCB. In addition, the intensity of the peak at 2928 cm−1, which is ascribed to the CH2 symmetrical stretching vibration of BCB, significantly decreases after curing. The peak corresponding to the aromatic C-C stretching in BCB shifts from 1465 cm−1 to 1495 cm−1 after curing, because the ring strain caused by the four-membered ring is released after the ring-opening and crosslinking reactions. Moreover, in the FT-IR spectrum of the cured resin, there is no peak at 880 cm−1, which may correspond to a combination band related to the four-membered ring. These findings verify the ring-opening of cyclobutene in BCB, which leads to the formation of a highly crosslinked network structure.

Properties of the cured resin

After curing, the viscous liquid oligomer HB-BCB converted into a yellow, transparent, rigid plastic. The thermal stability of the cured resin was measured by TGA, and the TGA curve is shown in Figure 6. It reveals a high thermal decomposition temperature at 5% weight loss (Td5 = 512 °C) with a high char yield of 61% at 800 °C. The excellent thermal stability of the cured HB-BCB may be attributed to the siloxane framework and the highly crosslinked network structure constructed by the BCB groups. The DSC curve of the cured HB-BCB (Figure 4) is smooth with no peak. Therefore, it is difficult to detect the glass transition temperature (Tg) of crosslinked resins using only DSC. Thus, DMA was used to measure the thermomechanical properties of the cured HB-BCB (Figure 7). The cured HB-BCB exhibits a high initial storage modulus of 2.4 GPa, attributable to the curing of HB-BCB and the formation of the highly crosslinked structure. The storage modulus decreases with increasing temperature: 0.78 GPa at 200 °C and 0.58 GPa at 350 °C.
The tan δ curve of the cured HB-BCB exhibits a maximum at 178 °C with a damping coefficient of 0.09, and the corresponding storage modulus is 0.94 GPa. The properties of cured HB-BCB differ from those of ordinary polymers, which have a low storage modulus (<50 MPa) in the rubbery state and a large damping coefficient at the Tg [26]. Since the hyperbranched siloxane framework is surrounded by the rigid crosslinked BCB network, it is reasonable to assume that the tan δ peak may be regarded as a synergistic secondary relaxation of the siloxane structure confined by the highly crosslinked structure. The cured HB-BCB exhibits a low relative dielectric constant of 2.70 and a low dissipation factor of 2.75 × 10−3 at 1 MHz (Figure 8), indicating that this material has relatively good dielectric properties and may have potential applications in the field of microelectronics and packaging.

Conclusions

The HB-BCB resin, consisting of a hyperbranched polysiloxane framework functionalized with BCB, was successfully synthesized via the P-R reaction in a cascade manner. This is a facile and convenient method to prepare BCB-functionalized siloxane polymers. The obtained oligomer was cured, and it was found that crosslinking occurred at temperatures greater than 200 °C. The hyperbranched structure was verified using NMR and FT-IR spectroscopy. The cured resin possessed a low dielectric constant (k = 2.70), a low dissipation factor (2.13 × 10−3 at 1 MHz), good high-temperature mechanical properties, and an extremely high thermal decomposition temperature (Td5 = 512 °C). The high performance of the cured resin can be ascribed to the introduction of BCB groups to form the highly crosslinked and low-polarity structure. In addition, the findings of this study demonstrate that the prepared resin may function as an ideal low-k material with excellent high-temperature performance. Thus, the HB-BCB resin can potentially be applied in the microelectronics industry.

Disclosure statement

No potential conflict of interest was reported by the author(s).
2021-09-10T05:17:00.400Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "c158db27ec2f783eb9788709166a2f66fa077bf2", "oa_license": "CCBYNC", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/15685551.2021.1975383?needAccess=true", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c158db27ec2f783eb9788709166a2f66fa077bf2", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
6114084
pes2o/s2orc
v3-fos-license
Effectiveness and safety of citicoline in mild vascular cognitive impairment: the IDEALE study Background The studio di intervento nel decadimento vascolare lieve (IDEALE study) was an open multicenter Italian study, the aim of which was to assess the effectiveness and safety of oral citicoline in elderly people with mild vascular cognitive impairment. Methods The study was performed in 349 patients. The active or citicoline group was composed of 265 patients and included 122 men and 143 women of mean age 79.9 ± 7.8 years, selected from six Italian regions. Inclusion criteria were age ≥ 65 years, Mini-Mental State Examination (MMSE) score ≥ 21, subjective memory complaints but no evidence of deficits on MMSE, and evidence of vascular lesions on neuroradiology. Those with probable Alzheimer's disease were excluded. The control group consisted of 84 patients, including 36 men and 48 women of mean age 78.9 ± 7.01 (range 67-90) years. Patients included in the study underwent brain computed tomography or magnetic resonance imaging, and plasma assays of vitamin B12, folate, and thyroid hormones. Functional dependence was investigated by scores on the Activities of Daily Living (ADL) and Instrumental Activities of Daily Living (IADL) scales, mood was assessed by the Geriatric Depression Scale (GDS), and behavioral disorders using the Neuropsychiatric Inventory scale. Comorbidity was assessed using the Cumulative Illness Rating Scale. An assessment was made at baseline (T0), after 3 months (T1), and after 9 months (T2, ie, 6 months after T1). The main outcomes were an improvement in MMSE, ADL, and IADL scores in the study group compared with the control group. Side effects were also investigated. The study group was administered oral citicoline 500 mg twice a day throughout the study. Results MMSE scores remained unchanged over time (22.4 ± 4 at T0; 22.7 ± 4 at T1; 22.9 ± 4 at T2), whereas a significant difference was found between the study and control groups, both at T1 and at T2. No differences were found in ADL and IADL scores between the two groups. A slight but not statistically significant difference was found in GDS score between the study and control groups (P = 0.06). No adverse events were recorded. Conclusion: In this study, citicoline was effective and well tolerated in patients with mild vascular cognitive impairment. Citicoline activates biosynthesis of phospholipids in neuronal membranes, increases brain metabolism as well as norepinephrine and dopamine levels in the central nervous system, and has neuroprotective effects during hypoxia and ischemia. Therefore, citicoline may be recommended for patients with mild vascular cognitive impairment. Introduction The number of people aged 65 years and over with mild vascular cognitive impairment is continuing to increase. It is widely known that vascular disease can reduce cerebral perfusion, causing oxidative stress and neurodegeneration. Vascular disease has also been reported to accelerate atrophy and result in white matter abnormalities, asymptomatic infarcts, inflammation, and reduced glucose metabolism, cerebral blood flow, and vascular density. 1,2 The elderly brain is also more susceptible to hypotension and pump failure as a result of cardiac arrhythmia and congestive heart failure. Delivery of oxygen to tissues and other metabolic exchanges are impeded by increased thickness of vessel walls and widespread état criblé, with enlargement of perivascular Virchow-Robin spaces resulting from tortuosity of elongated arterioles.
2 Therefore, cerebral vascular disease may cause impairment in activities of daily living and frequent requests for intervention by health services. Cytidine-5′-diphosphate (CDP) choline is an endogenous compound normally produced by the body, and in pharmaceutical form is known as citicoline. Citicoline inhibits apoptosis associated with cerebral ischemia and, in several models of neurodegeneration, has been able to potentiate neuroplasticity. It is a natural precursor of phospholipid synthesis, or rather serves as a choline source in the metabolic pathways for biosynthesis of acetylcholine and neuronal membrane phospholipids, mainly phosphatidylcholine. 3-6 Animal studies suggest that CDP choline may protect cell membranes by accelerating resynthesis of phospholipids. CDP choline may also attenuate the progression of ischemic cell damage by suppressing the release of free fatty acids. 7,8 Furthermore, citicoline has been shown to increase cerebral metabolism and noradrenaline and dopamine levels in the central nervous system. 9-11 Numerous experimental stroke studies using citicoline have reported an improved outcome and reduced infarct size in models of ischemic and hemorrhagic stroke. Citicoline has been studied worldwide in both ischemic and hemorrhagic stroke with excellent safety, and with possible efficacy found in several clinical trials. 12 Citicoline has a number of therapeutic actions, including: • neuroprotective effects in situations of hypoxia and ischemia • improvement of attention, learning, and memory performance in animal models of brain aging • restoration of mitochondrial ATPase and membrane Na+/K+ ATPase activity • inhibition of activation of phospholipase A2 and accelerated reabsorption of cerebral edema in various experimental models. 11,13-20 Pharmacokinetic studies have suggested that citicoline is well absorbed and bioavailable following oral dosing. It can be used not only in cognitive impairment, but also in Parkinson's disease, head trauma, and amblyopia. 17 Table 1 reports the possible clinical uses of citicoline. The studio di intervento nel decadimento vascolare lieve (IDEALE study) reported here was an open-label, multicenter Italian study, the aim of which was to assess the effectiveness and safety of oral citicoline in elderly people with mild vascular cognitive impairment. Patients and methods This study was performed in 387 elderly patients selected from six Italian regions (Calabria, Campania, Lazio, Liguria, Piedmont, Veneto). Inclusion criteria were: age ≥ 65 years; Mini-Mental State Examination (MMSE) score ≥ 21; subjective memory complaints but no evidence of deficits on MMSE; and evidence of vascular lesions on neuroradiology. Those with probable Alzheimer's disease were excluded. Patients included in the study underwent computed tomography or magnetic resonance imaging of the brain and plasma assays of vitamin B12, folate, and thyroid hormones (thyroid stimulating hormone, free triiodothyronine, free tetraiodothyronine, thyroid peroxidase antibodies, and thyroglobulin antibodies). Functional dependence was investigated by scores on the ADL (Activities of Daily Living) and IADL (Instrumental Activities of Daily Living) scales, mood was investigated by the GDS (Geriatric Depression Scale), and behavioral disorders by the Neuropsychiatric Inventory Scale. Comorbidity was assessed using the Cumulative Illness Rating Scale, a test assessing the number and severity of diseases in individual patients.
All patients gave their written informed consent. A total of 349 patients met our criteria and completed the study. They were assigned to open-label treatment with oral citicoline 500 mg twice a day in a fasting state or to no treatment (controls). The treatment group included 265 patients, comprising 122 men and 143 women of mean age 79.9 ± 7.8 (range 65-94) years. The control group included 84 patients, comprising 36 men and 48 women of mean age 78.9 ± 7.01 (range 67-90) years (Figure 1). Twenty-one patients were excluded on suspicion of having Alzheimer's disease, two died before the end of the study, and 15 dropped out (Figure 1). The main characteristics of the study cohort are shown in Table 2. An assessment was made at baseline (T0), after 3 months (T1), and after 9 months (T2, ie, 6 months after T1). The main outcomes were changes in MMSE, ADL, and IADL scores in the study group compared with controls. Side effects were also investigated. Statistical analysis The data are expressed as the mean ± standard deviation, and comparisons between groups were made using the Student's t-test or the Chi-square test, as appropriate. Repeated-measures analysis of variance was used to assess the difference in changes between values at baseline and at T1 and T2. Significant differences were assumed to be present at P < 0.05. All analyses were performed using the Statistical Package for the Social Sciences software program version 18.0 for Windows (SPSS Inc, Chicago, IL). Results Campania and Calabria were the regions with the largest numbers of patients in the citicoline group (79 and 66, respectively), and 167 patients were from Southern Italy and 98 from Northern Italy (Table 3). The main neuroradiological findings are reported in Table 4, showing cortical atrophy to be present in 85% of cases and periventricular white matter hypodensities in 60%. The MMSE score in the treated group remained essentially unchanged over time (22.4 ± 4 at T0; 22.7 ± 4 at T1; 22.9 ± 4 at T2). A mild improvement of 0.5 points on average was found during the 9 months of the study, but without significant regional differences; the improvement in MMSE score was more evident in patients from Southern Italy. The untreated group showed a decline in MMSE score over the 9 months (21.5 at T0; 20.4 at T1; 19.6 at T2; −1.9 points between T0 and T2). No differences were found for ADL and IADL scores between the two groups. Positive changes in ADL scores were slightly better in patients from Northern Italy, but were essentially superimposable. Similar results were evident across the regions (albeit slightly better in Liguria). The final IADL scores were superimposable and showed only slight improvement. Figure 2 shows the ADL, IADL, and MMSE scores for the treated group. We also looked for possible differences in ADL, IADL, and MMSE scores according to age group (young-old, age 65-74 years; old-old, age 75-84 years; very-old, age ≥ 85 years). Young-old patients showed better performance, but not significantly different from that in the other age groups (Figure 3). A significant difference in MMSE scores was found between the treatment and control groups at the T1 (P < 0.0001) and T2 (P < 0.0001) time points (Figures 4A and 4B), but not between T0 and T1 or between T0 and T2 (Figure 4B). No differences in ADL and IADL scores were found between the two groups (Figure 5). A slight difference in GDS score was found between the study and control groups (P = 0.06, not statistically significant; data not shown).
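The comparisons described above were run in SPSS; a minimal re-creation of the same kind of tests with Python/scipy is sketched below. The per-patient score arrays are simulated from the reported group means and spreads purely for illustration and are not the study data, and the paired t-test at the end only stands in for the repeated-measures ANOVA actually used.

```python
import numpy as np
from scipy import stats

# Simulated per-patient MMSE scores (NOT the study data): columns are the
# three visits T0, T1 (3 months) and T2 (9 months); rows are patients.
rng = np.random.default_rng(0)
citicoline = rng.normal(loc=[22.4, 22.7, 22.9], scale=4.0, size=(265, 3))
control    = rng.normal(loc=[21.5, 20.4, 19.6], scale=4.0, size=(84, 3))

# Between-group comparison at a single time point (Student's t-test).
t1_stat, t1_p = stats.ttest_ind(citicoline[:, 1], control[:, 1])
print(f"T1 citicoline vs control: t = {t1_stat:.2f}, P = {t1_p:.2g}")

# Within-group change over time; a paired T0-vs-T2 test stands in here
# for the repeated-measures analysis of variance used in the study.
t02_stat, t02_p = stats.ttest_rel(citicoline[:, 0], citicoline[:, 2])
print(f"Treated group T0 vs T2: t = {t02_stat:.2f}, P = {t02_p:.2g}")
```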
No significant adverse events were recorded over time. Occasional excitability or restlessness was found in 5.6% of cases, and digestive intolerance and self-limiting headaches in 4.5% and 3.6% of cases, respectively. Discussion This study shows that citicoline is effective and safe in the treatment of mild vascular cognitive impairment. The treated group showed improvement in MMSE scores, with an average increase of 0.5 points over the course of the study. CDP choline may attenuate the progression of ischemic cell damage by suppressing the release of free fatty acids. 8 A number of studies have also shown that citicoline appears to be a drug able to provide "safe" neuroprotection by enhancing protective endogenous pathways. 13,21 These properties suggest that citicoline can be recommended for use in patients with vascular cognitive impairment, vascular dementia, or Alzheimer's disease with significant cerebrovascular disease. It seems to have a beneficial impact on several cognitive domains. 22 In another study, we demonstrated the efficacy of citicoline in post-ischemic cerebrovascular disease, but we used intravenous citicoline in saline. 23 Our present study of citicoline is one of the few trials conducted for a period longer than 6 months. The bioavailability of citicoline following oral administration is very good. 9,15,17 With regard to its mechanism of action, we believe that the most pronounced benefits of treatment with citicoline, ie, activation of biosynthesis of phospholipids in neuronal membranes, increase in brain metabolism, and neuroprotective effects during hypoxia and ischemia, are likely to be accrued with prolonged use. This is confirmed by the positive results in our treated group and by the decrease in MMSE scores in our control group at only 9 months. The positive effects on mood probably derive from the increase in noradrenaline and dopamine levels in the brain attributable to citicoline. In conclusion, our study shows that citicoline is effective and safe in mild vascular cognitive impairment. Further studies are needed to confirm these results and to assess the efficacy and safety of long-term administration of a dietary supplement such as CDP choline. Also, it would be interesting to study whether the use of citicoline in association with cholinesterase inhibitors may help in delaying the progression of Alzheimer's disease.
2016-05-12T22:15:10.714Z
2012-07-01T00:00:00.000
{ "year": 2013, "sha1": "10abd1eca7faaeadd81f3ace23b56822e54c5892", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=15117", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2f992f6522cdb800069b8da265a61c54b1e5ed8a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
24502583
pes2o/s2orc
v3-fos-license
A combined kinetico-mechanistic and computational study on the competitive formation of seven- versus five-membered platinacycles; the relevance of spectator halide ligands The metalation reactions between [Pt2(4-MeC6H4)4(μ-SEt2)2] and 2-X,6-FC6H3CH=NCH2CH2NMe2 (X = Br, Cl) have been studied. In all cases, seven-membered platinacycles are formed in a process that involves an initial reductive elimination from cyclometallated Pt(IV) intermediate compounds, [PtX(4-CH3C6H4)2(ArCH=NCH2CH2NMe2)] (X = Br, Cl), followed by isomerization of the resulting Pt(II) complexes and a final cyclometallation step. For the process with X = Br, the final seven-membered platinacycle and two intermediates, isolated under the conditions implemented from parallel kinetic studies, have been characterized by XRD. Contrary to previous results for the parent non-fluorinated imine 2-BrC6H4CH=NCH2CH2NMe2, the presence of a fluoro substituent prevents the formation of the more stable five-membered platinacycle. Temperature- and pressure-dependent kinetico-mechanistic and DFT studies indicate that the final cyclometallation step is strongly influenced by the nature of the spectator halido ligand, the overall reaction being much faster for X = Cl. The same DFT study conducted on the previously studied systems with imine 2-BrC6H4CH=NCH2CH2NMe2 indicates that, when possible, five-membered platinacycles are kinetically preferred for X = Br, while the presence of Cl as a spectator halido ligand leads to a preferential faster formation of seven-membered analogues. INTRODUCTION Platinum(IV) compounds are considered adequate models to study reductive elimination processes from d6 octahedral complexes.
Recent findings based on reductive elimination from platinum(IV) complexes include the formation of C-C and C-halide bonds [1] and a catalytic process for conversion of a C-F bond into a C-C bond [2]. In particular, cyclometallated platinum(IV) compounds [PtXR2(Ar′CH=NCH2CH2NMe2)] or [PtXR2(Ar′CH=NCH2Ar′)L], containing respectively a tridentate [C,N,N′] ligand or a bidentate [C,N] and a neutral monodentate L ligand, can be easily obtained from the reactions of platinum precursors containing "PtR2" moieties and potentially tridentate or bidentate imine ligands, Ar′CH=NCH2CH2NMe2 or Ar′CH=NCH2Ar′. In recent years, we have been involved in studies related to the formation of platinum(II) cyclometallated compounds generated from the mentioned platinum(IV) cyclometallated compounds with tridentate [C,N,N′] amino-imine ligands, or bidentate [C,N] imine ligands and a neutral monodentate ligand L [3-12]. The interest in these reactions arises from the fact that along the process new C-C bonds are formed via an initial reductive elimination to give a non-cyclometallated platinum(II) compound that, in the second step, evolves towards a cycloplatinated compound. Although a common sequence operates, this process is highly versatile, since both the nature of the formed C-C bond and the structure of the final cyclometallated platinum(II) compound can be tuned by a judicious choice of both the platinum precursor "PtR2" and the imine ligand, Ar′CH=NCH2CH2NMe2 or Ar′CH=NCH2Ar′, used in the formation of the cyclometallated platinum(IV) compound. For instance, the reaction of [Pt2Me4(μ-SMe2)2] with imine ligands Ar′CH=NCH2Ar′ leads to a cyclometallated platinum(IV) compound from which Caryl-Calkyl bonds are formed, and the subsequent metalation can lead to either five- or six-membered metallacycles, corresponding to C-H activation at the aryl group of the imine or at the methyl ligand of the platinum precursor, respectively (Scheme 1) [3,5]. For diarylplatinum precursors, the corresponding platinum(IV) compounds lead to the formation of Caryl-Caryl bonds, from which either seven-membered platinacycles containing the new biaryl fragment or five-membered analogues, in which the newly formed C-C bond is outside the metallacycle, can be obtained [4,6,7,9-12]. For the former, C-H bond activation takes place at the aryl ligand of the precursor, while for the latter this process takes place at the aryl ring of the imine. A clear example (Scheme 2) is obtained when cis-[Pt(C6F5)2(SEt2)2] is used, since, in this case, the ortho-fluorine substituents preclude the formation of seven-membered platinacycles, due to the low reactivity of C-F bonds, and therefore the reaction is directed towards the formation of five-membered analogues [8]. A more striking result, shown in Scheme 3, was obtained in the reaction of [Pt2(4-MeC6H4)4(μ-SEt2)2] with imines 2-XC6H4CH=NCH2CH2NMe2 (X = Br or Cl), since, in this case, the nature of the halide is determinant: a five-membered platinacycle is obtained for X = Br, while a seven-membered platinacycle is produced for X = Cl.
This system has been thoroughly studied from a kinetico-mechanistic point of view and, although formation of a seven-membered platinacycle for X = Br was found plausible under harsher conditions, the compound could not be obtained in a pure form [4]. Since seven-membered platinacycles are a novel class of compounds with potential interest associated with their cytotoxic properties [13,14], in addition to the intrinsic interest based on the formation of biaryl linkages, we decided to explore novel strategies in order to analyse whether it would be possible to obtain such compounds even when X = Br. In this work, the reactions of [Pt2(4-MeC6H4)4(μ-SEt2)2] with imines 2-X,6-FC6H3CH=NCH2CH2NMe2 (X = Br or Cl) have been studied with the idea that the fluoro substituent at the ortho position in the aryl ring of the imine ligand should prevent the C-H bond activation at the imine and thus, in both cases, the reaction would be driven towards the formation of seven-membered platinacycles rather than five-membered analogues. It should be noted that five-membered metallacycles are more stable than other ring products and generally cyclometallation reactions take place with high regioselectivity to produce five-membered rings [15-17]. Kinetico-mechanistic and DFT studies of these types of systems should allow studying the effect of the nature of the halide (Br versus Cl) in the formation of seven-membered platinacycles. RESULTS AND DISCUSSION Preparation and characterisation of compounds Initial platinum(IV) compounds [PtX(4-MeC6H4)2(2-FC6H3CH=NCH2CH2NMe2)] (5-IV-X,F, X = Br and Cl; Scheme 4) were prepared in high yields, following previously established procedures, from [Pt2(4-MeC6H4)4(μ-SEt2)2] and imines 2-X,6-FC6H3CH=NCH2CH2NMe2 (X = Br and Cl) [18]. For X = Br the reaction was faster and was complete within 24 hours in toluene solution at room temperature, while for X = Cl the reaction requires 48 hours under the same conditions. As expected from the lower reactivity of C-F bonds, activation of this bond was not observed in either reaction. Using shorter reaction times, isolation and characterisation of the coordination compound [Pt(4-MeC6H4)2(2-F,6-ClC6H3CH=NCH2CH2NMe2)], formed prior to the intramolecular C-Cl bond activation, has also been achieved; isolation of the corresponding bromo analogue has not been possible. In this case the intramolecular C-Br bond activation occurred readily after coordination of the imine ligand to platinum. All the isolated compounds were characterized by 1H and 19F NMR spectra, which were consistent with the expected structures as well as with the data available for analogous compounds [4,9]. As expected, the J(Himine-Pt) values observed for the platinum(IV) compounds (45.6 and 46.0 Hz) are lower than those observed for the platinum(II) compound [Pt(4-MeC6H4)2(2-F,6-ClC6H3CH=NCH2CH2NMe2)] (50.8 Hz). For the latter, the J(Himine-Pt) value is consistent with both an E conformation of the imine moiety and the presence of an aryl ligand trans to this group [4]. A set of signals of very low intensity that could not be fully assigned indicated also the presence of a Z isomer in the sample in small amounts (<5%). When a toluene solution of compound 5-IV-Br,F was refluxed for 24 hours, the targeted seven-membered platinacycle [PtBr{(4-MeC6H3)(2-FC6H3)CH=NCH2CH2NMe2}] (7-II-Br,F) was obtained.
This compound could also be obtained in a one-pot process from [Pt2(4-MeC6H4)4(μ-SEt2)2] and the corresponding 2-Br,6-FC6H3CH=NCH2CH2NMe2 imine under the same conditions. These results indicate that the presence of an inert C-F bond in the imine ligand is an efficient strategy to drive the reaction towards the formation of seven-membered platinacycles. Formation of the analogue chlorido compound 7-II-Cl,F was also achieved. The crystal structure of the intermediate II-Br,F confirms that a biphenyl fragment involving a former para-tolyl ligand and the aryl ring of the initial ligand is formed from 5-IV-Br,F in a reductive elimination process. As already reported, the compounds generated on reductive elimination from platinum(IV) compounds of the type 5-IV-X,Y may adopt four distinct isomeric forms (see Scheme 5), the aryl ring being trans to the amine or the imine moieties and with an E or Z imine conformation [4]. By monitoring changes in the 1H NMR spectra of II-Br,F, under the conditions suggested by the kinetic experiments detailed in the next section, compound II′-Br,F, as a mixture of E and Z isomers in a proportion E : Z = 2 : 1, was obtained. As previously reported [4], the values of J(Himine-Pt), which in this case are 152 Hz and 84 Hz for the E and Z isomers, respectively, indicate the presence of a halide ligand trans to the imine. From this mixture, XRD-quality crystals were obtained and analysed; the molecular structure is shown in Fig. 1c and selected molecular dimensions are listed in Table 1. This compound differs from the previously described intermediate in that the bromido ligand is now trans to the imino fragment, and the latter displays a Z arrangement. A careful examination of this Z isomeric form of the species provides clear evidence that this form cannot produce the final seven-membered platinacycles for orientation reasons. A similar treatment carried out on compound II-Cl,F produced rather complex 1H NMR spectra, which consist of mixtures of up to four possible isomers of the species plus the initial and final reaction compounds (i.e. 5-IV-Cl,F and 7-II-Cl,F). From these complex mixtures, already anticipated from the data collected in the next section, it was not possible to isolate any of the relevant species. Kinetico-mechanistic studies on the formation of seven-membered 7-II-Br,F and 7-II-Cl,F metallacycles The rather complex nature of both the possible reaction intermediates and the final cyclometallated complexes formed in the reactions is generalised in Scheme 5. This general scheme is clear both from some previous results already published [4,19] and from those indicated in the previous section. Given the fact that time-resolved monitoring of the processes has been found to be a perfect handle to gain better insight into the reaction mechanism, UV-Vis monitoring of the transformation of complexes 5-IV-X,F has been conducted from a kinetic perspective. For complex 5-IV-Br,F the spectral changes observed on monitoring 5 × 10−4 M xylene solutions at varying temperatures indicate the operation of a two-step process in the 2-24 hour range at 90 and 60 °C, respectively. By using the standard software indicated in the Experimental section, these changes could be easily fitted to a consecutive set of two single exponentials (a minimal sketch of such a fit is given below).
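The following snippet is a minimal sketch of such a consecutive two-exponential fit, using scipy in place of the SPECFIT/ReactLab packages named in the Experimental section; the trace and every parameter value are synthetic placeholders, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, c0, c1, c2, k1, k2):
    """Single-wavelength absorbance for two consecutive first-order steps,
    written as a generic sum of two exponentials plus an end value."""
    return c0 + c1 * np.exp(-k1 * t) + c2 * np.exp(-k2 * t)

# Synthetic trace over an 8 h window (all values illustrative only).
t = np.linspace(0.0, 8 * 3600.0, 400)
rng = np.random.default_rng(1)
y = biexp(t, 0.50, 0.30, -0.20, 2e-4, 2e-5) + rng.normal(0, 0.002, t.size)

# Initial guesses reflect the hours time scale of the monitored steps.
p0 = [0.5, 0.2, -0.1, 1e-4, 1e-5]
popt, _ = curve_fit(biexp, t, y, p0=p0)
k_fast, k_slow = sorted(popt[3:], reverse=True)
print(f"kobs(fast) = {k_fast:.2e} s^-1, kobs(slow) = {k_slow:.2e} s^-1")
```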
From the time scale, as well as from parallel NMR monitoring and the preparative procedures indicated before, these processes correspond to the reductive elimination from 5-IV-Br,F to II-Br,F followed by isomerisation to II′-Br,F. The follow-up final reaction to produce 7-II-Br,F could not be monitored due to the high temperature needed as well as its time scale. Table 2 collects the relevant kinetic and activation data derived from the plots shown in Fig. 2a for the processes monitored, together with other relevant data for similar processes. The data indicate that the mechanism operating for the full process perfectly parallels that found for the reactivity of the previously studied systems. The initial reductive elimination step, from 5-IV-Br,F, requires a rather large activation enthalpy with practically no changes in entropy, which indicates a transition state with a dominant breaking of the two Pt-C bonds, but keeping them organised by an incipient C-C bond making. This is the behaviour expected for this type of general reductive elimination reactions. As for the volume of activation (Fig. 2b), it is in line with a small compression, precisely due to the new C-C bond being formed. As for the II-Br,F ⇄ II′-Br,F isomerisation reaction monitored, the activation parameters agree perfectly well with those obtained for the already studied II-X,Y systems. The parallel study carried out on the 5-IV-Cl,F → 7-II-Cl,F process proved to be much more complex to monitor. As indicated in the previous section, 1H NMR monitoring of the process, according to the time-resolved changes obtained by UV-Vis, indicated that the presence of a mixture of the four isomeric forms plus the final species indicated in Scheme 5 is prevalent under all the reaction conditions. The slowest process of the three-step sequence observed was associated with the II-Cl,F ⇄ II′-Cl,F reaction, as an increase in the concentration of the II′-Cl,F form is observed at this time scale by 1H NMR monitoring; kinetics could be monitored with low methodological errors by UV-Vis. Contrarily, the initial fast reductive elimination reaction proved to be the most complicated to determine kinetically, due to the low solubility of 5-IV-Cl,F in xylene at temperatures lower than 50 °C and the readiness of the process (see Table 2). Given the fact that the outcome of the full process under the conditions studied is the final 7-II-Cl,F, the remaining step observed was associated with the oxidative II′-Cl,F → 7-II-Cl,F reaction. The kinetic and thermal activation parameters determined for all these sets of reactions are also indicated in Table 2, along with the results for the other relevant systems. Clearly the data agree very well with those observed for the similar systems studied. It is thus clear that the relative ease of formation of the final 7-II-X,F seven-membered platinacycles is dictated by the presence of an X = Br or X = Cl donor in the II′-X,F → 7-II-X,F reaction, while the II-X,Y ⇄ II′-X,Y isomerisation process does not distinguish between the different X and Y donors on the platinum centre. DFT calculations In view of the data collected in Table 2, the DFT study was carried out on the systems derived from the parent non-fluorinated imine (Y = H); the E isomeric forms of the II′-X,H intermediates are lower in energy than their corresponding Z analogues (Table 3). Since the reaction leading to the final products proceeds via the E isomeric form of the II′-X,H intermediates (vide infra), Z isomers can be considered irrelevant to the reaction course.
Furthermore, as indicated in the previous sections, the distal C-H bond is too far away from the platinum(II) centre to be relevant for the oxidative addition process. From the data in Table 3 it is clear that the energies of the II′-X,H intermediates are in all cases lower than those of the II-X,H species, indicating that an isomerization process should be expected, as observed experimentally. The isomerization transition state (TS_Isom), involving a three-coordinated platinum species with a dangling NMe2 group (Fig. S1†), was also calculated and found to be around 140 kJ mol−1 above II-X,H (see Table 3). The geometry of the calculated TS_Isom involves a rather late C-Pt-Nimine angle (180° (II-X,H) → 130° (TS_Isom) → 90° (II′-X,H)), in good agreement with the kinetic activation data obtained experimentally. Once the most stable II′-X,H intermediate is formed, two possible selective parallel pathways, leading to the characterised metallacycles, are possible. The first one (Scheme 6, top) involves the oxidative addition of the C-HA bond at the platinum followed by the reductive elimination of toluene, producing the five-membered platinacycle (5-II-X,H). The equivalent seven-membered platinacycle (7-II-X,H) would be obtained in a similar fashion whenever the C-HB bond is activated at the metal (Scheme 6, bottom). Given the fact that five-membered platinacycles are more stable than their seven-membered counterparts (the calculated free energy difference being 31.8 (X = Br) and 33.4 (X = Cl) kJ mol−1, as expected from simple standard considerations) [17,19,20], the obtention of the larger seven-membered platinacycle from the II′-Cl,H intermediate has to be due to kinetic preferences. For X = Br, the calculated pathways favour the five-membered product (see Scheme 6). In this case the energy requirements for oxidative addition (TS_CHA) and reductive elimination (TS_RE1) for the smaller five-membered platinacycles are 119.6 and 141.5 kJ mol−1, respectively, slightly lower than those found for the seven-membered product: 146.4 (TS_CHB) and 145.1 (TS_RE2) kJ mol−1. It may be argued that the final products could also be obtained from the II-X,H isomeric form, but higher barriers were obtained for these pathways both for X = Br and X = Cl. Other possible pathways, such as those involving the C-H activation on the tetracoordinated square-planar platinum centre of II′-X,H, or σ-CAM processes [21] leading to the final products, were also computed and found to be non-competitive with the mechanism proposed here. The results collected in Scheme 6, which are clearly in line with the experimental observations, have been used to build a qualitative kinetic simulation model of product formation over time. For this purpose, the relative free energy differences have been transformed into rate constants by using the Eyring-Polanyi equation (i.e. k = (kBT/h) exp(−ΔG‡/RT)), and the product evolution over time, from II′-X,H, has been calculated (Fig. 3). As may be observed, at 139 °C the product distribution trend matches the experimental observations: 5-II-Br,H is produced in preference to 7-II-Br,H from II′-Br,H, whereas the inverse (7-II-Cl,H preferably to 5-II-Cl,H) is observed for II′-Cl,H. Although the time scale in Fig. 3 reasonably matches the values for X = Br (50% conversion after 24 h), for X = Cl there is more than an order of magnitude difference.
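A minimal re-creation of this kind of Eyring-based model is sketched below: the rate-limiting DFT barriers quoted above for the two channels from II′-Br,H are converted to rate constants and two competing first-order steps are integrated with LSODA. This is a deliberate simplification (two lumped channels rather than the full oxidative-addition/reductive-elimination network behind Fig. 3), and the code is ours, not the authors' Copasi model.

```python
import numpy as np
from scipy.constants import k as kB, h, R
from scipy.integrate import solve_ivp

def eyring(dG_kJ, T):
    """Eyring-Polanyi equation: k = (kB*T/h) * exp(-dG/(R*T)), dG in kJ/mol."""
    return kB * T / h * np.exp(-dG_kJ * 1e3 / (R * T))

T = 139 + 273.15
# Effective (rate-limiting) barriers from II'-Br,H, taken from the DFT
# values quoted in the text: 141.5 kJ/mol toward the five-membered ring,
# 146.4 kJ/mol toward the seven-membered ring.
k5, k7 = eyring(141.5, T), eyring(146.4, T)

def rhs(t, y):
    # y = [II'-Br,H intermediate, 5-II product, 7-II product]
    inter, five, seven = y
    return [-(k5 + k7) * inter, k5 * inter, k7 * inter]

sol = solve_ivp(rhs, (0, 48 * 3600), [1.0, 0.0, 0.0], method="LSODA")
print(f"k5 = {k5:.2e} s^-1, k7 = {k7:.2e} s^-1, "
      f"5-ring : 7-ring selectivity = {k5 / k7:.1f} : 1")
```

With these two barriers the lumped model gives a half-life of roughly 15 h for the intermediate and a five-membered : seven-membered selectivity of about 4 : 1, consistent with the preference for 5-II-Br,H described in the text.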
Nevertheless, in this high energy range, this difference is easily overcome when the methodological errors involved in the DFT calculations (4-16 kJ mol−1) are taken into account. The validity of the mechanism in Scheme 6 has also been confirmed by its use in the formation of the fluorinated compounds 7-II-Br,F and 7-II-Cl,F characterised in the present work, for which the formation of the five-membered platinacycle 5-II-X,F is not possible. The calculated energy requirements (Table S2†), although very similar, are slightly lower for the X = Cl system, indicating that the formation of 7-II-Cl,F should be definitively faster. In fact, the qualitative kinetic model indicates that 7-II-Cl,F is obtained around four times faster than 7-II-Br,F, practically the same difference as observed experimentally (Fig. S2†). CONCLUSIONS In this work, the mechanism of formation of seven-membered platinacycles, in preference to the more thermodynamically stable five-membered analogues, has been disclosed through combined kinetico-mechanistic and computational studies. Seven-membered platinacycles are formed as the kinetically favoured products in a process which involves the reductive elimination from cyclometallated platinum(IV) compounds [PtX(4-CH3C6H4)2(ArCH=NCH2CH2NMe2)] (X = Br, Cl), followed by isomerization of the resulting platinum(II) compounds plus a final cyclometallation step. The results indicate that the nature of the spectator halido ligand X (X = Br or Cl) is determinant in the platinacycle size of the reaction products. The presence of a bromido ligand slows down the formation of seven-membered platinacycles in such a way that the formation of the five-membered analogue becomes competitive unless the required metalation site is blocked with a fluoro substituent. Both kinetico-mechanistic and computational studies indicate that, contrary to previous suggestions, the isomerization step is not significantly affected by the nature of the halido ligand. On the contrary, all data are consistent with the fact that the final cyclometallation step is only dependent on the nature of the spectator halido ligand, and this step is responsible for the nature of the final products. Therefore, five-membered platinacycles are preferred for Br, while the presence of Cl leads to the formation of seven-membered analogues. EXPERIMENTAL General procedures Microanalyses were performed at the Centres Científics i Tecnològics (Universitat de Barcelona). Electrospray mass spectra were recorded at the Servei d'Espectrometria de Masses (Universitat de Barcelona) using a LC/MSD-TOF spectrometer with H2O-CH3CN 1 : 1 to introduce the sample. NMR spectra were recorded at the Unitat de RMN d'Alt Camp de la Universitat de Barcelona using a Mercury spectrometer. The solution was allowed to stir at room temperature for 70 minutes; the mixture was dried over Na2SO4, the solution was filtered, and the solvent was removed under vacuum to give the product. Prismatic crystals were selected and intensity data were measured on a D8 Venture system equipped with a multilayer monochromator and a Mo microfocus.
The structure was solved using the Bruker SHELXTL software package and refined using SHELXL [24]. All hydrogen atom positional parameters were computed and refined using a riding model, with an isotropic temperature factor equal to 1.2 times the equivalent temperature factor of the atom to which they are linked; further details are given in Table 4. Kinetics The kinetic profiles for the reactions were followed by UV-Vis spectroscopy in the full 700-300 nm range on HP8452A or Cary50 instruments equipped with thermostated multicell transports. The observed rate constants were derived from absorbance versus time traces at the wavelengths where a maximum increase and/or decrease of absorbance was observed; alternatively, the full spectral time-resolved changes were used. For the reactions carried out at varying pressures, the previously described pillbox cell and pressurising system [25-28] were used, and the final treatment of data was the same as described before. The calculation of the observed rate constants from the absorbance versus time monitoring of reactions, studied under first-order concentration conditions, was carried out using the SPECFIT or ReactLab software [29,30]. The general kinetic technique is that previously described [11,18,31]. Table S1† collects the kobs values for all the systems studied as a function of the starting complex and the pressures and temperatures studied. All post-run fittings were carried out using the standard available commercial programs. The energies given in the text correspond to those obtained with the larger basis sets and can be found, along with their relevant thermochemical terms, in Table S3.† The kinetic models were constructed with the Copasi software [49] using the deterministic (LSODA) method with relative and absolute tolerance values of 10−6 and 10−12, respectively. Table 2 Kinetic and thermal activation parameters for the two reaction steps observed for the reaction of different 5-IV-X,Y leading to 7-II-X,Y according to Scheme 5 in xylene solution. * indicates a mixture of II-X,Y and II′-X,Y isomers [7].
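The pressure-dependent runs described above yield the activation volume through the standard relation ΔV‡ = −RT(∂ln kobs/∂P)T. A minimal sketch of that extraction is given below; the kobs values are fabricated for illustration only (the measured ones are in Table S1) and are shaped to show the small compression, i.e. a small negative ΔV‡, discussed for the reductive elimination step.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def activation_volume(p_MPa, k_obs, T):
    """Slope of ln(kobs) vs pressure gives the activation volume:
    dV_act = -R*T * d(ln k)/dP, returned in cm^3 mol^-1."""
    slope = np.polyfit(p_MPa * 1e6, np.log(k_obs), 1)[0]   # per Pa
    return -R * T * slope * 1e6                            # m^3 -> cm^3

# Hypothetical kobs values at several pressures (not the Table S1 data),
# showing a mild acceleration with pressure, i.e. a small compression.
p = np.array([0.1, 50.0, 100.0, 150.0, 200.0])    # MPa
k = 1e-4 * np.exp(2.0e-9 * (p * 1e6))             # fabricated trend
print(f"dV_act = {activation_volume(p, k, 333.15):.1f} cm^3 mol^-1")
```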
2017-09-27T18:35:43.440Z
2015-10-13T00:00:00.000
{ "year": 2015, "sha1": "2978c6ae30ca8e56375ff80bc7a2996b7b4bc93b", "oa_license": "CC0", "oa_url": "http://diposit.ub.edu/dspace/bitstream/2445/164837/1/654791.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "3790e25a250d6ce00491116f237b3ebc34b45529", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
7229115
pes2o/s2orc
v3-fos-license
Cryptococcus spp isolated from dust microhabitat in Brazilian libraries Background The genus Cryptococcus is currently composed of encapsulated yeasts of cosmopolitan distribution, including the etiological agents of cryptococcosis. The fungus is found mainly in substrates of animal and plant origin. Human infection occurs through inhalation of spores present in the environment. Methods Eighty-four swab collections were performed on dust found on books in three libraries in the city of Cuiabá, state of Mato Grosso, Brazil. The material was seeded in Sabouraud agar and then observed for characteristics compatible with colonies of a creamy to mucous aspect; the material was then isolated in birdseed (Niger) agar and cultivated at a temperature of 37°C for 5 to 7 days. Identification of isolated colonies was performed by microscopic observation of fresh preparations stained with India ink, additional tests performed on CGB (L-canavanine glycine bromothymol blue) medium and urea broth, and carbohydrate assimilation tests (auxanogram). Results Of the 84 samples collected from book dust, 18 (21.4%) were positive for Cryptococcus spp, totaling 41 CFUs (colony-forming units). The most frequently isolated species was C. gattii, 15 (36.6%); followed by C. terreus, 12 (29.3%); C. luteolus, 4 (9.8%); C. neoformans and C. uniguttulatus, 3 (7.3%) each; and C. albidus and C. humiculus, with 2 (4.9%) of the isolates each. Conclusion The high biodiversity of the yeasts of the Cryptococcus genus isolated from different environmental sources in urban areas of Brazil suggests the possibility of individuals whose immune systems have been compromised, or even healthy individuals, coming into contact with sources of fungal propagules on a daily basis throughout their lives. This study demonstrates the possible acquisition of cryptococcosis infection from dust in libraries. Background Since Cryptococcus spp was first isolated, over 113 years ago, many studies have been conducted on the subject. However, many clinical-epidemiological and ecological aspects are still unknown, especially in Brazil. Regarding the epidemiology of the agent, there is a wide variety of species distributed in different countries and in different regions of the same country [1], and there are also many areas which are yet to be researched in regard to the interaction of the agent with its human host [2]. Yeasts of the Cryptococcus genus have a peculiar type of geographic distribution, considering their species. A high incidence of Cryptococcus neoformans is found in European countries and in North America, whereas Cryptococcus gattii is found predominantly in tropical and subtropical regions. This peculiar geographic distribution is a significant factor in the study of the natural habitat of Cryptococcus as a possible source of contagion for susceptible individuals [3]. Five serotypes are currently recognized for this fungus (A, B, C, D, and AD). The distinction between different serotypes is based on the immunologic reaction to the antiserum produced against the different polysaccharide compositions that constitute the yeast capsule. The AD serotype was classified as a diploid hybrid of the A and D serotypes [1]. Recent DNA polymorphism studies using AFLP (amplified fragment length polymorphism) demonstrated that the genetic differences between varieties were enough to classify them as two distinct species: Cryptococcus neoformans and Cryptococcus gattii [4-7]. In many situations, reports of cryptococcosis have been associated with pigeon droppings as the source of the infection.
However, epidemiological analyses showed that patients who are in contact with pigeons are subject to a high risk of infection [9,10]. The main problem is that C. neoformans remains viable for many years in dry pigeon droppings, which become a reservoir of infectious particles acquired by alveolar deposition. These yeasts are currently present in urban areas and associated with urban birds and their habitats: birds adapted to buildings, such as pigeons, and birds that live in parks, commercial establishments, and homes, such as parakeets and canaries [20-22]. The Cryptococcus genus is composed of approximately 34 species of yeast, which reproduce asexually by budding and can be identified by starch hydrolysis, assimilation of inositol, production of urease, non-fermentation of sugars, and sensitivity to cycloheximide [23,24]. Studies to identify, isolate, and monitor the incidence of specific fungal species in different habitats still need to be conducted, since the literature reports urban environmental sources and plant substrates as the main environments harboring Cryptococcus spp, emphasizing sources associated with exposure to pigeons and tree hollows in urban centers. In the study of cryptococcosis and its etiologic agents it is important to be aware of, and extensively monitor, reservoirs and sources of infection [8]. These parameters provide epidemiological data to guide the implementation of prevention programs and effective therapies [25]. This study represents the first Brazilian report on the association between this fungus and dust found in libraries. Materials and methods Eighty-four samples were collected inside three libraries (A, B, and C) in the city of Cuiabá, state of Mato Grosso, Brazil, and analyzed. The study was authorized by the people in charge of the libraries. The dust was collected with sterile swabs and placed in 20% sterile saline solution, and later transported to the mycology investigation laboratory of the Federal University of Mato Grosso for isolation and identification of the fungal microorganisms. The colonies selected from the seeded plates (primary colonies) were re-isolated and diluted in 2.0 ml of sterile water solution with chloramphenicol; they were later seeded on Sabouraud medium with chloramphenicol and incubated for 5 to 7 days at 37°C. After growth, the colonies were identified by morpho-physiological tests. The macroscopic analysis of the colonies suggestive of Cryptococcus spp was conducted by observing the shiny, smooth aspect of the surface, with a creamy to mucous consistency and a white to beige coloration. The micromorphology of the colonies was analyzed through microculture (Riddell technique). The isolates were submitted to a urease test and microscopic analysis with India ink to visualize the capsule [26]. The colonies were seeded in birdseed agar (Staib agar), which is recommended to verify the phenoloxidase activity of the most studied pathogenic species, Cryptococcus neoformans and Cryptococcus gattii [9]. C. neoformans and C. gattii are the only yeasts of this kind capable of synthesizing melanin through the conversion of hydroxybenzoic substrates by phenoloxidase activity [27]. After passage through birdseed agar, the presumptive colonies which showed the coffee-brown coloration characteristic of C. neoformans and C. gattii were seeded in CGB medium (L-canavanine glycine bromothymol blue) for species identification [27].
For the biochemical tests, the auxanogram technique was used, in which the assimilation of eleven carbon sources (dextrose, lactose, maltose, sucrose, inositol, galactose, cellobiose, dulcitol, melibiose, trehalose, and raffinose) and two nitrogen sources (peptone and potassium nitrate) [27] served to differentiate, identify, and confirm isolates at the species level. Results and discussion Brazilian scientific literature on cryptococcosis has made a significant contribution to directing and explaining facts related to the agent of this disease. Yeasts of the genus Cryptococcus, especially C. neoformans, are found in the environment, principally in soil made up of decomposing plant material and bird and bat droppings, in both urban and rural areas in Brazil [1,9,10,20,22,28]. However, other studies have shed more light on the ecology of the cryptococcosis agent in Brazil, demonstrating that the yeast is associated not only with pigeons or Eucalyptus spp trees, but also with other tree species, as reported by Lazéra et al. [16-19], Fortes et al. [29], and Baltazar and Ribeiro [30]. In this study, the identification of seven different species of Cryptococcus yeasts, isolated from the dust found on books, indicates that dust is a possible biotope for isolating this type of microorganism (Figure 1). Forty-one colonies were isolated, distributed across the three libraries, and in two of the libraries the presence of encapsulated yeast was detected. Cryptococcus gattii was the species most frequently identified, totaling 15 (36.6%) isolates, followed by C. terreus with 12 (29.3%) isolates and C. luteolus with 4 (9.8%) isolates. The other species detected were C. neoformans and C. uniguttulatus, represented by 3 (7.3%) isolates each, and C. albidus and C. humiculus, with 2 (4.9%) isolates each, which presented the smallest percentages (Table 1). Table 1. Frequency of colonies of isolates of yeast species of the genus Cryptococcus from dust collected in three libraries in Cuiabá, MT, Brazil. The percentage (21.4%) of positive samples in this study (Chart 1), conducted in the city of Cuiabá, is noteworthy compared with studies conducted by other authors in other Brazilian cities using bird droppings as substrates. Soares et al. [39], in a study conducted in the city of Santos, obtained a frequency of 13.9% for pigeon excreta samples positive for C. neoformans. In the city of Goiânia, Kobayashi et al. [34] verified a frequency of 23.2%; while researching church towers in Rio de Janeiro, Baroni et al. [10] observed that 37.8% of the samples collected showed the presence of encapsulated yeast; and in Rio Grande do Sul, Abegg et al. [36] isolated C. neoformans var. grubii in 87% of Psittacidae excreta samples. The percentage presented in this study, conducted in what is considered a closed environment, is in agreement with the data presented in the study by Kobayashi et al. [34], who demonstrated that the presence of the fungus in environmental samples is more representative when such samples are protected from the weather. In Brazil, no reports concerning the ecology of this species were identified exclusively in the presence of library dust. Isolates from 41 colonies of seven species of Cryptococcus spp.
identified in the microhabitat of dust show the importance of knowledge regarding the saprobiotic sources that host the Cryptococcus spp yeasts responsible for important and fatal cases of cryptococcal meningitis in immunosuppressed and immunocompetent individuals [40-42]. In addition to public places, enclosed environments may contain a high density of C. neoformans, as shown in the study conducted by Criseo et al. [43], who reported a rate of 26.6% in bird excreta from pet stores and homes. Swinne et al. [44] isolated C. neoformans from domestic dust in Bujumbura (Burundi) in samples collected from homes of patients with cryptococcosis associated with AIDS, making a significant correlation between the existence of pigeons close to the home environment and the probability of contamination of these homes. In a study conducted in different regions of Bangkok, Soogarum et al. [45] demonstrated the presence of C. neoformans in 14 samples of pigeon droppings. Hanasha et al. [46] analyzed 509 samples of columbiform bird droppings; however, these studies did not classify these yeasts at the species level. In addition, Passoni et al. [22] isolated basidiomycetous yeasts from captive bird droppings, compared with dust from the inside of the home and the peridomicile. Of the 79 samples of bird droppings collected by these researchers, 12.7% were positive. The authors concluded that the frequency of positive isolates in homes that maintained captive birds was the main factor responsible for the contamination in these homes. In this study, the percentage of isolates of Cryptococcus spp in 84 samples collected with swabs resulted in 18 (21.4%) positive samples (Chart 1). The environmental source of C. gattii is associated with decomposing eucalyptus material, as well as other plant materials in the process of decomposition. Nowadays, this species is being identified on different trees and in different geographical regions of Brazil, and has already been isolated on native and introduced plant specimens, such as false sicklepod (Senna multijuga), stinking toe (Cassia grandis), Chinese banyan (Ficus microcarpa), Cabori (Miroxilum peruiferum), Sibipiruna (Caesalpinia peltophoroides), and Oiti (Moquilea tomentosa), revealing other natural habitats for this species [16,17,19,28,30,47,48]. While researching trees in the Brazilian Amazon, Fortes et al. [29] reinforced evidence that C. gattii is not associated with one species of tree in particular, but rather with a specific habitat niche formed by the natural decomposition of wood. Recent research has supported this hypothesis, as shown by Randhawa et al. [49], who demonstrated the prevalence of C. gattii (24%) and C. neoformans (26%) in the soil around the base of certain host trees, indicating that the soil is another important ecological niche for these two species of Cryptococcus. More recently, Girish et al. [50] confirmed this hypothesis by isolating C. neoformans from environmental samples on 40 selected trees, taking into consideration decomposing wood and bits of bark of live trees in the Guindy National Park in Chennai, in the southern region of India, with the very first isolation of C. gattii on a species of jambul (Syzygium cumini). Last year, while investigating plant species on a Caribbean island in Puerto Rico, Loperena-Alvares et al. [60] detected the presence of C. gattii in lesions on succulent plants of the Cactaceae family (Cephalocereus royenii), a type of cactus that is extremely common in the region. More recently, in Colombia, Firacative et al.
[61] processed 3,634 samples from trees surrounding the residences of patients afflicted by cryptococcosis caused by C. gattii, isolating one sample of C. gattii serotype B, two samples of C. gattii serotype C, and three samples of C. neoformans var. grubii serotype A. Given these facts, the results obtained in this study suggest that the locations researched may be surrounded by abundant and varied tree vegetation (Mangifera indica, Cocothrimax spissa, Cassia fistula, Caesalpinia peltophoroides, Eucalyptus camaldulensis), including species of Eucalyptus spp; the main propagule of the fungus, the basidiospore, is present in the flowers of this genus of trees, which functions as a "host tree" for the fungus through a biotrophic association, thus greatly facilitating the dispersion of these spores into the environments analyzed. This can be observed in the work of Mahmoud et al. [52] in Egypt, which revealed the presence of C. gattii in samples of flowers of Eucalyptus camaldulensis, and in the study performed by Velagapudi et al. [42], which used acquisition by inhalation as the environmental criterion. The basidiomycete yeast C. neoformans is well adapted to bird droppings, particularly those of pigeons. The increasing population of birds is becoming an environmental and public health issue in Brazil and the rest of the world. This columbiform species has a habit of living in groups, building its nests high up on buildings, towers, attics, and windowsills, among other locations, while it feeds on grains and remnants of food and garbage in public locations [62]. In this study, a large concentration of pigeons (Columba livia domestica) was observed in the urban environments near the locations studied, and these birds have access to attics and windowsills that are not sealed. It was observed that the bird droppings remained on roofs and air-conditioning vents, facilitating the dispersal of dry particles containing fungal spores inside the facilities. In a recent study conducted by Takahara et al. [63], who analyzed samples of pigeon droppings collected from several domestic, commercial, and public environments in the city of Cuiabá, state of Mato Grosso, Brazil, the presence of strains of C. neoformans was demonstrated in bird droppings. Meteorological factors of each region, such as temperature, humidity, and light exposure, can influence and contribute to different results [40,64,65]. Granados and Castañeda [65] demonstrated the prevalence of Cryptococcus species, suggesting that meteorological conditions influenced the presence of this yeast and that C. neoformans is more frequently isolated from dry droppings during wetter seasons. The microclimatic conditions in the city of Cuiabá, Brazil, with an average temperature between 34 and 35°C, a dry winter, and a rainy summer, can favor the development and dispersion of fungi in the environment. The microclimates (temperature and humidity) identified in these environments are considered to be factors favoring the presence of these microorganisms in the locations studied. In addition to the association of these ligninolytic basidiomycetes with the decomposition of cellulose and other plant materials, the greater frequency of C. gattii (15; 36.6%) relative to C. neoformans (3; 7.3%) may be directly influenced by temperature, since according to Ishaq et al. [66], C. neoformans does not grow at temperatures above 40°C and is sensitive to direct sunlight. However, in a study performed by Kobayashi et al.
[34], the authors affirmed that humidity is another factor that seems to influence the viability of C. neoformans during collection, possibly by favoring bacterial decomposition on the swab samples, which alters the pH and probably inhibits the proliferation of the yeast. Cryptococcosis is generally related to infection by C. neoformans and C. gattii and is rarely caused by other species, including C. albidus, C. laurentii, C. curvatus and C. uniguttulatus [23]. According to Khawcharoenporn et al. [67], the other species of the genus Cryptococcus are generally considered to be saprophytes. However, in the last few decades, reports have appeared in the literature regarding infections caused by other species, with C. albidus and C. laurentii being responsible for 80% of the cases of cryptococcosis caused by organisms other than C. neoformans and C. gattii. With the increase in the number of patients presenting compromised immune systems and the ample use of immunosuppressant agents, the incidence of fungal infections has increased worldwide, including those caused by emerging Cryptococcus species such as C. albidus and C. laurentii [68]. In this study, the isolation of 15 (36.6%) C. gattii samples, 3 (7.3%) C. neoformans samples, and 2 (4.9%) C. albidus samples (Table 1) demonstrated the presence of potentially pathogenic as well as emerging species which, being present in a wide variety of substrates, may constitute infectious agents of cryptococcosis. The results obtained in this study provide evidence of the diversity of the environmental origins of Cryptococcus spp and highlight the substrates favorable to their development in the environment. The analysis of the microbiota of dust from locations where human presence is constant is necessary to implement health surveillance programs that protect the health of workers, as well as to prevent possible sources of acquisition of organisms causing infectious diseases, including cryptococcosis.
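The isolation frequencies quoted above can be reproduced with a short calculation. This is a minimal sketch; the denominator of 41 total isolates used for the species percentages is inferred from the quoted figures (36.6%, 7.3%, 4.9%) and is not stated explicitly in the text, while the 21.4% positivity rate uses the 84 collected samples:

```python
# Positivity rate: positive swab samples over all samples collected.
samples_collected = 84
positive_samples = 18
print(f"Positivity: {100 * positive_samples / samples_collected:.1f}%")  # 21.4%

# Species frequencies. The denominator of 41 total isolates is inferred
# here from the quoted percentages; it is not stated explicitly in the text.
isolates = {"C. gattii": 15, "C. neoformans": 3, "C. albidus": 2}
total_isolates = 41
for species, n in isolates.items():
    print(f"{species}: {n}/{total_isolates} ({100 * n / total_isolates:.1f}%)")
```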
Assessment of the Ecosystem Services Capacity in Natural Protected Areas for Biodiversity Conservation

Recently, in Italy, a legislative proposal has been set out to reform the role and the functions of natural protected areas, promoting their aggregation (or abolition) in pursuit of better administrative efficiency and economic saving. The system of natural protected areas is composed of different conservation levels: there are the Natural parks, established in the 1980s by national or regional institutions for the safeguard of natural elements; the Natura 2000 network (Habitats Directive 92/43/EEC) promoted by the European Union, with conservation measures for maintaining or restoring habitats and species of Community interest; and the local parks of supra-municipal interest (namely PLIS), created by single municipalities or their aggregations and aimed at limiting the soil sealing process. The hierarchical level of protection has determined differences in the management of the areas, which leads to various approaches and strategies for biodiversity conservation and integrity. In order to assess the strengths and weaknesses of the legislative initiative, the new management framework should be designed considering the ecosystem characteristics of each natural protected area, so as to define future opportunities and criticalities, rather than, in the extreme case, removing the level of protection due to the absence of valuable ecosystem conditions. The paper provides operative support to better apply the legislative proposal, investigating the dynamics that affect all protected areas and using the land take process as a major threat to biodiversity conservation in natural zones. The land take process is explored using Land Use Change analysis (LUCa) as a possible way to determine the impact and the environmental effects of land transitions. LUCa is also useful to determine the loss of the capacity of protected zones to support Ecosystem Services. Finally, the assessment of the Ecosystem Services Capacity (ESC) index expresses the ability of each LULC to provide ES and, in particular, Ecological Integrity, Regulating Services and Provisioning Services. The efficacy of the proposal is tested in the Lombardy Region (Northwest of Italy), where the natural protected areas number more than 500, with a territorial extension of 740 thousand hectares corresponding to 31% of the regional surface.

Introduction

Natural areas are a fundamental source of Ecosystem Service (ES) supply [1], such as carbon sequestration, biodiversity conservation and landscape value, and for the regulation of major element cycles. In the last 50 years, humans have altered ecosystems more rapidly and extensively than in any comparable period, and wider impacts are expected [2,3]. Most of these impacts are related to Land Use - Land Cover (LULC) changes, recognised as one of the main factors in the decline of global environmental conditions [4] and the major driving force for biodiversity loss [5]. The ES potentially provided by the LULC is acknowledged as a necessary framework for linking human and natural systems in environmental management [6][7][8] and guiding the spatial planning process towards a more sustainable approach. The implementation of ES assessment may effectively support societal and political choices in the planning process [9] for conservation, protection and management of natural resources.
The conservation of natural resources is strictly linked with the institution of protected areas, especially parks, offering a practical and tangible solution to the problem of species loss. The International Union for Conservation of Nature (IUCN) defines a protected area as "a clearly defined geographical space, recognised, dedicated and managed, through legal or other effective means, to achieve the long-term conservation of nature with associated ecosystem services and cultural values, with nature conservation the priority objective" [10]. The institution of natural parks in Lombardy (Northwest of Italy) took place in the 1980s according to the regional programmatic law n. 86 of 1983, namely the "General plan of protected areas", with the aim of defining protected zones for natural conservation and biodiversity protection within the regional territory. Five levels of natural protection were created according to the ecological characteristics of the Lombardy region: i) regional parks; ii) natural parks; iii) local parks of supra-local interest (of local design and management - PLIS); iv) regional reserves; and v) natural monuments. In addition to those, there is the Natura 2000 network, composed of Sites of Community Importance (SCIs) and Special Areas of Conservation (SACs), established from 1995 in relation to the European Habitats Directive n. 43 of 21 May 1992, to ensure the conservation of a wide range of rare, threatened or endemic animal and plant species and to create a coherent system of areas for biodiversity conservation. Nowadays, almost 50% of SCIs and SACs are included in regional parks, while the rest are mostly located in the alpine or pre-alpine region, where the quality of naturalistic functions and biodiversity is generally higher than in the valley area. In Lombardy, there are 22 regional parks with a territorial extension of almost 500 thousand hectares; the SCIs are 192 and span over 224 thousand hectares, while the SACs are 66 with an area of over 297 thousand hectares. Thus, the system of natural protected areas covers more than 1 million hectares, corresponding to 43% of the regional surface. In Lombardy, the establishment of natural protected areas can be promoted by different institutions (regional, provincial and local administrations). That implies a different regulatory and institutional framework and, as a consequence, a different management approach. Therefore, although the common aim of the natural protected areas is mainly biodiversity conservation and the protection of habitat quality, some areas have a broader range of finalities and often do not include only areas with high natural quality. This is the case of the PLIS, created by one or more municipalities with the goal of preserving such areas from LULC changes, recognising their diverse types of value: natural, recreational and/or agricultural. Nowadays, the organisation of the entire system of natural protected areas has not been created according to a regional strategy with a common policy but, on the contrary, in a fragmented way, with considerable consequences for the management of the sites. The lack of a common policy was partially solved with the institution in 2008 of the Regional Ecological Network (RER), recognised as a priority infrastructure of the Regional Territorial Plan (PTR) that constitutes a guideline for regional and local planning (according to the provisions of the Regional Committee Resolution n. 10962/2009).
The RER includes the system of regional and national protected areas and Natura 2000 sites, and identifies areas of priority interest for biodiversity and ecological corridors for wildlife reproduction. Although the aim of the RER is to define the main ecological corridors at a large scale, the management of those areas is confined within precise boundaries, those of the administrative institutions. Consequently, the regional strategy promoted with the RER does not match the institutional scale at which it is managed. Recently, the necessity to reorganise the regional system of natural protected areas led to the enactment of a new law approved by the Regional Council, n. 28 of 17 November 2016, entitled "Reorganisation of the Lombard system of managing and safeguarding protected regional areas, and other ways of protecting the territory". The aim is to consolidate the conservation and promotion of the natural heritage and landscape, considering the design of the RER and promoting the aggregation (or the abolition) of natural protected areas.

The paper provides a method for the assessment of ES provision based on the LULC composition and the changes that threaten ES [11,12], useful for the reorganisation of the system of natural protected areas based on the Ecosystem Service Capacity (ESC). Particularly, the paper analyses the flow of LULC changes through a cross-tabulation matrix [13] and the consequent variation of the ESC, considering three macro groups of ES: Ecological Integrity, Regulating Services and Provisioning Services [14]. The proposal is focused on Regional parks, considering their specific ability in natural conservation and the possibility of acting on their institution to pursue a regional strategy of natural protection.

Land Use Changes analysis (LUCa) in Regional parks

The LUCa ranges from 1999 to 2012, using the existing multi-temporal database available for the Lombardy region named Destinazione d'Uso dei Suoli Agricoli e Forestali (DUSAF), developed by ERSAF (Regional Agency for Services to Agriculture and Forestry) with a classification system shared with Corine Land Cover (CLC) [15]. DUSAF is set at a scale of 1:10,000 (minimum mapping unit of 0.16 hectares). LUCa is defined as a process of transition where, in most cases, natural, semi-natural and rural areas are converted into urban uses (e.g. urban settlements, industrial activities, infrastructures). This process affects biodiversity through the decrease of habitat quality and compromises ES provision and the related human benefits [16,17]. The LUCa of the Regional parks was conducted in a GIS environment with an interpolation method for data analysis. The Microsoft Excel software was used for the statistical tabulation of the .dbf file. The output, a pivot table, is a cross-tabulation matrix of land cover classes that highlights the major changes, and particularly the transitions from agricultural and natural areas into new urban ones. The method allows assessing the balance of LULC transitions regarding gains (increases) and losses (decreases) of a specific class towards a different land use. The flow of changes is a sum of hectares representing the use/cover that has changed from one LULC category to another over the reference period T0-T1. There are two typologies of LULC changes: the first corresponds to the land take process, the worst form of LUCa, i.e. a process of artificialization; a minimal sketch of this cross-tabulation and of the land-take computation is given below.
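The sketch below assumes the two DUSAF layers have already been intersected in GIS and exported as one row per polygon, with its class at each date and its area; the column names and class codes are illustrative, not the actual DUSAF schema:

```python
import pandas as pd

# One row per intersected polygon: class at T0 (1999), class at T1 (2012),
# and its surface in hectares (illustrative values).
df = pd.DataFrame({
    "class_1999": ["2.1", "2.1", "2.1", "2.3", "3.1"],
    "class_2012": ["1.2", "2.3", "2.1", "2.3", "3.1"],
    "area_ha":    [12.5, 30.0, 250.0, 40.0, 80.0],
})

# Cross-tabulation matrix: rows = class at T0, columns = class at T1,
# cells = hectares flowing between the two classes (diagonal = unchanged).
transitions = pd.crosstab(
    df["class_1999"], df["class_2012"],
    values=df["area_ha"], aggfunc="sum",
).fillna(0.0)

# Land take = hectares leaving non-artificial classes for artificial ones
# (CLC level-1 class "1").
artificial = [c for c in transitions.columns if c.startswith("1.")]
non_artificial = [r for r in transitions.index if not r.startswith("1.")]
land_take = transitions.loc[non_artificial, artificial].sum().sum()

print(transitions)
print(f"Land take over 1999-2012: {land_take:.1f} ha")
```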
The land take process, as defined by the European Environment Agency, is the "Change of the amount of agriculture, forest and other seminatural and natural land taken by urban and other artificial land development. It includes areas sealed by construction and urban infrastructure as well as urban green areas and sport and leisure facilities" [18]. Hence, land take is the increase of artificial surfaces (such as housing areas; urban green areas; industrial, commercial and transport units; road and rail networks; etc.) over time. The second typology of LULC changes is called "homologous" because it is not characterised by an irreversible process (e.g. urbanisation) but by an exchange within the natural or agricultural classes (e.g. the transition from grassland to crops), which is reversible by definition. The LUCa developed in the Regional parks of Lombardy shows three major transitions, with an increase of class 1.2 (industrial, commercial and transport units), class 1.4 (artificial, non-agricultural vegetated areas) and class 2.3 (pastures). The LULC threatened by these changes, with a decrease of surface, is arable land (class 2.1). Considering the values shown in Table 1, the overall class 1.2 increased by 23%, equal to 2,374 hectares. Particularly, it should be noted that major changes happened in the class of industrial and commercial zones (+828 ha) and roads and rail networks (+488 ha), with an impact on non-irrigated arable land amounting to 1,571 hectares. The trend is rather evident in the Parco Valle del Ticino (-579 hectares of crops substituted), in the Parco Agricolo Sud Milano (-499 hectares of crops substituted) and, lastly, in the Parco Adda Sud (-139 hectares of crops substituted). A substantial amount of anthropization has been caused by new infrastructures built inside the boundaries of natural protected areas in recent years. Recently in Lombardy, more than 1,600 hectares of natural and agricultural areas were converted into infrastructures (streets, railways and accessory spaces) following the implementation of the regional strategy for infrastructural development, which includes railway and roadway networks (e.g. the new piedmont viability network, the highway system in the eastern part of Milano, the highway connection between the cities of Brescia-Bergamo-Milano called BreBeMi, and the new secondary network built for the accessibility of the EXPO 2015 site) [19]. Infrastructure development constitutes the major driver of the land take process in Regional parks, and represents the highest threat for ES provision because it implies a drastic reduction of the bio-permeability of topsoil. The transformation of topsoil due to urbanisation causes an alteration in the capacity of soils to provide their functions (naturalistic, productive and protective) and related ES [20,21]. Moreover, the conversion of open lands into new infrastructure generally involves a surface larger than the impermeable section of the paved road or the rail track, including the areas used for the construction site, with a further result of land exploitation and degradation such as compaction, sealing and erosion [22]. Most of the areas used for the construction site often remain isolated between the new infrastructural axis and the existing local viability, with a further decrease of important ecosystem functions. Moreover, the rate of growth in the period from 1999 to 2012 of LULC class 1.4 (urban green areas) is equal to 25%, corresponding to 1,120 ha.
Such an increase is mainly due to the growth of class 1.4.1.1 (urban parks and gardens, +503 ha) and class 1.4.2.1 (sports facilities, +406 ha), which reflects the new localisation of urban parks and areas for sports activities in natural protected areas. Despite their important role for recreation opportunities, in many cases these functions are less compatible with the protection of nature and the conservation of biodiversity. The increase of urban green areas determines in many cases the loss of croplands (class 2.1). This process has been registered especially in the Parco Agricolo Sud (-304 ha), in the Parco della Valle del Ticino (-205 ha) and in the Parco Nord Milano (-103 ha). The LUCa finally showed an increase of pastures (class 2.3). This dynamic is associated with urban expansion and especially with the infrastructural development that fragments the productive rural system, making fields unsuitable for farming practice. In the period from 1999 to 2012, this process caused the loss of cropland in favour of pastures (+14,526 ha) and permanent crops (+3,188 ha). Particularly, the Parco Valle del Ticino lost more than 4 thousand hectares of croplands in favour of permanent grassland. Similarly, the Parco Agricolo Sud Milano lost 3,757 ha and the Parco del Mincio 2,442 ha. The change from croplands to permanent grasslands confirms the recent national studies of INEA, the Italian National Institute of Economy and Agriculture, which attribute the loss of croplands to a double dynamic: the abandonment of agricultural fields by farmers, and the expansion of urban areas into the rural agricultural system, causing fragmentation of arable surfaces.

The evaluation of the Ecosystem Service Capacity (ESC)

The regional reorganisation strategy for natural areas should consider the LUCa that occurred in the period 1999 to 2012 to better understand the processes that, in recent years, affected the natural resources and related ES in protected areas. In addition to the LUCa, the state of ES and their trends, based on changes in LULC and their effects, have been verified. This additional analysis aims to support the new regional strategy of protected area reorganisation, considering that the loss of some ES can be compensated by specific policies for improving the supply of other ES. As introduced, land take is considered a key process for evaluating ES variations regarding state and trends. Indeed, land take is often used as a proxy for soil sealing related processes, which interrupt the exchange between the pedosphere and the atmosphere, thus determining changes in the natural functioning of soils. Soil functions strictly depend on the multi-functionality of soils, and each type performs specific functions. For example, some soils have a higher capacity to produce fuel or fibre than others, depending on their chemical, physical and pedogenetic characteristics and on the agro-climatic environment, while soils also differ in their capacity to filter water, to store carbon or to provide a habitat for biodiversity. Therefore, the assessment of the state of the ES in the regional natural parks adopted the methodology recently applied in the study entitled "Land cover change dynamics and insights into ecosystem services in European stream riparian zones" [14], which applies a defined ecosystem capacity value to land use classes considering the functions of integrity, regulation and provisioning [23,24]. The overall effect on each ES is calculated for the natural parks.
The ESC is the sum of the values associated with the land use typology for the different temporal thresholds used in the study. The ESC variation is considered as the difference between the overall ESC values over the observed period. A preliminary assessment of the ESC variation outlines that there is neither a drastic reduction nor an increase in the ESC index, even if a slight decreasing trend has been registered for most of the parks, although the percentage is limited. The dynamic denotes a general tendency towards a loss of ES that could be managed through a future policy of mitigation or compensation for specific ES restoration. The result shows that the conservation strategies adopted were sufficient to maintain a balance in the level of natural value inside the protected zones. Moreover, a recent study highlighted that protected areas suffer from an increase in the land take process occurring at their margins, which may represent a serious threat to the integrity of species populations and habitats preserved within the protected areas and may interrupt ecological continuity/connectivity [16,25]. Therefore, starting from the ESC trend, it was decided to consider a buffer zone external to the area, with the aim of testing and verifying the edge-effect, here intended as the analysis of how park proximity generates an acceleration in the real estate market for new housing sites located at the boundaries of protected areas [16]. The assessment of ESC in each single natural park is listed below.

Alto Garda Bresciano: the overall ESC value is slightly decreasing (-0.01%); nevertheless, a higher decrease is registered for provisioning services (-1.05%), where differences are due to the loss of cropland production (-17.42%), livestock (-12.62%) and fodder (-5.08%). At the same time, a consistent increase in the provisioning service of fresh water (+30.77%) is noticed, which indicates a process of re-naturalization that guarantees a better capacity of the soil to buffer and filter water streams and their nutrients.

Colli di Bergamo: the overall value of ESC is decreasing by 0.67%, which is mainly due to the lowering value of the provisioning services (-1.44%). Analogously to the Alto Garda Bresciano, the higher decreases are linked to the reduction of crop production capacity (-6.79%), provisioning for livestock (-4.71%) and fodder (-4.39%).

Serio: also in this protected area the provisioning services are decreasing in value. The overall value of provisioning services lost 1.22% of its initial value. Specifically, the potential crop production decreased by 3.15%, the provisioning for livestock decreased by 2.55% and fodder too decreased from its initial value, by 2.70%.

Montevecchia e Valle del Curone: all the ecosystem groups (ecological integrity, regulation and provisioning) slightly decrease, by 1.08%, 1.01% and 2.07% respectively. Particularly, the pollination services (regulation services) decrease by 2.75%, while freshwater provision increases with a rate of change equal to 39.13%.

Parco Sud Milano: the Parco Agricolo Sud Milano is an agricultural park in the metropolitan area of Milano whose productive vocation must be maintained. It is also a greenbelt park for the densely built-up area of Milan, and its establishment has historically conditioned the development of the city along northern axes and corridors, rather than other areas.
Moreover, the park is subjected to high pressure from the real estate market, which finds in the undeveloped agricultural land a suitable location for new residences. The overall ESC indicator decreases by 0.28%, given by a decrease of ecological integrity of 0.85%, an increase of regulation services of 1.42% and a decrease of provisioning services of 0.77%. Particularly, crop production decreases by 5.08%, livestock by 5.40% and fodder by 5.51%. Moreover, the metabolic efficiency (EI) decreases by 2.76%, but for other important services the ecosystem capacity marked a slight increase: climate regulation, erosion regulation, pollination, timber production.

Valle del Lambro: this protected area marked a decrease of ESC equal to 2.63%. Particularly, ecological integrity, regulation and provisioning decrease by 2.42%, 2.86% and 2.68% respectively. The category of provisioning is affected by a decrease of crop production of 4.68%, livestock of 4.55% and fodder of 5.00%. Also for regulative services, the decrease affects flooding regulation (-3.74%), air quality (-3.95%) and nutrient regulation (-3.90%). All the services mentioned above are of crucial importance for the well-being of the citizens in the metropolitan area of Milan.

Valle del Ticino: this area represents one of the greatest green corridors for the metropolitan region of Milan; its ESC decreases by 0.27%. The decrease is due to the change in ecological integrity services (-0.27%), in regulation services (+0.58%) and in provisioning services (-0.86%). The services of crop production and livestock are the ones affected by the highest decrease, and flooding regulation is also subjected to a decrease in an area where regulative services are crucial for their buffering effect in the metropolitan area.

Results and discussions

Protected areas in Lombardy are a proxy of the overall ecological quality of the Region regarding ES provision. The land take phenomenon also affected protected areas, especially through the new infrastructural process, the relocalisation of urban green areas (urban parks and sports facility areas) and industrial and commercial development areas. Certainly, the land take process is a fraction of the overall LUCa, and sometimes other kinds of land use changes (agricultural abandonment, the shifts from agricultural to natural or seminatural and vice versa) generate an ecosystemic downgrade even if an impermeabilization process does not occur. Moreover, the increase of natural areas does not imply an increase of biodiversity because, as demonstrated by the in-depth ES analysis, when pastures in alpine valleys are substituted by a new regenerating forest, this new and young forest is not mature enough and is sometimes invasive; meanwhile, when seminatural areas substitute agricultural land in flat plain areas, the transition is often due to abandonment too. The association of the traditional quantitative LUCa with qualitative information on the ecosystem capacity of soils to provide many ES (the ESC indicator) has been used as a methodology to support the decision-making mechanism guiding the new regional reform of protected areas in Lombardy. The quantitative/qualitative analysis shows that, to some extent, the preservation of the natural value of protected areas in Lombardy has been ensured even when land use changes happened.
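The ESC bookkeeping used above — a capacity score per LULC class and service group, summed over the land-use composition at each date and then differenced — can be sketched as follows. The scores and areas are illustrative placeholders, not the values used in the paper or taken from DUSAF:

```python
# Per-class capacity score for each macro group of ES (placeholder values).
capacity = {
    "arable": {"integrity": 0.4, "regulating": 0.3, "provisioning": 0.9},
    "forest": {"integrity": 0.9, "regulating": 0.8, "provisioning": 0.4},
    "urban":  {"integrity": 0.1, "regulating": 0.1, "provisioning": 0.0},
}

def esc(areas_ha: dict, service: str) -> float:
    """Park-level ESC for one service group, given class -> hectares."""
    return sum(ha * capacity[lulc][service] for lulc, ha in areas_ha.items())

areas_1999 = {"arable": 1000.0, "forest": 500.0, "urban": 100.0}  # T0
areas_2012 = {"arable": 900.0,  "forest": 520.0, "urban": 180.0}  # T1

for service in ("integrity", "regulating", "provisioning"):
    before, after = esc(areas_1999, service), esc(areas_2012, service)
    print(f"{service}: {before:.0f} -> {after:.0f} "
          f"({100.0 * (after - before) / before:+.2f}%)")
```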
The mismatch between the land use changes and their effects on ES capacity is a key issue for the future management of these areas, because the natural preservation of protected zones should be pursued both by avoiding land take phenomena in protected zones and by promoting interventions to increase the natural value of protected sites. It should be noted that:
- the average value of ESC is slightly positive (0.42%); thus, from an ecosystem analysis perspective, the region has acted with a mixed framework of protection/restoration which maintained an equal balance inside protected zones;
- where the decrease is associated with land take dynamics (Campo dei Fiori, Colli di Bergamo, Serio, Pineta Appiano Gentile, Montevecchia e Valle del Curone, Spina Verde Como, Sud Milano, Valle del Lambro and Valle del Ticino), a reinforcement of the management system of the park needs to be considered;
- the land take happened in the most relevant protected areas of the metropolitan city of Milan, threatened by urbanisation pressure. In these areas, the most frequent typology of authorization concerns requests for new streets, including connections, extensions or enlargements of the existing network, rather than requests for requalification of rural buildings through specific executive projects (PAC or PII);
- in the surrounding areas, the land take process happened with high intensity. The registered land take is the result of conversion for new infrastructures or new residential zones.
In the first case (the construction of new streets), the authorization concerns roads under provincial or regional management, which always obtain a favourable authorization from the park authority (which is regional too), while the municipal requests concern more the authorization of new parking areas or junctions, which are often rejected because they are not allowed by the park legislation. In the case of PAC or PII, the largest part of authorization requests comes from the owners of rural settlements that are dismissed or abandoned and in a degraded state of conservation. The procedure concludes with a favourable technical authorization when the project takes into account the historical value of the building and also considers the traditional rural character of the surrounding areas and their landscape. Sometimes, the authority expresses a favourable opinion if the project considers mitigations or compensations for the natural re-balance of the original condition.

Conclusions

In the Lombardy region, relevant protected areas are affected by the land take process and also by a decrease in their overall ESC. These two processes are sometimes correlated, while in other cases they are not. Thus the traditional quantitative LUCa, which is essential to know the intensity of the land take process, needs to be integrated with a qualitative assessment of each transition, such as the ES assessment used here. The integration between quantitative and qualitative analysis makes the proposed method more effective, giving better knowledge for the decision-making process. The institutions required to reorganise the system for managing and safeguarding protected areas in Lombardy need a tool that supports the decision-making process, considering the specific territorial dynamics of the natural protected areas. Otherwise, the reorganisation would be focused merely on institutional economization.
The recent Law (n. 28/2016) of the Lombardy region for the reorganisation of the system of managing and safeguarding protected regional areas needs operative support to orient the decisions on the possible aggregation (or abolition) of natural protected areas for biodiversity conservation. The paper provides a method that combines a quantitative analysis based on LULC changes in the period 1999-2012 with a qualitative investigation of the ES provision of Regional parks. The proposed methodology highlights a complex set of dynamics that includes different phenomena: the land take process, "homologous" LULC changes, and trends in ES provision. The proposed methodology is considered a starting point for the comprehension of the dynamics that occur in regional parks regarding pressures, ES threats and management. A reorganisation of the system of natural protected areas must include a detailed analysis for the assessment of the causes that have threatened nature and biodiversity. The approach provided by the authors allows the understanding of these dynamics. Additionally, the analysis needs to be completed with a documental study on the most frequent typologies of authorization, regarding building permits for new infrastructures and new urban areas in protected areas, provided by the Park Authority. This analysis allows highlighting the management approaches adopted by the different authorities for achieving a better conservation of natural quality, and is important in deciding how to combine or separate the various authorities in the governance of natural protected areas. In fact, the authorization of new urban transformations inside natural protected areas needs to be considered within a regional strategy to ensure a real management of nature. Considering the low capacity of authorities to deny permissions in protected areas where the urbanisation pressure is high, and also the edge-effect, which leads to an increase of the land take dynamic outside the boundaries of protected zones, two key points seem to be crucial for further discussion. First, the reorganisation of parks should join the different authorities into a new central authority with higher political power. In this way, the authority would be less affected by "local" real-estate market pressures, and a higher rate of denial of inadequate transformations should be achieved in the future. Secondly, even though the formal number of parks will diminish, an extension of a "medium-protected zone" should be considered in the future, so as to limit the land take edge-effect. As an example, the construction of a regional green infrastructure to connect the existing protected zones, preserving the urban and periurban areas, will guarantee higher ecological values also outside the traditional protected sites.
Comparison of shear bond strength of orthodontic brackets bonded with a universal adhesive using different etching methods

ABSTRACT

Objective: The aim of this study was to compare the effects of three enamel etching modes - laser-etch, self-etch and acid-etch (5, 10 and 15 s) - on bracket bonding, using a universal adhesive. Methods: Eighty-four maxillary premolars were randomly divided into seven groups (n=12) based on the etching method and the adhesive used for bracket bonding. After water storage and thermocycling, shear bond strength was measured, and adhesive remnant index scores on debonded enamel were determined. Results: There were significant differences between the seven groups regarding bond strength values (p < 0.001). The highest values were observed in the universal adhesive with laser etching group, while Transbond XT with acid or laser etching, and the universal adhesive used in self-etch mode, demonstrated the lowest bond strength. The universal adhesive with the three different etching times presented statistically similar results, all showing an improvement in bond strength compared with Scotchbond Universal (SBU)/SE. Conclusions: The universal adhesive evaluated in the present study demonstrated statistically similar bond strengths to conventional orthodontic adhesive in self-etch mode. The bond strength can be improved by adding an initial acid etching or laser conditioning step, although enamel damage was observed in some cases.

INTRODUCTION

For many years phosphoric acid etching has been widely used as the main step of bonding orthodontic brackets. The differential dissolution of enamel crystals and the resultant roughened surface is known to be responsible for successful micromechanical bonding to enamel. 1 However, this approach takes a long time, especially in the initial set-up session, which is associated with patient discomfort. Loss of surface enamel following acid etching has been reported, with values ranging from 0.2 to 25 µm depending on etching duration, acid concentration and the structure/composition of enamel. Furthermore, acid etching creates a morphologically porous layer 5-50 µm deep, which could render the teeth susceptible to staining. 2,3 Moreover, increased decalcification and white spot formation around bonded brackets during orthodontic treatment have also been reported as possible disadvantages of the conventional bonding technique. 2,4,5 Enamel damage might occur during bracket removal and elimination of the high bulk of residual adhesive resin. 6 These shortcomings interfere with the primary goal of the treatment in improving the esthetics and appearance of the teeth, leading to research efforts to find suitable alternatives. Laser etching has been used in numerous studies and has been shown to render comparable, 7,8 higher 9 and lower 4,10 bond strength values compared with that of acid etching, depending on the laser type, irradiation parameters and experimental designs. An increase in the calcium-to-phosphorous ratio 10 and the resultant acid/caries resistance of the laser-irradiated enamel surface could be attractive advantages 4,11 of the laser etching method. Er,Cr:YSGG is a relatively new laser and has been demonstrated to be the safest and most effective hard tissue laser. 8 A study in the field of restorative dentistry has suggested that the composition of the adhesive might affect bond strength to laser-irradiated enamel in composite resin restorations.
12 In addition to enamel preparation, adhesive/bonding resins affect bracket bonding to enamel. 1 Self-etch (SE) or etch-and-dry adhesives are less aggressive on enamel, offering advantages such as easy application, time saving and a lower risk of contamination by saliva during bracket bonding. 13,14 Despite the restricted penetration of this adhesive into superficial enamel, with shorter resin tags, 13 some studies demonstrated promising results for simplifying bracket bonding. 1,13,14 Furthermore, it has been reported that in bracket debonding the chance of enamel damage is decreased, with fast and easy removal of residual resin. 6 Recently, a new type of single-step, one-bottle universal adhesive (UA) has been introduced into restorative dentistry, which has been claimed to present the ability to bond to numerous surfaces (enamel, dentin, amalgam and porcelain). According to the manufacturers, these adhesives have a unique ability to be used in both etch-and-rinse and self-etch modes. 15 However, the inadequacy of the self-etching approach on enamel surfaces has been indicated by some authors, who have recommended phosphoric acid etching for 15 or 30 s to obtain high enamel bond strength. 16,17 Nevertheless, shortened etching times, as low as 3 s, have been reported as sufficient by a recent study. 18 To date, only one study in the field of restorative dentistry has evaluated the bond strength of UAs to enamel in the three modes (acid etch, self-etch and laser etch), reporting comparable strength for self-etching and Er,Cr:YSGG laser etching, and higher strength for pre-acid etching. 19 No study has been published on bracket bonding using UAs with different surface preparations along with shortened acid etching times. Therefore, this study was designed to examine the null hypothesis that different etching modes and different acid etching times (less than 15 s) would not affect bracket bonding using a universal adhesive.

MATERIALS AND METHODS

Eighty-four maxillary premolars with intact buccal surfaces, which had been extracted for orthodontic purposes, were selected for this study and stored in 0.5% chloramine-T solution for two weeks to ensure disinfection. The absence of cracks and defects in the teeth was verified under a stereomicroscope (Carl Zeiss, Oberkochen, Germany). The buccal surfaces of the teeth were cleaned using a rubber cup and a slurry of non-fluoridated pumice. The teeth were vertically mounted in self-cured acrylic resin cylinders and were randomly divided into seven groups (n=12), according to the surface treatment procedures.

» Group 1 (etch/TXT): as control, phosphoric acid etching for 30 s, water rinsing for 15 s, air-drying for 10 s and application of Transbond XT primer (Lot #N704516, 3M Unitek, Monrovia, CA).

» Group 2 (laser/TXT): Er,Cr:YSGG laser etching using a WaterLase Plus/Gold handpiece (Biolase Technology Inc, Irvine, CA, USA) with a wavelength of 2780 nm, tip type MZ8, pulse duration 60 µs, repetition rate 50 Hz and power 2 W, for 10 s at a distance of 1 mm and perpendicular to the enamel surface, with water and air spray; Transbond XT primer was then applied. To standardize the distance and area of laser irradiation in Groups 2 and 6, acrylic discs (1 mm in thickness with a 3×4 mm hole in the center) were used.
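As a side note on the irradiation settings above — using only the average power, repetition rate and exposure time quoted in the text — the energy delivered per pulse and the total energy per site follow directly:

$$E_{\text{pulse}} = \frac{P_{\text{avg}}}{f_{\text{rep}}} = \frac{2\ \mathrm{W}}{50\ \mathrm{Hz}} = 40\ \mathrm{mJ}, \qquad E_{\text{total}} = P_{\text{avg}}\, t = 2\ \mathrm{W} \times 10\ \mathrm{s} = 20\ \mathrm{J}.$$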
After performing the corresponding surface treatments, stainless steel maxillary premolar brackets (American Orthodontics, Sheboygan, WI) with a bracket base area of 8.82 mm² were bonded with light-cured orthodontic adhesive composite resin (Transbond XT) at the center of the clinical crown. The adhesive resin was applied to the base of the bracket, and then the bracket was pressed firmly onto the prepared enamel surface. Excess adhesive was removed from the bracket margin using a scaler. In the SBU groups, before application of the adhesive, a thin layer of SBU was applied to the bracket base, air-dried and light-cured for 10 s. This step was added to take advantage of the enhanced bond of SBU with metallic surfaces. This step and light-curing were performed in the same manner by one experienced specialist. Light was applied for 40 s (10 s from each side) at a distance of 1-2 mm between the light tip and the bracket margins, using a portable light-curing device (LITEX 696 Cordless LED Curing Light, Dentamerica, San Jose, CA). The samples were stored in distilled water at 37°C and then thermocycled for 3000 cycles between 5°C and 55°C with a dwell time of 30 s. Bracket bond strength was tested using a chisel-edge blade at the bracket/enamel interface in a universal testing machine (Zwick/Roell, Z020, Ulm, Germany), in such a way that the bracket base was parallel to the direction of shear loading, creating a shear force at the bracket-enamel interface at a crosshead speed of 0.5 mm/min, with a load cell of 2 kilonewtons. The debonding force was recorded in newtons and converted into MPa. After debonding, the enamel surfaces were subjected to stereomicroscope evaluation (×20 magnification) to determine the amount of remaining adhesive on the enamel, by a blinded operator, according to the adhesive remnant index (ARI) categories, as follows: 0 = no adhesive remaining; 1 = less than 50% of the adhesive remaining; 2 = more than 50% of the adhesive remaining; 3 = the whole adhesive remaining, showing the bracket base impression. 14 To calculate any possible error involved in the ARI scoring, two groups of samples (Groups 2 and 4) were selected randomly to be scored again by the same observer six months after the initial scoring. Exactly the same scores were achieved in the second scoring session. Subsequently, the exposed enamel surface was examined for any enamel cracks under a stereomicroscope. As the data for bond stress demonstrated a normal distribution according to the Kolmogorov-Smirnov test, they were analyzed by one-way ANOVA and post-hoc Tukey tests; ARI scores were analyzed with the Kruskal-Wallis test, as the data were not continuous and did not have a normal distribution. One debonded tooth from each group was randomly selected for SEM evaluation. The separated coronal parts of the specimens were dehydrated using a desiccator for 24 h and mounted on aluminum stubs using double-faced carbon tape. Then, the specimens were sputter-coated with gold and observed under the microscope (Tescan, Vega III, England) with an accelerating voltage of 15 kV.

Table 1 - Shear bond strength between bracket base and enamel, in groups with different surface treatments.

RESULTS

The mean bracket bond strengths and standard deviations of the seven groups are presented in Table 1. The Kolmogorov-Smirnov test revealed a normal distribution of data for all the groups. According to one-way ANOVA, there were significant differences among the groups (p < 0.001).
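For illustration, the force-to-stress conversion and the statistical comparisons described in the methods can be sketched as follows. This is a minimal sketch with simulated data standing in for the measured forces and ARI scores; only the 8.82 mm² base area is taken from the text, and the conversion uses 1 N/mm² = 1 MPa:

```python
import numpy as np
from scipy import stats

BASE_AREA_MM2 = 8.82  # bracket base area quoted in the text
rng = np.random.default_rng(0)

# Hypothetical debonding forces (N) for three of the seven groups (n = 12),
# chosen only to be roughly consistent with the MPa means in the Results.
forces_n = {
    "etch/TXT":  rng.normal(66.0, 15.0, 12),
    "SE/SBU":    rng.normal(66.0, 14.0, 12),
    "laser/SBU": rng.normal(136.0, 33.0, 12),
}
sbs_mpa = {g: f / BASE_AREA_MM2 for g, f in forces_n.items()}

# One-way ANOVA on the (normally distributed) bond-strength values.
f_stat, p_val = stats.f_oneway(*sbs_mpa.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Kruskal-Wallis on the ordinal ARI scores (hypothetical 0-3 scores).
ari_scores = {g: rng.integers(0, 4, 12) for g in forces_n}
h_stat, p_ari = stats.kruskal(*ari_scores.values())
print(f"Kruskal-Wallis (ARI): H = {h_stat:.2f}, p = {p_ari:.4f}")
```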
The highest bond strength was achieved in the laser/SBU group (15.4 ± 3.8), with significant differences from all the other groups (p < 0.006). The lowest strength was obtained in the conventional etch/TXT, laser/TXT and SE/SBU groups (approximately 7.1-7.5 MPa), with no significant differences between them. The results of multiple comparisons by post-hoc Tukey tests are shown in Table 1. The distribution of ARI scores of the seven groups is shown in Table 2. Statistical analysis of the ARI scores with the Kruskal-Wallis test revealed no significant differences between the groups (p = 0.928). Enamel cracks were observed in the etch/TXT (n=2), etch-15/SBU (n=2) and laser/SBU (n=3) groups. SEM images obtained after debonding are shown at ×1000 magnification in the Figure. As evident from the photomicrographs, in TXT with acid etching or laser etching a surface with a microretentive pattern was observed, while a less retentive pattern was seen in the SBU in SE mode group. In the SBU/acid-etching groups (Figs 1C, D and E) and the SBU/laser-etching group (Fig 1G), the surface was mainly covered with resin and enamel rods were not visible.

DISCUSSION

The optimal shear bond strength (SBS) of the bracket to enamel is expected to prevent bracket debonding during treatment, while not causing enamel damage during debonding and keeping the enamel intact after treatment. 20 It is desirable for clinicians to achieve this bond with an easy and fast approach along with maximum patient comfort. The results of the present study rejected the null hypothesis. Scotchbond Universal (SBU) demonstrated equivalent bond strength to conventional orthodontic adhesives in self-etching mode, and provided superior values when applied after acid and laser etching. The UA used in this study, SBU, with mild acidity (pH = 2.7), contains the acidic functional monomer 10-MDP along with Vitrebond copolymer; both are able to interact with hydroxyapatite. The presence of the latter results in less technique sensitivity regarding moisture contamination. 16 This property is beneficial in clinical practice for attaching brackets to multiple teeth or in cases where impacted or semi-erupted teeth need to be bonded. SBU in SE mode resulted in SBS comparable to the conventional acid etching group. This result was supported by a recent study by Hellak et al. 21 - however, they measured initial SBS without thermocycling. 10-MDP has been documented to be capable of bonding to enamel and dentin effectively, forming a nanolayer at the adhesive interface composed of the calcium salt of MDP, which has low solubility. In addition, MDP is a relatively hydrophobic monomer. 15,22 These properties might account for the reliable bond strength of SBU in SE mode after thermocycling. The bracket-enamel bond is affected by thermocycling due to water sorption and induced thermal stress. 7 The nanofiller in SBU and the thick adhesive layer formed might induce a beneficial effect in terms of bond strength, via stress relief and cessation of crack propagation, respectively. 16 This positive effect has been demonstrated for filled adhesives in bracket bonding. 23 Lower but adequate SBS of self-etch adhesives compared with conventional etching has previously been reported. 1,13,14,20 A significantly lower SBS of a self-etching primer compared to conventional etching can be attributed to the short application time (3 s) of the self-etching primer, as shown by Chu et al. 24 The positive effect of an increased application time (15 s) of self-etching primer on the SBS of brackets has been documented.
13 In the current study, SBU was applied and gently rubbed on the surface for 20 s, providing bond strength similar to conventional orthodontic bonding. Furthermore, acid etching for 15 s prior to the SBU used in this study significantly increased bracket shear bond strength. This difference between SE mode and acid-etch mode led to the rejection of the null hypothesis. This confirmed the role of acid etching in creating a porous and retentive enamel surface, in particular on the unground enamel involved in bracket bonding. Phosphoric acid is able to remove the less reactive, highly mineralized superficial enamel layer. 16,25 Interestingly, reducing the acid etching time to 10 s and even 5 s was sufficient to obtain this positive effect of acid etching in the present study. Therefore, the part of the null hypothesis stating no difference between acid etching times was supported. In the self-etch approach, a shallower etching depth and less demineralization of enamel, due to lower acidity and simultaneous etching and adhesive infiltration, have been reported. When compared with acid etching, SE adhesives create shallower and fewer resin tags. 26,27 The less retentive pattern in the debonded surface of the SBU in SE mode group was observed under SEM. It has been reported that phosphoric acid etching provides a hydrophilic enamel surface by exposing hydroxyl groups of the enamel, 28 compatible with UAs containing water and hydrophilic monomers. 18 Moreover, this etching polarizes the enamel surface, improving the chemical interaction of acidic functional monomers with hydroxyapatite. 28 These factors contributed to the increased shear bond performance of SE adhesives and UAs to enamel. 18,29 The modified surface was achieved with shortened etching times (even 3 s) on ground (smear layer-covered) enamel. 18,29 It seems that reducing the etching time to 10 and 5 s on intact enamel, as was the case in our study, was able to provide the above-mentioned beneficial effects to some extent. Previously, the lack of a determinant role of enamel condition (ground and unground) in the bonding ability of SE and total-etch adhesives has been reported. 25,30,31 Laser etching with the parameters used in this study significantly increased bracket shear bond strength using SBU. Laser etching increased surface roughness and created a surface with microretentive characteristics and microcracks that was favorable for resin bonding. 12,14,32 This surface with microroughness was also evident for TXT with laser etching. However, laser etching did not increase shear bond strength for TXT. This result supported a previous report that the effect of laser etching depends on the type and compositional characteristics of adhesive resins. 9,12 SBU, when compared with TXT, has a lower viscosity due to its water and ethanol solvent content and the low-viscosity monomer HEMA. The surface irregularities and roughness produced by laser etching may be better wetted by SBU than by TXT. Wetting the prepared enamel surface is considered a critical step in enamel bonding. 9 The higher wetting ability of SBU relative to TXT on etched enamel could also explain the higher bracket bonding of the etch/SBU group compared to that of the etch/TXT group. These results, revealing a significant difference between etching modes, rejected the null hypothesis. Some authors demonstrated that higher shear bond strength values are associated with high amounts of remnant adhesive on the enamel surface.
1,6,13,24 This was not necessarily observed in this study; no statistically significant difference was found when the ARI values were compared between the groups. Although no cohesive fracture in enamel was observed, there were enamel cracks in some groups. Enamel cracking is a serious problem during bracket debonding, compromising the intact surface of the tooth. A slight relationship was found between high bracket bond strength and enamel cracking, in that the laser/SBU group with the highest strength (15.38 MPa) and SBU/etch-15 with high strength (11.50 MPa) exhibited enamel cracks. However, crack formation was not observed for SBU with 5 and 10 s pre-etching, with shear bond strengths similar to that of the 15 s etch group, nor for laser/TXT with low shear bond strength (7.14 MPa). In this regard, it has been suggested that bracket shear bond strengths over the fracture strength of enamel (approximately 14 MPa) are not desirable. 13,20,23 Although the bond strength was not high, enamel cracks were also seen in the etch/TXT group. A possible explanation is that the longer etching time in this group created a porous structure that might not be fully penetrated by resin. This may weaken the enamel surface and render it susceptible to crack formation. It can be concluded that the ARI score and the occurrence of cracks seem to depend not only on the shear bond strength, but also on many factors such as adhesive composition, bracket base design and the characteristics of the prepared enamel. 33 Based on the results of the current study, the laser-etch/SBU combination, etch/TXT and SBU/etch-15 should be approached with caution, as enamel cracks were observed. However, SBU in SE mode or with a shortened pre-etching time (5 s) can be considered a safe treatment, with reduced chair-time and a low risk of moisture contamination. It is important to mention that a minimum bond strength of 6-8 MPa has been declared adequate for most orthodontic needs during routine clinical use. 34 The mean shear bond strength values of all groups in this study either fell within the mentioned range or were considerably higher. Having higher values, however, provides a safety margin that may protect against bracket debonding in the event of an unexpected trauma to the bracket-enamel interface. In this regard, SBU with 5 s etching (11.15 MPa) could provide this requirement with no enamel cracks, while SBU in SE mode and laser/TXT had lower shear bond strengths (7.53 and 7.14 MPa, respectively). In this study, 12 specimens were used in each group. A post-hoc power analysis using the mean and standard deviation values of the seven groups revealed a power value of 89% at α = 0.05, which is deemed sufficient, as it is greater than 80%. As with other ex-vivo studies, it was necessary to rely on thermocycling to simulate intra-oral conditions, which may differ from what happens in a true clinical setting. This could be considered the main limitation of this study. The stress applied on the bonded bracket during debonding in the clinic is a combination of shear, tensile and torsional forces. Also, enzymatic and pH challenges were not simulated in this in vitro condition. Therefore, randomized clinical studies are suggested to assess the performance of SBU in the clinical setting. Such studies will also give a better understanding of the feasibility of integrating such adhesives in routine orthodontic procedures.
To evaluate only the effects of SBU and exclude possible confounding variables, it was decided to bond the brackets in all groups with a conventional orthodontic composite (Transbond XT). Following the results of this study, bonding orthodontic brackets with other types of available adhesives, especially those used in restorative dentistry, would be an interesting topic for future research, as it would be beneficial in reducing the number of items that need to be purchased in dental clinics.

CONCLUSIONS

Based on the results of the present study, it can be concluded that:
1. Acid etching and conventional orthodontic adhesive can be replaced with SBU in SE mode without reducing bracket shear bond strength.
2. Adding an etching step before application of SBU improved the achieved shear bond strength. However, increasing the duration of etching from 5 to 15 seconds did not result in significantly different bracket shear bond strengths, while causing enamel damage in the form of cracks.
3. In contrast to the conventional orthodontic adhesive, laser etching had a significant impact on the shear bond strength achieved with SBU, although enamel damage was observed in a few of the specimens.
Quantum tunneling time

A simple model of a quantum clock is applied to the old and controversial problem of how long a particle takes to tunnel through a quantum barrier. The model I employ has the advantage of yielding sensible results for energy eigenstates, and does not require the use of time-dependent wave packets. Although the treatment does not forbid superluminal tunneling velocities, there is no implication of faster-than-light signaling because only the transit duration is measurable, not the absolute time of transit. A comparison is given with the weak-measurement post-selection calculations of Steinberg.

INTRODUCTION

Recent experiments on quantum tunneling have re-ignited the longstanding debate over how long a particle takes to tunnel through a barrier [1]. Naïve calculations suggest that faster-than-light tunneling is possible, while some experiments have superficially suggested that such a phenomenon might have been observed [2]. However, it has been argued that information does not exceed the speed of light in these experiments, so that relativistic causality remains intact [3]. The analysis of tunneling time is complicated because time plays an unusual and subtle role in quantum mechanics. Unlike position, time is not usually treated as an operator; rather it is a parameter. Consequently, the energy-time uncertainty principle does not enjoy the unassailable central position in the theory as does the position-momentum uncertainty principle. This leads to considerable ambiguity when it comes to the measurement of the duration between quantum events. Attempts to define tunneling time have led to an extensive and confused literature [4]. Most theoretical treatments focus on the behaviour of a wave packet as it traverses a square barrier. However, such a barrier is dispersive, so the packet is disrupted by the experience. Also, interference between parts of the wave packet reflected from the barrier and parts still approaching further complicates matters. A simple heuristic argument to estimate the tunneling time goes as follows. To surmount a square barrier of height V, a particle with energy E must 'borrow' an amount of energy V − E. According to the uncertainty principle, this must be 'repaid' after a time T = 1/(V − E), in units with ħ = 1. This provides a crude upper bound for the tunneling time. If the width of the barrier is a, then the effective speed of the particle during the tunneling process must exceed a(V − E). As a can be made as large as we please, there is no upper bound on this effective velocity. In particular, it may exceed the speed of light, in apparent violation of relativistic causality. Moreover, the above expression has the odd feature that as the height V of the potential hill is increased, so the tunneling time decreases, i.e. the more repulsive the potential, the faster the particle moves in the forward direction! A natural way to approach the problem is to introduce some sort of clock that is coupled to the particle, and to define the tunneling time in terms of the change in the clock variable from the time that the particle reaches the barrier to the time it emerges. It is then possible to define the expectation time for the tunneling event in terms of the expectation value of the clock variable, which is a quantum observable. Several suggestions for quantum clocks have appeared in the literature [5]. In this paper I restrict attention to an early proposal for a quantum clock by Salecker and Wigner [6], later elaborated by Peres [7].
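For later comparison, the heuristic estimate above can be collected into a single pair of formulas (a restatement of the argument in the text, in units with ħ = 1, with no claim beyond order of magnitude):

$$T \simeq \frac{1}{V-E}, \qquad v_{\text{eff}} = \frac{a}{T} \simeq a\,(V-E) \;\longrightarrow\; \infty \quad \text{as } a \to \infty.$$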
This model deals not with moving wave packets or other time-dependent states, but with stationary states for which the time enters as a changing phase: e^{iEt}. The quantum clock essentially measures the change in phase, resulting in a duration known as the phase time. Because one is dealing with stationary states, the clock measures only time differences between two events, not the absolute time of either event. This is a key point. Suppose one tries to measure the tunneling time by first measuring the time at which the particle arrives at the leading edge of the potential hill, then measuring the time at which it emerges on the remote side, and taking the difference. The act of observing the particle at the first position collapses the wave function to a position eigenstate and introduces arbitrary uncertainty into the momentum, so that the second measurement is upset. If, on the other hand, one forsakes knowledge of the absolute time of passage of the particle, but requires only the duration for the particle to pass between two fixed points in space, then only a single measurement is required and there is no large unpredictable disturbance.

To achieve this, the particle is coupled (weakly) to a quantum clock. The coupling is chosen to be non-zero only when the particle's position lies within a given spatial interval (e.g. the potential hill). Initially the clock pointer is set to zero. After a long time, when the particle has traversed the spatial region of interest with high probability, the position of the clock pointer is measured. The change in position yields the expectation value for the time of flight of the particle between the two fixed points. Full details are given in [7] and will not be repeated here.

To illustrate the method, consider a free particle of mass m and energy E moving to the right in one space dimension, in a momentum eigenstate e^{ikx} described by the time-independent Schrödinger equation. The expectation value for the time of flight between two points separated by a distance ∆ is found as follows. First compute the phase difference δ(E) in the wave function between the two points; this is δ(E) = k∆, where k ≡ (2mE)^{1/2}. Next replace E by E + ε, where ε is the coupling energy between the particle and the clock, treated in first-order perturbation theory. Now expand δ(E + ε) to first order in ε. The coefficient, δ′(E), is T, the required expectation value of the time for the particle to traverse the distance ∆. In the above case T = δ′(E) = m∆/(2mE)^{1/2}. Defining the classical velocity v = k/m = (2E/m)^{1/2}, the expected time of flight T is seen to be identical to the classical result ∆/v.
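The phase-time recipe, compute the phase difference δ(E) and differentiate with respect to energy, is easy to verify symbolically. The following sketch uses Python with SymPy (my own choice of tooling, not anything used in the paper; ħ = 1 throughout) to reproduce the free-particle result T = δ′(E) = ∆/v.

```python
import sympy as sp

# Symbols: mass m, energy E, separation Delta (all positive); hbar = 1.
m, E, Delta = sp.symbols('m E Delta', positive=True)

k = sp.sqrt(2 * m * E)          # free-particle wave number, k = (2mE)^(1/2)
delta = k * Delta               # phase accumulated over a distance Delta

# Phase time: T = d(delta)/dE, the first-order coefficient of delta(E + eps).
T = sp.diff(delta, E)

v = k / m                       # classical velocity v = k/m = (2E/m)^(1/2)
print(sp.simplify(T - Delta / v))   # -> 0, i.e. T equals the classical Delta/v
```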
II. POTENTIAL STEP

As a less elementary example, consider the particle scattering from a potential step of height V situated at x = 0. The stationary state wave function has space-dependent part

ψ(x) = e^{ikx} + A e^{−ikx}, x < 0;  ψ(x) = B e^{−px}, x > 0,   (1)

where k ≡ (2mE)^{1/2} and p ≡ [2m(V − E)]^{1/2}. It turns out that the overall normalization factor does not affect the result, so it will be omitted. Suppose we require the expectation value for the time of flight of the particle to travel from x = −b to the barrier and back again. At x = −b the phase of the incident portion of the wave function is −kb. Now compute the phase of the reflected portion at x = −b. Using continuity of the wave function and its first derivative at the step, we solve for A and find

A = (ik + p)/(ik − p),   (2)

so that the said phase has the value kb + α, where

α = −2 arctan(p/k).   (3)

Thus δ(E) = 2kb + α, and we find by differentiating with respect to E that

T = δ′(E) = 2b/v + 2d/v,   (4)

where d ≡ 1/p is the expectation value for the penetration depth of the particle into the potential step. The result Eq. (4) has an intuitively simple interpretation. The term 2b/v is the time of flight from x = −b to the potential step at x = 0 and back again, at the classical velocity v. The term 2d/v represents (the expectation value of) the additional duration of sojourn of the particle in the classically forbidden region beneath the potential step, and can be interpreted as if the particle moves with the classical velocity v for a distance d equal to the average penetration depth, and back again. Thus the effective distance from x = −b to the step is increased from the classical distance b to b + d. Note that if V → ∞ then d vanishes, so an infinite potential step yields instantaneous reflection. On the other hand, as E → V, p → 0 and the sojourn time beneath the step diverges: the particle takes an infinite time to bounce back.

In the case that E > V, p is imaginary, A is real and α = 0. The round-trip time therefore reduces to the classical result 2b/v. The reflection from the step is instantaneous in this case too, even when E → V from above. There would thus appear to be an infinite discontinuity in the reflection time at E = V. However, we must be cautious. The method of computation demands that we expand functions of E − ε and V − E − ε in powers of ε and treat ε as small. This procedure is clearly untrustworthy near both E = 0 and E = V. I shall return to this problem later.

One may also compute the time for the particle to go from x = −b to, say, x = b′ > 0 by examining the phase of the wave function in the region x > 0 (the B-dependent term in Eq. (1)). If E < V, the method suggests an imaginary phase shift −pb′, implying an imaginary time. If E > V, on the other hand, the phase shift is real, βb′ with β ≡ [2m(E − V)]^{1/2}, and the corresponding contribution to the time of flight is b′/v′, where v′ = β/m = [2(E − V)/m]^{1/2} is the classical velocity above the step. Evidently we can directly patch together the time of flight before the step with that at the reduced velocity after the step.

III. POTENTIAL HILL

I now treat the case of principal interest: a particle that tunnels through a square potential hill given by V = constant > 0 in the interval [0, a] and zero elsewhere. The wave function is

ψ(x) = e^{ikx} + A e^{−ikx}, x < 0;  ψ(x) = B e^{−px} + C e^{px}, 0 < x < a;  ψ(x) = D e^{ikx}, x > a.   (5)

The phase of the incident part of the wave function at x = 0 ('entering the tunnel') is 0. The phase of the emergent wave function at x = a ('leaving the tunnel') is given by the phase of D e^{ikx}. Using continuity of the wave function and its derivative at x = 0 and x = a, the phase change is found to be

δ(E) = −arctan[((p² − k²)/(2kp)) tanh(pa)].   (6)

Differentiation then yields for the expectation value of the tunneling time

T = δ′(E) = [(2m³V²/(k³p³)) tanh(pa) + (ma(p² − k²)/(2kp²)) sech²(pa)] / [1 + ((p² − k²)/(2kp))² tanh²(pa)].   (7)

As a check, we note that when V = 0, p = ik and T = ma/k = a/v as expected. In the special case E = V/2, the right hand side of Eq. (7) reduces to tanh(ka)/E. For small a, it reduces to

T ≈ ka/E,   (8)

which → 0 as a → 0, as expected. If we define the effective velocity of the particle to be v_eff ≡ a/T, then for small a

v_eff = 2Ev/(V + 2E),   (9)

where v = k/m = (2E/m)^{1/2} is the classical velocity of the particle outside the potential hill. Note that v_eff < v in this limit: thin potential hills slow the particle down, as one might be led to believe on classical grounds. For E = V/2, v_eff = v/2.

In the case that V >> E, the limit used above breaks down. A case of interest is a delta-function potential hill, where Va² = constant as V → ∞ and a → 0. Returning to Eq. (7) and applying these limits, one finds that T → 0. There is no problem here about reflected waves slowing the particle as it approaches the barrier. This result confirms the work of Aharonov, Erez and Reznik [8], who find T = 0 for the tunneling time through an array of delta-function potential hills.
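The phase-time expression can be cross-checked numerically: the phase of the transmitted wave for a rectangular barrier is standard, and differentiating it numerically with respect to energy should reproduce Eq. (7), including the check T = tanh(ka)/E at E = V/2. A minimal sketch (Python with NumPy; ħ = 1 and all parameter values purely illustrative):

```python
import numpy as np

m, V, a = 1.0, 2.0, 1.5   # illustrative values, hbar = 1

def phase(E):
    """Phase of the emergent wave at x = a relative to the incident wave at x = 0."""
    k = np.sqrt(2 * m * E)
    p = np.sqrt(2 * m * (V - E))
    return -np.arctan(((p**2 - k**2) / (2 * k * p)) * np.tanh(p * a))

def phase_time(E, h=1e-7):
    """Phase time T = d(delta)/dE by central differences."""
    return (phase(E + h) - phase(E - h)) / (2 * h)

E = V / 2
k = np.sqrt(2 * m * E)
print(phase_time(E))            # numerical T at E = V/2
print(np.tanh(k * a) / E)       # analytic check tanh(ka)/E: the two should agree
```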
By contrast to the slowing effect found above, thick hills serve to speed the particle up, i.e. v_eff > v in this case. Taking the limit a → ∞ in Eq. (7), we see that the tunneling time approaches the constant value

T → 2m/(kp) = [E(V − E)]^{−1/2}.   (10)

This is similar to the result found from the naive argument mentioned in section I. The right hand side of Eq. (10) is reminiscent of the energy that can be 'borrowed' for a time T according to Heisenberg's uncertainty principle, but with the interesting difference that the 'borrowing' requirement is not simply V − E, but the geometric mean of this quantity and E. The tunneling time is minimized for E = V/2, and in this case we do have T = 1/E. Note that Eq. (10) is also equal to the second term on the right hand side of Eq. (4), the sojourn time inside a potential step. We shall see below that this is a special limit of the general result that the expectation time for a particle to reflect back from the potential barrier is the same as the expectation time for it to penetrate the barrier. The effective velocity under the barrier is

v_eff = a/T = akp/(2m),   (11)

which rises without limit as a → ∞. In particular, v_eff exceeds the speed of light c when

pa > 2mc/k = 2 × (de Broglie wavelength)/(Compton wavelength).   (12)

However, for thick barriers the transmission probability is very small. To estimate it, first note that if the approaching particle is to remain non-relativistic (as assumed in the treatment given here), then the right hand side of Eq. (12) must be >> 1, which implies pa >> 1. In this limit the transmission probability approximates to

P_t ≈ [16k²p²/(k² + p²)²] e^{−2pa}.   (13)

Consider the example of E = V/2 = mc²/8. Taking pa = 2mc/k (corresponding to the onset of superluminal propagation), the barrier penetration probability is then 4e^{−8} ≈ 10^{−3}. Although small, this number is by no means negligible, and we have to confront the consequences for causality if it is indeed the case that the occasional particle can tunnel faster than light.

A violation of causality will come about if observer A can send information to an observer B a distance d away such that it arrives before a time d/c has elapsed. Could A use an electron to encode this information, and arrange for it to tunnel through a barrier to B in the knowledge that, albeit only occasionally, B will get to receive the electron before a time d/c? I believe the answer to be no. To achieve physical causality violation, A must be able to determine the moment of transmission of the information. But as we have seen, the model system discussed here can determine only the time difference between the moment of 'transmission' and 'reception' of the particle, not the absolute time of transmission. If d/c − T = ∆t, say, then to qualify as causally relevant, the signalling process must be controlled to a fidelity < ∆t; but this is not possible in the present model. Any faster-than-light propagation would therefore be fortuitous: entirely random and uncontrollable. Tunneling may violate the spirit of relativity, but it does not seem to violate the letter.
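The quoted example is quick to reproduce: with E = V/2 the prefactor in Eq. (13) is 4, and the threshold pa = 2mc/k evaluates to 4 when E = mc²/8. A short arithmetic check (Python; natural units with ħ = c = 1 and a unit mass scale, which is my own normalization):

```python
import numpy as np

mc2 = 1.0                      # rest energy in arbitrary units (m = c = hbar = 1)
E = mc2 / 8                    # particle energy
V = 2 * E                      # barrier height, so E = V/2 and hence p = k

k = np.sqrt(2 * E)             # k = sqrt(2mE) with m = 1 -> 0.5 here
p = np.sqrt(2 * (V - E))
pa = 2 / k                     # superluminal threshold pa = 2mc/k -> 4 here

prefactor = 16 * k**2 * p**2 / (k**2 + p**2)**2   # -> 4 when p = k
P_t = prefactor * np.exp(-2 * pa)
print(pa, prefactor, P_t)      # 4.0, 4.0, about 1.3e-3, i.e. 4*exp(-8)
```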
IV. MEASUREMENT UNCERTAINTY

The model clock used here is a quantum system, and is therefore subject to quantum uncertainty in its operation, which in turn implies an uncertainty in the deduced tunneling time. As shown by Peres [7], back action of the clock's dynamics on the particle's motion, which persists throughout the 'experiment,' will limit the resolution of this model clock. In particular, it is unreliable when E → 0 or E → V. The resolution of the clock is limited by the assumption that |E| and |V − E| are >> ε. The energy-time uncertainty relation applied to the clock variables then suggests that the clock pointer will have an uncertainty corresponding to a time τ ≈ 1/ε >> 1/E. But the tunneling time, as illustrated by, say, the asymptotic value Eq. (10), is itself of order 1/E. Hence the quantum uncertainty in measuring the tunneling time T is of the same order as the expectation value of the tunneling time. This is no surprise, as any limitation in measurement resolution will have this general form on dimensional grounds. The disturbance of the particle's motion caused by the back action can be reduced by making the coupling weaker, but at the expense of introducing greater uncertainty in the measurement of the clock reading.

An alternative strategy to reduce the uncertainty is to use a clock that is not continuously coupled to the particle. This could be achieved by placing the clock in a metastable state, and then using the arrival of the particle at the leading edge of the barrier to merely trigger the operation of the clock via a momentary interaction. This sort of device has been studied by Oppenheim, Reznik and Unruh [9]. Perhaps surprisingly, it does not result in a reduction in the overall uncertainty. The reason for this is that the sharply-localized potential associated with the triggering device reflects some of the wave function, and attempts to mitigate this back-action effect (for example, by boosting the energy of the particle just before the barrier) serve only to introduce additional uncertainties. Thus there seems to be an irreducible uncertainty in the measurement of the tunneling time that is comparable to the tunneling time itself.

At first sight this appears to cast doubt over the usefulness of the foregoing results. However, as shown by Aharonov et al. [10], by performing measurements on a large ensemble of identical systems, the spread in results can be drastically narrowed, even in cases where the uncertainty for a single measurement exceeds the expected value. This is the theory of weak measurement. Applied to the problem of the Peres clock and tunneling time, weak measurement theory implies that, interpreted in an ensemble sense, the results of the foregoing sections are physically meaningful, in spite of the intrinsic uncertainty in the operation of the clock.

Weak measurement theory is often combined with post-selection, whereby a final sub-ensemble is extracted corresponding to the state of interest. In the case of tunneling, this sub-ensemble will include only those particles that penetrate the barrier and move to the right. Steinberg [11] has computed the tunneling time using this approach, by evaluating an expectation value for a projection operator corresponding to the time the particle is inside the barrier, in the limit that the measuring device interacts only exceedingly weakly with the particle. The resulting expression is complex. Its real part corresponds to the expectation value of the tunneling time, the imaginary part to the back action of the measuring device on the particle. These respective parts are related in a rather transparent manner to other proposed definitions of the tunneling time. For example, the real part is identical to the so-called dwell time, which is defined as the probability of finding the particle inside the barrier divided by the incoming flux. Steinberg's result for the tunneling time expectation value is

T_s = (mk/p) [2pa(p² − k²) + (k² + p²) sinh(2pa)] / [4k²p² + (k² + p²)² sinh²(pa)],   (14)

which should be compared to Eq. (7). The two durations are very similar, but not identical. One finds that in the free-particle limit V → 0, T → T_s, while in general T_s < T for E < V. In the limit of large barrier width a, T_s approaches 2mk/(p(k² + p²)) = (E/V) × 2m/(kp), so that T = (V/E) T_s in this limit. Note that both T and T_s diverge as E → V, but in the limit E → 0 the behaviour is very different: T diverges like E^{−1/2}, whereas T_s vanishes like E^{1/2}. The latter result implies the curious property that, as the approach velocity of the particle falls, the post-selected tunneling velocity rises; in the limit v → 0 the post-selected tunneling velocity diverges.
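The comparison between the clock (phase) time and the post-selected (dwell) time can be probed numerically; the sketch below evaluates the dwell-time formula quoted in Eq. (14) alongside a numerically differentiated transmission phase (Python with NumPy; ħ = m = 1 and illustrative parameter values of my own choosing):

```python
import numpy as np

m, V, a = 1.0, 2.0, 3.0      # illustrative barrier parameters, hbar = 1

def wavenumbers(E):
    return np.sqrt(2 * m * E), np.sqrt(2 * m * (V - E))

def T_phase(E, h=1e-7):
    # Phase time: energy derivative of the transmitted wave's phase, Eq. (6)-(7).
    def delta(E):
        k, p = wavenumbers(E)
        return -np.arctan(((p**2 - k**2) / (2 * k * p)) * np.tanh(p * a))
    return (delta(E + h) - delta(E - h)) / (2 * h)

def T_dwell(E):
    # Dwell time for a rectangular barrier, as quoted in Eq. (14).
    k, p = wavenumbers(E)
    num = 2 * p * a * (p**2 - k**2) + (k**2 + p**2) * np.sinh(2 * p * a)
    den = 4 * k**2 * p**2 + (k**2 + p**2)**2 * np.sinh(p * a)**2
    return (m * k / p) * num / den

for E in [0.2 * V, 0.5 * V, 0.8 * V]:
    print(E, T_phase(E), T_dwell(E))   # expect T_dwell < T_phase for E < V
```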
V. OTHER RESULTS

One may use the Peres clock model to calculate some other transit times of interest. Consider, for example, the time between incidence and reflection from the leading face of the hill at x = 0. This may be computed by examining the phase of A in Eq. (5). One finds for the phase change

δ_r(E) = δ(E) + c₀,   (19)

where c₀ is an energy-independent constant. But the derivative of Eq. (19) is identical to that of Eq. (6), so the sojourn time inside the hill is the same whether the particles are transmitted or reflected. Thus the tunneling times for transmission and reflection are the same. It has been argued [4] that tunneling times for transmission and reflection should satisfy the relation

T_D = P_t T_t + P_r T_r,   (20)

where T_D is the dwell time, P_t (P_r) is the probability of transmission (reflection) and T_t (T_r) the corresponding tunneling time. Equation (20) is satisfied, for example, by Steinberg's definition, but not by the one used in this paper. However, Landauer and Martin [1] have argued strongly that Eq. (20) is an inappropriate criterion.

The analysis given in this paper to investigate tunneling may also be used to derive results for one-dimensional scattering over the barrier (E > V), by putting p = iβ, with β ≡ [2m(E − V)]^{1/2}, in Eq. (7). This yields

T = [ma(k² + β²)/(2kβ²) − (m³V²/(k³β³)) sin(2βa)] / [1 + (m²V²/(k²β²)) sin²(βa)].   (21)

For small a the right hand side of Eq. (21) reduces to

T ≈ ma(3k² − β²)/(2k³),   (22)

with a corresponding effective velocity given by Eq. (9). Again, v_eff < v, although v_eff > β/m = [2(E − V)/m]^{1/2}, the classical velocity over the barrier. So although the repulsive potential slows the particle, it does not do so as much as in the classical case.

Now consider the opposite limit of large a. The denominator on the right hand side of Eq. (21) can never vanish, while the sine functions in the numerator are bounded. Thus, for large a and neglecting the bounded term in the numerator,

T ⩾ ak/(2E − V).   (23)

By contrast to the result for the tunneling case, the right hand side of Eq. (23) is proportional to a even for large a. The effective velocity (2E − V)/k therefore always remains less than c when the particle passes over the barrier: only tunneling events lead to superluminal velocities. For large E, v_eff → v, but for particles that just clear the barrier, E ≈ V, v_eff ≈ v/2.

Special interest attaches to the case of resonance transmission, when βa = nπ and P_t = 1. Then Eq. (21) simplifies to

T = ma(k² + β²)/(2kβ²).   (24)

In this case v_eff does approach 0 as E → V, as it would in the classical case. The effective velocity, however, has a very different energy dependence from the classical expression. For the case of anti-resonance, where cos(βa) = 0, Eq. (23) becomes exact, and

v_eff = (2E − V)/k,   (25)

the average of the classical velocities outside and over the barrier.

Note that because the reflection and transmission expectation times are equal, there is always a reflection delay, or sojourn in the region x > 0, even in the case that E > V. This is in contrast to the single potential step, where reflection is instantaneous if E > V. The difference has a natural interpretation. In the case of the potential hill, reflection may take place from both the leading and remote faces of the hill. The actual reflections may be instantaneous, but in the case that the particle reflects from the far edge x = a there will be a delay due to the travel time across the top of the hill. The expectation value will therefore include this delay, and the result is consistent with one half of the flux being reflected from each edge.
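A quick numerical probe of the over-barrier expression: at anti-resonance (cos βa = 0) the phase time from Eq. (21), as reconstructed above, should give an effective velocity of exactly (2E − V)/k. The sketch below checks this (Python with NumPy; ħ = m = 1, parameters chosen by me so that βa hits an anti-resonance):

```python
import numpy as np

m, V = 1.0, 1.0
E = 1.5 * V                          # over-barrier scattering, E > V
k = np.sqrt(2 * m * E)
beta = np.sqrt(2 * m * (E - V))
a = (np.pi / 2) / beta               # choose a so that beta*a = pi/2 (anti-resonance)

# Phase time from Eq. (21)
num = m * a * (k**2 + beta**2) / (2 * k * beta**2) \
      - (m**3 * V**2 / (k**3 * beta**3)) * np.sin(2 * beta * a)
den = 1 + (m**2 * V**2 / (k**2 * beta**2)) * np.sin(beta * a)**2
T = num / den

v_eff = a / T
print(v_eff, (2 * E - V) / k)        # the two values should coincide
```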
Finally, it is worth noting that the above analysis also applies to the case of scattering from a potential well, V < 0. For resonant scattering, where a bound state exists just below the top of the well, the scattering cross-section rises sharply at low energy. This is accompanied by a rise in the scattering time T. However, although the effect is explicit in a calculation of the dwell time T_D, it is masked in the case of the Peres clock, because T diverges anyway as E → 0.

VI. CONCLUSION

I have shown by use of a simple model that sensible and consistent expressions may be derived for the expectation value of the time for a non-relativistic particle in an energy and momentum eigenstate to pass between two points, so long as the absolute time of passage is not required. The points may be separated by regions that include a variety of potentials, including a square potential barrier. In the latter case the tunneling time is given by a credible expression, which approaches a constant for thick barriers, implying an issue concerning superluminal propagation. However, I have argued that physical causality is not violated. In this paper I have restricted the discussion to simple square barriers. It is of interest to consider tunneling into other types of potential too. An important example is the uniform gravitational potential V(x) = mgx. I have discussed this problem in detail in another publication [12].

ACKNOWLEDGMENTS

I am grateful to Aephraim Steinberg for helpful comments.
Idiopathic pleuroparenchymal fibroelastosis presenting as bilateral spontaneous pneumothorax: A case report

Sir,

Pleuroparenchymal fibroelastosis (PPFE) is an under-recognized clinicopathological entity characterized by fibroelastosis of the pleura and subpleural lung parenchyma with striking upper lobe predominance.[1] Pneumothorax and pneumomediastinum complicate the course of the disease and can often be the initial presenting manifestation. To the best of our knowledge, this is the second case report of PPFE from the Indian subcontinent,[2] and the first from our subcontinent in which the diagnosis was established antemortem and treated successfully.

PPFE was first described in Japan by Amitani et al.[3] as idiopathic pulmonary upper lobe fibrosis. The findings consistent in their cases were (a) slender stature with flattened thoracic cage, (b) progressive subpleural fibrosis without honeycombing, (c) recurrent pneumothorax, (d) no extrathoracic lesions, and (e) absence of acid-fast bacilli and lack of response to antitubercular therapy. It had been called Amitani disease until the term PPFE was coined by Frankel et al. in 2004.[1] Although the etiology is unknown, most cases have shown an association with lung, bone marrow, and hematopoietic cell transplantation, chemotherapy drugs, occupational exposures, and recurrent lower respiratory tract infections.[4]

The common symptoms at presentation include dyspnea, dry cough, weight loss, and chest pain. Patients often have a slender body habitus and a flat chest.[1] Spontaneous or iatrogenic pneumothoraces, which are generally small and often recurrent and bilateral, are common in the course of the disease. The elastic pleura has limited healing capacity, and this leads to persistent bronchopleural fistulae.[5] Earlier age of onset, low body mass index, presence of a flat chest, upper lobe predominance, high incidence of pneumothorax, and bronchopleural fistulae differentiate this entity from idiopathic pulmonary fibrosis.[4,6]

The unique pathologic feature of PPFE is intense, predominantly elastic fibrosis of the visceral pleura, particularly in the upper lobes. Marked elastin deposition within the areas of fibrosis, very few or rare fibroblastic foci, and homogeneous intra-alveolar fibrosis with preserved alveolar structure rather than temporal heterogeneity differentiate PPFE from the usual interstitial pneumonia (UIP) pattern.[7] Both UIP and nonspecific interstitial pneumonia have subpleural-predominant interstitial fibrosis, which consists more of collagen than elastic fibers, and have a lower lobe predilection, unlike PPFE.

A 31-year-old female, a homemaker, presented with dry cough and progressively worsening dyspnea of 5 months' duration in the postpartum period. She also lost 6 kg of weight over the same period. Her history was unremarkable for any connective tissue disease. On auscultation, she had bilateral fine end-inspiratory crackles, more in the suprascapular areas. Arterial blood gas analysis was suggestive of Type I respiratory failure. A chest radiograph showed bilateral pneumothoraces and bilateral upper zone infiltrates [Figure 1a]. High-resolution computed tomography (CT) of the chest revealed bilateral apical pneumothorax and pneumomediastinum along with bilateral apical fibrosis. There were also areas of subpleural consolidation and mosaic attenuation [Figure 1b-d]. Differential diagnoses of chronic hypersensitivity pneumonitis, sarcoidosis, tubercular sequelae-related lung disease, connective tissue-associated lung disease, drug-induced lung injury, and atypical interstitial lung disease were considered. A CT-guided biopsy was performed from the right upper lobe, which showed visceral pleural thickening with collagenous fibrosis, subpleural elastosis, and intra-alveolar collagenous fibrosis [Figure 2]. A diagnosis of idiopathic PPFE was made, and she was initiated on systemic steroids (1 mg/kg prednisolone equivalent). Following the biopsy, there was worsening of the pneumothorax with a persistent air leak, which was managed with pigtail catheter placement. Her arterial oxygen saturation gradually improved to 96% on room air, and she was discharged. On follow-up, as her symptoms worsened while tapering steroids, she was also started on mycophenolate mofetil as a steroid-sparing agent. One year since diagnosis, she continues to perform her daily activities and remains independent of oxygen support.

Differential diagnoses include asbestos-related disease, advanced fibrosing sarcoidosis, connective tissue-associated disease, radiation- and/or drug-induced lung injury, and organizing pneumonia (OP). A relevant exposure history and the absence of sarcoid granulomas and asbestos bodies on histopathological examination can differentiate PPFE from asbestosis and sarcoidosis. Involvement of the lung bases and a more peribronchial distribution, rather than predominantly subpleural and paraseptal contiguous areas of fibrosis, differentiates OP from PPFE.[7]

PPFE is a rare form of interstitial lung disease, and differentiating this entity from other idiopathic interstitial pneumonias is of paramount importance to study the natural history and to guide the treatment regimen. Performing elastin fiber stains routinely in patients with radiological features suggestive of PPFE is recommended to establish the diagnosis.[5] Clinicians should anticipate complications such as both spontaneous and secondary pneumothoraces during the management of this entity. Lung transplantation remains the only option for refractory disease.

Declaration of patient consent

The authors certify that they have obtained all appropriate patient consent forms. In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.

Financial support and sponsorship

Nil.
A causal model for longitudinal randomised trials with time-dependent non-compliance

In the presence of non-compliance, conventional analysis by intention-to-treat provides an unbiased comparison of treatment policies but typically under-estimates treatment efficacy. With all-or-nothing compliance, efficacy may be specified as the complier-average causal effect (CACE), where compliers are those who receive intervention if and only if randomised to it. We extend the CACE approach to model longitudinal data with time-dependent non-compliance, focusing on the situation in which those randomised to control may receive treatment, and allowing treatment effects to vary arbitrarily over time. Defining compliance type to be the time of surgical intervention if randomised to control, so that compliers are patients who would not have received treatment at all if they had been randomised to control, we construct a causal model for the multivariate outcome conditional on compliance type and randomised arm. This model is applied to the trial of alternative regimens for glue ear treatment evaluating surgical interventions in childhood ear disease, where outcomes are measured over five time points, and receipt of surgical intervention in the control arm may occur at any time. We fit the models using Markov chain Monte Carlo methods to obtain estimates of the CACE at successive times after receiving the intervention. In this trial, over half of those randomised to control eventually receive intervention. We find that surgery is more beneficial than control at 6 months, with a small but non-significant beneficial effect at 12 months. © 2015 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

Introduction

Non-compliance, or departure from randomised intervention, is a common occurrence in randomised controlled trials and can take various forms. For example, some patients randomised to treatment may take too much treatment, too little or none at all. Some participants may switch to another trial intervention or to an intervention outside the trial. In some cases, departures occur after consultation with a physician; in others, they may simply be because of non-adherence. Compliance can both influence and be influenced by the outcome, side effects and other prognostic factors.

Intention-to-treat analysis [1,2] has become the standard analysis in the presence of non-compliance, as it avoids selection bias and provides an estimate of the effectiveness of a particular programme of treatment. Per-protocol analysis, in which those who adhere to their randomised allocation are compared between randomised arms, is commonly used in addition to intention-to-treat (ITT) analysis. Occasionally, as-treated analysis, where patients are compared according to the intervention received, is also used. Both analyses attempt to measure efficacy but require strong assumptions about the comparability of compliers and non-compliers within randomised arms [3] and are known to be subject to selection bias [4-6]. Instead, we may use a randomisation-based estimate of efficacy, that is, an estimate of a causal effect based on a comparison of randomised arms [3,7]. The complier average causal effect (CACE) [8,9] is one such measure of causal effect. The main idea here is to divide the population of interest into several categories or compliance types, which specify treatment received under different randomised allocations.
Compliance generally refers to treatment received, that is, whether or not the patient received their randomised intervention. Compliance type, on the other hand, is a classification of treatment received given randomisation, and is therefore independent of randomisation. Assuming two randomised arms, treatment and control, and assuming compliance is all-or-nothing, that is, individuals either receive all of the treatment or none at all, the possible compliance types are as follows:

(1) Never-takers: those who never receive treatment regardless of their randomised arm.
(2) Always-takers: those who always receive treatment regardless of their randomised arm.
(3) Compliers: those who receive treatment if and only if randomised to treatment, that is, comply with their assignment.
(4) Defiers: those who receive treatment if and only if randomised to control, that is, do the opposite of their assignment.

Groups of always-takers and defiers are only possible if the treatment is available to those randomised to control. The CACE measures the causal effect of assignment on outcome among the group of compliers. In the principal stratification framework, the compliance types are referred to as principal strata, and the CACE is a principal effect [10]. Compliance types are not fully observable, because the behaviour under all possible randomisations cannot be observed for any individual; but, due to randomisation, the expected proportion of patients in each compliance type is the same across randomised arms. Two assumptions, known as exclusion restrictions, are usually made to enable estimation: (1) never-takers have the same mean outcome across randomised arms, and (2) always-takers have the same mean outcome across randomised arms. In addition, it is often assumed that there are no defiers [11]. In this paper, we measure this causal effect as a mean difference, so that the CACE is the difference in mean outcome between compliers randomised to treatment and compliers randomised to control.

This CACE may be estimated using instrumental variables (IV) analysis [12,13]. In the context of randomised controlled trials, randomisation is an IV if it affects outcome only through the treatment received. In the simplest setting, the IV estimate of the CACE is the ratio of the ITT effect of randomisation on outcome to the ITT effect of randomisation on treatment received. Under the exclusion restriction and no-defiers assumptions, this ratio represents a causal effect of treatment received on outcome [8].

Alternatively, a full probability modelling approach involves specification of a model for the potential outcomes given randomisation and compliance type, and allows estimation of the CACE using either maximum likelihood or Bayesian methods [14,15]. Maximum likelihood estimation can be performed using the expectation-maximisation algorithm [16], which treats compliance type as unobserved data. The idea is to find the expected compliance type for each individual and to maximise the likelihood to obtain the maximum likelihood estimate of the CACE. The new parameter estimates are then used to recalculate the expectation of the missing compliance types. The Bayesian model can be fitted with data augmentation [17], using Markov chain Monte Carlo methods [18]. The same approach may be taken in cases where the data are longitudinal, allowing a time-dependent treatment effect, provided that compliance remains all-or-nothing [19].
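To make the simplest IV estimator concrete, the following sketch simulates a trial with all-or-nothing contamination of the control arm and computes the ratio of the two ITT effects. The data-generating values (complier fraction, means, effect size) are invented purely for illustration (Python with NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
R = rng.integers(0, 2, n)               # randomised arm: 1 = treatment, 0 = control

# Latent compliance type: 1 = complier, 0 = always-taker (no never-takers/defiers).
C = rng.random(n) < 0.6                 # 60% compliers (illustrative)
treated = (R == 1) | (~C)               # always-takers receive treatment in both arms

# Outcome: always-takers have a worse untreated mean (indirect selection);
# a true treatment effect of -8 applies to everyone who is treated.
Y = np.where(C, 30.0, 36.0) - 8.0 * treated + rng.normal(0, 5, n)

itt_outcome = Y[R == 1].mean() - Y[R == 0].mean()
itt_treated = treated[R == 1].mean() - treated[R == 0].mean()
print(itt_outcome / itt_treated)        # IV estimate of the CACE, close to -8
```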
In trials where the alternative intervention is always available, however, there will be many different compliance patterns, depending on the time at which individuals depart from their allocation. With two or more interventions available at each time, the number of compliance types can quickly become large. An alternative to using all possible compliance types is to use superclasses, or latent compliance class principal strata, to summarise longitudinal compliance patterns: ITT contrasts are then made within these superclasses, but these contrasts do not represent causal effects [20].

Sitlani et al. [21] use a longitudinal structural mixed model (LSMM), an example of a structural nested model [22], to analyse a surgical trial with non-compliance that varies over time. They consider a joint model of outcome and treatment, allowing for inclusion of covariates. The average causal effect of treatment is assumed to be a linear function of time. They compare the performance of likelihood-based methods and various semi-parametric methods and state the assumptions required for valid estimation in each case.

In this paper, we propose a causal model for longitudinal data in which intervention group individuals all receive a one-off intervention at the start of the trial, while control group individuals may receive the intervention at any time during the trial. Unlike Sitlani et al. [21], we consider the CACE interpretation, generalising the model of [15] by creating a compliance type for each longitudinal pattern of compliance, and we make no assumption about how the treatment effect varies over time: in particular, our model accommodates a transient treatment effect. By jointly modelling outcome and compliance over time, we obtain estimates of the CACE at each time point. We apply this model to data from the trial of alternative regimens for glue ear treatment (TARGET), which compared the effect of a surgical intervention and a control programme on hearing loss in children with otitis media with effusion ('glue ear'). The surgical intervention was available at all times over the two-year trial, and a large proportion of those randomised to control eventually chose to receive surgery. In Section 2, we give details of the motivating example along with an ITT analysis. In Section 3, we review existing methods to account for non-compliance, including the standard CACE model. In Section 4, we introduce the CACE model for longitudinal data with time-dependent compliance and various model extensions. In Section 5, we apply these models to the TARGET trial and end with a discussion in Section 6.

Description of the trial

TARGET [23] was a UK multi-centre randomised controlled trial that investigated the effect of surgery for children with glue ear. This is a condition in which the middle ear becomes filled with fluid, leading to hearing loss. The trial compared the insertion of ventilation tubes, with and without adenoidectomy, with non-surgical management. The inclusion criteria specified that the children must be aged 3-7 years, with no previous ear or adenoid surgery and with greater than 20 dB hearing loss in the better ear. Our analysis includes data from 248 participants: 126 randomised to insertion of ventilation tubes (VT) and 122 randomised to control. The third randomised arm (VT plus adenoidectomy) is ignored for present purposes. VT involved aspiration of fluid remaining in the middle ear, followed by insertion of ventilation tubes in the ear drums.
The control arm provided rapid access to antibiotics in the case of resurgent acute infection, although these were rarely used in practice. Improvements in hearing were quantified by hearing level in decibels (dB), with lower measurements indicating better hearing. For this condition, hearing loss above a threshold of 40 dB represents poor hearing, and less than 15 dB is regarded as normal. Other outcomes were also measured, but hearing loss was the main outcome for the power calculation, due to its widespread use and the existence of a precise convention on its measurement.

Description of the data

Measurements of hearing loss were taken at two pre-randomisation visits, then at 3, 6, 12, 18 and 24 months, referred to as post-randomisation visits 1, 2, 3, 4 and 5, respectively (in the main trial paper and elsewhere, the pre-randomisation visits are referred to as visits 1 and 2 and the post-randomisation visits as visits 3 to 7). The hearing loss at baseline has mean 33 dB and ranges from about 21 dB to 46 dB. The amount of missing outcome data ranges between about 13% and 18% at each visit, and attrition rates are similar across randomised arms. A descriptive summary of the trial is provided in Table I.

A graph of the observed mean hearing loss against time by randomised arm, along with 95% confidence intervals, is given in Figure 1, following [23]. It shows that although VT gives a larger reduction in hearing loss than control by visits 1 and 2, it is comparable to control after visit 3. The published ITT analysis found statistically significant beneficial effects of VT over 3 to 6 months but a statistically non-significant negative effect of VT over 12 to 24 months. This negative effect 'occurs because in this period more of the control group have transferred to treatment, and so have functioning VT, than is seen in the surgery groups where VT have mostly fallen out' [23]. The present paper aims to correct for such departures, which we now describe in more detail.

Any child in the VT arm who did not receive their allocated VT, and any child in the control arm who received VT, was considered to have departed from their randomised intervention. A total of 71 children departed from their allocated intervention, mostly from the control arm to receive surgical intervention (66 children, 54%). Departures from randomised treatment occurred over the duration of the trial, mostly at scheduled visits. The numbers of departures in the control arm between consecutive visits are given in Table I. Only five of those randomised to VT (4%) received control instead of surgical intervention. There were two main reasons for departures in the control arm: early surgical intervention (before visit 1) was mostly due to discontentment with the allocated treatment, whereas later surgical intervention was largely due to deterioration of the child's condition. To see this, we plot the hearing loss for those randomised to control (Figure 2). At each visit, we compare boxplots of the hearing loss for those who depart from the control arm before the next visit and those who do not depart before the next visit. Those who depart from the control arm tend to have a higher average hearing loss immediately prior to receiving VT than those who have not yet departed from the control arm. In other words, those with worse hearing in the control arm are more likely to depart and receive intervention.
Intention-to-treat analysis of trial of alternative regimens for glue ear treatment data

Let Y_ijk represent the average hearing loss for individual i = 1, ..., 248, visit j = 1, 2, 3, 4, 5 and allocated treatment k = 1, 2 (control, VT). An ITT model may be written

Y_ijk = μ_j + τ_j x_k + ε_ijk,   (2.1)

where μ_j is the mean control arm outcome at visit j, τ_j is the treatment effect at visit j, x_k is an indicator for treatment being VT (i.e. for k = 2) and the errors ε_ik = (ε_i1k, ..., ε_i5k) form a vector over j, allowing for within-child correlation. The ITT estimates are given in Table II. The ITT analysis provides a useful primary analysis of the data and gives estimates of the relative effectiveness of the treatment programmes. However, we may wish to know the efficacy (i.e. causal effect) of the intervention at each time point. Estimation of the causal effect is complicated by the fact that compliance is time-dependent, and the treatment effect itself is also time-varying. In the next section, we look at existing methods to account for non-compliance in randomised trials.
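Model (2.1) amounts to contrasting arm means at each visit. A compact simulated illustration of the per-visit ITT estimator follows (Python with NumPy; the arm split, visit means, effects and error structure are all invented, with independent errors standing in for the correlated error vector):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 248, 5
arm = rng.integers(0, 2, n)                       # 0 = control, 1 = VT (illustrative)
mu = np.array([25.0, 26.0, 24.0, 22.0, 21.0])     # invented control-arm means by visit
tau = np.array([-11.0, -6.0, 1.0, 1.0, 0.5])      # invented ITT effects by visit

Y = mu + tau * arm[:, None] + rng.normal(0, 8, (n, m))   # model (2.1), iid errors

# Per-visit ITT estimates: difference in arm means, tau_hat(j)
tau_hat = Y[arm == 1].mean(axis=0) - Y[arm == 0].mean(axis=0)
print(tau_hat.round(1))
```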
Existing methods

Sitlani et al. [21] present an example comparing a surgical intervention with a non-operative treatment, with outcomes measured at five time points after enrollment. They propose an LSMM to account for non-compliance (treatment crossovers) between surgical and non-operative treatment. The LSMM consists of a group average (separated into baseline and time-dependent exposure), a subject average (random effects to take into account correlation between measurements on the same individual) and individual observations (error terms assumed to be independent of the random effects). In their model, the treatment effect is a linear function of time since receiving surgery, so the model cannot allow for the transient treatment effect that we see in the TARGET trial. The average causal effect at any given time is the difference at that time between the trajectory corresponding to treatment just after enrollment and the trajectory corresponding to no treatment.

Analysis is easiest if treatment depends on baseline characteristics (including randomisation) but not on post-baseline characteristics (an 'exogenous' treatment process, or 'no selection'). In practice, treatment often depends on post-baseline characteristics (an 'endogenous' treatment process, or 'selection'). Sitlani et al. [21] distinguish two types of selection for receiving treatment: direct and indirect. Direct selection depends on covariates observed after baseline but before the time of interest, for example a previous poor outcome leading to a decision to receive surgery. Indirect selection depends on unobserved confounders that affect both treatment and outcome: for example, patients with a worse general health condition may elect to receive surgery.

Sitlani et al. go on to look at various methods of estimation for the different cases. In the absence of selection, standard tools such as linear mixed effects (LME) models and generalised estimating equations (GEE) may be used. If selection is only direct, the LME and GEE estimators provide consistent estimates of the treatment effect provided that the random effect structure is correctly specified. However, if there is indirect selection, LME and GEE estimators that do not explicitly use a selection model can be biased. Marginal structural models (MSM) enable flexible incorporation of factors that influence treatment timing under marginal modelling assumptions. They require specification of a selection model that includes observed covariates or past treatment predictive of treatment. Inverse probability weighting can then be used to obtain consistent estimates of the causal parameters of interest. In order for MSM to be consistent, there must be no unmeasured confounders (no indirect selection), and the form of the selection models must be correctly specified.

G-estimation and IV estimation both aim to be valid under indirect selection by exploiting the randomisation. G-estimation uses the idea that treatment-free potential outcomes for participants randomised to treatment should on average equal treatment-free potential outcomes for those randomised to control. This relies on three assumptions: the counterfactual outcomes are independent of randomisation, the structural model is correctly specified, and the effect of treatment at a specified time is the same for those who receive it and those who do not ('no current-treatment interaction'). IV estimators are two-stage least squares estimates in which the first equation is a causal model relating outcome and exposure, and the second equation uses an IV, in this case randomisation, to predict exposure. They may be regarded as a special, non-optimal, case of G-estimation. The IV must satisfy the following assumptions in every time period in which a causal effect is to be estimated: (1) random treatment assignment, (2) randomisation affects outcome only via treatment received (exclusion restriction), (3) non-zero average causal effect of randomisation on treatment, and (4) those randomised to control and then treated would also have been treated if randomised to treatment (monotonicity, required for the estimates to be interpretable as average treatment effects).

Using simulations, Sitlani et al. show that when indirect selection exists, LME, GEE, MSM and G-estimation can be biased, while IV methods tend to avoid bias but are inefficient [21]. The bias of their G-estimation appears to arise because the simulation design involved current-treatment interaction. They therefore recommend using the joint likelihood of treatment and outcomes in order to obtain efficient and consistent estimates (provided the dependence of selection on subject-specific latent effects is correctly specified). Estimation may be achieved using Bayesian analysis that explicitly incorporates the selection model. In this paper, we use the joint-likelihood approach via a CACE model to account for indirect selection to treatment. Section 3.2 describes the CACE model that has previously been used for cross-sectional data and longitudinal data where the compliance is binary, and Section 4.1 describes our extension for time-dependent compliance.

Complier average causal effect model for all-or-nothing compliance

We first state the standard CACE model in the simple case of a two-arm trial with all-or-nothing compliance and a binary treatment. Let R_i be the randomised arm (R_i = 1 for treatment and R_i = 0 for control), and Y_i the outcome for subject i, i ∈ 1, ..., n. Let D_i be an indicator of non-receipt of treatment, so that D_i = 0 for treated individuals and D_i = 1 for untreated individuals: this formulation is used as it extends naturally to the time-dependent case in Section 4.1. Let D_i(r) be the potential value of D_i if subject i had been randomised to treatment r.
Let Y_i(r, d) be the potential outcome for subject i if randomised to r and treated/untreated according to d = 0/1; we only model Y_i(0, 1), the untreated outcome in the control arm. Let C_i be the latent compliance type for subject i, defined in terms of the potential treatments received: individual i is an 'always-taker', 'never-taker', 'complier' or 'defier' when (D_i(0), D_i(1)) equals (0, 0), (1, 1), (1, 0) or (0, 1), respectively. We allow indirect selection by allowing C_i to be associated with Y_i(0, 1). Here, we concentrate on the main form of departure from randomised allocation in TARGET: contamination of the control arm, that is, some of those randomised to control receive treatment. If we assume that those randomised to treatment all receive treatment, then D_i(1) = 0 for all i, so C_i = D_i(0): compliance type is treatment received under randomisation to the control arm. Note that in this notation, C_i = 1 indicates a complier.

We do not observe the treatment received under both randomisations for a particular individual, so the compliance type C_i is only partially observed. If individual i is randomised to control (R_i = 0) and receives treatment (D_i = 0), then they must be an always-taker (C_i = 0), while if they do not receive treatment (D_i = 1), then they must be a complier (C_i = 1). However, if individual i is randomised to treatment (R_i = 1), then they must receive treatment (D_i = 0), and so they may be either an always-taker or a complier. Therefore, the compliance type C_i is unobserved for those randomised to treatment (R_i = 1).

We assume the causal model

E[Y_i(r, d) | C_i] = E[Y_i(0, 1) | C_i] + θ(1 − d).   (3.1)

Model (3.1) describes observed outcomes as differing in expectation from untreated outcomes only through the receipt of treatment, where θ is the treatment effect. The absence of a direct effect of R expresses the exclusion restriction, that the always-takers have the same mean outcome in both randomised arms. In fact, the model makes unnecessary and unused assumptions about the causal effect of treatment in always-takers: we return to this in the discussion. Model (3.1) implies for the observed outcome:

E[Y_i | R_i, C_i] = β(C_i) + θ(1 − D_i),   (3.2)

where β(C_i) = E[Y_i(0, 1) | C_i] represents the mean untreated outcome for individuals with compliance type C_i; its dependence on C_i allows for indirect selection. Under this model, the causal effect of randomisation on outcome for compliers (C_i = 1) is E[Y_i | C_i = 1, R_i = 1] − E[Y_i | C_i = 1, R_i = 0] = θ. This is the difference in mean outcome between compliers randomised to treatment and compliers randomised to control. The parameter θ represents the CACE, the average causal effect among the group of compliers.

Estimation of θ is complicated by the fact that compliance type C_i is not observed for those randomised to treatment (R_i = 1). A regression of outcome Y_i on randomisation R_i and compliance type C_i will not suffice, because C_i is not fully observed. Instead, estimation can be achieved using either maximum likelihood or Bayesian methods. In Bayesian analysis, the unobserved compliance types are considered as missing data and estimated in the same way as the other parameters. Probability distributions for θ and the other parameters are obtained and appropriate summary measures reported.
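As an illustration of the maximum likelihood route, the sketch below implements a bare-bones EM algorithm for model (3.2) with compliers and always-takers only, assuming normal outcomes with a common variance. The distributional choices, parameter values and starting points are all mine, for illustration; they are not prescribed by the paper (Python with NumPy/SciPy):

```python
import numpy as np
from scipy.stats import norm

# --- Simulated trial: all-or-nothing contamination of the control arm ---
rng = np.random.default_rng(1)
n = 5_000
R = rng.integers(0, 2, n)                 # 1 = randomised to treatment
C = (rng.random(n) < 0.6).astype(int)     # 1 = complier, 0 = always-taker (latent)
D = np.where((R == 1) | (C == 0), 0, 1)   # D = 0 means treated
Y = np.where(C == 1, 30.0, 36.0) - 8.0 * (D == 0) + rng.normal(0, 5, n)

# Identifiable parameters: pi = P(complier), b1 = untreated complier mean,
# theta = CACE, mu_a = treated always-taker mean, sigma.
pi, b1, theta, mu_a, sigma = 0.5, 30.0, 0.0, 33.0, 4.0

trt = R == 1
for _ in range(100):
    # E-step: posterior probability of being a complier.
    f1 = pi * norm.pdf(Y, b1 + theta, sigma)
    f0 = (1 - pi) * norm.pdf(Y, mu_a, sigma)
    w = np.where(trt, f1 / (f1 + f0), D)  # in the control arm, type is observed

    # M-step: weighted means and pooled variance.
    pi = w.mean()
    b1 = np.sum(w[~trt] * Y[~trt]) / np.sum(w[~trt])       # control-arm compliers
    theta = np.sum(w[trt] * Y[trt]) / np.sum(w[trt]) - b1  # treated compliers minus b1
    mu_a = np.sum((1 - w) * Y) / np.sum(1 - w)             # always-takers, both arms
    mu_comp = np.where(trt, b1 + theta, b1)
    sigma = np.sqrt(np.mean(w * (Y - mu_comp) ** 2 + (1 - w) * (Y - mu_a) ** 2))

print(pi, b1, theta, mu_a)   # expect roughly 0.6, 30, -8, 28
```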
In trials with a repeatedly measured outcome and all-or-nothing compliance, a longitudinal version of model (3.2) may be fitted. If Y_ij is the outcome for individual i at visit j, then

E[Y_ij | C_i, R_i] = β(C_i, j) + θ(j)(1 − D_i),

where D_i is defined as previously, and β(C_i, j) = E[Y_ij(0, 1) | C_i] represents the mean untreated outcome for individuals with compliance type C_i at visit j. Under this model, the causal effect of randomisation on outcome at visit j for compliers (C_i = 1) is E[Y_ij | C_i = 1, R_i = 1] − E[Y_ij | C_i = 1, R_i = 0] = θ(j). Thus, θ(j) is the CACE at visit j. This model has previously been proposed and fitted by Yau and Little [19]. However, if treatment received varies over time, the situation becomes more complicated. In our example, at a given visit j, those randomised to treatment would all have received treatment at the beginning of the trial, but those randomised to control will be a mixture of those who received treatment one visit ago, two visits ago, and so on up to j visits ago, and who will therefore be receiving different treatment effects at the current time. We now extend the CACE model to account for this by modelling the longitudinal data as follows.

Complier average causal effect model for longitudinal compliance

As previously, let R_i be the randomised arm (1 for treatment and 0 for control), and Y_ij the outcome for subject i ∈ 1, ..., n at visit j ∈ 1, ..., m. We redefine D_i as the last visit before surgical treatment (regarding baseline as visit 0): D_i = 0 if treatment was received between visits 0 and 1, D_i = 1 if treatment was received between visits 1 and 2, and so on, and D_i = m if no treatment was received. D_i is therefore the grouped, not actual, time of surgery. Let D_i(r) be the potential value of D_i for subject i if randomised to treatment r. Let Y_ij(r, d) be the potential outcome for subject i at visit j if randomised to r and receiving treatment just after visit d; we only model the treatment-free potential outcome Y_ij(0, m). Again, we allow for indirect selection by allowing C_i to be associated with Y_ij(0, m). We assume those randomised to treatment receive surgery just after baseline, so that D_i(1) = 0 for all i. Thus C_i, the latent compliance type for subject i, is again defined as C_i = D_i(0), the last visit before surgical treatment under randomisation to control. Now C_i is categorical and is a summary of longitudinal compliance, so it does not depend on time. The compliance types are principal strata in the terminology of Frangakis and Rubin [10]. In particular, the principal strata with C_i ⩾ j are the 'compliers at visit j', that is, those individuals who would receive treatment under randomisation to treatment but would receive no treatment up to visit j under randomisation to control.

We consider two causal models to specify the mean outcome, basing the treatment effect on (1) the number of visits since receiving treatment and (2) the number of days since receiving treatment. Both models describe the mean of Y_ij − Y_ij(0, m), which is the difference between an individual's observed outcome and the same individual's counterfactual outcome if they were randomised to control and never treated.

Causal model using visits. The first model assumes equal spacing between visits and assumes that treatment occurs just after a visit:

E[Y_ij − Y_ij(0, m) | C_i, R_i, D_i] = θ(j − D_i) if j > D_i, and 0 otherwise.   (4.1)

Here θ(k), a function of k for k ∈ 1, ..., m, represents the causal effect of treatment on outcome measured k visits after treatment. We assume θ(k) is equal across randomised arms: this identifying assumption is plausible in TARGET. This model implies for the observed data:

E[Y_ij | C_i, R_i] = β(C_i, j) + θ(j − D_i) 1{j > D_i},   (4.2)

where β(C_i, j) = E[Y_ij(0, m) | C_i] represents the mean untreated hearing loss for a patient with compliance type C_i at visit j; its dependence on C_i allows for indirect selection. The model embodies the exclusion restriction, because individuals of a given compliance type have the same mean untreated outcome, β(C_i, j), in both randomised arms. Those randomised to control with D_i ⩾ j have not (yet) departed from their allocation (i.e. have not yet received any treatment), and so their expected outcome equals the mean untreated outcome β(C_i, j). Those randomised to control with D_i < j received treatment j − D_i visits ago, so their expected outcome is β(C_i, j) + θ(j − D_i). Those randomised to treatment all have D_i = 0, so their expected outcome is β(C_i, j) + θ(j). Under this model, the causal effect of randomisation on outcome at visit j for principal stratum c is

E[Y_ij | C_i = c, R_i = 1] − E[Y_ij | C_i = c, R_i = 0] = θ(j) − θ(j − c) 1{c < j}.   (4.3)

Thus θ(j) is the causal effect of randomisation on outcome at visit j for individuals in each of the principal strata with c ⩾ j. We therefore interpret θ(j) as the average causal effect of randomisation on outcome at visit j among compliers at visit j.
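To see the moment structure of model (4.2), and how the θ(j) can be recovered recursively from arm-level means, the following illustration builds expected outcomes from invented values of ω(c), β(c, j) and θ(k) and then inverts them. All numbers are made up for illustration, and the recursion is my own rendering of the identification logic (Python with NumPy):

```python
import numpy as np

m = 5                                       # visits j = 1..m; types c = 0..m
omega = np.array([0.15, 0.10, 0.10, 0.05, 0.05, 0.55])   # P(C = c), invented
rng = np.random.default_rng(2)
beta = 30 + rng.normal(0, 2, size=(m + 1, m))  # beta[c, j-1]: untreated means, invented
theta = np.array([-11.0, -7.0, -2.0, -1.0, 0.5])  # theta[k-1]: effect k visits after surgery

def mean_treat(j):
    # E[Y_j | R = 1]: everyone treated just after baseline, so effect theta(j) applies.
    return omega @ beta[:, j - 1] + theta[j - 1]

def mean_control(j):
    # E[Y_j | R = 0]: type c departs just after visit c, so was treated j - c visits ago.
    mu = beta[:, j - 1].copy()
    for c in range(j):
        mu[c] += theta[j - c - 1]
    return omega @ mu

# Recursive recovery of theta(j) from the two arm means: effects theta(1..j-1) are
# known from earlier steps, and theta(j) also enters the control mean through the
# types that departed immediately (c = 0), hence the 1 - omega[0] divisor.
theta_hat = np.zeros(m)
for j in range(1, m + 1):
    known = sum(omega[c] * theta_hat[j - c - 1] for c in range(1, j))
    theta_hat[j - 1] = (mean_treat(j) - mean_control(j) + known) / (1 - omega[0])

print(np.allclose(theta_hat, theta))        # True
```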
Causal model using days. We extend the aforementioned model to allow for unequal intervals between visits and to allow the causal effect of treatment to depend on the actual number of days since receiving treatment. Let the visits occur at t_1, t_2, …, t_m days after randomisation. The setup is the same as previously, but instead of using D_i to represent actual treatment, we now let T_i represent the time (in days) at which individual i first received treatment, or a value greater than t_m if treatment was never received. Compliance type is defined in terms of the potential treatment time under randomisation to control, T_i(0): C_i = c if t_c ⩽ T_i(0) < t_{c+1} (taking t_0 = 0), and C_i = m if T_i(0) > t_m. Let θ(j) represent the treatment effect among compliers t_j days after receiving treatment, where j = 1, 2, …, m. We assume a piecewise linear treatment effect between these times:

θ(s) = θ(k) + [θ(k + 1) − θ(k)] (s − t_k)/(t_{k+1} − t_k), t_k ⩽ s ⩽ t_{k+1},   (4.4)

for k = 0, 1, 2, …, m − 1, where s denotes days since receiving treatment and we set θ(0) = 0 at t_0 = 0. This implies the observed data model

E[Y_ij | C_i, R_i] = β(C_i, j) + θ(t_j − T_i) 1{T_i < t_j},   (4.5)

where β(C_i, j) = E[Y_ij(0, m) | C_i] represents the mean untreated hearing loss for a patient with compliance type C_i at visit j, and θ(k) is the average effect of randomisation on outcome at t_k days in the principal strata of compliers.

Distributional model. In both models, the outcomes Y_i = (Y_i1, ..., Y_im) are assumed to have a multivariate normal distribution, with mean given by (4.2) or (4.5) and covariance Σ, an unstructured m × m covariance matrix. We also assume a saturated model for C_i, that is, p(C_i = c) = ω(c).

Assumptions

The aforementioned model makes several assumptions. The randomisation assumption implies that randomisation R_i is independent of pre-randomisation variables, including latent compliance type C_i and potential outcome Y_i(0, 1) [14]. The stable unit treatment value assumption implies no interference between individuals, so that the compliance behaviour of one patient is not affected by the randomisation of other patients, and the potential outcome of one patient is not affected by the randomisation and compliance status of other patients. We also assume the causal model given by either (4.1) or (4.4), and that θ(k) is equal across randomised arms.

Identification

We describe how the parameters ω(c), θ(j) and β(c, j) (for c = 0, …, m and j = 1, …, m) are identified in the causal model using visits. A similar argument applies for the causal model using days. (1) Since C_i is observed if R_i = 0, ω(c) may be estimated using ω(c) = P(C_i = c | R_i = 0). (2) In the control arm, subjects with C_i = c are untreated at visits j ⩽ c, so their outcomes identify β(c, j) for j ⩽ c, while their outcomes at later visits identify the sums β(c, j) + θ(j − c). (3) The treatment arm mean at visit j then determines θ(j): combining it with the control arm means identifies θ(1), …, θ(m) recursively, starting at j = 1. The aforementioned procedure for estimating the θ(j) is essentially the same as the instrumental variables procedure. However, the Bayesian procedure makes fuller use of the data.

Bayesian estimation

In Bayesian inference, we assume prior distributions for the parameters to be estimated and simulate the posterior distribution using the Gibbs sampler, treating the unobserved compliance types and missing outcomes as missing data. The CACE can only be indirectly estimated through the observation of mixtures of distributions. If compliance type were known for all units, inference for the causal estimands would involve only data from the associated subpopulation, with no mixture components. The first step of the data augmentation algorithm is to impute the missing compliance types by drawing them from their conditional distribution, a multinomial distribution, given the observed data and the current drawn values of β, θ, ω and Σ. The second step is to draw values of the parameters from the complete-data posterior distribution given the current values of C_i and the observed values of Y_i, D_i and R_i. This involves drawing β and θ from a multivariate normal distribution, and Σ from an inverse Wishart distribution.
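The first (data augmentation) step can be sketched as follows: for a treatment-arm subject, the posterior probability of each compliance type is proportional to ω(c) times the multivariate normal density of the observed outcome vector under that type's mean profile. This is my own minimal rendering of that step, not the paper's WinBUGS code, and every numerical value stands in for a current Gibbs draw (Python with NumPy/SciPy):

```python
import numpy as np
from scipy.stats import multivariate_normal

m = 5
omega = np.array([0.15, 0.10, 0.10, 0.05, 0.05, 0.55])  # current draw of P(C = c)
# Invented current draws: earlier departers have worse untreated hearing.
beta = 30 + (m - np.arange(m + 1))[:, None] * np.ones((1, m))
theta = np.array([-11.0, -7.0, -2.0, -1.0, 0.5])        # current draw of theta(k)
Sigma = 25 * np.eye(m)                                  # current draw of covariance

y = np.array([22.0, 26.0, 31.0, 32.0, 33.0])            # one treatment-arm outcome vector

# Under model (4.2), every treatment-arm subject has D = 0, so a subject of
# type c has mean profile beta(c, j) + theta(j) at visit j.
post = np.empty(m + 1)
for c in range(m + 1):
    post[c] = omega[c] * multivariate_normal.pdf(y, mean=beta[c] + theta, cov=Sigma)
post /= post.sum()                                       # P(C_i = c | Y_i, R_i = 1)

C_draw = np.random.default_rng(3).choice(m + 1, p=post)  # data-augmentation draw of C_i
print(post.round(3), C_draw)
```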
Distributional model. In both models, the outcomes Y_i = (Y_i1, ..., Y_im) are assumed to have a multivariate normal distribution with mean given by (4.2) or (4.5) and covariance matrix Σ, where Σ is an unstructured m × m covariance matrix. We also assume a saturated model for C_i, that is, p(C_i = c) = π(c).

Assumptions

The aforementioned model makes several assumptions. The randomisation assumption implies that randomisation R_i is independent of pre-randomisation variables, including latent compliance type C_i and potential outcome Y_i(0, m) [14]. The stable unit treatment value assumption implies no interference between individuals, so that the compliance behaviour of one patient is not affected by the randomisation of other patients, and the potential outcome of one patient is not affected by the randomisation and compliance status of other patients. We also assume the causal model given by either (4.1) or (4.4), and that β(k) is equal across randomised arms.

Identification

We describe how the parameters π(c), β(j) and μ(c, j) (for c = 0, ..., m and j = 1, ..., m) are identified in the causal model using visits. A similar argument applies for the causal model using days. (1) Since C_i is observed if R_i = 0, π(c) may be estimated using π̂(c) = P̂(C_i = c | R_i = 0). This may be used to estimate μ(c, j). The aforementioned procedure for estimating the β(j) is essentially the same as the instrumental variables procedure. However, the Bayesian procedure makes fuller use of the data.

Bayesian estimation

In Bayesian inference, we assume prior distributions for the parameters to be estimated and simulate the posterior distribution using the Gibbs sampler, treating the unobserved compliance types and missing outcomes as missing data. The CACE can only be estimated indirectly through the observation of mixtures of distributions. If compliance type were known for all units, inference about the causal estimands would involve only data from the associated subpopulation, with no mixture components. The first step of the data augmentation algorithm is to impute the missing compliance types by drawing them from their conditional distribution, a multinomial distribution, given the observed data and the current drawn values of μ, β, π and Σ. The second step is to draw values of the parameters from the complete-data posterior distribution given the current values of C_i and the observed values of Y_i, D_i and R_i. This involves drawing μ and β from a multivariate normal distribution and Σ from an inverse Wishart distribution.
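As a rough illustration of the first data-augmentation step, the sketch below computes the multinomial probabilities for a treatment-arm subject (whose C_i is unobserved) and draws a compliance type. It assumes current parameter draws are available, ignores missing outcomes, and is not the WinBUGS implementation used in the paper:

```python
import numpy as np
from scipy.stats import multivariate_normal

def impute_compliance(y_i, pi, mu, beta, Sigma, m=5):
    """Draw C_i for a treatment-arm subject (D_i = 0): P(C_i = c | y_i)
    is proportional to pi(c) * N(y_i; eta_c, Sigma), where
    eta_c[j] = mu(c, j) + beta(j), since such subjects were treated
    j visits before visit j."""
    weights = np.empty(m + 1)
    for c in range(m + 1):
        eta = np.array([mu[c, j] + beta[j] for j in range(1, m + 1)])
        weights[c] = pi[c] * multivariate_normal.pdf(y_i, mean=eta, cov=Sigma)
    probs = weights / weights.sum()
    return np.random.choice(m + 1, p=probs)   # one multinomial draw of C_i

# Toy current parameter values (hypothetical, for a 2-visit example).
m = 2
mu = {(c, j): 20.0 for c in range(m + 1) for j in range(1, m + 1)}
beta = {1: -10.0, 2: -5.0}
pi = np.full(m + 1, 1.0 / (m + 1))
Sigma = 25.0 * np.eye(m)
print(impute_compliance(np.array([12.0, 16.0]), pi, mu, beta, Sigma, m=m))
```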
Application

We now apply the aforementioned models to data from the TARGET trial. To fit the models to the TARGET data, we make some simplifying assumptions to avoid creating too many compliance types. Non-compliance is assumed to occur in only one direction: those allocated to control can receive VT but not vice versa. Some of those who received VT had the ventilation tube reinserted at a later time, but the reinsertions are also ignored here. In the TARGET trial, treatment may be received at any time, but we ignore its precise timing and define the compliance types as the last visit before which the individual would receive surgical treatment if randomised to control. C = 0 corresponds to those who would receive VT between visits 0 and 1 if they had been randomised to control, C = 1 corresponds to those who would receive VT between visits 1 and 2 if they had been randomised to control, and so on. In this notation, C = 5 corresponds to those who would not have received VT at all had they been randomised to control. The compliance types are unobserved in those randomised to treatment, but the model parameters may be estimated using Bayesian methods.

Here, we assess the plausibility of the assumptions made in Section 4.2. Treatment assignment was random in the TARGET trial, satisfying the randomisation assumption. The stable unit treatment value assumption (SUTVA) implies that the potential outcome for each individual does not depend on the treatment status of other individuals. This holds in TARGET because the hearing loss of one participant should not be affected by the treatment that other trial participants are receiving. The exclusion restriction means that treatment assignment is unrelated to potential outcomes given treatment received. This is plausible in TARGET because the outcome only depends on the time since receiving treatment and compliance type, rather than on randomisation. The monotonicity assumption implies that there are no defiers. In the TARGET example, most of those offered treatment took it up, so the assumption of no never-takers or defiers is plausible. The joint likelihood method assumes that the likelihood is correctly specified, namely normality of outcomes and a correctly specified covariance matrix.

Implementation

The aforementioned models were fitted using Markov chain Monte Carlo in WinBUGS [24] and were run for 100 000 iterations. Diffuse normal distributions with mean zero and a large variance were used as prior distributions for the parameters μ(c, j) and β(j), and an inverse Wishart distribution was used as a prior for Σ:

μ(c, j) ∼ N(0, 10⁴) for j = 1, ..., 5 and c = 0, ..., 5; β(j) ∼ N(0, 10⁴) for j = 1, ..., 5.

The posterior distribution was simulated, treating the unobserved compliance types and missing outcomes as missing data. We assume that the missing outcomes are missing at random [25], though other methods could be applied, as noted in the discussion. The simulations were run on two chains, which were initialised at different values near the maximum likelihood estimates. The first chain was initialised at μ(c, j) = 20, β(j) = −10 for all c and j, and C_i = 1 for all i. The second chain was initialised at μ(c, j) = 10, β(j) = −5 for all c and j, and C_i = 1 for all i. Convergence was assessed using the Gelman-Rubin diagnostic [26] and history plots for each parameter. All of the model parameters, μ(c, j), β(j) and Σ, were mixing well; that is, the two chains were moving freely over the parameter space and appeared to have converged after about 10 000 iterations. The model using days since VT was implemented similarly. If a visit is missing, we assume the number of days since receiving VT equals the scheduled number of days; this value is used to impute the missing Y_ij. An alternative model for the missing days with an appropriate mean structure gave similar results.
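For reference, the Gelman-Rubin potential scale reduction factor used above to assess convergence can be computed for a single parameter as below; this is a textbook version of the statistic, and the simulated chains are placeholders rather than actual WinBUGS output:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for one parameter,
    given an array of shape (n_chains, n_iterations)."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)    # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    var_hat = (n - 1) / n * W + B / n          # pooled variance estimate
    return np.sqrt(var_hat / W)                # close to 1 => converged

rng = np.random.default_rng(1)
two_chains = rng.normal(loc=-10.0, scale=2.0, size=(2, 5000))  # mimics beta(1)
print(gelman_rubin(two_chains))
```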
Table II gives the treatment effect estimates from an ITT analysis (model 2.1) and from the CACE model by visits since receiving VT (model 4.2). Under ITT, VT reduces hearing loss more than control, by 11.6 dB with 95% CI (9.3 to 13.8) after one visit and by 5.6 dB with 95% CI (3.1 to 8.1) after two visits. Under the CACE analysis, VT reduces hearing loss by 11.6 (9.2 to 14.0) dB compared with control after one visit and 7.2 (4.4 to 10.1) dB after two visits. The ITT analysis would be expected to give a conservative estimate of the treatment effect compared with the CACE model at visit 1, because the ITT analysis includes some patients in the control arm who received VT one visit ago. In this case, the ITT and CACE estimates are similar for visit 1. For visits 3, 4 and 5, the sign of the ITT effect is positive, indicating a small but non-significant adverse effect of VT, whereas the CACE estimates are negative, indicating a small but non-significant beneficial effect of VT. This change arises because at visit 3, for example, the control arm contains a mixture of patients, some of whom have received control and others who have received VT one, two or three visits ago.

Table III gives the treatment effect estimates from an ITT analysis and from the CACE model by days since receiving VT (model 4.5). We observe slightly larger treatment effects in both CACE and ITT analyses when taking into account the actual days since receiving VT, rather than assuming equal spacing between visits. Qualitatively, both analyses agree that VT is significantly better for the first 6 months after receiving VT. After 6 months, no significant difference between VT and control is observed. By this time, a substantial proportion of those randomised to control have received VT, so the ITT analysis obscures a possible benefit of VT, whereas the CACE analysis indicates a non-significant beneficial effect. ITT (intention-to-treat) is the average effect of randomisation on observed outcome after t days. CACE (complier average causal effect) is the average effect of randomisation on outcome at t_k days in the principal strata of compliers at t_k days (β(k) from model 4.5). VT, ventilation tubes.

Results

A graph of the estimates of μ(c, j) from model 4.2, representing the untreated outcome over time for each compliance type, is given in Figure 3. Compliance type 0 has a relatively low hearing loss that gradually decreases over time. Compliance type 1 begins with a high hearing loss but decreases rapidly over time. Compliance type 2 has a relatively high hearing loss at the first two visits, which then decreases. Compliance type 3 has moderate hearing loss at visits 1 and 2, then a very high hearing loss at visit 3 that decreases over the next two visits. Compliance type 4 starts with moderate hearing loss, increases to a high value at visit 3, then decreases. Finally, compliance type 5 begins with a low hearing loss that gradually decreases over time. The trajectories for those who receive VT immediately after baseline (C = 0) and those who never receive VT under randomisation to control (C = 5) are quite similar. This is consistent with early departures being due to discontentment with the allocation rather than poor outcomes.

Model checking

Plots of standardised residuals show that most lie within a reasonable range of about (−2.5, 2.5). There are a few extreme residuals, and these usually correspond to very high (> 40 dB) outcomes. Exclusion of individuals with extreme residuals has little effect on the results. Plots of residuals versus fitted values show no distinguishable pattern. Comparison of the fitted values μ̂(C, j) + β̂(j − D) from model 4.2 (Figure 4) with the crude mean outcome in the control arm (Figure 5) suggests that the model makes fairly plausible assumptions. Alternative options for Ω in the Wishart prior distribution were used, such as Ω = diag(0.001) and Ω = diag(1000), as well as non-diagonal matrices. These made little difference to the estimates of the CACEs.

Simulation study

We performed a small simulation study to evaluate the performance of the proposed method and to compare it with the IV method in a data-generating model loosely based on the TARGET results and the causal model using visits.

Data-generating model. We generated 1000 data sets of size 300 with m = 5 time points. We assumed equal randomisation (p(R_i = 1) = 0.5). The latent compliance types were distributed with probabilities loosely based on the TARGET results.

Analyses. Model (4.2) was used. For comparison, the IV analysis was done using dummy variables for treatment 1, ..., 5 visits ago as endogenous variables, the interactions of randomised group and time as instruments, and dummy variables for visits as covariates.

Results. The results are summarised in Figure 6 (results from the simulation study of Section 6; Monte Carlo error is expressed through 95% confidence intervals). Bias was small (magnitude ⩽ 0.1, compared with treatment effects ranging from 10 to 2) and somewhat worse for the Bayesian method. However, the Bayesian method had a standard error 15-20% smaller than the IV method and hence a smaller mean squared error. Finally, the Bayesian method achieved coverage close to the nominal 95% level, while the IV method somewhat under-covered.
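A minimal sketch of a data-generating mechanism of this kind is given below; the compliance probabilities, untreated mean and treatment effects are hypothetical stand-ins (the paper's actual simulation values are not reproduced here), and μ is taken constant over compliance types and visits for brevity:

```python
import numpy as np

rng = np.random.default_rng(2024)

def simulate_trial(n=300, m=5, mu0=20.0, beta=(-10, -8, -6, -4, -2),
                   pi=(0.2, 0.1, 0.1, 0.1, 0.1, 0.4), sd=5.0):
    """Generate one trial under the causal model using visits.
    pi, mu0, beta and sd are hypothetical values, not the paper's."""
    beta = np.concatenate(([0.0], np.asarray(beta, float)))  # beta[0] = 0
    R = rng.integers(0, 2, size=n)                 # randomised arm
    C = rng.choice(m + 1, size=n, p=pi)            # latent compliance type
    D = np.where(R == 1, 0, C)                     # last untreated visit
    Y = np.empty((n, m))
    for j in range(1, m + 1):
        effect = np.where(D < j, beta[np.clip(j - D, 0, m)], 0.0)
        Y[:, j - 1] = mu0 + effect + rng.normal(0.0, sd, size=n)
    return R, C, D, Y

R, C, D, Y = simulate_trial()
# Crude ITT contrast at visit 1 (mixes treated and untreated controls):
print(Y[R == 1, 0].mean() - Y[R == 0, 0].mean())
```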
Conclusions

In randomised clinical trials in which a substantial proportion of patients departs from their randomised treatment, standard ITT analysis compares treatment policies but may obscure the treatment effect. Per-protocol analyses that compare those who adhere to their randomised allocation between randomised arms are commonly used, but these are subject to selection bias. Instead, randomisation-based estimates of efficacy may be employed. Given reasonable assumptions, the CACE model can provide estimates of the average causal effect of treatment among the group of compliers. CACE models have previously been applied in simple situations where treatment is all-or-nothing and compliance is binary. We extended the CACE model to incorporate compliance that changes over time by introducing categorical compliance types based on the time of receiving treatment. We specified a model for the conditional distribution of outcome given randomisation and compliance type and fitted this to obtain estimates of the causal effect of treatment at each visit. Full probability modelling enables model checking by comparing fitted values with the observed values, by checking for extreme residuals and by examining the plot of residuals versus fitted values.

We applied this model to data from the TARGET trial, in which outcomes are measured over five time points and departures from the control arm to surgical intervention could occur at any time. In this example, the CACE analyses generally gave larger estimates of treatment effect than the ITT analyses. The ITT estimates are conservative in this case because, at any given time, the control arm contains a mixture of people, some of whom are receiving the effect of surgery. Adjusting for the exact timing of the visits had little effect on the results. The CACE model can provide a useful secondary analysis in addition to the primary ITT analysis. However, the average causal effect among compliers may not be representative of the causal effect in the general population. In addition, the longitudinal CACE model is somewhat complex, both conceptually and in terms of computation. Computation can be performed using either maximum likelihood or Bayesian methods, but software would be needed to make CACE estimation more accessible.

Discussion

In this paper, we focused mainly on contamination of the control arm, that is, those randomised to control receiving VT. The model could be extended to include non-receipt of VT in the VT arm by creating just one more compliance type, namely those who would not receive VT if randomised to it, and making a no-defiers (monotonicity) assumption that these individuals would also not have received VT if randomised to control. We ignored baseline covariates such as trial centre and baseline hearing loss in the CACE models. Inclusion of covariates both in the outcome model and as predictors in the compliance model should improve efficiency but could make estimation more complex and is a topic for further work. Trials that are large enough to consider interactions between baseline characteristics and treatment effect allow identification of the patients who benefit most. In TARGET, there was evidence that those who had worse hearing benefited more from treatment, and such people were more likely to receive non-randomised surgery. This is one situation in which applying CACE analysis can be useful [3]. The model could be extended to incorporate a k-level treatment by including more compliance types, one for each level of each treatment. The mean outcome model may need to be changed to give a different treatment effect for each level of treatment. It would be possible to incorporate continuous compliance by modelling μ(c, j), for example using a linear model. However, this may be too sensitive to the modelling assumptions. We used the identifying assumption that the causal effect of VT was the same in both randomised arms.
This might be false if randomisation to control modified the value of a subsequent VT. However, in TARGET, control involved watchful waiting, and very few cases received any active treatment, so the identifying assumption seems plausible. An alternative could be to allow the causal effect of VT on the outcome k visits later to be β(k) in the VT arm and λβ(k) in the control arm, and to allow λ to vary over a range of values below 1. A limitation of the models is that they assume missing data are missing at random. Much work has been carried out to adjust for both non-compliance and missing data, for example [27][28][29][30], and these methods could be incorporated into the models presented. An alternative to full probability modelling is to estimate the CACE using IV analysis [31]. This is easier to implement than the full probability modelling method described here and avoids distributional assumptions, but it does not perform as well in terms of operating characteristics such as bias and the width of 90% intervals [15,21].
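For comparison, the IV approach can be illustrated with a hand-rolled two-stage least squares on toy data; this is a generic single-endogenous-variable sketch under hypothetical effect sizes, not the dummy-variable specification used in the simulation study:

```python
import numpy as np

def two_stage_ls(y, X_endog, X_exog, Z):
    """Minimal 2SLS: first stage regresses the endogenous column(s) on
    [Z, X_exog]; second stage regresses y on [fitted endog, X_exog]."""
    n = len(y)
    ones = np.ones((n, 1))
    Z_full = np.hstack([ones, Z, X_exog])
    X_hat = Z_full @ np.linalg.lstsq(Z_full, X_endog, rcond=None)[0]
    X_full = np.hstack([ones, X_hat, X_exog])
    coef = np.linalg.lstsq(X_full, y, rcond=None)[0]
    return coef[1:1 + X_endog.shape[1]]   # coefficients on endogenous terms

rng = np.random.default_rng(7)
n = 1000
z = rng.integers(0, 2, size=(n, 1)).astype(float)   # randomisation = instrument
u = rng.normal(size=(n, 1))                         # unobserved confounder
d = ((0.8 * z + 0.5 * u + rng.normal(size=(n, 1))) > 0.5).astype(float)
x = rng.normal(size=(n, 1))                         # exogenous covariate
y = (-8.0 * d + 1.0 * x + 2.0 * u + rng.normal(size=(n, 1))).ravel()
print(two_stage_ls(y, d, x, z))                     # should be close to -8
```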
Pollen Analysis of Natural Honeys from the Central Region of Shanxi, North China

Based on qualitative and quantitative melissopalynological analyses, 19 Chinese honeys were classified by botanical origin to determine their floral sources. The honey samples were collected during 2010-2011 from the central region of Shanxi Province, North China. A diverse spectrum of 61 pollen types from 37 families was identified. Fourteen samples were classified as unifloral, whereas the remaining samples were multifloral. Bee-favoured families (occurring in more than 50% of the samples) included Caprifoliaceae (found in 10 samples), Lamiaceae (10), Brassicaceae (12), Rosaceae (12), Moraceae (13), Rhamnaceae (15), Asteraceae (17), and Fabaceae (19). In the unifloral honeys, the predominant pollen types were Ziziphus jujuba (in 5 samples), Robinia pseudoacacia (3), Vitex negundo var. heterophylla (2), Sophora japonica (1), Ailanthus altissima (1), Asteraceae type (1), and Fabaceae type (1). The absolute pollen count (i.e., the number of pollen grains per 10 g honey sample) suggested that 13 samples belonged to Group I (<20,000 pollen grains), 4 to Group II (20,000-100,000), and 2 to Group III (100,000-500,000). The dominance of unifloral honeys without toxic pollen grains and the low value of the HDE/P ratio (i.e., honey dew elements/pollen grains from nectariferous plants) indicated that the honey samples are of good quality and suitable for human consumption.

Introduction

Honey is naturally produced by honeybees from the nectar of plants. It is widely consumed as a health food product all over the world, but adulteration and the false labelling of honey are common problems in many countries [1]. In this context, melissopalynology plays an important role in ascertaining the botanical and geographical origins of honey by studying the pollen contained in the honey [1][2][3][4][5][6]. Shanxi Province is regarded as a rich source of honey in North China. The province's great floristic diversity includes more than 80 families, 200 genera, and 600 species of nectar plants [7]. Beekeeping in Shanxi has high social and economic value. Beekeeping activities in the province can provide approximately 3000-6000 tons of commercial honey, 20-40 tons of royal jelly, and 75-150 tons of bee pollen each year [8]. These products are gaining increasing importance as they improve the socioeconomic situation of the people of Shanxi. Although several melissopalynological studies have been conducted in China [9][10][11][12][13][14][15][16][17], most of these studies were based on qualitative analyses. Qualitative and quantitative melissopalynological analyses of Shanxi honeys are not yet available. Such analyses have not been conducted because of a lack of research on the botanical aspects of the honeys. The beekeepers do not know all the important nectar plants contributing to honey production. For this reason, the honey is sometimes mislabelled. Based on pollen analysis, this paper aims to determine the botanical characterisation of honeys from the central region of Shanxi for the first time and to provide a useful guide to beekeeping in this region.

Ethics Statement

No specific permits were required for the described field studies. The sampling sites are not protected in any way, and the field studies did not involve endangered or protected species.

Honey Sampling and Pollen Analysis

Nineteen natural honey samples (Table 1), produced primarily by Apis mellifera and Apis cerana cerana, were collected from nine counties (Fig.
1) in the central region of Shanxi from April through September, 2010-2011. For pollen analysis, the method recommended by the International Commission for Bee Botany [2] was adopted. Ten grams of each honey was dissolved in 20 ml of warm water (40 °C). The solution was centrifuged for 10 min at 2500 r/min, the supernatant was decanted, and the sediments were collected into a conical tube and treated with an acetolysis mixture (acetic anhydride : conc. sulphuric acid = 9:1 V/V) [18] for approximately 30 min at room temperature. After treatment with the acetolysis mixture, the sediments were rinsed with distilled water, centrifuged for 5 min at 2500 r/min, and preserved for study. To analyse the pollen content of the honey samples, two slides were prepared from each sample and photographed under a Leica DM2500 light microscope. Pollen types were identified by comparison with reference slides of pollen collected directly from the plants in the study area. In addition, selected palynological literature and monographs [19][20][21] were used. Photomicrographs of different types of pollen grains recovered from the honey samples are shown in Figs. 2 and 3.

For quantification of the pollen types, at least 500 pollen grains were counted from each sample. The percentage frequency of the pollen taxa in all the samples was calculated. The types of pollen were allocated to one of four frequency classes: (i) predominant pollen types (>45% of the total pollen grains counted); (ii) secondary pollen types (16%-45%); (iii) important minor pollen types (3%-15%); and (iv) minor pollen types (<3%). The honey sample was characterised as unifloral if it contained a predominant pollen type. Otherwise, it was considered multifloral.

The absolute pollen counts (APC) of the honey samples (i.e., the number of pollen grains per 10 g honey) were calculated with a haemocytometer [22]. Pollen grains were counted under a microscope at 100× magnification over a haemocytometer (counting chamber). The chamber is 0.1 mm high and has 25 medium squares of 0.04 mm² each, which are subdivided into 16 small squares of 0.0025 mm² each. This corresponds to a volume of 0.1 μl in the chamber, 0.004 μl in each medium square and 0.00025 μl in each small one. For each sample, we counted the pollen grains of five medium squares, at the centre and at the left and right corners at the top and bottom of the chamber; this was repeated to give 100 individual observations. Based on the average of these 100 observations, the absolute pollen count in the 100 μl of suspension containing the pollen sediment from 10 g of honey was calculated. The samples were classified into five groups as proposed by Louveaux et al. [2]: Group I (<20,000 pollen grains); Group II (20,000-100,000); Group III (100,000-500,000); Group IV (500,000-1,000,000); and Group V (>1,000,000).

To determine the frequency of honey dew elements (HDE), an HDE/P ratio was calculated for each honey by counting the number of honey dew elements (HDE) and dividing by the total count of pollen grains from nectariferous plants (P), following Louveaux et al. [2]. In addition, an ecological parameter, the Shannon-Weaver diversity index, was used to calculate the pollen diversity in each sample [5,23] according to the following equation:

H′ = −Σ pᵢ ln pᵢ,

where pᵢ is the proportion of the i-th pollen type among the pollen grains counted in the sample.
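The counting and classification arithmetic described above can be sketched as follows; the 100 μl suspension volume follows the text's arithmetic, and the example counts are assumptions for illustration:

```python
import math

def absolute_pollen_count(mean_per_square, square_volume_ul=0.004,
                          suspension_ul=100.0):
    """APC per 10 g honey: scale the mean haemocytometer count per medium
    square (0.004 ul) up to the full suspension volume (assumed 100 ul)."""
    return mean_per_square / square_volume_ul * suspension_ul

def apc_group(apc):
    """Louveaux et al. [2] groups by pollen grains per 10 g honey."""
    bounds = [(20_000, "I"), (100_000, "II"), (500_000, "III"),
              (1_000_000, "IV")]
    for upper, label in bounds:
        if apc < upper:
            return label
    return "V"

def shannon_weaver(counts):
    """H' = -sum(p_i * ln p_i) over pollen-type proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

print(apc_group(absolute_pollen_count(3.2)))        # 80,000 grains -> "II"
print(round(shannon_weaver([450, 30, 15, 5]), 2))   # low-diversity profile
```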
Honey dew elements were considered absent from the samples due to the low HDE/P values found (0-0.036) (Table 5). The Shannon-Weaver diversity index values of the multifloral honeys ranged from 1.79 to 2.21, whereas the unifloral honeys showed lower values, from 0.18 to 1.62 (Table 5).

Discussion and Conclusion

Pollen is very important for honeybee nutrition [24,25]. Honeybees collect pollen grains from entomophilous and anemophilous plants to obtain protein for their survival and reproduction [17,26]. The bees frequently collect a wide variety of pollen types, but they generally concentrate on a few species [27,28]. The present study provides new insights into the pollen composition of honey samples from the central region of Shanxi, North China. A total of 61 pollen types from 19 honeys produced by Apis mellifera and Apis cerana cerana were identified, including 56 entomophilous pollen types (e.g., crop plants: Glycine max, Vicia sp., Fagopyrum esculentum; fruit trees: Prunus sp., Pyrus sp.) and 5 anemophilous pollen types (e.g., Chenopodiaceae, Cyperaceae, Poaceae). The Shannon-Weaver diversity index shows a high diversity of pollen types in 5 multifloral honeys, with a range of 1.79 (sample H9) to 2.21 (sample H4). High values in samples H10, H11, and H12 indicate rich nectar and pollen sources in Fengyang County in April and from July to September. Compared with the multifloral honeys, the 14 unifloral honeys have lower diversity index values, ranging from 0.18 (sample H1) to 1.62.

In this study, seven predominant pollen types (i.e., Ailanthus altissima, Asteraceae type, Fabaceae type, Robinia pseudoacacia, Sophora japonica, Vitex negundo var. heterophylla, Ziziphus jujuba) were recorded in fourteen unifloral honeys. The local beekeepers usually know that the latter four types are major nectar plants in this region, but they may not know that the former three types can also be used as principal nectar sources by honeybees. Asteraceae and Fabaceae are two large families, comprising approximately 90 and 100 species, respectively, in Shanxi [29]. In addition to the major nectar plants, the plants frequently used by honeybees for foraging included Catalpa ovata, Exochorda sp., Paulownia sp., Salix sp., Scrophulariaceae type, and Rosaceae type. The analysis of the pollen content of the honey samples indicates that the local flora may be used as a source of good-quality honey. The majority (73.68%) of the 19 honey samples were considered unifloral honeys because they contained a predominant pollen type (frequency >45%). The dominance of unifloral honeys without any toxic pollen grains and with scarce fungal elements suggests that most of the honey samples are of good quality and suitable for human consumption.
Social Advertising Effectiveness in Driving Action: A Study of Positive, Negative and Coactive Appeals on Social Media

Background: Social media offers a cost-effective and wide-reaching advertising platform for marketers. Objectively testing the effectiveness of social media advertising remains difficult due to a lack of guiding frameworks and applicable behavioral measures. This study examines advertising appeals' effectiveness in driving engagement and actions on and beyond social media platforms. Method: In an experiment, positive, negative and coactive ads were shared on social media and promoted for a week. The three ads were controlled in an A/B testing experiment to ensure applicable comparison. Measures used included impressions, likes, shares and clicks, following the multi-actor social media engagement framework. Data were extracted using Facebook ads manager and website data. Significance was tested through a series of chi-square tests. Results: The promoted ads reached over 21,000 users. A significant effect was found for appeal type on engagement and behavioral actions. The findings support the use of negative advertising appeals over positive and coactive appeals. Conclusion: Practically, in the charity and environment context, advertisers aiming to drive engagement on social media as well as behavioral actions beyond social media should consider negative advertising appeals. Theoretically, this study demonstrates the value of using the multi-actor social media engagement framework to test advertising appeal effectiveness. Further, this study proposes an extension to evaluate behavioral outcomes.

Introduction

The popularity of social media is growing, with advertisers utilizing different platforms to drive online and offline customer engagement [1,2]. As the third largest advertising channel, social media accounts for 13% of global advertising spending [3]. In 2019, Australian brands spent AUD 2.4 billion on social media advertisements, making social media the second highest expenditure category in digital advertising spending after paid search [4]. Since the COVID-19 pandemic erupted across the world, social media has played a crucial role in disseminating information. Research found social media to be the most rapid digital tool in spreading information regarding the virus, which helped reach and educate specific audiences, such as front-line workers [5]. Despite drawbacks arising from the spread of misinformation, social media platforms remain a major communication channel for scientists, organizations and governments to reach different audience groups and create highly persuasive outcomes [6]. Similarly, advertisers invest in social media platforms, seeking attention, engagement and action both online and offline, making a clear understanding of social media advertising's effectiveness of paramount importance. It is established that emotional appeal messages perform better on social media than rational appeals. Evidence suggests emotional appeals are more likely to achieve engagement and virality [1,7]. However, less is known about the relative effectiveness of positive, negative and coactive emotional appeals, with the best approach to take remaining unresolved.

Literature Review

Advertisements are designed with the ultimate goal of changing behavior [13]. Commercially, advertisers aim to increase sales by encouraging customers to purchase certain products or choose specific brands [8,14,15].
Beyond commercial application, the power of advertising is harnessed to positively change people's lives by encouraging social and health behaviors such as quitting smoking [14], healthy eating [15], disease prevention [16], safe driving [17] and charity donations [18]. Such efforts are known as social advertising, the use of promotion and communication techniques to change social behavior [19]. To motivate the adoption of positive social behavior, social advertising raises awareness, induces action and reinforces the maintenance of prosocial behaviors [19,20]. One area where social advertisers may deliver change is the impact of low-quality donations on charities and the environment. One of the most challenging tasks for charities is filtering the donations received based on their quality before moving donated goods on for redistribution or sale, which generates revenue to support essential charity service provision [21]. Australian charity organizations spend millions of dollars each year on donation sorting processes, ensuring that unusable items donated to charities are discarded and others are remanufactured, while the remaining goods are distributed or sold. In 2018, over USD 9 million was spent sending unusable donations to landfill [22]. Processing of waste by charities diverts funding away from the delivery of essential community services [21]. As much as 30% of goods donated are estimated to be unusable, suggesting that there is substantial room for improvement. Despite the magnitude of the problem, limited research focused on improving the quality of donated items is available [23].

Emotional Advertising Appeals

As advertisers increasingly seek greater communication effectiveness, the choice of advertising appeal requires more consideration and careful assessment [24]. Viewers may utilize cognitive or affective evaluation systems when processing an advertisement message [25]. Rational appeals rely on cognitive evaluations through the persuasive power of arguments or reason to change audience beliefs, attitudes and actions. Such messages are evident in the dissemination of scientific information, such as that seen during the COVID-19 pandemic. These include facts, infographics and arguments that appeal to a person's rational processes [5]. Conversely, emotional appeals utilize the affective evaluation system by evoking emotions to drive action. Recent meta-analytic studies identified that consumers respond more favorably to emotional appeals than to rational appeals [9,26]. Emotions are defined in many different ways in the literature. In an effort to summarize all definitions in one, Kleinginna and Kleinginna [27] provide a unified definition that is now relied on by psychology, marketing and other disciplines. They define emotions as "a complex set of interactions among subjective and objective factors, mediated by neural/hormonal systems, which can (a) give rise to affective feelings of arousal, pleasure/displeasure; (b) generate cognitive processes such as emotionally relevant perceptual effects, appraisals, labeling processes; (c) activate widespread physiological adjustments to the arousing conditions; and (d) lead to behavior that is often, but not always, expressive, goal-directed, and adaptive (p. 371)". For decades, scholars have been studying the effect of emotion on behavior across multiple disciplines and contexts. In the past 10 years, considerable advances have been made in research on emotions.
Specifically, technology advancements along with new research methodologies allowed scholars to measure and track emotion effects more accurately than before. For example, autonomic measures, including facial expression, heart rate and skin conductance, enabled researchers and practitioners to test emotional responses to certain stimuli [28]. This, along with digital and social media growth over recent decades, created an opportunity for advertisers to create, manipulate and test different advertising strategies to achieve the highest persuasion effects. Researchers from psychology, social sciences, health and marketing agree on the crucial role emotions play in shaping human behavior [29][30][31]. Emotions have been part of persuasion models as early as the AIDA model, with desire indicating an emotional reaction following the cognition levels of attention and interest [32]. More recently, emotions were found to dominate cognition in the persuasion process, occurring before any cognitive assessment of the message [33]. Hence, emotions are crucial in advertising's ability to influence behavior [30,31]. For years, classifying emotions has been a research interest with multiple schools of thought. There are two main ways of classifying emotions, categorically (i.e., discrete emotions) or dimensionally. The discrete emotions approach posits that emotions are specific and defined. Different scholars present different sets of discrete emotions. For example, Ekman [34] presented six basic emotions, namely anger, disgust, fear, joy, sadness and surprise. Plutchik [35] argued there are eight basic emotions (fear, anger, sorrow, joy, disgust, acceptance, anticipation and surprise), with mixed emotions producing a secondary emotion (e.g., anger and disgust produce hostility). Models based on the discrete emotions approach appeared, mapping emotions on different dimensions of valence and arousal [36]. On the other hand, the dimensional theory of emotions classifies emotions based on three main dimensions: (a) valence, (b) arousal and (c) dominance. Hence, emotions can be positive or negative, highly aroused or calm, and dominating or under control. Application of the dimensional theory is seen in the testing of different emotional appeals in advertising with positive emotional appeals and negative emotional appeals, and more recently a mixture of both valenced appeals (i.e., coactive appeals) [1,7,37]. The dimensional theory allows for valid comparison of different advertising strategies and appeals and has been supported by multiple empirical results [38][39][40]. Hence, the current study employs the dimensional theory of emotions in classifying emotional appeals. Based on the dimensional theory of emotions, people can perceive any emotional appeal stimulus as pleasant, unpleasant or a mixture of both (i.e., a coactive state) [41]. Hence, emotional appeals are categorized as positive, negative and coactive based on the valence of the employed emotions. Emotional appeals research employing the dimensional theory of emotions focuses on the effect of differently valenced emotions on cognitive and behavioral actions [42]. While there is a strong connection between emotional appeals and behavior change [43], inconsistent results are evident in the literature when comparing positive, negative and coactive appeals (e.g., [7,8,17,44,45]).
Positive, Negative and Coactive Appeals

While positive emotional appeals were found to increase an individual's tendency to take action and yield higher message liking [9,46], they are explored and utilized to a lesser extent than negative emotional appeals [47][48][49]. When positive appeals are studied, humor appeals remain the focus, with less attention directed to the utilization of other positive emotional appeals that may deliver behavioral change [9]. A review of the literature indicates that positive appeals hold a persuasive advantage in both social and commercial behavior. Wang et al. [50] found positive admiration appeals to increase purchase intentions more than negative appeals. Similarly, Vaala et al. [51] support the use of positive empowering appeals when targeting health-related behavior. Positive appeals are especially effective when targeting males [52]; however, studies of positive appeal effectiveness remain limited in number and in execution [47,48]. Some limitations of positive appeals are discussed in the literature. Segev and Fernandes [53] found positive appeals to be effective only when the behavior requires low effort. Hence, when environmental or climate change advertisements encourage green consumption, recycling or other complex behaviors, positive appeals might be less effective. Similarly, when positive appeals are used to evoke hope in audiences, hope for change is reported in the viewers instead of action taken towards change, indicating an emotion-focused coping function [54]. Hence, social advertisers remain reluctant to apply positive approaches. Fewer examples of positive appeals being applied to address social issues such as alcohol consumption [55], obesity [56], the environment [52] and safe driving behaviors [17] are evident. Negative appeals, on the other hand, dominate research and practice, with over 70% of social advertisements employing negative appeals [48]. As the main driver of psychic discomfort, negative appeals are utilized to create emotional imbalance to stimulate behavior change [57]. According to this view, a negatively framed message aiming to drive donations to charities is designed to make the individual feel uncomfortable, as they are blamed for the poverty of certain groups (e.g., homeless children). To eliminate such feelings, a viewer is then more likely to contribute to the solution by donating to the charity [58]. While the use of negative appeals has been found to be effective in multiple contexts, such as healthy eating [56], moderate alcohol consumption [55] and safe driving [17], certain limitations apply. Negative appeals can result in the development of coping mechanisms such as ignoring the message (i.e., flight) or rejecting the message (i.e., fight), reducing message effectiveness [30,31,59]. Furthermore, negative appeals dominate social advertising efforts [47,48], resulting in desensitization to negative emotions and potentially causing such appeals to become less effective [57]. Finally, negative appeals can serve to reinforce stereotypes, further stigmatizing some people, which can lead to reactance in some areas of the community [16]. Appeals utilizing both positive and negative emotions are labeled inconsistently in the literature. For example, Hong and Lee [60] and Taute et al. [61] employ the term mixed emotional appeals, while others utilize the term coactive appeal [7,8,[62][63][64].
The current study employs the term coactive appeals, as coactivity is used to explain the mixed state of emotions and is applied more heavily in the marketing communication literature [8,62,64]. When comparing single appeals with coactive appeals that feature an emotional shift (e.g., from positive to negative), coactive appeals were found to be more effective [65,66]. Hence, coactive appeals have recently gained research attention, with advertising studies including such appeals in their evaluations [7]. Coactive emotional appeals seek to induce both positive and negative emotions simultaneously or as a flow from one appeal to the other [42,49]. For example, a coactive message can take the viewer on an emotional journey either from negative to positive or from positive to negative. The use of a negative-to-positive emotional flow, or a threat-relief emotional message, is hypothesized to result in a stronger persuasion outcome [49]. A recent study by Gebreselassie Andinet and Bougie [67] found that the flow from negative to positive appeals produced more desirable results than negative or positive appeals alone. Similarly, Rossiter and Thornton [68] found fear-relief appeals to reduce young adults' speed choice when driving. This is due to positive appeals' ability to reduce the different defensive reactions (e.g., fight, flight) that negative appeals generate. When a positive appeal is added to a negative appeal, post-exposure discomfort is reduced, resulting in the combination of appeals (i.e., coactive) being more effective in changing behavior [66,69,70]. Nonetheless, negative appeals remain highly featured in social advertising messages. This is attributed to the rich action tendency potential negative appeals hold [71], along with their ability to activate the brain more than other emotions [72]. Negative appeals have the ability to drive action without being liked first. This explains their dominance in social advertising messages and their heavy focus in the literature. No known study has empirically tested and contrasted the effectiveness of a coactive appeal with positive and negative appeals directly on social media platforms. Previous studies compared the three appeals using self-report data collection measures, an approach that is limited by social desirability effects [73]. This study eliminates such limitations by utilizing social media advertising tools and measures where data are collected based on the viewer's actual reactions on the platform (e.g., likes and clicks) rather than intentions to perform such reactions [1,74].

Theory

This study applies and builds on the multi-actor engagement framework proposed by Shawky, Kubacki, Dietrich and Weaven [10]. The framework provides an "integrated, dynamic and measurable framework for managing customer engagement on social media", enabling marketers to understand the different levels of engagement and measure the success of their campaigns [10]. As social media grows beyond simple dyadic exchanges between customers and companies, the multi-actor engagement framework operationalizes the different levels of engagement in a multi-actor ecosystem where customers, fans, organizations and stakeholders all contribute to levels of engagement with content. The different levels of engagement set by Shawky, Kubacki, Dietrich and Weaven [10] include connection, interaction, loyalty and advocacy. Connection is defined as one-way communication where content is presented to customers without any action taken by the customer.
When the stimulus attracts a customer's attention, connection is achieved. In other words, Schivinski et al. [75] label this level as the consumption stage where social media users consume content but do not necessarily interact with it. At this level, customers passively consume the online content without taking any action yet [75,76]. Based on the Shawky, Kubacki, Dietrich and Weaven [10] framework, this level is measured by reach and impressions. Reach is defined as the number of unique users who viewed an advertisement, while impressions are recorded every time an ad is viewed, including multiple views by the same user [77]. The next level in the Shawky, Kubacki, Dietrich and Weaven [10] framework is interaction, and this level highlights the beginning of two-way communication between different actors, including customers, organizations and other customers. At this level, users engage with the advertisement by interactively contributing to the advertised message [75]. Social media interaction can be defined as "the number of participant interactions stratified by interaction type" [78]. Interaction types include likes, comments, clicks and overall engagement which are utilized to measure this level following the Shawky, Kubacki, Dietrich and Weaven [10] framework. The third level in the Shawky, Kubacki, Dietrich and Weaven [10] framework is loyalty, where interaction is repeated over time. A user is regarded as loyal if they are consistently seen interacting and contributing to an organization's advertisements and content on social media. To encourage loyalty on social media, an organization's content should aspire to complement the user's image, as this will increase the chances of interactions over time and sharing with others [79]. This level is measured by multiple comments and messages in the Shawky, Kubacki, Dietrich and Weaven [10] framework. Finally, advocacy marks the fourth and highest level of engagement. Advocacy is recorded when customers spread an organization's message by generating new content through their networks. Advocacy is where interaction is sustained, and support moves beyond the dyadic nature, reaching users' own networks. This is when users share organizations' content with their community circles through their own pages, profiles and networks [80]. Following the Shawky, Kubacki, Dietrich and Weaven [10] framework, this level is measured by shares, tagging of others on a post and word of mouth. The last two levels are parallel to Schivinski, Christodoulides and Dabrowski [75] creation level where users generate and create content. This is regarded as the highest level of online engagement as it motivates future interaction and involvement with the organization online and offline [76]. The multi-actor engagement framework stops at advocacy as the highest level of engagement. As a result of our study, we propose a fifth level which marks the transition from online engagement to behavior. The fifth level signifies the customer's action beyond social media platforms, and this can be reflected by purchases, donations, registrations, signatures on a petition and many other actions. While previous studies explore advertising effectiveness with customer perceptions such as attitudes, memorability, likability of the ad and intentions to take action, the direct effect of advertising on behavior, specifically in a social media context, is yet to be explored [71]. 
This is now possible with methodological advances and digital and social media platforms that allow experiments to track not only automated measures on social media (e.g., likes and shares), but also further action taken beyond such platforms (e.g., filling in a lead form, visiting a store, buying a product) [71]. This is of specific interest to advertisers employing emotional appeals, as each emotion has different action tendencies which influence audience behavior after exposure to the emotional advertisement appeal. As Poels and Dewitte [71] explain, different emotional appeals "help the individual sort out which action tendency is the most functional in this situation". Each behavior is tracked differently; for example, weight loss campaigns can be tracked through the audience's eating habits and exercise patterns, while anti-tobacco campaigns may track cigarette purchases. Hence, this level has a number of possible measures. If measuring purchases of a product or donations to an online charity, the click-through rate to the product or donation page, along with the number of orders or the amount of donations, are examples of measures that reflect actions. For the purpose of this study, we measure actions through the number of requests to receive a donation sorting bag that helps reduce textile waste. The extended model is shown in Figure 1.

Past literature identified that positive advertising appeals produce an emotion-focused coping mechanism, while negative appeals create emotional imbalance to stimulate behavior change, and coactive appeals may reduce defensive reactions but with less evidence of effectiveness in creating behavior change. For the purpose of the current study, we focus on four online behavioral outcomes. First, connection is defined by reach. Second, interaction is defined by engagement, likes, comments and clicks. Third, loyalty is defined by repeated actions on the ad. Finally, advocacy is defined by sharing the ad. Guided by past research, the current study expects negative advertising appeals to be more effective in evoking online behavioral responses than positive and coactive messages on social media. Hence, the following hypotheses are proposed:

Hypothesis 1 (H1): Negative advertising appeals will achieve more interactions than positive and coactive advertising appeals on social media.

Hypothesis 2 (H2): Negative advertising appeals will achieve more loyalty than positive and coactive advertising appeals on social media.

Hypothesis 3 (H3): Negative advertising appeals will achieve more advocacy than positive and coactive advertising appeals on social media.

Hypothesis 4 (H4): Negative advertising appeals will achieve more behavioral actions than positive and coactive advertising appeals on social media.
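To illustrate how raw platform metrics map onto the engagement levels used in these hypotheses, a minimal sketch follows; the field names (likes, shares, form_submitted, etc.) are hypothetical and do not correspond to actual Facebook Ads Manager export columns:

```python
def classify_engagement(row):
    """Map one user's recorded actions on an ad to the highest level of the
    multi-actor engagement framework [10], extended with a behavior level."""
    if row.get("form_submitted"):                 # action beyond the platform
        return "behavior"
    if row.get("shares", 0) > 0:                  # spread to own network
        return "advocacy"
    if row.get("comments", 0) > 1:                # repeated interaction
        return "loyalty"
    if row.get("likes", 0) or row.get("clicks", 0) or row.get("comments", 0):
        return "interaction"                      # two-way communication
    return "connection"                           # ad viewed only

print(classify_engagement({"likes": 1, "clicks": 2}))        # interaction
print(classify_engagement({"shares": 1}))                    # advocacy
print(classify_engagement({"form_submitted": True}))         # behavior
```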
Gaps and Aims

This study aims to address three main gaps in the literature. First, the need for a social media advertisement evaluation model that addresses both online engagement and behavioral actions; this need is addressed here through an empirical study. Second, the evaluation of social media advertisements has been limited by self-reported measures of intentions to engage with advertisements (e.g., intention to click, like, share) [81], neglecting actual engagement measures (e.g., comments, reactions, shares, likes and ad clicks) that can be directly observed in social media. Third, current evidence on social media advertising appeals' effectiveness in engagement and changing behavior remains conflicted, inconsistent and fragmented. Such gaps create a challenge for researchers aiming to understand social media advertising appeal effectiveness and for advertisers, given that limited guidance is available to provide an implementation roadmap that can be relied upon to deliver behavior change benefitting people. This study addresses the aforementioned gaps by testing the capacity of positive, negative and coactive advertising appeals to engage audiences on social media and drive behavioral actions.

Material and Methods

The current study employed an experimental study design, where three advertising appeals (positive, negative and coactive) were designed following Alhabash, McAlister, Hagerstrom, Quilliam, Rifon and Richards [7] and Hong and Lee [60] and published on Facebook following a pre-test conducted with a participant panel (n = 10).

Pre-Test

An online survey was distributed featuring the three advertisements. After exposure to each ad, participants were asked to rate how the advertisement made them feel on a 7-point scale (mostly positive/mostly negative) [82]. Next, participants were asked to describe how the advertisements made them feel in one word. The three ads maintained similarities in visuals and manipulated verbal elements to represent each appeal. The pre-test included one between-subjects ANOVA to compare the advertisements' emotional valence and a sentiment analysis of each advertisement response. The aim was to ensure that the positive advertisements were rated as more positive than the negative and coactive advertisements, the negative advertisements more negative than the positive and coactive advertisements, and the coactive advertisements in the middle. Moreover, the pre-test included a sentiment analysis of participants' feedback on each advertisement. Word maps were generated and analyzed to confirm each ad represented the respective appeal.

Social Media Advertisements

After the pre-test, the three advertisements were published on Facebook and promoted for a week, controlling for the reach (i.e., number of people who were presented with the ad) of each advertisement through Facebook's A/B testing tool on Facebook ads manager.
The A/B testing tool allows for a comparable data set between the tested advertisements by controlling for reach across the different groups along with demographic elements (e.g., gender). Facebook ads manager allows for extraction of advertisements' performance data as a spreadsheet, which was then used by the research team to analyze advertising appeal effectiveness using SPSS v. 25. Facebook records advertisements' performance data in key metrics including reach, likes, comments and shares. The ads appeared to Facebook users on their news feed as they scrolled through the content. The published ads (see Table 1) were linked to a charity website. One aim of the website is to educate people on what to donate to increase the quality of donations for Australian charities. When landing on the website, customers were asked to fill in a form to request a cloth sorting bag for their donations. Each form submission was recorded, and web data were extracted for all form submissions when the campaign was over. The research procedure is outlined in Figure 2.

Table 1. Artwork of the three advertisements by appeal (positive, negative and coactive).

Analysis and Measures

The three advertisements employed in this study were analyzed using the Shawky, Kubacki, Dietrich and Weaven [10] multi-actor social media engagement framework. When customers viewed the advertisement, reach was recorded. When a customer liked or reacted to or clicked on an ad, interaction was recorded. When customers commented multiple times or replied to others to clarify the message or provide information, loyalty was recorded. When users shared the advertisement, advocacy was recorded. Finally, when customers filled in the form on the charity website, action was recorded. The form was created as a lead capture tool where customers filled in their information (e.g., name, address, contact details) to request a donation bag they could use to take their donations to charities. Three separate forms were created with three links, one for each advertising appeal.
Data on each ad's performance were extracted from Facebook ads manager, while data on all request forms were extracted from the website after the ads on Facebook ended, and were analyzed based on the number of requests received on each form. Following Merchant, Weibel, Patrick, Fowler, Norman, Gupta, Servetas, Calfas, Raste, Pina, Donohue, Griswold and Marshall [78], the data received were analyzed as categorical (out of all users reached, ad was liked: yes or no; ad was clicked on: yes or no) and continuous (counts of how many users liked, clicked or commented). A chi-square test of independence was performed to examine the relation between advertising appeal and Shawky, Kubacki, Dietrich and Weaven's [10] engagement levels: connection, interaction, loyalty, advocacy and the fifth proposed level of behavior, using SPSS v.25.
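A minimal sketch of such a test is shown below, assuming hypothetical counts of users who clicked versus did not click for each appeal; only the test structure mirrors the analysis described above, while the numbers and the scipy workflow are illustrative (the study itself used SPSS).

```python
from scipy.stats import chi2_contingency

# Hypothetical appeal x clicked contingency table (rows: appeals)
#                 clicked  did_not_click
table = [
    [210, 6790],   # positive
    [340, 6660],   # negative
    [237, 6817],   # coactive
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```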
Results

A sample of ten participants was achieved for the pre-test, with a mean age of 24 and balanced gender (50% female). Using SPSS v.25, pre-tests were successful for all advertising appeals. There was a statistically significant difference between group means, showing a significant effect of appeal type on emotional valence (mostly positive/mostly negative) at the p < 0.05 level as determined by one-way ANOVA (F(2,27) = 199.957, p = 0.00). Post hoc analyses were conducted using Tukey's post hoc test. The test showed that the three advertising appeal groups differed significantly at p < 0.05. The coactive ad means appear in the middle, as their mean scores were mostly neutral compared to the other two categories of emotional tone (see Figure 3). This result indicated that in the coactive condition, participants perceived the advertisement as both positive and negative at the same time, reflecting the bi-dimensional nature of the appeal (negative and positive). The sentiment analysis showed that the positive appeal was perceived as mostly hopeful, the negative as mostly shameful and the coactive as motivational. Figure 4 showcases the word map for each advertisement.

Social Media Advertisements

The three promoted appeals achieved a total of 23,905 impressions and reached 21,054 users, which resulted in 787 clicks to the website. Facebook ads manager targeted a balanced sample for the three promoted appeals by using its A/B testing tool. The three ads reached Facebook users above 18 years of age of both genders (see Figures 5 and 6). While the overall sample is female skewed (see Figure 6), each advertising appeal achieved a balanced reach for both genders (see Figure 7).
The click-through rate achieved across the three ads of 3.28% is above the average of 1.24% for Facebook ads [83]. A total of 28 requests were received for donation bags through the website forms. The results for each advertising appeal are discussed next.

Connection

Connection was measured through reach and was controlled between the three appeal ads to ensure applicable comparison (see Table 2). A chi-square test of independence revealed an insignificant effect of appeal type on reach between the three advertisements, χ2(2, N = 21,054) = 4.57, p = 0.11.

Interaction

Appeal type had a significant effect on the level of interaction. This is evident through all three measures: clicks, engagement and comments (see Table 3). The negative appeal ad had significantly more engagement than the positive and coactive appeals. A chi-square test of independence showed significance for clicks, χ2(2, N = 21,054) = 18.57, p < 0.05, and engagement, χ2(2, N = 21,054) = 20.68, p < 0.05. No significant difference was observed for comments, χ2(2, N = 21,054) = 4.94, p > 0.05, partially supporting H1. When comparing clicks on the positive and coactive appeals, no statistical significance was recorded at the 0.05 level.

Loyalty

No repeated interaction was recorded for any of the three advertising appeals (see Table 4). Therefore, appeal type had no effect on level of loyalty. Hence, H2 was not supported.

Advocacy

A chi-square test of independence showed no significant effect of appeal type on advocacy (see Table 5). This is seen in the number of shares the three appeals received, χ2(2, N = 21,054) = 1.82, p = 0.40. Hence, H3 was not supported.

Behavior

Behavior was measured through the number of requests for a donation sorting bag received for each advertising appeal. The negative appeal achieved the highest number of requests, followed by the positive and coactive appeals (see Table 6). A chi-square test of independence showed a significant difference for appeal type based on the number of bag requests, χ2(2, N = 21,054) = 6.54, p < 0.05, supporting H4.

Discussion

The current study contributes to the literature in three ways. Firstly, we tested positive, negative and coactive appeals' effectiveness on social media to understand their effect on engagement and behavior.
This is the first study to directly examine advertising appeals on social media without the use of self-report measures. Our findings support the use of negative appeals over positive and coactive appeals when aiming to drive engagement and change behavior. This provides clear guidance for practitioners aiming to create effective social advertisement messages on social media. Secondly, this is the first study to apply and build on the Shawky, Kubacki, Dietrich and Weaven [10] social media multi-actor engagement framework in testing advertising appeals' effectiveness. Our findings support the use of the framework in testing advertising effectiveness and show clear measures for each level of engagement. Finally, the study proposed an extension to the social media multi-actor engagement framework [10] with a clear and practical way of measuring actions beyond social media engagement. This will enable social advertisers to measure advertising effectiveness on actual behavior, moving beyond indirect behavioral measures such as attitudes, norms and intentions. Each contribution will be discussed in detail next.

Negativity Increases Appeals' Effectiveness

When comparing positive, negative and coactive appeals' performance in driving engagement and action on social media, our findings suggest negative appeals hold a persuasive advantage (see Figure 8). This is evident in the significant increase in engagement and actions for the negative advertisements when compared with the positive and coactive advertising appeals. This is consistent with previous findings, especially with behavior related to charities [18,84] and the environment [52,85]. Our findings support the limited effectiveness of positive appeals when complex issues are discussed. The advertisements employed by the current study address the issue of waste and its impact on the planet, where positive appeals have been found to be less effective in the past [54]. The effectiveness of coactive appeals has not been tested directly on social media before, marking a significant contribution of this study.
Interestingly, the coactive appeal was equally effective when compared to positive appeals in attracting comments, and in driving loyalty, advocacy and behavioral actions. The findings in this study indicate that both positive and coactive appeals performed in a similar way, contradicting previous findings supporting positive appeals' effectiveness over coactive appeals [7]. The limited effectiveness of both positive and coactive appeals in this study may be attributed to the advertising platform.

Platform Effect on Appeal Effectiveness

Social media advertising engagement differs across social media platforms [86,87]. To understand why negative appeals were most effective in our experiment, a review of the platform of choice (i.e., Facebook) was necessary. Evidence suggests Facebook is among the most negative platforms in nature. Voorveld, van Noort, Muntinga and Bronner [86] explain the implications of such findings by relating them to advertising valence (i.e., appeals). Hence, when advertising on Facebook, advertisements that evoke negative feelings (i.e., negative appeals) perform best. The effectiveness of such appeals stems from the fluency between advertising appeal and platform nature [87]. Platform-appeal fit was found to increase the effectiveness of advertising and drive higher rates of engagement [87]. In fact, Facebook ran an experiment in 2014, where content was manipulated for half a million users. The experiment tested whether what users share is affected by what they see in their newsfeed. The findings supported the concept of platform-appeal fit: when negative content was increased in users' feeds, their posts became more negative as a result [88]. Recently, Facebook was criticized for carrying out such experiments, which can have an impact on mental health, personal decisions and, in many cases, political election outcomes [89]. All of this results in users' skepticism of the platform, contributing to its negative nature.

Driving Action Beyond Social Media Engagement

The current study applies and builds on the Shawky, Kubacki, Dietrich and Weaven [10] social media multi-actor engagement framework. The framework provides a practical tool for organizations seeking to evaluate their social media content. This study applies the framework, testing advertising appeals' effectiveness, an application of the framework that has not previously occurred.
It showcases the ability to use the Shawky, Kubacki, Dietrich and Weaven [10] framework as a tool to evaluate social media advertising effectiveness in driving engagement and prosocial behaviors. While the framework proved to be a practical and easy measurement tool for advertisers, it has a key limitation when aiming to carry out a thorough evaluation of advertising messages' capacity to drive action. Linking online engagement to behavior remains limited when evaluating social media advertising effectiveness. Hence, an extension to the model is proposed, where behavior is measured through action (e.g., donations), extending understanding beyond social media engagement (see Figure 9). This extension could be tested empirically, allowing actions taken to be observed and analyzed. Behavior could be measured as a fifth step in Shawky et al.'s framework. Negative appeals achieved the highest actions, while positive and coactive appeals received equal actions from users.
This could be explained by the emotion-focused coping function, where positive emotions produce a hope for change rather than inducing a motivation to take action [54]. Furthermore, when viewers hold favorable prior attitudes towards the advertised behavior (i.e., reduce waste) positive appeals are found to be less effective [90]. While there are no data on prior attitudes of the viewers for this experiment, the comments received about the negative appeal advertisement suggest some people are passionate about the environment, and they want to take actions to help others and reduce waste. Hence, negative appeals were more effective in driving both engagement and action. Limitations and Future Research Three main limitations apply to this study. First, mediators of effectiveness, such as prior attitude towards the issue, were not examined in this study. Future research is recommended to employ a pre-exposure survey to collect such data or utilize social media targeting tools to target specific audiences with certain interests. Second, the advertisements tested in this study were shared predominantly on Facebook where negative content dominates. Future research should investigate other platforms to understand the effects that social media platforms exert on advertising appeal effectiveness. Empirical tests of different appeals on multiple platforms (e.g., Twitter, TikTok, Snapchat) are needed to draw conclusions on where different appeals perform best. Third, the sample reached by this study may be small when compared to other studies in social media settings [91], and future research may increase the reach by increasing the budget invested in the advertising campaign on Facebook. It is important to note that different contexts may achieve different results, and the experiment can be replicated to understand if other behaviors, such as the uptake of the COVID-19 vaccine, are more effectively achieved through negative advertising appeals. Research shows a promising effect of emotional appeals in both social and health domains, with more empirical evidence needed [92]. Future research may investigate the role of social media advertisements in inspiring loyalty and advocacy through different emotional appeals. The current study found no effect of appeal type on loyalty and advocacy, presenting a limitation to the overall findings. To increase analytic rigor, future research may employ a CB/PLS-SEM approach to test social media advertising's effect on behavior [93]. Conclusions This study examined positive, negative and coactive advertising appeals' effectiveness in driving engagement and actions on and beyond social media platforms. Findings support the use of negative advertising appeals over positive and coactive appeals. The results highlight how negative appeals on social media advertising in an environmental and charity context can deliver superior outcomes to engage more people and positively impact social behavior. Theoretically, this study highlights the value of using Shawky et al.'s (2020) multi-actor social media engagement framework to test social advertising appeal effectiveness and provides a practical extension to evaluate behavioral outcomes.
Killing Horizons: Negative Temperatures and Entropy Super-Additivity

Many discussions in the literature of spacetimes with more than one Killing horizon note that some horizons have positive and some have negative surface gravities, but assign to all a positive temperature. However, the first law of thermodynamics then takes a non-standard form. We show that if one regards the Christodoulou and Ruffini formula for the total energy or enthalpy as defining the Gibbs surface, then the rules of Gibbsian thermodynamics imply that negative temperatures arise inevitably on inner horizons, as does the conventional form of the first law. We provide many new examples of this phenomenon, including black holes in STU supergravity. We also give a discussion of left and right temperatures and entropies, and show that both the left and right temperatures are non-negative. The left-hand sector contributes exactly half the total energy of the system, and the right-hand sector contributes the other half. Both sectors satisfy conventional first laws and Smarr formulae. For spacetimes with a positive cosmological constant, the cosmological horizon is naturally assigned a negative Gibbsian temperature. We also explore entropy-product formulae and a novel entropy-inversion formula, and we use them to test whether the entropy is a super-additive function of the extensive variables. We find that super-additivity is typically satisfied, but we find a counterexample for dyonic Kaluza-Klein black holes.

Introduction

Since the early days of black hole thermodynamics there have been suggestions that the thermodynamics of the inner (Cauchy) horizons of charged and/or rotating black holes should be taken more seriously than it has been [1-10]. With the development of String Theory approaches these suggestions have become more insistent [11-17]. This interest increased considerably with the observation that the product of the areas, and hence entropies, of the inner and outer horizons takes, in many examples, a universal form which should be quantised at the quantum level [18-22]. Some of these papers, and others, for example [22-27], encountered the same feature first noticed in [1]: the fact that with a conventional first law of thermodynamics the temperature of the inner horizon would be negative. The authors of [22] chose to resolve this issue by defining the temperature of the inner horizon to be the absolute value of the "thermodynamic" temperature, and proposing an appropriately-modified first law on the inner horizon to compensate for this. In this paper we shall explore the consequences of adhering to the standard first law of thermodynamics for inner horizons, with the inevitable result that the temperature will be negative there.

In the derivation of the first law of black hole dynamics one finds, integrating in the region between the inner and outer horizons, that

$$\frac{1}{8\pi}\,\kappa_+\, dA_+ + \cdots = dM = \frac{1}{8\pi}\,\kappa_-\, dA_- + \cdots\,, \qquad (1.1)$$

where $\kappa_\pm$ are the surface gravities and $A_\pm$ the areas of the outer and inner horizons respectively. (The contributions from the angular momentum and charge(s) are represented by the ellipses in this equation.) If, as turns out to be the case in the examples we consider, the signs of $dA_+$ and $dA_-$ are opposite for a given change in the black-hole parameters, then the signs of the surface gravities $\kappa_+$ and $\kappa_-$ must be opposite too.
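The opposite-sign claim is easy to check symbolically for the static example $h = (r - r_+)(r - r_-)/r^2$ discussed in the footnote below, where $\kappa = \frac{1}{2}\, dh/dr$ on each horizon. The following sketch is an illustrative verification we add here, not part of the original paper:

```python
import sympy as sp

r, rp, rm = sp.symbols('r r_+ r_-', positive=True)

# Static two-horizon example: h(r) = (r - r_+)(r - r_-)/r^2
h = (r - rp) * (r - rm) / r**2

kappa = sp.diff(h, r) / 2  # surface gravity kappa = h'(r)/2 on a horizon
kappa_outer = sp.simplify(kappa.subs(r, rp))  # expect (r_+ - r_-)/(2 r_+^2) > 0
kappa_inner = sp.simplify(kappa.subs(r, rm))  # expect -(r_+ - r_-)/(2 r_-^2) < 0
print(kappa_outer, kappa_inner)
```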
The surface gravity is defined by evaluating

$$\ell^\nu\,\nabla_\nu\,\ell^\mu = \kappa\,\ell^\mu \qquad (1.2)$$

on the horizon, where $\ell^\mu$ is the future-directed null generator of the horizon, which coincides with a Killing vector $K^\mu$ on the horizon. One then finds that whilst κ is positive on the outer horizon, it is negative on the inner horizon.1 Hawking showed that for an isolated event horizon in an asymptotically flat spacetime (for which in fact κ is positive), the temperature is κ/(2π). We shall discuss the extension of Hawking's calculation to the case of inner horizons in the concluding section of this paper. In what follows, however, we shall frequently make reference to the formula

$$T = \frac{\kappa}{2\pi}\,, \qquad (1.3)$$

with the understanding that T may not be a temperature measured by a physical thermometer, but rather, as we shall explain shortly, a "Gibbsian" temperature.

1 For example, in a static metric $ds^2 = -h(r)\, dt^2 + dr^2/h(r) + r^2(d\theta^2 + \sin^2\theta\, d\phi^2)$ one finds (after changing to a coordinate system that covers the horizon region) from (1.2) that if $K = \partial/\partial t$ then $\kappa = \frac{1}{2}\, dh/dr$, which is of the form of the negative of the gradient of the gravitational potential, evaluated on the horizon. If $h = (r - r_+)(r - r_-)/r^2$, as in the Reissner-Nordström metric, then $\kappa_+ = (r_+ - r_-)/(2r_+^2) > 0$, while $\kappa_- = -(r_+ - r_-)/(2r_-^2) < 0$. In general, of course, the slope of h(r) must always have opposite signs at two adjacent zeros, and thus the surface gravities must have opposite signs.

The occurrence of a negative κ on an inner horizon is somewhat obscured in many discussions in the literature by the fact that the surface gravity is commonly calculated by evaluating

$$\kappa^2 = -\tfrac{1}{2}\,(\nabla^\mu K^\nu)(\nabla_\mu K_\nu) \qquad (1.4)$$

in the limit on the horizon. This formula is derivable from (1.2), but the information about the sign of κ is lost, and commonly the positive root is assumed when calculating κ from (1.4). A guaranteed correct procedure is to use the formula (1.2), working in a coordinate system that covers the horizon region.

Another situation where one encounters two horizons is when a positive cosmological constant Λ is involved and one has both a black hole event horizon and a cosmological event horizon bounding a static or stationary region [28]. A number of recent studies have pointed out that the surface gravities of the black hole horizon $\kappa_B$ and the cosmological event horizon $\kappa_C$ again have opposite signs [29-31]. Most have followed the procedure adopted in [28] and taken the physical temperature to be $|\kappa|/(2\pi)$ (for example, see [32]). A similar situation arises in the case of the C-metrics, which contain both a black-hole horizon and an acceleration horizon. Their surface gravities are of opposite signs.

In order to assess the status of these suggestions, in this paper we shall re-examine the foundations of classical black hole thermodynamics from the viewpoint of the approach to classical thermodynamics advocated by Gibbs [33]. The central idea of this approach is that the physical properties of a substance are encoded into the shape of its Gibbs surface, i.e. the surface obtained by regarding the total energy as a function of the remaining extensive variables, with the energy giving the height of the surface. From this point of view, the temperature is given by the slope of the curve of energy versus entropy. To this end, we shall need explicit Christodoulou-Ruffini formulae, and a major goal of this paper is to obtain these for a variety of black hole solutions.
As we shall see, it is a common feature of these examples that the "Gibbsian temperature" thus defined, while positive for black hole event horizons, is negative for inner horizons (i.e. Cauchy horizons) and for cosmological horizons. We shall return to a discussion of the physical consequences for spacetimes with two horizons in the conclusions.

Let us recall that the formalism of thermodynamics, applied to classical black holes, began with two independent discoveries:

• Christodoulou's concept of reversible and irreversible transformations, such that the energy E of a rotating black hole of angular momentum J and momentum P may be expressed as

$$E^2 = M_{\rm irr}^2 + \frac{J^2}{4 M_{\rm irr}^2} + P^2\,,$$

where the irreducible mass $M_{\rm irr}$ is non-decreasing [34]

• Hawking's Theorem [35,36] that the area A of the event horizon is non-decreasing.

In fact

$$M_{\rm irr}^2 = \frac{A}{16\pi}\,,$$

and for charged rotating Kerr-Newman black holes, dropping the momentum contribution and setting J = |J|, one has [37] the Christodoulou-Ruffini formula:

$$M^2 = \left(M_{\rm irr} + \frac{Q^2}{4 M_{\rm irr}}\right)^2 + \frac{J^2}{4 M_{\rm irr}^2}\,. \qquad (1.7)$$

The obvious analogy of some multiple of the area of the horizon with entropy became even more striking with the discovery by Smarr [38] of an analogue of the Gibbs-Duhem relation for homogeneous substances. For Kerr-Newman black holes, this reads

$$M = \frac{1}{4\pi}\,\kappa A + 2\Omega J + \Phi Q\,, \qquad (1.8)$$

where κ is the surface gravity, Ω the angular velocity and Φ the electrostatic potential of the event horizon. The analogy became almost complete with the formulation of three laws of black hole mechanics, including the first law

$$dM = \frac{1}{8\pi}\,\kappa\, dA + \Omega\, dJ + \Phi\, dQ\,, \qquad (1.9)$$

by Bardeen, Carter and Hawking [39]. Note that the Smarr relation (1.8) follows from the first law (1.9) by differentiating the weighted homogeneity relation

$$M(\lambda^2 A,\, \lambda^2 J,\, \lambda Q) = \lambda\, M \qquad (1.10)$$

with respect to λ and then setting λ = 1: this gives $2A\,\partial M/\partial A + 2J\,\partial M/\partial J + Q\,\partial M/\partial Q = M$, which reproduces (1.8) upon substituting the coefficients from (1.9).

The existence of a "first law" is not in itself surprising, nor does it, in itself, imply any thermodynamic consequences. Whenever one has a problem involving varying a function subject to some constraints, and considering the value of the function at critical points, one has a formula analogous to (1.9). In the case of black hole solutions of the Einstein equations, they are known to satisfy a variational principle in which the mass is extremised keeping the horizon area, angular momenta and charges fixed (see, for example, [40-42]). Similar formulae arise in the theory of rotating stars (see, for example, [43]). The study of these variations is sometimes referred to as comparative statics.

For homogeneous substances with pressure P, volume V and internal energy U, it is well known that the Gibbs-Duhem relation is equivalent to the statement that the Gibbs free energy, or thermodynamic potential, G = U + PV − TS, is given by μN, where μ is the chemical potential and N the particle number.

Classically, a number of arguments led to the conclusion that the laws of black hole mechanics were just analogous to the laws of thermodynamics. One argument was that as perfect absorbers, classical black holes should have vanishing temperature and hence the entropy should be infinite (cf. [44,45]). Another was based on dimensional reasoning. In units where Boltzmann's constant is taken to be unity, entropy is dimensionless, but in classical general relativity it is not obvious how to achieve this without introducing a unit of length. The obvious guess for entropy would be some multiple of the area A, but why not some monotonically increasing function of the area? Despite these doubts it was conjectured by Bekenstein [45] that when quantum mechanics is taken into account some multiple of $A/\ell_p^2$ should correspond to the physical entropy of a black hole.
This conjecture was subsequently confirmed at the semi-classical level by Hawking [46,47], using quantum field theory in a curved background. Given this, one recognises the Christodoulou-Ruffini formula as defining the Gibbs surface of the black hole.

To summarise, the purpose of the present paper is to re-examine these issues systematically, based on Gibbs's geometric viewpoint of the mathematical formalism of thermodynamics [33]. This starts with a choice of pairs of extensive and intensive variables and an expression for some sort of "energy," which is regarded as a function of the extensive variables. For the black holes in asymptotically flat spacetimes that we shall consider, the energy is taken to be the ADM mass M, and the extensive variables $S^\mu$ are usually taken to be $S^\mu = (S, J, Q_i, P_i) = (S, \mathbf{s})$, where $S = \frac{1}{4}A$ and A is the area of the event horizon, J is the total angular momentum, and $Q_i$ and $P_i$ are 2N electric and magnetic charges. Thus we have

$$M = M(S^\mu)\,. \qquad (1.14)$$

The intensive variables are taken to be

$$T_\mu = \frac{\partial M}{\partial S^\mu} = (T, \Omega, \Phi^i, \Psi^i)\,,$$

where T is the temperature, Ω is the angular velocity of the horizon, and $\Phi^i$ and $\Psi^i$ are the electrostatic and magnetostatic potentials.

The organisation of this paper is as follows. In section 2, we review the theory of Gibbs surfaces, and the various thermodynamic metrics with which they may be equipped. Section 3 forms the core of the paper. In it, we give many new results for the thermodynamics of a wide range of asymptotically-flat black holes. We begin in subsections 3.1, 3.2 and 3.3 by reviewing how the well-known Reissner-Nordström, Kerr and Kerr-Newman black holes fit into the Gibbsian framework. Subsection 3.4 then provides an extensive discussion of the thermodynamics of families of black holes in four-dimensional STU supergravity. In particular, we give a systematic discussion of the notion of the decomposition of the system into left-handed and right-handed sectors, and their associated thermodynamics. Subsection 3.5 has analogous results for five-dimensional STU supergravity black holes. Subsections 3.6 and 3.7 give similar results for the general family of four-dimensional Einstein-Maxwell-Dilaton (EMD) black holes, and a two-field generalisation. In the discussion of these two-field EMD black holes, we exhibit a new area-product formula.

A rather general feature of many asymptotically flat black holes with two horizons is that the product of the areas of the two horizons is independent of the mass, and given in terms of conserved charges and angular momenta, which may plausibly be quantised at the quantum level. In section 4, we use this area-product formula to exhibit an intriguing S → 1/S inversion symmetry of the Christodoulou-Ruffini formulae for such black holes. This symmetry of the Gibbs surface interchanges the positive and negative temperature branches. In section 5 we extend our discussion to black holes that are asymptotically AdS, or black holes with positive cosmological constant. In the AdS case the situation for inner and outer horizons is broadly similar to that for the asymptotically flat case. For positive cosmological constant, the black hole event horizon continues to have positive Gibbsian temperature, but that of the cosmological horizon is negative. In section 6, we revisit an old observation, that the entropy of the Kerr-Newman solution is a super-additive function of the extensive variables, and we discuss its relation to Hawking's area theorem for black-hole mergers. We find that super-additivity holds also for a wide variety of the asymptotically-flat examples that we considered in section 3.
However, we find that Kaluza-Klein dyonic black holes provide a counterexample, and we speculate on the reason for this. The paper ends with conclusions and future prospects in section 7.

The Gibbs surface

In this section we shall briefly summarise those aspects of the Gibbs surface which are relevant for the latter part of the paper. If we think of $(S^\mu, M)$ as coordinates in $\mathbb{R}^{3+2N}$ then (1.14) defines a hypersurface $G \subset \mathbb{R}^{3+2N}$ whose co-normal is $(T_\mu, -1)$. Since in our case M is a unique function of the extensive variables, the intensive variables are unique functions of the extensive variables: $T_\mu = T_\mu(S^\nu)$. The converse need not be true. If the function $M(S^\mu)$ were convex, then for fixed co-normal $(T_\mu, -1)$ the plane with that co-normal would touch the surface at a unique point $(S^\mu, M)$. For a smooth Gibbs surface G, convexity requires that the Hessian be positive definite, and one may then define a positive definite metric

$$ds^2_W = \frac{\partial^2 M}{\partial S^\mu\,\partial S^\nu}\, dS^\mu\, dS^\nu\,, \qquad (2.2)$$

called the Weinhold metric [49]. Because one of the components of the Weinhold metric (2.2) is related to the heat capacity at constant J and $Q_i$ and $P_i$, namely

$$\frac{\partial^2 M}{\partial S^2} = \frac{T}{C_{J,Q,P}}\,,$$

and neutral black holes or black holes with small charges or angular momentum have negative heat capacities, the Gibbs surface G is typically not convex and the Weinhold metric for black holes is typically Lorentzian [50]. If one defines a totally symmetric co-covariant tensor of rank three by

$$C_{\mu\nu\rho} = \frac{\partial^3 M}{\partial S^\mu\,\partial S^\nu\,\partial S^\rho}\,,$$

the Riemann and Ricci tensors and the Ricci scalar of the Weinhold metric may be expressed algebraically in terms of $C_{\mu\nu\rho}$, all indices being raised with $g_W^{\mu\nu}$, the inverse of $g^W_{\mu\nu}$. Divergences in R are sometimes held to be a diagnostic for phase transitions.

The geometry of the Gibbs surface is essentially the geometry behind the first law of thermodynamics. As we remarked previously, this fits into a pattern that is more general than just the theory of black holes, and arises whenever one is considering a variational problem with constraints. Since this is not as widely known as it deserves to be, we shall pause to describe the general situation, and then we shall restrict attention to its application to black hole theory. Consider a real-valued function f(x) on some space X with coordinates x, subject to the n constraints that certain functions $C^a(x) = c^a$, 1 ≤ a ≤ n, where the $c^a$ are constants. Adopting the method of Lagrange multipliers, we require that $df = \lambda_a\, dC^a$ for Lagrange multipliers $\lambda_a$. In the black-hole case, where the role of f is played by M and the constraints are the extensive variables, the critical data $(T_\mu, S^\nu)$ sweep out a submanifold LA. Since, when pulled back to LA, we have $T_\mu\, dS^\mu = dM(S^\mu)$, the pull-back to LA of the symplectic form $\omega = dT_\mu \wedge dS^\mu$ vanishes. In other words, LA is a Lagrangian submanifold of $\mathbb{R}^{6+3N}$. One may go a step further and lift LA to $\mathbb{R}^{7+2N}$ with coordinates $(P_\mu, S^\nu, M)$, equipped with the contact form

$$\alpha = dM - P_\mu\, dS^\mu\,. \qquad (2.14)$$

The Gibbs function, or thermodynamic potential, G, is the total Legendre transform of the mass with respect to the extensive variables. It satisfies

$$dG = -S^\mu\, dT_\mu\,,$$

where

$$G = M - T_\mu\, S^\mu\,.$$

Note that G is not necessarily a single-valued function of the intensive variables $T_\mu$, unless the Gibbs surface G is convex. The Hessian of the Gibbs function with respect to the intensive variables is related to the Weinhold metric by the easily-verified formula

$$\frac{\partial^2 G}{\partial T_\mu\,\partial T_\nu} = -\,g_W^{\mu\nu}\,. \qquad (2.17)$$

It provides a metric on the space of intensive variables. It is important to realise that, from the point of view of the symplectic and contact structures, no particular choice of thermodynamic potential is preferred. From the point of view of the Gibbs surface G, geometrically it should really be thought of as an n-dimensional Legendrian sub-manifold of the (2n+1)-dimensional Legendre manifold whose coordinates consist of the total energy and the n pairs of intensive and extensive variables.
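The relation (2.17) is easily spelled out in the case of a single conjugate pair (T, S), a small verification step we add here for clarity:

$$G(T) = M - T S\,, \qquad \frac{\partial G}{\partial T} = -S\,, \qquad \frac{\partial^2 G}{\partial T^2} = -\frac{\partial S}{\partial T} = -\left(\frac{\partial^2 M}{\partial S^2}\right)^{-1}\,,$$

so the Hessian of G is indeed minus the inverse of the Hessian of M.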
Given a choice of n coordinates chosen from these 2n variables, one may locally describe the surface in terms of the associated thermodynamic potential, and from that compute the associated Hessian metric. But globally, it is not in general true that the Gibbs surface equipped with the choice of Hessian is a single-valued non-singular graph over the n-plane spanned by the chosen set of n coordinates. It should also be remembered that although the Hessian metrics may be thought of as the pull-back to G of a flat metric on the 2n-dimensional flat hyperplane spanned by the chosen n pairs of intensive and extensive variables, the signature of that flat metric depends upon that choice.

Here we review some key results on the general classes of thermodynamic metrics that were presented in [51]. Consider first the energy $M = M(S^\mu)$, which obeys the first law

$$dM = T_\mu\, dS^\mu\,. \qquad (2.18)$$

One can define from this the metric

$$ds^2_{(M)} = dT_\mu \otimes_s dS^\mu\,, \qquad (2.19)$$

where $T_\mu$ are viewed as functions of the $S^\mu$ variables, with

$$dT_\mu = \frac{\partial^2 M}{\partial S^\mu\,\partial S^\nu}\, dS^\nu\,, \qquad (2.20)$$

and $\otimes_s$ denotes the symmetrised tensor product. In the usual parlance of general relativity we may simply write (2.19) as

$$ds^2_{(M)} = dT_\mu\, dS^\mu\,.$$

In view of (2.20) we have

$$ds^2_{(M)} = \frac{\partial^2 M}{\partial S^\mu\,\partial S^\nu}\, dS^\mu\, dS^\nu\,, \qquad (2.22)$$

which is nothing but the Weinhold metric. One can obtain a set of conformally-related metrics by dividing (2.18) by any one of the intensive variables $T_{\bar\mu}$, where $\bar\mu$ denotes the associated specific index value of the chosen intensive variable, and then constructing the thermodynamic metric $ds^2_{(S^{\bar\mu})}$ for the conjugate extensive variable by using the same procedure as before [51]. Thus, for example, if we choose $\bar\mu = 0$, so that T is the chosen intensive variable and S its conjugate, then we rewrite (2.18) as

$$dS = \frac{1}{T}\, dM - \frac{T_a}{T}\, dS^a\,,$$

where we have split the µ index as µ = (0, a), and then write the associated thermodynamic metric

$$ds^2_{(S)} = d\Big(\frac{1}{T}\Big) \otimes_s dM - d\Big(\frac{T_a}{T}\Big) \otimes_s dS^a = -\frac{1}{T}\,\big(dT \otimes_s dS + dT_a \otimes_s dS^a\big) = -\frac{1}{T}\, ds^2_{(M)}\,.$$

The second equality was obtained by using (2.18), and the final equality follows from (2.22). Thus $ds^2_{(S)}$, which is the Ruppeiner metric, is conformally related by the factor −1/T to the Weinhold metric. Weinhold and Ruppeiner metrics were introduced into black hole physics in [50,52]. The literature is by now quite extensive. For a recent review see [53]. Other conformally-related metrics can be defined by dividing (2.18) by any other of the intensive variables and then repeating the analogous calculations. For example, if there is a single charge Q and potential Φ, then dividing the first law dM = T dS + Φ dQ + · · · by Φ and calculating the metric $ds^2_{(Q)}$, one obtains

$$ds^2_{(Q)} = -\frac{1}{\Phi}\, ds^2_{(M)}\,. \qquad (2.25)$$

Further thermodynamic metrics that are not merely conformally related to the Weinhold metric can be obtained by making Legendre transformations to different energy functions before implementing the above procedure [51]. For example, if one makes the Legendre transform to the free energy F = M − TS, for which one has the first law

$$dF = -S\, dT + T_a\, dS^a\,,$$

then the associated thermodynamic metric will be

$$ds^2_{(F)} = -\,dS \otimes_s dT + dT_a \otimes_s dS^a\,, \qquad (2.27)$$

where S and $T_a$, which are now the dependent variables, are viewed as functions of T and $S^a$. The metric components in $ds^2_{(F)}$ are therefore given by the Hessian of F. As observed in [51], the metric $ds^2_{(F)}$ has the property that, unlike the Weinhold or Ruppeiner metrics, its curvature is singular on the so-called Davies curve where the heat capacity diverges. Clearly, by making different Legendre transformations, one can construct many different thermodynamic metrics, which take the form

$$ds^2 = \sum_\mu \eta_\mu\, dT_\mu \otimes_s dS^\mu\,,$$

where each $\eta_\mu$ can independently be either +1 or −1. The overall sign is of no particular importance, and so metrics related by making a complete Legendre transformation of all the intensive/extensive pairs in a given energy definition really yield an equivalent metric.
For example, the Gibbs energy $G = M - T_\mu S^\mu$ gives the metric

$$ds^2_{(G)} = -\,dT_\mu \otimes_s dS^\mu = -\,ds^2_{(M)}\,,$$

which is just the negative of the Weinhold metric $ds^2_{(M)}$ in (2.22). One further observation that was emphasised in [51] is that one is not, of course, obliged when writing a thermodynamic metric to use the associated extensive variables as the coordinates. It is sometimes the case, as we shall see in later examples, that although one can calculate the thermodynamic variables in terms of the metric parameters, one cannot explicitly invert these relations. In such cases, one can always choose to use the metric parameters as the coordinates when writing the thermodynamic metrics. Geometric invariants such as the Ricci scalar of the thermodynamic metric will be the same whether written using the thermodynamic variables or the metric parameters, since one is just making a general coordinate transformation. Thus even in cases where the relations between the thermodynamic variables and metric parameters are too complicated to allow one to find an explicit Christodoulou-Ruffini formula to define the Gibbs surface, one can still study the geometrical properties of the various thermodynamic metrics.

Asymptotically Flat Black Holes

In this section, we shall illustrate the issues raised in the previous section by listing the cases of asymptotically-flat black holes for which we have explicit formulae. Whilst the formulae for the Kerr-Newman family of black holes are well known, we first review these in some detail in preparation for our discussion of much less well known black holes, such as those that occur in supergravity or Kaluza-Klein theories.

The Gibbs surface for Reissner-Nordström

The Gibbs surface G for the Reissner-Nordström solution is given by the Christodoulou-Ruffini formula

$$M = M_{\rm irr} + \frac{Q^2}{4 M_{\rm irr}}\,,$$

where $M_{\rm irr} = \sqrt{\frac{S}{4\pi}}$. It is convenient to envisage (M, Q, S) as a right-handed Cartesian coordinate system with M > 0 taken vertically and −∞ < Q < ∞ and S > 0 spanning a horizontal half-plane. In (M, Q, S) coordinates the surface is part of the quadratic cone

$$\frac{S}{\pi} - 2M\,\sqrt{\frac{S}{\pi}} + Q^2 = 0\,, \qquad (3.2)$$

which is homogeneous of degree two in the variables $(M, Q, \sqrt{S/\pi})$. We have M ≥ |Q|, with M > |Q| being sub-extremal black holes. Rewriting (3.2) as a quadratic equation for $\sqrt{S/\pi}$, the two solutions for S at fixed M and Q are given by

$$S_\pm = \pi\left(M \pm \sqrt{M^2 - Q^2}\right)^2\,,$$

with these corresponding to the entropies (i.e. one quarter the area) of the outer ($S_+$) and inner ($S_-$) horizons respectively. It is straightforward to see that the temperature T = ∂M/∂S is positive on the outer horizon and negative on the inner horizon. Equality, M = |Q|, corresponds to extreme black holes. They lie on the space curve $\gamma_{\rm extreme}$ given by the intersection of the two surfaces

$$M = |Q|\,, \qquad S = \pi M^2\,.$$

The first is a plane orthogonal to the Q plane, and the second a parabolic cylinder with generators parallel to the Q axis. The projection of $\gamma_{\rm extreme}$ onto the Q − S plane is given by the parabola

$$S = \pi Q^2\,.$$

Roughly speaking, the Gibbs surface G is folded over the space curve $\gamma_{\rm extreme}$. Now the Weinhold metric, or equivalently the Hessian of M(S, Q), is given by

$$ds^2_W = \frac{(3\pi Q^2 - S)}{8\sqrt{\pi}\, S^{5/2}}\, dS^2 - \frac{\sqrt{\pi}\, Q}{S^{3/2}}\, dS\, dQ + \sqrt{\frac{\pi}{S}}\, dQ^2\,.$$

Note that $\partial^2 M/\partial S^2$ changes sign, passing through zero, along the space curve $\gamma_{\rm Davies}$, given by

$$S = 3\pi Q^2\,. \qquad (3.9)$$

Since the heat capacity at constant charge, $C_Q$, is given by

$$C_Q = T\left(\frac{\partial S}{\partial T}\right)_Q = \frac{2S\,(S - \pi Q^2)}{3\pi Q^2 - S}\,,$$

it also changes sign across the curve $\gamma_{\rm Davies}$, on which it diverges [54]. This is often taken as a sign of a phase transition. In support of this interpretation, it has been shown [55] that the single negative mode of the Lichnerowicz operator passes through zero and becomes positive as Q is increased across $\gamma_{\rm Davies}$.
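The sign of T and the location of the divergence of $C_Q$ can be checked mechanically; the following sympy sketch (an illustrative verification we add here, not part of the original analysis) reproduces the temperature and the Davies curve $S = 3\pi Q^2$.

```python
import sympy as sp

S, Q = sp.symbols('S Q', positive=True)

# Christodoulou-Ruffini mass for Reissner-Nordstrom: M = M_irr + Q^2/(4 M_irr)
M_irr = sp.sqrt(S / (4 * sp.pi))
M = M_irr + Q**2 / (4 * M_irr)

# Gibbsian temperature T = (dM/dS)_Q; proportional to (S - pi Q^2),
# hence positive on the outer horizon (S > pi Q^2) and negative on the inner
T = sp.simplify(sp.diff(M, S))
print(T)

# Heat capacity at constant charge C_Q = T / (d^2 M / dS^2);
# its denominator should vanish on the Davies curve S = 3 pi Q^2
C_Q = sp.simplify(T / sp.diff(M, S, 2))
print(sp.solve(sp.denom(sp.together(C_Q)), S))  # expect [3*pi*Q**2]
```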
The curve $\gamma_{\rm Davies}$ is an example of what, in the literature on phase transitions, is often referred to as a spinodal curve, and is usually defined in terms of the vanishing of a diagonal element of the Hessian of the Gibbs function. In the present case, the Gibbs function is

$$G = \frac{(1 - \Phi^2)^2}{16\pi\, T}\,, \qquad (3.11)$$

and the Hessian is given by

$$\begin{pmatrix} \dfrac{(1-\Phi^2)^2}{8\pi T^3} & \dfrac{\Phi\,(1-\Phi^2)}{4\pi T^2} \\[2mm] \dfrac{\Phi\,(1-\Phi^2)}{4\pi T^2} & -\dfrac{1-3\Phi^2}{4\pi T} \end{pmatrix}\,.$$

The spinodal curve is thus given by $\Phi = \pm 1/\sqrt{3}$, which, in terms of S and Q, coincides with (3.9). The Weinhold metric may be written as

$$ds^2_W = \sqrt{\frac{\pi}{S}}\left[\frac{(3\pi Q^2 - S)}{8\pi S^2}\, dS^2 - \frac{Q}{S}\, dS\, dQ + dQ^2\right]\,,$$

and hence the Gibbs surface for sub-extremal black holes has a Hessian, or equivalently a Weinhold metric, that is non-singular but Lorentzian. Moreover the Gibbs surface for non-extreme black holes is non-convex. Expressed in terms of S and the electrostatic potential

$$\Phi = Q\,\sqrt{\frac{\pi}{S}}\,, \qquad (3.14)$$

the Weinhold metric becomes

$$ds^2_W = \frac{(\Phi^2 - 1)}{8\sqrt{\pi}\, S^{3/2}}\, dS^2 + \sqrt{\frac{S}{\pi}}\, d\Phi^2\,.$$

Note that the metric is non-singular when either $\Phi^2 < 1$, corresponding to the outer horizon, or $\Phi^2 > 1$, corresponding to the inner horizon. It changes signature from (−+) to (++) as Φ goes from $\Phi^2 < 1$ to $\Phi^2 > 1$. The heat capacity passes through infinity at $\Phi^2 = 1/3$.

Expressed in terms of Φ and S, the temperature is given by $T = \frac{1 - \Phi^2}{4\sqrt{\pi S}}$, and so the Ruppeiner metric is given by

$$ds^2_R = -\frac{1}{T}\, ds^2_W = \frac{dS^2}{2S} - 4S\, d\theta^2\,, \qquad (3.16)$$

where we have defined, for the outer horizon, $\theta = \arcsin\Phi$. The metric (3.16) is the Milne metric on a wedge of Minkowski spacetime inside the light cone. This is made apparent by introducing new coordinates according to $t = \sqrt{2S}$ and $\chi = \sqrt{2}\,\theta$, in terms of which the Ruppeiner metric becomes

$$ds^2_R = dt^2 - t^2\, d\chi^2\,;$$

the extremal solutions lie on the timelike geodesics $\chi = \pm \pi/\sqrt{2}$. The heat capacity changes sign at $\Phi^2 = 1/3$. If $\Phi^2 > 1$, corresponding to the inner horizon, then, if Q > 0, substituting $\Phi = \cosh\psi$ one finds

$$ds^2_R = dt^2 + t^2\, d\tilde\phi^2\,, \qquad \tilde\phi = \sqrt{2}\,\psi\,.$$

This is the flat metric on Euclidean space in polar coordinates, except that $\tilde\phi$ is not a periodic angular coordinate. The flatness of the Ruppeiner metric for Reissner-Nordström has given rise to much comment, because singularities of the Ruppeiner metric are expected to reveal the occurrence of phase transitions. However, the geometrical significance of the change in sign of the heat capacity is that for fixed charge Q, there is a maximum temperature. In fact

$$T_{\rm max} = \frac{\sqrt{3}}{18\pi\, |Q|}\,,$$

so for given |Q| and positive T less than $\frac{\sqrt{3}}{18\pi |Q|}$, there are two positive values of S and hence two non-extreme black holes. By contrast, since the electrostatic potential Φ satisfies (3.14), there is a unique positive value of S and hence a unique black hole for given Q and $\Phi^2 < 1$.

Every two-dimensional metric is conformally flat. Therefore it is not surprising that both the Weinhold and Ruppeiner metrics for Reissner-Nordström are conformally flat. It is, however, nontrivial that the Ruppeiner metric is flat. It has recently been pointed out [56] that one can also consider the Hessian of the charge Q, considered as a function of the mass and entropy, as a metric $ds^2_Q$. In fact $ds^2_Q = -\Phi^{-1}\, ds^2_W$, as in (2.25). Geometrically, there is no reason to give a preference to any of the metrics $ds^2_W$, $ds^2_R$ or $ds^2_Q$. Since T and Φ are both non-singular on the curve along which the heat capacity diverges, none of the three metrics is capable of detecting the associated "phase transition." As was shown in [51], and we reviewed in section 2.2, the thermodynamic metric (2.27) constructed from the free energy F = M − TS does exhibit a singularity on the Davies curve where the heat capacity diverges. For the Reissner-Nordström metric, (2.27) is the restriction of $ds^2_{(F)} = -dT\, dS + d\Phi\, dQ$ to the Gibbs surface, and hence we find

$$ds^2_{(F)} = \frac{(S - 3\pi Q^2)}{8\sqrt{\pi}\, S^{5/2}}\, dS^2 + \sqrt{\frac{\pi}{S}}\, dQ^2\,. \qquad (3.23)$$

A straightforward calculation shows that its Ricci scalar diverges on the Davies curve $S = 3\pi Q^2$.
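The divergence of the curvature of $ds^2_{(F)}$ on the Davies curve can likewise be verified symbolically. The sketch below (again an illustrative addition, not from the paper) computes the Ricci scalar of the diagonal two-dimensional metric (3.23) using the standard Gaussian-curvature formula for orthogonal coordinates; near $S = 3\pi Q^2$ the scalar behaves like $(S - 3\pi Q^2)^{-2}$.

```python
import sympy as sp

S, Q = sp.symbols('S Q', positive=True)

# Free-energy metric ds^2_(F) = A dS^2 + B dQ^2 from (3.23)
A = (S - 3*sp.pi*Q**2) / (8*sp.sqrt(sp.pi)*S**sp.Rational(5, 2))
B = sp.sqrt(sp.pi/S)

# Gaussian curvature K of an orthogonal 2D metric E dx^2 + G dy^2;
# the Ricci scalar is R = 2K
root = sp.sqrt(A*B)
K = -(sp.diff(sp.diff(B, S)/root, S) + sp.diff(sp.diff(A, Q)/root, Q)) / (2*root)
R = sp.simplify(2*K)

# Multiplying out the expected double pole should leave something finite
# on the Davies curve (inspect the printed expression at S = 3 pi Q^2)
print(sp.simplify(R * (S - 3*sp.pi*Q**2)**2))
```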
The Gibbs surface for Kerr

This is qualitatively very similar to the Reissner-Nordström case. To begin with, we shall summarise, in our notation, some results first presented by Curir [1]. One has

$$M^2 = \frac{S}{4\pi} + \frac{\pi J^2}{S}\,, \qquad (3.25)$$

and M(S, J) at fixed J has a minimum value when

$$S = 2\pi\, |J|\,.$$

This is the extreme case and, as before, the inner horizon has a negative temperature, a point made first by Curir [1]. Explicitly one has

$$T_\pm = \frac{1}{8\pi M}\left(1 - \frac{4\pi^2 J^2}{S_\pm^2}\right)\,. \qquad (3.27)$$

By (3.27), the outer horizon has a positive temperature, which we label $T_+$, and the inner horizon has a negative temperature, which we label $T_-$. One has [1]

$$T_\pm = \pm\,\frac{\sqrt{M^4 - J^2}}{2 M S_\pm}\,, \qquad \Omega_\pm = \frac{\pi J}{M S_\pm}\,, \qquad (3.29)$$

where $\Omega_\pm = (\partial M/\partial J)_{S_\pm}$. Note that it follows from the first equation in (3.29) that

$$T_+\, S_+ + T_-\, S_- = 0\,. \qquad (3.30)$$

Note also that M and J, which are conserved quantities defined in terms of integrals at infinity, are universal and do not carry ± labels. In terms of $S_+$ and $S_-$, one has, from (3.25),

$$M^2 = \frac{S_+ + S_-}{4\pi}\,, \qquad J = \frac{\sqrt{S_+\, S_-}}{2\pi}\,,$$

and the first law may be written in the form

$$dM = \tfrac{1}{2}\,(T_+\, dS_+ + T_-\, dS_-) + \tfrac{1}{2}\,(\Omega_+ + \Omega_-)\, dJ\,.$$

There is also a modified Smarr formula

$$M = T_+ S_+ + T_- S_- + (\Omega_+ + \Omega_-)\, J = (\Omega_+ + \Omega_-)\, J\,,$$

where the second equality follows from (3.30). This way of writing the first law of thermodynamics was employed in [57] for deriving a simple formula for holographic complexity.

These results were interpreted in [1] as indicating that the total energy of a rotating black hole may be regarded as receiving contributions from two thermodynamic systems; one associated with the outer horizon and the other with the inner horizon. The negative temperature was interpreted in terms of Ramsey's account of the thermodynamics of isolated spin systems [58]. Okamoto and Kaburaki [10] introduced the dimensionless parameter $h = \frac{a}{M + \sqrt{M^2 - a^2}}$ in their discussion of the energetics of Kerr black holes and noticed that it satisfies the quadratic equation

$$h^2 - \frac{2M}{a}\, h + 1 = 0\,. \qquad (3.35)$$

It was initially assumed that only the solution of (3.35) satisfying 0 ≤ h ≤ 1 has physical significance. However Abramowicz [59] drew their attention to [1,2], and they realised that the other root of (3.35), which satisfies 1 ≤ h ≤ ∞ and is given by $h = \frac{a}{M - \sqrt{M^2 - a^2}}$, is associated with the inner horizon [10]. Expressing the thermodynamic variables in terms of h they established (3.30), with $T_-$ taken to be negative, and they also obtained further relations of this kind.

Kerr-Newman black holes

Kerr-Newman black holes may have both electric and magnetic charges. By electric-magnetic duality invariance one may set the magnetic charge P to zero. To restore electric-magnetic duality invariance it suffices to replace $Q^2$ by $Q^2 + P^2$ in all formulae, thus producing a manifestly O(2)-invariant Gibbs surface. The mass of the Kerr-Newman black hole is given by

$$M^2 = \frac{S}{4\pi} + \frac{Q^2}{2} + \frac{\pi\,(Q^4 + 4J^2)}{4S}\,, \qquad (3.37)$$

and therefore it satisfies

$$M^2 \geq \frac{Q^2}{2} + \frac{1}{2}\,\sqrt{Q^4 + 4J^2}\,, \qquad (3.38)$$

acquiring its least value on the surface $\gamma_{\rm extreme}$ in the three-dimensional space of extensive variables given by

$$S = \pi\,\sqrt{Q^4 + 4J^2}\,,$$

on which the temperature vanishes. If J = 0, then (3.38) is the usual Bogomolnyi bound [60]. One also has

$$T = \frac{1}{8\pi M}\left(1 - \frac{\pi^2\,(Q^4 + 4J^2)}{S^2}\right)\,, \qquad (3.40)$$

$$\Omega = \frac{\pi J}{M S}\,, \qquad \Phi = \frac{Q}{2M}\left(1 + \frac{\pi Q^2}{S}\right)\,. \qquad (3.41)$$

The explicit formulae (3.37), (3.40) and (3.41) allow a lift of the Gibbs surface G to a Lagrangian submanifold L in $\mathbb{R}^6$ and a Legendrian submanifold in $\mathbb{R}^7$. The entropy product law becomes

$$S_\pm = \pi\left(2M^2 - Q^2 \pm 2\sqrt{M^4 - M^2 Q^2 - J^2}\right)\,, \qquad S_+\, S_- = \pi^2\,(Q^4 + 4J^2)\,,$$

where the − refers to the inner and + to the outer horizon. The temperatures and angular velocities of the two horizons are given by

$$T_\pm = \pm\,\frac{\sqrt{M^4 - M^2 Q^2 - J^2}}{2 M S_\pm}\,, \qquad \Omega_\pm = \frac{\pi J}{M S_\pm}\,,$$

and one has

$$T_+\, S_+ + T_-\, S_- = 0\,.$$

There is a conventional first law for both horizons:

$$dM = T_\pm\, dS_\pm + \Omega_\pm\, dJ + \Phi_\pm\, dQ\,,$$

and a modified Smarr formula

$$M = (\Omega_+ + \Omega_-)\, J + \tfrac{1}{2}\,(\Phi_+ + \Phi_-)\, Q\,.$$

STU black holes

Four-dimensional black holes in string theory or M-theory can be described as solutions of N = 8 supergravity. The most general black holes are supported by just four of the 28 gauge fields, in the Cartan subalgebra of SO(8).
The black holes can therefore be described just within the N = 2 STU supergravity theory, which is a consistent truncation of the N = 8 theory whose bosonic sector comprises the metric, the four gauge fields, and six scalar fields. Black holes of the STU model are parameterised by mass M, angular momentum J, four electric charges $Q_i$ (i = 1, 2, 3, 4) and four magnetic charges $P_i$ (i = 1, 2, 3, 4). We shall follow the usual conventions for STU supergravity, in which the normalisation of the gauge fields $F^{(i)}$ is such that if the scalar fields are turned off, the Lagrangian will contain the gauge-field kinetic terms $-\frac{1}{4}\sqrt{-g}\,\sum_i F^{(i)}_{\mu\nu} F^{(i)\,\mu\nu}$ (see, for example, [64] for a presentation of the bosonic sector of the STU supergravity Lagrangian). This contrasts with the conventional $-\frac{1}{16\pi}\sqrt{-g}\, F_{\mu\nu} F^{\mu\nu}$, in Gaussian units, which we use when describing the pure Einstein-Maxwell theory. Since this means that the charge normalisation conventions will be different in the two cases, we shall briefly summarise our definitions here. If we consider the Einstein-Maxwell Lagrangian

$$\mathcal{L} = \frac{\sqrt{-g}}{16\pi}\left(R - F_{\mu\nu} F^{\mu\nu}\right)\,,$$

one can derive, by considering variations of the associated Hamiltonian, that black holes will obey the first law

$$dM = \frac{\kappa}{8\pi}\, dA + \Phi\, dQ\,,$$

where κ is the surface gravity, Φ is the potential difference between the horizon and infinity (with the potential being equal to $\xi^\mu A_\mu$, where $\xi^\mu$ is the future-directed Killing vector that is null on the horizon and is normalised such that $\xi^\mu \xi_\mu \to -1$ at infinity). The electric charge Q is here given by

$$Q = \frac{1}{4\pi}\oint {*F}\,,$$

while in STU supergravity we shall have (neglecting the scalar fields for simplicity 4)

$$Q_i = \frac{1}{4\pi}\oint {*F^{(i)}}\,.$$

The black hole solutions have two horizons, with the product of the horizon entropies quantised:

$$S_+\, S_- = 4\pi^2\, \big|J^2 + \Delta\big|\,, \qquad (3.52)$$

where Δ is the Cayley hyperdeterminant $\Delta(Q_i, P_i)$, a quartic invariant of the charges. Note that eqn (3.52) has previously appeared in the literature without the absolute value symbol (for example, in [61]). We have written (3.52) with an absolute value sign since Δ, and hence $\Delta + J^2$, can be negative; for example for a static Kaluza-Klein dyonic black hole. (In [61] it was proposed that $S_-$ is negative when $\Delta + J^2 < 0$, but this would contradict the fact that, for example, the area of the inner horizon of the static Kaluza-Klein dyonic black hole is positive.) It should be noted that if J vanishes and Δ = 0, then $S_-$ will vanish also. In this case there is no non-singular inner horizon.

The entropy formulae (3.52) can be cast in the form

$$S_+ = S_L + S_R\,, \qquad S_- = |S_L - S_R|\,, \qquad (3.54)$$

with

$$S_L = 2\pi\,\sqrt{F + \Delta}\,, \qquad S_R = 2\pi\,\sqrt{F - J^2}\,, \qquad (3.55)$$

where F is another complicated expression that is a function of M, $Q_i$ and $P_i$ only [61]. Note that it follows from (3.54) that $S_+ \geq S_-$. Unlike [61], we have put an absolute value sign around $(S_L - S_R)$ in the expression for $S_-$, since, for the reasons discussed above, there can be circumstances where $S_L < S_R$, but $S_-$ should be non-negative. Note that F + Δ is always non-negative, and $F - J^2$ is non-negative provided that the black hole is not over-rotating [61].

4 In general, including the scalar fields, and writing the Lagrangian as a 4-form L, the electric charges can be written as $Q_i = \frac{1}{4\pi}\oint \frac{\delta L}{\delta F^{(i)}}$. (Here the variational derivative is defined by δX = (δX/δF) ∧ δF. For example, if X = u *F ∧ F + v F ∧ F then δX/δF = 2u *F + 2v F.) The magnetic charges are given by $P_i = \frac{1}{4\pi}\oint F^{(i)}$.

The quantities $S_L$ and $S_R$ are both non-negative. In the extremal limit $F - J^2 = 0$, one gets the extremal value for the entropy $S_+ = S_- = 2\pi\sqrt{|J^2 + \Delta|}$. This was seen for the BPS solutions (F = 0 and $J^2 = 0$) in [13]. Note from (3.55) that while the right-moving entropy $S_R$ is a function of all the extensive variables $(M, Q_i, P_i, J)$, the left-moving entropy $S_L$ is a function of $(M, Q_i, P_i)$ but not J [61]. This was noted previously in the special case of the four-charge black holes characterised by $(M, Q_i, J)$ in [11,62]. The expressions (3.55) may in principle be inverted to give two different Christodoulou-Ruffini formulae, one for M as a function of $(S_+, J, Q_i, P_i)$ and one for M as a function of $(S_-, J, Q_i, P_i)$. The structure (3.55) ensures that the two entropies $S_+$ and $S_-$ are solutions of the quadratic equation

$$S^2 - \Sigma\, S + 4\pi^2\,|J^2 + \Delta| = 0\,, \qquad (3.57)$$

where $\Sigma = S_L + S_R + |S_L - S_R|$, and we employed (3.54), (3.55) and (3.52). Note that, differentiating (3.57) at fixed charges and angular momentum, the resulting expression for ∂M/∂S is proportional to

$$2S - \Sigma = \pm\,(S_+ - S_-)\,. \qquad (3.58)$$

Since $S_+ \geq S_-$, the final expression in (3.58) is non-negative for $S = S_+$, and non-positive for $S = S_-$. Since the proportionality factor is independent of whether one takes $S = S_+$ or $S = S_-$, it then follows that

$$T_+\, S_+ + T_-\, S_- = 0\,. \qquad (3.59)$$

In particular, this implies that $T_+$ and $T_-$ must have opposite signs. As well as considering the left-moving and right-moving entropies $S_L$ and $S_R$, one can also introduce left-moving and right-moving temperatures $T_L$ and $T_R$, defined by [15]

$$\frac{1}{T_{L,R}} = \frac{1}{T_+} \pm \frac{1}{T_-}\,. \qquad (3.60)$$

These definitions are motivated by the fact that when one calculates scattering amplitudes for test fields propagating in the black-hole background, one finds that they factorise into the product of thermal Boltzmann factors for the temperatures $T_L$ and $T_R$ respectively [15]. Using (3.59), together with the expressions for $S_+$ and $S_-$ in terms of $S_L$ and $S_R$ in (3.54), it follows from (3.60) that

$$T_L = \frac{T_+\, S_+}{2 S_R}\,, \qquad T_R = \frac{T_+\, S_+}{2 S_L} \qquad (3.61)$$

when $S_L \geq S_R$, with $S_L$ and $S_R$ exchanged in the two denominators when $S_L \leq S_R$. From its definition, $T_R$ is obviously non-negative since $T_+ \geq 0$ and $T_- \leq 0$. It is then evident from (3.61) that $T_L$ is non-negative also, since we already know that $S_L$ and $S_R$ are non-negative. We can also derive, using either (3.57) or else simply writing $S_+$ and $S_-$ in terms of $S_L$ and $|J^2 + \Delta|$ by using (3.52), that in the two cases $S_L \geq S_R$ and $S_L \leq S_R$ we have the expressions (3.63) for the angular velocities; in the first case

$$\frac{\Omega_+}{T_+} = -\,\frac{\Omega_-}{T_-} = \frac{4\pi^2 J}{S_R} \qquad (S_L \geq S_R)\,, \qquad (3.63)$$

with an analogous expression, with $S_R$ replaced by $S_L$, when $S_L \leq S_R$. Note that when $S_L < S_R$, i.e. when $J^2 + \Delta < 0$, the angular velocities of the inner and outer horizons are opposite. Note also that the two cases in (3.63) can be expressed in a single universal formula.
This was noted previously in the special case of the four-charge black holes characterised by (M, Q_i, J) in [11,62]. The expressions (3.55) may in principle be inverted to give two different Christodoulou-Ruffini formulae. The structure (3.55) ensures that the two entropies S_+ and S_− are solutions of the quadratic equation

S² − Σ S + 4π² |J² + ∆| = 0 , (3.57)

where Σ = S_L + S_R + |S_L − S_R|, and we employed (3.54), (3.55) and (3.52). Since S_+ ≥ S_−, the final expression in (3.58) is non-negative for S = S_+ and non-positive for S = S_−. Since the quadratic (3.57) is independent of whether one takes S = S_+ or S = S_−, it then follows that

S_+ T_+ + S_− T_− = 0 . (3.59)

In particular, this implies that T_+ and T_− must have opposite signs. As well as considering the left-moving and right-moving entropies S_L and S_R, one can also introduce left-moving and right-moving temperatures T_L and T_R, defined by [15]

1/T_L = 1/T_+ + 1/T_− , 1/T_R = 1/T_+ − 1/T_− . (3.60)

These definitions are motivated by the fact that when one calculates scattering amplitudes for test fields propagating in the black-hole background, one finds that they factorise into the product of thermal Boltzmann factors for the temperatures T_L and T_R respectively [15]. Using (3.59), together with the expressions for S_+ and S_− in terms of S_L and S_R in (3.54), it follows from (3.60) that

T_L S_R = T_R S_L (S_L ≥ S_R) , T_L S_L = T_R S_R (S_L < S_R) , (3.61)

for the two cases that we described previously. From its definition, T_R is obviously non-negative, since T_+ ≥ 0 and T_− ≤ 0. It is then evident from (3.61) that T_L is non-negative also, since we already know that S_L and S_R are non-negative. We can also derive, from (3.59) and (3.60), and using either (3.57) or else simply writing S_+ and S_− in terms of S_L and |J² + ∆| by using (3.52), the relations (3.63) for the angular velocities in the two cases S_L ≥ S_R and S_L ≤ S_R. Note that when S_L < S_R, i.e. when J² + ∆ < 0, the angular velocities of the inner and outer horizons are opposite. Note also that the two cases in (3.63) can be expressed in the single universal formula (3.64). For now, we shall focus for simplicity on the regime where S_L ≥ S_R, i.e. (J² + ∆) ≥ 0. Let us first consider processes where dJ = 0 and dQ_i = dP_i = 0. From the definitions of T_L, T_R, S_L and S_R given in (3.54) and (3.60), it is straightforward to see from the first laws dM = T_± dS_± on the outer and inner horizons that we must have

½ dM = T_L dS_L , ½ dM = T_R dS_R . (3.65)

In other words, the left-moving and right-moving sectors contribute equally to the mass of the black hole. (This was observed in the case of Kerr-Newman black holes in [23,24].) Dividing the first laws (3.66) by T_± respectively and then taking the plus and minus combinations, one finds that these match with (3.65) provided that we define the left-moving and right-moving quantities as in (3.67), and so we have the first laws for the left-moving and right-moving sectors. In a similar fashion, we can then see that the Smarr relations on the outer and inner horizons imply the Smarr relations for the left-moving and right-moving sectors. It should be noted that, from (3.63) and (3.68), the left-moving angular velocity is in fact zero: Ω_L = 0. If we now turn to the regime where S_L < S_R, we find that the roles of S_L and S_R are exchanged in both the first laws and the Smarr relations for the left-moving and right-moving sectors.

Four-charge STU black holes

The prospects for obtaining an explicit Christodoulou-Ruffini formula for the general 8-charge black hole solutions are not good. The main problem is the F-invariant that appears in the expressions for S_L and S_R in eqn (3.55), whose evaluation in terms of physical charges and mass appears to be quite intractable [63].
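The left/right-moving relations (3.59)-(3.61) and the equipartition (3.65) can be checked concretely in the simplest instance, the Kerr-Newman black hole (for which S_L ≥ S_R always holds). The sketch below is our own illustration; the finite-difference step is an arbitrary choice:

    from math import pi, sqrt

    # Sketch (ours): left/right-moving temperatures for Kerr-Newman, where S_L >= S_R.
    def horizons(M, Q, J):
        a = J / M
        d = sqrt(M**2 - a**2 - Q**2)
        S = [pi * ((M + s*d)**2 + a**2) for s in (+1, -1)]                 # S_+, S_-
        T = [s * d / (2 * pi * ((M + s*d)**2 + a**2)) for s in (+1, -1)]   # T_+ > 0, T_- < 0
        return S, T

    M, Q, J = 1.0, 0.3, 0.5
    (Sp, Sm), (Tp, Tm) = horizons(M, Q, J)
    SL, SR = 0.5*(Sp + Sm), 0.5*(Sp - Sm)
    TL = 1.0 / (1.0/Tp + 1.0/Tm)
    TR = 1.0 / (1.0/Tp - 1.0/Tm)
    assert TL > 0 and TR > 0
    assert abs(TL * SR - TR * SL) < 1e-12      # eq. (3.61), case S_L >= S_R

    # Half the mass variation goes to each sector: 0.5 dM = T_L dS_L = T_R dS_R
    eps = 1e-6
    (Sp2, Sm2), _ = horizons(M + eps, Q, J)    # vary M at fixed J and Q
    dSL, dSR = 0.5*(Sp2 + Sm2) - SL, 0.5*(Sp2 - Sm2) - SR
    assert abs(TL * dSL / eps - 0.5) < 1e-4
    assert abs(TR * dSR / eps - 0.5) < 1e-4
    print("left/right-moving relations verified")

The same bookkeeping applies verbatim to the four-charge black holes discussed next.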
In order to obtain more explicit, concrete expressions, we shall now focus on the specialisation to black-hole solutions carrying just four electric charges, which were found in [11]. These black holes are parameterised in terms of the non-extremality parameter m ≥ 0 (the Kerr mass parameter), the "bare" angular momentum a (the Kerr rotation parameter) and four boost parameters δ_i ≥ 0 (i = 1, 2, 3, 4) [11] (see also [64] for compact expressions for the metric and the other fields). In terms of these, the physical mass, charges and angular momentum are given by

M = ¼ m Σ_i cosh 2δ_i , Q_i = ¼ m sinh 2δ_i , J = m a (Π_i c_i − Π_i s_i) .

The black hole entropies, associated with the inner and the outer horizon, are given by [11,16]:

S_± = 2π m [ m (Π_i c_i + Π_i s_i) ± √(m² − a²) (Π_i c_i − Π_i s_i) ] ,

where s_i = sinh δ_i and c_i = cosh δ_i. The temperatures T_±, related to the surface gravities κ_± by T_± = κ_±/(2π), and the angular velocities Ω_±, associated with the inner and outer horizons respectively, are given in [16]. Note that T_− is negative.⁶ It can easily be verified that the entropies S_±, temperatures T_± and angular velocities Ω_± satisfy equation (3.59) and the S_L ≥ S_R equations in (3.63). The entropies and the inverses of the surface gravities, associated with the outer and inner horizons, have a suggestive form in terms of the left-moving and right-moving entropy and inverse-temperature excitations of a weakly coupled two-dimensional conformal field theory (2D CFT), given in [16]. (Note that these solutions with four electric charges have ∆ ≥ 0, as can be seen from (3.53), and so they have S_L ≥ S_R, as is evident from (3.82).) In this suggestive form the central charges C_{L,R} of the left-moving and right-moving sectors of the 2D CFT, related to S_{L,R} and T_{L,R} via the Cardy relations S_L = (π²/3) C_L T_L and S_R = (π²/3) C_R T_R respectively, turn out to be the same, C_L = C_R. Again the product of the outer and inner horizon entropies is quantised in terms of J and Q_i (i = 1, 2, 3, 4) only [18]:

S_+ S_− = 4π² ( J² + 64 Q₁Q₂Q₃Q₄ ) , (3.85)

which agrees with the result for the Kerr-Newman black hole after equating Q_i = ¼ Q. The main challenge here is to obtain the formulae M = M(S, J, Q_i) and S = S(M, J, Q_i). As an initial step, we observe that the solutions for S_±, due to relation (3.82), satisfy a quadratic equation, (3.88). From (3.88) we obtain (3.90). Furthermore, employing (3.89) and (3.90) we obtain (3.92), which leads to the explicit expression (3.93) for the temperature and the corresponding expression for the angular velocity. The technical difficulty in obtaining an explicit Christodoulou-Ruffini mass expression is due to the fact that an explicit expression for S_L in terms of M and Q_i is, in general, cumbersome. However, we have succeeded in the following special cases.

Pairwise-equal charges

The four-charge black-hole solutions simplify considerably in the special case of pairwise-equal charges (see, for example, [64]), Q₁ = Q₃ and Q₂ = Q₄, where (3.88) can be solved explicitly for M, giving (3.95). Furthermore, (3.95) and (3.86) imply (3.96). For Q₂ = 0 the result reduces to the example of the rotating dilatonic black hole with dilaton coupling a = 1.⁷ The result reduces to the Kerr-Newman (or Reissner-Nordström) black hole expression when Q₁ = Q₂ = ¼ Q. It is then straightforward to check that differentiating (3.95) with respect to S_± (with J and Q_{1,2} fixed) produces the expected expressions for T_±, including the sign.

Three equal non-zero charges

It turns out that for the example of three equal non-zero charges, i.e.
Q₁ = Q₂ = Q₃ = q and Q₄ = 0, which corresponds to the rotating dilatonic black hole with dilaton coupling a = 1/√3, one can again obtain an explicit expression for the Christodoulou-Ruffini mass, (3.97). (As in the pairwise-equal charge case above, here too an axion is also turned on if the black hole is rotating.)

⁷ Note, however, that when the black hole is rotating, an axion in the STU supergravity is also turned on when Q₁ and/or Q₂ is non-zero (except in the case Q₁ = Q₂).

One non-zero charge

We also note that in the case of only one non-zero charge (say, Q₁ = q = ¼ m sinh 2δ), which corresponds to the rotating dilatonic black hole with dilaton coupling a = √3, the Christodoulou-Ruffini mass can be expressed in a form in which cosh δ is a solution of a cubic equation.

Dyonic Kaluza-Klein black hole

In all the explicit STU supergravity black holes we have discussed so far, each of the four field strengths carries a charge of a single complexion (which could be pure electric or pure magnetic). The most general possibility is where each field strength carries independent electric and magnetic charges, as described in the general 8-charge case that was constructed by Chow and Compère. Although explicit, these general solutions are rather unwieldy. Here, we discuss a much simpler case, which is still rather non-trivial, and which goes beyond what we have explicitly presented so far. We consider the case where just one of the four field strengths is non-vanishing, but it carries independent electric and magnetic charges. For simplicity we shall restrict attention to the case of static black holes. The Lagrangian is taken in the normalisation we are using for the STU supergravities,⁸ and a convenient way [65] to present the static dyonic black hole solutions is in terms of constants m, β₁ and β₂ that parameterise the physical mass M, electric charge Q and magnetic charge P. A necessary condition for regularity of the black hole is 0 ≤ β_i ≤ 1. The entropy of the outer horizon, located at r = 2µ, and the entropy of the inner horizon, located at r = 0, multiply to give the product

S_+ S_− = 64π² P² Q² .

Note that S_− vanishes if Q or P vanishes. Note also that the dyonic black hole is an example where the invariant ∆, defined in (3.53), is negative. Of course, since the solutions we are considering here are static, (J² + ∆) is negative too, and so we are in the regime where S_L < S_R for these black holes. One can straightforwardly calculate the temperatures on the outer and inner horizons, finding as usual that the temperature T_+ is positive and T_− is negative. The left-moving and right-moving temperatures, defined by (3.60), then turn out both to be non-negative. A special case is when the black hole is extremal, which is achieved in this parameterisation by taking a limit in which m goes to zero and the β_i go to 1. The result is that in the extremal case M_ext is determined by Q and P alone.

Five-dimensional STU supergravity

Here, we consider black hole solutions in five-dimensional STU supergravity. General solutions with mass M, two angular momenta J_φ and J_ψ, and three charges Q_i were constructed in [12] by employing solution generating techniques.
We use principally the conventions of [15], except that we shall use the labels ↑ and ↓ to denote the sum and difference combinations of the angular momenta and angular velocities associated with the φ and ψ azimuthal coordinates, reserving L and R to denote the combinations of inner and outer horizon quantities, analogous to the definitions used previously for the four-dimensional STU black holes. The physical mass, charges and angular momenta are given in [15]. Here the five-dimensional Newton constant is taken to be G₅ = π/4. We shall, without loss of generality, take the rotation parameters l₁ and l₂ and the charge boost parameters δ_i to be non-negative in what follows. These black holes have many analogous properties to those of the four-dimensional STU black holes, except, of course, that they can carry only electric charges but not magnetic. In particular, they have two horizons, with the inner and outer horizon entropies expressed as in (3.112) [15]. The product of the inner and outer horizon entropies is again quantised, as in (3.115). Note that, as in the case of the four-dimensional STU black holes, here it would in general be necessary to use an absolute value in the expression for S_− in (3.112), and on the right-hand side of (3.115), since S_− must be non-negative while S_L and S_R, which are both non-negative, could obey either S_L > S_R or S_L < S_R depending on the relative values of the charge and angular momentum parameters. However, our non-negativity assumptions stated above for the charge and rotation parameters imply that in fact S_L ≥ S_R in this case, and so we can omit the absolute value in the expression for S_−, as we have done in (3.112) and in (3.115). From the above expressions it follows that S (either S_+ or S_−) again obeys a quadratic equation, (3.116). Furthermore, one can analogously derive the general result that T_+ and T_− have opposite signs, with the corresponding relations (3.118) for Ω^↑ and Ω^↓, where Ω^↑_± = ½(Ω^φ_± + Ω^ψ_±) and Ω^↓_± = ½(Ω^φ_± − Ω^ψ_±). (The relative signs between the terms in these two equations in (3.118) are the opposite of those given in [15], because in that paper κ_− was taken to be positive.) The black holes obey the usual first laws on the outer and inner horizons. As in the four-dimensional case, the calculation of scattering amplitudes in the black-hole background shows that they factorise into left and right sectors with Boltzmann factors corresponding to the temperatures T_L and T_R given by (3.60) [15]. Together with the normalisation of S_L and S_R such that S_+ = S_L + S_R, in accordance with the interpretation of the entropy as the sum of left-moving and right-moving contributions, one can then establish, by rewriting the first laws dM = T_± dS_± + · · · in terms of left- and right-moving quantities, that ½ dM = T_L dS_L + · · · and ½ dM = T_R dS_R + · · ·, and so each of the sectors contributes one half of the total mass of the black hole. Matching the first laws for arbitrary variations of the parameters then allows one to read off the appropriate definitions of the left-moving and right-moving angular momenta and electric potentials. One thus finds the first laws for the two sectors. In view of the relations (3.118), one finds that the angular momentum J^↑ and the associated angular velocity Ω^↑ enter only in the right-moving first law and in S_R, while the angular momentum J^↓ and the associated angular velocity Ω^↓ enter only in the left-moving first law and in S_L. Note that, as in four dimensions, T_L and T_R are both non-negative.
The Smarr formulae for the left-moving and right-moving sectors agree with the ones derived in [15]. The expressions for the Christodoulou-Ruffini formula in terms solely of the conserved charges, angular momenta, mass and entropy are too cumbersome to present explicitly. Even in the case of three equal charges, the mass is determined by a cubic equation.

Einstein-Maxwell-Dilaton black holes

There exists a more general class of black holes in the theory of Einstein-Maxwell gravity with an additional dilatonic scalar field, which is coupled to the Maxwell field with a dimensionless coupling constant a, with the Lagrangian

L = √−g ( R − 2(∂φ)² − e^{−2aφ} F² ) .

The electrically-charged black-hole solution can be written as in [66-68]. The relevant thermodynamic quantities for these black holes are expressed in terms of r_+, the radius of the outer horizon, and r_−, which is a singular surface unless a = 0. Since by assumption r_+ ≥ r_−, it follows that

M ≥ |Q| / √(1 + a²) .

This is consistent with the BPS bound derived in [69] using "fake supersymmetry." The Smarr relations continue to hold, and the Gibbs free energy is again given by G = M − TS − ΦQ. The coordinates {r_+, r_−} are related to the coordinates {T, Φ}, so that the Gibbs energy may be expressed as a function of {T, Φ}. As discussed in section 2.2, the Ricci scalar of the Helmholtz free energy metric ds²(F) = −dS dT + dΦ dQ will be singular on the Davies curve where the heat capacity at constant charge changes sign. It is easiest to use r_+ and r_− as the coordinate variables in this calculation, which gives (3.133). Thus the Davies curve is given by (3.135). Since we must have r_− < r_+, a solution of (3.134) exists only for a² < 1. The spinodal curve thus projects down to a parabola in the S-Q plane. From (3.127), one can in general solve for r_+ and r_− in terms of M and Q. If a² > 0 the entropy vanishes at extremality, namely r_+ = r_−, and hence |Q| = √(1 + a²) M. Then r = r_+ = r_− is a point-like singularity and there is no inner horizon. One can also, in general, express the entropy in terms of r_+ and Q, using (3.139). Particular cases include the following, which also arise as special cases of STU black holes:

• a = 0 is the Reissner-Nordström case.
• a² = 1/3 is a reduction of Einstein-Maxwell in 5 dimensions.
• a² = 1 is the so-called string case. We have S = 4πM² − 2πQ², i.e. M² = S/(4π) + Q²/2. The spinodal curve coincides with the Q-axis and the Gibbs surface is nowhere convex. It is a hyperbolic paraboloid for which the Ruppeiner metric is flat [70]. The temperature is given by T = 1/(8πM) and is always positive; it goes to a non-vanishing value at extremality. The heat capacity at constant charge is given by C_Q = −8πM², is always negative, and is also non-vanishing at extremality [67].

Two-field dilatonic black holes

Here we review a class of theories [71] which are similar to the Einstein-Maxwell-Dilaton (EMD) theory of the previous subsection, but with two field strengths rather than just one. The advantage of considering this extension of EMD theory is that, by choosing the coupling constants a₁ and a₂ appropriately, one can find general classes of static black hole solutions with two horizons, and one can study the thermodynamic properties at both the outer and the inner horizon. If we turn on both the gauge fields A_i independently, the theory for general (a₁, a₂) does not admit explicit black hole solutions. We shall determine the condition on (a₁, a₂) so that the system will give such explicit solutions.
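Returning briefly to the a² = 1 string case above: the relations S = 4πM² − 2πQ², T = 1/(8πM) and C_Q = −8πM² quoted there can be verified numerically. The sketch below is our own and assumes the standard parameterisation r_+ = 2M, r_− = Q²/M of that solution:

    from math import pi

    # Sketch (ours) of the a^2 = 1 (string) EMD case, assuming the standard
    # parameterisation r_+ = 2M, r_- = Q^2/M, S = pi r_+ (r_+ - r_-).
    def entropy(M, Q):
        return 4*pi*M**2 - 2*pi*Q**2      # = pi r_+ (r_+ - r_-) with r_+ = 2M

    M, Q, eps = 1.0, 0.8, 1e-6
    # Temperature from T = (dM/dS)_Q via a finite difference:
    T = eps / (entropy(M + eps, Q) - entropy(M, Q))
    assert abs(T - 1/(8*pi*M)) < 1e-6              # T = 1/(8 pi M), charge-independent
    # Heat capacity C_Q = (dM/dT)_Q; since T depends only on M:
    dT = 1/(8*pi*(M + eps)) - 1/(8*pi*M)
    C = eps / dT
    assert C < 0 and abs(C + 8*pi*M**2) < 1e-3     # C_Q = -8 pi M^2 < 0
    # At extremality Q^2 = 2 M^2 the entropy vanishes but T stays finite:
    assert abs(entropy(M, 2**0.5 * M)) < 1e-12
    print("string-case EMD relations verified")

We now return to the two-field theory.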
It is advantageous for later purposes to reparameterize these dilaton coupling constants as

a_i² = 4/N_i − 2(D−3)/(D−2) . (3.144)

(Note that N₁ and N₂ are not necessarily integers.) For the a_i to be real, we must have N_i ≤ 2(D−2)/(D−3). Here we shall consider the case where a₁ and a₂ obey the constraint

a₁ a₂ = −2(D−3)/(D−2) , (3.146)

which implies the identity

N₁ + N₂ = 2(D−2)/(D−3) . (3.147)

With a₁ and a₂ obeying (3.146), one can find black hole solutions, given in [71], where we use the standard notation s_i = sinh δ_i and c_i = cosh δ_i. The mass and charges are expressed in terms of m and the δ_i, and involve ω_{D−2}, the volume of the unit (D−2)-sphere. The outer horizon is located at r = µ^{1/(D−3)}, and the inner horizon is located at r = 0. Multiplying the two entropies gives the product formula (3.152); the entropy product is independent of the mass. There exists an extremal limit in which we send µ → 0 while keeping the charges Q_i non-vanishing. In this limit, the inner and outer horizons coalesce and the near-horizon geometry becomes AdS₂ × S^{D−2}. The mass now depends only on the charges. Some specific examples are as follows. Case 1: D = 4, N₁ = N₂ = 2; Case 2: D = 4, N₁ = 1, N₂ = 3. These cases lie, in general, outside the realm of supergravity theories. Entropy super-additivity is difficult to prove in general, but we can at least look at the case of extremal black holes. It seems that super-additivity will then be satisfied if N₁ + N₂ ≥ 2, and in fact, from (3.147), we have N₁ + N₂ > 2 in all dimensions.

Entropy Product and Inversion Laws

It is well known from many examples that if a black hole has two horizons then the product of the areas, or equivalently entropies, of these horizons is equal to an expression written purely in terms of the conserved charges and angular momenta [18,20]. Thus we may write

S_+ S_− = K(Q, J) , (4.1)

where Q represents the complete set of charges carried by the black hole, and J represents the set of angular momenta. (Generalisations arise also if there are more than two horizons or "pseudo-horizons"; see, for example, [18].) One important consequence of the inversion symmetry of the Christodoulou-Ruffini relation M = M(S, Q, J) under

S → S′ = K(Q, J)/S (4.4)

is that the relation S_+ T_+ + S_− T_− = 0, seen, for example, for the STU black holes in (3.59), is true quite generally. Since the temperature is given by ∂M/∂S at fixed Q and J, we have

S T(S) + S′ T(S′) = 0 , (4.5)

where K = K(Q, J) is the numerator in the inversion formula (4.4). Taking S = S_+ we therefore have S′ = S_−, and so we find from (4.5) that S_+ T_+ + S_− T_− = 0 whenever there is an entropy-product rule of the form (4.1) and the related inversion symmetry under (4.4).

Asymptotically AdS and dS Black Holes

In this section we shall extend the previous discussion to the case of a non-vanishing cosmological constant. If the cosmological constant is negative, the situation is similar to the case when it vanishes. However, if the cosmological constant is positive a new feature arises, namely the occurrence of an additional "cosmological" horizon outside the black hole event horizon. Typically the surface gravity at the cosmological horizon is negative.

Kottler

Either we regard Λ as a fixed constant, or as an intensive variable which may be varied, in which case we obtain an analogy with a gas with positive pressure P = −Λ/(8π). In the first case we should think of the Abbott-Deser mass M as the total energy. In the second case, we should instead think of it as the total enthalpy [72,73]. In both cases we have M = ½ r_+ (1 − Λ r_+²/3) with S = π r_+², and in both cases T = ∂M/∂S = (1 − Λ r_+²)/(4π r_+), and the heat capacity at constant pressure is given by C_P = 2S (Λ r_+² − 1)/(Λ r_+² + 1). We now consider the two cases where Λ < 0 and Λ > 0.
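Before treating the two signs of Λ in turn, the following sketch (our own; the sample values of g² and Λ are arbitrary) illustrates both regimes numerically, using the expressions M = ½r(1 − Λr²/3) and S = πr² quoted above:

    from math import pi, sqrt

    # Sketch (ours): Kottler (Schwarzschild-(A)dS) thermodynamics, S = pi r^2.
    def T_of_r(r, Lam):
        # T = dM/dS = (dM/dr)/(2 pi r), with M = (r/2)(1 - Lam r^2 / 3)
        return (1 - Lam * r**2) / (4 * pi * r)

    # Lambda < 0: minimum temperature at r = 1/(sqrt(3) g), T_min = sqrt(3) g/(2 pi)
    g2 = 0.5                          # Lam = -3 g^2
    Lam = -3 * g2
    rs = [0.01 * k for k in range(1, 2000)]
    r_min = min(rs, key=lambda r: T_of_r(r, Lam))
    assert abs(r_min - 1/sqrt(3*g2)) < 0.01
    assert abs(T_of_r(r_min, Lam) - sqrt(3*g2)/(2*pi)) < 1e-4

    # Lambda > 0: the temperature vanishes as the horizons merge at r = 1/sqrt(Lam),
    # i.e. S = pi/Lam, where M attains its maximum 1/(3 sqrt(Lam)).
    Lam = 0.3
    r_n = 1/sqrt(Lam)
    assert abs(T_of_r(r_n, Lam)) < 1e-12
    M = lambda r: 0.5 * r * (1 - Lam * r**2 / 3)
    assert abs(M(r_n) - 1/(3*sqrt(Lam))) < 1e-12
    print("Kottler checks verified")

The two cases are now discussed in detail.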
Λ < 0

The temperature T is a positive, monotonically increasing function of the entropy S at fixed pressure P. The isobaric curve in the S-M plane has a point of inflection, at which the heat capacity changes sign, when S = π/(3g²) (writing Λ = −3g²); there the slope, and hence the temperature, has its minimum value, T_min = √3 g/(2π). It follows that for fixed negative Λ there are no black holes with temperatures less than T_min. For temperatures above T_min there are two black holes, one with a mass smaller than the critical mass M_cr and one with a mass larger than M_cr; this is the location where the heat capacity diverges. It is connected with the Hawking-Page phase transition [74,75]. There is actually a region of masses M_HP > M > M_cr where the AdS₄ space is entropically favoured; however the black hole still has a positive heat capacity. As with the Reissner-Nordström black hole, it has been shown that the sign of the lowest eigenvalue of the Lichnerowicz operator changes as the heat capacity changes sign [76].

Λ > 0

We have a negative pressure, P < 0. If M is assumed positive we have two horizons: a black hole horizon with positive temperature T = ∂M/∂S, and a cosmological horizon for which T = ∂M/∂S < 0, and hence the temperature is negative. The heat capacity is therefore always negative. The temperature vanishes when the two horizons coincide, that is if

S = π/Λ , (5.10)

at which the mass has a maximum of M_max = 1/(3√Λ). In summary, we have two horizons, a black hole horizon and a cosmological horizon. The entropy of the former is smaller than or equal to the entropy of the latter. It seems most appropriate to regard M as the enthalpy. In this case the black hole horizon has positive temperature and the cosmological horizon has negative temperature. This differs from the usual interpretation, in which both temperatures are taken to be positive: in effect, one takes T_C = |κ_C|/(2π), where κ_C is the surface gravity of the cosmological horizon [28-31]. However, even if one follows the conventional interpretation, it should be borne in mind that it is not an equilibrium system, and there is no period in imaginary time which would produce an everywhere non-singular gravitational instanton, except when the black hole is absent, as in [28,77].

Λ < 0

If r = √(S/π) is the radius in the area coordinate, we write Λ/3 = −g². Using the fact that

∂/∂S = (1/(2πr)) ∂/∂r , (5.13)

one finds that

4πr T = 1 − Q²/r² + 3g²r² , (5.14)

and thus T vanishes at r = r_extreme, where 3g²r⁴ + r² − Q² = 0. If we take the limit Q² → 0 we obtain the spinodal curve of the Hawking-Page phase transition [74], and if we take the limit g² → 0 we obtain the spinodal curve of the Davies phase transition [54]. The two curves meet at the critical point 6|gQ| = 1.

Λ > 0

Here one has black-hole and cosmological horizons. From the Gibbsian point of view one has T = κ/(2π) for each horizon, so the two temperatures have opposite signs. When |T₁| = T₂ we obtain a gravitational instanton by setting t = iτ and identifying τ modulo β = 1/T₂ [79]. The sign used for the period appears to have no geometrical significance, and, proceeding in the standard way, one may argue that the two horizons are in equilibrium with respect to the exchange of thermal Hawking quanta. It was also argued that if |κ₃| ≥ |κ₁|, then the Cauchy horizon should be stable.

Kerr-Newman-de Sitter black holes

From [84] we take the Christodoulou-Ruffini-type mass formula; writing Λ = −3g², it takes a form which for Λ = 0 reduces to that of the Kerr-Newman black hole.

Pairwise-equal charge anti-de Sitter black holes

These solutions were obtained in [64], and they are special cases of solutions in the gauged STU supergravity model.
(These are also solutions of the maximally supersymmetric four-dimensional gauged theory, which is a consistent truncation of eleven-dimensional supergravity Kaluza-Klein compactified on S⁷.) The solution is specified by the mass M, the angular momentum J, two charges (i.e., equating Q₁ = Q₃ and Q₂ = Q₄), and the cosmological constant Λ = −3g². In [64] the solution was parameterised by the non-extremality parameter m, the rotation parameter a, two boost parameters δ_{1,2}, and g². The thermodynamic quantities take the form given there, with s_i = sinh δ_i, c_i = cosh δ_i (i = 1, 2), and Ξ = 1 − g²a². The entropy is expressed in terms of r_i = r_+ + m s_i² (i = 1, 2), where r_+ is the location of a horizon, which is a solution of the equation

r² − 2mr + a² + g² r₁ r₂ (r₁ r₂ + a²) = 0 . (5.32)

Manipulation of the horizon equation, along with the expressions for M, J, Q_i and S, allows one to derive an explicit Christodoulou-Ruffini mass formula, (5.33).

Wu black hole

The Wu black hole [85] is a five-dimensional, three-charge rotating solution with negative cosmological constant (∝ g²). Employing expressions from [86] for the products of the entropies and temperatures of this black hole associated with all three of its horizons, we obtain the interesting expression

T₁ S₁ + T₂ S₂ + T₃ S₃ = 0 .

Here ξ_a = 1 − g²a², ξ_b = 1 − g²b², and the u_i are the roots of the horizon equation X = g²(u − u₁)(u − u₂)(u − u₃). Note that as g² → 0, u₃ → −1/g² → −∞, and in this case the above equation reduces to the standard equation T₁S₁ + T₂S₂ = 0.

Entropy and Super-Additivity

The thermodynamics of equilibrium systems with a substantial contribution to the total energy from their gravitational self-energy differs significantly from that of ordinary substances encountered in the laboratory. This is because of the long-range nature of the Newtonian gravitational force, which cannot be screened. As a consequence the total entropy S of a gravitating system need not be proportional to the total energy M. A consequence of this is that negative heat capacities are possible, and indeed these have long been encountered in the theory of stellar structure [87]. In the case of black holes, the long-range nature of the gravitational interaction expresses itself in the fact that while the individual extensive variables may be added, they do not necessarily scale. Even if they do, they do not scale with the same power as the total energy M. In the case of ungauged supergravity black holes, the scaling behaviour is guaranteed, but the fact that the scaling behaviour is not homogeneous, that is, not the same for all extensive variables, leads to a modification of the standard form of the Gibbs-Duhem relation for ordinary homogeneous substances, where G is the Gibbs free energy, V the volume and P the pressure. By contrast, for black holes the Smarr relation (2.14) gives rise to the Gibbs function (2.16). The requirement of homogeneous scaling plays such an important role in the thermodynamics of ordinary substances that it has been suggested that it be called the Fourth Law of Thermodynamics [88,89]. It certainly fails for systems with significant self-gravitation and, a fortiori, for black holes. In fact if the matter sector is sufficiently non-linear, such as in Einstein's theory coupled to non-linear electrodynamics, even the property of weighted homogeneity ceases to hold.¹⁰ As a consequence, while the first law of black hole thermodynamics holds, there is no analogue of a Smarr formula [90].
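The weighted homogeneity just invoked is easy to exhibit concretely. For the Kerr-Newman mass formula (3.37) one has the weighted scaling M(λS, λJ, √λ Q) = √λ M(S, J, Q); the following sketch (our own illustration) confirms this, which by Euler's theorem is equivalent to the Smarr relation rather than to a Gibbs-Duhem relation:

    from math import pi, sqrt

    # Sketch (ours): weighted homogeneity of the Kerr-Newman Christodoulou-Ruffini
    # mass, M(lam S, lam J, sqrt(lam) Q) = sqrt(lam) M(S, J, Q).
    def mass(S, J, Q):
        return sqrt(S/(4*pi) + Q**2/2 + pi*(Q**4 + 4*J**2)/(4*S))

    S, J, Q, lam = 12.0, 0.7, 0.5, 3.0
    assert abs(mass(lam*S, lam*J, sqrt(lam)*Q) - sqrt(lam)*mass(S, J, Q)) < 1e-12
    # By Euler's theorem this weighted scaling is equivalent to the Smarr relation
    # M = 2 T S + 2 Omega J + Phi Q; extensive variables do NOT all scale with weight one.
    print("weighted homogeneity verified")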
In the thermodynamics of ordinary substances it is usually assumed that the total energy is a convex function of the extensive variables, or, equivalently, that the entropy is concave.¹¹ Now if the extensive quantities scale in a uniform fashion, the property of concavity is equivalent to that of super-additivity,¹² but not necessarily if uniform scaling ceases to hold [91-94]. Remarkably, it was shown long ago in a little-noticed paper by Tranah and Landsberg [93]¹³ that while concavity fails for the entropy of Kerr-Newman black holes, super-additivity remains true. In other words,

S(M₁ + M₂, Q₁ + Q₂, J₁ + J₂) ≥ S(M₁, Q₁, J₁) + S(M₂, Q₂, J₂) . (6.2)

¹⁰ A function f(x₁, x₂, …, xₙ) of n variables is said to be weighted homogeneous of weights w₁, w₂, …, wₙ if f(λ^{w₁}x₁, λ^{w₂}x₂, …, λ^{wₙ}xₙ) = λ f(x₁, x₂, …, xₙ). If w_i = 1 for all i, the function is said to be homogeneous of weight one. The Fourth Law is the statement that all extensive variables have weight one and thus all intensive variables have weight zero.

¹¹ A function is concave if ≤ is changed to ≥ in the definition of convexity. Subject to suitable differentiability this is equivalent to negative (positive) definiteness of the Hessian ∂²f/∂x_i∂x_j. In other words, if M is the total energy, then the graph of the Gibbs surface along a straight line joining two equilibrium states x₁ and x₂ never lies above the straight line joining these points on the Gibbs surface.

¹² A function f(x) is super-additive if f(x₁ + x₂) ≥ f(x₁) + f(x₂) and sub-additive if we replace ≥ by ≤.

¹³ Apparently not accessible on-line. The only paper we know of that has followed up on this is [8].

The super-additivity inequality (6.2) is related to Hawking's area theorem [35,36]. If two black holes of areas A₁ and A₂ can merge to form a single black hole of area A₃, then, subject to the assumption of cosmic censorship,

A₃ ≥ A₁ + A₂ .

If the angular momentum and charge of the final black hole are equal to the sums of the angular momenta and charges of the initial black holes, one has in addition Q₃ = Q₁ + Q₂ and J₃ = J₁ + J₂, where M₃, the mass of the black hole final state after the merger, obeys M₃ ≤ M₁ + M₂, since energy will be lost by gravitational radiation. It follows from the first law that at fixed charge and angular momentum dM = T dS, and so, provided that the temperature is positive, S(M₁ + M₂, Q₃, J₃) ≥ S(M₃, Q₃, J₃) ≥ S(M₁, Q₁, J₁) + S(M₂, Q₂, J₂). The assumption that Q₃ = Q₁ + Q₂ is reasonable for theories like Einstein-Maxwell or ungauged supergravity, where there are no particles that carry charge. The assumption that J₃ = J₁ + J₂, however, is less reasonable, because both electromagnetic and gravitational waves can carry angular momentum. In the following subsections we shall obtain generalisations of the Kerr-Newman super-additivity result of Tranah and Landsberg for various more complicated black hole solutions. We also obtain a counter-example in the case of dyonic Kaluza-Klein black holes.

STU black holes with pairwise-equal charges

From the formula expressing M in terms of S, Q₁, Q₂ and J for pairwise-equal charged STU black holes, we have an expression of the form S = π(Y + √X). For regular black holes we must have X ≥ 0 and hence Y² ≥ 4Q₁²Q₂² + 16J², thus implying (6.9). Without loss of generality, we shall assume Q₁, Q₂ and J are all non-negative. Note that we also have a weaker inequality, which we shall use frequently in the following. We wish to check whether the entropy of these pairwise-equal charged black holes obeys the super-additivity inequality

S_tot ≥ S + S′ . (6.10)

With analogous definitions for the quantities X and Y for the constituent and merged black holes, proving super-additivity requires proving a corresponding inequality for these quantities. We first note that the Y functions are non-negative. Thus, if we can show the intermediate inequality (6.14), then the super-additivity inequality (6.10) will be established. (A quick numerical illustration of the Kerr-Newman statement (6.2) is given below, after which the algebraic proof resumes.)
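Here is the promised numerical illustration of the Tranah-Landsberg statement (6.2): a randomized search (our own sketch; the sampling ranges are arbitrary) over pairs of regular Kerr-Newman black holes finds no violation of entropy super-additivity:

    from math import pi, sqrt
    import random

    # Sketch (ours): randomized check of Kerr-Newman entropy super-additivity,
    # S(M1+M2, Q1+Q2, J1+J2) >= S(M1,Q1,J1) + S(M2,Q2,J2).
    def S_KN(M, Q, J):
        a = J / M
        disc = M**2 - a**2 - Q**2
        if disc < 0:
            return None                      # not a black hole
        return pi * ((M + sqrt(disc))**2 + a**2)

    random.seed(0)
    trials = 0
    while trials < 10000:
        M1, M2 = random.uniform(0.1, 2), random.uniform(0.1, 2)
        Q1, Q2 = random.uniform(0, 2), random.uniform(0, 2)
        J1, J2 = random.uniform(0, 2), random.uniform(0, 2)
        S1, S2 = S_KN(M1, Q1, J1), S_KN(M2, Q2, J2)
        if S1 is None or S2 is None:
            continue                         # skip naked-singularity parameters
        S12 = S_KN(M1 + M2, Q1 + Q2, J1 + J2)
        assert S12 is not None and S12 >= S1 + S2 - 1e-9
        trials += 1
    print("no super-additivity violations found in 10000 trials")

The algebraic proof of (6.10) now resumes.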
To prove this, we first note that it can be re-expressed as (6.15). We now observe that the identity (6.16) holds, involving a manifestly non-negative quantity P, together with the definition (6.17). We can use (6.16) to substitute for √X √X′ in (6.15), thus yielding (6.18), where we have defined Q_± = Q₁ ± Q₂ (and similarly for the primed charges). The inequality (6.9) implies M ≥ Q_+ and M′ ≥ Q′_+, and a fortiori M ≥ |Q_−| and M′ ≥ |Q′_−| (recall that we are taking all charges to be non-negative). Since P, defined in (6.16), is manifestly non-negative, it follows from (6.18) that the left-hand side must be non-negative, and hence the required inequality (6.14) is satisfied. Thus we have proven that the super-additivity property (6.10) is indeed obeyed by the entropy of the pairwise-equal charged black holes of STU supergravity.

STU black holes with three equal non-zero charges

One can also show analytically that the super-additivity property of the entropy holds for STU black holes with three equal non-zero charges, say Q₁ = Q₂ = Q₃ = q, with Q₄ = 0. In this case S = π(Y + √X), with Y and X as given in (6.21) and (6.22). It is straightforward to show that (6.23) holds, where w = q/(√2 M) and w′ = q′/(√2 M′). The second inequality in (6.23) is true for any value of {w, w′} ≤ 1. This result implies (6.24). It is then straightforward to show that (6.25) holds, thus proving the super-additivity of the entropy in this case as well. An analytic proof of the super-additivity of the entropy for the case of one non-zero charge follows analogous steps. While a numerical analysis indicates that super-additivity holds for the STU black holes with four arbitrary electric charges, it would be interesting to prove this result analytically.

Dyonic Reissner-Nordström

In the explicit examples we have studied so far, the black hole is supported by one or more field strengths that each carry a single complexion of field (pure electric charge, or, instead and equivalently, pure magnetic charge). By contrast, in the next subsection we shall see that in the case of STU black holes where only a single field strength is non-zero, the dyonic black holes have an entropy that violates the super-additivity property. The Einstein-Maxwell Lagrangian L = √−g(R − F²) admits static dyonic black hole solutions with mass M, electric charge Q and magnetic charge P. To have a black hole, these quantities must obey the inequality

M² ≥ Q² + P² , (6.27)

with extremality being attained when the inequality is saturated. The entropy is given by

S = π ( M + √(M² − Q² − P²) )² , (6.28)

where, as usual, we assume, without loss of generality, that the charges are all non-negative. Substituting (6.28) into the super-additivity statement, we see that super-additivity is satisfied if (6.30) holds. First, we note that the argument of the first square root in (6.30) is non-negative, since, after using (6.27) for the unprimed and primed quantities, we have M M′ ≥ √(Q² + P²) √(Q′² + P′²), and since √(Q² + P²) √(Q′² + P′²) ≥ Q Q′ + P P′ by the Cauchy-Schwarz inequality, the non-negativity is proven. Returning to the inequality (6.30) that we wish to establish, we see that the terms 4MM′ − 2QQ′ − 2PP′ are themselves certainly non-negative, since 2MM′ − 2QQ′ − 2PP′ ≥ 0, as we just demonstrated. The inequality is therefore established if we can show a further condition, together with the analogous expression with the primed and unprimed variables exchanged. The expression in parentheses is non-negative if (6.34) is non-negative, and after using (6.27) again we see that (6.34) is indeed bounded below by a non-negative quantity.

Dyonic Kaluza-Klein black holes

We shall characterise the ratio P/Q′ by means of a constant x, such that (6.37) holds. We therefore find that the primed black hole defined above has metric parameters m and β₁, with β₂ = 0.
This means that (6.40) holds, the entropy is given by (6.41), and from (6.37) β₁ is given in terms of x by (6.42),¹⁴ and so super-additivity does not hold in this region of the parameter space. When x becomes larger, we find from numerical analysis that the ratio S_tot/(S + S′), which equals 2 in the limit as x goes to zero, falls monotonically. The ratio reaches unity when S′ = S_tot, which implies (6.44). Substituting into (6.42), we find that this occurs when β₁ = y³, where y is the single real root of the 9th-order polynomial

17y⁹ − 12y⁸ + 42y⁷ − 80y⁶ + 39y⁵ − 48y⁴ + 54y³ − 12y² + 9y − 8 = 0 .

¹⁴ Strictly speaking, the extremal configuration (P, 0, P) is not a black hole, but rather a naked singularity. However, one can make an infinitesimal deformation away from extremality, to a configuration with parameters (P + δ, 0, P), and this will describe a genuine black hole. The results that we shall derive here, including the bound (6.46) on P versus Q′ for obtaining violations of entropy super-additivity, are thus valid.

It is straightforward to show from the formulae in section 3.4.5 that, for the individual black holes that carry purely electric or purely magnetic charge, one has the relations (6.47) and (6.48). One can then use (6.47), together with (6.48), to solve for M′, and hence one can express Y ≡ S_tot − S − S′, where S_tot = 8π P_tot Q_tot, as a function of M, P and Q′. One can then explore the regions in the space of these parameters for which Y is negative, signifying a violation of entropy super-additivity. Of course, by continuity we expect that super-additivity violations will occur at least in some neighbourhood of the region found above when all the masses and charges are allowed to be adjusted. In other words, there will also be super-additivity violations if we consider cases where all three black holes are non-extremal, for appropriate ranges of the various masses and charges. In our earlier remarks relating super-additivity to the Hawking area theorem, we assumed not only cosmic censorship but also that the coalescence of the two black holes was allowed physically. In the case of dyons, it should be recalled that they carry angular momentum, and moreover it is not localised within the event horizon. This, as suggested in [95], may lead to restrictions on which coalescences are allowed, and thus the non-super-additivity of the entropy in this counter-example need not imply any conflict with Hawking's area theorem. This is an interesting problem worthy of further study.

Conclusions and Future Prospects

We shall turn in this section to a consideration of the significance of negative surface gravities and negative Gibbsian temperatures. We shall begin by recalling the most physically convincing argument that Schwarzschild black holes have a temperature, and hence entropy. This was given by Hawking [46,47], who coupled a collapsing black-hole metric in an asymptotically-flat spacetime to a quantum field, and showed that if the quantum field was initially in its vacuum state, then at late times it would emit particles with a thermal spectrum and temperature given by (1.3). The situation with two event horizons is more complicated. In order to discuss quantum fields between the horizons, one needs to specify a notion of positive frequency on each past horizon.
If the region is static, and one interprets positive frequency as being with respect to a local Kruskal coordinate on the horizons, the resulting quantum state will describe thermal radiation entering the static region at temperatures given by |κ_±|/(2π). This is not a state in thermal equilibrium. If the region between the two horizons is not static, as for example in the Reissner-Nordström solution, one may define a similar state, which would also not be in thermal equilibrium. If, on the other hand, one considers the static region behind the inner horizon in the Reissner-Nordström solution, one needs to specify boundary conditions on the singularity at r = 0. If one chose the notion of positive frequency on the past inner horizon, then whatever boundary conditions were chosen on the singularity, the quantum state would contain radiation coming from the inner horizon with a temperature |κ_−|/(2π). Thus, if we adopt this procedure, we see in all cases that the temperature we associate with particles coming from the horizons is given by the absolute value of the surface gravity, divided by 2π. An alternative way of establishing the temperature and entropy of an asymptotically-flat black hole is to follow the procedure of [77,96], in which one analytically continues the metric to imaginary time and discovers that the metric is periodic in imaginary time with a period given by 2π/|κ|, which is what one expects for a state in thermal equilibrium at temperature |κ|/(2π). Of course, the period itself can have either sign, but the quantum state would not necessarily exist if one chose a negative sign for the temperature. This procedure will work when one has a single horizon, including in an asymptotically anti-de Sitter spacetime [74,75]. However, it will not work for a spacetime with two horizons having differing values of |κ|. The conclusion seems to be that classically, the sign of the temperature can only be determined by appealing to the first law, and this provides us with a Gibbsian temperature. Quantum mechanically, which seems to be the only physically reliable argument provided one is prepared to contemplate non-equilibrium situations, the temperature should be taken to be positive. In other words, the temperature is not uniquely defined by the metric, a conclusion also reached in [25]. The original suggestion that inner horizons should be assigned a negative temperature [1] was based not on quantum field theoretic considerations, but rather on a consideration of quantum mechanical systems, such as spin systems, exhibiting population inversion [58]. Thus one might regard the total energy of a black hole as receiving contributions both from the outer and inner horizons. The inner system would then be thought of as the analogue of a spin system. This viewpoint was supported by the existence for the Kerr-Newman black hole of the modified Smarr formula (3.34), and its variation, which may be written as

dM = ½ (T_+ dS_+ + Ω_+ dJ + Φ_+ dQ) + ½ (T_− dS_− + Ω_− dJ + Φ_− dQ) . (7.1)

As we saw, these formulae generalise to the case of STU black holes with four electric charges. The addition of electric charges, which were not included in the discussion in [1], suggests that the posited inverted-population spin system should be supplemented by the inclusion of charged states. In the case of four-dimensional STU black holes, the generalisation of equation (7.1) may be rewritten in terms of the left-moving and right-moving sectors (see (3.69)), with each sector contributing equally to dM.
In contrast to the proposal in [1], which attempted to give a microscopic interpretation to the negative temperature on the inner horizon, here the left-moving and right-moving sectors both have positive temperatures, consistent with the proposed microscopic interpretation in terms of D-brane states [11,62]. An analogous interpretation for five-dimensional STU black holes has also been given [16]. This paper has been concerned exclusively with time-independent solutions; we have not discussed what happens to inner horizons when perturbations are considered. There is a widespread belief that in classical general relativity, generic perturbations will render Cauchy horizons, of the sort one finds inside black holes, singular. This is referred to as the Cosmic Censorship Hypothesis. There are various forms of this hypothesis, and the literature is at present rather inconclusive. A recent discussion can be found in [97]. Our motivation is largely quantum mechanical, and the relevance of these classical results to a full quantum gravitational treatment is unclear.

The metric will be regular as long as A, f and R² are real, bounded and twice differentiable, and in addition f and R are non-zero. We may take f, without loss of generality, to be positive. In particular, the metric is well-behaved regardless of whether A is positive, zero or negative. Asymptotic flatness requires that A and f tend to 1 as R² tends to infinity. In the cases we shall consider, R tends to r at infinity. We shall assume that A is positive in the interval r_+ < r ≤ ∞, and negative in the interval r_− < r < r_+, and that it vanishes on the outer horizon r = r_+ and the inner horizon r = r_−. We shall also assume that A has a smooth positive extension for values of r < r_−. The Killing vector K = ∂/∂v is thus timelike for r_+ < r < ∞, lightlike at r = r_+, spacelike for r_− < r < r_+, lightlike at r = r_− and timelike for r < r_−. It becomes lightlike as v tends to ±∞, and also as r tends to infinity. If r_+ < r < ∞, then as v tends to +∞ we obtain future null infinity, I⁺. For v instead tending to −∞, we obtain past null infinity, I⁻. As v tends to −∞ and r tends to r_+ we obtain the past null horizon. The Killing vector K is future-directed inside and on the boundary of this region. The inner region is bounded by a past Cauchy horizon at v = −∞ and r = r_+, and a future Cauchy horizon at v = +∞ and r = r_−. It has a further boundary on the inner horizon at r = r_−, with −∞ < v < +∞. Thus the Killing vector K is future-directed both on this inner horizon and on the outer horizon. If one looks at radial geodesics in this spacetime, there are two conserved quantities, p_v and k, where a dot denotes a derivative with respect to an affine parameter λ. Thus radially-infalling geodesics obey

ṙ² = p_v² − k A(r) ,

with k > 0 and p_v² > k for timelike geodesics that originate at large r. The constant p_v is positive. The infalling particle passes through the outer and the inner horizons before reaching a turning point at a radius r̄ < r_− at which p_v² = k A(r̄). Solving for v̇ one finds

v̇ = (p_v + ṙ)/A = k/(p_v + √(p_v² − k A)) ,

and so

dv/dr = (p_v + ṙ)/(A ṙ) .

Thus one finds that v̇, dv/dr and v all remain finite as the particle falls in from infinity to r̄. Note that v̇ is always positive. In conclusion, we note that the Killing vector K = ∂/∂v is future-directed and lightlike on both the future event horizon of the exterior region, r = r_+ with −∞ < v < +∞, and on the inner horizon, r = r_− with −∞ < v < +∞.
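The finiteness of v, v̇ and dv/dr along the infalling geodesic can be exhibited numerically. The sketch below is our own; it assumes the simplest representative A = 1 − 2M/r + Q²/r² with f = 1, the parameter values are arbitrary, and it uses the algebraically equivalent, manifestly regular form v̇ = k/(p_v + √(p_v² − kA)):

    from math import sqrt

    # Sketch (ours): radial infall in ingoing coordinates for Reissner-Nordstrom,
    # A(r) = 1 - 2M/r + Q^2/r^2, with conserved p_v and k = 1 (timelike).
    M, Q = 1.0, 0.8
    A = lambda r: 1 - 2*M/r + (Q/r)**2
    p_v, k = 1.2, 1.0

    r, v, dlam = 10.0, 0.0, 1e-4
    r_p = M + sqrt(M**2 - Q**2)          # outer horizon
    r_m = M - sqrt(M**2 - Q**2)          # inner horizon
    crossed = []
    while p_v**2 - k*A(r) > 1e-10:       # stop at the turning point p_v^2 = k A(rbar)
        rdot = -sqrt(p_v**2 - k*A(r))
        vdot = k / (p_v + sqrt(p_v**2 - k*A(r)))   # regular form of (p_v + rdot)/A
        if crossed == [] and r < r_p: crossed.append(('outer', v))
        if len(crossed) == 1 and r < r_m: crossed.append(('inner', v))
        r += rdot * dlam
        v += vdot * dlam
    print("v at outer/inner horizon crossings:", crossed)
    print("final (r, v) near turning point:", r, v)   # both finite; vdot > 0 throughout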
For the four-charge STU black holes considered in this paper, the situation when they are non-rotating is qualitatively similar to that for the Reissner-Nordström solution. The metric involves the blackening factor f = 1 − µ/r together with four harmonic functions H_i. The outer horizon is located at r_+ = µ, and the inner horizon at r_− = 0. There are curvature singularities at the four locations r = −µ sinh²δ_i, and the Carter-Penrose diagram will be similar to that for Reissner-Nordström, with the curvature singularity in the diagram occurring at the least negative of the four locations.

B STU Supergravity

The Lagrangian of the bosonic sector of four-dimensional ungauged STU supergravity can be written in a relatively simple form, (B.1), in which the index i labelling the dilatons φ_i and axions χ_i ranges over 1 ≤ i ≤ 3, and the four field strengths can be written in terms of potentials, as in (B.2). The field strengths here are not in the same duality frame as the one we have assumed in our discussions in this paper, however. To convert from (B.1) and (B.2) to the frame we are using, one would need to dualise the field strengths F^1_{(2)} and F^2_{(2)}, and, if then written explicitly, the resulting Lagrangian would be rather cumbersome. Alternatively, one could simply exchange the roles of the electric and magnetic charges for the field strengths F^1_{(2)} and F^2_{(2)}, and work with (B.1) without performing any dualisations. For example, the 4-charge black hole solutions that we refer to in this paper as having four electric charges would, as solutions in terms of the fields in (B.1), instead comprise two electric and two magnetic charges. (As, for example, in the presentation of these solutions in [64].)
The complete chloroplast genome sequence of a Bolivian wild chili pepper, Capsicum eximium Hunz. (Solanaceae)

Abstract

Bolivia is believed to be the origin of Capsicum eximium, a wild chili pepper. In this study, we sequenced the complete chloroplast (cp) genome of C. eximium to investigate its phylogenetic relationships within the family Solanaceae. The complete cp genome sequence is 156,947 bp in length with 37.7% overall GC content and exhibits a typical quadripartite structure comprising one pair of inverted repeats (25,847 bp) separated by a small single-copy region (17,912 bp) and a large single-copy region (87,341 bp). The cp genome contains 113 unique genes, including 79 protein-coding genes, 30 tRNA genes, and 4 rRNA genes. Of these, 21 genes are duplicated in the inverted repeat regions. The phylogenetic analysis indicated that C. eximium clusters within the Capsicum clade.

Introduction

Pepper (Capsicum spp.) is native to tropical America; however, owing to dispersal by humans since prehistoric times, the original geographic origin and distribution are difficult to determine. Of the 27 recognized Capsicum species, 17 have reported ranges within Bolivia; C. eximium is believed to be found only in Bolivia and northern Argentina (Eshbaugh 1975). Recent research on wild Capsicum species clearly shows an increased interest in this genus, especially in Bolivia, though detailed descriptions are still limited or entirely absent (Avila et al. 2012). In order to conserve genetic diversity, a large number of wild species have been collected and established in genebanks worldwide, which is an essential prerequisite for developing new cultivars with desirable agronomic traits. Understanding germplasm diversity is vital for the efficient transfer of valuable traits from wild species to cultivars and for developing ecologically and economically sustainable varieties. Whole-genome sequencing approaches are now considered cost-effective tools for understanding the genetic diversity of wild species (Liu et al. 2016; Park 2016, 2017; Tsuruta et al. 2017). Chloroplast genome sequences of various Capsicum species have been reported previously (Jo et al. 2011; Kim et al. 2016; Park et al. 2016; Shim et al. 2016). Here, we report the chloroplast genome sequence of the Bolivian wild chili pepper C. eximium to determine its relationships within the family Solanaceae. A genomic DNA sample of the Bolivian wild chili pepper C. eximium (Accession No. K153259) was obtained from the National Agrobiodiversity Center, Republic of Korea. Total genomic DNA of a 40-day-old plant was used to sequence the genome on the Illumina MiSeq platform (http://www.Lab.genomics.com/kor/) with a paired-end (2 × 300 bp) library. A total of 5,670,734 clean reads were obtained from 9,145,294 raw reads and mapped to the reference cp genome of C. annuum (JX270811), yielding 108,396 aligned reads with an average coverage of about 144×. The cpDNA was annotated with the Dual Organellar Genome Annotator (DOGMA; http://dogma.ccbb.utexas.edu/) and tRNAscan-SE (Lowe and Chan 2016). The cp genome sequence with complete annotation information was deposited in NCBI GenBank under accession number KX913220. The total length of the chloroplast genome is 156,947 bp, with 37.7% overall GC content. In the typical angiosperm plastome structure, a pair of IRs (inverted repeats) of 25,847 bp is separated by a small single-copy (SSC) region of 17,912 bp and a large single-copy (LSC) region of 87,341 bp.
The cp genome contains 113 unique genes, including 79 protein-coding genes, 30 tRNA genes, and 4 rRNA genes. Of these, 21 genes are duplicated in the inverted repeat regions; 15 genes and 6 tRNA genes contain one intron, while two genes (ycf3 and rps12) have two introns. To further evaluate the phylogenetic position, the cp genome sequences of other Solanaceae species were downloaded from the NCBI database (Figure 1). We aligned all 16 cp genome sequences with MAFFT v7.304 (Katoh and Standley 2016) and constructed a maximum-likelihood (ML) tree with 1000 bootstrap replicates using MEGA6 (Tamura et al. 2013). Our results showed that C. eximium clusters within the Capsicum clade. The phylogenomic analyses
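As a simple consistency check of the reported plastome statistics, the quadripartite lengths should sum to the total genome length, and the GC content can be recomputed from the deposited sequence. The short sketch below is our own illustration; the FASTA file name is hypothetical:

    # Quick consistency checks of the reported plastome statistics (our sketch;
    # the FASTA filename below is hypothetical).
    LSC, SSC, IR, TOTAL = 87341, 17912, 25847, 156947
    assert LSC + SSC + 2 * IR == TOTAL    # quadripartite structure: LSC + SSC + 2 x IR

    def gc_content(path):
        """Overall GC fraction of a single-record FASTA file."""
        seq = "".join(line.strip() for line in open(path) if not line.startswith(">"))
        seq = seq.upper()
        return (seq.count("G") + seq.count("C")) / len(seq)

    # Expected: round(gc_content("KX913220.fasta") * 100, 1) == 37.7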